Recently, several significant reports on the “human dimension” of US drone strikes have gained high-profile media coverage. Among these reports, Living Under Drones, a joint effort of NYU and Stanford law school clinics, and The Civilian Impact of Drone Strikes from a Columbia Law School clinic (the “Columbia study”), demonstrate how, in the absence of a US Government narrative on current drone programs, the public dialogue tends towards moralizing on the matter via questionable data and analytical methodologies.
In presenting empirical data, these reports rest in part on inherently unreliable source materials or employ questionable methodological frameworks, producing biased findings that are nevertheless presented as unequivocal fact. Though these studies should not be discounted entirely, for reasons discussed later, they must be read with a healthy dose of scrutiny.
Bias enters empirical studies in myriad ways, but it is most often built in through poor data quality and weak methodology. Using inherently biased data, meaning data whose integrity is compromised at the source rather than in the process of collection, immediately degrades the quality of any subsequent analysis. A robust methodology can help offset this bias through various mitigating measures; a poor framework will exacerbate existing bias or introduce new bias altogether.
The Columbia study relies in part on journalistic reports as data. Media reports on drone strikes have proven to be inherently biased due to the logistical challenges of determining the timing, location, and first-order effects (fatalities) of a given strike, and the perspective of the individual or organization filing the media report. An American media report can vary significantly from its Pakistani counterpart’s report on the same event, and in the absence of official information on drone strikes, it is impossible to reconcile the purported “facts” contained in such reports. Consequently, media reports offer data that is compromised at its source; any study built upon such information must account for this flaw in its methodology.
Living Under Drones rests primarily upon interviews with victims of drone strikes, with their relatives, and with political and social actors. While the interviews may in fact express the unvarnished opinions of the subjects, the methodology behind the study's data collection is flawed. Most interview subjects were contacted by the Foundation for Fundamental Rights, an organization with an explicit anti-drone agenda, and were required to travel outside of FATA, where drone strikes occur, to meet with non-native researchers working through translators.
This approach introduces selection bias in all its forms (sampling, observer, and reporting) which, as a structural component of the methodology, cannot be mitigated or accounted for in the analysis phase. The Columbia study may be critiqued for its use of interviews on the same grounds; but because most of its interviews were independently arranged and drawn from a greater diversity of positions and opinions, the selection bias is somewhat mitigated, though not eliminated.
As a consequence of such methodologies, these studies appear as if they are simply confirmatory in nature: the authoring organizations had previously decided the results of the study and then endeavored to find corroborating evidence. This is certainly not the case, but it points to the unequivocal importance of having reliable data and airtight methodologies that are still dynamic enough to accommodate new data and associated shifts in its analysis.
This is not to say that these studies do not contribute to the drone debate: there is absolutely a need to consider the second- and third-order effects of US drone programs, including the potential generation of anti-American sentiment. To this end, each study focuses on a single aspect of drone strikes, which is necessary to attain a level of detail conducive to analysis and conclusions. However, they forgo the context within which such strikes occur, thereby ignoring questions of proportionality and security. Any effort to examine the effects of US drone programs more thoroughly should be commended. But pushing agenda-driven research skews the facts, to the detriment of both the dialogue and the overarching cause: mitigating the global threat of terrorism.
Be sure to check out the latest update to ASP’s annotated bibliography of drone resources, which includes the reports mentioned above.