Introduction
For over a decade, the aid sector has been in the thrall of the humanitarian innovation agenda (Currion, 2019). Burgeoning digital technologies – driven by the twin forces of technological change and the marketplace – have created a sense of urgency and inevitability around the very idea of innovation (Jacobsen et al., 2017; Sandvik, 2014; Scott-Smith, 2015). It is now axiomatic that ‘every new emergency seems to trigger some new innovation’ (PLoS, 2012: 1). The pressure to innovate emerges partly from operational necessity: for example, the need to accommodate the obsolescence of old technologies and integrate new ones, or the need to engage with host and donor communities in the same digital spaces they inhabit. But it is also a product of the widely promulgated (and largely untestable) assumption that technological innovation itself saves lives (Scott et al., 2021; Redfield, 2012).
The assumption makes innovation itself a virtue. If innovation saves lives, it is almost a humanitarian duty to take advantage of everything and anything that is available – while simultaneously maintaining or improving standards of performance, safety and ethics. Thus, there are those who argue that a clever, conscientious application of technology will ameliorate, if not eliminate, pervasive hierarchies of inequality in humanitarian action (Meier, 2015; Casey-Maslen, 2018). This pressure to innovate as a virtuous necessity has led to some humanitarian contexts becoming experimental laboratories for international organisations as they deploy data mapping, collation, or analysis software (Sandvik et al., 2017; Givoni, 2016).
Debates about contemporary digital transformation in the humanitarian sector have exposed potential ethical, political and institutional implications of the uncritical, impatient adoption of technology and big data in humanitarian settings (e.g. Duffield, 2016). What is increasingly apparent is that innovation by itself cannot save lives. Innovation only makes a qualitative impact on well-being when attendant, routine structures and practices such as administration, maintenance, funding and oversight make new technologies safe, scalable and sustainable (Campo, 2021; Scott et al., 2021).
It is worth noting that innovation is not a new practice for humanitarians. The idea of humanitarian action itself was an innovation in moral conscience and social organisation (Fassin, 2012), while technological innovation in humanitarian action is just as storied. Mobile searchlights, for example, began as Red Cross ‘electric locomobiles’ intended to recover the wounded from late-nineteenth-century battlefields (Hutchinson, 1989: 338). If anything is new about the present, it is the scale, speed and rippling consequences of transformation – which often run far ahead of practitioners’ abilities to structure, administer and consolidate developments, and to account for the attendant ethical issues that accompany these technologies.
Nowhere are these problematic dynamics so acute and potentially impactful as in the realm of medical data. Medical data are, by their nature, among the most intimate and sensitive of data. Subject to various layers of legal regimes globally, they can nevertheless be amorphous and hard to define. They may include personal information about patients, details about treatments or facilities, or unrelated data that could be linked to medical conditions. The simple fact of their existence can be controversial, while the very act of ‘processing may create significant risks for a person’s fundamental rights and freedoms’ (Gazi, 2020). While widely practised, patient consent can take the form of a routinised ritual; the actual role and volition of patients in data gathering and processing have been under-analysed in humanitarian contexts. In this setting, the slogan – ‘no innovation without representation’ (Winner, 1992: 291) – becomes particularly relevant, and suggests an alternative formulation: no data collection without representation. Thus, the intersection between medical data and their use in humanitarian response is especially important to analyse.
This review summarises key debates and ethical challenges that emerge from academic and practitioner scholarship on the deployment of biometric and medical data technologies into humanitarian and disaster contexts. Such a broad scope of issues can only be treated in summary; the paper does not aim to provide technical analysis or propose solutions to the dilemmas highlighted herein. Rather, it aims to articulate a series of questions to help inform future-focused research agendas. In some cases, it treats recent adaptations and discourses within the humanitarian field. In others, it highlights developments in related fields; these may not yet have reached humanitarian settings, but their arrival is imminent. By zeroing in on the criticisms and ethical complications of technological and medical data innovations in humanitarianism, this article illuminates the vibrant multidisciplinary interest in medical data studies and exposes shared conclusions (and potential collaborations) among these different approaches, disciplines and schools of thought.1
In assessing existing debates, we distilled key issues into three major categories of enquiry, related to governance, power and control; justice and equity; and trust. Each category highlights a key moral, ethical, practical or philosophical tension between technologies, the data they produce, and actors in humanitarian spaces. These divisions are admittedly arbitrary, yet they capture a range of thematic tensions. Issues often overlap – almost all debates around medical data innovations in humanitarian settings can fall into more than one of these categories. But to make issues amenable to analysis, some form of compartmentalisation is needed. Thus, these three categories are conceptual aids that draw attention to overarching dynamics shared by otherwise disparate technical issues. In providing this framework, we signpost these issues for both practitioners and researchers, articulating overarching themes to assess how the apparent minutiae of hardware, software, data processes and their human corollaries also function as forces at the historical and civilisational level.
Governance, Power and Control
The first series of debates relates to issues of governance, power and control within the humanitarian sector. It highlights how questions about civil liberties, rights and personal security have become increasingly central in discourses about the use of medical data in humanitarian settings.
Deployment in Crisis Contexts
A current concern regarding the crisis narrative in humanitarian response is that it can justify the uncritical deployment of biometric and medical data technologies. This can endanger data subjects, as well as the humanitarian organisations in possession of their data. The potential of such deployments to prolong the power imbalances inherent in humanitarian action deserves attention, as does the potential for technology to be put to unethical or authoritarian ends.
Gross discrepancies in power characterise humanitarian action, almost by definition. Such imbalances are ripe for abuse. Powerful actors have historically used humanitarian action for experiments in societal control and authoritarian practice (Barkawi, 2013). Deploying experimental technology in humanitarian contexts enables it to be piloted among presumably compliant populations – people struggling for basic life necessities are assumed less likely to insist on fundamental rights – and in settings where policy or regulatory frameworks may be less developed. Consequently, proprietors learn how to streamline and market these innovations in their home societies (Sandvik et al., 2017; Jacobsen, 2015).
In such crisis environments, actors and technologies are perceived as ‘cleansed’ through their ‘gift’ to humanitarian actors and people in need (Mauss, 1990 [1925]). This process effectively legitimises the deployment of the technology. Emergency makes for a state of exception and justifies what would otherwise be problematic (Fassin, 2012).
This phenomenon is not unique to humanitarian settings, though humanitarian settings may represent a preeminent example of this practice. As Hosein and Nyst note,
Emerging economies and developing nations across Africa, Asia and Latin America are seeing the rapid deployment of technologies that many more developed countries are hesitant to use, such as national identity registries using biometric technologies, and e-health systems with national registries of sensitive personal information, in the absence of legal safeguards and, indeed, critical analysis. (2013: 7)
Whether or not these technologies are deployed in a state of exception, they hold the high-modernist promise of enforcing legibility, in the classic sense, upon chaotic masses (Scott, 1999). In such cases, it is possible to argue that the promised social or administrative benefit of these technologies outweighs the risk, particularly if technology is portrayed as a ‘silver bullet’ for problems in the sector (Raftree and Nkie, 2011: 46).
Certainly, crisis rationales provided justification for the shift towards biometric identity technologies in the early 2000s. This shift was part of a broader global securitisation of the post-9/11 era and the evolution of an ‘epoch of exceptionalism’ whereby, for example, migrants from the Global South faced demands to provide biometric data in order to be granted an official (but othering) identity and access essential services like healthcare (Muller, 2004; Cheesman, 2020). Studies through a Foucauldian lens revealed the authoritarian relationships inherent in these technological innovations, in which certain groups of people, and certain bodies, became the targets of attention (Jacobsen, 2015; Scott-Smith, 2015; Redfield, 2012).
Similarly, during the 2015 European refugee crisis, the political invocation of ‘crisis’ and crisis narratives around migration both demanded and permitted radical measures (Jeandesboz and Pallister-Wilkins, 2016). This ‘fueled demands for new ways of tracking, mapping and predicting human mobility’ (Taylor and Meissner, 2020). Attention to the potential for authoritarian policing, tracking and surveillance – not just of migrants but also of domestic and foreign populations – and the attendant legal and human rights implications, is increasing (Kak, 2020; Lodinová, 2016; Deibert, 2013; Marino, 2021). In a crisis of a different sort, COVID-19 ignited debate over digital tracking in Europe and beyond. In the UK, the National Health Service (NHS) COVID-19 contact-tracing app and the resulting parliamentary bills brought this debate to a head (Clarke, 2020), while human rights groups highlighted privacy vulnerabilities in the contact-tracing apps of Bahrain, Kuwait and Norway (Amnesty International, 2020).
The uncritical deployment of technologies is particularly concerning in contexts that lack legally enforceable safeguards, as is often the case in medical humanitarian contexts (Hosein and Nyst, 2013; Sandvik, 2020a, 2020b). The development of new technologies often outpaces the corresponding development of national and international legal and regulatory frameworks designed to protect users. This may exacerbate the vulnerabilities of those affected by crisis (Hayes, 2009; Coppi and Fast, 2019). It is especially true for biomedical research and data sharing across international borders or among international organisations (Kaye et al., 2018), and particularly relevant given the highly sensitive nature of medical data in humanitarian settings and other politically volatile contexts, where data can be instrumentalised. The choice of what data to collect, along with how to safeguard its use, storage and sharing, is simultaneously a complex ethical, technical and operational problem (OCHA, 2019; ICRC, 2020). It is further complicated by (at times overlapping) global legal privacy regimes (for example, the European Union’s General Data Protection Regulation [GDPR]) which come into play as information is collected, stored or transmitted across multiple national borders (Gazi, 2020).
Data and Colonialism
Another area of discussion relates to data ownership. Technology interests often present a utopic vision of the future, thus obscuring the capitalist, asymmetric power dynamics at the centre of what Thatcher et al. have termed ‘data colonialism’ (2016). As they explain, it is a form of ‘capitalist accumulation by dispossession’ whereby personal aspects of our lives are collected, depersonalised and monetised. Thatcher et al. point to the example of End-User License Agreements as a means of privatising user data harvested through smartphone apps and other ubiquitous technologies. In the humanitarian space, these obscuring dynamics appear in discussions about, for example, refugee movements and ‘good drones’; they raise questions of who owns (and who has the right to gather or use) personal and geospatial knowledge (Greenwood, 2021; Madianou, 2019; Sandvik et al., 2015; Meier, 2016).
In a comparable manner, digital health devices can blur public and private interests (Collier et al., 2017) – raising questions of workability and who exactly is helped by the invention of ‘little development devices’ and humanitarian gadgets. The use of identity biometrics, such as fingerprinting, iris scans and facial recognition, has already expanded within humanitarian contexts (Polk, 2020; Jacobsen, 2015). While biometrics are currently employed for identity verification at point-of-service locations, this humanitarian data harvesting has already expanded into the realm of humanitarian wearables (Sandvik, 2020a). Wearing a symbol of humanitarian aid – granting access to medical care, registration and food – could lead to a shift in how recipients are perceived by the humanitarian industry: from people to data producers, and data subjects. As Sandvik has noted, ‘With the rise of wearables … we recognize the central premise of the global data economy: that beneficiary data is the product, not the tracking device, and that human bodies become data-producing units – aid beneficiaries become data subjects’ (Sandvik, 2020b). Commercial firms such as PricewaterhouseCoopers (PwC) are increasingly involved in digital health innovations and in humanitarian crises, seeing opportunities for expansion and experimentation in the field (PwC, 2017). This simultaneously layers profitability into the already complex ethical calculus of humanitarian data technologies.
As Sandvik notes, while there is a literature on humanitarian goods and reciprocity in humanitarian settings, the ‘gift’ of data in humanitarian settings has not permeated these analyses (Sandvik, 2020b). Those who are provided the ‘gift’ of medicine or service provision via a corporate humanitarian gadget complete the cycle of reciprocity by providing their own data in return. Similarly, while there is an established literature on patient perceptions of medical data management in the global healthcare sector (Shen et al., 2019), this literature emerges predominantly from high-income, high-technology settings. Our survey did not reveal a literature on patient privacy perspectives in humanitarian settings, an absence that calls out for empirical enquiry.
Cyberattacks
Emerging paradigms of electronic warfare and criminality are another manifestation of power with significant implications for humanitarian actors. Hardware and software are vulnerable to leaks and cyberattacks (Parker, 2020), while personal security is linked to privacy concerns, since humanitarian data can be valuable to state agencies (Rahman, 2021; Eckenwiler and Hunt, 2014).
Humanitarian agencies are every bit as vulnerable as other actors (and conceivably more vulnerable in certain settings) to data security breaches. Denial-of-service attacks, cyberattacks on infrastructure and ransomware demands are increasingly common in conflict settings, while criminal actors are active globally. For example, the 2017 WannaCry malware attack exploited vulnerabilities in the NHS’s national hospital systems (National Audit Office, 2018).
International humanitarian law affords humanitarians and civilians only limited protections against these attacks (Buchan and Tsagourias, 2022; Rodenhäuser, 2020). As the International Committee of the Red Cross (ICRC) noted in 2015, ‘the obligation to respect and protect medical facilities must be understood as extending to medical data belonging to those facilities’. However, swathes of personal data fall into a grey area that may ‘not benefit from such specific protection, such as social security data, tax records, bank accounts, companies’ client files or election lists’ (ICRC, 2015: 43). Similarly, the UN Office for the Coordination of Humanitarian Affairs (OCHA) has noted that, ‘while personal data can categorically be considered sensitive, more nuanced issues arise for non-personal data. For example, locations of medical facilities in conflict settings can expose patients and staff to risk, even if this data is not personal’ (OCHA, 2019: 7).
Overall, debates related to governance, power and control raise key questions: to what extent does the use of biometric and medical data technologies in humanitarian settings support a global tilt towards authoritarianism? Can these technologies become ‘late-modern mechanisms of social exclusion’ and control (Aas, 2006)? How can the ethics of innovation relate to humanitarian principles and ‘do no harm’ medical doctrine? These questions hold implications for communities outside of the immediate sphere of humanitarian intervention, both as recipients of a technology of questionable origins and through ‘the commodification of good intentions’ (Korf et al., 2009; Sandvik et al., 2017; Hosein and Nyst, 2013).
Justice and Equity
Another area of debate relates to justice and equity. Technology is not neutral. All technologies, including data systems, are inherently political rather than passive ‘technical artefacts’ (Winner, 1980; Heeks et al., 2019: 16). In some circumstances, technology and digitisation can serve to magnify asymmetries and undermine rather than improve accountability and transparency (Madianou et al., 2016; Martin and Taylor, 2020).
All technologies have a maker and a maintainer; humans – with all their attendant biases, cultural assumptions and unequal power dynamics – are ultimately the ones who extract and process data. Even (or especially) when data are mediated by algorithm, or so-called artificial intelligence, bias enters in through the programmer, the dataset, the user or some unanticipated interaction between them (Parada et al., 2023; Owens and Walker, 2020; Zou and Schiebinger, 2018).
Thus, technology and data can be influenced by, replicate and even reinforce existing human and health inequalities (Moran, 2021; Raza, 2022). Few, if any, legal safeguards exist on surveillance technologies within development and humanitarian contexts, posing serious risks to individual human rights and privacy (Hosein and Nyst, 2013; ICRC, 2020). Hierarchies of healthcare – mediated by geography, economic status, gender, race, class, sexual orientation and a multitude of other factors – have been well documented by doctors and academics alike (Iwai et al., 2020; Khan et al., 2022). A ‘digital divide’ exists in geographic and humanitarian contexts, contributing to further visible inequality along gender, racial, class or other lines (Bryant, 2019; Dodson et al., 2013; Bryant et al., 2020; Bryant, 2022). Power dynamics are exacerbated in settings where deep inequalities already pervade international aid and relief distribution, revealing multiple tensions between the humanitarian and the recipient, the Global North and Global South, doctor and patient, benefactor and beneficiary, agent and migrant as technology increasingly becomes a tool of control in humanitarian settings (Jacobsen, 2015; Latonero and Kift, 2018). Religious or cultural objections to biometric data could exclude communities from access to identity cards and documentation, further compounding asymmetric power dynamics within the humanitarian/disaster context (Lodinová, 2016).
These dynamics raise questions around justice and equity that deserve reflection in advance of any deployment of data technologies in humanitarian settings. These might include: who is the service or software designed to satisfy? How can it be subverted, weaponised or otherwise used to cause harm? Who is ignored, missed or misrepresented by the data?
‘Gifted’ technology also raises questions of maintenance, obsolescence and sustainability. Proprietary software, hardware turnover and planned obsolescence are fundamental to the technology sector business model. This places the financial burden of updates, maintenance and specialist support on the consumer – in this case the humanitarian agency or their host population. Products that are no longer profitable may be discontinued, regardless of who might be reliant upon them. Similar issues are at play in the maintenance and upkeep of websites – as links become broken and information outdated – a phenomenon that Benton (2019) has termed ‘digital litter’. In a related vein, Fast and Waugaman (2016) have examined practical issues related to online and offline functionality, leading to the redundant use of paper to supplement electronic systems, and the implications for data quality and usability. Humanitarian practice thus becomes beholden to the constancy of software, vulnerable to political or unintentional failures, bugs or flaws in updates, as well as the redundancy inherent in technological progress.
Medical data and biometric technologies raise additional issues of justice and equity, particularly in relation to the ethics of consent. Data sharing may fail to take adequate account of consent, creating breaches of trust, privacy and informed consent, and data may be used for purposes outside of those originally identified (Hosein and Martin, 2010: 16). Those producing or providing data may not realise they are generating useful and valuable data/information (PLoS, 2012; Lawlor and Stone, 2001).
During humanitarian crises and environmental disasters, it is common for people to lose their personal documentation, paperwork and identity cards. The use of biometric data for identity cards or records has been promoted by international organisations as a ‘lifesaver’ for migrants who require ID to access services and provisions (Polk, 2020; Burt, 2019; Raftree and Steinacker, 2019). However, the extraction of biometric information for identity processes, collected by NGOs, can also cause harm. In interviews, migrants have revealed how they have given access to identity data or personal technology (such as smartphones or laptops) in exchange for resources, without meaningful consent or understanding of how NGOs or agencies will use their data (Latonero et al., 2019; Bellanova et al., 2016). Recorded medical data then becomes the person’s identity, placing a significant burden on bureaucratic structures, data storage and personnel on the ground, with severe implications for the refugee or migrant when human error occurs. Participation in assistance programmes may be contingent upon providing biometrics, without full awareness of what this means. These programmes are often implemented within the context of public–private partnerships (Latonero et al., 2019; Jacobsen, 2015). Concerns about representations and images of aid recipients are not new to humanitarianism scholarship (de Laat and Gorin, 2016); however, discussions about medical data and technology have prompted further critiques of how these existing vulnerabilities might be amplified by consent and privacy concerns (Macias, 2019).
Trust
Humanitarian organisations rely upon trust as core to their acceptance and security (Fast et al., 2014). It is also the foundation of the medical act. Trust is likewise central to ensuring safe and secure data collection by actors in the field since this affects what and how much data people may be willing to share (HHI, 2011: 38).
Technological mediation of the humanitarian encounter is in step with other responses to the increasing insecurity of aid work, such as humanitarian subcontracting and the ‘bunkerisation’ of the aid compound (Duffield, 2010). This move from ‘face-to-face interactions to face-to-screen’ (Hunt et al., 2016; Donini and Maxwell, 2013) represents a shift away from the humanitarian tenet of proximity – the person-to-person gesture that is meant to be at the heart of the humanitarian act (Healy et al., 2019; Fast, 2017; Duffield, 2019). Those who live in humanitarian environments already make decisions, deviate from prescribed procedures and construct networks in response to their understanding of these dilemmas and of the implications of medical data on the ground. Such adaptations are rarely satisfactory. For instance, the World Food Programme ceased food deliveries in Yemen’s capital city after Houthis refused to allow the registration of recipients’ details for an anti-fraud database (Parker and Slemrod, 2019; Raftree and Steinacker, 2019).
This dilemma – the move away from proximity and the familiarity and trust it is supposed to engender – happens alongside a burgeoning ecosystem of misinformation and disinformation as weapons of war and tools of diplomacy (van Solinge and Marelli, 2021).
There are negative implications for the principle of neutrality if data collection, processing or analysis is undertaken externally to the humanitarian organisation (ICRC, 2020). Yet humanitarian organisations – as part of the global information technology infrastructure – overwhelmingly rely upon major third-party service providers to house data, provide IT infrastructure and, increasingly, with the advent of workable AI, process and analyse that data. Humanitarian actors are thus intertwined with a capitalist industrial complex that is far from neutral in the eyes of many patients and host communities.
Unsafe data sharing, particularly in situations of armed conflict, can compromise the safety of ‘data subjects’ (ICRC, 2020) and poses new questions around access (Jacobsen and Fast, 2019; Fast, 2022). Should aid agencies transmit data to third parties with weak protection standards, differing motivations or imposed legal obligations, it could expose patients and other vulnerable categories of people to security risks both within the immediate humanitarian context and in their future life outside of crisis (Nonnecke, 2017; GPPi, 2021; Fast, 2022; Diepeveen and Bryant, 2022). Interoperability can also enable ‘function creep’ whereby information is used for purposes beyond the original intent (Soliman, 2016; Hosein and Nyst, 2013: 7; Taylor et al., 2016).
In large-scale crises, the vast amounts of data generated – and the new technologies often imported or invented to deal with them – can cause confusion. As digital mapping and tracing increasingly become ‘tools of the trade’ for emergency response, reams of data are amassed by agencies during humanitarian crises (Altay and Labonte, 2014). Troves of data can ‘paralyse’ a humanitarian organisation as much as inform it, through volume and sheer complexity (Meier, 2015).
Interoperability of data is also a factor in large-scale interventions, since datasets gathered by different agencies on different platforms cannot always be made to cohere. New hardware or software may be intended to address these issues, but will often initially slow processes in the field, as implementation brings unforeseen issues ‘such as insufficient battery charge, printer malfunction and basic unfamiliarity with the system’ (Jafar, 2020: 47). New software can stymie workers, while uncertainty over the reliability of crowd-sourced data can slow response (Deibert and Scott-Railton, 2016: 327). Instead, adaptations of existing technology may be more effective in crisis contexts (Fast and Waugaman, 2016; HHI, 2011: 30–3). As with all innovations in crisis, these data technologies may inadvertently shift risk onto more vulnerable communities (Kalkman, 2018: 3) or exclude them altogether (Davis, 2020).
Conclusion
This has been only a partial summary of contemporary issues influencing medical data technologies in humanitarian settings. In attempting an overview of key issues and authors, we have highlighted those that surface within the themes of governance, power and control; justice and equity; and trust. The humanitarian act – like the medical act – has always been predicated on trust: beneficence as the principal motive and outcome. Trust is the central constitutive component of the relationship. Naturally, motive and means have never been pure; questions of power and justice are always at play; the medical humanitarian act is fraught with moral complexity. But accelerating medical data technologies confound this relationship further. Patient data can travel unanticipated and sometimes untraceable pathways, further muddying the ethics of informed consent, choice and care. This can multiply existing power differentials and inequalities. These themes are overlapping, not exclusive, categories; most of the phenomena and dilemmas discussed here evoke more than one of them. Yet such divisions are conceptually necessary to treat so heterogeneous a subject matter, and they call attention to the preeminent implications for patient well-being and humanitarian operations as a whole.
Medical humanitarian settings might not make for qualitatively different data ethics, but they do make for quantitatively different consequences. In any setting, a medical data breach can do long-lasting harm; in authoritarian or conflict environments it can be deadly. Where existing vulnerabilities, inequities and power differentials are already acute, it matters how medical data are managed. This is the urgent relevance of medical humanitarian data studies as a branch of broader and established enquiries into humanitarian and data ethics. We hope this article will spark further dialogue and research around the topic of medical humanitarian data technologies, with the ultimate aim of improved safety and autonomy for those living in crisis and conflict.
Note
An early draft version of this document was circulated among participants of the Medical Data Studies in Humanitarianism (MDaSH) network, an online discussion group of practitioners and academics focused on taking stock of existing research and practice in this arena. Funding for the network, including a symposium, was provided by the Wellcome Trust.
Works Cited
Aas, K. F. (2006), ‘“The Body Does Not Lie”: Identity, Risk and Trust in Technoculture’, Crime, Media, Culture: An International Journal, 2:2, 143–58.
Altay, N. and Labonte, M. (2014), ‘Challenges in Humanitarian Information Management and Exchange: Evidence from Haiti’, Disasters, 38:S1, S50–S72, doi: 10.1111/disa.12052.
Amnesty International (2020), ‘Bahrain, Kuwait and Norway Contact Tracing Apps among Most Dangerous for Privacy’, Amnesty International, www.amnesty.org/en/latest/news/2020/06/bahrain-kuwait-norway-contact-tracing-apps-danger-for-privacy/ (accessed 1 July 2022).
Barkawi, T. (2013), ‘The “War on Terror” Comes of Age’, Opinions: Al Jazeera, 16 June, www.aljazeera.com/opinions/2013/6/16/the-war-on-terror-comes-of-age (accessed 1 July 2022).
Bellanova, R., Jumbert, M. G. and Gellert, R. (2016), ‘Give Us Your Phone and We May Grant You Asylum’, Peace Research Institute Oslo Blog, 17 October, https://blogs.prio.org/2016/10/give-us-your-phone-and-we-may-grant-you-asylum/ (accessed 1 July 2022).
Benton, M. (2019), ‘Digital Litter: The Downside of Using Technology to Help Refugees’, Migration Policy Institute, 20 June, www.migrationpolicy.org/article/digital-litter-downside-using-technology-help-refugees (accessed 1 July 2022).
Bryant, J. (2019), ‘New Technologies Are Changing Humanitarian Action, but Don’t Assume They’re Inclusive’, ODI Blog, 29 November, www.odi.org/blogs/new-technologies-are-changing-humanitarian-action-don-t-assume-they-ll-make-responses-more (accessed 1 July 2022).
Bryant, J. (2022), Digital Technologies and Inclusion in Humanitarian Response, Humanitarian Policy Group Report (London: Overseas Development Institute).
Bryant, J., Holloway, K., Lough, O. and Willitts-King, B. (2020), Bridging Humanitarian Digital Divides during Covid-19, HPG Briefing Note (London: Overseas Development Institute).
Buchan, R. and Tsagourias, N. (2022), ‘Hacking International Organizations: The Role of Privileges, Immunities, Good Faith and the Principle of State Sovereignty’, International Review of the Red Cross, 104:919, 1–28.
Burt, C. (2019), ‘UNHCR Reaches 7.2M Biometric Records but Critics Express Concern’, Biometric Update, 24 June, www.biometricupdate.com/201906/unhcr-reaches-7-2m-biometric-records-but-critics-express-concern (accessed 1 July 2022).
Campo, S. (2021), panel discussant remarks, ‘Theory’, Medical Data Studies in Humanitarianism (MDaSH): A Critical Research Agenda conference, 15 November.
Casey-Maslen, M. (2018), ‘There’s No Place for Hierarchy in Safeguarding’, CDAC Network, 3 October, www.cdacnetwork.org/news/theres-no-place-for-hierarchy-in-safeguarding (accessed 1 July 2022).
Cheesman, M. (2020), ‘Self-Sovereignty for Refugees? The Contested Horizons of Digital Identity’, Geopolitics, 27:1, 134–59.
Clarke, L. (2020), ‘How the Fight against Coronavirus Paves the Way for Technological Authoritarianism’, New Statesman, 23 March, www.newstatesman.com/science-tech/privacy/2020/03/coronavirus-surveillance-technological-authoritarianism-lockdown (accessed 1 July 2022).
Collier, S. J. et al. (eds) (2017), ‘Issue 9: Little Development Devices / Humanitarian Goods’, LIMN, https://limn.it/issues/little-development-devices-humanitarian-goods/ (accessed 1 July 2022).
Collinson, S. and Duffield, M. (2013), Paradoxes of Presence: Risk Management and Aid Culture in Challenging Environments, ODI Humanitarian Policy Group Report, https://cdn.odi.org/media/documents/8428.pdf (accessed 1 July 2022).
Coppi, G. and Fast, L. (2019), Blockchain and Distributed Ledger Technologies in the Humanitarian Sector, HPG Commissioned Report, https://cdn.odi.org/media/documents/12605.pdf (accessed 1 July 2022).
Culbertson, S., Dimarogonas, J., Costello, K. and Lanna, S. (2019), Crossing the Digital Divide: Applying Technology to the Global Refugee Crisis, RAND Corporation, www.rand.org/content/dam/rand/pubs/research_reports/RR4300/RR4322/RAND_RR4322.pdf (accessed 1 July 2022).
Currion, P. (2019), ‘The Black Hole of Humanitarian Innovation’, Journal of Humanitarian Affairs, 1:3, 42–5, doi: 10.7227/JHA.024.
Davis, S. L. M. (2020), The Uncounted: Politics of Data in Global Health (Cambridge: Cambridge University Press).
de Laat, S. and Gorin, V. (2016), ‘Iconographies of Humanitarian Aid in Africa’, in Bennett, C., et al. (eds), Learning from the Past to Shape the Future: Lessons from the History of Humanitarian Action in Africa. HPG Working Paper (London: ODI), pp. 15–30, https://cdn.odi.org/media/documents/11148.pdf (accessed 1 July 2022).
Deibert, R. J. (2013), Black Code: Inside the Battle for Cyberspace (Toronto: McClelland & Stewart).
Deibert, R. and Scott-Railton, J. (2016), ‘Digitally Armed and Dangerous: Humanitarian Intervention in the Wired World’, in Williams, P. and Fiddner, D. (eds), Cyberspace: Malevolent Actors, Criminal Opportunities and Strategic Competition (Carlisle: United States Army War College), pp. 319–68.
Diepeveen, S., Bryant, J., Mohamud, F., Wasuge, M. and Guled, H. (2022), Data Sharing and Third-Party Monitoring in Humanitarian Response, HPG Working Paper (London: ODI), www.odi.org/en/publications/data-sharing-and-third-party-monitoring-in-humanitarian-response/ (accessed 1 July 2023).
Dodson, L. L., Revi Sterling, S. and Bennett, J. K. (2013), ‘Minding the Gaps: Cultural, Technical and Gender-Based Barriers to Mobile Use in Oral-Language Berber Communities in Morocco’, ICTD ‘13: Proceedings of the Sixth International Conference on Information and Communication Technologies and Development, pp. 79–88.
Donini, A. and Maxwell, D. (2013), ‘From Face-to-Face to Face-to-Screen: Remote Management, Effectiveness and Accountability of Humanitarian Action in Insecure Environments’, International Review of the Red Cross, 95:890, 383–413. doi: 10.1017/s1816383114000265.
Duffield, M. (2010), ‘Risk-Management and the Fortified Aid Compound: Everyday Life in Post-Interventionary Society’, Journal of Intervention and Statebuilding, 4:4, 453–74.
Duffield, M. (2016), ‘The Resilience of the Ruins: Towards a Critique of Digital Humanitarianism’, Resilience, 4:3, 147–65.
Duffield, M. (2019), Post-Humanitarianism: Governing Precarity in the Digital World (Cambridge: Polity Press).
Eckenwiler, L. and Hunt, M. (2014), ‘Counterterrorism, Ethics, and Global Health’, Hastings Center Report, 44:3, 12–13.
Fassin, D. (2012), Humanitarian Reason: A Moral History of the Present (Berkeley, CA: University of California Press).
Fast, L. (2017), ‘Diverging Data: Exploring the Epistemologies of Data Collection and Use among Those Working on and in Conflict’, International Peacekeeping, 24:5, 706–32.
Fast, L. (2022), Data Sharing between Humanitarian Organizations and Donors: Toward Understanding and Advancing Responsible Practice. Working Paper (Oslo/Bergen: Norwegian Centre for Humanitarian Studies/NCHS), https://bit.ly/37HRu5R.
Fast, L. and Waugaman, A. (2016), Fighting Ebola with Information: Learning from Data and Information Flows in the West Africa Ebola Response (Washington, DC: USAID).
Fast, L., Freeman, F., O’Neill, M. and Rowley, E. (2014), ‘The Promise of Acceptance as an NGO Security Management Approach’, Disasters, 39:2, 208–31.
Gazi, T. (2020), ‘Data to the Rescue: How Humanitarian Aid NGOs Should Collect Information Based on the GDPR’, Journal of International Humanitarian Action, 5:9, 1–7.
Givoni, M. (2016), ‘Between Micro Mappers and Missing Maps: Digital Humanitarianism and the Politics of Material Participation in Disaster Response’, Environment and Planning D: Society and Space, 34:6, 1025–43, doi: 10.1177/0263775816652899.
Global Public Policy Institute (GPPi) (2021), Risks Associated with Humanitarian Data Sharing with Donors, www.gppi.net/media/GPPi_DonorDataSharingRisks_Report_August2021.pdf (accessed 28 September 2023).
Greenwood, F. (2021), ‘Drones and Distrust in Humanitarian Aid’, Humanitarian Law & Policy Blog, 22 July, https://blogs.icrc.org/law-and-policy/2021/07/22/drones-distrust-humanitarian/ (accessed 1 Nov 2022).
Harvard Humanitarian Initiative (HHI) (2011), Disaster Relief 2.0: The Future of Information Sharing in Humanitarian Emergencies, https://hhi.harvard.edu/publications/disaster-relief-20-future-informationsharing-humanitarian-emergencies (accessed 28 September 2023).
Hayes, B. (2009), NeoConOpticon: The EU Security-Industrial Complex, Statewatch, www.statewatch.org/media/documents/analyses/neoconopticon-report.pdf (accessed 1 July 2022).
Healy, S., Aneja, U., DuBois, M., Harvey, P. and Poole, L. (2019), ‘Working with Local Actors: MSF’s Approach’, Humanitarian Policy Group Blog, 28 November, https://odihpn.org/blog/working-with-local-actors-msf/ (accessed 1 July 2022).
Heeks, R., Rakesh, V., Sengupta, R., Chattapadhyay, S. and Foster, C. (2019), ‘Datafication, Value and Power in Developing Countries: Big Data in Two Indian Public Service Organizations’, Development Policy Review, 39:1, 82–102, doi: 10.1111/dpr.12477.
Hosein, G. and Martin, A. (2010), Electronic Health Privacy and Security in Developing Countries and Humanitarian Operations, Policy Engagement Network for the International Development Research Centre, http://personal.lse.ac.uk/martinak/eHealth.pdf (accessed 1 July 2022).
Hosein, G. and Nyst, C. (2013), ‘Aiding Surveillance: An Exploration of How Development and Humanitarian Aid Initiatives Are Enabling Surveillance in Developing Countries’, Privacy International, https://privacyinternational.org/sites/default/files/2017-12/Aiding%20Surveillance.pdf (accessed 1 July 2022).
Hunt, M., Pringle J., Christen, M., Eckenwiler, L., Schwartz, L. and Davé, A. (2016), ‘Ethics of Emergent Information and Communication Technology Applications in Humanitarian Medical Assistance’, International Health, 8:4, 239–45.
Hutchinson, J. (1989), ‘The History of the Red Cross Is Anything but Dull’, CMAJ (Canadian Medical Association Journal), 141:4, 336–9.
International Committee of the Red Cross (ICRC) (2015), ‘International Humanitarian Law and the Challenges of Contemporary Armed Conflicts’, 32IC/15/11, 32nd International Conference of the Red Cross and Red Crescent, Geneva, Switzerland, www.icrc.org/en/download/file/15061/32ic-report-on-ihl-and-challenges-of-armed-conflicts.pdf (accessed 1 July 2022).
International Committee of the Red Cross (ICRC) (2020), Handbook on Data Protection in Humanitarian Action, Kuner, C. and Marelli, M. (eds) (Geneva: ICRC, 2nd edn), www.icrc.org/en/data-protection-humanitarian-action-handbook (accessed 1 July 2022).
Iwai, Y., Khan, Z. H. and DasGupta, S. (2020), ‘Abolition Medicine’, The Lancet, 396:10245, 158–9.
Jacobsen, K. L. (2015), The Politics of Humanitarian Technology: Good Intentions, Unintended Consequences and Insecurity (Abingdon: Routledge).
Jacobsen, K. L. and Fast, L. (2019), ‘Rethinking Access: How Humanitarian Technology Governance Blurs Control and Care’, Disasters, 43:S2, S151–S168.
Jacobsen, K. L., Sandvik, K. B. and McDonald, S. M. (2017), ‘Humanitarian Experimentation’, Humanitarian Law & Policy Blog, 28 November, https://blogs.icrc.org/law-and-policy/2017/11/28/humanitarian-experimentation/ (accessed 1 July 2022).
Jafar, A. (2020), ‘Medical Documentation in Humanitarian Emergencies: Building High-tech Castles in the Air?’, Journal of Humanitarian Affairs, 1:3, 46–7.
Jeandesboz, J. and Pallister-Wilkins, P. (2016), ‘Crisis, Routine, Consolidation: The Politics of the Mediterranean Migration Crisis’, Mediterranean Politics, 21:2, 316–20, doi: 10.1080/13629395.2016.1145825.
Kak, A. (ed.) (2020), ‘Regulating Biometrics: Global Approaches and Urgent Questions’, AI Now, https://ainowinstitute.org/regulatingbiometrics.pdf (accessed 1 July 2022).
Kalkman, J. P. (2018), ‘Practices and Consequences of Using Humanitarian Technologies in Volatile Aid Settings’, Journal of International Humanitarian Action, 3:1, 1–12.
Kaye, J., et al. (2018), ‘Including All Voices in International Data-Sharing Governance’, Human Genomics, 12, Art. 13, doi: 10.1186/s40246-018-0143-9.
Khan, Z. H., Iwai, Y. and DasGupta, S. (2022), ‘Abolitionist Reimaginings of Health’, AMA Journal of Ethics, 24:3, 239–46, doi: 10.1001/amajethics.2022.239.
Korf, B., Habullah, S., Hollenbach, P. and Klem, B. (2009), ‘The Gift of Disaster: The Commodification of Good Intentions in Post-Tsunami Sri Lanka’, Disasters, 34:S1, S60–S77.
Latonero, M. and Kift, P. (2018), ‘On Digital Passages and Borders – Refugees and the New Infrastructure for Movement and Control’, Social Media + Society, 4:1, 1–11.
Latonero, M., Hiatt, K., Napolitano, A., Clericetti, G. and Penagos, M. (2019), Digital Identity in the Migration & Refugee Context: Italy Case Study, Data & Society, https://datasociety.net/wp-content/uploads/2019/04/DataSociety_DigitalIdentity.pdf (accessed 1 July 2022).
Lawlor, D. A. and Stone, T. (2001), ‘Public Health and Data Protection: An Inevitable Collision or Potential for a Meeting of Minds?’, International Journal of Epidemiology, 30:6, 1221–5.
Lodinová, A. (2016), ‘Application of Biometrics as a Means of Refugee Registration: Focusing on UNHCR’s Strategy’, Development, Environment and Foresight, 2:2, 91–100.
Macias, L. (2019), ‘Entre contrôle et protection: ce que les technologies de l’information et de la communication font au camp de réfugiés’, Communications, 104:1, 107–17, doi: https://doi.org/10.3917/commu.104.0107.
Madianou, M. (2019), ‘Technocolonialism: Digital Innovation and Data Practices in the Humanitarian Response to Refugee Crises’, Social Media + Society, 5:3, doi: 10.1177/2056305119863146.
Madianou, M., Ong, J. C., Longboan, L. and Cornelio, J. S. (2016), ‘The Appearance of Accountability: Communication Technologies and Power Asymmetries in Humanitarian Aid and Disaster Recovery’, Journal of Communication, 66:6, 960–81.
Marino, S. (2021), Mediating the Refugee Crisis: Digital Solidarity, Humanitarian Technologies and Border Regimes (London: Palgrave Macmillan).
Martin, A. and Taylor, L. (2020), ‘Exclusion and Inclusion in Identification: Regulation, Displacement and Data Justice’, Information Technology for Development, 27:1, 50–66, doi: 10.1080/02681102.2020.1811943.
Mauss, M. (1990 [1925]), The Gift: The Form and Reason for Exchange in Archaic Societies. Translated by W. D. Halls (London; New York: Routledge).
Meier, P. (2015), Digital Humanitarians: How Big Data Is Changing the Face of Humanitarian Response (Boca Raton, FL: CRC Press).
Meier, P. (2016), Drones in Humanitarian Action: A Guide to the Use of Airborne Systems in Humanitarian Crises (Geneva: FSD (Fondation suisse de déminage)), https://reliefweb.int/sites/reliefweb.int/files/resources/Drones%20in%20Humanitarian%20Action.pdf (accessed 1 July 2022).
Moran, T. C. (2021), ‘Racial Technological Bias and the White, Feminine Voice of AI VAs’, Communication and Critical/Cultural Studies, 18:1, 19–36, doi: 10.1080/14791420.2020.1820059.
Muller, B. J. (2004), ‘(Dis)qualified Bodies: Securitization, Citizenship and “Identity Management”’, Citizenship Studies, 8:3, 279–94.
National Audit Office (2018), Investigation: WannaCry Cyber Attack and the NHS, UK Department of Health, www.nao.org.uk/wp-content/uploads/2017/10/Investigation-WannaCry-cyber-attack-and-the-NHS.pdf (accessed 1 July 2022).
Nonnecke, B. M. (2017), ‘Risks of Recognition’, New America Blog, 5 September, https://context.newamerica.org/risks-of-recognition-98d7ee22fc1?gi=7c6ca5b7ff9b (accessed 1 July 2022).
Office for the Coordination of Humanitarian Affairs (OCHA) (2019), ‘Data Responsibility Guidelines: Working Draft’, https://centre.humdata.org/wp-content/uploads/2019/03/OCHA-DR-Guidelines-working-draft-032019.pdf (accessed 1 July 2022).
Owens, K. and Walker, A. (2020), ‘Those Designing Healthcare Algorithms Must Become Actively Anti-Racist’, Nature Medicine, 26:9, 1327–8, doi: 10.1038/s41591-020-1020-3.
Parada, V., Fast, L., Briody, C., Wille, C. and Coninx, R. (2023), ‘Underestimating Attacks: Comparing Two Sources of Publicly-Available Data about Attacks on Health Care in 2017’, Conflict and Health, 17, Art. 3, doi: 10.1186/s13031-023-00498-w.
Parker, B. (2020), ‘Dozens of NGOs Hit by Hack on US Fundraising Database’, New Humanitarian, 4 August, https://www.thenewhumanitarian.org/news/2020/08/04/NGO-fundraising-database-hack (accessed 1 July 2022).
Parker, B. and Slemrod, A. (2019), ‘UN Gives Ultimatum to Yemen Rebels over Reports of Aid Theft’, New Humanitarian, 17 June, www.thenewhumanitarian.org/news/2019/06/17/un-yemen-rebels-aid-theft-biometrics (accessed 1 July 2022).
PLoS Medicine Editors (2012), ‘Digital Humanitarianism: Collective Intelligence Emerging’, PLoS Medicine, 9:7, e1001278, doi: 10.1371/journal.pmed.1001278.
Polk, A. (2020), ‘Big Brother Turns Its Eye on Refugees’, Foreign Policy: Argument, 2 September, https://foreignpolicy.com/2020/09/02/big-brother-turns-its-eye-on-refugees/ (accessed 1 July 2022).
PwC (2017), ‘Managing the Refugee and Migrant Crisis: The Role of Governments, Private Sector and Technology’, Global Crisis Centre, https://www.pwc.com/gx/en/issues/crisis-solutions/refugee-and-migrant-crisis-report.pdf (accessed 1 July 2022).
Raftree, L. and Nkie, J. (2011), ‘Digital Mapping: A Silver Bullet for Enhancing Youth Participation in Governance?’, IIED (International Institute for Environment and Development), https://pubs.iied.org/G03192/ (accessed 1 July 2022).
Raftree, L. and Steinacker, K. (2019), ‘Head to Head: Biometrics and Aid’, New Humanitarian – Aid and Policy Opinion, 17 July, www.thenewhumanitarian.org/opinion/2019/07/17/head-head-biometrics-and-aid (accessed 1 July 2022).
Rahman, Z. (2021) ‘The UN’s Refugee Data Shame’, New Humanitarian – Aid and Policy Opinion, 21 June, www.thenewhumanitarian.org/opinion/2021/6/21/rohingya-data-protection-and-UN-betrayal (accessed 2 Nov 2022).
Raza, S. (2022), ‘Artificial Unintelligence: How “Smart” and AI Technologies Perpetuate Bias and Systemic Discrimination’, in Fellows, J. J. and Smith, L. (eds), Gender, Sex, and Tech!: An Intersectional Feminist Guide (Toronto: Canadian Scholars), pp. 185–204.
Redfield, P. (2012), ‘Bioexpectations: Life Technologies as Humanitarian Goods’, Public Culture, 24:1, 157–84.
Rodenhäuser, T. (2020), ‘Hacking Humanitarians? IHL and the Protection of Humanitarian Organizations against Cyber Operations’, EJIL:Talk! Blog, 16 March, www.ejiltalk.org/hacking-humanitarians-ihl-and-the-protection-of-humanitarian-organizations-against-cyber-operations/ (accessed 1 July 2022).
Sandvik, K. B. (2014), ‘Humanitarian Innovation, Humanitarian Renewal?’, Forced Migration Review, Supplement: Innovation and Refugees, www.fmreview.org/innovation/sandvik (accessed 1 July 2022).
Sandvik, K. B. (2020a), ‘Humanitarian Wearables and the Future of Aid in the Global Data Economy’, Global Policy Journal Blog, 31 March, https://www.globalpolicyjournal.com/blog/31/03/2020/humanitarian-wearables-and-future-aid-global-data-economy (accessed 1 July 2022).
Sandvik, K. B. (2020b), ‘Perpetuating Data Colonialism through Digital Humanitarian Technologies’, ISS Blog, 10 June, https://issblog.nl/2020/06/10/perpetuating-data-colonialism-through-digital-humanitarian-technologies-by-kristin-bergtora-sandvik/ (accessed 1 July 2022).
Sandvik, K. B., Jumbert, M. G. and Lohne, K. (2015), ‘Conceptualizing the Role of Good Drones in Global Governance’, SSRN, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3188043 (accessed 1 July 2022).
Sandvik, K. B., Jacobsen, K. L. and McDonald, S. M. (2017), ‘Do No Harm: A Taxonomy of the Challenges of Humanitarian Experimentation’, International Review of the Red Cross, 99:904, 319–44.
Scott, J. C. (1999), Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed (New Haven, CT: Yale University Press).
Scott, P., Emerson, K. and Henderson-Reay, T. (2021), ‘Data Saves Lives’, BMJ, 374:n1694, doi: 10.1136/bmj.n1694.
Scott-Smith, T. (2015), ‘Control and Biopower in Contemporary Humanitarian Aid: The Case of Supplementary Feeding’, Journal of Refugee Studies, 28:1, 21–37.
Scott-Smith, T. (2016), ‘Humanitarian Neophilia: The “Innovation Turn” and Its Implications’, Third World Quarterly, 37:12, 2229–51.
Shen, N., Bernier, T., Sequeira, L., Strauss, J., Pannor Silver, M., Carter-Langford, A. and Wiljer, D. (2019), ‘Understanding the Patient Privacy Perspective on Health Information Exchange: A Systematic Review’, International Journal of Medical Informatics, 125, 1–12, doi: 10.1016/j.ijmedinf.2019.01.014.
Soliman, S. (2016), ‘Tracking Refugees with Biometrics: More Questions Than Answers’, War on the Rocks, 9 March, https://warontherocks.com/2016/03/tracking-refugees-with-biometrics-more-questions-than-answers/ (accessed 1 July 2022).
Taylor, L. and Meissner, F. (2020), ‘A Crisis of Opportunity: Market-Making, Big Data, and the Consolidation of Migration as Risk’, Antipode, 52:1, 270–90.
Taylor, L., Floridi, L. and van der Sloot, B. (2016), Group Privacy: New Challenges of Data Technologies (London: Springer).
Thatcher, J., O’Sullivan, D. and Mahmoudi, D. (2016), ‘Data Colonialism through Accumulation by Dispossession: New Metaphors for Daily Data’, Environment and Planning D: Society and Space, 34:6, 990–1006.
van Solinge, D. and Marelli, M. (2021), ‘Q and A: Humanitarian Operations, the Spread of Harmful Information and Data Protection: In Conversation with Delphine van Solinge, the ICRC’s Protection Advisor on Digital Risks for Populations in Armed Conflict, and Massimo Marelli, Head of the ICRC’s Data Protection Office’, International Review of the Red Cross, 102:913, 27–41, doi: 10.3316/agispt.20210420044868.
Winner, L. (1980), ‘Do Artifacts Have Politics?’, Daedalus, 109:1, 121–36.
Winner, L. (1992), ‘Artifact/Ideas and Political Culture’, in Teich, A. H. (ed.), Technology and the Future (New York: St. Martin’s Press, 6th edn), pp. 283–92, http://archive.org/details/technologyfuture0006unse (accessed 1 July 2023).
Zou, J. and Schiebinger, L. (2018), ‘AI Can Be Sexist and Racist – It’s Time to Make It Fair’, Nature, 559:7714, 324–6, doi: 10.1038/d41586-018-05707-8.