As innovations in military technologies race toward ever-greater levels of automation and autonomy, debates over the ethics of violent technologies tread water. Discussions about whether lethal drones are the most moral and effective tools to combat terrorism, or whether killer robots could kill more ethically than humans, often end up conflating efficiency with morality and legality with ethicality. Such conceptual confusions raise urgent questions about what is at work in the relationship between lethal technologies, their uses, and the ethical justifications provided for technologised practices of political violence. What enables the framing of instruments for killing as inherently ethical? What socio-political rationale underpins these processes? And what kind of ethical framework for violence is produced in such a socio-political context? Death Machines reframes current debates on the ethics of technologised practices of violence, arguing that the way we conceive of the ethics of contemporary warfare is itself imbued with a set of bio-technological rationalities that work as limits. The task for critical thought must therefore be to unpack, engage, and challenge these limits. Drawing on the work of Hannah Arendt, the book offers a close reading of the technology-biopolitics-complex that informs and produces contemporary subjectivities, highlighting the perilous implications this has for how we think about the ethics of political violence, both now and in the future.
outmoded as humans. Today's machines are designed to
outpace human capabilities. In contrast, old-fashioned human organisms lack
comparable processing capabilities and might, eventually, ‘face extinction’ (Singer
2009: 415). Echoing this anxiety, technology tycoon
Elon Musk has issued a dire warning about the dangers of rapidly advancing AI and
the prospect of killer robots capable of ‘deleting humans like
spam’ (Musk 2014; Gibbs 2017). Musk is not alone
hopefully effective method of science communication to stimulate critical debate and achieve a ban on lethal autonomous weapons in the long run, because ‘serious discourse and academic argument are not enough to get the message through.’7 Negotiations over a ban on lethal autonomous weapons have been ongoing at the Convention on Certain Conventional Weapons in Geneva since 2014,8 with few results. At the same time, many non-governmental organisations and investigative journalist organisations such as the Campaign to Stop Killer Robots,9 Code Pink,10 the Bureau of
October 2017).
Bowcott, Owen. 2015. ‘UK Opposes International Ban on Developing “Killer Robots”’, The Guardian, 13 April.
Leveringhaus, Alex. 2016. Ethics and Autonomous Weapons. Oxford: Palgrave Macmillan.
Muoio, Danielle. 2015. ‘Russia and China Are Building Highly Autonomous Killer Robots’, Business Insider, 15
Rosenberg, Matthew and John
within which they take place. This question is concerned as much with what
is happening in the present as with why this present might
be as it is. In this vein, this book is motivated by questions about the
‘what’ and the ‘why’ of contemporary technologies of
violence and the underpinnings of their ethics. The emergence of new technologies
for violent practices – from lethal drones to so-called ‘killer robots’ to weaponised Artificial
organisation, Jutta Weber actualises this sinister scenario in her chapter on the ethical implications of self-regulating swarms and killer robots. According to Weber, our current imagining of swarms and artificial intelligence (AI) is heavily coloured by military fantasies of autonomous and self-regulating systems on the one hand, and dystopic images of killer robots such as Skynet from the Terminator universe on the other. Weber critically engages with these persistent imaginaries through readings of some of the most widespread cases. In Slaughterbots,28 for instance
seek to accelerate the development and deployment of UAS.
William Knight and Karen Hao, “Never Mind Killer Robots – Here Are Six Real AI Dangers to Watch Out for in 2019,” MIT Technology Review, January 7, 2019, www.technologyreview.com/2019/01/07/137929/never-mind-killer-robotshere-are-six-real-ai-dangers-to-watch-out-for-in-2019/
(accessed July 10, 2019).
The US DoD has developed directives restricting the development and
use of systems with
why the use of AI/ML in
the context of weapon systems has, thus far, been limited to experimental research. Heather
Roff and P.W. Singer, “The Next President Will Decide the Fate of Killer Robots – and
the Future of War,” Wired, September 6, 2016, www.wired.com/2016/09/next-president-will-decide-fate-killer-robots-future-war/
(accessed December 10, 2019).
This paradox suggests that when states’ capabilities are
dependent on resources (e.g. manpower or datasets) that can be
Taking the role of non-governmental organisations in customary international lawmaking seriously
’ in A Bianchi (ed), Non-State Actors and International Law (Routledge 2009).
38 On the participation of non-governmental organisations in the Rome Statute conference, Lindblom (n 8) 463ff. Z Pearson, ‘Non-Governmental Organisations and the International Criminal Court’ (2006) 39 Cornell International Law Journal 243.
39 See generally: Bernaz and Pietropaoli (n 9).
40 Non-governmental organisations’ ‘Campaign to Stop Killer Robots’, Official Website www.stopkillerrobots.org/ accessed 16 July 2017.
41 On the work of the
their operation relies.
Simple triggers and tactical animism
While the dystopian prospect of autonomous killer robots figures prominently in the collective anxiety regarding drones, the dangers of this technology might just as easily be described according to the exact opposite scenario: the all-too-human nature of drones. In this regard, distinguishing the automated processes of targeting that occur via the martial networks of drone warfare from what is typically considered to be the more thoughtful human modes of sighting is perhaps not as easy as we