Digitalized Warfare: Responsibility, Intentionality and the Rule of Law
Increasingly, war is conducted by means of computer-controlled machines, such as drones. Likewise, most human decisions taken at times of war depend on digital technologies that identify targets or assess risks. Each of these technologies functions in accordance with algorithms – instructions expressed in the numerical language of computers. Our project investigates how the use of these algorithms affects traditional understandings of the rule of law at times of war. We do this by following three related strands of inquiry. First, we identify the extent to which the capabilities of digital technologies are shaped by their human creators. This is important for determining which person should be held responsible for the conduct of these technologies. Second, we study how international law should respond when algorithms behave in a manner that is unexpected, that a human code-writer did not anticipate. This matters if the utilization of digital technologies is not to lead to unintended consequences. Our third strand of inquiry focuses on the problem that it is often impossible to know how a digital machine will react before it has reacted. Whereas human conduct is regulated prospectively, the regulation of digital technologies often occurs retrospectively. This retrospective regulation creates legal uncertainty for developers and citizens. Our project therefore identifies strategies for containing these detrimental effects of digital technologies.
Final report
1. The Purpose of our Project
Since the inception of our project, we have been guided by the question whether the utilization of digital modes of warfare conflicts with traditional understandings of the rule of law in the international laws of war. We identified three strands of inquiry. The first sheds light on the complex relationship between human thought and digital code, and on the legal significance of their interaction. The second seeks to demarcate the boundaries of normative spaces within the digital architecture. The third asks whether the utilization of digital warfare technologies conflicts with the traditional understanding of the rule of law in the laws of war. The substantive tenets of our project remained largely unchanged throughout its implementation.
2. Implementation
The main change has taken place in the project’s line-up of researchers. Early on in the project, Valentin Jeutner secured generous external research funding, liberating the salary budgeted for him in our Foundation project. Jeutner continued to contribute to the project with his salary covered by that external source. The Foundation agreed that the budgeted salary be used for a two-year post-doc position at 80% FTE, whose holder was to be identified in an open and competitive process at the host institution. In that process, Moa Dahlbeck applied successfully and worked within the project from September 2018 until its end. Gregor Noll’s contribution at 30% FTE remained unchanged. Working in a tightly knit group of three rather than two researchers turned out to be a considerable benefit. Informal contacts apart, we held three workshops at which each of us presented work in progress and offered peer critique.
3. Outcomes
Our research group sees strong indications that the interaction of human-made, language-based law with digital normativity expressed in code will be anything but smooth. Technical developments in robotics and artificial intelligence (AI), particularly with regard to weapons that are constituted as man-machine ensembles, no longer allow for the application of fundamental legal concepts. Within such developments, there are no discrete, identifiable agents to which intentionality and responsibility can be ascribed. The ‘law’ of these technologies becomes self-validating; it is not a law that is incarnated through learning and then applied through legal process, but rather a law that is excarnate, acting as blind practice. We encounter nothing less than an epochal shift in which the historical relevance of law, its rule and its forms of responsibility assignment are entering a phase of decline.
This challenge, however, is insufficiently understood by advocates, makers and practitioners of international law. We observe that the debate on a legal ban on Lethal Autonomous Weapons Systems (popularly known as ‘killer robots’) unfolded as if digital and artificially intelligent technologies could be outlawed by fiat. We are actively contemplating ways and means of helping relevant segments of the public appreciate more fully the complexity of any attempt at regulation, as expounded in our research.
4. New Research Questions
In Noll’s research, the question has arisen of how shifts in regulatory ontologies and in the construction of responsibility were dealt with in earlier historical phases. The scientific revolution, industrialization and the ensuing spread of strict liability form a case that merits further study, shedding light on present regulatory dilemmas through their historical precursors.
In Jeutner’s research, the nexus between human action and responsible conduct has shifted towards the centre of attention. Specifically, Jeutner is researching the extent to which the delegation of decision-making competences to digital agents necessarily undermines responsible decision-making.
Dahlbeck’s research focuses on the question of how different philosophical approaches to intelligence and individuality set boundaries for how AI may be approached within different legal practices.
5. The International Dimensions of the Project
Moa Dahlbeck has presented the outcomes of her research in the following international contexts:
- Spinoza and the unfamiliar (University of Gothenburg, January 2018).
- Exemplarism as a method for constructing (legal) subjectivity (Malmö University, 24-25 October 2019).
Valentin Jeutner has presented research outcomes in the international domain as follows:
- Ethical and Regulatory Challenges of AI (St. Gallen University, 17 November 2021),
- Empathy at War (Swedish Defence College, 4 June 2021),
- Law’s Image of the Human (Linköping University, 27 August 2020),
- An individual’s responsibility within totalitarian regimes (Lund University, 26 September 2019),
- Individual decision-making processes at times of exception (Cambridge University, 25 June 2019; Hong Kong University, 7 March 2019; Macau University, 4 March 2019; Stockholm University, 19 November 2018; the UN Headquarters, New York, 10 April 2018; Oxford University, 8 February 2018),
- The utility of proportionality tests to decide conflicts between ius cogens norms (Lund University, 20 March 2019),
- The social implications of AI (Robert Bosch United World College, Freiburg, 18 October 2018),
- The responsibility of lawyers in imperfect rule-of-law states (IBA Kazakhstan, 6 September 2018),
- The digitalized reasonable person (Los Angeles, 18 April 2018; New York, 24 March 2018).
Gregor Noll has presented research outcomes in the international domain as follows:
- Invited paper on “AI, War and Law” at a conference on Humanitarianism and the Remaking of International Law: History, Ideology, Practice, Technology, 31 May-2 June, Melbourne Law School, Melbourne,
- Accepted paper on the same topic at the Law and Society Association 2018 Annual Meeting in Toronto, 7-10 June 2018,
- Invited paper on ‘War and Algorithm: The End of Law?’ at a workshop held at Kent Law School on 30 January 2019,
- Presentation of the edited volume War and Algorithm at a research seminar organized by NYU Law School on 3 March 2020,
- Presentation of his edited volume chapter (‘War and Algorithm: The End of Law?’) at a research seminar at Harvard Law School on 4 March 2020,
- Presentation of the edited volume War and Algorithm at a book launch jointly organized with NYU Steinhardt on 5 March 2020,
- Online keynote presentation on AI and Legal Responsibility at the System and Software Security conference 2020, 25 November 2020.
6. Dissemination and Collaboration
Despite its limited duration and funds, this project has generated an impressive list of peer-reviewed research publications, addressing multiple research audiences within and beyond law. Work on these publications has enabled us to disseminate results at a considerable number of conferences, meetings and publication launches, even under pandemic conditions. We listed the research collaborations and meeting participation this project enabled in some detail in our interim report; they suggest that the project tangibly developed our ability to build domestic and international research networks.
Our project was chosen by RJ for a pilot cooperation with Eva Krutmeijer, a science journalist, and Karin Wegsjö, a film director. Based on our research and a set of interviews with the team, they produced GUILTY NOT GUILTY, a short film aestheticizing the core normative problem we identified. GUILTY NOT GUILTY is scheduled to be screened at Vetenskapsfestivalen in Gothenburg and on other platforms.