
Tech Bros and Techno-War: Big Idea Series #3

Updated: Aug 16, 2019

John R. Emery, Ph.D.:

'I am purely evil;
Hear the thrum
Of my evil engine;
Evilly I come.
The stars as thick as flowers
In the meadows of July;
A fine night for murder
Winging through the sky.
Bombs shall be the bounty
Of the lovely night;
Death the desecration
Of the fields of light.
I am purely evil,
Come to destroy
Beauty and goodness,
Tenderness and joy.'

–Ethel Mannin, “Song of the Bomber” (1936)





How do the ethical assumptions of modernity affect the way we conceptualize technological problem-solving in warfare? A predominant logic today holds that complex ethico-political dilemmas are reducible to engineering problems or mathematical risk-assessment equations, awaiting the correct answer via technological determinism. This techno-logic manifests itself in novel ways in contemporary discourses and practices of war. Although there has been a plethora of literature on this subject in recent years, from smart bombs and drones to AI and lethal autonomous weapons systems (LAWS), here I explore how the focus on technology as “solving” the dilemmas of late modern war ultimately disengages our ethical intuitions and enables what it seeks to constrain, by making killing more palatable to the liberal conscience. I begin with some underlying modern assumptions that frame ethical decision-making as a set of universalizable, quantifiable problems to-be-solved, focusing on a distinction within western philosophy between Aristotelian and Cartesian ways of thinking about the world, ethics, and the place of technology within it. I then look at a recent example of how these underlying assumptions shape contemporary discussions of ethics and technology in practice in Silicon Valley. Elke Schwarz asks how technology might reshape our capacity to think ethically; I add a further question: how do our modern assumptions always already shape our ethical assumptions?

As social scientists, we know that when we quantify the social world, something or someone is always excluded; models never fully map onto the complexities and intricacies of social reality. We generalize in order to better understand the world while recognizing these limitations. Given the depth of military technological advancement, however, we should ask ourselves: would you be willing to bet someone’s life that your models are the most accurate representation of social reality in all contexts? That is precisely what the tech bro is doing today, without the understanding that these quantifications are often crude, imperfect placeholders that never capture the totality of diverse socio-political contexts.


Ethics in Modernity: Humanism vs. Cartesianism

The intellectual historian Stephen Toulmin, in his Cosmopolis: The Hidden Agenda of Modernity, outlines our modern assumptions: the 17th C. rational modes of inquiry based on the epistemology of Descartes, the physics of Galileo, and the political theory of Hobbes. Here I focus on the Cartesian legacy in contemporary techno-ethics. In particular, Toulmin asked what might have happened had modernity taken on a humanistic rather than a Cartesian skepticism. The former is epitomized by Michel de Montaigne, who looked out into the world and drew on other cultures and ways of life to expose the strangeness of European customs that were assumed to be universal values; the latter by Descartes, who looked within his own consciousness and doubted even his own sense data in order to establish first principles in his famous cogito ergo sum. We will take a brief foray into humanist skepticism as an alternative to the Cartesian skepticism that formed the foundations of modern epistemology; an alternative I believe we need in order to counteract the techno-ethics of tomorrow.


As we attempt to quantify the global battlefield, construct a science of warfare, eliminate uncertainty, and code a techno-ethics that is calculable and predictable, Montaigne’s humanist skepticism offers a way to temper our unfounded optimism in techno-logical problem-solving. Montaigne opens his masterpiece Essais with “That by Diverse Means Men Arrive at the Same End,” pushing back against the idea that humans act with law-like predictability. In this essay, he argues that humans are unpredictable, especially in warfare, and that bravery and valor can produce either clemency or revenge in the heart of one’s enemy. Man does not act in scientifically predictable ways, since “Man (in good earnest) is a marvelous vain, fickle, and unstable subject, and on whom it is very hard to form any certain and uniform judgment.”

First, Montaigne gives the example of Edward, Prince of Wales, who, having been “highly incensed by the Limousins, and taking their city by assault, was not, either by the cries of the people, or the prayers and tears of the women and children, abandoned to slaughter and prostrate at his feet for mercy, to be stayed from prosecuting his revenge; till, penetrating further into the town, he at last took notice of three French gentlemen—who with incredible bravery alone sustained the power of his victorious army.” It was this incredible bravery and valor of the three men against all odds that moved Edward to exercise clemency and halt the “torrent of his fury” against both the three men and the remaining civilians of the city.

In direct contrast, when Alexander the Great entered the city of Gaza after immense difficulties, he found the commander Betis, of whose “valour in the time of this siege he had most marvelous manifest proof, alone, forsaken by all his soldiers, his armour hacked and hewed to pieces, covered all over with blood and wounds, and yet still fighting in the crowd of a number of Macedonians, who were laying on him on all sides.” Alexander said to him, “Thou shalt not die, Betis, as thou dost intend; be sure thou shall suffer all the torments that can be inflicted on a captive.” To this menace Betis returned no answer but a fierce and disdainful look. “What,” says Alexander, observing his haughty and obstinate silence, “is he too stiff to bend a knee! Is he too proud to utter one suppliant word! Truly, I will conquer this silence; and if I cannot force a word from his mouth, I will, at least, extract a groan from his heart.” Thereupon, converting his anger into fury, he commanded Betis’s heels to be bored through, causing him, alive, to be dragged, mangled, and dismembered at a cart’s tail. In the end, this humanist skepticism about the predictability of human affairs demonstrates that one must understand the context in which one is operating, in direct contrast to the law-like principles of the social world that so plague social science and ethics today.


The Cartesian program for philosophy swept aside the reasonable uncertainties of 16th C. skeptics like Montaigne in favor of new, mathematical kinds of “rational” certainty and proof. There was a devaluation of the oral, the local, the timely, and the concrete, and an elevation of formally “rational theory grounded on abstract, universal, and timeless concepts.” Rhetoric became subordinate to logic; the validity and truth of rational arguments were independent of who presented them to whom, or in what context (Toulmin 1990: 75). According to Toulmin, modernity and the Cartesian legacy framed all questions in terms that rendered them independent of their context. Toulmin’s procedure is to recontextualize what was lost when 17th C. philosophers assumed that tests of “rationality” carried over from one context or situation to another (Toulmin 1990: 21). Throughout the Renaissance there was a reinvigoration of classical thought, especially Plato and Aristotle. The 17th C. philosophers who followed Plato limited “rationality” to theoretical arguments that achieve a quasi-geometrical certainty or necessity; thus, theoretical physics was a rational study in a way that law or ethics were not. Ultimately, Descartes’ legacy attempts to bring all subjects into formal theory, into formally valid demonstrations, and questions of ethics are diminished because they cannot achieve the same certainty as other forms of knowledge production (Toulmin 1990: 20). In contrast, the Renaissance Aristotelians handled moral issues using the case analysis, or casuistry, of the Nicomachean Ethics, in which “The Good has no universal form, regardless of the subject matter or situation: sound moral judgment always respects the detailed circumstances of specific kinds of cases”, pros ton kairon, as occasion requires (Toulmin 1990: 31-32). It becomes clear how the assumptions of universalizability, generalizability, and timelessness that plague political science today are inherited from a Cartesian legacy that permeates modernity. The (false) belief that ethical questions can be universalized and quantified in everyday life, let alone amid the uncertainties of war, represents a dangerous techno-logic that neglects alternative pre-modern visions of ethics as local, contextual, and casuistic in character. In essence, Aristotelian ethics moved from the particular toward the universal; Cartesian, Platonic ethics from the universal down to the particular. What the Montaignian skeptic brings to light is our tendency to universalize context-specific, particular experiences, neglecting other forms of knowledge and epistemology.





If one takes an Aristotelian approach to ethics (as I do), as opposed to a rule-based utilitarian or deontological standpoint, it is easy to see how the prospect of programming a universal ethics into ‘killer robots’ or AI in general is antithetical to a context-specific ethical framework. For pre-modern Aristotelian casuists, every ethical position was that of a given kind of person in given circumstances, in spatial relations with other specific people: the concrete particularity of a case was “of the essence.” Thus, “ethics was a field not for theoretical analysis, but for practical wisdom, and it was a mistake to treat it as a universal or abstract science” (Toulmin 1990: 76). Modernity, and today’s iterations of programming ethics, involve forgetting the practical nature of ethical decision-making. Chris Brown is one of the few who takes Toulmin’s work to heart today, making it part of his foundational understanding of ethics as practical judgment. “What is important here is to remember that rule-based moral reasoning is not the only way to approach ethical issues. Rule-based moral reasoning attempts to produce an algorithm that will give a general answer to the question of what is right and what is wrong,” and with complex cases from humanitarian intervention to drone strikes today, where duties are in radical conflict, “this approach is unlikely to succeed” (Brown 2010: 244). Thus, for Brown, “an Aristotelian – or more generally a classical – approach to ethics is…more promising. Toulmin’s version of Aristotle’s ethics is particularly helpful in this respect” (Brown 2010: 244). The point is that, in dealing with complex situations, such as deciding whether it is right for one state to use force preventively against another, or against “terrorists” operating in the space between war and peace, “there is no substitute for a form of moral reasoning that involves a judgement that takes into account the totality of circumstances, rather than seeks for a rule to apply” (Brown 2010: 245). Rule-based moral logic has been pervasive in contemporary ethical debates, especially over warfare. Yet applying the Kantian categorical imperative or making utilitarian calculations necessarily involves prudential judgment, especially amid the uncertainty of war. As Col. Andrew Bacevich aptly notes, “War remains today as it has always been–elusive, untamed, costly, difficult to control, fraught with surprise, and sure to give rise to unexpected consequences” (Bacevich 2008: 159-160). Indeed, the enduring nature of war is that it is an experiment in catastrophe; yet we strive to construct a science of warfare and to program away, algorithmically, the ethico-political dilemmas of killing in war. Hence, the aura of objectivity and neutrality that techno-ethics purports to offer decision-makers not only allows them to bury the ethical dilemmas of practical judgment in the algorithmic code; it simultaneously removes them one causal step from the act of killing.

The problem with practical judgment and a more Aristotelian approach to ethics in the techno-logical era is that it does not lend itself easily to quantification, because “making a judgement is often more intellectually demanding than following a rule” (Brown 2010: 230). What makes rule-based moral reasoning so appealing is that it “appears to offer a degree of moral security to individuals in an uncertain age such as our own,” a kind of objective assurance that they are doing the right thing (Brown 2010: 230). But following a rule still involves exercising judgment: moral dilemmas are often ambiguous and contradictory, and decisions about them must be made quickly. The emphasis on speed as opposed to judgment was best summed up by former CIA director John Brennan: “U.S. decision making processes need to be streamlined and accelerated...Because the problems [of today] are not going to wait for traditional discussions.” The ethics enabled by techno-logics that emphasize speed and efficiency, and that reduce ethico-political dilemmas to technical problems with clear and distinct quantifiable solutions, dissolves human worth into the role of a mere cog in the system. It falsely asserts a universalizable ethics that can be algorithmically programmed to eliminate the human error, bias, and panic of war in favor of an abstract rationality that purports to encompass every element of human life. A life-affirming ethics of practical judgment based on the concrete circumstances of war is reduced to life-destroying technical applications of rule-based algorithmic programming. We are all heirs of the Cartesian system; it is up to us as academics to push back against the narrow techno-logics that have become prominent in contemporary discourses and practices of war, as the case of the ‘tech bro’ below demonstrates.


Tech Bro Gives Ethics a Whirl in Project Maven

Meet Palmer Luckey, the ‘tech bro’ creator of the virtual reality headset Oculus Rift and recipient of a lucrative Department of Defense contract for military AI with his new defense company Anduril. Take a look at his photo: this is who we are entrusting to program ethics into our ‘killer robots’ of tomorrow. In his view: “Technological superiority is a prerequisite for ethical superiority.” Moreover, he justifies leading the U.S. military down this path on the grounds that if we don’t do it first, our enemies will: “I have no hopes that a digital Geneva Convention, whatever it will be, will prevent China from using surveillance tools to watch every citizen in their country. I have very little confidence that it will prevent Russia from building autonomous systems that can acquire and fire on targets without any kind of human intervention whatsoever.”



In 2014, at 22 years old, Luckey sold Oculus to Facebook for $3 billion (he netted an estimated $700 million). By 2017, he had combined forces with the financial backing of Peter Thiel (the co-founder of Palantir, who recently prodded President Trump via Fox & Friends to investigate Google for treason) and Trae Stephens (a Palantir employee and the former intelligence official in charge of Trump’s Department of Defense transition team) to form the defense tech company Anduril. Luckey was awarded an undisclosed DOD contract as part of Project Maven, which rose to prominence with the Google employee uprising against the company’s participation in the program. The goal of Project Maven, according to Air Force Lt. Gen. John N.T. Shanahan, “is to turn the enormous volume of data available to DoD into actionable intelligence and insights.” Essentially, Maven’s coordination with the tech world is a concerted attempt to quantify the global battlefield in order to act kinetically against what amounts to heterogeneous statistical correlations of metadata. As explored in the previous section, this type of logic and ethics reduces human beings to calculable mathematical problems-to-be-solved, as opposed to ethico-political dilemmas that require the exercise of judgment in weighing the strategic and ethical costs and consequences of killing in war.


Lee Fang, writing for The Intercept, has published an incredible piece of journalism piecing together the web connecting the defense tech industry and the Trump administration. It amounts to a small group of unscrupulous capitalists looking to make the world safe for democracy and portraying themselves as the ethical saviors of the U.S. military while their software aids ICE deportations, NSA spying, and the DOD. Luckey’s Anduril Industries was funded by Peter Thiel’s venture capital firm; Thiel also co-founded Palantir (recently awarded an $800 million U.S. Army contract). As a Bloomberg article noted, Palantir was integral to U.S. intelligence in the War on Terror: “they’re more like a spy’s brain, collecting and analyzing information that’s fed in from the hands, eyes, nose, and ears. The software combs through disparate data sources—financial documents, airline reservations, cellphone records, social media postings—and searches for connections that human analysts might miss. It then presents the linkages in colorful, easy-to-interpret graphics that look like spider webs. U.S. spies and special forces loved it immediately; they deployed Palantir to synthesize and sort the blizzard of battlefield intelligence.” Such an understanding of how the ‘brain’ works is a purely idealized Cartesian ‘rational’ cogito, devoid of emotion or being; it misses the essence of what it means to be human, in which our context, language, history, and beliefs shape who we are.

When Google employees famously told the U.S. DOD that they wanted out of Project Maven, Luckey and Stephens penned a Washington Post op-ed in which they echoed Google’s desire to use tech for good but declared their patriotic support for the program: “But ostracizing the U.S. military could have the opposite effect of what these protesters intend: If tech companies want to promote peace, they should stand with, not against, the United States’ defense community.” What they failed to mention in the op-ed is that they had been awarded a contract under the same Maven program, through Stephens’ position as leader of the Trump transition team for the DOD. Thus, the tech bro constructs himself as ethical by proxy of supporting the ‘good guys’, i.e. the U.S., in developing algorithmic software to kill by automation, harkening to a notion of perfect Cartesian omniscience of the battlefield, devoid of the concrete context within which practical judgment would traditionally take place.
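The Bloomberg description is worth taking literally, because the technique it gestures at is strikingly simple. Below is a minimal sketch of such link analysis in Python; every data source, person, and identifier is a hypothetical placeholder of my own invention, not Palantir’s actual software, and the linking rule is the crudest imaginable.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical "disparate data sources": each record ties a person to an
# identifier. People, sources, and identifiers are invented placeholders;
# this illustrates the bare technique, not any real system.
records = [
    ("financial", "person_A", "account_123"),
    ("airline",   "person_B", "flight_777"),
    ("cellphone", "person_A", "phone_555"),
    ("cellphone", "person_C", "phone_555"),    # a shared phone number
    ("social",    "person_B", "account_123"),  # a shared account
]

# Group people by the identifiers they share across sources.
by_identifier = defaultdict(set)
for _source, person, identifier in records:
    by_identifier[identifier].add(person)

# Link any two people who co-occur on an identifier: the "spider web".
links = defaultdict(set)
for people in by_identifier.values():
    for a, b in combinations(sorted(people), 2):
        links[a].add(b)
        links[b].add(a)

print({person: sorted(linked) for person, linked in links.items()})
# {'person_A': ['person_B', 'person_C'], 'person_B': ['person_A'], ...}
```

The graph records that a connection exists, but nothing about why the phone was shared or who these people are; the context a human analyst’s judgment would require is precisely what the data structure leaves out.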


I will let Palmer explain the ways in which he envisions the battlefield of the future, where the genuine tensions of urban warfare are reduced to the quantification of everything and ‘objective’ algorithmic prediction: “What we’re working on is taking data from lots of different sensors, putting it into an AI-powered sensor fusion platform so that you can build a perfect 3D model of everything that’s going on in a large area…Then we take that data and run predictive analytics on it, and tag everything with metadata, find what’s relevant, then push it to people who are out in the field…Practically speaking, in the future, I think soldiers are going to be superheroes who have the power of perfect omniscience over their area of operations, where they know where every enemy is, every friend is, every asset is” (emphasis added).
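It is worth pausing on how little logical machinery this vision requires. What follows is a minimal sketch of the skeleton of such a ‘sensor fusion plus predictive analytics’ pipeline; every sensor reading, metadata tag, weight, and threshold is an invented placeholder rather than anything Anduril has disclosed, and the point is the shape of the reduction, not the engineering.

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    """A fused 'object' in the 3D model: a position plus attached metadata."""
    position: tuple
    tags: dict = field(default_factory=dict)

def fuse(detections):
    """'Sensor fusion' reduced to its skeleton: average several sensors'
    position reports into a single track."""
    xs, ys = zip(*detections)
    return Track(position=(sum(xs) / len(xs), sum(ys) / len(ys)))

def predict_threat(track):
    """'Predictive analytics' as a crude weighted sum over metadata flags.
    The flags and weights here are invented for illustration."""
    weights = {"near_known_site": 0.6, "unregistered_vehicle": 0.3}
    return sum(w for flag, w in weights.items() if track.tags.get(flag))

# One person, seen by two sensors, becomes a dot with tags and a score.
track = fuse([(101.2, 43.1), (100.8, 42.9)])
track.tags = {"near_known_site": True, "unregistered_vehicle": True}

label = "enemy" if predict_threat(track) > 0.5 else "friend"
print(track.position, label)
```

Whatever sophistication a real system layers on top, the structure is the same: a human life becomes a position estimate, a handful of metadata tags, and a threshold decision.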


Note the video-game mentality with which Luckey unproblematically deploys the quantification of the battlefield, reducing human life to metadata constructions of ‘friend’ and ‘enemy’ as static, stable categories. It evokes an image of the future of war as a video game in which the ‘good guys’ show up as green and the ‘bad guys’ are painted red, and from which killer robots can carry out targeted strikes while sparing civilian life.





In the end, this post is a call for all academics to look toward the future by looking back toward different epistemological starting points for discussions of the ethics of war today. The quantification of the battlefield and the search for a science of warfare dissolve our responsibility for the difficult ethico-political decision of having to kill in war, a decision that is necessarily an exercise in practical judgment. Instead, rule-based moral reasoning has run amok, reducing human beings to lives quantified into legitimate targets on the basis of heterogeneous statistical correlations of metadata, buried in algorithmic code by some tech bro in flip-flops sitting on his hundreds of millions of dollars in Silicon Valley. There are alternatives; hope is not lost. Yet we must fundamentally reassess the ethical assumptions of modernity outlined by Toulmin in order to produce alternative visions of the future of war, beyond the reduction of human life to a line of code in the algorithmic construction of reality. If the essence of war is uncertainty, contingency, and chance, any attempt to ‘solve’ these problems technologically will ultimately fail, with dire consequences. Ethical practical judgment can never be eliminated, only buried in the algorithmic code, removed from democratic debate and accountability, into the hands of predominantly white male software engineers.


“At its best, computing in warfare allows us to achieve just objectives to protect the nation and our vital national interests, while minimizing unnecessary destruction and risk to our military and innocent civilians. I would argue that, to this point in history, computing in warfare has allowed us to make better decisions as combatants. War is a horrible thing, and it remains imprecise, but the jus in bello effect of computers has been generally a movement toward greater precision and more narrow applications of force.” –Heather Wilson, Secretary of the U.S. Air Force
