Dilemmas of Emergent Technology Control: Big Idea Series #6
Updated: Sep 28, 2019
A trick that authors often unwittingly play on readers, when it comes to technology and discovery, is to start with a given moment, in a given place, with a given inventor. Entirely understandable, of course. It makes stories personal, and our writer is in good company historically. Scientific journalism is peppered with allusions to genies in bottles and Faustian pacts, as well as individual acts of personal sacrifice, greed and hubris by inventors. I will not subject you to an amateur expedition into Greek mythology, but these types of image are certainly older than contemporary scientific institutions.
And this hobbles us from the start if we are trying to grasp the true context of innovation and its broader material effects and societal resonance. It leads to a type of analysis cogently captured in an article published in Science in the 1970s (something I’ve shamelessly lifted from here). The article describes the form of a standard essay on the societal impacts of new innovations. The author is talking about computer science, but I think they have hit upon something a little deeper than the realities of their own particular corner of techno-scientific innovation:
“First there is an “on the one hand” statement. It tells all the good things computers have already done for society and often even attempts to argue that the social order would already have collapsed were it not for the “computer revolution.” This is usually followed by an “on the other hand” caution which tells of certain problems the introduction of computers brings in its wake.
Finally, the glorious present and prospective achievements of the computers are applauded, while the dangers alluded to in the second part are shown to be capable of being alleviated by sophisticated technological fixes.
The closing paragraph consists of a plea for generous societal support for more, and more large-scale, computer research and development. This is usually coupled to the more or less subtle assertion that only computer science, hence only the computer scientist, can guard the world against the admittedly hazardous fallout of applied computer technology.”
I think it is fair to say that these types of observation stand up today as well as they did then. And they apply just as well to proliferation and militarization concerns related to biochemical innovation (my own particular area of interest) as they do to other questions about the societal and environmental impacts of technology.
In my recent book, I try to get behind the question of why new technologies always seem to be discussed in the same old way. I argue that getting to grips with this question rests on a key insight: problems in this space tend to be framed in relation to how our societies address three perennial paradoxes, which I will now crudely introduce.
The innovator’s paradox: This centres on the idea that innovation can produce both good and bad consequences. This then appears to generate conflicting ethical responsibilities for those who create, and those who facilitate creation.
The innovation governance paradox: This centres on the idea that societies seek security through the development and maintenance of innovation systems, but innovation can also generate insecurity. This then appears to create conflicting demands for exploitation and precaution. Appreciation of this leads to what has often been referred to as the Collingridge dilemma.
The global insecurity paradox: This centres on the idea that the route to greater national security is typically understood to require more global approaches to the management of technology. But state-centric, rather than global, conceptions of security are at the centre of current approaches to governance.
These paradoxes have manifested in emergent, publicly funded techno-scientific projects that are primarily civilian in nature, including those in the fields of biotechnology and AI. These fields are sites at which long-standing ideational and power struggles play out in the creation of visions for the future of the field, and of political norms of governance. Around these fields are spaces in which new ways of governing innovation, and visions of innovation, are imagined, experimented with and co-opted into the existing landscape.
In my work, I show how attempts to govern emergent technology (by attempting to resolve these paradoxes) rely on a number of key forms of collective practice, including: ethical evaluation (i.e. establishing what dominant social conventions require), policy design (i.e. working to achieve these ‘ethical’ outcomes) and political decision-making (i.e. reasserting or transforming the scope of what is deemed ethical and what is deemed practical). All these activities occur in the context of muddle and struggle.
I hope to show that the first step to addressing ethical concerns about emergent technology is to appreciate the broader political games that such deliberation is part of, something which involves moving beyond the comfort zones of both STS and security studies.
This means it can all get a bit messy (I am looking at you, Actor Network Theory), but it gives new ideas the space to breathe.