Techno-Citizenship: Big Idea Series #5
Anna Roessing
In recent months, a number of soap-operas have played out centred on workers and leaders in big Silicon Valley enterprises, including Google, Amazon, and Uber. Isolated incidents of whistleblowing have evolved into wider protests over a range of issues, including workers’ rights and broader questions about the power and associated responsibilities of tech-leaders. In recent years, much of the initial societal enthusiasm for, or at least indifference to, the visions and technologies pioneered in Silicon Valley seems to have given way to scepticism towards the agendas and interests of its major institutions.
The causes, extent, and broader significance of these episodes remain unclear. Such criticism can be understood as the flip-side of the increasingly aspirational ethical visions publicly espoused by tech-companies. It can also be placed in the context of growing societal awareness of, and activism around, emergent technologies. Finally, we can understand it as a reflection of changing U.S. domestic and international politics, the broader decline of American dominance on the world stage, and the declining ability of the US to shape collective global visions of technology.
Ensuing debates about techno-ethics, then, potentially reveal much about the contemporary politics of emergent technologies, not just in terms of societal values but also the power relations which structure governance in this area. Debates about the responsibility of Facebook to prevent the spread of disinformation and state propaganda have focused not only on the obligations of the companies and the accountability mechanisms they should be subject to, but also on how these norms should be set. For better or for worse, the creators of these technologies have been front and centre of societal discussion, as have the limits upon their agency to control economic and political realities.
This all reminds me of Goethe’s ballad Der Zauberlehrling (The Sorcerer’s Apprentice), which makes for a colourful analogy about the hubris of relying too heavily on technology and its creators to save the day. An apprentice inadvertently summons spirits that he cannot control, and chaos ensues. In the end, the apprentice is rescued by the sorcerer, who has the proficiency and knowledge to control the spirits.
In contemporary discourse about technology it remains unclear who is supposed to play the sorcerer’s role: the state, the tech-sector, or the tech-workers?
Similarly, Mary Shelley’s 19th-century gothic novel Frankenstein; or, The Modern Prometheus has become a frequent reference point in descriptions of the ambiguous relationship between science and society, creators and their ‘monsters’, scientific responsibility and ethics, and the question of who benefits from progress.
Times have changed since the 18th and 19th centuries, and yet technological innovators are still often framed as the mad sorcerers on the hill: beyond the pale and beyond control, yet somehow essential to both the functioning and the progress of society. Although the tech-sector is temporarily under the spotlight of public attention and scrutiny within political debate, it is unclear if and how this might relate to deeper changes in the relationship between innovation and democratic institutions.
Part of the problem, as critical philosophers such as Andrew Feenberg, as well as scholars in the field of Science and Technology Studies, have remarked, lies in the way social inquiry into technology is structured and in how ‘the problem of technology’ is perceived and articulated.
Regardless of which next big technology we are talking about, the way it ends up being framed in discussion tends to rest on two problematic working assumptions. The first is essentialism, which frames technology (and technological progress) as an exogenous social factor, essentially an unchanging historical force which shapes social and political realities rather than being co-produced by them. This lends innovation a timeless, placeless quality. The second is instrumentalism, which frames technology as a value-neutral tool. Both essentialism and instrumentalism, in consequence, promote a form of technological determinism. On this understanding, technologies have ‘an autonomous functional logic’, following a sequence of necessary (rational) steps similar to science and mathematics, which makes them intrinsically independent of the social world.
As an essentially exogenous force, technology influences society’s ‘fate’ without ‘suffering’ any reciprocal effects on its purposes, designs, or ends (Feenberg, 2010, p. 8). Its autonomous logic therefore precludes any meaningful influence of political institutions on technological progress. Determinist assumptions hence leave little space for root-and-branch reform of innovation policy to take hold.
In this intellectual context, it is no surprise that regulators, activists, and tech-inventors alike remain focused on technical solutions to mitigate the diverse effects of new technologies on democratic systems. The design of ethical principles into technological artefacts could be counted as such a strategy.
Renewed interest in how innovations emerge, who participates in the process of techno-design, and how biases become embedded in technological features has contributed to wider awareness of the many ways in which vulnerability, exclusion, racism, and sexism materialise in our institutions, as well as of the importance of engineers and scientists as ethical actors.
Calls for ‘ethical’ design, however, also reveal where agency and power are located in the complex material and ideational production of technology. They place innovators front and centre in terms of who should be entrusted with responding to concerns about the democratic deficit of technology. Expectations thereby follow traditional assumptions about the relationship between expertise and political agency in our democratic system. With them come assumptions about who counts as an expert, and whose voice, therefore, is deemed authoritative in questions surrounding both the purpose and the means of technological governance (or, indeed, which experts should sit on the ethics boards of global tech companies). In essence, contemporary debates conserve the structure of technology-society relationships of almost half a century ago, as summarised by Joseph Weizenbaum in 1972:
“The structure of the typical essay on “The impact of the computer on society” is as follows: First there is an “on the one hand” statement. It tells all the good things computers have already done for society and often even attempts to argue that the social order would already have collapsed were it not for the “computer revolution.” This is usually followed by an “on the other hand” caution which tells of certain problems the introduction of computers brings in its wake. (...) Finally, the glorious present and prospective achievements of the computer are applauded, while the dangers alluded to in the second part are shown to be capable of being alleviated by sophisticated technological fixes. The closing paragraph consists of a plea for generous societal support for more, and more large-scale, computer research and development. This is usually coupled to the more or less subtle assertion that only computer science, hence only the computer scientist, can guard the world against the admittedly hazardous fallout of applied computer technology.”
Since Weizenbaum’s remarks, not much seems to have changed. Priority is still given to elite groups of engineers and scientists who are entrusted with the unenviable task of embedding the ‘right’ ethical principles in technical designs, something which, often unwittingly but to a great extent inevitably, forestalls debate about whether they should be so entrusted at all.
As Andrew Feenberg has argued, the question of whether there is political agency in the technical sphere at all, and if so, with whom it resides, is excluded from our political debate.
Yet, if “there is nothing in our experience of material objects that discloses this unequal distribution of power” in technological production or that is maintained through its practices, as Daniel McCarthy recently reminded us, how is meaningful participation possible? As long as we fetishize technology and disregard or even celebrate its deterministic capacity, how can we align democratic principles with our technological institutions and practices?
By critically interrogating some of these assumptions, the growing field of Science and Technology Studies has much to offer analyses of innovation and technology as political entities. Its research agenda scrutinises the often over-simplified models on which prominent claims rest: on the one hand, the artificial separation of scientific expertise from the political and economic sphere in which it is embedded; on the other, the general disregard of what is seen as “lay” knowledge in considerations of whose opinions are worth consulting.
A thoughtful account of agency and of the relationship between expertise, lay knowledge, and the means of participation in the technical sphere has, moreover, been provided by critical philosophy of technology. Its premise is that agency neither is nor ought to be limited to expert groups but is, as Feenberg (2017) suggests, available to individuals through their interaction with and experience of technologies and associated systems. The socio-technical relationship he describes is both co-productive and the result of an “entangled hierarchy”, in which social groups on the one hand “exist through the technologies that bind their members together”, yet through this membership also gain power over technological developments “through their choices and protests”.
Through participation and enrolment in these technical networks, we (can) acquire situated knowledge and develop politically salient interests which would otherwise remain dormant, unimagined, and unrealised. Although Feenberg draws a qualitative distinction between the knowledge and power of experts and laypeople in producing technology, he still identifies openings for users to change the design codes that shape the qualities of the network. These “democratic interventions” comprise the ideational processes surrounding technology and its “meaning-making”, the choice of certain technological artefacts, and active participation in their design.
Controversies surrounding GMO technologies, especially within the Member States of the European Union, have demonstrated that, through public intervention, technological institutions, their codes, practices, and regulations can be altered. Agendas such as the ‘Responsible Research and Innovation’ initiative promoted within the EU Horizon 2020 framework allow and enhance public participation throughout the techno-design process in hybrid public-expert forums. Furthermore, the growth of tech-hacking subcultures seems to challenge the traditional boundaries of knowledge and skill that designated the expert’s authoritative position and power.
Undoubtedly, such interventions remain structured by and embedded in pre-existing norms and economic interests, which have, for instance, shaped European sensitivities over the desirability of human intervention in nature. But it is clear that growing individual and group awareness and reflexivity potentially form the basis of what Feenberg refers to as “conscious coproduction” and the chance for change.
Yet it is worth remembering that while a critical philosophical agenda contributes to our understanding of the assumptions underlying socio-technical relationships, in doing so it also deliberately detaches its analysis from our everyday encounters with technology. Such theory remains abstract about how different roles, hierarchies, technical codes, and layers within and between technical networks eventually translate into politically salient interests, and vice versa. It also leaves open the question of how to realise the qualities of “technical citizenship” not only as an abstract, philosophical concept but as something inherent in our everyday practice. Put simply, action in this space, as with other forms of innovation, seems dependent on continued experimentation.
Hurlbut, B. (2017); Hurlbut, B. and Jasanoff, S. (2017)
Weizenbaum, J. (1972) On the Impact of the Computer on Society: How does one insult a machine? Science 176 (4035), pp. 609-614
Feenberg, A. (2017) Technosystem: The Social Life of Reason, Cambridge: Harvard University Press.
Cover image: Raoul Hausmann, Tatlin at Home, 1920