Bath TSERG

Technology and Nihilism: Big Idea Series #7

Updated: Jan 23, 2021

John Smith: My undergraduate dissertation began with questions about the ethics and governance of AI technology, but it quickly evolved into a much deeper investigation into the relationship between technology and politics in contemporary societies. Little did I know when I first began that the questions I had about technology, politics and history had already been anticipated, and had spawned a wide range of contemporary responses in western thought, from the nihilism diagnosed by Heidegger through to existentialism. In this brief essay, I want to discuss my engagement with some of the key thinkers and concepts that helped deepen and refine my study of AI governance in the UK.

In terms of key concepts and thinkers, my perspective was influenced by Feenberg's Frankfurt School-inspired concept of 'alternative modernity', which centres on the fundamental significance of technologically mediated change for our cultural norms and values, and on the critical necessity of democratising technology politics. In this context, I also drew upon Jasanoff and Kim's more recent conceptualisation of sociotechnical imaginaries, which centres on the processes through which new political futures are imagined, contested and shaped around emergent fields of science and technology. In particular, I found Jasanoff's newer work, which deals with discrepancies between different cultures and the potential consequences of those differences, quite helpful.

In my study, I took a deep dive into public discourses and expert discussion of AI in government contexts. I found that existing debates tended to address notions of accountability, responsibility and the possibility of auditing algorithmic systems for transparency, all within existing or slightly modified legislative frameworks. This has often served to reinforce, entrench and institutionalise existing norms, and has ultimately narrowed the range of policy options that are seriously discussed, or even imagined. During this study, for example, I only rarely came across criticism of technology or of technological approaches to addressing social issues. Far more common, as has been discussed on this blog previously, was a rather formulaic approach to the discussion of technology: breathless, promissory accounts of the potential of AI on one hand, followed by an often cursory glance at potential negative implications on the other. In such discussions, innovation tends to be characterised, and assumed to be, self-evidently positive, regardless of the dangers involved and the possible alternatives foreclosed by such a framing.

Here, the underlying, incumbent structures of power and privilege, which facilitate the unquestioned framing of 'progress' in purely economic terms, were, and still are, often justified by tangential engagements with commonly agreed-upon societal challenges. These symbolic considerations were typically channelled through regulation, such as the implementation of quotas, gestures towards 'responsibility', and philanthropic efforts that donate small portions of monetary surplus to charity. As a consequence, there remained little genuine engagement with the societal challenges these approaches claimed to address, and little prospect of opening up working assumptions to critique. Such insights resonate, of course, with critical theorists of technology, who have repeatedly highlighted how official assessments of emergent technology externalise much of the broader material, economic and political contexts in which they occur, along with the inherent uncertainties and struggles they involve.

This is reflected, for example, in the insistence upon certainty (in the form of measurable risk assessments, fixed definitions and so on) that policymaking and research often seem to require. This mode of policymaking and thinking frequently leads to the standardisation and black-boxing of algorithms, among many other forms of technology.

I think this is why I found it relatively easy to agree with some of the primary claims made by theorists of technological nihilism: that we are increasingly allowing, and even encouraging, technology to create and reinforce its own set of norms and principles, without due consideration of where those cumulative value systems actually come from.

The implications of these reflections for my own thinking? Modern technologies, such as algorithms, function by accumulating thought-to-be truths and stockpiling them as 'standing reserve' (a term Heidegger (1977) used in an analogy with energy storage). Algorithms are the mechanism through which the Internet's standing reserve, which one might equate to data, is organised and quantified. Attempting to reveal these concealed processes, for instance by increasing an algorithmic system's transparency, as was suggested in many of the analysed debates, "gathers man into ordering" standing reserves (ibid, p.19). Crucially, the process of enframing (Gestell) this series of standing reserves institutionalises biases and untruthful visions of a 'status quo' when presented back to a given person or community, as technology is incapable of illustrating its own 'creative processes' (ibid). In this regard, technology overtakes and replaces 'man' as the vehicle of ordering and disclosing through which the world is experienced.

In summary, then: while this work has not made me an expert in either the philosophy of technology or AI governance, engaging with both has given me a greater appreciation of what is potentially at stake when societies evaluate the promise and perils of emergent fields.


This piece was written by John Smith, who is currently studying for an MSc in Computer Science at the University of Bath. The piece is based on some of the ideas raised by his undergraduate dissertation in Politics.

