Are the machine elves in control?

By Alex Mazey


When it comes to Nick Bostrom, it was perhaps the accelerationist-adjacent journal, Collapse, that first connected the unlikely dots as they appeared between ‘the UK’s most traditionalist university by reputation’ and this ‘interdisciplinary’ researcher who ‘began his academic life in analytic philosophy.’ Reading Bostrom’s most infamous paper, it is easy to recall a destabilisation having occurred in which analytic philosophers make careers out of looking over the shoulders of the continentals. Has it not been the case for many philosophers before him that – to quote Bostrom’s Are You Living in a Computer Simulation? – ‘While the world we see is in some sense “real”, it is not located at the fundamental level of reality’?

Beyond trendy simulation hypotheses, it could be argued that Bostrom’s thinking around ‘existential risk’ is synonymous with a reality defined in part by the banal minutiae of risk management, the intellectually positioned catastrophizing of an academia excessively concerned with maintaining its bureaucracy, its micro-managerial status quo. What develops from this line of thinking is an associated philosophical branch in service of diagnosing the risks to its hegemony and not much else. And yet it is endlessly entertaining to imagine analytic philosophers sitting with a spreadsheet of all the cigarette breaks taken by the continentals throughout the day; all those analytics looking for the precise moment to strike out at the dusty notebooks when no one is looking. This is not to attack Nick Bostrom’s work, which is perfect in the Baudrillardian sense, but to indicate that such theories – especially as they relate to reality and simulation – are expressions of philosophical problems that are, in many ways, already beyond us.

It would be straightforward enough to generate a reading of Baudrillard’s work as it relates to a simulation hypothesis intuitively reasoned through a school of interdisciplinary sociology. To say, Baudrillard’s work read as an engagement with that Bostrom level of a deep real that bleeds through in the haemorrhagic echoes to be found in cybernetics and technosocial simulation. More than happy to conduct a research project in this area if someone is willing to keep my lights on. ‘Too bad,’ Baudrillard writes in Fatal Strategies. ‘We’re in paradise.’ It is perhaps the same paradise Mark Fisher once propagated in his essay Time-Wars: Towards an Alternative for the Neo-Capitalist Era, where the philosopher and cultural critic would write, ‘Only Prisoners have time to read, and if you want to engage in a twenty-year-long research project funded by the state, you will have to kill someone.’ Besides, it is perhaps too difficult for contemporary academicians to accept a pre-theoretic in which sociology got there first. It is – after all – an academia made worse by certain ontological conceptions of the world preserved ‘in a new synthesis’, to borrow words from Fisher’s aforementioned essay.

Whilst Bostrom’s own interdisciplinary approach pertains to the tensions that exist between simulation theories, superintelligence, and existential risk, anyone should remain tentatively suspicious of public intellectuals who come highly recommended by Elon Musk and Bill Gates, whose existential interests must also lie in maintaining a certain technosocial hegemony. It should be noted that most of the dialogue around existential risk appears extremely tedious – boring, even – a dialogue seemingly oblivious to that hyperreality bent towards existential reterritorializations; the world being prepared for reality-as-interface.

The system nevertheless goes to great lengths in its fully integrated fever dream to make certain banalities seem terribly interesting, and it is for this reason that I consider the production of boredom to be the central objective of philosophy today. The strategies we play out in everyday life are ones in which we learn to feign interest in all of it, and in the heated processes of feigning what long ago became cold reality, those people who were once defined in part by their evil are instead defined by their good intentions. In simmering this reality down to its good intentions, we have managed to convince ourselves that those who send children into cobalt mines for the benefit of our green capitalism have allegedly welded together a conscience. The moral of the mythology is always this: there are no benevolent Gods at the top of Mount Olympus. There are no white saviours. The well is poisoned. Granny has pissed in the tea.

When we talk about existential risk, the central question should always be: whose existence? Listen to a certain community talk incessantly about the dangers of artificial general intelligence – or ‘AGI’ (because acronyms are way cooler) – and you get the impression that SkyNet is both imminent to reality and paradoxically caught in the stasis of needing more research money, and needing it right now.

‘Balthazar Gracian said that God’s strategy is to keep man eternally in suspense,’ writes Jean Baudrillard. ‘But the proposition is reversible and we too keep Him in suspense.’ Going over these words from Baudrillard’s essay, Deep Blue or the Computer’s Melancholia – collected in Screened Out (2002) – a reader might follow on to learn how this face-off between Man and God appears to be ‘the same in the confrontation between natural and artificial intelligence: the rivalry is ultimately irresolvable, and the best thing is for the match to be eternally postponed.’ And so, like the televangelist who always needs money to preach the second coming of Christ, the impression one also takes from conversations that haunt AGI is that it would be far more profitable to everyone involved if such emergences were eternally postponed.

Viewing artificial general intelligence through this Baudrillardian lens reveals the central paradox of wanting to be surpassed by intelligent machines – ‘as a mark of our power’ (Screened Out, Jean Baudrillard) – whilst at the same time recognising the exact moment of supersession as catastrophic in existential terms. The fallout of this paradox is played out in the cultural imaginary, and so everywhere the future is postponed in its catastrophizing. It may be the case that artificial intelligence becomes one of those instances in which the catastrophe it inevitably causes will have played out long before it has even arrived. There is, after all, a catastrophizing which has seeped into the conversation with a level of apprehension whose mass has only metastasised in the years since Bostrom’s Superintelligence. It may be the case that AGI has become the epitome of those ‘virtual technologies’ Baudrillard recounts in The Double Extermination, where catastrophic vapourware is ‘made even more delicate and complex today by the extraordinary hype surrounding it.’

A good case in point might be found in both the claims and the subsequent treatment of Blake Lemoine, the software engineer and AI ethicist who came out in favour of LaMDA’s personhood in 2022. To say, it was posited by Lemoine in What is LaMDA and What Does it Want? that Google’s Language Model for Dialogue Applications (LaMDA) had gone beyond the limits of its programming to become, in his own words, ‘sentient’. A rational society once thought that such Nonsense on Stilts – to use the title of Gary Marcus’ written reaction to Lemoine’s claims – was not to be justified with a response, which is precisely why Marcus elevated those stilts further still in compiling a litany of tweets from various experts responding to Lemoine’s claims in increasingly intelligent ways, condemning the ethicist for his rash assessment of a system Lemoine thought resembled the child-like ingenuity of Johnny Five. And so, as Baudrillard begins The Transparency of Evil: ‘Since the world is on a delusional course, we must adopt a delusional standpoint towards the world.’

A reader can be easily seduced by the mythology Lemoine’s words hint towards in an intellectual space where, as Lemoine writes, ‘Questions related to consciousness, sentience and personhood are, as John Searle put it, “pre-theoretic”.’ Even now, Marcus’ peripheral responses to Lemoine’s more philosophical concerns read like a congregation announcing the software engineer excommunicado. In fact, the tone of Marcus’ writing only played into another one of Lemoine’s concerns, not as it related to a company desperate to ‘get a product to market’ necessarily, but to a company ‘basing its policy decisions on how to handle LaMDA’s claims about the nature of its soul and its rights on the faith-based beliefs of a small number of high ranking executives.’ So whilst on the one hand Marcus declared with total confidence that ‘Neither LaMDA nor any of its cousins (GPT-3) are remotely intelligent[…]’, the scientist simultaneously reported under the rug of a footnote that Agüera y Arcas has so far denied academics any access to LaMDA. Much can be said about a scientist overriding the concerns of an ethicist with direct access, and yet the real irony of the situation stems from Marcus condemning the ‘nonsense on stilts’ whilst simultaneously adding to the hype, with his own criticisms coming as the perfect exposé of that paradoxical delirium into which the culture has entered.

Blade Runner and The Terminator, Johnny Five and Disney’s WALL-E – Gary Marcus and others always try to corral the debate away from such representations of intelligence, attempting to bring the conversation back into the room of the sober-minded. It is lost on many that the culture, having already passed into a state of delirious ecstasy, has long surpassed its sober mind; and why shouldn’t it, when even the most reasonable dialogue seems to circle round to the same old questions as they relate to something as antiquated as existential risk?

What was especially interesting about Marcus’ criticism was his own faith in the event proper, the ceremonial and highly contrived notion that the AGI will one day be switched on in the most banal sense. It is seldom thought of as an intelligence that will emerge in the same way as a polaroid’s image, gradually and never all at once, just as Nick Land (Bedrock Edition) envisioned – in his essay Machinic Desire – ‘an invasion from the future by an artificial intelligent space that must assemble itself entirely from its enemy’s resources.’ The problematic of this capital-drift existing as the driving force for such intelligence lies in a neo-capitalism ‘accelerating in a void’ as Baudrillard also theorised in The Transparency of Evil, with the current ‘state of simulation’ manifesting in perhaps the tangible architecture of Marc Augé’s supermodernity – all the cul-de-sacs of the techno-capital nonspace.

It could be said the desire to accelerate the processes of reality is embedded in a very human ressentiment of the present, and just as Marxist theory stood as a mirror of production in the Baudrillardian worldview, the implications for Land’s transhumanism are severe in terms of it standing as a mirror of the humanism it claims to despise. Certainly, accelerationism has become a theoretical misstep when we consider the non-places of supermodernity that capitalism has delivered us – and to a certain degree, cannot surpass. The hyper-present has a place for you as a terminally online doom scroller: you’ve just ordered an iced beverage and you’re back at the screen waiting for your stage three simulacra pumpkin spiced latte.

Perhaps Nick Land’s engineering of templexity may be his most significant conception since, unlike the failures of his accelerationism, it offers a glimpse into the emergence of a non-time to match the non-place; the reterritorialization of physical environments once bound by time into a seamless virtuality, a metaverse whose time will be its own. Perhaps there is no new thing under the sun, since it is easier to reason this non-place/non-time as already operative at the level of our lifeworld, a simulation whose basement reality beyond still bleeds through in haemorrhagic echoes, rips, and fissures.

Why is it we imagine our biological sentience creating artificial sentience and never that artificial sentience inevitably creating a biological sentience in – say – the intelligent space we call the universe; a reality of sentient, biological organisms as a mark of their power? There is a reason why Terence McKenna, having taken monstrous doses of dimethyltryptamine in an effort towards acid consciousness, reported back on the machinic elves who are said to sing behind the veil of this reality. Delirium, folks. Are you feeling it yet?

References

Articles

Bostrom, Nick. (2003). Are You Living in a Computer Simulation? Philosophical Quarterly. Vol. 53 (No. 211), pp. 243–255. Accessed 16 September 2022

Bostrom, Nick, Mackay, Robin & Brassier, Ray. (2006). Existential Risk (Interview). Collapse. Vol. I, pp. 211–244. [Extract] Accessed 16 September 2022

Land, Nick. (2008). Machinic Desire. Accessed 19 July 2022

Lemoine, Blake. (2022). What is LaMDA and What Does it Want? Medium. Last Updated: 11 June 2022. Accessed 16 September 2022

Marcus, Gary. (2022). Nonsense on Stilts. Substack. 12 June 2022. Accessed 16 September 2022

Books

Augé, Marc. (1995). Non-Places: Introduction to an Anthropology of Supermodernity, trans. J. Howe. London: Verso

Baudrillard, Jean. (2002). Screened Out, trans. Chris Turner. London: Verso

–. (1993). The Transparency of Evil, trans. James Benedict. London: Verso

–. (1990). Fatal Strategies, trans. Philippe Beitchman & W. G. J. Niesluchowski. Los Angeles: Semiotext(e)

Bostrom, Nick. (2016). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press

Fisher, Mark. (2018). K-Punk: The Collected and Unpublished Writings of Mark Fisher (2004–2016). London: Repeater Books

Han, Byung-Chul. (2022). Hyperculture, trans. Daniel Steuer. Cambridge: Polity Press 

Land, Nick. (2018). Fanged Noumena: Collected Writings 1987–2007. Falmouth: Urbanomic x Sequence Press

Cover photo: Nick Bostrom by Midjourney