David Eliot and Rod Bantjes
Climate science vs denial machines
How AI could manufacture scientific authority for far-right disinformation

This chapter explores how text-generating artificial intelligence systems such as GPT are likely to be used by the far right to undermine democratic processes and climate science. It examines how information is mediated to the public through institutions, and how the right-wing denial machine has attacked these institutions in order to promote its interests. Building from theoretical foundations, it is proposed that text-generating AI systems – such as OpenAI’s GPT products – present a threat to the process of scientific peer review. Beyond the commonly reported issues of fake news, it is suggested that generative AI may be used to construct artificial academic/scientific consensus or debate. The construction of such artificial consensus or debate is not a new phenomenon. However, it is proposed that the use of AI gives the process a velocity that will create novel challenges for systems such as peer review. The discussion of AI, climate denial and its uptake by the far right is placed within the context of structural and historical trends that have developed since the start of the neoliberal revolution in the 1970s.

We originally wrote this text for the Political Economies of the Far Right Conference in 2019. At the time, the most advanced publicly known text-generating AI was GPT-2. Our social theories were originally developed in reference to our technical understanding of GPT-2. After our text was selected for this publication, OpenAI announced and demoed GPT-3. In response we updated the technical exploration to include the initial advancements made in the technology. As is highlighted in the text, at the time of initial authorship GPT-3 was not publicly available; however, in late 2022 OpenAI launched ChatGPT, a web application that provided public access to its GPT-3.5 series models (OpenAI, 2023). In March 2023 OpenAI released GPT-4 and provided public access to the system via ChatGPT (Weitzman, 2023). Unfortunately, due to publication time constraints and concerns regarding appropriate rigour, we were unable to update our technical analysis and explore the repercussions of these recent developments. However, we believe that recent advances demonstrate the viability of the predictive technical claims made in this text. In March 2023 The Guardian reported that peer reviewers for an education journal unknowingly approved the publication of a paper written entirely by ChatGPT, providing concrete support for our claims that the peer-review system will soon be under threat from the machines (Fazackerley, 2023).

Introduction

Dunlap and McCright (2012) have used the metaphor of a machine – the ‘denial machine’ – to characterize the institutional complex that produces and distributes artificial, knowingly fictional counterclaims against the incontrovertible evidence of catastrophic risk from climate change. ‘Machine’ is a good metaphor for an institutional system that follows a determinate logic (i.e. the logic of profit for fossil capital). In this chapter we analyse the increasing automation of this machine – a theme that has so far been overlooked in the literature. We extrapolate from automation processes already underway to argue that near-future developments will involve artificial intelligence (AI) able to write persuasive denialist arguments at unprecedented speed and volume. Our argument relies in part on an analysis of new technological affordances that are not well understood outside the tech community – for example, OpenAI's GPT-3. However, we place our discussion of AI, climate denial and its uptake by the far right within the context of structural and historical trends that have developed since the start of the neoliberal revolution in the 1970s.

The denial machine and the Intergovernmental Panel on Climate Change (IPCC) are institutional mediations of our knowledge of the climate and its historical dynamics. When denialists believe that they are challenging climate science based on ‘their own research’ they are rather trusting the denial machine, accessed through the mediation of the internet, right-wing talk radio and the like. Theirs is not, as many have represented it, a problem of distrust in institutions; it is a problem of trusting the wrong institutions. Oreskes and Conway (2010: 154, 269) argue that one of the best reasons for non-scientists to trust the knowledge produced by the IPCC is that it is extensively vetted through peer review. Peer review, a feature of institutional design, is our best available guarantee of the empirical and theoretical adequacy of knowledge (our closest approximation of ‘truth’). Not surprisingly, attempts to undermine the authority of peer review have already been part of the denialist project (Dunlap & McCright, 2012: 5). Our aim in this chapter is to assess near-future risk, much in the same way that the IPCC does. The risk that we focus on, both for its probability and for the seriousness of its implications, is that text-generating AI will soon develop to the point where it could deceive peer reviewers and thereby breach the barrier of peer review. The result would be an epistemological crisis in which all institutionally mediated knowledge could genuinely be placed in doubt.

Until now, the institutions of climate change denial – corporate funders, think-tanks, front groups, ersatz experts, lobbyists – have attempted to manipulate what we call ‘downstream mediation’ – formal and informal news media, internet chat and policy discussion. AI-generated peer-reviewed denial would distort upstream mediation – where reliable scientific knowledge is produced and can currently be accessed to discount downstream disinformation. Hard denial – that the climate is not changing or that that change is not caused by human institutions – still circulates with impunity downstream in what Benkler (2021: 44) calls the right-wing media ecology ‘anchored by Fox News and Breitbart’. We expect that upstream denial would attack softer targets – minimizing the severity of climate change risks and supporting overblown claims for technological solutions, what McLaren and Markusson (2020) call ‘technological prevarication’.

The neoliberal project and ideological misdirection

Dunlap and McCright (2012), along with Oreskes and Conway (2010), have made clear the neoliberal logic of denial. Neoliberal market solutions cannot solve the climate crisis (Klein, 2014). To acknowledge climate change was to invite regulatory solutions to it – a prospect that the neoliberal faithful were unwilling to face. The radical anti-state position of neoliberal theorists like Hayek had an epistemological basis. According to Hayek (1945), individual minds are incapable of encompassing system complexities at the scale of a society (and, presumably, of interrelated global climate systems). All collective planning efforts – their main target here was socialist planned economies – are therefore doomed to failure. Individuals know only how to pursue local self-interest and must allow market systems to ‘automatically’ calculate and coordinate system-level complexities. They were advocating total subordination to the most abstract, disembedded, global institution of modernity – the market – in a way that was sure to benefit elite interests. (Note also the theme of automation here – to link any human endeavour to market logic is to place it under a supposedly automatic control system.)

This was a radical, fringe idea with few adherents when it was floated in the 1940s, a period when popular mobilization was firmly behind the expansion of the welfare state. Neoliberalism shifted from a set of ideas to a societal project when it was taken up by what David Harvey (2005) characterizes as a social movement of the corporate elite in the United States. The aim of the neoliberal movement was always primarily structural – to change laws and policies in ways that would expand market dominance (privatization, global free trade, etc.) and also to change the structure of civil society in order to create publics willing to accede to deeply unpopular policies. The assault, beginning in the 1970s, was directed against the new social movements (notably, for our purposes, the environmental movement) and the labour unions, but also, importantly and frequently overlooked, the farm unions and the protections from market forces that had created security for the rural petty bourgeoisie. The latter was to become a core constituency for the ‘New Right’ that has morphed into the twenty-first-century far right.

The rise of right-wing populism and the authoritarian far right is one of the fruits, perhaps an unintended consequence, of neoliberal efforts to engineer civil society (hardly a precise science). What was intentional was the fusion of incompatible ideologies and interests – neoliberal libertarianism with religious, patriarchal and racial authoritarianism – evident in the American ‘New Right’ as early as the 1980s (Himmelstein, 1983). Fusing – Ernesto Laclau (1977) would say ‘articulating’ – ideas that serve the interests of the corporate elite to worldviews of the working class and petty bourgeoisie creates ideological monsters and a good deal of ideological misdirection. Populist rage against the abstract projects of ‘globalist’ elites (e.g. the UN Framework Convention on Climate Change (UNFCCC)) is paradoxically fused to the uncritical support of the abstract projects of ‘globalist’ elites (neoliberal market dominance). Populist rage against false ‘experts’ creating fake climate science for financial gain (purportedly the IPCC) is paradoxically fused to the uncritical support of false ‘experts’ creating fake climate science for financial gain (the denial machine).

The denial machine is one element of a larger neoliberal project. Understanding how the two work in tandem, originating in the United States but increasingly exported worldwide (Kaiser, 2020), is key to understanding the ideological perplexities of the far right. The European far right, for instance, raises the alarm about putative global threats to the local – such as global migration and immigration, global free trade and global spread of invasive species. Despite an environmentalism that links the preservation of ethno-nationalist identities to the preservation of local nature, they are wilfully and perversely blind to the global climate threats to all local nature (Forchtner, 2020: 7). This contradiction is a product of Laclauian articulation. It is no surprise that in Sweden, for example, far-right climate denial was mobilized ‘from above’ by a think-tank, the Stockholm Initiative, that closely followed a climate-denialist playbook developed and tested in the US (Hultman et al., 2020: 122–123).

Mediated knowledge, trust and risk

Ulrich Beck's seminal analysis of the politics of risk perception in ‘risk society’ hinges on an epistemological conundrum. He laments that amid the claims and counterclaims about the safety of trace chemicals in drinking water, radiation from nuclear accidents or genetically modified foods, citizens can no longer rely on direct, sensory evidence to assess the truth – they no longer have ‘cognitive sovereignty’ (1992: 27, 53–54). To sense nuclear radiation, they must rely on technological mediation – a Geiger counter. To make sense of their exposure to that radiation they must further rely on scientific experts who have built the device and can interpret its readings and assess associated health risks in relation to long-term epidemiological studies. Beck, like many on the left at the time, was acutely aware of the role that scientific experts, institutionally bound to corporate and state interests, played in minimizing many of the risks of the risk society. How could environmental activists critique scientific perceptions of risk when they were equally dependent on the ‘sensory organs of science’ to detect risk (Beck, 1992: 27)?

Individuals have perhaps never had privileged, unmediated access to the truth, whether by sense perception or Cartesian pure reason (Bantjes, 2019). However, this personally empowering idea has been a core myth of liberal democracies. Epistemological individualism still makes it difficult to see the complex institutional mediations through which we come to know our world. Beck's solution to his conundrum is not less, but more politicization of knowledge – an opening up of science to the voices of the public and organized social movements. An exemplar of this would be the anti-toxics movement of the 1980s. In the case of Love Canal, Lois Gibbs (2011) and her Love Canal Homeowners Association did ‘citizen science’, going door to door to collect data on the incidence of diseases such as cancer. In no instance here were the patients or the door-to-door canvassers ‘observing’ cases of different types of cancer. The sufferers sense pain, but identifying pancreatic cancer requires diagnostic technology, tests and interpretative expertise – an institutional complex whose rationality is validated in peer-reviewed medical literature. Similarly, to establish the causal connection between rates of illness and the witch's brew of chemicals seeping into people's basements, they contacted a scientist who was a relative of one of their members and he put them in touch with the relevant peer-reviewed literature.

They were mobilizing experts against experts. And to make sense of the contradictory claims of competing experts they learned to do a kind of lay sociology of knowledge. Gibbs was warned, for example, that test samples of urine could not be entrusted to the New York State Health Department because of its contractual relations with the polluter, Hooker Chemical, and that university labs feared appearing to be in conflict with the Department or the State (Gibbs, 2011: 112–113). What we are calling lay sociology of knowledge, Frank Fischer treats somewhat pejoratively as ‘social cognition’, which he characterizes in the following terms:

Citizens want to know how and why decisions were reached, whose interests are at stake, if the process reflects a hidden agenda, who is responsible, what protection they have if something goes wrong and so on. If, for example, citizens have experiences that suggest they should be distrustful of particular government officials, such information will tend to override the data itself. (2019: 140)

Fischer seems not to understand that these are questions we all must attempt to answer, citizens and experts alike, when faced with scientific claims outside our expertise. We cannot appeal to our individual sense data (as he seems to imply with his invocation of Popperian falsification); we need to assess the reliability of the institutional complexes that produce the knowledge in question.

The Love Canal Homeowners Association was doing relatively good sociology of knowledge, placing trust where there were institutional guarantees of science quality (peer review). Peer review is an instance of a general set of mechanisms of transparency and accountability designed to minimize corruption and incompetence in all of the ‘expert systems’ (Giddens, 1990) worthy of our trust. The Homeowners Association was also doing a power analysis of the conflicts of interest that might distort science mediation – focusing in particular on the sources of experts’ research funding. The climate change deniers whose ‘social cognition’ Fischer (2019: 148) ambivalently endorses are, by contrast, doing bad sociology of knowledge. They are taking at face value tendentious claims that false ‘experts’ are creating fake climate science for financial gain in the case of the IPCC, and wilfully ignoring strong evidence that false ‘experts’ are creating fake climate science for financial gain in the case of the denial machine.

The epistemological thrust of strategies to mobilize the far right has been to amplify distrust of institutional mediation. The idea is that elite ‘experts’ have robbed us, the people, of cognitive sovereignty and we need to take it back. Online forums and ‘news’ platforms that promote far-right conspiracy theories, such as OANN news, RSBN, Parler, 8Chan and Reddit, adopt a tone that implies that the reader is too smart to be deceived by the media, censored Google searches, ‘“official” scientific authorities’ or ‘globalist’ experts (Work, 2019). Don't trust the groupthink ‘experts’: you must do your own research to find the truth about a ‘hoax’ such as climate change, readers are cautioned (The Rush Limbaugh Show, 2020). What is on display here is not so much the ‘irrationality’ that some think is definitive of the far right, but rather the delusions of liberal reason. When it is up to you, personally, to get to the bottom of the Pizzagate conspiracy, it makes sense to show up in person at the Comet Ping Pong restaurant and demand, by force if necessary, to be shown the basement where Hillary Clinton is running a child-sex-trafficking operation (Fisher et al., 2016). The purpose of this invocation of cognitive sovereignty is ideological misdirection whose aim is to make the institutional structure of the disinformation machine disappear from view.

Appeals to local knowledge resonated with rural constituencies – farmers, ranchers, woodworkers – who were the recruiting grounds both for the anti-environmental ‘wise use’ movement (Helvarg, 1994; Bantjes, 2007: 264–272) and the far-right militias of the 1980s and 1990s (Freilich & Pridemore, 2005). Even where the interests of these groups coincided with those of environmentalists, the right was successful in amplifying divisions based on class and epistemology (Dunk, 1994). Anti-environmentalism worked on a binary opposition between authentic/inauthentic, trustworthy/suspect, freedom/tyranny. On the trustworthy side: hands-on practical knowledge through direct experience with nature. On the suspect side: over-educated government experts peering at data models on their computer screens (Dunk, 1994). On the one hand, individual freedom, self-sufficiency and property rights; on the other, government overreach and intrusive regulation. These ideological threads of locally conceived freedoms and authority were effectively articulated, at least in the US, to the deregulation interests of a (global) corporate class. We want to stress that there was nothing inevitable about this articulation. As Dunk's study of woodworkers and the literature on the environmental justice movement make clear, there was tremendous potential to mobilize working-class and petty-bourgeois environmentalism that was explicitly anti-corporate. Indeed, it was precisely this ‘threat’ that fuelled the corporate counter-attack, carried out by proxy through a network of think-tanks, front groups, PR firms and the like (Bantjes, 2007: 258–275).

The myth of cognitive sovereignty dovetails with the liberal faith in ‘counter-speech’ and the ‘marketplace of ideas’ as means of challenging and correcting misinformation. Denialists have championed this supposed open democracy of ideas, setting it in contrast to peer review and scientific consensus as though these were forms of irrational ‘groupthink’ (Dunlap & McCright, 2012: 5). The implication is that the peer-reviewed scientist, as an individual knower, has no more authority than you or I. The true test of her beliefs is how well she stands up to reasoned debate within the free marketplace of ideas. The appeal here is again to an implicit localism, as though we all debated together in the same coffee house, or as though we were all part of the same small reading circle, a ‘republic of letters’, and could hear and adequately assess the arguments of every one of our Enlightenment colleagues (Habermas, 1989).

Napoli (2018) makes a strong case for the growing irrelevance of these liberal assumptions in the age of AI-driven disinformation. The first problem, according to Napoli, is that increasingly effective targeted marketing ensures that we inhabit separate ideological market spaces. Our ‘filter bubbles’ filter out the counter-speech that is supposed to help us challenge and critically reassess bad information. The second problem has to do with the increased volume and speed of circulation of disinformation. Lies (which rely only on invention) are easier to produce than truths (which rely on research). Automation tools such as text-generating AI and online bots can accelerate the production and distribution of lies, making it all the easier for far-right strategists like Steve Bannon to ‘flood the zone with shit’ (Stengel, 2020). Regardless of how easy it might be to rebut each new piece of disinformation, the overwhelming flood will mean that there is never enough time to do so. Counter-speech in this way becomes ineffective and free-speech ideology is invoked by the far right to protect the flood of lies.

Artificial intelligence, text-generating AI and the economic forces driving it

The right has long been quick to capitalize on new technological affordances to promote the circulation of ideology and disinformation – such as computer databases to perfect direct-mail mobilization in the 1980s (Beder, 2002: 32–34), unregulated talk-radio in the 1990s (Benkler, 2021; Benkler et al., 2017) and the deployment of online ‘bot armies’ in the early twenty-first century (Bessi & Ferrara, 2016; Marlow et al., 2021). The publics that have been cultivated in this way are increasingly disembedded from face-to-face personal networks and are largely animated from above by those who design and sponsor the mediation. The bot army is pure automation, pure astroturfing, in which actual humans have been replaced by machines.

The recent dramatic shift towards digital mediation of personal interaction, political mobilization, state governance, and consumer and corporate activity has led to two quick-succession revolutions in capital accumulation: surveillance capitalism (Zuboff, 2018) and an as-yet-unnamed revolution based in AI (Eliot & Murakami Wood, 2021). The growing digital technosphere simultaneously mediates and records human behaviour. The tech-based companies best positioned to harvest this unimaginable new data stream – Apple, Amazon and, pre-eminently, Google – developed powerful software tools to analyse it, modelling human behaviour in detail with the aim of prediction and control. The first strategy to monetize big data analytics was to sell its capabilities to advertisers, in other words, to manipulate consumer behaviour. The next wave of capital accumulation, that the big tech firms are already jockeying to dominate (Eliot & Murakami Wood, 2021), will be to model all forms of work that can be digitally mediated with the aim of replicating it so that expensive human workers can be replaced by digital machines. The new wave of automation will rely on big data and artificial intelligence.

Despite the varied connotations that we attach to the term ‘intelligence’, AI experts such as Stuart Russell give the term a quite restricted meaning. An intelligent actor is an entity that ‘does what is likely to achieve what it wants, given what it has perceived’ (Russell, 2019: 14). This is a kind of instrumental rationality where the ‘agent’ does not necessarily determine its own goals, nor does it necessarily have intentionality and certainly not consciousness. Something as simple as an E. coli bacterium is ‘intelligent’ by this definition because it pursues a goal (consuming glucose); it ‘perceives’ its environment (the intestinal tract); and, importantly, it adapts its behaviour to changes in that environment – moving always towards more glucose-rich regions. There have been efforts to make intelligent machines using genetic engineering (e.g. Craig Venter's artificial bacteria) and mechanical hardware, but all of the recent advances have been with computer programs or algorithms operating in digital media.
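To make this restricted sense of ‘intelligence’ concrete, the following minimal Python sketch (our own illustration, not drawn from Russell) implements an agent in exactly this sense: it ‘perceives’ a toy environment and acts so as to obtain more of what it ‘wants’, in this case glucose. The environment and all numbers are invented for the purpose of illustration.

```python
# A toy 'intelligent agent' in Russell's restricted sense (our illustration):
# it perceives its surroundings and acts so as to achieve its goal, here
# modelled as moving towards higher glucose concentrations.

def glucose_level(position):
    """Toy environment: glucose concentration peaks at position 10."""
    return -abs(position - 10)

def bacterium_step(position):
    """Perceive the local gradient and move towards more glucose."""
    if glucose_level(position + 1) > glucose_level(position):
        return position + 1
    return position - 1

position = 0
for _ in range(15):
    position = bacterium_step(position)
print(position)  # the agent ends up oscillating around the glucose-rich region near 10
```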

The key innovation has been the shift from machine programming to machine learning. In the former, the rules of action are exhaustively defined by a human engineer; in the latter, the machine itself develops its own rules under controlled ‘training’ conditions. Training the categories of perception that enable the machine to distinguish, for example, between a dog and a cat follows a logic similar to the learning of a human child. Through exposure to dogs and cats in differing contexts, the child makes labelling attempts and is corrected. It tries out rules – large four-legged creatures versus small four-legged creatures – and abandons or modifies them as new data emerges – for example, a small pug. The AI develops complex rules quickly because of the scale of data – potentially every dog and cat image on the internet – and the speed of its exposure and processing – fractions of a second. An important feature of current AI is that the internal logic of its ‘neural networks’ is ‘black-boxed’, so that we cannot reverse-engineer its decision-making architecture.
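For readers who want a concrete picture of what ‘developing its own rules from labelled examples’ looks like in code, the following is a deliberately simplified sketch of supervised machine learning, written by us for illustration. Real image classifiers learn from raw pixels with neural networks; here the ‘data’ are two invented features per animal, and the learned rule plays the role of the child's provisional ‘large versus small four-legged creature’ rule.

```python
# A toy sketch of supervised machine learning: the dog/cat rule is not written
# by a programmer but inferred from labelled examples. Features and numbers
# are invented for illustration; real systems learn from raw pixels.
from sklearn.tree import DecisionTreeClassifier

# labelled training data: [weight in kg, ear length in cm]
examples = [[30, 12], [25, 10], [8, 7], [4, 4], [3, 5], [6, 3]]
labels = ['dog', 'dog', 'dog', 'cat', 'cat', 'cat']

model = DecisionTreeClassifier().fit(examples, labels)

# the learned rule generalizes to an unseen animal; like the child confronted
# with a small pug, it can be wrong until corrected with more labelled data
print(model.predict([[7, 5]]))
```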

Machine learning is a kind of automation of automation. But it still requires a training engineer. ‘Competitive self-play’ is a technique that further reduces the amount of time the software engineer needs to spend in training. In this model the AI is trained by competing with itself. An illustrative example is Google's AlphaZero chess-playing AI. Within only four hours it was able to learn not just the rules of chess, but enough of the secrets of chess strategy to defeat the previous chess-playing AI champion, Stockfish 8, which had in turn beaten every human chess master in 2016 (Harari, 2018). The core learning algorithms of these AIs are marvels of engineering but are often relatively simple. It is the training data, amassed by a few key tech firms at incomprehensible scale, that makes them powerful.
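The logic of competitive self-play can be illustrated with something far humbler than chess. The sketch below is our own toy example, not AlphaZero's method: an agent improves at rock-paper-scissors with no human opponent and no hand-coded strategy, simply by repeatedly best-responding to the record of its own past play. Over many rounds its overall strategy approaches the unexploitable even mixture of the three moves.

```python
# Toy 'self-play': the agent repeatedly plays the move that would beat the most
# frequent move in its own past record. No human opponent is involved, and over
# many rounds the empirical strategy approaches a third rock, a third paper,
# a third scissors. (A crude stand-in for the reinforcement learning behind
# systems such as AlphaZero.)
from collections import Counter

MOVES = ['rock', 'paper', 'scissors']
BEATS = {'rock': 'scissors', 'paper': 'rock', 'scissors': 'paper'}

history = Counter(MOVES)  # start from a uniform record of past play

def best_response(record):
    """Play the move that beats the most frequent move in the record."""
    likely = record.most_common(1)[0][0]
    return next(m for m in MOVES if BEATS[m] == likely)

for _ in range(10000):
    move = best_response(history)  # respond to the agent's own past self
    history[move] += 1

total = sum(history.values())
print({m: round(history[m] / total, 2) for m in MOVES})  # roughly 0.33 each
```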

The competition among tech giants to lead in AI is currently focused on producing the best AI personal assistant (Eliot & Murakami Wood, 2022). For this application what needs to be automated is intelligent speech recognition (to perfectly understand our commands) and speech generation (to speak to booking agents on our behalf and troubleshoot travel arrangements, for example). In the tech world this is called natural language processing (NLP). The personal assistant will work in highly abstract areas of competency – doing research for us, writing up summaries, making arrangements with clients, friends, colleagues, perhaps making arguments for us. Once designed, it can be easily replicated and offered to millions, such that its access to data, experience and training snowballs exponentially. We should point out that devices such as this will likely use the recently developed strategy of ‘federated learning’, where the learning algorithm drops into your local device and learns from your personal data but does not extract it from the device. The algorithm keeps the slightly fine-tuned rules (black-boxed), but not the data that it used to do the fine-tuning (Eliot & Murakami Wood, 2021). Federated learning was developed to allay privacy concerns and facilitate the widespread acceptance of this and similar data-gathering devices. In what follows, we will indicate how far NLP has progressed towards the possibility of writing peer-reviewed academic articles, but we want to emphasize that, given the logic of machine learning and the economic interests at stake in developing NLP, near-future developments are likely to accelerate.
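Because the privacy logic of federated learning is easy to misread, a schematic sketch may help. The code below is our own illustration of the general idea, not Google's implementation: each device fine-tunes a copy of the shared model on data that never leaves the device, and only the updated parameters, never the raw data, are sent back and averaged.

```python
# A schematic sketch of federated learning (our illustration, not Google's
# implementation): devices train locally on private data and return only
# updated model parameters; the central server never sees the data itself.
import numpy as np

def local_update(global_weights, private_data, learning_rate=0.1):
    """Toy local 'training': nudge the weights towards the mean of the local
    data. The private_data never leaves this function; only weights do."""
    gradient = global_weights - private_data.mean(axis=0)
    return global_weights - learning_rate * gradient

rng = np.random.default_rng(0)
global_weights = np.zeros(3)
device_datasets = [rng.normal(loc=offset, size=(100, 3)) for offset in (0.0, 1.0, 2.0)]

for _ in range(20):  # twenty rounds of federated averaging
    local_weights = [local_update(global_weights, data) for data in device_datasets]
    global_weights = np.mean(local_weights, axis=0)  # the server averages weights, not data

print(global_weights)  # drifts towards the average of the devices' unseen data
```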

Natural language processing advancements are already being applied to the writing of increasingly complex texts – an application that IT specialists call text-generating AI (TGAI). TGAI term papers are already being marketed to students through Essaybot. Provided with only a simple prompt (e.g. ‘the risks of TGAI’), Essaybot advertises, it will generate your entire term paper – complete with citations – with a zero per cent chance of being caught for plagiarism (Winkie, 2019). Readers may also be surprised to learn that a significant amount of the news they consume is now also generated by TGAIs. In fact, Bloomberg News, an industry leader in AI-assisted news reporting, in 2019 used some form of the technology on roughly a third of its articles (Peiser, 2019). Bloomberg is not alone in this practice: TGAIs are currently used by reputable publications like the Washington Post, Associated Press, Forbes and The Guardian. The software accesses relevant information and compiles a rough draft that human writers work into finished pieces. The TGAI speeds production and cuts costs (Peiser, 2019). TGAIs currently in use still produce crude results that need significant human revision; however, new developments, such as OpenAI's GPT project, show promise for more autonomous high-quality text generation.

The GPT model that was available when we first presented our research in 2019 was GPT-2. GPT-2 was trained on a dataset of web pages linked from Reddit (Hern, 2019; Martin, 2019; Radford et al., 2019). OpenAI was already concerned about potential malicious use in areas such as ‘fake news’ generation. While they typically release their innovations as open-source code, they initially held back GPT-2, before eventually releasing it, first as a watered-down demo, then in full form (Hern, 2019; Martin, 2019). Unlike previous TGAIs, GPT-2 demonstrated an ability to produce convincing text from human prompts, and had little problem replicating human syntax (Hern, 2019). GPT-2 could gauge the tone of its prompts and produce writing that matched the implied genre (e.g. dystopian fiction, newspaper, academic text) (Hern, 2019). Tests of the system revealed that a watered-down version of GPT-2 released to the public could produce articles almost as convincing as real articles by the New York Times 72 per cent of the time (Jack et al., 2019). To the researchers’ surprise, GPT-2's machine-learning system had acquired abilities that it had not been designed for, excelling at such tasks as question answering, reading comprehension, summarization and translation (Radford et al., 2019). Building on GPT-2's reading comprehension abilities, OpenAI has since released an even more advanced model, GPT-3.
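The released GPT-2 weights are now trivially easy for anyone to run. The sketch below is our own minimal example using the third-party Hugging Face ‘transformers’ library rather than OpenAI's original code; the model name, parameters and prompt are illustrative only.

```python
# Prompting the publicly released GPT-2 weights via the third-party Hugging
# Face 'transformers' library (our illustration; model name and parameters
# are examples, and outputs will vary from run to run).
from transformers import pipeline, set_seed

set_seed(42)
generator = pipeline('text-generation', model='gpt2')  # smallest released checkpoint

prompt = "A new peer-reviewed study of Arctic sea ice concludes that"
outputs = generator(prompt, max_length=80, num_return_sequences=1)
print(outputs[0]['generated_text'])  # GPT-2 continues the prompt in the implied genre
```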

GPT-3 was trained using a diverse mix of professionally curated data sets, and other text samples, including books and Wikipedia pages (Brown et al., 2020: 8). Millions of academic texts are now available online through Google's digitization efforts and in proprietary academic databases. The new GPT-3 could be trained on this data, and its academic style prompted by one (one-shot) or multiple (few-shot) tailored sample texts (Brown et al., 2020: 5). Others are now beginning to recognize the risks that we warned of in 2019 (Eliot, 2019). The text produced by GPT-3 is still not strong enough, in our opinion, to threaten academia; however, it is another step towards a TGAI with that capability. GPT-3's creators themselves are aware of the risk. Brown et al. (2020: 35) write:

Any socially harmful activity that relies on generating text could be augmented by powerful language models. Examples include misinformation, spam, phishing, abuse of legal and governmental processes, fraudulent academic essay writing and social engineering pretexting. Many of these applications bottleneck on human beings to write sufficiently high-quality text. Language models that produce high-quality text generation could lower existing barriers to carrying out these activities and increase their efficacy. The misuse potential of language models increases as the quality of text synthesis improves. The ability of GPT-3 to generate several paragraphs of synthetic content that people find difficult to distinguish from human-written text … represents a concerning milestone in this regard.

OpenAI has sought to minimize the risk by refusing to release GPT-3's source code and instead offering GPT-3 only as an application programming interface. Those interested in using GPT-3 must apply to OpenAI, and OpenAI retains the right to cut off access if an actor is misusing the program (OpenAI, 2020). However, there is currently strong economic motivation and intense competition to produce ever-more sophisticated natural language processing AI. OpenAI's proof of concept will certainly be replicated and there is no guarantee that others will not put similar TGAIs to malicious purposes.
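For concreteness, the sketch below shows roughly how GPT-3 was accessed at the time of writing: not as downloadable code or weights but through OpenAI's gated API, with ‘few-shot’ conditioning amounting to nothing more than prepending sample texts to the prompt. The engine name and parameters reflect the API as it existed around 2020–21 and are illustrative; the sample abstracts are our own inventions, and access itself depends on a key that OpenAI grants and can revoke.

```python
# A sketch of few-shot prompting against OpenAI's GPT-3 API as it existed
# around 2020-21 (older 'openai' Python library; engine name, parameters and
# sample abstracts are illustrative assumptions, not a working recipe).
import openai

openai.api_key = "YOUR_API_KEY"  # granted, and revocable, by OpenAI

few_shot_prompt = (
    "Abstract: This study examines how soil moisture variability affects "
    "regional crop yield forecasts.\n\n"
    "Abstract: We assess the reliability of survey instruments used to "
    "measure public trust in scientific institutions.\n\n"
    "Abstract:"
)

response = openai.Completion.create(
    engine="davinci",       # the original GPT-3 base engine
    prompt=few_shot_prompt,
    max_tokens=120,
    temperature=0.7,
)
print(response["choices"][0]["text"])  # the model drafts a third 'abstract' in the same register
```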

Why the far right is likely to lead the trend towards automated disinformation

Since the 1970s, disinformation, particularly on the scale and audacity of climate change denial, has been almost exclusively a product of the right, initiated and financed by neoliberal interests. The neoliberal project of social and environmental deregulation was bound to produce what Beck (1992) calls ‘negative side-effects’ – widespread economic precarity and worsening environmental degradation. From the start, that project was advanced and ‘popularized’ under cover of deception (McLaren & Markusson, 2020) and ideological misdirection. Neoliberal precarity had real effects on that segment of the public that has since been mobilizing under the banner of the far right. To the injury of increasing economic failure, liberal ideology added the insult of personal responsibility and blame (Peacock et al., 2014). The powerful sense of injury and abandonment that fuels far-right rage is real (Hochschild, 2016). The neoliberal right has had unmatched success in deflecting that rage away from themselves (i.e. the corporate elite) and redirecting it towards governmental and scientific elites, as well as furnishing emotionally potent identity-compensations for those that they have injured, compensations that take the form of racial and gender supremacy or (particularly in the US) religious righteousness (Benkler, 2021: 62) (see chapter 8 in this volume by Robert Horwitz).

An extensive empirical analysis has recently confirmed that systematic disinformation is primarily a right-wing phenomenon in the US (Benkler, 2021). Disinformation is produced and amplified within a right-wing media ecology that includes television news (Fox), talk radio (e.g. Rush Limbaugh) and online outlets (e.g. Breitbart, Drudge Report). Outside that echo chamber, disinformation (left and liberal) is also produced, but its spread is still contained by competitive criticism and fact checking between news sources. Benkler's work is a corrective to the idea that political polarization is driven by algorithms that create ‘filter bubbles’ across the ideological spectrum. A bubble exists, one that excludes fact checking and counter-speech, but its boundaries are maintained by competitive ideological policing among far-right news outlets and exclusive loyalty from far-right publics. Far-right outrage is now demand-driven and has fanatically loyal ‘consumers’, but that is a somewhat perverse and misdirected demand that has been carefully cultivated over a long sweep of history.

We argue further that there is a social constructivist neoliberal tendency, an arrogance that power can simply make truth. Oreskes and Conway (2010) have shown that the corporate politics of risk, pioneered by the tobacco industry, has from the outset involved the denial of accepted science and the promotion of what firms and their PR consultants knew to be disinformation. Exxon's own scientists were aware of the risks of climate change in the early 1980s (Franta, 2021). Firms get seduced by their own marketing culture that values the simulacrum above the thing in itself (Klein, 2000). Ideas and meanings become products like any other, and if you succeed in selling them, then they are effectively ‘true’. ‘Truth’ is merely what works, what has effects in the world. A cavalier attitude towards the truth is also a mark of those who believe that they have the power to remake the world. ‘When we act’, a senior Bush official famously proclaimed in 2004, ‘we create our own reality.’ The power-fuelled arrogance of the neoliberal right seemed to justify contempt for facts, and for those in the ‘reality-based community’ who respect them (Suskind, 2004).

The corporate sector has an interest in conflating the kind of free speech that serves the public interest in a democracy with advertising or paid speech that serves private, commercial interests, often at the expense of the public (Bantjes, 2019). The corporate elite command financial resources that make it possible to flood downstream media with paid speech. Acquiring the big data and software engineering needed to produce powerful AI also takes money, or else, for the emerging players in surveillance capitalism, the data resources that they are already accumulating. The paid speech model will generally favour the neoliberal ideals of elites, articulated, as they have been since the 1980s, with the authoritarianism of the far right.

However, it may be that the costs of producing AI will drop sufficiently that climate science ‘truth bots’ and left-wing counter-speech bots might become potential tools for less powerful actors to wield in the battle over disinformation. We suggest that genuine democrats should be reluctant to take this route. The risk is that AI-versus-AI debate will function like competitive self-play, rapidly ‘training up’ powerful automated speech on both sides, and crowding out human speech. A democracy of robots would be difficult to distinguish from authoritarianism.

The future of peer-reviewed disinformation extrapolated from existing trends

The automation of human capacities has historically followed two steps, the first of which is to make actual human activity machine-like before fully replacing the human component with machine systems (Bakardjieva, 2015). We take the private, for-profit ‘contract research organization’ (CRO), as analysed by Mirowski and van Horn (2005), to be a good example of the first step towards a fully automated AI ‘researcher’. The CRO gives important hints as to what the final automation will look like in terms of the type of disinformation it will produce. CROs are mainly contracted to conduct drug trials, and their clients are pharmaceutical firms seeking regulatory approval. Within the CRO, research leading up to publication is subject to the division of labour in detail and the deskilling of the component tasks. The model is ‘efficient’ in terms of speed and the lowered costs of deskilled labour. It also reduces the autonomy of these ‘assembly line’ researchers to pursue intellectual aims, such as free inquiry, careful replication of results or critique, that are at odds with those of their employer. The CRO's aim, as Mirowski and van Horn put it quite bluntly, is to deliver ‘positive’ results to its pharmaceutical clients. In other words, like the denial machine, their purpose is to minimize evidence of risk. To that end they produce tendentious arguments, cherry-pick evidence, and suppress counter-evidence or lines of inquiry likely to uncover it (Mirowski & van Horn, 2005; Bantjes, 2019).

These are the techniques that we expect AI to be trained to follow in order to breach genuine peer review. They will not invent fake evidence (the ‘unicorn’ style of falsification) in ways that human peer reviewers should be able to catch. Steering clear of pure fantasy would give what we call the ‘Sokal bot’ (after the famous breach of peer review by Alan Sokal (Hilgartner, 1997)) the greatest likelihood of passing peer review. The Sokal bot will rather ‘massage’ evidence tendentiously but persuasively. We also think it unlikely, at this point, that AI could – even by following the ‘credible doubt’ strategy – produce convincing ‘hard denial’, that is, denial that climate change is occurring or is caused by the fossil economy. The target is more likely to be risk response and policy. McLaren and Markusson (2020) have shown that policy at the level of the UNFCCC has been steered away from effective regulatory measures by what they call ‘technological prevarication’. By this they mean overstated promises of technological fixes such as carbon capture and storage that have little to no hope of being scaled up without regulatory intervention. AI publication could plausibly add peer-reviewed authority to technological prevarication.

We need to remember, however, that the power of AI will not simply be in persuasiveness, but also in the speed and volume with which it will be capable of producing disinformation. So, a possible effect could be to erode peer-review standards by overwhelming the peer-review process. Changes in for-profit academic publishing are setting the stage for a pre-emptive weakening of peer review. Peer review is time consuming and increasingly onerous for faculty who are expected to provide it for free in productivity-oriented neoliberal universities. Private, often predatory, enterprise has responded to this situation in which peer review is in short supply and the papers seeking publication are in oversupply. Most academics are now bombarded with offers of quick-turnaround peer-reviewed publication for a fee. Some of these publishers may be defraying the costs of genuine peer review. Most are in the business of turning peer-reviewed publication into paid speech. In the growing pay-to-publish world, unicorn-style AI falsification could easily ‘flood the zone’ of ersatz peer-reviewed publication. While academics might have the resources to distinguish genuine from fake peer review within their own specialty, non-specialists, including academics in other fields, journalists and the general public, will not.

Conclusion

In this chapter we have assessed the rapidly evolving capabilities of existing text-generating AI, and extrapolated a near-future scenario in which AI will be able to produce work that can pass the test of peer review. We have also documented the long-term structural trends driving a disinformation campaign (with climate change denial as its flagship achievement) on the far right. We offer a risk assessment, and the grave risk that we think merits serious attention is an epistemological one. The lodging of disinformation ‘upstream’ in our systems of knowledge production could result in a systemic failure to discriminate between scientific truth and the fictions that flatter the interests of power. At the very least, public discourse would be robbed of any moorings in reliable research, with fatal consequences for already weakened democratic decision making and informed public opposition.

We argue that climate change denial is the exemplar of malicious disinformation in risk societies. It is a special case of a much broader project of the denial of the environmental and social risks that attend the kind of unregulated economic activity that neoliberalism promotes. Those who benefit from such disinformation have a long track record of promoting it and of harnessing far-right authoritarianism to mobilize broad popular support for it.

Our main intent has been to demonstrate the seriousness of the risk. However, our discussion here is not sufficient to ground policy recommendations. We invite others to begin that discussion. OpenAI is aware that its software poses the kinds of risks we outline here. So far, their approach to minimizing those risks has been careful licensing of the release of the software. We expect that, now that proof of concept is in place, equally powerful software is likely to be independently developed and widely distributed. Much more far-reaching measures will therefore need to be taken.

Our analysis suggests that the AI threat is part of a larger set of institutional developments that are transforming knowledge production generally. It is to those larger developments in addition to the particular technology that we need to address our attention. Long-held liberal assumptions about cognitive sovereignty, free speech and the marketplace of ideas that direct our attention towards the individual knower need to be questioned and more attention directed towards the institutional design of our systems of knowledge production. The trend towards privatization and marketization of universities, research and academic publication needs to be rethought. A simple first step towards reform could be the regulation of for-profit peer review and increased public support of independent academic peer review.

References

Bakardjieva, M. (2015). Rationalizing Sociality: An Unfinished Script for Socialbots. Information Society, 31(3): 244–256.

Bantjes, R. (2007). Social Movements in a Global Context: Canadian Perspectives. Toronto: Canadian Scholars’ Press.

Bantjes, R. (2019). The epistemic crisis of liberalism and the rise of the far right. (Conference paper) Political Ecologies of the Far Right, Lund University, Lund, Sweden, 16 November, www.academia.edu/41081072/The_Epistemic_Crisis_of_Liberalism_and_the_Rise_of_the_Far_Right (accessed 13 November 2023).

Beck, U. (1992). Risk Society: Towards a New Modernity. Newbury Park: Sage Publications.

Beder, S. (2002). Global Spin: The Corporate Assault on Environmentalism. Totnes: Chelsea Green Publishing.

Benkler, Y. (2021). A Political Economy of the Origins of Asymmetric Propaganda in American Media. In W. L. Bennett & S. Livingston (eds), The Disinformation Age: Politics, Technology, and Disruptive Communication in the United States. New York: Cambridge University Press.

Benkler, Y. , Faris, R. , Roberts, H. , & Zuckerman, E. (2017). Study: Breitbart-Led Right-Wing Media Ecosystem Altered Broader Media Agenda. Columbia Journalism Review, 3 March, www.cjr.org/analysis/breitbart-media-trump-harvard-study.phpm (accessed 13 January 2020).

Bessi, A. , & Ferrara, E. (2016). Social bots distort the 2016 US presidential election online discussion. First Monday, 21(11), http://firstmonday.org/ojs/index.php/fm/article/view/7090/5653 (accessed 13 November 2023).

Brown, T. B. , Mann, B. , Ryder, N. , et al. (2020). Language Models Are Few-Shot Learners. arXiv Preprint, arXiv:2005.14165 (accessed 13 November 2023).

Dunk, T. (1994). Talking about Trees: Environment and Society in Forest Workers’ Culture. Canadian Review of Sociology and Anthropology, 31(1): 14–34.

Dunlap, R. E. , & McCright, A. M. (2012). Organized Climate Change Denial. In John S. Dryzek, Richard B. Norgaard & David Schlosberg (eds), The Oxford Handbook of Climate Change and Society. Oxford: Oxford University Press, pp. 144–160.

Eliot, D. (2019). Climate science vs. the machines: how the radical right can use AI tech to undermine climate science. (Conference paper) Political Ecologies of the Far Right, Lund University, Lund, Sweden, 16 November.

Eliot, D. , & Murakami Wood, D. (2021). Minding the flocs: Google's marketing moves, AI, privacy and the data commons. Centre for International Governance Innovation, 20 May, www.cigionline.org/articles/minding-flocs-googles-marketing-moves-ai-privacy-and-data-commons/ (accessed 30 August 2021).

Eliot, D. , & Murakami Wood, D. (2022). Culling the FLoC: Market Forces, Regulatory Regimes and Google's (Mis)steps on the Path away from Targeted Advertising. Information Polity, 1–16. doi.org/10.3233/IP-211535.

Fazackerley, A. (2023). AI makes plagiarism harder to detect, argue academics – in paper written by chatbot. The Guardian, 19 March, www.theguardian.com/technology/2023/mar/19/ai-makes-plagiarism-harder-to-detect-argue-academics-in-paper-written-by-chatbot (accessed 28 March 2023).

Fischer, F. (2019). Knowledge Politics and Post-Truth in Climate Denial: On the Social Construction of Alternative Facts. Critical Policy Studies, 13(2): 133–152.

Fisher, M. , Cox, J. W. , & Hermann, P. (2016). Pizzagate: from rumor, to hashtag, to gunfire in D.C. Washington Post, 6 December, www.washingtonpost.com/local/pizzagate-from-rumor-to-hashtag-to-gunfire-in-dc/2016/12/06/4c7def50-bbd4–11e6–94ac-3d324840106c_story.html (accessed 22 November 2023).

Forchtner, B. (2020). Far Right Articulations of the Natural Environment. In B. Forchtner (ed.), The Far Right and the Environment: Politics, Discourse and Communication. New York: Routledge.

Franta, B. (2021). Early Oil Industry Disinformation on Global Warming. Environmental Politics, 30(4): 1–6.

Freilich, J. D. , & Pridemore, W. A. (2005). A Reassessment of State-Level Covariates of Militia Groups. Behavioral Sciences & the Law, 23(4): 527–546.

Gibbs, L. M. (2011). Love Canal: And the Birth of the Environmental Health Movement. Washington: Island Press.

Giddens, A. (1990). The Consequences of Modernity. Stanford: Stanford University Press.

Habermas, J. (1989). The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society. Cambridge: MIT Press.

Harari, Y. N. (2018). Why technology favors tyranny. The Atlantic, October, www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/ (accessed 27 August, 2020).

Harvey, D. (2005). A Brief History of Neoliberalism. Oxford: Oxford University Press.

Hayek, F. A. (1945). The Use of Knowledge in Society. American Economic Review, 35(4): 519–530.

Helvarg, D. (1994). The War Against the Greens: The Wise-Use Movement, the New Right and Anti-environmental Violence. San Francisco: Sierra Club Books.

Hern, A. (2019). New AI fake text generator may be too dangerous to release, say creators. The Guardian, www.theguardian.com/technology/2019/feb/14/elon-musk-backed-ai-writes-convincing-news-fiction (accessed 29 May 2022).

Hilgartner, S. (1997). The Sokal Affair in Context. Science, Technology, & Human Values, 22(4): 506–522.

Himmelstein, J. (1983). The New Right. In R. C. Liebman & R. Wuthnow (eds), The New Christian Right: Mobilization and Legitimation. New York: Aldine.

Hochschild, A. R. (2016). The Great Paradox. In Strangers in Their Own Land: Anger and Mourning on the American Right. New York: New Press.

Hultman, M. , Bjork, A. , & Viinikka, T. (2020). The Far Right and Climate Change Denial: Denouncing Environmental Challenges Via Anti-establishment Rhetoric, Marketing Doubts, Industrial/Breadwinner Masculinities, Enactments and Ethno-Nationalism. In B. Forchtner (ed.), The Far Right and the Environment: Politics, Discourse and Communication. New York: Routledge.

Jack, C., Brundage, M., & Solaiman, I. (2019). GPT-2: 6-month follow-up. OpenAI, https://openai.com/blog/gpt-2–6-month-follow-up (accessed 29 May 2022).

Kaiser, J. (2020). In the Heartland of Climate Scepticism: A Hyperlink Network Analysis of German Climate Sceptics and the US Right Wing. In B. Forchtner (ed.), The Far Right and the Environment: Politics, Discourse and Communication. New York: Routledge.

Klein, N. (2000). No Logo: Taking Aim at the Brand Bullies. Toronto: Random House.

Klein, N. (2014). This Changes Everything: Capitalism Vs. The Climate. New York: Simon & Schuster.

Laclau, E. (1977). Toward a Theory of Populism. In Post-Marxism, Populism and Critique. London: NLB, ch. 6.

Marlow, T., Miller, S., & Roberts, J. T. (2021). Bots and Online Climate Discourses: Twitter Discourse on President Trump's Announcement of US Withdrawal from the Paris Agreement. Climate Policy, 21(6): 765–777.

Martin, N. (2019). New AI development so advanced it's too dangerous to release, says scientist. Forbes, 19 February, www.forbes.com/sites/nicolemartin1/2019/02/19/new-ai-development-so-advanced-its-too-dangerous-to-release-says-scientists/?sh=2a2488d4a801 (accessed 29 May 2022).

McCright, A. M. , & Dunlap, R. E. (2010). Anti-reflexivity: The American Conservative Movement's Success in Undermining Climate Science and Policy. Theory, Culture & Society, 27(2–3): 100–133.

McLaren, D. , & Markusson, N. (2020). The Co-evolution of Technological Promises, Modelling, Policies and Climate Change Targets. Nature Climate Change, 10(5): 392–397. doi: 10.1038/s41558-020-0740-1 .

Mirowski, P. , & van Horn, R. (2005). The Contract Research Organization and the Commercialization of Scientific Research. Social Studies of Science, 35(4): 503–548. doi: 10.1177/0306312705052103 .

N. C. (2019). Conspiracy theories are dangerous – here's how to crush them. The Economist, 12 August, www.economist.com/open-future/2019/08/12/conspiracy-theories-are-dangerous-heres-how-to-crush-them (accessed 13 November 2023).

Napoli, P. M. (2018). What If More Speech is no Longer the Solution? First Amendment Theory Meets Fake News and the Filter Bubble. Federal Communications Law Journal, 70(1): 55–87.

OpenAI (2020). ‘OpenAI API.’ OpenAI, https://openai.com/blog/openai-api/ (accessed 18 January 2020).

OpenAI (2023). Introducing ChatGPT. OpenAI, https://openai.com/blog/chatgpt (accessed 28 March 2023).

Oreskes, N. , & Conway E. M. (2010). Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. New York: Bloomsbury Press.

Peacock, M. , Bissell, P. , & Owen, J. (2014). Shaming Encounters: Reflections on Contemporary Understandings of Social Inequality and Health. Sociology, 48(2): 387–402. doi: 10.1177/0038038513490353 .

Peiser, J. (2019). The rise of the robot reporter. New York Times, 5 February, www.nytimes.com/2019/02/05/business/media/artificial-intelligence-journalism-robots.html (accessed 23 November 2023).

Radford, A., Wu, J., Amodei, D., et al. (2019). Better language models and their implications. OpenAI, https://openai.com/blog/better-language-models/ (accessed 29 May 2022).

The Rush Limbaugh Show (2020). 15-Year-Old credits your host for making her a critical thinker. The Rush Limbaugh Show, 3 April, www.rushlimbaugh.com/daily/2020/04/03/15-year-old-credits-your-host-for-making-her-a-critical-thinker-2/ (accessed 29 August 2021).

Russell, S. J. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. New York: Viking.

Stengel, R. (2020). Domestic disinformation is a greater menace than foreign disinformation. Time, 26 June, https://time.com/5860215/domestic-disinformation-growing-menace-america/ (accessed 22 November 2023).

Suskind, R. (2004). Faith, certainty and the presidency of George W. Bush. New York Times Magazine, 17 October, www.nytimes.com/2004/10/17/magazine/faith-certainty-and-the-presidency-of-george-w-bush.html (accessed 23 November 2023).

Weitzman, T. (2023). Council post: GPT-4 released: what it means for the future of your business. Forbes, 28 March, www.forbes.com/sites/forbesbusinesscouncil/2023/03/28/gpt-4-released-what-it-means-for-the-future-of-your-business/?sh=762e4bf62dc6 (accessed 28 March 2023).

Winkie, L. (2019). Essaybot will do your homework. But it won't get you an A. Vox, www.vox.com/the-goods/2019/4/15/18311367/essaybot-ai-homework-passing (accessed 20 September 2020).

Work, D. (2019). Climate alarmists blissfully ignorant of globalist agenda. Cowichan Valley Citizen, 17 December, www.cowichanvalleycitizen.com/opinion/climate-alarmists-blissfully-ignorant-of-globalist-agenda/ (accessed 29 August 2021).

Zuboff, S. (2018). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs.
