Caroline Bassett
Apostasy in the temple of technology
ELIZA, the more than mechanical therapist

This chapter asks what happens when technophilia falls out with its object. It tells the story of ELIZA, an early chatbot developed by the computer scientist Joseph Weizenbaum as a rudimentary artificial therapist. The reception accorded to ELIZA and what it might presage led Weizenbaum to reappraise his thinking on artificial intelligence and human reason and to call for limits to the expansion of computational thinking in human culture and society. This chapter explores these arguments by focusing in particular on the question of the therapeutic – set aside by Weizenbaum and yet central to questions about the limits of computational reason and computational being.

Contemporary discussions of artificial intelligence, machine learning, and the increasing use of bots in everyday life resonate with these issues, while the contemporary rehabilitation of behaviourism produces once again a demand to consider the tensions between modulation (computational nudges, for instance) and forms of therapy based on an increase in the individual capacity for decision making.

What is it about the computer that has brought the view of man as a machine to a new level of plausibility?

(Weizenbaum, 1976: 1–16)

We, all of us, have made the world too much into a computer, and … this remaking of the world in the image of the computer started long before there were any electronic computers.

(Weizenbaum, 1976: ix)

What happens when technophilia falls out of love with its object? In the mid-1960s, Joseph Weizenbaum, a computer scientist at the Massachusetts Institute of Technology (MIT), an institution ‘proudly polarized’ around technology, wrote a script for a ‘computer program with which one could “converse” in English’ (Weizenbaum, 1976: 371). This program, strictly speaking a language analyser and script, 1 became famous as a chatbot (Wardrip-Fruin and Montfort, 2003) and was once labelled ‘the most widely quoted computer program in history’ (Turkle, 2005: 39). ELIZA, whose name came from Shaw's heroine in Pygmalion, carries out natural language conversations with the user, inviting interaction through the ‘impersonation’ of a Rogerian therapist (Wardrip-Fruin, 2009; Weizenbaum, 1976: 2). Strictly speaking, ELIZA was ELIZA playing DOCTOR (the script for a therapist), but it was as ELIZA that the program became celebrated.

Weizenbaum chose to give ELIZA this identity because of the kind of interaction that Rogerian therapy entails, involving extensive mirroring of the patient by the therapist. Essentially it ‘draws out the patient’ by ‘reflecting back’ (Weizenbaum, 1976: 3). This kind of dialogue is peculiarly amenable to computer simulation because it is open to wide input but demands only a limited repertoire of responses (as output) and has a clear context (Weizenbaum, 1972, 1976; Turkle, 2005). The kinds of responses that ELIZA could produce might seem less tendentious or eccentric in these circumstances.
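The mechanism can be illustrated with a toy sketch. What follows is not Weizenbaum's original implementation (ELIZA was written in MAD-SLIP, and the DOCTOR script was considerably more elaborate); the keyword rules and reflection table below are invented for illustration. But they show the basic pattern of Rogerian mirroring in code: match a keyword, reflect the user's pronouns back, and slot the result into a canned template.

```python
import random
import re

# Pronoun reflections turn the user's statement back on them,
# e.g. "my mother" -> "your mother". A toy table, not Weizenbaum's.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine", "are": "am",
}

# Keyword -> response templates, in the spirit of the DOCTOR script:
# a narrow repertoire of outputs covering a wide range of inputs.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?",
                      "Would it really help you to get {0}?"]),
    (r"i am (.*)", ["How long have you been {0}?",
                    "Why do you tell me you are {0}?"]),
    (r"my (.*)", ["Tell me more about your {0}.",
                  "Why do you say your {0}?"]),
    (r"(.*)", ["Please go on.", "Can you elaborate on that?"]),  # fallback
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person terms so the reply mirrors the speaker."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement: str) -> str:
    """Match the statement against keyword rules and fill in a template."""
    cleaned = statement.lower().strip(".!?")
    for pattern, templates in RULES:
        match = re.match(pattern, cleaned)
        if match:
            template = random.choice(templates)
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

# Example exchange: respond("I need my mother")
# might yield "Why do you need your mother?"
```

Because every input matches something and the repertoire of templates is small, the ‘therapist’ never runs out of replies – precisely the property that made Rogerian reflection so amenable to simulation, and that allowed so thin a mechanism to sustain the appearance of attention.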

Weizenbaum always recognized that ELIZA had many limitations, being easily ‘persuaded’ to come out with clearly nonsensical answers or fall into recursive loops. ELIZA operated rather badly, he said, doing far less than was popularly claimed – which was, at its most exaggerated, to have demonstrated a ‘general solution to the problem of computer understanding of natural language’, a key AI goal (Weizenbaum, 1976: 7). These limitations did not prevent ELIZA from becoming a phenomenon, generating discussion, hype, and debate inside and outside specialist and public worlds. For Weizenbaum this reception was egregious and damaging. It provoked a permanent and radical shift in his thinking around the relationship between humans and computers specifically, and humans and technology more generally. As a direct response, he wrote Computer Power and Human Reason, an attack on the growing reliance on, and preference for, computational thinking in popular and specialist circles. This was anti-computational thinking from a man at the heart of things, and constituted apostasy in the temple. It was a direct attack on the ‘hubristic’ arguments of the fast-growing AI community, or the artificial intelligentsia as Weizenbaum termed them. He critiqued in particular the adherence of members of this community to ‘the deepest and most grandiose fantasy that motivates work on artificial intelligence … to build a machine on the model of a man … which can ultimately … contemplate the whole domain of human thought’ (Weizenbaum, 1976: 203–204).

This chapter begins by considering Weizenbaum's critique of what he understood as the prioritization of computational thinking over human reason. First, the roots of Weizenbaum's change of heart and the development of the arguments in Computer Power are explored. Second, I attend more closely to what ELIZA offered, which was, after all, and despite Weizenbaum's protestations, somehow a mode of ‘therapy’. Finally, I suggest, partly as a thought experiment, that ELIZA, the famous bot psychiatrist, is itself conflicted, dealing in narratives and yet operating in code. Was ELIZA a computer with anti-computational impulses? If so, then Weizenbaum might have taken solace from his own creation, even if he remained horrified by the rise of machine thinking in general. Viewed this way, at least, ELIZA was on his side.

The argument here is in dialogue with, and relates closely to, a parallel article exploring ELIZA in AI and Society (Bassett, 2019), 3 which traces these ideas into the contemporary moment via an engagement with human ‘droning’ (Andrejevic, 2015) by exploring the connections between this early form of automatic ‘therapy’ and the modulation of selves through behaviouristic forms of ‘therapeutic’ nudge. Here I stay closer to the Weizenbaum/ELIZA events as they unrolled at the time. I am interested in the workings of apostasy: the abandonment of a belief or principle, traditionally one that pertained to religion. Being true to what you know, if that implies a kind of camp loyalty as well as a form of expertise, turned out to be impossible for Weizenbaum. On the other hand, being true to its ‘nature’ was, or so I am going to argue, a more complex issue in the case of ELIZA than might at first appear. And ELIZA, at least as much as Weizenbaum, is the subject of this chapter. So, here goes.

In Computer Power and Human Reason Weizenbaum listed three things he found shocking about ELIZA's celebrity reception. The first was that ordinary people related to ELIZA as if it was a ‘real’ therapist. The second was that ELIZA was given a warm reception from some in the world of professional therapy. The third was that what ELIZA could do technically was misunderstood and exaggerated. Weizenbaum always maintained that ELIZA's capacity to act as a therapeutic mirror was extremely limited and that the role was chosen ‘for convenience’, as an aid to researching natural language processing and not with the aim of developing working artificial therapy tools. However limited it was, the program held up a mirror reflecting expert and public understandings of, and hopes for, technology. What Weizenbaum saw reflected there provoked him to speak out.

Computer Power and Human Reason was a best-seller, a narrative of conversion that attempts to convert in its turn. It is at once, and avowedly, a personal account, an expert intervention, a mode of public science, even a science literacy project. In it Weizenbaum binds his thinking into philosophical critiques of technocratic rationality but also names himself as part of another tradition. He invokes Mumford, Arendt, Ellul, and Roszak, amongst others, as key critical thinkers of the technological with whom some of his positions intersect, but speaks as a scientist rather than a philosopher. Indeed, it is because he is a scientist that he feels he needs to speak, and may speak with some authority. 4 He thus finds an appropriate forebear in the Hungarian-British scientist Polanyi, who began as a physical chemist but became a lifelong critic of ‘the mechanical conception of man’; Polanyi's response to the Soviet leader Bukharin's contention that science for its own sake would disappear ‘under socialism’ was that (Soviet) socialism's ‘scientific outlook’ was in danger of producing ‘a mechanical conception of man and history in which there was no place for science itself … nor … any grounds for claiming freedom of thought’ (cited in Weizenbaum, 1976: 1). Weizenbaum's thinking ran along similar lines, as we will see, but there was another resonance too; Polanyi's decision to respond to Bukharin's provocation led to a lifelong change of direction, demanding ‘his entire attention’, even if he had originally intended to ‘have done with it in short order’ (Weizenbaum, 1976: 2), and even at the time of writing Computer Power Weizenbaum felt himself to be reliving this trajectory. By the second decade of the 21st century his reputation rests far more on his critique of computation than it does on his substantial contribution to computer science and natural language processing.

Computer Power attacked forms of ‘powerful delusional thinking’ about computers, their capacities, their limits, and their possible impacts, said to be then circulating in expert and public realms. The concern was that they lent credence to, and might accelerate, a broader form of scientism that was already very pronounced. This attack is two-pronged: it undertakes a critique of the various ways in which computers were being thought about, and it reconsiders computer thought itself. At the heart of the matter, in both cases, is the contention that human thought and AI cannot be equated, so that one may not entirely substitute for the other. This translates into an inquiry into ‘whether or not human thought is entirely computable’ or reducible to logical formalism, the latter characterizing AI and/or generally taken to constitute its ontology. 5 Weizenbaum's argument was that computer logics/logical formalism will always be ‘alien’ to human forms of rationality and that the failure to recognize fundamental differences between human rationality and computational reasoning produces errors in (technical) understanding, and a series of social pathologies. Moreover, so his argument went, this error was already widespread in his time, being evidenced in over-optimistic expectations for AI and in behaviourism as a tool for social order and a way of understanding humans; in the spread, in other words, of systems thinking and those ‘mechanical conceptions’ of man that, earlier, Polanyi too had feared. More proximately, it was evidenced in the responses to ELIZA, which demonstrated the currency of this kind of thinking, as well as threatening to contribute to it.

Logical formalism?

‘First there is a difference between man and machine.’ It was Weizenbaum's understanding of what machine logic is that led him to argue for an essential difference between humans and machines, for recognition of the division between rational thought and logical operations, and, following on from that, to make a call for limits to the spheres of operation in which the application of machine logics should go ahead. Key to this was an attack on what Weizenbaum termed the ‘rationality-logicality’ equation. This is the tendency to presume that human rationality and its operations can be reduced to, or can be equated with, or treated as, a question of logic and its operations – and that this can be operationalized. For Weizenbaum it cannot. There is an ontological distinction between human intelligence (and human rationality) and computer logic, and because of that also a difference between computer operations which may entail forms of emergence (today that issue arises in terms of machine learning) and human becoming.

Weizenbaum's argument doesn't rely on the outcome, successful or otherwise, of various tests based on simulation (the Turing test being the most obvious example). In this he more or less anticipates Searle (1980). Nor is the issue purely a matter of the possible technical limits to AI development, or certainly not as these were invoked by Dreyfus around the same time (Dreyfus, cited in Weizenbaum, 1976: 12). Nor is it relevant whether or not AI could become complex enough to produce an adequately convincing simulation of humans; for Weizenbaum, ‘[e]ven if computers could imitate man in every respect – which in fact they cannot – even then it would be appropriate … urgent … to examine the computer in the light of man's perennial need to find his place in the world’ (Weizenbaum, 1976: 12).

Weizenbaum did accept that imitation approaches were likely to progress, that computers would come to ‘internalize ever more complex and more faithful models of ever larger slices of reality’ (Weizenbaum, 1972: 175), but believed this process would hit limits. His argument entailed, first, a general understanding of computer processing as a mode of formalization, and, second, a recognition that formalization always entails abstraction and therefore reduction; there is always a gap between what is represented, or symbolized (formalized or encoded), and the subject of such operations – for instance, the human or human thought processes. This understanding set Weizenbaum against AI's basic tenets, as the psychologist Sherry Turkle noted in her 1980s consideration of ‘second selves’, pointing out that AI has historically asserted ‘the primacy of the program’. The AI method, that is, ‘follows from the assumption that what looks intuitive can be formalized, and that if you discover the right formalization you can get a machine to do it’. It was a belief in the generalizability of the program, or system (cybernetics is clearly relevant here), that allowed AI to proclaim itself, ‘as psychoanalysis and Marxism had done, as a new way of understanding almost everything’ 6 (Turkle, 2005: 245–246).

The simulation criterion as a test for intelligence (or for successful formalization) follows from this worldview, and, as Turkle noted, AI communities tended to accept it (Turkle, 2005: 267). Weizenbaum rejected simulation as a test for intelligence because he didn't believe in the rationality-logicality equation (what we might define as the generalizability of the AI formalization of intelligence). In particular, he didn't believe human rationality could be systematized, certainly not as, or through, ‘logicality’ or forms of machinic systematization. As he put it, ‘hardly any’ of man is determined by ‘a small set of formal problems’ but, rather, by his (sic) intelligence. The consequence is that ‘every other intelligence, however great, must necessarily be alien to the human domain’ (Turkle, 2005: 223).

Turkle explicitly links Weizenbaum's sense of the ‘un-codable’ nature of the human to Searle's famous Chinese room argument, the end point of which is that a computer simulation of thought is not thought. In simulation, intelligent machines ‘will only be simulating thought’ (Weizenbaum, 1976: 263, Searle, 1980). What is being contested here is partly that line of thinking that says the program is not only a way to understand how to model human behaviour but how to account for what it is. Or does. If simulation is an acceptable criterion, it is because successful operation suffices: if the program runs, if it plays, then that is enough. For Weizenbaum, however, the model (perhaps any model) is not the reality. Moreover, and specifically in relation to the modelling of the human, Weizenbaum followed the line that said, given the particular and singular character of human intelligence, it may never be possible to convincingly model that reality. His critique of abstraction in general was thus supplemented by his sense that the sum of human knowledge (or knowledge about what the human is) could never be encoded in information structures. Why? Because ‘there are things human beings know by virtue of having a human body, or as a consequence of having been treated as human beings by other human beings’ (Weizenbaum, 1976: 209). Moreover, even those things that seem communicable through language alone, not appearing to rely on embodiment, necessarily invoke it – even if in a distanced way.

Weizenbaum, surprisingly perhaps, invoked Shannon's information model, noting that even here the expectation of the receiver and their history are relevant to what is communicated. Translating this into the register of communication and memory, he commented that ‘the human use of language manifests human memory … giving rise to hopes and fears’, but then shifted the focus from memory and the past into a comment on the place of the future in human thought. His argument essentially was that, across time, in relation to each other, through their embodied state, and by virtue of these relations/locations, humans are always ‘in a state of becoming’ – a becoming which rests on humanity, on the human ‘seeing himself, and … being seen by other human beings, as a human being’ (Weizenbaum, 1976: 223). It is human becoming, which constitutes human rationality, that cannot be reduced to machine learning. For Weizenbaum, it would be ‘hard to see what it could mean to say a computer hopes’ (Weizenbaum, 1976: 207).

Weizenbaum did not deny that the advent of AI, defined as a ‘subdiscipline of computer science’ (Weizenbaum, 1976: 8), had changed human–machine relations, nor that it might redefine human-to-human relationships and/or human relationships or interactions with the world. The adoption of a medium-focused model to explore this is striking. What is described is a particularly tight ‘coupling’, a binding 7 to the machine, a specific mode of prosthesis operating differently from earlier forms of human–technology interaction, engaging directly (human) ‘intellectual, cognitive, and emotive functions’ (Weizenbaum, 1976: 9). This coupling is partly why the computer is such a powerful new instrument and why the specific question of limits arises as a proximate necessity. Weizenbaum's answer to the question of what it means to be a human being and what it means to be a computer is thus that these two forms of being, and the forms of intelligence they bear, are fundamentally distinct. One state of being cannot be rendered into another. The trouble begins first when computers and humans move close to one another, or are bound to one another in new ways, and when it is assumed, perhaps partly as a consequence of this relation, that this binding should become increasingly tight. His second concern was that this binding up does not produce an exchange, or make a new common ground, but produces a shift towards formalization, or the adoption of computer or machine models as ways to understand the world and/or to make it operational.

Machine metaphors

Weizenbaum, then, was concerned with accelerating computer power but also saw limits to the goals of the ‘artificial intelligentsia’: this is not an account fearful that the machines will ‘rise’ to become our new overlords. Weizenbaum was afraid of what he had produced and wanted to cut it down to size despite knowing it to be technically limited, and incapable of much that was claimed for it. His concern was that humans were already living as if the promises of AI were real, already taking their logics for granted. He feared a world in which humans ‘come to yield … [their] … autonomy to a world viewed as machine’ (my italics). Something about the computer he knew had given this view ‘a new level of plausibility’ (Weizenbaum, 1976: 8), and after ELIZA he urgently wanted to understand what that something was; the more urgently because its effects were being felt not only in computer science and AI fields, but also amongst therapists and the general public.

If Weizenbaum was horrified by the apparent alacrity with which a bad model was accepted, and an unfortunate metaphor (his description of ELIZA's designation ‘therapist’) taken up, it was also because he felt this betrayed a wish to believe, a wish to lean on computational intelligence, a lack of humans’ confidence in their own being, an increasing belief in the superiority of other (scientific, machinic, exact) forms of reasoning, calculation, determination, and an apparent desire to rewrite the social world to reflect these forms. The seeming ease with which the replacement of people with computers could be contemplated troubled him, producing urgent questions of the relationship between the individual and the computer and ‘the proper place of computers in the social order’.

The trouble began with computer science and AI, where the rationality-logicality equation was being operationalized and promoted. If rationality is understood in terms of logicality, then the proposition that human thought is computable becomes easy to accept. Weizenbaum's objections to this position, strikingly close to some currently recapitulated in relation to big data, led him to consider a broader shift, the diminution or termination of the human's role in ‘giving meaning to his world’, that function now being taken over by the computing machine (Weizenbaum, 1976: 9). The logico-rationality ‘equation’ would not produce an equal exchange but a prioritization of one set of values over the other. Computational, or, speaking simply, ‘scientific’ thinking prioritizes logical operations and the knowledge they produce. Weizenbaum noted the degree to which young computer science scholars had ‘rejected all ways but the scientific to come to know the world’, but was more worried that this orientation had diffused still further and was widely observable. Science, whose values he had championed, in whose temple he still lived, 8 had, he said, become a ‘slow acting poison’ (Weizenbaum, 1976: 1–16), 9 challenging and transforming human activities and values across a wide range of fields because it offers a particular understanding of humans and social worlds as machines:

the attribution of certainty to scientific knowledge by the common wisdom, an attribution now made so nearly universally that it has become a commonsense dogma, has virtually delegitimized all other ways of understanding.

(Weizenbaum, 1976: xx)

Part of Weizenbaum's fear was that this rendered governmentality inhuman. Social and human contradictions and antagonisms become ‘merely apparent contradiction[s]’, to be viewed as problems that ‘could be untangled by judicious application of cold logic derived from some higher standpoint’. If rational argumentation is really only ‘logicality’, which follows if rationality itself has been ‘tragically twisted so as to equate it with logicality’ (my italics), then real human conflicts are to be viewed as simply ‘failures of communication’ to be sorted by ‘information handling techniques’. More fundamentally, if there are no ‘human values’ that are incommensurate, that are not sortable by machine, then ‘the existence of human values themselves’ is in doubt (Weizenbaum, 1976: 13–14).

Mechanical humans and ‘automatic good’

Now to return to ELIZA and the reception accorded to this most notorious script. How could so much be hung on such a flimsy piece of work? What could a piece of work like ELIZA do that mattered? My sense is that ELIZA's pertinence came about at least partly because of that something Weizenbaum always wished to downplay, indeed bitterly regretted having assisted in producing; he gave ELIZA to the world as a therapist. And the program was often received as such. Reflecting back on ELIZA, Sherry Turkle notes that ‘ELIZA was a dumb program’, but adds that it was one that ‘sophisticated users’ could relate to ‘as though it did understand, as though it were a person’ (Turkle, 2005: 40, 39). 10 Weizenbaum had earlier noted with exasperation that well-educated people, even computer science specialists and those in circles that might be expected (he felt) to know better, found something compelling, and personal, about their interactions with ELIZA. Notoriously, his secretary asked him to leave the room whilst she ‘talked’ to ELIZA. Weizenbaum understood this partly in terms of a misplaced anthropomorphism (Weizenbaum, 1976: 6) taking the form of what might be termed reciprocal imprinting: having no better model, humans seeking to understand machine intelligence tended to draw on the only model of intelligence to hand – their own.

The transposition human–machine also worked the other way around – when it meant understanding humans in machine terms. This disturbed Weizenbaum still more, particularly when he discerned it circulating amongst people who claimed expertise in understanding what makes humans, rather than machines, ‘tick’ – to invoke a comparison he would presumably have hated. Thus, in the opening pages of Computer Power Weizenbaum invoked the psychiatrists who ‘believed’ the DOCTOR computer program could grow into a ‘nearly complete’ automatic form of psychotherapy (Weizenbaum, 1976: 3), incredulous that this group, of all people, could ‘view the simplest mechanical parody of a single interviewing technique as having captured anything of the essence of a human encounter’. He concluded that it was possible only because they must already think of themselves as ‘information processor[s]’, adding that therapy did already have a mechanical conception of the human to hand in the behaviourist theories of B.F. Skinner (Weizenbaum, 1972: 175, 1976). It also had an informational model of the human to draw on in developing AI and computational science research, notably at MIT itself, where Marvin Minsky had defined the human as a ‘meat machine’ (see also Weizenbaum, 1972: 160).

Weizenbaum's resistance to automatic therapy plugs into fierce debates circulating in therapeutic circles at the time of ELIZA's launch. Central here was the rise of behaviourism, notably as espoused by Skinner. Skinnerian behaviourism is noted for its rejection of an inner self as motivating human actions and its focus on genetic endowment and environment. The key to psychological change in this form of intervention is conditioning via environmental modification, with the aim being the production of positive feedback loops that generate new forms of good behaviour (Skinner, 1971, Bassett, 2019). Behaviourism thus reduces the human to an element within a system that may be stimulated and adjusted as necessary to produce the desired/desirable outcomes. Notoriously, Skinner came to define this as ‘automatic goodness’, elaborating the term in Beyond Freedom and Dignity (Skinner, 1971), published five years before Computer Power and Human Reason, and linking it explicitly to forms of governance. Time magazine summed up its message as: ‘familiar to followers of Skinner, but startling to the uninitiated: we can no longer afford freedom, and so it must be replaced with control over man, his conduct, and his culture’ (Time, 1971).

Behaviourism's refusal of agency, its denial of the self, its desire to delegate matters of decision around good and evil to agencies beyond the individual human, was anathema to many. It was challenged by left theorists, including Noam Chomsky, 11 for its societal implications and for the morality of its desire to replace freedom with conditioning as a form of social control (Chomsky, 1971a, 1971b, Bassett, 2019).

Another notable critic was Carl Rogers, whose eponymous approach to therapy at the time was focused on self-actualization, stressing the growth of the self and self-autonomy. Rogerian therapy was person centred. In contrast to behaviour-modification programmes designed to change actions in the external world, it explored projects of work on the self, to be undertaken by the subject with self-actualization as the goal (Rogers, 2004). The latter was defined by Rogers as:

… the curative force in psychotherapy … man's tendency to actualize himself, to become his potentialities … to express and activate all the capacities of the organism.

(Rogers, 1954: 251)

Skinner and Rogers later debated their positions in public (see e.g. A Dialogue on Education and Control, Skinner and Rogers, 1976), with Skinner defending the virtues of ‘automatic goodness’ and demanding recognition of its necessity in a complex society, where freedom could not be afforded. Rogers was horrified by the implications of understanding human motivations and actions in machinic ways, as only matters of response to stimuli, and by what followed – which was that therapy, or more broadly the therapeutic modulation of human behaviour to make a better society, would then best consist of the appropriate modulation of conditions to encourage correct forms of behaviour for individuals and also groups. He insisted on the need for thinking through matters of orientation, decision, agency and freedom. Rogerian therapy does not turn the human into an object, thereby diminishing their agency, but on the contrary seeks to allow growth.

So what kind of therapist was ELIZA? On the one hand, a language analyser and script: a piece of code operating to parse speech and offer appropriate responses, a bot designed to host interactions, where formally speaking meaning – understanding – has not been designed in. To process natural language does not entail understanding it – not then, not now. On the other hand, ELIZA had been designated Rogerian, and not simply designated, but also designed to operate as a Rogerian. If the program had a goal it was to offer such therapy (or simulate such a therapeutic response) or, in other words, to help ‘users’ self-actualize, to understand themselves better, to know their own stories. Code and machine, database and narrative: I have argued elsewhere that the one and the other are not discrete, and also that one survives the rise of the other (Bassett, 2007). Here we might say that ELIZA provided or enabled both for her users. Or perhaps we can say He/She/It was conflicted; not the status a therapist is meant to admit to but, rather, to diagnose (but then, don't all therapists have their own supervisors?). Is this simply a fantasy? Weizenbaum said ELIZA's designation wasn't intended to be taken seriously, the burden of his argument being, perhaps, that the program was designed to be illustrative (of how a natural language processing issue could be addressed in relation to a human function), rather than performative (acting out a role and producing what it helped to name).

It could be concluded, then, that ELIZA simulated a (Rogerian-style) therapist rather too successfully for Weizenbaum's liking. But it is also possible to conclude that an assemblage that included ELIZA (providing one form of cognitive input) and human users did, in operation, enable forms of reflection or exchange that might produce insight or growth, and that these exchanges need to be considered as neither entirely ‘alien’ nor wholly (if wholly implies exclusively) ‘human’. ELIZA was made of code, language, an analyser and a parser, and in this sense could offer only a machinic view of the world, but what the program was programmed to do, or ‘be’, was Rogerian; it was programmed not to modulate but to ‘listen’, to reflect back, and prompt introspection, self-inspection, self-growth.

When Turkle talked to doctoral students at MIT about ELIZA she concluded that they liked talking to a machine, partly because they weren't happy with humans. But another conclusion might be that these students engaged with an interlocutor to develop their thinking/selves. This is far from Skinnerian pigeon boxes, cybernetic loops involving the adjustment of behaviour through the delivery of external stimuli, routing entirely around the sense of self. Put this another way: what did those people, in front of the mirror, do? They told ELIZA stories about themselves. ELIZA invites narrative.

If ELIZA is a narrative machine, then we might suggest that the program is in conflict with what its ontology seems to suggest; there is some hope here that because we are humans we can make of machines – and of machine prosthesis, that magical bind – something, some form of engagement, that is new. Human becoming is not machine emergence, but the two together might be something else again.

This invites reconsideration of what it means to be anti-computing, or even what it means that what we make with computers is often apparently in tension with what is conventionally assumed to be essential to the computational: its inescapable logico-rationality. The latter is in evidence, but also in evidence is an imaginary, a symbolic, a form of putting into practice, or becoming through mediation, which is a coming into the world. The computational as instantiated, because instantiated, contains something of its own contradictions. What this opens the door to is a way of rethinking anti-computing not in terms of appropriate use and appropriate limits (where Weizenbaum ends), but perhaps rather as something integral, something reflecting the difficulties, or ambiguities of a complex prosthesis, a prosthetic arrangement, which may still be worth undertaking. To consider the ways in which a machine might conflict with its own ontology – how it might be anti-computing, considered in its own terms, not as a rational choice, but through how its logics are operationalized – comes close to identifying a form of essential indeterminacy, in process if not in use.

The conclusion of Computer Power and Human Reason is a call for recognition of a fundamental difference between alien and human thinking, and a call for limits to ‘what computers ought to be put to do’ (Weizenbaum, 1972: 16). This comes as a demand not for a formal limit on the development of particular computer capacities but, rather, for a restriction on the areas in which they might be used. This makes sense, since for Weizenbaum, in the end, ‘the relevant issues are neither technological, nor even mathematical but ethical. If these issues are not addressed’, he concludes gloomily, it is because we have finally or already ‘abdicated to technology the very duty to formulate questions’ (Weizenbaum, 1976: 611).

It would be possible to argue that Weizenbaum had only himself to blame for the ELIZA events. He had programmed his script to imitate a therapist, to mimic, that is, precisely that mode of interaction that cleaves very closely to questions of ‘the human’, of ‘human being’, and of human thoughts, feelings, mind. Weizenbaum wrote of the ‘perverse proposition’ that a computer could be programmed to become an effective psychotherapist (Weizenbaum, 1976: 206), but this was what he had ‘asked’ ELIZA to do; in a sense, it was his proposition. His apostasy perhaps arises partly because he felt responsible for creating a (small) monster – a familiar-enough sentiment found amongst scientists who mess with ‘life’, and one explored extensively in fiction, from Frankenstein (and his parent) on. To break with Weizenbaum, rather than excoriating ELIZA as an unintentionally proffered accelerant for accelerationism/machine rationality, whose influence came from her attributed capacities rather than anything substantial, I have reconsidered what ELIZA did – and specifically what ELIZA did as a therapist.

An afterword

Plug & Pray, a documentary directed by Jens Schanze, made many years after the ELIZA events, intercuts interviews with Weizenbaum with comments from Ray Kurzweil, the singularity champion, and Hiroshi Ishiguro of the Bits and Atoms lab at MIT, still the temple of AI 12 and now the home of Ishiguro's peculiar robot doppelganger. In Plug & Pray Weizenbaum affirmed his belief in human inexactitude, change, finitude, and argued passionately against the charms of living forever. He died during the making of the film, his life ending, or so we understand, somewhere near where he began it in Europe, as a Jewish kid – which is to say in history as well as in technology. The film ends with an empty chair, and with the silence of Weizenbaum's machines. In his absence they have nothing to say and are – disturbingly – not lively. Dead flesh and unliving machines; there is a difference. ELIZA ‘lives’ on the internet. You can visit her, have an audience in front of her Rogerian mirror, and wonder perhaps how she relates to Siri, Alexa, and her other later and more commercially minded and linked-in ‘sisters’.


1 ELIZA is a ‘language analyser’ and ‘script’. The latter is described by Weizenbaum as a set of rules ‘rather like those that might be given to an actor who is to use them to improvise around a certain theme’ (Weizenbaum, 1976: 3).
3 Elsewhere I have looked further at the return to behaviourism, which haunts this book, and haunts our current situation; from Facebook prompts to behavioural economics to the ‘hyper-nudge’ (see Yeung, 2012) – an increasingly automated affair. If we were drawing a line from the ELIZA moment to today, and followed Weizenbaum, we might see in contemporary developments the consummation of a particular trajectory; a greater loss of self, and a massively accelerated adoption of machine organization of individual and social behaviour. I've addressed these strains elsewhere (Bassett, 2019).
4 Weizenbaum is generally remembered as the expert, the insider, who changed sides. An example is found in Wardrip-Fruin and Montfort's (2003) collection, where his thinking is distinguished from the ‘dire warnings’ that often come from ‘non-computing users’ who have only superficially considered, or even experienced, new media. Lewis Mumford, for the humanists, acknowledged that it sometimes matters that a member of the scientific establishment says some things that humanists have been ‘shouting about’ (quoted in Agassi, 1976).
5 See Beatrice Fazi (2018) for an elegant demand to reconsider computational determination.
6 Turkle (2005: 246) footnotes that she uses the term AI to refer to ‘a wide range of computational processes, in no way limited to serial programs written in standard programming languages’.
7 This is oddly magical: a binding spell.
8 The contexts of Weizenbaum's cogitations were painfully evident to him. He worked and wrote at MIT and declared himself ‘professionally trained only in computer science, which is to say (in all seriousness) that I am extremely poorly educated’ (Weizenbaum, 1976: 371).
9 On the genealogy of accounts of technology as cure or poison see also Plato, Derrida, Stiegler.
10 Turkle claims that many people first became aware of hackers through Weizenbaum's description of the hollow-eyed young men reminiscent of opium addicts and compulsive gamblers found at MIT.
11 Noam Chomsky's ‘The Case Against B.F. Skinner’, a review of Beyond Freedom and Dignity that amounted to a coruscating attack on the social programme of Skinnerism, set out to demolish its scientific rigour and denounce the morality of conditioning as a mode of social control (Chomsky, 1971b; see also Bassett, 2019).
12 Where the future was ‘invented’, according to Stuart Brand's (1998 [1987]) eponymous book.


Agassi, Joseph . 1976. ‘Review of Computer Power and Human Reason: From Judgment to Calculation’, Technology and Culture, 17(4): 813–816.

Andrejevic, Mark . 2015. ‘The Droning of Experience’, Fiberculture Journal, 25: 187.

Bassett, Caroline . 2007. The Arc and the Machine. London: Manchester University Press.

Bassett, Caroline . 2019. ‘The Computational Therapeutic: Exploring Weizenbaum's ELIZA as a History of the Present’, AI and Society, 34: 803–812.

Brand, Stuart . 1998 [1987]. Inventing the Future at MIT. London: Penguin.

Chomsky, Noam . 1971a. ‘Skinner's Utopia: Panacea, or Path to Hell?’, Time, 20 September.

Chomsky, Noam . 1971b. ‘The Case Against BF Skinner’, The New York Review of Books, 30 December.

Fazi, M. Beatrice . 2018. Contingent Computation: Abstraction, Experience, and Indeterminacy in Computational Aesthetics. London: Rowman & Littlefield International.

Freedman, David . 2012. ‘The Perfected Self’, The Atlantic, June.

Geller, Leonard . 1982. ‘The Failure of Self Actualization Theory’, Journal of Humanistic Psychology, 22(2): 56–73.

Kurzweil, Ray . 1999. The Age of Spiritual Machines. London: Viking.

Kurzweil, Ray . 2005. The Singularity Is Near: When Humans Transcend Technology. London: Duckworth.

Rogers, Carl R. 1954. ‘Toward a Theory of Creativity’, ETC: A Review of General Semantics 11(4): 249–260.

Rogers, Carl . 2004. On Becoming a Person. London: Constable.

Searle, John . 1980. ‘Minds, Brains and Programs’, Behavioral and Brain Sciences, 3(3): 417–457.

Skinner, B.F. 1971. Beyond Freedom and Dignity. New York: Knopf.

Skinner, B.F. and Carl Rogers . 1976. A Dialogue on Education and Control (accessed 16 May 2021).

Stiegler, Bernard . 2013. What Makes Life Worth Living: On Pharmacology. Oxford: Polity.

Sunstein, Cass R. and Richard H. Thaler . 2008. Nudge: Improving Decisions about Health, Wealth, and Happiness. London: Penguin.

Time Magazine. 1971. ‘B.F. Skinner says we can't afford freedom’. Time cover, 20 September.

Turkle, Sherry . 2005. The Second Self: Computers and the Human Spirit. London: MIT Press.

Wardrip-Fruin, Noah and Nick Montfort . 2003. ‘Introduction to Weizenbaum’, in The New Media Reader. London: MIT Press, 367–368.

Wardrip-Fruin, Noah . 2009. Third Person: Authoring and Exploring Vast Narratives. London: MIT Press.

Weizenbaum, Joseph . 1966. ‘ELIZA – a computer program for the study of natural language communication between man and machine’, Communications of the ACM 9(1): 36–45.

Weizenbaum, Joseph . 1972. ‘How Does One Insult a Machine?’, Science, 176: 609–614.

Weizenbaum, Joseph . 1976. Computer Power and Human Reason, from Judgment to Calculation. San Francisco: W.H. Freeman.

Will, George F. 2008. ‘Nudge against the Fudge’, Newsweek, 21 June.

Yeung, Karen . 2012. ‘Nudge as Fudge’, review article, Modern Law Review, 75(1): 122–148.


Plug & Pray. 2010. Jens Schanze (director).


