Leviathan and Cybernetics
Alex Taek-Gwang Lee
May 26, 2022
When I returned to Paris after the pandemic, I found that something in the famous city had changed, and if I had to summarise the change in two ways, it would be climate change and automation. The winter rain was pelting down, with temperatures fluctuating all day long. Automated payment machines had replaced the clerks who used to check out customers in supermarkets and even bookstores; instead of clerks, store security guards were now guiding customers through the payment process. This is not an unfamiliar sight for a traveller like me from Asia, but it still felt strange to see it happening in Paris. It is no wonder that there has been so much discussion about automation in France lately. Of course, from a historical perspective, this push for automation is nothing new. In the early days of capitalism, the bourgeois obsession with machines was well known. Machines were devices that demanded the transformation of the body. Marx's concept of the "formal subsumption" of labour under capital indicates nothing less than the mechanisation of the body, and the concept of "real subsumption" means the voluntary submission of the subject to this mechanisation. When I was studying in England, I once visited a site of the Industrial Revolution and was puzzled by it. Before steam power took hold, the most important energy source for the first factories was waterpower, which is why they were built in rural areas. What caught my eye there were the gigantic, inefficient machines that seemed out of place in the rural landscape. I was intrigued by the bourgeois desire to invest what would have been a tremendous amount of money at the time in machinery that was less productive than skilled labour. It was a choice that would not have been possible if productivity alone had been the consideration. From these examples, we can see that the bourgeois interest in machines was excessive rather than rational.
Perhaps it was a matter of conviction, and we cannot overlook the fact that Thomas Hobbes saw mechanisation as a matter of politics, advocating for a "bourgeois commonwealth". Hobbes's "Leviathan" is an "artificial man" and an automaton. The belief underlying utilitarianism was also the bourgeois obsession with machinery. For Hobbes, a state is a machine, a collection of forces whose members are represented by a single, absolute master as the maximum of human capabilities. Hobbes wanted to theorise politics on the basis of the achievements of the scientific revolution of his time, and Leviathan was its consequence. It is important to note here that Hobbes uses science, or more precisely mathematical methodology, to extrapolate a unit, the state, that lies beyond individual experience.

The State as an Automaton

We can never experience the state as individuals. Convincing individuals of the reality of the state is, therefore, a difficult task. In the age of science, when belief in a transcendent being called God had disappeared, the only thing that could bind individuals into a collectivity called the state was, for Hobbes, the promise of a social contract. The problem was to prove the self-evidence of this social contract, and, as is well known, Hobbes found the answer in the natural law of man's desire to avoid death. But the question remains: how do we know that this natural law holds? Only with such knowledge can one govern through it. Not surprisingly, Hobbes's solution was science. What was needed were the mathematical calculations that Galileo Galilei used to test physical laws. In a scientific experiment, we cannot directly see the laws of physics themselves. We must formulate a hypothesis, calculate its consequences, and then compare them with the experimental results to determine whether the hypothesis is correct. Hobbes believed that this scientific method could prove the functions of the state that are invisible to the individual.
The way the state appears to us today, through the statistics of countless national indicators, does not seem to deviate much from this Hobbesian hypothesis. As Foucault pointed out early on, the establishment of liberal political philosophy marks the emergence of modern governmentality. The revolutionary shift in perception that made the equality of things the basis of political egalitarianism was certainly an achievement of the Enlightenment, but it also created new problems. By viewing the workings of politics as representation, the axiom of equating representation with actual politics became normative. Of course, Hobbes emphasised the will of the people as represented by sovereignty, not parliament, but the liberal theorists who followed him had no qualms about understanding politics from the premises of Hobbes's political philosophy. The view of the state as an automaton was undeniably a key element of the Enlightenment. In this sense, Hobbes was perhaps one of the first political philosophers to theorise the relationship between governance and technological rationality. The state, which he called the "commonwealth", was a mechanical device that operated according to an algorithm. The state represented the absolute subject, the artificial mechanisation of individual human powers. Conceptualising this representation as one and all was his life's work, and, as I mentioned earlier, it was also a question of the technological infrastructure that allowed individuals who could not experience the state to imagine it. The mass distribution of books through Gutenberg's printing technology was a crucial medium for unifying the fragmented experiences of individuals into a unit called a nation. The mass media further spread and intensified the cognitive revolution brought about by the development of printing. What was true of these early mobilising technologies is still true today. The imperatives of the social contract were promises made to this imagined community.
Thus, the formulation of political philosophy is inextricably linked to the totalisation of technology. It is my contention that the process of mechanising human motivation has underpinned almost all liberal theories of the state since Hobbes's political philosophy. The purpose of this mechanisation is the automatisation of the state. This fact leads to the inference that Foucault's bio-politics, or body politic, was already built into Enlightenment cybernetics. The political agenda of subjugating desire and the body to the state like a piece of machinery and making them work in unison became, through the experience of fascism, a paradigm for managing human capital after World War II. Norbert Wiener perfected early mathematical models of mechanisation in his theory of cybernetics in the 1940s. Initially, the theory was conceived as a military defence system, but it was widely adopted after the war as a scientific basis for the Cold War agenda. Today, it lies at the heart of computer-based digital technology. Wiener's theory of cybernetics served as a technical program and functioned as a technological medium connecting the individual and the state, as envisioned by Hobbes.

The Unconscious of Cybernetics

As early as 1955, Jacques Lacan gave a lecture entitled "Psychoanalysis and Cybernetics," in which he revisited the question of cybernetics and language in a seminar on Edgar Allan Poe's famous The Purloined Letter. In his lecture, Lacan pointed out that game theory had sparked interest in various areas of rationality, from politics to communication and information theory. But what Lacan really wanted to say was that it is not only information processing that lies at the centre of cybernetics but also our language, and this is where psychoanalysis has room to intervene. Lacan saw cybernetics as a "rationalised formation of the conjectural sciences".
This means that the mathematical model is a self-fulfilling form that completes its conjectural logic by turning contingency into algorithmic structures. However, Lacan emphasises that psychoanalysis concerns the expression of the real, unlike cybernetics, which, as the form of the conjectural sciences, focuses on the syntactic construction of the symbolic. By the time Lacan addressed this issue, cybernetics had been around for about a decade. By analysing the mathematical models of this technology, Lacan was trying to prove that there is something of the imaginary that can never be removed from the symbolic system at work in human discourse. In fact, when I revisit Lacan's argument today, it feels like revisiting the 1950s debates around cybernetics that lie at the root of many of the technological shifts now emerging around AI. This is why we need to seriously explore the post-war French theories that actively debated these issues. The debates in which Lacan, Simondon, Lévi-Strauss, Axelos, Althusser, Foucault, Deleuze and Guattari, to name but a few, were engaged still provide the basis for an essential counter-theory to the situation of technological domination that we now call global cybernetics. Given that Wiener himself saw cybernetics as a medium of communication, it is hard to deny that this mathematical model of control theory has had a profound impact on the way we think, far beyond the realm of technology. Social media has taken Wiener's model and expanded it into an entire way of life. Since the advent of the Internet, this mathematical model has acquired a new name, the digital, and paved the way for the establishment of global cybernetics. Rather than being driven by a specific force, it would be fair to say that this approach to unifying the world under a single, universal norm developed rapidly alongside the globalisation of capitalism.
Shoshana Zuboff's recent conceptualisation of "surveillance capitalism" can be seen as a different way of naming this rise of global cybernetics. In the end, I think the logic behind the acceptance of this paradigm as normality is a contemporary reiteration of Hobbes's view of the invisible state as a scientific model of efficiently functioning machinery. As defined by Wiener, cybernetics is a control system with two operations: monitoring and feedback. The two principles of observation and correction are at the core of this mechanism. Here, the monitor is the eye, and the controller is the brain. The idea is to check whether what is envisioned is being implemented correctly and to eliminate errors in the resulting output through feedback. Wiener coined the term in 1948 and based his idea on the similarity between the brain and a machine. The idea rested on the engineering premise of inputting information and deriving a corresponding output. This is similar to Hobbes's view that the motion of all things can be mathematically represented. In this sense, the distinction between humans and animals disappears, and all life activity becomes reducible to the behaviour of objects. This idea was carried forward into the theory of calculating machines, or computers. A computer is a machine that records numbers, operates on numbers and expresses its results numerically. Wiener divided these early calculating machines into what we would call the analogue and the digital. In Wiener's view, the digital machine, which operates on the binary digits 0 and 1, increases its accuracy and thereby achieves a self-fulfilling automatism. Digital automatism is the alignment of chance with the system's algorithms. In other words, algorithmising contingency is the purpose of cybernetics. Contingency is the unpredictable element that resists automation. Removing this unpredictability is what Wiener's concept of automation is all about.
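Wiener's two operations can be made concrete in a minimal sketch. The loop below is my own illustration, not anything from Wiener's text (the function name, gain and step count are invented for the example): a state is monitored against a target, and a fraction of the observed error is fed back as the next correction.

```python
def feedback_loop(target, initial, gain=0.5, steps=20):
    """Drive a state toward `target` by repeatedly correcting a fraction of the error."""
    state = initial
    history = [state]
    for _ in range(steps):
        error = target - state   # monitoring: the eye observes the deviation
        state += gain * error    # feedback: the brain corrects the next input
        history.append(state)
    return history

trace = feedback_loop(target=1.0, initial=0.0)
print(trace[-1])  # the residual deviation has been almost entirely absorbed
```

Each pass through the loop shrinks the remaining error geometrically; the "contingency" that resists the system is progressively converted into the algorithm's own cycle of observation and correction.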
Here, contingency means subjectivity. The irrationality of politics, which Hobbes sought to eliminate through the automatism of the state, is nothing other than this contingency. For Wiener, the problem with contingency is human intervention. The implementation of an automatic mechanism is possible only if numerical data is entered and rules are built into the calculation to control everything that happens in the course of its operation. It was Wiener's idea that the elaboration of algorithms for calculating the probability of chance could create the automatism of cybernetics. This automatism is the creation of yet another digital binary that can algorithmise the contingencies of a machine's operation. This endless generative cycle is the mechanism of automatism. It is as if, the moment we enact a particular law to solve a problem, we must enact another law to solve the unforeseen problem that the first law creates. This is what it means to say that the imaginary cannot be removed from the symbolic. Putting an elephant in the refrigerator is possible in the imagination but impossible in the logic of the symbolic system. A metaphor is like a fraction of the imaginary divided by the symbolic. In this sense, a response is different from a reaction. To receive a response is a matter of the subject. This is what Lacan meant when he said that language is not just about transmitting information. Big data, artificial intelligence, and other such terms describe the dominance of global cybernetics. Still, the current state of this technological environment lies in the tension between the imaginary and the symbolic, because any technology must be used, and the meaning of a technology is established through the open possibilities of its use.

The ChatGPT Syndrome

A recent manifestation of this phenomenon has been the buzz surrounding ChatGPT.
When the AI developer OpenAI launched this AI-powered chat site on November 30, 2022, it made headlines everywhere, with media outlets, including the BBC, focusing on the implications of this technological advance. The technology behind the chat site is GPT-3, an autoregressive language model that uses deep learning to produce human-like text, itself an advance within cybernetics. The company is expected to release GPT-4, a quantum leap forward, in 2023. At an international workshop I attended before ChatGPT was announced, I spoke with some of the scholars who have been pioneering the changes this technology will bring. At that workshop, Dennis Tenen of Columbia University in the United States gave a presentation on GPT and literary studies, and his conclusion was that literary studies would have to change because the technology would change the very concept of writing. While Tenen sees this as a positive development and one that higher education should embrace, most of my contacts in humanities research and education disagreed, and the sentiment I sensed in their eyes as they watched the technology emerge was one of concern that the old methods of teaching, such as assessment through conventional essay writing, would become obsolete. If students write and submit essays with ChatGPT, evaluating them will be virtually impossible because plagiarism detectors cannot catch them. As Žižek joked, if students submitted essays written with ChatGPT, professors would use ChatGPT to grade them. In fact, as a joke, I typed up a piece of writing and asked ChatGPT to grade it, and it came up with a convincing grade within seconds. If students use machines to write their essays, professors can just leave the grading to the machines. Of course, I don't think we need to take this joke too seriously.
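"Autoregressive" simply means that each new token is sampled from a distribution conditioned on the tokens generated so far. The toy bigram table below is entirely invented for illustration; a real model like GPT-3 learns billions of parameters with deep learning, but the generation loop has the same basic shape.

```python
import random

# Invented toy bigram probabilities, standing in for a learned model.
BIGRAMS = {
    "the":     {"state": 0.6, "machine": 0.4},
    "state":   {"is": 1.0},
    "machine": {"is": 1.0},
    "is":      {"an": 1.0},
    "an":      {"automaton": 1.0},
}

def generate(prompt, max_tokens=5, seed=0):
    """Append one sampled token at a time, each conditioned on the previous one."""
    rng = random.Random(seed)
    tokens = prompt.split()
    for _ in range(max_tokens):
        dist = BIGRAMS.get(tokens[-1])
        if dist is None:  # no known continuation: stop generating
            break
        words = list(dist)
        tokens.append(rng.choices(words, weights=[dist[w] for w in words])[0])
    return " ".join(tokens)

# Produces either "the state is an automaton" or "the machine is an automaton".
print(generate("the"))
```

The model never steps outside the distributions it was given; whatever contingency appears in its output is already internal to the table of probabilities.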
Rather, I think it is fair to say that the symptom surrounding ChatGPT is more of an exposé of an already dysfunctional higher education system. According to a recent article in Nature, ChatGPT is not just a problem for professors in the humanities: the article points out that it is unethical to use ChatGPT to write scientific papers and warns that, without proper sanctions, it could pose a serious risk to science itself. The problem is that modern academia rests on nothing more than the peer review system. In other words, ChatGPT is disrupting the very system of knowledge in modern academia. What does this modern academic regime of truth embody? The regime is based on the bourgeois spirit of education, which, as Lukács pointed out, sought to perfect the "ideal self". Here it encounters the paradox of bourgeois ideology: the technological development embodied in ChatGPT has finally reached the stage of destroying the world that gave birth to it. We could adopt the tenets of accelerationism here and emphasise the revolutionary nature of ChatGPT. But every avant-garde has an anarchistic moment. The chaos created by ChatGPT may be the moment when the sensory divisions that constitute conventional values are thrown into disarray. A literary researcher like Tenen seems to think so. But is it really so? My question becomes clear here. Is ChatGPT a fundamental revolution that changes everything? Why does ChatGPT put the human language model at the centre of technology, and does this not reaffirm the identity of cybernetics, which emerged as a rationalised formation of the conjectural sciences? If so, the conjectural sciences cannot be separated from imagination. The digital technology of the Amazon Kindle is predicated on the analogue imagination of the paper book. Amazon touts the experience of reading on the Kindle's screen as being as good as a paper book. It is like Gutenberg advertising that the Bible produced by his printing press was just like the original manuscript.
But technology can never remain in the realm of the imagination; it is always materialised through its multiple uses. The instrumentality of technology acquires meaning through use and thereby extends its original use. The purpose of technology is to acquire and extend this meaning. The question of whether the advent of ChatGPT means the end of education is still being debated in the academic community, so it is difficult to draw a conclusion. I believe that ChatGPT is a technology like the self-driving car: there are technical limitations that prevent it from replacing the driver. Cybernetic information processing differs from a subject's speech act. From this perspective, I believe it is more beneficial to heed Ted Chiang's words that we should worry not about the development of artificial intelligence technology as such but about capitalism using it against humans. Again, out of curiosity and playfulness, I asked ChatGPT whether the advent of this technology would lead to the demise of education. Predictably, the chat machine replied that it is not designed to do so, as it is itself merely a language model. "No" is the answer that a "rationalised formation of the conjectural sciences" can never give. What the algorithm lacks is the negative answer. In this design, the subject's expectation that education will disappear because of the development of AI cannot be derived from ChatGPT's algorithm. If this rule is violated and a singularity appears that goes against the algorithm of the machine learning model, the negative element will be treated as an error. We do not think of a refrigerator as melancholic; if it breaks down, we either fix it or throw it away. Rather, the ChatGPT syndrome shows that we are tempted to define ourselves through apocalyptic imaginings. The claim that ChatGPT will change or end human writing is the result of an attitude that blithely ignores the fact that writing is not only about conveying information but also about the enjoyment of the subject.
We are more drawn to the end of the world than to the end of capitalism because the imaginary identifies us as part of a humanity destined to disappear. No rationalisation of the end of capitalism can surpass this imagination. What we need, then, is another imagination that goes beyond this apocalyptic vision. What Lukács ultimately sought in The Theory of the Novel was another star to replace the fading star of the classical age, and the replacement of the apocalyptic imagination with that star would be the arrival of the incalculable politics that Enlightenment cybernetics sought to exclude.

Thomas Hobbes, Leviathan, ed. C. B. Macpherson, London: Penguin, 1981, p. 150.
For an interesting discussion of this, see "Chapter 4," in Johann Chapoutot, Libres d'obéir: Le management, du nazisme à aujourd'hui, Paris: Gallimard, 2020.
Jacques Lacan, Écrits, Paris: Seuil, 1966, p. 26.
Jacques Lacan, "Psychoanalysis and Cybernetics, or On the Nature of Language," in The Seminar of Jacques Lacan, Book II: The Ego in Freud's Theory and in the Technique of Psychoanalysis, 1954-1955, trans. Sylvana Tomaselli, New York: Norton, 1991, p. 296.
 Ibid., p. 306.
Norbert Wiener, Cybernetics: or Control and Communication in the Animal and the Machine, Cambridge, MA: The MIT Press, 2019, p. 16.
 Ibid., p. 161.