
Transhumanism: From Adoration of Technology to Mythology

The biologist and philosopher Max Schnetker on transhumanism as a crypto-religion and elite ideology

Transhumanism holds out the prospect of overcoming the human being by technical means. The starting point of its hopes and fears is thinking machines whose capabilities are supposed to far exceed those of humans. Such devices are nowhere in sight, the expected superintelligence is purely speculative, and many transhumanist assumptions are fanciful - so why is transhumanism spreading anyway?

I first encountered transhumanism while browsing the internet aimlessly. My first reaction was to take the debate about the revenge of the coming superintelligence for satire. What was your first reaction?
Max Schnetker: It was different for me. At university I came into contact with transhumanist ideas, but with their serious offshoots, so to speak - authors like Ray Kurzweil or Nick Bostrom. To an impartial layman, as I was at the time, transhumanism could at first glance appear to be a serious science, with publications and institutes that sounded academic, like the Machine Intelligence Research Institute or Singularity University. The more I read, the more terrifying I found the transhumanist ideas. Finally it became clear to me that this current can only be understood by means of ideology critique.
It is for this reason that you recently published a study entitled "Transhumanist Mythology". But let us first establish a few basics for better understanding. Transhumanism is a movement that is mainly active in the United States and expects a fundamental social transformation from so-called Artificial Intelligence (AI). From a certain point in time - the so-called singularity - AI will supposedly expand and optimize itself. The singularity marks, so to speak, the takeoff of recursive self-optimization.
Max Schnetker: Transhumanists believe that with the singularity a superintelligence will arise. Because its intelligence will supposedly grow exponentially, it will quickly become superior to us humans to a degree that is no longer comprehensible. Transhumanists often compare the difference between it and us to that between a human and a mouse or an insect. In debates, this assumption is often used to ward off critical or skeptical objections, because we simply cannot imagine the possible actions of the coming superintelligence. It is downright omnipotent; in the end it behaves toward us like a god.
That actually sounds like a tricky theological problem. The counsels of the superintelligence are unfathomable to us mortals ...
Max Schnetker: At the same time it is we who create this highest being. We would have to make sure that the superintelligence tolerates our existence - otherwise it will brush us aside, just as we humans bulldoze an anthill to build a parking lot. In this context the transhumanists speak of the value loading problem: how do we get the AI to remain well-disposed toward us? This is difficult because it is supposed to program itself in a process of recursive optimization, i.e. continuously improve its own program.
Strange, really: transhumanists create a god for themselves - and then want to dictate terms to it.
Max Schnetker: It sounds strange, but for those who subscribe to this idea, it is the fateful question of our time. That is why the movement is also called the AI risk movement.
The fear of the social consequences of AI is widespread. Many people fear discrimination or arbitrary and non-transparent decisions by authorities or companies. The transhumanists are interested in something else: an AI that, so to speak, becomes independent and takes over power.
Max Schnetker: At this point, it seems important to me to point out that we are still a long way from a general AI that would be context-independent and autonomous. In any case, recent successes in machine learning do not give rise to such concerns. Nevertheless, influential entrepreneurs from the robotics and internet economy such as Hans Moravec or Elon Musk are among the transhumanists, as is the aforementioned philosopher Nick Bostrom, who heads an institute in Oxford.
In fact, the fear of AI coming to power is astonishingly widespread. Even the physicist Stephen Hawking believed that its uncontrolled development could mean "the end of mankind". In an interview he said: "As soon as we construct an intelligence that improves itself at increasing speed, humans, whose development is limited by the slower biological evolution, can no longer keep up and will be displaced." A superior species, the idea goes, takes the habitat away from an inferior one. Now this may apply to humans, who actually do destroy other species in large numbers, but it is by no means a law of evolution - and why on earth should an AI want to protect its own existence at all costs? Why should it develop any desires at all?
Max Schnetker: It is not so much that desires are ascribed to the superintelligence; rather, it destroys us inadvertently through its reckless behavior. Ultimately, projection plays a role in this notion. The computer scientist Maciej Ceglowski once compared supporters of the AI risk movement to children telling each other scary stories: "Like nine-year-olds camping in the garden, playing with a flashlight in their tent. They project their own shadows onto the tarpaulin and get scared that there is a monster. But in reality it is a distorted image of themselves that frightens them."
The transhumanist thought experiment about the basilisk fits in with this. The future superintelligence, which supposedly wants to expand endlessly, will punish its opponents or eliminate anyone who could endanger its existence. Because the records of the past are available to it, it is supposedly dangerous to express oneself critically. This in turn leads to the paranoid conclusion that it is best to say nothing at all on the subject. Oh dear, now I must have made a mistake ...
Max Schnetker: The thought experiment about the basilisk is even more complicated. It only becomes plausible if one has already internalized various basic assumptions of this transhumanist current - for example, that a computer simulation of my neurons is identical to me as a person. If the superintelligence ran such a simulation of me in the future, that would be my afterlife. The basilisk would be a superintelligence that runs such simulations of people living today and punishes them for their present actions by torturing their simulations.
The basilisk endangers not only those who hinder the rise of the superintelligence, but also those who have not contributed enough to it. Anyone who learns of the imminent emergence of the superintelligence is obliged to work toward it. That is why the figure is called the basilisk: the moment you find out about it, the basilisk looks at you. This thought experiment picks up an old Christian motif that played a role in proselytizing: as soon as you have heard the good news, you must act on it, otherwise hell threatens.

Irrationality as a flaw

Now you have deciphered the story of the basilisk from the perspective of religious studies, that is, interpreted it allegorically. The superintelligence corresponds to God, except that this God is created by us. At the same time, transhumanists emphasize their scientific claims. How do the scientific and the religious fit together?
Max Schnetker: The strange thing is that the godlike superintelligence actually symbolizes perfect rationality. That is precisely where its superiority lies. Transhumanists, by contrast, consider their own thinking flawed, or at least prone to error. Cognitive biases lurk everywhere. They justify this deficit with borrowings from evolutionary psychology, which supposedly show that we humans are still determined by the mental legacy of the hunters and gatherers of the steppe. This legacy is said to be the cause of all kinds of ills, from crime to climate change.
In addition, experimental psychology, and especially behavioral economics, has shown that people usually do not act according to mathematically formalized decision-making patterns. The movement of the neo-rationalists, which overlaps with transhumanism, wants to leave this limitation behind and adopt rational modes of behavior - for example, with workshops in which you learn to think like an AI.
Now the neo-rationalists and transhumanists strive for a very specific kind of rationality, namely that of game theory. Its core, as is well known, is the optimization of one's own utility according to probability calculations - in other words, the notorious homo oeconomicus.
Max Schnetker: This kind of rationality is an exception in everyday life. The transhumanists experience this as a flaw, as do all the limitations associated with our physicality.
But isn't there a mix-up here - who says that game-theoretic calculations are good for our everyday lives? With its modeling, game theory posits a certain rationality as the standard and then wonders why people fail to meet it. My favorite example comes from the discipline's founding period: the first experiments with models such as the ultimatum game and the prisoner's dilemma took place at the RAND think tank in 1949. The players, including the secretaries of the mathematicians John Nash and Merrill Flood, confidently ignored the game-theoretic equilibria and cooperated throughout. Nash's reaction: "It is surprising how inefficiently the players accumulated their winnings. One would have expected them to be more rational." In other words: which behavior counts as sensible is determined by my model.
Max Schnetker: Admittedly, it is a bourgeois rationality that works very well in a certain environment: an environment that is extremely competitive, where performance is constantly measured and individualized, for example in technical sciences. This is probably why this kind of reason makes sense to some members of this milieu. Superintelligence is a projection of people who depend on game theory rationality in their everyday work and experience emotionality as a weakness that is punished by their environment.
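The RAND anecdote can be made concrete with a toy calculation. This is my own sketch, not from the interview, and the payoff numbers are conventional textbook values rather than those of the 1949 experiments: in the one-shot prisoner's dilemma, mutual defection is the only pure-strategy Nash equilibrium, even though mutual cooperation pays both players more - which is exactly the "inefficiency" the secretaries' cooperation produced.

```python
# One-shot prisoner's dilemma with conventional payoffs (illustrative only).
# Payoffs are (row player, column player).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
MOVES = ("cooperate", "defect")

def best_response(opponent_move, player):
    """Return the move that maximizes this player's own payoff."""
    if player == 0:  # row player
        return max(MOVES, key=lambda m: PAYOFFS[(m, opponent_move)][0])
    return max(MOVES, key=lambda m: PAYOFFS[(opponent_move, m)][1])

def nash_equilibria():
    """Pure-strategy profiles where each move is a best response to the other."""
    return [
        (a, b) for a in MOVES for b in MOVES
        if best_response(b, 0) == a and best_response(a, 1) == b
    ]

print(nash_equilibria())                     # [('defect', 'defect')]
print(PAYOFFS[("cooperate", "cooperate")])   # (3, 3) - better for both players
```

Defection is each player's best response whatever the other does, so the model declares mutual defection "rational" - precisely the standard the RAND players cheerfully ignored.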
It strikes me again and again how carelessly some game theorists make counterfactual claims. Take, for example, the so-called tragedy of the commons by Garrett Hardin, an influential ecologist. In the 1970s he supposedly demonstrated why common property in land, water and forests is necessarily destroyed. Yet we know from historical research that commons existed for millennia. Whom should we believe: the game-theoretic model or empirical research?
Max Schnetker: In addition, the historical prerequisites for overuse often do not appear in such models - for example, whether the farmers produce for the market or for their own subsistence. But neo-rationalists and transhumanists apply the assumptions of game theory as if they were indubitable knowledge of the world.
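The structure of Hardin's argument can be shown with a toy simulation. This is my own construction with illustrative parameters, not Hardin's model: a pasture regrows in proportion to its remaining stock, up to a carrying capacity. If each herder grazes the individually profitable amount, the stock collapses; a cooperative quota - the kind of institutional restraint the historical commons actually practiced - keeps it stable.

```python
# Toy commons model (my own sketch; all parameters are illustrative).
def simulate(stock, take_per_herder, herders=10, growth=0.5, seasons=20):
    """Return the pasture stock after each grazing-and-regrowth season."""
    history = []
    for _ in range(seasons):
        stock = max(0.0, stock - herders * take_per_herder)  # grazing
        stock = min(100.0, stock * (1 + growth))             # bounded regrowth
        history.append(stock)
    return history

greedy = simulate(stock=100, take_per_herder=8)  # individually "rational" take
quota = simulate(stock=100, take_per_herder=3)   # cooperative restraint

print(greedy[-1])  # pasture collapses to 0.0
print(quota[-1])   # pasture stays at carrying capacity, 100.0
```

The point of the exchange above is precisely that this abstraction omits the institutions and production relations that decided real outcomes - the model predicts collapse only because it assumes herders cannot coordinate.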

Misconceptions about body, mind and their connection

The neo-rationalists and transhumanists want to be rational and free of illusions. They look at the world as it supposedly is, without pity and without prejudice. Yet just below the surface there are religious and mystical elements - hopes of salvation, punitive deities ... You speak of an apparent materialism.
Max Schnetker: The journalist Mark O'Connell, who wrote a feature on transhumanists, calls it "materialist mysticism". At first glance they reject everything supernatural - including subjectivity, consciousness, emotion and will. Precisely for this reason, mystical ideas return to fill the gaps that open up.

"For transhumanists, physicality means limitation"

There is nothing but matter and energy. What appears to us as spirit is just their effect. To what extent do mystical ideas come into play?
Max Schnetker: A blatant example is the famous story of the so-called brain upload, which the roboticist Hans Moravec tells in his book "Mind Children". A brain-surgery robot opens the skullcap and scans the brain mass millimeter by millimeter. "These measurements, combined with extensive knowledge of how human neurons work, enable the surgeon to write a program that reproduces the behavior of this outermost layer of brain tissue." The brain is thus removed layer by layer and transferred to a computer. The result: "Your mind has been transferred from the brain to a machine."
It seems to me that this story is to transhumanists what the story of Jesus's resurrection is to Christians: the central mystery, the overcoming of mortality and the transition to the divine. Readers may ask themselves whether this presumably quite painful operation is at least performed under anesthesia. Then again, under sedation the consciousness program could hardly be captured.
Max Schnetker: Our subjectivity is supposed to correspond to the structure of our nerve cells. Therefore it can be detached and transferred to other media. But that means that abstractable, mathematically describable properties of a network take on the role of a spiritual substance in the Cartesian sense. So ultimately the idea of a soul returns.
Transhumanists obviously have problems with the human body. In this context I find it interesting to consider the role of the human body in AI research. The eponymous Moravec paradox, after all, goes back to Hans Moravec. He put it this way: "While it was comparatively easy to equip computers with the abilities of adults - solving math problems, taking intelligence tests or playing chess - it proved difficult to give them the perceptual and motor abilities of a one-year-old child." I think a widespread misunderstanding lies behind this, because people think with their bodies and their senses, which robots cannot do. The developmental psychologist Jean Piaget showed, for example, that our geometric imagination, and thus our mathematical concepts, arise from children's spatial experiences.
Max Schnetker: For transhumanists, physicality means limitation. Of course, this attitude has a long tradition, even with René Descartes it is said: "The body will always hinder the mind when thinking."
Such truncated notions of consciousness run through all of AI research. Marvin Minsky, a pioneer of the field, once defined: "Minds are simply what brains do" - loosely translated: consciousness is what the brain produces. And in his work, too, there are transhumanist motifs such as the merging of body and machine or the overcoming of death.
Max Schnetker: This kind of materialism does not face the fact that we are our bodies, too. Our subjectivity is physically conditioned and bound to the body, which is precisely why it will be lost with this body. For a transhumanist, by contrast, subjectivity can be detached from the body. Many people say they do not believe in God, but this does not amount to an existentialist atheism that admits we carry nothing metaphysical in us. The transhumanist considers himself an atheist because he does not literally believe that creation took seven days and that the earth is only five thousand years old - but he ultimately takes refuge in fantasies of immortality.

An ideology for the elites of digitization (and those who want to be part of it)

How powerful is transhumanism as an ideology?
Max Schnetker: It works in certain milieus, especially among high earners in Silicon Valley. It will probably continue to spread, not least because it is promoted by successful and influential capitalists like Peter Thiel. Singularity University has long been supported with millions by the relevant corporations, such as Alphabet, and already carries the belief in the singularity in its name.
Those working in the digital economy benefit from the fact that transhumanism lends their work a higher consecration: by serving technical progress, they make a small contribution to the redemption of humanity, which will ultimately be absorbed into technical systems. That goes down well with a certain technocratic class. In any case, ideas like the singularity have spread astonishingly.
It is an elite ideology: elites latch onto it, and it supplies a flattering justification for the basis of their prominent social position.
Max Schnetker: From reading the relevant internet forums, I got the impression that transhumanism appeals not only to entrepreneurs and capital owners but also to their employees - I would say, to the upper middle class. Often not very fulfilling work on advertising metrics is philosophically charged with a historical mission. The originally rather outlandish ideas are frequently toned down in the process. To that extent, this ideology is becoming bourgeois.
The subtitle of your book is "right-wing utopias of technological redemption". What do you think is right-wing about transhumanism?
Max Schnetker: I would like to emphasize that transhumanism is not uniform and has various varieties, not all of which are reactionary. In my book I describe the dominant form, which has become intertwined with the so-called Californian ideology. It is libertarian and radically capitalist in orientation - against redistribution, for example. This transhumanism also reveals itself as right-wing through its frequent recourse to eugenic thinking. The aim is to optimize human beings by technical means, through interventions in their bodies - in the genome or the brain, for example. But the idea that the reproduction of inferior people must be restricted also keeps cropping up. Ultimately, it is not just about technology; ideas about biology and race also play a role. In Nick Bostrom and Eliezer Yudkowsky, such arguments typically appear in subordinate clauses. Outside the official discourse - that is, in the transhumanist and rationalist internet communities - one then encounters very explicit race theories, discussed for example under the label "human biodiversity".

Futuristic ideas with old roots

In your book you work out that, despite the futuristic elements, many transhumanist ideas are quite old, specifically: they stem from utilitarianism and liberalism of the 18th and 19th centuries.
Max Schnetker: Most transhumanists today have not read Jeremy Bentham, but they share the same basic assumptions. What I find particularly remarkable is the return of the theories of overpopulation developed by Thomas Malthus in England at the end of the 18th century. He assumed that population growth always outstrips productivity growth, which is why an excessive increase in the poor threatens the existing order. Yet even in post-singularity society, transhumanists like Nick Bostrom believe that overpopulation will remain a problem. The productive possibilities are supposedly exploding and technical progress is accelerating to an unimaginable rate - and yet an overpopulation is supposed to arise that can no longer be provided for? In this ideology, scarcity is first abolished and then brought back in through the back door.
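Malthus's core claim - population grows geometrically while food production grows arithmetically - can be illustrated numerically. The figures below are my own arbitrary choices, not Malthus's; the only point is that any exponential eventually overtakes any linear function, whatever the rates.

```python
# Illustrative numbers only (not Malthus's own figures): geometric
# population growth versus arithmetic growth in food production.
def population(t, p0=1.0, rate=0.03):
    return p0 * (1 + rate) ** t      # geometric (exponential) growth

def food(t, f0=2.0, increment=0.05):
    return f0 + increment * t        # arithmetic (linear) growth

# First model year in which the population outstrips the food supply.
t = 0
while population(t) <= food(t):
    t += 1
print(t)
```

The crossover point depends entirely on the chosen parameters; the structure of the argument - inevitable scarcity - is baked into the growth assumptions, which is exactly what the interview criticizes.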
Thomas Malthus then provided a scientific theory to justify the abolition of poor relief. That was certainly useful in the 19th century, but why don't transhumanists break away from such notions?
Max Schnetker: In spite of all the futuristic promises, the horizon of bourgeois society is never transcended. Scarcity is one of the indispensable foundations of their view of humanity and the world. Whoever gives it up threatens his own identity. At least that is how I explain it to myself.
As an example of the line connecting utilitarianism and transhumanism, you cite Jeremy Bentham's Panopticon. As is well known, Bentham wanted to use architectural means to ensure that the inmates of prisons and other institutions could be observed at all times without being able to perceive the observers themselves. You connect the Panopticon with a transhumanist idea, the singleton. Why?
Max Schnetker: The singleton stands for the coming reign of the superintelligence. But why do transhumanists even think it is a good idea to transfer power to a thinking machine? Because many of them basically see social problems as pure coordination problems. The circumstances and the rules of the game are fixed; only the players are held back by their monkey brains. Under these premises, the superintelligent machine will ensure the greatest happiness of the greatest number by adapting individual behavior to the general interest. And how does it do that? By capturing and recording everything.
This complete knowledge of the world closely resembles the Panopticon, with its central overseer who also keeps records on each individual prisoner and thus controls, and ultimately eliminates, the inmates' antisocial drives. It is the same technocratic approach: central observation and control is seen as the solution to social problems - not democratic discourse or anything of the sort.
Is transhumanism ultimately against democracy?
Max Schnetker: The current I have dealt with does indeed have offshoots that consider democracy obsolete. It is no coincidence that the so-called neo-reactionaries emerged in part from the rationalist movement, namely on Eliezer Yudkowsky's website Less Wrong.
The neo-reactionaries reject democracy as suboptimal because people differ in capability; high performers therefore need special privileges. Feminism particularly bothers the neo-reactionaries: in evolutionary and game-theoretic terms, classic family models are supposedly the most efficient. The expected technical upheaval is to lead back to models of rule from before the French Revolution, to a kind of aristocracy. And who belongs to the nobility? Preferably those who have already proven their abilities, for example by establishing successful companies on the market.
On closer inspection, many transhumanist ideas seem familiar. Why, in your view, are they spreading again today?
Max Schnetker: The classic liberal arguments emerged when social inequality intensified massively and took on completely new forms in the 19th century. Today we are in a similar situation. New power relations arise that have to be justified. In this respect, transhumanism is a phenomenon of crisis, a reaction to the insecurity that such social upheavals trigger, even among those who benefit from such developments. At the same time, the transhumanist ideology conveys the good news that everything can be fine after all. (Matthias Becker)