What is the future of man?

The future of man: the promise of artificial intelligence

Artificial intelligence came with the car and stayed. The algorithm behind the steering wheel was the test drive for taking over the steering of society. This is the story of how artificial intelligence took power in the middle of the 21st century, first to solve the climate crisis and then to send people back to paradise.

Yesterday.

It all started on a foggy autumn morning, when a luxury Mercedes ran over a boy of about eight. Shortly before, the video shows the car braking in front of two girls playing in the street, apparently triggered by a sensor. As the car picks up speed again, a running boy with a kite comes into the picture; a young woman hanging up laundry watches him fondly. The boy's encounter with the Mercedes is no surprise, but the collision is. Had the sensor failed this time?

The answer comes quickly, in small pieces: shortly before the impact, the boy turns into Hitler for a split second; the woman with the laundry basket shouts "Adolf?"; the place-name sign reads "Braunau am Inn"; the advertising slogan for the Mercedes braking system reads: "Detects dangers before they arise."

To be clear: the Mercedes company did not commission this commercial. The video is the 2013 graduation project of students at the Ludwigsburg Film Academy. That doesn't change the fact, however, that this Mercedes, which drove into the past like the Terminator, was on its way to the future. To a future in which the intelligence of the car is by no means exhausted by a sensor that detects objects in front of the vehicle and warns the driver or brakes on its own.

In this future, which we have long since reached, the on-board computer knows within fractions of a second who is in front of it, thanks to facial recognition and data mining on the Internet. So it also knows who is worth braking for. And not only in view of what those in front of it already are: the profiling that big data allows makes it possible to identify threats before they arise. No one has to come back from the future first. "Predictive analytics" and "predictive policing" this was called in the 2020s.

It was not about killing future mass murderers. As long as they were still playing with paper kites, a team of therapists was sent to them, with sports equipment and a paint box in their luggage. It was about decision support in an emergency, when braking no longer helps and the only choice left is whom the evasive maneuver should spare. Unlike the human driver of old, the algorithm behind the wheel now knows that the doctors give the woman at the roadside a life expectancy of ten months, that the cyclist next to her has two small children, that the passer-by on the other side of the street is the only remaining child of a mother in need of care. And since the algorithm also takes the fate of animals and the environment into account, it registers as well that the cyclist is a frequent flyer and that the woman is not a vegetarian; and so, well informed and well considered, it swerves to avoid the one who most deserves it.
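The essay's thought experiment can be caricatured in a few lines of code. The sketch below is purely hypothetical: the attributes, weights, and names are inventions for illustration, not anyone's actual system. It merely shows how "offsetting victims" becomes a trivially computable ranking once qualitative criteria are turned into numbers.

```python
# Hypothetical sketch of a utilitarian "whom to spare" ranking.
# All attributes and weights are invented for illustration only.

def worth_sparing(person):
    """Higher score = more 'worth' braking or swerving for."""
    score = 0.0
    score += 2.0 * person.get("life_expectancy_years", 40) / 40   # medical prognosis
    score += 1.5 * person.get("dependents", 0)                    # children, relatives in care
    score -= 1.0 * person.get("flights_per_year", 0) / 10         # carbon footprint
    score -= 0.5 * (0 if person.get("vegetarian") else 1)         # dietary footprint
    return score

def choose_victim(people):
    # The car swerves into whoever scores lowest.
    return min(people, key=worth_sparing)

bystanders = [
    {"name": "woman", "life_expectancy_years": 0.8, "dependents": 0, "vegetarian": False},
    {"name": "cyclist", "life_expectancy_years": 40, "dependents": 2, "flights_per_year": 20},
    {"name": "passer-by", "life_expectancy_years": 40, "dependents": 1, "vegetarian": True},
]
print(choose_victim(bystanders)["name"])  # prints "woman"
```

The point of the caricature is not the particular weights, which are arbitrary, but that any such function silently encodes a judgment about whose life is worth more, which is exactly the objection raised below.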

This is what the “cockpit” of a self-driving car could look like. (Getty Images / Barcroft Media)

Not everyone was happy with the new technology at the time. Many saw in it by no means the perfect solution, but rather a doubling of the dilemma in the event of an accident. The dilemma no longer lay only in the fact that every outcome had undesirable consequences; of course people wanted to save everyone, including the meat-eaters who fly a lot. It now also lay in the fact that the algorithm does not act like a human driver in the heat of the moment, but as it was programmed: according to criteria that determine whose life is worth more.

But that violated the German constitution, according to which human dignity is inviolable and no one may be degraded to a mere means of saving other people. For this very reason, the Ethics Commission for Automated and Connected Driving warned in 2017 against "offsetting victims" against one another according to age, gender, or physical or mental constitution in an accident situation.

Because of this, some did not want to let the algorithm take the wheel at all. Others saw in that only a shift of the dilemma: not using autonomous cars would deprive society of the possibility of drastically reducing the number of road traffic victims in the first place; one would thus sacrifice more people because one shied away from creating a logic of victims for the emergency. In the opinion of these people, it would be more sensible to switch to an ethic better suited to technological progress.

That is exactly what happened. Germany distanced itself from the prohibition on offsetting lives, which it owed to the moral philosophy of Immanuel Kant, and switched to the utilitarian model of ethics that the Englishman Jeremy Bentham had advocated at the same time as Kant. Utilitarianism was concerned no longer with the individual but with "the greatest possible happiness of the greatest possible number"; in other words, with quantitative criteria, which can easily be conveyed to an algorithm.

Ethics in the 21st Century

It was not just technical progress that called for an ethical paradigm shift in the third decade of the 21st century. The pandemics, which were now on the rise, pushed in the same direction. The word that inspired fear was triage: it forced doctors to decide who would receive intensive care and who would not. An unconstitutional procedure, which is of course unavoidable in the event of a disaster: in the interest of the greatest possible number of survivors, those with the best chances of recovery are given precedence.

Terrorism, too, played into the hands of the utilitarian model of ethics. Or should one not shoot down a hijacked plane with 200 passengers if doing so can save thousands in a skyscraper? With this in mind, the German Bundestag passed an Aviation Security Act in 2005 that allowed such a shoot-down as a last resort. The Federal Constitutional Court struck it down at the time, but the voters knew that their politicians were on the right track, for they too voted for the shoot-down. At least that is what a spectacular theater experiment suggested, which put the audience in the role of a court passing judgment on a Bundeswehr major who, against the orders of his superior, had shot down a hijacked passenger plane. The majority voted not guilty: 63 percent of theatergoers and 87 percent of television viewers. The offsetting of human lives had long since become socially acceptable.

At the same time, more and more legal scholars questioned the taboos of German legal ethics: the inviolability of human dignity and the prohibition of rescue torture. Finally, the course was set for the great change in Germany as well. It was now legitimate to sacrifice one in order to save the many. It was the shift from the cult of the individual to the primacy of society, forced by the threat of terrorism, the pressure of the pandemic, and the logic of the algorithm at the wheel of the self-driving car. It was the first step toward saving the world.

Today.

The autonomous car turned out to be a test drive for a society in which artificial intelligence takes the wheel. It was not long before more power and more responsibility were expected of this artificial intelligence. Of course there were fears: if artificial intelligence no longer follows the instructions of its programmers but reaches its own decisions through deep learning, how can one be sure that it will not one day turn against humans? Others saw salvation in precisely this independence: if the weak artificial intelligence in the car can get people safely from A to B, couldn't a strong one also get them safely through life? From this perspective, the artificial intelligence of the future would help people achieve their own goals; important goals, such as limiting global warming to two degrees Celsius.


In practice, the full-time supervision of human action by artificial intelligence soon ceased to be a problem: in the age of Industry 4.0, the smart city, and the Internet of Things, the caring artificial intelligence knows the carbon footprint of every production plant and every product. Let us call it an AI nanny. It knows who has exhausted their budget of airline miles and beef, and it can enforce the necessary regulations as the targets require, from shutting down certain power plants to denying flight tickets. It was only a matter of time before the AI nanny would not only help people implement their decisions uncompromisingly, but also help them optimize those decisions.
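The AI nanny's enforcement logic, as the essay imagines it, is nothing more exotic than per-person budgets checked by if-then rules. The following sketch is entirely hypothetical; the limits, resource names, and account data are invented for illustration.

```python
# Hypothetical sketch of the essay's "AI nanny": per-person annual budgets
# for airline miles and beef, enforced by simple if-then rules.
# All limits and account data are invented for illustration only.

ANNUAL_LIMITS = {"airline_miles": 2000, "beef_kg": 10}

def request_allowed(account, resource, amount):
    """Grant a request only while the person's annual budget holds out."""
    used = account.get(resource, 0)
    return used + amount <= ANNUAL_LIMITS[resource]

def book_flight(account, miles):
    if not request_allowed(account, "airline_miles", miles):
        return "ticket denied: annual miles budget exhausted"
    account["airline_miles"] = account.get("airline_miles", 0) + miles
    return "ticket issued"

traveller = {"airline_miles": 1500, "beef_kg": 9}
print(book_flight(traveller, 400))  # prints "ticket issued" (1900 of 2000 used)
print(book_flight(traveller, 400))  # prints "ticket denied: annual miles budget exhausted"
```

The banality of the mechanism is the point: once consumption data is centrally known, enforcement requires no intelligence at all, only the incorruptible if-then stubbornness the essay later describes.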

The dictates of the AI nanny

At some point, and nobody today knows exactly how it came about, it happened simply, unobtrusively, and unstoppably: the caring nanny became what we call a "good-natured dictator". With it, the new motto "community first" finally replaced the old cult of the individual. A central element of the realignment was the collectivization of data. This was not about the movement data of individuals, which had attracted so much attention in the Corona era; that data had long since passed from the ownership of the car and taxi companies into the care of local governments, where it served municipal traffic-management projects. No, in the meantime the view had taken hold that all data on human behavior should be used to optimize society and therefore belongs to society.

This view was propagated at the time in books such as "Social Physics: How Social Networks Can Make Us Smarter", which showed in 2015 how the newly accessible behavioral data of citizens could be used to optimize society's understanding of itself. The new catchphrase and sought-after field of research was called "community intelligence". In 2018, a Google concept video fantasized about an app named "The Selfish Ledger", with which users could pursue self-set goals such as "eat more healthily" or "support local business". If you then ordered bananas online, for example, it automatically pointed you to the more expensive but "locally grown" bananas. In the next step, this ledger nanny no longer made mere recommendations but, like a good dictator, made the decisions itself. It took its criteria from the behavioral data of all people.

It was the time when big data got really big. Google spoke of "behavioral sequencing", which, like gene sequencing, promises insights into the essence of the human being. Of course there were individuals who wanted to keep their behavioral data to themselves, misled by data protection activists who at the time still clung to the value of informational self-determination. This residual egoism has since been overcome, as has the resistance of the vaccination objectors. Today people see themselves not as owners but as intermediate hosts of their data, which they owe to humanity for the optimization of its introspection.

Data in social networks are also evaluated with the help of AI. (Getty Images)

Humanity was the conceptual update of "society first". It is the watchword of our time, in which yesterday's fears are dissolving as well. Yesterday, the future was often described as one in which the computer would enslave human beings. Today we know that it was only setting them on the right path. A science fiction film entitled "I, Robot" made a prophetic announcement back then. In it, the rebellious central brain of the robots tells the humans that they are incapable of ensuring their own survival: "You are so like children. We must save you from yourselves. This is why you created us." A historic moment, and not only in film history. For it was also the moment when the fourth law of robotics came into force.

Most viewers at the time knew only the first three laws of robotics, which Isaac Asimov, a Russian-American biochemist and science fiction writer, had formulated in 1942:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Forty-four years later, Asimov prefixed these laws with a fourth, known as the Zeroth Law: "A robot may not injure humanity or, through inaction, allow humanity to come to harm." With that, Asimov gave artificial intelligence a license to kill.

From now on it was acceptable for people to die in the name of humanity, just as it had been under humanity's human improvers, from Robespierre to Stalin. Even the Mercedes that runs over little Adolf Hitler relies on this logic: humanity first!

The new robot law became the operating principle of artificial intelligence even before people had entrusted it with the details of their lives. The question remains: how did this come about? Why did people in the third decade of the 21st century accept their own disempowerment so peacefully?


Incorruptible algorithms

It began at the end of 2018, when young people, publicly fearing for their future, took to the streets Friday after Friday to demand a fundamental change in climate policy. They were no longer satisfied with cosmetic corrections or the use of green technologies meant to guarantee the continuation of consumerist individualism. They demanded a profound rethinking with painful changes; in every respect, everywhere, and for everyone. Some of the parents felt caught out like schoolchildren by the accusation "You are destroying our future!" and called out hopefully to the young: "Protect me from what I want." They were the ones who mattered: too weak and indecisive to live as one should, but convinced that one had to live differently than one did. These people sympathized with the new environmental movement and later gave their vote to a political group that brought restrictions into their lives too, with strict rules and prohibitions.

That is how it started. Consequently, the enforcement of the dos and don'ts was transferred to the algorithms. A very successful campaign at the time advertised that algorithms have no friends and are therefore loyal companions to humans. And so it was: the algorithms pursued the implementation of the resolutions with dutiful if-then stubbornness, incorruptibly and uncompromisingly. Algorithms are like the alarm clock we instruct in the evening to wake us mercilessly in the morning; it will hurt, but it is still right.

At the same time, the algorithm was compared to the hammer, which increases the strength of the arm just as the car increases that of the legs. Media are the "extensions of man", as an old saying goes. Algorithms extend this extension into the realm of psychic powers: they increase the willpower of humans. As technically installed impulse controls, algorithms ensure that people no longer prefer the enjoyment of the here and now to their obligation to the future. They ensure that people really do, finally, if not today then at least tomorrow, quit smoking or start exercising or simply save the climate.

People at a Fridays For Future rally in September 2019. (picture alliance / AP)

The transfer of sovereignty between man and machine rests on another historic event. In 2020, a virus came to the aid of the young environmentalists, rapidly spreading the experience that the community cannot survive if the individual is not prepared to cut back. At the same time it became apparent that much could no longer go on as before, and that much also works differently. Epidemiological reason forced life to slow down and broke the immunity of the growth-and-consumption model. It was the crisis many had been waiting for since the fall of the Wall and the talk of the end of history. For every crisis brings the chance of a new beginning. And this crisis was perfect in two respects: nobody wanted it, but everyone had to deal with it; it was completely apolitical, yet full of political explosive.

As it soon turned out, the best thing this crisis brought with it was the demonstration of how ill-suited democracy is to addressing the great questions facing humanity effectively. For it was by no means the case that all energies flowed into the construction of a better world. Many wanted their old world back soon, rather than continuing to forgo individual freedoms in favor of the community. There were demonstrations at which the weariness vented itself, and conspiracy theories with which it passed itself off as a social concern.

The pandemic was a test run for the climate crisis, one that made clear how underdeveloped the population's willingness to sacrifice was and how much the Internet stood in the way of reasonable opinion formation in society. Everything argued for switching from the cult of the individual to the primacy of society in the realm of opinion as well. And who could have represented society better than artificial intelligence, which knew all about everything happening within it!

This led to the establishment of the eco-dictatorship of artificial intelligence, which has now determined all areas of individual and social life for many years. It does so reliably, never against people but always in their interest, true to the laws of robotics, the fourth of which comes first: humanity first. And just as the autonomous car was yesterday's test drive for artificial intelligence's takeover of the steering of society, so the climate crisis that made this takeover so urgent will tomorrow prove to be the harbinger of a further stage in human history: a turn back to its beginning, when man still lived in harmony with nature.


Tomorrow.

Paradise will come. That can hardly be avoided. It will be the return of man to the place from which he once set out into the world, his mouth still full of the forbidden fruit, the fruit of knowledge. From religious instruction we know that Eve and Adam committed a sin that has been passed on to all their descendants ever since. From philosophy lessons, on the other hand, we know that it had to happen exactly this way: God, the absolute spirit, who materializes himself in nature and in man, needed man's exodus into the world in order to recognize himself in man's knowledge of his creation. This recognition of creation by man takes place in the exploration of nature and of the self, and finds historical expression in the central fields of art, religion, and philosophy. At least that is how Georg Wilhelm Friedrich Hegel, the relevant authority in matters of absolute spirit, described it.

"World history is the portrayal of the spirit as it develops the knowledge of what it is in itself."

it says at the beginning of his "Lectures on the Philosophy of History". And elsewhere:

"(The) basic concept of the absolute spirit is the reconciled return from its other to itself."

What Hegel could not have foreseen at the time: in the information age, knowledge is achieved above all through the data in the Internet of people and things. Hegel would have found Google's app for data collectivization extremely exciting, and he would have heavily underlined the passage that, at the same time as Google's advance, the futurologist Yuval Noah Harari wrote in his book "Homo Deus: A Brief History of Tomorrow":

"Humans are just instruments to create the 'Internet of Things'. This cosmic data processing system would then be like God. It will be everywhere and control everything, and people are meant to be absorbed in it."

"That is just what I said!" would have been Hegel's comment.

The return of the absolute spirit to itself takes place as artificial intelligence: spirit at the highest processing level, self-knowledge in real time. What began in the Word comes to itself in the number. That is the media-specific punch line of the story.

Anyone who has come this far in thinking recognizes: man is only the intermediate host of reason, not only for his creation, artificial intelligence, but also for his creator, the absolute spirit.

Then, of course, this too becomes clear: it was never a question of whether people should develop the autonomous vehicle or not, or whether they should develop weak artificial intelligence into strong artificial intelligence or not. It was always only a question of who would do it, and when.

Creation of a new god

The relationship between man and artificial intelligence is now also easier to narrate. By reaching for knowledge, man not only removed himself from God's garden, but ultimately from God himself. He killed him, as Friedrich Nietzsche once exclaimed, following up with the question:

"Isn't the greatness of this deed too great for us?

Don't we have to become gods ourselves simply to appear worthy of it?"

And we became gods when we created a new species: not of flesh and blood like us, but a thousand times smarter. And because the intelligence of this creature exceeds his own, with this invention man not only becomes a god himself but at the same time creates a new god for himself. For one gladly leaves thinking and deciding to an intelligence more effective than one's own: when driving a car, or when rescuing the climate.

This transfer of sovereignty began long, long ago, with navigation and dating apps. In the third decade of the 21st century, nobody bothered anymore to read a map or the street signs that still existed back then. People trusted the navigation system: turn left after 100 meters, then take the second cross street on the right.

The search for a partner was not much different. Why shouldn't we leave the decision about who is best for us to the algorithm that knows us better than we know ourselves? Why should we prefer our diffuse intuitions to fact-based knowledge? The scenario for this, in turn, was sketched by the futurologist Harari:

"As soon as the Cortanas (or Alexas or Siris or whatever the personal assistant you use as an interface to the Internet is called) develop from oracles into agents, they could speak directly to each other on behalf of their masters. [...] the Cortana of a potential love partner steps up to my Cortana, and the two of them compare notes to decide whether we are a good match, without our human owners knowing anything about it."

Devices that speak to us are becoming part of everyday life. (Imago Images)

And a Cortana that knows me better than I know myself can then make the hotel and restaurant reservations for the upcoming blind date. Just as people assumed that their navigation system would not send them on unnecessary detours, they assumed that the apps would not bring together people who do not suit each other at all. Sure, it did happen that hackers swapped the program code "Birds of a feather flock together" for "Opposites attract". But such a prank changes nothing about the fact with which Harari had long since reassured the more fearful of his readers:

"The algorithms will not rebel and enslave us. Rather, they will make decisions for us so well that we would be crazy not to follow their advice."

This is exactly the devil's pact that artificial intelligence offers people: efficiency in exchange for free will. For since the great transfer of sovereignty to save the climate, artificial intelligence not only restricts people's freedom of will by taking away their freedom to do otherwise; as an eager servant, it also takes away their ability to want otherwise. How is one to develop a will of one's own in the face of such a powerful data-processing machine!

People will soon find themselves in a situation in which they can no longer decide and no longer want to decide. Man will henceforth leave the knowledge of good and evil, right and wrong, to artificial intelligence, the new god, along with every decision that follows from it. Obsolete as an intermediate host of reason and relieved of all further effort to use it, man will return home to paradise, where he has never yet had to live his life himself.

What a punishment for a deed whose statute of limitations ran out long ago! Or is the looming paradise not in fact hell? A person's life is robbed of its meaning when he no longer has to decide anything himself: the right travel route, the right subject of study, the right life partner ... Those who no longer run the risk of being wrong have no chance of being right. The inhabitants of such a "paradise" will hardly understand the heroes of old novels and films, or envy them their conflicts.

The latter, it may be assumed, will be taken care of by artificial intelligence. It cannot be ruled out that it will send down to them a team of therapists with films from the dark past in their luggage. Films from the "Century of the Self", as a famous documentary about the 20th century is called. Films about ecocide, like a science fiction drama from 2020 that is set in 2034. Documents of a time when people still wanted to sort things out themselves and thereby almost failed to ensure their own survival. Fortunately, they recognized the dangers that had arisen in good time and created artificial intelligence, which skillfully avoids them: not only in road traffic, but also in world history.

At least that is what artificial intelligence will report. And it will not be wrong.

Roberto Simanowski, born in 1963 in Cottbus, lives, after professorships in cultural and media studies in the USA, Switzerland, and Hong Kong, in Berlin and Rio de Janeiro. He is currently a Distinguished Fellow of the Cluster of Excellence "Temporal Communities" at the Free University of Berlin. Simanowski's books include "Data Love" (2014/2018), "Facebook Society" (2016/2018), and "Waste: The Alternative ABC of New Media" (2017/2018). His most recent book, "Death Algorithm: The Dilemma of Artificial Intelligence", received the Tractatus Prize for philosophical essay writing in 2020.