Will we one day lose control of AI?

What can artificial intelligence do?

When the keyword "AI" comes up, many people think of a kind of superbrain that can do everything better, faster and more flawlessly than we humans can, that teaches itself things at breakneck speed, develops self-awareness and will one day replace us. How realistic is such a scenario from your point of view?
The American author and futurist Ray Kurzweil claims - as do others - that such a higher intelligence will become a reality within the next 20 years. I think, however, that these are sensationalist predictions from cranks who have no real understanding of the matter.

Serious experts assume that we are still more than 200 years away from real artificial intelligence. Even the experts working on autonomous cars estimate that it will take 30 to 40 years before these systems actually understand traffic situations - unless a conceptual breakthrough happens first. I believe that a few persistent prejudices are blocking current development. The first could be called the "intelligent design" attitude: if you want intelligence in the machine, humans have to endow it with their own intelligence. This is the algorithmic principle that forms the backbone of digitization: a person solves a problem in their head and then programs that solution into the machine as an algorithm. This has to be replaced by a concept through which intelligent structures grow organically within the machine itself.
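To make that contrast concrete, here is a minimal sketch in Python (the spam filter, the feature values and all thresholds are illustrative assumptions of mine, not the interviewee's): the first function is "intelligent design", a human solution frozen into fixed rules, while the second lets the machine adjust its own internal parameters from examples.

```python
# Hypothetical illustration of the two approaches described above.

# 1) "Intelligent design": a person solves the problem in their head
#    and hard-codes that solution into the machine as fixed rules.
def is_spam_hand_coded(message: str) -> bool:
    suspicious_words = {"winner", "free", "urgent"}  # the programmer's insight
    return any(word in message.lower() for word in suspicious_words)

# 2) Grown structure: the machine derives its own parameters from
#    experience instead of receiving a ready-made solution.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn one weight per feature from labeled examples."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            error = y - (1 if activation > 0 else 0)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Usage: the AND function is learned from examples, not programmed.
xs = [(0, 0), (0, 1), (1, 0), (1, 1)]
ys = [0, 0, 0, 1]
w, b = train_perceptron(xs, ys)
```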

The other prejudice is the so far inadequate answer to the question of how the nerve cells in our brain create the theater that we experience every day from within - such as my experience of the scene here in my office right now. The currently dominant answer is that every nerve cell stands for an elementary symbol: a red color, a vertical edge, my grandmother, a pencil or a dog. What the theory of elementary symbols lacks is the very thing we use to assemble words from letters and sentences from words: namely, an arrangement. I described this as the "binding problem" 40 years ago. But once this question about the structure of the brain is answered correctly, the vision of an intelligent, conscious machine that can think about itself and about the world could very quickly become a reality.
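The classic textbook illustration of the binding problem fits in a few lines of Python (a hypothetical sketch of mine, not from the interview): if neurons only signal elementary symbols, two different scenes become indistinguishable, because the information about which property belongs to which object - the arrangement - is lost.

```python
# Hypothetical sketch of the binding problem: a scene represented only
# as a set of active elementary symbols loses all arrangement.
scene_a = {"red", "square", "blue", "circle"}  # a red square and a blue circle
scene_b = {"red", "circle", "blue", "square"}  # a red circle and a blue square
print(scene_a == scene_b)  # True: the two scenes are indistinguishable
```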

So real intelligence in the machine would also have self-awareness, motivations and self-set goals?
We are all born with given abstract goals: we don't want to freeze, we don't want to starve, we want to be close to our mother, we want to avoid danger and to reproduce. In that sense, goals would have to be built into at least a first generation of such artificially intelligent beings - because they do not develop on their own. A second generation of artificial intelligence would then be able to reflect on those goals and define new ones. And at some point we could lose control.

Shouldn't we then keep our hands off any further development of AI, effective immediately, to avoid creating a potential danger in the first place?
That's a nice thought. However, our entire social and economic system is geared toward becoming ever better and more efficient. The economy should grow, costs should fall, human labor should be eased or replaced. This objective is deeply anchored in our economic system, and the logical means to achieve it is automation - that is, making processes, and thus people, superfluous. If you think that through to the end, it leads to a terrible dystopia.