How artificial intelligence breeds fakes

In the story "The Wizard of Oz" the girl Dorothy travels to a magical land and meets a magician who rules an emerald city. Dorothy later finds out that everything is just show. The magician is a normal person who fakes his magical abilities with an apparatus. He creates an optical illusion that anyone who doesn't look closely will fall for it, perhaps because they want to believe in magic. It is similar with artificial intelligence (AI). She, too, is said to have almost magical abilities - and she, too, is often an illusion.

By now, artificial intelligence seems to be in almost every product the IT industry brings to market - at least that is the impression created. But it is false: some companies sell conventional software as AI, others promise that AI has solved a problem the technology cannot yet solve. One reason is the immense demand - it is now so high that start-ups merely have to shout "AI" and hold out their hands, and the money pours in. A second reason is the vague definition of artificial intelligence, which lets almost anyone use the term.

"80 percent of the times I read about AI, it's just plain wrong information," said Stewart Russell, professor at the University of California at Berkeley. There is a misunderstanding of what AI really is, and that has largely enabled fake AI to emerge. "Today, AI encompasses a broad spectrum of problem-solving behavior, ranging from naive and short-sighted to well-informed and strategic solutions," says Klaus-Robert Müller from the machine learning group at the Technical University of Berlin.

The Coca-Cola company, for example, evaluated the data from its dispensing machines, which customers can use to mix their own drinks. Apparently, Cola Cherry and Sprite are a popular combination, which is why the group released it as a new flavor in its own right - a success for artificial intelligence, it was said. In fact, it was just simple data analysis.
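What was celebrated there as artificial intelligence amounts to counting which combination customers pick most often. A minimal sketch in Python, with an invented toy log of dispensed mixes, shows how little is needed:

```python
from collections import Counter

# Hypothetical log of dispensed drink mixes (invented for illustration)
dispense_log = [
    ("Cola Cherry", "Sprite"),
    ("Cola", "Fanta"),
    ("Cola Cherry", "Sprite"),
    ("Sprite", "Mezzo Mix"),
    ("Cola Cherry", "Sprite"),
]

# Count how often each combination occurs and pick the most frequent one
combo_counts = Counter(frozenset(mix) for mix in dispense_log)
most_popular, count = combo_counts.most_common(1)[0]
print(sorted(most_popular), count)  # ['Cola Cherry', 'Sprite'] 3
```

A plain frequency count like this is exactly the kind of data analysis the paragraph describes; no learning takes place.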

Online shops that recommend a flower pot to customers because they have previously bought seeds do not need AI to do so

Parts of what is known as robot journalism are another example. Such software turns statistics - sports results or financial market data, for instance - into finished articles. In most cases, however, there are no complex algorithms behind it. Simple rules of the form "if x, then y" are enough: "If Bayern Munich = 1 and Borussia Dortmund = 0, then write: Bayern took the lead." Even online shops that recommend a flower pot to customers because they have previously looked at seeds and watering cans do not need AI for this.
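A minimal sketch of one such "if x, then y" rule in Python - the team names and text templates are invented examples, not any newsroom's actual system - makes the point:

```python
def match_sentence(home: str, away: str, home_goals: int, away_goals: int) -> str:
    """Turn a final score into a one-line report using fixed text templates."""
    if home_goals > away_goals:
        return f"{home} took the lead and beat {away} {home_goals}:{away_goals}."
    if home_goals < away_goals:
        return f"{away} won away at {home}, {away_goals}:{home_goals}."
    return f"{home} and {away} drew {home_goals}:{away_goals}."

print(match_sentence("Bayern Munich", "Borussia Dortmund", 1, 0))
# Bayern Munich took the lead and beat Borussia Dortmund 1:0.
```

Stringing together a few dozen such rules already yields a readable match report - ordinary branching logic, not AI.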

In science, such mundane systems would not be called artificial intelligence. The term is hardly used in everyday research any more because it is too watered down and, frankly, old-fashioned. It was coined in 1956 at a scientific conference at Dartmouth College in the United States. Almost ten years later, the mathematician and AI researcher Marvin Minsky wrote that artificial intelligence means making machines do things that would require intelligence if done by humans. Since then, scientific AI has gone through various phases, starting with so-called knowledge-based systems: researchers tried to encode expert knowledge in software. The approach failed because it was difficult to scale - after all, every change required the cooperation of experts.

That phase was replaced by machine learning - learning from large amounts of data. In recent years, so-called artificial neural networks have come to dominate. Their structure is loosely modeled on the neural connections in the human brain. Such a network learns by going through training data sets over and over again, comparing its results with the desired output and then reweighting its "neurons" - mathematical functions - and their parameters. A network of this kind tries, for example, to learn how tumor cells differ from healthy cells. It first works through thousands of images of both and then tries to transfer what it has learned to unfamiliar photos. It makes mistakes, corrects some parameters and checks again. If the error rate drops, the system knows it is on the right track.
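The loop described here - predict, compare with the desired output, adjust the weights, check whether the error shrinks - can be sketched for a single artificial "neuron" in a few lines of Python. The toy data and learning rate are invented for illustration; real networks do the same thing with millions of parameters:

```python
# Toy data: inputs x and desired outputs y (here y = 2*x, the pattern to be discovered)
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

weight = 0.0          # the single parameter to be learned
learning_rate = 0.05  # how strongly each error nudges the weight

for epoch in range(100):                      # go through the training data over and over
    total_error = 0.0
    for x, target in data:
        prediction = weight * x               # the "neuron": a simple mathematical function
        error = prediction - target           # compare the result with the desired output
        weight -= learning_rate * error * x   # reweight to reduce the error
        total_error += error ** 2
    # if total_error keeps shrinking, the system is "on the right track"

print(round(weight, 3))  # approaches 2.0, the relationship hidden in the data
```

Scaled up to hundreds of layers and millions of parameters, this same predict-compare-adjust cycle is what separates machine learning from the rule-based systems described above.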

Such systems often consist of hundreds of layers of neurons and millions of parameters. Training an AI model can therefore cost hundreds of thousands of euros in computing resources, says software engineer Martin Casado of the venture capital firm Andreessen Horowitz. And these are not necessarily one-off costs, because the data that feed the AI models often change over time, so the models have to be retrained regularly. Companies that rely on "real" AI therefore need substantial investment.

Companies that get by with simple algorithms do not incur these costs. But they take care not to correct the impression. Of 2,830 European start-ups classified as AI companies, only 1,580 matched that description, according to research by MMC, a London-based venture capital firm. "We looked at every company, their material, their product, the website and the product documents," says David Kelnar, research director at MMC. "In 40 percent of the cases, we couldn't find any evidence of the actual use of AI." Kelnar says these start-ups do not necessarily present themselves as AI firms; rather, they are classified that way by third parties. But they do not correct it. Why would they? Start-ups in the field of artificial intelligence attract 15 to 50 percent more money in their financing rounds than others.

People often trust machines more than real people

This leads some companies to particularly brazen self-promotion. Engineer.ai, based in London and Los Angeles, for example, has raised $29.5 million. The company says it is developing artificial intelligence that programs apps automatically: people without programming knowledge can put together an app on the company's website with a few mouse clicks. Engineer.ai's founder, Sachin Dev Duggal, said in an interview that 82 percent of an app the company developed was created automatically using its own technology. But reporters at the Wall Street Journal found that the company relied on human engineers in India to do most of this work.

According to Eric Siegel, however, it is not only companies that conjure up this kind of magic. The former AI researcher at Columbia University, now an author, writes that the media and science share the blame. Headlines such as "AI Can Tell If You're Gay" or "AI-powered Scans Can Identify People at Risk of Fatal Heart Attack Almost a Decade in Advance" raise the expectation that machine learning can reliably predict all of this. That, he says, is a lie, because in most cases it is simply too difficult. Scientists and journalists have formed an alliance that sells research attractively but distorts the picture of what AI can actually do.

It is particularly problematic when AI is faked in so-called chatbots. Studies have shown that people sometimes trust machines more than human chat partners because they assume that no other person will learn what they say. In this field, too, there are start-ups that pay people to behave like machines and then sell the system as machine learning. In 2016 the news agency Bloomberg revealed that some people spent twelve hours a day pretending to be chatbots for calendar services such as X.ai and Clara.

According to several surveys, people around the world do not place much trust in AI. In a UK survey in January 2020, 58 percent said they would prefer that artificial intelligence not be used for cancer detection. Yet it is precisely in such areas that AI systems are being deployed more and more often - in some cases with good results. That makes it all the more important for the technology to mature. The false promises harm not only naive investors but also the science that needs substantial funding to advance real AI - money that could dry up once the illusion is exposed and companies become, justifiably, more suspicious.