AI – we are so lost

I spent the whole of last week with people from the AI industry. There were founders of companies that do things with AI and their techies who are in charge of it. And what I heard was really bad.

Definition of AI

“A huge bunch of if-else blocks paired with statistical data.”

Of course, regular readers here already know this definition of AI and why it is actually not AI.
None of the people there objected to this definition. To be honest, I had expected more resistance.
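To make that definition concrete, here is a deliberately crude sketch (all names, numbers, and thresholds are invented for illustration): a “model” that is literally one if-else block whose threshold comes from statistics over training data.

```python
# A deliberately crude "AI": one if-else block whose threshold
# is derived from statistics over labeled training data.

def train(samples):
    """samples: list of (value, label) pairs, labels 'spam'/'ham'.
    Returns the midpoint between the two class means."""
    spam = [v for v, label in samples if label == "spam"]
    ham = [v for v, label in samples if label == "ham"]
    return (sum(spam) / len(spam) + sum(ham) / len(ham)) / 2

def predict(value, threshold):
    # The entire "intelligence": a single if-else block.
    if value > threshold:
        return "spam"
    return "ham"

data = [(9, "spam"), (8, "spam"), (2, "ham"), (1, "ham")]
t = train(data)       # midpoint of 8.5 and 1.5 -> 5.0
print(predict(7, t))  # prints "spam"
print(predict(3, t))  # prints "ham"
```

Real systems stack millions of such statistically derived branches instead of one, but the ingredients are the same.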

AI – the destruction of the earth?

I was asked by an employee of one of the largest tech companies whether I believe that AI could destroy the earth.

My answer: “Nope!”

Why don’t I think AI will destroy the earth? Because, by my definition, we don’t have AI. It’s like asking whether a combination of Excel and an SQL database could destroy the world.

It’s all just chance.

Developers know the feeling: something works, but at first glance nobody knows why. Or why not. But in the end it’s all deterministic.

And then there were developers from a very well-known AI company giving talks. When asked why you can run the exact same setup twice, including fresh contexts, and get two different answers to the same question, they just shrugged.
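For what it’s worth, at least part of the answer is well known: generation usually samples from a probability distribution over tokens instead of always taking the most likely one. A toy sketch with an invented distribution (this is not any company’s actual code):

```python
import random

# Toy next-token distribution, as a language model might produce it.
# Because the answer is sampled rather than picked deterministically,
# two identical runs can diverge at the very first token.
tokens = ["Yes", "No", "Maybe"]
probs = [0.5, 0.3, 0.2]

def sample_answer(rng):
    return rng.choices(tokens, weights=probs, k=1)[0]

# Two "identical setups" with independent randomness:
run1 = sample_answer(random.Random())
run2 = sample_answer(random.Random())
print(run1, run2)  # may or may not match

# Always taking the argmax (greedy decoding) restores determinism:
greedy = tokens[probs.index(max(probs))]
print(greedy)  # always "Yes"
```

Which only underlines the point: underneath, it is deterministic code plus a random number generator, and someone chose to leave the randomness switched on.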

I’ll tell you: they have no goddamn idea what they’re really building. We’re talking about algorithms here.

Some genius, and no, it wasn’t Einstein, once said:

Insanity is doing the same thing over and over again and expecting different results.

AI is now making exactly that possible. So is it insane? I would say: only insanely bad.

Wrong answers? Who cares!

Someone mentioned that he is a professor at a university here in Europe and that he is increasingly struggling with AI-generated content. It even goes so far that papers arriving for peer review contain paragraphs like:

Hello! How can I help you today?

Here are a few ideas of what we can do together: …

Other techies said that they now do strict code reviews of their junior developers’ work, because otherwise code would go into production that had been neither written nor tested by a human. And I don’t even mean unit tests. No, they’ve gotten so lazy that the few lines they had generated weren’t even tested by hand.

They quickly realized that the generated unit tests didn’t really test anything and that the generated junk simply threw errors and sometimes even broke things.
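As an invented illustration of what such a vacuous generated test can look like (this is not their actual code): it runs, it passes, and it verifies nothing.

```python
def add(a, b):
    return a + b

def test_add():
    # Looks like a test, passes every time, tests nothing:
    result = add(2, 2)
    assert result == result  # always true, even if add() were broken
```

A test suite full of these is green no matter how broken the code is, which is exactly why it lulled them into shipping untested junk.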

Are you surprised? I’m not.

My alarm bells went off immediately, and I added the request that they kindly do strict code reviews on ALL developers. People found that strange at first, but after a second’s thought they realized that maybe it wouldn’t be completely stupid.

Chip crisis? What chip crisis?

I had the most interesting talk with the AI guy from an Israeli start-up. We were talking about optimizations as usual and I suggested that the AI couldn’t even do the specific optimizations we were talking about.

However, the discussion then took a turn I hadn’t expected. He asked: why optimize at all? It doesn’t matter! According to him, RAM, CPU, and GPU are always available, at any time and at such low prices that any optimization is complete nonsense.

Leaving the poor results aside, I find this statement really simple-minded!
On the one hand, the chip crisis, which according to him doesn’t even exist, is really severe right now. On the other hand, this money-is-no-object mentality of some startups is a disaster. The operators of the AIs are happy about the money they make.

Meanwhile, I would rather answer the earlier question of whether AI could destroy the world with: “AI can’t, but the people who use it can.”

Teaching assignment completed?

What really silenced many people who work with AI all day was my question: what else are the AIs supposed to learn once they have already indexed the entire internet? And we all know that these AIs are currently causing a lot of traffic for website operators with their constant crawling.

Sure, there is still something added every day, but much of it has already been produced by AI and is therefore useless for further training.

Sure, you can throw a few more algorithms at the problems, but the teaching material has practically run out. Nothing more is coming.

But maybe we’ll just throw even more AI at the problems in the end, so that after a great deal of ignorance of the facts and feasibility, everything will simply drift away for good.


There is no comment section here. Do you have something to say? Then blog it yourself, or post your comment on a social network of your choice.