Expert Warns: AI Is Advancing Too Fast to Keep the Situation Under Control

Written by Camilla Jessen

Feb. 20, 2024, 9:13 AM CET

Technology
Photo: Shutterstock.com
An AI researcher forecasts catastrophe from uncontrolled AI development.


Eliezer Yudkowsky, a well-known figure in the field of artificial intelligence (AI) research, has shared a stark warning about the future of humanity in light of rapid AI advancements.

According to Yudkowsky, humanity might face its end much sooner than anticipated because of the uncontrolled pace of AI development.

Doomsday Prediction

In an interview with The Guardian, Yudkowsky expressed his concerns, stating, "If you force me to evaluate probabilities, I would say that humanity has five rather than fifty years left. It could be two years, it could be ten."

This prediction places Yudkowsky among the most pessimistic experts on the potential outcomes of AI development.

Yudkowsky's primary worry is the emergence of a superintelligence that surpasses human capabilities to such an extent that controlling it becomes impossible.

He imagines a scenario where AI's intelligence and speed render human efforts to manage or contain it futile, comparing it to an alien civilization thinking a thousand times faster than humans.

A Controversial Stance

Yudkowsky, founder of the Machine Intelligence Research Institute (MIRI), has worked on identifying and mitigating risks associated with AI since 2001.

His institute aims to ensure that AI developments remain beneficial to humanity, advocating for responsible practices in AI research.

Yudkowsky's views have drawn controversy, particularly his comments to Time magazine suggesting extreme measures, such as bombing data centers, to halt AI's progression.

He argues that an advanced AI's indifference toward humans poses a significant threat, likening humanity's chances of controlling such a system to a child playing chess against a world-class chess engine.

A Balanced Perspective

While Yudkowsky's predictions are alarming, it's important to note that many experts in the field hold different opinions.

Some experts worry that focusing too heavily on existential threats from AI could distract from more immediate issues, such as the spread of misinformation through AI technologies.