Geoffrey Hinton Warns About AGI and the Race Toward Superintelligence


When 78-year-old Geoffrey Hinton sat in front of the camera and said this to hundreds of attendees, the entire room fell silent for a few seconds.

“They want a super-fast car with no steering wheel.”

Recently, the 2024 Nobel laureate in Physics, known for more than a decade as the "Godfather of AI," once again sounded the alarm for humanity at the World Digital Conference. Almost pleading, Hinton warned:

“We don’t know whether humans can coexist with superintelligent AI.”

“But we are building it.”

Hinton on AGI: Only 1% on Safety, 99% on Acceleration

In his speech, Hinton laid out a very clear calculation.

The global AI industry is expanding at a speed unprecedented in human history. According to UNCTAD data, the global AI market was valued at $189 billion in 2023 and is projected to skyrocket to $4.8 trillion by 2033.

If that projection holds, then in just ten years humanity will have built, from scratch, a market larger than Japan's entire GDP.
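As a quick sanity check (my own arithmetic, using only the two UNCTAD figures quoted above), those data points imply roughly a 25x expansion, or about 38% compound annual growth:

```python
# Back-of-the-envelope check of the UNCTAD projection:
# $189 billion (2023) -> $4.8 trillion (2033), i.e. 10 years.
start, end, years = 189e9, 4.8e12, 10

growth_factor = end / start
# Compound annual growth rate: (end/start)^(1/years) - 1
cagr = growth_factor ** (1 / years) - 1

print(f"Growth factor over {years} years: {growth_factor:.1f}x")
print(f"Implied CAGR: {cagr:.1%}")
```

Sustaining that rate for a decade would make AI one of the fastest-growing markets ever recorded, which is the scale Hinton's "1% on safety" figure should be read against.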

And where is all that money going?

Into building larger models and running more compute.

What about safety?

Hinton gave a number: about 1%.

Only around 1% of global AI R&D investment is spent on “how to make sure this thing doesn’t go wrong.”

His comment on that:

“It’s crazy.”

AI tech lobbying groups, he said, are spending heavily on advertising to push an analogy: AI is the accelerator, and regulation is the brake. Their message—don’t hit the brakes, it will slow us down.

Hinton rejected this completely.

“The accelerator is progress, sure. But regulation is not the brake—it’s the steering wheel.”

“They want a super-fast car with no steering wheel.”

Sitting next to him, Terry Sejnowski immediately added:

“Have you ever driven a car without brakes? You’ll know how bad it is going downhill.”

But what’s worse: we don’t even have a steering wheel.

Gas pedal to the floor. Steering wheel removed.

That is the real state of the global AI race.

From Award Ceremony to Hinton’s AGI Reckoning


The theme of the 2026 World Digital Conference was “AI and Social Development.”

At this event, Hinton and Sejnowski were awarded for inventing the Boltzmann machine in the 1980s—a breakthrough that later became a catalyst for the deep learning revolution.

The award presenter was Li Deng from Microsoft, himself a beneficiary of that invention. Between 2009 and 2010, he invited Hinton to collaborate at Microsoft, where they used Boltzmann machines to pretrain large-scale speech recognition systems—leading to one of the first major industrial successes of deep learning.

The first half of the session was about scientific history, academic glory, and shared memories.

Hinton and Sejnowski recalled a moment in the 1980s at a conference in Rochester—how they combined the Hopfield network with simulated annealing.

Sejnowski remembered it clearly:

“We were sitting there and suddenly realized we could heat up the Hopfield network, making it probabilistic.”

Hinton added a detail: at the time, he had just been working in San Diego with David Rumelhart on backpropagation using logistic units. And when temperature was introduced into the Hopfield network, it also produced logistic units.

Two completely different paths converged into the same mathematical form.
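The "same mathematical form" both paths converged on is the logistic (sigmoid) function. A minimal sketch (my own illustration, not code from the speakers) of a "heated" Hopfield network shows where it appears: at temperature T, each unit turns on with probability sigma(input/T), and as T approaches zero the rule collapses back to the deterministic Hopfield threshold update.

```python
import numpy as np

def sigmoid(x):
    # Clip to avoid overflow in exp for extreme inputs (e.g. very low T)
    return 1.0 / (1.0 + np.exp(-np.clip(x, -500, 500)))

def boltzmann_update(s, W, b, T, rng):
    """One asynchronous Gibbs sweep over a 'heated' Hopfield network.

    s: binary state vector (0/1)
    W: symmetric weights with zero diagonal (the Hopfield assumption)
    b: per-unit biases
    T: temperature; as T -> 0 this recovers the deterministic
       Hopfield threshold rule, as T grows the updates get noisier
    """
    s = s.copy()
    for i in rng.permutation(len(s)):
        net = W[i] @ s + b[i]      # net input from the rest of the network
        p_on = sigmoid(net / T)    # the logistic unit appears here
        s[i] = 1 if rng.random() < p_on else 0
    return s

rng = np.random.default_rng(0)
n = 8
W = rng.normal(size=(n, n))
W = (W + W.T) / 2                  # make weights symmetric
np.fill_diagonal(W, 0.0)           # no self-connections
b = np.zeros(n)
s0 = rng.integers(0, 2, size=n)

# Near T = 0 each update is effectively the deterministic Hopfield rule
s_cold = boltzmann_update(s0, W, b, T=1e-6, rng=rng)
print(s_cold)
```

The probabilistic update is exactly the "heating up" Sejnowski described, and the sigmoid is the same logistic unit Hinton and Rumelhart were using for backpropagation in San Diego.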

In the history of science, such moments are called “crystallization moments.”

Interestingly, Hinton still believes Boltzmann machines are more elegant than backpropagation.

“It’s a much better idea. Just not very practical.”

Sejnowski laughed and agreed:

“It was already a generative neural network decades before generative AI became popular.”

Hinton: “AGI Is a Stupid Term”

When the discussion turned to AGI and societal risks, Hinton completely switched modes.

Li Deng asked a question many people have: how do you define AGI? What benchmarks indicate its arrival?

Hinton did not hold back:

“AGI is a stupid term.”

The reason is simple: it assumes intelligence is one-dimensional, like a thermometer—the higher the number, the smarter.

“But intelligence is clearly highly multidimensional.”

“There is no single point where AI equals humans. Its abilities relative to humans are jagged—far beyond us in some areas, still behind in others.”

He gave an example: ask any large model today about Slovenia’s tax deadlines or how to waterproof a porch—it will answer fluently.

In general knowledge, AI has already surpassed humans by far.

But in certain reasoning tasks, it hasn’t fully caught up.

“So the term AGI is meaningless.”

Beyond AGI: Hinton on Superintelligence

So what term does matter?

In Hinton’s view, it is “superintelligence.”

Its definition is clear: being better than humans at almost all intellectual tasks.

And, in Hinton’s view, it is coming.

Then came the core question of the entire discussion:

When superintelligence arrives, will humans still have meaningful control over the systems they created?

Hinton answered:

“We don’t know whether we can coexist with superintelligent AI.”

“But since we are building it, we still have a lot of control right now.”

“We should build it carefully, so that we can continue to exist and coexist with it.”

Among all known precedents, there is only one example of something far more intelligent willingly ceding freedom to something far less intelligent:

A mother and her baby.

Because the mother truly cares.

Hinton’s Three Categories of AI Risk

Hinton divided AI risks into three categories.

Deliberate Misuse

People intentionally using AI for harm:

Deepfakes to erode democracy, engineered viruses to trigger pandemics, cyberattacks.

This is the most direct threat.

Profit-Driven Side Effects

Unintended consequences when people try to make money:

Generating illegal images, recommendation algorithms pushing increasingly extreme content, eventually splitting society into groups with no shared language.

“They’re just trying to make money—but the side effect is social division.”

Existential Risk Beyond AGI

AI taking control on its own.

Hinton believes this third category might lead to international cooperation—because everyone fears it.

But the first two?

Especially the first—countries will talk about cooperation, but in reality, they will attack each other.

That makes it much harder to deal with.

Hinton’s Warning: The Tobacco and Asbestos Parallel

Hinton offered an analogy:

Look at the history of tobacco and asbestos.

Countries that produced them—like Canada—introduced regulations domestically to protect their citizens.

But they continued selling these products to developing countries.

So even if AI-producing nations implement the “right” regulations, they may still export AI systems elsewhere—where harmful consequences unfold.

There is nothing new under the sun.

The same script may play out again.

AGI Debate: Are Large Language Models a Dead End?

Yann LeCun has said large language models are a dead end for achieving AGI. What does Hinton think?

He split the question into two parts.

First, a philosophical one:

Can a system that only predicts the next word understand space?

Answer: yes.

“That’s very surprising.”

Then, a practical one:

Is it an efficient way to understand space?

Answer: no.

If you have a camera and can manipulate objects, you will learn spatial understanding and basic physics much more efficiently.

So in practice, a multimodal AI—with vision, action, and language—will learn faster and with less data than a pure language model.

But philosophically, with enough language data, even a pure language model might be enough.

The $4.8 Trillion AGI Economy: Who Gets to Benefit?

Another fracture exposed at the conference is distribution.

Pedro Manuel Moreno, Acting Secretary-General of UNCTAD, pointed out directly: the ability to build and shape AI is concentrated in a few economies and companies.

ITU Secretary-General Doreen Bogdan-Martin highlighted a stark contrast:

Developed countries are adopting AI at nearly twice the rate of developing nations.

“If this is not addressed, it will become a second great divergence.”

The gap between countries that build AI and those that only consume it is widening visibly.

The $4.8 trillion market, with its infrastructure, investment, and talent, is concentrated in a handful of hubs in the Global North.

The rest of the world may not even get a seat at the table.

Who Holds the Steering Wheel in the AGI Era?

If you zoom out, this conversation is the culmination of Hinton’s warnings over the past three years.

In 2023, when he left Google, he said:

“I regret my life’s work.”

In 2024, upon receiving the Nobel Prize, he called for attention to AI safety.

In 2025, he repeatedly emphasized the urgency of regulation.

By 2026, his language has become more concrete.

Yet what’s equally striking is his technical clarity.

At 78, after discussing AGI risks and existential threats, he can immediately switch back to explaining why restricted Boltzmann machines represent correct Bayesian inference, why current image generation models only use half of the wake-sleep algorithm, and how combining generative and recognition models may be the right next step.

He lives in two worlds at once:

One thinking about how AI becomes more powerful.

The other about how humanity avoids being destroyed by that power.

Hinton, AGI, and the Final Window for Humanity

The engine is already roaring.

A $4.8 trillion machine accelerating at full speed.

Whether there is a steering wheel depends on what happens in the next few years, and on whether those in the driver’s seat (governments, corporations, and scientists) are willing to reach for it.

We are standing at a very particular moment in time.

Before AGI or superintelligent AI becomes more capable than us, this may be the only window where humanity still decides the rules of the game.

When Hinton left Google three years ago, many thought he was being alarmist.

Three years later, he is still saying the same things.

The difference now is—more people understand his concern.

But that car without a steering wheel?

It’s still accelerating.
