{"id":2692,"date":"2026-05-01T02:41:49","date_gmt":"2026-05-01T02:41:49","guid":{"rendered":"https:\/\/deepinsightai.io\/?p=2692"},"modified":"2026-05-01T02:41:50","modified_gmt":"2026-05-01T02:41:50","slug":"geoffrey-hinton-warns-about-agi","status":"publish","type":"post","link":"https:\/\/deepinsightai.io\/ja\/geoffrey-hinton-warns-about-agi\/","title":{"rendered":"Geoffrey Hinton Warns About AGI and the Race Toward Superintelligence"},"content":{"rendered":"<p>When 78-year-old Geoffrey Hinton sat in front of the camera and said this to hundreds of attendees, the entire room fell silent for a few seconds.<\/p>\n\n\n\n<p>\u201cThey want a super-fast car with no steering wheel.\u201d<\/p>\n\n\n\n<p>Recently, this 2024 Nobel Prize in Physics laureate\u2014an elderly figure who has been called the \u201cGodfather of AI\u201d for over a decade\u2014once again sounded the alarm for humanity at the Global Digital World Conference. Almost pleading, Hinton warned:<\/p>\n\n\n\n<p>\u201cWe don\u2019t know whether humans can coexist with superintelligent AI.\u201d<\/p>\n\n\n\n<p>\u201cBut we are building it.\u201d<\/p>\n\n\n\n\n\n<h2 class=\"wp-block-heading\">Hinton on AGI: Only 1% on Safety, 99% on Acceleration<\/h2>\n\n\n\n<p>In his speech, Hinton laid out a very clear calculation.<\/p>\n\n\n\n<p>The global AI industry is expanding at a speed unprecedented in human history. According to UNCTAD data, the global AI market was valued at $189 billion in 2023 and is projected to skyrocket to $4.8 trillion by 2033.<\/p>\n\n\n\n<p>That means in just ten years, humanity has created an economic entity larger than Japan\u2019s GDP\u2014from scratch.<\/p>\n\n\n\n<p>And where is all that money going?<\/p>\n\n\n\n<p>Into building larger models and running more compute.<\/p>\n\n\n\n<p>What about safety?<\/p>\n\n\n\n<p>Hinton gave a number: about 1%.<\/p>\n\n\n\n<p>Only around 1% of global AI R&amp;D investment is spent on \u201chow to make sure this thing doesn\u2019t go wrong.\u201d<\/p>\n\n\n\n<p>His comment on that:<\/p>\n\n\n\n<p>\u201cIt\u2019s crazy.\u201d<\/p>\n\n\n\n<p>AI tech lobbying groups, he said, are spending heavily on advertising to push an analogy: AI is the accelerator, and regulation is the brake. Their message\u2014don\u2019t hit the brakes, it will slow us down.<\/p>\n\n\n\n<p>Hinton rejected this completely.<\/p>\n\n\n\n<p>\u201cThe accelerator is progress, sure. But regulation is not the brake\u2014it\u2019s the steering wheel.\u201d<\/p>\n\n\n\n<p>\u201cThey want a high-speed car, but without a steering wheel.\u201d<\/p>\n\n\n\n<p>Sitting next to him, Terry Sejnowski immediately added:<\/p>\n\n\n\n<p>\u201cHave you ever driven a car without brakes? You\u2019ll know how bad it is going downhill.\u201d<\/p>\n\n\n\n<p>But what\u2019s worse is\u2014we don\u2019t even have a steering wheel.<\/p>\n\n\n\n<p>Gas pedal to the floor. 
<p>And where is all that money going?</p>

<p>Into building larger models and running more compute.</p>

<p>What about safety?</p>

<p>Hinton gave a number: about 1%. Only around 1% of global AI R&amp;D investment goes toward making sure these systems don’t go wrong.</p>

<p>His comment on that: “It’s crazy.”</p>

<p>AI industry lobbying groups, he said, are spending heavily on advertising to push an analogy: AI is the accelerator and regulation is the brake, so don’t hit the brakes or you’ll slow us down.</p>

<p>Hinton rejected this completely.</p>

<p>“The accelerator is progress, sure. But regulation is not the brake. It’s the steering wheel.”</p>

<p>“They want a high-speed car, but without a steering wheel.”</p>

<p>Sitting next to him, Terry Sejnowski immediately added: “Have you ever driven a car without brakes? You’ll know how bad it is going downhill.”</p>

<p>What’s worse is that we don’t even have a steering wheel.</p>

<p>Gas pedal to the floor. Steering wheel removed.</p>

<p>That is the real state of the global AI race.</p>

<h2>From Award Ceremony to Hinton’s AGI Reckoning</h2>

<figure><img src="https://deepinsightai.io/wp-content/uploads/2026/05/image-1.png" alt="From Award Ceremony to Hinton’s AGI Reckoning" width="813" height="435" /></figure>

<p>The theme of the 2026 Global Digital World Conference was “AI and Social Development.”</p>

<p>At the event, Hinton and Sejnowski were honored for inventing the Boltzmann machine in the 1980s, a breakthrough that later became a catalyst for the deep learning revolution.</p>

<p>The award was presented by Li Deng, himself a beneficiary of that invention: between 2009 and 2010, while at Microsoft, he invited Hinton to collaborate there, and together they used Boltzmann machines to pretrain large-scale speech recognition systems, one of the first major industrial successes of deep learning.</p>

<p>The first half of the session was about scientific history, academic glory, and shared memories.</p>

<p>Hinton and Sejnowski recalled a moment at a conference in Rochester in the 1980s, when they combined the Hopfield network with simulated annealing.</p>

<p>Sejnowski remembered it clearly: “We were sitting there and suddenly realized we could heat up the Hopfield network, making it probabilistic.”</p>

<p>Hinton added a detail: at the time he had just been working in San Diego with David Rumelhart on backpropagation using logistic units, and when temperature was introduced into the Hopfield network, it too produced logistic units.</p>

<p>Two completely different paths had converged on the same mathematical form.</p>
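<p>That convergence is easy to see in code. A Hopfield unit updates deterministically, by a hard threshold on its input; add a temperature T and the threshold softens into a logistic probability, which is exactly the Boltzmann machine’s stochastic update. The sketch below is illustrative, not code from the talk:</p>

<pre><code class="language-python">import numpy as np

def energy_gap(W, s, i):
    # Energy difference between unit i being on vs. off: sum_j w_ij * s_j
    # (symmetric weights, no self-connection: W[i, i] == 0).
    return W[i] @ s

def hopfield_update(W, s, i):
    # Deterministic Hopfield rule: a hard threshold on the input.
    return 1 if energy_gap(W, s, i) > 0 else 0

def boltzmann_update(W, s, i, T, rng):
    # "Heated" Hopfield rule: at temperature T the threshold softens into
    # a logistic unit, p(s_i = 1) = sigmoid(gap / T).
    p_on = 1.0 / (1.0 + np.exp(-energy_gap(W, s, i) / T))
    return 1 if rng.random() < p_on else 0

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 5))
W = (W + W.T) / 2          # symmetric weights
np.fill_diagonal(W, 0)     # no self-connections
s = rng.integers(0, 2, size=5).astype(float)
print(hopfield_update(W, s, 0), boltzmann_update(W, s, 0, T=1.0, rng=rng))
</code></pre>

<p>As T goes to zero, the logistic collapses back into the hard threshold and the Boltzmann machine becomes an ordinary Hopfield network: the same mathematical form, reached from two directions.</p>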
<p>In the history of science, such moments are called “crystallization moments.”</p>

<p>Interestingly, Hinton still believes Boltzmann machines are more elegant than backpropagation: “It’s a much better idea. Just not very practical.”</p>

<p>Sejnowski laughed and agreed: “It was already a generative neural network decades before generative AI became popular.”</p>

<h2>Hinton: “AGI Is a Stupid Term”</h2>

<p>When the discussion turned to AGI and societal risks, Hinton switched modes completely.</p>

<p>Li Deng asked a question many people have: how do you define AGI? What benchmarks would indicate its arrival?</p>

<p>Hinton did not hold back: “AGI is a stupid term.”</p>

<p>The reason is simple: the term assumes intelligence is one-dimensional, like a thermometer, where a higher reading means smarter.</p>

<p>“But intelligence is clearly highly multidimensional.”</p>

<p>“There is no single point where AI equals humans. Its abilities relative to humans are jagged: far beyond us in some areas, still behind in others.”</p>

<p>He gave an example: ask any large model today about Slovenia’s tax deadlines or how to waterproof a porch, and it will answer fluently. In general knowledge, AI has already surpassed humans by far. But in certain reasoning tasks, it hasn’t fully caught up.</p>

<p>“So the term AGI is meaningless.”</p>

<h2>Beyond AGI: Hinton on Superintelligence</h2>

<p>So which term does matter?</p>

<p>In Hinton’s view, it is “superintelligence,” and its definition is clear: being better than humans at almost all intellectual tasks.</p>

<p>And he believes it is coming.</p>

<p>Then came the core question of the entire discussion: when superintelligence arrives, will humans still have meaningful control over the systems they created?</p>

<p>Hinton answered:</p>

<p>“We don’t know whether we can coexist with superintelligent AI.”</p>

<p>“But since we are building it, we still have a lot of control right now.”</p>

<p>“We should build it carefully, so that we can continue to exist and coexist with it.”</p>

<p>Among the relationships we know of, there is only one in which something far more intelligent willingly gives freedom to something far less intelligent: a mother and her baby. Because the mother truly cares.</p>

<h2>Hinton’s Three Categories of AI Risk</h2>

<p>Hinton divided AI risks into three categories.</p>

<h3>Deliberate Misuse</h3>

<p>People intentionally using AI for harm: deepfakes that erode democracy, engineered viruses that trigger pandemics, cyberattacks.</p>

<p>This is the most direct threat.</p>

<h3>Profit-Driven Side Effects</h3>

<p>Unintended consequences of people simply trying to make money: generating illegal images, or recommendation algorithms pushing ever more extreme content until society splits into groups with no shared language.</p>

<p>“They’re just trying to make money, but the side effect is social division.”</p>

<h3>Existential Risk Beyond AGI</h3>

<p>AI taking control on its own.</p>

<p>Hinton believes this third category might actually produce international cooperation, because everyone fears it. But the first two? Especially the first: countries will talk about cooperation while in reality attacking each other. That makes those risks much harder to deal with.</p>

<h2>Hinton’s Warning: The Tobacco and Asbestos Parallel</h2>

<p>Hinton offered an analogy: look at the history of tobacco and asbestos.</p>
<p>Countries that produced them, such as Canada, introduced regulations at home to protect their own citizens, yet kept selling these products to developing countries.</p>

<p>So even if AI-producing nations implement the “right” regulations, they may still export AI systems elsewhere, where the harmful consequences play out.</p>

<p>There is nothing new under the sun. The same script may run again.</p>

<h2>AGI Debate: Are Large Language Models a Dead End?</h2>

<p>Yann LeCun has said large language models are a dead end on the road to AGI. What does Hinton think?</p>

<p>He split the question in two.</p>

<p>First, a philosophical part: can a system that only predicts the next word understand space? Answer: yes. “That’s very surprising.”</p>

<p>Then, a practical part: is that an efficient way to understand space? Answer: no. If you have a camera and can manipulate objects, you will learn spatial understanding and basic physics far more efficiently.</p>

<p>So in practice, a multimodal AI with vision, action, and language will learn faster and with less data than a pure language model. But philosophically, given enough language data, even a pure language model might be enough.</p>

<h2>The $4.8 Trillion AGI Economy: Who Gets to Benefit?</h2>

<p>Another fracture exposed at the conference is distribution.</p>

<p>Pedro Manuel Moreno, Acting Secretary-General of UNCTAD, pointed out directly that the ability to build and shape AI is concentrated in a handful of economies and companies.</p>

<p>ITU Secretary-General Doreen Bogdan-Martin highlighted a stark contrast: developed countries are adopting AI at nearly twice the rate of developing nations. “If this is not addressed, it will become a second great divergence.”</p>

<p>The gap between countries that build AI and those that only consume it is widening visibly. The $4.8 trillion market, with its infrastructure, investment, and talent, is concentrated in a few points in the Northern Hemisphere. The rest of the world may not even get a seat at the table.</p>

<h2>Who Holds the Steering Wheel in the AGI Era?</h2>

<p>Zoom out, and this conversation is the culmination of Hinton’s warnings over the past three years.</p>

<p>In 2023, when he left Google, he said: “I regret my life’s work.” In 2024, upon receiving the Nobel Prize, he called for attention to AI safety. In 2025, he repeatedly emphasized the urgency of regulation. By 2026, his language has become more concrete.</p>

<p>Equally striking is his technical clarity. At 78, after discussing AGI risks and existential threats, he can switch straight back to explaining why restricted Boltzmann machines perform correct Bayesian inference, why current image generation models use only half of the wake-sleep algorithm, and why combining generative and recognition models may be the right next step.</p>
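<p>The wake-sleep algorithm he is referring to trains two networks in tandem: a recognition network that infers hidden causes from data, and a generative network that produces data from hidden causes. The toy below is a heavily simplified, runnable paraphrase of the 1995 algorithm for a one-layer sigmoid belief net; the dimensions and random data are made up for illustration:</p>

<pre><code class="language-python">import numpy as np

# Toy wake-sleep loop (after Hinton, Dayan, Frey & Neal, 1995) for a
# one-hidden-layer sigmoid belief net over binary data. Illustrative only.
rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
n_visible, n_hidden, lr = 8, 3, 0.05

W_gen = rng.normal(0, 0.1, (n_visible, n_hidden))  # generative: latents -> data
b_gen = np.zeros(n_hidden)                         # generative prior on latents
c_gen = np.zeros(n_visible)                        # generative visible biases
W_rec = rng.normal(0, 0.1, (n_hidden, n_visible))  # recognition: data -> latents
d_rec = np.zeros(n_hidden)

data = rng.integers(0, 2, size=(500, n_visible)).astype(float)  # fake binary data

for v in data:
    # WAKE: recognize the hidden causes of real data, then train the
    # GENERATIVE weights to reproduce the data from those causes.
    h = (rng.random(n_hidden) < sigmoid(W_rec @ v + d_rec)).astype(float)
    v_pred = sigmoid(W_gen @ h + c_gen)
    W_gen += lr * np.outer(v - v_pred, h)          # local delta rule
    c_gen += lr * (v - v_pred)
    b_gen += lr * (h - sigmoid(b_gen))

    # SLEEP: dream a fantasy from the generative net, then train the
    # RECOGNITION weights to recover the causes behind the dream.
    h_dream = (rng.random(n_hidden) < sigmoid(b_gen)).astype(float)
    v_dream = (rng.random(n_visible) < sigmoid(W_gen @ h_dream + c_gen)).astype(float)
    h_pred = sigmoid(W_rec @ v_dream + d_rec)
    W_rec += lr * np.outer(h_dream - h_pred, v_dream)
    d_rec += lr * (h_dream - h_pred)
</code></pre>

<p>On Hinton’s account, today’s image generators exercise only one half of a loop like this, and pairing the generative model with a proper recognition model is the direction he thinks comes next.</p>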
<p>He lives in two worlds at once: one thinking about how AI becomes more powerful, the other about how humanity avoids being destroyed by that power.</p>

<h2>Hinton, AGI, and the Final Window for Humanity</h2>

<p>The engine is already roaring: a $4.8 trillion machine accelerating at full speed.</p>

<p>Whether there is a steering wheel depends on what happens in the next few years, and on whether those in the driver’s seat (governments, corporations, and scientists) are willing to reach for it.</p>

<p>We are standing at a very particular moment. Before AGI or superintelligent AI becomes more capable than us, this may be the only window in which humanity still gets to decide the rules of the game.</p>

<p>When Hinton left Google three years ago, many thought he was being alarmist. Three years later, he is still saying the same things. The difference now is that more people understand his concern.</p>

<p>But that car without a steering wheel? It’s still accelerating.</p>