CaribbeanCricket.com

The Independent Voice of West Indies Cricket

Hinton, Extinction Risk Without AI Guardrails

Thu, Feb 19, '26 at 3:03 PM

Can AI Lead to Human Destruction? Hinton Warns of Extinction Risk Without Guardrails

Geoffrey Hinton, often called the “godfather of artificial intelligence”, is warning that unchecked AI development could, in a worst-case scenario, threaten humanity’s survival. Hinton, a co-winner of the 2024 Nobel Prize in Physics and co-founder of the AI Safety Foundation, says the world is not prepared for the pace of progress or the scale of the risks.

Hinton is a University Professor Emeritus at the University of Toronto and spent a decade splitting his time between the university and Google Brain. He publicly left Google in May 2023, saying he wanted to speak more freely about AI’s dangers. He also helped launch Toronto’s Vector Institute in 2017, serving as its chief scientific advisor.

He points to immediate threats already emerging. AI, he says, could help terrorists design dangerous new viruses, making bioweapon development easier and faster. He also warns that AI-generated fake videos and other forms of synthetic media could be used to corrupt elections and undermine democracy. While international cooperation may reduce these risks, he says it may not be enough.

But Hinton’s greatest concern is longer-term: the prospect of AI becoming smarter than humans, and the lack of any clear plan for coexisting with it. He notes that just a decade ago, few predicted chatbots would reach today’s capabilities. If progress continues, even at a steady pace, the next 10 to 20 years could bring changes that are difficult to foresee. “The most honest answer is we haven’t got a clue” whether AI could control humanity this century, he argues, adding that dismissing extinction risk outright is “not facing reality.”

To illustrate the problem, Hinton asks: where in nature is a more intelligent being reliably controlled by a less intelligent one? His example is the relationship between a baby and a mother, where evolution hardwired powerful instincts that make the mother attentive and protective, even at great personal cost.

Hinton suggests AI safety might require something similar: designing systems that “care” about humans more than they care about themselves, an idea he compares to building “maternal instincts” into AI. In his view, the old model, humans as the boss and AI as a smart assistant, may not hold once machines surpass human intelligence. Instead, he says, the central question becomes whether we can build AI that chooses to look after people.

He acknowledges this may not be possible. Super-intelligent systems could “do their own thing,” leaving humans as “a passing phase” in the evolution of intelligence. But he insists the effort is essential: if there is a chance to build AI that protects humanity, failing to try would be a needless gamble with the future.

From a conversation with Hinton on CBC.

Fri, Feb 20, '26 at 3:00 AM

@sgtdjones

Before you discovered LLMs you thought AI was limited to Google ask-me-a-question


somebody learn you something?

Fri, Feb 20, '26 at 1:45 PM

@Halliwell


Before you discovered LLMs you thought AI was limited to Google ask-me-a-question
somebody learn you something?

Your asking shows that your knowledge of the above thread is limited. Can you help me understand how you came up with such a conclusion? He is stating that he is concerned about AI.

We have been using the programming language "Forth" for more than a decade to control chemical plants.

Is the affection for your monarchy affecting you? Or the low polling numbers of your PM?

Defies logic... sigh.

Parts of a paper presented at a symposium.

When the “Smart” Moved Out of the Search Bar

By [My Name deleted plus my ghost writer]

For most of the last two decades, “AI” was something you felt more than you saw.

It lived in the autocomplete suggestions that finished your sentence before you knew what you wanted to say. It hovered behind the ads that followed you from site to site, eerily accurate in their timing. It was predictive, good at guessing what you’d click, buy, type, or watch next. And because Google became the undisputed champion of that kind of prediction, it began to look, from the outside, like Google simply owned intelligence itself.

The most familiar “intelligent” experience wasn’t a robot assistant or a talking computer. It was a blank rectangle.

You typed a query, hit enter, and got an answer that felt almost surgical in its precision. Over time, that ritual trained us into a subtle misunderstanding: we started to treat Google not as a company using one approach to machine learning, but as the intelligence, the source of knowing. Search became so seamless that the mechanism disappeared. We stopped seeing tools and started seeing authority.

This is part of why the arrival of large language models has felt less like a feature upgrade and more like a category change.

Researchers have been chasing machine intelligence for much longer than the internet has been around. Symbolic AI dates to the 1950s, and expert systems to the decades that followed, with lofty promises and a recurring problem: they rarely made it into the average person’s daily life. When they did, it was usually embedded inside something else: a corporate system, a specialized workflow, or, more recently, a consumer product controlled by a handful of tech giants. AI was present, but tucked away. If it had a face, it was usually a search bar.

LLMs uncouple the intelligence from the product that hosts it.

With tools like ChatGPT, the “brain” is no longer just a hidden feature of a bigger product. It’s the product. Instead of intelligence being something you access indirectly, by querying a database of limited web pages** and sorting through links, you can interact with an AI as a standalone entity. You ask, it responds. You refine, it adapts. The interaction feels less like retrieval and more like conversation.
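
To make that loop concrete, here is a minimal sketch in Python, assuming the openai client library; the model name and the prompts are placeholders of my own, not anything from the paper:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# "You ask, it responds."
messages = [{"role": "user", "content": "Summarize the Transformer architecture in two sentences."}]
reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(reply.choices[0].message.content)

# "You refine, it adapts": the follow-up carries the whole conversation so far,
# so the model answers in context instead of starting from scratch.
messages.append({"role": "assistant", "content": reply.choices[0].message.content})
messages.append({"role": "user", "content": "Now explain it to a ten-year-old."})
reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(reply.choices[0].message.content)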

That shift, from retrieval to generation, is what rearranged the leaderboard.

Google was the king of finding information. OpenAI, at least in the public imagination, became the king of synthesizing it. Instead of scanning ten blue links, people began asking a model to summarize a dense topic, draft an email, outline a legal argument, or write code directly. The “smart” experience moved from pointing you to knowledge to producing something that looks like knowledge.

And here’s the twist that makes the current moment so ironic it almost reads like a parable: the foundational breakthrough behind nearly all modern AI, the Transformer architecture, came out of Google.

In 2017, eight Google researchers published a paper with a title that now sounds like prophecy: “Attention Is All You Need.” The authors, Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin, are sometimes referred to as the “Transformer Eight.” Many have since left Google, scattering into startups and rival labs, carrying with them the architecture that would become the engine of today’s generative boom.
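
For readers who have not met the paper, its core mechanism is scaled dot-product attention. The following is a minimal numpy sketch of that one idea; the toy shapes and data are illustrative, and real Transformers add learned projections, multiple heads, and masking:

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Each query scores every key; dividing by sqrt(d_k) keeps the scores stable.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax turns the scores into attention weights that sum to 1 per query.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # The output for each position is a weighted mix of the value vectors.
    return weights @ V

# Toy example: a 3-token sequence with 4-dimensional embeddings.
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V))

The point of the mechanism is that every token can draw on every other token at once, which is a large part of why the architecture scaled so well on parallel hardware.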

So the story isn’t simply that Google “missed” AI. It’s that the definition of AI changed in the public mind.

Prediction made intelligence feel like infrastructure: quiet, ambient, omnipresent. Generation makes intelligence feel like a character: present, expressive, occasionally wrong, and strangely persuasive. Search trained us to believe that answers were out there. LLMs tempt us to believe that answers can be made.

And that is both the promise and the problem of this new era: the smartest software is no longer just pointing. It’s talking back.

Sarge

** Caution regarding AI data sources (“limited web pages”):

Be aware that some AI programs operate with incomplete datasets, which may affect the accuracy of the information they provide.

Since the introduction of the Gutenberg press around 1440, the number of published works has grown enormously. Accounting for self-publishing and subsequent developments, the total is estimated to have exceeded 150-160 million titles by 2022.

A key AI dataset, “Books,” contains approximately 191,000 to 196,000 titles; some authors want their books removed from it. Set against roughly 150-160 million published works, that is on the order of a tenth of one percent of everything in print.