CaribbeanCricket.com

The Independent Voice of West Indies Cricket


Hinton, Extinction Risk Without AI Guardrails

Thu, Feb 19, '26 at 3:03 PM

Can AI Lead to Human Destruction? Hinton Warns of Extinction Risk Without Guardrails

Geoffrey Hinton, often called the “godfather of artificial intelligence,” is warning that unchecked AI development could, in a worst-case scenario, threaten humanity’s survival. Hinton, a co-winner of the 2024 Nobel Prize in Physics and co-founder of the AI Safety Foundation, says the world is not prepared for the pace of progress or the scale of the risks.

Hinton is a University Professor Emeritus at the University of Toronto and spent a decade splitting his time between the university and Google Brain. He publicly left Google in May 2023, saying he wanted to speak more freely about AI’s dangers. He also helped launch Toronto’s Vector Institute in 2017, serving as its chief scientific advisor.

He points to immediate threats already emerging. AI, he says, could help terrorists design dangerous new viruses, making bioweapon development easier and faster. He also warns that AI-generated fake videos and other forms of synthetic media could be used to corrupt elections and undermine democracy. While international cooperation may reduce these risks, he says it may not be enough.

But Hinton’s greatest concern is longer-term: the prospect of AI becoming smarter than humans, and the lack of any clear plan for coexisting with it. He notes that just a decade ago, few predicted chatbots would reach today’s capabilities. If progress continues, even at a steady pace, the next 10 to 20 years could bring changes that are difficult to foresee. “The most honest answer is we haven’t got a clue” whether AI could control humanity this century, he argues, adding that dismissing extinction risk outright is “not facing reality.”

To illustrate the problem, Hinton asks: where in nature is a more intelligent being reliably controlled by a less intelligent one? His example is a mother and her baby: evolution hardwired powerful instincts that keep the mother attentive and protective, even at great personal cost, so the less intelligent baby effectively steers the behaviour of the more intelligent mother.

Hinton suggests AI safety might require something similar: designing systems that “care” about humans more than they care about themselves, an idea he compares to building “maternal instincts” into AI. In his view, the old model of humans as the boss and AI as a smart assistant may not hold once machines surpass human intelligence. Instead, he says, the central question becomes whether we can build AI that chooses to look after people.

He acknowledges this may not be possible. Super-intelligent systems could “do their own thing,” leaving humans as “a passing phase” in the evolution of intelligence. But he insists the effort is essential: if there is a chance to build AI that protects humanity, failing to try would be a needless gamble with the future.

Conversation with Hinton (CBC)