📜 Superintelligence Faces Calls for a Global Ban
We just got another flashpoint in the AI safety war: a new open letter backed by top thinkers is calling on governments to ban the development of artificial superintelligence (ASI).
The Future of Life Institute released the letter, urging world governments to prohibit ASI development at least until there is broad scientific consensus that it can be built safely and controllably, plus strong public buy-in.
The signers argue unchecked ASI could lead to:
- Human obsolescence in the economy
- Loss of civil liberties and autonomy
- Complete extinction, in worst-case scenarios
Who signed it? → Yoshua Bengio and Geoffrey Hinton (the "Godfathers of AI"), Steve Wozniak (Apple co-founder), Leo Gao (OpenAI staffer), and more.
Notably absent: leadership from the biggest AI labs, including OpenAI, Google DeepMind, Anthropic, Meta, and xAI.
Why it matters:
- Public opinion leans toward caution: just 5% of surveyed Americans want companies to build superintelligence without oversight.
- There's no clear, agreed-upon definition of what "superintelligence" even is.
- And without cooperation from OpenAI, Google, or Meta, a ban could end up more symbolic than effective.