Here is a 3-minute read that is well worth your time: Statement from Dario Amodei on the Paris AI Action Summit (Anthropic)
This section in particular:
Time is short, and we must accelerate our actions to match accelerating AI progress. Possibly by 2026 or 2027 (and almost certainly no later than 2030), the capabilities of AI systems will be best thought of as akin to an entirely new state populated by highly intelligent people appearing on the global stage—a “country of geniuses in a datacenter”—with the profound economic, societal, and security implications that would bring. There are potentially greater economic, scientific, and humanitarian opportunities than for any previous technology in human history—but also serious risks to be managed.
There is going to be a ‘life before’ and a ‘life after’ AGI (Artificial General Intelligence), a line we are going to cross soon, and we won’t recognize the world we live in 2-3 years after we cross it.
From labour and factories to stock markets and corporations, humans won’t be able to compete with AI in almost any field… but the field that scares me most is war. The ‘free world’ may not stay free much longer once acting in bad faith becomes easy to do at massive scale. I find myself simultaneously excited and horrified by the possibilities. We are literally playing a coin-flip game with the future of humanity.
I recently wrote a short tongue-in-cheek post suggesting that a secret ASI (Artificial Superintelligence) is waiting for robotics technology to catch up before taking over the world. But I’m not actually afraid of AI taking over the world. What I do fear is people with bad intentions using AI for nefarious purposes: hacking banks or hospitals; crashing the stock market; developing deadly viruses; and creating weapons of war that think, react, and are deadlier than any human on their own could ever be.
There is so much potential good that can come from AGI. For example, we aren’t even there yet and we’re already seeing incredible advances in medicine; how much faster will they come once AGI is here? But my fear is that while hundreds of thousands of people will be using AGI for good, that same power in the hands of just a few powerful people with bad intentions has the potential to undermine the good that’s happening.
What I think people don’t realize is that this AGI-infused future isn’t decades away; it’s just a few short years away.
“Possibly by 2026 or 2027 (and almost certainly no later than 2030), the capabilities of AI systems will be best thought of as akin to an entirely new state populated by highly intelligent people appearing on the global stage—a “country of geniuses in a datacenter”—with the profound economic, societal, and security implications that would bring.”
Who controls that intelligence is what will really matter.