AGI by 2027. What are the implications for the world and the future of mankind and communism?
>Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the willful blindness of “it’s just predicting the next word”. They see only hype and business-as-usual; at most they entertain another internet-scale technological change.
>Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them. A few years ago, these people were derided as crazy—but they trusted the trendlines, which allowed them to correctly predict the AI advances of the past few years. Whether these people are also right about the next few years remains to be seen. But these are very smart people—the smartest people I have ever met—and they are the ones building this technology. Perhaps they will be an odd footnote in history, or perhaps they will go down in history like Szilard and Oppenheimer and Teller. If they are seeing the future even close to correctly, we are in for a wild ride.
>Let me tell you what we see.
<I. From GPT-4 to AGI: Counting the OOMs
>AGI by 2027 is strikingly plausible. GPT-2 to GPT-4 took us from ~preschooler to ~smart high-schooler abilities in 4 years. Tracing trendlines in compute (~0.5 orders of magnitude or OOMs/year), algorithmic efficiencies (~0.5 OOMs/year), and “unhobbling” gains (from chatbot to agent), we should expect another preschooler-to-high-schooler-sized qualitative jump by 2027.
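The OOM arithmetic in that pitch is easy to check yourself. A rough sketch, using the essay's own ~0.5 OOM/year estimates (these are the essay's projections, not measurements, and "unhobbling" gains are left out because the essay treats them qualitatively rather than as a number):

```python
# Effective-compute extrapolation from the essay's estimates:
# ~0.5 OOMs/year from physical compute scale-up plus
# ~0.5 OOMs/year from algorithmic efficiency gains.
compute_ooms_per_year = 0.5
algo_ooms_per_year = 0.5
years = 2027 - 2023  # GPT-4 era to the claimed AGI date

total_ooms = (compute_ooms_per_year + algo_ooms_per_year) * years
multiplier = 10 ** total_ooms
print(f"~{total_ooms:.0f} OOMs of effective compute, i.e. ~{multiplier:,.0f}x")
# → ~4 OOMs of effective compute, i.e. ~10,000x
```

Roughly the same ~4-5 OOM gap the essay claims separated GPT-2 from GPT-4, which is the whole basis of the "another preschooler-to-high-schooler jump" analogy.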
<II. From AGI to Superintelligence: the Intelligence Explosion
>AI progress won’t stop at human-level. Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress (5+ OOMs) into ≤1 year. We would rapidly go from human-level to vastly superhuman AI systems. The power—and the peril—of superintelligence would be dramatic.
>>29961
All modern pandemics are a result of human failures, yes. COVID wouldn't have happened if humans were capable of proper disease control, had better conditions for meat farms (or didn't rely on animal cruelty at all to sustain their luxuries), and made an effort to enforce said standards globally.

It's rather
UNMATERIALIST of you to even imply otherwise.
>>29964
No one will EVER buy your NFT
Your NFT will NEVER be the next VRChat
You WILL run out of tech startups to hype up
You WILL have to get a real job
>>29999
>let alone replacing blue collar jobs.
No one said it would replace blue collar.
>>29996
>While improving its "coding abilities" would require selecting only high-quality code,
Forgive me for doubting your knowledge of the subject when you are talking about basic RL stuff that is already in use.
>While improving its "coding abilities" would require selecting only high-quality code,
Yeah, no shit. That's how it has already been done.
The next step is making it able to solve a problem at different levels of abstraction the way a human would. Weighting it to select "good code" is the absolute basics.
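For what it's worth, the "weighting to select good code" step this exchange is arguing about is commonly done as best-of-n sampling against a reward model: generate several candidates, score each, keep the best. A minimal sketch; the scorer below is a dummy stand-in, not any lab's actual reward model:

```python
# Best-of-n selection: sample candidate completions, score each with a
# reward model, keep the highest-scoring one. In practice the scorer is a
# trained code-quality reward model; here it is a toy heuristic.
def score(candidate: str) -> float:
    # Toy heuristic: reward a docstring, lightly penalize length.
    return ('"""' in candidate) - len(candidate) / 1000

def best_of_n(candidates: list[str]) -> str:
    return max(candidates, key=score)

samples = [
    'def add(a, b): return a + b',
    'def add(a, b):\n    """Return the sum of a and b."""\n    return a + b',
]
print(best_of_n(samples))  # the documented version scores higher
```

The actual training loop then reinforces on (or fine-tunes toward) the selected samples, which is why the poster calls this "basic RL stuff that is already in use".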
>>29957
Everybody unironically talking about AGI in the near future has a double-digit I.Q.
There's a new goofy hype story going around about OpenAI's o3 model "sabotaging" its own shutdown mechanism. Absolutely nothing happened. It's like a few years ago, when that Google employee claimed his model was sentient because he asked it if it was and got a "yes" back.
Unique IPs: 23