As some are preparing for Artificial General Intelligence, Elon gets major FOMO
OpenAI, the company behind ChatGPT and DALL-E, has announced plans for when it achieves AGI (Artificial General Intelligence). Elon wants back in.
AGI is the real deal, the kind of stuff New York Times writers should actually be terrified about. (No, it's not Adjusted Gross Income, although that also sounds scary to me)
Jokes aside, it’s not necessarily scary, but people have been freaking out about AI chatbots taking over the world, and AGI is actually the first real mind-blowing Sci-Fi concept that I might get to see become a reality in my lifetime (if I get my shit together diet-wise).
The Wikipedia definition for Artificial general intelligence (AGI) is “the ability of an intelligent agent to understand or learn any intellectual task that human beings or other animals can.” For years, the Turing test was considered the benchmark for a machine that aspires to rival human intellect, but scientists, as always, don’t fully agree on how AGI should be evaluated.
Still, there is some general agreement among artificial intelligence researchers that intelligence is required to do the following:
reason, use strategy, solve puzzles, and make judgments under uncertainty;
represent knowledge, including common sense knowledge;
plan;
learn;
communicate in natural language.
ChatGPT is still far from meeting those requirements, but it does show very good progress in natural language communication and learning. (and some experts say it matches a chimpanzee in intelligence)
The tricky thing is that AGI will eventually have greater general capabilities than humans and, in the worst-case scenario, this superintelligent AI could be, by design, fatal to humanity. And, continuing this worst-case scenario, in a few years (or sooner) it breaks away from our control and kills us. (dum, dum duuuum!)
Humans wouldn’t allow this, you say? Humans, as stated in previous posts, are known to be horrible, unstable, and blinded by the need to get rich. But who am I to judge?
So, preparing for Artificial General Intelligence is like preparing for aliens to visit Earth: no one knows what’s going to happen. Actually, some experts aren’t even sure AGI can be achieved. (it can)
I wouldn’t be surprised if humans achieve AGI rather soon
OpenAI is, obviously, in pole position, as they have taken huge leaps in developing generative AI and have amassed an army of artificial intelligence experts and engineers plus billions in funding.
And to make things even more interesting, they published a blog post about their plans for once they achieve AGI. (yes, they are planning for it, and that’s good)
Here’s what it said:
- OpenAI’s mission is to create artificial general intelligence (AGI) that benefits all of humanity. (Of course, it is)
- AGI would open a lot of opportunities such as increasing abundance, solving global problems, enhancing human capabilities, and creating new forms of life. (and more)
- There are risks and uncertainties that AGI could pose, such as misalignment, misuse, competition, and existential threats.
- OpenAI proposes some principles and actions that they will follow to ensure that AGI won’t kill us, stuff like:
Building a diverse and inclusive team
Engaging with stakeholders and experts
Developing technical safeguards and standards
Sharing knowledge and resources
Supporting global cooperation and governance
It’s a good plan, for now, and beyond what you read above, there are some very good points in the text to be lauded.
1) They are being transparent, and even if their development is probably much more advanced than what they are communicating right now, at least we are getting some updates. Think about what happens in opaque regimes like China and how little we would know about their AGI developments. (yes, there are many companies and countries working on similar projects)
2) Sam Altman’s remark that “the first AGI will be just a point along the continuum of intelligence” fills my soul with hope and excitement for the future. I can’t explain why, but I fully agree with it. It’s like looking distantly into the future with awe and seeing humanity transform into an interplanetary civilization.
What happens on the sidelines?
Well, Elon Musk, the recently anointed richest man alive (again), has been sharing his AGI anxiety on Twitter, making alarmist remarks about how unpredictable AI development is and how “scary good” ChatGPT turned out to be.
He also criticized OpenAI for abandoning its original goal of being an open-source, non-profit “counterweight to Google,” saying it has now become “a closed source, maximum-profit company effectively controlled by Microsoft.”
Elon, who co-founded OpenAI along with other investors in 2015 as a nonprofit startup, left its board in 2018 and is now likely feeling major FOMO.
And what did he do? He began discussions with AI experts in “recent weeks” to build a competitor to OpenAI’s ChatGPT.
Maybe it’s a good thing. Maybe Elon is just scared he’s not going to be the one enabling humans to become interplanetary. Who knows :))
As always, I believe fun (and scary) times are ahead! Cheers!