Investors are pouring hundreds of billions of dollars into the AI industry right now, and much of that is going toward the development of a still speculative technology: artificial general intelligence.
OpenAI, the maker of the buzzy chatbot ChatGPT, has made creating AGI a top priority. Its Big Tech rivals, Google, Meta, and Microsoft, are also devoting their top researchers to the same goal.
But not everyone’s definition of AGI is the same, leading to some confusion over just how close the industry is to inventing this world-changing technology.
Generally speaking, AGI is understood to be an advanced AI that can reason like humans. For some, it’s more than that. Ian Hogarth, the co-author of the annual “State of AI” report and an investor, defined it as a “God-like AI.” Tom Everitt, an AGI safety researcher at DeepMind, described AGI as AI systems that can solve problems in ways that aren’t limited to how they are trained.
Andrew Ng, a leading AI researcher, said in a recent conversation with Techsauce that AGI should be able to do “any intellectual tasks that a human can.” It should be able to learn to control a car, fly a plane, or write a Ph.D. thesis.
According to Ng, though, we’re still decades away from seeing anything close to that.
“I hope we get there in our lifetime, but I’m not sure,” he said, adding that companies that claim AGI is imminent use dubious definitions of the term. “Some companies are using very non-standard definitions of AGI, and if you redefine AGI to be a lower bar, then of course we could get there in 1 to 2 years.”