What is AGI and how is it different from AI?

As artificial intelligence continues to develop at a rapid pace, it’s easy to wonder where this new age is headed.

The likes of ChatGPT, Midjourney and Sora are transforming the way we work through chatbots, text-to-image and text-to-video generators, while robots and self-driving cars are helping us perform day-to-day tasks. The latter isn’t as mainstream as the former, but it’s only a matter of time.

But where’s the limit? Are we headed towards a dystopian world run by computers and robots? Artificial general intelligence (AGI) is essentially the next step but as things stand, we’re a little way off from that becoming a reality.

AGI explained

AGI is considered ‘strong’ AI, whereas the generative chatbots, image generators and coffee-making robots we know today are ‘narrow’ AI.

Strong AI refers to software that has the same, or better, cognitive abilities as a human being, meaning it can solve problems, achieve goals, think and learn on its own, without any human input or assistance. Narrow AI can solve one problem or complete one task at a time, without any sentience or consciousness.

This level of AI is only seen in the movies at the moment, but we’re likely headed towards it in the future. When that might be remains open to debate – some experts claim it’s centuries away, while others believe it could be a matter of years. Ray Kurzweil’s book ‘The Singularity is Near’ places it between 2015 and 2045, a window the AGI research community regarded as plausible in 2007 – although it’s a pretty broad one.

Given how quickly narrow AI is developing, it’s easy to imagine a form of AGI in society within the next 20 years.

It is increasingly clear that all roads lead to AGI. Tesla is building an extremely compute-efficient mini AGI for FSD.

— Elon Musk (@elonmusk) December 29, 2023

What can AGI do?

Although it doesn’t yet exist, AGI could theoretically perform in ways indistinguishable from humans, and would likely exceed human capacities thanks to fast access to huge data sets. While it might feel like you’re engaging with a human when using something like ChatGPT, an AGI would be able to engage with humans entirely on its own, without any human intervention behind the scenes.

An AGI system’s capabilities would include the likes of common sense, background knowledge and abstract thinking, as well as practical capabilities, such as creativity, fine motor skills, natural language understanding (NLU), navigation and sensory perception.

When will we see Artificial General Intelligence, or AGI — when AI can do nearly anything the human brain can do (and perhaps do it better) — become a reality?

We asked 33 AI-focused YC founders, and here’s what they had to say.


— Y Combinator (@ycombinator) November 21, 2023

A combination of all of those abilities will essentially give AGI systems high-level capabilities, such as being able to understand symbol systems, create fixed structures for all tasks, use different kinds of knowledge, engage in metacognition, handle several types of learning algorithms and understand belief systems.

That means AGI systems will be ultra-intelligent and may also possess additional traits, such as imagination and autonomy, while physical traits like the ability to sense, detect and act could also be present.

AGI vs AI?

We know that narrow AI systems are widely used in public today and are fast becoming part of everyday life, but they currently need humans to function at every level. They rely on machine learning and natural language processing, and then on human-delivered prompts, in order to execute a task. They execute that task based on what they have previously learned, and can essentially only be as intelligent as the information humans give them.

However, the results we see from narrow AI systems do not go beyond what the human brain is capable of. They are there to assist us, not to replace us or out-think us.

Theoretically, AGI should be able to undertake any task and display a high level of intelligence without human intervention, performing better than both humans and narrow AI at almost every level.

What is the future of AGI?

Stephen Hawking warned of the dangers of AI in 2014, when he told the BBC: “The development of full artificial intelligence could spell the end of the human race.

“It would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”

Kurzweil followed up the prediction in ‘The Singularity is Near’ by saying, in 2017, that computers would achieve human levels of intelligence by 2029. He predicted that AI itself would improve exponentially, eventually operating at levels beyond human comprehension and control.

He then went on to say: “I have set the date 2045 for the ‘Singularity’ which is when we will multiply our effective intelligence a billionfold by merging with the intelligence we have created.”

These discussions and predictions have, of course, sparked debate about the responsible use of AGI. The AI we know today is broadly viewed as ‘responsible’, yet there are already calls to regulate many AI companies to ensure their systems do not get out of hand, and we’ve seen how controversial and unethical the use of AI can be in the wrong hands. It’s unsurprising, then, that the same debate is happening around AGI.

In reality, society must approach the development of AGI with extreme caution. The ethical problems surrounding AI today, such as controlling the biases within its knowledge base, point to similar issues with AGI, but at a far more harmful level.

If an AGI system can essentially think for itself and no longer has the need to be influenced by humans, there is a danger that Stephen Hawking’s vision might become a reality.

Featured Image: Ideogram


James Jones