When we might meet the first intelligent machines




How close are we to living in a world where machines exceed human-level intelligence? Over the course of my career, I’ve regularly engaged in a thought experiment where I try to “think like the computer” in order to imagine a solution to a programming challenge or opportunity. The gulf between human reasoning and software code was always pretty clear.

Then, a few weeks ago, Blake Lemoine, the Google AI engineer who has since been fired, said that after conversing with the LaMDA chatbot for several months, he believed it was sentient [subscription required]. Two days before Lemoine’s announcement, Pulitzer Prize-winning cognitive scientist and longtime AI researcher Douglas Hofstadter wrote an article arguing [subscription required] that artificial neural networks (the software technology behind LaMDA) are not conscious. He, too, reached his conclusion after a series of conversations with a powerful AI chatbot, in his case GPT-3, and he ended the article by estimating that we are still decades away from machine consciousness.

A few weeks later, Yann LeCun, chief AI scientist at Meta and a winner of the 2018 Turing Award, released a paper titled “A Path Towards Autonomous Machine Intelligence.” In it, he sets aside the questions of consciousness and sentience and instead proposes an architecture that could give an AI the ability to reason and plan like humans, a capability researchers call artificial general intelligence, or AGI.

I think we will come to regard LeCun’s paper with the same reverence we reserve today for Alan Turing’s 1936 paper, which laid the theoretical foundations of the modern digital computer. Here’s why.


Simulating action using a world model

LeCun’s first breakthrough is a way past the limitations of today’s specialized AIs: his concept of a “world model.” It is made possible in part by a hierarchical architecture of predictive models that learn to represent the world at multiple levels of abstraction and over multiple time scales.

With this world model, we can predict possible future states by simulating action sequences. In the paper, he notes, “This may enable reasoning by analogy, by applying the model configured for one situation to another situation.”
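To make that idea concrete, here is a minimal sketch of planning with a world model. This is not LeCun’s actual architecture: a hand-coded toy transition function stands in for a learned predictive model, and the names `predict_next_state` and `plan` are my own illustrative assumptions. The point is only the mechanism, simulating candidate action sequences in imagination and keeping the one whose predicted outcome lands closest to a goal.

```python
# Sketch: planning by simulating action sequences with a (toy) world model.
from itertools import product

def predict_next_state(state: float, action: float) -> float:
    """Toy stand-in for a learned world model: predicts the next state."""
    return state + action  # a real model would be learned from experience

def plan(start: float, goal: float, actions=(-1.0, 0.0, 1.0), horizon=3):
    """Roll out every candidate action sequence inside the model and
    keep the one whose predicted final state is closest to the goal."""
    best_seq, best_err = None, float("inf")
    for seq in product(actions, repeat=horizon):
        state = start
        for action in seq:
            state = predict_next_state(state, action)  # imagined, never executed
        err = abs(goal - state)
        if err < best_err:
            best_seq, best_err = seq, err
    return best_seq

print(plan(start=0.0, goal=2.0))  # a sequence of actions summing to 2.0
```

Nothing is tried in the real world until the model has already rehearsed it, which is what makes this kind of “reasoning by analogy” over imagined futures possible.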

A configurator module to drive new learning

This brings us to the second major innovation in LeCun’s paper. As he notes, “One can imagine a ‘generic’ world model for the environment with a small portion of the parameters being modulated by the configurator for the task at hand.” He leaves open the question of how the configurator learns to decompose a complex task into a sequence of subgoals, but this is essentially how the human mind reasons by analogy.

For example, imagine waking up this morning in a hotel room and having to operate its shower for the first time. Chances are you rapidly broke the task into a series of subgoals by drawing on analogies to other showers you have operated: first, determine how to turn on the water using the handle; then confirm which direction to turn the handle to make the water warmer; and so on. You could ignore the vast majority of data points in the room and focus on just the few that are relevant to those goals.
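Here is a hedged sketch of how that configurator idea might look in code. The task table, parameter names, and subgoal lists below are invented for illustration; LeCun leaves the configurator’s learning mechanism as an open question, so this only shows the shape of the idea, a generic model with a small, task-specific overlay.

```python
# Sketch: a "generic" world model whose parameters are mostly fixed,
# with a small task-specific subset modulated by a configurator.
# All names here are hypothetical stand-ins, not LeCun's design.

GENERIC_PARAMS = {"gravity": 9.8, "objects_persist": True}  # shared world knowledge

TASK_CONFIGS = {
    "operate_shower": {
        "modulated_params": {"focus_on": ["handle", "water", "temperature"]},
        "subgoals": ["find the handle", "turn the water on", "adjust until warm"],
    },
}

def configure(task):
    """Overlay the small task-specific parameters on the generic model
    and return them together with the task's subgoal sequence."""
    cfg = TASK_CONFIGS[task]
    params = {**GENERIC_PARAMS, **cfg["modulated_params"]}
    return params, cfg["subgoals"]

params, subgoals = configure("operate_shower")
for i, subgoal in enumerate(subgoals, start=1):
    print(f"subgoal {i}: {subgoal}")
```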

Once started, all intelligent-machine learning is self-study

The third major advance is the most powerful. LeCun’s architecture runs on a self-supervised learning paradigm, meaning the AI can teach itself by watching videos, reading text, interacting with humans, processing sensor data or ingesting any other input source. Most AIs today, by contrast, must be trained on a diet of specially labeled data prepared by human trainers.
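To see the contrast with label-hungry training, here is a toy self-supervised learner, assuming nothing from LeCun’s paper: the training signal (which word follows which) is mined from raw, unlabeled text, the same basic trick that next-word-prediction models exploit at scale. A simple bigram counter stands in for the large neural networks used in practice.

```python
# Sketch: self-supervised learning from raw text -- no human labels.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat then the cat saw the dog".split()

# Every (word, next word) pair in the raw text is its own training example.
bigrams = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    bigrams[word][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation observed during self-study."""
    options = bigrams[word].most_common(1)
    return options[0][0] if options else None

print(predict_next("the"))  # 'cat' -- the most common word after "the"
```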

Google’s DeepMind just released a public database produced by its AlphaFold AI, containing the predicted structures of nearly all 200 million proteins known to science. Previously, it took researchers three to five years to determine the structure of a single protein experimentally; DeepMind’s team and AlphaFold produced nearly 200 million predictions within a comparable five-year window.

What will it mean when an AI can plan and reason by itself, without human trainers? Today’s leading AI technologies – machine learning, robotic process automation, chatbots – are already transforming organizations in industries ranging from pharma research labs to insurance companies.

When they do arrive, whether in a few decades or a few years, intelligent machines will introduce both vast new opportunities and surprising new risks.

Brian Mulconrey is SVP at Sureify Labs and a futurist. He lives in Austin, Texas. 
