If AI only had a brain: Is the human mind the best model to copy?

The Holy Grail of AI research is called “general artificial intelligence,” or GAI. A machine imbued with general intelligence would be capable of performing just about any task a typical adult human could.

The opposite of general AI is narrow AI – the kind we have today. Alexa, for example, can do all sorts of things, but ask it for anything outside the tasks it was specifically designed for and it fails.

A general AI, on the other hand, wouldn’t necessarily need to know how to do something before it tried.

The prevailing mindset in the field of machine learning dictates that we'll accomplish GAI through the development of classical neural networks that attempt to imitate the inner workings of the human brain.
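To make the "classical neural network" idea concrete: its basic unit is an artificial neuron, a weighted sum of inputs squashed through a nonlinearity – a very loose nod to how a biological neuron fires. The sketch below is purely illustrative (the weights are arbitrary), not code from any actual GAI project:

```python
import numpy as np

def neuron(inputs, weights, bias):
    """A single artificial 'neuron': weighted sum of inputs, then a
    sigmoid squashing function -- a crude imitation of neural firing."""
    z = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid activation

# Three inputs and illustrative weights, chosen arbitrarily for this sketch
out = neuron(np.array([0.5, -1.0, 2.0]), np.array([0.4, 0.3, 0.1]), bias=0.0)
print(round(out, 3))  # → 0.525
```

Stacking millions of these units into layers and tuning their weights with data is, in essence, the approach that today's deep learning systems – and the GAI efforts built on them – rely on.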

But what if that’s a dead-end? What if GAI continues to elude us for as long as we fail to completely understand the human brain? Or what if there simply isn’t any way to replicate the human mind in a classical computer system?

Scientists know startlingly little about how the human mind works. You can see how divided researchers in the hard sciences are when it comes to understanding cognitive function by Googling “what is consciousness” and checking out the diversity of expert opinions on the matter.

Furthermore, it’s arguable that trying to translate human brain activity into a digital replication is a lot like trying to describe the beauty of a sunset using only pantomime.

And that’s because the human brain may actually be a quantum machine. To put it in obscenely simple terms, our brains may process information in ways that a binary system (a classical computer) simply cannot.

At the end of the day we could be barking up the wrong tree by trying to imitate an organic machine we don’t fully understand.

But, luckily for humanity and any future sentient machine lifeforms that may one day exist beside us, there are more brains out there than just ours. And it seems at least possible that, based on our current level of technology, some of them might make better blueprints for GAI.

Cuttlefish or future robot overlord?

New research out of the University of Cambridge in the UK indicates that these adorable marine mollusks have a major mental advantage over humans: they can remember what they ate for dinner last Tuesday.

Okay, that might not sound like a superpower but it’s actually quite impressive.

Per a Cambridge press release:

As humans age, they gradually lose the ability to remember experiences that happened at particular times and places – for example, what we had for dinner last Tuesday. This is termed episodic memory, and its decline is thought to be due to deterioration of a part of the brain called the hippocampus.

But, according to the peer-reviewed research, cuttlefish don’t have this problem. They suffer from many of the same aging problems as we do – muscle deterioration, loss of appetite, et cetera – but they apparently retain episodic memory even in the last few days before they die of old age.

That appears to be because the cuttlefish brain doesn't have a hippocampus at all – its memory runs on a different structure, known as the vertical lobe.

Neuroscientists believe episodic memories are stored in, or accessed through, the hippocampus – a region of the brain thought to be responsible for a range of advanced mental processes.

The fact that cuttlefish retain and access those memories in old age without a hippocampus indicates there's more than one way to accomplish such neural feats – and that a simpler architecture can sometimes outperform a more complex one.

The cuttlefish is a fascinating creature capable of generating light shows, camouflaging itself, and communicating across numerous mediums. It’s widely considered the most intelligent invertebrate in existence today.

And there’s a whole planet full of other animal brains out there. Many of the Earth’s creatures are capable of mental feats human minds can barely fathom (let alone imitate).

In our current biological state, we'll never know what it feels like to sense the planet's magnetic field and navigate with pinpoint accuracy across thousands of kilometers on that sixth sense alone, as many birds do. And our brains don't contain the machinery to detect the minute electrical fields given off by living things in the surrounding water, as sharks can.

There are sensory mechanisms, functions, and experiences found in other organic neural networks that we're simply not anatomically capable of.

This isn’t to say that we should teach machines how to navigate by magnetic poles. The point is that nature takes many paths to accomplish a goal and the most complex one usually isn’t where things start.

An alligator can navigate and survive in a complex, changing environment just as well as a modern deer can – yet its brain is far smaller and built on a plan that has barely changed in millions of years.

Maybe the most prudent path forward for GAI is to gain a complete functional knowledge of how the brain of a cuttlefish, shark, alligator or bird works.

Right now, organizations such as OpenAI that are focused on GAI seem to be attempting to button-mash their way to a human-level intellect by simply scaling up a machine's computing power and its capacity to run deep learning algorithms.

It’s unclear whether that’s a viable strategy at this point. We certainly don’t seem to be any closer to GAI today than we were five years ago, but there’s no timer here.

After all, it took nature millions of years to pull off human intelligence.

But we know for certain there's at least one way to develop intelligence, because it already exists in nature.

Perhaps it’s time the world’s AI researchers started working with animal brain specialists instead of wandering in the dark, trying to imitate the complexities of the human mind through brute force.


Tristan Greene