Why Board Directors And CEOs Need To Learn AI Knowledge Foundations: Building AI Leadership Brain Trust

In my last blog on board director and CEO leadership needs, I identified a series of AI leadership questions to advance AI successfully and introduced basic AI concepts, defining terms such as AI, algorithm, and AI model. I also described different AI model methods, such as unsupervised versus supervised learning, to provide foundational concepts that every board director or CEO should understand.

If you want a good starter on the responsibility and duty of care of C-suite leadership on AI, I recommend you read an earlier blog here.

Over the past six months in the AI Leadership Brain Trust Series, I have identified over 50 skills required to help evolve talent in organizations committed to advancing AI literacy. The last few blogs have discussed the relevance of technical skills. To see the full AI Brain Trust Framework and skill list introduced in the first blog, reference here.

We are currently focused on the technical skills in the AI Brain Trust Framework, advancing the key AI and machine learning literacy terms.

Technical Skills:

1. Research Methods Literacy

2. Agile Methods Literacy

3. User-Centered Design Literacy

4. Data Analytics Literacy

5. Digital Literacy (Cloud, SaaS, Computers, etc.)

6. Mathematics Literacy

7. Statistics Literacy

8. Sciences (Computing Science, Complexity Science, Physics) Literacy

9. Artificial Intelligence (AI) and Machine Learning (ML) Literacy

10. Sustainability Literacy

In this blog, I will summarize the three main categories of AI: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). It is important that board directors and CEOs understand the breadth of AI approaches that can be used to solve different use cases or business problems, and also understand that AI is an exploratory research science that evolves over time, as additional methods may be tried to increase the prediction accuracy of the problem being solved. Just as in drug manufacturing, being patient and allowing scientists the time to discover the most optimal outcomes is key; rushing AI science programs risks a half-baked solution that is neither robust nor able to stand the test of time.

See this blog reference for additional information on basic AI terms, and some simple learning visualizations to make your AI learning easier and more fun. This site also has a good introductory list of basic AI terms.

Artificial Narrow Intelligence (ANI)

Narrow AI, often referred to as ANI, is a specific type of AI in which a technology outperforms humans in some very narrowly defined task. Unlike general artificial intelligence, narrow artificial intelligence focuses on a single subset of cognitive abilities and advances in that domain. ANI systems pull information from a specific dataset and are designed to perform a single, narrow task. Unlike general or “strong” AI, narrow AI is not conscious, sentient, or driven by emotion the way that humans are. Narrow AI operates in a pre-defined range, solving a specific use case.

Many examples of narrow AI are found in products like Alexa, Google Assistant, self-driving cars, customer call-center chatbots, and retail recommendation engines that analyze browsing history to make suggestions to consumers – all perform a specific task to improve decision-making and operational efficiency. IBM Watson helping to solve medical challenges, using AI to detect cancer with advanced image detection and neural networks, is also an example of narrow AI.

As with all forms of AI, high-quality datasets are imperative to drive accurate outcomes. The majority of mid-size and large enterprises are still building centralized data repositories and migrating data to the cloud, and all face steep learning curves in setting up effective AI operational infrastructure, let alone the realities of attracting the right talent to design, build and run ANI systems.

Artificial General Intelligence (AGI)

AGI, or “strong AI,” allows a machine to apply knowledge and skills in different contexts. This more closely mirrors human intelligence by providing opportunities for autonomous learning and problem-solving, and is more akin to an intelligent robot performing multiple functions and demonstrating human-like behaviours. A good vision is R2-D2 from Star Wars; or today, think of Sophia from Hanson Robotics, a humanoid robot that leverages many forms of AI, including computer vision to analyze a human’s facial expressions and classify emotion in order to demonstrate more empathy, and natural language processing (NLP) methods to interpret human language, form an appropriate response, and learn from billions of computations to perform and communicate more accurately.

The challenge now is to move from ANI to AGI in advanced fields like computer vision and natural language processing.

AGI as an approach tries to emulate the whole brain and its mental states, and operates through systems of artificial neurons called neural networks. To put AI’s progress in perspective against a human brain: scientists have been able to successfully replicate a brain of just over 300 neurons, while a human brain contains roughly 100 billion neurons, so AGI still has a long way to go to be on par with the incredible computational power of a human brain. That being said, the advancement of quantum computers may accelerate AGI feasibility, as these higher-processing systems could drive 1,000x improvements over current computer systems.
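For leaders curious what one of those artificial "neurons" actually computes, here is a minimal sketch: a weighted sum of inputs passed through an activation function. The input values and weights below are arbitrary illustrative numbers, not from any trained model.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs plus a bias,
    squashed through a sigmoid activation into the range (0, 1)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Three inputs feeding a single neuron; the weights here are arbitrary
# illustrative values. A neural network wires thousands to billions of
# these units together, which hints at why emulating a 100-billion-neuron
# brain remains out of reach.
output = neuron(inputs=[0.5, 0.8, 0.1], weights=[0.4, -0.2, 0.9], bias=0.1)
print(round(output, 3))
```

In a real network, learning means repeatedly adjusting those weights so the outputs better match known examples; the unit itself stays this simple.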

Developing artificial consciousness requires subjective, conscious experience in addition to pure intellectual horsepower. Experts hold varying opinions on when AGI will go mainstream. Patrick Winston, MIT professor and former director of the MIT Artificial Intelligence Laboratory, forecast that AGI will be pervasive by 2040.

According to Research and Markets reports, the quantum computing market was valued at $507.1 million in 2019 and is projected to grow at a CAGR of 56.0% during 2020-2030 (the forecast period), ultimately reaching $64,988.3 million by 2030.

Machine learning (ML) is expected to progress at the highest CAGR among all application categories during the forecast period, owing to the fact that quantum computing is being integrated into ML to improve its use cases.

With the accelerated investment push by leading VCs and businesses’ desire to secure higher computational power, perhaps a more realistic outlook is that AGI will have gone mainstream in less than ten years.

A key leadership question to reflect upon: if a ten-year trajectory is in fact achievable, which I believe it is, will we have the right legal and ethical governance policies in place?

Artificial Super-Intelligence (ASI)

ASI is more advanced than AGI: it is when artificial intelligence exceeds human capabilities and operates at a genius level. There are no real limits to what ASI could accomplish, from building nanotechnology, to producing objects, to preventing aging, to dominating power over the human species, which many are concerned about.

As AI continues to take over more jobs, there are major concerns over the ethics of AI and whether governments will govern its growth and enact legislation to ensure AI won’t increase discrimination, erode personal privacy, or pose security threats through autonomous weapons.

Oxford philosopher Nick Bostrom defines super-intelligence as any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.

As we look ahead, ASI is what many people like Elon Musk are worried about, as ASI could lead to the extinction of the human race. Other futurists, like Ray Kurzweil, predict that by 2045 we will be able to increase our intelligence a billionfold by linking wirelessly from our neocortex to a synthetic neocortex in the cloud – essentially meshing human and machine – allowing us to discover unexplored aspects of human-machine interfaces and form a sentient computer life form.

Leaders must get more involved in AI and shape its ethical policies, legal frameworks and governance structures, as waiting for governments to get this right will be far too slow. Given the speed at which we are evolving, this is an unprecedented time: we can envision a future where man and machine together are more self-aware, more intelligent and achieve unified consciousness, often called the singularity, and that future requires more thoughtful leadership and reflection.

Conclusion

I believe that board directors and CEOs need to accelerate their knowledge of AI, weigh the implications of their AI adoption strategies, use AI more for good, and think carefully about their responsibility to their employees, to society and, in particular, to future generations. The ethical duty of care in evolving man-machine interfaces is a complex area to govern, but it is likely one of the top three governance priorities facing leadership.

More Information:

To see the full AI Brain Trust Framework introduced in the first blog, reference here. 

To learn more about Artificial Intelligence, and its challenges, both positive and negative, refer to my new book, The AI Dilemma, written to guide leaders forward.

Note:

If you have any ideas, please do advise, as I welcome your thoughts and perspectives.
