Doubling down on AI: Pursue one clear path and beware two critical risks

According to a 2021 survey by NewVantage Partners, 77.8% of companies report AI capabilities in widespread or limited production, up from 65.8% the year before. This growth helps drive down the cost of AI (as noted by the Stanford Institute for Human-Centered Artificial Intelligence) while increasing the odds that organizations of all sizes will benefit.

However, doubling down on AI can bring double trouble. In particular, two problems create two critical kinds of AI risk:

1. The risk of talent shortages grinding value realization to a halt.

Trying to get value from AI with small, overburdened data science teams is like trying to drink vital nourishment through a too-narrow straw. AI can’t help your decision-making and automation processes scale if model training and management are backed up in an ever-lengthening queue. 

Without enabling others outside your data science team to help bring more models to production faster, you’ll risk failing business leadership’s test—“How much value are we realizing from these AI projects?”

2. The risk of “black box” AI fueling legal issues, fines, and loss of reputation.

Not knowing what’s in your AI systems and processes can be costly. An auditable, transparent record of the data and algorithms used in your AI systems and processes is table stakes for complying with current and planned AI regulations. Transparency also supports ESG initiatives and can help preserve your company’s reputation.

If you think you won’t have any issues with bias, think again. According to the Stanford Institute for Human-Centered Artificial Intelligence’s “2022 AI Index” report, as AI systems grow more capable, the potential severity of their biases grows as well. And AI capabilities are increasing in leaps and bounds.

One path to avoiding AI double trouble: A robust ModelOps platform

Organizations can unlock the power of AI at scale, while de-risking AI-infused processes, through governed, scalable ModelOps (AI model operationalization), which manages key elements of the AI and decision model lifecycle. AI models are machine-learning algorithms, trained on real or synthetic data, that emulate logical decision-making based on the available data. Data scientists typically develop models, in partnership with analytics and data management teams, to help solve specific business or operations problems.

The National University Health System (NUHS) in Singapore has derived real AI value through ModelOps. NUHS needed a 360-degree view of the patient journey to address the country’s growing number of patients and aging population. To do so, NUHS created a new platform, called ENDEAVOUR AI, which is managed with ModelOps. With the new platform, NUHS clinicians now have a complete view of patient records with real-time diagnostic information, and the system can make diagnostic predictions. NUHS has seen enough value from AI that it plans to operationalize many more AI tools on ENDEAVOUR AI.

ModelOps combines technologies, people, and processes to manage model development environments, testing, versioning, model stores, and model rollback. Too often, models are managed through a collection of poorly integrated tools. A unified, interoperable approach to ModelOps will simplify the collaboration needed to help ModelOps scale.
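 
As a rough illustration of what the versioning and rollback pieces of such a workflow can look like, here is a minimal sketch using MLflow’s model registry. The choice of MLflow, the model name “churn-risk,” and the rollback logic are assumptions for illustration only; the article does not prescribe a specific tool.

```python
# Minimal sketch: model versioning, promotion, and rollback via a model registry.
# MLflow and the model name "churn-risk" are illustrative assumptions; any
# ModelOps platform with a registry offers similar primitives.
import mlflow
from mlflow.tracking import MlflowClient
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train a simple model on synthetic data as a stand-in for a real project.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Log the model and register it as a new version in the model store.
with mlflow.start_run():
    mlflow.sklearn.log_model(model, "model", registered_model_name="churn-risk")

client = MlflowClient()
latest = client.get_latest_versions("churn-risk", stages=["None"])[0]

# Promote the newly registered version to production.
client.transition_model_version_stage("churn-risk", latest.version, stage="Production")

# Roll back by re-promoting the previous version if monitoring flags a problem.
if int(latest.version) > 1:
    client.transition_model_version_stage(
        "churn-risk", str(int(latest.version) - 1), stage="Production"
    )
```

In practice, the same promote/rollback calls would sit behind automated tests and approval gates rather than being run ad hoc.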

ModelOps can help address two major challenges:

  • Model complexity and opacity. Machine learning algorithms can be complex, depending on the number of parameters they use and how those parameters interact. With complexity comes opacity: the inability of a human to interpret how a model makes its decisions. Without interpretability, it’s difficult to determine whether a system is biased and, if so, what approaches can reduce or eliminate the bias. The governance and transparency a ModelOps platform provides reduce both regulatory and bias risk (a minimal bias check is sketched after this list).
  • Model creation at scale. Scale isn’t just the number of models; it’s how broadly AI is integrated into an organization’s offerings and processes. More integration means more models are needed, which ultimately means more potential benefits from AI. But if there aren’t enough data scientists to support this, and if models drift, remain opaque, or prove hard to deploy, AI initiatives can fail. By democratizing ModelOps so it can scale, organizations can move from incremental advantage to breakthrough advantage.
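 
To make the bias-risk point concrete, below is a minimal sketch of the kind of automated fairness check a governed ModelOps pipeline could run before promoting a model. The synthetic data, the protected-group attribute, the demographic-parity metric, and the 0.10 threshold are all illustrative assumptions, not requirements from the article.

```python
# Minimal sketch of an automated bias check a ModelOps pipeline could run before
# promoting a model: compare positive-prediction rates across a protected group
# (demographic parity). Data, group column, and threshold are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, random_state=1)
group = np.random.RandomState(1).randint(0, 2, size=len(y))  # synthetic protected attribute

X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, y, group, test_size=0.3, random_state=1
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
preds = model.predict(X_test)

# Positive-prediction rate per group; a large gap is a red flag for human review.
rate_a = preds[g_test == 0].mean()
rate_b = preds[g_test == 1].mean()
gap = abs(rate_a - rate_b)

print(f"group A rate={rate_a:.2f}, group B rate={rate_b:.2f}, gap={gap:.2f}")
if gap > 0.10:  # threshold is an assumption; set per governance policy
    raise SystemExit("Demographic parity gap exceeds policy threshold; block promotion.")
```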

Robust, scalable ModelOps delivers the technology and processes needed for the faster creation of well-governed, more easily deployed machine-learning models. ModelOps enables data scientists to focus on model creation, and democratizes AI by enabling data engineers and data analysts to deploy more AI throughout the organization. As noted by Dr. Ngiam Kee Yuan, group chief technology officer at NUHS, “Our state-of-the-art ENDEAVOUR AI platform drives smarter, better, and more effective healthcare in Singapore. We expect ModelOps will accelerate the deployment of safe and effective AI-informed processes in a more scalable, containerized way.”

You don’t need to let talent shortages add friction to AI value realization, or take on “black box” AI legal and reputation risk. With a scalable, robust ModelOps platform, you can “de-risk with benefits,” gaining AI adaptability for changing needs, and governance agility for an ever-changing regulatory environment.

Lori Witzel is director of research at TIBCO.
