As technology products increasingly incorporate artificial intelligence (AI), the people building or using them cannot overlook data governance. A governance strategy should address questions such as:
Where does an AI product’s data exist?
Who can access that information?
What measures are in place to ensure availability?
Which safeguards keep the data secure?
Which entities help maintain data governance?
Has the company defined the data it uses?
What is the function of AI in data governance itself?
It’s important to clarify that the list above is not exhaustive; more factors will inevitably surface as data governance evolves. Even so, many analysts understandably argue that AI deployments will not succeed unless people focus on data governance from the start. Here’s a closer look at why that’s the case.
Decision-Makers Often Distrust AI
Numerous studies have confirmed that executives often don’t trust AI when making decisions. Some research even suggests people prefer to rely on gut instinct or intuition rather than let the technology help.
There’s no universal reason why they have those feelings, nor is there a quick way to reduce them. However, having a solid data governance strategy backed by best practices could help a company’s leaders feel better about using AI and, more importantly, feel confident about the conclusions the technology reaches.
A good starting point is to ensure the AI algorithms only receive high-quality data. Otherwise, the associated projects will be more likely to fail and offer less-than-reliable results. Additionally, a company’s data governance strategy must include specifics on how to keep new data clean. It’s one thing to send existing information into an AI tool, but what must a company’s team collectively do so that the quality remains high as new data arrives?
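The quality gate described above can be sketched in code. This is a minimal illustration, not a production validator; the field names, checks, and rejection threshold are all illustrative assumptions.

```python
# A minimal sketch of a data-quality gate for records entering an AI
# pipeline. REQUIRED_FIELDS and the 5% rejection threshold are
# illustrative assumptions, not a standard.

REQUIRED_FIELDS = {"id", "timestamp", "value"}

def validate_record(record: dict) -> list:
    """Return a list of quality problems found in one incoming record."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if "value" in record and record["value"] is None:
        problems.append("null value")
    return problems

def quality_gate(records: list, max_reject_rate: float = 0.05):
    """Split a batch into clean and rejected records.

    Also reports whether the batch as a whole passes, so that a sudden
    drop in quality of newly arriving data is surfaced, not silently
    absorbed.
    """
    clean, rejected = [], []
    for record in records:
        (rejected if validate_record(record) else clean).append(record)
    reject_rate = len(rejected) / max(len(records), 1)
    return clean, rejected, reject_rate <= max_reject_rate
```

Running every new batch through a gate like this is one concrete answer to the question of how quality stays high as new data arrives: bad records are quarantined, and an unusually high rejection rate flags the whole batch for human review.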
Discussions about the influence of AI in data governance decisions should also address how to tackle bias. What checks and balances will exist to verify the data used in high-level decision-making is authenticated and reliably sourced? Having multiple parties involved in vetting the data is a great strategy since all humans have inherent biases.
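One way to make such checks and balances concrete is to require both a verified source and independent sign-offs before a dataset can feed high-level decisions. The sketch below assumes an allow-list of sources and a two-reviewer minimum; those names and thresholds are hypothetical, chosen only to illustrate the multi-party vetting idea.

```python
# Hypothetical multi-party vetting rule: a dataset is cleared for
# decision-making only when its source is on an allow-list AND at least
# two *distinct* reviewers have signed off. Source names and the
# two-reviewer threshold are illustrative assumptions.

TRUSTED_SOURCES = {"state_registry", "internal_warehouse"}

def is_vetted(dataset: dict, min_reviewers: int = 2) -> bool:
    source_ok = dataset.get("source") in TRUSTED_SOURCES
    # A set deduplicates names, so one person approving twice
    # does not count as two independent reviewers.
    reviewers = set(dataset.get("approved_by", []))
    return source_ok and len(reviewers) >= min_reviewers
```

Requiring distinct reviewers is the point of the rule: since every individual carries inherent biases, no single person's approval clears the data on its own.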
Data Governance Can Promote Information Sharing
You may think that establishing data governance principles before or during AI adoption means the information must stay relatively restricted, even to the extent that certain company departments can’t access it. However, that’s not necessarily true. Creating the appropriate parameters can facilitate instances where different groups have genuine needs to see and work with data.
Becoming more open about information sharing doesn’t mean giving every individual unrestricted access. It might mean providing access only to the parties who have a genuine need and the necessary context. Additionally, applying permissions to the information upholds data security and integrity.
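Permission-scoped sharing of this kind can be sketched simply: each role sees only the fields it has a genuine need for. The roles and field lists below are illustrative assumptions, not a complete access-control system.

```python
# A minimal sketch of permission-scoped data sharing. Each role maps to
# the set of fields it may see; everything else is stripped before the
# record leaves the governed store. Role names and field lists are
# illustrative assumptions.

ROLE_FIELDS = {
    "analyst":   {"region", "spend", "channel"},
    "marketing": {"region", "channel"},
}

def share_view(record: dict, role: str) -> dict:
    """Return a copy of the record restricted to the role's allowed fields."""
    allowed = ROLE_FIELDS.get(role, set())
    return {key: value for key, value in record.items() if key in allowed}
```

An unknown role receives an empty view by default, which errs on the side of restriction: sharing widens only when governance explicitly grants it.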
Having those controls is essential even if the data does not relate to AI. The future of data governance will likely mean there will be more instances where it’s beneficial to share data with others for the greater good.
The European Union (EU) also showed support for creating more data-sharing rules to promote innovation and growth, particularly in the area of AI. Some of the proposals included facilitating the reuse of certain public-sector data and limiting exclusive data access rights to no longer than one year across EU member states.
Successes and Restrictions When Sharing AI Data in Health Care
In one recent example, researchers used statewide data to create AI models that predicted the likelihood of individual patients needing hospitalization due to COVID-19. The team suggested that their findings could help develop better public health strategies and improve the utilization of medical resources. However, this work could not have happened without procedures that allowed the researchers to access the state data and dictated how they used it.
People who study the impact of AI on data governance also note that a lack of access across organizations limits what the technology might achieve. A primary obstacle to using AI data for better patient decisions is that the content is often siloed at individual hospitals rather than accessible across a person’s health care journey. That fragmentation makes it harder to develop AI algorithms.
However, if the data experts employed by hospitals work together to agree on data governance principles, progress could occur in overcoming this challenge. That effort could be especially beneficial for applying AI to help patients with certain rare diseases who would be underrepresented by the data held by a single hospital or system.
Data Governance May Encourage Public Acceptance of AI
As more of the general public becomes aware of how AI is used, some people are becoming more insistent about knowing how AI systems reach their decisions. Evidence has already shown that AI algorithms used to scan resumes for candidate suitability gave lower scores to people who’d attended women’s colleges. Companies also use AI to help decide whether to offer applicants loans or to set their insurance premium rates.
Law enforcement agencies may depend on AI while searching for suspected criminals or spotting crime warning signs. People should never rely on AI so much that they suspend their problem-solving and critical-thinking skills. However, since an AI conclusion could help shape decisions that change the lives of the affected parties, it’s understandable that people want more assurance that the AI is working as intended and does not have biases. Data governance could help provide it.
In one recent study, researchers sought to get participants’ opinions about the use of AI in four applications. They included autonomous weapons, customer service, art, and crime prediction. The data showed that people were significantly more hesitant about AI weaponry than the other three scenarios, although that did not surprise the research team. A more unexpected finding was that many AI-based topics require extensive explanations to close the gap between people’s perceptions and realities.
The future of data governance may involve breaking down what’s gone into ensuring an AI algorithm will perform safely and not cause any unintended consequences. Putting that information into accessible language and tying it to data governance principles could make the public more agreeable to the growing use of AI in society.
How Will You Support the Future of Data Governance?
Regardless of whether you’re actively involved in AI development, you almost certainly work with data in your career. That information could eventually get used for an artificial intelligence project. Even if it doesn’t, you must actively take part in following or helping create data governance principles in your organization. Doing those things helps ensure data gets used responsibly and effectively.
Opinions expressed by DZone contributors are their own.