Rahul Jalali, SVP and CIO, Union Pacific Railroad.
As Uncle Ben says in Spider-Man, “With great power comes great responsibility.” This is one of my favorite sayings because it holds true for every walk of life, especially as a leader in the workplace, where you are responsible for the success of the business you own and the people you work with.
Many technology applications are becoming more powerful, impactful and far-reaching than ever before, especially in the realm of artificial intelligence (AI), where systems can make life-altering decisions based on real-time data from sensors, remote inputs and existing (historical) data with minimal or no human intervention.
With this power, what should the responsibility be? Should these AI systems be held to the same accountability standards as humans?
Where is AI today?
To answer these questions, let’s unpack a few of the current use cases for AI.
1. Finance: Robo-advisors create investment portfolios, and bots decide whether to grant loans based on a variety of attributes about the borrower. In high-frequency stock trading, for example, computers have replaced human decision making for matching buy/sell orders.
2. National security: AI plays a front-and-center role in national security; for example, insights mined from troves of surveillance data and video are profoundly changing intelligence analysis.
3. Healthcare: The healthcare industry increasingly relies on AI, such as in cancer detection and in managing congestive heart failure, where systems predict complications and recommend proactive interventions.
4. Retail: AI determines which products to showcase to customers based on their purchase history and preferences.
5. Transportation: In my favorite example, autonomous vehicles analyze information from a variety of sensors, cameras and data feeds to adapt to circumstances such as avoiding surrounding vehicles and handling changing weather or road conditions. There are also many other applications of AI for the railroads, such as managing yard operations and predicting equipment failures across the network.
In all these examples, AI can help to minimize bias by removing humans from the equation; however, it can also help propagate bias at scale because of its reliance on historical data that could have inequities baked in. For example, a mortgage-lending algorithm could reduce lending based on age since it learned from historical information that older people were more likely to default on their payments.
That said, there are several approaches to ensure fairness, including pre-processing the data to cleanse it of any information regarding sensitive attributes and post-processing the model's predictions after they are made to satisfy certain fairness constraints.
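To make these two interventions concrete, here is a minimal, purely illustrative sketch in Python. The toy data, thresholds and the demographic-parity-style rule are my own assumptions for the example, not a description of any real lending system.

```python
# Toy loan applications: (age_group, credit_score). Hypothetical data.
applicants = [
    ("older", 640), ("older", 700), ("older", 580),
    ("younger", 660), ("younger", 690), ("younger", 600),
]

# Pre-processing: cleanse the sensitive attribute (age group) so the
# model never sees it.
cleansed = [{"credit_score": score} for _, score in applicants]

# A naive "model": approve any score at or above a cutoff.
def approve(score, threshold):
    return score >= threshold

# Post-processing: instead of one global cutoff, adjust a per-group
# threshold after the fact so approval rates match across groups
# (a demographic-parity style fairness constraint).
def group_rate(group, threshold):
    scores = [s for g, s in applicants if g == group]
    return sum(approve(s, threshold) for s in scores) / len(scores)

thresholds = {"older": 650, "younger": 650}
# Lower the disadvantaged group's threshold until its approval rate
# catches up with the other group's.
while group_rate("older", thresholds["older"]) < group_rate("younger", thresholds["younger"]):
    thresholds["older"] -= 10

print(thresholds)
```

On this toy data the loop settles on a slightly lower cutoff for the older group, equalizing approval rates; real post-processing methods formalize this idea with statistical fairness criteria rather than a hand-rolled loop.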
Explainability techniques (explaining how the system reached a particular decision) also help in identifying and mitigating bias. This could also push us to hold humans to higher standards of fairness in decision making, because it uncovers which human decisions may underlie these biases. In addition, human judgment (using an interdisciplinary approach spanning social sciences, law and ethics) may still be needed to deploy AI systems with an eye toward minimizing bias and ensuring fairness.
What does the future hold for responsible AI?
Let’s take it one step further: Not only do we need to concern ourselves with bias but also with accountability for legally or morally wrong actions committed by AI. For the most part, we have encountered the good AI — used to solve global problems — but bad AI is also coming to light, i.e., the type of AI that could do harm or be destructive, based on what it is learning and how it is being used.
A great example is the much-debated potential for AI-triggered autonomous lethal weapons. A more practical day-to-day example is a fully autonomous self-driving car. If the car were to accidentally collide with another vehicle, causing an injury or worse, then the normal course of action would be to pursue the humans behind the AI: the automakers, AI developers and so on.
However, we should consider extending that accountability to the AI itself if we are expecting it to meet and exceed the limits of human intelligence.
Today, companies create their own responsible AI guidelines and self-regulate, but this is not consistent. We need to determine how best to regulate AI across the board, as Europe did for data privacy with its General Data Protection Regulation (GDPR).
While that is coming, we must ensure that any AI we build is interpretable and explainable, fair and secure, protects privacy and incorporates a high standard of data governance in order to create a robust accountability framework. This ultimately builds trust with users; otherwise, AI solutions will not be used as they should be for the betterment of humankind.