We Need ESG Criteria for Artificial Intelligence

[Reposted from Cornerstone Capital’s Journal of Sustainable Finance and Banking]

I have a great investment opportunity for you. I have identified 60 corporations that have begun employing a novel method for sourcing products that leads to a 50% cut in overhead. There’s just one catch: The method these businesses employ has been linked to producing and maintaining racist procedures in the criminal justice system, to gender pay inequity, and to large but unforeseen consequences. So, how much would you like to invest?

The answer is clear if you are vetting investments by ESG criteria. If this is all the information you have, you simply cannot invest in those companies, at least not in a way that is commensurate with your ESG goals.

But this is the situation social impact investors are in when it comes to the introduction of artificial intelligence (AI) to business practices. By “artificial intelligence” I mean the set of tools computer scientists and engineers are developing to mimic or replace the kind of reasoning human beings instantiate when engaged in task-specific behaviors (e.g., judging the appropriate sentence for a person found guilty of a particular crime, recognizing where the street ends and the sidewalk begins, or determining whether a particular financial transaction is sufficiently anomalous to warrant further investigation). The details are complicated, but we need not concern ourselves with them here. All we need to know are two facts:

  1. Artificial intelligence has been linked to each of those ethically problematic results, and many more.

  2. Artificial intelligence is increasingly incorporated into business practices in virtually every industry.

ESG investors have three options. The first is to snub those businesses that employ AI. But since AI is spreading to virtually every industry, that would ultimately mean leaving the investment game altogether. That is not going to happen. The second is to ignore the problem, or pretend it doesn’t exist, and go on investing as though nothing has changed. But that amounts to giving up on ESG analysis. It is the path of despair and forfeit.

The third and only viable option is to insist that companies meet certain criteria with regard to their use of artificial intelligence. Just as there are criteria for how businesses source their raw materials, there ought to be criteria for how businesses use one of the most powerful tools any business has ever seen. But what should those criteria be?

ESG investors should insist that businesses incorporate ethical oversight of the creation and use of AI in their business practices, preferably by an independent party, in three ways.

First, businesses need a review board to vet AI that is either developed in-house or acquired from an AI company. That board would investigate, for instance, whether the data fed into the algorithm was checked for bias, whether there is space for human deliberation and decision between the output of an algorithm and the action taken on it, and whether the input data or the outputs violate people’s right to privacy.
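Parts of that vetting can be made concrete. The sketch below is purely illustrative (the loan-approval data, group labels, and any flagging threshold are assumptions, not a standard): a review board might check a model’s decisions for one simple form of bias, a gap in positive-outcome rates across demographic groups.

```python
# Minimal, hypothetical sketch of one check an AI review board might run:
# comparing a model's positive-outcome rates across demographic groups
# (often called "demographic parity"). The data below are made up.

def selection_rates(outcomes, groups):
    """Return the fraction of positive outcomes (1s) for each group."""
    rates = {}
    for g in set(groups):
        in_group = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(in_group) / len(in_group)
    return rates

def parity_gap(outcomes, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(outcomes, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval decisions (1 = approved) and applicant groups.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = parity_gap(outcomes, groups)
print(f"Selection-rate gap between groups: {gap:.2f}")
# A board might flag any model whose gap exceeds a threshold it has chosen.
```

A real audit would of course use far richer data and multiple fairness measures, but even a check this simple makes the board’s question precise: by how much do outcomes differ across groups, and is that difference acceptable?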

Second, the business needs a review board – perhaps the same one that performs the initial review – to systematically monitor the impacts of the use of that technology. What are the unintended consequences? Are they ethically acceptable? What do stakeholders think of these impacts?
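Ongoing monitoring can also be partly automated. As a hypothetical sketch (the class name, baseline rate, window size, and tolerance are all assumptions for illustration), a review board might require a rolling check that a deployed model’s outcome rate stays near an agreed baseline, with drift flagged for human review:

```python
from collections import deque

class OutcomeMonitor:
    """Rolling check that a model's positive-outcome rate stays near a
    baseline agreed with the review board. Illustrative only."""

    def __init__(self, baseline_rate, window=100, tolerance=0.10):
        self.baseline = baseline_rate
        self.tolerance = tolerance
        self.recent = deque(maxlen=window)  # keeps only the last `window` decisions

    def record(self, decision):
        """Record one decision (1 = positive outcome, 0 = negative)."""
        self.recent.append(decision)

    def flagged(self):
        """True if the recent outcome rate drifts beyond the tolerance."""
        if not self.recent:
            return False
        rate = sum(self.recent) / len(self.recent)
        return abs(rate - self.baseline) > self.tolerance

monitor = OutcomeMonitor(baseline_rate=0.5, window=10)
for d in [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]:  # 80% positive: well above baseline
    monitor.record(d)
print("Drift flagged:", monitor.flagged())
```

A flag like this answers none of the board’s ethical questions by itself; it simply tells the humans on the board when those questions need to be asked again.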

Third, the findings at both stages must be communicated to the developers of the AI so that appropriate alterations can be made.

ESG investors are in a crucial position. They have the responsibility – given their own identity-defining goals – to integrate AI ESG criteria into their investment strategies. And they have the power to push corporations to develop and incorporate new sustainable and ethical practices where they are desperately needed.

Reid Blackman