What Does an AI Ethicist Do? Breaking Down “AI for Not Bad”
You might hear the term “AI ethicist” and picture an armchair philosopher pondering the mysteries of technology and (after much contemplation) pronouncing certain things good and other things bad. Or it might seem that an AI ethicist’s job is to tell you what “good AI” you should build, or how technology can “benefit humanity.”
Admittedly, there are such AI ethicists. They are not, however, the ones you’ll find helping organizations design and implement the kinds of frameworks needed in a business context.
The work of AI ethicists in business is to help organizations build and use what we might call AI for Not Bad, which is very different from AI for Good. While the latter focuses on using the powerful tool that is AI to pursue explicitly ethical goals, AI for Not Bad focuses on mitigating the ethical, reputational, and legal risks of AI in pursuit of ordinary business objectives. The goal is to carry out for-profit enterprise and innovation efforts without the unintended bad consequences that put organizations at operational, ethical, legal, regulatory, and reputational risk. AI ethicists make this possible by helping organizations implement a robust strategy that mitigates those risks.
I know this because I am a professional AI ethicist, and this is what I do. (And as a former philosophy professor, I’m more than familiar with a life of contemplation.) My team of experts at Virtue Consultants implements comprehensive approaches to AI ethical and regulatory risk mitigation. I’ve served as a Senior Advisor to the AI committee at Deloitte AI Institute, was a founding member of EY’s AI Advisory Board, and I advise the Canadian government on its federal AI legislation, the AI and Data Act (AIDA). I even wrote the – okay, a – book on AI ethical risk in business, Ethical Machines, published by Harvard Business Review Press.
What is the role of ethics in AI?
It’s easy to see how one might dismiss ethical considerations as unnecessary. Technology is not ethical, you might argue: it’s just code, neither moral nor immoral. Or you might grant that ethics is worth discussing, but only in a casual sense: something you talk about late at night or at parties, not something sufficiently relevant to business operations to merit time in the corporate conference room.
Yet the reality of ethical risk proves how misguided that view is. From trademark and copyright claims making their way through the courts as a result of generative AI usage (to say nothing of the privacy issues to be dealt with), to infamous examples like United Healthcare releasing an AI that recommended healthcare practitioners pay more attention to white patients than to sicker Black patients, the risk that ethical lapses pose to enterprises is not up for debate. This is true regardless of what moral theory or opinions you subscribe to.
So the role of ethics — in particular, of ethical risk management — in AI is to mitigate the risk of things going wrong, even when nobody involved intends to do something wrong. The team at United Healthcare that built the discriminatory AI was not trying to discriminate against Black patients; they paid insufficient attention to ethical risk considerations when designing and testing it. That happened because, at the time, there was no comprehensive and robust ethical risk strategy in place to catch unintended consequences like these.
So the role of ethics and ethicists here is not to enforce a standard of personal integrity via a code of conduct, but rather to help the developers, procurers, and users of AI products avoid the unintended negative outcomes of building and using a technology that can take one small mistake and repeat it thousands of times faster than any human could as it scales its outputs.
What does an AI ethicist do?
Hopefully by now you’re on board with two claims I’ve made: one, that ethics is in fact extremely relevant to AI; and two, that ethical, reputational, and legal risks need to be addressed in concrete ways by any organization that seeks to leverage the power of AI.
It’s now worth outlining what an AI ethicist does. I opened this article by contrasting ethical risk management with armchair philosophizing. But what does the day-to-day work look like for an AI ethicist developing and implementing AI risk strategy for a large organization?
While I offer a great deal of detail in my book and my articles, here is a high-level overview of the kinds of things an AI ethicist does, in no particular order:
- Bring philosophical training, along with business and technical experience, to companies in a way that helps them set their ethical risk standards. This sometimes takes the form of an AI Ethics or Responsible AI Statement. One crucial function of this statement is that it allows the AI ethicist and their team to perform a gap analysis: how far is your organization from living up to those standards?
- Help C-suite ethical risk management program owners think through tough questions about a company’s “ethical nightmares”, or scenarios they want to avoid at all costs, in order to figure out what exactly the organization stands for and how those values connect to its mission.
- Develop granular frameworks that help engineering leaders know exactly which situations require ethics committee consultation, and why.
- Design ethics committees that are robust enough to survive individual job changes, that have all of the necessary types of experience and expertise, and that possess the authority and power to mitigate ethical risk in practice.
- Distinguish ethical risk management from legal risk and consumer perception risk management.
- Coach people on how to adopt the actions required by the ethics statements, so they stay on track for the risk-mitigating outcomes the organization needs.
- De-abstract and disambiguate ethical risk strategy discussions when teams get bogged down.
- Translate high-level articulations of ethical values into procedures and, ultimately, into metrics that track the rollout, compliance, and impact of the organization’s AI ethical risk/Responsible AI program (a minimal sketch of what this can look like follows this list).
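To make that last item concrete, here’s a minimal, purely illustrative sketch of what translating a value like “our AI must not discriminate” into a trackable metric can look like. Everything in it (the choice of metric, the hypothetical decision data, the 0.8 threshold) is an assumption made for the sake of the example, not a standard I’m prescribing:

```python
# Illustrative sketch only: turning a high-level value ("our AI must not
# discriminate") into a number a risk program can track over time.
# The data and the 0.8 threshold are assumptions for this example.

def selection_rate(decisions: list[int]) -> float:
    """Share of positive decisions (e.g., loans approved) within a group."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the two groups' selection rates. A common rule of thumb
    (the "four-fifths rule" from US employment law) flags ratios below 0.8."""
    return selection_rate(group_a) / selection_rate(group_b)

# Hypothetical model decisions: 1 = approved, 0 = denied
group_a_decisions = [1, 0, 1, 1, 0, 1, 0, 1]  # 62.5% approved
group_b_decisions = [1, 1, 1, 0, 1, 1, 1, 1]  # 87.5% approved

ratio = disparate_impact_ratio(group_a_decisions, group_b_decisions)
print(f"Disparate impact ratio: {ratio:.2f}")  # prints 0.71

if ratio < 0.8:
    # In a mature program, this is where procedure kicks in:
    # log the finding, hold the release gate, escalate for review.
    print("Below threshold: escalate to the ethics committee for review.")
```

The specific metric matters less than the move it represents: “don’t discriminate” is now a number that a dashboard can report, a release procedure can gate on, and an ethics committee can act on.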
In other words: AI ethicists operationalize ethics. They don’t just think about it.
What qualifies someone to be an AI ethicist?
AI ethicists, also referred to as ethics consultants or ethics strategists, are not interchangeable. They don’t all share the same level and type of competence and experience. Here’s what you should look for in an AI ethicist who isn’t trying to get you to save the world (remember: the goal is AI for Not Bad).
Deep understanding of AI
Not great: people who “think a lot about the Singularity” and have watched Terminator one too many times. Much better: ethicists who understand how machine learning works, how AI is trained, its myriad use cases, who builds it within the organization, which stakeholders it impacts, how to operationalize it — and, most importantly, all of the ways that AI can have unintended consequences for companies whose employees are, in all other respects, acting with wonderful moral intentions. A good AI ethicist fully respects the incredible power of AI, which comes from knowing the territory: all the things it can (and can’t) do.
Corporate and regulatory experience
A fantastic engineer who has built AI systems and happens to have a philosophical background might seem like a qualified candidate for the role of AI ethicist. But as every corporate leader knows, the key to getting anything done is knowing how to win executive buy-in and how to work cross-functionally to get initiatives adopted and moved forward. An AI ethicist needs this kind of experience; without it, the resulting program will fail to launch because it wasn’t designed with pragmatic employee adoption in mind.
Doctorate level training in philosophy and ethics
On the other hand, all the corporate and AI experience in the world is no good if an AI ethicist doesn’t bring a deep and nuanced understanding of how to think through ethical considerations. Ethics can’t be boiled down to PR, to consumer perception and market research, or to what (outdated) laws say. And the risk involved is too great to delegate to your product owners and call it a day. You need an AI ethicist with a PhD in philosophy or, at minimum, a master’s in medical ethics (if you operate in the healthcare or health tech industry). Advanced academic training is far from the only relevant qualification, but it is a prerequisite for the work.
Why dedicated AI Ethicists are needed
Building and buying AI that is trustworthy and low-risk means it must be sufficiently de-biased, free of untenable privacy violations, and as explainable as its use case requires. It’s possible to grasp that at some level and still not believe an ethicist is needed to accomplish those goals. So let’s address two of the commonly perceived substitutes for an AI ethicist, and why neither quite fits the bill.
Attorneys alone are not a substitute
You absolutely should have a lawyer on your AI ethics committee: specifically, one well versed in privacy and bias issues. But an attorney is not an ethicist. Staying in compliance with the law is a worthy goal and must be taken seriously, but remember that your company can act legally and still find itself in the midst of a PR crisis or an internal employee revolt. What’s more, the law lags far behind technological innovation, leaving a large gap between current laws and regulations, on the one hand, and what can be done with AI, on the other. Inside that widening gap sit the ethical and reputational risks of AI.
Engineers alone are not a substitute
When it comes to AI ethics, making better software is not the issue. Even the most thoughtful machine learning developers are not ethicists, and they do not have experience articulating organization-wide values or designing ethics committees with corporate power structures in mind. Moreover, most ethical risks cannot be managed with better technology alone: procuring the best training data for your AI application is a data science problem, and solving it in the most efficient way might conflict with your company’s ethical values and risk management goals. In other words, the more powerful and effective version of a product might also be the riskier one. How do you resolve that conflict in a nuanced way that doesn’t unduly slow down innovation? This is where the combination of the technologist and the ethicist is needed: to mitigate risk without throwing cold water on your company’s worthy innovation ambitions.
Who should be in charge of AI ethics?
There’s a limit to what an ethicist can do for you: they can’t be the one in charge. A member of the C-suite must own ethical risk management as a priority, because the consequences of not getting it right are potentially very damaging, if not existential. There’s no one standard executive who must own this in every organization, such as the CTO or CDO: developing an ethical risk management program that delivers is too complex for a simplistic rule of thumb. But the owner must be part of the C-suite; otherwise, they won’t be able to make real change happen across multiple departments of your organization.
So while I’ve written quite a few words about the crucial role of AI ethicists in managing risk, an AI ethicist cannot be in charge of AI ethics for your company. Virtue can help design your strategy, create a framework that includes how to craft your AI Review Board or Ethics Board, design operating procedures, augment existing workflows, and create customized metrics (KPIs and OKRs) related to responsible and ethical AI. But the buck cannot stop with us or with the AI ethicist.
Why? Putting anyone but a member of the C-suite in this ultimate position of authority is an easy way to signal to your employees that AI ethics is not taken seriously by the organization, which effectively prevents risk management from ever getting off the ground. Further, only a C-suite owner has the authority to ensure that policy is thoughtfully designed, implemented, and complied with in a way that is relevant to, among other things, employee performance reviews.
Does your company need a chief AI ethics officer?
Perhaps you have the opposite reaction and decide you might be best off creating a new C-suite position called the Chief AI Ethics Officer and empowering them with all of these duties. This is not uncommon and comes from a well-intentioned place. But is it a good idea?
Sometimes, maybe. But in other cases it’s not. As we’ve discussed, I urge you to resist the temptation to dump all of the responsibilities, expertise, and perspectives that go into an AI ethics team into one person’s lap. Creating or elevating a Chief AI Ethics Officer role puts you in danger of doing exactly that: punting these responsibilities entirely to the AI ethicist and simply giving that person a C-suite title. (This pitfall is routinely realized in the DEI space.)
In our work with clients, we standardly recommend that an AI ethical risk program be rolled out in phases, and that the creation of a new role like Chief/VP/Director of AI Ethics be tied to a milestone within those phases. That said, if the organizational commitment to AI ethical risk management is sufficiently high and there is cross-functional alignment on the need, creating a new executive role to lead the charge can be an effective strategy.
Barring that existing buy-in, build a working group that reports to an executive sponsor. Just make sure a competent AI ethicist is involved.
How Virtue can help
I’m Reid Blackman. As the CEO of Virtue Consultants, I work with enterprises, alongside my team of AI ethicists and experts in risk, compliance, and cybersecurity, to implement comprehensive approaches to AI ethical and regulatory risk mitigation. Our experts build and integrate AI ethics systems into legal, regulatory, compliance, performance improvement, and cyber defense frameworks — tailored to each client we work with. I also speak and advise on this topic. Consider reading more about my approach in my book, Ethical Machines (Harvard Business Review Press, 2022).
If it’s the right time for you to set up an ethical risk management strategy consultation, feel free to contact me by email or read more about my advisory work here.