What Is an Ethics Consultant and How to Tell if Your Organization Needs One
What is a business ethics consultant?
“Business ethics” is a broad term that refers to issues ranging from corporate governance to how a company treats its customers and employees. A business ethics consultant helps enterprises build better, fairer products and mitigate their ethical risk. They should hold a doctorate in philosophy with a specialization in ethics, ideally complemented by advanced training and deep experience in the ethical issues of your industry. In addition to academic credentials, they also need practical experience inside large corporations, as well as strong communication skills.
This article focuses on the enterprise use of business ethics as it relates to AI and machine learning. I’ll discuss how and why ethics consultants can be useful in mitigating the unique and myriad ethical risks associated with AI product procurement, development, and use.
What do ethics consultants do?
Ethics consultants aren’t making business decisions for you: they are not arbiters of your product or market strategy. They’re not even there to make ethics-related decisions for you. Instead, think of an ethics consultant — or, depending on the consultant’s specialty and background, an ethics strategist — as the architect of your enterprise’s ethics strategy and overall framework.
This governance framework will serve to guide internal ethics-related conversations and help the organization make specific ethics decisions: for example, about this product or that AI model. An ethics consultant may also serve as a facilitator of your internal ethics committee, though not as a voting member.
They’re there to help develop the organization’s ethics strategy and guide a cross-functional conversation about ethical risk, by bringing specialized training and knowledge that data scientists, lawyers, software developers, and business people don’t typically bring. Their goal is to help these disparate groups work together to engage in clear and productive ethical deliberations, assessing strategic questions thoughtfully.
Virtue Consultants (my firm) primarily works with enterprises to develop comprehensive ethical risk management strategies. Virtue also works at the project or model level of ethical risk management, but the bulk of our consulting work is done at the executive level: we build strategic frameworks and roadmaps, and provide implementation oversight.
Why do enterprises hire ethics consultants?
Large organizations procuring or building products with artificial intelligence (AI) are particularly interested in enterprise-wide AI governance and risk management. Hiring an ethics consultant or ethics strategist to develop the risk management framework and help facilitate an AI ethics committee or working group is an ideal way to tackle this problem.
Why? Too often, business leaders are highly motivated to manage ethical risk well, but they don’t know how to start. Ethical risk is hard to mitigate when you don’t have a clear grasp of the underlying concepts and problems, and technologists by themselves bring a perspective that is only part of a whole.
Organizations often devise a list of corporate values like “fairness” and “transparency” as part of an AI ethics statement, but they don’t know how to operationalize that list into policies, procedures, and metrics, especially while keeping innovation alive and well. They also tend to harbor the nagging feeling that they haven’t thought through all of the relevant ethical problems and challenges deeply enough.
The role of the ethics consultant and strategist is to clarify the ethical risk management framework, optimize it, and implement it across the organization.
How ethics consulting programs are structured
The next question is: how exactly do ethics consultants operate, and how do they develop strategies and implement them so that they effectively mitigate ethical risk?
There is no off-the-shelf approach to AI ethics. Complexity abounds: companies value different things, operate in different markets and cultures, and are bound by different regulatory regimes. However, in broad strokes at least, a consulting engagement will often look similar to the following:
1. Look to the existing governance structures and processes to help incorporate AI risk strategy decisions
If the organization has an existing governance board that discusses privacy, cyber, compliance, and other data-related risks, this is often a good starting point. If there is no existing governance board, the organization should form one — and include third-party AI ethicists that can help them think through key issues.
In a lighter-touch framework, the product management team feeds discussion topics and proposes issues to the board, which can then escalate those to the C-suite when needed. Alternatively, a more stringent framework involves alert criteria or a flagging system, such that when certain areas are involved (types of data, types of end-customers or stakeholders affected, and so forth) the project must be sent to the ethics board or other governance structure for a consult, the results of which must be respected. This approach is common in the case of high-risk applications of AI as defined by the EU’s AI Act.
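To illustrate what such a flagging system can look like in practice, here is a minimal sketch of an intake rule that routes projects to the ethics board. Everything in it, the data categories, decision domains, and trigger logic, is a hypothetical example rather than a standard checklist; the real criteria come out of your own governance framework and applicable regulation.

```python
# Hypothetical sketch of a project-intake flagging rule for an AI ethics board.
# The trigger criteria below are illustrative examples, not a standard checklist.

from dataclasses import dataclass, field

# Data categories and decision domains that, in this sketch, trigger a mandatory consult.
SENSITIVE_DATA = {"biometric", "health", "financial", "location", "children"}
PROTECTED_DECISIONS = {"hiring", "lending", "medical_triage", "law_enforcement"}

@dataclass
class AIProject:
    name: str
    data_categories: set = field(default_factory=set)   # e.g. {"health", "location"}
    decision_domains: set = field(default_factory=set)  # e.g. {"lending"}
    fully_automated: bool = False                        # no human in the loop

def requires_ethics_consult(project: AIProject) -> list[str]:
    """Return the reasons (if any) this project must go to the ethics board."""
    reasons = []
    if project.data_categories & SENSITIVE_DATA:
        reasons.append(f"uses sensitive data: {sorted(project.data_categories & SENSITIVE_DATA)}")
    if project.decision_domains & PROTECTED_DECISIONS:
        reasons.append(f"affects high-stakes decisions: {sorted(project.decision_domains & PROTECTED_DECISIONS)}")
    if project.fully_automated:
        reasons.append("makes fully automated decisions with no human review")
    return reasons

# Usage: an empty list means the lighter-touch path applies; otherwise escalate.
project = AIProject("credit-scoring-model", {"financial"}, {"lending"}, fully_automated=True)
print(requires_ethics_consult(project))
```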
While executives won’t be able to identify in-the-trenches risk points as easily as front line product team members, protecting the brand from reputational, regulatory, and legal risk is ultimately a high-stakes responsibility that must fall to the C-suite.
2. Form a framework of ethical issues that suits your industry
Clearly document the ethical standards — the Do’s and Definitely Don’ts — that the organization will abide by. Outline the structure of your governance committee, who will be on it, and why that mix of stakeholders makes sense. As the organization changes and matures, describe how roles will be filled without assuming certain individuals will stay put at their jobs indefinitely.
This framework should also weave ethical risk mitigation into operations, offering practical guidance for data scientists, product managers, software developers, and senior leadership. Your industry will determine how this framework is created. Considerations differ in finance (digital identities, international transactions), in health care (extra protections built around privacy, precision medicine considerations), and in retail (recommendation engine development, product association and stereotype mitigation).
3. Build ethical practices into granular product guidance
Data scientists need to know how to make tradeoffs when training their models, and product managers need granular guidance on how to build products.
One example: teams must know when to prioritize accuracy and when to prioritize explainability, that is, outputs grounded in plain-English reasons that can be explained easily to others. Internal tooling can often help; for example, an algorithm that scores the relative importance of explainability for a given product concept based on the sensitivity of what it’s supposed to do. Some ML applications carry a lower risk of bias or privacy violations and, as such, can be tuned for more accuracy and less explainability. For other applications, such as those whose outputs are governed by regulations, the guidance will be the opposite.
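To make this concrete, below is a minimal sketch of what such a scoring tool might look like. It is a hypothetical illustration, not a prescribed methodology: the factors, weights, and thresholds are assumptions that your own governance framework would need to define.

```python
# Hypothetical sketch of a tool that scores how important explainability is
# for a proposed ML application. Factors and weights are illustrative only.

def explainability_priority(
    affects_individual_rights: bool,   # e.g. credit, hiring, or housing decisions
    regulated_output: bool,            # e.g. decisions subject to adverse-action notices
    uses_sensitive_attributes: bool,   # e.g. health or demographic data in features
    human_reviews_output: bool,        # a person checks the model's output before action
) -> str:
    """Return a coarse priority level: 'high', 'medium', or 'low'."""
    score = 0
    score += 3 if affects_individual_rights else 0
    score += 3 if regulated_output else 0
    score += 2 if uses_sensitive_attributes else 0
    score -= 1 if human_reviews_output else 0

    if score >= 5:
        return "high"    # favor an interpretable model, even at some cost to accuracy
    if score >= 2:
        return "medium"  # use post-hoc explanations and document known limitations
    return "low"         # accuracy can be prioritized; keep basic documentation

# Usage: a regulated lending model with no human review lands in the 'high' bucket.
print(explainability_priority(True, True, False, False))
```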
4. Communicate ethical issues clearly in your organization
Everyone who works with the company’s data, particularly those involved in AI product development and deployment, must understand the AI ethical risk policy and the associated procedures as they pertain to their role. This should not be handled on a need-to-know basis: senior leadership must affirm their commitment to ethical risk management openly, loudly, and often. Conversations about managing ethical risk should be encouraged, creating a culture in which a data and AI ethics strategy can be successfully built and maintained. Communicate, in particular, that this is not about PR but a core part of the company’s value set.
I also want to emphasize that a great ethics consultant does not work alone: when Virtue engages with an organization, we bring in consulting partners with expertise in legal, cybersecurity, HIPAA compliance, and other specialized areas, depending on our client’s industry. This team approach allows us to bring multiple outside perspectives to battle-test the governance framework we build and the way we help implement it.
Operationalizing data and AI ethics is not an easy task. It requires buy-in from senior leadership, cross-functional collaboration, and strategic integration of the ethics risk program into other enterprise priorities. Companies that make the investment, however, will not only see mitigated risk but also more efficient adoption of the technologies they need to forge ahead. And finally, they’ll be exactly what their clients, consumers, and employees are looking for: trustworthy.
How Virtue can help
I’m Reid Blackman. As the CEO of Virtue Consultants, my team of AI ethicists and I work with enterprises to implement comprehensive approaches to AI ethical and regulatory risk mitigation. Our experts build and integrate digital ethics systems into legal, regulatory, compliance, performance improvement, and cyber defense frameworks — tailored to each client we work with. I also speak and advise on this topic. Consider reading more about my approach in my book, Ethical Machines (Harvard Business Review Press, 2022).