An Interview with Ben DiPietro of LRN
Founded in 1994, LRN has helped hundreds of companies with issues of ethics and compliance. Ben DiPietro wanted to learn more about the cutting edge of ethical risk management and interviewed me for their newsletter. You can find the original here, though I’ve copied it below.
Reid Blackman is the founder and chief executive of Virtue, a company devoted to working with technology businesses to make sure ethics are a part of their risk management practices. He shares his insights on how ethical risk analysis can improve companies, and how ethical frameworks can shape development of artificial intelligence.
What is ethical risk management? In what ways does it differ from the type of risk management practiced at most companies?
Blackman: Today’s consumers and employees want to purchase from, and work for, ethical companies. Ethical risk management is about identifying the ways companies are at risk of falling short of that standard and devising strategies to shore up those vulnerabilities. This is not legal risk management. When companies go viral for the wrong reasons, or get beat up on Glassdoor, it’s often not due to a failure of legal compliance. It’s because the company prioritized profit over people, short-term gain over long-term sustainability, shareholders over stakeholders. Ethical risk management is about getting a business’s ethical house in order to guard against consumer and employee fallout.
What are the components of an ethical risk analysis? What can this tell a company? Once completed, how can they best apply the results?
Blackman: At Virtue our ethical risk analyses have two points of focus: organizations and emerging technology products (such as artificial intelligence, virtual/augmented reality, and biotech). When we look at organizations, we look to see how well the ethical values of the company are defined, if at all, and the extent to which there are processes built around those values. When we look at emerging tech products we perform a systematic and exhaustive ethical risk due diligence process. Both approaches tell us where a company is vulnerable to running afoul of the ethical standards of consumers and employees. Not only that, they tell us what’s high risk, what’s medium risk, and what’s low risk. That makes the results easy to apply: tackle the high-risk areas first by building processes that systematically address those vulnerabilities.
Why did you start Virtue, and why focus on the tech sector?
Blackman: I was an academic ethicist for nearly 20 years, but I was also a business owner--I co-founded and operated a fireworks wholesaling company for 15 years--and I always wanted to combine my love of ethics with my love of business. But it wasn’t until a couple of years ago that I saw ethics in business mattered in a way it hadn’t before. Millennials were making their voices heard with the power of social media, and businesses were literally paying for it. I saw that being ethical is now a necessity for businesses to protect the bottom line, and once I identified the return on investment for ethics, I jumped in.
I focus on the tech sector because engineers are ringing the ethics alarm bells around a range of new technologies: artificial intelligence, virtual/augmented reality, gene editing, and more. The opportunities are huge, but so are the risks, and businesses that run afoul of ethical standards will lose consumer confidence and employee loyalty. Recognizing this risk, combined with the intrinsically fascinating topic of emerging technologies, naturally led to my focusing on it for Virtue.
What are the two things people should be excited about as it relates to artificial intelligence? Two things that should cause them concern? How can ethical frameworks and standards help tilt us toward the good benefits and away from negative outcomes?
Blackman: There are a lot of things to be excited about, and sometimes those are also the things that should cause concern. Self-driving cars are exciting, for instance. They have the potential to reshape cities, what it means to commute to work, whether car ownership will make sense for most people, whether travel can be safer than ever. But they’re also a cause for great concern, most obviously concerns relating to safety and the ethical “decisions” such cars will have to make when impending accidents force a tradeoff between the lives in column A and the lives in column B.
Perhaps the biggest concern with all of this, though, is the way artificial intelligence--or more specifically, machine learning--requires massive troves of data that are standardly collected without the meaningful, informed consent of those whose data it is. This is where massive breaches of privacy are already occurring and, if they go unchecked, will continue to spiral out of control. Consumers are largely ignorant of what’s going on, but after Facebook’s Cambridge Analytica scandal they’re wising up. Once they really understand, big tech may face a backlash.
One more area that’s exciting and cause for concern is automation. Artificial intelligence is increasingly allowing a range of tasks to be automated. As some people put it, AI takes care of the dumb, the dangerous, and the dull. So AI might free people from those kinds of work. The cause for concern, of course, is mass unemployment as a result. AI will also create many jobs, but there is a huge problem with regard to determining who will pay for already-out-of-school workers to be retrained. Businesses won’t want to foot the bill, and we’re unlikely to see a governmental program--at least on the national scale--that fixes the problem. I actually think this is where some businesses can lead and have a big impact. The businesses that choose to take care of these workers by retraining them will benefit not only from the educated workforce, but also from the boost to their reputation for taking on the moral responsibility of caring for those whose jobs are eliminated by AI.
When it comes to an ethical framework, I prefer to think in terms of a due diligence process. At Virtue we created an ethical risk due diligence process that systematically and exhaustively identifies the ethical risks of emerging tech products. The businesses that are smart enough to look around the corner will also be the ones that avoid disaster.
You spent the first part of your career in academia, focused on ethics. How does the academic discussion of ethics differ from its practical application in companies? Can organizations use academic research to improve their ethics programs and initiatives?
Blackman: Academic ethicists often talk about fascinating questions that are irrelevant to businesses. For instance, an academic ethicist might ask, “Should we have X--facial recognition technology, robots that mimic human behavior, self-driving cars?” That question is irrelevant because it’s a given that people are already employing, or will employ, those technologies. So the right question for a practicing business ethicist to ask is, “Given that we’re going to do X, what’s the way to do it that mitigates the associated risks?”
I’m not sure organizations can use academic research to improve their ethics programs; ethics research tends to be pretty obscure. But I do think they can use academic researchers to translate that work into meaningful, transformative processes that mitigate ethical risks. I also think they can use those researchers as members of advisory panels. Academic ethicists are trained to think big picture and systematically about ethical issues, a skill that is hard to master. Businesses would do well to have their deliberations shaped by the input of ethicists.
What are two things organizations can do better to create ethical, values-based cultures?
Blackman: I’d say there’s one thing with two parts. First, they need an ethics statement that goes beyond legal compliance and the boilerplate corporate language of “trust” and “integrity.” Today’s ethical risks are about more than the former, and the latter is too “thin” to be actionable. Organizations need ethics statements with some meat on the bones. Second, and crucially, those ethics statements need to be translated into concrete processes and practices. Organizations can’t rely on employees to read that statement, have a change of heart and mind, and strive to live up to the values of the business. They need to create processes that are a part of everyday work life so that the values of the business are realized systematically.