Ethics Boards Won’t Save Big Tech
Tech companies need more than advisory boards if they want to create ethical products and services
On March 26, Google announced the formation of an external advisory group to help the company navigate complex questions around the ethical and responsible development of new technologies, including artificial intelligence. By April 4, however, the council had been disbanded, and Google acknowledged that the company was “going back to the drawing board.”
Ironically, Google’s new group of ethics advisors fell apart because of ethical challenges. But apart from underlining just how fragile the current state of technology ethics is, the incident attests to a much larger challenge tech companies are facing: How can a company ensure that the products it develops — especially A.I. — are as good for society as they are for the company’s bottom line?
Google’s advisory council was established to help the company implement its A.I. principles—an “ethical charter to guide the responsible development and use of AI in our research and products.”
Launched last June, the principles articulate ideals and aspirations that few would dispute, including developing socially beneficial technologies, avoiding unfair bias, and ensuring safety. They mirror similar efforts from companies like Microsoft to develop an ethical foundation for A.I. development. They also reflect frameworks such as the Institute of Electrical and Electronics Engineers' (IEEE) guidelines on ethically aligned design. At a time when there is legitimate and growing concern over the potentially harmful personal and social impacts of A.I. and other technologies, these principles are laudable.
And yet, as Google found out the hard way, framing socially responsible and beneficial development in terms of ethics is far from easy.
Part of the issue Google and other companies face is that while ethics codify social norms around what is considered right and appropriate versus what is wrong and inappropriate, they don't, on their own, provide robust mechanisms for developing safe and responsible products.
Ethics are essential to establishing a guiding basis for how powerful new technologies are developed and used. Yet they are worth little without mechanisms and processes that ensure research, development, and commercialization decisions will lead to outcomes that are socially responsible and beneficial.
Entrepreneurial ethics
I have been grappling with this disconnect between ethical aspirations and social impact since 2013, when I first started teaching ethics to budding entrepreneurs as part of the (now defunct) University of Michigan Master of Entrepreneurship. Even though this program predated the current flurry of excitement around A.I. and technology ethics, it recognized that emerging tech entrepreneurs need a strong foundation in ethics. And yet, as I developed my entrepreneurial ethics course, it quickly became obvious that these graduate students needed more than an ethical compass as they confronted the realities of launching their own startups.
As a result, the course ended up focusing on equipping students to make tough business decisions in the real world while remaining true to personal and institutional values and social norms. It focused on building skills around five pillars:
The basic principles of entrepreneurial ethics.
Personal values and how they integrate with institutional values.
Critical processes for codifying values within enterprises.
Interacting and engaging with key constituencies.
Socially responsible practices and products.
Students taking the course developed an understanding of the importance of ethics in technology innovation. But they also came away understanding how to translate good intentions into strong business practices and societally beneficial outcomes.
In teaching the course, I was especially interested in how this approach to entrepreneurial ethics could foster a culture of responsibility within the startup community. But I was also focused on the future success of these students and how practical ethics could help them navigate the innovation challenges they would inevitably face.
As I was developing the course, my research and that of others were making it increasingly clear that the broader social-risk landscape around new technologies was becoming a growing threat to entrepreneurial success. It became apparent that these future founders could have the best technical idea in the world, but if they couldn't wrap their heads around the social impacts of what they were planning, they risked failure. Six years later, this is more apparent than ever as a growing number of companies face the consequences of social ignorance.
Operationalizing values and social norms
Each year I taught the entrepreneurial ethics course, I was reminded of how most entrepreneurs really are striving to make the world a better place. These students weren't in it for the money (at least most of them weren't). Rather, they wanted to cure disease, curb climate change, and protect the environment. They were driven to transform their values into a future that looked better and brighter for others.
The trouble was that their values were naive and ephemeral. They had good intentions but no idea how to translate those intentions into good practice. As a result, their values were all too easily subsumed by the harsh realities of building a business. These students needed a way to hardwire their values and aspirations into their businesses so they could weather the tough choices every innovator encounters.
My students needed a solid foundation in ethics, but they also needed a way to operationalize what was important to them so their enterprises didn't fail at the first ethical hurdle. More than this, they needed to understand how to navigate and thrive in a complex social landscape where every action they took potentially threatened something of importance to someone else, and where social norms and expectations could throw up barriers that no amount of technical expertise could prepare them for.
These are the types of real-world challenges that I set out to prepare my students for, and they are not the types of challenges that are readily addressable through convening ethics boards or advisory groups.
This is reflected in more practical approaches to technology ethics, such as the IEEE approach to ethically aligned design, which sets out to “provide pragmatic and directional insights and recommendations” to technologists, educators, and policymakers. The IEEE approach begins to help operationalize ethical approaches to technology innovation. And yet, despite the availability of resources like this, there’s still a remarkable level of naivety around the use of ethics boards and similar advisory structures within technology companies. And this is partly why Google’s efforts to get things right imploded.
Beyond ethics
Much of the current naivety around A.I. ethics, and technology ethics more broadly, arises from assuming that developing ethics guidelines and convening advisory groups are sufficient to ensure socially responsible innovation.
Unfortunately, they are not.
At worst, ethics advisory boards can easily become a smoke-and-mirrors attempt to mask business as usual under the guise of social responsibility. But as companies like Google and others are beginning to discover, people aren’t so easily fooled.
At the other end of the spectrum, ethics advisory boards have a valuable role to play in helping set the ground rules and boundaries for innovation practices. But on its own, this does little to ensure socially beneficial innovation unless there are associated mechanisms to operationalize ethical rules, boundaries, and expectations within an organization. As The Verge recently reported, the evidence so far is that such boards do little to affect actual outcomes.
So, what are the alternatives? How does a company like Google go about ensuring that its actions and products increase societal value while causing as little harm to society as possible?
The first step is not to throw the ethics baby out with the bathwater. The ethics of technology innovation are important. They are essential for solidifying nebulous good intentions into concrete values that reflect social norms and expectations.
But ethics alone aren't enough. Businesses also need to focus on outcomes, and on the often tortuous pathway between aspirational goals and what happens when the rubber hits the road.
This is where ethical innovation and social responsibility take a very practical turn and require developers and decision-makers at every step of the process to be working toward creating products and services that do good without causing harm.
Delivering on socially responsible and beneficial outcomes requires a plethora of tools and processes, including internal and industrywide standards, measurable expectations, enforceable checks and balances, meaningful policies, and a culture of social responsibility. It also demands buy-in across the complete enterprise ecosystem, from employees and executives to investors, business partners, consumers, affected communities, and regulators.
And most important, it requires a strategic commitment to training and education, ensuring that employees and business leaders two, five, and 10 years from now have the skills they need to build successful enterprises that are socially responsible. This in turn will require an openness to new skill sets that don't fit old and outmoded job titles, and a willingness to engage with experts who understand how to innovate responsibly and successfully in a rapidly changing world.
What we don’t need are ethics boards that are more of a liability than an asset.
Of course, technology ethics (including A.I. ethics) are vitally important — they are at the very heart of what it means to innovate responsibly. Yet many tech companies, especially in the A.I. sector, have fallen into the trap of equating ethics with success while skipping over the hard stuff in between. They’ve become enamored of the allure of ethics boards and ethical principles as a way to blunt criticism, instead of as an avenue to improve their companies. As a consequence, tech companies have risked becoming blinded to the much tougher challenge of successfully running a business while creating social value.
This is where companies like Google are desperately in need of an ethics reset. Not so they can abandon their commitment to ethical innovation, but so they can make it count where it matters.