The Ethics of Artificial Intelligence: It’s Trickier Than You Think

Recently I had the opportunity to join the O’Reilly Foo Camp, where around 150 leading technologists, academics, economists, and business innovators came together to talk about the future of technology. It was an inspiring and energizing few days, and almost every session seemed to revolve around the ethics, responsibility, and potentially dangerous role of AI.

This is an important topic, and one being discussed all over the world. Intelligent algorithms will inform our doctors, our travel, our shopping, and of course our news and information. In the world of HR and business management, AI has incredible potential (I detail some of the applications in AI in HR: A Killer App), but it brings risks as well.

In this article I’ll briefly discuss some of the ethical issues behind AI in social media, and share what we discussed as a group.

Social Systems: Intelligent Targeting Driven by AI

While there are hundreds of social networks in the world, the three most prominent players in the US market are Google (“Don’t be Evil”), Facebook (“Connect the World”), and Twitter. These three companies primarily make money through advertising.

Note: Amazon is also a social media company (research shows that 45-50% of product searches start on Amazon) and LinkedIn serves almost 600 million users as well, but their business models are based on product sales or other services.

While Google is rapidly expanding its fee-based services, advertising remains the core business for all three, and advertising is essentially a business of targeting. The more effectively advertisers reach their intended audience, the more they are willing to pay.

The business of capturing our attention is not new, by the way – newspapers and magazines have done this for years. (The book The Attention Merchants by Tim Wu explains it magnificently.) What is new is how much data these companies can collect and how effective their AI has become at targeting.

Facebook, for example, now has AI-based ad-targeting technology with more than 10,000 features, each focused on making sure advertisers reach precisely the right demographic group, geographic segment, or person. These algorithms analyze click streams, mouse movements (Facebook knows where your mouse moves and how long you “dwell” on an image, for example), and your behavior online, so the system can essentially predict what you are most likely to read. Powerful stuff.
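To make the mechanics concrete, here is a minimal sketch (not Facebook’s actual system) of how behavioral signals such as dwell time and past click behavior can feed a model that scores how likely a user is to engage with a given ad. The feature names and data are hypothetical.

```python
# A minimal sketch of behavioral ad targeting: predict how likely a user is
# to click an ad from simple behavioral features. Illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per (user, ad) pair:
# [dwell_seconds_on_similar_content, past_click_rate, matches_target_demo]
X = np.array([
    [12.0, 0.30, 1],
    [ 0.5, 0.02, 0],
    [ 8.0, 0.22, 1],
    [ 1.0, 0.05, 0],
])
y = np.array([1, 0, 1, 0])  # 1 = the user clicked the ad

model = LogisticRegression().fit(X, y)

# Score a new impression: the higher the predicted probability, the more
# an advertiser would be willing to pay to show this ad to this user.
new_impression = np.array([[10.0, 0.25, 1]])
print(model.predict_proba(new_impression)[0, 1])
```

Real targeting systems use thousands of such features and far more sophisticated models, but the underlying idea is the same: behavior in, predicted attention out.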

While this technology was never intended to do harm, as we all know it can be misused. Political actors can use it to sway our opinions; people can masquerade as others and promote a biased version of the truth; and many studies have now found that the algorithms hurt our psychological well-being. These are unintended consequences, fueled by algorithms and AI.

Is the platform somehow unethical? What should these companies do? Do we need a new, refreshed set of principles for AI-based social systems?

(Clearly we do. Readers should look at the work of Tristan Harris, founder of Humanetech, who is trying to unravel this problem and create standards to help engineers make these systems more useful.)

Engineering Safety and Ethics in Mechanical Design

History can give us some advice: the problem of engineering “safe use” has been solved before. Let me cite a few examples.

I spent my college years studying Mechanical Engineering and Physics. As mechanical engineers, we are taught how to calculate stress, strain, material properties, and a variety of other physics-related factors that may cause a bridge or other mechanical device to collapse. We know that if our machine breaks, bends, or fails to operate as designed, we are at fault, we may hurt people, and we could easily be liable. As a result, we have spent decades building safety factors, design guides, accreditation programs, and professional practices to make sure bridges, machines, and materials are safe.

One of the panelists gave the example of the airline industry. While the early airplanes were dangerous and often killed people, over the last century we have developed safety programs, manufacturing practices, and integrated testing and certification programs that have made air travel one of the safest ways to get around. (Today your lifetime odds of dying in a car crash are about 1 in 114, versus 1 in 9,821 of dying in any form of flight incident.)

In both of these cases, engineers are now responsible for safety. They know what can go wrong (we’ve studied failures for hundreds of years) and over decades we have engineered away most errors, failures, and design mistakes. It wasn’t easy, and it took a lot of time.

In social media, we aren’t even close to figuring this out.

Facebook’s Challenge With Employment Discrimination

Consider the most recent issue Facebook faces: an employment discrimination lawsuit for enabling employers to target job ads to people of a certain age, race, or other demographic. The features that make Facebook a fantastic ad-targeting platform also turn it into a tool for potentially illegal job ads that could discriminate against older people. (The lawsuit was filed by the Communications Workers of America on behalf of a job-seeker who was over the age of 45 and did not “see” job ads that her daughter did.)

Fig 1:  Facebook Ad in Question, from Vox Media

The Facebook ad platform performed as designed. 

Did Facebook intend an advertiser to break the law? Did it create an “unethical system”?

Of course not. Perhaps Facebook should have been more aware of this potential legal issue, but you can be sure the company will consider these laws going forward.

What is the solution? It’s not an easy answer.

When I talked with Rob Goldman, the engineering lead for Facebook’s ad platform, he explained how he is struggling to find a way to prevent such abuse in the future. Should Facebook ask your race, for example, so the company can algorithmically assure that job ads and other regulated services are evenly distributed across racial groups? Should the ad platform warn advertisers not to use it in a discriminatory way? These are not easy questions to answer.

These Are Not Simply Engineering Problems: They Are Also Leadership Issues

As we discussed these issues in Sebastopol, the conversation was highly multi-disciplinary. At every point in the social media process there can be bias and possible abuse. For example, several people I met at Foo Camp were researching bias in social media, including the fact that most Wikipedia editors are male, which means much of what we read on Wikipedia carries a gender bias.

Yes, we agreed, AI systems must be more transparent, auditable, and fair by design. And many organizations are now working on this. One of the engineers at Foo Camp is building the world’s largest genomic database – you can bet he’s thinking about abuse and security.

And sometimes AI systems will have ethical problems by design, so we have to monitor them carefully. The very nature of a “learning system” is that it looks for patterns and trends, which may themselves be biased. If you use an AI-based recruiting tool and it tries to replicate the career success of your current workforce, for example (which is similar to what Facebook does with ads), you may wind up hiring only white males for years to come. (This is why Pymetrics, an AI recruiting company, built a “bias detector” library to help developers spot these problems.)
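One common check such tools run is the “four-fifths rule”: compare each group’s selection rate against the most-favored group and flag large gaps. Here is a minimal sketch of that idea; it is an illustration under that assumption, not the actual Pymetrics library.

```python
# A minimal sketch of an adverse-impact check (the "four-fifths rule"):
# compare selection rates across groups and flag large disparities.
from collections import defaultdict

def adverse_impact_ratio(decisions):
    """decisions: list of (group, selected) tuples, where selected is True/False."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in decisions:
        total[group] += 1
        if was_selected:
            selected[group] += 1
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    # Ratio of each group's selection rate to the most-favored group's rate;
    # values below 0.8 are a conventional red flag for adverse impact.
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical hiring outcomes: group A selected 40% of the time, group B 20%.
decisions = [("A", True)] * 40 + [("A", False)] * 60 \
          + [("B", True)] * 20 + [("B", False)] * 80
print(adverse_impact_ratio(decisions))  # {'A': 1.0, 'B': 0.5} -> flag group B
```

A check like this does not explain why a model is biased, but it makes the disparity visible early, before years of skewed hiring decisions accumulate.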

But ultimately these issues demand a responsible set of leadership principles, and this starts with the CEO.

Look at Google’s open process to let employees complain about the company’s recent decision to create a search engine for China; this is an example of CEO-led ethical decision making at work. IBM has published its own ethical standards and code of conduct for AI, which I discussed in detail with IBM’s head of research. Microsoft has also published its ethical standards for AI, which reflect one of the broadest perspectives on the problem. Every company needs to think this way.

Do we need external regulation? Right now I believe we do. Just as engineering, airline, and auto emissions safety were driven by government regulation, we need the same in AI and social media today. Look at how much momentum GDPR has created in the EU. Almost every major data-collection business is now focused on this problem, and new tools and approaches are being created every day.

Business models must mature as well. Making money cannot be the only thing these companies do. Facebook, Google, Twitter and others are now realizing that “risk” may be a bigger strategic issue than “growth.” As Rob from Facebook put it, “we may have to increase the cost of ads to fix this problem, but ultimately it’s the right thing to do.”

Ethics in AI Demands A Holistic Approach

After a weekend of debate, I came away convinced that while the topic is somewhat technical, issues of misuse, trust, and bias are ultimately problems of leadership, standards, and rules. They demand responsible leaders and a relentless focus on safety.

As we wrote in the 2018 Deloitte Global Human Capital Trends report earlier this year, social responsibility has become one of the most important strategies in business. Banks, pharmaceutical companies, oil companies, and all types of manufacturers are considering their roles as global citizens, and this mentality has to reach tech companies as well.

For AI and social media companies, the time for such responsibility is now. Creating “ethical AI” will not be easy, but we clearly need a holistic approach. None of us want plane crashes and failing bridges in our daily social lives.