People Analytics and AI in the Workplace: Four Dimensions of Trust

AI and People Analytics have taken off. As I’ve written about in the past, the workplace has become a highly instrumented place. Companies use surveys and feedback tools to gather our opinions, new tools monitor email and our networks of communication (organizational network analysis, or ONA), we capture data on travel, location, and mobility, and organizations now have data on our wellbeing, fitness, and health.

And added to this is a new stream of data that includes video (every video conference can be recorded, and more than 40% of job interviews are recorded), audio (tools that record meetings can sense mood), and facial recognition that can identify us wherever we are.

In the early days of HR analytics, companies captured employee data to measure span of control, the distribution of performance ratings, succession pipeline, and other talent-related topics. Today, with all this new information entering the workplace (virtually everywhere you click at work is stored somewhere), the domain of people analytics is getting very personal.

While I know HR professionals take the job of ethics and safety seriously, I’d like to point out some ethical issues we need to consider.

The Risk of Data Abuse

First, let me give you a little motivation. While you may go out and buy a great new employee engagement tool or “retention risk predictor” from your HR software company, these new systems bring risk. When you buy the system you really don’t know how it works, so every decision, recommendation, or suggestion it makes becomes your organization’s problem.

Suppose, for example, you use Pymetrics, HireVue, or another advanced assessment technology to assess job candidates. While these vendors work hard to remove racial, gender, and generational bias from their tools, if you implement them and a job candidate sues you, your company is responsible. And this happens all the time. (Read how Amazon inadvertently created its own gender-biased recruitment system.)

I had this happen to me. We were interviewing a candidate for a secretarial position many years ago, and I had to go out of town the day of the interview. The candidate came to the office and our office manager told her we had to reschedule the meeting. She immediately sued us for discrimination, because she was a member of a protected class.  I felt terrible and we paid her for her time, but I can see how she felt.

Let me point out another example. A company that turned on the “retention predictor” in its HCM system told me that managers looked at these ratings and did all sorts of strange things when they saw a flight risk. Some managers actually stopped talking to these people and reduced the support they got at work, presumably thinking “they’re thinking about leaving anyway.” Obviously, this is not good management, but if we don’t present this data well, people will use it incorrectly.

And of course, there are other things that could go wrong. If you have access to employee health data and use it to assess or discuss an employee’s performance, I’m sure you’re in legal jeopardy (I’m not an employment lawyer). If you leak or inadvertently publish employee health data, you violate HIPAA rules.

There are lots and lots of places to get in trouble. Just look at what’s happened to Facebook, Equifax, Marriott, and every other major company that thought it was protecting its data. People make mistakes and employees do bad things; we have to protect the data, the algorithms, and the managerial behaviors around them.

And as AI becomes more prevalent, we no longer see the data but rather we see a “nudge” or “recommendation.” What if that “nudge” is biased in some way and an employee becomes upset? Do you know how that software works and can you go back and make sure it didn’t discriminate based on some incorrect criteria? 

Finally, if you’re an analyst and you do your own analysis (read my article on ONA and how email traffic predicts performance), are you ready to defend your findings and recommendations when they come under attack? If someone challenges your findings and wants to break the data down by age, gender, race, or even location or season, are you ready to show that it’s valid and reliable? I know we can do this with statistical tools, but we do have to be careful.

And remember, trust is one of the most important things we have in business today. Not only might you lose your job if something bad happens, but the damage to company reputation can be enormous.

What Should We Do?

I’ve been doing a lot of work in this area, including spending quite a bit of time with IBM, the folks at O’Reilly, and of course talking with many HR leaders, people analytics leaders, and vendors.  To help you understand the issue of ethics with people analytics, let me present the following framework.

[Figure: ethics and trust framework for people analytics and AI]

As you can see from this framework, there are two dimensions to ethics.

  • First, is the data and algorithm you’re using fair?  Does it accurately reflect the performance or productivity data you want without excluding, discriminating, or inadvertently biasing the result? This is tricky and I’ll discuss it below.
  • Second, is the data system and algorithm safe?  Are we protecting privacy, confidentiality, and security? Who has access and how do we audit its use and path through the company? This is a well-known problem in IT but now one we have to deal with in HR.

When you look at these two dimensions, you essentially find that there are four dimensions to trust.

[Figure: the four dimensions of trust]

1. Privacy

The first ethical issue to consider is privacy. As the chart above shows, companies like Facebook, CVS, Yahoo, and many others have gotten in trouble here. When employees join your company they give you the right to collect a lot of data about them, but we as employers do not have the right to expose this data, share it, or link it with personally identifiable information.

Under GDPR rules, organizations also have to “forget” this data if the employee asks, so there are some important business practices to consider. The questions above all deal with issues of disclosure and protection: who can access this data, and have those people been trained on privacy rules and procedures?

At Deloitte, all consultants take a mandatory annual course on privacy, our PCs are scanned, and we are trained not to store any client information in a form in which it could be disclosed. In HR, we need to tell employees what data we are collecting and make sure they understand that this data is being used for positive purposes.

While many of us may feel comfortable sharing our personal stories on social media and other places (I personally don’t), others are much more private, so even an internal employee directory can be problematic. A large tech company recently told me about an engineer who built an internal social network showing which employees worked in which office and what jobs they had held in the past. Employees were very upset to “discover” this website and, because they were not consulted in advance, protested its use. The engineer, who was just trying to make the company a better place to work, had to shut the system down.

And the amount of data being captured keeps increasing. One of the fastest-growing areas in L&D, for example, is virtual reality (now called immersive learning). VR programs capture all kinds of individual performance data: your attention span and eye movement, as well as your ability to deal with stress. Pymetrics assessments measure your risk-taking tendencies and cognitive processing. This type of data may be useful for its intended purpose (training, job fit) but can also be misused if not kept private.

Tell people what you’re doing, explain the “opt-in” policies you have, and make sure you have good privacy policies in place for all employee data. (GDPR rules mandate that you obtain such consent, and also that you enable employees to see what data you collected.)
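To make this concrete, here is a minimal sketch in Python of what a consent record for a single data stream might look like. The ConsentRecord class, its field names, and the data-stream labels are illustrative assumptions, not part of any vendor’s product or the GDPR text; the point is simply that opt-in status, the documented purpose, and a way to show employees what you hold about them should be recorded explicitly.

```python
# Minimal sketch of a consent record for one stream of employee data.
# The class and field names are illustrative, not a standard or vendor API.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    employee_id: str
    data_stream: str          # e.g. "engagement_survey", "badge_location"
    purpose: str              # the documented reason for collecting the data
    opted_in: bool
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def subject_access_export(self) -> dict:
        """Return what the employee is entitled to see about this consent."""
        return asdict(self)

# Usage: record an opt-in, then show it back to the employee on request.
record = ConsentRecord("E1001", "badge_location", "office space planning", opted_in=True)
print(record.subject_access_export())
```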

2. Security

The sister of privacy is security. Is the data stored and protected in a place where others cannot find it? Do you have password policies, encryption, and other data-protection practices in place so an employee can’t take the data home, send it to a third party, or accidentally release it onto the internet? These are IT issues that all companies have to deal with, and when we hold sensitive information like pay, job history, healthcare data, and other personal details, we have to protect it well.
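As one small illustration, here is a hedged sketch of encrypting a sensitive employee field at rest using the Python cryptography package’s Fernet recipe. The field name and value are made up, and a real deployment would also need key management, key rotation, and access auditing, which matter just as much as the encryption itself.

```python
# A minimal sketch of encrypting a sensitive employee field at rest
# with the `cryptography` package's Fernet recipe.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, load this from a key vault
cipher = Fernet(key)

salary_plaintext = b"base_salary=142000"        # illustrative field, not real data
salary_token = cipher.encrypt(salary_plaintext)  # store the token, not the raw value

# Only services holding the key can read the value back.
assert cipher.decrypt(salary_token) == salary_plaintext
```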

In the European Union this is now a matter of law. GDPR requires many organizations to appoint a Data Protection Officer and to design their systems for data protection. If your company lapses in these areas, it can be fined up to 4% of global annual revenue (2% for lesser infringements), which is an enormous risk.

3. Bias

The third, newest, and most difficult problem we face in People Analytics is bias. Whether you are analyzing the data yourself or buying an AI tool from a vendor, remember that all algorithmic systems are trained on existing data. And if the existing data is biased, the predictions and recommendations will be biased.

This is a very difficult problem to solve, and many organizations are working on it. (IBM Research has an excellent video on this topic.) For example:

  • Systems that try to assess fair pay will compare an employee to peers but may not understand issues of race, location, and age
  • Systems that predict retention may discriminate against minorities or others who leave the company for cultural reasons
  • Systems that assess fit to a job may institutionalize old, discriminating hiring practices that are embedded into hiring history
  • Systems that use organizational network analysis to identify performance may not realize that gender or age plays a huge role in trust and relationships
  • Systems that predict high performers will be biased toward existing highly rated individuals (who may be white men).

Every predictive analytics system you buy or build is going to have some bias built in. (In a sense, “bias” just means influence from the past, and learning from the past is exactly what AI does.)

The best thing you can do to reduce bias is to monitor and train your analytics systems. In other words, look at the predictions and recommendations a system is making and inspect them to see if the results are biased. Amazon discovered that its hiring bot was biased against women. IBM continuously monitors its internal pay-recommendation engine and online managerial coach (both powered by Watson), using “robot trainers” who constantly tune the systems to deal with new conditions.
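Here is a minimal sketch of what one such inspection might look like in Python: the “four-fifths” (80%) disparate-impact guideline applied to a model’s positive predictions by group. The DataFrame and its columns are illustrative assumptions; a real audit would slice by age, race, location, tenure, and their intersections, and run on far more data.

```python
# A minimal sketch of one bias check: the "four-fifths" (80%) disparate
# impact guideline applied to a model's positive predictions by group.
import pandas as pd

predictions = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M", "M", "F"],
    "selected": [0,    1,   0,   1,   1,   0,   1,   1],   # model's "advance" flag
})

rates = predictions.groupby("gender")["selected"].mean()   # selection rate per group
impact_ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:
    print("Selection rates differ by more than the four-fifths guideline - investigate.")
```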

I remember a company telling me years ago that its compensation policy was not working well in China, where salaries were increasing at twice the rate they were in the US. Your system may not know this, so it may bias against raises in China or overly favor raises in the US. These are not necessarily unethical decisions, but this type of bias can hurt your company.

Vendors are quite concerned about this. Pymetrics takes it so seriously that the company has open-sourced its bias-auditing algorithms. Other vendors should do the same.

When we blithely train algorithms on historical data, to a large extent we are setting ourselves up to merely repeat the past. … We’ll need to do more, which means examining the bias embedded in the data. – Cathy O’Neil, “Weapons of Math Destruction”

What can you do? Monitor, assess, and train your data-driven systems. IBM, for example, has pioneered the use of AI to improve career development, management practices, and pay. The company regularly reviews its Watson-based HR predictors and trains them to be smarter and less biased. (IBM told me that its AI-based HR chatbot now delivers over 96% employee satisfaction with its answers to questions.)

Explainable, Transparent, or Trusted AI

There is a major movement in the AI community to make systems “explainable.”  Why did the system recommend this salary change, for example? If you understand why a prediction was made, you can act on it more intelligently.

[Figure: explainable AI]
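If you build models in-house, one generic way to get a rough answer to “why did the model say that?” is permutation importance, which measures how much a model’s accuracy drops when each input is shuffled. The sketch below uses scikit-learn on a synthetic “pay adjustment” example; the features and data are invented for illustration, and this is not how any particular vendor’s explainability tool works.

```python
# A minimal sketch of a generic explainability technique: permutation importance
# on a synthetic pay-adjustment model. Features and data are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(1, 30, n),        # years_of_experience
    rng.integers(1, 6, n),         # performance_rating
    rng.integers(0, 2, n),         # remote_worker flag
])
y = 1000 * X[:, 0] + 3000 * X[:, 1] + rng.normal(0, 500, n)   # synthetic pay adjustment

model = RandomForestRegressor(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, importance in zip(["years_of_experience", "performance_rating", "remote_worker"],
                            result.importances_mean):
    print(f"{name:22s} importance: {importance:.3f}")
```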

Many vendors are building tools to detect AI bias, including IBM’s bias-detection cloud service and Audit AI from Pymetrics. MIT researchers have also released automated bias-detection techniques, as well as methods to remove AI bias without loss of accuracy. As a buyer of HR systems, you should ask about these features.

4. People Impact

The fourth dimension of trust is perhaps the most important. What is your intent in capturing this data?

As GDPR rules clearly state, it’s not OK to capture data just to “see what it might tell us.” If employees believe they are being monitored for the wrong reasons, the impact will be negative. So I believe you should sit down, document why you are capturing a given stream of data, and clearly set out a goal for the project. Facebook clearly did not do this, and it is still recovering from the reputational damage.

The big question to ask is this: why are you implementing this particular analytics or AI tool?  Will it help people? Or is it being used for monitoring or surreptitious performance assessment?  

Most of the vendors have the best of intentions.

  • Phenom People’s new talent experience platform uses AI to help job candidates find the right opening, helps internal candidates find the right job, and powers a chatbot that asks intelligent questions to understand your job needs.
  • Glint’s new Manager Concierge uses AI to recommend behavior changes and courses to help you become a better leader. ADP’s Compass tool and Zugata by Culture Amp do the same. Humu is doing this for team and operational performance.
  • Watson Candidate Assistant from IBM uses your resume to identify your skills as a job seeker and finds you the best job, dramatically improving quality of hire and time to hire.
  • EdCast, Valamis, Fuse, and Volley are using AI to recommend learning content, and BetterUp uses AI to find you the best coach.
  • Oracle, Workday, and SuccessFactors use AI for many features. Oracle HCM recommends salary adjustments and even customizes the screens you see based on your own role and behavior, simplifying the system itself.
  • And vendors like Spring Health now use AI to diagnose your mental health and recommend the right tips, counselors, or doctors.

In fact, it’s pretty clear to me that all HR technology vendors are pushing AI toward positive people impact. We as buyers, however, just have to make sure we use it well.

As an example, here are some things to avoid:

  • Do not use monitoring data to surreptitiously inform performance reviews. A financial services firm, for example, used a form of heat and motion detector to determine who was coming into the office. Yahoo famously reviewed VPN logs to see when people were working at home and when people weren’t. These kinds of activities will damage your employees’ sense of trust and almost always lead to poor decisions.
  • Do not use any form of wellbeing data for any purpose other than those legally allowed. It’s legal to use certain health data for insurance pricing; it’s not OK to use it for succession planning, performance reviews, or any other form of employee coaching.
  • Do not use training data (program performance) for performance evaluation. This not only reduces trust but could put you in legal jeopardy.
  • Do not cross boundaries between personal and professional data. If you’re tracking data from employees’ phones, make sure you are not giving people access to personal information. While the device may be owned by the company, invasion of privacy will still get you into trouble.

In fact, in most big companies there should be a legal review before you start capturing data. Does your project adhere to GDPR guidelines, HIPAA rules, and other confidentiality protections?

Remember also that AI-based scheduling and work-provisioning systems are risky tools. Marriott, for example, implemented a new system to schedule housekeepers and wound up in a union labor dispute because workers were treated unfairly. The system was driving housekeepers crazy, sending them running from room to room. In other words, it was not designed to “help people,” only to “help the company.”

The simple advice I can give is this: focus your analytics program on strategies that positively impact people. If you’re tracking people to measure work productivity and the data will be used to make work better, then you’re moving in the right direction. If you’re using the data to weed out low performers, you’re probably violating your company’s management principles.

Bottom Line: Use Good Sense, Consider Ethics a “Safety” Problem

More and more companies have hired “chief ethics officers” and other staff to help with these projects. Others are creating “ethical use committees” to make sure all analytics projects are evaluated carefully. All these are important ideas. 

Just as diversity and inclusion is more of a “safety program” than a “training program,” so is the ethical use of data. The most diverse organizations use metrics and committees to make sure their D&I strategy is reinforced. We have to do exactly the same thing with the ethical use of employee data.

And when you start a new analytics program you need a checklist of issues to consider. Ask yourself “how would it look if this program appeared on the front page of the NY Times?”  Would it damage the company’s reputation?

If the answer to this is yes, you need to do a little more homework.

Finally, let’s use the consumer experiences with data as a guide. Companies that expose massive amounts of consumer data have suffered in terrible ways. Today, trust is one of the most important business assets we have. Take it seriously and make sure your efforts to make management data-driven move in the right direction. You’ll be glad you did.

PS. Watch for much more on this topic in the upcoming People Analytics Excellence course we will launch later this summer in the Josh Bersin Academy.