I’ll never forget something that psychologist Daniel Kahneman said a couple of years ago at a people analytics conference. The keynote was largely about how algorithms can reduce “noise” (random, irrelevant factors that cloud our judgment) when we’re rating job candidates and trying to predict people’s performance. About halfway through, Kahneman made a quick, almost offhand comment that really struck me: He said he was “quite worried” about AI’s dark, dystopian possibilities, despite its great potential for good. The better AI becomes at making decisions, the less we’ll need human judgment — and that, he suggested, will threaten the power structure in organizations. Leaders won’t like that, so they’ll resist adopting the technology for their biggest, most important decisions.

That rang true when Kahneman said it, and it still does. It’s consistent with the ongoing human struggle to maintain control in the face of technological advancement, though it’s not one of the unintended consequences we usually think of regarding AI in the workplace. We tend to focus on other risks — amplifying cognitive biases, cannibalizing livelihoods, disrupting businesses. Those concerns are more than justified, but perhaps we have been a bit myopic and have overlooked something that’s as dangerous as relying too heavily on technology or allowing it to run amok: failing to reap its benefits, out of a deep, paradoxically self-destructive desire to keep calling the shots and preserve our status.

It’s an unsettling thought. But two articles in this issue of MIT SMR (and others we’ve recently published) offer useful reminders that there’s reason for optimism, too. As keen as we may be to retain power and its privileges, we also see ourselves as fair, and we want to live up to that self-image. AI is starting to help us in that regard. As Josh Bersin and Tomas Chamorro-Premuzic argue in “New Ways to Gauge Talent and Potential” — and as Kahneman himself said at the conference — AI-enabled tools can greatly reduce the role of bias in hiring decisions by screening for traits that affect performance and by disregarding those that don’t, such as the extent to which people look or sound like us. And in “Using Artificial Intelligence to Promote Diversity,” Paul Daugherty, H. James Wilson, and Rumman Chowdhury, too, urge us to hold ourselves to a higher standard of organizational behavior. They call on makers of AI systems to design, train, and refine applications that “ignore data about race, gender, sexual orientation, and other characteristics that aren’t relevant to the decisions at hand.”

Though these articles acknowledge that there’s plenty of room for our darker instincts to assert themselves, they suggest ways to bring us into the light — which feels constructive.

Lisa Burrell • December 11, 2018