During last Wednesday’s congressional hearing on Twitter transparency, Twitter CEO Jack Dorsey was pressed to answer for the damaging cultural and political effects of his company. Soft-spoken and contrite, Dorsey provided a stark contrast to Facebook’s Mark Zuckerberg, who seemed more confident when he appeared before Congress in April. In the months since, collective faith in the fabric of the internet has been anything but restored; instead, consumers, politicians, and the tech companies themselves continue to grapple with the aftermath of what social platforms hath wrought.
During the hearing, Representative Debbie Dingell asked Dorsey if Twitter’s algorithms are able to learn from the decisions they make—like who they suggest users follow, which tweets rise to the top, and in some cases what gets flagged for violating the platform’s terms of service or even who gets banned—and also if Dorsey could explain how all of this works.
“Great question,” Dorsey responded, seemingly excited at a line of questioning that piqued his intellectual curiosity. He then invoked the phrase Explainable AI, or “explainability,” a field of research he said Twitter is currently investing in, though he noted it’s early days yet.
Dorsey didn’t elaborate further, and on the hearing went. But this wasn’t the first time the CEO mentioned explainability. In August, Dorsey appeared on Fox News Radio to talk about the shadow bans Twitter allegedly enacted against conservatives. “The net of this is we need to do a much better job at explaining how our algorithms work. Ideally opening them up so that people can actually see how they work,” he said. “This is not easy for anyone to do. In fact there’s a whole field of research in AI called ‘explainability’ that is trying to understand how to make algorithms explain how they make decisions in this criteria.” Dorsey said Twitter is “subscribed to that research,” adding that the company is helping fund and lead the charge in this new field.
Most simply put, Explainable AI (also referred to as XAI) refers to artificial intelligence systems whose actions humans can understand. Historically, the most common approach to AI has been the “black box” line of thinking: human input goes in, AI-made action comes out, and what happens in between can be studied, but never totally or accurately explained. Explainable AI might not be necessary for, say, understanding why Netflix or Amazon recommended that movie or that desk organizer for you (personally interesting, sure, but hardly essential). But when it comes to deciphering answers about AI in fields like health care, personal finances, or the justice system, it becomes more important to understand an algorithm’s actions.
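To make the contrast concrete, here is a minimal sketch in Python. It is my own toy illustration, not any company’s actual system: the loan-style features, weights, and threshold are all invented. The black-box version only returns a verdict; the explainable version also reports how each input pushed the result.

```python
# Toy illustration only: the features, weights, and threshold are invented.

def black_box_decision(income, debt, late_payments):
    # Stands in for an opaque learned model: we get an answer,
    # but no account of why.
    score = 0.4 * income - 0.3 * debt - 2.0 * late_payments
    return "approve" if score > 10 else "deny"

def explainable_decision(income, debt, late_payments):
    # Same toy model, but each weighted input is surfaced as a
    # human-readable contribution alongside the verdict.
    contributions = {
        "income":        0.4 * income,
        "debt":         -0.3 * debt,
        "late_payments": -2.0 * late_payments,
    }
    score = sum(contributions.values())
    verdict = "approve" if score > 10 else "deny"
    return verdict, contributions

print(black_box_decision(50, 20, 3))      # just "approve" or "deny"

verdict, reasons = explainable_decision(50, 20, 3)
print(verdict)
for feature, weight in sorted(reasons.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {weight:+.1f}")  # the "why" behind the verdict
```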
I contacted x.ai, a company that makes AI-powered assistants, to ask about Explainable AI. CEO Dennis Mortensen says that while explainability and “the black box” are at opposite ends of the algorithm spectrum, there is a sliding scale between the two. That said, he believes that even just beginning to ask “why?” is important and overdue. “Hand over heart, the whole industry hasn’t spent much time if any on the ‘why’ part, even when it’s technically feasible,” says Mortensen.
Why now then? For one, it’s the age of the algorithm. “We’ve seen an explosion of the use of algorithms in decision-making,” says Mortensen. Where most important decisions—like who gets a home loan or what a prison sentence will be—were once made by humans or humans assisted by machines, now many are entirely algorithm-based. “I think we always had some sort of inherent belief in humans at least trying to do the right thing,” he says. People aren’t ready to inherently trust a machine the same way. Which brings up the more urgent reason Explainable AI has become a pressing issue: Consumers are witnessing algorithms backfire. Facebook’s fight against misinformation and Twitter’s war on bots are only two of the recent instances in which algorithms spectacularly failed. Until very recently, Mortensen says, it was acceptable for a technology company to defend a failure by pointing out that it wasn’t caused by a person or a team, but by an algorithm: “So it’s not that your employees are evil, it’s the machine that’s evil.” That doesn’t fly anymore. “These companies now understand it’s not an excuse and if anything, it might even be worse,” he says. “It doesn’t matter if a person does it or a machine does it, you need to be able to explain it.”
One important element of Explainable AI is determining who an algorithm’s decisions should be explainable to. There is no collective answer, but Mortensen believes it should be the average user. “It’s all good and fine that developers and data scientists understand it, but people should understand it,” he says. He points out that while the General Data Protection Regulation (an EU regulation on individual data protection) primarily focuses on privacy, it also requires increased algorithmic transparency. So what do those explanations look like? In some cases, it’s something as simple as a right-click option that opens a window containing a few sentences explaining why a user is being presented with certain information. In a health care scenario, a doctor would want a detailed, thorough analysis of why a machine suggested a diagnosis.
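Here is a rough sketch of what that simpler kind of explanation could look like. It is purely hypothetical: the explain_to_user function and the example signals are mine, not Twitter’s or anyone else’s. The idea is just to turn the top-weighted signals behind a recommendation into a couple of plain-English sentences an average user could read.

```python
# Hypothetical sketch: the signals and their weights are invented.

def explain_to_user(item, signals, top_n=2):
    """signals maps a human-readable reason to its relative weight."""
    top = sorted(signals.items(), key=lambda kv: -kv[1])[:top_n]
    reasons = " and ".join(reason for reason, _ in top)
    return (f"You're seeing '{item}' because {reasons}. "
            "Other factors had a smaller influence.")

print(explain_to_user(
    "Suggested account: @nasa",
    {
        "you follow several space-related accounts": 0.6,
        "accounts you follow also follow this one": 0.3,
        "this account is popular in your region": 0.1,
    },
))
```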
It seems obvious why developers would opt for an XAI system. The problem is that, by their nature, XAI systems have limitations. “The more complex a system is, the less explainable it will be,” artificial intelligence researcher John Zerilli told me via email. “If you want your system to be explainable, you’re going to have to make do with a simpler system that isn’t as powerful or accurate.”
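To see that trade-off in miniature, here is a hedged illustration using scikit-learn; the tooling and dataset are my choices, not the researchers’. A depth-two decision tree can be printed and read in its entirety, while a 500-tree random forest typically scores somewhat higher on the same data but has no comparably compact, faithful summary.

```python
# Illustration of the explainability/accuracy trade-off using scikit-learn.
# The dataset and model choices are mine, picked only for demonstration.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# A deliberately simple model: two yes/no questions deep.
simple = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)

# A far more complex model: an ensemble of 500 trees.
complex_model = RandomForestClassifier(n_estimators=500, random_state=0).fit(
    X_train, y_train
)

print("depth-2 tree accuracy:   ", simple.score(X_test, y_test))
print("500-tree forest accuracy:", complex_model.score(X_test, y_test))

# The simple model's entire decision process fits on a few printed lines;
# there is no equally compact, faithful printout for the forest.
print(export_text(simple, feature_names=list(data.feature_names)))
```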
Zerilli is part of a group of researchers from the University of Otago who recently published a paper about Explainable AI. Titled “Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?,” the paper asks whether it’s fair to require algorithms to explain themselves when humans have trouble doing so. Setting up some sort of regulation for explainability in AI, the authors write, “could end up setting the bar higher than is necessary or indeed helpful.” (The other serious limitation is that exposing a company’s AI could also expose its proprietary technology.) Instead of changing the entire architecture of the internet’s algorithms, the group argues for using intentional stance explanations, which means giving consumers more information about the data points an algorithm used in coming to a decision.
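As a rough sketch of that idea: the “intentional stance” term is the paper’s, but this tiny implementation, including names like decide_with_disclosure, is my own illustration. Instead of opening up the model, the system simply discloses which data points about the person actually fed into the decision.

```python
# Hypothetical sketch: the feature names, threshold, and stand-in model
# are invented for illustration.

from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    data_points_used: list  # the inputs the model consumed, not its internals

def decide_with_disclosure(applicant, model):
    # Gather the data points the model will see, then hand back both the
    # verdict and that list, leaving the model itself a black box.
    features = ["payment_history", "current_debt", "employment_length"]
    inputs = {f: applicant[f] for f in features}
    return Decision(outcome=model(inputs), data_points_used=list(inputs.items()))

# Usage with a stand-in model:
toy_model = lambda inputs: "approve" if inputs["current_debt"] < 10_000 else "deny"
decision = decide_with_disclosure(
    {"payment_history": "on time", "current_debt": 4_200, "employment_length": 6},
    toy_model,
)
print(decision.outcome)
print("This decision considered:", decision.data_points_used)
```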
While the group wants to dissuade regulators from overtasking AI systems with explaining themselves, they do believe social platforms will benefit from being more transparent about their algorithms. Alistair Knott, another of the paper’s authors, pointed out that Google and Facebook are beginning to do this with the “Why am I seeing this ad?” explanations. However, Knott pointed me to new research that shows those explanations might have unexpected effects. “[It] suggests that they ‘backfire’ as tools to gain user acceptance if the data supporting the ad choice appear to come from some distant unknown source,” Knott wrote. When these explanations are vague about where or how the data was gathered, or offer too little information about where it came from in the first place, users are more likely to distrust them. “A perfectly targeted ad can be rendered ineffective if unsavory practices that underlie it are exposed,” the paper concludes.
“We need to do a better job explaining how our algorithm works” is an increasingly common refrain coming from tech CEOs. In some ways, a full-throttle push into Explainable AI systems that begins at the design level lets the humans building them off the hook. It feels like skipping a step to require a machine to explain why it recommended this Twitter account or why it assigned that prison sentence before requiring Jack Dorsey, Mark Zuckerberg, Jeff Bezos, and others to detail the ways in which they’re harvesting our data. Because those are the people behind the machines in question.