Humans are increasingly using artificial intelligence (AI) to inform decisions about our lives. For example, AI helps make hiring choices and offer medical diagnoses.
If you were affected, you might want an explanation of why an AI system made the decision it did. Yet AI systems are often so computationally complex that even their designers don't fully know how the decisions were produced. This is why the development of "explainable AI" (or XAI) is booming. Explainable AI includes systems that are either themselves simple enough to be fully understood by people, or that produce easily understandable explanations of the outputs of other, more complex AI models.
Explainable AI systems help AI engineers monitor and correct the processing of their models. They also help users make informed decisions about whether to trust or how best to use AI results.
Not all AI systems need to be explainable. But in high-stakes domains, we can expect XAI to become more widespread. For example, the recently passed European AI law, a precursor to similar laws around the world, protects a "right to explanation": citizens have the right to receive an explanation of an AI decision that affects their other rights.
But what if something like your cultural background affects the explanations you expect from an AI?
In a recent systematic review we analyzed more than 200 studies from the last ten years (2012-2022) in which the explanations given by XAI systems were tested on people. We wanted to see the extent to which researchers indicated they were aware of cultural variations potentially relevant to designing satisfactory explainable AI.
Our results suggest that many existing systems produce explanations primarily tailored to individualistic, typically Western populations (for example, residents of the United States or the United Kingdom). Additionally, most studies of XAI users sampled only Western populations, yet unjustified generalizations of their results to non-Western populations were widespread.
Cultural differences in explanations
There are two common ways to explain a person's actions. The first is to invoke the person's beliefs and desires. This explanation is internalist, focusing on what's going on inside someone's head. The other is externalist, citing factors outside the person, such as social norms or rules.
To see the difference, think about how we might explain a driver stopping at a red light. We could say, "They think the light is red and don't want to break any traffic rules, so they decided to stop." This is an internalist explanation. But we could also say: "The light is red and the highway code requires drivers to stop at red lights, so the driver stopped." This is an externalist explanation.
Many psychological studies suggest that internalist explanations are preferred in "individualist" countries, where people often view themselves as more independent of others. Those countries tend to be Western, educated, industrialized, wealthy and democratic.
However, such explanations are not obviously preferred over externalist explanations in "collectivist" societies, such as those commonly found in Africa or South Asia, where people often view themselves as interdependent with others.
Preferences in explaining behavior are relevant to what a successful XAI output might be. An AI that offers a medical diagnosis might accompany it with an explanation such as: "Since your symptoms are fever, sore throat, and headache, the classifier thinks you have the flu." This is internalist, because the explanation evokes an "internal" state of the AI (what it "thinks"), albeit metaphorically. Alternatively, the diagnosis could be accompanied by an explanation that mentions no internal state, such as: "Since your symptoms are fever, sore throat, and headache, based on its training on the diagnostic inclusion criteria, the classifier produces the result that you have the flu." This is externalist. The explanation relies on "external" factors like the inclusion criteria, in the same way that one could explain stopping at a traffic light by citing the highway code.
If people from different cultures prefer different types of explanations, this is important for designing inclusive explainable AI systems.
Our research suggests, however, that XAI developers are not sensitive to potential cultural differences in explanation preferences.
Not taking cultural differences into account
Strikingly, 93.7% of the studies we reviewed did not indicate an awareness of cultural variations potentially relevant to the design of explainable AI. Additionally, when we checked the cultural background of those tested in the studies, we found that 48.1% of the studies did not report on cultural background. This suggests that the researchers did not consider cultural context as a factor that could influence the generalizability of the results.
Of the studies that did report cultural background, 81.3% sampled only Western, industrialized, educated, wealthy, and democratic populations. Only 8.4% sampled non-Western populations, and 10.3% sampled mixed populations.
Sampling only one type of population is not necessarily a problem if the conclusions are limited to that population or if researchers provide reason to believe that other populations are similar. Yet among the studies that reported cultural background, 70.1% extended their findings beyond the population studied (to users, to people, to humans in general), and most contained no evidence of reflection on cultural similarity.
To see how deep this neglect of culture runs in explainable AI research, we also conducted a "meta" systematic review of 34 reviews of existing literature in the field. Surprisingly, only two reviews commented on the skewed sampling in user research, and only one mentioned overgeneralizations of XAI study results.
That is problematic.
Why results matter
If findings about explainable AI systems apply only to one type of population, these systems may fail to meet the explanatory needs of other people who are affected by them or who use them. This may decrease trust in AI. When an AI system makes high-stakes decisions about you but doesn't give you a satisfactory explanation, you'll likely distrust it even if its decisions (like medical diagnoses) are accurate and important to you.
To address this cultural bias in XAI, developers and psychologists must collaborate to test for relevant cultural differences. We also recommend that the cultural backgrounds of samples be reported alongside the results of XAI user studies.
Researchers should state whether their study sample represents a broader population. They can also use qualifiers such as "American users" or "Western participants" when reporting their findings.
As AI is used around the world to make important decisions, systems must provide explanations that people from different cultures find acceptable. As things stand, large populations that could benefit from the potential of explainable AI risk being overlooked in XAI research.
Marie Carman, Lecturer in Philosophy, University of the Witwatersrand

Uwe Peters, Assistant Professor of Philosophy, Utrecht University