Artificial intelligence (AI) is evolving at a rapid pace, bringing with it a shifting set of opportunities and potential risks. The global rollout of agentic AI across business functions, including customer service, software development, finance, and corporate communications, presents critical risks related to privacy, errors, and workplace disruption.
In this three-part series, we examine these risks and provide strategies organizations can implement to respond to and mitigate potentially costly liabilities.
Artificial intelligence is only as good as the data on which it has been trained and the rules it has been given to process information. AI models, as at least some of them will admit, do not actually “understand” culture; they follow statistical patterns and programmed instructions. AI often struggles with gray areas where the data is insufficient or rules conflict, and in many situations it still lacks the human sensibility to recognize what might come across as offensive or nonsensical.
Earlier forms of AI were criticized for obvious flaws, such as high error rates in facial recognition of non-white individuals, a result of limited training samples. Attempts to address this lack of representation later led to over-corrections: AI-generated images of Black and Asian people as WWII German soldiers, a woman of color as a U.S. founding father and another as a pope, and a South Asian person as a Viking. In these cases, rules intended to ensure representational diversity overrode other rules for historical accuracy, with ludicrous results.
AI errors of this type have now generally become more subtle. Current trends include the development of more context-aware and indigenous models that incorporate broader data sets from specific communities and are programmed to ask users questions about their circumstances and intent. AI inquiries about context might include, for instance, a question about how a requested image would be used—for example, a Chinese wedding would require quite different garments, colors, and participants than a wedding in Italy.
The movement toward “native alignment,” or training AI on more indigenous datasets from the start, is still nascent. Most countries contain tremendous diversity across regions, languages, ethnic groups, socioeconomic levels, education, and customs, and much of it is not reflected in current AI training materials. One example of indigenization is an effort in India to incorporate the country’s rich local data into AI training, including its 22 official languages and cultural information on history, traditions, images, idioms, holidays, and more.
However, even AI models that are programmed to ask users for additional contextual information and trained on more diverse data sets still have major flaws. Among these are homogenization (e.g., grouping Native Americans into a single generic category), condescension toward queries written in less fluent language, and the perpetuation of negative stereotypes about people based on the country, state, or city where they reside. As one research study concluded, “this bias is fundamentally structural, and no amount of fine-tuning fully removes the geopolitical hierarchies baked into their data and design.”
Persistent issues with the accuracy of AI models can be attributed in part to “cultural skew.” Google’s Gemini defines this term as:
“Systematic distortion in an AI model’s outputs that favors the values, logic, and social norms of the dominant culture present in its training data (historically Western, English-speaking, and individualistic).
Because most Large Language Models (LLMs) are trained on massive scrapes of the Western internet, they inherit ‘invisible defaults.’ Even if the AI is functionally accurate, its ‘perspective’ is skewed, which creates significant risks for global businesses.”
Deeply embedded systemic distortion in the output of AI agents may still affect crucial global business activities such as evaluating job candidates, providing tailored customer service, developing accurate personas for product marketing, determining the best approach to high-stakes negotiations, or ensuring legal compliance with national or state laws to avoid “algorithmic discrimination.”
Companies must ensure that information generated by their AI applications for employees and customers is as accurate and refined as possible, avoiding generic stereotypes and ethnocentric assumptions. Possible countermeasures include incorporating broader and more localized data sets, prompting users for contextual information before generating outputs, and keeping knowledgeable humans in the review loop.
In a rapidly shifting landscape, Aperian empowers global teams to stay agile and resilient. By combining curated digital content and AI functionality with expert-led human development, we bridge the gap between technical scale and interpersonal excellence. Get in touch to discover the GlobeSmart® Profile, book a keynote, or learn about our easy platform integrations.