Artificial intelligence (AI) is evolving at an incredibly rapid pace, bringing with it a changing set of opportunities and potential risks. The global rollout of agentic AI across business functions, including customer service, software development, finance, and corporate communications, presents critical risks related to privacy, errors, and workplace disruption.
In this three-part series, we examine these risks and provide strategies organizations can implement to respond to and mitigate potentially costly liabilities.
Many workers today are increasingly concerned about the accelerating spread of AI agents within their companies. Among the feared negative impacts are job losses, unwelcome surveillance, and overreliance on AI by leaders and managers.
A number of companies, including Salesforce, Meta, and Workday, have already announced layoffs and curtailed hiring due to AI’s advances in software development capabilities. Other professions, such as law, finance, accounting, and customer service, are also being significantly affected. In fact, any routine task that can be easily replicated is potentially replaceable by AI or AI-guided machinery. Entire enterprises are rising as AI champions, falling victim to more efficient AI-enabled competitors, or urgently reworking their existing business practices to incorporate AI.
Countries that specialize in outsourced back-office functions, such as the Philippines and India, will likely face drastic changes: “Now, A.I. threatens to do to India what its outsourcing model did to the rest of the world: replace hundreds of thousands of office workers.” Outsourcing firms have begun laying off workers or slowing their hiring pace, with as many as 500,000 jobs estimated to be at risk over the next two to three years. Although such cuts may provide cost savings to employers, they will mean large-scale dislocation for workers and their communities. To remain employed, individuals must find new ways to add value when their previous tasks are subsumed by AI agents, and many will need to aggressively upskill to leverage AI capabilities.
Chinese institutions have been criticized for large-scale deployment of surveillance and monitoring systems, both in public spaces and in workplaces. Less well known is the phenomenon of AI-augmented “Bossware,” now spreading across U.S. companies. Fueled by the prevalence of remote work during the Covid pandemic, this form of worker monitoring has grown to include detailed oversight methods such as tracking keystrokes, taking desktop screenshots, and monitoring pauses in activity.
There is obvious allure for employers in having job performance metrics they can track and attempt to influence. However, the overall impact of such intrusive techniques on productivity is uncertain, and they carry great potential to undermine workplace trust.
Leaders should ask themselves the following questions when considering workplace surveillance systems:
AI agents are also becoming a more integral part of team meetings through simultaneous translation, recording of team member interactions, and meeting summaries. As these functions become more widespread and sophisticated, employees may be reluctant to offer candid views or unpolished ideas that they know will become a part of an official record.
A host of vendors are introducing new AI-based support services for teams. AI usage within teams can be augmented by:
Just as personalization of responses to individual users is a major thrust of AI companies (and their advertisers), related technology is being applied to customize the participation of AI agents within teams. This has obvious advantages such as instantaneous recall of proprietary information, or the ability to ask an AI agent with access to a vast database, “What are we missing here?” At the same time, possible unintended consequences of a constant AI presence include diminished psychological safety of team members who know their interactions are being recorded and analyzed. Moreover, well-intended “Human-in-the-loop” policies still do not ensure that managers will have the desire or the confidence to carefully review AI recommendations.
In the rush to digitize complex workflows, organizations often fall into the trap of oversimplifying human roles into data points. When AI tools are deployed without accounting for the nuanced, situational intelligence of experienced employees, they frequently create more friction than they resolve.
Example: AI in the hospital
The BioButton is a small sensor worn on the chest that monitors vital signs of critically ill patients. Although it was advertised as being a “transformational” form of support for busy hospital staff, in practice it reportedly generated frequent false alarms, needlessly consuming additional nursing time.
Nurses who raised questions were seen as resistant to new technology and unable to grasp its true value. However, it turned out that the BioButton’s sensing parameters were too narrow for it to function reliably. Nurses drawing on all of their senses, along with experience-based intuition, were able to perceive subtle cues, analyze information, and react more quickly and accurately.
For this BioButton product, as for other systems using AI technology, the initial hype exceeded the practical value in real-world settings, with counterproductive results.
A recent study by Wharton School researchers uses the term “cognitive surrender,” a type of overreliance on AI defined as “adopting AI outputs with minimal scrutiny, overriding intuition and deliberation.” Research participants tended to adopt AI-generated information even when it was wrong, and did so with greater confidence in the correctness of their responses. “They borrowed the machine’s confidence—always quite high—without checking its accuracy.”
Returning to the BioButton example: the hospital administrators who sponsored the initiative compounded this form of overreliance by attributing greater credibility to AI-generated signals than to the real-time observations of their own nursing staff, while also requiring nurses to respond automatically, even to false alarms.
Such cognitive surrender to AI could affect all levels of the workplace. Individuals may feel it is safer to accept and disseminate AI output rather than to question it, think for themselves, and propose alternative views. Managers and executives may happily choose to delegate difficult strategic decisions or thorny personnel issues—feedback, performance evaluation, conflict resolution—to AI because this saves them time and allows them to assign blame elsewhere when things go wrong.
Mistaken beliefs in AI capabilities could be further reinforced by what has long been known as the “Eliza effect,” or the projection of human traits such as empathy onto computer systems. Users attributed to these systems “intrinsic qualities and abilities which the software controlling the (output) cannot possibly achieve.” Such mistaken attribution of human characteristics to AI is almost certain to increase as AI agents sound more convincingly human-like (e.g., see “Chinese Women Say AI Boyfriends are Better…”), despite their underlying limitations. Employees, managers, and executives looking for convenient shortcuts will find it easier than ever to hand off to AI time-consuming, awkward tasks that actually require real interpersonal skills.
Rather than the upskilling that companies often tout as their response to AI, creeping dependence on borrowed intelligence could result in “downskilling” in the form of underdeveloped or atrophied decision-making and human interaction skills. Such an outcome might sound far-fetched, but a high-potential manager in a major multinational firm recently said of her boss just before leaving her job, “He’s much less impressive in person, without AI support.”
From an organizational standpoint, AI disruption creates an enormous, ongoing change management project. Several decades ago, Peter Vaill coined the term “permanent whitewater” to describe the continuous pace of change in organizations. This pace has only accelerated with the growth in AI capabilities, and the workplace turbulence appears to be moving swiftly from whitewater to a series of treacherous waterfalls downstream. Vaill’s recommended response, “learning as a way of being,” nonetheless remains valid, both for organizations and for individuals. This is the very opposite of cognitive surrender.
Ongoing labor market transformation appears inevitable as AI evolves, and will include the loss of many jobs as well as the creation of new ones. Companies can choose to address this transformation in either hasty, self-destructive ways or through a more strategic approach. Each new AI agent requires learning how to use it, what it can and can’t do, and how to make improvements. It is critical to deploy thoughtfully, develop iteratively, remain skeptical of AI hype and hallucination, gather input from frontline workers, and focus on where the technology can add the most value. Employees going through unprecedented workplace changes will appreciate support, but must also take initiative and demonstrate openness to learning new ways of contributing.
AI surveillance, while perhaps useful for some purposes, is not the best way to win the competition for talent. Beyond deploying the latest technologies, however disruptive they might be, companies will still need to attract, retain, and engage high-quality human talent. Research on employee engagement has long demonstrated the continuing importance of having a manager who cares about each employee as a person, positions them for success, values their opinions, and provides feedback on their progress. These management capabilities are predominantly human, and continuous learning must involve both technical and human skill-building, especially in diverse global workplace environments.
Rather than becoming overly reliant on AI output or competing with it to provide information, workers at every level need to learn how to ask the right questions, assess the quality of the responses they receive, use this information well, and understand AI’s limitations. Emotional intelligence, cultural agility, good judgment, and the ability to align and motivate a team are still vital interpersonal leadership skills: “AI can draft, summarize, analyze, recommend, and generate at speed. But it cannot replace what teams most need from leaders and colleagues when things get real: emotional steadiness, psychological safety, trust, courage, accountability, and the ability to listen and connect when tension rises.”
The changes now taking place, with many more to come, require enormous energy and fortitude to address, and because AI is evolving so rapidly, it will be easy to get things wrong. A measure of human wisdom, creativity, and bridge-building—human-to-AI and human-to-human—can help navigate the journey ahead.
In a rapidly shifting landscape, Aperian empowers global teams to stay agile and resilient. By combining curated digital content and AI functionality with expert-led human development, we bridge the gap between technical scale and interpersonal excellence. Get in touch to discover the GlobeSmart® Profile, book a keynote, or learn about our easy platform integrations.