The Psychological Roots of AI Resistance

As artificial intelligence (AI) transforms workplaces, many employees remain skeptical about its impact. AI promises efficiency and innovation, yet workers often react with uncertainty, anxiety, or resistance. As we explored in our last blog post, resistance to change remains one of the biggest challenges for leaders and change managers. Overcoming these barriers is difficult, but it helps to remember that this skepticism is not a new phenomenon: history shows that major technological shifts have always been met with fear and hesitation.

From the Luddite movement during the Industrial Revolution to the rise of computers in the 20th century, workers have often worried about losing their jobs, struggling with new skills, or being replaced by machines. Behavioral science helps explain these reactions through concepts like loss aversion, reactance, and cognitive load theory, which shape how people perceive disruptive technologies.

In this article, I explore four key psychological concerns driving AI resistance, grouped into two categories:

  • Lack of knowledge, control, and autonomy:
    • “I don’t really understand how AI works.”
    • “I don’t want a machine making decisions for me.”
  • Fear of job displacement and loss of competence:
    • “What if AI takes my job?”
    • “If AI does the complex parts of my job, I’ll lose my skills.”

By examining AI skepticism through a behavioral and historical lens, organizations can better address employee concerns and create a smoother, more human-centered AI adoption process.

1. Lack of Knowledge, Control, and Autonomy:

“I don’t really understand how AI works” / “I don’t want a machine making decisions for me.”

Modern Context

As AI becomes increasingly embedded in workplace tools and systems, from AI-powered assistants to algorithmic decision-making in hiring and performance evaluations, many employees are struggling to keep up. Today’s AI technologies rely on specialized terminology and abstract concepts, such as deep learning, neural networks, and algorithmic bias. For employees without a background in data science or computer engineering, this complexity can be difficult to grasp. The result is often confusion, uncertainty, and the sense that AI is an inaccessible “black box.”

This knowledge gap doesn’t just create discomfort; it can also lead to reduced self-confidence and a sense of alienation from AI-driven workplace initiatives. Employees may feel unqualified to contribute to AI-related discussions or projects, even if those projects directly affect their roles. This can foster imposter syndrome, where employees doubt their place in an environment increasingly shaped by AI.

At the same time, employees are growing concerned about the influence AI holds over decisions that affect their daily work. Many AI systems operate with limited transparency, so employees may not understand how an algorithm reaches its conclusions or what data it uses, which creates a sense of opacity and unpredictability. According to a survey by Slack, 35% of desk workers say AI results are only slightly or not at all trustworthy (all references are at the bottom of this article). Consequently, when people don’t know why a decision was made—especially if it’s perceived as unfair or inaccurate—they are less likely to accept it and more inclined to trust their own judgment instead.

This issue is compounded when AI is used in sensitive areas like hiring, promotions, or disciplinary actions, where perceived fairness is a key element. Employees may feel as if they are no longer being evaluated by human managers who understand the nuance of their work, but rather by cold, incomprehensible machines.

The combined effect is powerful: employees not only feel overwhelmed by the complexity of AI, but also disempowered by its growing control over their work lives.

Historical Parallel

This mix of confusion and resistance has been seen before. A similar pattern unfolded during the computer revolution of the 1980s. As personal computers and early digital systems entered the workplace, employees with little or no prior IT experience often felt alienated by the rapid technological shift. These tools promised increased efficiency, but their initial complexity and unfamiliarity led many workers to feel their skills were becoming outdated—or worse, irrelevant.

The psychological impact of this shift was significant. As early as 1984, researcher Craig Brod coined the term technostress to describe a “modern disease of adaptation” caused by an inability to cope with new computer technologies in a healthy way (Brod, 1984). Workers reported feelings of frustration, anxiety, and disempowerment, not because the technology itself was inherently harmful, but because they lacked the tools, support, and understanding to adapt effectively.

Fast forward to today, and we see echoes of the same dynamic. The introduction of AI into decision-making processes, especially when it is unclear how these systems work or how to challenge their outputs, can produce the same sense of disconnection and resistance. As in the 1980s, the fear isn’t just about the tool itself, but about being left behind or devalued in a fast-evolving digital environment.

Behavioral Science Insight

Several psychological theories help explain why employees may resist AI when faced with both technical complexity and a perceived loss of autonomy.

First, Cognitive Load Theory posits that working memory has a limited capacity for processing new information (Sweller, 1988). When individuals are introduced to complex AI concepts like algorithmic decision-making, machine learning models, or natural language processing, they can quickly become overwhelmed. This cognitive overload can lead to mental fatigue, frustration, and ultimately, disengagement. The reaction isn’t necessarily a rejection of technology itself, but rather a psychological coping mechanism against the stress of too much unfamiliar information.

Closely linked to this is Self-Efficacy Theory (Bandura, 1977). This theory centers on individuals’ belief in their ability to succeed in specific situations. When employees feel they lack the knowledge or skills to interact meaningfully with AI, their sense of self-efficacy declines. This makes them less likely to explore, adopt, or even support AI initiatives, as doing so would challenge their already fragile confidence in their abilities. In this context, AI is not seen as a tool for empowerment, but as a threat to professional identity and competence.

Finally, Reactance Theory explains resistance through the lens of autonomy (Brehm, 1966). The theory suggests that when people perceive that their freedom of choice or action is being threatened or restricted (such as when an algorithm makes opaque decisions, evaluates performance, or influences hiring), they experience psychological reactance. This is an aversive response that prompts individuals to resist or push back against the perceived source of control. In the AI context, employees may resent being subjected to decisions made by systems they don’t understand and cannot influence, leading to active or passive forms of resistance.

Combined, these insights paint a clear picture: resistance to AI is not merely a matter of “tech phobia” or irrational fear. It’s rooted in genuine psychological responses to complexity, loss of control, and threats to professional identity.

2. Fear of Job Displacement and Loss of Competence:

“What if AI takes my job?” / “If AI does the complex parts of my job, I’ll lose my skills.”

Modern Context

Past waves of automation primarily affected manual or repetitive labor, but today’s AI systems are also making significant inroads into knowledge-based professions. White-collar workers in fields such as finance, marketing, customer service, education, journalism, and even law are witnessing AI’s ability to generate reports, analyze data, write code, answer queries, and produce content that was once considered uniquely human. According to a FlexJobs survey from 2024, more than one-third (34%) of respondents believe AI will lead to job displacement in the next five years.

At the same time, those who retain their jobs may fear what could be called a “quiet erosion” of competence. As organizations introduce AI tools that can take over more cognitively demanding tasks, such as writing emails, troubleshooting problems, or interpreting data, employees may come to rely on these tools more and more. This dependency, while helpful, raises concerns about the gradual decline of expertise that can occur when individuals no longer practice core aspects of their profession. Over time, professionals may lose touch with the judgment, creativity, and critical thinking that once defined their roles.

The result is a two-fold fear: either being replaced by AI, or feeling like your work no longer matters because of it.

Historical Parallel

The fear of being replaced by machines is also not new. Perhaps the most prominent historical example is the Luddite movement of the early 19th century. Between 1811 and 1817, skilled textile workers in England protested against the growing use of mechanized looms and knitting frames. These machines, introduced during the Industrial Revolution, drastically increased productivity, but at a cost. The machines allowed factories to hire less skilled, lower-paid workers, thereby threatening the jobs and social standing of experienced workers.

The Luddites’ response was not rooted in ignorance or anti-technology sentiment, as is often assumed. Rather, it was a desperate and organized pushback against what they saw as the devaluation of their skills and dignity. Their tools of mastery—once symbols of expertise and craftsmanship—were being replaced by automated processes that rewarded speed over skill. In their eyes, progress was happening without them, or worse, against them.

In a similar way, today’s knowledge workers may see AI as a force that not only threatens to replace them but also undermines the very abilities they’ve spent years cultivating. And as with the Luddites, the resistance we see now may be less about rejecting technology itself and more about demanding a fair and inclusive vision of progress.

This parallel underscores an important point: technological transformation often generates winners and losers, not just economically, but psychologically. The fear of exclusion—of no longer mattering—can be as potent as the fear of joblessness.

Behavioral Science Insight

From a psychological perspective, multiple theories help explain why employees may resist or fear AI, even in the absence of immediate job threats.

One of the most central concepts is Loss Aversion (Kahneman & Tversky, 1979). Loss aversion suggests that people feel the pain of losing something—like a job, a skillset, or professional relevance—more acutely than they enjoy gaining something of equal value. In the context of AI, employees are often more focused on what they might lose (control, status, competence, employment) than what they might gain (efficiency, support, or new opportunities). This cognitive bias makes even AI tools that are explicitly designed to assist employees feel threatening.
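For readers curious about the formal side, the cited work models this asymmetry with a value function that is steeper for losses than for gains. Below is a minimal sketch; note that the specific parameter estimates are not from the 1979 paper itself but are commonly cited figures from Kahneman and Tversky’s later empirical work, included here purely for illustration:

  v(x) = x^α for gains (x ≥ 0)
  v(x) = −λ(−x)^β for losses (x < 0)
  with α ≈ β ≈ 0.88 and λ ≈ 2.25

With λ around 2.25, a loss is weighted roughly twice as heavily as an equivalent gain. In workplace terms, losing a skill, a role, or a sense of relevance can feel about twice as significant as gaining a comparable amount of efficiency or convenience, which is one reason even well-intentioned AI rollouts can register as a net loss for employees.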

A closely related concept is Status Quo Bias, which describes the tendency to prefer the current state of affairs and avoid change—even when change could be beneficial (Samuelson & Zeckhauser, 1988). Employees may hesitate to adopt AI-driven processes not because the tools are ineffective, but because they represent a fundamental disruption to a familiar system. This preference for the known over the unknown amplifies fear and reduces openness to innovation.

Another important behavioral concept is Deskilling, explored in depth by Raja Parasuraman and Dietrich Manzey (2010). Their research shows that over-reliance on automation can lead to a decline in essential skills, particularly in fields requiring high levels of attention, judgment, or physical precision. When employees stop practicing these skills regularly because AI systems are handling the bulk of the work, those abilities decline. The longer this continues, the more people may feel incapable of performing their work without technological assistance.

This sets the stage for learned helplessness, where individuals internalize the belief that they can no longer succeed independently (Seligman & Maier, 1967). Ironically, the very tools intended to support employees may ultimately leave them feeling more vulnerable.

Taken together, these behavioral insights explain why the mere presence of AI in the workplace can produce such strong emotional reactions. It’s not just about automation; it’s about autonomy, identity, and self-worth.

Conclusion

Resistance to AI at work isn’t just fear of new tech—it’s a natural human reaction to big changes. Throughout history, from the Industrial Revolution to the rise of computers, people have felt worried when new tools seem to threaten their jobs or change the way they work. Today, AI brings similar concerns. Many employees feel overwhelmed by how AI works or uneasy about machines making decisions for them. Others worry about losing their jobs or seeing their skills become less useful. These feelings are not silly or irrational—they’re based on real fears about control, understanding, and being valued at work.

Psychology helps us make sense of these reactions. People tend to fear losing something more than they enjoy gaining something new. And when change feels too fast or confusing, it’s easy to feel powerless or left behind.

But here’s the encouraging part: this kind of resistance is not only natural but also foreseeable. That means we can address it proactively and guide the transition deliberately. By understanding the psychological roots behind these concerns, organizations can take practical, evidence-based steps to ease the transition. Strategies like improving transparency, offering training, increasing autonomy, and involving people in the process can make AI adoption feel more fair, human, and empowering. With the right behavioral insights and a thoughtful approach, companies can turn uncertainty into trust and help their people grow alongside the technology, not feel pushed out by it.

References

Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84(2), 191–215. https://doi.org/10.1037/0033-295X.84.2.191

Brehm, J. W. (1966). A theory of psychological reactance. Academic Press.

Brod, C. (1984). Technostress: The human cost of the computer revolution. Addison-Wesley.

Dawkins, M. (2024, September 16). FlexJobs report: 1 in 3 workers fear job displacement due to AI. FlexJobs. https://www.flexjobs.com/blog/post/flexjobs-report-workers-fear-job-displacement-due-to-ai

Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–291. https://doi.org/10.2307/1914185

Parasuraman, R., & Manzey, D. H. (2010). Complacency and bias in human use of automation: An attentional integration. Human Factors, 52(3), 381–410.  https://doi.org/10.1177/0018720810376055

Samuelson, W., & Zeckhauser, R. (1988). Status quo bias in decision making. Journal of Risk and Uncertainty, 1(1), 7–59. https://doi.org/10.1007/BF00055564

Seligman, M. E. P., & Maier, S. F. (1967). Failure to escape traumatic shock. Journal of Experimental Psychology, 74(1), 1–9. https://doi.org/10.1037/h0024515

Slack Technologies. (2024, June). Despite AI enthusiasm, Workforce Index reveals workers aren’t yet unlocking its benefits. Slack. https://slack.com/intl/de-de/blog/news/the-workforce-index-june-2024

Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257–285. https://doi.org/10.1207/s15516709cog1202_4
