Over-reliance on AI is weakening workforce skills, warns recruiter
Dr Ryne Sherman, chief science officer at Hogan Assessments.
Employers need to ensure their teams use AI as a tool to speed up basic tasks without developing a reliance that dulls their ability to make decisions and solve problems.
This is the view of Dr Ryne Sherman, chief science officer at Hogan Assessments, a specialist in talent acquisition and development.
Dr Sherman cites the latest Microsoft Work Trend Index, which found that more than 75% of knowledge workers are already using AI at work.
In this Q&A interview, Dr Sherman advises that, as AI tools become embedded in day-to-day work across Ireland, a new workplace trend is emerging: the rise of so-called “AI zombies”.
He urges employers and employees alike to beware the dangers of over-reliance on AI in their daily work, cautioning in particular against drifting into abdicating responsibility for decisions of critical importance to the business.
Is there a real risk that employees become over-reliant on AI and stop thinking for themselves?

Absolutely, and that’s the real risk organisations need to pay attention to. The concern isn’t that AI is replacing work outright; it’s that it can gradually replace thinking if it becomes a default rather than a tool. Many AI systems are designed to make tasks faster and easier: drafting emails, summarising documents, even suggesting decisions. While this creates short-term productivity gains, it can also lead to a subtle but important shift: people stop engaging deeply with the work itself.
Like any cognitive “muscle,” skills such as critical thinking, problem-solving, and decision-making need regular use to stay sharp. If employees consistently outsource these processes to AI, those capabilities can weaken over time. On the surface, individuals may appear highly productive (producing outputs quickly and efficiently) but underneath, their ability to interrogate information, challenge assumptions, or make independent judgments may begin to erode.
This is particularly relevant for early-career professionals, who traditionally build expertise through repetition and problem-solving. If those learning moments are bypassed, there’s a risk of creating a workforce that is efficient but less capable of handling complexity or ambiguity without technological support. In that sense, over-reliance on AI doesn’t create an immediate skills gap; it creates a delayed one, which may only become visible when organisations need deeper thinking and don’t find it readily available.
Are certain personality types more prone to becoming “AI zombies”?

It’s important to stress that this behaviour isn’t about intelligence or capability; it’s about natural tendencies in how people approach work and decision-making. Certain personality traits can make individuals more inclined to rely heavily on AI as a “safe” or efficient shortcut.
For example, individuals with lower levels of curiosity may be less motivated to explore alternatives or question AI-generated outputs. If someone is less inclined to ask “why” or “what if,” they are more likely to accept what the system provides at face value. Similarly, higher levels of cautiousness can play a role: people who are more risk-averse may prefer to rely on AI because it feels like a safer, more validated option than making independent decisions.
Lower self-confidence is another factor. When individuals doubt their own judgment, they may defer to AI as an external authority, even when their own insight could add value. Over time, this can reinforce a cycle where confidence declines further because opportunities to practise independent thinking are reduced.
Finally, a strong preference for conformity can also contribute. In environments where AI tools are widely adopted, some employees may feel pressure to align with the “norm” and rely on them without question. This combination doesn’t indicate a flaw, but rather a pattern of behaviour that organisations need to recognise and actively manage.
What can organisations do to guard against this over-reliance?

The key for organisations across Ireland is to consciously balance short-term efficiency gains with long-term capability building. AI can undoubtedly enhance productivity, but businesses need to ensure it doesn’t come at the expense of human judgment and expertise.
One of the most effective ways to do this is by rewarding thinking, not just speed. If performance metrics focus solely on output and efficiency, employees will naturally gravitate toward using AI as much as possible. However, if organisations also recognise critical thinking, creativity, and sound decision-making, they send a clear signal that these skills still matter.
Building AI literacy is another essential step. Employees need to understand not just how to use AI tools, but also their limitations, where outputs may be biased, incomplete, or require human interpretation. This awareness encourages a more balanced approach, where AI is used as a support rather than a substitute for thinking.
Organisations should also create environments where questioning and challenge are encouraged. When employees feel psychologically safe to interrogate AI outputs, suggest alternatives, or take ownership of decisions, they are more likely to stay cognitively engaged. In practice, this might mean embedding moments of reflection into workflows, encouraging teams to explain their reasoning, or even deliberately limiting AI use in certain developmental tasks.
Ultimately, the goal isn’t to reduce AI usage, but to ensure it is used intentionally, enhancing human capability rather than replacing it.
Why do leaders play such an important role in how AI is used?

Because technology on its own doesn’t shape behaviour; people do, and leaders in particular play a critical role in setting the tone. AI tools will be widely available across organisations, but the way they are used will vary significantly depending on leadership priorities and cultural signals.
If leaders emphasise speed, efficiency, and output above all else, employees will naturally lean more heavily on AI. In those environments, the risk of “AI zombie” behaviour increases, as individuals optimise for productivity metrics rather than depth of thinking. Over time, this can create a culture where questioning and independent judgment are deprioritised.
Conversely, if leaders actively model curiosity, critical thinking, and thoughtful decision-making, the impact of AI can be very different. When leaders ask probing questions, value diverse perspectives, and demonstrate that it’s acceptable to challenge outputs, whether human or AI-generated, they reinforce the importance of cognitive engagement.
Leaders also influence how safe people feel to think for themselves. In cultures where mistakes are penalised or speed is overvalued, employees may default to AI to minimise perceived risk. In contrast, environments that encourage learning, experimentation, and reflection are more likely to produce individuals who use AI as an amplifier of their thinking rather than a crutch.
In that sense, the future of AI at work is not determined by the sophistication of the technology, but by the choices leaders make about how it is integrated into daily practice. The tool may be the same, but the outcome depends entirely on the culture that surrounds it.