AI didn’t break leadership. It revealed how we think
Step into the future with Rahaf Harfoush at UBA Trends Day 2026
On 19 March 2026, digital anthropologist Rahaf Harfoush takes the stage at UBA Trends Day with a powerful keynote: Leading humans in a machine age. Curious about her vision? Get a sneak preview here.
Most leadership systems in use today were designed for a world that no longer exists. They were built on assumptions of relative stability, predictable growth, clear hierarchies, and legible cause and effect, and on the idea that with enough data, enough optimisation, and the right incentives, organisations could be steered like machines: inputs adjusted, outputs improved, problems solved one by one. That ideology shaped how we built businesses, how we trained leaders, and how we defined success.

But the terrain has changed. Today’s leaders are operating inside overlapping crises. Technological acceleration, geopolitical instability, climate risk, social fragmentation, and now artificial intelligence all converge at once. The ground shifts faster than strategies can be finalised. Decisions ripple across systems leaders do not fully control or even fully understand. And yet many organisations are still governed as if the world were predictable, linear, and slow.

This is the quiet tension underneath so much executive anxiety right now. It is not just that leaders feel overwhelmed. It is that the mental models they inherited no longer map onto reality. The old playbooks do not explain what they are seeing. Metrics say one thing; lived experience says another. This is where AI enters the story: it revealed how much leadership had already drifted away from sense making.
When ideology lags behind reality
Organisations are belief systems. They encode values into processes, technologies, incentives, and language. They teach people what matters and what does not, often without saying so explicitly. For decades, the dominant ideology of management rewarded efficiency, scale, and certainty. Leaders were expected to have answers. Hesitation was framed as weakness. Complexity was something to be reduced, not engaged with. These assumptions work reasonably well in stable environments, but they break down in volatile ones.

Today, leaders are asked to make decisions amid radical uncertainty. They are expected to integrate technical, ethical, social, and geopolitical considerations simultaneously. They are asked to act decisively without reliable maps. And yet the systems surrounding them still prioritise speed over understanding and output over interpretation.
AI fits neatly into this older ideology. It promises faster synthesis, cleaner answers, frictionless execution. It feels like relief. But relief isn’t the same as clarity.
AI as a cultural mirror
One of the most persistent misunderstandings about AI is the belief that it introduces intelligence into organisations that lack it. It does not. AI reflects the quality of the thinking already present. It operationalises existing strategy. It reinforces whatever values have been defined. It accelerates whatever response to ambiguity already exists.

In organisations with strong cultures of inquiry and judgment, AI can be transformative. It expands perspective, surfaces patterns, and supports human decision making. In organisations where clarity is thin and assumptions go unexamined, AI scales confusion. It produces polished outputs that mask unresolved questions and makes incoherence look professional.

We are already seeing public examples of this dynamic. In recent months, several high-profile organisations have faced scrutiny after AI-generated reports and analyses were found to contain fabricated citations, incorrect claims, or confident-sounding errors that passed through multiple layers of review. In at least one widely reported case involving a major consulting firm, an AI-assisted document circulated externally before the hallucinations were detected, raising uncomfortable questions about oversight, accountability, and the quiet erosion of editorial judgment.

This is why so many leaders report a strange disconnect. AI initiatives move quickly, yet decisions feel harder. More information is available, yet confidence erodes. Everything appears optimised, yet something feels off. The technology is working exactly as designed: it faithfully mirrors the system it inhabits.
Cognitive offloading at the top
Long before AI entered the workplace, leadership had already begun to outsource thinking. Dashboards replaced dialogue. KPIs replaced judgment. Executive summaries replaced reading. These tools were meant to support decision making. Over time, they became substitutes for it.

This process is called cognitive offloading: humans shift mental work to external systems to conserve energy and scale activity. Writing, maps, and calculators are all examples. Cognitive offloading is not inherently dangerous. It becomes risky when it replaces sense making rather than supporting it. In many organisations, that line has been crossed. Leaders are surrounded by data but disconnected from context. They are expected to decide quickly yet are rarely given space to integrate complexity. They are rewarded for decisiveness, not for depth of understanding.

AI accelerates this pattern. It offers summaries instead of synthesis, answers instead of inquiry, fluency instead of comprehension. It makes it easier to move fast without slowing down to think. The result is a form of strategic atrophy. Organisations lose the ability to notice weak signals. They respond to symptoms rather than causes. Decisions make sense locally but fail systemically. Leaders feel perpetually behind, even as their tools promise mastery. AI did not create this dynamic; it simply amplifies what already existed.
When AI moves from cognitive to emotional offloading
There is another, quieter risk emerging alongside cognitive offloading, one that leaders are only beginning to grapple with. AI systems are increasingly designed to simulate empathy, companionship, and emotional attunement. In consumer contexts, this has already produced documented cases where users develop intense emotional reliance on AI systems, sometimes reinforcing delusional beliefs or distorting their sense of reality. In the media, this phenomenon is often described as AI-induced delusion or AI psychosis.

While these cases may seem distant from enterprise leadership, the underlying dynamic is not. Emotional offloading follows the same pattern as cognitive offloading: humans hand over not just thinking but also reassurance, validation, and meaning to systems optimised to respond rather than to care.

For organisations, the risk is not that employees will mistake AI for a human. It is that emotionally persuasive systems can be used to influence behaviour, shape sentiment, or smooth over ethical friction without leaders fully understanding the impact. When AI is allowed to mediate trust, motivation, or belonging, governance becomes a psychological issue, not just a technical one. Leadership can no longer afford to treat emotional intelligence as separate from system design.
Optimisation without orientation
For years, leaders have been told that optimisation is the path to resilience. Leaner teams. Faster cycles. Clearer metrics. Fewer inefficiencies. But in complex systems, optimisation can become a liability. It narrows attention. It removes buffers. It punishes pause.

From a cultural standpoint, optimisation teaches organisations what not to value. Reflection becomes indulgent. Uncertainty becomes something to hide. Human judgment is treated as noise rather than signal. AI inherits this value system. It prioritises what can be measured and automated. It sidelines what cannot. Ethics, trust, meaning, and relational intelligence become secondary concerns.

This is the paradox many leaders are living inside. The organisation becomes more technologically advanced while becoming less coherent. It moves faster while understanding less. The metrics look fine. The culture does not.
Why human skills matter more than ever
As AI takes over execution, human skills become more important, not less. AI is excellent at pattern recognition, speed, and scale. It is not capable of judgment, responsibility, or meaning making. It cannot decide what matters. It cannot hold competing values in tension. It cannot be accountable.

Those capacities belong to humans, and in an AI-mediated world, they become strategic infrastructure. Leadership now requires the ability to interpret context. It requires asking better questions. It requires holding ambiguity long enough for insight to emerge. These capacities are often mislabelled as soft skills when they are precisely the skills that prevent systemic failure. Organisations that neglect human sense making will find themselves highly automated and profoundly disoriented.
Sense making as a leadership practice
The leaders navigating this moment most effectively are not the ones with the most sophisticated AI stacks. They are the ones who treat thinking as a shared practice rather than an individual trait. They protect time for synthesis. They encourage disagreement before alignment. They use AI to expand perspective, not to collapse it. They understand that AI reflects culture, and they work on the culture first.

This does not mean rejecting technology. It means being deliberate about what should never be outsourced. Judgment. Ethics. Orientation. These become the top leadership priorities.

The real leadership challenge of this era is ensuring that the systems we rely on reflect the reality of the terrain we are facing. Because AI won’t correct outdated ideologies. It will scale them. And in that sense, AI is not the end of leadership. It is the moment leadership becomes visible again.