
Leading AI Change

What Organizations Keep Getting Wrong

Most AI projects don't fail on the technology. They fail on everything nobody wanted to look at before launching.

We started with a simple conviction: most AI projects don't derail because of the technology. They derail because of what nobody wanted to look at before deploying. What we are sharing here is what remains when you put everything in perspective: the through-lines that matter most when organizations face the real work of transforming with AI.

NAMI Logo

🧠 AI doesn't transform your tools. It transforms the way you reason.

This is the starting point and probably the most structurally important conviction to come out of the entire series. When organizations encounter AI, the instinct is to activate a familiar mechanic: framing, pilot, rollout, communication, training. It worked for previous digital projects. The problem is that this reflex rests on a silent assumption: we are transforming tools or processes. This time, that assumption is wrong.

AI enters the reasoning process. It produces analyses, suggests options, influences decisions. We are no longer just transforming execution. We are transforming cognitive work itself.

And that changes what real work looks like. Expertise no longer means being "the person who knows"; it means being the person who knows how to question AI outputs, detect their blind spots, and challenge their coherence. Value no longer lies in the ability to produce an analysis, but in the ability to assess its solidity. And making a decision now means owning a recommendation partly shaped by a system you don't fully control.

These shifts are invisible at first. They don't generate open resistance. But they displace individual reference points and upset team dynamics. And it is precisely because they are invisible that they appear in no standard deployment plan.


📡 Resistance is not an adoption problem. It is a signal.

Resistance to AI does not look like other kinds of resistance. What researcher David Autissier calls "schizophrenic bewilderment" captures well what we observe: the same people who use AI heavily every day don't actually trust it. They adopt it and reject it simultaneously. This is not bad faith. It is the expression of a tension that tool-focused training cannot resolve: understanding, trust, professional identity, accountability, ethics, all colliding at once.

What we have learned is that every resistance is a signal before it is a problem. It reveals what has not yet been thought through, said out loud, or decided. Ignoring it guarantees it comes back later, in a form that is harder to address. The organizations that handle this well treat resistance as diagnostic data: a map of the real conversation that needs to happen.

What resistance most often points to is not fear of technology. It points to a gap between what has been announced and what has actually been thought through: who is responsible when AI makes a mistake, what happens to roles that shift, what decisions remain entirely human. These are not questions that a training session can answer.


🏗️ Leading this change means building a human architecture.

The idea of relying on AI champions is appealing. Find the enthusiasts, train them, send them to convince their colleagues. The problem is that champions alone do not carry transformation. They localize it. And more often than not, they burn out doing it.

What we observe in the projects that actually last is four levels that genuinely communicate with each other: leaders who set a clear direction and publicly own the difficult tradeoffs, sponsors who stay visibly committed beyond the launch, managers who translate that direction into concrete practices and create the conditions for real adoption within their teams, and champions who make AI usage visible and credible among their peers. When any one of these levels is absent or disconnected from the other three, energy disperses. The transformation doesn't stop, it quietly falls apart.

The same logic applies to capability-building. The sum of well-developed individual skills does not automatically produce an organization capable of working with AI in a coherent way. It also requires shared practices, spaces where the right questions can be asked collectively, and governance that clearly defines who owns what. Individual empowerment without organizational infrastructure is just dispersal.


🔍 The real question is: what is shifting?

This is the thread running through the entire series. Shadow AI (employees using unapproved tools outside official frameworks) is not a discipline problem. It is the symptom of a framework that is too rigid or too disconnected from how work actually gets done. Accountability is not a question you can defer until after deployment: it is the unaddressed assumption in almost every AI project. And every wave of technological acceleration creates a human debt, made up of reference points that haven't had time to reconfigure and conversations that haven't happened yet.

Leading change in the AI era means changing the starting question. Not "how do we deploy AI?" but "what does it shift in how we work, decide, and own our choices?" That change of question is what separates a transformation that holds from an adoption that is only skin-deep.

It also means accepting that some of the most important work happens before any tool is deployed: naming what is at stake, deciding what remains human, and designing the conditions under which people can genuinely take ownership, not just comply.


The organizations that succeed in their AI transformation are not the ones with the best tools. They are the ones that asked the right questions before deploying and had the courage not to look away from the answers.

For those who want to go deeper, we've put all of this into a white paper: a space where these convictions have room to fully unfold, with the concrete examples and practical tools that this format simply doesn't allow for.
