Transparency != Trust

A recent McKinsey report outlined the usual risks slowing AI adoption: accuracy concerns, data privacy, security vulnerabilities, and lack of explainability. But as I read through it, I realised something fundamental was missing. The real obstacle to increased AI adoption isn’t technical. It’s human. It’s trust.

But trust is not just built through transparency or better explainability, although it is tempting to think so. I learned this the hard way, years ago, when I worked at Amazon.

Our team had been tasked with automating inventory management decisions. The vision was pretty cool – algorithms making precise, data-driven calls on inventory while humans focused on the more strategic work. But when it was actually time to take our “hands off the wheel” (HOTW), we weren’t exactly ready.

Week after week, we faced resistance. The same leaders who had championed automation now scrutinised every detail, peppering us with questions our system couldn’t quite answer, not because it was wrong, but because it didn’t explain decisions the way humans would. “Insufficient.” “Inconsistent.” “Unclear.” And most damning of all: “Not what we’re used to.”

One particular meeting still lingers in my mind. We had several senior execs from across Europe on the call when a director said, “I understand what the system is doing. I just don’t believe it’s doing the right thing.” That was the moment it hit me. The problem wasn’t transparency. The problem was trust.

This is what most discussions on AI adoption miss.

Humans apply a different standard to AI than they do to each other. Executives make intuitive calls all the time, often without detailed explanations, and no one demands an audit trail of every assumption. They’ve earned trust through years of consistent performance, shared values, and accountability.

But AI gets zero grace. One mistake, one opaque decision, and confidence crumbles. It’s the math class paradox all over again – arrive at the right answer the wrong way, or don’t show your “steps”, and you get no credit.

We thought our challenge was technical. It wasn’t. It was human.

Trust isn’t won through accuracy alone; it’s built through familiarity, predictability, and alignment with human priorities. What ultimately saved our initiative, or at least provided a stark contrast, wasn’t automation at all. It was a parallel project that took a different approach.

Instead of replacing human judgment, this team built tools to help analysts prioritise their supplier management efforts. Those tools saw far less resistance and far more adoption. Not because the tech was better, but because the approach was different: it promised to support people rather than sideline them.

Looking back, our mistake was obvious. We had tried to summit Everest without first proving we could handle a few steep hikes. We had underestimated how slowly trust moves, how gradually it builds, and how catastrophically it breaks.

This is why so many AI initiatives flounder. It’s not because the models aren’t good enough. It’s because the humans they serve don’t trust them the way they trust each other. Trust isn’t about explainability alone. It’s about knowing what to expect, believing the system aligns with our priorities, and feeling confident that someone, somewhere is accountable when things go wrong.

The industry’s focus on explainability treats a symptom, not the underlying trust deficit. AI adoption accelerates when organisations start with augmentation, not automation. We will truly be ready when organisations map trust networks and decision dynamics before designing AI solutions, create explicit accountability structures with mechanisms for human override and adaptation, and measure trust alongside technical performance to ensure consistency and confidence.

Today, the algorithms are ready. The real question is, are we ready as leaders?