
Most companies that struggle with AI aren't struggling with the technology. They're struggling with themselves.
Date: 01.04.2026
Author: Mate Kiss-Gyorgy
The tools are better than ever. The vendors are eager. The business case usually makes sense on paper. But walk into an organization six months after an AI rollout and you'll often find the same story: adoption is patchy, people are working around the tools, and leadership is quietly wondering what went wrong.
In almost every case, the answer isn't the software. It's the culture it landed in.
The assumption that gets companies into trouble
There's a comfortable belief that technology is the hard part. Get the right platform, integrate it properly, train people on how to use it — and you're done. Culture is treated as background noise, something that will sort itself out once people see the results.
But culture isn't background noise. It's the operating system. And AI doesn't override it — it runs on top of it.
If your organization already has strong habits around sharing information openly, experimenting without fear, and trusting colleagues to make good decisions, AI tends to amplify all of that. It gives people better tools to do what they already want to do.
If your organization is siloed, risk-averse, or has a complicated relationship with transparency, AI tends to amplify that too. The tools get used selectively, or performatively, or not at all.
What culture actually shapes
A few things I see consistently with clients:
How people relate to mistakes. AI tools are most useful when people actually use them — which means experimenting, hitting dead ends, trying again. In cultures where mistakes are quietly punished, people stick to what they know. They use AI for low-stakes tasks and leave the rest alone. The ROI stays low, and leadership wonders why.
Whether trust flows in both directions. One of the most common fears I hear from employees isn't "AI will replace me" — it's "AI will be used to watch me." That fear doesn't come from nowhere. It comes from organizations where surveillance dressed up as performance management is already part of the deal. If people don't trust how data about them is used, they won't engage honestly with tools that generate more of it.
Who feels like they have a voice in the process. AI rollouts that happen to people — announced from the top, with adoption tracked as a metric — tend to generate quiet resistance. Rollouts that happen with people, where teams help shape how the tools fit their actual work, tend to stick. The difference isn't just morale. It's whether the people who understand the work day-to-day have any say in how it changes.
This isn't a reason to slow down
None of this is an argument for waiting until culture is "fixed" before touching AI. Culture shifts through action, not through workshops. The point is to be honest about where you're starting from — and to design your approach accordingly.
Some organizations need to start small, with high-visibility wins that build trust. Some need to address specific fears directly before adoption will move. Some have the cultural foundations in place and just need a clear path forward.
The companies that get this right aren't the ones with the biggest AI budgets or the most sophisticated tools. They're the ones that treat culture as a strategic variable, not an afterthought.
A question worth sitting with
If you introduced an AI tool tomorrow that genuinely made people's work easier, would they embrace it — or find reasons to route around it? The answer tells you more about your readiness than any technology audit will.
If you're not sure, that's usually the right place to start.
