
The Two AI Walls

Huibert Evekink

A couple of weeks ago, I explored how AI intensifies work rather than reducing it—and why partial solutions create the problems they’re meant to solve. This week, we’re focusing on one of the six interconnected capabilities that determine whether that intensification makes you smarter or just busier: expertise.

The timing is perfect. A Harvard Business Review article in its March 2026 issue, “Gen AI won’t make your employees experts,” supports what we see every day: AI runs into limits when expertise is missing or becomes rigid.

The first AI wall: when amplification stops working

Researchers conducted a controlled experiment with 78 employees at a UK fintech. They divided them into three groups:

Experts: professional writers who regularly produce content

Adjacent outsiders: marketing specialists with a general understanding but no writing experience

Distant outsiders: developers and data scientists with no marketing or writing background

Each group was asked to conceptualize (imagine what an article could cover) and write articles. Some got gen AI assistance, some didn’t.

The results reveal an important aspect of how AI interacts with human capability.

When conceptualizing an article without help from gen AI, the writers got the highest average score, followed by the marketing specialists and the technologists.

For conceptualization with AI, writers, marketers, and technologists performed similarly, and all three groups outperformed writers working alone.

For execution (actually writing the article), the story changed completely: the writers narrowly beat the marketers but left the technologists far behind; with AI, the technologists scored essentially the same as they did without it.

The technologists encountered what researchers call “the AI wall”—the limit to how much AI can help people perform tasks outside their domain of expertise.

Why did this happen?

One participant offered a perfect metaphor: “Conceptualizing is like imagining running a marathon, but writing is like actually running it.”

Imagining requires a general understanding. You need to evaluate whether an idea is sufficient.

Executing requires a deeper, more structured knowledge. You need to know how to convey the message, what language works, when to adjust, and why certain phrases land better than others.

The marketing specialists could transition to writing because they understood the general language and had sufficient domain knowledge to refine AI-generated suggestions. The technologists couldn’t. They lacked the intuition and judgment to make good decisions about what to keep and what to discard.

So they just copied and pasted what AI gave them.

This is the expertise paradox: AI appears to democratize capability, but it actually increases the value of structured knowledge. The better you understand a domain, the more effectively you can imagine, steer, validate, and improve the outputs of AI.

Without that knowledge, AI doesn’t amplify your capability. It amplifies your ignorance.

The second wall

So, the first wall appears when people lack the expertise to judge or refine what AI produces.

The second wall sits at the other extreme. It appears when expertise becomes fossilized: people know their domain deeply but become too certain, too narrow, or too closed to explore unfamiliar approaches. One wall produces shallow use of AI. The other blocks renewal, experimentation, and adaptation.

Trapped between two walls and a hard place

“Now that we have AI, junior people can do expert work.”

Danger: Organizations expand scope faster than they develop capability, creating a workforce that confidently produces more while understanding less.

“Now that our experienced people have AI, we need fewer juniors.”

Danger: Organizations protect short-term productivity while quietly eroding the pipeline of future expertise. This can be compounded by the fact that senior people often have more domain knowledge but less AI fluency, more confidence in what they already know, and less openness to exploring unfamiliar approaches.

A Stanford Digital Economy Lab study suggests this may already be happening: in AI-exposed occupations, employment declines have been bigger among younger workers, while more experienced workers have remained relatively stable.

One path creates shallow performance without understanding. The other starves the next generation of the experience needed to become experts at all.

The bottom line

AI does not replace expertise. It exposes its absence and, sometimes, its rigidity.

Organizations can now make two mistakes at once: let people work beyond their depth because AI seems to compensate, or freeze out younger talent because experienced people can absorb more of the work.

Either way, capability weakens beneath the surface while output still looks productive.

So the real question is not whether people can use AI. It is whether they are still building and refreshing the human judgment needed to challenge, refine, and improve what AI produces.

For individuals, that means continuing to learn. Study your domain. Read a book or a report beyond the summaries. Stay close to the real material. Use AI to extend your thinking, not to replace the struggle that builds judgment.

For organizations, it means treating expertise as infrastructure. Protect deep learning. Design work that still builds judgment. Use AI to augment expertise, not to hollow it out. And do not quietly kill the apprenticeship layer by freezing out younger people.

That is the choice: scale output while capability erodes, or use AI in ways that keep human expertise alive.
