⚡ Sample report — Digital Strategy & Innovation · Meridian Advisory · 9 members · L0–L3 spread
Section 1 of 6
Team Level Distribution
Where each team member currently operates across the four AI Operator levels.
A four-level spread across nine people is not unusual in a team that has been experimenting with AI for 12 to 18 months. The risk is not the spread itself — it is the invisible coordination cost it creates. When the most advanced member is three full levels above the least advanced, shared judgment about AI quality and risk diverges sharply. L2 and L3 members can judge when AI output is acceptable; L0 and L1 members accept work they are not yet equipped to critically assess. This gap rarely surfaces as a visible problem. It surfaces as noise: slower decisions, uneven quality, rework, and a growing sense that AI use is becoming more fragmented rather than more coherent.
Section 2 of 6
Capability Readiness Heatmap
Team average scores across the six L-CAP capability dimensions. Each bar shows the team mean. Three capabilities are below threshold.
54
Collective Behaviour Score
The Collective Behaviour Score reflects how well individual capabilities combine into shared, coherent AI practice. A score of 54 indicates genuine individual capability that is not yet translating into team-level consistency.
Three capabilities are below threshold. Curiosity and Expertise are real strengths. The gap between those strengths and the team's Focus and Responsibility scores is the core structural problem this report addresses.
Section 3 of 6
Variance & Risk Flags
Where individual scores diverge most within the team. Variance reveals alignment risk that averages cannot.
HIGH
Variance Risk
Team averages do not show the full picture. Two teams with the same average Focus score can look completely different — one where everyone struggles equally, another where half the team is disciplined and the other half is scattered. Variance shows which problem you are facing. High variance in a critical capability is a governance and coordination problem, not a training problem. The bars below show each capability's minimum, average, and maximum individual score within this team.
Critical flag: Focus range 18–71
A Focus range of 18 to 71 within the same team is a structural problem. The 18-point minimum is not someone who is occasionally distracted — it is someone whose AI use is fundamentally ungoverned. Without a shared floor for focus and workflow discipline, the team cannot sustain consistent quality. The most focused members will compensate invisibly — until they stop.
Section 4 of 6
Team Pattern
The behavioural signature that connects the team's data into a single, actionable picture.
MORE ENGINE THAN BRAKES
Team Pattern
This team has high energy, genuine domain capability, and real curiosity about AI. Curiosity at 71 and Expertise at 67 are clear strengths — people here want to explore, and they have the knowledge to add genuine value when they do. What the team lacks is the collective structure to channel that energy into results that hold up over time.
Focus at 38 is the brake that is missing. Responsibility at 44 means that when AI produces an output, the team does not yet have a shared sense of who owns it. These are not individual failures — they are coordination gaps that will widen as AI use intensifies. The team is building speed faster than it is building governance.
If left unaddressed, the pattern compounds. High-curiosity teams with weak focus produce increasing amounts of AI-accelerated output that no one is fully accountable for. The failures that result — a client deliverable that relied too heavily on AI, a decision shaped by AI output no one critically evaluated — are not traceable to any single person. That makes them hard to fix and hard to prevent.
Section 5 of 6
Three Gaps. All Visible. All Addressable.
The specific structural gaps this team needs to close before investing further in AI tools or training.
Structural Gap
The team is experimenting — but not consolidating.
Curiosity at 71 is running ahead of Focus at 38. High exploration, low workflow discipline. The team is busy. It is not yet compounding. Without shared, repeatable workflows, individual AI gains stay individual.
Governance Gap
No one owns what AI produces.
Responsibility at 44 with medium variance means team members operate from different models of who reviews and owns AI output. In a client-facing team, this is a reputational risk. AI-generated work can reach clients without adequate human review because accountability is assumed, not designed.
Coordination Gap
A four-level spread is creating invisible drag.
When L0 and L3 sit in the same team, shared decisions about AI quality are impossible. The most advanced member sets standards others cannot yet reach. The least advanced accepts outputs that others would flag. The gap is structural, not motivational.
The signal that connects all three
This team has the raw material — the curiosity, the expertise, the motivation — that most teams at this stage do not. What it lacks is the architecture to connect individual capability into collective intelligence. The three gaps above are not independent problems. They are three expressions of the same root cause: AI adoption has been left to individuals, and no one has yet designed the shared layer that turns individual AI use into team performance.
Section 6 of 6
What This Means for Your Next Decision
Specific guidance on where to direct — and withhold — investment based on this team's current state.
Not yet worth investing in
— More AI tools or licences. The bottleneck is not access — it is the discipline to use what already exists well.
— Generic AI training programmes. The team's gaps are specific and precisely measured; generic content will not close them.
— Scaling AI workflows before shared standards exist. Scaling fragmented practice accelerates fragmentation.
— Individual performance management around AI adoption. This is a team design problem, not a motivation problem.
Worth investing in now
✓ Visible leadership modelling of AI use. One deliberate act changes what the team understands AI development to mean.
✓ Shared workflow design — not individual use. One workflow, documented and owned collectively, is worth more than nine private ones.
✓ Governance clarity: who reviews AI output before it reaches a client. Design the accountability. Do not assume it.
✓ Targeted development in AI Literacy and Responsibility — the two bottlenecks with the clearest activation path.
This is the shape. The Full Diagnostic shows you the engine — individual profiles for each team member, precise capability gaps by person, and an ordered activation roadmap for the whole team. The data is already there. The analysis is one step away.
Get the Full Diagnostic
9 individual profiles. Precise gaps. An ordered plan.
Each of these nine people has a full individual Pulse Report already generated. The Full Diagnostic compiles them into a team-level view with cross-member gap analysis, coordination recommendations, and a sequenced intervention plan.
This is a sample. Your team's report is generated from your team's actual data — specific to your roles, your context, and your organisation.
- Each person completes the individual assessment in their own time — no workshops, no scheduling
- Individual Pulse Reports generated automatically on completion
- Team report compiled and delivered by Futurebraining
Talk to us about running this with your team →