PRACTICE · III · WORKFORCE
Workflows, redesigned around agents.
The hardest part of an agent program isn’t the models. It’s the org chart underneath: the roles, the KPIs, the review loops, the incentive structures, all drawn for a workforce that didn’t include software that acts. The redraw is the work.
We’ve watched this play out at every scale: the agent ships, the workflow doesn’t change, and within a quarter the agent is being routed around. Not because it didn’t work. Because nobody adjusted the role definitions, the SLAs, or the performance reviews to make using it the rational choice. The agent gets blamed. The org chart should be.
We do this work first, on paper, with the people who own the workflow. New role definitions get written. KPIs get rewired so the agent’s output is credited, not invisible. Review loops get a place for the human-in-the-loop signal to land. Then the agent ships into a structure that already knows what to do with it.
Role redesign, on paper
New role definitions written before the agent ships. What the human owns, what the agent owns, where the seam is.
KPI & incentive rewiring
Performance metrics rebuilt to credit human-and-agent output, not just human throughput. Compensation follows.
Review loops & HITL
A defined place for the human-in-the-loop signal to land, sampled, queried, and fed back into the agent.
Adoption & change
Enablement that meets the team where they are. Less “AI training,” more “here is how your week changes.”
On adoption
Agents don’t fail. The org charts around them do.
The thing this pillar doesn’t come with is a quiz. We could have shipped one. We’d rather have a working session: the workflow on a wall, the team in the room, and a clear picture of what changes on Monday.
START · THE WORKFORCE BRIEF
Whose week actually changes?
Take the Agentic Readiness Assessment to baseline the workforce dimension alongside the other five. Or skip the form and book the working session.