An ‘orchestrator’ or ‘fixer’ of AI owns a company’s internal AI operating layer – deciding where AI should live and how it should be used, writes Annie Liao.

Most companies don’t fail at AI. They have just never decided who owned it.
Despite near-universal adoption, very few organisations feel meaningfully transformed. Productivity gains are hard to measure, and 95% of pilots fail to reach production, according to McKinsey.
The problem isn’t ambition or access. It’s ownership, or more precisely, the way modern enterprises are structured to avoid it.
The ownership gap
In most organisations, work is divided into cost centres. People report into functional hierarchies. Performance reviews reward delivery within narrow scopes. AI, by contrast, cuts horizontally. It reshapes how work flows across sales, finance, HR, legal, and operations. As a result, it belongs everywhere, which often means it belongs to no one.
You can see this play out in organisations that proudly announce AI initiatives, only to discover six months later that usage is inconsistent, governance is unclear, and ROI is hard to prove. AI ends up everywhere and nowhere at once.
What’s missing isn’t AI tools. It’s “orchestration”.
Historically, companies tried to solve transformation by centralising expertise: hiring specialised engineers, forming innovation labs, or spinning up task-forces. This approach worked when software was static, but AI isn't. Without someone making ongoing judgement calls about what to automate and where to train people, the system gets left behind.
It gets worse when AI adoption isn't reflected in KPIs, promotions, or hiring decisions: it stays optional. Managers optimise for what they're measured on, and AI becomes extracurricular.
Enter the AI fixer
Throughout 2025, a new role has quietly taken shape inside the companies that are making progress with AI. It doesn't come with a standard title, and it rarely sits neatly inside engineering or IT. Internally, people simply call them "the person who fixes AI" – the AI Fixer.
This role usually reports directly to a CEO or COO, not because it’s flashy, but because it breaks the cost-centre logic most companies operate under. The AI Fixer’s job isn’t to build models or tools. It’s to own the internal AI operating layer, deciding where AI should live, how it should be used, and how those decisions compound over time.
What's interesting is that the best AI Fixers rarely start in that role. At Build Club, where we work closely with teams navigating AI adoption, we've repeatedly seen the same progression. Someone begins as an AI Champion: a curious internal operator in ops, product, or finance who experiments early, documents workflows, and helps teammates unblock their work.
Over time, leadership realises this person isn’t just “good at AI” – they’re good at deciding where AI actually helps. That’s when the role evolves. They’re given a broader mandate: map high-leverage use cases, standardise workflows, set guardrails, and help teams build real capability rather than one-off AI hacks.
AI Fixers work because they understand the business deeply before they understand the tools.
How teams actually scale AI
We’re now seeing companies formalise this insight.
For example, Zapier, the software company powering workflow automation, has articulated an AI transformation staffing model that mirrors what’s emerging organically: a small central function responsible for tools, governance, leadership alignment, and impact measurement, paired with distributed roles embedded across teams.
These roles include AI fluency champions, automation engineers, and innovation leads close to the work, but connected by shared standards, rituals, and metrics.
The shape is telling. AI transformation doesn't succeed through strict centralisation or chaotic experimentation. It succeeds when there is a clear centre of gravity, and when leadership explicitly empowers that centre.
In practice, the companies that move fastest pair this structure with a strong top-down mandate. The CEO makes it clear AI is not optional. KPIs reflect adoption and outcomes. Critically, HR is involved early, embedding AI fluency into onboarding, role design, and performance reviews.
Judgement, not automation
The AI Fixer often sits at the centre of this system. They translate leadership intent into daily behaviour. They decide which workflows are worth automating now, which require human judgement, and which demand upskilling instead. They track adoption, not just access. They enforce policy quietly, before risk becomes a crisis.
This is where most AI initiatives fail, not at the ideation stage, but at the point where no one is accountable for outcomes. Tools don’t create transformation. Decisions do.

When the right decisions are made early about governance, incentives, hiring, and ownership, every subsequent AI initiative becomes easier. Adoption accelerates and so does ROI. Teams stop asking whether they should use AI and start asking where it can create the most value next.
Platforms like Solaris, which we built after watching hundreds of pilots stall, make this work visible. When usage, skill gaps, governance, and hours saved are measurable, AI stops being an abstract promise and becomes an operational discipline.
We are not in an “AI arms race”. We are in an operating-model transition.
The companies that win won’t be the ones with the most sophisticated models. They’ll be the ones who recognised that AI needed an owner, grew one from within, gave them authority, and rewired the system around them.
If you don’t know who your AI Fixer is, that’s probably the problem.