Your AI Strategy Is Showing - Whether You Have One or Not
- CyberWorqs


There's a quiet crisis unfolding inside organisations right now. It doesn't look like a crisis. It looks like a Slack channel full of people pasting ChatGPT outputs, a few enthusiastic employees who have gone rogue with AI tools, and a leadership team that has approved a budget line for AI initiatives without being entirely sure what that means.
That's not an AI strategy. And the gap between having one and not having one is getting wider every quarter, including right here in Australia.
What the Data Is Actually Telling Us
The latest Gallup workforce data, surveying over 23,000 U.S. employees, paints a picture that should stop any executive in their tracks. Australian research tells a strikingly similar story.
Half of U.S. workers now report using AI on the job at least a few times a year. In Australia, 49% of workers report using generative AI in the past year, up from 38% in 2023, with 74% of those users saying they rely on it for work. On the surface, that sounds like encouraging progress. Look closer, and the story gets uncomfortable.
Only 37% of U.S. employees say their organisation has formally implemented AI to improve productivity or quality. A full 40% say their organisation hasn't adopted AI at all. And here's the number that should keep leaders up at night - nearly a quarter of employees simply don't know whether their company has an AI strategy.
Australia's picture is no cleaner. Local research reveals what's being called a "two-speed" digital economy: while 64–84% of Australian SMBs report using AI in some capacity, a clear maturity gap exists between those experimenting casually and those embedding AI strategically into how they work. The tools are in people's hands. The strategy to make them count is largely absent.
This is not a data-literacy problem. It's a communication problem. It's a strategy problem. When frontline workers, individual contributors, and part-time employees are operating in the dark about something as foundational as how their organisation plans to use transformative technology, that's a leadership failure.
The Productivity Paradox
Here's the cruel irony in all of this: AI is working, at the individual level. Employees who use AI frequently report genuine improvements in their productivity and efficiency. Australian workers report using it primarily to assist with writing (75%), brainstorming (69%), and problem solving (70%) - strategic, high-value tasks.
But individual productivity gains are not the same as organisational performance improvements.
A recent MIT study found that 95% of generative AI pilot projects are failing to move beyond the proof-of-concept stage, and only 29% of businesses report meaningful ROI from their AI investments. BCG's 2025 AI Value Gap study of 1,250 firms found that a mere 5% of companies are realising AI value at scale, while 60% report little or no return despite significant investment.
The Gallup data confirms this. Only about one in ten employees in AI-adopting organisations strongly agree that AI has fundamentally transformed how work gets done across their organisation. Individual task efficiency? Yes. Deep, structural change to how work flows? Barely.
McKinsey put it plainly: "The challenge of AI in the workplace is not a technology challenge. It is a business challenge."
The tools are not the problem. The absence of intentional strategy around those tools is.
The Shadow AI Problem
When a company has no clear AI strategy, nature abhors a vacuum - and employees fill it themselves.
Gallup's research suggests many workers are already using personal AI tools - chatbots, writing assistants, and research tools - without any visibility into, or guidance from, their organisation. This is "shadow AI": the enterprise equivalent of shadow IT from the 2010s, except the stakes are considerably higher.
Australia has a specific version of this problem. Research shows a clear gap between the responsible AI practices that local SMEs intend to implement and those they have actually deployed. Businesses are committed to responsible AI in principle, but many face practical barriers in translating those intentions into operational reality. This is particularly acute in industries where AI awareness is lowest but operational exposure is highest: construction, manufacturing, and agriculture. These are sectors where AI-enabled automation, predictive maintenance, and supply chain optimisation have among the clearest ROI cases, yet they are the least likely to have any governance framework in place for the AI tools their workers are already quietly using.
Shadow AI means:
No data governance. Employees pasting sensitive client data, financial projections, or internal documents into consumer AI tools, often without realising the implications.
No consistency. Ten people in the same team using ten different tools, ten different prompts, ten different quality thresholds, with no way to compare, quality-check, or scale the outputs.
No learning loops. The organisation gets no institutional benefit from individual experimentation. When that employee leaves, the knowledge leaves with them.
No accountability. When an AI-assisted output causes a problem - a hallucinated fact in a report, a flawed legal summary, a biased recommendation - no one knows where it came from or how to prevent it happening again.
An AI strategy doesn't mean locking AI down. It means creating the conditions in which AI can be used safely, consistently, and to the benefit of the organisation as a whole.
The Board, Leadership, and IT Disconnect
Here's perhaps the most damaging structural problem in AI adoption today, and the least talked about: the board thinks the C-suite owns it, the C-suite can't agree on who owns it, and IT is left holding the bag.
A Pearl Meyer survey of 108 executives and board members lays this bare. When asked who should lead AI initiatives, 90% of board members pointed squarely to the C-suite. But within the C-suite itself, consensus collapses: 32% say it belongs to the C-suite collectively, 27% assign it to individual business unit leaders, 22% believe it sits one level below the C-suite, and 17% place it with functional heads like HR, finance, or legal.
That's not a governance model. That's organised confusion. And it has a direct cost.
When AI ownership is ambiguous at the top, the default is for AI to get treated as a technology project, which means it falls to IT. But as AI strategy advisor Karina Arteaga puts it, most companies are stuck treating AI as an IT problem "when it is fundamentally an operating-model problem." The result is a perfect storm: high expectations from the board, low organisational readiness on the ground, and fragmented decision-making in between.
CIOs and IT leaders are then caught in an impossible position. They are being held accountable for AI outcomes they don't fully control, judged by how systems perform at scale, while business units are out selecting their own use cases, running their own pilots, and making promises that the underlying infrastructure wasn't built to support. As one CIO advisor put it: "The biggest risk right now is allowing the business to launch dozens of disconnected AI experiments. Weak foundations make every failure look like an IT failure."
Meanwhile, boards are receiving confident updates from the top floor while internally the C-suite is still working out the details. The Pearl Meyer survey's conclusion is stark: leadership systems are not evolving fast enough to support either strategy or AI.
This plays out predictably on the ground:
IT builds infrastructure nobody uses because business units weren't consulted and don't trust tools they didn't choose.
Business units run rogue pilots because IT moves too slowly and leadership hasn't defined a process.
The board approves budgets for AI without clear KPIs, success criteria, or accountability structures.
Employees get caught in the middle, receiving conflicting signals, inadequate training, and no clear answer to "are we actually doing this or not?"
The McKinsey State of AI report reinforces this: 88% of organisations are using AI in some form, but almost two-thirds have not implemented it at scale, and only 39% can demonstrate a measurable profit impact. The technology is not failing. The operating model around it is.
What a Real AI Strategy Actually Looks Like
An AI strategy is not:
A list of approved tools
A budget allocation
A task force with no decision-making power
A pilot that's been running for 18 months with no defined success criteria
An AI strategy is a deliberate answer to three questions:
1. Where will AI change how we create value? Not where can AI be applied (everywhere), but where does AI-enabled capability change our competitive position, our customer outcomes, or our cost structure in a way that matters?
2. How will we build the organisational capability to capture that value? This means training, yes - but more importantly it means redesigning workflows, not just inserting AI into existing ones. It means managers who model and support AI adoption. It means clear governance on data, quality, and accountability. Australia's National AI Centre framework is explicit on this: someone must be responsible for the AI's output, customers must know when they're interacting with AI, and critical decisions must be reviewable by humans. That's not a compliance checklist - it's the skeleton of a governance model.
3. How will we know if it's working, and how will we adapt? The organisations seeing ROI from AI are iterating quickly. They measure, they learn, they adjust. IBM's guidance for 2026 is clear: define success metrics before deployment, implement tracking that attributes business outcomes to specific AI capabilities, and create feedback loops that report those outcomes across the organisation. "AI strategy" is not a document. It's a practice.
The Uncomfortable Conclusion
The data, both globally and in Australia, reveals something important about where most organisations actually stand: AI is spreading through the workforce whether leaders plan for it or not. Employees are experimenting, adapting, and building habits around these tools in the absence of guidance. The question is not whether AI will change your organisation. It already is. The question is whether you're shaping that change or just watching it happen.
The economic modelling is unambiguous: AI adoption could add $44–50 billion to the Australian economy annually by 2030. It is the single largest lever available to lift national productivity. But that figure assumes adoption that is intentional, governed, and built on clear organisational accountability. Fragmented, ungoverned, strategy-free adoption (the kind most organisations currently have) captures very little of that value. The gains flow to the organisations that treated this seriously before it became urgent.
Australia has a genuine window right now: government support programs, a workforce that is broadly optimistic, and an economic challenge AI is uniquely positioned to address. That window will not stay open indefinitely.
The absence of a strategy is itself a choice, and the data is increasingly clear about what kind of choice it is.
Sources: Gallup Workforce Panel (Feb 2026, n=23,717); Pearl Meyer AI Governance Survey (2026); McKinsey State of AI 2025; Conference Board C-Suite Outlook 2026; IBM AI & Technology Leaders Report 2026; Tech Council of Australia - Future Ready Report (Aug 2025); Google/Ipsos Australia AI Adoption Survey (2025); Australian Government Department of Industry AI Adoption Tracker (Q1 2025); BCG AI Value Gap Study 2025; LinkedIn Jobs on the Rise Australia 2026; Indeed Hiring Lab Australia (Apr 2026); MIT Generative AI Pilot Research; AI Lab Australia SMB Report 2026.
