From tick-box to top-table: what Australia's new privacy laws mean for boards and GRC leaders

Australia's privacy reforms have quietly turned data privacy into a first-order governance problem. A new statutory tort in force since June 2025, tougher enforcement powers, and rising expectations around AI and automated decision-making (ADM) mean privacy now sits alongside cyber, conduct and workplace safety as a board-level exposure.
Here's what's changed, what's still coming, and what boards and GRC leaders should do about it.
What's already in effect
The Privacy and Other Legislation Amendment Act 2024 (POLA Act) passed in late November 2024 and received Royal Assent on 10 December 2024. Most Tranche 1 provisions commenced immediately. Three changes deserve particular board attention:
Stronger enforcement powers and penalties. The regulator now has enhanced investigation and enforcement powers, including new search and seizure powers, alongside a tiered penalty regime for a broader range of contraventions.
A new statutory tort for serious invasions of privacy, in force since 10 June 2025. For the first time, individuals have a direct legal avenue to seek damages from organisations for serious privacy invasions, without needing to go through the regulator. Your legal advisers can brief you on what this means for your specific business, but at a governance level the key shift is that privacy failure can now generate litigation risk, not just regulatory risk.
Cybersecurity uplift. Organisations are now explicitly required to take steps that include technical and organisational measures to protect personal information, not just technical controls alone.
What's still coming
Automated decision-making (ADM) transparency, December 2026. From 10 December 2026, organisations will be required to update their privacy policies to disclose when automated processes are used to make decisions affecting individuals. The two-year grace period is deliberate, but it is not long given the governance work involved.
Small business coverage, July 2026. From 1 July 2026, an estimated 100,000+ small businesses will become regulated by the Privacy Act for the first time. If your supply chain or service partners include smaller organisations, this has third-party risk implications.
Children's Online Privacy Code, by December 2026. The OAIC is developing a binding code for organisations that handle children's data online. Consultation is underway now.
Tranche 2 reforms, timing unknown. The Attorney-General confirmed in early 2026 that further reforms are being progressed, but no Bill has been introduced yet. Boards should monitor this space.
Why the statutory tort changes the governance picture
Most boards think about privacy through the lens of data breach notification. The tort shifts that frame considerably.
Privacy failure can now generate litigation risk alongside regulatory risk. Governance decisions about what your policies say, whether your practices match them, and how you respond to incidents may be examined in court, not just by a regulator. That raises the evidential stakes for risk assessments, training records and audit trails.
It also moves privacy closer to personal harm territory, more like workplace safety or discrimination than IT compliance. The implication for boards is that privacy governance should be treated with similar rigour to those frameworks: documented, tested and owned at a senior level.
A few practical governance questions worth putting to management:
Does your risk taxonomy explicitly recognise privacy litigation risk, not just "data breach risk"?
Do your incident response playbooks account for the possibility of follow-on claims, not just notification obligations?
Has your insurance program (cyber, professional indemnity, D&O) been reviewed in light of the tort? This is a conversation to have with your broker and legal advisers.
Governing AI and automated decisions before December 2026
The December 2026 deadline for ADM transparency may feel distant. It is not, given the governance work required. If your organisation uses algorithms to score, rank or make decisions about customers, tenants or employees, that is a material and growing privacy risk.
Practical steps to take now:
Build your ADM/AI register. You need a single, maintained view of where AI and automated decision-making are used across the organisation, capturing purpose, data sources, degree of human involvement, key risks and controls.
Check that privacy notices match reality. If your policy says "we may use automated tools" but you are running scoring or decisioning that materially affects people, that gap is a governance problem before the formal obligation even kicks in.
Treat model change as risk change. Retraining a model, onboarding a new vendor or adding a data source should trigger an assessment, not just an engineering ticket.
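To make the register concrete, here is a minimal sketch of what such a record could look like, assuming no GRC tooling is yet in place. The field names and the example entries are illustrative only, not a prescribed schema; the fields simply mirror the list above (purpose, data sources, human involvement, risks, controls).

```python
from dataclasses import dataclass, field

@dataclass
class AdmEntry:
    """One automated decision-making use case in the register."""
    name: str
    purpose: str
    data_sources: list[str]
    human_involvement: str          # e.g. "none", "review-on-appeal", "human-in-the-loop"
    affects_individuals: bool
    key_risks: list[str] = field(default_factory=list)
    controls: list[str] = field(default_factory=list)

# Hypothetical entries for illustration only.
register = [
    AdmEntry(
        name="tenant-screening-score",
        purpose="Rank rental applicants",
        data_sources=["application form", "credit bureau"],
        human_involvement="none",
        affects_individuals=True,
        key_risks=["indirect discrimination", "opaque criteria"],
        controls=["quarterly bias review"],
    ),
    AdmEntry(
        name="invoice-ocr",
        purpose="Extract fields from supplier invoices",
        data_sources=["scanned invoices"],
        human_involvement="human-in-the-loop",
        affects_individuals=False,
    ),
]

def disclosure_candidates(entries):
    """Entries most likely to need ADM transparency disclosure:
    fully automated and affecting individuals."""
    return [e.name for e in entries
            if e.affects_individuals and e.human_involvement == "none"]

print(disclosure_candidates(register))  # ['tenant-screening-score']
```

Even a simple structure like this gives management a defensible answer to "where do we use ADM, and which uses must we disclose?" — which is the question the December 2026 obligation will put to them.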
ISO/IEC 42001, the AI management system standard, is worth considering here. It provides a structure for managing AI across its lifecycle and connects directly to your existing privacy and information security frameworks.
Integrating privacy into your GRC framework
Privacy should not be managed as a separate silo. Three practical moves:
Elevate privacy in the risk register. Give it its own entries (ADM non-compliance, potential privacy litigation, systemic over-collection, weak vendor oversight) with risk appetite statements and measurable KRIs such as volume of privacy complaints, number of high-risk PIAs completed, and time to remediate privacy incidents.
Make policies control behaviour. Map them to concrete procedures and system configurations, not just PDFs on an intranet. Check that data retention practices actually match what your policy promises, as over-retention is a specific enforcement target.
Embed privacy in internal audit. Test whether policies are understood, followed and evidenced. Regulators have flagged common failure themes including over-collection, opaque ADM, poor consent practices and weak governance of children's data. Feed findings back to the board so directors have a realistic view of privacy maturity, not just a self-reported compliance status.
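As a sketch of how the KRIs above could be made measurable, assuming incident data is available as simple dated records (the field names and figures here are illustrative, not drawn from any real dataset):

```python
from datetime import date
from statistics import median

# Hypothetical privacy incident log; field names are illustrative.
incidents = [
    {"opened": date(2025, 7, 1),  "remediated": date(2025, 7, 9),  "type": "over-retention"},
    {"opened": date(2025, 8, 3),  "remediated": date(2025, 8, 5),  "type": "complaint"},
    {"opened": date(2025, 9, 10), "remediated": date(2025, 9, 24), "type": "complaint"},
]

def kri_summary(records):
    """Two KRIs from the risk-register discussion above:
    privacy complaint volume and median days to remediate."""
    days = [(r["remediated"] - r["opened"]).days for r in records]
    return {
        "privacy_complaints": sum(1 for r in records if r["type"] == "complaint"),
        "median_days_to_remediate": median(days),
    }

print(kri_summary(incidents))  # {'privacy_complaints': 2, 'median_days_to_remediate': 8}
```

Reported against agreed thresholds each quarter, figures like these give the board trend data rather than a self-assessed "green" rating.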
The bottom line
With the statutory tort already in force and ADM transparency obligations arriving in December 2026, the window for reactive governance is closing. Boards that treat privacy as a compliance checklist will find themselves explaining not just what went wrong, but why it was not properly reflected in their governance structures.
The organisations that move now, integrating privacy into their GRC stack and governing AI with the same discipline applied to financial and safety risks, will be better positioned for the next wave of regulatory scrutiny, including whatever Tranche 2 brings.
