Insights

Operationalising AI risk management: AUS AI risk management frameworks

Mike

Thu Nov 21 2024 · 10 min read

Australia has new national frameworks for managing AI risk and safety. Our CTO Mike shares insights for investors and tech leaders, including starter questions for portfolio companies and a downloadable resource to help operationalise AI risk management.

For investors and tech leaders navigating the complex landscape of AI adoption, Australia's new Voluntary AI Safety Standard and AI Impact Navigator offer valuable frameworks for risk management and stakeholder communication. These tools provide a framework for assessing and measuring AI impact, and for communicating with stakeholders about it. Being largely voluntary, these guidelines present both opportunities and challenges. This article explores both and offers some tools to work through that complexity.

This article includes:

  • starter questions for investors to ask within a portfolio business, and

  • a downloadable tool for tech leaders to move to operationalised action.

A Proactive Approach to AI Governance

The release of a national AI Safety Standard signals that the Government is committed to a culture of responsible AI innovation in this country. It sets out guardrails and leaves room for businesses to interpret how these apply to their own use cases.
While the model is voluntary, like the UK's, mandatory guardrails for high-risk settings are likely to be introduced soon. Other jurisdictions (eg the EU) are moving towards more widespread binding regulation from the outset.

The AI Impact Navigator is a set of tools to help businesses with stakeholder communication and confidence building, a key determinant of successful AI implementation and central to meeting the AI Safety Standard. It is designed to help businesses identify potential harms and consequences from the use of AI, with a focus on safety, diversity, inclusion and fairness.

By their nature, the two frameworks omit detailed instructional guidance. How businesses operationalise management of AI risks remains largely at their discretion.

Alignment with International Standards

A strength of Australia's AI guidelines is their alignment with frameworks from the US National Institute of Standards and Technology (NIST). This ensures that Australian businesses will be working towards globally recognised best practices, facilitating international collaboration and competitiveness. The question for businesses already using NIST is how to leverage that existing work.

Challenges in Implementation

While the Standard and the AI Impact Navigator represent positive steps, their flexibility invites questions and misinterpretation about concrete implementation strategies.

To bridge this gap, we have developed:

  • For investors: questions to explore AI governance and compliance.

  • For tech leaders: a downloadable tool to help move from assessment to tangible action, based on international frameworks.

Questions to explore AI governance and compliance

The following starter questions will help build a deeper understanding of AI governance and compliance in the context of the frameworks:

  • What accountability processes are in place for AI systems?

    Listen for how the organisation establishes governance, assigns responsibility, and ensures compliance with national frameworks.

  • How are risks associated with AI technologies assessed and managed?

    Dig into the organisation's approach to identifying, evaluating, and mitigating potential risks linked to ongoing AI development.

  • How does your data governance framework ensure data quality and security?

    Look for measures taken to protect data integrity, provenance, and cybersecurity in the context of AI.

  • How are stakeholders, including affected communities, engaged in your AI development process?

    Understand how the views of affected stakeholders are sought and considered on an ongoing basis.

These questions can help investors gain deeper insights into how portfolio companies are navigating the complexities of AI implementation, and may uncover areas for further exploration.

For Technical Leaders: Download our tool

We track the evolving NIST frameworks, and we have correlated the 10 guardrails of the national standard against NIST so that AI activities within an enterprise can be plotted against both.

We're offering this as a free download, to help you translate guidelines into actionable tasks and chart tangible progress towards improved AI governance. It's a simple Excel format that you can start using straight away.
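For teams that prefer code to a spreadsheet, the cross-mapping idea can be sketched in a few lines. Everything below is an illustrative placeholder, not the actual mapping in our tool: the guardrail and activity names are hypothetical examples, and only the four NIST AI RMF core functions (Govern, Map, Measure, Manage) come from the published framework.

```python
# Illustrative sketch: plotting AI activities against both the Australian
# guardrails and the NIST AI RMF core functions.

# Hypothetical subset of the 10 guardrails, each tagged with the NIST AI RMF
# core function it most closely relates to (mapping is an example only).
GUARDRAIL_TO_NIST = {
    "accountability": "Govern",
    "risk_management": "Map",
    "data_governance": "Govern",
    "testing_and_monitoring": "Measure",
    "human_oversight": "Manage",
}

def coverage(activities):
    """Return the NIST functions touched by recorded activities, and those missed.

    `activities` maps an internal activity name to the guardrail it addresses
    (both names are made-up examples, not prescribed terminology).
    """
    touched = {GUARDRAIL_TO_NIST[g] for g in activities.values()
               if g in GUARDRAIL_TO_NIST}
    missing = set(GUARDRAIL_TO_NIST.values()) - touched
    return touched, missing

# Example: two activities logged so far.
activities = {
    "model risk register": "risk_management",
    "quarterly bias audit": "testing_and_monitoring",
}
touched, missing = coverage(activities)
print(sorted(touched))   # ['Map', 'Measure']
print(sorted(missing))   # ['Govern', 'Manage']
```

The gap list (`missing`) is the useful output: it shows at a glance which NIST functions have no supporting activity yet, which is the same question the spreadsheet answers visually.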

Use the form below to receive yours.

Moving Forward

Australia's AI Safety Standard and Impact Navigator provide essential frameworks for responsible AI adoption, and they won't be the last of their kind. The key challenge for investors and tech leaders is translating voluntary guidelines, and mandatory ones should they emerge, into actionable strategies. By leveraging targeted assessment tools and practical implementation plans, organisations can strike the right balance between AI opportunities and risks while proactively building stakeholder trust. This approach not only enhances governance but also positions businesses at the forefront of responsible AI implementation in this country's evolving tech landscape.

Call today, or book us to call you: +61 429 342 051 · connect@ctolabs.com.au