Artificial Intelligence (AI) is revolutionising industries and transforming everyday life, but with its rapid advancement comes the pressing need for robust safety and ethical standards. 

In Australia, the development and deployment of AI technologies are governed by existing laws, as well as principles, standards and frameworks that have been introduced to build trust in AI, with an initial focus on state and federal government.

The potential benefits of AI are immense, ranging from improved healthcare to enhanced customer experiences. However, these technologies also pose significant risks, including privacy violations, biased decision-making, misinformation and security vulnerabilities. Ensuring AI safety and promoting responsible AI use are crucial for mitigating these risks and building public trust in AI systems.

AI has been used in various forms by organisations for many years. The introduction of Generative AI has democratised access to AI technology and dramatically increased the number of use cases, but it also brings additional risks. A recent example involves an LJ Hooker real estate agency that used ChatGPT to create content for property listings. The agency falsely advertised a property as being located near two local schools that didn't exist. This occurred due to a phenomenon known as a ‘hallucination,’ a common issue with Generative AI. Such risks can be challenging to manage and often lead to diminished trust in AI systems. They may prevent organisations from advancing from the proof-of-concept stage into full production, or they may expose organisations to unacceptable risks.

Hallucinations are just one of the potential risks. Below we explore other risks, relevant legal and ethical frameworks and strategies for mitigating these risks and building trust in AI systems.

AI governance: Ethical principles, standards and frameworks

AI-related activities are regulated through a combination of existing laws, principles, standards and frameworks that apply in various contexts and aspects of AI development and deployment. Central to the governance of AI are the Australian AI Ethics Principles.

Australian AI Ethics Principles

The Australian AI Ethics Principles have 8 core principles:

  1. Human, societal and environmental wellbeing: AI systems should benefit individuals, society and the environment.
  2. Human-centred values: AI systems should respect human rights, diversity, and the autonomy of individuals.
  3. Fairness: AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.
  4. Privacy protection and security: AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.
  5. Reliability and safety: AI systems should reliably operate in accordance with their intended purpose.
  6. Transparency and explainability: There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI, and can find out when an AI system is engaging with them.
  7. Contestability: When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system.
  8. Accountability: People responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.

Voluntary AI Safety Standard

Aligned with the AI Ethics Principles and international standards, the Voluntary AI Safety Standard, released in September 2024, outlines ten voluntary guardrails for developers and deployers of AI:

  1. Establish, implement, and publish an accountability process including governance, internal capability and a strategy for regulatory compliance.
  2. Establish and implement a risk management process to identify and mitigate risks. 
  3. Protect AI systems, and implement data governance measures to manage data quality and provenance. 
  4. Test AI models and systems to evaluate model performance and monitor the system once deployed. 
  5. Enable human control or intervention in an AI system to achieve meaningful human oversight. 
  6. Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content. 
  7. Establish processes for people impacted by AI systems to challenge use or outcomes. 
  8. Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks. 
  9. Keep and maintain records to allow third parties to assess compliance with guardrails. 
  10. Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness.

The guardrails are proposed as voluntary steps organisations can take now to improve record-keeping, transparency and testing approaches.

National Framework for the Assurance of AI in Government

While primarily aimed at state and federal government agencies, this assurance framework provides valuable insights for organisations on AI assurance practices. This framework aligns with Australia's AI Ethics Principles and the guardrails outlined in the Voluntary AI Safety Standard.

The Australian Government's intention is to lead from the front. Organisations should consider following the standard and applying the assurance framework in anticipation of similarly aligned mandatory requirements which may apply in future. 

Key risks and protections

Privacy and data protection

A primary consideration with AI is the handling of personal data. Organisations must comply with the Australian Privacy Principles (APPs), which govern how organisations collect, use and disclose personal information.

AI systems that use personal data must adhere to these principles, ensuring the privacy and protection of the data used. Additionally, organisations may only use or disclose information for the primary purpose for which it was collected, or for a permitted secondary purpose by exception. Exceptions include situations where a person could reasonably expect the data to be used or disclosed, such as normal internal business practices like auditing, business planning or billing, or where the personal information is de-identified (APP 6.22).

When personal data is collected, organisations require consent from individuals to use and disclose it for a specific purpose. Where personal data is input into an AI system and the organisation maintains control, this is considered a use of personal data. Where the data is processed by a third party, this is considered a disclosure.

Strategies to prevent unnecessary or unauthorised disclosure of personal information include:

  • Using private, small language models, which may be effective for confidential and sensitive use cases. 
  • When using hosted large language models, applying guardrails that prevent personal information from being disclosed. 
  • Applying guardrails to outputs so that any personal information they contain is not disclosed to end-users (a minimal sketch of such a filter appears after this list). 
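As a minimal illustration of the guardrail strategies above, the sketch below redacts obvious personal identifiers (emails, Australian phone numbers and TFN-like numbers) from prompts before they reach a hosted model, and screens the response on the way back. The patterns and the `call_hosted_llm` callable are illustrative assumptions; production guardrails would normally rely on a dedicated PII-detection service rather than hand-written rules.

```python
import re

# Illustrative patterns only: real guardrails should use a dedicated
# PII-detection library or service rather than hand-written regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE_AU": re.compile(r"(?:\+61|0)[2-478](?:[ -]?\d){8}"),
    "TFN": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),  # Tax File Number-like
}

def redact_pii(text: str) -> str:
    """Replace anything matching a known PII pattern with a placeholder tag."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def guarded_completion(prompt: str, call_hosted_llm) -> str:
    """Redact PII on the way in, and screen the model's answer on the way out.

    `call_hosted_llm` is a hypothetical callable wrapping whichever hosted
    large language model the organisation uses.
    """
    safe_prompt = redact_pii(prompt)
    response = call_hosted_llm(safe_prompt)
    return redact_pii(response)  # second pass guards the output to end-users

if __name__ == "__main__":
    fake_llm = lambda p: f"Echo: {p}"  # stand-in for a real hosted model
    print(guarded_completion(
        "Draft a reply to jane.citizen@example.com on 0412 345 678.", fake_llm))
```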

Discrimination

AI technologies can inadvertently perpetuate historical biases that exist in training data, leading to discriminatory outcomes. In Australia, anti-discrimination laws prohibit discrimination on various grounds, including race, sex, disability and age.

Organisations developing and deploying AI systems must comply with these laws by taking proactive steps to identify and mitigate biases in their systems. 

The use of personal data raises concerns about discrimination, particularly when AI systems rely on personal information to make automated decisions. Bias can arise from factors such as the ethnicity implied by a person's name, which may lead to negative outcomes for the individual. These situations must be handled with care and diligence. To prevent potential harm, fully automated decision-making should be avoided, and a human should always remain in control of the process, ensuring meaningful oversight throughout the decision-making process.

Additional strategies, such as conducting regular audits, testing models for bias, implementing guardrails for bias detection, and fostering diverse and inclusive teams to oversee AI development, can help mitigate these risks (a minimal audit sketch follows).
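As one illustration of testing models for bias, the sketch below compares favourable-outcome rates across groups and flags any group falling below a chosen share of the best-performing group's rate. The "four-fifths" heuristic, the data fields and the cut-off are illustrative assumptions, not requirements of any Australian framework.

```python
from collections import defaultdict

def selection_rates(records):
    """Compute the favourable-outcome rate per group.

    `records` is a list of (group, decision) pairs where decision is 1 for a
    favourable automated outcome and 0 otherwise; the structure is
    illustrative, not taken from any particular system.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_flags(records, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the 'four-fifths' heuristic, used here purely as
    an illustrative audit check)."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

if __name__ == "__main__":
    sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    print(selection_rates(sample))         # {'A': 0.67, 'B': 0.33} approximately
    print(disparate_impact_flags(sample))  # {'A': False, 'B': True}
```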

Consumer protection

Currently under review, the Australian Consumer Law (ACL) provides protections to consumers against misleading or deceptive conduct, unfair contract terms, and unsafe products. AI systems that interact with consumers or impact their purchasing decisions must comply with the ACL. This includes ensuring that AI-driven recommendations and decisions are transparent, fair, and do not mislead consumers.

Strategies to navigate AI safety

Organisations developing or deploying AI systems should implement comprehensive compliance strategies to navigate the complex legal, ethical and regulatory landscape. Key strategies to consider include:

1. Risk assessments

Organisations should conduct thorough risk assessments to identify potential legal and ethical risks associated with their AI systems. This involves evaluating data sources, algorithmic biases, security vulnerabilities, and potential impacts on privacy and human rights. Evaluate the full range of potential harms in consultation with stakeholders on an ongoing basis.

2. Guardrails

Design and test guardrails that address the identified risks and suit the use case; examples include preventing leakage of personal information and detecting hallucinations (a crude grounding check is sketched below).
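A crude grounding check is one way to surface likely hallucinations for human review: the sketch below scores each sentence of a generated answer by its word overlap with the source material (for example, the facts supplied for a property listing) and flags poorly supported sentences. The overlap threshold is an illustrative assumption; production systems would more likely use entailment models or retrieval-based fact checking.

```python
import re

def sentence_grounding_scores(answer: str, source: str):
    """Crude grounding check: the share of each answer sentence's words that
    also appear in the source text. Real hallucination guardrails typically
    use entailment models or retrieval-based checks instead."""
    source_words = set(re.findall(r"[a-z']+", source.lower()))
    scores = {}
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = re.findall(r"[a-z']+", sentence.lower())
        if not words:
            continue
        scores[sentence] = sum(w in source_words for w in words) / len(words)
    return scores

def flag_unsupported(answer: str, source: str, min_overlap: float = 0.6):
    """Return sentences whose overlap with the source falls below the
    threshold, as candidates for human review before publication."""
    return [s for s, score in sentence_grounding_scores(answer, source).items()
            if score < min_overlap]

if __name__ == "__main__":
    listing_facts = "Three-bedroom home with a renovated kitchen and a large backyard."
    draft = ("Three-bedroom home with a renovated kitchen. "
             "Walking distance to two excellent local schools.")
    print(flag_unsupported(draft, listing_facts))  # flags the unsupported claim
```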

3. Governance frameworks

Establishing a robust governance framework is essential for overseeing AI development and deployment. This includes defining roles and responsibilities, setting up ethical review boards, and developing policies and procedures for AI management. Apply data governance principles, in particular data quality controls over input data (a minimal data quality check is sketched below).
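As a minimal sketch of data quality controls over input data, the check below reports missing required fields and out-of-range values before data is used to train or prompt an AI system. The field names and ranges are illustrative assumptions.

```python
def data_quality_report(rows, required_fields, allowed_ranges):
    """Minimal input-data quality check: completeness and range validation.

    `rows` is a list of dicts; `allowed_ranges` maps field name to (min, max).
    Field names and thresholds are illustrative, not from any specific system.
    """
    issues = []
    for i, row in enumerate(rows):
        for field in required_fields:
            if row.get(field) in (None, ""):
                issues.append((i, field, "missing"))
        for field, (low, high) in allowed_ranges.items():
            value = row.get(field)
            if value is not None and not (low <= value <= high):
                issues.append((i, field, f"out of range: {value}"))
    return issues

if __name__ == "__main__":
    rows = [{"age": 34, "income": 82000}, {"age": 212, "income": None}]
    print(data_quality_report(rows, ["age", "income"], {"age": (0, 120)}))
```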

4. Training and awareness

Ensure employees are well-informed about AI regulations and ethical guidelines. Regular training can help build a culture of compliance and ethical responsibility within the organisation. Ensure non-technical consumers and users of the AI system are aware of its risks and limitations. Where applicable, ensure employees are adequately trained to maintain meaningful oversight of the AI system.

5. Engaging with stakeholders

Engaging with stakeholders, including regulators, industry bodies, and the public, can provide valuable insights and support in developing responsible AI systems. Collaborative efforts can help address emerging challenges and align AI practices with societal values.

6. Regular audits and monitoring

Regular audits and continuous monitoring of AI systems are necessary to ensure ongoing compliance with legal and ethical standards. This includes reviewing data practices, assessing algorithmic performance, and implementing corrective actions when necessary.
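As a simple illustration of continuous monitoring, the sketch below tracks the share of responses flagged by any guardrail over a rolling window and raises an alert when the rate breaches a threshold, prompting review and corrective action. The window size and threshold are illustrative assumptions that should be set during the risk assessment.

```python
from collections import deque

class GuardrailMonitor:
    """Rolling-window monitor: raise an alert when the share of responses
    flagged by any guardrail exceeds a threshold. The window size and
    threshold are illustrative, not prescribed values."""

    def __init__(self, window: int = 500, alert_rate: float = 0.05):
        self.flags = deque(maxlen=window)
        self.alert_rate = alert_rate

    def record(self, flagged: bool) -> bool:
        """Record one response; return True once the window is full and the
        flagged rate exceeds the alert threshold."""
        self.flags.append(flagged)
        rate = sum(self.flags) / len(self.flags)
        return len(self.flags) == self.flags.maxlen and rate > self.alert_rate

if __name__ == "__main__":
    monitor = GuardrailMonitor(window=100, alert_rate=0.05)
    for i in range(200):
        if monitor.record(flagged=(i % 10 == 0)):  # simulate a 10% flag rate
            print(f"Alert raised after {i + 1} responses")
            break
```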


As AI continues to evolve and integrate into various aspects of life, ensuring its safety and responsible use becomes increasingly important. In Australia, a combination of legal obligations, ethical guidelines, and proactive compliance strategies is essential for developing trust in AI. By adopting the voluntary standards and assurance frameworks early, businesses can navigate the regulatory landscape, mitigate risks, and harness the full potential of AI innovations while upholding public trust and societal values.

 

For more information, please contact:
Srdjan Dragutinovic, Partner, Data Analytics 
Gerard Sayers, Senior Manager, Data Analytics
