October 2024

How do the Australian Privacy Principles (APPs) apply to AI?

This must be one of the most common questions during presentations and privacy update sessions. The growth in the use of AI, particularly now that it is embedded in so many platforms (ChatGPT, Copilot, Einstein AI, Dynamics AI, Zoho Zia etc.), means that many organisations are potentially using personal information to train these powerful tools. This raises the question: is this allowed under the APPs?

To be clear, I am not an expert in AI. However, I follow the news with interest and can see both the benefits and the risks of this rapidly growing technology. Marketing is being affected in many ways: AI offers tremendous benefits, making us more efficient and helping us bring our creative ideas to life, but it is also a disrupter, with many functions being replaced by it. I am not about to go into all the nuances of AI use. However, given the interest in AI and the increasing focus on data privacy and the ethical use of data, as the facilitator of the AMI Privacy & Compliance for Marketers course and a privacy consultant, I thought I would share my thoughts on how the current APPs apply.

Privacy Landscape in Australia

In Australia, privacy laws are starting to address the use of artificial intelligence (AI) and its impact on personal information, though the legal framework is still evolving. The sections below cover the key aspects of how AI intersects with privacy law.

Australian Privacy Principles (APPs) and AI

The Privacy Act 1988 and the Australian Privacy Principles (APPs) govern how personal information is collected, used, and disclosed in Australia. While the Privacy Act is long overdue for an update (coming soon, I am sure), these regulations are certainly relevant to AI because AI systems often rely on large datasets, which may include personal data. Organisations using AI must comply with several obligations under the APPs, such as:

  • Transparency (APP 1): Organisations must be transparent about how personal information is handled, including how AI processes that data. This means stating clearly in your privacy policy that you use personal data for AI processing and what decisions, if any, may be made. It is also critical to ensure that your Data Privacy Officer (or whoever responds to privacy queries) is trained on how best to answer this question. We will likely see an explicit obligation to disclose the use of automated decision-making in the coming Privacy Act update.
  • Disclosure (APP 5): The regulations do not explicitly mention automated decision-making by name. However, APP 5 requires that an organisation, when collecting personal information, take reasonable steps to ensure that the individual is aware of certain matters, including the purposes for which the information is being collected, any organisations to which it is usually disclosed, and how the individual can access and correct their personal information. In this context, we can assume disclosing the use of AI is required. Again, expect this to be expressed clearly in the update.
  • Purpose limitation (APP 6): AI systems must only use personal information for the purpose for which it was collected, or for a related purpose that the individual would reasonably expect. If you did not disclose, at the point of collection, that one of the primary or related purposes was AI use, you are likely breaching the current regulations.
  • Cross-border disclosure (APP 8): Governments are understandably concerned about where and how their citizens’ data is sent. Most regulations require overseas sharing of personal data to be disclosed (which countries and for what purpose). Processing personal information through AI may result in the data being “disclosed” overseas, so reviewing where data is processed and what happens to it afterwards is critical. If your AI processing happens overseas, state this clearly in your privacy policy and review what protections your service provider has in place.
  • Data security (APP 11): Organisations must take reasonable steps to secure personal information, and this extends to data used in AI systems. The danger with some AI models is that you may not truly know where and how the data may be accessed outside your organisation. It is also key that you limit what data an AI can access: treat your AI like any other person or organisation, and confirm it genuinely needs access to personal data before letting it loose on your data (see the sketch after this list).
  • Data accuracy (APP 10): AI systems should only use personal information that is accurate, up to date, and complete. A challenge for AI is that poor data quality can produce bias and inaccuracies, which matters most when you are making decisions that affect individuals.
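
To make the data-minimisation point under APP 6 and APP 11 concrete, here is a minimal Python sketch of stripping a CRM record down before any of it reaches an AI service. The allow-list, field names, and redaction patterns are hypothetical illustrations only, not a definitive implementation; tailor them to your own data holdings and seek proper advice on what may be shared.

```python
import re

# Hypothetical allow-list: only the fields the AI task genuinely needs leave the CRM.
ALLOWED_FIELDS = {"customer_segment", "product_interest", "enquiry_text"}

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+")
PHONE_RE = re.compile(r"(?:\+?61|0)[\s-]?\d(?:[\s-]?\d){7,9}")  # rough AU phone pattern

def redact(text: str) -> str:
    """Mask obvious identifiers in free text before it leaves your systems."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    return PHONE_RE.sub("[PHONE]", text)

def minimise_record(record: dict) -> dict:
    """Keep only allow-listed fields, redacting any free-text values."""
    return {
        key: redact(value) if isinstance(value, str) else value
        for key, value in record.items()
        if key in ALLOWED_FIELDS
    }

crm_record = {
    "name": "Jane Citizen",
    "email": "jane@example.com",
    "customer_segment": "SMB",
    "product_interest": "analytics",
    "enquiry_text": "Call me on 0412 345 678 or email jane@example.com.",
}

print(minimise_record(crm_record))
# {'customer_segment': 'SMB', 'product_interest': 'analytics',
#  'enquiry_text': 'Call me on [PHONE] or email [EMAIL].'}
```

The design choice is deliberate: fields are withheld by default unless they are on the allow-list, rather than trying to enumerate everything that must be removed.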

Privacy Update

There is growing recognition that the existing privacy laws may not fully address the unique challenges posed by AI, especially around transparency, accountability, and the potential for bias. We have already seen guidance on what to expect in the upcoming Privacy Act update. Potential reforms include:

  • Strengthening requirements for transparency around the use of AI.
  • Stronger consent requirements.
  • Providing individuals with more control over automated decision-making.
  • Clarifying the application of privacy principles to new AI technologies.
  • Expanding protections to address issues like algorithmic bias and data discrimination.

Ethical Considerations and Guidelines

In addition to legal frameworks, Australia has issued ethical guidelines for the use of AI. The Australian Government’s AI Ethics Framework outlines principles such as fairness, accountability, and transparency that align with privacy protections, encouraging organisations to consider the ethical implications of using AI on personal data.

AI and Sensitive Information

If AI systems process sensitive information, such as health data, organisations face stricter requirements under the Privacy Act. They must ensure higher levels of protection, limit access, and obtain explicit consent before processing sensitive data.

Conclusion

In summary, while Australia’s current privacy laws, particularly the Privacy Act and the APPs, apply to the use of AI in handling personal information, there are gaps, especially in areas such as automated decision-making and algorithmic transparency. For those thinking about how to adapt, consider the following.

  1. Review your privacy policy to ensure the use of AI is included among your possible uses of personal data. Pay particular attention to the primary or related purposes at the point the data is collected.
  2. Start developing collection notices. Many organisations rely on a link to the privacy policy at the bottom of their web pages. We will likely see a requirement for specific, clear, unambiguous collection statements at all data-collection points, and these notices should include the use of AI.
  3. Review your AI use. Controlling what data an AI can access will become critical. Consumers will be given more control over how their data is used, which means that if they ask for their data not to be used for AI, a process must be in place to comply (a minimal sketch follows this list). For many organisations, data subject requests like this remain a weakness in their compliance, and now is the time to start getting ready for the “right to be forgotten or restrict processing” world.
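
As promised above, here is a minimal sketch of what honouring an opt-out from AI processing might look like. The ai_processing_allowed flag and the record structure are assumptions for illustration, not a prescribed design; the point is that every AI pipeline checks the same flag, so a single request restricts processing everywhere.

```python
from dataclasses import dataclass

@dataclass
class Customer:
    customer_id: str
    email: str
    ai_processing_allowed: bool = True  # hypothetical consent flag

def records_for_ai(customers: list[Customer]) -> list[Customer]:
    """Gate every AI pipeline on the consent flag, not just one of them."""
    return [c for c in customers if c.ai_processing_allowed]

def handle_restriction_request(customers: list[Customer], customer_id: str) -> None:
    """A 'restrict processing' request flips the flag for that individual."""
    for c in customers:
        if c.customer_id == customer_id:
            c.ai_processing_allowed = False

customers = [
    Customer("C001", "a@example.com"),
    Customer("C002", "b@example.com"),
]
handle_restriction_request(customers, "C002")
print([c.customer_id for c in records_for_ai(customers)])  # ['C001']
```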

If you are concerned about your current level of compliance and how to prepare for the changes, get in touch with Richard Harris at Data Design Consulting, AMI’s Privacy & Compliance expert, or look out for one of AMI’s Privacy and Compliance Masterclasses.