With the widespread and rapid implementation of Artificial Intelligence (AI) around the world, the Australian Government is reviewing its approach to AI governance. In June 2023, the Department of Industry, Science and Resources (the Department) released a discussion paper on safe and responsible AI in Australia.
auDA participated in this public consultation. A summary of our views on AI governance, as expressed in our submission to the Department, is provided below.
Adoption of a human-centric approach to AI governance
auDA’s view is that AI governance should be shaped to emphasise the importance of using the technology for good. This requires policymakers and regulators to adopt an inclusive and human-centric approach to innovation to unlock positive economic and social value for all Australians.
An inclusive and human-centric approach would require government to consider the risks and opportunities AI poses for individuals. For example, ensuring AI applications are secure and respect individuals’ privacy will enable Australians to gain confidence and trust in deploying AI. This will enhance Australians’ digital wellbeing and accelerate our nation’s digital transformation.
Harmonisation of AI definition and terminology
In its discussion paper (p. 4), the Department notes that there is no single, universally agreed-upon definition of AI. To describe AI and its applications, it uses multiple definitions and terms from different sources, including the International Organization for Standardization (ISO), the Commonwealth Ombudsman and academia.
auDA believes it is important to use internationally compatible terminology and definitions for AI. Definitions have already been agreed and adopted by the European Union, the United States, Canada and the OECD. Aligning with these will help minimise differences in understanding the technology and allow the risks and threats posed by AI to be addressed more effectively.
Coordination of AI governance across government
auDA believes that cross-government coordination is essential for effective AI governance. This is because AI is industry-agnostic and therefore applies across a variety of policy areas that fall within the responsibilities of different policymakers and regulators. By coordinating their efforts, policymakers and regulators can take a coherent approach towards AI governance.
The Department proposed regulatory sandboxes as an option to support collaboration between regulators and industry. A regulatory sandbox is a controlled environment with reduced regulatory constraints that allows businesses to test and develop AI applications. Such sandboxes enable businesses and regulators to understand how new technologies can be developed and regulated in a responsible and ethical way.
auDA supports the idea of a one-stop multi-regulator sandbox for AI systems and applications, similar to the UK Government’s approach. This would allow concepts and ideas to be shared and help ensure that AI governance tools are consistent and complementary across government. In auDA’s view, this sandbox would benefit from a multi-stakeholder approach involving all relevant stakeholders.
Considering the international policy landscape, auDA supports cross-border cooperation and harmonisation of frameworks for AI governance.
Potential bans of high-risk applications
For ethical reasons and to protect Australians’ fundamental rights, auDA believes bans should be considered for AI applications that pose unacceptable risks to Australians and their digital wellbeing. Such bans would help Australians increase and maintain trust in the technology, potentially leading to greater uptake of AI applications.
To avoid hampering innovation and technological progress, the precise AI applications or ‘use cases’ that would fall under such bans must be clearly defined.
Cyber security in the context of AI
From a cyber security perspective, auDA recognises that AI can be a double-edged sword – it can be deployed for cyber security defence as well as to cause cyber disruptions.
auDA believes that the Department should pay attention to potential cyber security implications of AI applications and coordinate its regulatory efforts with the Department of Home Affairs’ cyber security strategy and broader cyber security reform work.
Enhancing cyber security of AI applications is likely to increase Australians’ trust in the technology.
Talk about AI at this upcoming event
auDA actively supports and encourages multi-stakeholder engagement and welcomes all stakeholders to participate in discussions around AI and other technologies. In particular, auDA notes the 2023 Asia Pacific Regional Internet Governance Forum (APrIGF), taking place in Brisbane from 29 to 31 August 2023, is an ideal forum for continued discussion on this topic. The forum will be held both online and in person, is free for participants, and we encourage your attendance.
auDA’s submission will be published on our website once it has been reviewed and released by the Department. In the meantime, you can read our positions on key policy matters on auDA’s submissions webpage. You can also read about our policy and advocacy priorities for 2023-24 in auDA’s inaugural Public Policy Agenda.