Trust is central to the acceptance of AI
Artificial intelligence (AI) is here to stay, and it’s transforming the world in ways we are only beginning to grasp. However, the use of AI poses risks and challenges, raising concerns about whether AI systems are trustworthy.
Realising the benefits AI offers and encouraging take-up of the technology requires building and sustaining the public’s trust in it. Our society needs to be confident that AI is being developed and used in a responsible and trustworthy manner.
Several recent studies demonstrate that trust plays a key role in the advancement of AI. The UTS Human Technology Institute found that only one third of Australians trust AI systems, and fewer than half believe the benefits of AI outweigh the risks.
A global study by the University of Queensland found that almost three quarters of people across the globe are concerned about the risks posed by AI, with cyber security and privacy breaches ranking among the top concerns.1
Similar findings emerge from auDA’s latest Digital Lives of Australians 2023: Readiness for emerging technologies report, which concludes that Australian consumers’ concerns about security and privacy are the biggest barriers to greater take-up of AI (and other emerging technologies).
auDA’s research also found that more than 75 per cent of consumers and small businesses with higher knowledge of AI would feel more comfortable and confident if there were stronger regulatory safeguards in place.
Such findings underpin arguments raised by academics, consumer advocacy groups, human rights advocates and the tech industry itself that privacy and security safeguards, being critical to user trust, should be foremost considerations from the outset, not retrofitted as an afterthought.
The latest global developments in AI governance
Responding to calls for greater regulatory certainty, regulators and policymakers around the globe have started addressing issues relating to privacy and security in the context of AI.2
In the United States, President Biden released the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO) in October 2023. The EO outlines a coordinated, government-wide approach to AI and establishes new standards for AI security and safety.3 Provisions include:
- Requirements for developers to notify the government when training foundation models that could pose serious risks, and to share the results of red-team safety tests on those models;
- Creation of guidance for content authentication, watermarking and labelling of AI-generated content, to help build trust and transparency;
- Acceleration of efforts to develop privacy-preserving techniques for training AI systems.
In the European Union, this year saw the final stages of negotiations on the AI Act, which will provide a world-first comprehensive regulatory framework for AI technologies. The Act takes a risk-based approach focused on, among other factors, public safety, and seeks to strengthen human oversight of AI systems.
At a summit hosted by the United Kingdom in November 2023, a number of countries including Australia signed the Bletchley Declaration, which again focuses on the safety of AI systems and launched an annual series of summits on this topic.
At home in Australia, the Digital Platform Regulators Forum (DP-REG)4 emphasised in its joint submission on the ‘Safe and responsible AI in Australia’ Discussion Paper that existing regulation covering competition, the media, digital safety and data privacy already brushes up against AI.
The DP-REG argues that any legislative or regulatory responses to AI should first consider how these existing rules could be expanded. The Department of Industry, Science and Resources (DISR) is yet to publish the final report on its consultation.
The Australian Government has also released its 2023-2030 Australian Cyber Security Strategy, in which the safe and responsible use of AI forms a key part. The Strategy sets out multi-stakeholder processes for developing robust standards for emerging technologies such as AI.
Need for a harmonised and flexible approach to AI governance
With governments pursuing their own reforms, there’s a risk of regulatory fragmentation, with inconsistent requirements across jurisdictions.
While watermarking, privacy-enhancing design and pre-release testing of AI systems sound promising, there is arguably limited value in such safeguards and measures if they are implemented in only a few jurisdictions.
Like the internet, AI transcends geographical boundaries and requires international collaboration, coordination and harmonisation of regulations and laws across borders, so that individuals and businesses around the globe can have greater trust and confidence in AI.
The United Nations (UN) recently established the global multi-stakeholder Advisory Body on AI (Advisory Body), a promising step for collaboration and coordination among states.
The Advisory Body will engage relevant stakeholders and coordinate existing multi-stakeholder initiatives (e.g., UN Internet Governance Forum initiatives, the G7, the Global Partnership on Artificial Intelligence, the World Economic Forum, the OECD, the White House commitments and the UK AI Summit) to reinforce synergies across national and regional efforts, and to promote interoperability and international collaboration on AI governance.
As AI systems and their applications continue to advance, ongoing multi-stakeholder engagement is imperative to ensure that governance mechanisms remain effective, are continually improved, and keep pace with trends in this rapidly changing environment. Such an approach would also ensure that individuals and businesses around the world can adopt AI with trust and confidence.
On auDA’s website, you can read auDA’s submission to DISR’s consultation on ‘Safe and responsible AI in Australia’. You can also find out more about our positions on key digital policy matters in auDA’s Public Policy Agenda.
1 Other concerns centre around manipulation and harmful use, loss of jobs and deskilling, system failure, the erosion of human rights, and inaccurate or biased outcomes.
2 Other topics on regulators’ agendas include safety, bias, transparency, copyright, intellectual property, education, and economic impacts (workforce effects, productivity).
3 Due to the non-regulatory nature of the EO, companies are not obliged to comply with the requirements.
4 The DP-REG comprises the Australian Competition and Consumer Commission (ACCC), the Australian Communications and Media Authority (ACMA), the eSafety Commissioner (eSafety) and the Office of the Australian Information Commissioner (OAIC).