May 21, 2024

Krazee Geek


Mandatory guardrails for “high-risk” AI being considered in Australia


Australia is considering mandatory guardrails for AI after public consultations raised concerns about the development and deployment of the technology in “high-risk” settings.

Ed Husic, the Minister for Industry and Science, said Australians understand the value of artificial intelligence but want to see its potential risks identified and addressed.

The Australian government’s proposed measures were outlined in a report titled “Consultation on Ensuring the Safety and Responsibility of AI in Australia.”

The Department of Industry, Science and Resources (@IndustryGovAu) said it had considered the feedback received on its #ResponsibleAI consultation, and that this would be reflected in the #AusGov interim response, shaping future #AI regulation for the benefit of people, government, and businesses in Australia.

The report said the government will consider mandatory measures for those developing or deploying AI systems in legitimate, high-risk settings. This would help ensure AI systems are safe in situations where potential harms are difficult or impossible to reverse.

The report acknowledged differing views on what counts as a “high-risk” setting, but pointed to the list in the EU AI Act as an example.

That list includes certain critical infrastructure (water, gas, and electricity), medical devices, systems that determine access to educational institutions or that are used in recruitment, systems used in law enforcement, border control, and the administration of justice, and biometric identification and emotion-recognition systems.

It also gave examples of how AI could be used in such settings: predicting a person’s likelihood of recidivism, judging their suitability for a job, or controlling a self-driving car.

The proposal argues that mandatory rules governing the development and deployment of AI are needed wherever a malfunctioning system could cause irreversible harm.

Suggested measures for AI-generated content include digital labels or watermarks to identify it, “human-in-the-loop” requirements, and an outright ban on AI applications that pose an unacceptable risk.

Examples given of unacceptable applications included manipulating behaviour, social scoring, and real-time, large-scale facial recognition.

Voluntary Regulation in the Meantime

The proposed mandatory regulations may take some time to introduce officially. In the meantime, Husic told ABC News that voluntary safety standards would be put in place immediately, to show the industry what is expected of it and how to comply.

In their submissions, OpenAI and Microsoft both backed voluntary regulations over immediately imposing mandatory ones.

Toby Walsh, a Professor of Artificial Intelligence at the University of New South Wales, expressed disapproval of this method and the absence of specific measures in the preliminary report.

Professor Walsh said the response was somewhat underwhelming and overdue: there is little in it that is concrete, and most of it rests on voluntary agreements with industry, an approach history suggests is not the most reliable way to have companies police themselves.

According to the report, the implementation of AI and automation could potentially contribute an extra $170 billion to $600 billion annually to Australia’s GDP by 2030.

Those numbers may prove hard to reach if Australia develops a reputation for strict regulation and its AI developers are burdened with excessive red tape.
