
The Five Rules of Ethical AI

The Ethical AI Manifesto outlines AI’s risks and how to avoid them.

Controlling the potential harms of artificial intelligence (AI) is “both necessary and achievable”, the US Government has declared in launching a blueprint for an ‘AI Bill of Rights’ that urges adopters of AI to consider its implications across five key areas.

by David Braue

Developed by the White House Office of Science and Technology Policy (OSTP), the blueprint lays down clear design goals and expectations for automated systems – including, for example, expectations that AI systems be safe and effective, preserve data privacy, and provide protection from algorithmic discrimination.

“Algorithms used across many sectors are plagued by bias and discrimination,” the OSTP noted in launching the blueprint, “and too often developed without regard to their real-world consequences and without the input of the people who will have to live with their results.”

AI systems should, the blueprint specifies, also provide notice and explanation whenever they are being used.

Organisations using AI should also provide a human fallback option that can help citizens and customers “quickly consider and remedy problems” when the automated service fails to support them as intended.

Framed as “an overlapping set of backstops against potential harms,” the framework – which provides an AI manifesto reminiscent of Isaac Asimov’s Three Laws of Robotics – is designed to apply to automated systems that “have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services,” the OSTP noted.

“These rights, opportunities, and access to critical resources or services should be enjoyed equally and be fully protected, regardless of the changing role that automated systems may play in our lives.”

Aiming to normalise discussion around the ethical use of AI, the blueprint pairs its five principles with a handbook, called From Principles to Practice, that provides detailed steps for those wishing to integrate the blueprint into their own AI deployments.

The policy has been developed through a year-long OSTP consultation that included community and expert input, panel discussions, public listening sessions, and meetings with public servants and other groups – all in the name of canvassing public opinion around the key issues that AI’s adoption poses.

A blueprint for ethical AI

Because of the often skewed data sets on which AI algorithms are trained, many early systems have been found to produce biased and discriminatory results that are often amplified over time.

The need to improve AI’s inclusiveness led civil liberties organisations to welcome the Biden administration’s new AI manifesto, with American Civil Liberties Union (ACLU) Racial Justice Program director ReNika Moore commending the administration’s engagement and warning that “[left] unchecked, AI exacerbates existing disparities and creates new roadblocks for already marginalised groups, including communities of colour and people with disabilities.”

The Blueprint is “an important step in addressing the harms of AI,” Moore said, adding that “there should be no loopholes or carveouts for these protections” and calling for “more concrete enforcement efforts, regulation, and oversight across the federal government to make the rights in the Blueprint real.”

Such protections will become even more important as AI is integrated into increasingly intimate applications with ever more consequential outcomes.

A newly launched app built by RMIT University researchers, for example, can detect severe COVID-19 and Parkinson’s disease by using AI to analyse just 10 seconds of voice recordings – compared with current techniques that rely on a 90-minute neurological examination.

Another research project, conducted by researchers at UNSW, has found that AI can be better than humans at predicting the risk of suicide – potentially triggering life-saving intervention.

The Australian Federal Police, for its part, is collecting childhood photos to train an AI to spot telltale signs of potential child abuse.

Such applications of AI can provide significant benefits, but their aggregation and analysis of sensitive data means they can also risk harmful outcomes if, for example, they are found to be wrongly denying insurance cover, compromising personal privacy, imposing massive and illegal fines, or using data matching for rich and invasive personal profiling.

The need to ensure ethical application of AI has driven a cottage industry that includes tools for detecting bias and dedicated institutes exploring ethical AI – although not all companies are equally committed to the idea, with Meta recently closing its Responsible Innovation team after just one year.
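
Such tools vary in sophistication, but many reduce to statistical checks on a model’s outputs across demographic groups. As a minimal sketch – using synthetic data, a hypothetical lending scenario, and the common “four-fifths rule” heuristic rather than any particular vendor’s tool – a disparity check of this kind might look like the following:

```python
# Illustrative only: a simple disparity check of the kind bias-detection
# tools perform. The data is synthetic, the lending scenario is
# hypothetical, and the 80% threshold is the "four-fifths rule"
# heuristic, not a legal standard.

from collections import defaultdict

def selection_rates(decisions):
    """Rate of positive outcomes per group.

    decisions: iterable of (group, outcome) pairs, where outcome is
    1 (approved) or 0 (denied).
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest.

    Values well below 1.0 suggest one group is selected far less
    often; the four-fifths rule flags ratios under 0.8.
    """
    return min(rates.values()) / max(rates.values())

# Synthetic decisions from a hypothetical automated lending model:
# group_a is approved 80% of the time, group_b only 50%.
decisions = ([("group_a", 1)] * 80 + [("group_a", 0)] * 20
             + [("group_b", 1)] * 50 + [("group_b", 0)] * 50)

rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates)
print(f"Selection rates: {rates}")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: potential disparate impact; review the model.")
```

Run as-is, the sketch reports selection rates of 0.80 and 0.50 and a disparate impact ratio of 0.62, tripping the four-fifths warning – exactly the kind of signal such tools surface for human review.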

David Braue is an award-winning technology journalist who has covered Australia’s technology industry since 1995. A lifelong technophile, he has written and edited content for a broad range of audiences across myriad topics, with a particular focus on the intersection of technological innovation and business transformation.

This article originally appeared in ACS InformationAge. Photo by Vadim Bogulov on Unsplash.

