Is It Time for an International Code of AI Ethics?

Posted: 13th December 2019 by Rachel Hillier
Last updated 11th December 2019

AI technology has advanced at such a rapid pace that government oversight has not been able to keep up. But even if it were possible to regulate AI, should we?

There are major concerns that governance, while bringing many benefits, would inevitably stifle innovation and delay potential benefits to human life.

So how far can we realistically go in regulating an industry that is already operating in a universe far beyond the capabilities and remits of law and government? What are the options? What is possible?

Below, Lawyer Monthly hears from Rachel Hillier at Capital Law, who discusses whether we could, and should, implement an international code of AI ethics.

In 2017, the UK Government announced the launch of the Centre for Data Ethics and Innovation (“CDEI”). This organisation, funded by the government, aims to identify how society can enjoy the potential benefits of data-driven technology within the ethical and social constraints of a liberal, democratic society.

Lofty ideals, but what, if any, difference can this UK initiative make when the algorithms that drive AI are often built outside of the UK? For example, my daily newsfeed is powered by Google, which decides what information I get to read every morning through its AI developed in Silicon Valley.

One of CDEI’s remits is to “build international profile to influence and lead international debate and shape discussion as solutions may need international collaboration to be most effective”. A cooperative international code of ethics appeals to western values, but each nation has a different set of moral drivers, collective prejudices and acceptable behaviours. And within each nation, each individual has their own personal ethics, influenced by various factors including age, ethnicity, wealth, religion, class and gender.

So, it seems impossible to impose a single international “one size fits all” code of ethics, and even if one were agreed, it is unclear how effective it would be. A 2018 study carried out by North Carolina State University concluded that specifically instructing a group of software developers to comply with an ethical code of conduct had no observed effect when compared with a control group not told to comply with that same code. I suspect the reason is that coding is inescapably influenced by each developer’s individual moral code, including conscious and unconscious bias.

The moral stance of one coder is not in itself a danger to society, but because AI applies algorithms to massive amounts of data, it can amplify bias to a point where the results may adversely affect citizens. Last year, Philip Alston, an international legal scholar at New York University’s School of Law, gave a statement to the United Nations on extreme poverty and human rights in the UK. In a section focusing on the effect of AI on UK citizens, he stated that there is nothing inherent in AI that threatens human rights, but that it is the outcomes that need to be controlled.

The current legal system can help address some of AI’s biggest ethical issues – privacy, bias and discrimination.

The use of a European citizen’s personal data, in relation to AI, is subject to the provisions of the General Data Protection Regulation (GDPR). At the time of the Cambridge Analytica scandal, the ICO fined Facebook £500,000 for breaching the UK’s data protection laws. Had the GDPR been in force at the time, it could have fined Facebook over £1 billion.

It is also well documented that AI facial recognition systems have a harder time recognising the faces of women and people of colour than those of white men. Research by the MIT Media Lab on Microsoft, IBM and Face++ systems found that the gender of 35% of dark-skinned women was misidentified, compared to 1% of light-skinned Caucasians. The Equality Act 2010 provides protection and compensation for those discriminated against through AI.

I anticipate that voluntary codes of ethics in AI will continue to evolve. But as much as I applaud the principles of an international AI ethical code and appreciate the nod to the three laws of robotics devised by Isaac Asimov[1], ethical codes are only useful for raising awareness of the dangers: they can’t replace a robust legal framework protecting individuals.

I’m hopeful that, whilst domestic and international law is often slow to keep up with evolving technology and changing moral values, it will continue to evolve to protect citizens against the often unforeseen detrimental consequences of AI, while still allowing us to enjoy its benefits.

As Philip Alston told delegates of the AI Now 2018 Symposium in New York, “you can make your own ethics, but you can’t make up what are human rights.”

[1] See Asimov, Isaac (1950), “Runaround”, I, Robot (The Isaac Asimov Collection ed.), New York City: Doubleday, p. 40, for the three laws: 1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) a robot must obey the orders given to it by human beings, except where such orders would conflict with the first law; 3) a robot must protect its own existence as long as such protection does not conflict with the first or second law.
