
Building trust in AI systems is essential


Whatever the precise rights and wrongs of the dispute between Google and two of its top ethics researchers, the departure of Margaret Mitchell and Timnit Gebru has badly dented the company’s credibility in the field of artificial intelligence. The two well-respected researchers say they were fired by the company after criticising Google’s lack of diversity and warning of the risks of some of its AI systems. While expressing regret over their departures, Google said the two researchers had flouted its rules and breached its code of conduct.

At the core of the disagreement lie differences of view over the safety of massive language generation models, such as Google’s BERT, which are used in the company’s search engine. How big technology companies deploy such powerful AI systems shapes our economies and societies in important but mostly invisible ways. Gebru and Mitchell argued that these models risk baking in historic biases because they rely on unrepresentative data sets, and that they consume excessive amounts of electricity, too. “Algorithms are opinions embedded in code,” as the author Cathy O’Neil has written. It matters whose opinions are reflected when writing code.

Most of the biggest tech companies, which have been at the forefront of the AI revolution, are well aware of the risks of deploying flawed systems at scale. They publicly acknowledge that societal acceptance is essential if their systems are to be trusted. Although historically allergic to government intervention, some industry bosses are even calling for stricter regulation in areas such as privacy and facial recognition technology.

A parallel is often drawn between two conferences held in Asilomar, California, in 1975 and 2017. At the first, a group of biologists, lawyers and doctors created a set of ethical guidelines around research into recombinant DNA. This opened an era of responsible and fruitful biomedical research that has helped us deal with the Covid-19 pandemic today. Inspired by the example, a group of AI experts repeated the exercise 42 years later and came up with an impressive set of guidelines for the beneficial use of the technology. 

Translating such high principles into everyday practice is hard, especially when so much money is at stake. But three rules should always apply. First, teams that develop AI systems must be as diverse as possible to reduce the risk of bias. Second, complex AI systems should never be deployed in any field unless they offer a demonstrable improvement on what already exists. Third, algorithms that companies and governments deploy in sensitive areas such as healthcare, education, policing, justice and workplace monitoring should be subject to audit and comprehension by outside experts. 

The US Congress has been considering an Algorithmic Accountability Act, which would compel companies to assess the probable real-world impact of automated decision-making systems. There is even a case for creating the algorithmic equivalent of the US Food and Drug Administration to preapprove the use of AI in sensitive areas. Criminal liability for those who deploy irresponsible AI systems might also help concentrate minds.

The AI industry has talked a good game about AI ethics. But if some of the most sophisticated companies in this field cannot even convince their own employees of their good intentions, they will struggle to convince anyone else. That could result in a fierce public backlash against companies using AI. Worse, it may yet impede the real benefits of using AI for societal good in areas such as healthcare. The tech sector has to restore credibility for all our sakes.
