Michelle Bachelet, the UN High Commissioner for Human Rights, also said Wednesday that countries should expressly ban AI applications that do not comply with international human rights laws.
Apps that should be banned include government “social scoring” systems that judge people based on their behavior and certain artificial intelligence-based tools that classify people into groups, for example, by ethnicity or gender.
Technologies based on artificial intelligence can be a force for good, but they can also “have negative, even catastrophic effects if used without sufficient regard for how they affect people’s human rights,” Bachelet said in a statement.
Her comments came alongside a new UN report examining how countries and companies have rushed to deploy artificial intelligence systems that affect people’s lives and livelihoods without putting in place adequate safeguards to prevent discrimination and other harms.
“It’s not about not having AI,” Peggy Hicks, director of thematic engagement at the human rights office, told reporters while presenting the report in Geneva. “It’s about recognizing that if AI is to be used in these very critical human rights function areas, it must be done in the right way. And we just haven’t yet put in place a framework to ensure that happens.”
Bachelet did not call for an outright ban on facial recognition technology, but said governments should stop scanning people’s features in real time until they can demonstrate that the technology is accurate, non-discriminatory and meets certain privacy and data protection standards.
While no countries were mentioned by name in the report, China has been among those that have deployed facial recognition technology, particularly for surveillance in the western Xinjiang region, home to many of the minority Uyghurs. Key authors of the report said that naming specific countries was not part of their mandate and that doing so could even backfire.
“In the Chinese context, as in other contexts, we are concerned about transparency and discriminatory applications that target particular communities,” Hicks said.
She cited several court cases in the United States and Australia in which artificial intelligence had been misapplied.
The report also expresses caution about tools that attempt to deduce people’s emotional and mental states by analyzing their facial expressions or body movements, saying that such technology is susceptible to bias and misinterpretation and lacks a scientific basis.
“The use of emotion recognition systems by public authorities, for example to identify people for police arrests or detentions or to assess the veracity of statements during interrogations, runs the risk of undermining human rights such as the rights to privacy, liberty and a fair trial,” the report says.
The report’s recommendations echo the thinking of many political leaders in Western democracies, who hope to harness the economic and social potential of AI while addressing growing concerns about the reliability of tools that can track and profile people and make recommendations about who has access to jobs, loans and educational opportunities.
European regulators have already taken steps to control the riskiest AI applications. Proposed regulations outlined by European Union officials this year would ban some uses of artificial intelligence, such as real-time scanning of facial features, and strictly control others that could threaten people’s safety or rights.
The administration of US President Joe Biden has expressed similar concerns, although it has yet to lay out a detailed approach to addressing them. A newly formed group, the Trade and Technology Council, led jointly by US and European officials, has sought to collaborate on developing shared rules for AI and other technology policies.
Efforts to limit the riskiest uses of AI have been backed by Microsoft and other US tech giants hoping to shape the rules that will govern the technology. Microsoft has worked with and provided funding to the UN rights office to help improve its use of technology, but funding for the report came from the rights office’s regular budget, Hicks said.
Western countries have been at the forefront in expressing concern about the discriminatory use of AI.
“If you think about the ways that artificial intelligence could be used in a discriminatory way or to further strengthen discriminatory tendencies, it’s pretty scary,” US Secretary of Commerce Gina Raimondo said during a virtual conference in June. “We have to make sure we don’t let that happen.”
She was speaking with Margrethe Vestager, the European Commission’s executive vice president for the digital age, who suggested that some uses of artificial intelligence should be completely off limits in “democracies like ours.” Vestager cited social scoring, which can shut down someone’s privileges in society, and the widespread use of remote biometric identification in public spaces.