Leading human rights organization Amnesty International is defending its decision to use an AI image generator to depict protests and police brutality in Colombia. Amnesty told Gizmodo that it used the AI generator to depict human rights abuses while preserving the anonymity of vulnerable protesters. Experts fear, however, that the use of the technology could actually undermine the credibility of advocacy groups already besieged by authoritarian governments eager to cast doubt on the veracity of authentic imagery.
Amnesty International’s Norway regional account published three images in a series of tweets over the weekend marking two years since a major protest in Colombia where police brutalized protesters and committed “gross violations of human rights,” the organization wrote. One of the images depicts a crowd of police officers in armor; another features an officer with a red patch on his face. A third shows a protester being violently taken away by police. The images, each of which bears the obvious hallmarks of AI generation, also carry a small note in the lower left corner reading: “Illustrations produced by artificial intelligence.”
Commenters reacted negatively to the images, with many expressing discomfort with Amnesty’s use of a technology mostly associated with whimsical art and memes to depict human rights violations. Amnesty responded, telling Gizmodo that it chose to use artificial intelligence in order to depict the events “without endangering anyone who was present.” Amnesty International says it consulted with partner organizations in Colombia and ultimately decided to use the technology as a privacy-preserving alternative to showing protesters’ real faces.
“Many of the people who participated in the national strike covered their faces for fear of being oppressed and stigmatized by the state security forces,” an Amnesty spokesperson said in an email. “Those who showed their faces are still in danger and some of them are being incriminated by the Colombian authorities.”
Amnesty International went on to say that the AI-generated imagery was a necessary surrogate to illustrate the events, since many of the alleged rights violations occurred under cover of darkness after Colombian security forces cut off the electricity. A spokesperson said the organization added a disclaimer at the bottom of the images stating that they were created using artificial intelligence in an effort to avoid misleading anyone.
“We believe that had Amnesty International used the real faces of those who took part in the protests, it would have put them at risk of reprisals,” the spokesperson added.
Critics say rights abusers can use AI imagery to distort true claims
Human rights experts who spoke with Gizmodo pushed back at Amnesty International, claiming that the use of artificial intelligence could set an alarming precedent and further undermine the credibility of human rights defenders. Sam Gregory, who leads Witness, a global human rights network focused on the use of video, said Amnesty International’s images do more harm than good.
“We’ve spent the past five years talking to hundreds of activists, journalists, and others globally who are already facing delegitimization of their photos and videos for allegedly being fake,” Gregory told Gizmodo. Increasingly, Gregory said, authoritarian leaders try to bury audio or video depicting a human rights violation by immediately claiming it is a deepfake.
“This puts all the pressure on journalists and human rights advocates to ‘establish the truth’,” Gregory said. “This could happen preemptively, too, with governments laying the groundwork so that if compromising footage does appear, they can claim they said there would be ‘fake footage.’”
Gregory acknowledged the importance of anonymizing individuals portrayed in human rights media, but said there are many other ways to effectively depict violations without resorting to AI image generators or “taking advantage of hype cycles.” Media scholar and author Roland Meyer agreed, saying Amnesty’s use of artificial intelligence could “devalue” the work done by reporters and photographers who documented the violations in Colombia.
A potentially dangerous precedent
Amnesty told Gizmodo that it currently has no policy for or against the use of AI-generated imagery, although a spokesperson said the organization’s leaders are aware of the potential for abuse and try to use the technology sparingly.
“We currently only use it when it is in the interest of protecting human rights defenders,” said the spokesperson. “Amnesty International is aware of the risk of misinformation if this tool is used in the wrong way.”
Whatever rules or policies Amnesty International does or does not enforce regarding its use of AI could prove important, Gregory said, because they can quickly set a precedent that others will follow.
“It is important to think about the role of the major global human rights organizations in terms of setting standards and using tools in such a way that they do not cause collateral damage to smaller local groups who face more extreme pressures and are frequently targeted by their governments to discredit them,” Gregory said.