GLOBAL HUMAN RIGHTS: Ban dangerous facial recognition technology that amplifies racist policing


Source: MIL-OSI Submissions

Source: Amnesty International Aotearoa New Zealand

Amnesty International today launches a global campaign to ban the use of facial recognition systems, a form of mass surveillance that amplifies racist policing and threatens the right to protest. The Ban the Scan campaign kicks off with New York City and will then expand to focus on the use of facial recognition in other parts of the world in 2021. These systems violate the right to privacy and threaten the rights to freedom of peaceful assembly and expression.
The technology exacerbates systemic racism as it could disproportionately impact people of colour, who are already subject to discrimination and violations of their human rights by law enforcement officials. Black people are also most at risk of being misidentified by facial recognition systems.
“Facial recognition risks being weaponized by law enforcement against marginalized communities around the world. From New Delhi to New York, this invasive technology turns our identities against us and undermines human rights,” said Matt Mahmoudi, AI and Human Rights Researcher at Amnesty International.
“New Yorkers should be able to go about their daily lives without being tracked by facial recognition. Other major cities across the US have already banned facial recognition, and New York must do the same.”
In New York, Amnesty has joined forces with AI for the People, the Surveillance Technology Oversight Project, the Immigrant Defence Project, the New York Civil Liberties Union, the New York City Public Advocate’s office, The Privacy NY Coalition, State Senator Brad Hoylman and Rada Studios to campaign for legislation to ban the use of facial recognition technology for mass surveillance by law enforcement in the city.
“Police use of facial recognition technology places innocent New Yorkers on a perpetual line up and violates our privacy rights. Facial recognition is ubiquitous, unregulated and should be banned,” said Mutale Nkonde, Founder and CEO of AI For the People.
Albert Fox Cahn, Surveillance Technology Oversight Project Executive Director at the Urban Justice Centre, said: “Facial recognition is biased, broken, and antithetical to democracy. For years, the NYPD has used facial recognition to track tens of thousands of New Yorkers, putting New Yorkers of colour at risk of false arrest and police violence. Banning facial recognition won’t just protect civil rights: it’s a matter of life and death.”
Facial recognition technology can be developed by scraping millions of images from social media profiles and driver’s licenses, without people’s consent. Software then runs facial analysis of images captured on CCTV or other video surveillance to search for potential matches against the database of scraped images.
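To make the matching step described above more concrete, the sketch below shows how such a pipeline can work in principle, using the open-source Python library face_recognition. The image paths, the 0.6 distance threshold and the “scraped” reference set are illustrative assumptions for this example only; this is not the software used by any police department.

```python
import face_recognition

# Hypothetical "scraped" reference images (placeholder paths for illustration).
scraped_paths = ["scraped/profile_001.jpg", "scraped/profile_002.jpg"]

# Encode each scraped face into a 128-dimensional embedding.
known_encodings = []
known_sources = []
for path in scraped_paths:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:  # skip images where no face was detected
        known_encodings.append(encodings[0])
        known_sources.append(path)

# A single frame captured from CCTV footage (placeholder path).
frame = face_recognition.load_image_file("cctv/frame_0421.jpg")

# Compare every face found in the frame against the scraped database.
for unknown in face_recognition.face_encodings(frame):
    distances = face_recognition.face_distance(known_encodings, unknown)
    matches = face_recognition.compare_faces(known_encodings, unknown, tolerance=0.6)
    for source, is_match, dist in zip(known_sources, matches, distances):
        if is_match:
            print(f"Possible match: {source} (distance {dist:.2f})")
```

Because the comparison is statistical rather than exact, a lower distance only means a more similar embedding, not a confirmed identity; thresholds like the 0.6 used here trade false matches against missed ones, which is one reason misidentification rates differ across demographic groups.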
While other US cities, including Boston, Portland and San Francisco, have banned the use of facial recognition technology by law enforcement, the New York Police Department (NYPD) continues to use the technology to intimidate and harass law-abiding residents, as seen during last year’s Black Lives Matter protests.
Black Lives Matter
On 7 August 2020, dozens of NYPD police officers tried to force their way into Derrick “Dwreck” Ingram’s apartment in an attempted arrest. They accused Dwreck, a co-founder of the social justice organization Warriors in the Garden, of assaulting a police officer by allegedly shouting loudly into a megaphone at a June protest.
One officer was caught on camera outside Dwreck’s home holding a document titled “Facial Identification Section Informational Lead Report”, revealing that facial recognition had likely been used to inform Dwreck’s arrest. The document featured Dwreck’s face matched to an Instagram photo.
The NYPD misinformed Dwreck about his rights, threatened to break down his door, attempted to interrogate him without a lawyer, used at least one police helicopter and drones, and stationed dozens of officers in his hallway, on his fire escape, and in tactical positions in and around nearby buildings. The police left only after Dwreck live-streamed the events, a large crowd of protesters gathered, and the media began asking questions.
Police “Wanted” posters, featuring photos taken without his consent from Dwreck’s private Instagram account, were plastered around his neighbourhood. While the NYPD initially confirmed it had used facial recognition technology, it has yet to adequately disclose documentation on its use of the technology in Dwreck’s legal case.
“We’re being specifically targeted with this technology because of what we’re protesting and because we’re trying to deconstruct a system that the police are a part of,” said Dwreck Ingram.
The discriminatory impact of facial recognition technology goes far beyond its use by law enforcement to target peaceful protesters. In New York, tenants also face the risk of landlords using the technology to spy on Black and Brown communities.
In 2018-19, Atlantic Plaza Towers in Ocean Hill-Brownsville, Brooklyn, a predominantly Black and Brown community, successfully challenged the installation of facial recognition cameras in the apartment complex by landlord Nelson Management Group.
Residents who initially campaigned against the use of facial recognition were threatened by the landlord with print-outs of their faces from surveillance cameras and told to stop organizing. Led by Tranae Moran and Fabian Rogers, residents refused to back down. After the tenants took legal action to stop the invasion of privacy and the theft of biometric data from anyone who entered the complex, and under sustained pressure from Tranae and Fabian’s community organizing and their collaboration with civil society organisations and the media, Nelson Management Group announced at a tenants’ association meeting in November 2019 that it would not install facial recognition in the complex.
Amnesty International’s Ban the Scan campaign launch is accompanied by a website where residents of New York can generate comments on the NYPD’s use of facial recognition via the Public Oversight of Surveillance Technology (POST) Act and, later in the campaign, generate Freedom of Information requests to see where facial recognition technology is being used in their communities. The site will be expanded in May 2021, when Amnesty Decoders – a worldwide network of digital activists – will help geolocate facial recognition-capable surveillance devices in New York so residents know exactly where the technology is being used. The site also features resources to help people better protect themselves at protests and against the use of facial recognition technology.
Amnesty International is calling for a total ban on the use, development, production and sale of facial recognition technology for mass surveillance purposes by police and other government agencies, and for a ban on exports of these systems.

