IBM firmly opposes and will not condone uses of any [facial recognition] technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency.
We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.
The IBM CEO’s letter to Congress further advocated for police reform, arguing that more police misconduct cases should fall under the purview of the federal courts and that Congress should make changes to the qualified immunity doctrine. Krishna added that “we need to create more open and equitable pathways for all Americans to acquire marketable skills and training.” He further suggested that Congress consider scaling the P-TECH school model nationally and expanding eligibility for Pell Grants.
Improvements in artificial intelligence and machine learning in recent years have driven rapid advances in facial recognition software. That same rapid advancement, however, has left the technology with little regulation or federal oversight.
The technology has also been shown to suffer from bias related to age, race, and ethnicity, often making the tools unreliable for law enforcement and security purposes and leaving the software open to potential civil rights abuses.
Research by Joy Buolamwini and Timnit Gebru in 2018 revealed how deeply biased many commercial facial recognition systems were, including IBM’s. The research prompted multiple follow-up studies and criticism of facial recognition algorithms, as well as attempts by firms to rectify the bias.
According to a study by the National Institute of Standards and Technology in 2019, there was “empirical evidence for the existence of a wide range of accuracy across demographic differences in the majority of the current face recognition algorithms that were evaluated.”
Recently, Jeff Bezos’ e-commerce and tech giant Amazon posted a statement in which the firm expressed solidarity with Black Lives Matter protesters, stating: “The inequitable and brutal treatment of Black people in our country must stop.” The statement goes on to say that the company stands with the Black community in the fight against “systemic racism and injustice,” as protests over the death of George Floyd rage across the United States.
However, despite claiming to stand with protesters, Amazon has a long history of working with law enforcement, specifically in the development of facial recognition technology. Breitbart News reported in July of 2019 that, after 15 months, a pilot test aiming to bring Amazon’s facial recognition system, called Rekognition, to the Orlando Police Department had ended. City police reportedly ended the test after multiple technical issues prevented the technology from working correctly, as well as a lack of resources on the police department’s part.
The software system is designed to use facial recognition algorithms to search for and track suspects in real time. Amazon has previously claimed that the software was used to identify and rescue victims of human trafficking. Orlando police were meant to use the system by uploading photos of suspects to it; Rekognition would then search footage from CCTV cameras for the suspects’ faces.
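Conceptually, a matching pipeline of this kind converts each face into a numeric embedding and compares a suspect's embedding against faces seen on camera. The sketch below is a minimal, hypothetical illustration of that idea only; the embeddings, threshold, and function names are illustrative stand-ins and do not reflect Amazon's actual Rekognition internals or API.

```python
# Hypothetical sketch of an embedding-based face-matching step, as an
# illustration of the workflow described above. NOT Amazon's implementation:
# the vectors and threshold here are toy values for demonstration.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_suspect(suspect_embedding, camera_embeddings, threshold=0.9):
    """Return indices of camera faces whose similarity to the suspect's
    embedding meets the threshold."""
    return [i for i, face in enumerate(camera_embeddings)
            if cosine_similarity(suspect_embedding, face) >= threshold]

# Toy data: the first camera face closely resembles the suspect photo.
suspect = [0.9, 0.1, 0.4]
camera_faces = [[0.88, 0.12, 0.41], [0.1, 0.9, 0.2]]
print(match_suspect(suspect, camera_faces))  # → [0]
```

The threshold is the critical knob: set too low, it produces the kind of false matches the studies cited below measured; set too high, real matches are missed.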
The American Civil Liberties Union (ACLU) welcomed the end of the project, telling the Verge: “Congratulations to the Orlando Police Department for finally figuring out what we long warned — Amazon’s surveillance technology doesn’t work and is a threat to our privacy and civil liberties.” In a study conducted last July, the ACLU found that Amazon’s facial recognition software incorrectly matched 28 members of Congress with images of people who had been arrested.
In February of 2019, a study by MIT found that Amazon’s software had an error rate of approximately 31 percent when identifying the gender of images of women with dark skin, while rival software developed by Kairos had an error rate of 22.5 percent and IBM’s software an error rate of just 17 percent. However, software from Amazon, Microsoft, and Kairos successfully identified images of light-skinned men 100 percent of the time.
Read more at Breitbart News here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan or contact via secure email at the address firstname.lastname@example.org