Some tech companies have dropped development of facial recognition software amid fears of human rights violations by police
Over the past three weeks, three major players in facial recognition technology, IBM, Amazon, and Microsoft, made bold decisions to put the brakes on its development or drop out altogether, citing concerns that law enforcement may be misusing the technology and violating human rights.
Law enforcement bodies in the US are being called to account for their irresponsibility and callousness, with the Black Lives Matter movement bringing a second wave of awareness of underhanded tactics used by police. These tactics are physical, legislative and, now, digital.
With facial recognition out of its toolbox, police will likely rely more on digital footprint tracing. The timing makes this news especially significant: global anti-racism protests are underway, and governments are adopting new surveillance methods to manage the pandemic.
What makes facial recognition by AI dangerous?
Facial recognition was one of law enforcement’s fastest-growing tools for identifying suspects. The Washington Post has revealed concerning numbers: since 2011, 390,000 facial recognition searches have been logged by the FBI alone.
A study by the National Institute of Standards and Technology found that AI misidentifies women and people of colour 10 to 100 times more often than white men. This raises concerns about potential discrimination and biased prosecution based on gender and ethnicity.
AI misidentifies women and people of colour 10 to 100 times more often than white men
With the big three companies stepping back, other facial recognition developers, like Clearview.ai, are likely to seize the new sales opportunities. The technology Clearview.ai offers is so advanced that it can match a face to a decade-old picture scraped from the internet.
Daniel Markuson, digital privacy expert at NordVPN, commented:
“Keeping society safe comes at a cost. In this case, the technology in question is still very much a work in progress. That may bring on increased surveillance, violations of the right to protest, and prosecution of innocent people. So the decision by the major companies puts pressure on other players to re-evaluate their aim to push the technology further.”
Facial recognition out: Digital footprint tracking up
Facial recognition was ushering in a dangerous new wave of government surveillance tools. While its spread is slowing now that the major players have dropped out, digital footprint tracking is likely to advance.
Smart devices like cell phones, home voice controllers, remote cameras, and smart door locks track or record user activities. That makes them a rich source of information for investigations. Activity logs help corroborate evidence, alibis, and witness statements worldwide.
Mark Stokes, the head of Scotland Yard’s digital, cyber and communications forensics unit, stressed that police forces in the United Kingdom are trained to analyse traces of location, online activity, and records of online purchases.
In the US, a warrant is required to obtain the history of a person’s cell phone location, but the requirement does not apply to other cyber-tracking techniques. This leaves a window for abuse on both sides: criminals could forge evidence by hacking devices, while law enforcement may go too far with cyber tracking, leading to discrimination or violations of the right to privacy.
Daniel Markuson commented:
“As it stands now, the burden of protection from potential misidentification or violation of privacy falls onto the shoulders of citizens themselves. Technologies like VPNs allow them to keep their digital presence more private. That’s why NordVPN sees an increase in demand for its service whenever there is public unrest. People are taking their digital safety and privacy more seriously, and the unfolding events of 2020 justify the need to do so.”