Your Face in the Virtual Realm by Cristina Ruiz
- crisrhdetoro
- Jan 22, 2025
- 7 min read
Freedom stands as the most crucial foundation of this nation, shaping its core and structure. Freedom is typically understood as liberation from government oppression. But are we as free as we think? Our sense of freedom may be an illusion, overshadowed by societal expectations and norms that dictate our actions and choices. Sociologist Harold Garfinkel underscored that, although often overlooked, social norms tightly constrain every move we make, challenging the flawed notion of unconstrained individual freedom (Vom Lehn, 2017). It is only when these norms are violated that their significance becomes apparent. Facial recognition, for instance, upends the conventional expectation of anonymity, disrupting the established norms that allow individuals to go about their lives without constant surveillance.
Facial recognition is a form of biometric identification that identifies or verifies an individual based on intrinsic physical or behavioral characteristics. Specific details, such as the distance between the eyes or the shape of the chin, are translated into mathematical representations and compared against data from other faces stored in facial recognition databases (Lynch, 2020). The technology is applied in a range of scenarios: identifying unknown individuals in surveillance footage, verifying the identity of known individuals (e.g., unlocking an iPhone with face recognition), searching for specific faces in crowded locations such as wanted individuals in public spaces, or computing probability match scores between an unknown face and stored face templates. However, the growing prevalence of these systems has sparked controversy over their accuracy, their implications for civil liberties, their disproportionate impact on racial minorities due to biases in the underlying databases, their security risks, and their potential misuse by law enforcement.
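To make the matching step concrete, the following minimal Python sketch shows how a system might compare a "probe" face against stored templates. The 128-dimensional vectors, the `identify` helper, and the 0.6 threshold are hypothetical stand-ins; in a real system the templates would come from a trained face-embedding model.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two face embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe, database, threshold=0.6):
    """Return the identity whose stored template best matches the probe,
    or None if no similarity score clears the decision threshold."""
    best_id, best_score = None, threshold
    for identity, template in database.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

# Hypothetical 128-dimensional embeddings standing in for real model output.
rng = np.random.default_rng(0)
db = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
probe = db["person_a"] + rng.normal(scale=0.1, size=128)  # noisy re-capture
print(identify(probe, db))  # -> person_a
```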
No system can claim 100% accuracy under all conditions; in practice, these applications produce both false negatives and false positives. A false negative occurs when the system fails to match a person's face to an image that is actually present in the database, incorrectly returning zero results. A false positive occurs when the system wrongly matches a person's face to someone else's image in the database (Lynch, 2020). These frequent errors result in misidentification, with innocent individuals treated as if they were violent offenders. The technology performs well when comparing photographs taken under similar lighting and from a frontal perspective (like a mugshot), but error rates rise sharply when images differ in lighting, shadows, background, pose, or expression, all of which are common in practice. Low-resolution images and video stills exacerbate the problem further. This inherent inaccuracy, coupled with existing unjust law enforcement practices in the United States, has produced a disproportionate impact on people of color, racial minorities, immigrants, children, and women. Simply being in the wrong place at the wrong time, fitting a stereotype that some in society deem threatening, or engaging in activities like political protest in public spaces can lead to persecution.
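The trade-off between the two error types can be illustrated with a small simulation. The score distributions below are invented for illustration only; the point is that when "genuine" (same-person) and "impostor" (different-person) score distributions overlap, any fixed decision threshold trades false negatives against false positives.

```python
import numpy as np

def error_rates(genuine, impostor, threshold):
    """FNR: genuine pairs scored below the threshold (missed matches).
    FPR: impostor pairs scored at or above it (wrong matches)."""
    fnr = float(np.mean(genuine < threshold))
    fpr = float(np.mean(impostor >= threshold))
    return fnr, fpr

# Hypothetical similarity scores: same-person pairs tend to score high,
# different-person pairs low, but the two distributions overlap.
rng = np.random.default_rng(1)
genuine = rng.normal(loc=0.75, scale=0.10, size=10_000)
impostor = rng.normal(loc=0.45, scale=0.10, size=10_000)

for t in (0.5, 0.6, 0.7):
    fnr, fpr = error_rates(genuine, impostor, t)
    print(f"threshold={t:.1f}  FNR={fnr:.3f}  FPR={fpr:.3f}")
# Raising the threshold suppresses false positives but misses more true
# matches, and vice versa; neither error can be driven to zero.
```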
Research on the FBI's photo database revealed that face recognition misidentified African Americans and ethnic minorities at a much higher rate than Whites (Lynch, 2020). This disparity has several causes. First, historically racially biased police practices have led to the overrepresentation of African Americans, Latinos, and immigrants in all criminal databases, including mugshot datasets. Dr. Gideon Christian, an assistant professor of law at the University of Calgary specializing in the intersection of AI and the law, studies the race, gender, and privacy impacts of AI facial recognition technology in Canada (Hassanin, 2023). Contrary to the notion that technology is unbiased, Dr. Christian emphasizes that AI can mirror human bias: some facial recognition systems achieve 99% accuracy in recognizing white male faces yet exhibit error rates of approximately 35% when identifying faces of color, especially those of Black women (Hassanin, 2023).
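Disparities like the ones Dr. Christian describes stay hidden when accuracy is reported as a single aggregate number; they surface only when error rates are disaggregated by group. A minimal sketch of such an audit, using made-up score distributions for two hypothetical groups, might look like this:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical genuine-pair similarity scores for two demographic groups,
# imagining a model trained mostly on images resembling "group_A".
scores = {
    "group_A": rng.normal(loc=0.80, scale=0.08, size=5_000),
    "group_B": rng.normal(loc=0.65, scale=0.12, size=5_000),
}

threshold = 0.6
for group, s in scores.items():
    fnr = float(np.mean(s < threshold))  # missed matches for this group
    print(f"{group}: false negative rate = {fnr:.3f}")

# The same threshold yields very different error rates per group; a pooled
# accuracy figure would mask exactly the disparity at issue.
```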
This discrepancy in accuracy can have severe consequences, including wrongful arrests and detentions. Cases in the U.S. in which Black men were misidentified and detained because of facial recognition errors, and instances of Black women immigrants being stripped of refugee status based on faulty matches, highlight the real-world stakes of these biases. Another example of misidentification is Google's image-recognition app labeling African Americans as "gorillas" (Knutson, 2021). This racial bias is not inherent to the technology; it is embedded during development. When AI models are trained primarily on data featuring white male faces, bias becomes ingrained in the technology, creating obstacles in many areas of life, such as job searches that require a background check. Such checks may rely on FBI data, and job seekers mistakenly matched to mugshots in the criminal database can be denied employment despite being blameless. Compounding the problem, at least 50% of FBI arrest records lack information on the final disposition of the case, that is, whether the person was convicted, acquitted, or had the charges dropped (Lynch, 2020). Facial recognition systems may also be less accurate for certain other populations, including women and young children. These inaccuracies amplify existing systemic biases against racial minorities and women, potentially resulting in more innocent people being incarcerated or in misidentified individuals being denied jobs or essential documents such as visas.
To curb disproportionate racial profiling, the database should include only pictures of individuals who have committed a crime and who would appear in the database even without a facial recognition system in place. Regrettably, that is not how the current system works. In 2016, the federal Government Accountability Office (GAO) disclosed that the FBI could access nearly 641 million images, the majority captured for non-criminal purposes (Lynch, 2020). Federal agencies employ systems like Clearview AI, which scrape publicly available images from the internet, including social media platforms, without user consent. As a result, innocent people's pictures end up in these databases. If such individuals are later arrested, even for minor offenses like blocking a street sign, their non-criminal photographs are merged with their criminal record and become available to facial recognition searches in any criminal investigation. An even greater concern is that state Departments of Motor Vehicles (DMVs) share driver's license information and images with these systems, and U.S. Immigration and Customs Enforcement (ICE) then uses that information to identify and target undocumented individuals, who in many states are allowed to obtain a driver's license (Kauder, 2021). While ICE asserts that its primary focus is on already-identified priority targets, documents reveal requests for bulk information and initiatives aimed at using DMV data for immigration enforcement.
Insider threats pose further vulnerabilities, as past instances of improper use by law enforcement illustrate. To address such issues, safeguards like the System of Records Notice (SORN) were implemented, requiring agencies to conduct privacy impact assessments for all programs that collect information on the public, so that the public knows how its information will be collected and used. Another mechanism, the Privacy Impact Assessment (PIA), was introduced to build trust between the public and the department by increasing transparency about the department's systems and missions. However, the FBI has not adhered to these guidelines. Although it complied when developing its face recognition program in 2008, the FBI did not update the information as it revised its plans, ignoring calls from Congress. Only in 2015, after conducting over 100,000 searches with its facial recognition databases, did it update the PIA (Kauder, 2021). Numerous abuses have been reported; in 2009, for example, FBI employees were accused of using surveillance equipment at a charity event to spy on teenage girls trying on prom dresses (Lynch, 2020).
While these platforms have encountered numerous issues and misuse, there is potential for improvement if these challenges are addressed. One area of growth is enhanced security screening, particularly at TSA checkpoints. The tragic events of 9/11 are a poignant example: the hijacker Mohamed Atta passed through airport security, and an accurate facial recognition system could have instantly checked his image against photos of suspected terrorists, potentially preventing the attack. Implementing such a system universally in airports could improve security for all passengers, regardless of their country of origin. Another potential benefit is a fairer judicial system. Renowned cognitive psychologist Elizabeth Loftus has documented the malleability and unreliability of human memory, which has led to wrongful convictions based on faulty recollections (Loftus & Pickrell, 1995). Facial recognition could reduce wrongful convictions by providing identification evidence less fallible than human memory. To realize these benefits, the technology must first be improved. Access to these databases, and use of facial recognition generally, should be restricted to law enforcement, with use by other entities banned. Accuracy should be improved so the systems identify individuals of all races with equal precision. Images added to the system should be high resolution and limited to individuals who have committed a crime, excluding sources such as social media platforms or the DMV. Moreover, law enforcement should treat this resource only as a supplementary source of information, preventing the unfair advantages and misuse exemplified by ICE above. Strict, regular oversight, including yearly reviews, should ensure compliance with these restrictions and fair usage by law enforcement agencies, notably the FBI. Such enhancements aim to strike a balance between security and individual freedom, as societal norms continue to shape our perceptions of liberty.
References
Hassanin, N. (2023). Law professor explores racial bias implications in facial recognition technology. University of Calgary. https://ucalgary.ca/news/law-professor-explores-racial-bias-implications-facial-recognition-technology#:~:text=In%20some%20facial%20recognition%20technology,is%20about%2035%20per%20cent.%E2%80%9D
Kalhan, A. (2013). Immigration policing and federalism through the lens of technology, surveillance, and privacy. Ohio State Law Journal, 74, 1105.
Kauder, M. M. (2021). Out of the shadows: Regulating access to driver's license databases by government agencies. Drake Law Review, 69, 463.
Knutson, A. (2021). Saving face: The unconstitutional use of facial recognition on undocumented immigrants and solutions in IP. IP Theory, 10, 1.
Loftus, E. F., & Pickrell, J. E. (1995). The formation of false memories. Psychiatric Annals, 25(12), 720-725.
Lynch, J. (2020). Face off: Law enforcement use of face recognition technology. Available at SSRN 3909038.
Roy, K. E. (2022). Defrosting the chill: How facial recognition technology threatens free speech. Roger Williams University Law Review, 27, 185.
Sarabdeen, J. (2022). Protection of the rights of the individual when using facial recognition technology. Heliyon, 8(3).
Stark, L. (2018). Facial recognition, emotion and race in animated social media. First Monday.
Vom Lehn, D. (2017). Harold Garfinkel: Experimenting with social order. In The interactionist imagination: Studying meaning, situation and micro-social order (pp. 233-262).