Big Brother’s Watchful Eye: Clearview AI, Facial Recognition, and the Netherlands’ Great Fine

Facial recognition technology is reshaping how societies navigate digital spaces, but it’s also raising profound ethical questions around privacy and surveillance. Clearview AI, a U.S.-based startup, stands at the center of this debate. Known for its vast database of more than 50 billion facial images scraped from the internet, Clearview AI has sparked intense scrutiny worldwide for its ‘frontier mentality’ approach to privacy laws. Recently, the Netherlands imposed a €30.5 million fine on Clearview AI for violating the European Union’s General Data Protection Regulation (GDPR), highlighting the growing tension between the ‘Wild West of AI’ and individuals’ privacy rights. The Orwellian warning feels more prescient than ever as we confront the implications of a world increasingly monitored by biometric technologies.

Clearview AI: A Brief History

“Many men have been seized and imprisoned under the so-called prophylactic Precrime structure… Accused not of crimes they have committed but of crimes they will commit.” (Dick, P.K. 1956. The Minority Report).


Clearview AI was founded in 2017 by Hoan Ton-That and quickly became notorious for its facial recognition software, used primarily by law enforcement. The company’s tools allow authorities to match faces in photos and videos against a massive database of over 50 billion images. These images, however, were scraped from websites and social media platforms without user consent.

Despite its popularity with U.S. law enforcement, Clearview AI has faced legal challenges worldwide for its controversial practices. The company is emblematic of the global debate over privacy, data protection, and biometric surveillance. Similar concerns have been raised about other corporations, such as Uber, which recently received a €290 million fine from the Dutch Data Protection Authority (DPA) for transferring sensitive personal data of European taxi drivers to the U.S., including taxi licenses, location data, and medical records. Uber has since stopped the practice but intends to appeal the fine, claiming the ruling is flawed.

The European “right to be forgotten,” enshrined in the General Data Protection Regulation (GDPR), allows individuals to request the removal of their personal data from online platforms when it is no longer necessary for the purpose for which it was collected, or when continued access to the data violates privacy rights. This right is important because it empowers individuals to control their digital footprint, protecting them from the long-term consequences of having outdated, inaccurate, or fake information accessible online. By allowing people to erase their past, it helps mitigate reputational damage and reduce the risk of data misuse, contributing to greater individual autonomy and privacy in an increasingly data-driven world.

The U.S. is slowly catching up – after all, who wants to be forgotten in a nation of aspiring influencers? Privacy laws, such as the California Consumer Privacy Act (CCPA) and the Illinois Biometric Information Privacy Act (BIPA), grant individuals the authority to challenge companies like Clearview AI.

Clearview’s GDPR Violations and the Netherlands’ Great Fine

“The Bill of Rights was written before data-mining… The right to freedom of association is fine, but why shouldn’t the cops be allowed to mine your social network to figure out if you’re hanging out with gangbangers and terrorists?” (Doctorow, C. 2008. Little Brother).

Clearview’s scraping practices have resulted in significant legal consequences, primarily in the European Union. The Netherlands’ data protection authority imposed a €30.5 million fine on Clearview AI for illegal data collection practices. This fine, the largest GDPR sanction levied against the company to date, underscores the EU’s commitment to protecting its citizens’ biometric data. Despite the fine, Clearview AI argues that it is not subject to the GDPR. Its chief legal officer, Jack Mulcaire, asserted that the company has no business operations, customers, or activities in the Netherlands or the EU that would bring it under the regulation’s scope. However, the DPA’s action shows that regulators are taking a more aggressive stance against companies that violate privacy laws, regardless of where those companies are physically based.

Clearview AI Justification and Precrime Analogies

“Many men have been seized and imprisoned under the so-called prophylactic Precrime structure,” General Kaplan continued, his voice gaining feeling and strength. “Accused not of crimes they have committed, but of crimes they will commit. It is asserted that these men, if allowed to remain free, will at some future time commit felonies.” (Dick, P.K. 1956. The Minority Report).

Clearview AI defends its practices by comparing its operations to those of traditional search engines. According to the company’s official statements:

Clearview AI acts as a search engine of publicly available images – now more than 50 billion -- to support investigative and identification processes by providing highly accurate facial recognition across all demographic groups. Similar to other search engines, which pull and compile publicly available data from across the Internet into an easily searchable universe, Clearview AI compiles only publicly available images from across the Internet into a proprietary image database to be used in combination with Clearview AI's facial recognition technology. When a Clearview AI user uploads an image, Clearview AI’s proprietary technology processes the image and returns links to publicly available images that contain faces similar to the person pictured in the uploaded image.

Clearview AI currently offers its solutions to only one category of customer – government agencies and their agents. It limits the uses of its system to agencies engaged in lawful investigative processes directed at criminal conduct, or at preventing specific, substantial, and imminent threats to people’s lives or physical safety. In each case, Clearview AI requires its government customers to make independent assessments of whether there is a match between the images retrieved by Clearview AI, and the image provided by the customer. Each decision about an identification is made by a professional working on behalf of a government agency, not by an automatic process.

Clearview AI’s facial recognition algorithm is designed to take into account age progression, variations in poses and positions, changes in facial hair, and many visual conditions, and to perform at 99% or better across all demographic groups on key tests.

(Quoted verbatim from the “Principles” page on Clearview’s website.)

The company insists that it serves only government agencies involved in lawful investigative processes and claims that its system is not automatic – professionals must verify the results. However, this defense has done little to quell the backlash from privacy advocates, who argue that Clearview’s methods go beyond typical data aggregation and represent an egregious invasion of our collective privacy.
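To make the company’s search engine analogy concrete, the sketch below shows the general embed-and-search pattern that face search systems of this kind rely on: each face is reduced to a numeric vector (an embedding), and a query face is compared against the database by vector similarity. This is a generic illustration, not Clearview’s proprietary pipeline; the stand-in embedding function, the example.com URLs, and the 0.8 similarity threshold are all hypothetical.

```python
import numpy as np

# Stand-in for a real face-embedding model (e.g. a deep CNN): maps a face
# image to a fixed-length vector. Hypothetical; real systems learn this map.
def embed_face(image: np.ndarray) -> np.ndarray:
    vec = np.resize(image.astype(np.float64).ravel(), 128)
    return vec / (np.linalg.norm(vec) + 1e-9)

# A toy "scraped" database: unit-length embeddings plus their source URLs.
rng = np.random.default_rng(0)
database_vectors = rng.random((1000, 128))
database_vectors /= np.linalg.norm(database_vectors, axis=1, keepdims=True)
database_urls = [f"https://example.com/photo/{i}" for i in range(1000)]

def search(query_image: np.ndarray, top_k: int = 5, threshold: float = 0.8):
    """Return links to stored images whose faces most resemble the query."""
    q = embed_face(query_image)
    sims = database_vectors @ q              # cosine similarity (unit vectors)
    best = np.argsort(sims)[::-1][:top_k]    # indices of the closest faces
    # The system returns candidate links only; per Clearview's stated policy,
    # a human reviewer decides whether any candidate is a true match.
    return [(database_urls[i], float(sims[i])) for i in best if sims[i] >= threshold]

print(search(rng.random((64, 64, 3))))       # e.g. [] or a few candidate links
```

Note that in this pattern the contested step is not the search itself but the database construction: the vectors come from images collected without their subjects’ knowledge.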

Student Images, Higher Education, and the European AI Act

“You can prove anything you want by coldly logical reason – if you pick the proper postulates.” (Elijah Baley in Asimov, I. 1957. The Naked Sun).

Clearview AI’s technology is not only a concern for law enforcement but should also be a clarion call for higher education institutions (HEIs), where facial recognition tools are becoming more integrated into security and online proctoring systems. This development poses significant privacy risks for students, faculty, and visitors. Biometric systems installed on campuses could capture and store images without consent, infringing on individuals’ rights. Yet such systems also provide security. For HEIs, the appeal lies in enhanced safety measures – facial recognition can help prevent unauthorized access, monitor potential threats, and even streamline administrative tasks such as attendance tracking or exam proctoring. But this comes at a cost. HEIs face the complex task of integrating these technologies while ensuring transparency, protecting personal data, and addressing concerns about surveillance in academic spaces. For educators and administrators, the key challenge lies in striking a delicate balance between creating a secure environment and upholding the civil liberties of students, staff, and faculty.

The European AI Act, which classifies biometric surveillance as “high-risk,” introduces new legal responsibilities for HEIs using such technologies. Universities that employ facial recognition tools for security or exam proctoring could face hefty fines similar to Clearview AI’s if they fail to meet transparency and consent requirements (a sketch of what such a consent check might look like in code follows the questions below). This raises important ethical and practical questions:

  • Should institutions of higher education prioritize security and academic integrity over individual privacy?
  • Will students and staff be fully aware of how their biometric data is being captured and used?
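For illustration, here is what the consent requirement might look like at the code level for a campus system. This is a hypothetical sketch, not drawn from any real proctoring product or from the AI Act’s text: the ConsentRecord fields and function names are invented, and a real deployment would also need retention limits, audit logging, and a documented lawful basis.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str              # e.g. "exam_proctoring"
    granted_at: datetime
    withdrawn: bool = False

def may_capture_biometrics(record: Optional[ConsentRecord], purpose: str) -> bool:
    """Permit biometric capture only with explicit, current, purpose-specific consent."""
    return record is not None and not record.withdrawn and record.purpose == purpose

# A student who consented to exam proctoring, but not to building-access scanning:
consent = ConsentRecord("student-42", "exam_proctoring", datetime.now(timezone.utc))
assert may_capture_biometrics(consent, "exam_proctoring") is True
assert may_capture_biometrics(consent, "building_access") is False
```

The design point is that consent is purpose-specific and revocable; an institution cannot reuse biometrics gathered for one purpose to serve another without asking again.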

Surveillance Tech and the New Wild West

“The day will come when the machine will control every aspect of our lives.” (Forster E.M. 1909. The Machine Stops).

The story of Clearview AI and its facial recognition technology serves as a stark reminder of the risks that come with over-reliance on powerful technological systems and their ‘black box’ algorithms. In the loosely regulated frontier of facial recognition and AI, technologies like Clearview AI come with inherent biases, posing serious risks to privacy and civil liberties. The author wonders if we are blindly accepting a system where baked-in biases dictate who is watched and controlled. Baked-in bias refers to the unconscious prejudices embedded in AI systems, often the result of algorithms trained on incomplete or skewed data. (It is hard to imagine that models trained on Clearview’s Houdini treasure chest of 50 billion scraped images were curated thoroughly and with great care.) In facial recognition technology, these biases manifest in higher error rates when identifying people of color, women, and other marginalized groups. This technological flaw can lead to disproportionate scrutiny of certain populations, reinforcing societal inequalities rather than correcting them. When combined with the echo chamber effect, where surveillance systems increasingly focus on these groups based on biased data, a dangerous cycle emerges: a feedback loop in which over-surveillance perpetuates biased perceptions and, in turn, limits freedom.
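Such bias is measurable. A standard diagnostic, used in evaluations such as NIST’s Face Recognition Vendor Test, is to compare error rates across demographic groups on a labeled test set; a large gap in, say, the false match rate is the quantitative face of the skew described above. The sketch below uses invented evaluation records purely to show the arithmetic.

```python
from collections import defaultdict

# Invented evaluation records: (demographic group, system said "match", truth).
results = [
    ("group_a", True,  True),  ("group_a", False, False),
    ("group_a", True,  False), ("group_a", False, False),
    ("group_b", True,  True),  ("group_b", True,  False),
    ("group_b", True,  False), ("group_b", False, False),
]

# False match rate per group: wrongly declared matches among true non-matches.
counts = defaultdict(lambda: {"false": 0, "negatives": 0})
for group, predicted_match, true_match in results:
    if not true_match:
        counts[group]["negatives"] += 1
        if predicted_match:
            counts[group]["false"] += 1

for group, c in sorted(counts.items()):
    print(f"{group}: false match rate = {c['false'] / c['negatives']:.2f}")
# Prints 0.33 for group_a vs. 0.67 for group_b – a gap of this kind means one
# group is twice as likely to be wrongly flagged as a match.
```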

The author likens the constant presence of surveillance to a form of “terraforming” of the mind, where the natural landscape of free thought and creativity is artificially reshaped. Just as terraforming alters the environment to suit human needs, surveillance gradually redefines the boundaries of acceptable behavior and thought, molding them to fit the systems in place. The once fertile ground for innovation, dissent, and individual expression is replaced by a controlled, uniform space where only ideas that conform to the imposed parameters can thrive. In this engineered mindscape, autonomy is reduced, and creativity becomes a rare commodity, cultivated only within the rigid structures allowed by surveillance.

This tension between security and privacy evokes a key lesson from E.M. Forster’s The Machine Stops: the dangers of over-dependence on technology. Just as the characters in Forster’s dystopia forfeited their individuality and freedom for convenience, today we risk surrendering our privacy for the promises of safety and efficiency offered by technologies like facial recognition systems. Moreover, The Machine Stops warns against the echo chambers created by isolated systems of control.

As facial recognition becomes more embedded in daily life, from law enforcement to university campuses, we must confront deeper questions: In our pursuit of convenience and security, are we gradually surrendering the essence of what it means to be free? And, perhaps more profoundly, how long before the technologies we create begin to shape the boundaries of our autonomy, subtly shifting the balance of power from human agency to algorithmic control? The trade-offs we make today may well define tomorrow’s liberty.

Further reading:

Sharma, N., Liao, Q. V., & Xiao, Z. (2024). Generative echo chamber? Effects of LLM-powered search systems on diverse information seeking. arXiv. https://arxiv.org/abs/2402.05880v2

This article was written by Dr. Jasmin (Bey) Cowin, Associate Professor and U.S. Department of State English Language Specialist (2024). As a columnist for Stankevicius, she writes on Nicomachean Ethics: Insights at the Intersection of AI and Education. Connect with her on LinkedIn.

