Police officer arrested for unauthorized use of Clearview AI facial recognition resigns

An Indiana police officer resigned after it was revealed that he frequently used Clearview AI facial recognition technology to track social media users who were not linked to a crime.

According to a press release from the Evansville Police Department, this was a clear “misuse” of Clearview AI’s controversial facial scanning technology, which some American cities have banned over fears it gives law enforcement unchecked power to track people in their daily lives.

To help identify suspects, police can scan what Clearview AI describes on its website as “the world’s largest facial recognition network.” The database brings together more than 40 billion images collected from media outlets, mugshot websites, public social networks and other open sources.

But these scans must always be tied to an investigation, and Evansville Police Chief Philip Smith said the disgraced cop instead repeatedly concealed his personal searches by “deceptively using a real case number associated with an actual incident” to evade detection.

Smith’s department discovered the officer’s unauthorized use after conducting an audit before renewing his Clearview AI subscription in March. That audit showed “an anomaly of very high software usage by an officer whose job performance was not indicative of the number of searches he had.”

Another clue to the officer’s abuse of the tool was that most facial scans performed during investigations are “usually live or CCTV footage” — shots taken in the wild, Smith said. The officer who resigned, however, was primarily searching social media images, which was a red flag.

An investigation quickly “made it clear that this officer was using Clearview AI” for “personal purposes,” Smith said, declining to name the officer or verify whether the targets of these searches were notified.

As a result, Smith recommended that the department fire the officer. However, the officer resigned “before the Police Merit Commission could make a final determination on the matter,” Smith said.

Easily bypassing Clearview AI’s built-in compliance features

Clearview AI touts the facial imaging network as a public safety resource, promising to help authorities make arrests sooner while also committing to “ethical and responsible” use of the technology.

On its website, the company says it understands that “law enforcement agencies need integrated compliance features for greater oversight, accountability and transparency within their jurisdictions, such as advanced management tools, as well as easy-to-use dashboards, reporting and metrics tools.”

To “help deter and detect inappropriate searches,” its website says, a case number and type of crime are required, and “each agency must have an assigned administrator who can view a detailed overview of their organization’s search history.”

It appears that none of those safeguards stopped the Indiana police officer from repeatedly scanning social media images for undisclosed personal reasons, apparently sidestepping the case number and crime type requirement and going unnoticed by his agency administrator. This incident could have broader implications in the US, where police have used the company’s technology extensively, conducting almost 1 million searches, Clearview AI CEO Hoan Ton-That told the BBC last year.

In 2022, Ars reported that Clearview AI had told investors it had ambitions to collect more than 100 billion images of faces, which would ensure that “almost every person in the world will be identifiable.” As privacy concerns about the controversial technology grew, a heated debate ensued. Facebook took steps to stop the company from scraping faces from its platform, and the ACLU won a settlement that barred Clearview AI from selling its database to most private companies. But the US government maintained access to the technology, including “hundreds of police forces across the United States,” Ton-That told the BBC.

Most law enforcement agencies are hesitant to discuss their Clearview AI tactics in detail, the BBC reported, so it is often unclear who has access and why. But Miami police confirmed that they “use this software for all types of crimes,” the BBC reported.

Now, at least one Indiana police department has confirmed that an officer can surreptitiously abuse the technology and perform unapproved facial scans with apparent ease.

According to Kashmir Hill, the journalist who exposed Clearview AI’s technology, the disgraced cop was following in the footsteps of “billionaires, Silicon Valley investors and some high-profile celebrities” who gained early access to Clearview AI’s technology in 2020 and considered it a “superpower on their phone,” allowing them to put a name to a face and dig up online photos of someone who may not even realize those photos were online.

Advocates have warned that stronger privacy laws are needed to prevent authorities from abusing Clearview AI’s network, which Hill described as “a Shazam for people.”

Smith said the officer ignored department guidelines by performing improper facial scans.

“To ensure the software is used for its intended purposes, we have implemented internal operating guidelines and comply with Clearview AI’s terms of service,” Smith said. “Both have language that clearly states that this is a tool for official use and not for personal reasons.”