A tech company that boasts about its ability to use artificial intelligence to predict crime is in the midst of a privacy lawsuit with Meta, formerly Facebook, which wants the firm banned from its social media platforms.
The New York City and Los Angeles police departments, two of the largest police forces in the U.S., are among a growing list of law enforcement agencies in the U.S. and around the world that have contracted with Voyager Labs.
In 2018, the New York Police Department agreed to a nearly $9 million deal with Voyager Labs, which claims it can use AI to predict crimes, according to documents obtained by the Surveillance Technology Oversight Project (STOP), The Guardian reported.
The company bills itself as a “world leader” in AI-based analytics investigations that can comb through mounds of information from all corners of the internet – including social media and the dark web – to provide insight, uncover potential risks and predict future crimes.
But Meta says in a federal lawsuit that Voyager Labs created at least 55,000 fake accounts on Facebook and Instagram to collect personal data "to uncover … behavior patterns," "infer human behavior" and "build a comprehensive presence" on its targets.
That includes 17,000 fake accounts created after Meta revoked Voyager Labs’ access upon filing the federal lawsuit on Jan. 12.
Essentially, Voyager Labs can use someone’s social media history to retrace that person’s steps and potentially predict their next movements, according to Meta.
An NYPD spokesperson told The Guardian that “offenders” increasingly “utilize social media in furtherance of their unlawful activities.”
“Voyager assists the NYPD in preventing victimization and apprehending these offenders,” according to the NYPD spokesperson, but the department doesn’t use “features that would be described as predictive of future criminality.”
The NYPD did not answer Fox News Digital’s specific questions.
William Colston, a spokesperson for Voyager Labs, told Fox News Digital that he can’t get into specific cases on how and when the company’s AI was used, but he said they’re “very proud” to have busted child trafficking rings and combated terrorism.
Meanwhile, STOP, a privacy advocacy nonprofit, described Voyager Labs’ tactics as “a new digital form of stop-and-frisk” that targets Black and Latino New Yorkers, according to STOP communications director Will Owen.
“This is invasive, it’s alarming, and it should be illegal,” Owen said in a Sept. 8 statement. “Our constitution requires law enforcement to get a warrant prior to searching the public, but increasingly police and prosecutors just buy our data instead.”
“This isn’t just bad policing, it’s not just enabling companies that steal our data, but it’s a flagrant end-run around the Constitution.”
The ongoing federal lawsuit is a heavyweight bout that pits powerful AI’s potential public safety use against an individual’s privacy.
It’s an ongoing struggle that many experts predict will continue to be waged as AI becomes more advanced and readily available.
Voyager Labs allegedly collected data from more than 600,000 Facebook users between July and at least September 2022, according to Meta’s lawsuit filed in California federal court on Jan. 12.
That includes users’ timeline information, photos and videos, lists of friends, posts, education and employment, and self-disclosed location information, the legal action claims.
The legal action was filed in California because, Meta says, Voyager Labs “scraped” information from nonprofits, universities, news organizations, health care facilities, U.S. armed forces and all levels of government associated with the state between July 4-7, 2022.
Among Meta’s demands are Voyager Labs’ removal from all of its platforms and the deletion of all collected materials.
Voyager Labs hit back, calling Meta’s lawsuit “meritless” in a Jan. 18 statement and said Meta’s legal action “reveals a fundamental misunderstanding of how the software products at issue work and, most importantly, is detrimental to U.S. and global public safety.”
“Law enforcement organizations worldwide use this software to analyze their own internal intelligence data as well as other publicly available information to address critical challenges such as human trafficking, internet crimes against children (ICAC), gang violence, homicide, narcotics trafficking and terrorism.”
Colston said Voyager Labs has denied, in statements and court filings, Meta’s accusations that it “scrapes” data from Facebook users, and that the company “categorically reject[s]” the claim that its software is designed to infringe on civil liberties.
He echoed the NYPD’s statement to The Guardian in an email to Fox News Digital, saying, “Malicious actors utilize social media platforms and other online resources to engage in a variety of nefarious behaviors, all in a manner that is publicly available.”
That “includes targeting and abusing children, and recruiting and training those who engage in violent acts,” Colston said.
“It’s very important to note that our software is designed to analyze only publicly available data that has been affirmatively placed in the public sphere – or data that has been obtained through a warrant or other legal process,” Colston said.
“If Meta is truly committed to protecting its users and acting in the public interest, then the use of analytical software by those trying to stop malicious actors should be embraced and encouraged.”
The civil legal action continues to wind its way through federal court, and Voyager Labs has filed a motion to dismiss the case.
On July 26, a judge denied Voyager Labs’ request to pause discovery until there’s a resolution on its motion to dismiss the case.
The most recent filing was on Aug. 31, when the two sides had a pre-settlement conference via telephone.
Potential conference dates fall between Sept. 25 and Nov. 10, but a date hasn’t been finalized.