You Knew This Was Coming, Right?


When the FBI and DHS fusion centers started building vast, unregulated facial recognition databases, they shrugged elaborately and said that there weren’t any standard protections for doing so, that they were just experimenting, and that it wasn’t going to be used operationally until the legalities were all sorted out.

What they were actually doing was running out the clock on regulation. They were waiting until it was too late for someone to establish sensible rules for how to operate that sort of database. Regulation would have included important things like mandating the quality of sources and having processes for updating incorrect information. For example, perhaps it would mandate that only driver’s license photos (which are, presumably, authentic) could be used, and not content scraped from social media sites. Regulation would also address whether it was appropriate for the federal agencies building face recognition databases to share them with state law enforcement, or to strong-arm airlines into ‘sharing’ the photos they take of passengers as they board or check in. You’d imagine that the relationship between public data and private data would be something to clarify, here, but – nope.

The “there is no way to update the database” dodge was successfully used by DHS for the “no fly list” – deny that it exists, then deny that it can be updated. To me, it’s mind-blowing that someone can claim, with a straight face, that it’s impossible to search a database for “Marcus Ranum” and delete images that I flag as not me. That’s exactly what databases do – search and update. So, rather than confront that challenge, let’s deny that it exists and deny that it’s possible. Whatever it takes to run out the clock. It’s not going to be used for anything, right?
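For reference, here is a minimal sketch of what that “impossible” operation looks like – assuming a purely hypothetical SQLite table of harvested images, with made-up table and column names; whatever the agencies actually run is doubtless more baroque, but the primitives are identical:

    import sqlite3

    # Hypothetical schema, for illustration only: a table of harvested face
    # images, keyed by the name the system has associated with each one.
    conn = sqlite3.connect("face_db.sqlite")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS images (
            id INTEGER PRIMARY KEY,
            subject_name TEXT,
            source_url TEXT,
            flagged_not_me INTEGER DEFAULT 0
        )
    """)

    def search(name):
        # "Search" - list every image the database has filed under a given name.
        return conn.execute(
            "SELECT id, source_url FROM images WHERE subject_name = ?", (name,)
        ).fetchall()

    def delete_flagged(name):
        # "Update" - remove the images the subject has flagged as not them.
        conn.execute(
            "DELETE FROM images WHERE subject_name = ? AND flagged_not_me = 1",
            (name,),
        )
        conn.commit()

    # The allegedly impossible operation, end to end:
    for image_id, url in search("Marcus Ranum"):
        print(image_id, url)
    delete_flagged("Marcus Ranum")

Searching and deleting against a name field is about as basic as database operations get; the hard part is institutional willingness, not technology.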

Wrong.

[ars]

Cops in Miami, NYC arrest protesters from facial recognition matches

Cops’ use of the tech among the list of things protesters are demonstrating against.

Yup:

Law enforcement in several cities, including New York and Miami, have reportedly been using controversial facial recognition software to track down and arrest individuals who allegedly participated in criminal activity during Black Lives Matter protests months after the fact.

Miami police used Clearview AI to identify and arrest a woman for allegedly throwing a rock at a police officer during a May protest, local NBC affiliate WTVJ reported this week. The agency has a policy against using facial recognition technology to surveil people exercising “constitutionally protected activities” such as protesting, according to the report.

In other words, Miami police self-regulate their use of facial recognition. That’s a nice way of saying “trust us.”

“If someone is peacefully protesting and not committing a crime, we cannot use it against them,” Miami Police Assistant Chief Armando Aguilar told NBC6. But, Aguilar added, “We have used the technology to identify violent protesters who assaulted police officers, who damaged police property, who set property on fire. We have made several arrests in those cases, and more arrests are coming in the near future.”

An attorney representing the woman said he had no idea how police identified his client until contacted by reporters. “We don’t know where they got the image,” he told NBC6. “So how or where they got her image from begs other privacy rights. Did they dig through her social media? How did they get access to her social media?”

One part of the problem is that the sources of the images have been deliberately obscured. “It’s in the database” is all anyone knows. How did it get there? Uh, we don’t know. It’s easier not to know than to classify it, or to have to deal with annoying Freedom of Information Act requests.

This is an example of a particularly subtle form of “parallel construction.” You have footage of someone throwing a rock at a cop, so you let the facial recognition suggest some names. Then, you look at the images on the person’s Facebook page and – yup, that’s the right person. It gets around one of the problems with facial recognition, namely that it’s not very accurate: it winnows the haystack down to a handful of things that may or may not be needles, and a real intelligence makes the final assessment.
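If you want to picture the winnowing, here is a minimal sketch – illustrative only, with made-up function names and random vectors standing in for whatever embeddings a real face recognition product computes: score everyone in the gallery against the probe image, hand the top few to a human, and let the human go looking for confirmation.

    import numpy as np

    def cosine_similarity(a, b):
        # Standard cosine similarity between two face-embedding vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def winnow(probe_embedding, gallery, top_k=5):
        # "Haystack to handful": return the top_k gallery identities most
        # similar to the probe. A human analyst then decides which candidate,
        # if any, is actually the person in the footage.
        scored = [
            (name, cosine_similarity(probe_embedding, emb))
            for name, emb in gallery.items()
        ]
        scored.sort(key=lambda pair: pair[1], reverse=True)
        return scored[:top_k]

    # Hypothetical usage; in reality the embeddings would come out of some
    # face recognition model, not a random number generator.
    rng = np.random.default_rng(0)
    gallery = {f"person_{i}": rng.normal(size=128) for i in range(10000)}
    probe = rng.normal(size=128)
    for name, score in winnow(probe, gallery):
        print(name, round(score, 3))

Note that the system never says “this is the person” – it says “here are a few people who look sort of like the person,” and the certainty gets added afterward, by the investigator.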

Similar reports have surfaced from around the country in recent weeks. Police in Columbia, South Carolina, and the surrounding county likewise used facial recognition, though from a different vendor, to arrest several protesters after the fact, according to local paper The State. Investigators in Philadelphia also used facial recognition software, from a third vendor, to identify protestors from photos posted to Instagram, The Philadelphia Inquirer reported.

This is the same technique, basically, that has been used to identify karens and nazis, too. It’s a generally useful feature of the retro-scope. Clearview AI is careful to point out how their technology is making the world a lot safer by identifying child porn predators, etc. Of course, it also identifies dissenters and people who attack cops – which is especially problematic because some cop agencies (e.g., DHS) consider it “assault on an officer” if you yell at them. They also consider being present when someone else is rioting to be part of the riot. Things have not changed much since the police rained bullets into the crowd after the Haymarket bombing. [stderr]

If someone were looking for a conspiracy, the way the facial recognition systems have been deployed is a good example of an emergent conspiracy: everyone in these companies and agencies acted as though it was all someone else’s problem, so now they can say “who, me? We’re just using an existing resource.” That ignores the fact that, at some point, it did not exist. It also ignores the fact that Clearview AI appears to have photos of everyone in their database, and nobody’s asking where they got them and whether that’s not a bit creepy. One question I’d want to have an expensive lawyer ask is “how do you make sure your database is not full of photos of minors?” (Because minors can’t consent to having their images harvested.)

But, it’s OK. Apparently, the cops are going to do a good job of regulating themselves. According to the cops:

New York City Mayor Bill de Blasio promised on Monday the NYPD would be “very careful and very limited with our use of anything involving facial recognition,” Gothamist reported. This statement came on the heels of an incident earlier this month when “dozens of NYPD officers – accompanied by police dogs, drones and helicopters” descended on the apartment of a Manhattan activist who was identified by an “artificial intelligence tool” as a person who allegedly used a megaphone to shout into an officer’s ear during a protest in June.

Having dogs, drones, cops, and helicopters drop in on your apartment is not, in any way, shape, or form, going to affect someone’s willingness to engage in free speech.

It’s just going to get worse. Cops are already treating “you hurt my feelings” as an excuse for shooting someone. What could possibly go wrong?

Strangely, the cybersphere is devoid of stories about how AI is being used to identify tax cheats, or how Palantir is being used to track “dark money” going into political campaigns. Surely, that oversight is a coincidence.

There are companies that offer digitally printed breath masks for the COVID era. I wish everyone could have a mask with a print of Donald Trump’s face.

Comments

  1. mikey says

    ” I wish everyone could have a mask with a print of Donald Trump’s face.”

    For more than one reason! As I wait for the results of a ‘rona test, I find myself thinking that we should be taking over one of the cheeto’s propaganda tricks and start calling it the Republican Virus, or maybe Trump Virus.

  2. nastes says

    Yeah, no surprise there. Cops will exploit anything they get their hands on. And I have yet to see one data collection system that will not be abused after it has been put in place. No matter what oversight criteria they claim to put in place.

    On a slightly different note, Marcus, did you hear about the invisible masking tool for images (Fawkes)?
    If I understood it correctly, it is supposed to overlay features of a second face on yours which are invisible to humans but sufficiently significant to pull the detection algorithm away from the features it uses to identify your original image. So masking your images on social media with it should create enough distortion in the classification that a real image of you (surveillance cameras, etc.) used to find you in the database will fail to match.

    I have no idea whether the approach is actually effective in reality (they already have a huge database), but it sounded interesting enough:
    http://sandlab.cs.uchicago.edu/fawkes/

    Take care,
    nastes

  3. mikey says

    How about “I’ve got a bad case of the trumps.” Though that one sounds more like a GI ailment, one that features frequent trips to the loo….

  4. Curious Digressions says

    “If you haven’t done anything wrong, you have nothing to fear.” – A regime supporter, shortly before being arrested

  5. komarov says

    Well, it’s perfectly reasonable then to demand cops wear entirely transparent helmets – or no helmets at all – that leave their faces clearly visible at all times. That way people can take a frame grab from the next cop-murder-video to facebook or an image search to identify the culprit and help the police weed out those bad apples we’ve all heard about.
    Bodycams keep breaking, badges are always attracting bits of tape*, and nearby coppers – the gold standard of eyewitnesses – keep misremembering what happened. This is clearly the only workable solution. It’s also perfectly acceptable methodology to the police. Besides, no civilian would ever abuse this. Promise.

    *There is a physics paper in this on par with “why does toast drop with the butter side down?”

    —-

    Nastes, thanks for posting that link. That looks very interesting indeed.

    Makes me wonder, though, how long it would take for the use of masking algorithms to be deemed “suspicious behaviour” if their use were to become widespread. (The authors of this one claim it’s hard to detect, but who knows?)

  6. jrkrideau says

    “It gets around one of the problems with facial recognition, namely that it’s not very accurate.”

    It would be fascinating to know what the false positive rate is. A lot of people look enough alike that, assuming the police “believe” in the system, the fact that the AI suggested a person may lead to a sort of confirmation bias.

    The police tend to be very credulous about a lot of the “forensic” tools they use, and this will turn into another.
