Notes on notes 9: how to become invisible-ish

  • September 30, 2013

Camouflage made out of QR codes and barcodes and faces and social networking logos.

Has this been done already?

At the time, I was likely thinking of something like the following projects:

These projects all start with the idea that omnipresent computer vision and facial recognition, whether live or after-the-fact through social network tags and such, is avoidable. It could be distracted in much the same way as a police dog may have its concentration broken by a nice, fresh steak. I think this concept treats the situation as an arms race at best: face recognition technology constantly improves, so even with more effective countermeasures, any attempt to hide is still just providing more data for the computer vision methods to work with. If there’s a face, they’ll find it, or they’ll certainly make an educated guess. A sufficiently advanced method may also notice attempts to conceal someone or something.

Evading face recognition continues to be an active subject in art and design. Maybe visual artists are best equipped to hide from face recognition due to their particular aesthetic pursuits. Here are some more recent projects:

Why hide?

If we’re going to treat this idea as something more than graphic design or fashion (not that there’s anything inherently wrong with those fields - I just think there are some interesting technical concepts here as well), we will need to define who may want to hide and what they may want to hide from. We need to do some threat modeling. How and where are we most likely to have our image captured? Who will be doing it? Will it be fully automated, or will there be a human involved? What are we hiding from, anyway?

We likely won’t be able to design a fully effective solution for hiding in every conceivable situation (at that point, we’re talking about an invisibility cloak, a subject of ongoing efforts from multiple groups) but we can address some of the above questions. The ubiquity of cell phone cameras suggests that most individuals in public places are likely to be photographed by personal devices much more often than by automated surveillance systems. Even in places like Beijing with their wide surveillance camera coverage, cameras are rarely at face level and are generally conspicuous in their placement. They can be partially avoided with simple solutions like hats, especially ones with a few LED modifications. Practically anyone with a phone can take direct, clear, surreptitious photos of your face, however, at which point face recognition algorithms like those used by Google and Facebook do the challenging identification work.

As for what we’re trying to avoid, let’s assume we simply don’t want our image to be used without our knowledge. Misrepresentation is all too easy: imagine a situation where you’re photographed walking out of a restaurant later determined by conspiracy theorists to be a hub of salacious activity. An ideal preventative measure will decrease your likelihood of being photographed while also making it more difficult for face recognition methods to identify you.

Surveillance is still a concern, but not one I believe will be easily addressed through garment-based means. For one thing, much of the visual data collected by authorities seems to be kept indefinitely, under the pretense that it will eventually be useful for the next generation of face recognition or that it’s supposedly too complicated to erase. Face recognition using these data can have a staggeringly high false positive rate (interestingly, some police departments employ “super-recognizers”, people highly skilled at matching faces to photos, who are often better at the task than automated methods). It’s difficult to anticipate how to avoid a method that isn’t even working as intended, so unfortunately the best solution for avoiding surveillance systems may be to avoid public places and events. That strategy won’t help much against personal cameras, for the reasons I’ve mentioned.


What can we do?

With a threat model involving human camera operators and their algorithmic accomplices, we can start to assemble a basic approach to evading both. Traditional camouflage works well for hiding from people, so we can work with its strengths while adapting them to the particulars of automated face recognition. Many of its strengths will apply to both humans and algorithmic approaches anyway.

Camouflage doesn’t make its wearers invisible, just harder to distinguish from their environment. This means we need to remain cognizant of the surroundings where our new patterns may be used. A universal solution is beyond our reach. Based on our threat model of “somebody with a cameraphone”, urban environments with crowds and direct lighting are most likely. We should also avoid pattern elements that look noticeably different from their surroundings, so the barcodes and QR codes and logos and such from my initial idea may not work well. They’re also too easy for automated methods to identify.

Pareidolia, the tendency to see faces where there are none, may work well here; humans and face recognition systems are both susceptible to it. The HyperFace pattern linked above plays with this idea, and while it’s difficult to determine how much it contributes to evading face recognition (i.e., if a system sees many false faces near a real face, does it ignore them all?), it seems like a fine starting point.
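One way to gauge this is to count how many face-like regions an off-the-shelf detector finds in a candidate pattern. Here’s a minimal sketch, assuming OpenCV’s bundled Haar cascade detector and a placeholder filename:

```python
# Count how many "faces" a detector finds in a candidate pattern.
# A minimal sketch using OpenCV's bundled Haar cascade; "pattern.jpg"
# is a placeholder for whatever swatch is being tested.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

img = cv2.imread("pattern.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Loose parameters so even marginal face-like blobs get counted.
faces = cascade.detectMultiScale(gray, scaleFactor=1.05, minNeighbors=3)
print(f"{len(faces)} face-ish regions detected")

# Draw the hits to see where the pattern produces pareidolia.
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("pattern-faces.jpg", img)
```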

Computer vision algorithms can be fooled outright. There’s a whole field of adversarial work exploring the cases where image classifiers mistake one type of image for another:

These examples are contingent on special cases: perhaps the image data used to train an otherwise powerful, human-like neural network just doesn’t contain many examples of a given item, or that item looks kind of strange to begin with. You can probably tell the difference between a photo of a cat and one of guacamole, but a close-up of the latter could look like all kinds of things. Let’s try it out:

I zoomed in on the most feline-appearing part.

The local environment remains important, too. Barack Obama’s face doesn’t just show up in crowds at random, nor do large globs of guacamole, unless there’s been a burrito accident. We can’t perfectly anticipate the local environment beyond what we’ve assumed already, nor can we know the perfect adversarial conditions at all times. Instead, we can attempt to create a pattern likely to produce incidental adversarial conditions, or, essentially, to produce pareidolia for faces and all kinds of other shapes.
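To see what a stock classifier actually reports for a crop like the one above, or later for a swatch of a candidate pattern, a check along these lines is enough. This is a sketch assuming a pretrained ResNet-50 from torchvision and a placeholder filename, not the exact tool behind these screenshots:

```python
# Check what a stock ImageNet classifier makes of a close-up crop like
# the guacamole detail above, or of a swatch of a candidate pattern.
# A sketch assuming torchvision >= 0.13; "crop.jpg" is a placeholder.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

img = Image.open("crop.jpg").convert("RGB")
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))
probs = logits.softmax(dim=1)[0]

# Top five guesses; a confidently wrong label here is the kind of
# "incidental adversarial condition" the pattern should aim for.
top = probs.topk(5)
for p, idx in zip(top.values, top.indices):
    print(f"{weights.meta['categories'][int(idx)]}: {p.item():.2%}")
```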

I think it’s crucial to note at this point that most face recognition methods are still terrible at recognizing faces unlike those in their training data (e.g., anyone who isn’t a white man). Obviously “change your sex and/or ethnicity” isn’t a good solution for addressing how well a system may or may not recognize you, especially if the underlying system is already being employed in a biased way (and more or less every system is, so feel free to argue with me about that). This may just be additional evidence that ubiquitous photography and face recognition are to be avoided on the grounds of personal safety.

I believe a new camouflage pattern should be designed with a threefold strategy:

  • Overwhelm - not just by repeating easily detected patterns, but by using strange combinations and adversarial design

  • Obfuscate - combine the properties of traditional camo patterns with patterns automated methods have difficulty with due to limitations in their training data

  • Overlay - accentuate details that are likely to look like the surrounding space, or at least don’t differ from nearby patterns noticeably (again, much like traditional camo)

Accordingly, I’ll name this project camOOO, after those three O’s. It should be pronounced like cam-oooh, or like Albert Camus.

What can we start with?

Here are a few visual starters:

One of the patterns I’d suspect image classifiers would have the most trouble with is the Polish wz. 68 “moro”. It looks like a rainy puddle reflecting some variety of exotic predatory cat. Or, like this:

Google’s Cloud Vision tools aren’t immediately fooled:

“Tree” is on the list. We can work with that.
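For reference, that label check can be reproduced with the google-cloud-vision client library. A sketch, assuming API credentials are already configured and with a placeholder filename:

```python
# Reproduce the label check above with the google-cloud-vision client
# library. Assumes credentials are already configured for the API;
# "moro.jpg" stands in for the pattern image being tested.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("moro.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    # Hoping for labels like "Tree" rather than "Camouflage".
    print(f"{label.description}: {label.score:.2f}")
```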

For comparison, here’s some modern digital camouflage:

The results are fairly obvious - this looks like camouflage. It’s probably widely recognizable to people, too.

Perfect for hiding in a uniform factory.

What I’d like to start with, then, is a way to generate that Polish pattern in a variety of muted greys and greens. The famous Disney “go away green” is a good color philosophy.
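As a rough first cut, here’s a generative sketch of a moro-ish blotch texture in muted greys and greens. The palette values are invented placeholders rather than colors sampled from the real pattern:

```python
# Rough generative take on a moro-ish blotch texture in muted greys
# and greens ("go away green" territory). The palette values here are
# invented, not sampled from the actual wz. 68 pattern.
import numpy as np
from PIL import Image, ImageFilter

rng = np.random.default_rng()
W, H = 800, 800

# Start from noise, blur it, then threshold it into irregular blotches.
noise = (rng.random((H, W)) * 255).astype(np.uint8)
blurred = np.asarray(
    Image.fromarray(noise).filter(ImageFilter.GaussianBlur(radius=12)),
    dtype=float,
) / 255.0

# Muted grey-green palette, one colour per threshold band.
palette = [(60, 66, 58), (73, 82, 74), (92, 99, 86), (108, 114, 99)]
bands = np.digitize(blurred, np.quantile(blurred, [0.25, 0.5, 0.75]))

out = np.zeros((H, W, 3), dtype=np.uint8)
for i, colour in enumerate(palette):
    out[bands == i] = colour

Image.fromarray(out).save("moro-ish.png")
```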


Is it time to build something yet?

Almost!

Here’s what I want a pattern to incorporate:

  1. A background camo texture, colored blandly but contrastingly

  2. Details more like an urban environment

  3. Potentially adversarial content

  4. Almost obvious matches from training data, chosen at random, and overlapping with Step 2


Ideally I will create and combine these elements in a generative manner. Processing seems like a good toolset for approaching these requirements.
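Since the real version will probably end up in Processing, here’s just a skeleton of the layering order as a Python/Pillow sketch; each helper below is a placeholder for a generator that doesn’t exist yet, mapped roughly to the numbered list above:

```python
# Skeleton of the layering order as a Python/Pillow sketch; the real
# version is planned in Processing, so treat this purely as an
# illustration. Each helper stands in for a generator still to be written.
import random
from PIL import Image, ImageDraw

W, H = 1200, 1600

def background(w, h):
    """1. Bland-but-contrasting camo texture (e.g. the moro-ish blotches)."""
    return Image.new("RGBA", (w, h), (92, 99, 86, 255))

def urban_details(w, h):
    """2. Shapes closer to an urban environment: blocky, edge-heavy forms."""
    layer = Image.new("RGBA", (w, h), (0, 0, 0, 0))
    draw = ImageDraw.Draw(layer)
    for _ in range(40):
        x, y = random.randrange(w), random.randrange(h)
        draw.rectangle([x, y, x + 60, y + 90], fill=(110, 110, 115, 120))
    return layer

def adversarial_bits(w, h):
    """3. Potentially adversarial content, placed sparsely (placeholder)."""
    return Image.new("RGBA", (w, h), (0, 0, 0, 0))

def decoys(w, h):
    """4. Almost-obvious training-data matches, overlapping step 2 (placeholder)."""
    return Image.new("RGBA", (w, h), (0, 0, 0, 0))

pattern = background(W, H)
for layer in (urban_details(W, H), adversarial_bits(W, H), decoys(W, H)):
    pattern = Image.alpha_composite(pattern, layer)

pattern.convert("RGB").save("camOOO-draft.png")
```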

Also potentially of use:

Progress so far

Here’s what I have thus far - it doesn’t look like much, but it’s largely generative rather than just a bunch of bitmaps glued together. More code details later once things have congealed.

Still too dark.

Needs more faces, I think, or maybe animals, without getting too DeepDream-y.

camOOO-test-1-dd1.jpg

Oops, too much!