I won’t see NCII

One of the things that the social media / tech industry has not gotten enough credit for is its efforts to stop things like child sexual abuse material (CSAM) and non-consensual intimate imagery (NCII, sometimes called “revenge porn”). At the heart of the industry-wide effort are NGOs that act as clearing houses for hashes of the images. These hashes are then shared with all the other companies so they can watch for the same imagery as well.1
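To make the mechanics concrete, here is a minimal sketch of that flow. The names (`load_shared_hashes`, `is_known_ncii`, the file format) are hypothetical, and real systems use perceptual hashes (e.g. PhotoDNA or PDQ) so that re-encodes and minor edits still match, rather than a cryptographic hash like the SHA-256 stand-in below.

```python
import hashlib


def load_shared_hashes(path: str) -> set[str]:
    """Load the NGO-provided hash list (assumed here to be one hex digest per line)."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}


def is_known_ncii(image_bytes: bytes, shared_hashes: set[str]) -> bool:
    """Return True if this upload matches a hash in the shared database."""
    # Placeholder: a production system would use a perceptual hash here.
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in shared_hashes


# In the data center, every new upload is checked before it is stored or forwarded:
#
#   shared = load_shared_hashes("shared_hashes.txt")
#   if is_known_ncii(upload_bytes, shared):
#       quarantine_and_report(upload_bytes)   # hypothetical handler
```

The key property is that only hashes ever leave the clearing house; the imagery itself is never archived or redistributed.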

The one challenge here is that these schemes do not co-exist with end-to-end encrypted (E2EE) conversations. The hash checks are run in the data center, where entire racks of computers are checking new content for matches and other areas of concern. But if it’s all encrypted at the client, the servers see it as nothing more than a blob of bits, indistinguishable from noise.
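A toy illustration of why the server-side check breaks down, using a stand-in XOR “encryption” rather than a real E2EE protocol: the server can only hash the ciphertext it relays, which will never match a hash computed over the original image.

```python
import hashlib
import os


def toy_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Stand-in for real end-to-end encryption (XOR with a random key); illustration only."""
    return bytes(b ^ k for b, k in zip(plaintext, key))


image = b"...raw image bytes..."         # what the sender's client has
key = os.urandom(len(image))             # key material only the clients hold
ciphertext = toy_encrypt(image, key)     # all the server ever sees

# The shared database stores hashes of the *plaintext* imagery, so the
# server-side check has nothing to match against.
assert hashlib.sha256(image).hexdigest() != hashlib.sha256(ciphertext).hexdigest()
```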

Now, you could always run the checks on the client. But it turns out there are some issues that make this difficult to integrate at the OS level: two practical, and one political.

The two practical issues are licensing and memory. For licensing, some of the most common hashing algorithms have typically not been licensed for client-side use. This is actually a bigger problem than you would expect.2 Second, the hash databases are not small to store on-device. But neither are they really huge (or growing quickly), and in a few more years, standard storage will be big enough that it’s not an issue.

The ethical / political issue is who you trust to create and maintain the databases of hashes. Most critically, do you trust that a system built to scan for CSAM will not be used to scan for other things, like abortion information3? And remember, it is not just your home country you should worry about, but the laws that could be passed in other countries as well.

I think there is something special about NCII, and specifically about how it harms its victims, that we can leverage to sidestep the end-to-end issues. Specifically, a chunk of the harm from NCII is the shame when the image is actually seen. So, instead of trying to block on the send side, companies would instead block on the receive side.4

For the opt-in client-side scanning (possibly from a third party), people can turn on the StopNCII filter. The campaign is then “I won’t see NCII”: by using this content blocker, you won’t see NCII, either from friends and family, or from strangers. (I hadn’t initially thought of the value in helping get rid of NCII trading groups, but this would help there as well.)
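A rough sketch of what this could look like on the recipient’s device, under some assumptions: the class and method names here (`ReceiveSideFilter`, `on_image_received`) are hypothetical, a real filter would use a perceptual hash rather than the SHA-256 placeholder, and the hash list would be refreshed periodically from something like StopNCII.

```python
import hashlib
from typing import Optional


class ReceiveSideFilter:
    """Opt-in, receive-side NCII blocker (sketch only).

    The filter runs on the recipient's device after decryption, so it
    still works inside an end-to-end encrypted conversation.
    """

    def __init__(self, shared_hashes: set[str], opted_in: bool = False):
        self.shared_hashes = shared_hashes
        self.opted_in = opted_in

    def on_image_received(self, image_bytes: bytes) -> Optional[bytes]:
        """Called by the messaging client with a decrypted attachment.

        Returns the image to display, or None to auto-delete it so the
        recipient never sees it.
        """
        if not self.opted_in:
            return image_bytes
        digest = hashlib.sha256(image_bytes).hexdigest()  # placeholder for a perceptual hash
        if digest in self.shared_hashes:
            return None  # blocked: matched a known NCII hash
        return image_bytes
```

Because everything runs on the recipient’s side, nothing about the sender or the conversation needs to leave the device for the filter to work.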

Is this effective? In terms of first-order effects, it will definitely block the spread of some NCII. There will always be cases where it doesn’t work, or the hash arrives too slowly, but it should help. For second-order effects, my hypothesis is that it should also make a difference. Basically, it gives the victim a way to make the threat less likely to succeed; if 80% of the possible recipients will auto-delete, the impact is just not the same.

I’m not sure if this is the exact right direction, but I do think there is a lot more space to explore around recipient-oriented control.

  1. Because only the hashes of the imagery are traded, there are never archives of the imagery itself. The thing I found most surprising is how the continued circulation of the imagery really affects the victims. Not having an archive is an important way to help support them. 

  2. Remember, nowhere is the original image stored, so creating a new hashing algorithm also means creating a new hash database. 

  3. Yes, I am looking at you, Texas. 

  4. Even more interestingly, companies could outsource this blocking through a plug-in architecture. I am a Meta employee, but right now, I don’t think I have a particular standpoint on where the actual filter lives, beyond the receive-side opt-in. 
