
Why do you not think that? As far as I understand, there is no procedure for reviewing the contents; it is simply a database that law enforcement vouches is full of bad images.


NCMEC, not law enforcement, produces a list of embeddings of known images of child abuse. Facebook and Google run all photos uploaded to their platforms against this list. Those which match are manually reviewed and, if confirmed to depict such scenes, reported to CyberTip. If the list had a ton of false positives, do you think they wouldn't notice that their human reviewers were spending a lot of time looking at pictures of the sky?
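
For illustration, a minimal sketch of that matching step in Python. PhotoDNA itself is proprietary, so this substitutes a simple "average hash" as the perceptual hash; the hash value, threshold, and helper names are all hypothetical stand-ins:

    # Requires Pillow. Uses a stand-in perceptual hash, not PhotoDNA.
    from PIL import Image

    def average_hash(path: str, size: int = 8) -> int:
        """Downscale to size x size grayscale; each bit records whether a
        pixel is brighter than the mean, so small edits (resizing,
        recompression) barely change the bits."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming(a: int, b: int) -> int:
        """Number of differing bits between two hashes."""
        return bin(a ^ b).count("1")

    # Hypothetical stand-in for the hash list NCMEC distributes.
    KNOWN_HASHES = {0x8F3C1A2B4D5E6F70}

    def flag_for_review(path: str, max_distance: int = 5) -> bool:
        """True if the upload is within max_distance bits of a known
        hash and should be queued for human review."""
        h = average_hash(path)
        return any(hamming(h, k) <= max_distance for k in KNOWN_HASHES)

The fuzzy max_distance match is exactly why the pipeline ends in human review: the hash deliberately tolerates small edits, so it cannot match exactly.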


It's well known that this algorithm doesn't have a perfect matching rate. It would be easy to presume that any false positive stems from the error rate of the underlying algorithm rather than from an erroneously tagged image, as though everything in the database were tagged correctly. Who would know?

IIRC, Wired reported a number of years ago that the algorithm, "PhotoDNA", worked around 99% of the time; newer algorithms may be fuzzier. This is not the same algorithm, and even "PhotoDNA" appears to change over time.
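
To put a rough number on what a fuzzy threshold implies, here's a back-of-the-envelope estimate in the same vein as the sketch above. It assumes, unrealistically, that unrelated images hash to uniformly random 64-bit strings; real perceptual hashes are correlated with image content, so the true collision rate would be higher:

    from math import comb

    def collision_prob(bits: int = 64, max_distance: int = 5) -> float:
        """P(Hamming distance <= max_distance) between two uniformly
        random bit strings of the given length."""
        return sum(comb(bits, d) for d in range(max_distance + 1)) / 2**bits

    print(f"{collision_prob():.1e}")  # ~4.5e-13 per (upload, known hash) pair

Even a tiny per-comparison rate produces a steady trickle of false positives at Facebook-scale upload volumes, which is one more reason the pipeline ends with human review rather than automatic reporting.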

I doubt reviewers of such content are at liberty to discuss what they see or don't see with anyone here. Standard confidentiality agreements.



