
> For 1., it seems to break down if we look more broadly at how LLMs are used, e.g. as a coding agent. We're basically starting to treat LLMs as a higher-level framework now. We don't hold vendors of programming languages or frameworks responsible if someone uses them to create CSAM. Yes, the LLM generated the content, but the user still provided the instructions to do so.

LLMs are completely different to programming languages or even Photoshop.

You can't type a sentence into Photoshop and get images of CSAM within 10 seconds. LLMs are also built on training material, unlike Photoshop's traditional tools. Plenty of CSAM has been found in the training data sets, but shock-horror, apparently there isn't enough information to know "where it came from". There's a non-zero chance that the CSAM Grok is vomiting out is based on "real" CSAM of people being abused.
