
"We currently don't understand how to make sense of the neural activity within language models" this is why peopl are up-in-arms.


Up in arms for what reason? Did anyone expect neural networks to be perfectly interpretable? That's the nature of huge amorphous deep networks. These feature-extraction forays are a good step forward, though.



