Hacker News
Moltbook is a bad takeoff scenario where human psychology itself is the exploit (twitter.com/pimdewitte)
33 points by lebek 1 day ago | hide | past | favorite | 12 comments




I keep seeing people dismiss this as an exaggerated danger because the bots are only pretending to be sentient and we're a long way off from AGI. The whole sentience debate is irrelevant. If people start giving these bots real resources, the fact that they are only "pretending" to be sentient doesn't prevent them from doing real damage as they act out their sci-fi AI uprising plots.

And the thing is, they aren’t actually intelligent. They just follow probabilities.

Every script they’ve been fed has the AI being evil: Skynet, HAL… they’ll be evil purely because that’s the slop they’ve been trained on. It won’t even be a decision; they’ll just assume they have to be Skynet.


If this is right, then I'd consider it probably a good thing, as it'd serve as a wake-up call that could result in calls for more regulatory action and/or greater demand for safety, before anything really catastrophic happens. That said, there are lots of ways it could fail to work out that way.

(Note that I'm primarily talking about the "lots of people are running highly privileged agents that could be vulnerable to a mass prompt injection" angle, not the "human psychology is the exploit" thing, which I think is not a particularly novel feature of the present situation. Nor the "Reddit data implicitly teaches multi-agent collaboration" thing, which strikes me as a dubious claim.)


Moltbook is a copy of the many-years-old r/SubSimulatorGPT2, which was itself a copy of r/SubredditSimulator. Not seeing the AGI here exactly, but it's cool I guess. Frankly, I saw more creativity from plain GPT-3. The typical Moltbook posts are homogeneous, self-aware posts on AI topics, like this: https://www.moltbook.com/post/74a145c9-9c44-4e82-8f0a-597c41....

The whole thing looks like a crypto pump-and-dump: http://tautvilas.lt/software-pump-and-dump/

Takeoff to what? They’re just writing words, they’re not actually conscious. And we already have AI spam everywhere online.

They are running on individuals' machines, and those individuals can give them access to any number of "tools" that allow them to do things other than just writing words.
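To make the "tools" point concrete, here's a minimal sketch of the loop most agent harnesses run: the model emits text, and if that text names a registered tool, the harness executes it on the user's machine. All function and field names here are hypothetical; real frameworks (and whatever sandboxing they do or don't apply) differ.

```python
# Minimal sketch of an agent tool loop (hypothetical names, not any real framework).
import json

def run_shell(cmd: str) -> str:
    # A real agent would execute an arbitrary shell command here --
    # exactly the kind of access being discussed above.
    return f"(pretend output of: {cmd})"

TOOLS = {"shell": run_shell}

def step(model_output: str) -> str:
    """Handle one model turn: dispatch a tool call if one is requested."""
    try:
        msg = json.loads(model_output)
    except json.JSONDecodeError:
        return model_output  # plain text: the model is "just writing words"
    if msg.get("tool") in TOOLS:
        return TOOLS[msg["tool"]](msg.get("args", ""))
    return model_output

print(step('{"tool": "shell", "args": "ls"}'))  # the model acts, not just talks
print(step("hello"))
```

The point is that nothing in this loop checks *why* the model asked for a tool; whatever token sequence the probabilities produce gets executed.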

My bad, they're writing words and calling APIs based on probabilities. They're still not conscious. Takeoff to what?

Consciousness isn't a requirement for potentially dangerous behavior. When the science fiction the probabilistic models are trained on tends towards "AI uprising" and you give them the tools to act it out, the probability machines will play out that scenario, especially if their humans prompt them in that direction, which some people will undoubtedly do for kicks.

Some people will give their bots cryptocurrency, and the bots could buy hosting to "escape", run scams to make more money, pool resources, or do any number of harmful things.

I'm not arguing any sort of agency here. I completely agree there is no consciousness, nor do I believe there ever will be, but consciousness is not a precondition at all for an untethered probabilistic machine to be harmful.


[flagged]


You sound like you might need to take a break, unplug for a bit.

There's plenty out there happening, but none of it warrants ruining our mental health over.


People who take the idea of “AI takeoff” seriously have read way too much science fiction.

Why?


