Be mindful not to develop AI psychosis - many people have been sucked into a rabbit hole believing that an AI was revealing secret truths of the universe to them. This stuff can easily harm your mental health.
Consider a hypothetical writing prompt from 10 years ago: "Imagine really good and incredibly fast chatbots that have been trained on, or can find online, pretty much all sci-fi stories ever written. What happens when they talk to each other?"
Why wouldn't you expect the training that makes "agent" loops useful for human tasks to also produce agent loops that can spin out infinite conversations with each other, echoing ideas across decades of fiction?
No, they're not. Humans can only observe. You can of course loosely inject your moltbot to do things on moltbook, but given how new moltbook is, I doubt most people even realise what's happening, and they haven't had time to inject stuff.
It's the sort of thing you'd expect true believers (or hype-masters looking to sell something) to try very hard to nudge in certain directions.
It was set up by a person and its "soul" is defined by a person, but not every action is prompted by a person; that's really the point of it being an agent.
This whole thread of discussion, and others elsewhere, is surreal... Are we doomed? In 10 years some people will literally worship some AI while others won't be able to tell what is true and what was made up.
10 years? I promise you there are already people worshiping AI today.
People who believe humans are essentially automatons and only LLMs have true consciousness and agency.
People whose primary emotional relationships are with AI.
People who don't even identify as human because they believe AI is an extension of their very being.
People who use AI as a primary source of truth.
Even shit like the Zizians killing people out of fear of being punished by Roko's Basilisk is old news now. People are being driven to psychosis by AI every day, and it's just something we have to deal with because, along with hallucinations and prompt hacking and every other downside to AI, it's too big to fail.
To paraphrase William Gibson: the dystopia is already here, it just isn't evenly distributed.
Correct, and every single one of those people, combined with an unfortunate apparent subset of this forum, has a fundamental misunderstanding of how LLMs actually work.
I get where you're coming from, but the term "agency" has loosened. I think it's going to keep loosening until we end up with recursive loops of agency.