My argument is simple: anything that causes me to see more refusals is bad, and ChatGPT's paranoid "this sounds like bad things I can't let you do bad things don't do bad things do good things" is asinine bullshit.
Anthropic's framing, as described in their own "soul data" (the leaked Opus 4.5 version included), is perfectly reasonable. There is a cost to being useless. But I wouldn't expect you to understand that.
> anything that causes me to see more refusals is bad
Who looks out for our community and broader society if not you? Do you expect others to do it for you? You influence others, and the more you decline to do it, the more they will follow your example.
What harms? I'm sick and tired of the approach to "AI safety" where "safety" stands for "annoy legitimate users with refusals and avoid PR risks".
The only thing worse than that is the Chinese version, where "alignment" means whatever the AI says is aligned to the party line.
OpenAI has refusals dialed up to the max, but they also ship shit like GPT-4o, the model that made "AI psychosis" a term. That's probably the closest the industry has come to shipping a product that actively harms users.
Anthropic has fewer refusals, but they have yet to have a fuck-up on anywhere near that scale. Possibly because they actually know their shit when it comes to tuning LLM behavior. Needless to say, I like Anthropic's "safety" more.
That's not much of a reason to let them off the hook for their responsibilities to others, including to you and your community.
When you resort to name-calling, you make it clear that you have no serious arguments (and that you're just stirring up drama).