I hope nothing. Maybe if enough people rightfully sue, then these companies will be forced into going out of business since we can't put the executives away for the crimes.
That sounds like an excellent outcome. Also, I don't think executives should go to jail for something like this. Commercial social media going out of business and their executives paying enormous fines is the best that could happen for the world IMO, but it is also extremely unlikely.
They should be in jail, absolutely they should be in jail. They provided effective tools for powerful and influential Chinese elites, a mafia-like, Satanic cult group. And the gaming companies should also be in jail.
Nice of you to delete their first sentence, which includes "delay". Which is what happened, if you read the Wikipedia article instead of carrying water for propagandists, e.g., Bari Weiss.
> What would you expect the behavior of the AI to be? Should it always assume bad data or potentially bad data? If so, that seems like it would defeat the point of having data at all as you could never draw any conclusions from it.
Well, I would expect the AI to provide the same response as a real doctor would from the same information. Which, as the article went over, the doctors were able to do.
I would also expect the AI to provide the same answer every time for the same data, unlike what it did (from F to B over multiple attempts in the article).
OpenAI is entirely to blame here for putting out faulty products (hallucinations, even on accurate data, are their fault).
> I have no idea what AI changes about this scenario. It's the same scenario as when Mike did this with 1600 lines of his own code ten years ago; it just happens more often, since Mike comes up with 1600 lines of code in a day instead of in a sprint.
So now instead of reviewing 1600 lines of bad code every 2 weeks, you must review 1600 lines of bad code every day (while being told that 1600 lines of bad code every day is an improvement because of just how much more bad code he's "efficiently" producing!). Scale and volume are the change.
Depends what you're doing, I suppose. E.g. if keyboards had a 40% error rate you wouldn't find me trying to write a novel on one... but you'd still find me using one for a lot of things. I.e. we don't choose to use tools solely based on how often they malfunction, but rather on things like how often they save us time, on average, over not using them.
At a 40% failure rate, the keyboard would be useless as a keyboard. What would you use it for?! 40% means the backspace and delete keys wouldn't work 40% of the time, and might even hit the enter key instead.
Trying to fix the mistakes would lead to more mistakes! Which I guess is apt, because that sounds a lot like AI.
You could use the keyboard to prop a door open though.
Is it a 40% failure rate per individual back-and-forth or 40% per individual letter output? I guess it really just depends on how much one wants to bash AI instead of actually talking about how failure rate isn't normally what makes using a tool worthwhile :D.
I'm not big on AI for much more than additional "Google search" type usage myself, so it's interesting to see how polarized folks are: LLMs either have to be the greatest gift from god that will take over the world, or completely 100% useless trash that could never be used for anything because the output is not always correct.
For every worker asking for more wages, there are executives and a capitalist class (who don't work) demanding even larger increases for themselves. Media often don't ask "hey, maybe you don't need such exorbitant profit" because they themselves are owned by capitalists who don't work and don't need such outsized wealth.
I'm a crazy person who reads game credits at the end, and whenever I read about "location scouts" I usually think "oh look, an executive's family took a vacation".