
> If a data science team modeled something incorrectly in their simulation, who's gonna catch it? Usually nobody. At least not until it's too late. Will you say "this doesn't look plausible" about the output?

I recently watched a demo from a data science guy about the impending proliferation of AI in just about all related fields. His position was highly sceptical, but in a "let's make the most of it while we can" spirit.

The part that stood out to me, which I have repeated to colleagues since, was a demo where the guy fed his tame robot a .csv of price trends for apples and bananas and asked it to visualise the data. Sure enough, out comes a nice-looking graph with two jagged lines. Pack it, ship it, move on.

But then he reveals that, since he wrote the data himself, he knows both lines should just show an upward trend. He expands the axis labels - the LLM had alphabetized the months, but said nothing about it in any of the outputs.
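
For anyone who hasn't been bitten by this one before, it's the classic string-sort trap, and easy to reproduce by hand. A minimal sketch of what presumably happened - made-up numbers and pandas/matplotlib assumed, since we don't know what the bot actually ran:

    # Reconstruction (not the presenter's actual data): month names stored
    # as plain strings sort alphabetically, scrambling a clean upward trend.
    import pandas as pd
    import matplotlib.pyplot as plt

    months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
              "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
    df = pd.DataFrame({
        "month": months,
        "apples": range(100, 112),   # strictly increasing by construction
        "bananas": range(50, 62),
    })

    # What the LLM effectively did: a lexicographic sort puts Apr, Aug,
    # Dec, Feb, ... first, so the plotted lines come out jagged.
    wrong = df.sort_values("month")
    wrong.plot(x="month", y=["apples", "bananas"], title="Alphabetized (wrong)")

    # One fix: an ordered categorical makes sorting follow calendar order
    # instead of lexicographic order.
    df["month"] = pd.Categorical(df["month"], categories=months, ordered=True)
    right = df.sort_values("month")
    right.plot(x="month", y=["apples", "bananas"], title="Calendar order (right)")
    plt.show()

Both calls produce a perfectly plausible-looking plot; only the second shows the two clean upward lines the data actually contains.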

Like every anecdote out there where an LLM makes a basic mistake, this one is worthless without knowing the model and prompt.

If choosing the "wrong" model, or not wording your prompt in just the right way, is sufficient to not just degrade your output but make it actively misleading and worse than useless, then what does that say about the narrative that all this sort of work is about to be replaced?

I don't recall which bot he was using; it was a rushed portion of the presentation, there to make the point that "yes, these tools exist, but be mindful of the output - they're not a magic wand".

Always a good idea to spot check the labels and make sure you've got JFMAMJ..JASON Derulo
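
If you'd rather automate the spot check than eyeball it, something like this works - months_in_order is a hypothetical helper, and it assumes a matplotlib Axes whose x tick labels are month abbreviations:

    import calendar

    def months_in_order(ax):
        # Hypothetical check: do the x tick labels of this matplotlib
        # Axes read "Jan".."Dec" in calendar order?
        labels = [t.get_text() for t in ax.get_xticklabels()]
        return labels == list(calendar.month_abbr)[1:]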
