I mean, that's just the consequence of releasing a new model every couple of months. If OpenAI had stayed mostly silent since the GPT-4 release (like they did for most iterations) and only now released 5, then nobody would be complaining about weak gains in benchmarks.


If everyone else had stayed silent as well, then I would agree. But as it is right now, they are juuust about managing to match the current pace of the other contenders. Which is actually fine, but they have previously set quite high expectations, so some will probably be disappointed by this.


Well, it was their choice to call it GPT-5 and not GPT-4.2.


It is significantly better than 4, so calling it 4.2 would be rather silly.


Is it? That's not super obvious from the results they're showing.


Yes, it is, if we're talking about the original GPT-4 release or even GPT-4o. What about the results they've shown is not convincing?


I see only incremental improvements in almost all domains?


If they had stayed silent since GPT-4, nobody would care what OpenAI was releasing as they would have become completely irrelevant compared to Gemini/Claude.



