
If GPT 5 truly has 400k context, that might be all it needs to meaningfully surpass Opus.


Having a large context window is very different from being able to effectively use a lot of context.

To get great results, it's still very important to manage context well. Even if the model allows a very large context window, you can't just throw in the kitchen sink and expect good results.
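A minimal sketch of what "managing context" can mean in practice: keep only the most recent chunks that fit under a token budget. The ~4-characters-per-token heuristic is an assumption for illustration; a real tokenizer (e.g. tiktoken) would be more accurate, and real systems also rank by relevance, not just recency.

```python
def rough_token_count(text: str) -> int:
    # Rough heuristic: ~4 characters per token (an assumption).
    return max(1, len(text) // 4)

def trim_to_budget(chunks: list[str], budget_tokens: int) -> list[str]:
    """Keep the newest chunks whose combined estimate fits the budget."""
    kept: list[str] = []
    used = 0
    for chunk in reversed(chunks):  # newest chunks last, so walk backwards
        cost = rough_token_count(chunk)
        if used + cost > budget_tokens:
            break
        kept.append(chunk)
        used += cost
    kept.reverse()  # restore chronological order
    return kept
```

The point is that the budget is a deliberate choice well below the model's hard limit, since quality tends to degrade before the window is full.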


Even with large contexts there are diminishing returns. Just having the ability to stuff more tokens into context doesn't mean the model can effectively use them. As far as I can tell, models always reach a point at which more information makes things worse.


The real question is its tendency toward context rot, not the size of its context :) LLMs are supposedly able to load three Bibles into their context, but they forget what they were about to do after loading 600 LoC of locales.


It's 272,000 input tokens and 128,000 output tokens.


The website clearly lays them out as 400k input and 128k output [1]. I just updated my AI apps to support the new models. I routinely fill the entire context on large code calls. Input is not a "shared" context.

I found 100k was barely enough for a single project without spillover, so 4x allows for linking more adjacent codebases for large scale analysis.

[1] https://platform.openai.com/docs/models/gpt-5


Oh, I hadn't grasped that the advertised "context window" size has to include both input and output.

But is the input really capped at 272k even if the output is, say, 10k? Because it does say "max output" in the docs, so I wonder.


This is the only model where the input limit and the context limit are different values. The OpenAI docs team is working on updating that page.
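A back-of-the-envelope check of one reading of the numbers in this thread: a 400k total context window with a 128k max-output cap leaves 272k for input. The values are assumptions taken from the discussion above, not a definitive statement of the API's behavior.

```python
# Assumed figures from the thread, not authoritative API limits.
CONTEXT_WINDOW = 400_000  # total context claimed for GPT-5
MAX_OUTPUT = 128_000      # documented max output tokens

# If input and output share one window, the input budget is the remainder.
max_input = CONTEXT_WINDOW - MAX_OUTPUT
print(max_input)  # 272000
```

Whether unused output headroom can be reclaimed for input (the question above about a 10k output) isn't settled by this arithmetic; it depends on how the API enforces the caps.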


Whoa, that's really kind of hidden. But I think you can specify max output tokens. Need to test that!


400k context with 100% on the fiction livebench would make GPT-5 the indisputably best model IMHO. Don't think it will achieve that though, sadly.


Coupled with the humungous price difference...



