Hacker News

Yes, it really is. The problem is that it takes a lot more than one frame for most modern software to change a pixel on the screen. I'm sitting in Hawaii on wifi right now, and the first random US mainland server I pinged responded in 120 ms, which means the one-way trip took about 60 ms. Now say you're running a 30 Hz game with 2 frames of input lag: that's roughly 67 ms, so the network has already won before you even consider the input lag of the monitor itself.
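The back-of-envelope math in that comment can be sketched out; the numbers are the ones quoted above, not measurements of any particular setup:

```python
# Numbers from the comment above (illustrative, not benchmarked):
rtt_ms = 120                  # round-trip ping, Hawaii -> US mainland
one_way_ms = rtt_ms / 2       # ~60 ms to get a packet there

frame_rate_hz = 30
frame_ms = 1000 / frame_rate_hz           # ~33.3 ms per frame at 30 Hz
frames_of_lag = 2
render_lag_ms = frames_of_lag * frame_ms  # ~66.7 ms of render-side lag

# Two frames of lag at 30 Hz already exceed the one-way network trip:
print(f"network one-way: {one_way_ms:.1f} ms")
print(f"render lag:     {render_lag_ms:.1f} ms")
```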

There are just so many ways to accidentally get many frames of input lag. OS window compositors generally add a whole frame of input lag globally to every windowed app. Anything running in a browser has a second compositor in between it and the display that can add more frames. GPU APIs typically buffer one or two frames by default. And all of that is on top of whatever the app itself does, and whatever the monitor does (and whatever the input device does if you want to count that too).
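Adding up the stages listed above gives a sense of how quickly the frames accumulate. The per-stage numbers below are hypothetical round figures for illustration, not measurements:

```python
# Hypothetical latency budget, in frames at 60 Hz. Every value here is an
# illustrative assumption, not a measurement of any specific system:
frame_ms = 1000 / 60

stages = {
    "app's own render loop":  1.0,  # app draws into its back buffer
    "GPU API buffering":      1.5,  # typical 1-2 frame swapchain queue
    "browser compositor":     1.0,  # only applies to web apps
    "OS window compositor":   1.0,  # global, per the comment above
    "monitor processing":     0.5,  # varies wildly by display
}

total_frames = sum(stages.values())
total_ms = total_frames * frame_ms
print(f"{total_frames} frames ≈ {total_ms:.0f} ms")  # 5.0 frames ≈ 83 ms
```

Even with generous rounding, a windowed web app can plausibly sit several frames behind the input before the monitor adds anything.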



No, it really isn't. Are you really doubling down on this by talking about software that has built-in latency?

Any game that runs at half the frame rate of a cheap TV, with an architecture designed not to draw frames immediately, has nothing to do with what you're saying. That would be like someone deciding to send packets every 100 ms and then claiming the network has 100 ms of extra latency.

All of this forgets that packets can be fired off whenever, but with vsync on, frames have to wait for a specific deadline. Take vsync away and you can set pixels with less latency.
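The vsync penalty alone is easy to quantify: an input that arrives at a uniformly random moment waits, on average, half a frame for the next scanout slot. A quick simulation (a simplification that ignores every other stage of the pipeline) confirms the expected value:

```python
# With vsync on, input arriving at a random point in the frame interval
# must wait for the next vsync. Expected wait = half a frame.
import random

frame_ms = 1000 / 60  # 60 Hz display
waits = []
for _ in range(100_000):
    arrival = random.uniform(0, frame_ms)  # ms since last vsync
    waits.append(frame_ms - arrival)       # ms until next vsync

avg_wait = sum(waits) / len(waits)
print(f"average vsync wait: {avg_wait:.1f} ms")  # ≈ 8.3 ms, half of 16.7
```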


Once you throw in head-of-line blocking, other requests in flight, and your average website's pile of ads and JavaScript operating systems layered on top of each other to emulate a small library that reimplements much of what browsers natively support:

Yeah, I think displays, even when triple-buffered, might win on average. Sending a single packet is fighting a straw man when compared against a full rendering pipeline with common habits. Compare minimums with minimums, or common cases with common cases; crossing between them is unfair regardless of which direction you go.


> OS window compositors generally add a whole frame of input lag globally to every windowed app.

Is there a way to verify this is the case, specifically on X11 Linux?

Also, does variable refresh rate (e.g. FreeSync) help with this?


Not exactly what you asked for, but some reviewers measure it like this: https://www.rtings.com/monitor/tests/inputs/input-lag



