
The green subpixels in a Pentile display are (1) smaller and (2) more numerous. #1 is OK because to get a given perceived brightness you need less actual energy in green than in red or blue. #2 is good because accuracy in green matters more than accuracy in red or (especially) blue.

What's not so reasonable is that AIUI these displays' resolution is usually quoted as if there were the same number of pixels as green subpixels, which would only be reasonable if the human eye simply couldn't see details in red and blue at all. Which is almost true for blue, but not for red.



You may be interested to know that nearly all digital cameras' advertised resolution is the total number of what you call "subpixels."


Yup. Which, note, is not the same thing as the way Pentile resolutions are generally cited; it's worse.

On the other hand, what you actually get out of the camera (either directly or via whatever rawfile conversion tool you use) is an image with the advertised resolution's worth of RGB pixels. I suppose the equivalent of that would be a display that has as many RGB pixels as it claims, but that suffers from artefacts such that what a given pixel actually displays depends on its neighbours.

It's only just occurred to me that this practice of quoting the number of photosites as the resolution of a camera's sensor may help explain the (otherwise entirely indefensible) tendency of camera-makers to specify the resolution of the display on the back of the camera by giving its total number of subpixels. So, e.g., Nikon say that the D3000 has a "large 3-inch, 230k-dot, high-resolution LCD monitor". "230k-dot" means 320x240 pixels = 76,800 pixels = 230,400 subpixels. Bah!
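To spell out the arithmetic behind that spec:

```python
# Nikon's "230k-dot" figure counts subpixels ("dots"), not pixels.
width, height = 320, 240      # QVGA panel
pixels = width * height       # 76,800 RGB pixels
dots = pixels * 3             # 3 subpixels (R, G, B) per pixel
print(pixels, dots)           # 76800 230400
```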


Not coincidentally, the color camera's Bayer filter and the Pentile matrix use an awfully similar RGBG arrangement.

Each "pixel" you get RAW off the camera is only R, G, or B. The "full" resolution is synthesized in "demosaicing." In other words, two-thirds of your digital camera's image is faked up. People just don't notice much because a) the error from faking it looks a lot like defocus, b) the cameras often try to cover up their suckiness with denoising algorithms, and c) most people get a JPEG out of the camera anyway... which typically encodes the chroma (color) at a lower resolution and throws out a bunch of the fine (high-frequency) details.
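For concreteness, here's a toy sketch of the simplest demosaicing scheme -- bilinear interpolation over an RGGB mosaic. The function name and details are my own invention; real cameras use considerably smarter (and proprietary) algorithms:

```python
import numpy as np

def demosaic_bilinear(raw):
    """Toy bilinear demosaic of an RGGB Bayer mosaic.

    raw: 2-D float array where each photosite holds only the one
    channel (R, G, or B) its colour filter let through.
    Returns an (h, w, 3) RGB image; two-thirds of the values
    are interpolated, not measured.
    """
    h, w = raw.shape
    r = np.zeros((h, w), bool); r[0::2, 0::2] = True   # R at even row, even col
    b = np.zeros((h, w), bool); b[1::2, 1::2] = True   # B at odd row, odd col
    g = ~(r | b)                                       # G everywhere else
    out = np.empty((h, w, 3))
    for c, mask in enumerate((r, g, b)):
        known = np.where(mask, raw, 0.0)
        wts = mask.astype(float)
        # Average each pixel's 3x3 neighbourhood of known samples,
        # implemented as padded shifts so no SciPy is needed.
        kp, wp = np.pad(known, 1), np.pad(wts, 1)
        num = sum(kp[i:i + h, j:j + w] for i in range(3) for j in range(3))
        den = sum(wp[i:i + h, j:j + w] for i in range(3) for j in range(3))
        out[..., c] = num / den
    return out
```

On a uniform scene every channel interpolates back to the same value; it's at sharp edges and fine patterns that the interpolation error (and the moiré the antialiasing filter exists to suppress) shows up.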

The really weird part is that the Pentile matrix should be capable of a higher effective resolution when displaying color images from digital cameras (except those from Foveon X3, 3CCD, or even more uncommon setups). Its subpixels are arranged the same way as the data is, which saves you from transforming the image to and from the intermediate RGB format.


"the error from faking it looks a lot like defocus"

Mosaic cameras tend to have an optical antialiasing filter that blurs the image, to reduce the moiré effect.


I've read this over and over again, and I can't tell if you're agreeing or disagreeing with me!

Defocus spreads a point of light over more than one point in the image... as does the "antialiasing filter."


There are actually multiple potential sources of error here. [EDIT for disambiguation: I mean error in reconstructing the image at full RGB resolution, not error in what anyone has been saying in comments here.]

1. The antialiasing filter. This behaves very much like a defocusing error.

2. Demosaicing. Exactly what this does depends on the algorithms in the camera. It doesn't necessarily look much like defocus. (For instance, on some cameras I've worked with -- not consumer ones, FWIW -- you get demosaicing artefacts that produce a sort of tartan effect.)

If the antialiasing filter is strong enough that it destroys all information in the image above the Nyquist limit for the R and B photosites, then demosaicing can in principle be perfect -- i.e., lose no information that wasn't already lost by the antialiasing filter. But (a) that ceases to be true in the presence of noise in the sensor, and of course there usually is some, and (b) an antialiasing filter that strong throws away information, because then you're not really using all your green photosites. So in practice you'll sometimes get ugly high-frequency demosaicing artefacts. Presumably designing a good demosaicing algorithm is largely about making this happen as seldom as possible for real-world images.
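A one-dimensional illustration of the aliasing in question, assuming R photosites on every other pixel (my own toy numbers): a stripe pattern below the full grid's Nyquist frequency but above the R sites' Nyquist frequency gets recorded by the R channel as a much coarser pattern.

```python
import numpy as np

n = 64
x = np.arange(n)
# Stripes at 3/8 cycles/pixel: resolvable by the full pixel grid
# (Nyquist 1/2) but not by R sites on every other pixel (Nyquist 1/4).
stripes = np.cos(2 * np.pi * 0.375 * x)
red_samples = stripes[::2]               # what the R sites actually see
# At the R sites' sampling rate this is 0.75 cycles/sample, which
# aliases to |0.75 - 1| = 0.25 cycles/sample -- i.e. the R channel
# records a 1/8 cycles/pixel pattern instead of the real 3/8 one.
alias = np.cos(2 * np.pi * 0.25 * np.arange(n // 2))
print(np.allclose(red_samples, alias))   # True
```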


I don't think I was agreeing or disagreeing! Supplementing?

Mosaic cameras have an optical blurring device, the antialiasing filter, between the lens and the image sensor. It throws away information, so the image will always have residual blur. Even if the demosaicing algorithm does something a bit silly, it will still look like blur.



