If you want to extract the actual raw data, dcraw doesn't really advertise the way to do it. Many of the options for extracting the data still apply some processing/conversion to the underlying data.
It is a pretty big mess, actually. Each manufacturer has different constants that need to be applied to the data, so the raw data from one device is not comparable to another. Convincing dcraw that you actually want the unaltered data is not straightforward.
Anyway, if you really do want to see the raw image, there are a few undocumented flags you can use.
> dcraw -E -4 -T *.CR2
This will give you an unprocessed 16-bit TIFF file containing the "raw" data.
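Once you have that TIFF, you can inspect the sensor values directly. A minimal sketch, assuming Pillow and NumPy are installed and that dcraw has already produced the file (the filename is hypothetical):

```python
import numpy as np
from PIL import Image

def load_raw_tiff(path):
    # Load the 16-bit TIFF written by `dcraw -E -4 -T` as an array of
    # unscaled sensor values (still mosaiced, one value per photosite).
    return np.asarray(Image.open(path))

# Hypothetical usage:
# raw = load_raw_tiff("IMG_0001.tiff")
# print(raw.dtype, raw.max())  # typically uint16, with a max far below 65535
```

Checking `raw.max()` is a quick way to see the limited range discussed below: the values rarely come anywhere near 65535.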
What is interesting is that some camera sensors capture data slightly beyond the image you are presented with. The camera will crop in and debayer the image for you, as will your image editor.
See this thread for the hilarious lengths people go to in order to get unaltered data from their sensors (monochrome-converting a DSLR): https://stargazerslounge.com/topic/166334-debayering-a-dslrs...
That thread is more about removing the physical filters from the DSLR sensors - RGB, UV, infrared to achieve higher resolution and much higher sensitivity.
Really cool article. I'm left with just one question:
Why are the 16-bit RAW values so limited in their dynamic range? Wouldn't sensor manufacturers want to have their pixels able to return values that range the whole way from 0x0000 to 0xffff?
Very few (if any) digital cameras have 16-bit ADCs.
Most cameras use 12 or 14 bit ADCs, so they only use 2^12 or 2^14 of the available values.
Generally, the RAW files aren't actually 16 bit, but rather a packed structure of the native ADC size. In this case, the 16-bit values are intermediate bitmaps; the data isn't 16 bit in the RAW itself.
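To make the packing concrete, here is a sketch (with simulated values, not real sensor data) of why a "16-bit" dump of a 12-bit ADC only spans 4096 codes, and the kind of stretch a converter applies to fill the 16-bit range:

```python
import numpy as np

ADC_BITS = 12                    # typical; many cameras use 14
adc_max = (1 << ADC_BITS) - 1    # 4095, the largest value the ADC can report

# Simulated 12-bit samples stored in 16-bit words, as in a "16-bit" RAW dump:
samples = np.array([0, 1024, adc_max], dtype=np.uint16)

# A converter wanting full-range 16-bit output has to stretch the values:
scaled = (samples.astype(np.uint64) * 65535 // adc_max).astype(np.uint16)
print(scaled.tolist())   # [0, 16387, 65535]
```

Without that scaling step, the raw dump simply never exceeds 4095 (or 16383 for a 14-bit ADC).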
Fujifilm actually has a few cameras with a sensor that can be put in 16-bit mode, but IIRC it's not enabled in software.
Essentially only very high-end digital backs from Hasselblad/Phase One have 16-bit ADCs, but people disagree whether it’s even needed; the last two bits are basically noise.
Scientific cameras with 16-bit ADCs will usually, in my limited experience, have active cooling, e.g. a Peltier element and some fans. Otherwise, 12 bit is probably enough.
I don’t really think that it’s white noise; it’s mostly thermal noise and read noise (the noise added when reading out the sensor), which have very different shapes from white noise.
If they were so easily filtered we’d have far superior denoise effectiveness during raw processing.
Additive white gaussian noise is quite well studied and pretty easy to remove with processing. So even if those last few bits are "just noise", if that noise is well-behaved, on the whole image you still have more information.
There's also the effect of pixel well depth. I forget the exact interplay, but I believe this usually ends up being the limiting factor before ADC resolution.
You can remove white noise with oversampling, but you can't usually oversample unless you're shooting a still-life scene and can take multiple exposures.
Otherwise, you can't strongly delineate between actual scene artifacts and noise.
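The oversampling point can be sketched with simulated frames (synthetic data, not real sensor noise): averaging N exposures of a static scene shrinks white Gaussian noise by roughly sqrt(N):

```python
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((256, 256), 100.0)          # a perfectly flat "still life"
n_frames = 16

# Each exposure picks up independent white Gaussian noise (sigma = 5):
frames = scene + rng.normal(0.0, 5.0, size=(n_frames, 256, 256))

single = frames[0].std()                    # noise in one exposure, ~5
stacked = frames.mean(axis=0).std()         # noise after averaging, ~5/4
print(single / stacked)                     # ~4, i.e. sqrt(16)
```

This only works because the scene is identical between frames; with anything moving, the average blurs scene content along with the noise, which is the delineation problem mentioned above.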
Regarding pixel well depth, I suppose we could have a sensor with larger pixels and a lower pixel count, like a 3840 x 2160 sensor in a full-frame camcorder. Since this would be sold as a video-oriented camera, the lack of higher resolution would be less of a problem than in a still-oriented camera.
Oh of course, thank you. I even knew that my camera has a 12-bit ADC, but somehow the way the values were presented as 16-bit "literals" made me forget that fact.
The author notes that 'But, there is no such standard...[a]ll real-world RAW processing programs have their own ideas of a basic default state to apply to a fresh RAW file on load'.
While I can accept that there isn't a universal RAW -> RGB standard, it seems strange to me that 'compute how this photo should appear' is left as an exercise to the reader.
Photographers often view their work as a form of art, and artists are very particular about even the smallest details of their work.
Why, then, would Nikon, Canon, and especially Leica, not have their own definable standards of how to process RAW photos for their particular cameras?
A RAW file is a description of the light captured by the sensor. What good would such a standard do? The RAW file is not supposed to be a finished work of art, it’s designed to be processed.
Interestingly, some major photo competitions require 'original' RAW files to be submitted along with processed jpegs. The idea is that they can check that the image hasn't been modified in a way that breaks the rules.
Of course, software people know that it would be tedious but technically straightforward to concoct a fake RAW file from another 'raw' format such as a PPM image. The stakes in these competitions can be quite high so it's a bit disconcerting that they rely on a presumption that converting RAW files to editable images is a one-way process.
No - it's not straightforward. How the light strikes the sensor, sensor imperfections, patterns unique to each camera body, thermal noise - it's actually very, very hard. It's one thing to doctor a finished image; it's another, at a forensic level, to doctor a RAW file. Making one which on its own is syntactically correct, but which does not show the telltale signs of being shot on actual hardware - no sensor patterns or imperfections etc. - that part is trivial.
Next level is to make it look plausible on its own, with plausible imperfections. This is a bit more involved and tedious.
The really hard part is to make it look like it came from the exact same individual purported camera body the photographer allegedly used. This is really hard.
An analogy would perhaps be taking a picture of a room:
Now, build another room that would make the same picture. Make sure everything in the room is plausible yet renders exactly the same picture as the first room. (The room is the RAW file.)
Edit:
(I'm not saying all the competitions have the time and know how to verify a RAW file.)
Couldn't you just take whatever image you want to convert to a RAW file, make a really large high quality print out of it, and then take a picture of that? I would imagine that with the right lighting setup (and a really high quality printout) it would be pretty hard to tell that it is a picture of a picture.
Nope. Light is a 5D wavefront. Each x,y,z of origin in the real world is actually a sum of photons from many directions, scattering to many other directions. So you have two angular components. At times, phase matters too, and frequency, so each photon needs perhaps 7 parameters to capture its state, (x,y,z,θ,φ,ψ,λ).
On top of that, you have diffraction effects in the optics, chromatic aberration, and even some birefringence effects.
Algorithms exist which can detect if an image has been cropped, and what region of the original it came from. I'll leave as an exercise to the reader how that works :) hint: keyword is "media forensics"
The scenario I'm talking about is not synthesising a RAW file from scratch, but being able to freely edit it and save the result back into a RAW file that looks "original". I don't think you've made any statements that indicate that doing that would be complex.
Again, this is not about faking a RAW file from scratch, just being able to edit it in place.
One potential complication just occurred to me. If, (IFF!) each sensor site has a somewhat unique non-linearity in any way, you can't just go around in the image and change "pixels". You have to adjust them in a way that would not change the patterns normal to that camera for that sensor site and for neighboring sensor sites which could have been affected by the same light.
Competitions, when an image is contested, sometimes ask for images made before and after the winning image. If it isn't a studio image, the time-series is hard to fake.
The RAW file is analogous to the latent image on film. It's before almost all developing and processing. A big difference is that film has a metric f-ton of non-linearity, while for practical purposes the RAW file contains linear data.
Adobe Digital Negative (DNG) [1] is that sort of standard (sort of), and there are manufacturers who use DNG as the RAW format in their cameras. Leica tends to use DNG as its RAW file format. [2]
> Why, then, would Nikon, Canon, and especially Leica, not have their own definable standards of how to process RAW photos for their particular cameras?
Pretty sure they do. Aren't those the manufacturer-specific import profiles in programs like Lightroom?
I think your premise is a bit faulty. A photographer basically never gives anybody a raw image unless they’re giving up control of how the final image looks to somebody else. (As others have said, there are special circumstances, such as competitions - but there the RAW is not for presentation, just for verifying that the image hasn’t been altered in ways not allowed, i.e. compositing multiple photos to fake some amazing occurrence.)
Processing the raw file is almost as much a creative art as composition and photography itself! So if an application processes raw images in a way the photographer likes better than the algorithms another app uses, then all the better. They’re going to export it in the end anyway.
The camera manufacturers of course do release specifications and probably proprietary code for processing raw images from their cameras to companies like Adobe to use in Lightroom under NDA, which I assume they use as a basis, but all these apps have their own ways of doing adjustment and extra processing too.
They do to an extent. It's just that for some camera manufacturers, it's only accessible in-camera (i.e. shooting in JPG mode), for others, they do have a separate RAW converter you can use on your computer (Fujifilm does, unsure for other manufacturers).
Canon can't make Adobe or Phase One use their algorithm for decoding RAWs though.
> and artists are very particular about even the smallest details of their work.
That's not necessarily true, and it paints artists and art with far too broad a brush. Art and artists need not be concerned about the smallest details - particularly in an intermediate medium that is never shown to the viewer/participant.
Something that's fascinating to me is that at 1:1 display size, the output of modern cameras doesn't look that much better than the pictures out of old 0.3Mpixel still cameras of nearly twenty years ago. The dynamic range is better and the colors are more vibrant, and the noise floor is lower for dark scenes, but on the whole it still looks pretty crappy at full resolution. Why is that? Could we fix it by using larger CCD/CMOS sensor pixels and sticking to lower total pixel counts?
With higher resolution sensors, small imperfections are more visible. To extract the most from a high-megapixel camera, you need spot-on focusing, a higher quality (more expensive) lens, and more care (e.g. higher shutter speed or tripod) to avoid camera shake during exposure.
If you nail all these, I wouldn't call it crappy even at 1:1.
But there is a residual imperfection from the Bayer pattern on the sensor, since each pixel records only one color (R, G, or B) and the other two values for that pixel have to be guesstimated from neighboring pixels, so the de-bayering process isn't perfect.
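A toy illustration of that guesstimation (hypothetical helpers, not any real converter): sampling an RGB image through an RGGB pattern throws away two of every three values, and even the crudest demosaic has to reconstruct them from neighboring sites:

```python
import numpy as np

def bayer_mosaic(rgb):
    # Sample an RGB image through an RGGB Bayer pattern: each photosite
    # keeps exactly one of the three color values.
    h, w, _ = rgb.shape
    m = np.zeros((h, w))
    m[0::2, 0::2] = rgb[0::2, 0::2, 0]   # red sites
    m[0::2, 1::2] = rgb[0::2, 1::2, 1]   # green sites
    m[1::2, 0::2] = rgb[1::2, 0::2, 1]   # green sites
    m[1::2, 1::2] = rgb[1::2, 1::2, 2]   # blue sites
    return m

def demosaic_superpixel(m):
    # Crudest possible demosaic: collapse each 2x2 RGGB quad into one RGB
    # pixel (halves the resolution; real converters interpolate instead).
    r = m[0::2, 0::2]
    g = (m[0::2, 1::2] + m[1::2, 0::2]) / 2.0
    b = m[1::2, 1::2]
    return np.dstack([r, g, b])
```

On a flat color patch this round-trips perfectly; on real fine detail, the reconstruction step is exactly where demosaicing artifacts come from.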
One way to fix it is to use a monochrome sensor and color filters, taking 3-4 exposures (luminance/mono plus R/G/B) and stack them.
A few cameras on the market have a pixel shift feature that can do something similar - multiple exposures, shifting the sensor one pixel between exposures so each pixel gets a true R/G/B sample, and stacking them in camera or in post.
Edit: Forgot to mention the anti-aliasing filter. It sits in front of the sensor and deliberately blurs the image at the pixel level. This is done to avoid aliasing and moiré artifacts, but obviously has the side effect of not-so-great image quality at the pixel peeping level. The fix for this is to get a camera without an AA filter, many modern high-resolution cameras don't have them.
Part of it is just that "1:1" is misleading because of how camera manufacturers count pixels. A "pixel" on a monitor is a set of R, G, B subpixels but camera manufacturers count every subpixel separately and essentially 2/3 of what you're looking at is interpolated data at "1:1" view.
> on the whole it still looks pretty crappy at full resolution. Why is that? Could we fix it by using larger CCD/CMOS sensor pixels and sticking to lower total pixel counts?
Indeed, amazing things can be achieved that way. Sony released a camera, the a7S, with 12 megapixels (while its resolution-focused counterpart, the a7R, has 42) that is crazy sensitive. You can film at night and have the results look like they were shot in daylight.
What images are you comparing? 20-year-old digital cameras were around a megapixel. A modern camera with a decent lens produces much better images. In both cases you'd have to do some work to make sure the thing you're blowing up is exactly in focus.
There is some temperature-conversion function, call it `f(R, G, B, T_from, T_to)`, which relies on the model of blackbody radiation and a model of the wavelength distributions behind the (R, G, B) values, i.e. the particular color space you're working in.
Auto white-balancing is finding `T_to` so that the color values are "most balanced". It is up to the manufacturer how this is decided (and might be proprietary secret sauce). Custom white-balancing is taking a known white or grayscale object and finding `T_to` which would yield something close to (1, 1, 1) for pixels of that object.
I think I accidentally erased a sentence in an edit...
When working with a manufacturer's RAW format, there is a defined way to get the values (R, G, B, T_from) as a result of demosaicing. For example, in the author's photo post-demosaicing, the manufacturer calibrates the final data to obtain (I'm completely making this up) T_from = 2700K. Then using that information, you can correctly adjust the temperature to any other temperature using `f` above.
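The custom white-balancing idea above can be sketched like this (a simple gray-card gain model on linear data; the helper and its normalization to green are my assumptions, not any manufacturer's `f`):

```python
import numpy as np

def custom_white_balance(img, gray_patch):
    # Choose per-channel gains so a known neutral (gray) patch comes out
    # with equal R, G, B (normalized to the green channel), then apply them.
    means = gray_patch.reshape(-1, 3).mean(axis=0)   # average R, G, B of the patch
    gains = means[1] / means
    return img * gains

# Usage: a gray card shot with a warm cast (too much red, too little blue)
img = np.full((4, 4, 3), 1.0) * np.array([1.2, 1.0, 0.8])
balanced = custom_white_balance(img, img)
print(balanced[0, 0].round(6).tolist())   # [1.0, 1.0, 1.0]
```

A manufacturer's `f` instead works through the temperature model, but the effect at the pixel level is the same kind of per-channel rescaling.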