Posted 07 October 2015 - 03:08 PM
What's up with all the Indian DPs and their stupid topics lately?
Posted 08 October 2015 - 01:14 AM
Tom... how long were you in the diplomatic corps before you became a DP?
Posted 08 October 2015 - 11:56 AM
Why do digital cameras have less exposure latitude compared to film cameras?
It has taken quite some time to engineer the following.
Sensors that absorb light well, and have acceptable spatial resolution for stills or motion pictures.
Electronic circuits running fast enough, on 'low' power, to capture 24 fps 'still' images with only 'minor' amounts of noise or artifacts.
Price low enough for ordinary mortals to buy.
When I left the industrial image processing world, the equipment I used to capture 'high quality', 'high resolution' image data, cost about $500K for the system...
Like most things, my iPhone has far better imaging than that system... for about $800 (OK, I got the 6S with 128GB memory...). That same iPhone probably has more compute power than many mainframes of the '60s, and storing the equivalent of its 128GB back then would have taken a large air-conditioned room with a number of 'washing machine'-sized disk drives.
Edited by John E Clark, 08 October 2015 - 11:59 AM.
Posted 09 October 2015 - 05:09 PM
Yea Tom, it's probably because he is Indian... Lots of sarcasm implied there.
Posted 09 October 2015 - 06:26 PM
The basic answer is that digital cameras still have too much noise to capture very low signal values at usable quality. A digital camera's usable dynamic range falls between the clipping point and the base noise (the 'noise floor').
The clipping point is absolute, so if you want more dynamic range you have to get more stops from the low end of the range; you basically dig those extra stops out of the darkest tones, which can just barely be extracted from the base noise.
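A rough way to see that relationship between the clipping point and the noise floor: usable dynamic range in stops is roughly the base-2 log of the ratio between the full-well (clipping) level and the noise floor. A minimal sketch, with entirely hypothetical sensor numbers:

```python
import math

def dynamic_range_stops(full_well_electrons, noise_floor_electrons):
    """Approximate usable dynamic range in stops: log2 of the ratio
    between the clipping point (full well) and the noise floor."""
    return math.log2(full_well_electrons / noise_floor_electrons)

# Hypothetical sensor: 30,000 e- full well, 5 e- read-noise floor
print(round(dynamic_range_stops(30000, 5), 1))  # ~12.6 stops
```

Lowering the noise floor (the denominator) is the only way to widen this range once the clip point is fixed, which is the point being made above.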
Film's dynamic range is largely based on a two-emulsion technique: there is one sensitive layer and one insensitive layer per primary color. It can be seen as a kind of HDR technique, because it combines two different exposures inside the film emulsion.
I think the newer Varicam has a somewhat similar technique, where two photosites of different sensitivity are combined into one pixel; you can also switch the camera to a higher "working ISO" by switching the signal processing to rely more heavily on the larger, more sensitive photosite.
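A toy sketch of that dual-sensitivity idea: use the sensitive sample for shadows and mid-tones, and fall back to a scaled reading from the insensitive site once the sensitive one nears clipping. The gain ratio, threshold, and clip level here are all made up for illustration, not the actual Varicam processing:

```python
def combine_dual_photosites(high_sens, low_sens, gain_ratio=16.0, clip=1.0):
    """Merge two readings of the same scene point: the sensitive
    photosite for dark/mid tones, the (scaled) insensitive one near
    clipping. Readings are normalized to [0, clip]."""
    if high_sens < 0.9 * clip:          # sensitive site still in range
        return high_sens
    return low_sens * gain_ratio        # scale insensitive site to match

# Bright highlight: sensitive site clips at 1.0, insensitive reads 0.2
print(combine_dual_photosites(1.0, 0.2))  # 3.2 -> detail recovered above clip
```

A real implementation would blend smoothly across the crossover rather than switch hard, but the principle of extending highlight range with the less sensitive site is the same.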
A film negative's dynamic range is the opposite of video's: it has a fairly absolute black point but no absolute white point; the film just gets denser and denser until your scanner can't see through it and outputs only its own signal noise.
Posted 09 October 2015 - 07:15 PM
Part of the problem when comparing digital to film is the terminology. For stills, the 'absolute' black depended on the print density, and thus on the amount of silver available in the material.
For motion picture, the 'black' was determined not only by the density of the print, but also, the 'darkness' of the theater.
In both image capture and presentation, one had to deal with the contrast of the image and with 'light' bleeding into dark areas. In-camera 'bleed' would give the dark areas more exposure and create a 'haze', while in the theater high contrast would 'bleed' via reflection/scatter around the room.
On the negative there was also the substrate, film base, plus chemical fog. Measurements of density were taken to be density above film-base+fog.
In older film materials, high intensity light could result in a reversal effect and give rise to 'black sun' images. But in most cases, the film was not exposed for maximal density.
But the one factor that seems to be forgotten is that while the negative had more 'latitude', print papers had a limited dynamic range (and presumably so did motion picture print stock...), so attempting to fit a wide-dynamic-range (aka high contrast range) negative onto a print would yield a less than good quality image (mushy highlights, greyish blacks...).
And so one attempted to match the output print range as closely as possible with judicious exposure and processing. In the still case, one could ultimately step in with burning and dodging tools... aka fingers and hands for some people... and burn in blown highlights, or dodge out dark areas, so as to retain detail in the 'thin' portions of the negative.
I would imagine that one could make some sort of masking 'print' to mitigate the intensities for motion pictures, but I would imagine that was only done for compositing effects or live-action+animation à la Disney films.
In the case of digital sensors, one issue is the 'read-out time' required to maintain a motion picture frame rate. One can 'average out' a certain amount of noise if one is imaging a stationary object. Alternatively, some HDR methods involve shooting several images, 'banding' the images within certain intensity ranges, and then compositing them together.
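The 'averaging out' mentioned above works because uncorrelated noise shrinks as the square root of the number of frames averaged. A quick simulation, with made-up signal and noise levels, just to show the scaling:

```python
import random
import statistics

def noisy_reading(true_value=0.5, sigma=0.05):
    """One simulated read of a stationary subject: signal plus Gaussian noise."""
    return true_value + random.gauss(0.0, sigma)

def averaged_reading(n_frames, true_value=0.5, sigma=0.05):
    """Average n reads of the same stationary subject."""
    return sum(noisy_reading(true_value, sigma) for _ in range(n_frames)) / n_frames

random.seed(0)
single_noise = statistics.stdev(noisy_reading() for _ in range(2000))
stacked_noise = statistics.stdev(averaged_reading(16) for _ in range(2000))
print(round(single_noise / stacked_noise, 1))  # close to sqrt(16) = 4
```

Of course this only helps when the subject holds still between reads, which is exactly why it's awkward at motion picture frame rates.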
One could sort of do this with a '3-chip'-type camera where each 'chip' was configured for a specific intensity range.
The idea of having wells of different sensitivities on the sensor has been around for a while, but again 'expense' and the required read-out circuitry increase cost. (I think one of the Fujifilm DSLRs we bought around 2003 had something like this to give better highlight detail...)
Posted 09 October 2015 - 07:55 PM
Print stock and reversal films have a higher contrast because they have to create a reasonable black when the image is projected by a bright light onto a white screen.
Yes, it is hard to use film terms to describe ISO and vague concepts like latitude (as opposed to measurable things like dynamic range) though even the rating for film could be flexible depending on what sort of densities you wanted for printing.
You could do, for example, a test of clipping by rating each digital camera at the manufacturer's suggested rating, but let's say the result was that camera "A" clipped one stop earlier than camera "B". Because the fall-off in shadow information is so gradual with digital, just sinking into the noise floor, you could conceivably rate camera "A" one stop faster and end up with the same clip point as camera "B" -- but does that mean that you've lost one stop of shadow detail in camera "A"? It all depends on how much noise you want to live with in the blacks. Some people who are noise-averse would rather have a clean but clippy image, while others would prefer a wider range on display even if it means some noise in the shadows (or they plan on some noise reduction.)
Ultimately, I feel that a transmission wedge test, a series of grey patches, rendered in log gamma, is one of the best ways of comparing dynamic range relative to noise. You'd just want to combine that with some real world testing and some color charts to see how colors and details are affected as you move into the clip point and the noise floor. Some of these cameras produce odd results when, let's say, a flesh tone is four stops overexposed, even if there is no clipping.
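One way to generate the patch values for such a wedge: grey patches spaced one stop apart, encoded through a simple log curve so the steps come out roughly even on a waveform. This is a generic toy log encoding with a hypothetical black offset, not any specific camera's curve:

```python
import math

def log_encode(linear, black_offset=0.01):
    """Toy log encoding: maps linear scene values onto a roughly
    even per-stop scale (not a real camera log curve)."""
    return math.log2(linear + black_offset)

# Grey patches one stop apart, from 6 stops under to 3 stops over middle grey (0.18)
patches = [0.18 * 2**stop for stop in range(-6, 4)]
encoded = [round(log_encode(p), 2) for p in patches]
print(encoded)  # near-even steps up top, crushing together toward the noise floor
```

On a real camera you would shoot the transmission wedge and read the patches off the waveform; the telltale sign of the noise floor is exactly this crushing of the lowest steps.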