
Exposure Latitude


14 replies to this topic

#1 Amit D Auti

  • Basic Members
  • 4 posts
  • Cinematographer
  • India

Posted 24 September 2015 - 10:12 AM

Why do digital cameras have less exposure latitude compared to film cameras?




#2 John E Clark

  • Basic Members
  • 780 posts
  • Other
  • San Diego

Posted 24 September 2015 - 12:06 PM

"Latitude" in the old sense referred to how many stops over or under one could expose and still, with appropriate processing, obtain a reasonable image.

 

The modern term is "dynamic range", and negative film had a wider dynamic range than the sensors used in many digital cameras until recent years. One can debate "useful dynamic range" vs. "manufacturer's bragging-point specs"...

 

And I specifically mean film negative materials. If one speaks of reversal ("slide") film or the like, the dynamic range (and hence "latitude") was much more limited. In fact, reversal/slide film has about 6-7 stops of dynamic range, which corresponds to many DSLR sensor ranges.

 

Current digital sensors in many cameras are now achieving rough parity on the dynamic range front, so this is becoming less of an issue.

 

The other item that no one seems to mention: in the olden days of film and print, the print had only about 7 stops of dynamic range. So the name of the game was to expose the negative in such a way that the result could be printed on the output medium (the paper print for stills, the positive projection print for motion pictures) and viewed "correctly" relative to the photographer's desired image.

 

So, while the film negative had a larger dynamic range than the print, one had to process it correctly to be able to present the image as intended.

 

There are physics reasons why the film negative had more dynamic range than the electronic sensor, but with improvements in sensor technology, that difference has been shrinking.



#3 Tyler Purcell

  • Sustaining Members
  • 2358 posts
  • Other
  • Los Angeles

Posted 24 September 2015 - 01:29 PM

Why do digital cameras have less exposure latitude compared to film cameras?


We're analog beings; light is an analog system, and so is film.

Digital cameras take photons of light and convert them to ones and zeros. However, there are limits on how many ones and zeros you can fit into a given stream of data.

An example of this is slight overexposure: the imager and data pipeline can't represent the difference between 99.5% brightness and 100% full brightness. It's the same with blacks: it's hard to differentiate between full black and 1% brightness.

This is where bit depth comes into play. The higher the bit depth, the more differentiation there is in the image. Yes, there are other factors, like the quality of the imager, but MOST of latitude/dynamic range is limited by bit depth, which in computer terms is called word size.

In contrast, film is an analog format: it doesn't have a bit depth, and there is no translation step. When film is scanned to digital, the imager in the scanner itself runs very slowly in order to write a high-bit-depth image.

The reason we don't have higher-bit-depth digital cinema cameras today is quite simple. You have two choices for getting high bit depth in a digital format: a super-fast processor, or a slow capture. It's hard to put that fast a processor in a camera; the heat sink necessary to cool it would be huge. Film scanners hold the frame in place much longer, allowing a slower processor to take its time scanning the image. All the CMOS-based digital cinema cameras on the market are hybrids: they've found a happy medium between processor speed and bit depth. The very best ones can capture a 14-bit RAW image.

In contrast, modern color negative contains upwards of 32 bits' worth of information.

So until small, low-wattage processors become more powerful, we will continue to see slow increases in bit depth, with even larger increases in data file size as a consequence. It will be decades before 24-bit digital cinema cameras are the mainstay, and personally, I don't think it will ever get better than that.
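To make the 99.5%-vs-100% example above concrete, here is a quick Python sketch of how bit depth sets the number of distinguishable code values. It assumes simple linear quantization (real cameras use log or gamma encoding), so treat the numbers as illustrative only, not as any particular camera's behavior:

```python
def code_values(bits: int) -> int:
    """Number of distinct levels an N-bit sample can represent."""
    return 2 ** bits

def quantize(level: float, bits: int) -> int:
    """Map a normalized brightness (0.0-1.0) to an integer code value."""
    max_code = code_values(bits) - 1
    return round(level * max_code)

for bits in (8, 10, 12, 14):
    near_white = quantize(0.995, bits)   # 99.5% brightness
    full_white = quantize(1.0, bits)     # 100% brightness
    print(f"{bits}-bit: {code_values(bits)} levels, "
          f"codes separating 99.5% from 100% white: {full_white - near_white}")
```

At 8 bits the two brightnesses are a single code apart; at 14 bits there are dozens of codes between them, which is the extra differentiation described above.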

#4 Carl Looper

  • Basic Members
  • 1367 posts
  • Digital Image Technician
  • Melbourne, Australia

Posted 24 September 2015 - 03:25 PM

I've been looking at specialist film stock such as 2374, which is principally designed for recording optical soundtracks on film. In terms of dynamic range it's only a few stops, so if used as a camera film the resulting image is very high contrast (shadow/highlight detail is lost) - but not without a certain awesome aesthetic in itself.

But something I very much like about this stock, regardless of image, is its resulting density range, which is enormous: 17 stops. In other words, projecting this stock gives you intense blacks and correspondingly intense whites.

To exploit this stock as a print medium (to exploit its density range in projection) requires working out how to project an image onto it such that the resulting densities don't lose the highlight/shadow information in the original. I haven't done this yet but will in due course. The general idea is to prepare an intermediate (from material shot on conventional camera stock) in which the original's mid-tones are squashed down to an exposure range of only about 1-2 stops, with the highlights/shadows remaining more or less as in the original. It requires some very subtle print work, but I fully expect the resulting print to be awesome.

 

C


Edited by Carl Looper, 24 September 2015 - 03:29 PM.


#5 David Mullen ASC

  • Sustaining Members
  • 18788 posts
  • Cinematographer
  • Los Angeles

Posted 24 September 2015 - 07:35 PM

Silver halide photochemical technology has an inadvertent (probably) or "natural" advantage: the increase in density with exposure is linear only in the midtones. Above and below that (at the shoulder and toe of what is called the characteristic curve), less density forms relative to the increase or decrease in exposure; in other words, the curve flattens out (gets lower in contrast). This has the advantage of causing extremely bright and dark areas of the image to burn up or fall to black more gradually than the midtones.
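The toe/shoulder behavior can be illustrated with a toy characteristic curve. The logistic function below is only a stand-in for a real stock's curve (the Dmin/Dmax/gamma numbers are invented), but it shows the effect: the slope (contrast) is steep in the midtones and flattens out toward the extremes of exposure:

```python
import math

D_MIN, D_MAX = 0.2, 3.2   # base+fog density and maximum density (hypothetical)
GAMMA = 1.4               # controls the slope of the straight-line portion

def density(log_e: float) -> float:
    """Toy density-vs-log-exposure curve with a toe and a shoulder."""
    return D_MIN + (D_MAX - D_MIN) / (1 + math.exp(-GAMMA * log_e))

# Local contrast: change in density over a small exposure interval.
for log_e in (-4, -2, 0, 2, 4):
    slope = density(log_e + 0.15) - density(log_e - 0.15)
    print(f"log E = {log_e:+d}: density {density(log_e):.2f}, local contrast {slope:.3f}")
```

The printed slopes are largest at the middle of the curve and shrink toward either end, which is exactly the gradual roll-off into white and black described above.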



#6 Carl Looper

  • Basic Members
  • 1367 posts
  • Digital Image Technician
  • Melbourne, Australia

Posted 25 September 2015 - 02:10 PM

This is the general idea of the workflow for creating a greyscale image on 2374, where one would otherwise not get one.

The right-hand side gives an idea of what would occur when using 2374 in a camera (and processing it as reversal). The stock can only see a few stops of mid-tones, so anything outside this range (i.e. the shadows and highlights) is lost.

The net effect is what we might call a "contrasty" image.

Now, one might think the solution is to create a low-contrast intermediate.

But somewhat counter-intuitively (until you get your head around it), the way to undo the effect is to actually increase the contrast (!), but not everywhere: only in the shadows and highlights. This is because the loss of detail in the shadows and highlights is actually due to a loss of contrast in those areas. As David indicated, the toe and shoulder of the film's characteristic curve compress the original range of light, making it low contrast (loss of variation) and rendering it as uniform black on one side and uniform white on the other. So to restore the lost variation (the details), one increases the contrast in those areas (by decreasing the contrast in the mid-tones).
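Here is a numeric sketch of that idea, with a sigmoid standing in for the toe/shoulder crush and its inverse (a logit) as the restoring curve. This is purely illustrative, not the actual print workflow:

```python
import math

def crush(v: float, k: float = 6.0) -> float:
    """Toy toe/shoulder: a sigmoid that compresses shadows and highlights."""
    return 1 / (1 + math.exp(-k * (v - 0.5)))

def restore(v: float, k: float = 6.0) -> float:
    """Inverse curve: steeper in shadows/highlights, flatter in midtones."""
    v = min(max(v, 1e-6), 1 - 1e-6)   # clamp to avoid log(0)
    return 0.5 + math.log(v / (1 - v)) / k

for v in (0.05, 0.50, 0.95):
    c = crush(v)
    print(f"scene {v:.2f} -> crushed {c:.3f} -> restored {restore(c):.3f}")
```

Note that `restore` raises contrast exactly where `crush` lowered it (near black and near white), while flattening the midtones, which is the counter-intuitive move described above.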

 

[Attached image: Workflow.jpg]


Edited by Carl Looper, 25 September 2015 - 02:22 PM.


#7 dan kessler

  • Basic Members
  • 158 posts
  • Other

Posted 26 September 2015 - 12:33 AM

We're analog beings; light is an analog system, and so is film.

Digital cameras take photons of light and convert them to ones and zeros. However, there are limits on how many ones and zeros you can fit into a given stream of data.
 

Can't cite the specific source just now, but I'm certain that I read that the sensors in digital cameras are in fact analog devices.  It's the outboard circuitry that digitizes the signal.  Pretty interesting, eh Tyler?


Edited by dan kessler, 26 September 2015 - 12:33 AM.


#8 Robin R Probyn

  • Basic Members
  • 1035 posts
  • Cinematographer
  • Tokyo

Posted 26 September 2015 - 12:55 AM

The variations in voltage caused by light hitting the sensor are analog; these are converted to digital by the A-to-D converter, then passed to the DSP (digital signal processor) in the camera. AFAIK.
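A minimal sketch of that A-to-D step; the reference voltage and bit depth here are assumed values, not any particular camera's:

```python
V_REF = 1.0   # full-scale input voltage (assumed)
BITS = 12     # ADC resolution (assumed)

def adc(voltage: float, bits: int = BITS, v_ref: float = V_REF) -> int:
    """Ideal A-to-D converter: clamp, scale, and truncate to an integer code."""
    v = min(max(voltage, 0.0), v_ref)
    return min(int(v / v_ref * 2 ** bits), 2 ** bits - 1)

print(adc(0.0))    # 0      (darkest code)
print(adc(0.5))    # 2048   (mid-scale)
print(adc(1.2))    # 4095   (over-range input clips at the top code)
```

The clipping at the top code is the digital counterpart of a sensor well saturating: everything brighter than full scale collapses into one value.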


Edited by Robin R Probyn, 26 September 2015 - 12:56 AM.


#9 Tyler Purcell

  • Sustaining Members
  • 2358 posts
  • Other
  • Los Angeles

Posted 26 September 2015 - 02:02 AM

Can't cite the specific source just now, but I'm certain that I read that the sensors in digital cameras are in fact analog devices.  It's the outboard circuitry that digitizes the signal.  Pretty interesting, eh Tyler?


CMOS sensors are a grid of photosites, each consisting of a photodiode and an amplifier. When exposed to light, the photodiode reacts by creating a voltage, which is then amplified. Color is created through a color filter pattern over the photosites, where different pixels respond to different colors. The sensor spits out a varying voltage, which is then converted to digital and scanned by a processor. CCDs don't need the processor, as they pulse the voltage data out all at once. CMOS doesn't do this the same way, so it's a digital-only imager.

Yes, it's interesting! Though, I'd throw it all away in a heartbeat. Technology doesn't make better films, it just makes everyone a filmmaker.

#10 dan kessler

  • Basic Members
  • 158 posts
  • Other

Posted 26 September 2015 - 11:48 AM

Yes, it's interesting! Though, I'd throw it all away in a heartbeat. Technology doesn't make better films, it just makes everyone a filmmaker.

Remember when those photoelectric signals were called 'video?'  We digitized them so computer chips could process them.



#11 John E Clark

  • Basic Members
  • 780 posts
  • Other
  • San Diego

Posted 26 September 2015 - 12:39 PM

CMOS sensors are a grid of photosites, each consisting of a photodiode and an amplifier. When exposed to light, the photodiode reacts by creating a voltage, which is then amplified. Color is created through a color filter pattern over the photosites, where different pixels respond to different colors. The sensor spits out a varying voltage, which is then converted to digital and scanned by a processor. CCDs don't need the processor, as they pulse the voltage data out all at once. CMOS doesn't do this the same way, so it's a digital-only imager.

Yes, it's interesting! Though, I'd throw it all away in a heartbeat. Technology doesn't make better films, it just makes everyone a filmmaker.

 

The process in the CCD/CMOS material that converts photon energy into electrical charge is analog: the voltage of the charge in each CCD/CMOS cell is proportional to the number of photons. The voltage is then converted to a digital number, and all subsequent processing is digital.

The array of CCD/CMOS cells spatially quantizes the imaging area. Early applications used just a single line of cells, in copying applications where the object to be scanned was stationary and the line array moved. The electrical charge each cell acquired was shifted out, and the analog voltage converted to a digital number.

2-D arrays of course allow for easier imaging, especially of moving objects, but the fundamental principle is the same.

 

For very low photon counts, other types of sensor devices are used, or the CCD/CMOS array is cooled to very low temperatures to minimize internal electrical noise in the device.
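That photon-to-digits chain can be sketched end to end. All the numbers here (quantum efficiency, full-well capacity, bit depth) are invented for illustration:

```python
QE = 0.5            # quantum efficiency: fraction of photons converted (assumed)
FULL_WELL = 20000   # electrons the cell can hold before saturating (assumed)
BITS = 12           # ADC resolution (assumed)

def pixel_code(photons: int) -> int:
    """One pixel: photons -> proportional charge -> digital number."""
    electrons = min(int(photons * QE), FULL_WELL)        # charge is proportional, up to full well
    return int(electrons / FULL_WELL * (2 ** BITS - 1))  # the A-to-D step

print(pixel_code(1000))    # 102  - response is linear...
print(pixel_code(2000))    # 204  - ...double the photons, double the code
print(pixel_code(100000))  # 4095 - saturation: the well is full
```

The hard clip at full well is one reason a bare sensor lacks film's gentle shoulder: past saturation, every extra photon is simply discarded.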



#12 Tyler Purcell

  • Sustaining Members
  • 2358 posts
  • Other
  • Los Angeles

Posted 26 September 2015 - 01:25 PM

Right, but where CCDs pulse their data out all at once, CMOS doesn't. So if you were to use a CMOS sensor in an analog system, it would have a horrible rolling-shutter effect, making it ineffective as a moving-image device. Fine for stills, if you don't move the camera, but not good for moving images. The solution to this problem was to use digital technology and "scan" the chip's output at high speed, which requires analog-to-digital conversion, serious processing power, and memory.

So today's CMOS technology for moving images (the OP's question) is all digital and couldn't be analog like CCD.
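The rolling-shutter skew comes down to a few lines of arithmetic: because each row is sampled slightly later than the one above it, a moving vertical edge lands in a different column in every row. The row time and object speed below are arbitrary:

```python
ROWS = 8
ROW_READ_TIME = 1.0   # time to read out one row (arbitrary units)
SPEED = 2             # object moves 2 columns per time unit (assumed)

# A vertical edge sits at column 0 when readout starts:
for row in range(ROWS):
    t = row * ROW_READ_TIME      # this row is sampled later than the previous one
    column = int(SPEED * t)      # by then the edge has moved
    print(f"row {row}: edge captured at column {column}")
```

The edge comes out slanted; with a global (all-at-once) readout, every row would report column 0.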

#13 Carl Looper

  • Basic Members
  • 1367 posts
  • Digital Image Technician
  • Melbourne, Australia

Posted 26 September 2015 - 03:53 PM

CMOS is cheaper to manufacture than CCD, so for film digitisation applications CMOS is a good choice, because the image has already been spatially fixed, or frozen, by the film exposure/development pipeline.

In some respects film has some proto-digital aspects to it. Time, for example, is divided up into discrete samples: the frames. And light, insofar as it's detected in terms of discrete energy packets (photons), is another digital-like sampling process, each photon detection being a stochastic sampling of the light field (amplified through chemistry).

The distinction between analog and digital is probably better defined not so much by discrete sampling as such, but by the conversion of the physical information (be it continuous or already quantised) into digits.
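The stochastic-sampling point can be illustrated with a tiny Monte Carlo: each photon "opportunity" is detected with probability proportional to the local light level, so the tonal value emerges statistically from many discrete events. The intensities and counts are invented:

```python
import random

random.seed(42)
intensity = [0.1, 0.5, 0.9]   # relative light level at three patches (assumed)
TRIALS = 10000                # photon "opportunities" per patch

for patch, p in enumerate(intensity):
    detections = sum(1 for _ in range(TRIALS) if random.random() < p)
    print(f"patch {patch}: light level {p} -> {detections} detections")
```

Each run differs slightly (that variation is shot noise), but the counts track the light levels, which is how a continuous-looking image arises from discrete detection events.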

 

C


Edited by Carl Looper, 26 September 2015 - 04:06 PM.


#14 Carl Looper

  • Basic Members
  • 1367 posts
  • Digital Image Technician
  • Melbourne, Australia

Posted 27 September 2015 - 04:31 AM

Just as we might clarify the "digital" as referencing digits rather than quantisation as such, we might also clarify the "analog" as analogy, rather than as simply that which is in a continuous form.

 

This is not to say that the digital doesn't involve quantisation - it does - but it is to say that quantisation doesn't give us the digital. And the analog, for its part, involves both quantised signals and continuous ones.

 

Now between an input image and an output image, we can say the relationship between the two is an "analog" one, insofar as the output image is "analogous" to the input image, even if it spends time in between in the form of digits (maintained in terms of a zillion switches).

 

If we otherwise speak of analog to digital conversion, followed by digital to analog conversion, then our end to end pipeline becomes an analog-to-analog pipeline, or more simply: an analog one. By this we mean neither the input image, nor output image, are in the form of digits. Rather the digits operate in the middle of the pipeline - facilitating either simple encoding/decoding of the analog, or otherwise allowing a lot of creativity to occur during that transcoding (special effects etc).

 

In this sense the digital is necessarily an augmentation of the analog. It doesn't actually replace it.

 

C

 

[Attached image: Matrix_code.png]


Edited by Carl Looper, 27 September 2015 - 04:43 AM.


#15 Tyler Purcell

  • Sustaining Members
  • 2358 posts
  • Other
  • Los Angeles

Posted 27 September 2015 - 12:23 PM

That's true, Carl. Digital is just an "intermediary" rather than an end in itself.

