# Logarithmic vs Linear???

Started by Stephen Price, Sep 02 2008 11:03 AM

5 replies to this topic

### #1

Posted 02 September 2008 - 11:03 AM

I have heard that Panavision's Genesis uses logarithmic processing, while other cameras, such as the Red One, use a linear system. From what I understand, these are two different encoding schemes used to process and store the dynamic range of the sensor output.

Is there any truth in this? What do these terms refer to? What is the difference between them, and what are their advantages?

Please help my confusion.

Thanks

SP

### #2

Posted 02 September 2008 - 06:58 PM

This is actually quite a complex subject because things which are described as linear rarely are and things that are described as logarithmic even more rarely are.

So to simplify, in essence (technical nitpickers, don't jump on me for this), it works as follows.

A linear representation of something - the loudness of a sound, the brightness of a light - will double in value as the amount of the thing it's measuring doubles. As the pressure waves creating a sound double in intensity, the number used to represent them doubles. As the light gets twice as bright, the number used to represent it doubles. A light of 10 candela is (overlooking some minutiae) twice as bright as one of 5. Straightforward.

In terms of imaging, this means that if an object in an image has an apparent luminosity, due to reflected light from the environment, of, say, 10 candela, it might be represented as the digital number 100. An object under twice as much light, or with half the surface absorption (being a lighter colour), would appear to have a luminosity of 20 candela, and might be represented as the digital number 200.
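Purely as an illustration, here is a tiny Python sketch of that linear mapping; the 10:1 scale factor is made up to match the example numbers above and is not any real camera's encoding:

```python
def linear_code(candela):
    # Illustrative linear encoding: the code value is directly
    # proportional to scene luminance. The scale factor of 10 is
    # arbitrary, chosen so 10 candela maps to code 100 as in the text.
    return candela * 10

print(linear_code(10))  # 100
print(linear_code(20))  # 200: twice the light, twice the number
```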

So far so simple.

(this is complicated by the fact that humanity doesn't have any image sensors that are anywhere near linear in terms of electrical or chemical output per number of incident photons, but ignore that for now)

The problem with this is that the human visual system is much better at discerning differences between comparatively dim levels than between comparatively bright ones. See here for a graph of perceived against actual brightness: at the dim end the curve is almost vertical, meaning a small change in actual brightness produces a large change in what we perceive, whereas at the bright end the curve flattens out, so the difference between "bright" and "really bright" has to be quite large before we notice it at all.

The problem with linear storage for cinematography is that it uses the same number of digital code values to store the difference between 1 and 2 candela (a full stop, very visible) as it uses to store the difference between 99 and 100 candela (about a 1% change, essentially invisible), so we end up wasting bits at the bright end encoding brightness changes we can't see, while starving the shadows where we can. In fact, as that graph I linked shows, every time we double how bright something actually is, it only looks roughly 1.5 times brighter.
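A quick way to see the waste is to compare the relative size of a one-unit step at each end of a linear scale. This little sketch (using the candela values from the example above) shows that the same one-unit step is a doubling in the dark but only about a 1% change in the bright:

```python
def relative_step(level):
    # Fractional change produced by a one-unit increase from `level`
    # on a linear scale: (level + 1 - level) / level.
    return 1.0 / level

print(relative_step(1))             # 1.0: going from 1 to 2 is a full doubling
print(round(relative_step(99), 4))  # 0.0101: 99 to 100 is about a 1% change
```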

Clearly the apparent fidelity of a reproduced image - "image quality" - can be improved for no storage penalty by allocating bits more wisely. The logarithmic function is one way of doing this, and it is quite easy to understand; Wikipedia explains it well, but briefly, the base-b logarithm of a number n is the power to which b must be raised to produce n, so the base-ten logarithm of 1000 is 3 (10 x 10 x 10, or 10^3, is 1000). This is obviously handy for representing big numbers in a small space, and it allocates bits much more as we'd like them to be allocated.
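For instance, using nothing but Python's standard library:

```python
import math

# The base-10 logarithm of n is the power to which 10 must be
# raised to produce n.
print(math.log10(1000))     # 3.0, because 10**3 == 1000
print(math.log10(10 ** 6))  # 6.0: a million stored as a single small number
```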

So back to our example: our 10 candela object is now represented as log10(100), which is 2; our 20 candela object as log10(200), which is about 2.3; and a hypothetical 30 candela object as about 2.48. As you can see, the numbers get much closer together the brighter the object is; log10(1000) is only 3, which in our example would be a 100 candela object through a perfectly linear imaging system. A tenfold increase in linear brightness is represented by a 1.0 increase in the logarithm, so we stop spending precision on bright-end differences we can't see and, relatively speaking, concentrate data in the dimmer regions of the image, just as we wanted.
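Running the example's numbers through a base-10 logarithm (keeping the same made-up 10:1 linear scale from earlier) shows the compression at the bright end:

```python
import math

def log_code(candela):
    # Linear code value from the running example (10:1 scale, purely
    # illustrative), then its base-10 logarithm.
    return math.log10(candela * 10)

print(round(log_code(10), 2))   # 2.0  (log10 of 100)
print(round(log_code(20), 2))   # 2.3  (log10 of 200)
print(round(log_code(30), 2))   # 2.48 (log10 of 300)
print(round(log_code(100), 2))  # 3.0  (log10 of 1000)
```

Note how 10 to 20 candela spans 0.3 of log code, while 20 to 30 candela spans only about 0.18: equal linear steps get progressively fewer code values as brightness rises.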

This is greatly complicated in practice by the tendency of manufacturers to use custom "log" functions in their camera and processing products, which often approximate the shape of a log graph (look at the wiki article again) but include proprietary deviations of the manufacturer's own devising, intended to enhance the performance of the device. Add to this the fact that no imaging device is perfectly linear in response to light in any case, and the processing you choose to apply to log (or indeed linear) images tends to be based partly on mathematics and partly on what makes a subjectively nice picture.
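To be clear, the curve below is a completely made-up "log-style" function, not any manufacturer's real transfer curve; it just sketches the general shape such proprietary curves tend to approximate, a gain and offset wrapped around a pure logarithm:

```python
import math

def toy_log_curve(x, gain=0.45, offset=0.1):
    # Hypothetical log-style transfer function. The constants are
    # arbitrary and NOT any real camera's curve; the +1 simply keeps
    # the function defined at x = 0.
    return gain * math.log10(x + 1.0) + offset

# Monotonic, and it compresses highlights relative to shadows:
samples = [toy_log_curve(x) for x in (0.0, 1.0, 10.0, 100.0)]
print(samples == sorted(samples))  # True: strictly increasing
```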

P


### #3

Posted 03 September 2008 - 01:13 AM

To add to Phil's excellent post, here's a link to several articles that discuss log vs. linear gamma: http://prolost.blogs...l/Image Nerdery

I just wish I understood it all!


### #4

Posted 03 September 2008 - 03:36 AM

Sound perception works exactly the same way. Decibels are a logarithmic ratio of the intensity of two sounds: for voltage, 20 x log(a/b); for power, 10 x log(a/b). (The technically knowledgeable will know why the 10 versus 20: power is proportional to the square of voltage, so log((a/b)^2) becomes 2 x log(a/b), which turns the 10 into a 20.) The Great Designer knew to build us logarithmically to be able to handle the huge variation of sensory input intensity in the natural environment (yeah, I know about evolution and don't disagree, but being an engineer I like the idea of a Great Designer).
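The two formulas above, sketched in Python; `a` and `b` are simply a pair of voltages or a pair of powers being compared:

```python
import math

def db_from_power(a, b):
    # Decibels from a ratio of two powers.
    return 10 * math.log10(a / b)

def db_from_voltage(a, b):
    # Power is proportional to voltage squared, hence the factor
    # 20 instead of 10.
    return 20 * math.log10(a / b)

print(db_from_power(100, 1))   # 20.0 dB: a 100:1 power ratio
print(db_from_voltage(10, 1))  # 20.0 dB: the equivalent 10:1 voltage ratio
```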

### #5

Posted 03 September 2008 - 12:12 PM

> The Great Designer knew to build us logarithmically to be able to handle the huge variation of sensory input intensity in the natural environment (Yah, I know about evolution and don't disagree but being an engineer I like the idea of a Great Designer).

Me, too, Hal. Me, too.

Linear is computers. But, log is people. It's peeeeeopllllllllllllllllle!!

### #6

Posted 03 September 2008 - 04:15 PM

Thanks for all your replies. I have a lot to read and digest before I can even think about asking anything else!

Thanks again

Steve
