


2008 - The New Year's Resolution



#1 Werner Klipsch

Werner Klipsch
  • Basic Members
  • 86 posts
  • Other

Posted 28 December 2007 - 08:28 PM

My New Year's resolution was not to get involved in any more of these discussions. Fortunately, it is still 2007 :lol:

I must confess I am rather confused about Mr Jannard's recent statements on Reduser about detail correction with the RED.

Actually, overall I am somewhat confused about many of the claims made about how much resolution the Mysterium sensor has (or any Bayer mask sensor, for that matter).

Phil Rhodes insists that the absolute guaranteed resolution of a Bayer mask 4K sensor can only be 1K if we are talking about red, green and blue, and not synthesizing any part of the image.

I do not understand this reasoning. If you consider the Bayer sensor as being made up of sub-pixels, each with two green, one red and one blue photosite, then there will indeed be 2K of these sub-pixels across the sensor, and 1152 vertically. From this you can at the very least be assured of generating a genuine 2K 16 x 9 image.

After that, with the extra detail information from the extra green photosites, I will allow that more useful detail information can be mathematically calculated and added to the base 2K 'no questions asked' to give a usefully higher resolution image. The D-20 does a good job of this, and I see no reason why the RED should not do as well or better.
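To make that "sub-pixel" arithmetic concrete, here is a rough sketch in Python (purely illustrative, assuming an RGGB layout and NumPy; this is not RED's, Arri's or anyone else's actual de-Bayer pipeline):

    # Rough illustration only -- not any camera's real de-Bayer code.
    # Treat each 2x2 Bayer quad (R, G / G, B) as one RGB "sub-pixel":
    # a 4096 x 2304 mosaic collapses to a guaranteed 2048 x 1152 RGB image.
    import numpy as np

    def quad_bin(bayer):
        """Collapse an RGGB Bayer mosaic (H x W) into an (H/2 x W/2) RGB image."""
        r  = bayer[0::2, 0::2]                     # red photosites
        g1 = bayer[0::2, 1::2]                     # green photosites, even rows
        g2 = bayer[1::2, 0::2]                     # green photosites, odd rows
        b  = bayer[1::2, 1::2]                     # blue photosites
        return np.dstack([r, (g1 + g2) / 2.0, b])  # average the two greens

    mosaic = np.random.rand(2304, 4096)            # stand-in for 4K RAW data
    rgb = quad_bin(mosaic)
    print(rgb.shape)                               # (1152, 2048, 3)

Every value in that 2K image comes straight from real photosites, which is the "no questions asked" baseline I mean; proper de-Bayering then reconstructs detail above it.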

Where I and some of my colleagues have trouble is that Mr Jannard in some of his recent posts on Reduser makes the point that some people may find the RED images not as sharp as expected because they "do not add any detail correction".

So, at which point does the synthesized detail above 2K stop being "enhanced picture information" and become "detail correction"? At the point where it becomes irritating, perhaps?

I would much rather the finer detail correction was derived from the original RAW data, rather than re-synthesized from already synthesized data, which is what would happen if the extra correction is done downstream.

#2 David Mullen ASC

David Mullen ASC
  • Sustaining Members
  • 19769 posts
  • Cinematographer
  • Los Angeles

Posted 28 December 2007 - 09:31 PM

Jim Jannard is referring to edge enhancement, i.e. "sharpening", when he says "detail correction", not de-Bayering algorithms to derive RGB data from the Bayer pattern. The converted RAW files don't use edge enhancement, unless that feature is selected when doing the conversion, so they feel a little soft. Truth is that most digital images ultimately get some degree of sharpening somewhere in the chain. I think most of us would rather the camera not add any artificial sharpening so we can add it to taste later.
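For what it's worth, "adding it to taste later" usually means something like an unsharp mask. A minimal sketch (illustrative only, assuming NumPy/SciPy; not what any particular RAW converter actually does):

    # Minimal unsharp-mask sketch: add back a fraction of (image - blurred image).
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(image, radius=1.5, amount=0.5):
        """Sharpen by boosting the detail that the blur removes."""
        blurred = gaussian_filter(image, sigma=radius)
        return np.clip(image + amount * (image - blurred), 0.0, 1.0)

    frame = np.random.rand(1152, 2048)   # stand-in for a converted luma channel
    sharp = unsharp_mask(frame, radius=1.5, amount=0.5)

The radius and amount are exactly the "to taste" knobs; applied in post you can change your mind, whereas sharpening baked in by the camera cannot be undone.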

Most people say that a de-Bayered image with a decent de-Bayering algorithm has roughly 2/3 or 3/4 of the original resolution, not half or less than half. RED says it's an effective 3.5K, I'm more likely to say 3K, but it doesn't really matter... the numbers aren't the point; the point is how much information and fine detail the image seems to resolve on the big screen without aliasing artifacts. All that matters for now is that it is competitive with 35mm, since that is the industry standard. So far, it seems any of the digital cameras in the 2K to 4K range are in the 35mm ballpark, sharpness-wise. Look at "The Golden Compass", which uses a lot of Genesis footage, and that's an HD recording. Of course, one reason it blends well is probably that the 35mm footage was all posted in 2K, but the movie looked pretty good on the big screen. There was a softness, but it added to the faux period feeling.

#3 Jim Jannard

Jim Jannard
  • Basic Members
  • 361 posts
  • Cinematographer

Posted 29 December 2007 - 01:21 AM

Single digital sensors need an OLPF (optical low pass filter) to eliminate aliasing. That filter softens the native image. Sharpening, unsharp mask, or OLPF Compensation (maybe a more elegant term) is necessary if you want to get back the original resolution of the sensor. Some do not want to get back the lost sharpness of their footage. They like the "softer, creamier" look of no OLPF Comp. Some want to snap it back. Some may even want to over-sharpen to get it to look more like traditional HD footage.

Canon's new 1Ds MKIII manual has a section dedicated to sharpening and why it is important for their cameras. It is the nature of the beast.

We choose NOT to put any in-camera sharpening so that the user has all his choices. But in head to head tests where RED RAW footage is being compared to in-camera sharpened footage, we just want to make sure everyone knows what's happening.

Jim

Jim Jannard
www.red.com

#4 Gavin Greenwalt

Gavin Greenwalt
  • Basic Members
  • 225 posts
  • Other

Posted 30 December 2007 - 12:04 AM

Isn't the primary reason RED applies no sharpening to the image because sharpening isn't even possible on a RAW image? ;)

It's sort of like saying "We don't apply any sharpening to a Zip file."

Edited by Gavin Greenwalt, 30 December 2007 - 12:07 AM.


#5 Jim Jannard

Jim Jannard
  • Basic Members
  • 361 posts
  • Cinematographer

Posted 30 December 2007 - 01:12 AM

Isn't the primary reason RED applies no sharpening to the image because sharpening isn't even possible on a RAW image? ;)

It's sort of like saying "We don't apply any sharpening to a Zip file."


That is not correct. A lot of things happen (and can happen) between the sensor and a RAW file. We just choose not to do it.

Jim

#6 Keith Walters

Keith Walters
  • Sustaining Members
  • 2219 posts
  • Other
  • Sydney Australia

Posted 01 January 2008 - 06:07 AM

Most people say that a de-Bayered image with a decent de-Bayering algorithm has roughly 2/3 or 3/4 of the original resolution, not half or less than half. RED says it's an effective 3.5K, I'm more likely to say 3K, but it doesn't really matter... the numbers aren't the point; the point is how much information and fine detail the image seems to resolve on the big screen without aliasing artifacts. All that matters for now is that it is competitive with 35mm, since that is the industry standard.

Slightly off-topic, but a few times here and in other places I have heard people claim that the best practical resolution 35mm film projection can offer is "about 1,500 lines". This is assuming a new print, a competent projectionist and a good quality, clean lens, that hasn't been dropped too many times.

But what do they mean by 1,500 lines?

Do they mean that such a system is only just capable of displaying 750 vertical white lines on a black background that occupies the width of the screen? And if so, how visible are these lines? Does "visible" mean "clearly visible to the average cinema patron with optimally corrected eyesight", or: "the best that can be seen by any means possible"?

Or do they mean "lines" as in "columns of pixels", which is what I think a lot of people take it to mean? In other words "1.5K projection", suggesting that this is inferior to 2K digital projection.

For alias-free projection, a display of 1,500 vertical lines would necessitate a projector with at least 3,000 pixels horizontally, and in theory, a video camera with the same horizontal pixel count.

Another of the many annoying practices in this sort of discussion is the comparison of film prints produced entirely by traditional photochemical processes with video-derived images, which have all the benefits of electronic post-production "crispening" and other image enhancement. In the real world, there is nothing at all to stop images scanned from negative film getting the same treatment.

#7 Graeme Nattress

Graeme Nattress
  • Basic Members
  • 145 posts
  • Other

Posted 01 January 2008 - 02:43 PM

1500 lines is 1500 lines per picture height, which is 750 line pairs per picture height. A line pair is a white and black line together. The best reference I've found for this is here, http://www.cst.fr/IM...ion_english.pdf where film was tested through the stages of production through to typical real world cinema projection, where best performance was 875 lines per picture height.

For a projector to display 1500 lines it needs 1500 pixels. You're confusing things by bringing aliasing in at this point, and by confusing frequency, which equates to line pairs, with pixels and lines, which differ from line pairs by a factor of 2: there are twice as many lines as line pairs.
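If it helps, the bookkeeping is just a factor of two each way; a trivial sketch using the 875-line figure from that test (illustrative numbers only):

    # Lines vs. line pairs vs. projector pixels (illustrative numbers only).
    lines_per_picture_height = 875               # best case in the CST test cited above
    line_pairs = lines_per_picture_height / 2.0  # 437.5 cycles per picture height
    pixels_needed = lines_per_picture_height     # a projector needs one pixel per line
    print(line_pairs, pixels_needed)             # 437.5 875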

Graeme

#8 David Mullen ASC

David Mullen ASC
  • Sustaining Members
  • 19769 posts
  • Cinematographer
  • Los Angeles

Posted 01 January 2008 - 06:37 PM

The best reference I've found for this is here, http://www.cst.fr/IM...ion_english.pdf where film was tested through the stages of production through to typical real world cinema projection, where best performance was 875 lines per picture height.


I can believe that -- it explains why 2K projection seems on par with really good 35mm answer print projection. Now if we never, ever had to make a 35mm IN and prints, then 2K D.I.'s and 2K mastering would probably be fine for 2K projection (though 4K scanning would still have some benefits even for downrezzing to 2K).

But my feeling is that if you still have to make movie release prints from an IN/IP, then starting out with the best possible 35mm negative is still a good idea, whether that means a 4K D.I. or no D.I. or 4K digital origination.

Of course there are other benefits from originating at a higher resolution than the presentation format (i.e. oversampling). And there would be some benefits from 4K digital projection in terms of reducing stairstepping, etc.

After seeing how good "Walk Hard: The Dewey Cox Story" looked shot in HD on the Genesis and digitally projected in 2K, it makes you think that the real weak link is the fact that we still make 35mm release prints from dupes.

#9 Michael Most

Michael Most
  • Basic Members
  • 765 posts
  • Other

Posted 01 January 2008 - 10:08 PM

After seeing how good "Walk Hard: The Dewey Cox Story" looked shot in HD on the Genesis and digitally projected in 2K, it makes you think that the real weak link is the fact that we still make 35mm release prints from dupes.


I saw that picture on film and it looked quite good as well. In fact, I had no idea it was a Genesis origination until the end credits. Maybe it's not as weak a link as you think...

#10 David Mullen ASC

David Mullen ASC
  • Sustaining Members
  • 19769 posts
  • Cinematographer
  • Los Angeles

Posted 01 January 2008 - 11:34 PM

I saw that picture on film and it looked quite good as well. In fact, I had no idea it was a Genesis origination until the end credits. Maybe it's not as weak a link as you think...


The last Genesis movies I saw before that were "Superbad", a reel of "Balls of Fury" (just to check out the Genesis photography), and ten minutes of "I Now Pronounce You Chuck and Larry" (what an awful movie...).

"Superbad" was a bit soft and muddy at times in the print I saw. "Balls of Fury" was OK but the skintones were a bit desaturated and metallic-looking. What little I saw of "I Now Pronounce You Chuck and Larry" looked pretty good. But they were all somewhat pastel and soft compared to the rich colors and sharpness of the 2K projection of "Walk Hard".

"Flyboys" looked good. I wonder if part of the issue is whether some theaters are getting prints off of the original digital negative?

#11 Jonathan Bowerbank

Jonathan Bowerbank
  • Basic Members
  • 2815 posts
  • 1st Assistant Camera
  • San Francisco, CA

Posted 02 January 2008 - 02:44 AM

The last Genesis movies I saw before that were "Superbad", a reel of "Balls of Fury" (just to check out the Genesis photography), and ten minutes of "I Now Pronounce You Chuck and Larry" (what an awful movie...).

"Superbad" was a bit soft and muddy at times in the print I saw. "Balls of Fury" was OK but the skintones were a bit desaturated and metallic-looking. What little I saw of "I Now Pronounce You Chuck and Larry" looked pretty good. But they were all somewhat pastel and soft compared to the rich colors and sharpness of the 2K projection of "Walk Hard".


I guess these little comedies are a perfect testing ground of sorts for the technology. Very few people go to see them for the cinematography (no offense to the DP's, they know it better than I do). :/

#12 Keith Walters

Keith Walters
  • Sustaining Members
  • 2219 posts
  • Other
  • Sydney Australia

Posted 02 January 2008 - 06:23 AM

1500 lines is 1500 lines per picture height, which is 750 line pairs per picture height. A line pair is a white and black line together. The best reference I've found for this is here, http://www.cst.fr/IM...ion_english.pdf where film was tested through the stages of production through to typical real world cinema projection, where best performance was 875 lines per picture height.

For a projector to display 1500 lines it needs 1500 pixels. You're confusing things by bringing aliasing in at this point, and by confusing frequency, which equates to line pairs, with pixels and lines, which differ from line pairs by a factor of 2: there are twice as many lines as line pairs.

Graeme

I'm somewhat confused.

Suppose I had a rectangular test card with 1,024 vertical black lines drawn across its width, each one separated from the next by an equal area of white. That would constitute a 2,048-line, or "2K", horizontal resolution test pattern, would it not?

If we set up a 2K or better HD camera (or used a high resolution film scan) so that the 2,048 lines just fitted into the width of the image capture area, then the resultant pattern of 2,048 lines should be just projectable by a "2K" projector. If you could see the projected pixels on the screen, they should follow a repeating dark-white-dark-white etc. pattern, each horizontal row consisting of 1,024 bright, and 1,024 dark, pixels. This would be an exact reproduction of the original test chart.

Is that correct?

#13 Graeme Nattress

Graeme Nattress
  • Basic Members
  • 145 posts
  • Other

Posted 02 January 2008 - 10:18 AM

If you have such a card, divided into 2048 areas with alternate areas filled black and white, then, if it fills the FOV, you have a card that contains a resolution of 2k lines, or 1k line pairs.

And yes if you have a > 2k camera, you should be able to accurately sample that image. For 2k Projection, you'd be at the mercy of whatever downsampling filter you use to create a 2k image from your >2k camera image.

#14 John Sprung

John Sprung
  • Sustaining Members
  • 4635 posts
  • Other

Posted 02 January 2008 - 02:12 PM

I'm somewhat confused.

Suppose I had a rectangular test card with 1,024 vertical black lines drawn across its width, each one separated from the next by an equal area of white. That would constitute a 2,048-line, or "2K", horizontal resolution test pattern, would it not?

If we set up a 2K or better HD camera (or used a high resolution film scan) so that the 2,048 lines just fitted into the width of the image capture area, then the resultant pattern of 2,048 lines should be just projectable by a "2K" projector. If you could see the projected pixels on the screen, they should follow a repeating dark-white-dark-white etc. pattern, each horizontal row consisting of 1,024 bright, and 1,024 dark, pixels. This would be an exact reproduction of the original test chart.

Is that correct?

If this thought experiment 2k camera is lined up with the chart in such a way that the black and white lines land exactly on columns of photosites, you get the black-white columns on the screen as you say. But suppose it gets panned over by just half a photosite pitch. Then every photosite sees equal amounts of black and white, and the resulting picture is uniform gray. If the camera is a little more than 2k, or not lined up perfectly, some areas on the screen would land exactly black-white, while others land exactly gray, and in between those areas, things would ramp between those extremes, with alternating darker and lighter pixels.

That would result in really horrible pictures, which is why real world cameras have -- and need -- an Optical Low Pass Filter, or OLPF.

Harry Nyquist and Claude Shannon figured all this stuff out a long time ago. The answers in a nutshell are that if you want to get N samples without aliasing like this, you have to filter out all detail above N/2. To do that with an optical filter, because we don't have access to anti-photons, the rolloff has to start at N/4.
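Here's a toy one-dimensional version of that thought experiment (purely an illustration, assuming NumPy): area-sample a 2,048-line chart with 2,048 photosites, once aligned with the bars and once shifted by half a photosite.

    # Toy 1-D sampler for the thought experiment above.  Illustration only.
    import numpy as np

    def sample_chart(n_lines, n_photosites, phase=0.0, oversample=64):
        """Area-sample a black/white bar chart; phase is in photosite pitches."""
        x = np.arange(n_photosites * oversample) / oversample + phase
        bars = np.floor(x * n_lines / n_photosites) % 2       # 0 = black, 1 = white
        return bars.reshape(n_photosites, oversample).mean(axis=1)

    aligned = sample_chart(2048, 2048, phase=0.0)
    shifted = sample_chart(2048, 2048, phase=0.5)
    print(aligned[:4])   # [0. 1. 0. 1.] -- the bars survive
    print(shifted[:4])   # [0.5 0.5 0.5 0.5] -- uniform gray

The aligned grid returns the bars; the half-photosite shift returns a flat 0.5 everywhere, which is exactly the uniform gray described above.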




-- J.S.

#15 Graeme Nattress

Graeme Nattress
  • Basic Members
  • 145 posts
  • Other

Posted 02 January 2008 - 03:04 PM

If you have 2048 lines (samples), that means you can store any frequency up to 2048/2, i.e. 1024 line pairs. 1024 line pairs is of course 2048 lines. Of course, any frequency above 1024 line pairs must be removed to avoid aliasing, and indeed, optical filters don't roll off high frequencies very fast, and hence must be brought in early. As you say, if you could have optical filters with negative taps we'd get a faster rolloff, but we may also get ringing and halos too....

#16 John Sprung

John Sprung
  • Sustaining Members
  • 4635 posts
  • Other

Posted 02 January 2008 - 06:48 PM

Graeme, would it be possible for you guys to put together a test/demo only camera without the OLPF? That would make it possible to demonstrate visually what all this sampling theory Nyquist/Shannon stuff really means. Bill Schreiber at MIT did something like that 20+ years ago, using a shot of a dollar bill to demonstrate what interlace looks like without filtering. There's nothing like an eye-hammering demo to make the point. ;-)

Come to think of it, would it be possible to make cameras with interchangeable OLPF's, so you could have a variety of them for people to choose from? It might work out somewhat like the subtle differences in look between film stocks.




-- J.S.

#17 Graeme Nattress

Graeme Nattress
  • Basic Members
  • 145 posts
  • Other

Posted 02 January 2008 - 07:35 PM

I don't think interchangeable OLPFs would be good, as you'd risk getting dust on the sensor. And you'd have to have a way of recording in the metadata which filter was used. It would add a large degree of complexity. Obviously it can be done, but it could be very tricky to do right. Also, different OLPFs would be different thicknesses, and that would throw your focus off unless you could compensate somehow.... It's not trivial.

As for a demo - how about I walk around NAB wearing a zone plate T-Shirt and people can point cameras at me :-)

#18 Keith Walters

Keith Walters
  • Sustaining Members
  • 2219 posts
  • Other
  • Sydney Australia

Posted 03 January 2008 - 03:33 AM

If you have 2048 lines (samples), that means you can store any frequency up to 2048/2, i.e. 1024 line pairs. 1024 line pairs is of course 2048 lines.

Are you sure about that?

Imagine a second chart, almost the same as the first, except it has 100 fewer lines; i.e. there are only 1948 lines (or 974 line pairs).

At a casual glance the two charts look identical. If I shot an image of the new chart with suitably fine-grained film and projected it with a film projector, there would with any luck be 1948 lines visible on the screen. And 2048 with the earlier chart of course.

When projecting the image of the 2048 line chart with a 2K projector, as mentioned earlier, the light modulating elements of the projector would have to be activated in a light-dark-light-dark-light and so on sequence. That would indeed produce a pattern of 2048 vertical lines on the screen.

But what sequence of light and dark pixels on a fixed-position light-emitting matrix of 2048 elements could produce 1948 vertical lines across the screen?

In some places the white lines would approximately line up with the light-generating elements, but in others they would want to straddle two elements. There is no way you can make a pixel bright on one side and dark on the other! What you would tend to get is adjacent pixels that both want to be half-and-half, which will come out as mid-grey. The result would be a series of bands alternating between black and white lines at the native pixel rate of the projector and mid-grey - the classic "aliasing".

This is the fundamental difference between "lines" projected from film, and "lines" projected by a video projector. People freely interchange the terms when they are not interchangeable.
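To see those bands, feed the same kind of toy sampler as in the earlier sketch 1948 lines on 2048 photosites (again just an illustration, assuming NumPy):

    # 1948 lines area-sampled by 2048 photosites: a beat pattern appears.
    import numpy as np

    def sample_chart(n_lines, n_photosites, oversample=64):
        x = np.arange(n_photosites * oversample) / oversample
        bars = np.floor(x * n_lines / n_photosites) % 2
        return bars.reshape(n_photosites, oversample).mean(axis=1)

    out = sample_chart(1948, 2048)
    print(out[0:6].round(2))    # near-full contrast at the start
    print(out[8:13].round(2))   # hovering around 0.5 (grey) a few photosites later

The contrast drifts from nearly full to mid-grey and back roughly every twenty photosites, so you get on the order of a hundred grey bands across the width - exactly the kind of banding described above.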

#19 Graeme Nattress

Graeme Nattress
  • Basic Members
  • 145 posts
  • Other

Posted 03 January 2008 - 09:30 AM

Lines are like square waves: they have frequency content extending to infinity, and therefore we've just violated the sampling theorem. If instead we'd used a sinusoidal pattern we'd be OK. But video projectors lack a reconstruction filter; they just show the samples as-is, rather than doing the reconstruction you'd get in a CD player on audio waveforms, for instance.

What you have pointed out is the inability of any sampled system to properly sample an image that contains frequencies that are too high for it.

Measurement of lines is completely interchangeable; you've just changed the rules midway. In your thought experiment you've omitted the optical low pass filter, which would send your fine pattern of lines near the maximum limit to a uniform grey. That's what we see (or should see) on any resolution chart approaching maximum resolution. Also, the MTF of the system will be reducing contrast significantly at that point anyway.

You can think of film as either having its own built-in OLPF due to the random grain structure, or the random grain structure breaking up any aliasing into a random pattern that you cannot detect.

Sharp-edged resolution charts are used because sinusoids are harder to print nicely, but if you want truer measures, sinusoids are more useful. Also, zone plates are more useful than trumpets, as they make the aliasing pop out as extra circles in the image that were not there in the target.
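A zone plate is also easy to generate synthetically: its frequency rises with radius, so any aliasing in a sampling system shows up as spurious extra rings. A rough sketch (purely an illustration, assuming NumPy; write the array out with any image tool to look at it):

    # Synthetic zone plate whose frequency just reaches Nyquist at the frame edge.
    import numpy as np

    def zone_plate(size=512):
        """Circular chirp: instantaneous frequency grows linearly with radius."""
        y, x = np.mgrid[-1.0:1.0:size * 1j, -1.0:1.0:size * 1j]
        phase = (np.pi * size / 4.0) * (x**2 + y**2)
        return 0.5 + 0.5 * np.cos(phase)   # values in [0, 1]

    plate = zone_plate(512)   # the corners go past Nyquist, so rings alias there first

Resample that image badly and the extra circles appear immediately.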

#20 John Sprung

John Sprung
  • Sustaining Members
  • 4635 posts
  • Other

Posted 03 January 2008 - 01:06 PM

As for a demo - how about I walk around NAB wearing a zone plate T-Shirt and people can point cameras at me :-)

I found a shirt once that looked a whole lot like multiburst -- and it wasn't even intended to be a joke. ;-)




-- J.S.


Ritter Battery

Metropolis Post

Rig Wheels Passport

Tai Audio

The Slider

Media Blackout - Custom Cables and AKS

Willys Widgets

FJS International, LLC

Aerial Filmworks

Visual Products

Paralinx LLC

CineLab

Wooden Camera

Gamma Ray Digital Inc

Technodolly

CineTape

Broadcast Solutions Inc

Glidecam

Opal

Abel Cine

rebotnix Technologies

Ritter Battery

Rig Wheels Passport

rebotnix Technologies

CineLab

Tai Audio

Broadcast Solutions Inc

Willys Widgets

FJS International, LLC

Abel Cine

Technodolly

Visual Products

Media Blackout - Custom Cables and AKS

Gamma Ray Digital Inc

The Slider

CineTape

Wooden Camera

Glidecam

Aerial Filmworks

Metropolis Post

Paralinx LLC

Opal