No Bayer Pattern CMOS systems



#1 Tyler Purcell

  • Sustaining Members
  • 2368 posts
  • Other
  • Los Angeles

Posted 21 June 2016 - 12:45 PM

I was having a heated debate with a colleague a few months ago about why the industry doesn't use three CMOS imagers (one for each color) and a beam splitter, the way 3-CCD cameras do. That would solve many of the issues CMOS cameras have and deliver far greater color detail.

My point was that beam splitters reduce the amount of light hitting the imagers, and that adding two more imagers would be cost-prohibitive, make the camera larger, and require a lot more processing power (expense/weight/heat dissipation).

His point was that nobody cares about cost and size: the camera would still be smaller than an average 35mm film camera, and electronics pricing keeps dropping, so the added parts shouldn't cost that much. Plus, it would be true RGB instead of the faux color we deal with today.

I mean, a few cameras have been made this way, but nothing in 4K currently exists.

What bothers me the most about this concept/conversation is that people simply don't care. Nobody does anything about it because what we have now is "good enough". People are focused on higher resolution, not necessarily on solving some of these technical issues.

Anyway, I'd like to know what people think about the idea of an S35-sized, 3-CMOS imager camera, and whether it would be worth developing at this juncture.




#2 dan kessler

  • Basic Members
  • 158 posts
  • Other

Posted 21 June 2016 - 01:11 PM

Related to your question, I've often wondered why Foveon chips don't get used more.  They have a true, layered RGB structure, similar to film itself, with no Bayer artifacts at all and no need for beam splitters.  They're in Sigma Merrill DSLRs, and produce stunning images.  Maybe there are good reasons why they haven't been more widely implemented, but I'd like to know what they are.



#3 David Mullen ASC

  • Sustaining Members
  • 18789 posts
  • Cinematographer
  • Los Angeles

Posted 21 June 2016 - 01:41 PM

Just seems that not enough work has been done with the Foveon concept yet to match the quality of Bayer CFA sensors, which keep getting improved.  Every now and then you hear a rumor about future sensor designs that are more like Foveon, but they haven't really shown up yet for HD, 2K, and 4K video.

 

Some people feel that silicon itself is a poor filter for color separation, and thus the color signal from a Foveon is very weak in saturation, which then has to be boosted.
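To make that "boosting" step concrete, here's a minimal sketch of what the correction matrix does to noise. The layer responses below are invented for illustration, not measured Foveon data:

```python
import numpy as np

# Hypothetical layer responses: each silicon layer sees a broad mix of
# R, G and B because absorption depth separates colors only weakly.
# Rows = sensor layers (top/middle/bottom), columns = true R, G, B.
M = np.array([[0.2, 0.3, 0.5],   # top layer, blue-leaning
              [0.3, 0.4, 0.3],   # middle layer, green-leaning
              [0.5, 0.3, 0.2]])  # bottom layer, red-leaning

# Recovering saturated RGB means applying the inverse matrix,
# i.e. "boosting" the weak color differences between the layers.
C = np.linalg.inv(M)

# Independent sensor noise with std 1 per layer comes out of the matrix
# with std sqrt(sum of squared row coefficients) per output channel.
print(np.sqrt((C ** 2).sum(axis=1)))  # values well above 1 = amplified noise
```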



#4 aapo lettinen

  • Basic Members
  • 703 posts
  • Other
  • Finland

Posted 21 June 2016 - 01:47 PM

The probable reason beam splitters are not used more is that they limit the optics you can use with the camera, and no one wants a camera that limits the lens choices to a few custom-made options when making a movie.

Even as electronics pricing goes down and down, optics pricing stays more or less stable, and may even go up every now and then, especially for special optics made for a 3-MOS camera in a format larger than 2/3".

 

So I'd say the reason is the lenses, not the 3-imager camera technology itself. It is also much cheaper to make a Bayer-sensor camera than a 3-CMOS camera, but I think that matters less in cine use.



#5 David Mullen ASC

  • Sustaining Members
  • 18789 posts
  • Cinematographer
  • Los Angeles

Posted 21 June 2016 - 01:59 PM

There are still 2/3" and smaller 3-sensor prism-block cameras being made. I think it's mainly that no one wants to make something where the sensor width jumps from 10mm to 24mm; that would require a larger prism block, and then special lenses to deal with the resulting flange depth and the way each color focuses on its own sensor.

 

3-strip Technicolor used a 2-way prism block, and that alone limited the focal lengths used; I think 35mm was the shortest. They tried wide-angle adaptors to get wider, but the quality wasn't there.



#6 David Hessel

  • Basic Members
  • 68 posts
  • Other

Posted 21 June 2016 - 01:59 PM

The Sony F23 had a beam splitter and three chips, but for the S35 F35, Sony went with an RGB-striped sensor instead. I have heard the reason was that there was not enough room to fit the prism and three S35 CCDs. To my knowledge there has never been a camera with sensors larger than 2/3" behind a beam splitter, but I could very well be wrong.



#7 David Mullen ASC

  • Sustaining Members
  • 18789 posts
  • Cinematographer
  • Los Angeles

Posted 21 June 2016 - 02:00 PM

There's a 2.5X crop factor difference between 2/3" and 35mm.
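Quick sanity check on that figure, using nominal active-area widths (approximate numbers; exact dimensions vary by camera):

```python
# Nominal sensor widths in mm (approximate published figures).
width_two_thirds_inch = 9.6   # typical 2/3" broadcast sensor
width_super35 = 24.9          # typical Super 35 sensor

print(width_super35 / width_two_thirds_inch)  # ~2.6, usually quoted as ~2.5x
```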



#8 Tyler Purcell

  • Sustaining Members
  • 2368 posts
  • Other
  • Los Angeles

Posted 21 June 2016 - 03:01 PM

I didn't know about the beam splitter issue and how it limits the optics. Is there a diagram somewhere that explains this phenomenon?

#9 Phil Rhodes

  • Sustaining Members
  • 11232 posts
  • Other

Posted 21 June 2016 - 03:50 PM

The broad strokes of it are fairly straightforward. If you want an 18mm lens, either something has to be 18mm from the film plane, or you need correcting optics (a retrofocal design). You can't put something 18mm from the film plane if there's a prism block in that location.

 

This is part of the purpose of the relay group on the back of a broadcast lens where the back focus adjustment is. In extant 3-chip designs, there are also focus distance tweaks for each of the primaries.
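To put rough numbers on the clearance problem, here's a sketch (the flange depths are the standard published figures; treating "one focal length of clearance" as the rule is a simplification that ignores where a real lens's rear node actually sits):

```python
# Crude rule of thumb: a non-retrofocus design wants its rear optics
# roughly one focal length from the image plane. If the flange depth
# plus any prism glass is deeper than that, the lens must be retrofocus.
FLANGE_MM = {"PL": 52.0, "B4": 48.0}  # published flange focal distances

def needs_retrofocus(focal_mm, mount, prism_mm=0.0):
    return focal_mm < FLANGE_MM[mount] + prism_mm

print(needs_retrofocus(18, "PL"))               # True: no room at 18mm
print(needs_retrofocus(85, "PL"))               # False: plenty of clearance
print(needs_retrofocus(85, "PL", prism_mm=40))  # True once a big prism intrudes
```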

 

The problem is illustrated by attempts to put a broadcast lens on a single chip camera, which I've done: below about f/4, it turns into a glowing, halating mess.

 

My feeling is that modern high resolution sensors have sufficient resolution to oversample their way out of any serious problems with colour precision, although naturally a true cosited RGB design would provide better colour resolution, greater sensitivity and dynamic range, and lower noise. It's still only better if the downsides aren't worse, though, if you see what I mean!

 

P



#10 Tyler Purcell

  • Sustaining Members
  • 2368 posts
  • Other
  • Los Angeles

Posted 21 June 2016 - 05:17 PM

But here is what I don't get... why can't you use an optical element, like what a Speed Booster does?

#11 David Hessel

  • Basic Members
  • 68 posts
  • Other

Posted 21 June 2016 - 05:22 PM

It might not be that it can't be done, but that doing so adds a considerable amount of size and hasn't been deemed worth it.



#12 Phil Rhodes

  • Sustaining Members
  • 11232 posts
  • Other

Posted 21 June 2016 - 05:42 PM

You can correct the other way; there are adaptors for putting B4 lenses on single-chip cameras which correct for the RGB focus issue and often enlarge the image a bit (so that a B4 lens with its extender engaged will cover S35, albeit at significant stop loss).
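That stop loss follows directly from spreading the same light over a bigger image circle; a back-of-the-envelope sketch (ignoring the adaptor's own transmission losses):

```python
import math

# Enlarging the image by a linear factor m spreads the light over m^2
# the area, so illuminance drops by m^2, i.e. 2*log2(m) stops.
def stop_loss(m):
    return 2 * math.log2(m)

m = 24.9 / 9.6  # blowing a 2/3" image up to roughly Super 35 width
print(f"{stop_loss(m):.1f} stops")  # ~2.7 stops before any glass losses
```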

 

Going the other way may be more complex. The entire relay group on the back of a B4 lens does it. That is presumably what you'd have to add and it is not straightforward.

 

I think the question is really what issues it would solve. The biggest sensors now have enough resolution that the loss of resolution to Bayer artifacting is not seriously objectionable.

 

P



#13 Tyler Purcell

  • Sustaining Members
  • 2368 posts
  • Other
  • Los Angeles

Posted 21 June 2016 - 05:46 PM

Riddle me this: why can't you make an imager where each pixel is red, blue, and green? Simply divide it just like LED displays do, where each pixel is divided into thirds like a pie.

#14 David Hessel

  • Basic Members
  • 68 posts
  • Other

Posted 21 June 2016 - 05:58 PM

That is what the C100 and C300 already did to a degree: they used the traditional Bayer pattern, but rather than debayering to 4K and downscaling the image, they used the four RGGB pixels of each quad to form one final pixel for HD output. The F35 did this using columns of colored pixels, splicing the columns together into three 1920 x 2160 images, downscaling them to 1920 x 1080, and then outputting a true RGB 4:4:4 HD image. It's one of the only cameras I am aware of that samples all three color channels at the Nyquist limit: two pixels for every final pixel in each channel.

 

A sensel in a sensor can only measure a single brightness value, so there is no way to make it measure three separate values; you need three sensels to do that.
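To make the C100/C300-style trick concrete, here's a minimal sketch of collapsing each RGGB quad into one full-RGB output pixel (synthetic data, not any manufacturer's actual pipeline):

```python
import numpy as np

# Synthetic Bayer mosaic in RGGB layout: one brightness value per sensel.
rng = np.random.default_rng(0)
raw = rng.random((8, 8))

# Slice the four color planes out of each 2x2 RGGB quad.
r  = raw[0::2, 0::2]   # top-left of each quad
g1 = raw[0::2, 1::2]   # top-right
g2 = raw[1::2, 0::2]   # bottom-left
b  = raw[1::2, 1::2]   # bottom-right

# One true-RGB pixel per quad: no interpolation, half the linear
# resolution (a 4K mosaic becomes roughly a full-RGB HD image).
rgb = np.stack([r, (g1 + g2) / 2, b], axis=-1)
print(rgb.shape)  # (4, 4, 3)
```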


Edited by David Hessel, 21 June 2016 - 06:03 PM.


#15 Tyler Purcell

  • Sustaining Members
  • 2368 posts
  • Other
  • Los Angeles

Posted 21 June 2016 - 08:14 PM

Aye, that's what I figured... :(

#16 aapo lettinen

  • Basic Members
  • 703 posts
  • Other
  • Finland

Posted 22 June 2016 - 06:01 AM

It's like with film: the silver halide crystals can only 'see' in black and white, and you need color filters to differentiate the colors from each other.



#17 Keith Walters

  • Sustaining Members
  • 2131 posts
  • Other
  • Sydney Australia

Posted 22 June 2016 - 08:34 PM

Riddle me this: why can't you make an imager where each pixel is red, blue, and green? Simply divide it just like LED displays do, where each pixel is divided into thirds like a pie.

You mean like the Panavision Genesis and Sony F35 have been doing for over 10 years?  :rolleyes:

 

There are a number of issues involved.

Dichroic prisms with three separate sensors are far and away the best way to produce colour images, period, regardless of what you've read on the Internet. Apart from the better colour filtering, they're much less affected by IR contamination.

People keep talking as though single-sensor cameras using filter arrays are some fantastic new idea; actually, they've been around since the 1950s. It's the availability of massive portable computing power that made them competitive with 3-sensor designs.

The main impetus behind producing single-sensor cameras for cinematography work had a lot more to do with maintaining compatibility with existing lenses designed for 35mm film cameras than with image quality.

You certainly could make 3-chip "35mm" size sensor cameras, but you would then have to make new lenses to go with them, because as Phil mentioned, in most cases the rear element of existing lenses would want to sit in the middle of the prism block. 

This is exactly the problem faced by Panavision's "Panavized" Cine Alta cameras; your range of rentable lenses dropped from thousands to about six.

 

The other major advantage of 3-chip cameras is the lack of "latency"; that is, the "live" pictures coming out of the camera are delayed no more than about 30 milliseconds. Because of the amount of processing required, the latency of most HD Bayer cameras varies between about a second for an Alexa studio camera and a couple of days for early Red cameras  :P

 

The main reason the PV Genesis and Sony F35 use RGB colour striping (like an LCD TV screen) is simply that they were principally designed as TV studio cameras (hence 1920 x 1080 sensors), and straight RGB requires minimal processing time for full-resolution HD.



#18 Tyler Purcell

  • Sustaining Members
  • 2368 posts
  • Other
  • Los Angeles

Posted 22 June 2016 - 08:41 PM

Yeah, I know there have been a bunch of "attempts" to make this happen.

What kills me is that you COULD use optics to fix the flange distance issue.

#19 Keith Walters

  • Sustaining Members
  • 2131 posts
  • Other
  • Sydney Australia

Posted 22 June 2016 - 10:19 PM

Yeah, I know there have been a bunch of "attempts" to make this happen.

What kills me is that you COULD use optics to fix the flange distance issue.

You appear to be commenting well outside your experience.

Panavision made several abortive attempts at doing just that in the 80s and 90s, to allow 35mm film lenses to work with TV cameras. The results were universally horrible.

Panavision make some of the best lenses in the world; if they couldn't make it work, I seriously doubt anybody else could.

If you have the resources and technical skill to make it happen, by all means produce the goods; but please don't just come on here stating "it must be possible." Or at least describe the technical problems, and then suggest how they might be overcome. Otherwise, you're just writing science fiction.



#20 Tyler Purcell

  • Sustaining Members
  • 2368 posts
  • Other
  • Los Angeles

Posted 22 June 2016 - 10:32 PM

I don't work at Panavision, so how could I know about anything they've done? 

 

Furthermore, if, as you say, they did it and the results were horrible, that means it's possible.

 

I'm not an optical expert, nor do I claim to be. 

 

The technical problem is that the imager in this scenario is too far away from the back element of the lens.

 

So all you need is an adjustable optic that re-focuses the image, sitting between the lens and the camera body. It would clearly consist of a few elements, but I can't imagine that being a problem. Good-quality results are a totally different story; I would imagine the quality would be LESS than not using one of these gadgets.

