
RAW... What is it really?


27 replies to this topic

#1 Ryan Patrick OHara

  • Basic Members
  • 83 posts
  • Cinematographer
  • Los Angeles

Posted 02 August 2009 - 06:16 PM

Hello all.

I know this is not a film stock or processing question... but it's the digital equivalent.

I know what RAW shooting does as well as the next guy, but I am embarking on writing a very in-depth article on what it REALLY is, down to the technical mumbo jumbo. I am writing this article in an attempt to completely understand RAW myself, and to help educate others who don't quite understand what it does for the cinematographer and how it's not necessarily a 'do anything' process.

I am also exploring different types of RAW shooting, as it differs from camera to camera... I'll mainly be focusing on the Genesis and RED, although I'll be exploring still photography quite a bit.

I think many people in the indie world have the wrong impression of what RAW is, and almost rely on it as an excuse or crutch to cut corners, rather than treating it like its own digital 'stock'. When somebody tells me to light 'flat' and 'get the lighting in post'... clearly many are misinformed.

I am asking anyone here to point me to articles which may explain the advantages and disadvantages as well as technical information of how RAW is pulled off a sensor.

All information is helpful, no matter how basic or advanced.

I thank you all in advance.

Best,

Ryan

Edited by Ryan Patrick OHara, 02 August 2009 - 06:18 PM.


#2 David Mullen ASC

  • Sustaining Members
  • 19769 posts
  • Cinematographer
  • Los Angeles

Posted 02 August 2009 - 06:37 PM

http://en.wikipedia....aw_image_format

Generally it refers to the monochrome image off a Bayer-filtered sensor, pre-conversion to RGB.
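
To make that concrete, here's a minimal sketch in Python/NumPy of what a Bayer sensor records: one brightness value per photosite, whose colour meaning depends on its position in the mosaic. (Illustrative only; an RGGB layout is assumed, and the function names are my own.)

import numpy as np

# One number per photosite; the colour it represents depends on where
# it sits in the (assumed RGGB) mosaic:
#   R G R G ...
#   G B G B ...
def bayer_channel(row, col):
    """Which colour filter sits over photosite (row, col)."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'

# Simulate mosaicing a full-RGB scene down to "raw" sensor data.
rgb = np.random.rand(4, 6, 3)   # stand-in for the scene
raw = np.empty((4, 6))          # one value per photosite
for r in range(4):
    for c in range(6):
        raw[r, c] = rgb[r, c, 'RGB'.index(bayer_channel(r, c))]

print(raw.shape)   # (4, 6): monochrome until it's demosaiced to RGB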

#3 Phil Rhodes

  • Sustaining Members
  • 11944 posts
  • Other

Posted 02 August 2009 - 07:41 PM

You have to be a bit careful, because while "raw" customarily referred to unprocessed image data from (usually) Bayer filter array sensors, the guys at Red have cunningly redefined it to mean "Bayer data that is extremely heavily compressed". This is, at the very least, a highly questionable use of terminology. Also, not only does the Genesis not give you raw output, it would be in a rather different format if it did, since the Genesis is not strictly a Bayer camera. It does something vaguely similar, but processes it down to RGB prior to output, so it's irrelevant to anyone who isn't the designer.

The only camera currently offering what I'd call raw is the Arri D21.

P

#4 Ryan Patrick OHara

  • Basic Members
  • 83 posts
  • Cinematographer
  • Los Angeles

Posted 03 August 2009 - 07:59 AM

You have to be a bit careful, because while "raw" customarily referred to unprocessed image data from (usually) Bayer filter array sensors, the guys at Red have cunningly redefined it to mean "Bayer data that is extremely heavily compressed". This is, at the very least, a highly questionable use of terminology. Also, not only does the Genesis not give you raw output, it would be in a rather different format if it did, since the Genesis is not strictly a Bayer camera. It does something vaguely similar, but processes it down to RGB prior to output, so it's irrelevant to anyone who isn't the designer.

The only camera currently offering what I'd call raw is the Arri D21.

P


Yes. I very much agree with you!

This is one of the main reasons I'm focusing on the issue... thanks for the help. I was pretty sure the Genesis gave you something like a RAW format... I've spoken with one of the lead visual effects people on 'Superman Returns', when they did a presentation at my college years back. They showed us clips of the 'un-processed' image from the Genesis, and it looked like that washed-out, desaturated RAW video.

This is why I'm jumping in!

Best,

-Ryan

Edited by Ryan Patrick OHara, 03 August 2009 - 08:01 AM.


#5 Byron Karl

  • Basic Members
  • 65 posts
  • Other

Posted 03 August 2009 - 11:42 AM

Question for fellow skeptics: how dissimilar is RED's full debayering from software "upconversion"? It seems like both are filling in the blanks between pixels.

I obviously have an opinion, but I don't really know for sure.

#6 Michael Most

  • Basic Members
  • 765 posts
  • Other

Posted 03 August 2009 - 11:51 AM

The only camera currently offering what I'd call raw is the Arri D21.


Not the only one. The Phantom records uncompressed RAW, and the Silicon Imaging 2K allows uncompressed recording as well. And although it's no longer built or offered, so did the Dalsa Origin.

In some ways, the Viper would also qualify, even though it's a 3-sensor RGB device, in that the signals are not processed in camera in Filmstream mode.

#7 Phil Rhodes

  • Sustaining Members
  • 11944 posts
  • Other

Posted 03 August 2009 - 12:43 PM

Ah, I didn't know the SI2K did uncompressed; I thought it was all CineForm.

"Raw" is, as far as I know, derived from the DSLR term, so to me it implies Bayer data, and I'd have trouble applying it to the Viper. You might just as well apply it to an F900 with a particularly pumped-up gamma curve in it, for all the practical difference it makes. There's a difference between "raw" and "not intended for unmodified viewing".

Also:

How dissimilar is RED's full debayering from software "upconversion"?


Not very.

It seems like both are filling in the blanks between pixels.


Yes.

The algorithms are rather different, and there are advantages and disadvantages which affect the mathematical precision of the result in each case, but I suspect the question you're really asking is "are they making things up in order to be able to call it 4K?", and the answer is yes, they are.

P

#8 Freya Black

  • Basic Members
  • 4161 posts
  • Other
  • Went over the edge... Central Europe

Posted 03 August 2009 - 01:35 PM

Question for fellow skeptics: how dissimilar is RED's full debayering from software "upconversion"? It seems like both are filling in the blanks between pixels.


I suspect it's one of those questions that's a bit like "how long is a piece of string"!

I'm not much of an expert on this, but I get the impression the major difference is that software upconversion starts with the image already taken by the camera. It tries to do intelligent things based on this data to create an image that appears to be higher resolution.

Things like the Red and the HVX presumably work with data directly off the camera head: the CMOS or CCD. As such there may be more data to work from at that stage. For example, the data at the head of the HVX might be 4:4:4, whereas the information it has to deliver to the codec is only 4:2:2. It also has 3 different chips it can talk to in order to make a better guess at the values. As such it kind of has more information to guess from, so it can make a more intelligent guess at what the missing resolution might be. I'm guessing Red must do something like this too, except it only has a single CMOS chip and is supposed to deliver 4:4:4 data, so I'm not sure what it gets up to.
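
To picture what that head-to-codec step gives up, here's a toy 4:4:4 vs 4:2:2 sketch in Python/NumPy (my own illustration, not any camera's actual pipeline): the codec keeps luma at full resolution but only every second chroma sample across a line.

import numpy as np

# Toy 4:4:4 -> 4:2:2 subsampling: keep luma (Y) at full horizontal
# resolution, keep chroma (Cb, Cr) only at every 2nd column.
ycbcr = np.random.rand(8, 8, 3)   # stand-in for a 4:4:4 image

y      = ycbcr[:, :, 0]           # full resolution
cb_422 = ycbcr[:, ::2, 1]         # half horizontal chroma resolution
cr_422 = ycbcr[:, ::2, 2]

# On playback the decoder has to guess the discarded chroma back,
# e.g. by repeating each sample; the original information is gone.
cb_recon = np.repeat(cb_422, 2, axis=1)

print(y.shape, cb_422.shape, cb_recon.shape)   # (8, 8) (8, 4) (8, 8)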

love

Freya

Edited by Freya Black, 03 August 2009 - 01:36 PM.


#9 John Sprung

  • Sustaining Members
  • 4635 posts
  • Other

Posted 03 August 2009 - 02:55 PM

How dissimilar is RED's full debayering from software "upconversion"? It seems like both are filling in the blanks between pixels.


Yes, both are filling in some blanks. The blanks are a little different in the two cases.

Suppose what we want is full 4K RGB 4:4:4. That means that for each of four thousand places across our picture, we have a complete set of three numbers representing Red, Green, and Blue.

If we start with 2K full RGB, that's two thousand complete sets of three colors across, so we have to create new in-between sets of three numbers for each new pixel we want. (Vertically we also have to create complete new in-between rows).

If we start with "4K" Bayer, what we have is four thousand places across our picture, but we have only one color for each place, not three. (One row will alternate Red-Green, the next alternates Green-Blue). To get from one color to three for each position, the math has to supply the other two colors.

So, both are in-betweening problems, but of different kinds -- apples and oranges.
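
To make the Bayer case concrete, here's a minimal bilinear demosaic sketch in Python/NumPy (a deliberately crude illustration; real cameras and software use far more sophisticated algorithms): each missing colour at a photosite is estimated as the average of the nearest photosites that did sample it.

import numpy as np

def demosaic_bilinear(raw):
    """Toy bilinear demosaic of an RGGB Bayer frame: (H, W) -> (H, W, 3)."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    # Which photosites sampled which colour (RGGB layout assumed).
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True                         # red
    masks[0::2, 1::2, 1] = masks[1::2, 0::2, 1] = True  # green
    masks[1::2, 1::2, 2] = True                         # blue
    for ch in range(3):
        vals = np.where(masks[:, :, ch], raw, 0.0)
        cnt = masks[:, :, ch].astype(float)
        # 3x3 normalised box filter: sum of sampled neighbours / count.
        pv, pc = np.pad(vals, 1), np.pad(cnt, 1)
        num = sum(pv[i:i+h, j:j+w] for i in range(3) for j in range(3))
        den = sum(pc[i:i+h, j:j+w] for i in range(3) for j in range(3))
        rgb[:, :, ch] = np.where(masks[:, :, ch], raw,
                                 num / np.maximum(den, 1.0))
    return rgb

raw = np.random.rand(4, 4)            # pretend Bayer data
print(demosaic_bilinear(raw).shape)   # (4, 4, 3): two of every three
                                      # output numbers are guesses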




-- J.S.

#10 Ryan Patrick OHara

  • Basic Members
  • 83 posts
  • Cinematographer
  • Los Angeles

Posted 03 August 2009 - 02:56 PM

I suspect it's one of those questions that's a bit like "how long is a piece of string"!

I'm not much of an expert on this, but I get the impression the major difference is that software upconversion starts with the image already taken by the camera. It tries to do intelligent things based on this data to create an image that appears to be higher resolution.

Things like the Red and the HVX presumably work with data directly off the camera head: the CMOS or CCD. As such there may be more data to work from at that stage. For example, the data at the head of the HVX might be 4:4:4, whereas the information it has to deliver to the codec is only 4:2:2. It also has 3 different chips it can talk to in order to make a better guess at the values. As such it kind of has more information to guess from, so it can make a more intelligent guess at what the missing resolution might be. I'm guessing Red must do something like this too, except it only has a single CMOS chip and is supposed to deliver 4:4:4 data, so I'm not sure what it gets up to.

love

Freya



I really dig these comments.

What RAW is... and what defines it... are things I'm also interested in. It differs with every camera and is rarely truly RAW. Which is basically saying it's just a certain type of light compression... right?

As for the HVX and DVX... there used to be someone who would modify the DVX100 with a product called 'Andromeda'. He would tap into the signal from the CCDs before it went to compression and tape, and ended up getting a massive gain in resolution, 4:4:4 color, and higher dynamic range. The data was transferred through an Ethernet cable to a computer.

Does that count as RAW? It serves many of the same purposes.

This is just an example of how the future, in my opinion, is moving toward image capture without the 'standard' camera compression or 'processing' we have today. The future is either capturing RAW and applying the processing later, or super computer/cameras which will offer the same level of manipulation, determined ahead of time, letting the DP have digital emulsions to his/her liking. Basically, scene file settings with much greater range and more customizable choices. We will more or less learn to light by digital emulsion rather than by camera model.

But does that count as RAW as well? For instance, if a camera were capable of RAW like the RED, but the 'digital processing' we currently add in post could be applied, with the same degree of manipulation (but predetermined), on set, is that RAW?

RAW has to be processed sometime... so eventually, when a DP can do pre-production camera tests with RAW video, find the gamma curve and additional digital processing he/she wants, and then set them in a camera which can successfully apply those manipulations straight to the footage, then that is what we will do, right? Is that RAW when it's done in camera?

If not, then RAW = doing your look in post, versus picking a medium (a digital emulsion), knowing how you chose it to reproduce the image, and then lighting for it.

If my hypothetical is correct... if future camera technology allows a cinematographer to set almost limitless manipulations to get his/her look in camera, by setting the same manipulations you would otherwise do in post anyhow, then I'm interested in what the future of RAW holds. (Unless camera manufacturers try to hold down spending and simply make cameras that only shoot RAW, instead of pioneering great in-camera manipulation capability from pre-production LUTs done on computers.)

Of course, there are always instances where obscure reasoning and special circumstances call for unusual tactics... therefore I am interested in continuing to explore what uses and exclusive advantages RAW can hold, such as whether it helps VFX shots, as it may. I don't know. If it does, then maybe the future of RAW is reserved for special shots, much like the film industry shoots SFX or VFX shots on VistaVision or other large formats.

RAW is a strange and much too ambiguous creature, at least for me, a young cinematographer trying to secure my future.

You can see how scatterbrained and excited I am about this. Thanks for your help so far. I hope I can answer my questions as fast as I'm finding them... because they are really stacking up.

I hope I am being clear. :)

-Ryan

#11 Robert Houllahan

  • Sustaining Members
  • 1585 posts
  • Industry Rep
  • Providence R.I.

Posted 03 August 2009 - 03:26 PM

"I've spoken with one of the lead visual effects people on 'Superman Returns', when they did a presentation at my college years back. They showed us clips of the 'un-processed' image from the genesis, and it looked like that washed out, desaturated RAW video."


I think in this statement you're confusing ungraded (probably log) footage with RAW; it's not RAW in the DSLR or D-21 sense. 10-bit log 2K or 4K film scans look similar before being color-timed (graded), but neither the film scans nor the Genesis output need to be de-Bayered: the Genesis already has been, and RGB film scans never need to be.

-Rob-

#12 Phil Rhodes

  • Sustaining Members
  • 11944 posts
  • Other

Posted 03 August 2009 - 04:15 PM

I think there are two senses of "raw" here.

There's the simple adjective, which describes something unprocessed or unmodified from an original state, and which can be, and routinely is, used to describe original data from any camera system. This use, in the context of digital cinematography, is highly ambiguous, and I avoid it.

Then there's raw, often (for some reason) given in uppercase as RAW, used to describe uncompressed camera data, typically from Bayer or similar colour-subsampled sensors, which requires advanced processing before it is even viewable. I think there's probably broad technical agreement on this use.

Either way, I'd prefer to see this terminology formalised. Describing the Viper as "raw" is arguably accurate English, but it has no widely agreed meaning in a technical sense. Something of that ilk would be better described as "unprocessed", but since there's only really one camera that does it that way, the onus is on the individual to be independently informed of the capabilities of the various systems.

This stuff is important because we should use terms which tell us useful things about how devices work. We should be able to understand that a Bayer-sensor camera will require nontrivial and potentially time-consuming postproduction work to look its best. We should be able to understand that certain other cameras deliver RGB output directly. Muddying the water by claiming "raw" for things that aren't (Red, Viper) just makes us use more words than should be necessary to describe a situation, and when it's done for marketing reasons, to make things seem to be something they're not, or capable of more than they are, it wastes everyone's time.

P

#13 John Sprung

  • Sustaining Members
  • 4635 posts
  • Other

Posted 03 August 2009 - 06:07 PM

What RAW is... and what defines it .... it's just a certain type of light compression... right?


It's important to know that the word compression is used with two very different meanings.

There's the computer guys' meaning, which is taking a boatload of data and doing loads of clever math on it to produce a much smaller amount of data, from which the original data (lossless), or something close enough in picture quality (lossy), can be recovered by un-doing the math.

There's dynamic range compression, which means cramming the typically 14 to 16 bits' worth of brightness information from the sensor into the 8 to 10 bit limit of digital video tape. This is very much like audio compression, where the volume is automatically turned up and down to keep things in a narrower range.

As I understand the term, Raw means not using the second type of compression, but rather sending everything that comes from the sensors through to post.
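
As a toy illustration of that second kind of compression, here's a generic log-style curve in Python/NumPy (the constants are made up for the example, not any manufacturer's actual formula): a wide linear sensor range gets squeezed into the few code values tape can carry.

import numpy as np

# Squeeze 14-bit linear sensor values into 10-bit code values with a
# generic log-style curve. (Illustrative constants only; real cameras
# use their own carefully designed transfer functions.)
def encode_log(linear14):
    x = linear14 / 16383.0                      # normalise 14-bit input
    y = np.log1p(1023.0 * x) / np.log(1024.0)   # compress the highlights
    return np.round(y * 1023.0).astype(int)     # quantise to 10 bits

sensor = np.array([10, 100, 1000, 16383])       # shadows up to clip
print(encode_log(sensor))                       # [  72  292  613 1023]

Shadows get lots of code values and the highlights share few, which is the "turning the volume down" described above; raw skips this step and hands the whole range to post.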

He would tap into the signal from the CCDs before it went to compression and tape, and ended up getting a massive gain in resolution, 4:4:4 color, and higher dynamic range. Does that count as RAW?


Maybe, depending on what was between the CCDs and the place he tapped in. The idea was certainly tending in the direction of raw.

For instance, if a camera were capable of RAW like the RED, but the 'digital processing' we currently add in post could be applied, with the same degree of manipulation (but predetermined), on set, is that RAW? Is that RAW when it's done in camera?


No, raw means not doing anything in camera that you don't absolutely have to. Time is money everywhere, and while post isn't cheap, production is much more expensive. I advocate a sort of meta-principle: complexity should migrate away from where time is more expensive to where it's less expensive. By that criterion, raw is good.

If not then RAW = doing your look in post, versus picking a medium (digital emulsion) knowing how you chose it to reproduce the image and then light for it.


Yes, raw means making your look in post. This is a good thing, because you have a dailies colorist working for you, covering your tush. We've tried the on-set LUT thing; it's been a disaster. Getting an adequate quality monitor correctly set up and free of ambient light on location isn't in the cards. If the dailies colorists hadn't thrown out the LUTs, those DPs would have been fired. Establishing LUTs at the post facility during pre-production, using tests, is what works. You can then carry those LUTs to the set for viewing.
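
For anyone unclear on what's actually inside such a viewing LUT: it's literally a table, code value in, code value out. Here's a toy 1D example in Python/NumPy (real show LUTs are built in grading tools and are usually 3D, mapping R, G and B jointly; the gamma here is just for illustration):

import numpy as np

# A toy 1D viewing LUT: map each 10-bit input code to an output code
# through a simple 2.2 viewing gamma. (Illustrative only.)
codes = np.arange(1024)
lut = np.round(1023.0 * (codes / 1023.0) ** (1.0 / 2.2)).astype(int)

frame = np.random.randint(0, 1024, size=(4, 4))   # fake 10-bit frame
viewed = lut[frame]   # applying the LUT is just a table lookup

# The recorded frame is untouched; only the monitoring path changes,
# which is why a bad on-set LUT can be thrown out at the dailies stage.
print(frame[0], viewed[0])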

The choice of film emulsions doesn't limit the range of looks you can get from the negative the way baking in a look on tape does. Any given film stock captures whatever range it does, with no in-camera manipulation, and leaves you to extract what you want from a far wider range than any chip can capture. So, in that sense, film is kinda like Raw on steroids. It gives you more to take into post and the DI than any electronic camera can. Film gives you a choice of several different wide dynamic ranges.



-- J.S.

#14 Michael Most

  • Basic Members
  • 765 posts
  • Other

Posted 03 August 2009 - 07:29 PM

We've tried the on-set LUT thing; it's been a disaster. Getting an adequate quality monitor correctly set up and free of ambient light on location isn't in the cards. If the dailies colorists hadn't thrown out the LUTs, those DPs would have been fired. Establishing LUTs at the post facility during pre-production, using tests, is what works. You can then carry those LUTs to the set for viewing.


Any guesses as to why so few people (other than you and me, of course) seem to understand that?

#15 Paul Bruening


    (deceased)

  • Sustaining Members
  • 2858 posts
  • Producer
  • Oxford, Mississippi

Posted 03 August 2009 - 08:24 PM

The choice of film emulsions doesn't limit the range of looks you can get from the negative the way baking in a look on tape does. Any given film stock captures whatever range it does, with no in-camera manipulation, and leaves you to extract what you want from a far wider range than any chip can capture. So, in that sense, film is kinda like Raw on steroids. It gives you more to take into post and the DI than any electronic camera can. Film gives you a choice of several different wide dynamic ranges.



-- J.S.


I like.

#16 Adrian Sierkowski

  • Sustaining Members
  • 7118 posts
  • Cinematographer
  • Los Angeles, Ca

Posted 03 August 2009 - 08:25 PM

Because that's how it looks! At least, that's the answer I've gotten when trying to explain to certain people on set not to worry that x or y doesn't look right on the LCD/monitor, etc.
Ahh, if only everyone could understand a waveform...

#17 Andrew Rawson

  • Sustaining Members
  • 57 posts
  • Cinematographer
  • Los Angeles

Posted 03 August 2009 - 10:40 PM

Any guesses as to why so few people (other than you and me, of course) seem to understand that?



I don't understand this logic at all. I've been shooting a Genesis show and we create LUTs on the fly all the time, on set. We have a beautiful, calibrated 24" CRT monitor, a talented DIT, and a dark environment.
We're shooting Panalog, so the only real reason for the LUTs is viewing dailies. I look at it as a quick and easy rough idea to communicate our intentions to studio execs. Everybody seems to get that it's only an approximation and that we'll time it in post.

#18 John Sprung

  • Sustaining Members
  • 4635 posts
  • Other

Posted 04 August 2009 - 01:30 AM

I look at it as a quick and easy rough idea to communicate our intentions to studio execs. Everybody seems to get that it's only an approximation ....


Wow. You have no idea how lucky you are to have execs who get that.



-- J.S.

#19 Ryan Patrick OHara

  • Basic Members
  • 83 posts
  • Cinematographer
  • Los Angeles

Posted 04 August 2009 - 03:45 AM

Wonderful posts!

Since my article will cover not only the technical and programming side of what RAW is in different formats, and its advantages/disadvantages, I am also exploring the use of it, what it means to the production, and its future.

I fully understand the process of shooting film, which means I don't fear a RAW (do-it-in-post) workflow at all. I understand that the processing stage of any film negative is an important stage outside of on-set production, in which a cinematographer can use ENR (or a similar type of silver retention), pull, push, or other means of making great and wonderful manipulations to the image... of course, all predetermined with tests.

To play devil's advocate: why does this mean we should do our look in post? Film developing and processing can't be done in camera, which is why they aren't. If a cinematographer could eventually do pre-tests with a digital RAW ('raw-ish') stock, find his look, then program that determined look into the camera/computer hybrid and shoot it the way he intended, on set... I fail to see how someone in post is taking less time, especially if they are just applying the same pre-production, predetermined look-up table as a jumping-off point, outside of the specialty applications which I'm sure exist. It's not like non-RAW images can't have slight adjustments made by a colorist. So if you can get your look on set, why shoot the entire film in RAW? To assist the cinematographer, or to ease the job? I'd like to see Wally (who works photochemically) or Vittorio be told that getting the look in post is a better, more time-saving process.

On the other hand, I like the idea of building a LUT in pre-production and applying it in post. But if this is the future, I've heard many concerns that the cinematographer's control is in jeopardy, because he/she is not kept on the payroll in post. I'm sure the ASC committees are hard at work on a good way of making everyone happy, but this is a concern of mine, and I have not heard anything big since those American Cinematographer supplements about finding the image. (Can't remember the name of the three-supplement series.)

These are, of course, questions I will be grilling the many people I plan to interview: professional DPs, camera manufacturers, and colorists. So please take this not as an argument, but as an in-depth survey. No one's comments on this thread will be used in my article unless I contact you privately first. But I am loving the feedback.

I think I'll save the rest of my ramblings for my notes and research!

Thanks again everyone... I knew this was the best place to get going!

Best,

-Ryan

#20 Byron Karl

  • Basic Members
  • 65 posts
  • Other

Posted 04 August 2009 - 12:04 PM

John... quick follow-up question on 4K Bayer. My understanding of Bayer is that the RGB pixels are scattered across the chip, and that between different blue pixels on the same line, the "gap" needs to be "filled in". Are you saying a 4K image is taken and it's just a color shift happening from line to line? Meaning that on a 4K chip, my understanding is that it's 1K blue, 1K red, and 2K green that somehow mathematically combine to 4K, despite 2K being the highest real resolution?

The reason I suspected it's more akin to a software "upconversion" is the render time involved. If it's just a color shift, that doesn't account for why it needs a 15:1 ratio to debayer footage to 4K. Are you saying a full 4K image is captured line by line, but in alternating CbCr?
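
For what it's worth, putting numbers on the mosaic helps here (a quick Python sketch of an RGGB layout, the usual pattern, though I can't speak to RED's exact chip): each red-green row runs R G R G... and each green-blue row runs G B G B..., so overall the chip is half green, a quarter red and a quarter blue, and no photosite samples more than one colour.

# Photosite counts on an illustrative 4096 x 2048 RGGB mosaic.
h, w = 2048, 4096
r = (h // 2) * (w // 2)   # reds:   even rows, even columns
b = (h // 2) * (w // 2)   # blues:  odd rows, odd columns
g = h * w - r - b         # greens: everything else
print(r, g, b, r + g + b == h * w)   # 2097152 4194304 2097152 True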

