D21 workflow

#1 Joseph Zizzo
  • Sustaining Members
  • 78 posts
  • Cinematographer
  • new york

Posted 05 February 2010 - 09:44 PM

What's the best way to go for short jobs, like a commercial or music video, in terms of workflow when shooting with the D21? Shooting on tape seems a bit stone-age to me, since I've been working with the Red quite a bit over the last year and a half. But are there advantages to tape? There must be, since a number of TV series are done that way, so I've discovered...

thanks in advance.

#2 Jamie Metzger
  • Basic Members
  • 773 posts
  • Cinematographer
  • San Francisco

Posted 06 February 2010 - 01:31 AM

Depends on whether you want a file-based workflow or a tape-based one. It seems that if the project were a commercial going to air, it would end up on tape anyway.

Edited by Jamie Metzger, 06 February 2010 - 01:31 AM.


#3 Brian Drysdale
  • Basic Members
  • 5070 posts
  • Cinematographer

Posted 08 February 2010 - 04:27 AM

I know of one TV series for which the RED was being considered; they went for an F35 instead. From what I could make out, rendering was going to be a problem on an already tight post-production schedule.

One big advantage of tape is that you just hand over the camera rushes in a labelled box.

#4 Phil Rhodes
  • Sustaining Members
  • 11939 posts
  • Other

Posted 08 February 2010 - 05:54 AM

Bear in mind that "raw vs. HDCAM" and "file vs. tape" are separate choices, not one axis: you can shoot raw onto disks, but you can also shoot decoded HD onto disks and still do a file-based workflow.

P

#5 Saul Rodgar
  • Basic Members
  • 1682 posts
  • Cinematographer

Posted 08 February 2010 - 01:31 PM

I know of one TV series for which the RED was being considered; they went for an F35 instead. From what I could make out, rendering was going to be a problem on an already tight post-production schedule.

One big advantage of tape is that you just hand over the camera rushes in a labelled box.


Yeah, rendering is pretty resource-intensive. In my experience, going tapeless adds at least one more crew member (a data wrangler), and it can become more expensive and complicated to deal with in post, particularly with formats that need to be rendered or unwrapped before they can be edited. Ultimately, it is up to all parties involved in production and post-production to find the format that best suits their needs on a case-by-case basis.

#6 David Mullen ASC
  • Sustaining Members
  • 19761 posts
  • Cinematographer
  • Los Angeles

Posted 08 February 2010 - 01:46 PM

ARRIRAW has only recently been enabled and though people like Geoff Boyle report that the quality is higher than the internal HD conversion, it's still a bit of a rarity in post houses. First of all, you are talking about 2.8K RAW uncompressed, which is a lot of data and you need to use a data recorder such as a Codex or S-Two. Then you need to work out a RAW workflow for dailies conversions and post color-correction.

For most TV work, people have been opting to record HD to an SRW1 deck since almost all post houses can handle that footage.

#7 Phil Rhodes
  • Sustaining Members
  • 11939 posts
  • Other

Posted 08 February 2010 - 06:04 PM

I should point out that, at least as of a few months ago, Arri's own raw converter software was, to put it mildly, somewhat naive in how it worked at a low level, and in the opinion of people other than myself it could have worked a lot faster with some smarter coding.

P

#8 John Sprung
  • Sustaining Members
  • 4635 posts
  • Other

Posted 08 February 2010 - 07:28 PM

Yeah, it's not just Arri. The problem is, will anything be around long enough to justify the time and expense of really careful benchmarked coding? Does anybody even know how to write assembly code for the latest and fastest processors? Software is, and always was, much harder than hardware. ;-)

-- J.S.

#9 Phil Rhodes
  • Sustaining Members
  • 11939 posts
  • Other

Posted 08 February 2010 - 08:03 PM

Yeah, it's not just Arri. The problem is, will anything be around long enough to justify the time and expense of really careful benchmarked coding? Does anybody even know how to write assembly code for the latest and fastest processors? Software is, and always was, much harder than hardware. ;-)

-- J.S.


I don't think it has to be hand-optimised asm; it just has to avoid touching every DPX file three times for the header, knowing it's likely to be running on a video RAID that's been carefully set up for streaming forward reads.

Grumph!

P

PS - better yet, it should be implemented for GPGPU, then you can run it on one of those $400 plug-in massively parallel supercomputers they call graphics cards these days. Which is of course not in any way what Red Rocket is. It's not a gamer's graphics card with a "Red" logo on it, or anything. Perish the thought.
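The DPX-header complaint above is concrete enough to sketch. Below is roughly what "touch the file once" looks like for the two header fields a converter needs first; the field offsets follow the DPX (SMPTE 268M) generic header, and this is an illustrative sketch, not Arri's actual converter code:

```python
import struct

# Read a DPX header with one sequential read, rather than seeking back
# and forth per frame (which defeats a RAID tuned for streaming reads).
# Per SMPTE 268M: magic number at byte 0 ("SDPX" big-endian, "XPDS"
# little-endian), offset-to-image-data as a 32-bit integer at byte 4.

def read_dpx_header(path):
    with open(path, "rb") as f:
        hdr = f.read(2048)            # one read covers the whole header
    magic = hdr[0:4]
    if magic == b"SDPX":
        endian = ">"                  # big-endian file
    elif magic == b"XPDS":
        endian = "<"                  # little-endian file
    else:
        raise ValueError("not a DPX file")
    (image_offset,) = struct.unpack_from(endian + "I", hdr, 4)
    return {"endian": endian, "image_offset": image_offset}
```

From there the converter can seek straight to the image data and stream forward, which is the access pattern the RAID was set up for.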

#10 Paul Bruening

    (deceased)

  • Sustaining Members
  • 2858 posts
  • Producer
  • Oxford, Mississippi

Posted 08 February 2010 - 08:25 PM

PS - better yet, it should be implemented for GPGPU, then you can run it on one of those $400 plug-in massively parallel supercomputers they call graphics cards these days. Which is of course not in any way what Red Rocket is. It's not a gamer's graphics card with a "Red" logo on it, or anything. Perish the thought.


I agree, Phil. Imagine how much real time flow could be handled if distributed fully over the resources of a dual-quad core mobo with 4 slots of 16X SLI running the latest smokin' cards. It would probably take 2X 1,000W PS to drive it (not to mention melting the guts out of the average 15A wall socket). But the coding for that... sheesh!

#11 Joseph Zizzo
  • Sustaining Members
  • 78 posts
  • Cinematographer
  • new york

Posted 09 February 2010 - 12:43 PM

OK, so I understand that the workflow has not really been worked out yet for ARRIRAW, and that handing over tapes at the end of a job is easier...

But I remember this being the case when the Red was new: no one wanted to use it for a commercial because the raw, file-based workflow was not understood, and everyone saw it as a problem. Now the Red is, in my world at least, the go-to digital format when film has been abandoned... everyone understands the workflow, data is backed up on set as we shoot, and it's no longer a problem...

So, if ARRIRAW is a superior format compared to shooting S-Log to HDCAM (and certainly compared to the Red) in terms of resolution, dynamic range and color rendering - I mean, Arri went to the trouble of developing it for a reason, I'm sure - do any of you think it is just a matter of time before people adapt to the workflow issues, as happened with the Red, or are there larger problems than that?

In other words, is it worth pursuing a raw workflow for short jobs, or am I just going to end up looking like some obsessed perfectionist on a mission!?

Thanks.

#12 David Mullen ASC
  • Sustaining Members
  • 19761 posts
  • Cinematographer
  • Los Angeles

Posted 09 February 2010 - 01:12 PM

If your production can afford to work with ARRIRAW, then go ahead. It's really just a matter of cost and ease of post, where you can go for post, etc. I don't think it makes much difference on the set, going to an SRW1 versus a Codex or S-Two, other than the back-up issues and cost.

I just think you'll find that a lot of music videos and whatnot have limited post budgets and perhaps already have certain post houses they work with. I suppose it's possible that all the Red-centric or savvy facilities that music videos go to may be OK with ARRIRAW footage as well -- you're just going to have to investigate that.

#13 Phil Rhodes
  • Sustaining Members
  • 11939 posts
  • Other

Posted 09 February 2010 - 01:36 PM

The issue is not so much software as it is just finding somewhere that's used to doing file-per-frame workflows. There's quite a big difference between posting some heavily compressed Red stuff on a Mac and doing a proper DPX workflow, as you'd assume might happen for something like a feature DI. Find somewhere that's used to handling the latter, and the overhead presented by handling raw D21 footage comes down to data wrangling and machine time.

Potentially quite a lot of machine time.

P

#14 Joseph Zizzo
  • Sustaining Members
  • 78 posts
  • Cinematographer
  • new york

Posted 09 February 2010 - 01:55 PM

Thanks, guys...

Phil, if you don't mind elaborating a bit, what is the difference between posting Red footage - which I, perhaps mistakenly, thought to be uncompressed data - and posting ARRIRAW data? I understand you're saying the latter is slower... but Red footage used to take a day to transcode as well... now it can be done on set as we shoot. Is Red data just that much more compressed?

Thanks.

#15 David Mullen ASC
  • Sustaining Members
  • 19761 posts
  • Cinematographer
  • Los Angeles

Posted 09 February 2010 - 02:21 PM

REDCODE compression is somewhere around 10:1.

Someone with better math skills can tell you the data rate for Redcode 36 for 24P at 4K, versus 2.8K uncompressed ARRIRAW.
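For anyone who wants the arithmetic, here is a rough sketch. It assumes "REDCODE 36" means a nominal ~36 MB/s, and takes D-21 ARRIRAW as 2880 x 1620 photosites at 12 bits per photosite, 24 fps; these figures are assumptions for illustration, not spec-sheet values:

```python
# Rough data-rate comparison: REDCODE 36 vs. uncompressed ARRIRAW.
def arriraw_mb_per_sec(width=2880, height=1620, bits=12, fps=24):
    """Uncompressed Bayer data rate in MB/s (1 MB = 1e6 bytes)."""
    bytes_per_frame = width * height * bits / 8
    return bytes_per_frame * fps / 1e6

redcode36 = 36.0                # MB/s: "REDCODE 36" is nominally 36 MB/s
arriraw = arriraw_mb_per_sec()  # roughly 168 MB/s

print(f"ARRIRAW 2.8K uncompressed: {arriraw:.0f} MB/s")
print(f"That's about {arriraw / redcode36:.1f}x the data of REDCODE 36")
```

So under these assumptions, uncompressed 2.8K ARRIRAW produces several times the data of REDCODE 36 at 4K, which is the nub of the workflow difference being discussed.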

#16 Joseph Zizzo
  • Sustaining Members
  • 78 posts
  • Cinematographer
  • new york

Posted 09 February 2010 - 03:11 PM

OK, so ARRIRAW just represents a lot more data to deal with, then... which sheds some light on why so many series record to tape.

Thanks.

#17 Oliver Temmler
  • Basic Members
  • 22 posts
  • Industry Rep
  • Munich, Germany

Posted 10 February 2010 - 12:48 PM

OK, so ARRIRAW just represents a lot more data to deal with, then... which sheds some light on why so many series record to tape.


speaking of which, I'd like to point out a new document on ARRIdigital which I hope you find interesting:
http://www.arridigital.com/downloads -> D-21 Workflow Guidelines

Cheers
Oliver

#18 Phil Rhodes
  • Sustaining Members
  • 11939 posts
  • Other

Posted 10 February 2010 - 02:27 PM

Phil, if you don't mind elaborating a bit, what is the difference between posting Red footage - which I, perhaps mistakenly, thought to be uncompressed data - and posting ARRIRAW data? I understand you're saying the latter is slower... but Red footage used to take a day to transcode as well... now it can be done on set as we shoot. Is Red data just that much more compressed?


It is a very common misconception that Red data is uncompressed. It's compressed at between 9:1 and 12:1, and if I mention that 3:1 MJPEG used to be the absolute maximum amount of compression considered tolerable for broadcast television, you'll see why that raises some eyebrows. The compression technique used by Red is probably better than JPEG, but not three or four times better. If Red had figured out how to store an uncompressed HD image on a flash card, they would have done something really special. They haven't.

This is why people like me question Red calling their data "raw". In common use, that term has been applied to DSLR stills which are stored on a flash card as data that is both uncompressed and unprocessed. Red's data is unprocessed, but it is certainly not uncompressed. Arri's raw mode, on the other hand, is both uncompressed and unprocessed, and fully entitled to the term.
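To put those ratios in context, a back-of-envelope calculation; the "4K" frame size and bit depth below are assumptions for illustration, not Red's actual figures:

```python
# What uncompressed "4K" Bayer data would demand, and what 3:1 / 9:1 /
# 12:1 compression bring it down to. Assumed: 4096 x 2304 photosites,
# 12 bits per photosite, 24 fps; MB = 1e6 bytes.
width, height, bits, fps = 4096, 2304, 12, 24
uncompressed = width * height * bits / 8 * fps / 1e6   # MB/s
print(f"uncompressed: {uncompressed:.0f} MB/s")
for ratio in (3, 9, 12):
    print(f"{ratio}:1 compression -> {uncompressed / ratio:.0f} MB/s")
```

Under these assumptions the 9:1 figure lands near 36 MB/s, which is at least consistent with the 9-to-12-to-one range quoted above, and shows why flash-card recording of the era more or less forced heavy compression.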

With regard to the whole Bayer thing, as briefly as I can:

Image sensors are intrinsically black-and-white devices; they see only brightness, not colour. Making them see colour requires putting colour filters on the front. Both the D21 and the Red (and the F35/Genesis, which share the same sensor, plus effectively all DSLRs and a lot of modern HD consumer devices such as the Canon HV20 and cellphone cameras like the iPhone) are single-chip cameras, as opposed to the three-chip blocks we get in things like the F23 and most television-oriented video cameras.

Three chip cameras use one each for red, green and blue, splitting the light up so that a proportion of it hits each sensor in alignment:

https://eww.pavc.pan...20/img_3ccd.jpg

Single chip cameras use patterns of colour filters printed onto the front of the imaging chip. The most common of these patterns was developed by an imaging scientist called Bayer, and the technique bears his name (notably, Genesis/F35 are not Bayer patterned).

http://www.kodak.com...er_patterns.gif

To recover a full colour image from a 3-chip device, you simply read the three chips, treating the information from each as red, green or blue according to the filter in front of that sensor. Clearly you can't do that with a Bayer-patterned chip; if you just lined all the pixel values from the sensor up next to one another, you'd get a checkerboard-like pattern with alternate pixels representing the different colour channels, which would have very little meaning:

http://www.guillermo...alraw/bayer.gif
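A quick way to see how little meaning the bare mosaic has is to simulate it: sample one colour channel per photosite from a full RGB image. An RGGB layout is assumed here for illustration; real CFA layouts vary by camera:

```python
import numpy as np

# Simulate what a Bayer sensor "sees": one channel per photosite.
def mosaic_rggb(rgb):
    """rgb: (H, W, 3) float array -> (H, W) single-channel Bayer mosaic."""
    h, w, _ = rgb.shape
    out = np.empty((h, w), dtype=rgb.dtype)
    out[0::2, 0::2] = rgb[0::2, 0::2, 0]   # R at even row, even col
    out[0::2, 1::2] = rgb[0::2, 1::2, 1]   # G at even row, odd col
    out[1::2, 0::2] = rgb[1::2, 0::2, 1]   # G at odd row, even col
    out[1::2, 1::2] = rgb[1::2, 1::2, 2]   # B at odd row, odd col
    return out

# Even a flat yellow frame (R=G=1, B=0) comes out patterned: the R and G
# sites read 1 while the B sites read 0.
yellow = np.zeros((4, 4, 3))
yellow[..., 0] = 1.0
yellow[..., 1] = 1.0
print(mosaic_rggb(yellow))
```

A uniform colour producing a non-uniform mosaic is exactly why the raw photosite values need interpretation before they resemble a picture.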

Using a single image sensor in a camera is beneficial in some ways - mainly those ways in which it makes the camera work more like a film camera. Lenses for 3-chip cameras need special considerations to land the image accurately on all three sensors at once. This is rarely done with 100% precision and that's why out-of-focus artifacts on 3-chip cameras often shade magenta to green top and bottom.

http://diglloyd.com/...agentaGreen.jpg

However, recovering a colour image from Bayer patterned data is not trivial, and is a process with inherent compromises. The biggest problem is where the image contains sharp edges with pronounced colour differences. The edge may fall between the widely-spaced RGB channel photosites on a Bayer patterned sensor, leading to uncertainty over where it really is (aliasing). Such a discontinuity in one RGB channel is usually associated with discontinuities in others - say you're looking at a yellow object; it is active in both the green and red channels. Because the green and red photosites are not in the same place (as they are on a 3-chip device), you may get a different idea of where that pronounced colour edge is in two RGB channels. This can lead to strange chromatic aberration:

http://colorcorrecti...-psvscamera.jpg

Here we see the partial solution to the problem: the in-camera de-mosaic uses different mathematics than Photoshop, and achieves a result with less chromatic aberration. However, it probably also has less sharpness, and that's an engineering compromise that's unavoidable with Bayer pattern sensors. This is also why people who shoot test charts on a Red where the test charts are comprised of black markings on white are not really answering any questions.

The mathematics involved in getting the best possible compromise out of this situation is very complicated and takes up a lot of computer time; it involves very careful interpolation of the RGB values that weren't sampled by the sensor. People have various terms for this: Dalsa were in love with the word "algorithms", but it is unavoidably interpolation, it is making up data, and from this it should be fairly obvious that if we want, say, an image 1920 pixels wide that has truly valid and unambiguous colour information, we need to use a Bayer pattern sensor at least twice that large in order to be able to scale down the results and minimise any problems. This is the reason people question Red's "4K" resolution claim; it is widely recognised that the 2K windowed mode on a Red camera is not really good enough for broadcast HD production and this is largely why.
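The simplest possible instance of that interpolation is a bilinear de-mosaic, sketched below: every missing R, G or B value is filled in by averaging its sampled neighbours. Real camera de-bayer algorithms are far more sophisticated than this, and the RGGB layout is again an assumption:

```python
import numpy as np

def demosaic_bilinear(bayer):
    """bayer: (H, W) RGGB mosaic -> (H, W, 3) RGB via neighbour averaging."""
    h, w = bayer.shape
    rgb = np.zeros((h, w, 3))
    # Masks marking where each channel was actually sampled.
    r = np.zeros((h, w)); r[0::2, 0::2] = 1
    b = np.zeros((h, w)); b[1::2, 1::2] = 1
    g = 1 - r - b
    for c, mask in enumerate((r, g, b)):
        known = bayer * mask
        # 3x3 average of the known samples, weighted by sample count.
        num = np.zeros((h, w)); den = np.zeros((h, w))
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                num += np.roll(np.roll(known, dy, 0), dx, 1)
                den += np.roll(np.roll(mask, dy, 0), dx, 1)
        rgb[..., c] = num / np.maximum(den, 1)
    return rgb
```

On flat areas this works perfectly; it is exactly at sharp, coloured edges, where the averaged neighbours disagree, that the aliasing and chromatic artifacts described above appear, and where the cleverer (and slower) mathematics earns its keep.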

It is probably not particularly more difficult to debayer Red's footage than Arri's, although recovering compressed data is an additional processing load in and of itself, and of course Red are using more pixels than Arri (though the accuracy of the information in those pixels will be severely compromised by the amount of compression being applied).

The fact that a D21 in raw gives you a larger amount of accurate data is not really debatable.

P

#19 Paul Bruening

    (deceased)

  • Sustaining Members
  • 2858 posts
  • Producer
  • Oxford, Mississippi

Posted 10 February 2010 - 02:42 PM

Holy smokes, Phil. Thanks for all that.

#20 David Mullen ASC
  • Sustaining Members
  • 19761 posts
  • Cinematographer
  • Los Angeles

Posted 10 February 2010 - 05:29 PM

Compression seems to be more or less inevitable today, and more and more acceptable.

I remember back in the days of D1 & D2 tape, people were saying that the DCT compression of beta-SP was unacceptable for post work, yet within a few years, D2 was gone and D1 was disappearing, but beta-SP was becoming accepted as a standard-def master tape format.

Then, next, compressed HD was considered a no-no and yet today we have more and more HD cameras using various compression schemes, mostly variants of MPEG-2 or MPEG-4.

Of course, processing and storage has improved to the point where higher data rates are workable, but I think you're not going to see the end of compression even for origination formats, so the main question is how good is the compression versus how limiting it is. I would say that Red's primary success has been due to the quality of REDCODE compression, whereas Dalsa never really solved the practical problem of dealing with uncompressed 4K RAW and even ARRIRAW (near 3K RAW) is a bit of a hurdle to record in the large amounts that a typical production shoots. If Red had taken the "high road" of uncompressed RAW, it would still be a science experiment on the same level as Dalsa was in terms of volume of use.

The typical independent filmmaker is not going to be able to deal with 4K RAW uncompressed for his little indie movie. Nor the typical TV show. That sort of leaves uncompressed RAW for bigger shows and for efx shoots.
