What is the difference between a traditional digital camcorder that looks like actual video, made for multi-cam productions like soap operas, and cameras like the Arri Alexa that look as much like film as digital technology can manage? Is it the frame rate? Depth of field? Shutter speed? Resolution? Something ridiculously complicated? What is it about these cameras that makes the footage look that different from one another?
Additionally, what is the official terminology to differentiate between these two types of digital camera? The film-mimicking one has many possible names (like ‘film-style digital motion picture camcorder’), but what about the one that produces traditional video footage?
Traditional video has been shot in interlaced-scan -- every image captured sequentially is a FIELD containing every other line of the total video frame -- at a high motion sampling rate like 60X (NTSC) or 50X (PAL) a second. Film has traditionally captured entire frames at a time at a low motion sampling rate like 24X a second. Also, usually a camera set up to capture at 60i or 50i has no shutter engaged so that the exposure time is 1/60th or 1/50th with no temporal gap between each field captured, whereas a film camera has a spinning shutter that is closed 50% of the time (180 degree shutter angle).
Digital cameras used for material traditionally shot on film (though I'd point out that a long time ago, film was used for news, sports, etc.) often are capturing progressive-scan (whole frames captured instead of fields) instead of interlaced-scan and running at 24 fps with an electronic equivalent of a 180 degree shutter, i.e. 1/48th shutter time for 24P.
On top of that, there is also the wider dynamic range captured by recording log or raw, which is closer to what a film negative can capture, compared to the narrower range of broadcast gamma, which is more like reversal film or a print off of a negative. A traditional video camera is usually set up to record broadcast-range gamma and color for minimal post color-correction, if any.
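To put a number on "wider dynamic range": range is usually counted in stops, i.e. doublings of light, so it is just a base-2 log of the usable contrast ratio. A minimal sketch; the two contrast ratios below are illustrative ballparks for comparison, not measured specs of any particular camera:

```python
import math

# Dynamic range in stops = log2(brightest usable level / darkest usable level).
def stops(contrast_ratio: float) -> float:
    return math.log2(contrast_ratio)

# Illustrative contrast ratios, not real measurements:
print(round(stops(64), 1))     # 6.0  -- ballpark for narrow broadcast-gamma video
print(round(stops(16384), 1))  # 14.0 -- ballpark for a modern log/raw capture
```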
Traditional broadcast video cameras also use 2/3" sensors which are slightly smaller than a 16mm frame, with the attendant increase in depth of field, compared to using a 35mm-sized sensor.
But I would say that the most obvious visual difference comes from shooting interlaced-scan 60i or 50i with no shutter over 24P with the shutter closed for part of the time.
I didn't understand a lot of that, but thanks.
But if the frame rate is the main thing, how come The Hobbit (which was shot using a frame rate almost as high as traditional video) still looks far more film-like than video-like?
I can break it down into smaller chunks if you have a specific area of confusion.
First of all, did you see "The Hobbit" in an HFR (High Frame Rate) presentation, which was 3D-only as well? Because all other screenings and the home video version were made by taking every other frame out of the 48 fps photography to create a 24 fps version, so it's not going to look different from any other 24 fps movie unless you see it shown at 48 fps.
And it did look somewhat video-like at 48 fps, I can tell you. It didn't have interlaced-scan artifacts of course (sawtooth / combed edges around movement) being shot progressive-scan.
I think the 48 fps version was a bit hurt by the decision to shoot with a 270 degree shutter angle. The trouble with using a normal 180 degree shutter is that at 48 fps, the exposure time per frame is 1/96th of a second, so when you pull out every other frame to make a 24 fps version, the blur from 1/96th is the same as if you shot everything with a 90 degree shutter angle. So by using a longer shutter time at 48 fps and getting more blur than if they had used a 180 degree shutter, the 24 fps version looked a bit more normal as if it had been shot with a 135 degree shutter angle (half of the actual 270 degree shutter angle used at 48 fps). So it made the 24 fps version look better but I think it made the 48 fps version look a little smeary in fast motion.
Of course, there was probably a practical reason to use the longer shutter time - you already lose a stop of exposure at 48 fps instead of 24 fps, and you lose another stop from the 3D mirror rigs, so going from 180 degrees to 270 degrees helped compensate slightly for that loss.
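The frame-dropping arithmetic above can be checked directly: take the exposure time at 48 fps and ask what shutter angle would produce the same motion blur at 24 fps. A quick sketch using the standard exposure = (angle / 360) / fps relationship:

```python
# Exposure time per frame from frame rate and shutter angle.
def exposure_time(fps: float, shutter_angle: float) -> float:
    return (shutter_angle / 360.0) / fps

# The shutter angle that gives the same exposure time at a different frame rate.
def equivalent_angle(exposure: float, fps: float) -> float:
    return 360.0 * fps * exposure

# 180-degree shutter at 48 fps -> 1/96 s; that blur reads as a
# 90-degree shutter once every other frame is pulled for 24 fps:
print(equivalent_angle(exposure_time(48, 180), 24))  # about 90 degrees

# 270-degree shutter at 48 fps -> 1/64 s; the 24 fps pulldown
# reads as a more normal 135-degree shutter:
print(equivalent_angle(exposure_time(48, 270), 24))  # about 135 degrees
```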
Now some people think that the solution for HFR is to go really high, like 72 fps or 120 fps, to get beyond that look of classic video motion. Not having seen anything yet at those frame rates, I'll defer judgement.
Of course, "The Hobbit" had big-budget feature film production design and cinematography behind it, plus was shot on a 35mm sized sensor, all of which helps get you some of that "film look". But you can only judge the 48 fps look by seeing it at 48 fps.
There are a lot of reasons, but to simplify it I would just say one camera is a lot better than the other.
If you're talking about a $2,000 video camera, or even the $20,000 camera they shoot the news on, versus a digital cinema camera like the ALEXA, which most Hollywood films shoot on...
The news camera has very little latitude, so very bright things "clip" and turn white, and very dark things turn solid black. Basically there is less overall information being captured by the camera. It does the same thing with colors, too: it's capturing less color information. So you end up with an image that, although we have become used to it, is very weird looking really... not at all like what our eye sees.
Digital cinema cameras capture much more information in terms of both bright and dark and range of colors. They also have a bigger sensor, so you have creative control over the background being in or out of focus.
Now we are getting into the more subtle issues like 8-bit 4:2:0 recording, Rec.709 color space, etc. They all add up... but I don't think they are as significant factors as 60i versus 24P, no shutter versus 180 degree shutter, etc.
Keep in mind that when something is shot in Super-8 color reversal, with limited dynamic range and the depth of field of a small capture area, it still looks like film. 35mm movies shown on TV in 8-bit 4:2:2 Rec.709 still look like film.
I can try to dumb it down a bit more from David's excellent, though technical, explanation.
When television was invented, they used the frequency of the electrical grid to determine the rate. In the USA that is 60 Hz, so they used 60 fields per second.
In the case of analog television, the glass tube image is made of little lines of information which are scanned top to bottom very fast. For standard definition in the USA that would be 525 individual scan lines from top to bottom. So the image was never solid; it was always changing, always scanning, even if you had a still image. Each frame consisted of two fields, an upper and a lower. The television scanned the first line and left the 2nd line blank, scanned the 3rd line, then left the 4th line blank, all the way down. Then for the second field, the same process happened where the 2nd line was scanned, then the 4th line, all the way down. Two fields to create one frame, but the whole thing is in constant motion and your brain makes it all work.
Standard definition analog cameras initially captured the same way as the television, by scanning top to bottom in fields and then converting that signal into a frequency which was sent down the cables and over the airwaves. Later that system turned into the charge-coupled device, or CCD, which "flashed" a single frame at a time and then turned that image into the 525-line interlaced system.
So this is one of the reasons why broadcast looks so different. By the way, we still broadcast a similar format today, but our televisions are "active" and don't have a set scan rate like tubes do, so the interlacing is barely seen. However, you can adjust the refresh rate of modern televisions from the 60 Hz of the old days to 120 Hz and 240 Hz, which makes the motion even more smooth and will turn anything into a "television" look. This demonstrates that the look of television really starts with the scan frequency, or in this case, the frame rate.
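The field mechanism described above can be sketched in a few lines of Python. This is a toy model, with row labels standing in for real pixel data: the point is that adjacent lines in the woven frame were sampled 1/60th of a second apart, which is what produces combing on anything that moves.

```python
# Toy sketch of interlaced capture: two fields, each holding every
# other line of the frame, sampled at different moments, then woven
# together. Row values are just timestamp strings, not real pixels.

FIELD_RATE = 60  # NTSC: 60 fields per second

def capture_field(time_s, parity, height=8):
    # parity 0 = one field's lines (0, 2, 4, ...), parity 1 = the other's
    return {row: f"scanned at t={time_s:.4f}s"
            for row in range(parity, height, 2)}

frame = {}
frame.update(capture_field(0.0, 0))               # first field
frame.update(capture_field(1 / FIELD_RATE, 1))    # second field, 1/60 s later

for row in sorted(frame):
    print(row, frame[row])   # neighboring rows carry different timestamps
```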
#1 Frame rate = 24 progressive frames, no interlacing.
The next big thing is color space. Television broadcast has a very limited range of colors.
We have almost infinite colors in the real world, and film can record an almost continuous range of color as well, but television can't. There are three primary colors, red, green and blue: RGB. Television broadcast throws away half or more of the detail in the blue and red color-difference channels, keeping only the brightness information at full resolution, and it quantizes what's left. So only a limited set of color values survives, instead of the nearly continuous color that's available.
Plus, our eyes can see a broad dynamic range. Film can see a broad dynamic range, but television can't. To cram the signal down the line, the dynamic range has to be compressed.
Digital cinema cameras capture full RGB signal AND millions of colors. Plus they have a wide dynamic range like film does.
#2 Color information and dynamic range.
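The "half the color data" idea corresponds to chroma subsampling, mentioned elsewhere in the thread as 4:2:0. A minimal sketch of the 4:2:0 flavor in plain Python, with made-up sample values: luma stays at full resolution, but each 2x2 block of pixels ends up sharing a single color sample.

```python
# Average each 2x2 block of a chroma plane into one sample, the way
# 4:2:0 keeps color at half resolution both horizontally and vertically.

def subsample_420(chroma):
    """Reduce a chroma plane to one averaged sample per 2x2 block."""
    h, w = len(chroma), len(chroma[0])
    return [[(chroma[y][x] + chroma[y][x+1] +
              chroma[y+1][x] + chroma[y+1][x+1]) / 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

# A 4x4 patch of one color-difference channel (made-up values):
cb = [[10, 20, 30, 40],
      [10, 20, 30, 40],
      [50, 60, 70, 80],
      [50, 60, 70, 80]]

print(subsample_420(cb))  # [[15.0, 35.0], [55.0, 75.0]] -- 16 samples become 4
```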
Things like sensor size (depth of field) are important, but not as critical. In my view, those two things I mentioned above are the two most important defining items.
But I think the "video" look is more related to the higher motion sampling rate -- 60X per second -- combined with a lack of shutter, so no temporal gaps in the motion. This gives the moving image a "live" look like it is happening right now in front of the camera, sort of fluid-looking and a little bit smeary. 24 fps with a half-shutter creates more of a steppy, strobing motion. It's not a question of good or bad, obviously it could be argued that it would be better to have a higher sampling rate for motion, it's mainly a matter of conditioning.
I have a theory that the more "hyper-real" the process becomes, the more it makes the fakery of fiction look obvious: sets look like sets, costumes look like costumes, and actors look like actors. This is one reason why these more immersive processes work well for IMAX-type nature documentaries, where everything IS real in front of the camera. It suggests that one solution for high frame rate / high resolution / high dynamic range / 3D processes will partly just be to get more perfectionist about what goes in front of the camera. But it does imply that 24 fps has the effect of enabling that "willing suspension of disbelief" that we talk about, by giving everything a certain motion cadence that removes it from the strictly realistic, like a filter over reality.
Besides 8-bit 4:2:2 or worse color subsampling, another factor in "classic" video was the fact that it was lower in resolution (standard definition, 720 x 480 pixels for NTSC) and to compensate for that, most cameras employed a fairly heavy level of electronic sharpening, edge-enhancement, creating a sort of ringing edge around everything (and making people look pretty bad if they had freckles or lines on their face).
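That edge-enhancement ringing is easy to show in one dimension: sharpen by adding back the difference between the signal and a blurred copy of itself, and the result overshoots on either side of an edge. A toy sketch using a 3-tap box blur, not any camera's actual sharpening circuit:

```python
# Unsharp-mask style edge enhancement on a 1D signal: the output
# undershoots just before an edge and overshoots just after it,
# the "ringing" halo that classic video sharpening produced.

def sharpen(signal, amount=1.0):
    blurred = [(signal[max(i - 1, 0)] + signal[i] +
                signal[min(i + 1, len(signal) - 1)]) / 3
               for i in range(len(signal))]
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

edge = [0, 0, 0, 1, 1, 1]   # a simple dark-to-bright edge
print(sharpen(edge))        # note the dip below 0 and spike above 1 at the edge
```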
I believe the original poster is in the UK, so the 24p-in-NTSC-land frame rates don't apply. However, film shot at 25 fps still looks like film, although 25p video doesn't handle motion the same way; unless you make further changes, it looks like video.
25P with a 1/50th shutter should handle motion in a similar manner as 25 fps film with a 180 degree shutter -- it's 50i photography that is a big difference from traditional 24 fps (or 25 fps) film.
The differences between 24 fps film and 60i NTSC are mainly in that higher sampling rate, so the same issues apply in 25 fps film versus 50i PAL. I didn't bring up the one unique thing about NTSC, which is the 3:2 pulldown that gets applied to 24P when shown on 60i television, but that has nothing to do with the look of video origination.
The thing about color space is not just the technical bit-rate end but how the camera sensor deals with the color on the front end. 4:2:2 from an Alexa versus 4:2:2 from an HVX200... it's not at all the same thing.
Yes, but not between the Alexa and HVX as the Alexa is CMOS and the HVX is CCD.
CCDs (charge-coupled devices) are actually very good imagers, but they struggle to get decent dynamic range, and they have issues in low light because the image is grabbed as pulses. CMOS (complementary metal–oxide–semiconductor) sensors grab images in scan lines, which means the processor plays a bigger role, but it also means they can be used in low light and have better dynamic range.
The biggest weakness of CCD is dealing with colors. You need to separate the colors on capture, which requires a beam splitter and three chips. This takes up a considerable amount of room, so producing a CCD camera with a bigger sensor like S35 would be very difficult and bulky. So most CCD cameras use very small imagers, and thus the technology (which is pretty good) doesn't hold a candle to the mega-sized possibilities of single CMOS sensors, which use a color filter array over one big chip to capture color without needing multiple chips.
It's fairly easy to make CCD and CMOS 12-bit 4:4:4. However, the cost to do so can price products outside of the range manufacturers feel is acceptable. This is the biggest problem with manufacturers like Sony and Panasonic. They make cameras to fit particular price brackets, so they limit functionality in one way or another to meet those requirements. Companies like RED, Arri, Vision Research, Blackmagic, etc… they don't have that philosophy. They make a camera and whatever price range it fits, that's what they'll sell it for. They don't "hinder" the camera's technical ability purposely to hit a particular price.
Finally, the big thing that allows us to capture high dynamic range (raw recording) has flipped the industry on its head. Cameras without raw capture won't have the wide dynamic range and, as a consequence, won't look as filmic. It's that raw capture that truly separates the good cameras from the not-so-good. Most cameras don't capture raw in stock form, and those that claim to spit "raw" out the HDMI or SDI port (which is actually Rec.709) are just toys. You can't take high-dynamic-range material, compress it down to Rec.709, and expect it to be the same.