What is RED One, 4K etc.


  • This topic is locked
1 reply to this topic

#1 Stephen Williams

Stephen Williams
  • Sustaining Members
  • 4707 posts
  • Cinematographer
  • Europe

Posted 10 February 2010 - 06:25 AM

My thanks to Keith Walters for the following:-

What is the RED One?

Notes:
1. This post only addresses the basic physical design and construction of the RED One camera, as might be appreciated by someone who has no idea what the thing actually is. How well the actual camera meets its manufacturer's stated objectives is a matter of sometimes heated debate, and will be discussed separately.
2. If you feel anything has been left out, please let Stephen Williams know and it will be considered for inclusion here. Otherwise, feel free to discuss this in the conventional manner on the forum.


The RED One is a specialized video camera, primarily marketed as a low-cost, functional alternative to 35mm film-based cameras. It features a single 12 megapixel microlensed Bayer-masked CMOS image sensor, with approximately the same width, but not the same height, as a standard Super-35 film frame. As with similar cameras from other manufacturers, the rationale is that this will allow RED users to exploit the vast existing range of precision PL-Mount lenses designed for use with film cameras, giving similar framing and similar control over depth of field. (Although it is widely believed that the larger sensor area also permits higher resolution, this is in fact not the case; 12 megapixel and higher resolutions can be achieved with much smaller sensor sizes).

In a separate operation, RED also manufacture their own range of low-cost standard cine-type lenses.

The active chip area measures 24.4 mm by 13.7 mm, with 4,520 by 2,540 active pixels. However, the camera only records data from a maximum 4096 by 2304 pixel area in normal operation. Additionally, the camera only supports frame rates up to 30 fps with this number of sensor pixels. Higher frame rates are available, but only by accessing a smaller area of the sensor, the so-called “windowing mode”.
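These numbers imply a formidable raw data rate. The sketch below works it out, assuming 12 bits per photosite; that sample depth is an assumption for illustration, not a published RED specification.

```python
# Back-of-envelope data rate for the 4096 x 2304 recorded window.
# NOTE: 12 bits per photosite is an ASSUMED sample depth for
# illustration, not a published RED specification.
width, height = 4096, 2304
bits_per_photosite = 12
fps = 30

pixels_per_frame = width * height                               # 9,437,184
raw_bits_per_second = pixels_per_frame * bits_per_photosite * fps

print(f"Photosites per frame: {pixels_per_frame:,}")
print(f"Uncompressed rate: {raw_bits_per_second / 1e9:.2f} Gbit/s")  # ~3.40
```

At over 3 Gbit/s uncompressed under these assumptions, it is clear why heavy in-camera compression (see below) is unavoidable with the storage media the camera uses.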

On-camera audio recording is now available, which was not available with the original design.

The RED One uses a number of design economies, in theory allowing high-quality images to be captured and processed inexpensively, both by exploiting low-cost technologies originally designed for computer applications, and by allowing Post Processing computer(s) to handle the bulk of the signal processing.
  • The data from the 12 megapixel sensor is more or less recorded “as is”, using a proprietary Wavelet-based compression format called “REDCODE RAW”. That is, no attempt is made at high-quality decoding of the Bayer Mask data in-camera. Similarly, signal processing parameters (white balance etc.) that are normally implemented in real time in conventional video cameras are instead encoded alongside the REDCODE RAW data as so-called “Metadata”, for later implementation in Post Processing. A reduced-specification (720p) decoded output reflecting the metadata parameters is available for live monitoring, but RED do not recommend using this as the actual output signal. 1080p in-camera playback is also available, with similar limitations. The RED One does not provide any sort of analog video output connector.
  • The camera does not directly support tape-based media; recordings are made on systems based on low-cost “consumer” media, either specialized Hard Disk modules (RED-DRIVE), Flash Memory modules (RED-RAM) or speed-selected but otherwise conventional Compact Flash (CF) cards. Available recording times start at a few minutes for CF cards, but vary widely, depending on the type of media used, the frame rate, and the level of compression selected. If tape or other conventional industry media are required for editing, REDCODE RAW files must first be transcoded by a computer, which can be a very lengthy process, possibly exceeding the time required to process and scan an equivalent amount of 35mm film.
  • Because of the proprietary nature of the REDCODE system, the necessary post-production is heavily dependent on software specifically designed for processing RED-sourced files, and may require significant investment in new high-end, largely Mac-based computer systems. This contrasts with competing camera systems that are specifically designed to fit as far as possible into existing Post Production infrastructure.
  • REDCODE RAW is broadly similar to the professional DV and consumer MiniDV formats in that captured footage is stored as a series of discrete “stills”, analogous to a strip of film, rather than being subject to the more rigorous MPEG-type compression. For editing purposes, a DV file can be regarded as a 24/25/30 picture-per-second JPEG-like “slideshow”; a REDCODE RAW file uses a similar principle, except that it uses RED's proprietary form of variable-bit-rate Wavelet compression. The other major difference is that DV and similar formats store more or less “finished” Luminance and Chrominance signals, ready for display/editing, while the RED One simply compresses the RAW data from the sensor chip for subsequent decompression/decoding/processing. The compression ratios available are about 9:1 and 12:1; considerably higher than those used by HDCAM and similar formats.
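The points above can be tied together with a rough recording-time estimate: take the raw sensor data rate (again assuming 12 bits per photosite, which is not a RED specification), divide by the quoted compression ratios, and compare against a hypothetical 16 GB CF card. Real figures vary with media, frame rate and compression level, exactly as noted above.

```python
# Rough recording-time sketch against a HYPOTHETICAL 16 GB CF card.
# 12 bits/photosite is an assumed sample depth, not a RED spec.
raw_bps = 4096 * 2304 * 12 * 24      # raw bits per second at 24 fps

card_bits = 16 * 8e9                 # hypothetical 16 GB card capacity

for ratio in (9, 12):                # the quoted ~9:1 and ~12:1 ratios
    minutes = card_bits / (raw_bps / ratio) / 60
    print(f"{ratio}:1 compression -> roughly {minutes:.0f} minutes per card")
```

Under these assumptions the answer comes out at well under ten minutes per card, which is consistent with the “a few minutes for CF cards” figure quoted above.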

The implementation of the RED One is thus somewhat like a 12 Megapixel still camera that can shoot up to 30 full-resolution images per second, but only in “Compressed RAW” form, and with only limited “live” display capability. If nothing else, it does serve to dramatically illustrate the technological challenges involved in speeding up the capture rate of digital still cameras to achieve 24 fps and higher capture rates. Pocket-size 12 megapixel still cameras are readily available that can take hundreds of images powered by just two AA flashlight cells; ramping this up to practical motion picture rates at the same resolution requires a much larger and heavier camera, with around 75 Watts power consumption.

#2 Stephen Williams

Stephen Williams
  • Sustaining Members
  • 4707 posts
  • Cinematographer
  • Europe

Posted 19 February 2010 - 03:22 AM

More from Keith:-

Why do people object to the term "4K"?


RED routinely refer to the RED One as having “4K” horizontal resolution. Many people object to this claim; what are their reasons for doing so?
Note: This is not intended to be a discussion of the relative merits of the RED One or any other camera. It is merely an attempt to clarify what it is people tend to object to, without getting involved in pointless arguments about which system is better. Remember: this describes the mindset you are up against, not so much the problem itself.

The main objection is simply that in film scanner terminology, which long predates the RED One, “4K” specifically means that each of the 4096 individual pixels in each horizontal row is individually “interrogated” by separate red, green and blue sensors. That is, the data from a single row of pixels consists of 4096 trios of red, green and blue values, a total of 12,288.
Stating that the RED One has “4K” horizontal resolution implies that the decoded colour images produced by the RED One’s Mysterium sensor are similarly made up of 4,096 trios of RGB pixels. That would indeed be the case if the camera used three 4K sensors and a dichroic beam-splitter prism, as is used in “3-Chip” cameras. In reality, the use of Bayer masking means that in each horizontal row of 4,096 photosites on the underlying monochrome sensor, each photosite responds to only a single colour: red, green or blue. That is, two of the three colour values of each pixel are not measured and have to be estimated from the values in the surrounding pixels.
So while the final “4K” output may well consist of 4,096 RGB values per horizontal row of pixels, these values are the result of calculations and guesswork by the processing computer, not actual measurements. Some people outright regard this as misleading, but most people are simply concerned by the difficulty of pinning down the actual resolution. Manufacturers often show images taken of test charts, but in the past at least, cameras and other types of video apparatus have been notorious for giving far better results shooting test charts than they do shooting real images!
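To make the “calculations and guesswork” step concrete, here is a toy bilinear demosaic. This is purely illustrative: RED’s actual demosaic processing is proprietary and far more sophisticated, and the RGGB mask layout used here is an assumption.

```python
import numpy as np

def _conv3(a, k):
    """3x3 weighted neighbourhood sum (zero-padded edges)."""
    p = np.pad(a, 1)
    h, w = a.shape
    out = np.zeros((h, w))
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += k[dy + 1, dx + 1] * p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def demosaic_bilinear(raw):
    """Toy bilinear demosaic of an RGGB Bayer-masked sensor.

    Each photosite measured only ONE colour; the other two channel
    values are *estimated* from neighbouring photosites, exactly the
    guesswork described above."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5],
                       [0.25, 0.5, 0.25]])
    rgb = np.zeros((h, w, 3))
    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        est = _conv3(raw * mask, kernel) / _conv3(mask, kernel)
        # Keep the actual measurement where one exists; estimate elsewhere.
        rgb[..., ch] = np.where(mask == 1, raw, est)
    return rgb

# A uniform grey scene demosaics back exactly -- but only because the
# scene has no detail; fine detail is where the estimates can go wrong.
flat = np.full((4, 6), 100.0)
print(np.allclose(demosaic_bilinear(flat), 100.0))  # True
```

The interesting failure cases are precisely the ones test charts probe: detail near the pixel pitch, where two-thirds of each output pixel is interpolated rather than measured.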
This contrasts with 3-chip designs (or single-chip cameras with straightforward RGB filter arrays, such as the Panavision Genesis): in those, the output is perceived as very much “what-you-see-is-what-is-actually-there”, and even relatively non-technical users can feel comfortable that they understand the principle of operation. (Whether they actually do is beside the point, if they are the ones writing the checks!)
While Bayer Masking can produce perfectly acceptable results, and is in fact the principle used by the bulk of digital still cameras, people still tend to question the validity of a system that depends on an automated processor to make “best guesses” about data that is not actually captured. It’s one thing to get a couple of substandard shots from your stills camera; it’s quite another to have to “Please Explain” on a multi-million dollar commercial or ad campaign.
Whether any of the above are valid concerns will not be judged here. The reality is that in its currently used form, the RED One does not seem to offer enough clear advantages to offset the risk of endorsing a system that, for many Industry people, remains very much an unknown quantity.

Creative Pixel Accountancy (CPA)
It is important to understand the difference between the source resolution and the number of display pixels.
Before the days of digital signal processing, determining the output resolution of a video source was a pretty straightforward affair. Whatever number of scanning lines, frames per second etc the camera or other video source produced, every TV set or monitor displaying the images had to reproduce exactly those numbers on the screen. In a television broadcast to a large city (or even the whole country) all the millions of TV receivers had to be precisely synchronized to the master oscillators in the TV station. Similarly, every piece of video equipment used to output the program had to be “Genlocked” to the master station sync.
With the development of standards converters, timebase correctors and frame stores in the 1970s it at last became possible to remove the stranglehold of genlocking, meaning that non-synchronized video sources could then be freely intermixed with genlocked studio sources.
This also meant that overseas-sourced video on different TV standards could be fed into the studio data stream, which introduced the concept of re-mapping pixels: for example, converting 625-line, 25 frames per second PAL into 525-line, 29.97 frames per second NTSC.
With the recent proliferation of low-data-rate video sources, technology became available that could also convert extremely low-resolution images (from Webcams, cellphones and the like) to 625- or 525-line PAL or NTSC. By no means does that mean that the output signals have anything like the full resolution of PAL or NTSC; all this was ever intended to do was allow the low-resolution, low-frame-rate images to be viewed on conventional TV sets.
Probably the most common application of this is/was allowing digital still cameras to capture basic but useable video on flash memory and display or record it on standard TV equipment. Typically the captured images would be around 320 x 240 pixels in size with a capture frame rate of around 10 fps. The camera’s internal processor would re-map the image data onto PAL or NTSC video that could be displayed on a standard TV set.
While that practice is perfectly legitimate, there is an increasing tendency for manufacturers to apply similar processing to signals that are only ever stored as file data. So we have a rash of pocket “spycam” gadgets which can only produce a measured resolution of, for example, 200 x 150 pixels at 15 frames per second, but which are advertised as “VGA 640 x 480 30 fps”.
Since most of these gadgets have no direct video output capability, there is no meaningful justification for this “uprezzing” of the original data, since the files can only ever be displayed on computers that are perfectly capable of doing this themselves. In any case, even actual 640 x 480 VGA normally gets displayed on whatever number of pixels the user chooses.
“VGA 640 x 480” is basically broadcast standard NTSC, and with the right source material, is capable of extremely good picture quality, as witnessed by NTSC component video from a DVD player. The only thing this has in common with video from a cheap camera is that on a CRT-based TV monitor, the line and frame rates are the same. Otherwise, the effect is approximately equivalent to using the very best NTSC camera available, to shoot VHS playback off a TV screen.
A simple illustration of CPA is to use Microsoft Word’s “Insert > Picture” function to insert an image into a blank page. The file can be saved under the name (for example) “smallpicture.doc”. Then, right-clicking on the image and using one of its “sizing handles” allows it to be stretched to a larger size (say, with four times as many pixels). The Word file is then saved under a different name, e.g. “largepicture.doc”. Checking “Properties” for each file, it will be noted that they have exactly the same file size! So even though the embedded image appears to have four times as many pixels, no more actual data has been generated.
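The same demonstration can be done programmatically. A 2x nearest-neighbour upscale (a stand-in here for whatever “uprezzing” a given gadget actually performs, which is an assumption) quadruples the pixel count, yet the original image can be recovered exactly, so no new information was created:

```python
import numpy as np

# "Creative Pixel Accountancy" in miniature: a 2x nearest-neighbour
# upscale quadruples the pixel count but adds no information at all --
# the original image is recoverable exactly from the "bigger" one.
rng = np.random.default_rng(0)
small = rng.integers(0, 256, size=(150, 200))    # a 200 x 150 "spycam" frame

big = small.repeat(2, axis=0).repeat(2, axis=1)  # now 400 x 300 pixels

print(big.size // small.size)                    # 4: four times as many pixels
print(np.array_equal(big[::2, ::2], small))      # True: zero new data created
```

A real uprezzer would use smoother interpolation than pixel repetition, but the information-theoretic point is the same: the extra pixels are computed from the originals, not measured.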
Unfortunately, CPA is not confined to pocket spycams and the like. The majority of so-called “1080p” video cameras do not actually produce 1920 x 1080 images. What they do produce is a considerably lower-resolution image, which is then re-mapped onto 1920 x 1080 pixels.
This might have had some justification a decade ago, when virtually all monitors capable of displaying a 1920 x 1080 raster were CRT-based, but the notion of having to format everything to suit a one-size-fits-all video format is basically a World War II-era concept; most monitors and editing systems these days can handle virtually anything that is fed to them.

“If that was true then…”
Note: Again, this is an awkward question you might be called upon to answer, not an invitation to a debate!
A point that is often raised is that if it were really possible to produce a true “4K” output from a 4K monochrome sensor with a Bayer Mask, it should also be possible to produce true HD (1920 x 1080) RGB output by Bayer-masking a single 1920 x 1080 monochrome sensor of the type that has been used in 3-chip HDTV cameras for the past 15 years. While cameras with such sensors are available, they do not produce anything like the image quality of a camera that uses three such chips and a dichroic prism for RGB separation.
At present there are no 4K versions of the popular 2/3-inch 1920 x 1080 3-chip format, although the technology to achieve it is certainly available. The main reason no such products exist is simply that there is currently no perceived market for them. However, this could well change with the development of affordable “4K” projection and other display technologies.
One would also have to ask what such a device would be called, since “4K” appears to be already taken.





