RED production schedule


  • This topic is locked
495 replies to this topic

#1 Carl Brighton

Carl Brighton
  • Basic Members
  • PipPipPip
  • 222 posts
  • 1st Assistant Camera

Posted 14 January 2007 - 07:52 PM

I seem to remember reading that Jim Jannard was quoting "late 2006" for the first production prototypes of the RED camera, and delivery to customers in the second quarter of 2007.

However, December 2006 has come and gone and I haven't heard anything further. Interestingly, on the RED website they mention there's a last-chance reservation window from Jan 21 to Jan 24, but nothing much else.

The photo gallery is now just a series of "renders" of the RED One, whatever that means, and their "News" section contains several items, all with the exact same date: 2006-08-10.

Does Jannard still post anywhere? I haven't seen anything from him for ages.


#2 Mitch Gross

Mitch Gross
  • Basic Members
  • PipPipPipPip
  • 2873 posts
  • Cinematographer

Posted 15 January 2007 - 12:37 AM

Apparently there's a new forum, one that is sponsored by RED itself. But to cut to the chase, you can get some details from Mike Curtis' hdforindies.com blog. He's very much drunk on the RED Kool-Aid, but it can be a decent clearing-house spot. RED has a functioning prototype that is actually in the production-style camera body. There are some pictures of it sitting next to a Sony F900R and a Panasonic HVX200.

I understand there are some plans to sneak debut the camera soon, but you won't catch me telling the where & when.

They are apparently on or close to schedule, although a recent change in their specs (which they did always say were fully subject to change) makes it functionally much more of a very nice 2K camera in the vein of the Silicon Imaging SI-2K than a 4K camera. And frankly, that's what most people would use it for in general production anyway. It does do 4K, but capture options are limited. Still, absolutely stunning technology and a real sign of where the industry is eventually headed.

#3 Graeme Nattress

Graeme Nattress
  • Basic Members
  • PipPipPip
  • 145 posts
  • Other

Posted 15 January 2007 - 03:55 AM

www.reduser.net is the site if you're interested.

I don't know what you mean about making it more of a 2k camera. I'd say current changes are exactly the opposite of that.

Graeme

#4 Carl Brighton

Carl Brighton
  • Basic Members
  • PipPipPip
  • 222 posts
  • 1st Assistant Camera

Posted 15 January 2007 - 08:48 AM

I've gone over to RedUser.net, and I was intrigued by
this post by Stephen Williams

He says that "cinematography.com" have banned further discussion of the RED until a working camera is available for testing. I don't recall seeing that here anywhere. They didn't stop me starting this discussion!


www.reduser.net is the site if you're interested.

I don't know what you mean about making it more of a 2k camera. I'd say current changes are exactly the opposite of that.

Graeme

I have to confess this is one bit I don't understand. If you have a 4K chip Bayer-masked, it means the green is only sampled 2,000 times and the red and blue 1,000 times each. You might be able to synthesize 4000 horizontal pixels that are all different from each other, but how representative are they of the actual 4000 pixels of the original light image?

Also some of your images are captioned "Shot without a low pass filter." What exactly does that mean?
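For anyone curious about the photosite bookkeeping behind Carl's 2,000/1,000 figures, here is a minimal sketch. The 4096 x 2304 dimensions are just a hypothetical "4K" sensor chosen for illustration, not RED's published spec.

```python
# Count how many photosites of each colour a Bayer mosaic actually records.
# The 4096 x 2304 dimensions are a hypothetical "4K" sensor for illustration.
import numpy as np

width, height = 4096, 2304

# Build a Bayer colour-filter-array map with an 'RGGB' tiling:
# row 0: R G R G ...   row 1: G B G B ...
cfa = np.empty((height, width), dtype="<U1")
cfa[0::2, 0::2] = "R"
cfa[0::2, 1::2] = "G"
cfa[1::2, 0::2] = "G"
cfa[1::2, 1::2] = "B"

total = width * height
for colour in "RGB":
    n = np.count_nonzero(cfa == colour)
    print(f"{colour}: {n:>9,} photosites ({100 * n / total:.0f}% of the sensor)")

# Prints roughly: R 25%, G 50%, B 25% -- half the photosites see green,
# a quarter each see red and blue; every other value is interpolated.
```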

#5 Stephen Williams

Stephen Williams
  • Sustaining Members
  • 4708 posts
  • Cinematographer
  • Europe

Posted 15 January 2007 - 09:34 AM

I've gone over to RedUser.net, and I was intrigued by
this post by Stephen Williams

He says that "cinematography.com" have banned further discussion of the RED until a working camera is available for testing. I don't recall seeing that here anywhere. They didn't stop me starting this discussion!

I have to confess this is one bit I don't understand. If you have a 4K chip Bayer-masked, it means the green is only sampled 2,000 times and the red and blue 1,000 times each. You might be able to synthesize 4000 horizontal pixels that are all different from each other, but how representative are they of the actual 4000 pixels of the original light image?

Also some of your images are captioned "Shot without a low pass filter." What exactly does that mean?


Hi,

Check out the last post by Tim Tyler.
http://www.cinematog...n...18361&st=45

Stephen

#6 Graeme Nattress

Graeme Nattress
  • Basic Members
  • PipPipPip
  • 145 posts
  • Other

Posted 15 January 2007 - 09:47 AM

Answer is: very representative. If you stick a high-quality algorithm on the reconstruction, you get really, really nice images.

Perhaps you'd better come over to reduser.net though if we're not allowed to talk about RED here?

Graeme

#7 Carl Brighton

Carl Brighton
  • Basic Members
  • PipPipPip
  • 222 posts
  • 1st Assistant Camera

Posted 15 January 2007 - 10:01 AM

No, if you look at the above-mentioned Tim Tyler post, you're not allowed to "tout" a camera that either doesn't yet exist or isn't available for independent evaluation. I think it's OK to talk about the RED in general terms, but not as though you've actually used one, which is how a lot of fanboys were starting to sound.

Perhaps you'd better come over to reduser.net though if we're not allowed to talk about RED here?

Graeme

No thanks, I'll just sit on the sidelines. I don't really care one way or the other about the RED, and I get the distinct impression anybody who asks any sensible questions there is going to be shown the door rather quickly, a la some of the other fanboy-oriented forums.

Edited by Carl Brighton, 15 January 2007 - 10:02 AM.


#8 Carl Brighton

Carl Brighton
  • Basic Members
  • PipPipPip
  • 222 posts
  • 1st Assistant Camera

Posted 15 January 2007 - 10:13 AM

Answer is: very representative. If you stick a high-quality algorithm on the reconstruction, you get really, really nice images.

Graeme

Errr, OK, so if I take a small block of pixels from a digital photo, blow them up in Photoshop so you can see the actual pixels, use the Photoshop eyedropper thingie to analyse the red, green, and blue components of each pixel, and then make a new block of pixels using just the red, green, and blue components in a standard Bayer pattern, you'll be able to tell me what the red and blue components were on the green pixels, the green and blue components were on the red pixels, and the green and red components were on the blue pixels?

I'm not saying you can't do it, but I'd love to know HOW you do it! Can this technique be applied to consumer camcorders?

Edited by Carl Brighton, 15 January 2007 - 10:16 AM.
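Carl's Photoshop experiment can be run directly in code. The sketch below is only an illustration under stated assumptions: a synthetic gradient stands in for the photo crop, and plain bilinear interpolation stands in for whatever de-Bayer algorithm RED actually uses.

```python
# Simulate Carl's experiment: keep one colour per pixel (Bayer RGGB),
# reconstruct the missing values, and measure how far off the guesses are.
import numpy as np
from scipy.ndimage import convolve

# Synthetic 64x64 RGB "photo": smooth gradients plus some hard edges.
h, w = 64, 64
y, x = np.mgrid[0:h, 0:w]
img = np.dstack([x / w, y / h, ((x + y) % 16 < 8).astype(float)])

# 1. Throw away two of the three channels per pixel, RGGB Bayer layout.
masks = np.zeros((h, w, 3), dtype=bool)
masks[0::2, 0::2, 0] = True   # red photosites
masks[0::2, 1::2, 1] = True   # green photosites (red rows)
masks[1::2, 0::2, 1] = True   # green photosites (blue rows)
masks[1::2, 1::2, 2] = True   # blue photosites
mosaic = img * masks          # what the sensor "sees"

# 2. Reconstruct the missing samples by bilinear interpolation
#    (the textbook baseline, not a production-quality de-Bayer).
k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
rebuilt = np.dstack([
    convolve(mosaic[..., 0], k_rb),
    convolve(mosaic[..., 1], k_g),
    convolve(mosaic[..., 2], k_rb),
])

# 3. How far off are the guessed values from the original pixels?
print("mean abs error per channel:", np.abs(rebuilt - img).mean(axis=(0, 1)))
```

In smooth areas the guessed values land very close to the originals; the errors concentrate around edges, which is exactly where the cleverer algorithms earn their keep.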


#9 Phil Rhodes

Phil Rhodes
  • Sustaining Members
  • 11941 posts
  • Other

Posted 15 January 2007 - 10:52 AM

Hi,

> Answer is: very representative.

Look, this is really simple: post your mathematics or expect criticism.

Phil

#10 Graeme Nattress

Graeme Nattress
  • Basic Members
  • PipPipPip
  • 145 posts
  • Other

Posted 15 January 2007 - 11:01 AM

Carl, http://scien.stanfor...ngchen/main.htm will probably give you a good idea of how it all works.

Traditionally, consumer cameras have used overly simple algorithms. We can use much better algorithms because we have the horsepower in camera and in computer. There's no reason why consumer cameras today couldn't do the same.

#11 Carl Brighton

Carl Brighton
  • Basic Members
  • PipPipPip
  • 222 posts
  • 1st Assistant Camera

Posted 15 January 2007 - 11:11 AM

Carl, http://scien.stanfor...ngchen/main.htm will probably give you a good idea of how it all works.

Traditionally, consumer cameras have used overly simple algorithms. We can use much better algorithms because we have the horsepower in camera and in computer. There's no reason why consumer cameras today couldn't do the same.


Well, OK, so who is actually developing this algorithm? I didn't find that link especially informative; all he seemed to be telling us is what needs to be achieved, rather than how to achieve it. At the end he says:

"As far as the future work is concerned, an algorithm that is both superior in image reproduction and computationaly efficient is still worth pursuing given the fact that none of the existing algorithm satisfies both criteria."

>> There's no reason why consumer cameras today couldn't do the same.
Then why don't they?

#12 David Mullen ASC

David Mullen ASC
  • Sustaining Members
  • 19762 posts
  • Cinematographer
  • Los Angeles

Posted 15 January 2007 - 11:20 AM

You sound like you're trying to pick a fight, Carl. What's the point of being so combative?

This is an extremely old argument we've already had here several times on de-Bayering algorithms and "true" resolution. You're not going to get any satisfactory answers.

The simple answer is that a simple algorithm just takes all the red, green, and blue photosites and derives RGB from them, whereas a complex algorithm makes educated guesses as to the color of the photosite next to another photosite (partially due to the fact that each photosite doesn't perfectly filter out the other two colors) and then reconstructs that information.

As for exact details of that algorithm, to some degree, I'm sure that might come under the category of a corporate secret.

Ultimately all that matters is the end result, not the numbers, so when the camera comes out, test it for yourself and decide if you like the level of resolution.

Until then, my own private belief is that a 4K Bayer-filtered camera would roughly be a "3K" camera -- in other words, I don't believe that half the resolution is lost but I don't believe it is lossless either. But like I said, all that matters are end results, not math.

As for why digital still cameras don't put more powerful processors into them, I think you can make an educated guess as to why.
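As an aside, here is a toy illustration of the simple-versus-"educated guess" distinction described above, with made-up numbers: when guessing the green value at a red or blue photosite, a naive de-Bayer averages all four green neighbours, while an edge-aware one interpolates along the direction where the picture changes least. Real algorithms are far more involved; this is only the flavour of the idea.

```python
# Toy comparison: naive vs. edge-aware green interpolation at one missing site.
import numpy as np

def fill_green_naive(g, r, c):
    """Plain bilinear: average the four green neighbours."""
    return (g[r - 1, c] + g[r + 1, c] + g[r, c - 1] + g[r, c + 1]) / 4.0

def fill_green_edge_aware(g, r, c):
    """Interpolate along the direction with the smaller gradient."""
    dv = abs(g[r - 1, c] - g[r + 1, c])   # vertical change
    dh = abs(g[r, c - 1] - g[r, c + 1])   # horizontal change
    if dv < dh:                           # edge runs vertically
        return (g[r - 1, c] + g[r + 1, c]) / 2.0
    return (g[r, c - 1] + g[r, c + 1]) / 2.0

# A vertical edge in the green plane: bright on the left, dark on the right.
g = np.array([[9, 9, 1, 1],
              [9, 0, 1, 1],   # the 0 marks a red/blue site with no green sample
              [9, 9, 1, 1],
              [9, 9, 1, 1]], dtype=float)

print("naive guess:     ", fill_green_naive(g, 1, 1))       # 7.0 -- smears the edge
print("edge-aware guess:", fill_green_edge_aware(g, 1, 1))  # 9.0 -- keeps the edge
```

Across a hard edge, the naive average blends bright and dark together, while the edge-aware guess stays on the correct side of the edge, which is one reason the smarter algorithms recover more apparent resolution from the same photosites.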

#13 Robert Hughes

Robert Hughes
  • Basic Members
  • PipPipPipPip
  • 873 posts
  • Sound Department
  • Minneapolis

Posted 15 January 2007 - 01:03 PM

From what I'm reading here, I'm getting the impression that this fancy footwork with filtering does not have to be limited to the RED, but may be a more general algorithm for improving resolution with any image gathering device.

I can see it now: running my Super 8 footage through the RED algorithm once, I get 16mm; run it through again, and get 35mm; a third time, IMAX! Now we're talking serious resolution. Isn't software amazing?

#14 K Borowski

K Borowski
  • Basic Members
  • PipPipPipPip
  • 3905 posts
  • Camera Operator
  • I.A.T.S.E. Local # 600 Eastern Region

Posted 15 January 2007 - 01:39 PM

Uh Robert, that's just stupid. No offense.

So, weren't these RED threads banned here? Why hasn't this thread been "unstitched" from the site yet?

#15 Troy Warr

Troy Warr
  • Basic Members
  • PipPipPip
  • 212 posts
  • Other
  • Austin, TX, USA

Posted 15 January 2007 - 02:19 PM

Well, OK, so who is actually developing this algorithm?


Carl, I think that Graeme himself is in charge of developing this algorithm.

I have a feeling that Tim banned Red One "touting" for exactly this reason - when you have a camera that can't yet be proven, especially one that's attempting to take a novel route like the Red One, there's no way to prevent a technical discussion from turning into a lot of speculation, skepticism, and occasionally, hostility.

Personally, I'm glad that Graeme and the Red team have been so forthcoming about development of the Red One, and I personally enjoy reading his and other Red staff's discussions at reduser.net. I have my healthy skepticisms, as I think most people probably do to some degree - but I think the point is, there's really no need to discuss that aspect of things, as you can have the "I don't think it's possible" vs. "I do think it's possible" debate all day, with nothing resolved. Until we actually find out if it is possible (by the production release and industry testing of the Red One camera), we might as well discuss another topic.

I don't really care one way or the other about the RED, and I get the distinct impression anybody who asks any sensible questions there is going to be shown the door rather quickly, a la some of the other fanboy-oriented forums.


It sounds like you've witnessed a few people "shown the door" at reduser.net, probably not because they asked a sensible question, but because they found it necessary to argue the feasibility of Red's goals. I think the implied point of the forum is that by participating, you're confident in the project and are interested in seeing it succeed. If you're not, why would you waste your time? Let the camera come out and make up your mind about it then, and continue to use and discuss the camera gear that's available today and proven to be effective now.

#16 Byron Karl

Byron Karl
  • Basic Members
  • PipPip
  • 65 posts
  • Other

Posted 15 January 2007 - 02:21 PM

Robert's comment just reminded me of something I wanted to ask about. Has anyone discussed / heard of something called "Alchemist"? David Lynch mentioned using "Alchemist" on Inland Empire to "upres" the image from his PD-150 to something akin to Hi-Def resolution. I think he mentioned something about how it fills in the missing information when "upresing."

Edited by George Gordon, 15 January 2007 - 02:22 PM.


#17 Stephen Williams

Stephen Williams
  • Sustaining Members
  • 4708 posts
  • Cinematographer
  • Europe

Posted 15 January 2007 - 03:00 PM

Robert's comment just reminded me of something I wanted to ask about. Has anyone discussed / heard of something called "Alchemist"? David Lynch mentioned using "Alchemist" on Inland Empire to "upres" the image from his PD-150 to something akin to Hi-Def resolution. I think he mentioned something about how it fills in the missing information when "upresing."


Hi,

Saw it at IBC 2 years ago, made by Snell & Wilcox. It's quite impressive.

Stephen

#18 Lance Flores

Lance Flores
  • Basic Members
  • PipPipPip
  • 124 posts
  • Producer
  • San Antonio/Dallas/Detroit

Posted 15 January 2007 - 04:45 PM

From what I'm reading here, I'm getting the impression that this fancy footwork with filtering does not have to be limited to the RED, but may be a more general algorithm for improving resolution with any image gathering device.

I can see it now: running my Super 8 footage through the RED algorithm once, I get 16mm; run it through again, and get 35mm; a third time, IMAX! Now we're talking serious resolution. Isn't software amazing?


Ummm, yes and no ... and it depends if you swallowed the red pill or the blue pill, Neo. The more data ... the more accurately you can resolve images ... more importantly ... the more accurate the data, the more accurately you (a good algorithm) are able to predict; moreover, you can extrapolate/interpolate information and predict that which extends beyond the data you have captured. You apply heuristics and other algorithms, which are deterministic functions, to resolve density or color, and end up with (hopefully) a more accurate image than the one you originally acquired. It is a naturally occurring phenomenon in nature and in synthesized visual recognition.

I read in another thread someone referring to film as storing "real" optical images and digital cameras creating synthetic or artificial images, or something to that effect. David noted that by that person's own definitions, what we were seeing was artificial. This was an astute observation and corollary, and a point worth reiterating here.

The point is that none of what film, present solid-state sensors, and the human visual sensory system acquire is precisely "real." In the real world, Neo, the energy which propagates as reflections from objects or as original emission is discrete. Human vision, digital cameras, and film essentially sense three colors (generally speaking; there are methods of sampling more) and interpolate this information, or at least portions of it, into images we deem a reasonable facsimile of the "real" image. This happens in real time, starting at our optic chiasma, where the bio analog/digital data is correlated (by algorithms). The data is then pre-processed through several more algorithmic steps at different locations and arrives at the visual cortex, where the final processing and portions of the recognition process take place. This is a rather simplified explanation, but the essential point is that we don't see the discrete real world anyway. What we see through our eyes and what cameras see, whether via a silver-salt emulsion or present solid state, all depend on some algorithmic interpretation of the real world.

So, Super 8 to IMAX algorithms and processing. It's possible, but it would not end up looking like 70mm or 120mm film frames; it could be better or worse depending on your perspective of what the outcome should be. It's all an illusion, but that's what filmmaking, for the most part, is ... illusion. Like purple and lavender ... no such colour. They're just some colors we made up in our evolution because they served as a benefit for the organism.

And the probability that such a process would be created for Super 8 ... about the same as the Cubs matching the World Series record of the Yankees in our lifetime.

Which all begs the question, "what is real" Neo?

#19 David Mullen ASC

David Mullen ASC
  • Sustaining Members
  • 19762 posts
  • Cinematographer
  • Los Angeles

Posted 15 January 2007 - 04:56 PM

Trying to use post-processing to "fill in the blanks" of resolution, color, exposure information, etc. is somewhat hit and miss -- some things are easier to do successfully than others, but they are all workaround solutions. Plus they can create artifacts if the image is pushed in certain directions.

There is no real substitute for capturing more information to begin with, but in the real world, we make do with what we can afford. Even 35mm seems like a compromise if you look at 65mm photography...

Really high-resolution sensors (let's say 24MP) would help for Bayer-filter cameras because you'd be starting with more information, but that's still not practical for cine cameras. Plus you may have a problem with sensitivity as you make smaller and smaller photosites to fit more of them into a 35mm-sized area.
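To put rough numbers on the photosite-size point, here is a quick sketch. The assumptions (a Super 35-ish active area of about 25 mm x 14 mm, square photosites, and the two pixel counts) are purely for illustration and are not any real camera's spec.

```python
# Rough photosite pitch if a given pixel grid has to fit a Super 35-ish area.
sensor_w_mm, sensor_h_mm = 25.0, 14.0   # assumed active area, illustration only

def pitch_um(h_pixels, v_pixels):
    """Largest square pitch (microns) at which h x v photosites still fit."""
    return min(sensor_w_mm / h_pixels, sensor_h_mm / v_pixels) * 1000.0

p_4k = pitch_um(4096, 2304)      # a hypothetical 4K Bayer sensor
p_24mp = pitch_um(6000, 4000)    # a hypothetical 24 MP stills-style sensor

print(f"4K-ish pitch:   {p_4k:.1f} um")
print(f"24MP-ish pitch: {p_24mp:.1f} um")
print(f"light per photosite drops to ~{(p_24mp / p_4k) ** 2:.0%} of the 4K case")
```

That drop to roughly a third of the light per photosite is the sensitivity trade-off in a nutshell.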

#20 Lance Flores

Lance Flores
  • Basic Members
  • PipPipPip
  • 124 posts
  • Producer
  • San Antonio/Dallas/Detroit

Posted 15 January 2007 - 05:45 PM

Trying to use post-processing to "fill in the blanks" of resolution, color, exposure information, etc. is somewhat hit and miss -- some things are easier to do successfully than others, but they are all workaround solutions. Plus they can create artifacts if the image is pushed in certain directions.

There is no real substitute for capturing more information to begin with, but in the real world, we make do with what we can afford. Even 35mm seems like a compromise if you look at 65mm photography...

Really high-resolution sensors (let's say 24MP) would help for Bayer-filter cameras because you'd be starting with more information, but that's still not practical for cine cameras. Plus you may have a problem with sensitivity as you make smaller and smaller photosites to fit more of them into a 35mm-sized area.


Exactly, David. The more economically feasible data you can get, the better, resolution being only one attribute. Cell size is well noted. The problem with just scaling more cells into a 35mm format is that the inherent noise stays about the same, reducing the S/N ratio and causing a loss of a couple of bits. Some of this can be corrected using a 5T sense-amp circuit with more feedback, and other mechanisms like a sense-line equilibration technique, or rather the data line for an optical sense/storage cell. The cost would be to sensitivity, but there are ways to compensate. You're right. Even with such enhancements and the improvements they bring, you can't reduce the cell size to gain resolution with appreciable results. You have to go with a larger cell; thus, you must move to a larger format. There are other problems (challenges) you create by doing this, but there are many benefits as well, like a substantially (I anticipate, but I'll know in a few hours) higher S/N ratio, which will substantially improve the dynamic range.
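A back-of-the-envelope version of that S/N argument, considering photon shot noise only (read noise and everything else ignored) and using made-up full-well figures rather than any real sensor's numbers:

```python
# Shot-noise-limited SNR for two hypothetical photosite sizes.
import math

def shot_noise_stats(full_well_electrons):
    snr = math.sqrt(full_well_electrons)            # shot-noise-limited SNR at saturation
    usable_bits = math.log2(snr) if snr > 1 else 0.0
    return snr, usable_bits

for label, full_well in [("large cell (hypothetical 30,000 e-)", 30_000),
                         ("small cell (hypothetical  7,500 e-)", 7_500)]:
    snr, bits = shot_noise_stats(full_well)
    print(f"{label}: SNR ~{snr:.0f}:1, ~{bits:.1f} usable bits above the noise")

# Halving the cell pitch quarters the area and (roughly) the full well,
# which halves the SNR -- about one bit lost per halving of pitch, in line
# with "causing a loss of a couple of bits" once cells shrink a lot.
```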

