How did 2K DIs become standard?


20 replies to this topic

#1 cole t parzenn

cole t parzenn
  • Basic Members
  • 287 posts
  • Other

Posted 20 September 2014 - 10:34 PM

And why kibis?






#2 cole t parzenn

cole t parzenn
  • Basic Members
  • 287 posts
  • Other

Posted 20 September 2014 - 11:09 PM

And, in anticipation of this conversation turning to digital projection, why 1.89?



#3 David Mullen ASC

David Mullen ASC
  • Sustaining Members
  • 18789 posts
  • Cinematographer
  • Los Angeles

Posted 21 September 2014 - 01:19 AM

I think the 4K 10-bit RGB Cineon Log standard was created by Kodak Cinesite when they created the D.I. concept and did the restoration of "Snow White" in the early 1990s, but I could be wrong; 4096 pixels across may have been decided earlier.

2K was essentially a cheaper, faster, "good enough" compromise for digital effects work starting in the late 1990s; it's one-quarter the size of 4K.

1.89, I'm guessing, is just the result of DCI deciding that the maximum package container dimensions for 2K would be 2048 pixels wide by 1080 pixels tall, so 1.85 images would use the full 1080 height but less width, and 2.40 images would use the full 2048 width but less height. So essentially it was their compromise between 1.85 and 2.40 (and 1.33, etc.).
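
A quick back-of-the-envelope check of that container arithmetic (a minimal Python sketch; the 1998 x 1080 "flat" and 2048 x 858 "scope" figures are the DCI 2K sub-containers):

    # DCI 2K container and its two common sub-framings
    container = (2048, 1080)
    flat = (1998, 1080)     # 1.85 content: full height, less width
    scope = (2048, 858)     # 2.39 content: full width, less height

    for name, (w, h) in (("container", container), ("flat", flat), ("scope", scope)):
        print(f"{name}: {w}x{h} -> {w / h:.3f}:1")

    # container: 2048x1080 -> 1.896:1  (the "1.89" figure)
    # flat:      1998x1080 -> 1.850:1
    # scope:     2048x858  -> 2.387:1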

#4 Brian Drysdale

Brian Drysdale
  • Basic Members
  • 4745 posts
  • Cinematographer

Posted 21 September 2014 - 01:38 AM

I'd need to dig out the scanning resolutions for the films, but I have a feeling the ILM guys were using 4K for films at the beginning of the 1990s.

 

The 2K for D.I., I suspect, comes out of the resolution of 35mm prints, plus no doubt costs. There was much discussion in the 2000s of HDTV being the same and matching the eye's resolution, although this didn't seem to factor in those people sitting in the front stalls.

 

BTW, the front stalls used to be cheaper.



#5 cole t parzenn

cole t parzenn
  • Basic Members
  • 287 posts
  • Other

Posted 22 September 2014 - 11:22 AM

I understand that 2K is cheaper than 4K, but isn't it still more expensive than scanning a timed IP? Is the correct question, "How did DIs become standard?"

 

What's special about 1080?



#6 David Mullen ASC

David Mullen ASC
  • Sustaining Members
  • 18789 posts
  • Cinematographer
  • Los Angeles

Posted 22 September 2014 - 01:58 PM

You're mixing concepts here -- scanning a timed I.P. versus the o-neg is still a D.I. if the goal is a cinema release, whether from a print made by recording the digital files back out to film or from a DCP.

If you scanned a color-timed I.P. you'd still be picking 2K or 4K; the only cost savings would be that the digital color-correction would take a few days rather than a few weeks, and you wouldn't have to spend time doing a conform session.

If you're asking why a D.I. instead of a photochemical finish, it's a combination of reasons. One is that many filmmakers and DPs want to do D.I.'s for all the electronic color-correction tools -- control over gamma, saturation, and black level, three things that are hard to customize in photochemical post -- plus the ability to adjust only portions of the frame using windows. Another is the elimination of optical-printer transitions and titles, which never match the quality of the surrounding shots. Also, CGI effects can be matched better to the surrounding shots; an effects-heavy movie can be half digital outputs already.

Other reasons include the fact that nowadays more people are going to see a digital version anyway. And lastly, doing a D.I. means doing all the color-correction during one period, with that master serving as the basis, with tweaks, for both cinema and broadcast/home video, rather than doing it all again later in a telecine suite to make an HD master for home video. Plus these days an HD master might not be enough; soon a 2K master won't be enough, and the studios will want a 4K master made.

There's a whole history behind the number of scan lines for analog SD and HD, and behind how the pixel dimensions of the digital picture versions got chosen. The same goes for numbers like 4096 and 2048.

#7 David Mullen ASC

David Mullen ASC
  • Sustaining Members
  • 18789 posts
  • Cinematographer
  • Los Angeles

Posted 22 September 2014 - 02:19 PM

Since 4096 is the number of values you get from 12-bit sampling, I suspect that the answer may be something like: the scanners that Kodak Cinesite developed in the 1990s worked at 12-bit samples per color channel, hence the number 4096 instead of 4000. Someone like Phil could explain the math better than me.
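
A minimal sketch of that relationship (Python; nothing here is specific to any actual scanner hardware):

    # n bits of sampling give 2**n distinct values
    for bits in (8, 10, 12):
        print(f"{bits}-bit sampling: {2 ** bits} values")

    # 8-bit sampling:  256 values
    # 10-bit sampling: 1024 values
    # 12-bit sampling: 4096 values -- hence 4096 rather than a round 4000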

#8 David Mullen ASC

David Mullen ASC
  • Sustaining Members
  • 18789 posts
  • Cinematographer
  • Los Angeles

Posted 22 September 2014 - 02:33 PM

If you can get through the engineering math of this article, it may explain some of the odd numbers for video:
http://www.quadibloc...ence/freq01.htm

#9 Freya Black

Freya Black
  • Basic Members
  • 4032 posts
  • Other
  • Right on the edge in London

Posted 23 September 2014 - 04:28 AM

Since 4096 is the number of values you get from 12-bit sampling, I suspect that the answer may be something like: the scanners that Kodak Cinesite developed in the 1990s worked at 12-bit samples per color channel, hence the number 4096 instead of 4000. Someone like Phil could explain the math better than me.

 

I think it's mostly just that 4096 is one of those magic computer numbers that originate because bytes are 8 bits in size, and so a kilobyte is 1024 bytes and not 1000.

 

What I mean by magic numbers is this:

 

1024

2048

4096

 

See they double up nicely.

 

It isn't just memory that's like this, but also the CPU, which executes all the commands.

There we go: 8-bit, 16-bit, 32-bit and now 64-bit processors.

Anyway, the upshot is that as a developer it's easier to work with things in chunks that relate to those numbers.

For example, back in the days of 8-bit stuff, the CPU would work on things 8 bits at a time for the most part, and the commands available to you tended to be ones that shuffled 8 bits around. It was more difficult and slower to work with, say, 14 bits, because you had to execute commands to work on one byte at a time (this is at the CPU level). So to work with 14 bits you would need to execute commands to work on the first 8 bits and then more commands to work on the second 8 bits.
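
A tiny sketch of that byte-at-a-time shuffling (Python; the 14-bit value is arbitrary):

    # A 14-bit value doesn't fit in one 8-bit byte, so it has to be
    # split across two bytes and handled one byte-sized piece at a time.
    sample = 0b10110011101101         # an arbitrary 14-bit value
    low = sample & 0xFF               # bottom 8 bits
    high = (sample >> 8) & 0x3F       # top 6 bits
    assert (high << 8) | low == sample    # reassembly recovers the value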

 

So there are reasons for all this that are probably of the same kind as the reasons train tracks are the width they are.

 

Freya


Edited by Freya Black, 23 September 2014 - 04:32 AM.


#10 Phil Rhodes

Phil Rhodes
  • Sustaining Members
  • 11234 posts
  • Other

Posted 23 September 2014 - 04:31 AM

Yeah, it's just base-2 mathematics. Memory, hard disks, CPUs, and everything to do with computers generally come in base-2 sizes (or multiples of 8).

 

Although bear in mind that in many cases the actual active picture area is smaller than the full width.

 

P



#11 Freya Black

Freya Black
  • Basic Members
  • 4032 posts
  • Other
  • Right on the edge in London

Posted 23 September 2014 - 04:43 AM

I think the big thing would probably be memory, actually, especially at the time Kodak were working on this, because memory would have been very expensive. But even if it wasn't, you wouldn't want to waste any of it, really. Memory is based on bytes at the basic level, so 1024 bytes is a kilobyte and 4096 is a magic number.

 

Freya



#12 Freya Black

Freya Black
  • Basic Members
  • PipPipPipPip
  • 4032 posts
  • Other
  • Right on the edge in London

Posted 23 September 2014 - 04:52 AM

Yeah, it's just base-2 mathematics. Memory, hard disks, CPUs, and everything to do with computers generally come in base-2 sizes (or multiples of 8).

 

Well, to be pedantic, base 2 would be binary, thus 0s and 1s. The word you were trying to find is a very obscure one: Octal!

 

Octal is base 8.

 

Freya


Edited by Freya Black, 23 September 2014 - 04:52 AM.


#13 Freya Black

Freya Black
  • Basic Members
  • 4032 posts
  • Other
  • Right on the edge in London

Posted 23 September 2014 - 05:17 AM

I tried to find out the origin of bytes, but nobody sounds all that sure. Here is what the wiki said about this:

 

Historically, the byte was the number of bits used to encode a single character of text in a computer[1][2] and for this reason it is the smallest addressable unit of memory in many computer architectures.

 

 

Although I seem to remember that ASCII requires only 0-127 and not 0-255. 7-bit words might be odd, though. Even numbers seem less complicated.

 

In the wiki article on Octal I found this:

 

Octal became widely used in computing when systems such as the PDP-8, ICL 1900 and IBM mainframes employed 12-bit, 24-bit or 36-bit words. Octal was an ideal abbreviation of binary for these machines because their word size is divisible by three (each octal digit represents three binary digits). So four, eight or twelve digits could concisely display an entire machine word. It also cut costs by allowing Nixie tubes, seven-segment displays, and calculators to be used for the operator consoles, where binary displays were too complex to use, decimal displays needed complex hardware to convert radices, and hexadecimal displays needed to display more numerals.
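
A quick illustration of that three-bits-per-octal-digit fit (Python; the 12-bit value is arbitrary):

    # Each octal digit covers exactly three bits, so a 12-bit word
    # always prints as exactly four octal digits.
    word = 0b101_110_011_010    # arbitrary 12-bit value
    print(bin(word))            # 0b101110011010
    print(oct(word))            # 0o5632 -- the bits read off in groups of three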

 

 

I never knew there were computers with 12-bit words! Seems very strange now!

 

My impression is that it evolved that way and generally people aren't sure how it happened. The Nixie explanation sounds plausible, though.

 

Freya



#14 Phil Rhodes

Phil Rhodes
  • Sustaining Members
  • 11234 posts
  • Other

Posted 23 September 2014 - 05:26 AM

Base 2 in the sense that twelve base-2 digits (that is, bits) give a maximum count of 4096.

 

I'm not sure what sort of scanner technology was in use. Whether it was a raster-scanned laser, which I seem to recall it was, or a flying-spot scanner, the resolution is bandwidth-limited and thus scan-time-limited, or perhaps optically limited. Either way, I suspect it was determined mainly by what the scanner hardware could do, then packed into the nearest base-2 number, with all of that modified by knowledge of what the process needed in order to work really well, moderated by the equally pressing knowledge of what was practical. It would have been barely practical at the time it was done and must have cost an absolute fortune, especially in long-term storage.

 

It wouldn't have been detectably different if it'd been 4095 pixels wide, after all. And as I say, I'm not sure it's all picture.

 

P



#15 Chris Millar

Chris Millar
  • Basic Members
  • 1642 posts
  • Other

Posted 23 September 2014 - 06:13 AM

There's not much to distinguish octal from base 2 in terms of the underlying concept, and even less for hex in practical terms. Base 10 is the odd one out, actually :)

It's not just memory/architecture/bus width etc.; algorithms that are recursive in nature can also benefit from 2^n-based data structures...
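
One small illustration of that point (Python; the numbers are purely illustrative): a divide-and-conquer pass over a power-of-two-sized buffer halves cleanly all the way down, with no odd-sized leftovers.

    def halvings(n):
        """Count how many times n halves cleanly before hitting an odd size."""
        steps = 0
        while n > 1 and n % 2 == 0:
            n //= 2
            steps += 1
        return steps, n

    print(halvings(4096))   # (12, 1)  -- splits perfectly, twelve levels deep
    print(halvings(4000))   # (5, 125) -- gets stuck on an odd chunk of 125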

#16 John E Clark

John E Clark
  • Basic Members
  • 780 posts
  • Other
  • San Diego

Posted 23 September 2014 - 10:19 AM

 

Well, to be pedantic, base 2 would be binary, thus 0s and 1s. The word you were trying to find is a very obscure one: Octal!

 

Octal is base 8.

 

Freya

 

The PDP-11 was one of the last popular machines that used octal representation extensively. Most machines after 1980 used hexadecimal, base 16, for convenient human-readable representation of such things as addresses and data values, unless there was a need to present them in base-10 representations.

 

The use of powers of 2, represented in base 10, leads to the convenience of such numbers as 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536, which are the powers of 2 up to 2^16.
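
That sequence is easy to generate (Python):

    # The powers of two from 2^1 through 2^16
    print([2 ** n for n in range(1, 17)])
    # [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536]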

 

This representation is useful both for 'addressing' memory and for integer values, for example representing 'intensity'...

 

Other considerations are sometimes used to justify 'strange' numbers. In the case of, say, NTSC video, the digital resolution was 720x480, but the pixels were rectangular with a 0.9 aspect ratio... in computer memory addressing, the aspect ratio only figured into calculating such things as 'circles', if one were to add graphics or imagery that had been computed using 'square' pixels.

For a 'square'-pixel representation, NTSC had 640x480 pixels.
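
A rough check of how those two figures relate (Python; 0.9 is the nominal ratio mentioned above, while 10/11 applied to the 704 active samples is the exact BT.601 relationship):

    # Converting non-square NTSC samples to square pixels
    print(720 * 0.9)       # 648.0 -- the nominal ratio, roughly 640
    print(704 * 10 / 11)   # 640.0 -- active picture width with the exact ratio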

 

For HD and square pixels, it seems that 'round' numbers of powers of 2 for the horizontal resolution, and some sort of compromise value for the vertical, were the 'guiding' principle.

 

Back in the olden days, one company I worked for adjusted their analog TV cameras' horizontal scan such that the digitizer produced a 512x480 'square'-pixel digital image. Since the goal of the equipment was to analyze images with software, not 'broadcast TV', it didn't really matter that the camera was 'detuned' from broadcast standards...


Edited by John E Clark, 23 September 2014 - 10:23 AM.


#17 Phil Rhodes

Phil Rhodes
  • Sustaining Members
  • 11234 posts
  • Other

Posted 23 September 2014 - 11:19 AM

As far as I know, the standard-definition sampling rates were worked out from the maximum bandwidth of the signal, for which 720 samples per line (of which I think 704 are actually picture, though this is often overlooked) was considered adequate. The vertical resolution is of course fixed by the line count, so you end up with... non-square pixels. Strictly speaking, though, analogue TV never had symmetrical resolution in the horizontal and vertical either, which is why we're here.

 

P



#18 Freya Black

Freya Black
  • Basic Members
  • 4032 posts
  • Other
  • Right on the edge in London

Posted 23 September 2014 - 03:02 PM

I actually think the base isn't the issue anyway, but rather the fact that you are working with 8-bit bytes and, perhaps more importantly, kilobytes of 1024 bytes. It's the fact that memory is byte-sized that drives the numbers at the end of the day. But yes, octal is not that common these days, as I suggested earlier. I was taught hex at school, and that was loooonnnnng ago, back when 8-bit computers were common; even then octal was not a thing, or much spoken of.

 

I think the issue relates to the way memory is packaged and sold, too. I mean, 4000 can be divided by 8, but you are still more likely to want to use 4096 because in practice you are just not using that extra 96 otherwise.

 

For instance, if you are baking a cake and you need 400g of sugar (okay, it's a really, really big cake), then you will probably find you can only buy sugar in 500g bags, which means that 100g of sugar goes to waste (assuming you don't drink tea or coffee and aren't intending to do any more baking or something)... anyway, you get the idea.
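
A minimal sketch of the same idea in code (Python; the 4096-byte chunk is just an illustrative allocation unit):

    def allocated(requested, chunk=4096):
        """Round a request up to whole chunks, the way an allocator would."""
        chunks = -(-requested // chunk)   # ceiling division
        return chunks * chunk

    print(allocated(4000))   # 4096 -- the last 96 bytes are "bought" anyway
    print(allocated(4096))   # 4096 -- a power-of-two request wastes nothing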

 

Freya


Edited by Freya Black, 23 September 2014 - 03:06 PM.


#19 Freya Black

Freya Black
  • Basic Members
  • 4032 posts
  • Other
  • Right on the edge in London

Posted 23 September 2014 - 03:26 PM

As far as I know, the standard-definition sampling rates were worked out from the maximum bandwidth of the signal, for which 720 samples per line (of which I think 704 are actually picture, though this is often overlooked) was considered adequate. The vertical resolution is of course fixed by the line count, so you end up with... non-square pixels. Strictly speaking, though, analogue TV never had symmetrical resolution in the horizontal and vertical either, which is why we're here.

 

P

 

Sounds about right, although here are extensive details on the history of this stuff:

 

http://lurkertech.co...ystems/#sqnonsq

 

Freya



#20 Phil Rhodes

Phil Rhodes
  • Sustaining Members
  • 11234 posts
  • Other

Posted 23 September 2014 - 05:48 PM

I'm not sure I entirely understand some of the points made there. Presumably I'm missing something, but the piece claims that there is no specified relationship between the horizontal and vertical scan distances. Yet the number of lines and the rate at which they're scanned absolutely is defined, which would seem to define that relationship intrinsically. I haven't actually read the specs, so presumably I'm overlooking some crucial problem.

 

P



