I'll know when I see it, Carl. Thanks.
Yeah for sure. Your eyes are your best instrument.
I'd recommend 4K anyway, because grain aliasing (if that was ever desirable) can always be reconstructed from a 4K scan.
There's not much online information on this phenomenon called 'grain aliasing'. The earliest online discussion of it appears to be the following article (Sept 2000), in which "grain aliasing" may well be the first usage of the term (at least online):
It's certainly a visible phenomenon, and discussed in many forums, but it appears to be under-theorised. For example, there's no Wikipedia article on it (and you'd imagine there would be, even if a badly conceived one). Simple solutions exist, which follow directly from conventional aliasing theory, but optimal solutions based on more insight into grain aliasing in particular would be preferable.
As previously mentioned, in conventional aliasing theory you would sample (or encode) a signal at twice the display (or decoding) rate, with a filter stage in between. But for grain aliasing this may not be optimal. To minimise grain aliasing you might need to sample at more than 2X. But how much higher? 4X? At the moment, it seems the higher the better.
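The filter-stage idea can be sketched with a toy model. This is only an illustration, not a scanner simulation: the grain field is just seeded random noise standing in for the emulsion, and the 8x oversampling factor and box filter are arbitrary choices of mine. The point is that point-sampling grain with no filter folds its high frequencies back into the result, while filtering before decimation does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the film's grain field, modelled at 8x the output
# resolution (both the model and the factor are assumptions).
SUPER = 8
N = 512                              # output width in pixels
grain = rng.normal(0.0, 1.0, size=N * SUPER)

# Naive sampling: take every SUPER-th point with no filter stage.
# Grain detail above the output Nyquist folds back into low
# frequencies -- this fold-back is the "grain aliasing" effect.
naive = grain[::SUPER]

# Conventional approach: low-pass (box) filter first, then decimate.
kernel = np.ones(SUPER) / SUPER
filtered = np.convolve(grain, kernel, mode="same")[::SUPER]

# The filtered path carries much less grain energy, because the
# above-Nyquist portion was removed rather than aliased down.
print(naive.std(), filtered.std())
```

With independent noise samples, the box-filtered result's amplitude drops by roughly the square root of the oversampling factor, whereas the unfiltered decimation keeps full grain amplitude, just relocated to the wrong frequencies.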
The relationship between grain aliasing and what might be called the "native grain" of the film remains to be determined. Indeed, it remains uncertain whether "native grain" can actually be disentangled from grain aliasing (ie. as an independent concept).
In other words, choosing a sampling rate and looking at the results appears, at present, to be the only way of determining what kind of grain a result will have.
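That trial-and-error approach can itself be mocked up numerically. Again a sketch under assumed conditions: the same synthetic noise field stands in for grain, and the "scanner" point-samples with no optical low-pass (an assumption, not a claim about real scanners). Scanning the same field at 1X, 2X, 4X and 8X the display rate, then box-averaging down to display size, shows the grain character changing with the chosen rate.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 1024                 # display resolution
MAX = 16                 # resolution multiple of the modelled grain field
grain = rng.normal(0.0, 1.0, size=N * MAX)

def scan_then_display(factor):
    # Point-sample the grain at `factor` x display resolution
    # (no filter stage at the scanner), then box-average each
    # group of `factor` samples down to one display pixel.
    sampled = grain[:: MAX // factor]            # N * factor samples
    return sampled.reshape(N, factor).mean(axis=1)

# Grain amplitude in the displayed result, per scan factor.
results = {f: scan_then_display(f).std() for f in (1, 2, 4, 8)}
for f, s in results.items():
    print(f"{f}X scan -> grain std {s:.3f}")
```

In this toy model the displayed grain keeps getting finer and weaker as the scan factor rises, with no obvious stopping point, which matches the "just the higher the better" observation above.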
It would be good if more research and theorisation were available in this area. It would certainly help to optimise film-to-digital workflows.
Edited by Carl Looper, 17 June 2015 - 11:11 PM.