Every year we go to these conventions looking for something new, and every year everyone talks about the latest camera, or a new lens, or a new riff on camera stabilisation, or...
Strangely, it's actually quite easy to find genuinely new thinking on the NAB show floor, and it's generally in the same place - the booth belonging to Fraunhofer, the German research institute. They're 25% funded by the German government and make up the rest of their funding with external commissions and by developing new ideas internally, which are then licensed to outside organisations. Because of all this, while most of their ideas are pretty blue-sky, they're routinely among the most interesting exhibitors.
Perhaps the most interesting thing they're showing this year is an approach to deriving high dynamic range images from a single shot on a single sensor. This involves a sensor that's been masked with a random pattern of ND filters, in much the same way that a single-chip Bayer colour sensor is masked with red, green and blue filters. This isn't a particularly new idea - masking every other pixel with an ND filter has been discussed before. Fraunhofer's new idea is to use a random pattern, which allows reconstruction of the underexposed and overexposed parts of the image with less resolution loss than basic mathematics would suggest. We've heard claims of this sort of magical mathematics before, but I'm not too worried - we live in a world where the very highest-resolution cameras have more than adequate resolution for cinema, and sacrificing some of it for more dynamic range seems entirely sensible.
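To get a feel for the principle, here's a minimal simulation sketch. Everything in it is assumed for illustration - a one-stop (0.5x) ND printed on a random half of the pixels, a synthetic gradient scene with twice the sensor's dynamic range, and a naive fill that repairs clipped pixels from unclipped ND neighbours in a 3x3 window. Fraunhofer's actual reconstruction is certainly far more sophisticated than this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: a one-stop ND (0.5x transmission) on a random half of the pixels.
H, W = 64, 64
nd = rng.random((H, W)) < 0.5               # True where a pixel carries the ND filter
gain = np.where(nd, 0.5, 1.0)

# Synthetic scene with twice the sensor's dynamic range (values 0..2, sensor clips at 1).
scene = np.tile(np.linspace(0.0, 2.0, W), (H, 1))
raw = np.clip(scene * gain, 0.0, 1.0)

# Naive reconstruction: rescale the ND pixels, then fill each clipped clear
# pixel with the mean of unclipped ND neighbours in its 3x3 window.
est = np.where(nd, raw / 0.5, raw)
clipped = ~nd & (raw >= 1.0)
for y, x in zip(*np.nonzero(clipped)):
    win = np.s_[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
    donors = nd[win] & (raw[win] < 1.0)     # ND neighbours that kept highlight detail
    if donors.any():
        est[y, x] = (raw[win][donors] / 0.5).mean()

err = np.abs(est - scene).mean()
print(f"mean reconstruction error: {err:.3f}")
```

The randomness matters here: because the ND pixels aren't on a regular grid, every clipped pixel is very likely to have several unclipped neighbours to borrow from, which is what softens the resolution penalty compared with a fixed every-other-pixel mask.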
I suppose the chief advantage of this is that it doesn't require any fundamental advances or major changes to existing technologies to work, so it may be a little closer to market than many of Fraunhofer's innovations. We can already print filters onto sensors at the per-pixel level. The existing implementation works in black and white only, and at about one frame per second for the complex interpolation that creates both a bright and a dark frame, but we're told that it's very experimental and no attempt has been made at optimisation. The job is apparently highly parallel and could be implemented on embedded camera hardware in an FPGA.
All very promising.