Jim or John,
I recently saw a news headline stating that nanotechnology has been used to create a sensor with an 11-stop increase in sensitivity, despite the fact that each photodiode on the chip has less than half the area of a conventional photodiode.
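Just to put that claim in perspective, here is my own back-of-the-envelope arithmetic (the stop count and the area figure come from the article; everything else is my assumption): each stop is a doubling, so 11 stops is a factor of 2^11, and if each photodiode also has less than half the usual area, the implied gain per unit of silicon is even larger.

stops = 11
sensitivity_gain = 2 ** stops        # each stop doubles sensitivity -> 2048x overall
area_ratio = 0.5                     # "less than half the area" of a conventional photodiode
gain_per_unit_area = sensitivity_gain / area_ratio
print(sensitivity_gain)              # 2048
print(gain_per_unit_area)            # 4096.0, i.e. a per-area gain of over 4000x

That is just a quick Python sketch, and I may well be misreading what the articles mean by an "11-stop increase."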
My question for either of you is: how can nanotechnology be used to increase sensitivity?
I actually believe this all to be some sort of lie. That belief is based on discrepancies I have noticed in the news articles. For example, they say nanotechnology is used, but also that the chips "can be mass-produced using standard CMOS process without additional investment for facilities." Huh? Am I missing something here?
Since when can nanotechnology be mass-produced with a "standard CMOS process"?
They must have been talking about film.
The largest silver halide grains in the fastest EI-500 films are only about 5 micrometers across. The grains in the "mid" and "slow" emulsions that capture the lighter areas of the scene are often smaller than a micron. Slower films have even finer grain (e.g., Kodak VISION2 50D Color Negative Film 7201 has the finest granularity of any motion picture camera film). Film grains are randomly distributed, and range in size, so you don't have a regular pattern of sensors that can be subject to aliasing.
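To make that aliasing point concrete, here is a toy numerical sketch (my own illustration, not tied to any particular film or sensor): sampling a fine pattern on a perfectly regular grid reproduces it as a false, coarser pattern, while randomly jittered sample positions break that false pattern up into noise.

import numpy as np

np.random.seed(0)
freq = 0.9                                   # pattern frequency, above the Nyquist limit of the grid
positions = np.arange(16)                    # regular "pixel grid" positions
jittered = positions + np.random.uniform(-0.5, 0.5, positions.size)  # randomly displaced "grain" positions

pattern = lambda x: np.sin(2 * np.pi * freq * x)

print(np.round(pattern(positions), 2))       # traces a smooth, slow (and false) wave: the alias
print(np.round(pattern(jittered), 2))        # irregular values, no coherent false pattern

Real grain is of course three-dimensional and varies in size as well as position, so this only shows why a regular sampling grid can produce false patterns that a random one cannot.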
Indeed, Kodak is at the forefront of nanotechnology in a wide variety of applications:
http://www.kodak.com...pq-locale=en_US
On the digital imaging front, the latest Kodak image sensor is 39 megapixels:
http://www.kodak.com...onMFsensors.pdf
http://www.kodak.com...KAF-39000.jhtml
Even in this state-of-the-art digital sensor, the pixel size is 6.8 micrometers, which is large by film standards.
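For a rough sense of scale (my own arithmetic, treating pixels and grains as simple squares, which real grains certainly are not): a 6.8 micrometer pixel covers nearly twice the area of even the largest 5 micrometer grains, and dozens of times the area of the sub-micron grains in the mid and slow emulsions.

pixel_pitch_um = 6.8      # KAF-39000 pixel size quoted above
large_grain_um = 5.0      # largest grains in the fastest EI-500 films
fine_grain_um = 1.0       # upper bound for the "smaller than a micron" grains

pixel_area = pixel_pitch_um ** 2
print(round(pixel_area / large_grain_um ** 2, 1))   # ~1.8x the area of the largest grains
print(round(pixel_area / fine_grain_um ** 2, 1))    # ~46.2x the area of a 1-micron grain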