Crucifix, your brand new CMOS camera already looks old again. Germany's Fraunhofer Institute just announced an ultra-sensitive CMOS sensor that can register individual photons within a few picoseconds, yielding high quality images even in extremely low light situations, because each and every single photon counts.
The Fraunhofer Institute for Microelectronic Circuits and Systems IMS says it has advanced the development of CMOS technology dramatically: its pixel structure can count single photons 1,000 times faster than comparable models.
It is now possible to process digital image signals directly on the microchip, according to new research out of the institute.
From the press release — well worth a read:
Fast and ultra-sensitive optical systems are gaining increasing significance and are being used in a diverse range of applications, for example, in imaging procedures in the fields of medicine and biology, in astronomy and in safety engineering for the automotive industry. Frequently the challenge lies in being able to record high quality images under extremely low light conditions.
Modern photo detectors for image capture typically reach their limits here. They frequently work with light sensitive electronic components that are based on CMOS (Complementary Metal Oxide Semiconductor) or CCD (Charge-Coupled Device) image sensors. The problem is that neither the latest CMOS nor CCD systems can simultaneously guarantee a swift and highly sensitive high quality image recording if there is a paucity of photons to read.
The Fraunhofer Institute has now advanced the development of CMOS technology and introduced an ultra-sensitive image sensor with this technology, based on Single Photon Avalanche Photodiodes (SPAD). Its pixel structure can count individual photons within a few picoseconds, and is therefore a thousand times faster than comparable models. Since each individual photon is taken into consideration, camera images are also possible with extremely weak light sources.
Camera Installed Directly on Chip
To achieve this the new image sensor uses the “internal cascade breakdown effect,” a photoelectric amplification effect. The number of “breakthroughs” corresponds to the number of photons that hit the pixels. In order to count these events, each of the sensor’s pixels comes with very precise digital counters.
At the same time, the scientists have applied microlenses to each sensor chip, which focus the incoming beam in each pixel onto the photoactive surface. Another advantage is that processing the digital image signals is already possible directly on the microchip; therefore, additional analogue signal processing is no longer needed.
The image sensor is a major step toward digital image generation and image processing. It gives us the capability to use even very weak light sources for photography.
“The new technology installs the camera directly on the semiconductor and is capable of turning the information from the light into images at a significantly faster pace,” states Dr. Daniel Durini, group manager for optical components at the Fraunhofer Institute IMS.
IMS engineered the sensor under the European research project MiSPiA (Microelectronic Single-Photon 3D Imaging Arrays for low light, high speed Safety and Security Applications). Altogether, seven partners throughout Europe from the fields of research and business are involved in the project.
In the next stage, the scientists from Duisburg are working on a process to produce sensors that are back-lit — and therefore even more powerful. At the same time, the new technology is already being utilized in tests for transportation. Chip-based mini cameras protect vehicles, bicycles and pedestrians from collisions and accidents, or assist in the reliable functioning of safety belts and airbags.
The press release doesn’t say much about resolution and so forth, but I can’t imagine that imagers for digital cameras are that far behind… well, in fact they are.
The key to this new fast sensor is not just the avalanche diode; it is its use to make a photon counting sensor. These have been around for a while, but not in CMOS form. However, this approach needs a photon counter on each pixel, and if the pixels are to count, say, 64,000 photon events, then that needs to be a 16-bit counter, unless the counters can be continuously read and cleared.
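That counter-sizing arithmetic is easy to check. Here's a minimal Python sketch; the function name is my own, not anything from the sensor's actual design:

```python
import math

# Back-of-the-envelope check: how wide a per-pixel counter must be
# to record a given number of photon events without overflowing.
# 2**16 = 65,536, so 64,000 events just fit into a 16-bit counter.
def counter_bits(max_events):
    """Smallest counter width (in bits) that can hold max_events."""
    return math.ceil(math.log2(max_events + 1))

print(counter_bits(64_000))  # prints 16
print(counter_bits(255))     # prints 8 (a classic 8-bit pixel value)
```

A 15-bit counter tops out at 32,767, which is why 64,000 events push you to 16 bits per pixel, a lot of silicon when every pixel needs its own.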
All in all that’s a lot of circuitry in each pixel. As the illustration seems to suggest, this sensor is pretty big, without many pixels, and therefore has a pretty low resolution.
Eric Fossum (here and here), primary inventor of the CMOS sensor, explains that for photographers many more “jots” are required in order to form an image with a reasonable resolution.
A jot, he says, is a sort of binary pixel, but many jot samples are required to form a picture element, just as many picture elements are required to form an image. Unfortunately, at this time, avalanche diode based jots, like SPADs, are not easily scaled to small dimensions. He suggests we’d want jots in the range of 100-200nm pitch, not 20,000nm.
A sensor that forms an image from many jots is a quanta image sensor (QIS). Well, at least that’s what Fossum is calling it. A QIS probably needs a billion jots to be competitive with current SOA CMOS image sensors.
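To make the jot idea a bit more concrete, here's a toy sketch of how a QIS might bin binary jot samples into conventional pixel values. Everything in it, from the function name to the 4x4 binning factor, is my own illustrative assumption, not Fossum's actual design:

```python
# Toy sketch of the quanta image sensor (QIS) idea: each "jot" records
# a single bit (photon seen or not), and many jot samples are summed
# to form one conventional picture element.
def jots_to_pixels(jot_frame, bin_size=4):
    """Sum bin_size x bin_size blocks of binary jots into pixel values."""
    height = len(jot_frame)
    width = len(jot_frame[0])
    pixels = []
    for y in range(0, height, bin_size):
        row = []
        for x in range(0, width, bin_size):
            # Each pixel value is the photon count across its jot block.
            total = sum(
                jot_frame[y + dy][x + dx]
                for dy in range(bin_size)
                for dx in range(bin_size)
            )
            row.append(total)
        pixels.append(row)
    return pixels

# An 8x8 field of jots that all fired becomes a 2x2 image whose
# pixels each hold 16 counted photons.
frame = [[1] * 8 for _ in range(8)]
print(jots_to_pixels(frame))  # prints [[16, 16], [16, 16]]
```

In a real QIS the summing would also run across many temporal sub-frames, which is part of why Fossum's billion-jot figure sounds so daunting.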
So we have to keep on dreaming about this sensor’s probably spectacular dynamic range et al. It’s all pretty much still up in the air. Might be a sensor your grandchildren will be shooting with.