July 21, 2020

New breakthroughs in computer vision for imaging in the dark

Imagine that you are taking photos in the dark. No matter how you adjust your camera, the images will likely be too dark and too noisy to let you see anything. Computer vision scientists have been asking two questions for decades: Can we remove the noise to recover the image? Will our computer be able to recognize the objects from the noisy images?
image classification procedure
Conventional image classification relies on well-illuminated scenes. In the dark, image classification becomes impossible because the signals detected by the sensor are essentially noise. A team of researchers led by Prof. Stanley Chan at Purdue ECE developed a new classification method based on a single-photon image sensor. [Top] The conventional image classification procedure, which fails when the scene is dark even if one uses a deep neural network denoiser and classifier. [Bottom] With the solution developed by Prof. Chan and his students, signals are detected by a new single-photon image sensor, and the classifiers are trained using a new training protocol. The combination of the sensor and the algorithm allows machines to classify images at a light level of 1 photon per pixel.

A team of Purdue ECE researchers recently made a major breakthrough on both problems. The team is led by Stanley Chan, assistant professor of electrical and computer engineering and statistics, and includes PhD students Yiheng Chi and Abhiram Gnanasambandam. They tackled the problem with a computational imaging philosophy: first, use an unconventional camera with single-photon counting capability; then, develop a new machine learning approach to train the deep neural networks efficiently.

The team has been developing signal processing algorithms and theories for a new type of image sensor since 2014, with various government and industry funding. The image sensor they studied, known as the Quanta Image Sensor (QIS), is a single-photon image sensor with an unconventional image acquisition mechanism. In 2016, Chan and a former student Omar Elgendy won the best paper award at the IEEE International Conference on Image Processing (ICIP).

Gnanasambandam says this time, by leveraging machine learning, the team is demonstrating something even more exciting.

example of image reconstruction
Reconstructing dynamic scenes in extremely photon-limited environments is known to be a very challenging task. A team of researchers led by Prof. Chan at Purdue ECE developed a breakthrough solution by integrating a new single-photon image sensor with a novel machine learning algorithm. [Left] An image captured by a conventional CMOS image sensor at 0.5 photons per pixel. [Right] The same scene produced by the proposed solution. Notice that the bird in the image is moving. The reconstruction algorithm is able to remove noise, remove blur, and align motion, all simultaneously.

“The problem is like taking a burst of very noisy frames, where pixels in each frame can see only a few photons at most. Our goal is to reconstruct the scene potentially containing moving objects. Besides that, we also want to recognize objects without even using a reconstruction method,” says Gnanasambandam. “But you need to understand one basic thing: With the Quanta Image Sensors we can get a much better signal than the conventional CMOS image sensors. However, at such a low photon level, even an ideal sensor will suffer from shot noise. This is where our machine learning comes into the game.”
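The shot noise Gnanasambandam describes is a fundamental property of light: photon arrivals follow a Poisson distribution, so even a perfect sensor records noisy counts. A minimal NumPy sketch illustrates what a sensor sees at an average of 1 photon per pixel (the scene here is a hypothetical random pattern, not the team's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical clean scene, normalized so the mean intensity is about 1.0
clean = rng.uniform(0.2, 1.8, size=(64, 64))

# Photon arrivals are Poisson-distributed. At an average of 1 photon per
# pixel, the standard deviation of the count is comparable to the signal
# itself, so the raw measurement is dominated by shot noise.
photons_per_pixel = 1.0
noisy = rng.poisson(clean * photons_per_pixel).astype(float)

# For a Poisson process with rate lam, SNR ~ sqrt(lam): at 1 photon per
# pixel the per-pixel SNR is roughly 1, which is why denoising and
# classification are so difficult at this light level.
```

Even an ideal single-photon sensor faces this limit; the advantage of the QIS is that it adds almost no read noise on top of it, which is what leaves room for the machine learning algorithms to work.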

Chi says the common wisdom for these types of problems is to fine-tune the networks on a customized dataset. But for extremely noisy data like the data the team is working with, he says, the fine-tuning strategy does not work.

“We developed a new training procedure known as student-teacher learning,” says Chi. “We first train a teacher model using clean data, and then we transfer the knowledge to the student so that it can handle the noisy data. This idea is simple but extremely effective. It saves us training data and time, and it generalizes better.”

Chan says he is satisfied with the result.

“The photon level they can achieve is 1 photon per pixel. This is approximately seeing at night with just starlight,” says Chan. “Most of the reported results in computer vision are based on well-illuminated images. They are also not using any optical equipment such as external light sources or electron-multiplying devices. Their reported results are based on a bare sensor the size of a quarter.”

Chan says the team had a lot of support for its work, including from Prof. Eric Fossum of Dartmouth College; Gigajot Technology Inc., which provided the QIS prototype for the experiments; and Vladlen Koltun of Intel Labs, who collaborated on the dynamic imaging paper.

Both papers are published in the 16th European Conference on Computer Vision (ECCV) 2020. ECCV is one of the three major conferences in computer vision (the other two being CVPR and ICCV), with a typical acceptance rate of 20%.

References:

Yiheng Chi, Abhiram Gnanasambandam, Vladlen Koltun, and Stanley H. Chan, "Dynamic Low-light Imaging with Quanta Image Sensors," European Conference on Computer Vision (ECCV), 2020. Manuscript available at https://arxiv.org/abs/2007.08614

Abhiram Gnanasambandam and Stanley H. Chan, "Image Classification in the Dark using Quanta Image Sensors," European Conference on Computer Vision (ECCV), 2020. Manuscript available at https://arxiv.org/abs/2006.02026
