Eugenio Culurciello, photo courtesy of John Terhune and the Lafayette Journal & Courier.
 
Innovate@PurdueEngineering

Smartphone to become smarter with 'deep learning' innovation

by Emil Venere
 
Researchers are working to enable smartphones and other mobile devices to understand and immediately identify objects in a camera's field of view, overlaying lines of text that describe items in the environment.

“It analyzes the scene and puts tags on everything,” says Eugenio Culurciello, associate professor in Purdue University's Weldon School of Biomedical Engineering and the Department of Psychological Sciences.

The innovation could find applications in “augmented reality” technologies like Google Glass, facial recognition systems and robotic cars that drive themselves.

“When you give vision to machines, the sky’s the limit,” Culurciello says.

Deep learning is computationally expensive

The concept is called deep learning because it requires layers of neural networks that mimic how the human brain processes information. Internet companies are using deep-learning software, which allows users to search the Web for pictures and video that have been tagged with keywords. Such tagging, however, is not possible for portable devices and home computers.

“The deep-learning algorithms that can tag video and images require a lot of computation, so it hasn’t been possible to do this in mobile devices,” says Culurciello, who is working with Berin Martini, a research associate at Purdue, and doctoral students.

The research group has developed software and hardware and shown how they could enable a conventional smartphone processor to run deep-learning software.

Research findings were presented in a poster paper during the Neural Information Processing Systems conference in December 2013 in Nevada. The poster paper was prepared by Martini; Culurciello; and graduate students Jonghoon Jin, Vinayak Gokhale, Aysegul Dundar, Bharadwaj Krishnamurthy and Alfredo Canziani.

Efficiency enables mobile applications

The new deep-learning capability represents a potential artificial-intelligence upgrade for smartphones. Research findings have shown that the approach is about 15 times more efficient than conventional graphics processors, and an additional 10-fold improvement is possible.

Here, a street scene is labeled by the prototype, running up to 120 times faster than a conventional cellphone processor. (Purdue University image/e-Lab)

“Now we have an approach for potentially embedding this capability onto mobile devices, which could enable these devices to analyze videos or pictures the way you do now over the Internet,” Culurciello says. “You might have 10,000 images in your computer, but you can’t really find an image by searching a keyword. Say you wanted to find pictures of yourself at the beach throwing a football. You cannot search for these things right now.”
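Once a deep-learning model has tagged each image, the beach-football query Culurciello describes reduces to a simple lookup over keyword sets. A minimal sketch (the filenames and tags here are hypothetical, purely for illustration):

```python
# Hypothetical photo library: filename -> set of tags a deep-learning
# model might have attached to each image.
photo_tags = {
    "img_0001.jpg": {"beach", "person", "football"},
    "img_0002.jpg": {"dog", "park"},
    "img_0003.jpg": {"beach", "sunset"},
}

def search(keywords, library):
    """Return filenames whose tag sets contain every requested keyword."""
    wanted = set(keywords)
    return [name for name, tags in library.items() if wanted <= tags]

print(search(["beach", "football"], photo_tags))  # → ['img_0001.jpg']
```

The hard part, as the article notes, is not the search itself but producing the tags on a mobile device in the first place.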

Processing in hierarchical layers

The deep-learning software works by performing processing in layers.

“They are combined hierarchically,” Culurciello says. “For facial recognition, one layer might recognize the eyes, another layer the nose, and so on until a person’s face is recognized.”
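The hierarchical combination Culurciello describes can be sketched as stacked transforms, each feeding the next. This is a toy illustration with random weights, not the group's actual network; the layer sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    """One layer: a linear transform followed by a ReLU nonlinearity."""
    return np.maximum(0.0, w @ x + b)

x = rng.standard_normal(64)  # e.g. a small image patch, flattened

# Three stacked layers; in the facial-recognition analogy, early layers
# pick out simple features (eyes), later ones combine them into parts
# (nose) and finally a whole-face score.
w1, b1 = rng.standard_normal((32, 64)), np.zeros(32)
w2, b2 = rng.standard_normal((16, 32)), np.zeros(16)
w3, b3 = rng.standard_normal((1, 16)), np.zeros(1)

h1 = layer(x, w1, b1)       # low-level features
h2 = layer(h1, w2, b2)      # mid-level parts
score = layer(h2, w3, b3)   # whole-object output
print(score.shape)  # → (1,)
```

Each added layer multiplies the arithmetic involved, which is why running such networks on a phone processor requires the efficiency gains the Purdue group is pursuing.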

Deep learning could also help viewers understand technical details in pictures.

“Say you are viewing medical images and looking for signs of cancer,” he says. “A program could overlay the pictures with descriptions.”

The Purdue researchers initially worked with deep-learning pioneer Yann LeCun, the Silver Professor of Computer Science and Neural Science at the Courant Institute of Mathematical Sciences and at the Center for Neural Science of New York University.

The research has been funded by the Office of Naval Research, National Science Foundation and Defense Advanced Research Projects Agency.

Culurciello has started a company, called TeraDeep, to commercialize his designs. An article about Culurciello's research is available on the USA Today website.

 
