In the aftermath of major earthquakes, engineers descend on the scene and must quickly document damage to structures before crucial data are destroyed.
“These teams of engineers take a lot of photos, perhaps 10,000 images per day, and these images are critical to learning how the disaster affected structures,” says Shirley Dyke, professor of mechanical engineering and civil engineering. “Every image has to be manually analyzed by people, and it takes a tremendous amount of time for them to go through each image and put a description on it so that others can use it.”
Engineering teams routinely spend several hours after a day of collecting data to review their images and determine how to proceed the next day.
“Unfortunately, there is no way to quickly organize these thousands of images, which are essential to understanding the damage from an event, and the potential for human error is a key drawback,” says Dyke. “When people look at images for more than one hour, they get tired, whereas a computer can keep going.”
She and her team are developing a powerful new technology that harnesses deep learning and computer vision algorithms to dramatically speed the process. The Automated Reconnaissance Image Organizer, or ARIO, classifies images to directly support field teams as they gather important perishable data. The automated method is turning several hours of work into several minutes.
“This was the first-ever implementation of deep learning for these types of images,” Dyke says. “We are dealing with real-world images of buildings that are damaged in some major disasters — by tornados, hurricanes, floods and earthquakes. Design codes for buildings are often based on lessons derived from these data. So if we could organize these large volumes of data more quickly, the images could be used more rapidly to inform design codes.”
Deep learning commonly refers to artificial neural network algorithms that use numerous layers of computations to analyze specific problems. The researchers must design suitable classes to aid the field engineers, and carefully build labeled datasets that are used to train these algorithms to recognize scenes and locate objects in the images. The method harnesses graphics processing units (GPUs), which have enabled high-performance machine vision applications.
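The workflow described here — build a labeled dataset, train a model on it, then let the model assign classes to new images — can be illustrated with a deliberately tiny sketch. The real system uses deep convolutional networks running on GPUs; as a stand-in, the toy below trains a nearest-centroid classifier on flattened grayscale "patches." All class names and pixel values are invented for illustration.

```python
# Toy illustration of the train-then-classify workflow. A real system
# would use a deep convolutional neural network; a nearest-centroid
# classifier over tiny grayscale "patches" stands in here. All labels
# and pixel data are invented.

def train(labeled_patches):
    """Compute one mean patch (centroid) per damage class."""
    centroids = {}
    for label, patches in labeled_patches.items():
        n = len(patches)
        dim = len(patches[0])
        centroids[label] = [sum(p[i] for p in patches) / n for i in range(dim)]
    return centroids

def classify(patch, centroids):
    """Assign the class whose centroid is closest (squared Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(patch, centroids[label]))

# Invented 4-pixel "images": dark patches ~ collapsed, bright ~ intact.
training_data = {
    "collapsed": [[0.1, 0.2, 0.1, 0.0], [0.0, 0.1, 0.2, 0.1]],
    "intact":    [[0.9, 0.8, 0.9, 1.0], [1.0, 0.9, 0.8, 0.9]],
}
model = train(training_data)
print(classify([0.05, 0.15, 0.1, 0.05], model))  # prints "collapsed"
```

The essential point survives the simplification: once the labeled dataset exists, classification of a new image is a fast, repeatable computation rather than an hour of human judgment.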
“You upload the images into the system and a trained neural network automatically classifies each image into appropriate categories and the system extracts its metadata,” Dyke says. “When a reconnaissance team in the field collects building images, they can upload to the ARIO tool, and in about one minute all of their images are organized into a structure that makes it easy for them to use those images.”
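The organizing step Dyke describes — predicted classes in, a browsable structure plus metadata out — reduces to simple bookkeeping once the classifier has run. The sketch below groups filenames by predicted class and pulls metadata from a hypothetical `site_YYYYMMDD.jpg` naming pattern; ARIO's actual categories, metadata fields, and file conventions may differ.

```python
# Sketch of the "organize" step: given each image's predicted class,
# group filenames into a directory-like structure and extract simple
# metadata. Filenames, class names, and the timestamp convention are
# invented for illustration.
from collections import defaultdict

def organize(predictions):
    """Map {filename: predicted_class} to {class: [filenames]}."""
    tree = defaultdict(list)
    for filename, label in predictions.items():
        tree[label].append(filename)
    return dict(tree)

def extract_metadata(filename):
    """Pull fields from a hypothetical SITE_YYYYMMDD.jpg pattern."""
    stem = filename.rsplit(".", 1)[0]
    site, date = stem.rsplit("_", 1)
    return {"site": site, "date": date}

preds = {
    "taipei_20160206.jpg": "collapse",
    "oaxaca_20170908.jpg": "spalling",
    "taipei_20160207.jpg": "spalling",
}
print(organize(preds))
print(extract_metadata("taipei_20160206.jpg"))
```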
Artificial intelligence “image classifiers” enable nearly instantaneous classification. While the research has focused thus far on earthquake damage, the team plans to extend it to wind damage caused by hurricanes.
“It's an evolving tool,” Dyke says. “There are many ways to expand the tool and use this technology for disaster-related purposes.”
The researchers have gathered about 140,000 digital images, including pictures from recent earthquakes in Taiwan and Mexico taken by Purdue civil engineering professor Santiago Pujol. Pujol has performed extensive field research at major earthquake sites. Graduate students Alana Lund, Ali Lenjani and Prateek Shah have been testing the new platform using the earthquake data.
“What I think would be very cool would be to have cities trained to use the system,” says Pujol. “Then, if there is an earthquake, images collected by the public and city officials would be quickly analyzed so the city would know what areas were affected the most or what buildings were affected the most. Right now, it takes a lot of time to do this. That’s one way that I think this could have tremendous impact.”
Faster classification could save lives. Photos are automatically labeled to show buildings and building components that were either collapsed or not collapsed, and areas affected by spalling, where concrete chips off structural elements due to “large tensile deformations.”
“This is a typical type of damage that researchers are interested in investigating,” Dyke says. “We are able to automatically classify images based on whether spalling exists or not, and also to pinpoint specifically where it was located within the image.” Damaged areas in the photos are outlined within green boxes for easy reference.
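Localization — deciding not just whether damage exists but where it sits in the frame — can be reduced to a toy version for illustration. In the sketch below a pixel is flagged as "damaged" simply when its intensity falls below a threshold, and the reported box is the smallest rectangle enclosing all flagged pixels; the actual system uses a trained neural network to make that call. The image and threshold here are invented.

```python
# Sketch of the localization step: find where suspected damage sits in
# an image and report a bounding box (the "green box" in the system's
# output). A pixel counts as "damaged" when its intensity falls below a
# threshold; a real detector would use a trained neural network. The
# toy image and threshold are invented for illustration.

def damage_bounding_box(image, threshold=0.3):
    """Return (top, left, bottom, right) enclosing all pixels darker than
    threshold, or None if no pixel qualifies."""
    hits = [(r, c) for r, row in enumerate(image)
                   for c, v in enumerate(row) if v < threshold]
    if not hits:
        return None
    rows = [r for r, _ in hits]
    cols = [c for _, c in hits]
    return (min(rows), min(cols), max(rows), max(cols))

# 4x5 toy grayscale image with a dark (damaged) patch in the middle.
image = [
    [0.9, 0.9, 0.9, 0.9, 0.9],
    [0.9, 0.1, 0.2, 0.9, 0.9],
    [0.9, 0.2, 0.1, 0.9, 0.9],
    [0.9, 0.9, 0.9, 0.9, 0.9],
]
print(damage_bounding_box(image))  # prints (1, 1, 2, 2)
```

Drawing the green rectangle is then just a rendering step over the returned coordinates.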
The system also could make it far easier for engineers to link damage to the structural drawings of damaged buildings, which are typically used by field teams to understand the cause of the damage. It’s important to have structural drawings to study the before-and-after details, to learn precisely how damage occurred and to design more earthquake-resistant buildings.
“You can’t really carry big rolls of structural drawings home with you,” Dyke says. “So we collect a bunch of digital images from them. Then you need to paste them together to make sense of them. We are developing tools that use an advanced computer-vision technique so that you can paste those images back together and reconstruct a high-quality, high-resolution drawing image.”
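The pasting-together idea can be shown in miniature. Real image mosaicking matches visual features between photos and corrects for perspective distortion; the toy below works on one-dimensional strips of pixel values and merges two strips at their longest exactly matching overlap. Data and the exact-match rule are invented for illustration, not a description of the team's technique.

```python
# Sketch of pasting drawing photos back together: find the region where
# two strips overlap and merge them into one. Real stitching matches
# visual features under perspective distortion; this toy works on 1-D
# strips and looks for the longest exactly matching overlap. Data are
# invented for illustration.

def stitch(left, right):
    """Merge two strips at their longest exactly matching overlap."""
    for k in range(min(len(left), len(right)), 0, -1):
        if left[-k:] == right[:k]:
            return left + right[k:]
    return left + right  # no overlap found: just concatenate

left_strip  = [0.2, 0.5, 0.9, 0.9, 0.4]
right_strip = [0.9, 0.4, 0.3, 0.1]       # overlaps the last two values
print(stitch(left_strip, right_strip))   # prints [0.2, 0.5, 0.9, 0.9, 0.4, 0.3, 0.1]
```

Applied pairwise across a whole set of photos, the same idea reconstructs one continuous high-resolution drawing from many partial shots.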
The research has been funded by the National Science Foundation. Also involved are Bedrich Benes, professor of computer graphics technology (by courtesy); Thomas Hacker, professor of computer and information technology; and research assistant Mathieu Gaillard, all in the Purdue Polytechnic Institute.
The automated deep-learning approach is especially valuable given the proliferation of databases containing vast collections of digital images.
However, Pujol says universities and other research institutions must develop a standardized system to solve a recurring problem: Experimental data are repeatedly forgotten or even lost because scientists and engineers do not have a reliable and convenient way to preserve and share the data.
“When a student graduates or a researcher moves after years of research and work in the laboratory, digital records representing years of very expensive experiments are often lost,” says Pujol, who is a senior advisor with DataCenterHub at Purdue. He has led research funded by the NSF to develop a method “for the systematic collection, curation and preservation of engineering and science data.”
“Collecting big data is fine, but we all need to realize that data are only useful if you have them,” he says. “What happens when, say, we retire or move or lose a computer?”