Purdue University Engineering Frontiers

Data Science in Engineering

Monitoring Structural Health

by Emil Venere

Pruning Neural Networks to Inspect Infrastructure Degradation

Mohammad R. Jahanshahi, left, an assistant professor in Purdue’s Lyles School of Civil Engineering, and doctoral student Fu-Chen Chen review results using the new system.

Neural networks promise to give future robots, drones and other “edge devices” the capability to automatically inspect infrastructure for cracks and corrosion and to provide life-saving emergency-management services.

“Civil infrastructures constantly face aging issues, natural hazards, poor usage and extreme weather conditions,” says Mohammad R. Jahanshahi, assistant professor of civil engineering. “This causes degradation in the service life of the infrastructure, and the damage or defects may propagate over time and potentially be harmful or dangerous. For instance, potholes, corroded underground sewer pipelines and cracks in building or bridge columns could lead to serious deterioration and even catastrophic events if not detected early.”

Today’s “health monitoring” approaches for civil infrastructure rely on human inspectors to record and analyze defects and file reports, which is time-consuming and labor-intensive. An automated and cost-effective inspection system is needed, and edge devices represent critical tools for such an approach. The devices are so named because they are located at some distance from centralized computing systems, operating near the sources of data, potentially making them ideal observers. Edge devices are becoming ubiquitous in the Internet of Things, where such hardware includes medical devices, household appliances, cameras and sensors.

“In future smart cities, many decision processes in critical infrastructure and emergency management will be based on machine-learning techniques,” says Elisa Bertino, Purdue’s Samuel D. Conte Professor of Computer Science. “One particular application will be the processing of large datasets of visual images for defect assessment, where the data are collected by a swarm of mobile-sensing agents, such as unmanned aerial vehicles.”

The idea is to integrate deep-learning algorithms into robotic and mobile systems that have limited computational capability by shrinking neural networks so they can run on hardware, such as small graphical processing units available on edge devices. This advance could sidestep the need to access a cloud server, better allowing the devices to make quick decisions in real time based on constantly updated information.

“In emergency management situations, the availability of a system based on our proposed framework could be critical because human resources are often scarce and emergency management decisions must be made quickly. One example is a rapidly spreading forest fire with high variability due to changing winds,” Bertino says. “Our approach also could use devices specialized in searching hazardous areas for people in need of help. In many emergency situations, our framework could be extended to recognize humans in danger and rapidly assess the gravity of the danger.”

However, a major obstacle is that edge devices lack the computing power to efficiently operate neural networks. “Deep neural networks need a lot of memory and processing capability,” Jahanshahi says.

One solution is to harness a method inspired by the human brain. “As you do a particular task more and more often, certain neuron connections in the brain are strengthened, while others are weakened, which improves the efficiency to carry out that particular task,” Jahanshahi says.

In a similar way, neural networks can be “pruned” to remove redundant and nonessential connections.

“By doing this we shrink the algorithm and reduce the amount of computation needed, while at the same time retaining a reasonable performance,” he says.
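The idea behind magnitude-based pruning can be sketched in a few lines: rank a layer’s connections by the absolute size of their weights and zero out the weakest ones. This is a minimal illustrative sketch, not the exact pruning criterion used by the Purdue team, and the layer size and keep ratio are assumptions chosen only for the example.

```python
import numpy as np

def prune_by_magnitude(weights, keep_ratio=0.2):
    """Zero out all but the largest-magnitude connections in a weight matrix.

    Returns the pruned weights and the boolean mask of surviving connections.
    """
    flat = np.abs(weights).ravel()
    k = max(1, int(len(flat) * keep_ratio))
    # Threshold = magnitude of the k-th largest weight.
    threshold = np.partition(flat, -k)[-k]
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

# Example: the weight matrix of a small fully connected layer.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64))
pruned, mask = prune_by_magnitude(w, keep_ratio=0.2)
print(f"connections removed: {1.0 - mask.mean():.0%}")  # connections removed: 80%
```

Keeping 20 percent of the connections removes about 80 percent of them, which mirrors the scale of memory savings the researchers report; in practice the pruned network is then retrained briefly so the surviving connections compensate for the removed ones.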

The researchers used pruning to fine-tune two popular neural networks — VGG-16, created by the University of Oxford’s Visual Geometry Group, and ResNet-18, developed by Microsoft.

It is typical for such neural networks to have the capacity to handle hundreds of image “classes,” or categories. However, only a few image classes are needed for networks to inspect the infrastructure for damage or defects. “We may only need ‘crack’ and ‘no crack,’ or ‘corrosion’ and ‘no corrosion,’ and so forth,” says Jahanshahi.

The VGG-16 and ResNet-18 neural networks were adapted for infrastructure health monitoring by drastically reducing the number of image classes. “We take these popular neural networks and we fine-tune them so that, instead of 1,000 classes, they only have to handle a few classes,” he says.
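Concretely, shrinking from 1,000 classes to a few means replacing the network’s final classification layer with a much smaller one. The sketch below shows the size of that change with plain NumPy; the 512-dimensional feature vector and the random weights are illustrative assumptions, not details from the paper.

```python
import numpy as np

FEATURES = 512        # size of the final feature vector (illustrative assumption)
ORIG_CLASSES = 1000   # e.g. the ImageNet classes VGG-16 and ResNet-18 ship with
NEW_CLASSES = 2       # e.g. 'crack' vs. 'no crack'

# The classifier head is just a weight matrix mapping features to class scores.
rng = np.random.default_rng(0)
head_1000 = rng.standard_normal((ORIG_CLASSES, FEATURES))
head_2 = rng.standard_normal((NEW_CLASSES, FEATURES))

def classify(features, head):
    """Return the index of the highest-scoring class for one feature vector."""
    return int(np.argmax(head @ features))

features = rng.standard_normal(FEATURES)
label = classify(features, head_2)          # 0 ('no crack') or 1 ('crack')
print(head_1000.size // head_2.size)        # 500: the head shrinks 500-fold
```

The rest of the network (the convolutional feature extractor) is kept and fine-tuned on infrastructure images, so only the small new head must be trained from scratch.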

The pruning has allowed the researchers to reduce the memory footprint and processing time on edge devices by 80 percent and 89 percent, respectively. Instead of taking five minutes to process one image, the system requires only a few seconds.

Research findings are detailed in a paper presented during the third ACM/IEEE Symposium on Edge Computing in October 2018. The paper was authored by doctoral students Rih-Teng Wu and Ankush Singla, together with Jahanshahi and Bertino.

“Other groups are using deep learning for damage detection in structural health monitoring,” Wu says. “What’s new here is that we have incorporated these systems into devices that have limited computational capability and power.” The research is partially supported by the National Science Foundation.

“We have outlined a framework for dynamic and adaptive data acquisition aimed at applications in the area of critical infrastructure and emergency management,” Bertino says. “Our proposed framework is particularly suited for such applications because many decisions in these applications will be increasingly based on the use of big data and machine-learning algorithms.”


Additional Information