Can predictive modeling mitigate infrastructure failure?

Modeling and simulation play a central role in the design of the structures and facilities that form the backbone of our modern society. The buildings, roads, bridges and other infrastructure that we rely on for our daily lives are all designed to withstand certain levels of loads imposed on them over the course of their lifetime, typically 50 to 100 years. Engineers analyze and design these structures with modern computer models to ensure safe operation of the facilities throughout that lifecycle.
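To make the idea of designing for loads concrete, the short sketch below checks a single hypothetical beam: service loads are amplified by load factors, the member's capacity is reduced by a resistance factor, and the two are compared. All values are illustrative assumptions, not figures from any particular design code.

```python
# Minimal sketch of a demand-vs-capacity check for one structural member.
# Loads, factors and capacity are illustrative values, not a real design.

def factored_demand(dead_load: float, live_load: float) -> float:
    """Combine service loads with load factors (1.2D + 1.6L, a common
    strength-design combination) to get the factored demand."""
    return 1.2 * dead_load + 1.6 * live_load

def is_adequate(nominal_capacity: float, demand: float,
                resistance_factor: float = 0.9) -> bool:
    """Strength check: reduced capacity must meet or exceed demand."""
    return resistance_factor * nominal_capacity >= demand

# Hypothetical beam: 40 kN dead load, 60 kN live load, 180 kN nominal capacity.
demand = factored_demand(dead_load=40.0, live_load=60.0)  # 144 kN
print(f"Factored demand: {demand:.0f} kN")
print("Adequate" if is_adequate(180.0, demand) else "Inadequate")
```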
The utmost care and sound judgment must be exercised throughout the lifespan of a structure, from its analysis and design through construction, operation, maintenance and repair, and finally decommissioning and demolition. History is littered with examples of structural failure that led to costly disruptions in the best case and immeasurable loss of life in the worst case — like the recent and sudden collapse of the 40-year-old, 12-story condominium in Surfside, Florida, which took 98 lives.
Failures can occur for a variety of reasons, but they typically involve oversights during the structure’s operating life. Errors in analysis and design are rare. Quality control in construction also is relatively mature and reliable, even though most structures “as built” differ slightly from their original designs. But critically, as any homeowner knows, timely maintenance and repair are essential to avoid catastrophic failures.
Despite active inspection and maintenance, structural problems can go unnoticed for years, often remaining hidden until a precipitating event exposes them. Events such as earthquakes, windstorms, floods, accidental overloading, collisions and explosions can push a structure beyond its capacity to resist such loads and lead to partial or total collapse. Deterioration due to aging and corrosion also has been responsible for numerous failures. However, because deterioration occurs over years, and sometimes decades, it usually is detectable.

So, the question is: Can computer modeling and simulation be used to predict and mitigate infrastructure failure? In theory, yes, but in practice, it is not cost-effective for the vast majority of structures, at least not yet. For a computer model to predict the types of failures we see, it must be able to simulate the as-built structure with high fidelity. That requires reliable, high-quality data on the ongoing, evolving condition of a structure over its lifecycle, and such data are difficult to obtain from routine inspection and monitoring.
To use a medical analogy, it is unlikely that a doctor would be able to detect an asymptomatic tumor in a patient during a routine annual physical, but the chances of spotting it in a CT (computed tomography) scan are quite high. Similarly, a high-fidelity computer model that truly represents the actual state of a structure at a given point in time can indeed be used for detection and prognosis of structural problems. But conducting this equivalent of a CT scan for a building at regular intervals, and using it to create and continuously update a high-fidelity computer model of the building, simply is not feasible today.
Instead, structural engineers use various instrumentation and non-destructive testing (inspection methods that do not affect a structure's ongoing functionality and usability) to assess the health and condition of a structure and its critical components during routine checkups. Some tall buildings, bridges and critical facilities, such as hydroelectric dams and nuclear power plants, are permanently equipped with sensors, such as accelerometers and strain gauges, that collect continuous real-time data about the state of the structure.
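To give a flavor of what such sensor data can reveal, here is a simplified sketch of one common monitoring idea: a sustained drop in a structure's measured natural frequency can signal a loss of stiffness. The signal, sampling rate and alarm threshold in this example are hypothetical stand-ins for real measurements.

```python
# Simplified sketch: estimate a structure's dominant natural frequency
# from accelerometer data and flag a drop relative to a healthy baseline.
# The signal, sampling rate and threshold are hypothetical illustrations.
import numpy as np

def dominant_frequency(acceleration: np.ndarray, sample_rate: float) -> float:
    """Return the frequency (Hz) with the largest spectral amplitude."""
    spectrum = np.abs(np.fft.rfft(acceleration - acceleration.mean()))
    freqs = np.fft.rfftfreq(len(acceleration), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

def frequency_drop_alarm(baseline_hz: float, current_hz: float,
                         tolerance: float = 0.05) -> bool:
    """Flag if the measured frequency fell more than `tolerance` (5%)
    below baseline, a crude proxy for stiffness loss."""
    return current_hz < (1.0 - tolerance) * baseline_hz

# Synthetic 60-second record sampled at 100 Hz: a 2.0 Hz structural mode
# plus noise, standing in for real accelerometer output.
rate = 100.0
t = np.arange(0, 60, 1.0 / rate)
signal = np.sin(2 * np.pi * 2.0 * t) + 0.3 * np.random.randn(t.size)

measured = dominant_frequency(signal, rate)
print(f"Measured mode: {measured:.2f} Hz")
print("Alarm!" if frequency_drop_alarm(baseline_hz=2.2, current_hz=measured) else "OK")
```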
Such sensor data can be used to calibrate computer models and keep them current for in silico damage detection and performance evaluation. The result is a “digital twin” of the infrastructure asset: a virtual representation that can be refreshed continuously with real-time data. This digital twin can then be used to simulate the effects of loads, stresses and environmental factors, informing decisions that mitigate failures.
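As a minimal sketch of that calibration step, suppose the structure is idealized as a single spring-mass oscillator; the model's stiffness is then updated so its predicted natural frequency matches what the sensors measure. The mass, design stiffness and measured frequency below are hypothetical.

```python
# Minimal sketch of model updating for a digital twin: tune the stiffness
# of a one-degree-of-freedom model so its predicted natural frequency
# matches the frequency measured on the real structure. The mass, design
# stiffness and measured value are hypothetical.
import math

def predicted_frequency(stiffness: float, mass: float) -> float:
    """Natural frequency (Hz) of a single spring-mass oscillator."""
    return math.sqrt(stiffness / mass) / (2 * math.pi)

def update_stiffness(measured_hz: float, mass: float) -> float:
    """Invert the frequency formula: k = m * (2*pi*f)^2."""
    return mass * (2 * math.pi * measured_hz) ** 2

mass = 2.0e5      # kg, assumed effective mass of the model
design_k = 1.6e7  # N/m, stiffness from the original design model
measured = 1.35   # Hz, frequency observed by the sensors

updated_k = update_stiffness(measured, mass)
print(f"Design model: {predicted_frequency(design_k, mass):.2f} Hz")
print(f"Updated k:    {updated_k:.3e} N/m "
      f"({updated_k / design_k:.0%} of design stiffness)")
```

Real digital twins adjust many parameters against data from many sensors at once, but the principle is the same: update the model until it reproduces what is measured.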
Computer modeling and simulation are evolving toward being able to predict and reduce infrastructure failure, particularly when paired with the still-maturing practice of maintaining digital twins that are updated constantly with a structure's condition and closely mimic its physical behavior.

Arun Prakash
Associate Professor, Lyles School of Civil Engineering, College of Engineering
Faculty Chair, Computational Interdisciplinary Graduate Programs (CIGP)
Co-Director, Indiana Consortium for Simulation-based Engineering of Materials and Structures (ICSEMS)
Purdue University