Computational Modeling and Predictive Analysis

Mapping engineering equations to data derived from laboratory measurements of physical phenomena (mass, temperature, pressure, composition, molecular structure, diffusion, rheology, solution properties, phase equilibria, images) is key to communicating new knowledge. The reporting of research results requires equations and correlations built on a foundation of classical engineering quantities (viscosity, solubility, rheology, reaction rates, chemical and thermodynamic equilibria). These are assembled into dimensionless correlations that efficiently quantify flow properties; momentum, heat, and mass transfer; reaction parameters (whether for chemical or biological reactions); and the translation of microscale images to macroscopic behavior, while also providing an accounting of material and energy flows. Although this approach is established practice in chemical engineering, it is relatively recent for biological systems.
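As a brief illustration of the dimensionless-correlation approach, the sketch below assembles Reynolds, Schmidt, and Sherwood numbers from laboratory-style measurements and applies a Froessling-type mass-transfer correlation. All numerical values are generic textbook examples, not measurements from this laboratory:

```python
# Illustrative sketch: building dimensionless groups from measured
# quantities and applying a Froessling-type correlation for mass
# transfer around a sphere. Property values are hypothetical examples.

def reynolds(rho, v, d, mu):
    """Reynolds number: ratio of inertial to viscous forces."""
    return rho * v * d / mu

def schmidt(mu, rho, D):
    """Schmidt number: ratio of momentum to mass diffusivity."""
    return mu / (rho * D)

def sherwood_froessling(Re, Sc):
    """Froessling correlation: Sh = 2 + 0.6 Re^(1/2) Sc^(1/3)."""
    return 2.0 + 0.6 * Re**0.5 * Sc**(1.0 / 3.0)

# Example inputs (water at ~25 C, 1 mm particle, 0.1 m/s flow)
rho = 997.0      # density, kg/m^3
mu = 8.9e-4      # viscosity, Pa*s
v = 0.1          # superficial velocity, m/s
d = 1.0e-3       # particle diameter, m
D = 1.0e-9       # solute diffusivity, m^2/s

Re = reynolds(rho, v, d, mu)
Sc = schmidt(mu, rho, D)
Sh = sherwood_froessling(Re, Sc)
k_c = Sh * D / d  # mass-transfer coefficient, m/s
print(f"Re = {Re:.1f}, Sc = {Sc:.0f}, Sh = {Sh:.1f}, k_c = {k_c:.2e} m/s")
```

The payoff of the dimensionless form is that the same correlation applies across fluids, particle sizes, and flow rates once the groups are computed.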

Reviewing and discussing analysis of run data to begin formulating a mechanistic model

Computational modeling capabilities are based on software packages that include Matlab and Python, with which code is developed for specific applications in image analysis and in applying first-principles models to laboratory measurements. Process models, developed with assistance from ASPEN and BioPro Designer, are used to map out steady-state material and energy balances, which are translated into techno-economic assessments (TEA). The TEA is central to performing Life Cycle Analysis (LCA), which in turn is required to assess sustainability and provide insights into new reactions, new processes, and integrated systems for reducing the carbon footprints of new discoveries and technologies that utilize renewable resources in biomanufacturing systems.
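To make the link between a steady-state material balance and a techno-economic estimate concrete, the minimal sketch below balances a glucose-to-ethanol fermentation and annualizes the simple feedstock economics. The yields, prices, and stream rates are hypothetical placeholders, not output of ASPEN or BioPro Designer:

```python
# Minimal sketch: a steady-state material balance feeding a simple
# techno-economic estimate. All yields and prices are assumed values
# chosen for illustration only.

feed_glucose = 1000.0   # kg/h glucose fed to the fermenter
Y_ethanol = 0.46        # kg ethanol per kg glucose (theoretical max ~0.51)
Y_co2 = 0.44            # kg CO2 per kg glucose

ethanol = Y_ethanol * feed_glucose        # kg/h product
co2 = Y_co2 * feed_glucose                # kg/h off-gas
residual = feed_glucose - ethanol - co2   # kg/h unconverted sugar + biomass

# Mass balance closure check: outputs must sum to the input
assert abs((ethanol + co2 + residual) - feed_glucose) < 1e-9

# Simple annualized operating economics (assumed prices, USD)
hours_per_year = 8000
glucose_price = 0.30    # $/kg
ethanol_price = 0.75    # $/kg

revenue = ethanol * ethanol_price * hours_per_year
feed_cost = feed_glucose * glucose_price * hours_per_year
gross_margin = revenue - feed_cost
print(f"Ethanol: {ethanol:.0f} kg/h, gross margin: ${gross_margin:,.0f}/yr")
```

A full TEA layers capital, utilities, labor, and downstream recovery costs onto this skeleton; the balance-then-economics structure is the same.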

Visiting scholars and students discuss the set-up of run measurements for non-cook liquefaction of lignocellulose into fermentation media

Predictive analysis is carried out by applying computational models to research data in order to develop first-principles explanations of laboratory measurements. Such models are particularly important for process optimization of individual unit operations and/or systems used in the biomanufacture of products derived from renewable resources for energy, bioproducts, and health applications. Once predictive models are developed and validated against laboratory data and process measurements, dynamic optimization of a complex process may be carried out using lab-written codes, with Python used to integrate the different modules. This type of research in the bioseparations area is carried out in collaboration with Dr. Rex Reklaitis and his team in the Davidson School of Chemical Engineering (https://engineering.purdue.edu/ChE).
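As a small example of fitting a first-principles model to laboratory rate data, the sketch below fits Michaelis-Menten kinetics using the linearized Hanes-Woolf form, S/v = S/Vmax + Km/Vmax. The data points are synthetic (generated from known parameters), not experimental measurements:

```python
# Sketch: fit Michaelis-Menten kinetics, v = Vmax*S/(Km + S), to rate
# data via the Hanes-Woolf linearization, then use the fitted model
# predictively. Data are synthetic, generated with Vmax = 2.0, Km = 0.5.

S = [0.1, 0.25, 0.5, 1.0, 2.0, 5.0]       # substrate conc., mmol/L
v = [2.0 * s / (0.5 + s) for s in S]      # noise-free synthetic rates

# Hanes-Woolf transform: y = S/v is linear in S
y = [s / vi for s, vi in zip(S, v)]

# Ordinary least squares for y = slope*S + intercept, by hand
n = len(S)
sx, sy = sum(S), sum(y)
sxx = sum(s * s for s in S)
sxy = sum(s * yi for s, yi in zip(S, y))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

Vmax = 1.0 / slope        # recovers 2.0
Km = intercept * Vmax     # recovers 0.5
print(f"Fitted Vmax = {Vmax:.3f}, Km = {Km:.3f}")

# Predictive use: rate at an untested substrate concentration
S_new = 0.75
v_pred = Vmax * S_new / (Km + S_new)
```

With real (noisy) data, a nonlinear least-squares fit of the untransformed model is usually preferred, but the validate-then-predict workflow is the same.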

Large data sets generated in the laboratory are not only compared to literature data but are also analyzed using artificial intelligence (AI) and machine learning (ML). Software packages for this purpose are actively being developed in ABE and LORRE and are made available to the broader research community through repositories such as GitHub. This is an emerging area, and its use as a molecular-level design tool is central to the discovery of new protein biocatalysts (i.e., enzymes) and the modification or optimization of existing ones. This type of analysis is supported by data from microorganisms modified using synthetic biology techniques and by work reported in the literature.
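A minimal illustration of ML-style regression on laboratory data is sketched below: ridge regression mapping numeric features of enzyme variants to a measured activity. The features, data, and choice of ridge regression are invented for illustration and do not represent the packages under development in ABE and LORRE:

```python
# Hedged sketch: ridge regression via the normal equations on a
# synthetic dataset of enzyme variants. Feature names and all data
# are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dataset: 50 variants, 3 numeric features
# (e.g., hydrophobicity, net charge, loop length -- assumed features)
X = rng.normal(size=(50, 3))
true_w = np.array([1.5, -0.8, 0.3])
y = X @ true_w + rng.normal(scale=0.1, size=50)   # "measured" activity

# Ridge regression: w = (X^T X + lambda*I)^-1 X^T y
lam = 1e-2
w = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

# Predict activity for a new variant's feature vector
x_new = np.array([0.2, -1.0, 0.5])
pred = x_new @ w
print("fitted weights:", np.round(w, 2))
```

Real molecular-design workflows use far richer sequence and structure representations, but the fit-then-screen loop, training on measured variants and ranking untested ones, follows this same pattern.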