
Research

We develop computational Uncertainty Quantification (UQ) methods, which are essential for optimal, uncertainty-aware decisions toward resilient urban communities.

The sections below summarize our recent research focus with a few selected examples. A full list of publications can be found in Publications.


A. Surrogate Modeling

A surrogate model is a mathematical model that approximates and replaces the outputs of a complex physics-based simulation model. Surrogate models are useful when simulations must be run many times, e.g., for risk assessment or optimization. Scientific machine learning models can serve as surrogates, subject to the constraints of limited training data and the need for quantified uncertainty (epistemic and aleatoric) in the outputs. We develop application-driven surrogate models, which sometimes push us to tackle emerging challenges related to large aleatoric uncertainty, high dimensionality, scalability, and limited computational resources.
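
As a minimal illustration of the idea, the Python sketch below (assuming scikit-learn and numpy are available) fits a Gaussian Process surrogate to a small number of runs of a hypothetical expensive simulator and returns predictions together with an uncertainty estimate; expensive_simulation is a placeholder stand-in, not one of our actual models.

```python
# Minimal sketch: a Gaussian Process (GP) surrogate replacing an expensive simulator.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def expensive_simulation(x):
    # Hypothetical stand-in for an expensive physics-based simulation.
    return np.sin(3.0 * x) + 0.1 * rng.standard_normal(x.shape)

X_train = rng.uniform(0.0, 2.0, size=(15, 1))     # limited training data
y_train = expensive_simulation(X_train).ravel()

# RBF kernel for the smooth trend, WhiteKernel for the noisy scatter in the outputs.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_train, y_train)

X_new = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
mean, std = gp.predict(X_new, return_std=True)    # prediction and its uncertainty
```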

Keywords: Stochastic emulations; Adaptive design of experiments; Sequential experimental design; Gaussian Process (GP) model; Dimension reduction

Applications to earthquake, wind, and (local) fire risk analysis, as well as calibration of model parameters.

Learn more
  • Stochastic emulation is a class of surrogate models capable of predicting a probability distribution of model responses, as opposed to a single deterministic value. The uncertainty captured by stochastic emulation is referred to as aleatoric uncertainty. Accurately predicting this uncertainty is challenging but essential in many engineering applications, including hazard risk assessment [more]
  • When numerical simulation models are very expensive to run, generating a training dataset itself may require a prohibitive amount of computation. Adaptive and sequential selection of training points can help reduce the number of simulations needed to train a surrogate model, often resulting in computational savings of 50-90%. However, selecting optimal training points is itself a challenging optimization problem, and we develop various strategies to balance computational efficiency, robustness, and effectiveness. [more]
  • We can leverage the current training dataset to inform the optimal selection of upcoming training points. We want to generate samples of unseen events that are distant from existing samples (“exploration”), but also samples close to existing samples that showed high unpredictability (“exploitation”). Balancing these two criteria is essential; a simple illustrative rule is sketched after this list. [more]
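
Building on the surrogate sketch above, the following is one simple heuristic for the exploration-exploitation trade-off: score each candidate point by the GP's predictive standard deviation (unpredictability) times its distance to the existing training samples (novelty), and simulate the highest-scoring point next. This is an illustrative acquisition rule under those assumptions, not our specific selection criterion.

```python
# Minimal sketch of an exploration/exploitation score for picking the next training
# point; "gp" and "X_train" are assumed to come from the surrogate sketch above.
import numpy as np
from scipy.spatial.distance import cdist

def next_training_point(gp, X_train, X_candidates):
    # Predictive std: where the surrogate remains unpredictable ("exploitation").
    _, std = gp.predict(X_candidates, return_std=True)
    # Distance to the nearest existing sample: unvisited regions ("exploration").
    dist = cdist(X_candidates, X_train).min(axis=1)
    score = std * dist                          # simple balance of the two criteria
    return X_candidates[np.argmax(score)]

X_candidates = np.linspace(0.0, 2.0, 500).reshape(-1, 1)
x_next = next_training_point(gp, X_train, X_candidates)   # run the simulator here next
```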

B. System-Reliability-based Resilience Assessment

Resilience represents a modern paradigm in disaster management. Unlike traditional risk- and performance-based approaches, resilience also considers the long-term impacts of disasters, the ease of recovery, and the interactions among components and subsystems of urban systems across multiple scales. Resilience is therefore an inherently interdisciplinary topic. Our group contributes to the resilience research community by providing a framework with strong uncertainty quantification capabilities that enable probability-informed decision-making.

Keywords: Reliability; Redundancy; Recoverability; System-reliability-based resilience analysis; Adaptive Sampling; Surrogate modeling

Applications to earthquake and (local) fire hazards for structural systems.

Learn more
  • Resilience is evaluated using various measures such as robustness, rapidity, redundancy, and resourcefulness. Several definitions and algorithms exist for assessing resilience, and one of them is the system reliability-based resilience assessment. [more]
  • A complex system like a modern city can have different system failure paths originating from different possible initial disruption scenarios. The system-reliability-based resilience analysis method evaluates the resilience of a complex system by decomposing it into three questions:
    • How frequently each initial disruption may occur (reliability);
    • How likely the initial disruption is to lead to system failure (redundancy);
    • What are the socio-economic implications of the disruption scenario for the recovery process (recoverability).
  • Reliability, redundancy, and recoverability indices collectively provide a concise snapshot of the criticality of different system failure modes and inform engineering decision-making; a minimal numerical sketch follows this list. [more]
  • Resilience analysis is based on numerous ‘what-if’ scenario analyses and is therefore computationally expensive. To overcome this computational challenge, advanced reliability methods, such as adaptive importance sampling and surrogate-aided reliability analysis, can be revisited and further developed. [more]
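
The toy Python sketch below illustrates the decomposition with purely illustrative numbers: each initial disruption scenario carries a probability of occurrence (reliability), a conditional probability of triggering system failure (redundancy), and an expected recovery time given failure (recoverability), which are combined into a system failure probability and an expected downtime. The scenarios and values are hypothetical, not results from an actual analysis.

```python
# Minimal numerical sketch of the reliability-redundancy-recoverability decomposition.
scenarios = {
    #  name         P(disruption)  P(failure | disruption)  E[recovery time | failure] (days)
    "column_loss": (1e-3,          0.20,                    120.0),
    "brace_loss":  (5e-3,          0.05,                    45.0),
}

# System failure probability aggregates over all initial disruption scenarios.
p_system_failure = sum(p_d * p_f for p_d, p_f, _ in scenarios.values())

# Expected downtime combines all three components and flags the most critical failure mode.
expected_downtime = sum(p_d * p_f * t_rec for p_d, p_f, t_rec in scenarios.values())
critical_mode = max(scenarios, key=lambda k: scenarios[k][0] * scenarios[k][1] * scenarios[k][2])

print(f"P(system failure) = {p_system_failure:.2e}")
print(f"Expected downtime = {expected_downtime:.3f} days, dominated by '{critical_mode}'")
```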

C. Regional Risk Assessment

The recent trend toward regional-scale risk and resilience assessments presents both challenges and opportunities for existing uncertainty quantification (UQ) methods. New challenges include limited information (e.g., incomplete inventory or sparse hazard description), high-dimensional representations of system input-output, inter- and intra-system dependencies, decision-making under uncertainty and conflicting objectives, and significant computational memory and processing demands. We are developing advanced UQ methods to overcome the computational limitations of current state-of-the-art approaches.

Keywords: Probabilistic imputation; Global Sensitivity analysis; Post-disaster impact analysis

Applications to regional hurricane (for inventory imputation) and earthquake risk/resilience assessment.

Learn more

  • The collection of datasets is an emerging challenge in large-scale risk analysis. To address this, we must first understand what information is truly needed and what is not. A systematic and probabilistic sensitivity analysis framework can offer valuable insights. [more]
  • Regional-scale analyses often rely on public data sources, which may be imperfect. To address these information gaps, probabilistic imputation techniques can be employed. When performing imputation, it is important to carefully assess the bias and uncertainty introduced and to quantify their impact on downstream analyses; a minimal sketch follows this list.
  • Disaster management is inherently interdisciplinary. Civil engineers play a unique role in leveraging hazard source information to predict potential structural damage. However, it is crucial that we also understand the key assumptions made upstream in the workflow as well as the decision-making processes downstream, seeking ways to better inform decision-makers. Interdisciplinary collaboration is becoming increasingly vital. [more]
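
As a minimal sketch of probabilistic (multiple) imputation and its propagation (assuming numpy; the inventory data and vulnerability model below are hypothetical placeholders), missing building attributes can be resampled repeatedly from the observed records, and the spread of the downstream results then quantifies the uncertainty introduced by the imputation.

```python
# Minimal sketch of multiple imputation for an incomplete building inventory.
import numpy as np

rng = np.random.default_rng(1)
year_built = np.array([1965, 1972, np.nan, 1988, np.nan, 2003, 1990, np.nan])
observed = year_built[~np.isnan(year_built)]

def damage_ratio(years, intensity=0.4):
    # Hypothetical vulnerability model: older buildings sustain more damage.
    age_factor = np.clip((2020.0 - years) / 100.0, 0.0, 1.0)
    return intensity * (0.5 + 0.5 * age_factor)

# Resample missing entries from the observed empirical distribution, then
# propagate each completed inventory through the damage model.
losses = []
for _ in range(200):
    filled = year_built.copy()
    filled[np.isnan(filled)] = rng.choice(observed, size=np.isnan(filled).sum())
    losses.append(damage_ratio(filled).mean())

print(f"mean damage ratio = {np.mean(losses):.3f} +/- {np.std(losses):.3f} (imputation uncertainty)")
```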

D. Random Vibrations and Random Fields

Infinite-dimensional random quantities are modeled as random fields or random processes. In natural hazards engineering, we frequently encounter geospatially and temporally continuous randomness, necessitating methods to effectively model and propagate such uncertainty. While high-dimensional uncertainty presents challenges (primarily computational limitations, but also informational ones), the structured correlations inherent in these fields and processes allow for specialized treatment of uncertainties.

Keywords: Spectral representations; Karhunen–Loève expansion; Equivalent linearization method; First passage probability

Applications to earthquake risk assessment and material corrosion.

Learn more
  • When a structural response is modeled as a random process, failure of the structure is often defined as the response exceeding a threshold during a finite time period. Computing the probability of this event, called the first-passage probability, is both a challenging and intriguing problem. [more]
  • Equivalent linearization is a classical but powerful technique, as it allows us to leverage closed-form solutions, superposition principles, and other reliability analysis methods developed for linear(ized) systems. While classical linearization methods have well-known limitations in descriptive flexibility, modern linearization methods overcome this constraint while maintaining the computational benefits. [more]
  • A random field model consists of infinitely many dimensions but can always be approximated by a finite set of random variables. If the field model is well-structured (e.g., exhibits slow correlation decay and is narrow-banded), the number of these random variables can be quite small. This low-dimensional representation is useful in surrogate modeling, reliability analysis, sensitivity analysis, Bayesian updating, and various other uncertainty quantification (UQ) methods; see the sketch below.
Figure: Principal component analysis of a random field.
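
As a minimal numerical sketch (assuming numpy; the grid, covariance model, and correlation length are illustrative), a discrete Karhunen–Loève expansion reduces a random field defined on hundreds of grid points to a handful of independent standard normal variables via eigendecomposition of its covariance matrix.

```python
# Minimal sketch of a truncated Karhunen-Loeve (KL) expansion of a 1-D Gaussian random field.
import numpy as np

x = np.linspace(0.0, 10.0, 200)                    # spatial grid
ell, sigma = 2.0, 1.0                              # correlation length, standard deviation
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / ell)   # exponential covariance

# Eigendecomposition of the covariance matrix (discrete KL / PCA of the field).
eigval, eigvec = np.linalg.eigh(C)
idx = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[idx], eigvec[:, idx]

# Keep enough modes to capture 95% of the variance -> low-dimensional representation.
m = np.searchsorted(np.cumsum(eigval) / eigval.sum(), 0.95) + 1

# Realizations of the field are generated from only m independent standard normals.
rng = np.random.default_rng(2)
xi = rng.standard_normal(m)
field = eigvec[:, :m] @ (np.sqrt(eigval[:m]) * xi)
print(f"{m} KL modes out of {len(x)} grid points capture 95% of the variance")
```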