I'll cover four topics in today's talk. The first is uncertainty and variability analysis: I will give a general overview of how we apply uncertainty and sensitivity analysis in our modeling process. In my recent research I have also focused on sensitivity analysis to quantify the uncertainty in model output; sensitivity analysis has several applications, and I will provide examples later on.
The other topics focus on pharmacokinetic modeling. This approach helps us understand a chemical's behavior and properties under chemical exposure, and we can use it to relate external exposure to internal dose. Second, we use pharmacodynamic models to understand how a chemical affects human health and to make predictions.
According to EPA's risk assessment terminology, variability refers to the range of toxic responses or exposures, that is, the internal and external physical factors that cause differences in toxic effects.
Uncertainty refers to our inability to know for sure. It often stems from a lack of knowledge, for example in experimental data, models, and parameters.
To address this, we can apply probabilistic modeling to quantify and characterize the uncertainty and variability in risk assessment. The probabilistic approach considers uncertainty and variability more comprehensively than the traditional point-estimate method.
Uncertainty relates to "lack of knowledge" that, in theory, could be reduced by better data, whereas variability relates to an existing aspect of the real world that is outside our control.
The WHO document provides a clear definition differentiating uncertainty and variability.
Chiu WA and Rusyn I, 2018. https://doi.org/10.1007/s00335-017-9731-6
Here are the reasons why we need to pay attention to variability. In risk assessment, we need to consider the range of response levels in the population. Traditionally, toxicology studies have focused on a single mouse strain, so there is a chance of using a poor model to represent humans. Using different mouse strains gives us the opportunity to reduce that chance.
Absorption - How will it get in?
Distribution - Which tissues and organs will it go to?
Metabolism - How is it broken down and transformed?
Elimination - How does it leave the body?
Kinetics is a branch of chemistry that describes the change of one or more variables as a function of time. Here we focus on time and concentration: concentration rises after intake of the drug and then falls after reaching the maximum concentration.
To "predict" the cumulative does by constructed compartmental model.
\(D\): Intake dose (mass)
\(C\): Concentration (mass/vol.)
\(k_a\): Absorption rate (1/time)
\(V_{max}\): Maximal metabolism rate (mass/time)
\(K_m\): Concentration at which metabolism reaches half of \(V_{max}\) (mass/vol.)
\(k_{el}\): Elimination rate (1/time)
However, it is difficult to capture the whole time-concentration relationship experimentally, because that takes a lot of time and money. Therefore, we apply toxicokinetic modeling. The basic model is constructed from a single variable with a few parameters. We set up the input dose and define the output, such as the chemical concentration in blood or other tissues and organs. Then we set the parameters, such as the absorption and elimination rate constants. We can also use Michaelis-Menten constants to define a concentration-dependent metabolism rate.
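As a concrete sketch of this basic model, the following R code (using the deSolve package) implements a one-compartment model with first-order absorption, Michaelis-Menten metabolism, and first-order elimination. The volume of distribution `Vd` and all parameter values are illustrative assumptions, not values from any study discussed here.

```r
# One-compartment PK sketch: first-order absorption (ka), saturable
# Michaelis-Menten metabolism (Vmax, Km), first-order elimination (kel).
# Vd (volume of distribution) and all values are illustrative assumptions.
library(deSolve)

pk_model <- function(t, state, parms) {
  with(as.list(c(state, parms)), {
    dAgut <- -ka * Agut                      # depletion of the gut depot
    dC    <- ka * Agut / Vd -                # absorption into the central compartment
             (Vmax * C / (Km + C)) / Vd -    # concentration-dependent metabolism
             kel * C                         # first-order elimination
    list(c(dAgut, dC))
  })
}

parms <- c(ka = 1.0, Vmax = 5, Km = 2, kel = 0.2, Vd = 10)
state <- c(Agut = 100, C = 0)     # intake dose D = 100 (mass) at t = 0
times <- seq(0, 24, by = 0.1)     # hours

out <- ode(y = state, times = times, func = pk_model, parms = parms)
plot(out)  # C rises to a maximum after intake, then declines
```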
Mathematically transcribe physiological and physicochemical descriptions of the phenomena involved in the complex ADME processes.
But the real world is not that simple. Sometimes we need to consider complex mechanisms and factors to make a good prediction. For this reason, PBPK models were developed: they include physiological and physicochemical parameters that correspond to the real-world situation and can describe the complex ADME processes.
We can also apply the probabilistic modeling approach in PBPK modeling; this method is called Bayesian PBPK. The first step is to gather prior information and use it to define probability distributions for the model parameters used in the PBPK model. Then, individual PK data are used to calibrate the PBPK model, and the parameter distributions are updated based on the experimental data. The posterior parameters can then be used in additional applications, such as understanding parameter uncertainty and population variability (a minimal notation for this hierarchy is sketched after the symbol list below).
Individual level
\(E\): Exposure
\(t\): Time
\(\theta\): Specific parameters
\(y\): Observed data (which the model is conditioned on)
Population level
\(\mu\): Population means
\(\Sigma^2\): Population variances
\(\sigma^2\): Residual errors
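Using the notation above, a minimal sketch of this two-level (hierarchical) structure is the following; the normal distributional forms are an illustrative assumption:

\[
\theta_i \sim \mathcal{N}(\mu, \Sigma^2), \qquad
y_{ij} \sim \mathcal{N}\!\left(f(E_i, t_{ij}, \theta_i),\ \sigma^2\right)
\]

where \(f\) is the PBPK model prediction for individual \(i\) at time \(t_{ij}\): individual parameters are drawn from the population distribution, and the data are conditioned on the model predictions with residual error \(\sigma^2\).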
https://doi.org/10.1016/j.tox.2010.06.007
Chiu WA et al., 2009. https://doi.org/10.1016/j.taap.2009.07.032
This is one representative publication that used the Bayesian approach in population PBPK modeling. The paper describes a Bayesian population analysis, using MCMC simulation, of harmonized rat, mouse, and human PBPK models for TCE.
The estimated posterior parameters can be further used for population- or group-specific predictions.
It is useful to generate “population-based” parameters using only the posterior distributions for the population means and variances, instead of the estimated group-specific parameters.
These population predictions provide a sense as to whether the model and the predicted degree of population uncertainty and variability adequately account for the range of heterogeneity in the experimental data.
Assuming the group-specific predictions are accurate, the population-based predictions are useful for identifying whether one or more of the datasets are "outliers" with respect to the predicted population.
It is informative to compare the population-based model predictions to the additional “validation” data in order to assess the predictive power of the PBPK model.
Chiu WA et al., 2013. https://doi.org/10.1289/ehp.1307623
This is an extensive study. Previous animal studies mainly used a single strain, B6C3F1, as the representative animal. The issue with using B6C3F1 for extrapolation is that its metabolism is very different from humans: it had the highest rate of respiratory tract oxidative metabolism as compared to rats and humans, and it cannot represent population variability in risk assessment. So in this study, instead of the B6 strain alone, 16 inbred strains were included to examine inter-strain variability and compare it with human inter-individual variability. Through multi-strain model calibration and prediction, we can obtain consistent estimates of variability in humans and mice.
--- Calibration ---
Funding: U.S. EPA (STAR RD83561202) and National Institutes of Health (F32 ES026005)
Luo, Y.S., Hsieh, N.H., Soldatow, V.Y., Chiu, W.A. and Rusyn, I., 2018. Comparative Analysis of Metabolism of Trichloroethylene and Tetrachloroethylene among Mouse Tissues and Strains. Toxicology, 409, pp.33-43.
Motivation: Quantitative comparison of tissue- and strain-specific metabolism of TCE and PCE has not been performed
In our recent research, we incorporated more animal experiment results. We built a multi-compartment PK model to estimate the metabolism of TCE and PCE, aiming at a quantitative comparison of tissue- and strain-specific metabolism of TCE and PCE, which had not been done before.
In the beginning, we used three different mouse strains and gave them specific doses of TCE and PCE. The mice were sacrificed after several hours; the time points of 1, 2, 6, 12, 24, and 36 h were selected to maximize the metabolic information from both the oxidative and glutathione conjugation pathways.
We did not use a PBPK model because the current PBPK model takes a very long time to calibrate. Instead, we built this simplified model to compare and describe trichloroethylene and tetrachloroethylene metabolism among mouse tissues and strains.
Predicted disposition of TCE (A), PCE (B), and their respective metabolites in male B6C3F1/J, C57BL/6J, and NZW/LacJ mice.
Pie charts are used to provide a relative comparison among various metabolites as predicted by the model in each strain.
This is the result for the disposition of TCE and PCE and their respective metabolites across mouse strains. Pie charts provide a relative comparison among the various metabolites as predicted by the model in each strain. Most of each parent compound (TCE and PCE) remained unmetabolized until excretion. As you can see, the disposition fractions differ across the mouse strains. For glutathione conjugation, the overall flux to conjugation was less than 0.3% of the administered dose for both chemicals.
Trichloroethylene (TCE)
Tetrachloroethylene (PCE)
We also compared our estimates with other toxicokinetic studies of TCE/PCE metabolites in mice, rats, and humans. Our results fill a data gap in understanding chemical metabolism in mice, especially for TCE.
Based on the results for oxidation relative to GSH conjugation, mice appear to be more efficient in oxidative metabolism, but less efficient in glutathione conjugation, as compared to rats and humans.
Funding: National Institutes of Health (P42 ES027704 and P42 ES005948)
Luo, Y.S., Cichocki, J.A., Hsieh, N.H., Lewis, L., Wright, F.A., Threadgill, D.W., Chiu, W.A. and Rusyn, I., 2019. Using Collaborative Cross Mouse Population to Fill Data Gaps in Risk Assessment: A Case Study of Population-Based Analysis of Toxicokinetics and Kidney Toxicodynamics of Tetrachloroethylene. Environmental Health Perspectives, 127(6), p.067011.
Background:
Objectives:
This is our follow-up research. This study aims to apply the developed multi-compartment PK model in human health risk assessment. We used animal experiment results from the Collaborative Cross (CC) mouse population to characterize metabolism-toxicity interactions and quantify the inter-individual variability. Our main objective is ...
This paper was selected as a Paper of the Month. A brief summary of this study follows.
Interstrain Variability in Metabolism of PERC through Glutathione Conjugation Pathway
The interstrain differences were 54.2-fold for TCVG, 29.0-fold for TCVC, and 33.6-fold for NAcTCVC.
This is one of our findings, illustrating the interstrain variability in PCE metabolism through the glutathione conjugation pathway. We used the area under the curve (AUC) to compare the tissue-specific toxicokinetics of GSH conjugation metabolites across strains. We found roughly 30- to 55-fold interstrain differences, which means the CC strains can have very different PK profiles for PCE metabolism.
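For reference, the AUC used in this comparison can be computed from a concentration-time profile with the trapezoidal rule. A minimal R sketch, using the study's sampling times and hypothetical concentration values:

```r
# Trapezoidal AUC from a concentration-time profile.
auc_trapz <- function(t, C) sum(diff(t) * (head(C, -1) + tail(C, -1)) / 2)

t_h  <- c(0, 1, 2, 6, 12, 24, 36)             # sampling times (h), per study design
conc <- c(0, 8.1, 6.5, 3.0, 1.2, 0.3, 0.05)   # hypothetical tissue concentrations
auc_trapz(t_h, conc)                           # AUC in concentration x hours
```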
Predicted disposition of PERC and its metabolites in CC mouse strains
The population average was 80.4% for PERC, 19.4% for TCA, and 0.25% for GSH conjugates, where 52% of total GSH conjugates further underwent bioactivation to form reactive species.
Inter-strain difference was 1.19-fold for PERC, 2.34-fold for oxidation, 6.68-fold for GSH conjugation, and 5.34-fold for bioactivation of GSH conjugates.
Based on a previously published multi-compartment TK model for PERC and its metabolites, we estimated the disposition of PERC, TCA, and GSH conjugates. Compared with our previous study, where the disposition of the parent compound was estimated at about 90%, the disposition of the parent compound across the CC strains was estimated at about 80%.
CC mouse population–derived concentration-time profiles of PERC, oxidative metabolite (TCA), and GSH conjugative metabolites (TCVG, TCVC, and NAcTCVC)
The results can be used to quantify the interstrain variability and derive chemical-specific adjustment factors to replace default uncertainty factors.
Chemical-Specific Adjustment Factors for Population Variability and Risk Assessment
Based on the inter-strain estimates, we further used the predicted disposition of PERC, TCA, and GSH conjugates across CC strains to calculate chemical-specific adjustment factors for TK variability.
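As a sketch of the calculation, a TK variability adjustment factor is commonly derived as the ratio of an upper percentile to the median of the population dose-metric distribution (the WHO/IPCS convention). The simulated AUC values below are hypothetical stand-ins for the model-predicted dose metrics across CC strains:

```r
# Chemical-specific adjustment factor (CSAF) for TK variability as the ratio
# of the 95th percentile to the median of the population dose metric.
set.seed(1)
auc_pop <- rlnorm(1000, meanlog = 0, sdlog = 0.5)  # hypothetical population AUCs

csaf_tk <- unname(quantile(auc_pop, 0.95) / quantile(auc_pop, 0.50))
csaf_tk  # compare with the default TK half of the 10x intraspecies factor (~3.16)
```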
Currently, the Bayesian Markov chain Monte Carlo (MCMC) algorithm is an effective way to do population PBPK model calibration.
This method often struggles to reach "convergence" within acceptable computational times (a more complex model with more parameters takes more time!).
We usually use a Bayesian approach to quantify the uncertainty and variability. The Bayesian Markov chain Monte Carlo algorithm is an effective way to calibrate a population PBPK model. But ...
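Convergence is typically checked with diagnostics such as the Gelman-Rubin potential scale reduction factor (R-hat) computed across multiple chains. A minimal sketch with the coda R package; the two "chains" here are toy stand-ins for real MCMC output:

```r
# Gelman-Rubin convergence diagnostic across two chains (toy example).
library(coda)

set.seed(1)
chain1 <- mcmc(rnorm(2000, mean = 0.30, sd = 0.05))  # stand-in for MCMC samples
chain2 <- mcmc(rnorm(2000, mean = 0.31, sd = 0.05))

gelman.diag(mcmc.list(chain1, chain2))  # point estimate near 1 suggests convergence
```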
To improve the computational efficiency, we can focus on these four elements. The first one is ...
Funding: Food and Drug Administration (1U01FD005838)
Project Start: Sep-2016
Name: Enhancing the reliability, efficiency, and usability of Bayesian population PBPK modeling
Based on these reasons, we started this project.
GNU MCSim is a general-purpose modeling and simulation program that can perform "standard" or "Markov chain" Monte Carlo simulations. It allows you to specify a set of linear or nonlinear algebraic equations or ordinary differential equations, which are solved numerically using parameter values you choose or values sampled from statistical distributions. Simulation outputs can be compared to experimental data for Bayesian parameter estimation (model calibration).
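MCSim models are written in its own model-description language, so the following is not MCSim code; it is a language-neutral sketch in R of the calibration idea MCSim implements (compare simulation outputs to data, then sample the posterior), here with a single elimination rate and toy data:

```r
# Minimal Metropolis sampler for one parameter (kel) against synthetic data.
set.seed(1)
t_obs <- c(1, 2, 4, 8, 12)
C0    <- 10
y_obs <- C0 * exp(-0.3 * t_obs) * exp(rnorm(5, 0, 0.1))  # synthetic observations

log_post <- function(kel) {
  pred <- C0 * exp(-kel * t_obs)
  sum(dnorm(log(y_obs), log(pred), 0.1, log = TRUE)) +  # lognormal error model
    dlnorm(kel, log(0.5), 1, log = TRUE)                # lognormal prior (assumed)
}

n_iter <- 5000
kel    <- 0.5
chain  <- numeric(n_iter)
for (i in seq_len(n_iter)) {
  prop <- kel * exp(rnorm(1, 0, 0.2))   # random walk on the log scale
  # Metropolis-Hastings acceptance, with the Jacobian term for the log-scale walk
  if (log(runif(1)) < log_post(prop) - log_post(kel) + log(prop) - log(kel))
    kel <- prop
  chain[i] <- kel
}
quantile(chain[-(1:1000)], c(0.025, 0.5, 0.975))  # posterior summary after burn-in
```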
Founder: Frédéric Y. Bois
Staff Toxicologist (Specialist),
Reproductive and Cancer Hazard Assessment Section,
CalEPA, Berkeley, USA, 1991-96
This project mainly focuses on GNU MCSim.
The founder of this project is Dr. Frédéric Y. Bois, who worked at CalEPA as a Staff Toxicologist for five years.
Hsieh N-H, Reisfeld B, Bois FY and Chiu WA. 2018. Applying a Global Sensitivity Analysis Workflow to Improve the Computational Efficiencies in Physiologically-Based Pharmacokinetic Modeling. Frontiers in Pharmacology 9:588.
Hsieh N-H, Reisfeld B, and Chiu WA. pksensi: R package to apply global sensitivity analysis in physiologically based kinetic models. (Submitted)
This is our FDA-funded project, which aims to improve the functionality of MCSim and advance research on the Bayesian framework.
One of the current challenges of the Bayesian method is that it is time-consuming when the model becomes more complex or comprehensive and when we have more data across multiple individuals and groups: calibrating the model parameters can take a very long time. Therefore, in our 2018 paper, we proposed a workflow that fixes the non-influential parameters in a PBPK model based on global sensitivity analysis.
In this way, we can save about half the time in model calibration.
Our recently submitted paper shows how we developed and used tempered MCMC in PBPK modeling; this algorithm also converges faster than the original one.
We usually fix the "possible" non-influential model parameters through "expert judgment".
This approach might cause "bias" in parameter estimates and model predictions.
In this specific aim, we want to find the proper way to fix non-influential parameters in the PBPK model to improve computational speed.
The study of how uncertainty in the output of a model (numerical or otherwise) can be apportioned to different sources of uncertainty in the model input.
Img: Krishnan, K. and Andersen, M.E. eds., 2010. Quantitative modeling in toxicology. John Wiley & Sons.
Sensitivity analysis is the study of how uncertainty in the model output can be apportioned to different sources of uncertainty in the model input. For example, in a PBPK model we have different measured outputs, such as chemical concentrations in blood or organs from animal studies, which are determined by the various input parameters. Therefore, we can apply sensitivity analysis to identify the most and least important parameters in our modeling process.
Parameter Prioritization
Identifying the most important parameters
Reduce the uncertainty in the model response if it is too large (i.e., not acceptable)
Parameter Fixing
Identifying the least important parameters
Simplify the model if it has too many parameters
Parameter Mapping
Here we have observations (assumed error-free for simplicity’s sake) and a model whose parameters are estimated from the data. Estimation can take different courses. Usually it is achieved by minimizing, e.g. by least squares, some measure of distance between the model’s prediction and the data. At the end of the estimation step, ‘best’ parameter values as well as their errors are known. At this point we might consider the model ‘true’ and run an uncertainty analysis by propagating the uncertainty in the parameters through the model, all the way to the model output. In this case the estimated parameters become our factors.
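A minimal R sketch of that final propagation step; the closed-form one-compartment prediction and the parameter estimate are hypothetical:

```r
# Forward uncertainty propagation: sample the estimated parameter, push each
# draw through the model, and summarize the distribution of the output.
one_cpt <- function(kel, t) 10 * exp(-kel * t)   # hypothetical model prediction

set.seed(1)
kel_draws <- rnorm(2000, mean = 0.3, sd = 0.05)  # estimated value and its error
pred <- sapply(kel_draws, one_cpt, t = 8)        # model output at t = 8 h
quantile(pred, c(0.025, 0.5, 0.975))             # uncertainty in the output
```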
Local (One-at-a-time)
"Local" SA focus on sensitivity at a particular set of input parameters, usually using gradients or partial derivatives
Global (All-at-a-time)
"Global" SA calculates the contribution from the variety of all model parameters, including Single parameter effects and Multiple parameter interactions
"Global" sensitivity analysis is good at parameter fixing
Modelers with some experience often know local sensitivity analysis.
This method is very simple: you move one parameter while fixing the others, then check the change in the model outputs.
Researchers have also developed an approach that moves all parameters at a time and checks the change in model output; this is called global, or variance-based, sensitivity analysis.
The output variance contributed by the specific parameter \(x_i\),
also known as the main effect
The output variance contributed by any pair of input parameters
The output variance contributed by the specific parameter and its interactions,
also known as the total effect
"Local" SA usually only addresses first order effects
"Global" SA can address total effect that include main effect and interaction
Hsieh N-H, Reisfeld B, Bois FY and Chiu WA. 2018. Applying a Global Sensitivity Analysis Workflow to Improve the Computational Efficiencies in Physiologically-Based Pharmacokinetic Modeling. Frontiers in Pharmacology 9:588.
Background:
Many global sensitivity analysis (GSA) algorithms have been developed, but there is little guidance for identifying which one performs best.
There is NO suitable reference for parameter fixing in PBPK models.
Objectives:
There are different types of GSA approaches for estimating the sensitivity index. For example, some methods use a Monte Carlo approach to generate random samples in the parameter space, while others use an algorithm that generates a specific search curve to sample the parameters; a small comparison of both styles is sketched below.
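As a small illustration of both styles, the sensitivity R package (which implements Morris, eFAST, and the Sobol estimators of Jansen and Owen) can be run on a toy three-parameter function; the function and the parameter names here are hypothetical:

```r
# Comparing a screening method (Morris) and a variance-based method (eFAST)
# on a toy function with three named inputs.
library(sensitivity)

toy <- function(X) X[, 1]^2 + X[, 1] * X[, 2] + 0.1 * X[, 3]

# Morris elementary-effects screening: fast, qualitative ranking
m <- morris(model = toy, factors = c("ka", "kel", "Vd"), r = 20,
            design = list(type = "oat", levels = 6, grid.jump = 3),
            binf = 0, bsup = 1)
print(m)

# eFAST: quantitative first-order and total-effect indices via a search curve
f <- fast99(model = toy, factors = c("ka", "kel", "Vd"), n = 1000,
            q = "qunif", q.arg = list(min = 0, max = 1))
print(f)
```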
Acetaminophen (APAP) PK data and PBPK model
Global sensitivity analysis
Reproduce result from original paper (21 parameters)
Full model calibration (58 parameters)
Sensitivity analysis using different algorithms
Compare the time-cost for sensitivity index to being stable
Compare the consistency across different algorithms
Bayesian model calibration by SA-judged influential parameters
Compare the model performance under each cut-off setting
Examine the bias for expert-judged and SA-judged parameters
Our workflow started by reproducing the result from the previous study.
Time spent in SA (min): Morris (2.4) < eFAST (19.8) \(\approx\) Jansen (19.8) < Owen (59.4)
Variation of SA index: Morris (2.3%) < eFAST (5.3%) < Jansen (8.0%) < Owen (15.9%)
In our results, we answer four questions.
We tested the computation time with sample sizes from 1,000 to 8,000 and checked the variation of the sensitivity index.
As expected, Morris is the fastest approach, rapidly computing the result at a given sample size and with the lowest variability.
But if we focus on the variance-based methods, eFAST shows better performance than Jansen and Owen.
eFAST \(\approx\) Jansen \(\approx\) Owen > Morris
Grey: first-order; Red: interaction
We correlated the sensitivity indices across the four algorithms, using the results for the original and full parameter sets. Grey and red mark the correlations for first-order and interaction effects, respectively.
We found that all global methods can provide the consistent sensitivity index.
However, Morris cannot produce results similar to the other algorithms.
Original model parameters (OMP) → setting cut-off → Original influential parameters (OIP)
Full model parameters (FMP) → setting cut-off → Full influential parameters (FIP)
Both influential sets → Model calibration and validation
Moving on to the third question.
The crucial factor in this part is how to set the cut-off for the sensitivity index that distinguishes influential from non-influential parameters.
Then we can use the selected parameter sets for further model calibration.
To determine a reliable cut-off, we used both the original and the full parameter sets.
After the sensitivity analysis, we set the cut-off and performed model calibration and validation to see which cut-off provides the better trade-off between computational time and model performance.
1 absorption parameter
2 Phase I metabolism parameters
4 Phase II metabolism parameters
1 physiological parameter
1 chemical-specific transport parameter
4 partition coefficients
First, we used a cut-off of 0.05, meaning that if a parameter contributes less than 5% of the output variance, we drop it. We found 11 influential and 10 non-influential parameters in the original parameter set. However, with the 0.05 cut-off we could screen only 10 parameters in the full model set, so we further used a cut-off of 0.01 to select 10 additional parameters for model calibration.
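Mechanically, the cut-off step reduces to thresholding the sensitivity indices; a minimal sketch with hypothetical index values:

```r
# Cut-off-based parameter fixing: calibrate parameters whose sensitivity index
# exceeds the cut-off; fix the rest at point values. Values are hypothetical.
S_index <- c(ka = 0.42, Vmax = 0.18, Km = 0.06, kel = 0.31, Vd = 0.008)

cutoff      <- 0.05
influential <- names(S_index)[S_index >= cutoff]  # include in Bayesian calibration
fixed       <- names(S_index)[S_index <  cutoff]  # fix at prior central values
influential
fixed
```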
Here is the summary of the parameters classified as non-influential in the original parameter set. These parameters relate to absorption and metabolism. In addition, we found 6 parameters that had been fixed in the previous study.
We compared the observed human experimental data with the predicted values and summarized the residuals.
The full influential parameter set with the 0.01 cut-off, using only one third of the full model parameters, provides results similar to the full parameter set.
Likewise, the 0.05 cut-off, using only 10 parameters, generates results similar to the original parameter set.
| Parameter group | Time-cost in calibration (hr) |
|---|---|
| All model parameters | 104.6 (0.96) |
| GSA-judged parameters | 42.1 (0.29) |
| Expert-judged parameters | 40.8 (0.18) |
All time-cost values are shown as mean and standard deviation (n = 10).
Application of GSA in trichloroethylene (TCE)-PBPK model
Funding: National Institute of Environmental Health Sciences (P42ES027704)
Hsieh, N.H., Chen, Z., Rusyn, I., and Chiu, W.A. Concentration-response modeling of in vitro bioactivity data from complex mixtures of priority Superfund compounds.
Background:
Environmental chemicals at Superfund sites are composed of diverse chemicals, including heavy metals, pesticides, industrial chemicals, polycyclic aromatic hydrocarbons, and plasticizers.
Objective:
To examine the concentration-response relationships of individual Superfund priority-list chemicals and their mixtures using in vitro bioactivity data.
Probabilistic TK/TD modeling provides insight for quantifying the uncertainty and variability in risk assessment.
Bayesian statistics is a powerful tool: it can be used to understand and characterize the "uncertainty" and "variability" from individuals to populations through data and models.
Global sensitivity analysis can be an essential tool for reducing the dimensionality of the model parameter space and improving the "computational efficiency" of Bayesian PBPK model calibration.
Zunwei Chen
Weihsueh A. Chiu
Yu-Syuan Luo
Ivan Rusyn
Brad Reisfeld
Frederic Y. Bois
Thank You!