The goal of toxicological research is to apply its results to human health risk assessment.
Absorption - How does it get in?
Distribution - Which tissues and organs does it go to?
Metabolism - How is it broken down and transformed?
Elimination - How does it leave the body?
Kinetics is the branch of chemistry that describes the change of one or more variables as a function of time.
So we focus on time and concentration: the concentration rises after the drug is taken and then declines after reaching its maximum.
We can "predict" the cumulative dose by constructing a compartmental model.
D: Intake dose (mass)
C: Concentration (mass/vol.)
ka: Absorption rate (1/time)
Vmax: Maximal metabolism rate
Km: Concentration of chemical at which metabolism reaches half of Vmax (mass/vol.)
kel: Elimination rate (1/time)
However, it is difficult to characterize the whole time-concentration relationship experimentally, because the experiments take a lot of time and money.
Therefore, we need toxicokinetic modeling. A basic model is constructed with a single variable and a few parameters.
We can set up the input dose and define the output, such as the chemical concentration in blood or other tissues and organs.
Then we can set the parameters, such as the absorption and elimination rate constants, as in the sketch below.
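A minimal sketch of such a basic model in R, using the parameters defined above (D, ka, Vmax, Km, kel) with hypothetical values; the deSolve package and the one-compartment structure are assumptions for illustration, not the model used in this work.

```r
library(deSolve)

# One-compartment model: first-order absorption from a gut depot,
# Michaelis-Menten metabolism, and first-order elimination.
onecomp <- function(t, state, parms) {
  with(as.list(c(state, parms)), {
    C      <- Acent / V                                      # concentration (mass/vol.)
    dAgut  <- -ka * Agut                                     # absorption from the gut
    dAcent <- ka * Agut - Vmax * C / (Km + C) - kel * Acent  # metabolism + elimination
    list(c(dAgut, dAcent))
  })
}

parms <- c(ka = 1.0, Vmax = 5, Km = 2, kel = 0.2, V = 10)  # hypothetical values
state <- c(Agut = 100, Acent = 0)                          # D = 100 (intake dose, mass)
out   <- ode(y = state, times = seq(0, 24, by = 0.1), func = onecomp, parms = parms)
plot(out)  # the amount rises after intake, peaks, then declines
```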
Mathematically transcribe physiological and physicochemical descriptions of the phenomena involved in the complex ADME processes.
Img: http://pharmguse.net/pkdm/prediction.html
However, the real world is not as simple as we think.
Sometimes we need to consider the complex mechanism and factors that can help us make a good prediction.
Researchers therefore developed PBPK models, which include physiological and physicochemical parameters that correspond to the real-world situation and can be used to describe the complex ADME processes.
It can be applied to drug discovery and development and risk assessment.
Health risk assessment
Pharmaceutical research
Drug development
. . .
https://doi.org/10.1002/psp4.12134
--- Calibration ---
Individual level
E: Exposure
t: Time
θ: Subject-specific parameters
y: Observed data (what we condition on)
Population level
μ: Population means
Σ²: Population variances
σ²: Residual errors
Characterizing and quantifying variability and uncertainty
Cross-species comparisons of metabolism
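As a toy illustration of this multilevel structure (a sketch with made-up values, not the study's model): individual parameters are drawn around the population mean μ with variance Σ², and observations scatter around each individual with residual variance σ².

```r
set.seed(42)
mu     <- 1.0    # population mean
Sigma2 <- 0.04   # population variance
sigma2 <- 0.01   # residual error variance

theta <- rnorm(10, mean = mu, sd = sqrt(Sigma2))  # individual-level parameters
# five observations per individual, scattered around each theta_i
y <- sapply(theta, function(th) rnorm(5, mean = th, sd = sqrt(sigma2)))
boxplot(y, xlab = "Individual", ylab = "Simulated observation y")
```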
library(ggplot2)  # Theoph: built-in theophylline PK dataset
ggplot(Theoph, aes(x = Time, y = conc, color = Subject)) +
  geom_point() + geom_line() +
  labs(x = "Time (hr)", y = "Concentration (mg/L)")
Priors, posteriors, likelihood, multilevel models
p(θ|y)∝p(y|θ)×p(θ)
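A minimal grid-approximation sketch of this relation, with an assumed normal prior and likelihood (toy numbers, for illustration only):

```r
theta <- seq(0, 10, length.out = 501)            # parameter grid
prior <- dnorm(theta, mean = 5, sd = 2)          # p(theta)
y     <- c(4.2, 5.1, 4.8)                        # toy observed data
lik   <- sapply(theta, function(m) prod(dnorm(y, mean = m, sd = 1)))  # p(y|theta)
post  <- lik * prior                             # unnormalized posterior
post  <- post / sum(post * (theta[2] - theta[1]))
plot(theta, post, type = "l", xlab = expression(theta), ylab = "p(theta | y)")
```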
Creator: Dr. Frédéric Y. Bois
Design and run statistical or simulation models (e.g., differential equations)
Do parametric simulations (e.g., sensitivity analysis)
Perform Monte Carlo (stochastic) simulations
Luo Y-S, Hsieh N-H, Soldatow VY, Chiu WA, Rusyn I. 2018. Comparative analysis of metabolism of trichloroethylene and tetrachloroethylene among mouse tissues and strains. Toxicology 409(1): 33-43.
Luo Y-S, Cichocki JA, Hsieh N-H, Lewis L, Threadgill DW, Chiu WA, Rusyn I. Using the Collaborative Cross mouse population to fill data gaps in risk assessment: a case study of population-based analysis of toxicokinetics and kidney toxicodynamics of tetrachloroethylene. (Accepted)
In our recent research, we incorporated more animal experimental results. The first study included additional measurements of GSH conjugates in mouse tissues to build a more comprehensive PBPK model for Bayesian inference.
In the second and third studies, we built a multicompartment PK model to estimate the metabolism of TCE and PCE in liver, kidney, brain, lung, and serum.
We didn't use the PBPK model because the current PBPK model takes a very long time to calibrate. Therefore, we rebuilt this simplified model to compare and describe trichloroethylene and tetrachloroethylene metabolism among mouse tissues and strains.
The third study extends this work, using the same model with the Collaborative Cross mouse population and applying the results to risk assessment.
Currently, the Bayesian Markov chain Monte Carlo (MCMC) algorithm is an effective way to calibrate a population PBPK model.
BUT
This method often struggles to reach "convergence" within an acceptable computational time (more parameters, more time!).
We face this challenge in our own calibration process.
Our Problem Is ...
How to Improve the Computational Efficiency
Funder: Food and Drug Administration (FDA)
Project Start: Sep-2016
Name: Enhancing the reliability, efficiency, and usability of Bayesian population PBPK modeling
For these reasons, we started this project.
February 19th, 2019 - Release of GNU MCSim version 6.1.0.
Version 6.1.0 offers automatic setting of the inverse-temperature scale in simulated tempering MCMC. This greatly facilitates the use of this powerful algorithm (convergence is reached faster, only one chain needs to be run if inverse temperature zero is in the scale, and Bayes factors can be calculated for model choice).
Hsieh N-H, Reisfeld B, Bois FY and Chiu WA. 2018. Applying a Global Sensitivity Analysis Workflow to Improve the Computational Efficiencies in Physiologically-Based Pharmacokinetic Modeling. Frontiers in Pharmacology 9:588.
Bois FY, Hsieh N-H, Gao W, Reisfeld B, Chiu WA. Well tempered MCMC simulations for population pharmacokinetic models. (Submitted)
This is our FDA-funded project, which aims to improve the functionality of MCSim and advance research on the Bayesian framework.
One current challenge of the Bayesian method is that it becomes time-consuming as the model grows more complex or comprehensive and as more data from multi-individual, multi-group testing are added: calibrating the model parameters can take a very long time. Therefore, in our 2018 paper, we proposed a workflow that fixes the non-influential parameters in a PBPK model based on global sensitivity analysis.
In this way, we can save about half of the model calibration time.
Our recently submitted paper shows how we developed and applied tempered MCMC to PBPK modeling; this algorithm also reaches convergence faster than the original one.
We usually fix the "possible" non-influential model parameters through "expert judgment".
BUT
This approach might cause "bias" in parameter estimates and model predictions.
Actually, we had already applied the concept of parameter fixing to improve computational time, but through expert judgment. This is why we need an alternative way to identify the unimportant parameters, so we chose sensitivity analysis.
"The study of how uncertainty in the output of a model (numerical or otherwise) can be apportioned to different sources of uncertainty in the model input."
Parameter Prioritization
Identifying the most important factors
Reduce the uncertainty in the model response if it is too large (i.e., not acceptable)
Parameter Fixing
Identifying the least important factors
Simplify the model if it has too many factors
Parameter Mapping
Here we have observations (assumed error-free for simplicity's sake) and a model whose parameters are estimated from the data. Estimation can take different courses. Usually it is achieved by minimizing, e.g. by least squares, some measure of distance between the model's prediction and the data. At the end of the estimation step, 'best' parameter values as well as their errors are known. At this point we might consider the model 'true' and run an uncertainty analysis by propagating the uncertainty in the parameters through the model, all the way to the model output. In this case the estimated parameters become our factors.
It consequently provides useful insight into which model input contributes most to the variability of the model output.
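A toy sketch of this estimate-then-propagate idea, with synthetic data and an assumed exponential model (purely illustrative):

```r
set.seed(3)
t_obs <- seq(0, 10, 0.5)
y_obs <- 5 * exp(-0.4 * t_obs) + rnorm(length(t_obs), sd = 0.1)  # synthetic data

# estimation step: least squares gives 'best' values and their errors
fit <- nls(y_obs ~ A * exp(-k * t_obs), start = list(A = 1, k = 0.1))
est <- coef(fit)
se  <- summary(fit)$coefficients[, "Std. Error"]

# uncertainty analysis: propagate parameter uncertainty to the output
A <- rnorm(1000, est["A"], se["A"])
k <- rnorm(1000, est["k"], se["k"])
pred  <- sapply(t_obs, function(tt) A * exp(-k * tt))       # draws x time points
bands <- t(apply(pred, 2, quantile, probs = c(0.025, 0.5, 0.975)))
matplot(t_obs, bands, type = "l", lty = c(2, 1, 2), col = 1,
        xlab = "Time", ylab = "Model output")
points(t_obs, y_obs)
```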
Local (One-at-a-time)
"Local" SA focuses on sensitivity at a particular set of input parameters, usually using gradients or partial derivatives.
Global (All-at-a-time)
"Global" SA calculates the contribution from the variation of all model parameters, including single-parameter effects and multi-parameter interactions.
"Global" sensitivity analysis is good at parameter fixing.
Modelers with experience usually know local sensitivity analysis. The method is very simple: vary one parameter, fix the others, and check the change in the model outputs. Researchers have also developed approaches that vary all parameters at a time and check the change in the model output; this is called global, or variance-based, sensitivity analysis.
First order (Si)
The output variance contributed by the specific parameter xi; also known as the main effect.
Interaction (Sij)
The output variance contributed by any pair of input parameters.
Total order (ST)
The output variance contributed by the specific parameter and its interactions; also known as the total effect.
"Local" SA usually addresses only first-order effects.
"Global" SA can address the total effect, which includes the main effect and interactions.
These indices quantify the impact of each model parameter. The first order is like your friend's facial expression when tasting a single fruit; the interaction is like the mixed flavor of two or more fruits.
Variance-based methods for sensitivity analysis were first employed by chemists in the early 1970s, in a method known as FAST (the Fourier Amplitude Sensitivity Test).
Si = V[E(Y|Xi)] / V(Y)
Si = Corr(Y, E(Y|Xi))²
Example of parameter 1 for a model with three parameters:
ST1 = S1 + S12 + S13 + S123
S1 + S2 + S3 + S12 + S13 + S23 + S123 = 1
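A brute-force numerical check of Si = V[E(Y|Xi)] / V(Y), approximating the conditional expectation by binning (illustrative only; the binning estimator is crude but shows the definition at work). It uses the Ishigami test function shipped with the sensitivity package.

```r
library(sensitivity)  # provides ishigami.fun
set.seed(1)
n <- 1e5
X <- matrix(runif(3 * n, min = -pi, max = pi), ncol = 3)
y <- ishigami.fun(X)

bins   <- cut(X[, 1], breaks = 100)  # condition on X1 by binning
Ey_x1  <- tapply(y, bins, mean)      # approximate E(Y | X1)
S1_hat <- var(Ey_x1) / var(y)        # first-order index for X1
S1_hat                               # close to the analytic value (~0.31)
```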
Define the variables of interest (yj,t)
Identify all model factors (xi) that should be considered in the GSA
Characterize the uncertainty of each selected input factor, p(xi)
Generate a sample of a given size (n) from the previously defined probability distributions
Execute the model for each combination of factors
Visualize and interpret the outputs
Estimate the sensitivity measures
f(x) = sin(x1) + a·sin²(x2) + b·x3⁴·sin(x1)
library(sensitivity)
x <- fast99(model = ishigami.fun, factors = 3, n = 400,
            q = "qunif", q.arg = list(min = -pi, max = pi))
par(mfrow = c(1, 3))
plot(x$X[,1], x$y); plot(x$X[,2], x$y); plot(x$X[,3], x$y)
plot(x)
(1) Many algorithms have been developed, but we lack the knowledge to identify which one is best, and (2) there is NO suitable reference on parameter fixing for PBPK models.
ls(pos = "package:sensitivity")
## [1] "ask" "atantemp.fun" "campbell1D.fun" ## [4] "delsa" "dgumbel.trunc" "dnorm.trunc" ## [7] "fast99" "ishigami.fun" "linkletter.fun" ## [10] "morris" "morris.fun" "morrisMultOut" ## [13] "parameterSets" "pcc" "pgumbel.trunc" ## [16] "PLI" "PLIquantile" "plot3d.morris" ## [19] "plotFG" "pnorm.trunc" "PoincareConstant"## [22] "PoincareOptimal" "qgumbel.trunc" "qnorm.trunc" ## [25] "rgumbel.trunc" "rnorm.trunc" "sb" ## [28] "scatterplot" "sensiFdiv" "sensiHSIC" ## [31] "shapleyPermEx" "shapleyPermRand" "sobol" ## [34] "sobol.fun" "sobol2002" "sobol2007" ## [37] "sobolEff" "sobolGP" "soboljansen" ## [40] "sobolmara" "sobolmartinez" "sobolMultOut" ## [43] "sobolowen" "sobolroalhs" "sobolroauc" ## [46] "sobolSalt" "sobolSmthSpl" "sobolTIIlo" ## [49] "sobolTIIpf" "soboltouati" "src" ## [52] "support" "tell" "template.replace"
Acetaminophen (APAP) is a widely used pain reliever and fever reducer.
The therapeutic index (ratio of toxic to therapeutic doses) is unusually small.
Phase I metabolism (APAP to NAPQI) is the toxicity pathway at high doses.
Phase II metabolism (APAP to conjugates) is the major pathway at therapeutic doses.
A typical pharmacological agent with large amounts of both human and animal data.
A well-constructed and well-studied PBPK model.
I'll move on to our materials; the first is the PBPK model.
In this study, we used acetaminophen, a widely used medication.
It has two types of metabolic pathways, which produce the therapeutic and toxic effects.
Most importantly, acetaminophen has a large amount of PK data for both humans and other animals, and a well-constructed, well-studied PBPK model was already available.
The model simulates the disposition of acetaminophen (APAP) and two of its key metabolites, APAP-glucuronide (APAP-G) and APAP-sulfate (APAP-S), in plasma, urine, and several pharmacologically and toxicologically relevant tissues.
Zurlinden, T.J. & Reisfeld, B. Eur J Drug Metab Pharmacokinet (2016) 41: 267.
The acetaminophen PBPK model is constructed with several tissue compartments, such as fat, muscle, liver, and kidney.
It can be used to simulate and predict the concentrations of the two metabolites in plasma and urine.
It can also describe different dosing methods, including oral and intravenous.
2 Acetaminophen absorption (Time constant)
2 Phase I metabolism: Cytochrome P450 (M-M constant)
4 Phase II metabolism: sulfation (M-M constant)
4 Phase II metabolism: glucuronidation (M-M constant)
4 Active hepatic transporters (M-M constant)
2 Cofactor synthesis (fraction)
3 Clearance (rate constant)
M-M: Michaelis–Menten
In the previous study, only 21 parameters were chosen for model calibration.
These parameters can be simply classified into four types, as listed above (time constants, Michaelis–Menten constants, fractions, and rate constants).
You can see that most parameters are Michaelis–Menten constants, which determine the metabolism of acetaminophen.
VT dCjT/dt = QT (CjA − CjT / PT:blood)
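A minimal sketch of this flow-limited tissue compartment in R (hypothetical parameter values; the deSolve package is an assumption for illustration):

```r
library(deSolve)

# VT * dCT/dt = QT * (CA - CT / PT_blood)
tissue <- function(t, state, p) {
  with(as.list(c(state, p)), {
    dCT <- QT * (CA - CT / PT_blood) / VT  # flow-limited uptake
    list(dCT)
  })
}

p   <- c(QT = 1.1, VT = 0.5, PT_blood = 2.0, CA = 1.0)  # hypothetical values
out <- ode(y = c(CT = 0), times = seq(0, 10, by = 0.1), func = tissue, parms = p)
plot(out)  # CT approaches CA * PT_blood (= 2.0) at steady state
```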
Physiological parameters
1 cardiac output
6 blood flow rate (Q)
8 tissue volume (V)
Physicochemical parameters
22 partition coefficients for APAP, APAP-glucuronide, and APAP-sulfate (P)
On the other side, we found 37 parameters that had been fixed in the previous study, including the physiological parameters cardiac output, blood flow rates, and tissue volumes.
Among the physicochemical parameters, we also tested the sensitivity of the partition coefficients of acetaminophen and its conjugates.
We then included these parameters, with assigned uncertainty, in the calibration.
Eight studies (n = 71) with a single oral dose and three different given doses.
Experimental data are also very important in our model calibration.
We used PK data from eight studies, comprising 71 subjects.
All subjects were administered a single oral dose.
The given dose levels ranged from 325 mg in group A to 80 mg/kg in group H.
Local:
Morris
Global:
Extended Fourier Amplitude Sensitivity Testing (eFAST)
Jansen's algorithm
Owen's algorithm
Finally, we used four sensitivity analysis algorithms as candidates and compared their results with one another.
Performs parameter sampling in a Latin Hypercube following a One-Step-At-A-Time design
Can compute the importance (μ∗) and interaction (σ) of the effects
1. It is semi-quantitative – the factors are ranked on an interval scale
2. It is numerically efficient
3. Not very good for factor fixing
The first method is the Morris method. Morris can be classified as a local sensitivity analysis, but it is an unusual local approach: unlike the traditional local method, it can also estimate interactions in its sensitivity index. On the right-hand side is an example of Morris screening.
set.seed(1111)
x <- morris(model = ishigami.fun,
            factors = c("Factor A", "Factor B", "Factor C"),
            r = 30, binf = -pi, bsup = pi,
            design = list(type = "oat", levels = 5, grid.jump = 3))
par(mfrow = c(1, 3))
plot(x$X[,1], x$y); plot(x$X[,2], x$y); plot(x$X[,3], x$y)
par(mar = c(4, 4, 1, 1))
plot(x, xlim = c(0, 10), ylim = c(0, 10))
abline(0, 1)             # non-linear and/or non-monotonic
abline(0, 0.5, lty = 2)  # monotonic
abline(0, 0.1, lty = 3)  # almost linear
legend("topleft",
       legend = c("non-linear and/or non-monotonic", "monotonic", "linear"),
       lty = c(1:3))
eFAST: defines a search curve in the input space.
Jansen: generates TWO independent random sampling parameter matrices.
Owen: combines THREE independent random sampling parameter matrices.
The main difference among our global methods is that they use different parameter sampling schemes.
The eFAST method generates a search curve and samples the parameters along that curve.
Jansen's and Owen's methods use Monte Carlo-based sampling of the parameters in the parameter space; a sketch of Jansen's scheme follows below.
Compared with Morris, the global methods have these characteristics.
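A minimal sketch of Jansen's two-matrix scheme using soboljansen() from the sensitivity package on the Ishigami test function (the sample size and seed are arbitrary choices for illustration):

```r
library(sensitivity)
set.seed(2222)
n <- 1000

# TWO independent random sampling parameter matrices, as Jansen's scheme requires
X1 <- data.frame(matrix(runif(3 * n, -pi, pi), nrow = n))
X2 <- data.frame(matrix(runif(3 * n, -pi, pi), nrow = n))

x <- soboljansen(model = ishigami.fun, X1 = X1, X2 = X2, nboot = 100)
print(x)  # first-order and total-order indices with bootstrap confidence intervals
plot(x)
```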
Reproduce the result from the original paper (21 parameters)
Full model calibration (58 parameters)
Sensitivity analysis using different algorithms
Compare the time cost for the sensitivity index to become stable
Compare the consistency across different algorithms
Bayesian model calibration using the SA-judged influential parameters
Compare the model performance under different cut-off settings
Examine the bias for expert-judged and SA-judged parameters
Our workflow started by reproducing the result from the previous study.
Time spent in SA (min): Morris (2.4) < eFAST (19.8) ≈ Jansen (19.8) < Owen (59.4)
Variation of SA index: Morris (2.3%) < eFAST (5.3%) < Jansen (8.0%) < Owen (15.9%)
eFAST ≈ Jansen ≈ Owen > Morris
Grey: first-order; Red: interaction
Original model parameters (OMP) → Setting cut-off → Original influential parameters (OIP)
Full model parameters (FMP) → Setting cut-off → Full influential parameters (FIP)
Model calibration and validation
OIP: original influential parameters; OMP: original model parameters; FIP: full influential parameter; FMP: full model parameters
1 Absorption parameters
2 Phase I metabolism parameters
4 Phase II metabolism parameters
1 physiological parameter
1 chemical-specific transport parameter
4 partition coefficients
Here is a summary of the parameters classified as non-influential under the original parameter setting.
These parameters come from absorption and metabolism.
In addition, we found 6 parameters that had been fixed in the previous study.
OIP: original influential parameters; OMP: original model parameters; FIP: full influential parameter; FMP: full model parameters
We compared the observed human experimental data with the predicted values and then summarized the residuals.
We can see that the full influential parameter set with a cut-off at 0.01, which uses only one third of the full model parameters, provides results similar to the full parameter set.
Likewise, a cut-off at 0.05, which retains only 10 parameters, generates results similar to the original parameter set.
| Parameter group           | Time-cost in calibration (hr) |
|---------------------------|-------------------------------|
| All model parameters      | 104.6 (0.96)                  |
| GSA-judged parameters     | 42.1 (0.29)                   |
| Expert-judged parameters  | 40.8 (0.18)                   |
Bayesian statistics is a powerful tool. It can be used to understand and characterize the "uncertainty" and "variability" from individuals to the population through data and models.
Global sensitivity analysis can be an essential tool to reduce the dimensionality of the model parameters and improve the "computational efficiency" of Bayesian PBPK model calibration.
If you want to calibrate a PK or PBPK model, you can:
Use eFAST to detect the "influential" model parameters, making sure to check "convergence".
Distinguish "influential" from "non-influential" parameters with a cut-off on the sensitivity index.
Conduct model calibration for only the "influential" parameters (see the sketch below).
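A hypothetical sketch of this recipe on the Ishigami test function, assuming the fast99 object exposes V, D1, and Dt as in the sensitivity package; the 0.05 cut-off mirrors the one used above, and the variable names are illustrative.

```r
library(sensitivity)
x <- fast99(model = ishigami.fun, factors = 3, n = 400,
            q = "qunif", q.arg = list(min = -pi, max = pi))

ST <- 1 - x$Dt / x$V                # total-order indices (as printed by fast99)
cutoff <- 0.05                      # cut-off on the sensitivity index
influential <- which(ST >= cutoff)  # calibrate these parameters
fixed       <- which(ST <  cutoff)  # fix these at point estimates
```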
Reproducible research - Software development (R package)
Application of GSA in trichloroethylene (TCE)-PBPK model
Questions?