Choose the "specific" value (or the most conservative scenario) in the risk assessment
Is it good enough?
Combine "all" information and characterize the uncertainty
Uncertainty relates to "lack of knowledge" that, in theory, could be reduced by better data, whereas variability relates to an existing aspect of the real world that is outside our control.
The WHO guidance document gives a clear definition differentiating uncertainty and variability.
Chiu WA and Rusyn I, 2018. Advancing chemical risk assessment decision-making with population variability data: challenges and opportunities. https://doi.org/10.1007/s00335-017-9731-6
--- Application ---
--- Calibration ---
Stanislaw Ulam
John von Neumann
ENIAC (Electronic Numerical Integrator and Computer)
The objective of a probabilistic risk analysis is the quantification of risk from man-made and natural activities (Vesely and Rasmuson, 1984).
Three major types of uncertainty need to be differentiated:
Modeling uncertainty
Parameter uncertainty
Completeness uncertainty
A parameter is an element of a system that determines the model output.
Parameter uncertainty comes from the model parameters that are inputs to the mathematical model but whose exact values are unknown and cannot be controlled in physical experiments.
$$y = f(x_i) $$
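The mapping above can be illustrated with a small Monte Carlo sketch. The model and the parameter distributions here are made up for illustration: uncertain inputs propagate to a distribution of outputs.

```r
# Propagating parameter uncertainty through y = f(x): a minimal sketch.
# The model and the parameter distributions are hypothetical.
f <- function(t, A, B) A + B * t            # simple model: y = A + B*t
set.seed(1)
A <- rnorm(1000, mean = 0, sd = 2)          # uncertain intercept
B <- rnorm(1000, mean = 1, sd = 2)          # uncertain slope
y5 <- f(5, A, B)                            # output distribution at t = 5
quantile(y5, c(0.025, 0.5, 0.975))          # uncertainty interval for y(5)
```

Because the parameter values are unknown, the output at any fixed time is itself a distribution rather than a single number.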
With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.
-John von Neumann
Mayer J, Khairy K, Howard J. Drawing an elephant with four complex parameters. Am. J. Phys. 78, 648 (2010) https://doi.org/10.1119/1.3254017
Physiological parameters
Cardiac output
Blood flow rate
Tissue volume
Absorption
Absorption fraction, absorption rate, ...
Distribution
Partition coefficient, distribution fraction, ...
Metabolism
Michaelis–Menten kinetics, ...
Elimination
First order elimination rate, ...
Used to: Check possible simulation results (under given parameter distributions) before model calibration
Used to: Posterior predictive check, Local/global sensitivity analysis
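The first use, checking possible simulation results under the priors, can be sketched as a forward Monte Carlo run (assuming the priors A ~ Normal(0, 2) and B ~ Normal(1, 2) from the linear example in this deck):

```r
# Sketch of a prior predictive check for a straight-line model.
# Priors are assumed to match the linear example: A ~ N(0,2), B ~ N(1,2).
set.seed(1111)
t <- 0:10
n <- 100                                      # number of Monte Carlo draws
A <- rnorm(n, 0, 2)
B <- rnorm(n, 1, 2)
pred <- sapply(t, function(ti) A + B * ti)    # n x 11 matrix of simulations
matplot(t, t(pred), type = "l", lty = 1, col = "grey",
        xlab = "t", ylab = "y")               # spread of possible outputs
```

If the observed data fall well outside this band, the priors (or the model) should be reconsidered before calibration.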
It gives us the opportunity to understand and quantify "uncertainty" and "variability" from individuals to the population through data and models.
$$ p(\theta|y) = \frac{p(\theta) p(y|\theta) }{p(y)}$$
\(y\): Observed data
\(\theta\): Model parameter (observed or unobserved)
\(p(\theta)\): Prior distribution of model parameter
\(p(y|\theta)\): Likelihood of the experiment data given by a parameter vector
\(p(\theta|y)\): Posterior distribution
\(p(y)\): Marginal likelihood of the data (normalizing constant)
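These components can be made concrete with a toy grid approximation of Bayes' rule for a single parameter (the data and distributions below are made up for illustration):

```r
# Toy grid approximation of Bayes' rule: posterior = prior x likelihood / p(y).
# Data and distributions are made up for illustration.
theta <- seq(-5, 5, by = 0.01)                 # grid of parameter values
prior <- dnorm(theta, 0, 2)                    # p(theta)
y <- c(1.2, 0.8, 1.5)                          # observed data
lik <- sapply(theta, function(th) prod(dnorm(y, th, 1)))  # p(y|theta)
post <- prior * lik / sum(prior * lik * 0.01)  # p(theta|y), normalized by p(y)
theta[which.max(post)]                         # posterior mode
```

The posterior mode sits between the prior mean (0) and the data mean, pulled toward the data because the likelihood is more informative than the prior here.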
The algorithm was named for Nicholas Metropolis (physicist) and Wilfred Keith Hastings (statistician). The algorithm proceeds as follows.
Initialize
Iterate
Markov chain Monte Carlo is a general method based on drawing values of \(\theta\) from approximate distributions and then correcting those draws to better approximate the target posterior \(p(\theta|y)\).
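The initialize/iterate scheme can be sketched as a minimal random-walk Metropolis sampler for the straight-line example (the priors, likelihood SD, and data follow the linear example in this deck; the proposal scale and iteration count are arbitrary choices):

```r
# Minimal random-walk Metropolis sketch for the straight-line example.
# Priors A ~ N(0,2), B ~ N(1,2) and likelihood SD 0.05 follow the linear
# example; proposal scale and iteration count are arbitrary.
set.seed(1)
t <- 0:10
y <- c(0.01, 0.15, 2.32, 4.33, 4.61, 6.68, 7.89, 7.13, 7.27, 9.4, 10.0)
log_post <- function(p) {                     # log prior + log likelihood
  sum(dnorm(p, mean = c(0, 1), sd = 2, log = TRUE)) +
    sum(dnorm(y, mean = p[1] + p[2] * t, sd = 0.05, log = TRUE))
}
n_iter <- 5000
chain <- matrix(NA, n_iter, 2, dimnames = list(NULL, c("A", "B")))
p <- c(0, 1)                                  # initialize
for (i in 1:n_iter) {                         # iterate
  prop <- p + rnorm(2, 0, 0.02)               # random-walk proposal
  if (log(runif(1)) < log_post(prop) - log_post(p)) p <- prop  # accept/reject
  chain[i, ] <- p
}
colMeans(chain[2501:5000, ])                  # posterior means after burn-in
```

Discarding the first half of the chain as burn-in mirrors the convention used when inspecting MCSim output below.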
## linear.model.R ##

Outputs = {y}

# Model Parameters
A = 0;
B = 1;

CalcOutputs {
  y = A + B * t;
}

End.
## linear_mcmc.in.R ##

MCMC ("MCMC.default.out", "",  # name of output
      "",                      # name of data file
      2000, 0,                 # iterations, print predictions flag
      1, 2000,                 # printing frequency, iters to print
      10101010);               # random seed (default)

Level {
  Distrib(A, Normal, 0, 2);  # prior of intercept
  Distrib(B, Normal, 1, 2);  # prior of slope
  Likelihood(y, Normal, Prediction(y), 0.05);

  Simulation {
    PrintStep (y, 0, 10, 1);
    Data (y, 0.01, 0.15, 2.32, 4.33, 4.61, 6.68, 7.89, 7.13, 7.27, 9.4, 10.0);
  }
}
End.
model <- "models/linear.model"
input <- "inputs/linear.mcmc.in"
set.seed(1111)
out <- mcsim(model, input)
head(out)
##   iter    A.1.      B.1.    LnPrior    LnData LnPosterior
## 1    0 5.17187 -0.421849  -6.820405 -591.1577   -597.9781
## 2    1 5.17187 -0.421849  -6.820405 -591.1577   -597.9781
## 3    2 5.43465 -0.421849  -7.168811 -565.2515   -572.4203
## 4    3 8.62813 -0.421849 -12.782460 -493.2532   -506.0356
## 5    4 7.97506 -0.421849 -11.427070 -471.4773   -482.9044
## 6    5 7.51347 -0.421849 -10.533410 -467.4057   -477.9391
tail(out, 4)
##      iter     A.1.     B.1.   LnPrior    LnData LnPosterior
## 1998 1997 0.706138 0.957902 -3.286722 -19.06480   -22.35153
## 1999 1998 0.706138 0.957902 -3.286722 -19.06480   -22.35153
## 2000 1999 0.706138 0.974017 -3.286585 -19.13584   -22.42242
## 2001 2000 0.462481 0.974017 -3.250992 -18.92306   -22.17405
plot(out$A.1., type = "l", xlab = "Iteration", ylab = "")
lines(out$B.1., col = 2)
legend("topright", legend = c("Intercept", "Slope"), col = c(1, 2), lty = 1)
plot(out$A.1., out$B.1., type = "b", xlab = "Intercept", ylab = "Slope")
str <- ceiling(nrow(out)/2) + 1
end <- nrow(out)
j <- str:end
plot(out$A.1.[j], out$B.1.[j], type = "b", xlab = "Intercept", ylab = "Slope")
out$A.1.[j] %>% density() %>% plot(main = "Intercept")
plot.rng <- seq(par("usr")[1], par("usr")[2], length.out = 1000)
prior.dens <- do.call("dnorm", c(list(x = plot.rng), c(0, 2)))
lines(plot.rng, prior.dens, lty = "dashed")
out$B.1.[j] %>% density() %>% plot(main = "Slope")
plot.rng <- seq(par("usr")[1], par("usr")[2], length.out = 1000)
prior.dens <- do.call("dnorm", c(list(x = plot.rng), c(1, 2)))
lines(plot.rng, prior.dens, lty = "dashed")
# Observed
x <- seq(0, 10, 1)
y <- c(0.01, 0.15, 2.32, 4.33, 4.61, 6.68, 7.89, 7.13, 7.27, 9.4, 10.0)

# Expected
dim.x <- ncol(out)
for (i in 1:11) {
  out[, ncol(out) + 1] <- out$A.1. + out$B.1. * x[i]
}

# Plot
plot(x, y, pch = "")
for (i in 1901:2000) {
  lines(x, out[i, c((dim.x + 1):ncol(out))], col = "grey")
}
points(x, y)
# Use prediction from the last iteration
i <- 2000
Expected <- out[i, c((dim.x + 1):ncol(out))] %>% as.numeric()
Observed <- c(0.01, 0.15, 2.32, 4.33, 4.61, 6.68, 7.89, 7.13, 7.27, 9.4, 10.0)

# Plot
plot(Expected, Observed/Expected, pch = "x")
abline(1, 0)
In a real-world study, we need to consider uncertainty (from different sources) and variability (inter- or intra-individual differences) to cover all possible scenarios.
If we have parameter distributions, we can apply the Monte Carlo technique to quantify uncertainty and variability.
If we have data, we can calibrate the "unknown" parameters (priors) into "known" parameters (posteriors) in our model through Bayesian statistics.
Task 1. Uncertainty analysis on PK model (code: https://rpubs.com/Nanhung/SRP19_6)
Task 2. Model calibration (code: https://rpubs.com/Nanhung/SRP19_7)
Task 3. Monte Carlo simulation for PBPK model (code: https://rpubs.com/Nanhung/SRP19_8)
Task 4. PBPK model in MCSim (code: https://rpubs.com/Nanhung/SRP19_9)
In the previous exercise, we found that the predicted results could not describe the real cases.
Therefore, we need to conduct an uncertainty analysis to figure out how to reset the model parameters.
After the uncertainty analysis, we can calibrate the model parameters with the Markov chain Monte Carlo technique.
Use the parameter distributions tested in the uncertainty analysis to conduct the MCMC simulation for model calibration.
The calibrated parameters are body mass (BDM), pulmonary flow (Flow_pul), the partition coefficient of arterial blood (PC_art), and the metabolism rate (Kmetwp) [*].
[*] Bois F.Y., Brochot C. (2016) Modeling Pharmacokinetics. In: Benfenati E. (ed.) In Silico Methods for Predicting Drug Toxicity. Methods in Molecular Biology, vol 1425. Humana Press, New York, NY.
The model can be solved with the ode function in the deSolve package. The prior distributions of the calibrated parameters are:

Distrib (BDM,      Normal, 73,   7.3);
Distrib (Flow_pul, Normal, 5,    0.5);
Distrib (PC_art,   Normal, 2,    0.2);
Distrib (Kmetwp,   Normal, 0.25, 0.025);
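To show how deSolve's ode function solves such a system, here is a toy one-compartment sketch; the model structure and parameter values are made up for illustration and are not the deck's PBPK model:

```r
# Toy one-compartment model solved with deSolve::ode(). The structure and
# parameter values are illustrative only, not the PBPK model above.
library(deSolve)

onecomp <- function(t, state, parms) {
  with(as.list(c(state, parms)), {
    dA <- -ke * A                    # first-order elimination
    list(c(dA))
  })
}

parms <- c(ke = 0.25)                # elimination rate, 1/h (assumed)
state <- c(A = 100)                  # initial amount in the compartment
times <- seq(0, 24, by = 1)          # hours

out <- ode(y = state, times = times, func = onecomp, parms = parms)
tail(out, 3)                         # amount declines toward zero by 24 h
```

The full PBPK model follows the same pattern, with one state equation per compartment and the calibrated parameters passed through parms.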