Sensitivity Analysis

'Sensitivity analysis is the study of how the uncertainty in the output of a model (numerical or otherwise) can be apportioned to different sources of uncertainty in the model input' (Saltelli, 2002).

From: Second Generation Cell and Gene-based Therapies, 2020

Sensitivity Analysis

C. Pichery, in Encyclopedia of Toxicology (Third Edition), 2014

Sensitivity Analysis: Definition and Properties

In a numerical (or otherwise) model, sensitivity analysis (SA) is a method that measures how the uncertainties of one or more input variables translate into uncertainties in the output variables. This analysis is useful because it can improve the predictions of the model, or simplify it, by studying qualitatively and/or quantitatively the model's response to changes in the input variables, and because it aids understanding of the phenomenon under study through analysis of the interactions between variables. However, the target of interest must not be the model output per se, but the question that the model has been called on to answer.

In other words, by varying the values of the parameters involved, one can evaluate the robustness, i.e., the 'sensitivity', of the results to these changes and identify the values beyond which the results change significantly. SA thereby identifies priority needs for improving knowledge: it helps reduce the uncertainties in the parameters of the assessment, after which decisions about the phenomenon under study can be taken with greater confidence.
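To make this concrete, here is a minimal one-at-a-time sweep in Python; the toy model and baseline values are hypothetical illustrations, not from the chapter:

```python
import numpy as np

# Hypothetical toy model: output as a function of three input variables.
def model(a, b, c):
    return a * np.exp(-b) + c**2

baseline = {"a": 2.0, "b": 0.5, "c": 1.0}
y0 = model(**baseline)

# Vary each input +/-20% around its baseline while holding the others
# fixed, and record how far the output moves from its baseline value.
for name, value in baseline.items():
    for factor in (0.8, 1.2):
        y = model(**{**baseline, name: value * factor})
        print(f"{name} x {factor:.1f}: output shifts by {y - y0:+.4f}")
```

Inputs whose perturbation barely moves the output are low-priority for further measurement; inputs that move it strongly are where better knowledge pays off.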

URL: https://www.sciencedirect.com/science/article/pii/B9780123864543004310

History, Science and Methods

L.G.M. Gorris, C. Yoe, in Encyclopedia of Food Safety, 2014

Sensitivity Analysis

Sensitivity analysis is an essential part of every risk assessment, whether quantitative or qualitative. The gaps in our knowledge are bridged by assumptions, probability distributions, expert opinion, best guesses, and a variety of other techniques. Sensitivity analysis is a systematic investigation of the means by which assessors bridge these uncertainty gaps. It includes 'what if' analysis of uncertain model parameters and inputs, as well as of all significant assumptions. Sensitivity analysis seeks to learn such things as how sensitive model outputs are to changes in inputs and how that sensitivity might affect decisions. A good sensitivity analysis increases overall confidence in a risk assessment.

URL: https://www.sciencedirect.com/science/article/pii/B9780123786128000317

Ordinary Differential Equations (ODEs) Based Modeling

Stefan Hoops, ... Josep Bassaganya-Riera, in Computational Immunology, 2016

Sensitivity Analysis

Sensitivity analysis (SA) is often employed to quantify the influence of each of the model's parameters on the behavior of the system. We can distinguish between local and global SA. A local SA addresses sensitivity relative to changes in a single parameter value, while a global analysis examines sensitivity with regard to the entire parameter space. Global SA focuses on the variance of the model outputs and determines how the input parameters influence that variance; it is a central tool in SA because it provides a quantitative and rigorous overview of how the different inputs influence the output. Global SA is often preferred when feasible because of its greater detail, but for a large system it is computationally expensive; a local SA may then be preferred because it requires less computational power. The reader is referred to Chapter 6 for an extended discussion of SA of ABMs, and to Chapter 8 for SA of multiscale models.

Global SA [25] can be applied even without knowledge of the unknown parameter values, which means it can be performed before the model calibration process. Global SA can also be used to reduce the number of parameters: if it shows that a parameter does not influence the outcome (i.e., the maximum and minimum changes in the outcome are near zero), then, because this result is independent of the settings of all the other parameters, the studied parameter's value is irrelevant, and the parameter can be removed or assigned an arbitrary value. Global SA can be performed with Condor-COPASI [24].

Local SA focuses on a single input's behavior while everything else remains the same. It is narrow in this respect, as the effect of an input parameter is not measured at settings other than the base values. Local SA is nevertheless a valuable tool once the model is calibrated; it can help determine which parameter should be modified for the system to reproduce a desired outcome. Local SA can be performed in COPASI directly: one selects an outcome or desirable effect and provides a list of candidate parameters, and COPASI returns a color-coded table that highlights which parameters influence the outcome and in which direction.

COPASI provides scaled and unscaled results. The unscaled result is the ratio of the absolute change in the effect to the absolute change in the parameter (the cause); the scaled result is the ratio of the relative changes (Figure 5.7).

Figure 5.7. Scaled local sensitivity analysis result in COPASI. Intense green values indicate a strong positive effect, intense red values a strong negative effect, and pale or white values a minor change.
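The distinction between the two quantities is easy to reproduce outside COPASI. The sketch below estimates both by finite differences for a hypothetical two-parameter model (the model and parameter values are illustrative assumptions, not COPASI's internals):

```python
import numpy as np

# Toy steady-state model (illustrative only): output y as a function of
# two rate parameters.
def y(k1, k2):
    return k1 / (k1 + k2)

p = {"k1": 0.8, "k2": 0.2}
y0 = y(**p)

for name, value in p.items():
    h = 1e-6 * value                   # small relative perturbation
    dy = y(**{**p, name: value + h}) - y0
    unscaled = dy / h                  # absolute ratio: dy/dp
    scaled = unscaled * value / y0     # relative ratio: (p/y) * dy/dp
    print(f"{name}: unscaled = {unscaled:+.4f}, scaled = {scaled:+.4f}")
```

The scaled values are dimensionless, which is what makes parameters with different units directly comparable.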

URL: https://www.sciencedirect.com/science/article/pii/B9780128036976000059

Multiscale Modeling

Vida Abedi, ... Josep Bassaganya-Riera, in Computational Immunology, 2016

Global versus Local SA

In global SA, the focus is on how the input parameters drive the variation of the model output. Different parameters can have different (sometimes extreme) effects on the system's outcome; the fact that some parameters play significant roles, while others are only marginally important, makes global SA a valuable tool. Local SA, on the other hand, measures the sensitivity of the output to changes in a single parameter value while the other parameters are held fixed. Global SA requires greater computational power than local SA and is often supported by HPC systems.

A local SA addresses sensitivity relative to changes in a single parameter value, while a global analysis examines sensitivity with regard to the entire parameter space. Global SA focuses on the variance of the model outputs and determines how the input parameters influence that variance; it yields a quantitative and rigorous overview of how the different inputs influence the output. Local SA focuses on a single input's behavior while the other parts remain the same. The limitation of local SA is its narrow scope, as the effect of an input parameter is not measured at settings other than the basal level. For this reason, global SA is often the preferred method; however, because of its higher computational complexity, the global method may not always be the method of choice, and in many cases local SA may be preferred.
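The extra cost of a global analysis comes from having to sample the whole parameter space. The sketch below shows one simple global estimator, assuming a hypothetical two-input model: each input's first-order effect is the variance of the conditional mean E[Y | x_i], estimated by binning:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (illustrative): the output depends strongly on x1, weakly on x2.
def model(x):
    return np.sin(x[:, 0]) + 0.1 * x[:, 1] ** 2

# Sample the entire parameter space at once (global, not one-at-a-time).
N = 50_000
X = rng.uniform(-np.pi, np.pi, size=(N, 2))
Y = model(X)

# First-order effect of each input: variance of the conditional mean
# E[Y | x_i], estimated across 50 quantile bins of that input.
for i in range(X.shape[1]):
    edges = np.quantile(X[:, i], np.linspace(0, 1, 51))
    idx = np.clip(np.digitize(X[:, i], edges) - 1, 0, 49)
    cond_means = np.array([Y[idx == b].mean() for b in range(50)])
    print(f"S_{i+1} ~ {cond_means.var() / Y.var():.3f}")
```

Note the cost: tens of thousands of model evaluations even for two inputs, versus a handful for a local analysis; this is the trade-off the excerpt describes.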

URL: https://www.sciencedirect.com/science/article/pii/B9780128036976000084

Analysis of Data

Richard Chin, Bruce Y. Lee, in Principles and Practice of Clinical Trial Medicine, 2008

15.4.2 Sensitivity Analysis

Sensitivity analysis is performed with assumptions that differ from those used in the primary analysis. It addresses questions such as "will the results of the study change if we use other assumptions?" and "how sure are we of the assumptions?" Sensitivity analysis is typically performed to check the robustness of the results. For instance, if a study yields a p-value of 0.02 for the primary analysis but there are quite a few dropouts, then a sensitivity analysis might be performed in which all the dropouts are counted as patients who fail therapy. If the p-value becomes 0.03 under this scenario, then the results are robust. If it becomes 0.2, the results are not robust.

Sensitivity analysis can be performed for a host of reasons, including Good Clinical Practice (GCP) violations, protocol violations, ambiguous/missing data, etc. Since imputations for missing data (see Chapter 14) can have a nontrivial effect on the results of a study as well as on the p-value, the FDA will often request sensitivity analyses to ensure that the results of the test remain robust under different imputations. For example, for dropouts, the FDA might ask for an analysis that considers each dropout to be a failure (if there are more dropouts in the active group) or each dropout to be a treatment success (if there are more dropouts in the placebo group). In extreme cases, considering the dropouts in the active group to be failures and the dropouts in the placebo group to be successes might be necessary.
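This imputation logic is easy to make concrete. The sketch below uses hypothetical trial counts (all numbers and the binary response endpoint are invented for illustration) and Fisher's exact test from SciPy to see how the p-value moves under different dropout imputations:

```python
from scipy.stats import fisher_exact

# Hypothetical trial counts: responders and non-responders among
# completers, plus dropouts in each arm. Illustrative only.
active  = {"resp": 40, "nonresp": 35, "drop": 10}
placebo = {"resp": 25, "nonresp": 50, "drop": 5}

def p_value(active_drop_fails, placebo_drop_fails):
    """Impute each arm's dropouts as failures or successes, then run
    Fisher's exact test on the resulting 2x2 table."""
    a_r = active["resp"] + (0 if active_drop_fails else active["drop"])
    a_n = active["nonresp"] + (active["drop"] if active_drop_fails else 0)
    p_r = placebo["resp"] + (0 if placebo_drop_fails else placebo["drop"])
    p_n = placebo["nonresp"] + (placebo["drop"] if placebo_drop_fails else 0)
    _, p = fisher_exact([[a_r, a_n], [p_r, p_n]])
    return p

print("all dropouts as failures:", round(p_value(True, True), 4))
print("extreme case (active fail, placebo success):",
      round(p_value(True, False), 4))
```

If the conclusion survives even the extreme imputation, the result is robust to the missing data; if it flips, the dropouts alone could explain the finding.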

URL: https://www.sciencedirect.com/science/article/pii/B9780123736956000156

Outcomes and health economic issues in surgery

Sharath C.V. Paravastu, Jonathan A. Michaels, in Core Topics in General and Emergency Surgery (Fifth Edition), 2014

Simple sensitivity analysis

Simple sensitivity analysis, in which one or more parameters contained within the evaluation are varied across a plausible range, is widely practised. With one-way analysis, each uncertain component of the evaluation is varied individually in order to assess the separate impact that each component will have upon the results of the analysis. Multi-way sensitivity analysis involves varying two or more of the components of the evaluation at the same time and assessing the impact upon the results. It should be noted that multi-way sensitivity analysis becomes more difficult to interpret as progressively more variables are varied in the analysis.52
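A compact sketch of both procedures, using a hypothetical net-monetary-benefit model (the function, ranges, and willingness-to-pay value are all illustrative assumptions):

```python
import numpy as np

# Illustrative cost-effectiveness evaluation: net monetary benefit (NMB)
# as a function of effectiveness gain (QALYs) and incremental cost.
def nmb(effect_gain, extra_cost, wtp=20_000):
    return wtp * effect_gain - extra_cost

base = {"effect_gain": 0.30, "extra_cost": 4_000}

# One-way analysis: vary each component individually across its range.
for name, lo, hi in [("effect_gain", 0.15, 0.45), ("extra_cost", 2_000, 6_000)]:
    for v in np.linspace(lo, hi, 5):
        print(f"{name} = {v:g}: NMB = {nmb(**{**base, name: v}):,.0f}")

# Two-way analysis: vary both components at the same time on a grid.
for e in np.linspace(0.15, 0.45, 3):
    for c in np.linspace(2_000, 6_000, 3):
        print(f"effect = {e:.2f}, cost = {c:,.0f}: NMB = {nmb(e, c):,.0f}")
```

The interpretive burden grows with each added dimension: the one-way results read as two short lists, while the two-way grid already requires scanning nine combinations.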

URL: https://www.sciencedirect.com/science/article/pii/B978070204964400002X

Advanced Methodological Aspects in the Economic Evaluation

Vasilios Fragoulakis, ... George P. Patrinos, in Economic Evaluation in Genomic Medicine, 2015

Sensitivity Analysis

The term "sensitivity" essentially refers to the way in which our results change when we change our model's assumptions. If sensitivity is high, then the results vary greatly when we change certain assumptions; these assumptions must be very robustly established for our model to have any validity.

To put it concisely and intuitively, we could say that economic evaluation is beset by many kinds of uncertainty. First, there is uncertainty about the structure of the model we have created (structural uncertainty). For example, is our approach to the issue the correct one? Does our analysis reflect reality? Does our model represent clinical practice correctly? To answer such questions, we need to critically evaluate our model and perform various sensitivity analyses, changing its structure to judge its value.

Another source of uncertainty is heterogeneity between the various subgroups to which the model refers (variability due to heterogeneity). If we want to compare the cost and effectiveness of a new cardiology drug with the standard treatment for the management of acute myocardial infarction, then perhaps we should perform separate analyses for men and women, young and elderly patients, patients with previous infarctions, and so on, because these groups will give different results. Because heterogeneity is important for economic models, it should be included in economic evaluation. Such subanalyses (men vs. women, etc.) are sensitivity analyses for heterogeneity.

Uncertainty also exists between patients. We cannot be sure how long a patient will actually live after a procedure, and even "similar" patients do not survive for the same length of time, nor does their care cost the same. This type of uncertainty is called first-order uncertainty. This would theoretically decrease if data from studies with large samples or valid meta-analyses were available. Realistically, there will always be some first-order uncertainty because nature is inherently stochastic.

The uncertainty that is associated with the exact value of a statistical parameter (and that is estimated by the standard deviation) is called second-order uncertainty. This type of uncertainty is of significant interest. For example, if we estimate that the cost of the treatment of a chemotherapy patient is €10,000±2,000 (mean±standard deviation), then we have entered second-order uncertainty.

Sensitivity analysis in this case is a technique that estimates the effect that different values of an independent variable have on the end results (Jain et al., 2011). Sensitivity analysis is very important when examining the robustness and validity of our conclusions in light of the significance of the initial parameters (Meltzer, 2001; Yoder, 2008). It is performed mainly for three purposes:

First, when the evaluator wants to determine the range of values in which the proposition of the economic model is valid

Second, to increase the model's reliability when the input data are elastic (e.g., when estimates are used)

Third, to make the model evaluation convenient to the end user.

The sensitivity analysis methodology consists of three steps. First, the uncertainty parameters are determined. Second, the range of variation is determined. Third, the results are calculated based on the most likely prediction as well as the "direction" of the results. This means that we examine whether one of the interventions is superior to the other according to a certain statistical probability—usually the 95% or 99% significance level.

The most common forms of sensitivity analysis are:

Single sensitivity analysis: Single analysis explores ICER variations when a single variable of the model—a different one each time—is altered. The variation values are usually within the variation range of the confidence interval, or alternatively they can include all the values found in the literature.

Multiple sensitivity analysis: A multiple analysis is performed to assess simultaneous changes in two or more variables, such as effectiveness and cost. Similarly, the variation values are obtained from the confidence intervals or determined from the literature.

Probabilistic sensitivity analysis: Probabilistic sensitivity analysis (PSA) deals with the significant problem of statistical estimation of quantities, as in the example of the chemotherapy patient we mentioned previously, and should always be included in any reliable economic analysis. For example, when the variables examined are strongly correlated, are uncertain, or follow distributions, then single sensitivity analysis is not appropriate. The same is true for data originating from different sources.

Based on the reasoning used so far, the ICER was calculated in a deterministic way (with only one point estimate), with no uncertainty when drawing conclusions. In practice, the ICER has a probabilistic nature, because the costs and the benefit of each intervention follow theoretical or empirical distributions (Briggs and Fenn, 1998). Furthermore, the introduction of a new intervention into the health system entails assuming certain risks, and this type of analysis is indicated for handling the uncertainty associated with those options. PSA takes into account the mean value, the standard deviation, and the distribution of each variable, creating thousands of results under computer simulation by selecting random cases based on the defined assumptions. It then summarizes and displays the results through the acceptability curve, notwithstanding that curve's limitations (O'Hagan et al., 2000; Fenwick et al., 2004; Barton et al., 2008).

We present a conceptual example of the necessity and understanding of the method. We assume that the reader is familiar with the concept of distribution. In this simple example, we assume that the cost consists of only one factor (e.g., only the drug). If the cost comprised many factors, then we would need to generalize the method for each one separately and obtain the total sum of the individual distributions.

Let us assume that we are comparing two second-line treatments for lung cancer. Assume that the survival for these two patient groups (and the respective costs) is estimated as follows:

E_T = 10 months (mean survival with the new treatment T)

E_S = 9 months (mean survival with the standard treatment S)

C_T = €10,000 (mean cost of the new treatment T)

C_S = €9,000 (mean cost of the standard treatment S)

Simple calculations give the ICER based on the mean value, as shown in Figure 5.3.

Figure 5.3. An example of ICER calculation.
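Written out, the arithmetic behind Figure 5.3 is:

$$\mathrm{ICER} = \frac{C_T - C_S}{E_T - E_S} = \frac{10{,}000 - 9{,}000}{10 - 9} = 1{,}000\ \text{€ per month of survival} \approx 12{,}000\ \text{€ per year}$$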

According to this, the ICER is €1,000 per month, or €12,000 per year. If society's willingness to pay (λ) is even slightly higher, such as €12,001 or €12,002, then the new treatment is considered 100% cost-effective, as can be seen from the diagram (Figure 5.4). The opposite is true if λ is marginally less than €12,000: there is a 100% chance that the treatment is not cost-effective. It is obvious that such an approach is quite "strict" and does not leave any room for error.

Figure 5.4. Point representation of the ICER.

In practice, we will not restrict ourselves to an estimate of the ICER, but we will perform many experiments to calculate more than one ICER based on the distributions available for our variables. We should note that the selection of the distribution is important, because its characteristics will affect the realization of specific values on cost and effectiveness. Two experiments (out of the thousands we would actually run in the case of a real analysis) are presented in Figure 5.5.

Figure 5.5. A probabilistic approach to the ICER.

Note that in the two experiments we ran, the ΔC, the ΔE, and the ICER are different each time, which means that we have moved on from deterministic results to stochastic analysis (i.e., analysis that includes uncertainty). We should, however, note that in this simple example we did not take any correlation in the data into account; we arbitrarily assumed that there is none, and for that reason we allowed the variables to move independently along their entire distributions. In reality, correlation often appears in the data; for example, if the cost is high, then survival may also be high. This concept of correlation is represented schematically in Figure 5.6.

Figure 5.6. The concept of correlation in probabilistic analysis.

Regardless of correlation (positive or negative), by this process we create many ICERs (as many as 5,000 or more) and plot them in a diagram as shown in Figure 5.7, where each point represents one experiment and, therefore, one calculation of the ICER (the diagram is based on hypothetical results).

Figure 5.7. Dot plot of ICER representation.

This diagram prompts the following question: how can we handle all of this information and represent it concisely? The last step of this analysis is to assume various values of λ within a reasonable range and use these to find the percentage of points that are cost-effective. In this way, instead of relying on an unrealistic approach in which the intervention would be considered 100% cost-effective at exactly €1 above the calculated ICER and not cost-effective, again with a probability of 100%, at €1 below it, the analysis takes on a probabilistic nature. Let A, B, and C symbolize various values of λ, which we assume (in applied analysis the range is usually €0 to €50,000 or €100,000). The diagram (Figure 5.8) answers the following questions: if λ is equal to A, then how many points out of the total (as a percentage) are cost-effective? If λ equals B, then how many points are cost-effective?

Figure 5.8. Derivation of the acceptability curve.

The curve summarizing this information is called the cost-effectiveness acceptability curve (CEAC) and is the output of the probabilistic approach. The scale on the vertical axis is 0–100%, and the horizontal axis represents the various values of λ. Based on this curve, once budget managers inform us of their own λ, we are able to tell them the percentage of probabilistic-analysis experiments in which the new treatment is cost-effective compared with the standard. A hypothetical example of such a curve is shown in Figure 5.9, and a minimal numerical sketch of the whole procedure follows the figure.

Figure 5.9. Representation of the acceptability curve.
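The following sketch runs the procedure end to end on the lung-cancer example. The means match the values given above; the spreads and the normal/gamma distributional choices are illustrative assumptions (the €10,000 ± €2,000 cost mirrors the chemotherapy example earlier in the chapter):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5_000  # number of probabilistic experiments

# Second-order distributions around the chapter's example means; spreads
# and distribution families are assumptions for illustration.
e_new = rng.normal(10.0, 1.0, n)                # survival (months), new
e_std = rng.normal(9.0, 1.0, n)                 # survival (months), standard
c_new = rng.gamma(shape=25, scale=400, size=n)  # cost, mean 10,000, sd 2,000
c_std = rng.gamma(shape=25, scale=360, size=n)  # cost, mean 9,000, sd 1,800

d_e, d_c = e_new - e_std, c_new - c_std

# CEAC: for each willingness-to-pay lambda (EUR per month of survival),
# the share of experiments with positive incremental net monetary benefit.
for lam in (500, 1_000, 1_500, 2_000):
    print(f"lambda = {lam:>5}: P(cost-effective) = {np.mean(lam * d_e - d_c > 0):.2f}")
```

Each pass through the loop answers exactly the question posed above ("if λ equals A, what percentage of points are cost-effective?"), and plotting the probabilities against λ traces out the CEAC.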

It should be noted that plotting a CEAC does not always indicate which treatment is the optimal, cost-effective option compared with an alternative. Under certain conditions (e.g., when comparing more than two options, when the cost and benefit of the treatments follow a specific correlation pattern, when the cost is highly asymmetrical, etc.), a percentage higher than 50% in the CEAC diagram might not indicate a cost-effective option. The curve showing the probability that the optimal option (the one with the greatest NMB) is cost-effective for a specific λ is called the cost-effectiveness acceptability frontier, and it is what we are most interested in. In this case, the correct way to find the optimal/cost-effective option is to calculate the mean NMB for various values of λ ("find the option with the highest mean NMB for various λ based on the PSA and then find the probability that this is cost-effective"). There will then be areas of λ where one treatment is considered cost-effective and other areas where another treatment is considered cost-effective. Usually, however, when comparing two options in practical research, the CEAC is the correct basis for judging the different options.

Distribution selection in PSA: The success of PSA essentially relies on the correct selection of distributions for each case. Considering that there are standard computer programs with integrated distributions, the selection of a distribution by the investigators may be somewhat arbitrary in some cases. However, if a correct approach is used, consistent with the statistical properties of the data, then the number of possible selections is significantly limited. For example, if we wish to perform a PSA on a probability that is bounded by 0 on the left and by 1 on the right, then we would normally select a suitable statistical distribution such as the beta (Claxton et al., 2005). If this probability is derived from survival analysis coefficients on a log-hazard scale, then the multivariate normal distribution is preferable (Claxton et al., 2005). For right-skewed costs, the gamma or log-normal distributions are commonly used. In cases of data correlation (e.g., between differences in cost and benefit), a bivariate normal specification is usually followed, and more recently data correlation has also been modeled through copula forms (Daggy et al., 2011). We should note that the development of a probabilistic model is indeed a challenge for economists; nevertheless, such an approach is considered more reliable and less arbitrary than simple sensitivity analysis.
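In practice the distribution parameters are usually recovered from a reported mean and standard deviation. A method-of-moments sketch (standard formulas; the numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def beta_params(mean, sd):
    # Method of moments for a probability bounded on (0, 1).
    common = mean * (1 - mean) / sd**2 - 1
    return mean * common, (1 - mean) * common       # alpha, beta

def gamma_params(mean, sd):
    # Method of moments for a right-skewed, non-negative cost.
    return (mean / sd) ** 2, sd**2 / mean           # shape, scale

a, b = beta_params(0.30, 0.05)                      # e.g., a response probability
k, theta = gamma_params(10_000, 2_000)              # e.g., a treatment cost

p_draws = rng.beta(a, b, size=5_000)
c_draws = rng.gamma(k, theta, size=5_000)
print(f"beta:  mean = {p_draws.mean():.3f}, sd = {p_draws.std():.3f}")
print(f"gamma: mean = {c_draws.mean():,.0f}, sd = {c_draws.std():,.0f}")
```

The printed moments should closely reproduce the inputs, confirming that the fitted distributions respect both the location and the spread of the reported data.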

URL: https://www.sciencedirect.com/science/article/pii/B9780128014974000059

Markov Chain Monte Carlo

A.M. Johansen, in International Encyclopedia of Education (Third Edition), 2010

Sensitivity Analysis in Hierarchical Models

Sensitivity analysis is an assessment of the sensitivity of a mathematical model to its modeling assumptions. In statistics, it is often used to determine how sensitive inferences made using a particular model are to the parameters of that model. This is of great importance in real inferential settings, but can be difficult when dealing with complex models.

In addition to inferential tasks, it is possible to use MCMC output to perform sensitivity analysis. A recent application in the field of educational statistics (Seltzer et al., 2002), for example, considered two-level hierarchical models in which the first level corresponded to individual effects and the second to site effects. Particular examples included the efficacy of remedial reading intervention. Using MCMC rather than traditional techniques made it straightforward to employ non-normal distributions in order to ameliorate the effect of outlying observations.

URL: https://www.sciencedirect.com/science/article/pii/B9780080448947013476

Population Viability Analysis

Hugh P. Possingham, ... David B. Lindenmayer, in Encyclopedia of Biodiversity (Second Edition), 2013

Sensitivity Analysis of Population Viability Analysis Models

Sensitivity analysis is an important component of modeling because one can use it to systematically investigate the complex interactions of a model. Sensitivity is usually measured by varying a parameter by a small amount from its estimated value. The resulting change in the state variable (e.g., the risk of extinction) provides an index of the sensitivity of the model to that parameter. Sensitivity analysis provides practical information for model builders and users by highlighting parameters that have the greatest influence on the results of the model. It can highlight model parameters that should be most accurately measured so as to maximize the precision of the model, give a general indication of the reliability of the model predictions, and highlight parameters and interactions that have the largest influence on the population to help determine effective management strategies.

The sensitivity of deterministic matrix population models can be determined analytically by eigen-analysis. These techniques do not extend readily to PVA models other than matrix models, for at least three reasons: (1) PVA models are often complex, so obtaining solutions analytically is difficult if not impossible; (2) in PVA, the result of interest is the risk of population decline, not the deterministic growth rate; and (3) interactions between variables are largely ignored.

Any method of sensitivity analysis should be clearly defined, interactions between parameters should be distinguishable from single-parameter effects, and the method should account for the variability associated with parameter estimates. The simplest approach to sensitivity analysis of PVA models is to vary the model parameters in turn and investigate the effect on the risk of population decline. Alternatively, such a sensitivity analysis can determine whether the relative efficacy of different management decisions changes with different parameter values. PVA models may be complex with numerous parameters, particularly when individuals are modeled, and understanding the relative importance of different parameters and of interactions between parameters may be difficult. For example, if there are 10 parameters and three levels of each parameter are to be investigated, then 21 different parameter combinations are needed to assess each parameter independently, but 3¹⁰ (= 59,049) combinations are required to test all possible interactions.

When the risk of population decline is the important state variable, logistic regression may be useful for summarizing the effects of different parameters and interactions. Data for the regression are generated by using numerous parameter combinations. For each parameter combination, the PVA model is simulated to obtain a limited number of predictions of the incidence of decline. The regression analysis uses the model parameters as explanatory variables and the incidence of decline as the dependent variable. The regression equation provides a simple expression to approximate how the probability of decline is influenced by the model parameters.
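A toy end-to-end version of this regression approach (the population model, parameter ranges, and decline criterion are all hypothetical):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Toy stochastic population model: annual survival s and fecundity f;
# "decline" means ending below half the starting size after 20 years.
def declined(s, f, n0=100, years=20):
    n = n0
    for _ in range(years):
        n = rng.binomial(n, s) + rng.poisson(0.05 * f * n)
    return n < n0 / 2

# Generate data: many parameter combinations across plausible ranges,
# one simulated decline outcome (0/1) per combination.
S = rng.uniform(0.70, 0.95, 2_000)
F = rng.uniform(1.0, 3.0, 2_000)
y = np.array([declined(s, f) for s, f in zip(S, F)])

# Logistic regression with the parameters as explanatory variables gives a
# simple expression approximating how each one shifts the decline risk.
fit = LogisticRegression().fit(np.column_stack([S, F]), y)
print("coefficients (survival, fecundity):", fit.coef_[0])
print("intercept:", fit.intercept_[0])
```

An interaction could be tested by adding a product column S*F as a third explanatory variable, which keeps the number of simulations far below the full factorial the excerpt warns about.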

URL: https://www.sciencedirect.com/science/article/pii/B9780123847195001738

Quality risk management for pharmaceutical manufacturing

T. O'Connor, ... S. Lee, in Predictive Modeling of Pharmaceutical Unit Operations, 2017

2.2.2.1 Sensitivity analysis: a risk assessment tool

Sensitivity analysis is a tool for performing quantitative risk assessments that evaluates the relationships between process parameters, material attributes, and product quality attributes. Parametric sensitivities $S_{i,j}$, normalized with respect to the average value of the output variable $\bar{y}_i$ (i.e., quality attributes) and of the input variable $\bar{p}_j$ (i.e., process parameters and material attributes), can be defined as in the following equation (Saltelli et al., 2000):

$$S_{i,j} = \frac{\partial y_i}{\partial p_j} \cdot \frac{\bar{p}_j}{\bar{y}_i}$$

Normalization enables relative comparisons between similarly sized variations in the input parameters. As an example, the sensitivity of the active pharmaceutical ingredient (API) concentration in the tablet to a 10% change in API particle size can then be directly compared with its sensitivity to a 10% change in blender mixing speed. Parametric sensitivities can be determined from process model simulations or from design-of-experiments studies. Predictive process models can facilitate examining the impact of a greater number of parameters over a wider range of conditions than may be experimentally feasible, thus enhancing process knowledge.
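A sketch of that comparison via finite differences; the process model, parameter values, and functional form are purely hypothetical stand-ins for a real simulation:

```python
import numpy as np

# Hypothetical process model: tablet API concentration as a function of
# API particle size d50 (um) and blender speed (rpm). Illustrative only.
def api_conc(d50, rpm):
    return 0.25 * (1 - 0.002 * (d50 - 50)) * (1 - 20.0 / rpm)

nominal = {"d50": 50.0, "rpm": 200.0}
y0 = api_conc(**nominal)

# Normalized sensitivity S = (dy/dp) * (p_bar / y_bar), with the derivative
# estimated by a central finite difference at +/-10% of each input.
for name, p in nominal.items():
    hi = api_conc(**{**nominal, name: 1.1 * p})
    lo = api_conc(**{**nominal, name: 0.9 * p})
    dydp = (hi - lo) / (0.2 * p)
    print(f"{name}: S = {dydp * p / y0:+.3f}")
```

Because both sensitivities are dimensionless, the particle-size and mixing-speed effects can be ranked against each other directly, which is exactly what the normalization is for.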

Sensitivity analysis can also provide beneficial feedback for the development of the process model by identifying uncertain model parameters that have a significant impact on the prediction of quality attributes. In the application of sensitivity analysis, model parameters can be treated as model inputs, along with process parameters and material attributes. Model parameters may be fitted to noisy experimental data, and uncertainty in these fitted parameters can contribute to uncertainty in the model outputs. The analysis aids the assessment of whether the parameter estimates are sufficiently precise to produce reliable predictions of the critical product quality attributes. If not, further work can be directed to refining the estimation of the parameters that give rise to the greatest uncertainty in the model outputs (Saltelli et al., 2000). To illustrate this concept, sensitivity analysis was applied to a model of a continuous direct compression process. The population balance parameters for the blending model (e.g., axial, radial, and backward fluxes) were found to be insignificant in this case study; however, this finding does not mean that these parameters are unimportant. It simply denotes that, for the chosen model outputs, which were related to bulk output-stream properties, variations in these model parameters are not significant compared with the 11 process-parameter and material-attribute model inputs identified (e.g., API bulk density, API mean particle size, mixer rpm (revolutions per minute), and feed frame rotation rate) (Boukouvala et al., 2012).

In addition to identifying critical model inputs, the output of a sensitivity analysis can facilitate the design of an active process control system for risk mitigation. The analysis identifies the relationship between variables guiding the selection of pairs of manipulated variables and controlled variables for automated control loops that form the foundation of an active control system. The active control system is then designed to adjust the manipulated variables upon the detection of a process variation to maintain the controlled variable at the desired targets. In this manner, disturbances are rejected and risks to product quality are mitigated (Singh et al., 2013).

Different sensitivity analysis methods can be applied depending on the situation; they can be broadly categorized as either local or global methods. Local sensitivity analysis focuses on the local impact of factors on the model (Saltelli et al., 2000) and can be considered a particular case of the one-factor-at-a-time approach, because all other factors are held constant while one is varied. Derivative-based approaches are the most common local sensitivity analysis methods; to compute the derivative numerically, the model inputs are varied within a small range around a nominal value. But when it is important to explore a wider span of the input parameter space, or when model inputs have combined effects that cannot be reduced to the sum of the individual responses (precluding a linear description), a global sensitivity analysis approach should be utilized (Saltelli et al., 2000).

Global sensitivity analysis methods vary all model inputs simultaneously, and the parametric sensitivities are calculated over the entire range of each model input (Saltelli et al., 2000). Monte Carlo analysis is a common approach for global methods. It is based on performing multiple evaluations with randomly selected values of the model inputs, and then using the results of these simulations to (1) determine the uncertainty in the prediction of the model outputs and (2) assign to each model input its contribution to the variance of the model outputs (Saltelli et al., 2000). The general workflow of Monte Carlo approaches, sketched in code below, is as follows. (1) Select the range and distribution for each model input; the distributions are significant because they reflect the knowledge, or the lack thereof, with respect to the model and its parameterization. (2) Generate a sample from the ranges and distributions specified in the first step. Samples can be generated via random sampling, stratified sampling (e.g., Latin hypercube designs), or correlation sampling (e.g., Gaussian copula) procedures (Sampling Parameters for Sensitivity Analysis, 2015; Deodatis et al., 2013). (3) Evaluate the expected value and variance of the model outputs from the array of sample points. (4) Apportion the variation in the output to the variation in the different model inputs. Many techniques are available for this analysis, including visual methods (e.g., scatterplots and three-dimensional plots) and quantitative methods [e.g., linear regression, Sobol's method, and the Fourier amplitude sensitivity test (FAST)] (Saltelli et al., 2008; Sobol, 2001). Sobol's method and FAST can be used to compute higher-order sensitivity indices, based on interactions between model inputs, in addition to first-order indices (Saltelli et al., 2008; Sobol, 2001).
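A compact version of the four steps, using uniform sampling and the standard Saltelli first-order Sobol estimator (the three-input model is a hypothetical illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative model: the output depends on three inputs with unequal weight.
def model(x):
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

d, N = 3, 50_000

# Steps 1-2: ranges/distributions (uniform on [0, 1] here) and two
# independent sample matrices A and B.
A = rng.uniform(0, 1, (N, d))
B = rng.uniform(0, 1, (N, d))
yA, yB = model(A), model(B)

# Steps 3-4: estimate the output variance and apportion it per input with
# the Saltelli estimator S_i = mean(yB * (yAB_i - yA)) / Var(Y).
var_y = np.concatenate([yA, yB]).var()
for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]   # resample only input i
    S_i = np.mean(yB * (model(AB) - yA)) / var_y
    print(f"S_{i+1} ~ {S_i:.3f}")
```

Indices near zero flag inputs whose uncertainty can be ignored (or whose values can be fixed), while large indices point to where tighter parameter knowledge would most reduce output uncertainty.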

Global sensitivity analysis methods can also be used to calculate, in addition to steady-state values, time-dependent sensitivity indices that can provide additional insights. In a case study, Boukouvala et al. calculated the time profile of the model output after a simulated process perturbation (Boukouvala et al., 2012). Fig. 2.2 illustrates, for a continuous manufacturing process, the time-dependent sensitivities of tablet API concentration to the mean particle size of the excipient, the mean particle size of the API, the excipient bulk density, and the API bulk density. In this example, the sensitivity of tablet API concentration to the mean particle size of the API decreases significantly during the start-up of the process (i.e., before 300 s) prior to reaching a steady-state value, and the mean particle size of the excipient becomes the most influential parameter on tablet API concentration over this period. The case study illustrates that the parameters identified as having a significant impact on product quality attributes during dynamic operations (e.g., start-up, shut-down) may be different from those identified during steady-state operations.

Figure 2.2. Dynamic sensitivity analysis for tablet API concentration. STi represents the time-dependent sensitivity. As time goes on, the sensitivity index of tablet API concentration versus mean particle size of excipient increases and becomes dominant after roughly 300   s; the sensitivity indices of tablet API concentration versus mean particle size of API and bulk density of API decrease significantly; and the sensitivity index of tablet API concentration versus bulk density of excipient increases slightly.

Source: Reprinted from Boukouvala, F., Niotis, V., Ramachandran, R., Muzzio, F.J., Ierapetritou, M.G., 2012. An integrated approach for dynamic flowsheet modeling and sensitivity analysis of a continuous tablet manufacturing process. Comput. Chem. Eng. 42, 40, Copyright 2012, with permission from Elsevier.

URL: https://www.sciencedirect.com/science/article/pii/B9780081001547000028