United States Environmental Protection Agency EPA-540-R-05-012 Office of Solid Waste and Emergency Response OSWER ...
Source: NRC 2001

In Highlight 2-14 and in other modeling discussions, "two-phase partitioning" generally refers to modeling the contaminant in two parts or phases: a bioavailable dissolved fraction and a generally non-bioavailable particulate fraction. In "three-phase partitioning," contaminant concentrations are normally considered in three phases: the bioavailable dissolved phase, a generally non-bioavailable dissolved organic carbon (DOC) phase, and a generally non-bioavailable particulate organic carbon phase.
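As an illustrative sketch (not taken from this guidance), three-phase equilibrium partitioning is often expressed as a freely dissolved fraction computed from partition coefficients; the function name, parameter names, and numerical values below are all assumptions chosen for illustration, not site-specific or recommended values:

```python
def dissolved_fraction(koc, foc, tss, kdoc, doc):
    """Freely dissolved (bioavailable) fraction under an assumed
    three-phase equilibrium partitioning model.

    koc  : organic-carbon partition coefficient (L/kg)
    foc  : fraction organic carbon of suspended solids (unitless)
    tss  : suspended solids concentration (kg/L)
    kdoc : DOC partition coefficient (L/kg)
    doc  : dissolved organic carbon concentration (kg/L)
    """
    # Total = dissolved + particulate-bound + DOC-bound; the dissolved
    # share is 1 over (1 + the two bound-phase ratios).
    return 1.0 / (1.0 + koc * foc * tss + kdoc * doc)

# Illustrative values only (not site-specific):
f_d = dissolved_fraction(koc=1e5, foc=0.02, tss=1e-5, kdoc=1e4, doc=5e-6)
```

Setting `doc` to zero collapses the sketch to the two-phase case, which shows why a two-phase model predicts a larger bioavailable fraction than a three-phase model for the same total concentration.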
If it is determined that there are no existing models capable of simulating, at a minimum, the most significant (i.e., first-order) processes and interactions, then project managers may need to rely on other tools or methods for evaluating proposed approaches, or develop and test new models or modules.
Examples of processes that cannot be dynamically simulated, even using state-of-the-art sediment transport models, may include geomorphological processes such as the development of meanders in streams and rivers, bank cutting/erosion, nepheloid layer sediment transport, and mud wave phenomena.
However, there are empirical methods for simulating some of these processes, including estimating the total quantity of sediment introduced to a water body due to the failure of a river/stream bank. Likewise, there are empirical tools to estimate the importance of nepheloid layer transport (i.e., relatively high sediment flux occurring immediately above the sediment-water interface). Empirical tools are also being developed to simulate mud wave transport processes resulting from sediment disturbances such as dredging and resultant dispersal of contaminated sediment residuals.
Step 3: Select an Appropriate Model
If one or more models or types of mathematical models capable of simulating the controlling transport and fate processes and interactions exist, then project managers should use the process described above to choose the appropriate type of model (i.e., level of analysis). If the decision is made to apply a numerical model at a sediment site, selection of the most appropriate contaminated sediment transport and fate model to use at a specific site is one of the critical steps in a modeling program. During this process, familiarity with existing sediment transport models is essential. Comprehensive technical reviews of available models have been conducted by the EPA’s ORD National Exposure Research Laboratory (see U.S. EPA in preparation 1 and 2).
2.9.4 Model Verification, Calibration, and Validation
Where numerical models are used, verification, calibration, and validation typically should be performed to yield a scientifically defensible modeling study. The project manager should be aware that the terms “verification” and “validation” are frequently used interchangeably in modeling literature.
These terms, for purposes of this guidance, mean:
Model verification: Evaluating the model theory, the consistency of the computer code with that theory, and the integrity of the code's calculations. This should be an ongoing process, especially for newer models. Model verification should be documented, or, if the model or model component is new, it should be peer-reviewed by an independent party.
Model calibration: Using site-specific information from a historical period of time to adjust model parameters in the governing equations (e.g., bottom friction coefficient in hydrodynamic models) to obtain an optimal agreement between a measured data set and model calculations for the simulated state variables.
Model validation: Demonstrating that the calibrated model accurately reproduces known conditions over a period of time different from that used for calibration, with the physical parameters and forcing functions changed to reflect the conditions during the new simulation period. The parameters adjusted during the calibration process should not be adjusted during validation. Model simulations during validation should be compared to the measured data set. If an acceptable level of agreement is achieved between the data and the model simulations, then the model can be considered validated as an effective tool, at least for the range of conditions defined by the calibration and validation data sets. If an acceptable level of agreement is not achieved, then further analysis should be carried out to determine possible reasons for the differences between the model simulations and the measured data during the validation period. Such analysis sometimes leads to refinement of the model (e.g., using a finer model grid) or to the addition of one or more physical/chemical processes not previously represented in the model.
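The calibrate-then-validate discipline described above can be sketched with a deliberately simple example. The one-parameter "model," the synthetic measurements, and all numbers below are illustrative assumptions, not a real sediment transport model:

```python
# Minimal sketch of calibration vs. validation: fit a parameter to one
# period of synthetic "measured" data, then hold it fixed against a
# second, different period. All values are illustrative assumptions.
import math

def model(times, settling_rate):
    # Toy first-order decline of a suspended concentration.
    return [100.0 * math.exp(-settling_rate * t) for t in times]

def rmse(obs, sim):
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

# Calibration period: adjust the parameter for optimal agreement
# between the measured data set and the model calculations.
cal_times = [0, 1, 2, 3, 4]
cal_obs = [100.0, 81.0, 66.0, 54.0, 44.0]   # synthetic measurements
best_rate = min((r / 1000.0 for r in range(1, 501)),
                key=lambda r: rmse(cal_obs, model(cal_times, r)))

# Validation period: the calibrated parameter is held fixed; only the
# simulation period (the "forcing") changes.
val_times = [5, 6, 7, 8]
val_obs = [36.5, 30.0, 24.5, 20.0]
val_error = rmse(val_obs, model(val_times, best_rate))
```

If `val_error` were unacceptably large, the sketch's analogue of the guidance text would be to diagnose why, rather than to re-tune `best_rate` against the validation data.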
It is important that both calibration and validation be conducted at the space and time scales associated with the questions the model must answer. For example, if the model will be used to make decade-scale predictions, it should, when possible, be compared to decade-scale trend data. Even when data exist for a much shorter time period than will be used for prediction, the long-term behavior of the model should be examined as part of the calibration process. It is not unusual for a model to perform well for a short-term period but produce unreasonable results when run for a much longer duration.

The extent to which components of a modeling study are performed using verified models determines, to a large degree, the defensibility of the modeling project. If a verified model has not been sufficiently calibrated or validated for a specific site, then the modeling study may lack defensibility and be of little value. Where possible, project managers should use verified models in the public domain, calibrated and validated to site-specific conditions. Proprietary models may also be useful, but project managers should be aware that they contain code that has not been shared publicly and may not have been verified. The interpretation of modeling results, and the reliance placed on those results, should heavily consider the extent of documented model verification, calibration, and validation.
2.9.5 Sensitivity and Uncertainty of Models
Another important tool for understanding model results may be a sensitivity analysis. This process typically consists of varying each of the input parameters by a fixed percent (while holding the other parameters constant) to determine how the predictions vary. The resulting variations in the state variables are a measure of the sensitivity of the model predictions to the parameter whose value was varied. This can be very informative, especially in understanding how the various processes being modeled affect contaminant fate and transport and which are dominant. This analysis is frequently used to identify the model parameters having the most impact on model results, so that the project team can ensure these parameters are well constrained by site data.
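The one-at-a-time procedure described above can be sketched directly. The stand-in model, the parameter names, and the +10% perturbation size are illustrative assumptions:

```python
# One-at-a-time sensitivity sketch: vary each input parameter by a fixed
# percentage while holding the others at base values, and record the
# relative change in the model output. Toy model; all names and values
# are illustrative assumptions.
def toy_output(params):
    # Stand-in for a fate-and-transport model's simulated state variable.
    return params["load"] * params["decay"] / params["depth"]

base = {"load": 10.0, "decay": 0.5, "depth": 2.0}
base_out = toy_output(base)

sensitivity = {}
for name in base:
    perturbed = dict(base)           # copy so other parameters stay fixed
    perturbed[name] *= 1.10          # +10% perturbation of one parameter
    sensitivity[name] = (toy_output(perturbed) - base_out) / base_out
```

Ranking the entries of `sensitivity` by absolute value identifies which parameters most influence the output, and hence which should be best constrained by site data.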
Uncertainty in models usually results from the following three principal sources:
• The necessity for models to use equations that are simplifications and approximations of complex processes, which can result in uncertainty in just how well the equations represent the actual processes;
• The uncertain accuracy of the values used to parameterize the equations (i.e., uncertainty about how well the input data represent actual conditions); and
Typically, uncertainty analyses focus on only the second source, the accuracy of the input values for the model. While quantitative uncertainty analyses are possible and practical to perform with watershed loading models and food chain/web models, they generally are not (at the current time) for fate and transport models. A quantitative assessment of the uncertainty of fate and transport model predictions, if it could be provided, would greatly increase the value of those predictions. Lacking a quantitative uncertainty analysis, one method modeling teams might consider for assessing uncertainty is to use bounding calculations to produce a conservative model outcome for comparison to the model's best-estimate outcome.
This conservative model outcome may be developed by using parameter values that result in a conservative outcome but do not result in significantly degraded model performance, as measured by comparison to the calibration and validation data sets. A second method to assess uncertainty involves quantification of “model error” by comparison of results to the calibration and validation data and application of that error to model predictions, as described in Connolly and Tonelli (1985).
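Both uncertainty assessments mentioned above can be sketched with a toy model. The exponential-decline "model," the parameter values, and the residuals are illustrative assumptions, and the error band is a simplified stand-in for, not a reproduction of, the Connolly and Tonelli (1985) method:

```python
# Sketch of two uncertainty assessments: (1) a conservative bounding run
# compared against the best-estimate run, and (2) a "model error" band
# derived from calibration residuals and applied to a prediction.
# Toy model; all values are illustrative assumptions.
import math

def predict(decay_rate, years):
    # Toy best-estimate model: exponential decline of a normalized
    # fish-tissue concentration over time.
    return 1.0 * math.exp(-decay_rate * years)

best_estimate = predict(0.10, 20)    # best-estimate parameter set
conservative = predict(0.05, 20)     # slower assumed recovery: bounding case

# Residuals (observed minus simulated) from the calibration/validation
# comparison, summarized as an RMS "model error" and applied as a band
# around the prediction (simplified illustration only).
residuals = [0.02, -0.01, 0.03, -0.02]
rms_error = math.sqrt(sum(r * r for r in residuals) / len(residuals))
prediction_band = (best_estimate - rms_error, best_estimate + rms_error)
```

In this sketch the bounding run predicts a higher residual concentration than the best estimate, which is the sense in which it is "conservative"; the band conveys how far the data suggest the best estimate could be off.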
2.9.6 Peer Review
Under EPA policy, peer review of numerical models is often appropriate to ensure that a model provides decision makers with useful and relevant information. Project managers should use EPA's Guidance for Conducting External Peer Review of Environmental Regulatory Models (U.S. EPA 1994c) and the Peer Review Handbook (U.S. EPA 2000e) to determine whether a peer review of a model is appropriate and, if so, what type of peer review should be used. As a rule of thumb, a peer review should be performed when a model is being used outside the niche for which it was developed, is being applied for the first time, or is a critical component of a very costly decision. In addition, project managers should refer to OSWER Directive 9285.6-08, Principles for Managing Contaminated Sediments at Hazardous Waste Sites, Principle 6 (U.S. EPA 2002a; see Appendix A).
EPA peer review guidance for models (U.S. EPA 1994c) also notes that environmental models that may form part of the scientific basis for regulatory decision making at EPA are subject to the peer review policy. However, it should be strongly stressed that peer review is intended only for judging the scientific credibility of the model, including the applicability, uncertainty, and utility (including the potential for misuse) of its results, and not for directly advising the Agency on specific regulatory decisions stemming in part from consideration of model output. Peer reviewers advise the Agency regarding proper use and interpretation of a model; it is then the Agency's task to apply that advice properly to regulatory decisions.
Highlight 2-15 summarizes some important points to remember about modeling at sediment sites.
Highlight 2-15: Important Principles to Consider in Developing and Using Models at Sediment Sites

1. Consider site complexity before deciding whether and how to apply a mathematical model. Site complexity and controversy, available resources, project schedule, and the acceptable level of uncertainty in model predictions are generally the critical factors in determining the applicability and complexity of a mathematical model. Potential remedy cost and magnitude of risk are generally less important, but they can significantly affect the level of uncertainty that is acceptable.
3. Determine what model output data are needed to facilitate decision making. As part of problem formulation, the project manager should consider the following: 1) what site-specific information is needed to make the most appropriate remedy decision (e.g., degree of risk reduction that can be achieved, correlation between sediment cleanup levels and protective fish tissue levels, time to achieve risk reduction levels, degree of short-term risk); 2) what model(s) are capable of generating this information;
and 3) how the model results can be used to help make these decisions. Site-specific data collection should concentrate on input parameters that will have the most influence on model outcome.
4. Understand and explain model uncertainty. The model assumptions, limitations, and the results of the sensitivity and uncertainty analyses should be clearly presented to decision makers and should be clearly explained in decision documents such as proposed plans and RODs.
6. Consider modeling results in conjunction with empirical data to inform site decision making.
Mathematical models are useful tools that, in conjunction with site environmental measurements, can be used to characterize current site conditions, predict future conditions and risks, and evaluate the effectiveness of remedial alternatives in reducing risk. Modeling results should generally not be relied upon exclusively as the basis for cleanup decisions.
7. Learn from modeling efforts. If post-remedy monitoring data demonstrate that the remedy is not performing as expected (e.g., fish tissue levels are much higher than predicted), consider sharing these data with the modeling team to allow them to perform a post-remedy validation of the model. This could provide a basis for model enhancements that would improve future model performance at other sites. If needed, this information could also be used to re-estimate the time frame when RAOs are expected to be met at the site.
3.0 FEASIBILITY STUDY CONSIDERATIONS

Generally, the purpose of a feasibility study for a contaminated sediment site is to develop and evaluate a number of alternative methods for achieving the remedial action objectives (RAOs) for the site.
This process lays the groundwork for proposing and selecting a remedy for the site that best eliminates, reduces, or controls risks to human health and the environment. The feasibility study process is described in the U.S. Environmental Protection Agency’s (EPA’s) Guidance for Conducting Remedial Investigations and Feasibility Studies under CERCLA (U.S. EPA 1988a, also referred to as the “RI/FS Guidance”). The proposed plan and record of decision (ROD) process is described in the EPA’s Guide to Preparing Superfund Proposed Plans, Records of Decision, and other Remedy Selection Decision Documents (U.S. EPA 1999a, also referred to as the “ROD Guidance”). This chapter is intended to supplement existing guidance by offering sediment-specific guidance about developing alternatives, considering the National Oil and Hazardous Substances Pollution Contingency Plan (NCP) criteria, identifying applicable or relevant and appropriate requirements (ARARs), estimating cost, and implementing institutional controls. Chapters 4, 5, and 6 present more detailed guidance on evaluating alternatives based on the three major approaches for sediment: monitored natural recovery (MNR), in-situ capping, and dredging (or excavation) with treatment or disposal.
Although this chapter focuses on remedial alternatives for managing contaminated sediment, project managers beginning this stage of site management should keep in mind that the first step at almost every sediment site should be to implement measures to control any significant ongoing sources and to evaluate the effectiveness of those controls. Until this is done, appropriately evaluating alternatives for sediment may be difficult. However, it may be appropriate to evaluate implementation of interim sediment cleanup measures prior to completing source control, in order to prevent further dispersal of contaminated sediment from hot spots or to reduce risks to human health and the environment due to sediment contamination.
In addition, project managers should keep in mind that flexibility is frequently important in the feasibility study process at sediment sites. Iterative or adaptive approaches to site management are likely to be appropriate at these sites. Also, project managers should consider pilot testing various approaches as part of the feasibility study process. Phasing, adaptive management, and early actions are described further in Chapter 2, Section 2.7, Phased Approaches, Adaptive Management, and Early Actions.
3.1 DEVELOPING REMEDIAL ALTERNATIVES FOR SEDIMENT
As described in Chapter 1, Section 1.3.1, Remedial Approaches, there are typically three major approaches that can be taken to reduce risk from contaminated sediment when source control measures are insufficient to reduce risks: MNR, in-situ capping, and sediment removal by dredging or excavation.