Columbia Basin Research hosts this web page for archival purposes only. This project/page/resource is no longer active.

Notes from DGAS Modeling Team Meeting, January 9, 1997

Attendees:
Tom Carlson, COE-WES
Chris Pinney, COE-NPW
Kim Fodrea, COE-NPP
Marshall Richmond, PNNL
Lance Vail, PNNL
Larry Fidler, Aspen
Dennis Rondorf, USGS-BRD
Dan Fiel, USGS-BRD
Pam Shaw, UW-CBR
Jim Anderson, UW-CBR
Rich Zabel (late afternoon), UW-CBR

1. Field Sampling. A meeting has been tentatively scheduled for 21 Jan 97 at The Dalles to discuss data collection needs and FY97 priorities. Field data need to be collected at a resolution high enough to develop the full 2-D model systemwide; the 2-D results are then aggregated to compute cell/streamtube delineations and mixing zone coefficients. Velocity measurements are also critical. Joe Carroll will coordinate and advertise the meeting. Joe and his group do not currently plan to collect data in McNary pool during 1997, but they will support the biological researchers by collecting TDG data associated with their sampling locations.

2. Gas Modeling. There was a great deal of discussion and debate on how gas transport and mixing should be modeled. The debate centered on whether the PNNL base flow/gas transport model (the "platform," in WES terminology) that feeds the Fidler and CRiSP evaluation models should be 1-D (which Marshall has dubbed GAS-TRANS), 2-D (x and y, longitudinal and lateral), or quasi-2-D (overlaying 1-D descriptions, or adding a modifying function for depth (z) or lateral position (y) to a longitudinal 1-D description), and on how lateral gas gradients should be represented. High-resolution 2-D modeling (resolution defined by grid size) is important at the systemwide scale as an aggregation tool: it provides an empirically based data set that can be reduced to the descriptive metrics of lower-resolution models. The 2-D resolution is essential for defining mixing coefficients, whereas 1-D is adequate for representing generalized unsteady flow that can be assumed fully mixed. Each 2-D cell carries its own set of modifying parameters that can cover transition zones of mixing, dispersion, etc. The three evaluation models were each using a different approach at different resolutions of data reduction (Fig. 1). Tom brought the discussion back into focus by stating that the Corps wants and needs only one gas model of each river reach. The best way to model some reaches is clear, while for others it is not. McNary pool includes the confluence and must be modeled two-dimensionally because of its complexity. Mixing occurs immediately below The Dalles, so Bonneville pool likely needs only a one-dimensional model. The remaining pools likely need to be modeled quasi-two-dimensionally, meaning the reach is represented by two or more one-dimensional models running side by side or in overlap. CRiSP will be calibrated and tested against the 2-D model output rather than against raw data, and will contain a set of aggregated, 2-D-calibrated cells along with some 1-D streamtubes in which parameters are fully mixed. Marshall will draft a proposed strategy, including a schedule, for modeling gas transport and mixing throughout the river system, and he will also deliver a simplified rectangular model to Pam Shaw and Jim Anderson by the end of February to help them keep moving on modifications to CRiSP.

Figure 1. Gas concentration vs. distance across the river.
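
For illustration only, here is a minimal sketch (not DGAS code) of the aggregation step described in item 2: flow-weighted averaging of a lateral 2-D TDG profile into a small number of 1-D streamtubes. The cell TDG values, cell discharges, and number of streamtubes below are hypothetical.

    # Minimal sketch (illustrative only): flow-weighted aggregation of a
    # lateral 2-D TDG profile into a few 1-D "streamtubes". All numbers
    # below are made up for demonstration.

    def aggregate_streamtubes(tdg_percent, cell_flow_cfs, n_tubes):
        """Collapse lateral cells into n_tubes contiguous streamtubes and
        return the flow-weighted mean TDG (% saturation) of each."""
        n_cells = len(tdg_percent)
        per_tube = n_cells // n_tubes
        tubes = []
        for t in range(n_tubes):
            start = t * per_tube
            stop = n_cells if t == n_tubes - 1 else start + per_tube
            flow = sum(cell_flow_cfs[start:stop])
            weighted = sum(g * q for g, q in
                           zip(tdg_percent[start:stop], cell_flow_cfs[start:stop]))
            tubes.append(weighted / flow)
        return tubes

    # Hypothetical lateral profile: high TDG on the spill side of the channel,
    # lower TDG on the powerhouse side.
    tdg = [138, 136, 133, 128, 122, 117, 114, 112]                   # % saturation per cell
    flow = [9000, 9500, 10000, 11000, 12000, 12500, 13000, 13000]    # cfs per cell
    print(aggregate_streamtubes(tdg, flow, n_tubes=2))               # two fully mixed tubes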

3. Gas Production/Sourcing Estimates. There was some concern regarding the gas estimates provided by Steve Wilhelms. Steve's estimates apply just a few hundred feet downstream of the stilling basin, in the vicinity of the fixed monitor. The supersaturation in the stilling basin itself is significantly higher, at 150-170% TDG. Steve will also need to provide gas production equations (not a fully developed model) for the high supersaturation found in the stilling basin, because acute, rapid fish mortality may occur within a few minutes (rather than hours) of exposure at these levels, irrespective of previous complex or simple exposure. The highly turbulent, laterally transporting hydraulic conditions created in the near field could leave fish with little or no ability to actively or passively depth compensate. Physical modeling will be performed to help estimate retention times in the stilling basin. These assumptions can be addressed, and possibly partially validated, with data from the BPA-funded study by John Beeman (USGS-BRD), which will use pressure-sensitive radio telemetry to closely track the horizontal (longitudinal and lateral) and depth movements of individual spring smolts. This radio telemetry study should be ranked among the highest priorities for providing critical complex-exposure data for calibration. Chris will draft a summary of the significance and importance of estimating the high gas supersaturation in the stilling basin and get review comments from Fidler. It is important to address this issue right away because it will likely affect the near-field tests being performed in February.
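
As a rough illustration of the depth compensation question raised above (and not Steve's production equations), the sketch below screens a near-field exposure using the commonly cited approximation that each meter of depth offsets roughly 10% of saturation through hydrostatic pressure. The acute threshold and minimum exposure time used here are hypothetical placeholders.

    # Minimal sketch, not the WES gas production equations. Effective
    # supersaturation felt by a fish at depth, using the approximation of
    # roughly 10% of saturation compensated per meter of depth. The acute
    # threshold and minimum exposure time are hypothetical.

    DEPTH_COMPENSATION_PER_M = 10.0   # % saturation offset per meter (approximate)

    def effective_tdg(surface_tdg_percent, depth_m):
        """Depth-compensated TDG (% of local saturation) experienced by a fish."""
        return surface_tdg_percent - DEPTH_COMPENSATION_PER_M * depth_m

    def acute_risk(surface_tdg_percent, depth_m, retention_minutes,
                   acute_threshold=130.0, min_minutes=10.0):
        """Flag an acute near-field exposure: highly supersaturated water plus
        a fish that cannot, or does not, sound deep enough to compensate."""
        return (effective_tdg(surface_tdg_percent, depth_m) >= acute_threshold
                and retention_minutes >= min_minutes)

    # Stilling-basin example: 160% TDG, fish held near the surface for 15 minutes.
    print(effective_tdg(160, depth_m=1.0))                      # 150.0
    print(acute_risk(160, depth_m=1.0, retention_minutes=15))   # True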

Figure 2. General flow path from field research to numerical model.

4. Exposure. Figure 2 shows the general flow path from research to the numerical models. The CRiSP team has been tasked with a sensitivity analysis on the vertical and lateral distribution of fish within their Vitality Model. They will test the hypothesis that an assumed random distribution produces smolt mortality estimates statistically indistinguishable from those produced by a hypothesized distribution that incorporates depth compensation. Once this is determined, the validation data to be collected can be used to a more justifiable and meaningful degree. Fidler's model has incorporated both vertical and lateral distributions of smolts within each streamtube delineation, but does not yet incorporate any exchange mechanism between streamtubes.
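
A minimal sketch of the sensitivity question (not the CRiSP Vitality Model itself) is shown below: predicted mortality is compared under a uniform random depth distribution and under a hypothesized depth-compensating distribution. The dose-response curve, depth range, and both distributions are hypothetical.

    # Minimal sketch of the distribution sensitivity question; not the CRiSP
    # Vitality Model. The dose-response curve, depth range, and vertical
    # distributions are hypothetical.
    import random

    random.seed(1)
    SURFACE_TDG = 125.0   # % saturation, illustrative
    MAX_DEPTH = 6.0       # m, illustrative

    def mortality_prob(effective_tdg, hours=24.0):
        """Hypothetical dose-response: no effect below 110%, rising above it."""
        excess = max(0.0, effective_tdg - 110.0)
        return min(1.0, 0.01 * excess * hours / 24.0)

    def expected_mortality(sample_depth, n=10000):
        total = 0.0
        for _ in range(n):
            depth = sample_depth()
            effective = SURFACE_TDG - 10.0 * depth   # ~10% compensation per meter
            total += mortality_prob(effective)
        return total / n

    def random_depth():
        return random.uniform(0.0, MAX_DEPTH)

    def compensating_depth():
        # Fish biased toward deeper, compensating depths.
        return random.triangular(0.0, MAX_DEPTH, MAX_DEPTH)

    print("random depths:      ", round(expected_mortality(random_depth), 3))
    print("depth compensating: ", round(expected_mortality(compensating_depth), 3))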

5. Fidler's Model. The discussion returned to the fact that we have more than one biological model. The issue is whether to proceed with extending Fidler's model to the entire hydrosystem and, if we do, how far to take it, given its specific original objective of being a more "real-time" tool providing more resolute information on GBT risk and TDGS mortality to smolts at the population scale. The Team also wanted to see a second, independent analytical approach demonstrated for predicting smolt exposure risk to TDGS in isolation from other passage stressors, with analytical functions empirically based on the full spectrum of TDGS/GBT research inside and outside the Columbia Basin. The issue was not going to be fully resolved at this meeting. The primary reasons for Fidler's model were to meet Bolyvong's needs for rapid assessment in support of regional coordination and to model the reach containing the confluence at a higher level of detail. One of Bolyvong's needs was the ability to run a simplified risk-assessment-type model on a PC rather than a CRAY. The question of credibility and believability needs to be considered in deciding whether to continue Fidler's model. While we may all believe that Fidler's model is correct, the region's unfamiliarity with it will likely invite heavy criticism of yet another model and political pressure for a PATH-type level of review and testing, even though Larry has made significant efforts to educate the region on the technical and physiological mechanisms of TDGS/GBT and on the resolution and confidence required to adequately evaluate and rank structural modification alternatives. All of the NMFS Gas Bubble Expert Panels on which Larry has served have ranked this type of exercise at one of the highest levels, whether implemented as an independent model or incorporated into CRiSP. Some members of the Core Team see benefit in a Corps-developed and Corps-owned independent model, and some managers perceived the Fidler model as serving this purpose, with or without fish-parameterized overlays; regional coordination politics, however, strongly argue against this being salable, no matter how accurate and resolute the model may prove to be. CRiSP, on the other hand, is already somewhat accepted within the region after nine years of development and critical review, including PATH and outside peer review. CRiSP is also higher resolution than it used to be and can be made more detailed if necessary, with options such as individual spill gate and turbine controls, and it will be available on a PC within the next few months. At this time, the Core Team recommends that Dr. Fidler concentrate his analysis and modeling activities on best defining the mortality mechanism of dose-response to TDGS through his cannulation experiments and deep-tank testing. His current model for the Ice Harbor to Columbia River confluence reaches is at a level that can be tested and used to direct, refine, calibrate, and test the CRiSP TDGS/GBT module and Vitality Submodel. The exposure regime characteristics and subsequent dose-response effects of the CRiSP Vitality Model will be tested against the dynamic bubble growth/collapse cannulation study to try to achieve convergence in estimates of mortality probabilities; CRiSP will code both the Vitality and dynamic bubble hypotheses so this convergence can be tested. Tom Carlson will follow through on this issue to help find a resolution.

6. How to Define Travel of the Fish. We need to define the travel of the fish in order to estimate exposure by laying the travel information over the gas model. Again, Beeman's pressure-sensitive radio telemetry study would provide critical descriptive data for calibrating three-dimensional (x, y, and z), species-specific, individually based exposure regimes over tightly constrained, continuous time steps. To extrapolate hydroacoustic data collected at the population level to individual fish exposure histories, the team proposes to log the tracks of "smart" fish through the models, along with their TDG exposure tracks, using coefficients and parameters from the analysis in Rich Zabel's Ph.D. dissertation. Rich's analysis supports some logical assumptions for characterizing the hydroacoustic data at the species level. Chris Pinney will also investigate the possibility of continuing to track the fish from the 1997 NMFS radio-telemetry spill effectiveness study at Ice Harbor Dam through the McNary pool confluence, to help distinguish distributional spread and travel time through each grid/streamtube array.
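
To illustrate the overlay idea (a sketch only, not the planned implementation), the example below accumulates a time-weighted TDG exposure history for one hypothetical smolt track through a gridded gas field. The grid values, track, and time step are made up, and the depth compensation rule is the same rough approximation used earlier.

    # Minimal sketch of "laying fish travel over the gas model": accumulate a
    # time-weighted TDG exposure history along one hypothetical smolt track
    # through a gridded gas field. All values are made up.

    def exposure_history(track, gas_field, dt_hours=1.0):
        """track: list of (cell_x, cell_y, depth_m), one entry per time step.
        gas_field: dict mapping (cell_x, cell_y) -> TDG (% saturation).
        Returns cumulative dose in (% saturation above 100) * hours."""
        dose = 0.0
        for x, y, depth in track:
            effective = gas_field[(x, y)] - 10.0 * depth   # ~10%/m compensation
            dose += max(0.0, effective - 100.0) * dt_hours
            # A full model would also track uptake/off-gassing kinetics and
            # bubble growth here rather than a simple running dose.
        return dose

    # Hypothetical three-cell reach with TDG decaying downstream of a spillway.
    gas_field = {(0, 0): 135.0, (1, 0): 128.0, (2, 0): 120.0}
    track = [(0, 0, 0.5), (0, 0, 1.0), (1, 0, 2.0), (2, 0, 1.5)]
    print(exposure_history(track, gas_field))   # 68.0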

Figure 3.

Establish fixed-location hydroacoustic transects for a population-level tracking estimate of complex exposure, and back up and supplement these transects with mobile transecting (ROV, etc.). The horizontal and vertical distribution of fish passing the transects will be monitored continuously (Fig. 3). This will indicate the distribution and timing of fish traveling downstream; we will be able to tell how fast the fish are traveling relative to the river velocity and where they tend to cross each transect. The instruments will see approximately 10% of the total river cross section.

Dennis Rondorf plans to sample at constant cross sections rather than random locations, i.e., a modified stratified random sampling design. Some biologists traditionally favor random sampling for statistical power, but logistically one must cover enough area while maintaining adequate sample sizes within the time available at each sample area across a large enough set of flow conditions. Random sampling may even compromise Type I and Type II error to a greater degree because of the area coverage required. Hence the standard operating strategy to date has been to gather the most detailed descriptive and dynamic information possible for the most complex reaches, understand each such reach to the best of our ability, and then move to the next "hot" reach; 3-D data can be reduced to 2-D descriptions and 2-D data to 1-D descriptions, but the reverse is not logically or scientifically sound. Dennis asked for concurrence on his decision to sample constant locations and asked for guidance on where to collect data. The McNary pool / confluence reach is complex, but it is also being heavily studied, which makes it a desirable reach for this research. The roughly 10-mile reach from Ice Harbor downstream to just above the confluence might also work well. There was some discussion as to whether near-shore studies might also be important, but near-surface studies across the full river width seem more important. It was suggested that some random sampling also be performed to obtain information near the thalweg.

Backman and others, meanwhile, will also be sampling fish for signs of GBD. Based on what we know, these researchers should see progressively worse signs of exposure as they move downstream. If not, that tells us one of three things: the progression is quicker than we think, the fish are already dead, or we are wrong.
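
As a simple illustration of the expansion from transect counts to a population-level estimate (hypothetical numbers, not a proposed sampling design), the sketch below scales observed counts in each depth bin by the roughly 10% of the cross section the instruments actually see, assuming fish in unsampled water behave like fish in sampled water.

    # Minimal sketch (hypothetical numbers): expand hydroacoustic counts from a
    # fixed transect, which samples roughly 10% of the cross section, to a
    # whole-cross-section estimate of passing smolts by depth bin.

    SAMPLED_FRACTION = 0.10   # approximate beam coverage of the cross section

    def expand_counts(counts_by_depth_bin, sampled_fraction=SAMPLED_FRACTION):
        """Scale observed counts per depth bin to the full cross section,
        assuming unsampled water carries fish at the same density."""
        expanded = {bin_: count / sampled_fraction
                    for bin_, count in counts_by_depth_bin.items()}
        return expanded, sum(expanded.values())

    # Hypothetical hourly counts at one transect, binned by depth.
    observed = {"0-2 m": 120, "2-4 m": 60, "4-6 m": 20}
    by_bin, total_passage = expand_counts(observed)
    print(by_bin)          # {'0-2 m': 1200.0, '2-4 m': 600.0, '4-6 m': 200.0}
    print(total_passage)   # 2000.0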

7. Peer Review. Peer review is desirable for all models or submodules that have been developed, not only for regional acceptability but, more importantly, for competency: detecting any illogical, unsupported, or untestable relationships, or any oversights of important TDG production and transport/GBT and mortality relationships found in recent out-of-basin studies. The gas production and mixing coefficients and parameters are based on highly reviewed, well established, published mathematics. The newer refinements based on the near-field studies should be peer reviewed. WES and Battelle-PNNL personnel will supply a list of reviewers. CRiSP performance outside of the DGAS development continues to be peer reviewed and regionally reviewed through the ANCOOR group's PATH hypothesis testing process directed by NMFS and BPA. Following review of Fidler's exposure and bubble growth derivations that will be incorporated into CRiSP, the DGAS-specific CRiSP can be peer reviewed by TDG/GBT-mortality experts designated by the expert participants of past NMFS Gas Bubble Expert Panels, with coordination by PATH participants. Dr. Fidler's work is generally based on well established GBT-mortality study results and has been carried out in direct response to recommendations and informal but critical technical review by expert participants of the NMFS Gas Bubble Expert Panels.