Project On-line Deliverables

Final Project Report

Programme name: ESPRIT
Domain: HPCN
Project acronym: HITERM
Contract number: 22723
Project title: High-Performance Computing and Networking
for Technological and Environmental Risk Management
Project Deliverable: Final Report
Related Work Package:   WP 0
Type of Deliverable: Technical Report
Dissemination level: public
Document Author: Kurt Fedra, ESS
Edited by: Lothar Winkelbauer, ESS
Document Version: 1.1
First Availability: 2000 04 01
Last Modification: 2000 05 31

Please note: this document was designed and formatted as an on-line document in HTML; it is designed to be browsed on-line and is not optimised for a printed hardcopy version!



EXECUTIVE SUMMARY

Within the framework of HPCN Information Management and Decision Support, HITERM was designed to expand the application of HPCN to decision support in new domains: its central focus is the interface between technological risk management and the environment.

Using distributed parallel computing, the project has aimed at reaching better-than-real-time performance for the simulation of accidental releases of hazardous substances into the atmosphere, ground and surface water, using state-of-the-art 3D simulation models.

The main Project Objectives of HITERM were to design and develop:

  • HPCN methods and tools for time-critical environmental applications related to technological risk and emergency management;

  • a prototype system, based on client/server distributed parallel computing, for the safety analysis of hazardous transports; the simulation of accidents (release scenarios of hazardous substances) for the support of emergency control measures and staff training for emergency management, both for transportation accidents and Seveso-class chemical process and storage plants, including concurrent sensitivity and uncertainty analysis by Monte Carlo methods as an integrated part of the forecasting and decision support scheme;

  • tools for the real-time visualisation of these dynamic and spatially distributed stochastic model results (concurrent ensembles of solutions), to provide efficient interactive decision support tools for these applications, using rule-based expert systems.

HITERM has implemented and tested these tools and methods in three concrete case study applications related to road transportation of hazardous goods in Portugal, railway transportation in Switzerland, and the chemical process industry in Italy.



Table of Contents





1. Introduction

Within the framework of HPCN Information Management and Decision Support, HITERM was designed to expand the application of HPCN to decision support in new, complex and time-critical domains: the central focus is the interface between technological risk management and the environment.

Using distributed parallel computing, the project has aimed at reaching better-than-real-time performance for the simulation of accidental releases of hazardous substances into the atmosphere, ground and surface water, using state-of-the-art 3D simulation models.

This information is used, in the framework of on-line decision support and advisory systems, for:

  • the support of emergency management tasks (and related staff training) for both road and rail transportation accidents involving hazardous substances

  • and for hazardous installations, as foreseen by the amended Post-Seveso Directive (82/501/EEC, 87/216/EEC, COM(94) 4).

In addition to connecting the HPC simulations to various on-line data sources (primarily environmental and hydro-meteorological monitoring), the project has explored two additional important aspects of HPCN based decision support applications, namely,

  • the on-line integration of uncertainty and error analysis, based on Monte Carlo methods, again realised by parallel simulation;

  • methods for on-line interactive data interpretation and visualisation for dynamic, spatially distributed, and probabilistic model results for effective user interface design, supporting direct understanding.

HITERM has developed a new generation of interactive, model-based decision support systems for complex, time critical applications. It has explored the integration, in a distributed client/server HPCN architecture with local, distributed, and mobile multi-media clients, of:

  1. real-time data acquisition systems (e.g., transport telematics systems, satellite imagery, weather radar, stationary and possibly mobile observation stations including hand-held data acquisition systems and video input);

  2. (distributed parallel) high-performance computing resources for (better-than-)real-time modeling and forecasting with transport and dispersion models, plus discrete multi-criteria optimisation, rule-based expert systems, and neural nets, as remote advisory and decision support systems;

  3. multi-media clients, including local and networked X Windows servers and http clients, including distributed, mobile clients (such as hand-held computers),

to provide real-time information and decision support for complex and demanding technological and environmental risk management applications with considerable economic implications. Industrial and transportation accidents addressed by the system can cause extremely large economic damage: even a small reduction due to better emergency planning and management would make a system like HITERM highly profitable.

HITERM links networks of information resources and analytical capabilities with a range of clients, including mobile field personnel. The exploitation of the system for staff training exercises can be foreseen as an initial form of use.

The decision support paradigm underlying HITERM is based on interactive, multi-criteria selection from large sets of (HPCN generated) alternatives, supported by (HPCN generated) dynamic visualisation and multi-media representation.



2. Project Objectives

The main Project Objectives of HITERM were to design and develop:

  • HPCN methods and tools for time-critical environmental applications related to technological risk and emergency management;

  • a prototype system, based on client/server distributed parallel computing, for

    • the on-line safety analysis of hazardous transports including dynamically updated vehicle position through GPS;

    • the simulation of accidents (release scenarios of hazardous substances) for the support of emergency control measures and staff training for emergency management, both for transportation accidents and Seveso-class chemical process and storage plants,

      including concurrent sensitivity and uncertainty analysis by Monte Carlo methods as an integrated part of the forecasting and decision support scheme;

  • tools for the real-time visualisation of these dynamic and spatially distributed stochastic model results (concurrent ensembles of solutions);

  • efficient interactive decision support tools for these applications, using a discrete multi-criteria optimisation system, rule-based expert systems, or neural nets where appropriate.

HITERM has implemented and tested these tools and methods in three concrete case study applications related to the road transportation of hazardous goods (Portugal), the rail transport of hazardous goods in the complex topography of Alpine valleys (Switzerland), and the chemical process industry (Italy).

The overall objective of the project was to develop a decision support system based on the integration of advanced simulation models implemented on high-performance computers (parallel workstation clusters), a number of related real-time communication channels for data acquisition, GIS, and expert systems technology.

The feasibility, and comparative advantage, of this approach was demonstrated for a high-risk and high-value application domain (technological risk management). The ability to integrate all available information resources and powerful but demanding analytical tools leads to a better basis for decision support under time-critical conditions with a high degree of uncertainty, compared to traditional subjective, statistical-empirical methods.



3. User Requirements Analysis

The design of the HITERM system was based on an extensive user requirements and constraints analysis.

3.1 Application Domain

HITERM concentrates on technological risk, and in particular:

  • Emergency planning: consequence analysis, prevention support, accident scenario evaluation (Seveso I), public risk information (safety plans)
  • Emergency response training: emergency plan evaluation (Seveso II)
  • Emergency management support: real-time intervention management.

Emergency within the framework of HITERM is understood as an unintended release of hazardous material that causes, or threatens to cause, harmful effects (casualties, material damages) outside the area of private plants or storage locations.

HITERM has developed HPCN decision support tools for technological risk management and the environment with:

  • better-than-real-time complex simulation models (3D, dynamic)
  • fully integrated uncertainty analysis and error propagation (Monte Carlo)
  • on-line data interpretation, visualisation
  • integrated multi-criteria decision support tools.

HPCN and DSS: the HITERM approach

The ultimate objective of a computer based decision support system is to improve planning and operational decision making processes by providing useful and scientifically sound information to the actors involved in these processes, including public officials, planners and scientists, and possibly the general public.

This information must be:

  • Timely in relation to the dynamics of the decision problem; depending on the nature of the problem (planning, training, or operational risk management) this implies considerably better-than-real-time performance of any forecasting, and more or less immediate response in any situation of interactive use.

  • Accurate in relation to the information requirements; this requires the use of state-of-the-art tools, methods, and models, which usually are demanding in terms of their data requirements and computational resources;

  • Directly understandable and useful; this implies that the output of any numerical method must be presentable in a format that is directly and reliably understandable, that is, graphical and symbolic (multi-media formats rather than purely textual and numerical);

  • Easily obtainable, i.e., cheap in relation to the problem's implied costs, which, however, in the case of technological risk, accidents, and emergency situations can be considerable.

All these requirements for decision support information can, at least in part, be addressed by high-performance computing and networking (HPCN).

Many data processing, modeling, and communication tasks that appeared prohibitive only a short while ago become more and more feasible and also commercially viable with the rapid development of computer and information technology. In particular, the possibility of using clusters of powerful PCs and workstations to configure virtual supercomputers on demand opens new areas of promising applications.

Regulatory Framework

Within the European Union technological risk management is covered by the EC Directive 82/501/EEC (referred to as Seveso I) and subsequent amendments (87/216/EEC, 88/610/EEC) and the Directive 96/82/EC (Seveso II). The latter entered into force on 3 February 1997 and had to be transposed into national law by the Member States within 24 months; it must be applied as from February 1999 (the date of repeal of Directive 82/501/EEC, Seveso I).

Under this common umbrella, the individual countries have quite different national and regional implementations. Important aspects are different coverage (for example, the Seveso Directives do not apply to transportation of hazardous materials and intermediate storage). Regulations in Switzerland are again different in several details as compared to the EU regulations.

Conclusions for HITERM: HITERM was designed to be sufficiently flexible to accommodate the different national regulatory frameworks, which not only differ but are also undergoing modification as the new Seveso II Directive reaches its national implementations.

Institutional Framework

The institutional framework for risk assessment and management shows considerable variability across Europe. No single form of organisation can be identified, with a broad range from very centralised (such as in Italy) to very distributed (such as in Switzerland) models.

Conclusions for HITERM: As a consequence, HITERM must be able to adapt to a range of organisational models easily. Interviews with potential end users have made clear that compatibility with existing organisational models is a must.

Technical, financial, and human constraints

The analysis of the current situation in the three case study countries demonstrates that neither the level of computer hardware generally in use, nor the (information technology) training of responsible personnel can be considered sufficient in the context of HITERM. Also, financial constraints seem to dictate policy in most institutions involved in risk management.

Conclusions for HITERM: Demonstrated cost-efficiency, flexibility in terms of the required hardware, and ease-of-use (low training requirements) are important design objectives for HITERM.

3.2 User Requirements

User requirements have been compiled from interviews with key industrial and governmental institutions, a questionnaire, and the analysis of similar systems and established practice. User requirements vary considerably, mirroring the institutional framework and the different distributions of responsibilities and authority.

The following list of user requirements is compiled from all three case studies in Italy (main end user: Regione Lombardia), Portugal (main end user: Petrogal), and Switzerland (main end user: cantonal authorities). Although with different emphasis on the various functional requirements, no major contradictions were found between the different end users.

Specific user requirements from representative institutions and individuals include:

  • Information on the evolution in time and space of an emergency and its driving conditions, in order to predict consequences and the impact area.

  • This should be derived from advanced state-of-the-art methods and models for atmospheric diffusion, surface and groundwater pollution.

  • The system has to be applicable for emergency planning as well as for emergency management.

  • Training applications include the definition of likely accident scenarios.

  • For emergency management, HITERM should be able to provide a basic estimation of the impact of a transportation accident.

  • For intervention management, HITERM must be able to acquire and analyse time-critical data necessary for the simulations quickly and reliably.

  • Speed and reliability in data acquisition, analysis, and communication are general but crucial requirements.

  • Monitoring data acquisition should make it possible to test and re-calibrate the results of mathematical dispersion models.

  • Uncertainty of parameters and source terms should be explicitly considered as part of model based forecasts.

  • The system must have access to information about chemical and toxicological data of hazardous substances and suggestions for safety and emergency management.

  • Information on the environment, including meteorological data, area characteristics, population density and sensitive points, is important to emergency management.

  • Information from the Safety Reports must be an integrated part of the HITERM information system.

  • Information on traffic conditions and access routes should be available for the intervention forces.

  • A decision support component is required, although the emphasis varies between applications to emergency management and training and accident simulations.

  • The decision-making structure of the intervention forces must not be affected by the system.
  • It is suggested that HITERM must not have any impact on existing emergency organisation structures.

  • HITERM should be de-centrally operated and include existing manuals and guidelines. The representation of results should be transparent and easy to interpret.

Technical specifications

No general technical specification other than interoperability with existing legacy applications (data bases, GIS systems, existing communication technology) could be derived from requirements and constraints.

Conclusions for the HITERM design: The overriding conclusion for the design of HITERM is the need for flexibility. The system must be able to accommodate, preferably in a fully data-driven manner:

  • Diverse regulatory frameworks
  • Diverse institutional and organisational structures
  • Interoperability with legacy systems (data bases, specific models)

As a consequence, a highly flexible and modular client-server architecture based on standard protocols (TCP/IP, http, X11) was chosen.
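The reliance on standard protocols can be illustrated with a minimal client-side sketch; the server host, endpoint, and parameter names below are hypothetical placeholders, not part of the HITERM specification:

```python
# Minimal sketch of a client request to a HITERM-style simulation server
# over http. Host, endpoint, and parameter names are invented for
# illustration only.
from urllib.parse import urlencode

def build_scenario_request(base_url, scenario):
    """Encode a release-scenario description as an http GET request URL."""
    return base_url + "?" + urlencode(scenario)

url = build_scenario_request(
    "http://simulation-server.example/run",   # hypothetical endpoint
    {"substance": "chlorine", "release_kg": 500, "wind_ms": 3.2},
)
print(url)
```

Because the interface is plain http over TCP/IP, any client — a desktop GIS, an X11 display, or a mobile hand-held browser — can issue such requests without system-specific software.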



4. Modeling Tools, Parallelisation

Based on user requirements, a methodology for the development of an appropriate modeling system was developed, defining and describing the models and listing their data requirements.

The model system has to fulfill a set of basic constraints:

  • it must be easy to handle
  • it should give reasonable results also with a limited set of input data
  • the execution time must be an order of magnitude shorter than real time to allow the management to react
  • the modeling system should be easily adaptable to a new model area
  • nevertheless, the models should be state of the art.

HITERM uses a number of models for describing technological risk and emergency situations. These primarily include models for:

  • the computation of the release of toxic chemicals (release models)
    • jet release of gas
    • evaporating pool release
  • dispersion calculation (a Lagrangian transport model)
  • the wind field estimation (a 3D diagnostic wind field model)

This basic set of models was implemented for parallel execution on a workstation cluster under PVM.
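HITERM's parallel models run under PVM on a workstation cluster; the actual PVM calls (C/Fortran) are not reproduced here. The following is only an analogous master/worker sketch in Python, showing how a particle ensemble can be scattered into chunks, processed concurrently, and gathered again:

```python
# Analogous master/worker decomposition (a sketch, not the PVM code):
# the ensemble is split into chunks, each chunk is handled by one worker,
# and the results are gathered back by the master.
from concurrent.futures import ThreadPoolExecutor

def transport_chunk(chunk):
    """Stand-in for one worker's share of the particle transport step."""
    return [x + 1.0 for x in chunk]   # placeholder for the real physics

particles = [float(i) for i in range(1000)]
n_workers = 4
chunks = [particles[i::n_workers] for i in range(n_workers)]   # scatter
with ThreadPoolExecutor(max_workers=n_workers) as pool:
    results = list(pool.map(transport_chunk, chunks))
moved = [x for part in results for x in part]                  # gather
print(len(moved))
```

Because the particles are independent between synchronisation points, this decomposition scales almost linearly with the number of workstations in the cluster.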

The selection of the appropriate models had to fulfill the user requirements (see HITERM Deliverables D01.0 - D01.4: Requirements and Constraints Analysis). The models have to be state-of-the-art and must be executable in a period of time much shorter than real time. At the same time, the preparation of the input data has to be easy and possibly automatic.

The models should be suitable for training, planning and emergency management purposes. An air pollution transport model for emergency management, in particular, requires a very fast model response time.

The input data for the models are in general very sparse, but realistic prognostic models require a lot of input information and have long execution times. A compromise has to be found to design an emergency model which runs with a basic input data set but is able to include further information to improve the model results. Additionally, it should work properly at different scales; at a minimum, it should be able to take complex terrain into account.

Several investigations and in-house model developments have been undertaken to find the appropriate design of the system. A source strength preprocessor computes the release rate and release height of a gas or particulate matter if the rate is unknown. It covers evaporating pool release and horizontal or vertical jet release.

The meteorological preprocessor computes the input wind fields for the transport model. This preprocessor is a diagnostic wind field model which solves a simplified version of the continuity equation in order to guarantee a mass-consistent wind field. The core part minimises the divergence of the first-guess wind field derived from measurements. If a set of surface measurements is available, an iterative procedure tries to find the closest approach between the measured and computed wind fields. The result is a 3D wind field which takes into account the orography and the stability of the boundary layer, together with additional meteorological quantities, e.g. friction velocity and stability parameters such as Monin-Obukhov length, mixing height and lapse rate. These data serve as input for the transport model.

A Lagrangian type model has been chosen to avoid numerical diffusion and to make the model more scale-independent. 

The structure of the model is well suited for parallelisation and can therefore easily be accelerated if necessary. In a Lagrangian particle model, a concentration is represented by a particle ensemble which is transported by the mean wind as well as by diffusion processes. The meteorological input parameters can have a complex 3D structure taking terrain effects into account. Dry deposition can be included, and linear chemical reaction (or decay) can be modeled. The model output is a 3D concentration pattern at selected time steps.
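The way a particle ensemble yields a concentration field can be sketched as follows: each particle carries an equal mass, and concentration on a 3D grid is the summed mass per cell divided by the cell volume. Grid dimensions, cell sizes, and the per-particle mass below are illustrative values, not HITERM parameters:

```python
# Sketch: binning a Lagrangian particle ensemble into a 3D concentration
# grid (mass per cell / cell volume). All numbers are illustrative.
def concentration_grid(particles, mass_per_particle, nx, ny, nz, dx, dy, dz):
    """particles: list of (x, y, z) positions in metres."""
    grid = [[[0.0 for _ in range(nz)] for _ in range(ny)] for _ in range(nx)]
    cell_volume = dx * dy * dz
    for (x, y, z) in particles:
        i, j, k = int(x // dx), int(y // dy), int(z // dz)
        if 0 <= i < nx and 0 <= j < ny and 0 <= k < nz:
            grid[i][j][k] += mass_per_particle / cell_volume
    return grid

# 100 particles of 10 g each, all in the first 100 m x 100 m x 10 m cell:
cells = concentration_grid([(50.0, 50.0, 5.0)] * 100, 0.01,
                           10, 10, 5, 100.0, 100.0, 10.0)
print(cells[0][0][0])   # kg per cubic metre in that cell
```

Assigning a uniform mass to each particle makes the particle density directly proportional to concentration, which is the assumption used later in the dispersion model description.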

4.1 Constraints and Key Model Parameters

The draft of the model system had to conform to the special requirements of an emergency planning and management system. In particular, some basic constraints can be expected:

  • sparse meteorological data, simple surface observations only 

  • no or only a rough estimation of the source strength 

  • no knowledge of dynamic evolution of the input data 

  • little information about the area. 

On the other hand, the system must 
  • be simple and adaptable to any desired model domain 

  • allow a quick initialisation and preparation of input data 

  • be executable in a fraction of real time 

  • include suitable data analysis and visualisation tools. 

These constraints restricted the choice of models for the HITERM system. The selected models and the expected data had to be in balance, which led to some compromises in the modeling architecture. The model should work with a very sparse data set and should allow the model results to be enhanced by adding new input data (if available).

In principle, a model system for the dispersion calculation of accidentally released toxic air pollutants has to include a variety of different modules for different tasks. Four main modules can be distinguished:

  • release strength estimation/near field calculation
  • wind flow computation
  • dispersion calculation
  • chemical reactions.

Within the frame of the HITERM project, the release modules try to determine the time-dependent release properties and the release strength. This includes an uncertainty analysis and worst-case scenarios as well as the determination of the effective emission location due to buoyant gases or jet release. Within HITERM, the near-field distribution of toxic materials, within a radius of a few meters up to 100 m, is not handled: this would require dynamic explosion modeling, which is based on a set of data not usually available for common accidental releases, and whose computation is very time-consuming.

To determine the wind flow in complex terrain, three general methodologies can be distinguished. First, a wind field can be constructed by using objective analysis methods. Second, a diagnostic wind field model can be applied, parametrising terrain and thermal effects and ensuring mass conservation by solving the continuity equation. Under certain circumstances, this can be combined with objective analysis. Third, a prognostic wind field model can be used, based on the space-time integration of the equations of conservation of momentum, mass, heat and water. Although needing empirical parametrisation for turbulence closure, this approach is the most physical one because it is based on the solution of the complete set of partial differential equations. Nevertheless, under sparse data conditions it can lead to considerably worse results than the much simpler diagnostic approach: the underlying equations are rather sensitive to initial and boundary conditions. Additionally, it is computationally expensive and not easy to initialise.

For these reasons, we decided within the HITERM project to use a fast-running diagnostic wind model, parametrising all relevant effects while satisfying physical constraints. The diagnostic wind model produces a snapshot of a given situation. For the intended medium- to small-scale simulations and the related typical time scale, the use of a time-independent wind field can be justified. The time variability in prognostic models under stagnant synoptic conditions is closely related to thermal effects driven by solar radiation. A problem may arise for a diagnostic model especially during sunset or sunrise; wind field calculations during this period should be handled with care because stability classes, and therefore local wind flows, can change quickly. To overcome this problem, several "snapshots" of the situation can be combined to produce a time-varying wind field approximating the dynamic behaviour.
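One simple way to combine snapshots into a time-varying field is linear interpolation in time between two diagnostic solutions; in this sketch scalar per-cell values stand in for the full 3D wind vectors, and the times and field values are illustrative:

```python
# Sketch: blending two diagnostic wind-field "snapshots" linearly in time.
def interpolate_wind(field_t0, field_t1, t0, t1, t):
    """Linearly blend two snapshot fields taken at times t0 and t1 (t0 < t <= t1)."""
    w = (t - t0) / (t1 - t0)
    return [(1.0 - w) * a + w * b for a, b in zip(field_t0, field_t1)]

# Snapshots at 06:00 and 07:00 (minutes since midnight), queried at 06:30:
u = interpolate_wind([2.0, 3.0], [4.0, 5.0], 360.0, 420.0, 390.0)
print(u)  # [3.0, 4.0]
```

During sunrise or sunset, when stability changes quickly, the snapshot spacing would have to be reduced for this approximation to remain acceptable.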

For the dispersion calculation, Gaussian, Eulerian and Lagrangian models are typical choices. The Gaussian model is based on the analytical solution of the dispersion equation under very restrictive assumptions. The model is not applicable for complex terrain and nonuniform wind fields, as well as for changing source strength conditions.

Eulerian models, which numerically solve the advection-diffusion equation on a grid, show rather high numerical diffusion. Lagrangian models do not have this disadvantage. As long as no nonlinear chemistry is involved and neither a nonuniform background nor a large number of sources has to be considered, the Lagrangian approach is well suited for the dispersion calculation at any desired scale. Additionally, the model execution can easily be accelerated by using parallel computation platforms. Of course, problems of turbulence parametrisation exist in the Lagrangian framework just as in Eulerian models.

Chemical reactions among the released substances, or between the substances and the ambient air, are not an issue in the HITERM project. To model them, the complete reaction scheme with all ongoing chemical reaction paths and their rate constants, as well as the initial concentrations of the different substances, would have to be known; this information will certainly not be available in cases of accidental release. Additionally, for practical purposes it is questionable whether the chemical changes during the transport of the released materials over a distance of a few kilometres are important for emergency management. Therefore, chemical reactions are not treated. Nevertheless, linear decay or conversion rates can be included in the model.

Key Model Parameters and Application Range

The model system for the air domain is defined for a horizontal model domain of medium to regional scale (1 km x 1 km up to 200 km x 200 km) and covers vertical levels up to the mixing height. The horizontal resolution depends on the given data (the recommended horizontal grid size is 1/100 of the domain length, e.g. for a 10 km x 10 km model domain an appropriate grid size is 100 m x 100 m).
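The 1/100 rule of thumb stated above can be written as a trivial helper:

```python
# The recommended horizontal grid size is 1/100 of the domain length
# (rule of thumb from the text).
def recommended_grid_size(domain_length_km):
    """Return the grid cell size in metres for a square domain of given side length."""
    return domain_length_km * 1000.0 / 100.0

print(recommended_grid_size(10.0))   # 10 km domain  -> 100 m cells
print(recommended_grid_size(200.0))  # 200 km domain -> 2000 m cells
```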

The vertical resolution depends on the stability (mixing height) and usually extends from a few meters at the bottom to some hundred meters at the top of the model domain. The model is not able to resolve local turbulence structures (e.g. lee waves at a building).

The importance of such features for the dispersion calculation depends on a number of parameters and cannot be generalised. It might be reasonable under some conditions to use a fine-resolution grid, even if it is not always possible for the diagnostic wind field model to compute realistic results in areas near obstacles. Results of the dispersion calculation in close proximity to pronounced obstacles can be unrealistic. On the other hand, microscale investigations for emergency purposes are in principle not realisable at the moment: they would require a lot of data on the precise structure of buildings, streets and obstacles, and more precise information about the initial meteorological parameters. Additionally, they would require the solution of the prognostic Navier-Stokes equations on a relatively large grid, which is not possible in a reasonable time for emergency management purposes, even on today's fastest supercomputers.

4.2 The Emergency Air Pollution Simulation System

The air pollution emergency modeling system consists of: 

  • a set of source strength estimation modules 

  • a three-dimensional wind field model which serves as a meteorological preprocessor and 

  • a Lagrangian particle model. 

The model selection procedure has been described together with an analysis of available third party models in Deliverable D02, Model Specification.

Based on this analysis, the following models have been included in the HITERM system with several application-specific modifications:

  • the diagnostic wind model DWM (Douglas 1990), a third-party model provided by the US EPA

  • a pool release model based on EVAPOR (Kawamura 1987) running in a Monte Carlo frame

  • a release model for buoyant gas and jet release based on Briggs formula (Briggs 1984) (included in the Lagrangian model code)

  • a Lagrangian dispersion model (LDM) developed by GMD.

The complete set of model features and parameters is described in Deliverable D02.3. Coupling of the original DWM and a Lagrangian particle model has been reported previously (Al-Wali 1996), although for a different application range.

The Release Models

The model system for the computation of source strength consists of individual submodules for each release type. The HITERM system explicitly includes

  • jet release of gas (Ermak 1991) 

  • evaporating pool release. 

If the source strength is known, the release models are not needed. In the case of an accident, however, very often only some geometric quantities (e.g. of a spill) are known.

Most of the parameters needed for the source strength estimation are highly uncertain. To take these uncertainties into account, a Monte Carlo simulation is used to determine a source strength distribution over time instead of a single number for the source strength. This distribution is used as input for the dispersion model.
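The Monte Carlo treatment of uncertain source-term parameters can be sketched as follows; the uncertain inputs (pool area and a bulk evaporation-rate coefficient), their ranges, and the simple product model are illustrative placeholders, not the EVAPOR formulation used in HITERM:

```python
# Sketch: Monte Carlo sampling of uncertain source-term parameters,
# yielding a distribution of source strengths rather than a single value.
import random

def sample_source_strength(n_samples, seed=42):
    rng = random.Random(seed)
    strengths = []
    for _ in range(n_samples):
        pool_area = rng.uniform(20.0, 50.0)     # m^2, uncertain spill size
        evap_rate = rng.uniform(0.001, 0.003)   # kg/(m^2 s), uncertain coefficient
        strengths.append(pool_area * evap_rate) # kg/s released (placeholder model)
    return strengths

s = sample_source_strength(10000)
mean = sum(s) / len(s)
print(min(s), mean, max(s))
```

The resulting ensemble of source strengths is what the dispersion model consumes, so the uncertainty propagates all the way into the forecast concentration fields.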

Although explosion or burning might also be important release types, their explicit handling in the system with dynamic models is impossible for practical reasons (lack of data, high computational cost).

With a high level of probability, in the case of explosions an immediate total release of the substance can be assumed. For release types due to burning, the determination of the source strength and the properties of the released substances can be very uncertain. However, this is a principal problem of the subject rather than a problem of the modeling tools.

Thus explosion and fire are covered in HITERM by semi-empirical steady-state models (see below).

For fire-type release, the applied methodology is similar to buoyant jet release, taking into account the especially broad range of uncertainty of the model input parameters.

The Wind Field Generator

One of the most important features of an atmospheric emergency simulation system is the computation of a realistic wind field in complex terrain. The core of the chosen wind field model is based on the Diagnostic Wind Model (DWM, Douglas 1990). It generates gridded wind fields at a specified time and adjusts the domain-scale mean wind for terrain effects (kinematic effects such as lifting and acceleration of the airflow over terrain obstacles, as well as thermodynamically generated slope flows). It performs a divergence minimisation to ensure mass conservation; this divergence minimisation scheme is applied iteratively until the divergence is less than a given threshold value.

The following steps are performed: 

  • STEP 0: selection of the appropriate parametrisation for the given meteorological conditions. 

  • STEP 1: construction of an inert vertical wind profile depending on atmospheric stability and determination of a set of stability parameters (Ermak 1991). 

  • STEP 2: parametrisation of kinematic terrain effects (Liu 1980). 

  • STEP 3: intermediate divergence minimisation to adjust the horizontal wind components in each vertical level (Goodin 1980). 

  • STEP 4: computation of thermodynamically generated slope flows, modification of the horizontal surface wind components (Allwine 1985). 

  • STEP 5: Froude number adjustment for the horizontal wind (Allwine 1985). 

  • STEP 6: smoothing of the horizontal wind field. 

  • STEP 7: divergence computation of the horizontal field, new vertical wind components. 

  • STEP 8: vertical adjustment of the vertical wind (zero at the top or at the mixing height) (O'Brien 1970). 

  • STEP 9: final divergence minimisation to adjust the horizontal wind -> final wind field. 
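The iterative divergence minimisation used in STEPs 3, 7 and 9 can be sketched as a point-relaxation scheme. The code below is an illustrative simplification in the spirit of Goodin's adjustment, not the DWM implementation itself; the grid layout, the correction factor and the threshold are assumptions:

```python
import numpy as np

def minimise_divergence(u, v, dx, dy, eps=1e-4, max_iter=500):
    """Iteratively adjust the horizontal wind components (u, v) until the
    maximum divergence on the interior points falls below eps.
    Illustrative point-relaxation sketch, not the DWM code."""
    for _ in range(max_iter):
        # centred-difference horizontal divergence on the interior points
        div = ((u[2:, 1:-1] - u[:-2, 1:-1]) / (2 * dx)
               + (v[1:-1, 2:] - v[1:-1, :-2]) / (2 * dy))
        if np.abs(div).max() < eps:
            break
        # spread a velocity correction onto the neighbouring points so
        # that each local divergence is driven towards zero
        corr = 0.5 * div
        u[2:, 1:-1] -= corr * dx
        u[:-2, 1:-1] += corr * dx
        v[1:-1, 2:] -= corr * dy
        v[1:-1, :-2] += corr * dy
    return u, v
```

Applied repeatedly, the corrections propagate inwards from the boundaries until the residual divergence is below the threshold, mirroring the iterative scheme described above.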

The model output is a terrain and atmospheric stability-adjusted 3D wind field with its appropriate stability parameters. 

The Dispersion Model

The basic concept of Lagrangian models is the observation of individual particles. For this reason, these models are also referred to as Lagrangian particle dispersion models (LPD) or as trajectory models. 

In the case of atmospheric dispersion, the term particle denotes any air pollutant or any buoyant substance in the air. For physical reasons the particles are assumed to have no spatial extension, so that they can follow every flow. However, they are assumed to carry certain characteristics. Together with the motion of a particle, the modifications of these characteristics are registered. Gases or evaporating liquids are also represented by particles, where, for example, the density of the gas components is treated as a characteristic. If a uniform mass is assigned to the particles, the density of the particles is proportional to the concentration. 

Lagrangian models use given wind fields and take into account fluctuations caused by turbulence to predict the pathways of individual particles or air volumes, and register modifications in their characteristics for each time step. The definite form of a Lagrangian model is mainly determined by the chosen scale, affecting, for example, the types of turbulence simulated. Particles or air volumes, respectively, may be released from any number of locations. The type of source, e.g. point or line source, has no influence. 

In contrast to Gaussian models, Lagrangian trajectory models are appropriate for the description of dispersion in complex meteorological situations and/or structured orography. 

The underlying basic methodology of the model can be described as follows: 

The position of a particle is given by its previous position (in the first step the position of its source) plus a term describing the motion by advection processes and by turbulence. The advective wind is completely determined by the velocity and the direction of the wind, while the fluctuation or turbulent component describes the actual fluctuation. The simulation of turbulence is based on the statistical theory of Taylor for diffusion effects (Taylor 1921), and an extension made by Obukhov (Obukhov 1959) and Smith (Smith 1968). 

The velocity fluctuation is simulated by a Markov sequence of first order where the random component describes the coincidental effects in diffusion. 
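Such a first-order Markov sequence can be sketched as follows. Here sigma_u (velocity standard deviation) and T_L (Lagrangian time scale) are generic symbols, and the discretisation is an illustration rather than the HITERM code:

```python
import math
import random

def velocity_fluctuation(u_prev, sigma_u, t_lagrangian, dt):
    """One step of a first-order Markov sequence for the turbulent
    velocity fluctuation (homogeneous form):
        u'(t+dt) = R * u'(t) + sigma_u * sqrt(1 - R^2) * xi
    with R = exp(-dt / T_L) the autocorrelation over one time step and
    xi a standard Gaussian random number (the coincidental component)."""
    r = math.exp(-dt / t_lagrangian)
    return r * u_prev + sigma_u * math.sqrt(1.0 - r * r) * random.gauss(0.0, 1.0)

def advance_particle(x, u_advective, u_fluctuation, dt):
    """New position = previous position + advective and turbulent motion."""
    return x + (u_advective + u_fluctuation) * dt
```

The recursion keeps the fluctuation statistically stationary: its long-run standard deviation stays at sigma_u while successive values remain correlated over the Lagrangian time scale.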

In non-homogeneous turbulent conditions, the given formula for the velocity fluctuation may lead to non-physical effects such as the accumulation of particles in an originally well-mixed profile. Therefore, Legg and Raupach (Legg et al. 1982) introduced an additional term for the description of the vertical part of the velocity fluctuation. This approach is currently widely used in atmospheric Lagrangian models. A more general form of the dispersion equation which is capable of representing general inhomogeneous, non stationary conditions is described in Thomson (Thomson 1987). 

The meteorological data required by the Lagrangian model are provided from the diagnostic wind model, whereas the emission rate and location is the output from the release model. 

If required and the deposition velocity of the substance is known, dry deposition can be considered each time a particle trajectory touches the ground. 

The main advantages of the Lagrangian approach to the solution of the diffusion equation can be summarised as follows. It 

  • is able to describe transport in complex terrain,
  • is easily parallelised and therefore suitable for HPCN, 
  • shows good accuracy/performance ratio,
  • is scalable over a wide range (from a few centimetres to some kilometres).
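The second point follows from the fact that particles do not interact: the ensemble can be split into batches that are tracked independently on separate processors. A minimal sketch, in which the 1-D random-walk step and all numerical values are toy assumptions:

```python
from multiprocessing import Pool
import random

def track_batch(args):
    """Track one batch of particles; batches are fully independent,
    which is what makes the Lagrangian approach easy to parallelise."""
    n_particles, n_steps, seed = args
    rng = random.Random(seed)
    positions = []
    for _ in range(n_particles):
        x = 0.0
        for _ in range(n_steps):
            # toy 1-D step: advective wind (1 m/s) plus a turbulent component
            x += 1.0 + rng.gauss(0.0, 0.5)
        positions.append(x)
    return positions

def run_parallel(n_workers=4, particles_per_batch=250, n_steps=100):
    """Distribute the particle ensemble over n_workers processes and
    merge the resulting positions."""
    batches = [(particles_per_batch, n_steps, seed) for seed in range(n_workers)]
    with Pool(n_workers) as pool:
        results = pool.map(track_batch, batches)
    return [x for batch in results for x in batch]
```

Since each batch only needs the wind field and its own random seed, the workload scales with the number of available processors.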

On the other hand, the particle methodology is not well suited to treat chemically reactive species. As long as the reactions can be described by a linear rate, chemistry can be included. But for complex nonlinear chemical reactions, an Eulerian approach should be used. For emergency management purposes the computation of chemical changes is usually not relevant, because detailed information on the chemical reaction paths and reaction rates is lacking. 

Moreover, the computation of complex chemical reaction schemes often leads to coupled systems of stiff ordinary differential equations. The solution of this type of a system is very time consuming.

Gaussian Plume Model (GPM) versus Lagrangian Dispersion Model (LDM)

The results of the LDM for the Italian case study, shown at the Final Review Meeting, clearly indicated that the LDM gives much more realistic information than the GPM.

Generally speaking, GPM and LDM results are not directly comparable: a GPM can only be used under very strong restrictions on the orography and meteorological conditions, and can only give a very coarse estimate of the dispersion plume. On the other hand, a GPM is very fast, which was the reason for applying it in the Demonstrator versions, which were intended to give an overview of the general performance of the HITERM prototype.

In the Italian and Swiss test cases, a large number of test runs with the LDM was performed, showing the advantages of the LDM over the GPM. For idealised conditions (homogeneous wind field) that satisfy the assumptions of the GPM, the results of the comparison between LDM and GPM are presented in chapter 7 (Model calibration, Uncertainty) below.

Auxiliary models

In addition to the basic model system with its parallel implementation, a number of simpler models, fully embedded in the expert system DSS framework were used. They include:

SPILL: dynamic release model

    for one and two-phase releases, pool evaporation, and infiltration (stochastic, Monte Carlo framework);

    Input:

    container description, meteorology, size of a possible containment, soil permeability (infiltration);

    Output:

    mass budget, dynamic source term (evaporation or jet release) for the atmospheric model(s). All output available as a frequency distribution.

DYNPUFF: dynamic multi-puff model

    based on INPUFF 2.4, using the diagnostic 3-D wind model DWM as a pre-processor; The model directly utilises the output of the SPILL model above.

    Input:

    Dynamic source term (automatically coupled to the SPILL model);
    2-D Wind field, automatically computed by the embedded DWM, which in turn uses anemometric and geostrophic winds, temperature and stability class.
    Digital terrain model, surface roughness, population distribution.

    Output:

    dynamic 2-D concentration field (ground level)
    Area and population exposure above user defined threshold concentrations.
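The exposure outputs can be illustrated with a short sketch: given a ground-level concentration grid and a co-registered population grid, the exposed area and population above a user-defined threshold reduce to a simple mask operation (array names, units and the uniform cell area are assumptions):

```python
import numpy as np

def exposure_above_threshold(concentration, population, cell_area, threshold):
    """Area and population exposed to ground-level concentrations above a
    user-defined threshold; both grids are 2-D arrays aligned cell by cell."""
    mask = concentration > threshold
    exposed_area = mask.sum() * cell_area        # e.g. in m^2
    exposed_population = population[mask].sum()  # persons in exposed cells
    return exposed_area, exposed_population
```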

BLAST explosion models

    TNT equivalence and a fuel-air charge blast explosion model from the TNO Yellow Book (Third Edition 1997);

    Input:

    Substance amount and parameters, for the TNO model also ignition strength and blockage factors;
    Landuse and population distribution.

    Output:

    2-D pressure distribution, population and area exposed to pressures above a user defined threshold.

FIRE: steady-state 2-D fire model

    can describe pool and trench fires (see the Attachment to D10, Italian case Study) as well as BLEVE (Boiling Liquid Expanding Vapor Explosion) based on the TNO Yellow Book formulas.

    Input

    Source term (feed rate) and substance parameters (loaded automatically from the hazardous chemicals data base); pool or trench geometry, wind direction and speed, background temperature.

    Output

    Heat flux or temperature distribution (steady state, 2-D).

SOILGW: stochastic 1-D soil/groundwater infiltration model

    estimates the arrival time of a spill at the water table, using substance viscosity, soil properties, and the groundwater level (distance from soil surface). Monte Carlo implementation. The model is based on a set of Nomograms used routinely by the Swiss chemical intervention forces.

    Input:

    substance properties, soil properties, groundwater head (vertical distance)

    Output:

    Arrival time of the contaminant (probability distribution).
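The Swiss nomograms themselves are not reproduced here, but the stochastic arrival-time idea can be sketched with a Darcy-type travel-time relation; the formula, parameter ranges and unit hydraulic gradient below are illustrative assumptions, not the SOILGW model:

```python
import random

def arrival_time_samples(depth_m, k_range, porosity_range, n=10000, seed=0):
    """Monte Carlo sketch: travel time to the water table
        t = depth * effective_porosity / hydraulic_conductivity
    with both soil parameters drawn uniformly from their uncertainty ranges."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        k = rng.uniform(*k_range)             # conductivity in m/s
        theta = rng.uniform(*porosity_range)  # effective porosity
        samples.append(depth_m * theta / k)   # seconds
    samples.sort()
    return samples

# 3 m to the water table; wide conductivity range for an uncertain soil
times = arrival_time_samples(3.0, (1e-6, 1e-4), (0.2, 0.4))
earliest = times[int(0.05 * len(times))]  # pessimistic (early) arrival
median = times[len(times) // 2]           # most probable arrival
```

Reporting an early percentile alongside the median matches the probabilistic output format described above.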

MS: Metodo Speditivo

    fast empirical estimation method from Italy (PIANIFICAZIONE DI EMERGENZA ESTERNA PER IMPIANTI INDUSTRIALI A RISCHIO DI INCIDENTE RILEVANTE, LINEE GUIDA, January 1994); it uses tabulated data for substance classes, amounts, storage conditions, and weather conditions.

    Input:

    Substance class, amount, storage conditions, weather, population distribution.

    Output:

    Safety zones (radii) and their sizes, population exposure.




5. Communication Architecture

In Workpackage 3 Communications and Networking the following requirements for the Network Architecture were defined:

The network must be adequate for information flowing between nodes in short messages, which must be delivered with short delays. This requires a fast small-datagram network. Reliability should be high, at least for certain messages; in these cases, backup routes must be provided. When backup routes are used, switching between the default and alternative routes should be done automatically.

Security issues must be considered at the following levels: authentication, integrity and privacy.

Additionally, the following requirements were defined:

  • The network should use only well-established open protocols and technologies.

  • The network should use protocols and technologies that are widely available across Europe.

An additional feature and requirement was the integration of mobile units, acting potentially both as client and server within the HITERM architecture. The example of a mobile GPS unit for the location of a truck with dangerous material is shown below.

There are several issues that had to be taken into account when doing the analysis and selection of the best network architecture for a given model, with a given set of requirements.

Connections

All networking technologies can be classified according to two types of approaches: Connection Oriented Technologies (COT) and Connection Less Technologies (CLT). The COT have been used traditionally by the phone operators since the beginning of this service. These companies have developed all their voice services on the concept that each exchange of information involves three phases: resource reservation, transmission and resource freeing. These three phases allow these networks to guarantee service levels after the resource reservation phase is completed. If there are not enough resources for a given transmission, the resource reservation phase fails and the network signals the unavailability back to the user.

The CLT were developed by the computer industry, and their best-known example is the Internet Protocol (IP). There is only the transmission phase in CLT. Without the resource reservation phase, the CLT cannot provide any guaranteed level of service, but only a best-effort service. The CLT approach proved to be very well suited to the profile of computer-generated information: small bursts of information. For most computer applications the time and resources needed to complete the first and last phases in COT are too high for the benefit of guaranteed service; CLT provides a good enough level of service for them.

Taking into account the requirements of the HITERM model, we must stress the fact that although it is a computer system it deals with risk and emergencies. In this context reliability becomes the most important issue used in the definition of the network architecture.

Coverage

A second issue is the classification of the various components of the model into the well-established classes of LAN (Local Area Networks) and WAN (Wide Area Networks). There is also the less known MAN (Metropolitan Area Network) class, but in this model MAN technologies can be treated as WAN technologies. The range of technologies available for the LAN is different from that for the WAN. Again, the latter were defined and built by traditional phone operators, while the LAN was developed mainly by the computer industry. LAN technologies are usually fast, Connection Less and investment intensive. WAN technologies are usually slower, Connection Oriented and operation intensive.

Layers

The last issue covered in this analysis is the layering of different technologies to obtain the desired reliable networked medium. The OSI-Open System Interconnect model defines seven layers with increasing service provision. Although much controversy exists regarding the usefulness of all the layers of this model, it is regarded as a well-structured approach to define the layering in the first four layers. In this analysis we'll consider just the first three layers.

The first layer is the Physical Level. It deals with all the aspects of the use of physical properties of materials to transfer information, namely: sockets, cables, etc.

The second layer is the Link Level. It deals with the transfer of information between two points over a single Physical Level channel. The Link Level performs some error correction.

The third layer is the Network Level. It deals with the forwarding of the information across several Link Levels, thus achieving end-to-end connectivity.

Each networking solution can be classified according to these three issues: Connections, Coverage and Layers. Applying them to HITERM, the following set of rules can be derived.

  1. In the WAN context, Connection Oriented Technologies should always be preferred over Connection Less Technologies as the primary means of transporting information. They provide guaranteed quality of service after the successful completion of the resource reservation phase.

  2. In the WAN context, Connection Less Technologies should be used as the secondary (or backup) means of transporting information. If the resource reservation phase fails for the primary path or it is lost, the system should revert to the use of Connection Less solutions and use them until the primary connection is again well established.

  3. In the LAN context, Connection Oriented Technologies should be preferred over Connection Less Technologies as the primary means of transporting information. Although total control of the networking environment in the LAN provides increased performance and reliability for Connection Less approaches, Connection Oriented solutions should still be preferred.

  4. In the LAN context, in the absence of Connection Oriented solutions, Connection Less Technologies may be used as the primary means of transporting information, provided that an exclusive Physical Layer is used. The development of structured cabling and the use of Ethernet switching instead of shared medium Ethernet, enables a very good level of reliability for a Connection Less Ethernet solution.
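Rules 1 and 2 amount to automatic failover between a primary and a backup route. A minimal transport-level sketch, in which the (host, port) routes are placeholders for whatever primary and backup paths the deployment provides:

```python
import socket

def send_with_failover(message, primary, backup, timeout=2.0):
    """Try the primary route first; if the connection cannot be
    established (or fails), revert to the backup route, as required by
    rules 1 and 2 above.  Routes are (host, port) placeholders."""
    for label, route in (("primary", primary), ("backup", backup)):
        try:
            with socket.create_connection(route, timeout=timeout) as conn:
                conn.sendall(message)
                return label   # which route carried the message
        except OSError:
            continue           # route unavailable -- try the next one
    raise ConnectionError("both primary and backup routes failed")
```

The caller learns which route was used, so the system can keep probing the primary route and switch back once it is re-established.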



    6. Visualisation

    The volume of information generated by large-scale 3D dynamic models - in particular when considering more than one model solution generated in parallel or when the output is inherently stochastic, adding yet another dimension to the data interpretation - is enormous and overwhelming to the human observer.

    Therefore, related methods of real-time interpretation (including visualisation, pattern recognition, classification, etc.) are required to translate the model output into useful, decision-relevant information that can be presented, in multi-media formats, to the end user in real time. 

    Visualisation and extensive graphical displays are one of the User Requirements defined in The Executive Summary of D01.1, the Requirements and Constraints Report.

    An important element is the preparation of topical maps, using local GIS data around the accident site. Maps, as a familiar format, are an effective basis for the communication of complex information by providing a familiar context.

    On this basis, and supported by other display styles, graphs, symbolic (icon) displays, and possibly synthesised text and audio signals, a clear picture of the state and expected evolution of the system, including the uncertainty of the forecasts, has to be summarised. This includes spatial interpolation, 3D reconstruction, rendering, and animation, as well as various forms of statistical analysis. 

    The main objective of visualisation is the presentation of the large volume of data, including the representation of uncertainty, in a directly understandable, easy to comprehend form, i.e., largely symbolic and graphical, that guarantees safe interpretation even under the special situation of emergency management. Since most of the graphic rendering is again compute intensive, the use of HPC can also be helpful at this stage of the overall information processing system.

    At the same time, the field application of an emergency management system will require thin clients with rather limited local processing capabilities.

    HITERM uses a dual approach to user interface design and implementation, reflecting the distributed client-server architecture of the system:

    • X Windows (X 11) for clients with a high-bandwidth connection (10 Mb or better)

    • HTML, JavaScript and Java for low-bandwidth or light-weight (mobile) clients.

    Examples of the final user interface and its visualisation tools are given throughout the report, based on screendumps from the HITERM system.



    7. Model calibration, Uncertainty

    Model calibration and uncertainty analysis are two important tasks, both to encourage potential users to adopt the developed software and to ensure a maximum level of confidence in the results. The use of state-of-the-art methods for sensitivity analysis (Automatic Differentiation) allows the quantification of the sensitivity of a model output with respect to its input parameters for a given input data set. This forms a basis for a pre-selection of parameters which have a dominant influence on the model result. The methodology is documented for a selected spill release model. 

    Monte Carlo simulation techniques are generally used for varying input parameter sets of release submodules. Rather than using a single value, a time dependent source strength probability function serves as the input for the dispersion calculation. Usually, the evaluation of atmospheric and aquatic dispersion models is based on large field experiments and cannot be executed within the framework of the HITERM project. However, a comparison of test cases with observed structures or a comparison of numerical model results with analytical solutions for simple cases was carried out, whenever it was possible.

    Model performance evaluation is realised by a comparison of the model results with the real world. In the case of atmospheric and air pollution models, a thorough model evaluation is quite difficult. Data from field campaigns together with data from continuous monitoring networks have to be compared with simulated data sets. 

    This was not within the scope of this project. Moreover, given the possibility to compare simulated with real data, it is hard to distinguish between real model errors and data incorrectness or inadequateness. A model may be valid (this includes an appropriate modeling of the desired features, a proper selection of the numerical schemes, and a careful implementation), but the outcome can be insufficient due to a lack of appropriate input data.

    A validation requires the usage of a perfect input data set to distinguish between model and data errors. Additionally, there is a principal validation problem which is still the subject of scientific investigations. The reason can be found in the different nature of point measurements and gridded model results, as well as in the stochastic behavior of local turbulence patterns. 

    Most of the selected software components already have a broad range of applications, and can therefore be considered as sufficiently evaluated within their application range. This is the case for the wind field model which has been derived from EPA's DWM (Douglas 1990) and for the different release models. Nevertheless, an evaluation of the system is documented.

    The model evaluation methodology in the context of the HITERM project focuses on the following features:

    • demonstration of the appropriate changes of the wind field calculation for different meteorological conditions
    • flow description for complex terrain
    • comparison of Lagrangian dispersion results with an analytical solution for a simple case
    • analysing the dispersion calculation for different atmospheric stability parameters
    • evaluation of the interplay of all submodules.

    The major source of uncertainty is the generally sparse and insufficient data available in emergency cases. In particular, unknown release parameters can lead to uncertainties of up to one order of magnitude in the calculated concentrations. 

    7.1 Uncertainty and Sensitivity Analysis

    One of the most important input parameters for all kinds of dispersion models is the time-dependent emission rate of the source. This rate is sufficiently well known in the case of a continuously operating source, such as the stack of a power plant or an industrial estate. In the case of an accident, this parameter is very rarely known. Very often, only a rough approximation of the total release mass is given and, in some cases, even this release mass is unknown. This requires the incorporation of different source-strength models. But release and spill models for emergency management tasks suffer from a high degree of uncertainty regarding their input parameter set. Depending on the sensitivity of the source function, this is the reason for many unrealistic concentration patterns. 

    Two different methods have been used to determine the uncertainty range of the source module: Monte Carlo Simulation and Automatic Differentiation. Monte Carlo methods are directly included in the model system whereas Automatic Differentiation has been used in the implementation phase.

    The validation of models is a central concern of Applied Mathematics. Sensitivity analysis is carried out in order to examine the robustness of numerical results with respect to changes of the model structure, input parameters, algorithms, machine accuracy etc. The effects of perturbations concerning data or computations can be studied by the application of the Monte Carlo method. Accurate sets of data are perturbed by random data corresponding in their distribution and correlation as closely as possible to those of the real data inaccuracy (Überhuber 1995). 

    Within HITERM the Monte Carlo approach is used to find a source-strength probability function with the help of measured input parameters and user specified uncertainty ranges for these parameters. The computation of the probability function requires thousands of runs of the same code with varying parameters. In the case of a very complex multi-parameter release module this can be very time-consuming. Additionally, the dependencies of the individual parameters are hidden.

    A more general approach is used in automatic differentiation. With automatic differentiation, the sensitivity of the result with regard to the individual parameters can be quantitatively specified. This is especially useful for the preselection of parameters which have to be varied with the Monte Carlo method. If a new release module is added, a sensitivity analysis can be executed using automatic differentiation to filter out the most sensitive parameters for a later Monte Carlo simulation. Additionally, the automatic differentiation approach defines which parameters have to be determined with maximum accuracy.
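    A minimal illustration of forward-mode automatic differentiation: a dual number carries a value together with its derivative through every operation, so the sensitivity of the output with respect to a chosen input falls out of a single evaluation. The release function below is a made-up placeholder, not one of the HITERM modules:

```python
class Dual:
    """Minimal forward-mode automatic differentiation: a value and its
    derivative are propagated together through every operation."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__
    def __pow__(self, p):
        return Dual(self.val ** p, p * self.val ** (p - 1) * self.der)

def toy_release(wind, area, pressure):
    """Placeholder source-strength function (illustrative only)."""
    return wind ** 0.78 * area * pressure

# derivative seed 1.0 on the wind speed: e.der is dE/dwind at the nominal point
e = toy_release(Dual(3.0, 1.0), Dual(10.0), Dual(2.0))
```

    A large e.der relative to the other parameters would flag the wind speed as one of the inputs to vary in the subsequent Monte Carlo runs, and as a parameter to determine with maximum accuracy.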

    7.2 Monte Carlo Simulation

    To allow a more realistic determination of the release, a Monte Carlo method is used to construct a probability function of the source-strength for different times of the release duration. With this method, all uncertain parameters are varied in a selected range. 

    The parallel implementation of the source term models uses a generalised mask to run different types of source term models. The user must specify the number of input parameters which are not precisely known. In addition, an uncertainty range must be given for every parameter. This range can vary in positive or negative direction. 

    Example: Monte Carlo simulation for an evaporation module 

    Figure 1
    Source strength probability function 

    A probability function of the source-strength is constructed using millions of runs of a deterministic release model. The input parameters of this model are varied over the given uncertainty range using a random generator. Figure 1 (above) shows an example of a source-strength probability function for evaporating pool release for a given time.

    The input parameter variations for this test run were:

    • cloud cover (0 - 100 %)
    • surface wind speed (20 %)
    • ambient air temperature (-20% / +30%)
    • initial depth of the pool (20 %)
    • total release volume (10 %)
    • total release time of the liquid phase (10 %).

    The results exhibit an asymmetric distribution, where neither the source-strength at the mean nor at the maximum of the probability function is equal to the "exact value" (which is the deterministic outcome of the model when the uncertainties are neglected). 

    If the release is a complex function of parameters (which is normally the case), the Monte Carlo run is very useful for finding even the most probable values for the source-strength. This is due to the fact that even the most probable values can be considerably different from the exact one.

    The mean and the emission rate at the 95th percentile of the time dependent source term probability function are used as an input for the Lagrangian dispersion model. The mean represents the most probable emission rate, whereas the 95th percentile represents a worst case scenario for the emission rate excluding a small probability of 5 percent of higher emission rates. 
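    The scheme can be sketched end-to-end: perturb each uncertain parameter within its user-specified range, run the deterministic release model many times, and read the mean and the 95th percentile off the resulting distribution. The release function and all numerical values below are placeholders, not the HITERM evaporation module:

```python
import random

def perturb(value, minus_pct, plus_pct, rng):
    """Draw a parameter uniformly from its uncertainty range,
    e.g. ambient temperature with -20 % / +30 %."""
    return value * rng.uniform(1.0 - minus_pct / 100.0, 1.0 + plus_pct / 100.0)

def release_model(wind, temperature, depth, volume):
    """Placeholder deterministic release model."""
    return 0.01 * wind * temperature * volume / depth

def source_strength_statistics(n_runs=10000, seed=1):
    rng = random.Random(seed)
    samples = []
    for _ in range(n_runs):
        samples.append(release_model(
            perturb(3.0, 20, 20, rng),    # surface wind speed, +/- 20 %
            perturb(290.0, 20, 30, rng),  # air temperature, -20 % / +30 %
            perturb(0.05, 20, 20, rng),   # initial pool depth, +/- 20 %
            perturb(10.0, 10, 10, rng),   # total release volume, +/- 10 %
        ))
    samples.sort()
    mean = sum(samples) / len(samples)
    percentile_95 = samples[int(0.95 * len(samples))]
    return mean, percentile_95
```

    In the full system this is repeated for every time step of the release, so that the dispersion model receives a mean and a worst-case (95th percentile) emission rate as functions of time.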

    7.3 Model Evaluation

    The Wind Field Model

    The diagnostic wind field model is based on the well-established and evaluated DWM (EPA). To allow the user a faster selection of the meteorological condition where only sparse observational data is given, some new parametrisation features were added.

    Two examples show the typical wind flow in a hypothetical complex terrain. The measured meteorological values at 5 m above ground were: 

    • wind speed 3 m/s
    • wind direction 300°
    • surface temperature 290 K.

    The assumed roughness length was 0.5 m, a quite typical average value. The numerical grid has a horizontal resolution of 100 m x 100 m. The complex orography is characterised by a fairly steep rise to a mountain range in the eastern part of the domain, some isolated smoother hills, and some sharp valley incisions cutting through the range. The simulation was carried out for the same measured values but for different hypothetical stability classes. 

    Figure 2 Figure 3

    Figure 2 (left) represents the flow under very stable conditions, occurring quite often during the night. The wind is forced to flow around the sides of the obstacles. Directly upwind of the hill, some of the air is blocked and becomes nearly stagnant. Additionally, there is a visible tendency to drainage winds (valley winds). The underlying background color represents the orography.

    In Figure 3 (right), the meteorological condition is very different. An unstable vertical stratification in the lower atmospheric boundary layer (happens often on hot summer days) leads to a considerably different flow despite the same measured wind direction and speed. The wind flows over the mountain range and exhibits its highest values at the top of the summits. Additionally, there is a thermally generated uphill flow, the so-called mountain winds. As demonstrated above, the diagnostic wind model is able to describe the main features of air flow in complex terrain.

    The Lagrangian Particle Model 

    Comparison of LPD results with a Gaussian solution for a simple case 

    Gaussian plume models are most commonly used for modeling point source emissions. In the case of an (inert) air pollutant and homogeneous conditions, the resulting plume represents an exact solution. The restrictive conditions comprise a wind field with an advective wind that is constant in time and space, and a flat, homogeneous orography. Thus the horizontal and vertical wind fluctuations are constant as well. The emission rate at the point source is permanent and constant. 

    For preconditions given as described, the concentration dispersion of an arbitrary (inert) air pollutant at ground level is simulated by the LPD model. It is compared to the analytical Gaussian solution calculated for the respective conditions.
    A horizontal grid size of 51 x 51 with a resolution of 100 m is given for the meteorological (input) and the output grid. For the vertical resolution of the input, the stratification of an unstable (labile) wind field is applied, while for the output equidistant layers of 10 m are chosen.

    The inputs for both models are as follows:

    Advective wind field:

    The standard deviation of the velocity fluctuation and the Lagrangian time-scale are required for the v and w component and the y, z component, respectively. They are set to the following values: 

    The coordinates of the emission source are:

    coord_x = 2
    coord_y = 25
    coord_z = 5

    For the simulation time of 20 min the emission rate is assumed to be constant with

    In the LPD model the number of emitted particles per time step is set to 200. Hence for a time step of 10 s each particle is associated with a mass of .

    For the Gaussian solution the air pollutant concentration is calculated by:

    $C(x,y,z) = \frac{Q}{2\pi\,u\,\sigma_y\sigma_z}\exp\left(-\frac{y^2}{2\sigma_y^2}\right)\left[\exp\left(-\frac{(z-H)^2}{2\sigma_z^2}\right)+\exp\left(-\frac{(z+H)^2}{2\sigma_z^2}\right)\right]$

    with (x,y,z) the position in the three-dimensional coordinate system, Q the emission rate, u the advective wind speed, and H the effective stack height. For the test case the concentration was calculated with H = coord_z, at the level z = coord_z.

    The standard deviations $\sigma_y,\ \sigma_z$ in the y and z direction, respectively, are obtained from the Taylor Theorem based on Taylor's homogeneous diffusion theory (Taylor 1921, Yamada 1987):

    $\sigma_y^2(t) = 2\,\sigma_v^2\,T_{L,y}\left[t - T_{L,y}\left(1 - e^{-t/T_{L,y}}\right)\right]$

    $\sigma_z$ is constructed analogously.
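    The analytical reference used in this comparison can be sketched in code, assuming the standard ground-reflected Gaussian plume and the Taylor form of the standard deviations:

```python
import math

def sigma_taylor(sigma_v, t_l, t):
    """Standard deviation from Taylor's homogeneous diffusion theory:
    sigma^2(t) = 2 * sigma_v^2 * T_L * (t - T_L * (1 - exp(-t / T_L)))."""
    return math.sqrt(2.0 * sigma_v ** 2 * t_l
                     * (t - t_l * (1.0 - math.exp(-t / t_l))))

def gaussian_plume(q, u, y, z, h, sig_y, sig_z):
    """Ground-reflected Gaussian plume concentration (textbook form):
    q emission rate, u advective wind speed, h effective source height."""
    return (q / (2.0 * math.pi * u * sig_y * sig_z)
            * math.exp(-y ** 2 / (2.0 * sig_y ** 2))
            * (math.exp(-(z - h) ** 2 / (2.0 * sig_z ** 2))
               + math.exp(-(z + h) ** 2 / (2.0 * sig_z ** 2))))
```

    For t much smaller than T_L the Taylor expression behaves like sigma_v * t, while for t much larger it grows like sqrt(2 * sigma_v^2 * T_L * t), the classical diffusive limit.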

    Figure 4 (left) Figure 4 (right)

    Figure 4 (above) represents the resulting dispersion of an air pollutant at the end of the simulation time (20 min) under the described conditions and the level of the source height. The Lagrangian dispersion in the right figure shows a good reproduction of the Gaussian solution (left).

    The quality of the Lagrangian dispersion is also displayed in the figures below (Figures 5 and 6). The diagrams show the alteration of the concentration along an intersecting line for the Gaussian solution and for the LPD dispersion at identical locations.

    Figure 5

    In Figure 5 (above), a line intersecting parallel to the main wind direction was chosen whereas in Figure 6 (below) the intersecting line is nearly perpendicular to the main dispersion direction.

    The diagram in Figure 5 confirms the good reproduction of the Gaussian solution by the Lagrangian dispersion calculation along the centerline of the pollutant plume. The farther from the centerline (Figure 5) and from the emission source, the larger the stochastic error in the LPD model becomes. This stochastic effect is inherent in the Lagrangian model theory; it can be reduced by increasing the number of particles emitted per time step.

    Figure 6

    Comparing the LPD dispersion and the Gaussian solution, Figure 6 indicates that the error effects are relatively small. In assessing this, it has to be considered that the Gaussian model gives an exact analytical solution and is hence not subject to any stochastic influence.

    Test runs with different numbers of emitted particles

    Lagrangian models are based on the determination of trajectories of individual particles. The figures in this section show the dispersion plume for different numbers of emitted particles per time step - from left to right, top to bottom: 50, 100, 200 and 300 particles.

    The number of particles emitted per time step influences the quality of the output as well as the computation time. The more particles are released, the clearer the shape of the plume becomes. The figures also show that beyond a certain number of particles asymptotic effects appear. These effects depend on the given conditions, such as meteorology, orography, etc., and on the time parameters (time step, simulation time).

    A limiting factor for the number of particles emitted per time step is the computational burden and the available memory. To achieve reasonable results in a reasonable time - which for risk management has to be as short as possible - a compromise has to be made for the appropriate number of particles.

    Figure 7

    The LPD model reproduces the Gaussian plume well under convective conditions. It can be assumed that the modeling of pollutant dispersion by the LPD model also holds under stable meteorological conditions.

    The number of particles emitted should be chosen in the range of 100 to 200 per time step. The default value in the LPD model is set to 200. The smaller the number of particles, the larger the effect of the stochastic error, and thus the coarser the result (see figures). A larger number of emitted particles shows a nearly proportional increase in computation time.
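    The trade-off stated above follows from Monte-Carlo sampling statistics: the stochastic error of a particle estimate decays roughly as the inverse square root of the particle count, while cost grows linearly. An illustrative demonstration (not HITERM code):

```python
# Why more particles give smoother plumes: the sampling error of a
# Monte-Carlo mean decays roughly as 1/sqrt(N), so quadrupling the
# particle count roughly halves the stochastic error.
import random
import statistics

def mc_error(n_particles, trials=200, seed=42):
    """Spread of the estimated mean of a unit-variance random process,
    measured over repeated trials with n_particles samples each."""
    rng = random.Random(seed)
    estimates = [
        statistics.fmean(rng.gauss(0.0, 1.0) for _ in range(n_particles))
        for _ in range(trials)
    ]
    return statistics.pstdev(estimates)

err_50 = mc_error(50)     # ~1/sqrt(50)  ~ 0.14
err_200 = mc_error(200)   # ~1/sqrt(200) ~ 0.07
```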

    Dispersion calculation under different meteorological conditions

    For a complex orography, the effect of different meteorological conditions on the dispersion calculation by the LPD model is discussed. A point source for the pollutant emission was chosen and the horizontal reference layer was set to the source height. The isolines in Figure 8 (below) denote the height above sea level in m. The same preconditions as before are assumed for the emission rates, simulation time, time step, etc.

    Figure 8 (left) Figure 8 (right)

    Representative for the tests, two cases have been chosen for display. The left part of Figure 8 shows the dispersion calculated for a slightly unstable wind field with a mixing height of 800 m (stability class 3). In the right of Figure 8, the dispersion in case of slightly stable conditions with a mixing height of 300 m (stability class 5) is displayed.
    For unstable meteorological conditions, the area affected by the pollutant in the considered layer is much wider (left part of Figure 8) than for the stable case (right part of Figure 8). The left part also shows that high concentrations of the pollutant occur only near the emission source, due to stronger diffusion and, e.g., faster rising effects in an unstable wind field.
    Under the given stable conditions the pollutants remain closer to the ground/source level and perturbation across the main wind direction is smaller. Therefore the region affected by the pollutant dispersion is smaller, but concentrations are higher (right part of Figure 8).

    The simulated effects are as expected (also for other than the displayed meteorological conditions) and reproduce known observations gained by field experiments. This implies that the LPD model provides reasonable results for different kinds of meteorological conditions.

    The conclusion can be drawn that the LPD model is appropriate for dispersion calculation in the given context.



    8. Decision Support and Expert System

    HITERM is designed to provide HPCN-based decision support for technological risk analysis, including both risk assessment and risk management aspects.

    As a Decision Support System it uses HPCN technologies to:

    • generate decision-relevant information fast and reliably;
    • explicitly address uncertainty as the determining feature of risk analysis;
    • deliver this information fast and in a directly usable format.

    HITERM implements several decision support paradigms in parallel, and integrated with each other, reflecting the complex nature of the application domain, but also the scope of the intended applications ranging from strategic planning (risk assessment) to training and real-time emergency management.

    The DSS paradigms applied are, basically:

    • comparative analysis and (multi-criteria) selection, based on scenario analysis, primarily applicable in the planning domain;

    • rule-based classification, applicable to both the planning and real-time domains;

    • uncertainty analysis and sensitivity analysis, which are applied to the criteria used both in comparison and classification;

    • and real-time rule-based guidance, applicable to the real-time training and emergency management domain.

    These paradigms are implemented, respectively, through:

    • a set of tools for the direct comparison of HPCN simulation generated options or scenarios and associated statistical analysis;

    • a backwards chaining rule-based expert system linked to the simulation models;

    • Monte Carlo Analysis and a Direct Differentiation approach;

    • a real-time forward chaining rule-based expert system for emergency management support.

    8.1 DSS: an introduction

    The ultimate objective of a computer based decision support system for technological and environmental risk management is to improve planning and operational decision making processes by providing useful and scientifically sound information to the actors involved in these processes, including public officials, planners and scientists, industrial operators, emergency management personnel and civil defense forces.

    This information must be:

    • timely in relation to the dynamics of the decision problem: this is particularly challenging in real-time decision-making situations such as emergency management;

    • accurate in relation to the information requirements;

    • directly understandable and usable;

    • easily obtainable, i.e., cheap in relation to the problem's implied costs.

    Decision support is a very broad concept, involving both rather descriptive information systems that just demonstrate alternatives, and more formal normative, prescriptive optimisation approaches that design them. Any decision problem can be understood as revolving around a choice between alternatives.

    These alternatives are analysed and ultimately ranked according to a number of criteria by which they can be compared; these criteria are checked against the objectives and constraints (our expectations), involving possible trade-offs between conflicting objectives. An alternative that meets the constraints and scores highest on the objectives is then chosen. If no such alternative exists in the choice set, the constraints have to be relaxed, criteria have to be deleted (or possibly added), and the trade-offs redefined.

    This choice process may be iterative, and at leisure, when the decision maker(s) can experiment with criteria, objectives, and constraints to develop their preference structures and reduce the set of alternatives step by step until a preferred solution is found.

    Or this choice process may be a dynamic, real-time sequence of numerous small but interrelated decisions in the form of (alternative) actions that need to be taken under sometimes extreme pressure of time, and under considerable uncertainty.

    However, the key to an optimal choice is in having a set of options to choose from that does indeed contain an optimal - or at least satisfactory - solution. Thus, the generation or design of alternatives is a most important, if not the most important step. In a modeling framework, this means that the generation of scenarios must be easy so that a sufficient repertoire of choices can be drawn upon.

    The selection process is then based on a comparative analysis of the ranking and elimination of (infeasible) alternatives from this set. For spatially distributed and usually dynamic models -- natural resource management problems most commonly fall into this category -- this process is further complicated, since the number of dimensions (or criteria) that can be used to describe each alternative is potentially very large. Since only a relatively small number of criteria can usefully be compared at any one time (due to the limits of the human brain rather than computers), it seems important to be able to choose almost any subset of criteria out of this potentially very large set of criteria for further analysis, and modify this selection if required.
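    The elimination-and-ranking step described above can be sketched as follows (an illustrative implementation, not HITERM code; the criterion names and scenario values are invented):

```python
# Comparative analysis: eliminate alternatives that violate constraints,
# then rank the remainder by a weighted score over a user-chosen subset
# of criteria (the weights select which criteria enter the comparison).

def select(alternatives, constraints, weights):
    """alternatives: list of dicts criterion -> value.
    constraints: dict criterion -> (min, max) feasibility bounds.
    weights: dict criterion -> weight; negative weight = minimise."""
    feasible = [
        a for a in alternatives
        if all(lo <= a[c] <= hi for c, (lo, hi) in constraints.items())
    ]
    return sorted(
        feasible,
        key=lambda a: sum(w * a[c] for c, w in weights.items()),
        reverse=True,
    )

# Two hypothetical accident-response scenarios:
scenarios = [
    {"people_exposed": 120, "cost": 5, "response_time": 30},
    {"people_exposed": 40,  "cost": 9, "response_time": 20},
]
# Keep feasible scenarios, rank with lowest exposure first:
ranked = select(scenarios, {"people_exposed": (0, 200)},
                {"people_exposed": -1.0})
```

    If no alternative survives the constraint check, the constraints would be relaxed and the selection repeated, mirroring the iterative process described above.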

    While this classical approach is most applicable for planning situations where the decision maker is at leisure to contemplate alternatives, the management of an emergency situation in real-time does not offer these luxuries: here efficient decisions have to be taken in a minimum of time, under considerable psychological pressures, and often under large uncertainty.

    Real-time decision support can therefore build on the above concepts of comparative analysis, but must implement them in an efficient and effective way that minimizes time and effort by the decision maker (or rather operator) during an emergency.

    Instead of offering and manipulating many alternatives subject to the decision maker's preferences, here we must present a best alternative (or a very small set of efficient alternatives with a clear ranking and clear trade-offs), i.e., strategies, plans, or a set of actions in the form of rather firm guidance, but with enough context information to allow the operator to exercise ultimate judgment and decision power. This is based on a set of a priori defined strategies and options that are adapted dynamically depending on context, i.e., the characteristics and circumstances of the evolving emergency.

    8.2 A DSS approach for planning

    In HITERM, the decision support approach chosen for the strategic planning applications is primarily constrained by the characteristics of the underlying system. These are:

    • dynamic, with a typical time resolution in the order of minutes;

    • spatially distributed, with spatial resolution ranging from the street level (meters) to the regional air quality grid (100 m);

    • highly non-linear and involving time-delays and memory in the cause-effect relationships, e.g., cumulative exposure.

    These problem characteristics preclude any straightforward optimisation approach.

    Consequently, HITERM uses an approach centered on

    • Scenario Analysis and the
    • comparative evaluation of scenarios.

    If the set of alternatives is reasonably large and complex (i.e., of high attribute dimensionality), this can eventually lead to

    • discrete multi-criteria optimisation.

    Scenario Analysis

    In a DSS framework, Scenario Analysis supports the user in exploring a number of WHAT -- IF questions. A scenario is the set of initial conditions and driving variables (including any explicit decision variables) that completely characterizes the system behavior, which is expressed as a set of output or performance variables.

    Control or Decision Variables

    The control variables or decision parameters the user can set to define a scenario include:

    • industrial site or location along one of the networks (transportation or pipelines);
    • accident conditions (e.g., spill, fire, explosion);
    • receiving environment (air, soil and groundwater, surface water);
    • meteorological conditions (primarily wind and temperature) or hydrological conditions (flow);
    • mitigation measures.

    Editing functions

    An important aspect here is the translation of the more or less technical (and sometimes cryptic) model data requirements into concepts and terms that are directly problem relevant and directly understandable to the user. A general concept used is the specification of most user defined values in relative terms and as a selection from a list of predefined, valid and meaningful options.

    The editing (and estimation of parameters) within HITERM is supported by an embedded expert system that can be used to ensure

    • completeness
    • consistency
    • plausibility
    of any or all user inputs.

    All parameters are represented by Descriptors, which are terms used in the expert system's Knowledge Base. Descriptors are Objects that have several Methods available to determine or update their value in a given context (the scenario). One such Method is to ask the user through an interactive dialog box.

    The complete syntax of a Descriptor definition is described in Deliverable D06.2, Decision Support and Expert Systems, Technical Report.

    A concrete (but simple) example is the Descriptor Volume, used for the volume of a storage container:

    DESCRIPTOR
    Volume
    A VOLUME
    T S
    U m3
    ### applies to storage containers
    V  very_small[   1.0,  5.0,    8.]
    V  small     [   8.0, 10.0,  20.0]
    V  medium    [  20.0, 50.0, 100.0]
    V  large     [ 100.0, 500., 800.0]
    V  very_large[ 800.0,1000.,2000.0]
    Q What is the volume of the container or vessel ?
    ENDDESCRIPTOR
    

    The upper limit that can be used, is, however, constrained by the selection of the source (e.g., a storage container or transportation vehicle) and the maximum mass of a hazardous substance it contains.

    Performance or Impact Variables

    The performance variables measure the overall behavior of the system (in terms of a set of partly implicit and partly explicit objectives) in an aggregate form. This is clearly necessary for simple reasons of cognitive limitations.

    Visualisation
    An additional important function provided by the user interface is the visualisation of the scenario parameters, i.e., the current status of an emergency, and the related model results. Due to the relatively large number of variables (spatially distributed, dynamic, multi-parameter) graphical and symbolic representation is used to summarize numerous, and in particular spatially distributed and dynamic data.

    Details on the user interface and visualisation aspects in HITERM are described in Deliverable D04, Visualisation and Multi-Media.

    In summary, simple scenario analysis results in a single (set of) result(s), that is (implicitly or explicitly) compared against a set of (absolute) objectives (expectations) and constraints such as environmental or health standards.

    Sensitivity, Uncertainty, and Robustness

    Deliverable D05 provides an overview of Model Calibration and Uncertainty Analysis.

    From the DSS point of view, uncertainty is simply another of the (multiple) criteria that needs to be displayed and evaluated. In many cases the uncertainty, e.g., represented by a set of Monte Carlo solutions, can be reduced to a set of discrete scenarios by selecting, for example, different probability levels (or risk levels) in the reconstructed probability distribution.

    As an example, given a set of Monte Carlo solutions that we can consider as samples, we obtain a frequency distribution of the number of people exposed. Fitting an appropriate probability distribution (with an appropriate transformation), we can now select a value at two standard deviations from the mean, interpreted as a 95% probability level. Thus, based on the results of the Monte Carlo analysis, the user can select and compare different probability levels of exceeding, or staying under, certain threshold values of key criteria.
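    The reduction described above can be sketched in a few lines (illustrative code, not part of HITERM; the sample ensemble is invented):

```python
# Reduce a Monte-Carlo ensemble to a discrete risk level: fit mean and
# standard deviation, then read off the value k standard deviations
# above the mean (k = 2 ~ 95% level for a roughly normal distribution).
import statistics

def risk_level(samples, k=2.0):
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)   # sample standard deviation
    return mu + k * sigma

# Hypothetical ensemble of "people exposed" results from Monte Carlo:
exposed = [90, 110, 100, 95, 105, 120, 80, 100]
level_95 = risk_level(exposed)   # mean 100, about 124.5 at 2 sigma
```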

    As an alternative, the concept of expected value can be used to characterize the probabilistic nature of the simulation results.

    Representation of Uncertainty

    Explicit representation and treatment of uncertainty is implemented at several stages in HITERM.

    Examples include:

    • PVM model group: Monte-Carlo implementation of the (parallel) spill and pool-evaporation model; the main parameters (defined by external sensitivity analysis) are sampled in a Monte-Carlo framework; from the resulting distribution of solutions, the mean and 95% level are used as input to the Lagrangian model to generate two solutions.

    • SOURCE model

      computes the dynamic source term for the atmospheric dispersion models and for soil infiltration, and determines the probabilities for fire and explosion from the respective input data values (available mass); the user can determine the level of uncertainty for the input parameters (expressed as a percentage around the mean), the type of a priori distribution to be sampled, and the number of Monte-Carlo runs. All these values are provided as defaults, but can be modified on demand.

      The results are shown in four parallel windows:

      • upper left: ensemble of solutions for evaporation rate against time;
      • lower left: ensemble of cumulative mass evaporated;
      • lower right: frequency distribution of total mass;
      • upper right: frequency distribution of evaporation rate for a selected class of the mass distribution.

      By default, the system selects a source term for the dispersion models from these results based on the 95th percentile range; alternatively, the user can select an arbitrary class range from the mass distribution for the subsequent dispersion computations.
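      The input sampling scheme described above can be sketched as follows (an assumed illustration, not the SOURCE model code; parameter names and values are invented):

```python
# Monte-Carlo input sampling: each main parameter is perturbed within a
# user-set percentage band around its mean, drawing from a chosen
# a priori distribution, once per Monte-Carlo run.
import random

def sample_inputs(means, pct, dist="uniform", runs=1000, seed=1):
    """means: dict param -> mean value; pct: half-width as a fraction
    of the mean; dist: a priori distribution to sample from."""
    rng = random.Random(seed)
    draw = {
        "uniform": lambda m: rng.uniform(m * (1 - pct), m * (1 + pct)),
        "normal":  lambda m: rng.gauss(m, m * pct / 2),
    }[dist]
    return [{p: draw(m) for p, m in means.items()} for _ in range(runs)]

# 1000 scenarios with +/-20% uniform uncertainty around invented means:
mc_runs = sample_inputs({"pool_area_m2": 50.0, "temperature_C": 20.0},
                        pct=0.20)
```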

      Probability of fire and explosion

      During the run, the probabilities of fire and explosion are indicated in two separate windows. This is based on the temperature range versus the flashpoint of the substance for fire, and the local concentration over the pool versus the upper and lower explosion limits.

      These values are reported at the end of a run as a probability for fire and explosion, depending on the duration of fire or explosion conditions.

    • GROUNDWATER

      Another example of the direct representation of uncertainty in the simulation models is implemented for the determination of response times for soil contamination.

      The simple screening model estimates the time a given substance will need to reach the groundwater table, based on viscosity, soil permeability, and the distance to the groundwater table.

      For the simulation, the user can again override the defaults for the uncertainty around the input parameters, soil permeability and viscosity.

      The display shows the a priori distributions of the two main input parameters, viscosity and soil permeability, and, below them, the frequency distribution of results. The results indicate:

      • the deterministic solution
      • a worst case 95% value
      • the mean value of the frequency distribution
      • the median value of the frequency distribution
      of the response time (the time until the substance reaches the groundwater table).
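      A Monte-Carlo version of this screening estimate can be sketched as follows. The actual HITERM formula is not reproduced in this text; as a stand-in the sketch uses a simple Darcy-type travel time t = d * mu / (k * rho * g), with distance d, dynamic viscosity mu and intrinsic permeability k (all values below are invented):

```python
# Illustrative Monte-Carlo screening of groundwater response time.
# ASSUMPTION: Darcy-type travel time t = d * mu / (k * rho * g); the
# real HITERM screening formula is not given in this text.
import random
import statistics

RHO_G = 1000.0 * 9.81   # density * gravity, assuming a water-like fluid

def response_times(d_m, mu_mean, k_mean, pct=0.3, runs=2000, seed=7):
    """Sample viscosity and permeability uniformly within +/-pct of
    their means and return the ensemble of travel times (seconds)."""
    rng = random.Random(seed)
    times = []
    for _ in range(runs):
        mu = rng.uniform(mu_mean * (1 - pct), mu_mean * (1 + pct))
        k = rng.uniform(k_mean * (1 - pct), k_mean * (1 + pct))
        times.append(d_m * mu / (k * RHO_G))
    return times

t = response_times(d_m=5.0, mu_mean=1e-3, k_mean=1e-12)
deterministic = 5.0 * 1e-3 / (1e-12 * RHO_G)   # solution at the means
worst_case_95 = sorted(t)[int(0.95 * len(t))]  # worst-case 95% value
mean_t = statistics.fmean(t)
median_t = statistics.median(t)
```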

    8.3 A DSS approach to Emergency Management

    The Real-time Expert System

    This constitutes the top layer of control, on top of the current simulation model layer (representing primarily the planning component); it drives the real-time Accident Management part of HITERM with a real-time, forward-chaining expert system (RTXPS) as the driving engine.

    The implementation in HITERM is based on a set of ACTIONS similar to XPS Descriptors and forward chaining Rules.

    ACTIONS are triggered by the Rules, using the status of other ACTIONS and of EMERGENCY PARAMETERS: data including externally obtained information and Descriptors, which can be derived through model runs, the expert system, or data base queries. The EMERGENCY PARAMETERS define the dynamic context describing the evolving emergency.

    An ACTION can have four different status values:

    • ready
    • pending
    • ignored
    • done
    For the status pending, a timer interval (to avoid being asked about a pending request at too short intervals) can be set in the ACTION declaration.
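    The status machine and pending timer described above can be sketched as follows (names and structure are illustrative, not the RTXPS implementation):

```python
# Minimal sketch of an ACTION with the four status values and the
# pending timer (P field) described above.
import time

class Action:
    STATES = ("ready", "pending", "ignored", "done")

    def __init__(self, name, timer_s=0):
        self.name = name
        self.timer_s = timer_s        # P field: pending timer, seconds
        self.status = "ready"
        self._pending_since = None

    def set_status(self, status, now=None):
        assert status in self.STATES
        now = time.time() if now is None else now
        self.status = status
        self._pending_since = now if status == "pending" else None

    def may_ask_again(self, now=None):
        """A pending ACTION is re-offered only after its timer expires;
        done/ignored ACTIONs are never re-offered."""
        now = time.time() if now is None else now
        if self.status != "pending":
            return self.status == "ready"
        return (now - self._pending_since) >= self.timer_s

a = Action("Some_action", timer_s=180)   # P 180, as in the declaration
a.set_status("pending", now=0.0)
early = a.may_ask_again(now=60.0)    # still within the 180 s timer
late = a.may_ask_again(now=200.0)    # timer expired, ask again
```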

    An ACTION declaration looks like:

    ACTION
    Some_action
    A alias_name
    V  ready / pending / ignored / done /
    P 180   # timer set to 180 seconds
    Q For a \value{accident_type} you have 
    Q to specify the total mass or spill volume involved;
    Q you can enter this directly, or refer to the
    Q plant and container database of \value{accident_site}:
    F get_descriptor_value(spill_volume)
    ENDACTION
    

    ACTION declarations are stored in the file Actions in the system's KnowledgeBase (by default located in the directory $datapath/KB).

    The declaration of an ACTION object is blocked between the ACTION and ENDACTION keywords.

    The ACTION keyword record is followed by the unique name of the ACTION (Some_action), which may be followed by an optional alias after the A keyword.

    The V keyword introduces the array of legal values or states.

    P denotes the timer for pending requests in seconds.

    Q records contain the textual (hypertext syntax) part of the ACTION REQUEST. Please note that the

    \value{descriptor_name}
    function can be embedded in the text of the ACTION REQUEST. This will automatically insert the current value of the respective Descriptor in the text.

    One or more optional F records enumerate functions that the ACTION will trigger automatically, and in sequence, when the user presses the DO IT button in the ACTION REQUEST DIALOGUE window.

    The associated Rule would look like:

    RULE RULE_ID
    IF   accident_type == chemical_spill
    AND  [Descriptor] [operator] [value]
    OR   [Descriptor] [operator] [value]
    AND  [ACTION]     [operator] [value]
    OR   [ACTION]     [operator] [value]
    THEN Some_Action => ready
    ENDRULE
    

    where depending on the value of some descriptors AND/OR the status of some ACTIONs, the status of the ACTION Some_Action is set to ready and thus displayed to the operator.

    The general syntax of ACTIONs and Rules is similar to the Rules and Descriptors of the current Knowledge Base and inference engine in the embedded XPS (backward-chaining) expert system. The main difference is that the THEN part of the Rule can only trigger (SET TO ready) an ACTION and NOT assign values.
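    A toy forward-chaining pass over such rules might look as follows (an assumed structure for illustration, not the RTXPS engine; all names are invented):

```python
# Forward chaining with the restriction noted above: a rule's THEN part
# can only set its target ACTION to "ready", never assign values.
# Rules whose target ACTION is already done/ignored are filtered out,
# mirroring the pre-processing (selection) step.

def forward_chain(rules, descriptors, actions):
    """rules: list of (condition, action_name); condition is a
    predicate over (descriptors, actions)."""
    fired = []
    for condition, action_name in rules:
        if actions.get(action_name) in ("done", "ignored"):
            continue                      # excluded by pre-processing
        if condition(descriptors, actions):
            actions[action_name] = "ready"
            fired.append(action_name)
    return fired

descriptors = {"accident_type": "chemical_spill"}
actions = {"Some_action": None, "Other_action": "done"}
rules = [
    (lambda d, a: d["accident_type"] == "chemical_spill", "Some_action"),
    (lambda d, a: True, "Other_action"),   # skipped: already done
]
fired = forward_chain(rules, descriptors, actions)
```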

    Running RTXPS in HITERM

    The operating sequence is started by entering the module, where an ACTION Zero_Action (its default status is ready) is displayed; it requests the user to press a START button that starts, and possibly sets, the Accident-Time clock, which runs in tandem with the real-time clock.

    Marking the Zero_Action as DONE will trigger the first round of Rule pre-processing (selection) and forward chaining through the rule-base;

    RULE RULE_ID
    IF TRUE
    THEN first_action => ready
    ENDRULE
    
    first_action could then for example:
    • require the operator to call a specific phone number,
    • press the red alarm button, or
    • define the type of the emergency.

    Once the user has completed the ACTION REQUEST, and verified its results, he marks it as DONE in the ACTION DIALOGUE, which will also mark all Rules that can trigger the ACTION as done.

    The next pre-processor run will then select only the subset of Rules that are active (excluding any Rules referring to ACTIONS marked done or ignored).

    A special case is the ACTION status of pending, where the time stamp set when PENDING was selected by the user in the ACTION DIALOGUE must be compared against the timer interval before including or excluding a Rule in the current run.

    An example of the usefulness of this feature would be a phone call that should be made but cannot be completed because of a busy connection - it can be deferred without interrupting the continuing operation of the system. The period for which this ACTION is deferred until the next trial is defined in seconds in the P field of the ACTION declaration.

    All status changes of ACTIONs are entered in an ACTION LOG together with the time stamp of the change, to provide a complete log of the sequential operations of the system.

    ACTIONS may request activities of the operator that may or may not involve the system directly. They may include:

    • external activities, for example communication by phone or fax with other institutions, or obtaining relevant information (EMERGENCY PARAMETERS) from external sources;
    • internal activities will involve the use of the system such as entering information or using specific tools such as models to determine more EMERGENCY PARAMETERS.

    For internal activities the ACTION DIALOGUE may offer the option to DO IT; this button will then trigger the appropriate function with the necessary parameters automatically, for example, opening an editor dialog box to enter a value for an EMERGENCY PARAMETER, starting an inference chain (backward chaining ...) to estimate such a value, or starting a model.

    Upon successful return from this operation, the user then marks the ACTION as done, which will trigger a new run of Rule filtering and evaluation, leading the user through the successive steps of the emergency management procedure as defined by the rules, and influenced by the changing context of the EMERGENCY PARAMETERS.

    The procedure ends when either:

    • LAST_ACTION is triggered by a Rule (this would require the operator to verify the return to normal conditions), or
    • there are no more applicable ACTIONS that are ready.

    At this point, the ACTION LOG provides a complete and chronological record of the entire sequence of events for printing and a post-mortem analysis.

    The user interface consists of three main elements:

    • the ACTION DIALOGUE box with its four response buttons:
      • DO IT (if active, this will satisfy the request by triggering one of the built in functions - like starting the expert system or a model - to obtain the requested information)
      • PENDING (refers a task for later execution)
      • IGNORE (actively eliminates or skips a task),
      • DONE (confirms the successful completion of an ACTION REQUEST)
    • the EMERGENCY PARAMETERS listing that provides the evolving summary description of the emergency; please note that the MAP WINDOW will display related (geo)graphical information;
    • the ACTIONS LOG that records each status change of an ACTION REQUEST with its time stamp in a scrolling window.

    A generic DSS tool: Rule-based Classification

    A major problem in any computer-based system is to obtain reliable and accurate information from the user, and to translate efficiently between the language easily understandable to the user and the technical information requirements and formats used by the computer system.

    The general logic and syntax of the backward-chaining embedded expert systems approach in HITERM was described above under Editing functions within the framework of Scenario Analysis. This covers the domain of user input.

    A similar approach can be used, however, to classify complex systems results (e.g., the results from a spatially distributed, dynamic, multi-parameter model) into a simple and directly understandable statement through linguistic classification.

    In the embedded expert system, this is accomplished through the hybrid nature of the basic concept, the Descriptor object; its possible states (legal values) can be expressed both symbolically and numerically, as the example of the Descriptor exposed_area below illustrates:

      DESCRIPTOR
      exposed_area
      T S
      U ha
      V none        [  0,  0,  0]
      V very_small  [  0,  2,  3]
      V small       [  3,  5,  8]
      V considerable[  8, 10, 20]
      V large       [ 20, 50, 80]
      V very_large  [ 80,100,300]
      Q What is the total area affected by this accident,
      Q i.e., above a no-effects threshold ?
      ENDDESCRIPTOR
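    The numeric-to-symbolic mapping this enables can be sketched as follows (illustrative Python using the [low, default, high] ranges from the exposed_area example; the lookup logic is an assumption, not the XPS implementation):

```python
# Map a numeric value to the symbolic state whose [low, default, high]
# range contains it, as in the exposed_area Descriptor above.

EXPOSED_AREA = [              # (label, low, default, high) in ha
    ("none",         0,   0,   0),
    ("very_small",   0,   2,   3),
    ("small",        3,   5,   8),
    ("considerable", 8,  10,  20),
    ("large",       20,  50,  80),
    ("very_large",  80, 100, 300),
]

def classify(value_ha):
    """Return the first symbolic label whose range contains value_ha."""
    for label, lo, _default, hi in EXPOSED_AREA:
        if lo <= value_ha <= hi:
            return label
    return "out_of_range"

label = classify(12.5)   # falls in the [8, 20] range: "considerable"
```

    The reverse direction, symbolic to numeric, simply uses the default value in the middle of each triple.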
      

    Synthesis of model output

    Another use of the backward chaining expert system is to provide a synthesis of large model generated data volumes. The chain of models used to simulate an accident scenario may easily generate data volumes in the order of GigaBytes. These should, however, be summarised in a few simple variables such as the number of people exposed, the level of exposure, the area contaminated, estimated material damage and a rough classification of the accident: these classifications are needed to trigger the appropriate responses.

    Starting from the dynamic model results, specific aggregate parameters are computed as a post-processing step or while the model is running, updating values for maxima of threshold related parameters.

    In the case of the atmospheric dispersion models, the critical parameters are the extent of the area covered, the population exposed in this area, and time factors such as the time until the first houses are reached by the cloud, and the duration of the exposure.

    Starting with the model result and a (default or substance specific) concentration threshold, the system computes the area of the plume that exceeds the threshold (shown in yellow), the populated area (shown in blue), and the intersection (shown in red). Based on the known or estimated population density, two key parameters, namely the area exposed and the population exposed are computed and indicated (see above).
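    On a raster representation, this exposure computation can be sketched as follows (an illustrative grid layout and invented values; not the HITERM post-processor):

```python
# Grid-based exposure: cells where the plume concentration exceeds the
# threshold, cells that are populated, and their intersection yield the
# area exposed and the population exposed.

def exposure(conc, pop_density, threshold, cell_area_m2):
    """conc, pop_density: 2-D lists on the same grid; pop_density in
    inhabitants per km^2. Returns (area_m2, people_exposed)."""
    area_m2 = 0.0
    people = 0.0
    for conc_row, pop_row in zip(conc, pop_density):
        for c, p in zip(conc_row, pop_row):
            if c > threshold:
                area_m2 += cell_area_m2
                people += p * cell_area_m2 / 1e6   # km^2 -> m^2 scaling
    return area_m2, people

conc = [[0.0, 0.5], [1.2, 2.0]]        # e.g. mg/m^3, invented values
pop = [[0.0, 800.0], [800.0, 0.0]]     # inhabitants per km^2
area, exposed = exposure(conc, pop, threshold=1.0, cell_area_m2=1e4)
```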

    In addition to the model derived values (which are setting the corresponding Descriptors in the expert system), a user-defined threshold value is used in this evaluation. This can either be derived from a set of rules, or from the hazardous chemicals data base (e.g., based on the Seveso II classification).

    In the simplest case, the user can directly set that threshold value with the expert system's editing functions.

    The behaviour of the editing function is data driven: if the only Method defined for the corresponding Descriptor is the Question to the user, a simple editing box will be used. If, however, one or more Rules are defined in the Descriptor declaration, the user will be offered the option to perform a rule-based inference. For a detailed description of the data formats of the Knowledge Base files, refer to the Technical Manual of Deliverable D06.

    In the next step, the expert system attempts a classification of the emergency in terms of:

    • Public health effects,
    • Environmental damages, and
    • Material damages.

    In terms of the backward-chaining inference procedure, these three Descriptors are Target Descriptors, i.e., they are at the top of the respective inference trees.

    Each of them has a set of associated Rules that use Descriptors as their inputs. The Descriptor values are set by the model output in the step above, but can, in principle, be overwritten by the user interactively if he repeats the (automatically triggered) inference procedure. If all the necessary data (Descriptor values) to reach a conclusion are available, the expert system will directly arrive at, and display in a symbolic format, the results in the Accident Summary Box.

    If any of the necessary input values are missing, the Dialog Box will be used to obtain the information from the user. This can be entered either by choosing one of the symbolic labels with its associated default numerical value, by using the slider tool of the dialog box, or by directly typing in the required numerical value. The default value, if defined or set, is indicated in the blue header bar; in the above case, it is set dynamically from an interpretation of the model output results. The small information icon in the header leads to a hypertext page that provides additional background information on the Descriptor in question, to assist in setting its value.

    Benefits and functions

    The integration of the expert systems in the HITERM framework has several major benefits, related to the different functions of the rule-based expert system:

    • Rule-based guidance: the expert system can implement any checklist-type procedure that guides the user through a sequence of steps, like working through an emergency manual. This takes a major load and responsibility off the operator and simplifies the procedure, thus making it less error prone.

    • Time awareness: the real-time components are time aware, and all components are context sensitive. This means that time can be used explicitly in the rules and in any evaluation, reflecting the realities of an emergency management situation where it is essential to know not only WHAT will happen, but also WHEN.

    • Context sensitivity: an emergency situation is characterised by a large number of interdependent developments; every decision has to be taken within the context of all the information available at that point. This context sensitivity is implemented through the Rules of the expert system, which make any conclusion and advice conditional on any number of input variables (the context) the designer deems relevant.

    • Natural language interface: The rules of the expert system are formulated in the near natural language syntax of production rules, and follow first order logic. This makes the rules and their processing easy to understand and follow.

    • Explanation facility: to build user confidence, but also as a major didactic element, the expert system can explain its function and conclusions step by step, backtracking the inference chain one rule at a time.

    • Symbolic representation: The possibility to describe systems attributes in symbolic terms, and thus approximately, matches the information available in an emergency situation, where detailed quantitative estimates or measurements are simply not feasible. The symbolic classification and the use of ranges facilitates efficient interaction in the absence of hard and reliable data.




    9. Systems Architecture and Integration

    The HITERM system architecture is presented at two levels:

    • The conceptual level, that describes the logical relationship of the elements (objects) used to represent, and manage, an emergency situation as a decision support system; this is based on an object-oriented design.

      The object-oriented design (OOD) approach followed is based primarily, but not exclusively, on Booch 1991 and Rumbaugh OMT (1991). The design methodology is both relevant for the system architecture as well as for the development and implementation strategy, i.e., rapid prototyping.

    • The physical implementation level, that provides the actual operational hardware and software environment for running the decision support system; this is based on a client-server architecture that includes high-performance computing elements, remote data acquisition, and wireless mobile clients.

      The implementation uses a modular approach; the core of the system is represented by the main Server, that, in principle, can provide a self-contained and fully operational implementation even on a single processor system. To realise the required better-than-real-time and fully interactive nature of the system, high-performance versions of the basic models are available as an optional extension, that in turn requires high-performance computing facilities and a fast network connection.

      However, the main feature achieved with the system architecture is excellent scalability: a broad range of performance characteristics can be supported with the same software system, implemented on increasingly more powerful hardware.

    9.1 The Conceptual Level

    At the conceptual level, the system architecture is based on an object-oriented design. The main real-world elements of technological risk assessment and management are represented by classes of objects and associated methods.

    The basic object classes used for representing an emergency are:

    • Emergency scenarios: an emergency scenario is a dynamically evolving collection of emergency parameters, which are collected by methods including the expert system, and the high-performance parallel simulation models.

      Emergency scenarios can be linked to types of emergencies such as toxic spill, chemical fire, explosion, etc. Alternatively, they can be linked to specific risk objects (see below) such as a particular chemical plant.

      Emergency scenarios can be dynamic, in the context of risk management, or static, in the context of risk assessment, where they represent the set of assumptions describing a plausible accident as defined by the Seveso Directive (96/82/EEC).
       

    • Risk objects: they provide the basic vocabulary and attributes of an emergency scenario; risk objects cover several classes of objects, including:
      • sources of risk: examples are chemical plants, production units, containers, warehouses, vehicles with hazardous cargo, etc;

      • components exposed to risk: examples are communities (at various levels of geographical and administrative organisation) and their population, including population centers such as schools, shopping malls, churches, etc., but also environmental targets such as water bodies;

      • risk management resources: they include, for example, hospitals, fire fighters and their equipment, police, ambulance services, civil defense forces, etc.


    • ACTIONS are the central component of the decision support system, selected and shaped by the expert system, based on the evolving context provided by the emergency scenario.

      The concepts of the decision support system are described in more detail below.

    In addition to these basic groups, the object-oriented paradigm is also used to represent other elements of the system: hazardous chemicals; external information sources such as meteorological stations; infrastructural components such as road segments; and "internal" information resources such as the simulation models.

    9.2 System Modules

    The HITERM system is, in principle, an open framework that can integrate any number of analytical or communication modules in its open client-server architecture.

    The basic elements or classes of modules are:

    Data Layer

    • data bases, including:
      • hazardous chemical data base
      • risk objects (chemical plants, transportation vehicles)
      • target objects (communities, sensitive areas)
    • GIS containing background data and spatial model input such as the digital terrain model or the rail network, and population distribution (gridded)
    • Knowledge Base including Descriptors, Rules, and ACTION declarations;
    • configuration files such as the defaults, templates for communication (fax).

    Expert System

    consisting of two main interlinked components, namely:
    • the real-time forward chaining system (RTXPS) which is the overall driver for the Emergency management branch,
    • and the backward chaining system that supports all editing functions.

    Embedded Models

    currently including:
    • a source model (spill characteristics, pool evaporation, soil infiltration, probabilities of fire and explosion)
    • dispersion model (multi-puff)
    • explosion models: TNT and TNO
    • probabilistic soil infiltration model

    Client-Server Models

    implemented with PVM on external high-performance compute-servers or clusters, including currently:
    • Spill (pool evaporation) model (with Monte-Carlo implementation)
    • Diagnostic wind field model
    • Lagrangian dispersion model

    External data sources

    connection to external http servers that provide information such as:
    • on-line weather data
    • train information (CIS of the SBB).

    [Figure: HITERM system architecture and modules. Legend: LC: local client; MC: mobile client; DB: data bases; HOT: HOT objects; KB: knowledge base; Ch: chemicals; GIS: map data; Meteo: weather station; CIS: train information; RTXPS: forward chaining expert system; XPS: backward chaining expert system; S: source model; P: multi-puff; B1: explosion (TNT); B2: explosion (TNO); GW: groundwater model; W: wind model; S: spill model; L: Lagrangian dispersion model.]

     

    The DSS framework

    HITERM is designed as a decision support system; its distinguishing feature is the integration of HPCN components to support decisions in a complex domain (technological risk management) with the help of complex state-of-the-art analytical tools with better-than-real-time performance, including the possibility to obtain probabilistic solutions to complex, dynamic 3D scenario simulations.

    While the domain of decision support is rather broad and inclusive, there are two distinct aspects implemented in the system:

    • risk assessment and planning, which is based on the comparative evaluation of scenario analyses, based in turn on accident simulation;

    • operational risk management, i.e., a real-time risk management expert system that provides direct advice to an operator or a command center during an emergency situation, or a training exercise.

    A more detailed description of the Decision Support Aspects is given in
    Deliverable D06.1 and D06.2, Decision Support and Expert Systems.

    Modes of Operation: Assessment vs Management

    Based on the findings of Deliverable D01 (Requirements and Constraints Report), HITERM is designed for a range of application modes:

    • risk assessment and planning:
        scenario analysis;
    • training for emergency management:
        scenario analysis or real-time expert system;
    • operational emergency management:
        real-time expert system.
    The system offers two entry levels through its interactive interface:
    • directly to the scenario analysis and modeling; here the emergency scenario parameters are pre-defined in the scenario objects, and can be interactively modified by the user to represent a specific case; this case is simulated and evaluated (in terms of area and population exposed)

    • through the real-time emergency management expert system; the expert system advises the operator in a dialog to compile relevant information on the emergency. This context-sensitive dialog builds the emergency scenario information step by step. It also suggests, in real time and as the emergency develops, specific management actions to the operator.

      The scenario analysis is an integrated part of the emergency management approach; the main difference is that the emergency parameters are collected within the framework of a dynamically developing emergency situation and are used to provide operational guidance and advice on emergency management actions.

    Data Structures: The Object Classes

    Objects and object classes are used to represent relevant real-world concepts and elements of an emergency situation. In general terms, an object (class) consists of:

    • a header used for identification;

    • meta information, primarily describing the sources of the information contained with the object;

    • georeference information where applicable (in HITERM, all objects with the exception of hazardous chemicals are spatial objects);

    • a set of attributes with their associated methods; the methods provide the actual values of the attributes in a given context;

    • a list of children: these are objects that logically relate to the parent object as components (e.g., containers in a container group, communities within a province); any or all of their attributes may be aggregated into a parent object's attribute.

    • a list of siblings: these are objects on the same level of logical relationships, i.e., they share a parent object with the current object.

    The objects in HITERM are based on HOT, the hierarchical object tool. For details of HOT, its architecture, implementation, and component object classes, please refer to the HITERM on-line manual ( https://ess.co.at/HITERM/MANUAL/manual.html).
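    The object structure listed above (header, georeference, attributes with methods, children, siblings) can be sketched roughly as follows. The class and field names are hypothetical illustrations; the actual implementation is HOT, as documented in the on-line manual.

```python
# Illustrative sketch of the HITERM object structure (header, meta information,
# georeference, attributes-with-methods, children, siblings). Names are
# hypothetical; the real implementation is the HOT hierarchical object tool.
from dataclasses import dataclass, field

@dataclass
class RiskObject:
    name: str                                       # header / identification
    meta: dict = field(default_factory=dict)        # sources of the information
    georef: tuple = None                            # (x, y) where applicable
    attributes: dict = field(default_factory=dict)  # attribute -> method
    children: list = field(default_factory=list)
    parent: object = None

    def add_child(self, child):
        child.parent = self
        self.children.append(child)
        return child

    def siblings(self):
        """Objects sharing this object's parent."""
        if self.parent is None:
            return []
        return [c for c in self.parent.children if c is not self]

    def aggregate(self, attr):
        """Aggregate a numeric attribute over the children (e.g. population)."""
        return sum(c.attributes.get(attr, lambda: 0)() for c in self.children)

# A community aggregating the population of its population centers:
town = RiskObject("municipality", georef=(45.7, 9.6))
school = town.add_child(RiskObject("school", attributes={"population": lambda: 400}))
mall = town.add_child(RiskObject("mall", attributes={"population": lambda: 1200}))
print(town.aggregate("population"))   # -> 1600
print(school.siblings()[0].name)      # -> mall
```

    The attributes hold methods rather than plain values, so that an attribute can be computed in context (here trivially by a lambda), which corresponds to the methods providing "the actual values of the attributes in a given context".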

    Emergency Scenarios

    The emergency scenario is the core object in HITERM. Depending on the chosen operational mode, it is either a static object (with a complete pre-defined structure and list of attributes) in the case of scenario analysis and risk assessment, or a dynamic object, with a context-dependent structure and list of attributes, in the case of the emergency management mode.

    For scenario analysis and assessment, the emergency scenario is instantiated based on a TEMPLATE; the TEMPLATE is selected based on the primary scenario attribute, the emergency type. The emergency type is the first descriptor the user has to define; on the basis of its value, one of a set of predefined default scenarios (the TEMPLATES) is loaded. The TEMPLATES are associated with emergency types in a configuration file, ./data/objects/scenarios/CONFIG.

    Each TEMPLATE defines the basic attributes, which include the characteristics of the emergency and the input data for the associated simulation models. The TEMPLATE ensures that a complete set of default data is available so that the model-based analysis can be run immediately. The user can, however, edit and overload any or all of the scenario attributes with the notable exception of the emergency type.
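    The TEMPLATE mechanism can be illustrated as below. The emergency types, attribute names and values are invented for the example; the real mapping lives in the CONFIG file mentioned above.

```python
# Sketch of scenario instantiation from a TEMPLATE keyed by emergency type.
# The types and attributes are illustrative stand-ins for the CONFIG file in
# ./data/objects/scenarios/CONFIG.

TEMPLATES = {   # emergency type -> complete set of default attributes
    "toxic_spill":   {"substance": "ammonia", "amount_kg": 500, "wind_ms": 2.0},
    "chemical_fire": {"substance": "toluene", "amount_kg": 1000, "heat_mw": 5.0},
}

def instantiate_scenario(emergency_type, overrides=None):
    """Load the TEMPLATE defaults, then let the user overload any attribute
    except the emergency type itself."""
    scenario = dict(TEMPLATES[emergency_type])       # complete default data set
    scenario.update(overrides or {})                 # interactive edits
    scenario["emergency_type"] = emergency_type      # not user-editable
    return scenario

s = instantiate_scenario("toxic_spill", {"amount_kg": 750})
print(s["amount_kg"], s["substance"])   # -> 750 ammonia
```

    Because the TEMPLATE always supplies a complete attribute set, the model-based analysis can run immediately even before the user edits anything.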

    Risk Objects: plants, regional elements, resource objects

    Risk Objects cover a broad range of elements used in the representation of a technological emergency; they can be grouped into

    • Sources of risk
    • Targets or recipients
    • Resources for risk management
    • Hazardous chemicals
    Sources of Risk

    Within the scope defined for the HITERM system this class includes both stationary chemical plants and their components, as well as vehicles used for the transportation of hazardous substances.

    Chemical plants are represented by a hierarchy of objects classes, implemented in HOT.

    Storage containers

    A lower-level object within a plant would be, for example, a chemical storage container - the most likely source of a chemical spill, as it is the primary source of hazardous chemicals within the plant.

    Containers, like all plant related objects with the exception of chemicals, are spatial objects, i.e., their position within the plant is known and can be used for modeling.

    For a detailed explanation of the interpretation of the individual data TABLES, please refer to the on-line manual description of the HOT hierarchical object tool.

    Targets or recipients

    Targets or recipients are either related to population, or vulnerable environmental features. Examples of the former class are communities and population centers, examples of the latter water bodies or aquifers.

    As in the case of the sources of risk, the target objects may be organised hierarchically. A typical example is a community or municipality as an administrative and geographical unit of organisation; its primary property is its population, linked to the location and spatial extent of the community used for population exposure estimates.

    The municipality can include sub-locations (e.g., called Frazione in the Italian test case). Please note that the class hierarchy that can be represented in HOT is open, i.e., an arbitrary number of layers and elements can be represented simply through the appropriate data declarations.

    The municipality contains, either directly or through aggregation of its sub-units, specific population centers, like schools, shopping centers, churches, sports grounds, etc. Again their specific properties primarily include population, but also other attributes like contact addresses that can be used by the emergency management expert system's rules.

    Resources for Risk management

    Yet another group of objects describes resources available to the intervention forces and the health care services. A prototypical example is a hospital.

    The above example already indicates the complexity of the object relationship: a hospital represents both a population center that may be relevant for evacuation planning, as well as a health care resource to accept and treat casualties.

    Since hospitals, like every other risk object, are geo-referenced, additional tasks such as optimal routing to the nearest appropriate hospital can easily be added as functions related to this object class.

    Hazardous Chemicals

    Hazardous Chemicals represent a non-spatial class of objects in HITERM. The chemicals are described by:

    • a set of properties (Descriptors) that can be loaded to the scenario object;
    • a set of hypertext files that can be displayed together or individually by the expert system's ACTION construct.

    Please note that, as with any other object, the list of attributes is open, i.e., it can be extended by simply adding the desired Descriptor in the above TABLE definition and updating the data fields accordingly. It also requires that the corresponding Descriptor definition is inserted in the Descriptor files. The display routines, and the loading of the data into the KnowledgeBase and ScenarioObject, will then be automatic.

    RTXPS objects

    Within the overall object-oriented design and implementation framework, the elements of the RTXPS real-time expert system are designed and also implemented as objects. For a more detailed description of RTXPS and its primary role in the decision support functions of HITERM, see the Deliverables on Decision Support and Expert Systems (D06.1 and D06.2).

    • ACTIONS
    • Descriptors
    • Rules (forward chaining)

    ACTIONS provide hypertext-based information and instructions to the operator. ACTIONS may include specific built-in functions (see the Technical report on Decision Support and Expert Systems) which can be triggered either automatically or manually by the operator, depending on the ACTION type declaration. An icon menu in the ACTION display header offers four options:

    • done acknowledges the ACTION and confirms that its requests have been fulfilled; the ACTION will be marked as done for the forward chaining inference;
    • ignore skips the ACTION and continues the forward chaining rule processing;
    • pending backgrounds the ACTION, starts a timer, and continues forward chaining rule processing; the ACTION will be considered again when the timer expires;
    • do it is used to manually trigger a function offered, and requested, by the ACTION.
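    The four dispositions above, including the timer behaviour of pending, can be sketched as a simple processing loop. The queue, the delay value and the sample ACTION names are illustrative simplifications, not the actual RTXPS implementation.

```python
# Sketch of ACTION processing with the four dispositions described above:
# done / ignore / pending (re-queued after a timer) / do it.
# All names and the fixed delay are hypothetical illustrations.
import heapq
import itertools

def process_actions(actions, decisions, pending_delay=60.0):
    """Process ACTIONS in order. `decisions` maps each ACTION to a list of
    operator choices, consumed one per consideration of that ACTION."""
    seq = itertools.count()                 # tie-breaker for the priority queue
    queue = [(0.0, next(seq), a) for a in actions]
    heapq.heapify(queue)
    log = []
    while queue:
        due, _, action = heapq.heappop(queue)
        choice = decisions[action].pop(0)
        if choice == "pending":             # background it; retry after the timer
            heapq.heappush(queue, (due + pending_delay, next(seq), action))
        elif choice == "do it":             # manually trigger the offered function
            log.append((action, "function triggered"))
        else:                               # "done" or "ignore"
            log.append((action, choice))
    return log

log = process_actions(
    ["alert fire brigade", "fax situation report"],
    {"alert fire brigade": ["done"],
     "fax situation report": ["pending", "done"]})
print(log)
# -> [('alert fire brigade', 'done'), ('fax situation report', 'done')]
```

    The key point is that pending does not block the forward chaining: other ACTIONS continue to be processed while the backgrounded one waits for its timer.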

    Methods of instantiation

    • external information sources
    • rule-based inference
    • simulation modeling

    The attributes of the objects representing an emergency obtain their values through a range of methods:

    • external information sources: the operator obtains the information requested by an ACTION from an external source, e.g., through a phone call, and enters this information through the expert system's editing dialog;
    • external data acquisition: as a special case of an external information source, an automatic connection to external data bases or monitoring equipment can also provide a required data item. An example is the compilation of a meteorological scenario for dispersion modeling from a remote weather station;
    • rule-based inference: the backward chaining expert system is used to help deduce or estimate the required information;
    • simulation modeling: one of the system's built-in (high-performance) models is used to obtain the data.
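    These instantiation methods can be thought of as a chain of resolvers tried in turn. The sketch below is purely illustrative; the resolver names, descriptors and values are hypothetical stand-ins for the mechanisms listed above.

```python
# Sketch of attribute instantiation: each attribute is resolved by one of the
# methods listed above. All names and values are hypothetical illustrations.

def from_monitoring(descriptor):     # external data acquisition (weather station)
    return {"wind_speed_ms": 3.4}.get(descriptor)

def from_model(descriptor):          # one of the built-in simulation models
    return {"pool_area_m2": 120.0}.get(descriptor)

def from_inference(descriptor):      # backward-chaining estimate
    return {"release_rate": "medium"}.get(descriptor)

def from_operator(descriptor):       # entered via the editing dialog
    return {"wind_direction": 270}.get(descriptor)

RESOLVERS = [from_monitoring, from_model, from_inference, from_operator]

def resolve(descriptor):
    """Try each instantiation method in turn until one yields a value."""
    for method in RESOLVERS:
        value = method(descriptor)
        if value is not None:
            return value, method.__name__
    raise KeyError(descriptor)

print(resolve("wind_speed_ms"))   # -> (3.4, 'from_monitoring')
print(resolve("release_rate"))    # -> ('medium', 'from_inference')
```

    The ordering shown (automatic sources first, the operator last) is an assumption for the example; in the system the appropriate method is determined by the ACTION and Descriptor declarations.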

    9.3 The Physical Implementation

    At the physical implementation level, HITERM is based on a client-server architecture that links an easy-to-use front end (clients) with powerful High-Performance Computing as the main server. The basic architecture of the system is organised around a central HITERM Server, that coordinates the various information resources, prominently including

    • the HPCN components like parallel computers or workstation clusters for better-than-real-time simulation of demanding 3D dynamic models,
    • links to monitoring equipment,
    • and the user interface clients.


    Since the communication between the various software components is based on the standard http protocol, a high degree of hardware independence can be achieved: any platform and operating system that supports this protocol on top of TCP/IP can be integrated within this framework.

    For the hardware and software tools used for HITERM, we have to discriminate between:

    • the development platforms and the Demonstrator
    • the delivery platforms for a commercial exploitation phase

    The Main HITERM server

    The server and clients in HITERM are primarily conceptual: the flexibility of the architecture supports implementation on a single CPU, or in a complex network of CPUs with various levels of interconnectedness, including wireless, mobile clients. A given machine may perform the logical functions of both server and client at the same time.

    The main HITERM server is implemented on a UNIX or Linux machine. In its minimal configuration this can be a single CPU machine, in its most complex configuration, a network of CPUs including massive parallel machines or a workstation cluster, and remote, wireless, mobile clients.

    A major feature of the architecture is its flexibility to support a wide range of possible hardware configurations, which achieves a high degree of scalability.

    The main HITERM server runs the basic HITERM application. This, in a minimal configuration, is operational on a single-processor system and provides a possible, but not recommended, entry-level configuration (see the discussion on entry-level and scalability in Deliverable D12, Exploitation Plan).

    The server requires at least one display client, which can, in fact, be the X server running on the basic single-CPU workstation or Linux PC.

    The main server also needs at least one compute server, which again can be supplied by the same CPU on the basic machine.

    For an efficient implementation, the main server is connected to either

    • a remote super computer or high-performance computer center, or
    • a local computer cluster.
    Optional components include external data acquisition units and their database servers, as well as wireless, mobile clients.

    HPCN Resources: Compute Servers

    The compute servers are connected to the main server through the TCP/IP based http protocol, which requires an http server (httpd daemon) to be running on at least one logical compute server; this can, in turn, manage a larger cluster configuration through PVM (see Deliverable D02.2, The Parallel Environment, for a description of the parallel computing environment and architecture).

    The apparent complication of the http protocol layer provides the necessary flexibility to connect to either local or remote computational resources, using standard public Internet or dedicated Intranet connections.

    Compute services can either be provided

    • by a single (but possibly multi-processor) powerful super-computer;
    • by a cluster of computers linked by a high-speed network.

    HPCN Resources: Real-time data acquisition

    As an important feature for real-time emergency management, the HITERM architecture supports the direct and automatic on-line integration of field data acquisition, including:

    • meteorological monitoring data for the atmospheric dispersion models
    • hydrological data for surface and groundwater impact models
    • field measurements of air quality data (concentrations) for dynamic model calibration
    • location information (GPS) primarily for vehicle and mobile client position data.
    The communication protocol used is again based on TCP/IP and http.

    Display Clients

    HITERM currently supports two types of display clients:

    • X11 (servers) for the X Window System protocol;
    • Java clients, through Java sockets based directly on TCP/IP.
    The current HITERM Demonstrator is based on the X11 user interface; for the Java components, only first test examples for the implementation on mobile, wireless clients have been developed.

    Implementation Considerations

    The primary implementation platform for HITERM Demonstrators was
    • a UNIX workstation for the main HITERM server, including the display client;
    • a (UNIX) workstation cluster under PVM for the implementation of the HPC components.

    The main specification for the client-server architecture is the communication protocol to be used between the HITERM Server and display client and the HPC Model Server.

    This is based on a POST request issued by the client to the server URL.
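    The shape of such a POST request can be illustrated as below. The host, path, and parameters are invented for the example; the actual HITERM endpoints and parameter names are not specified here.

```python
# Shape of the client -> server exchange described above: an http POST request
# to the server URL carrying the request parameters. The URL and parameters
# are illustrative, not the actual HITERM endpoints.
from urllib.parse import urlencode

def build_post_request(host, path, params):
    """Assemble a minimal HTTP/1.0 POST request as it travels over TCP/IP."""
    body = urlencode(params)
    return (
        f"POST {path} HTTP/1.0\r\n"
        f"Host: {host}\r\n"
        f"Content-Type: application/x-www-form-urlencoded\r\n"
        f"Content-Length: {len(body)}\r\n"
        f"\r\n"
        f"{body}"
    )

req = build_post_request(
    "hpc.example.org", "/cgi-bin/windfield",
    {"model": "lagrangian", "wind_ms": 2.5, "stability": "D"})
print(req.splitlines()[0])   # -> POST /cgi-bin/windfield HTTP/1.0
```

    Because the exchange is plain http over TCP/IP, the same request can reach a compute server on the local machine, a cluster front end, or a remote HPC center without changing the client code.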



    10. Case Study Applications

    The following application scenarios have been used as the basis for the HITERM demonstrator; they provide the data and practical testing ground in an industrial context, involving representative end users for the evaluation of the approach.

    10.1 Chemical Plants (Italy)

    • Chemical process plants (Italy)
      using the example of the Ponte S.Pietro industrial district in Bergamo, Northern Italy, a number of accident scenarios following the Seveso II guidelines for the plants in this area have been formulated and simulated. The primary objective of the Italian Demonstrator was to show a completely integrated operational system including the transparent integration of high-performance parallel models (spill, wind field, Lagrangian atmospheric dispersion model) based on PVM and a workstation cluster.

    The basic scenario for stationary objects (Seveso-class process plant or storage plant) and transportation of hazardous goods to and from the loading docks of process and storage plants offers the possibility for partial pre-processing (e.g., of the geographical data of a well defined site), as well as the use of fixed data acquisition and monitoring systems.

    With the type and amount of a substance leak or spill, as well as the meteorological conditions, as the main variables, the accident consequence simulations are performed similarly to the transportation accident case to support emergency response measures or related training exercises.

    The main case study for process plant accidents, coordinated by SYRECO, was Ponte S.Pietro near Bergamo, Northern Italy. This area covers about 40 km2, distributed over 8 communities with about 30,000 inhabitants. 10% of the area is under industrial land use. A number of Seveso-class chemical process plants and so-called Level 2 installations with a total of more than 3,000 employees operate in this area. The main hazardous substances include Acrylate, Acrylonitrile, Butadiene, Styrene, Methyl Alcohol and Aliphatic Alcohols, Formaldehyde, Ethylene Oxide, Phenol, Toluene, Amine, Flammable Solvents, Pesticides, Cyanides, LPG, etc., distributed over about 500 storage tanks from 5 to 1,500 tons of capacity and a total quantity of around 10,000 tons of toxic and flammable substances.

    In addition to their storage and use in the process industries, these dangerous substances are transported by road, mainly through the A4 Milan-Venice highway and other roads of provincial interest. The average daily traffic involves 16,000 vehicles. About 10% of these are trucks carrying hazardous substances: more than 10% of these transport toxic substances, another 45% flammable substances with a flash point below 65 degrees Centigrade.

    SYRECO has performed extensive safety audits and analyses in this area, including process plants, storage locations, and the transportation of hazardous substances, and compiled large amounts of recent safety related data and the necessary environmental background information. This formed the basis of the simulation exercises, which covered the main possible major accident scenarios as foreseen in the recent amendments (COM(94) 4) to the Seveso Directive (82/501/EEC, 87/216/EEC).





    10.2 Rail Transport (Switzerland)

    • Railway transport (Switzerland)
      The Swiss Demonstrator concentrates on the transport of hazardous material by train, across the strategic North-South Alpine corridors. The example of the Reuss valley as a steep alpine valley with an extreme orography poses a considerable challenge to the atmospheric dispersion models, but demonstrates the need for advanced complex models and thus high-performance computing to obtain realistic and reliable forecasts of the evolution of an emergency. The case study implements the RTXPS real-time expert system as the decision support framework for emergency management.

    The case study addressed the release of a hazardous substance to the atmosphere or into a water body (surface or possibly groundwater) from a train accident. The train uses an emergency information system to broadcast its position (GPS derived, through GSM) and freight data (substance, amount) to a control center running the HITERM main system. From this information the control center with (access to) the appropriate HPC resources initiates further data collection (for example, from the nearest meteorological station(s) or an appropriate hazardous chemicals data base), loads the local GIS data, and initiates the simulation of the accident consequences to guide emergency response measures.

    The consequence simulation involves complex dispersion modeling (atmosphere, soils, water bodies), parallel (Monte Carlo) error and sensitivity analysis, and the graphical visualisation of the accident consequences (for example, population exposure in space and over time in the form of interactively animated topical maps), as well as providing integrative instructions and decision support for field personnel or in a training situation. Using a discrete multi-criteria DSS approach to suggest optimal strategies to the operators, the interactive interface allows people on site to introduce their observations, constraints, and objectives. This information can be accessed remotely (again through GSM phone links or dedicated channels) through TCP/IP Internet protocols and IP/ISDN with simple, mobile and hand-held terminal equipment (PC, laptop running http clients).

    The case study, under the primary responsibility of ASIT, addressed transportation risks on railways in the Canton of Uri, part of the Gotthard alpine transit corridor linking France and Germany with Italy. This provides a model for several similar North-South alpine corridors of strategic economic and environmental importance. Detailed geographical, environmental and transportation systems data, as well as existing traffic telematics systems (using GPS and GSM) are already in place.

    The study involved the evaluation of the HITERM system under the special conditions of the alpine region (narrow valleys, long tunnels, high traffic densities). A number of Swiss authorities, including the National Alarm Head Office, Chemical Accidents Intervention Forces of the Canton Uri, and the Swiss Federal Railways Office, as well as the Association of Chemical Industries have expressed their interest in the project.





    10.3 Road Transport (Portugal)

    • Road transport (Portugal)
      The Portuguese test case concentrates on a very practical case: more than a hundred fuel trucks make the daily trip from the Aveiras fuel depot to the Airport of Lisbon along the major North-South highway leading into the capital. Accident scenarios for the fuel trucks include fire, explosion, and soil and groundwater contamination. An on-board alarm unit with GPS/GSM communication triggers the RTXPS expert system.

    The impact assessment for road accidents involving hazardous goods involves a number of real-time communication elements: the identification of the truck and the determination of its location, based on GPS/GSM in an on-board unit installed in the truck, operated by the driver or automatically in the case of an accident. The assessment also considers dynamically updated environmental criteria to accurately predict accident consequences, based on time-variable local conditions such as temperature, precipitation, wind speed and direction, surface water flow, in addition to traffic related information. The models used (spill/evaporation, fire, explosion, atmospheric dispersion, and infiltration/soil contamination) are all run as stochastic models in a Monte Carlo framework.

    Given a dynamically generated alarm from a vehicle in transit, the possible impacts of the accident are determined through simulation of accident scenarios based on the hazardous substances involved, their amount, and the degree of damage to the vehicle, i.e., the nature of the accident.

    Given the requirements of a large fleet of hundreds of vehicles, and the inherent complexity of the risk calculations (e.g., involving heavy gas dispersion modeling, fire and explosion models) the need for HPCN should be obvious. An additional complexity can arise through the inclusion of on-line sensitivity analysis, where the robustness of the solutions is tested against increasing levels of data and parameter uncertainty.
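    The Monte Carlo idea can be sketched minimally as follows. The toy `peak_concentration` model and all parameter values are assumptions for illustration, not the project's actual dispersion models; the point is that the ensemble can be re-run at increasing uncertainty levels to test the robustness of the solutions:

```python
import random
import statistics

def peak_concentration(q, u):
    """Toy ground-level peak concentration for release rate q [kg/s] and
    wind speed u [m/s]; a stand-in for the real dispersion models."""
    return q / (u * 100.0)

def monte_carlo(q, u_mean, u_rel_sigma, n=2000, seed=1):
    """Run the toy model n times with normally perturbed wind speed and
    return the ensemble of results (one value per realisation)."""
    rng = random.Random(seed)
    ens = []
    for _ in range(n):
        u = max(0.1, rng.gauss(u_mean, u_rel_sigma * u_mean))
        ens.append(peak_concentration(q, u))
    return ens

# On-line sensitivity analysis: widen the input uncertainty step by step
# and observe how the spread of the ensemble of solutions grows.
spreads = {s: statistics.pstdev(monte_carlo(2.0, 3.0, s))
           for s in (0.05, 0.15, 0.30)}
```

    If the ranking of candidate responses is unchanged as the spread grows, the solution can be considered robust against that level of data and parameter uncertainty.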

    The vehicle fleet (tanker trucks with hazardous cargo) of PETROGAL and its Portuguese distribution (road) network, and in particular the highway between Lisbon (airport) and the fuel depot and loading station in Aveiras north of Lisbon, have been used as an example.

    The Petrogal fleet includes:

                              Petrol and Diesel   Fuels, mixtures    Asphalts   Chemicals
      Vehicles                              185                24           9           6
      avg. tons in transit                4,163               455         191         127
      annual km                      13,945,000         1,754,000   1,035,000     793,000

    In addition to the three major case studies, a number of smaller applications have been built, primarily designed to test and illustrate specific features and functions of the system.





    11. Assessment and Evaluation

    To test and validate the concepts and tools developed in HITERM, they were applied in three regional case studies with varying emphasis:

    • a chemical process plant in Italy;
    • a railway transportation case in Switzerland;
    • a road transportation case in Portugal.

    Assessment and evaluation of the three case studies is based on a common set of validation and evaluation criteria that were applied to the experiences of the Demonstrator installation, its operation, and the feedback from potential users in a number of demonstrations of the system.

    The latter part includes an analysis not only of the technical aspects of the system, such as performance, reliability, efficiency, and ease of use, but also of the equally important institutional and ultimately commercial aspects, summarised in a benefits analysis for each application, which is then generalised for the project as a whole.

    The three application scenarios described above have been used as the basis for the HITERM Demonstrator; they provide the data and practical testing ground in an industrial context, involving representative end users for the evaluation of the approach.

    Each of the three test cases was dedicated to a different aspect of technological risk management, namely:

    • chemical process plants;
    • road transportation accidents;
    • rail transportation accidents.
    While all three cases share the same software framework, their configuration and data make them very different. At the same time, they have been selected and configured to concentrate on, and demonstrate, different aspects of the HITERM system:
    • the integration of parallel high-performance computing models;
    • the integration of artificial intelligence, and in particular, real-time expert systems functionality as a decision support tool;
    • the integration of communication elements such as a GPS/GSM on-board alarm unit, automatic downloading from external data bases, and polling of real-time sensors.
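    The real-time expert system functionality can be illustrated with a minimal forward-chaining sketch. The facts and rules below are invented examples; RTXPS itself is far richer, and this only shows the rule-based decision-support idea:

```python
# Starting facts, e.g. derived from the alarm message and substance data base.
facts = {"accident": True, "substance_flammable": True, "near_population": True}

# Each rule: (set of premises that must all hold, fact to assert).
rules = [
    ({"accident", "substance_flammable"}, "fire_risk"),
    ({"fire_risk", "near_population"}, "recommend_evacuation"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises hold until no new facts appear."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if all(facts.get(p) for p in premises) and not facts.get(conclusion):
                facts[conclusion] = True
                changed = True
    return facts

result = forward_chain(dict(facts), rules)
```

    In the demonstrator, the conclusions of such rules would in turn trigger simulation runs, checklists, or recommendations presented to the emergency manager.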

    11.1 Assessment Criteria and Procedure

    The objective of the case study assessment and evaluation phase was to validate the operation of the HITERM demonstrator within the context of a variety of application domains (see above).

    The validation activity took place at the three demonstration sites: Gavirate, Bern, and Lisbon.

    The assessment and evaluation process sought to confirm that the demonstrator implements the specified user requirements as defined in WP 01, Requirements and Constraints Analysis. The demonstration activity allowed a range of potential users to be exposed to the Demonstrator in a series of presentations at each site. This facilitated feedback on the validity of the originally agreed requirements, their method of implementation, new requirements, and the general acceptability of the system for operational use. It also provided opportunities for publicising the demonstrator as part of the dissemination activities (compare Deliverable D12.0, Dissemination and Exploitation Plan).

    Together with the local users, the detailed criteria for success of the evaluation activity have been defined. These include comparisons of the HITERM results with historical and observation data on accidents (where available) as well as with results derived independently (e.g., available from the literature, including Safety Reports).

    The same evaluation process has been applied at all three evaluation sites: SYRECO at its offices in Gavirate; ASIT at its offices in Berne; and Petrogal, with the support of LNEC and the FCCN, responsible for the validation in Lisbon.

    In the following, the individual results (summarised in the Deliverables D08, D09, and D10) are compared and integrated across the three sites.

    The objectives of the assessment and validation include:

    • Technical feasibility: validation of the technical infrastructure, client-server set-up, ease of installation, reliability, robustness

    • Performance: better-than-real-time simulation modeling for complex scenarios

    • Accuracy: comparison of model results and model-generated decision support with benchmark results and expert assessment

    • Relevance: evaluation, by peer groups and potential users in industry and public administration, of the overall performance and of the usefulness and usability of the information provided.

    These criteria are compiled for each of the three Demonstrator implementations. While some of them would, in principle, lend themselves to quantitative treatment, the assessment has mainly been done on a qualitative basis.

    11.2 Assessment Results: Italy

    Validation criteria have been defined according to the following topics including both technical capabilities and marketing perspectives:

    • The system should be easily implemented, and HW/SW costs should not be an obstacle for exploitation, based on the state of the art

    • System performance should allow for better-than-real-time evaluation

    • On-line connection should be available and tested, integrating dynamic analysis and field data retrieval

    • User comprehension and satisfaction has to be verified by demonstration to local municipalities, industries and authorities

    Interviews with, and reactions of, potential end-users during demonstrations of the system have been systematically collected via questionnaires and used as validation criteria. SYRECO has presented the system in different contexts, such as:

    • National congress on Industrial risk analysis and Technological Emergency management in Pisa (October 1998), together with ESS, where a paper was also presented.
    • Presentation during a course for emergency managers organised by Regione Lombardia in February 1999.
    • Demonstration to the Civil Protection Bureau of Regione Lombardia and its responsible officials in April 1999.
    • Demonstration to industrial representatives and the population during the SYRECO Emergency Training Center inauguration in Filago (Ponte S.Pietro Area) in October 1999.
    • Demonstration to a wide sample of mayors of municipalities in the Ponte S.Pietro Area, to support the proposal of an integrated Civil Protection and Emergency plan in the Ponte S.Pietro area, in November 1999.
    • Many other technical demos have been given at SYRECO's offices to expert groups and individuals.

    A common relevant validation criterion was the assessment of potential users' ability to understand what the system is doing, and of the usefulness of the information they can obtain.

    Validation Results

    During the first phase of HITERM project development and test case implementation, the first two validation objectives, technical feasibility and performance of the system, were well satisfied.

    Validation by expert assessment, peer groups and potential end-users, as far as usefulness and usability are concerned, was part of the second phase of the Italian test case implementation. It could only be partially completed, however, since practical difficulties have so far prevented benchmark comparisons with the results of other systems.

    The application of HITERM to industrial emergency management demonstrated the reliability, flexibility and robustness of the technical infrastructure, the system architecture, and the client-server set-up. The system could easily be installed and re-configured: since access to and use of the JRC machines was, although initially promised, not possible, the initial plan to use a powerful HPC (Parsytec) at JRC Ispra was changed to a simple but still efficient (in terms of performance) cluster of workstations, built by connecting the partners' development machines and rented ones.

    As far as performance is concerned, the application of the HITERM evaporation and Lagrangian dispersion models to the accident scenarios characterising the Italian case study showed that the main objective of better-than-real-time performance could be achieved even with the simple cluster configuration used for the Italian demo, with run times reduced by a factor of ten compared to normal operating resolution time.
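    The cluster configuration exploits the fact that Monte Carlo realisations are independent and can be farmed out to workers. A minimal single-process sketch of that master/worker pattern follows; threads stand in for the cluster's workstations, and the scenario function is a toy, not the actual dispersion model:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def run_scenario(seed):
    """Stand-in for one stochastic model realisation (the real system runs
    a full evaporation/dispersion model); returns a toy result."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(1000)) / 1000

# Task farm: the master hands independent Monte Carlo realisations to
# workers and collects the ensemble. In the demonstrator the workers were
# workstations in a cluster; threads here merely illustrate the pattern.
seeds = range(16)
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_scenario, seeds))

ensemble_mean = sum(results) / len(results)
```

    Because the realisations do not communicate, the speedup scales almost linearly with the number of workers, which is why even a modest workstation cluster reached the factor-ten reduction reported above.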

    11.3 Assessment Results: Switzerland

    Validation experiments

    For the validation of the performance and the accuracy of the system, the Swiss Demonstrator was tested in detail, and the results of the model calculations were closely examined, also with the help of experts in the various topics.

    The relevance was validated by sending a questionnaire to all important potential users of such a tool or system in industry and administration in Switzerland.

    After analysing the incoming answers of the potential users, presentations of the Swiss Demonstrator were carried out (for cantonal authorities) or are still planned (for the Federal Office of Transport, the National Railway Company (SBB), the chemical intervention forces of Zurich, etc.) for all those who showed a strong interest in the new system.

    Results of the Validation

    Adequacy (conceptual)

    The feedback concerning the conceptual adequacy is very positive. The intervention experts in particular gave good judgments.

    Technical feasibility:

    We cannot say much about the installation and possible problems, since the demonstrator was installed by ESS. After installation, however, the Swiss demonstrator proved to be reliable, and new time-limited licenses could be installed without any problems.

    Technical (implementation) reliability

    In the current version of the Swiss demonstrator the technical reliability is not yet satisfactory. Presentations of the demonstrator are hampered by unexpected errors, the impossibility of stepping back in some parts of the program to show the effects of changing input parameters, buttons switching incorrectly, and similar problems.

    Performance and Accuracy:

    The test of the Swiss demonstrator at the ASIT office showed very satisfactory results concerning the modeling of various chemical accident scenarios. All the tested scenarios and applications produced results that are very realistic in the eyes of the experts (wind field in the alpine topography, extents of various chemical accidents, etc.).

    Tests and model verifications, also with the help of external experts, showed that the model accuracy can be judged to be good. Most of the modeled simulations produced realistic results.

    The Swiss Demonstrator can thus be considered a useful tool for the simulation of accident scenarios, and also an information system for decision support for authorities.

    Looking at the different (sub-)tools within the Swiss Demonstrator, it has to be noted that not everything is stable in the present version: repeated error messages and unexpected program breakdowns happen more often than desirable.

    Ease of use

    If the program were more stable, it could be considered very easy to handle, also for people with little experience in operating computer programs. In our opinion the GUI is quite good and clearly arranged. The buttons should carry text describing their function, because the meaning of the icons is sometimes not clear enough. The data-link functionalities are useful.

    Institutional feasibility (integration)

    Concerning the institutional feasibility, we see some problems in Switzerland, because responsibility in case of an emergency involving dangerous goods lies clearly at the cantonal level. The railway companies have the duty to prepare preventive and safety measures, but in case of an accident the emergency management is on the cantonal side. On the other hand, the federal supervisory authority has to coordinate the cooperation of the cantonal intervention forces and the railway companies.

    In this situation we see the use of a HITERM system in a computing center operated according to an agreement between

    • the cantonal intervention organisations and emergency management
    • the railway companies and
    • the federal supervisory authority (federal office of transport).

    Relevance:

    The analysis of the questionnaire and the first Swiss demonstrator presentations led to quite a positive judgment. More specifically, the results can be summarised as follows:

    Most of the interested and involved potential users consider the system very useful and important for the simulation of various chemical accident/incident scenarios for risk determination, but also an excellent information system giving emergency forces fast and reliable decision support. It was also often mentioned that the system would be a good training tool for emergency forces and very useful for modeling the extent of accidents involving dangerous chemical goods. About half of all respondents also find it a good tool for evaluating the requirements concerning safety arrangements, and a practicable instrument for planning and strategic analyses.

    Somewhat surprisingly, nobody rated the tool as useful as an alert system for emergency forces. Most of the potential users who gave feedback can imagine using such a system at their office or company.

    Technical/economic feasibility (cost)

    None of the potential users of a system like HITERM whom we contacted in Switzerland can see a solution involving a high-performance computing system as matching their needs (and their financial capabilities). They all use normal PC platforms (or occasionally single workstations).

    According to our analyses, a major problem seems to be the expected cost of purchasing such a system. Most of the interested authorities are currently not equipped with sufficient and suitable hardware to run such a system, so they would first have to buy the corresponding hardware. But most of the authorities do not have budgets large enough to allow them to buy a system in the financial dimensions of HITERM. Only a few potential users made a concrete statement about what they are willing to pay for such a system; their ideas about these costs (system, license and yearly maintenance) are in the region of a few thousand Swiss Francs, which we consider much too low.

    The main conclusion is that HITERM was considered a very good and useful tool by potential users in industry and public administration, and many decision makers could well imagine using it, but only if the costs were low enough. One of the main problems for the attractiveness of the product therefore seems to be the purchase price or, more correctly, the expected total cost of ownership of the system.

    As far as the Italian experience is concerned, the main business potential can now be seen with:

    • Local communities like Ponte S.Pietro (the pilot implementation, to be exploited in other similar areas in Italy), including municipalities and industries, in view of the new requirements of the Seveso II directive, in which public information and training is one of the most important features. Civil protection organisation in Italy is now moving (following the European trend) the competence for coordination and intervention in large-scale, long-term emergencies, typical of natural disasters, from central institutions and organisations (like the Provinces and the Prefettura) to local organisation and management responsibilities at the level of municipalities and mayors.

    • Regione Lombardia, which is proposing a pilot project to integrate safety report analysis and management with emergency planning, land-use guidelines and related information, to support local municipalities and provinces (as potential remote clients in a client-server architecture).

    11.4 Assessment Results: Portugal

    The implementation of the HITERM system is expected to produce benefits that can be divided roughly into two categories:

    • Response quality
    • Response time

    The impact on response quality, although hard to quantify, can be easily understood if we consider that, in the current situation, risk assessment and emergency management are carried out based mainly on the previous experience of the individuals who happen to be in charge at the time. This leads not only to a higher probability of human failure, but also to a lack of consistency in the responses to emergency situations, depending on the particular individuals involved with each emergency. Also, there is no efficient way to propagate information to everyone involved in dealing with the situation, which can lead to further inconsistency, as different agents are operating with different background information.

    In addition, we must also consider that, no matter how extensive the individuals' experience might be, they still do not have any quantitative data or forecasts on which to base their course of action.

    The HITERM system not only provides consistent quantitative data and guidance in the information gathering and decision making process, but also has a powerful communications infrastructure that allows the timely dissemination of that data. The result will be a better and more consistent response to emergency situations.

    The gains in response time, also critical for damage control, are even more obvious. The current Petrogal modus operandi relies on external entities (such as the road police or passers-by) to raise the alarm. The current alarm issuing procedure shows that the time span until the specialised Petrogal team reaches the accident site can reach several hours.

    The HITERM system will allow an alarm to be sent to Petrogal as soon as the accident occurs. This represents a saving of 35 - 75 minutes, i.e., about 45% of the total response time.
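    A quick consistency check on these figures (the implied totals below are derived from the stated saving and percentage, not stated in the report):

```python
# A saving of 35-75 minutes is said to be about 45% of the total response
# time; the implied current totals follow directly.
saving_low, saving_high = 35, 75  # minutes saved (from the report)
fraction = 0.45                   # share of total response time (from the report)

total_low = saving_low / fraction    # implied lower bound on current total
total_high = saving_high / fraction  # implied upper bound on current total
```

    The implied current response time of roughly 80 to 165 minutes is consistent with the statement above that reaching the accident site can take several hours.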

    It should be noted that this time saving can be accomplished with little additional investment, as all Petrogal trucks already have mobile phones and industrial computers which can be programmed to fulfil the vehicle location and alarm issuing tasks. The additional investment needed refers to the GPS units and possibly more sophisticated alarm-situation detection devices, such as gyroscopes for tip-over detection or airbag trigger detectors.

    11.5 Comparative Assessment

    The three demonstrators are conceptually and technically quite similar. Not surprisingly, the assessment therefore reflects these similarities by showing quite similar results.

    While the general evaluation is positive, and the technical issues of performance, ease of use and user interface, and model accuracy are all evaluated positively, the specific problems resulting from the assessment are:

    • institutional integration into existing structures;
    • reliability (stability) of the current prototype;
    • total cost of ownership.

    Benefits Analysis

    The HITERM project offers the tools for a rational response to technological risk, both at the planning stage and the emergency management stage. It can thus, on the one hand,

    • contribute to considerably reducing damage from technological emergencies - which is an obvious benefit, but also
    • help the users of the system to do their job better and more cost-effectively, which is a second level of benefit.

    The latter aspect has to be seen in a comparative context, i.e., comparing the capabilities of HITERM with the status quo of operations in potential user institutions and organisations and evaluating incremental benefits. This is described, for the example of the Italian market in some detail, in D10.0, The Italian Case Study Report.

    HITERM/RiskWare is not an off-the-shelf software system; we expect, as the current experience with marketing the system clearly demonstrates, to build and support a wide range of implementations for different customers, including:

    • public institutions (government) including the so-called competent authorities of the Seveso Directive (96/82/EC), responsible for external safety, emergency management and civil defense in general;

    • plant operators of hazardous installations such as chemical process plants, storage facilities, or refineries;

    • transportation systems operators, whether railway operators, airport authorities, or road transport operators (and possibly harbor authorities).

    For each of these groups, potential benefits of a decision support system based on the HITERM results will differ, depending on their mission and current methodology.

    In general terms, HITERM has potential socio-economic impacts and benefits at several levels:

    • Technological level: There is a substantial amount of new technological development in the project, primarily in the domain of integration rather than in the individual components. It is this emphasis on an integrated solution within a well-defined European regulatory framework that provides the main thrust for the exploitation.

      The main advantages of the HITERM approach, compared to alternative software packages and methods of risk assessment and risk management are:

      • the highly integrated nature of the product, combining data bases, GIS, expert system for DSS, and a range of complex simulation tools;

      • the integration of risk assessment tools and real-time risk management tools in a single, common framework with a coherent user interface and logic;

      • a fully interactive, menu-driven and graphical user interface that is easy to use even in emergency situations;

      • the flexibility of the object oriented design and client server architecture, that supports flexible and scalable use of computing resources;

      • linkage to on-line data acquisition like meteorological data, the SBB freight data base, or the on-board units (GPS/GSM) on trucks;

      • full integration of GIS as an embedded layer of functionality, which greatly supports the interpretation of the results;

      • the integration of several, and alternative state-of-the-art models with a standardised interface that makes model integration comparatively easy;

      • explicit treatment of uncertainty with Monte Carlo methods;

      • availability of a real-time expert system as a driver and coordinating component for scenario analysis, potential training applications, and real-time emergency management support;

      • decision support using rule-based expert systems linked closely with the simulation models.

    • Synthesis level: Most of the technological work involves bringing together existing tools and techniques from several domains. The emphasis is on integration. Relatively small investments thus result in great improvements in the utility and effectiveness of the tools working together, and in the much greater insight obtained from a synthetic view of the inherently complex risk assessment and management process.
       

    • Exploitation level: The project involves the potential for immediate exploitation of its results in the context of the three Demonstrators, each of which has a different specific application domain, emphasis and range of functionality. There is thus a multiplier of three within the project itself as well as any subsequent commercial exploitation, as well as the possibility to configure various other combinations of the primary elements for a broad range of possible new applications. Strategies for exploitation are described in The Dissemination and Exploitation Plan (D12.0), and the Technology Implementation Plan (D12.1).
       

    • Commercial level: The project team is firmly located in the commercial world of exploiting advanced technology. The commercial incentives offered by the considerable market potential (compare Deliverable D12.0, Dissemination and Exploitation Report) ensure that maximum use is made of the operational capabilities that emerge from the work.

    Benefits and drawbacks of HITERM have been compared with potential competitive systems for risk analysis and emergency support and results are summarised in the following:

    • Real-time and on-line evaluation in emergency conditions is the most important feature of HITERM; combined with a good, comprehensive interface reproducing the standards of the competent authorities for emergency procedures and planning (recently established and now in normal use), it can represent the most important benefit for customisation.

    • A DSS concept like HITERM's is not yet provided by competitors for standard applications; improvements of the rules in the domains of accident scenario identification, source term definition, "domino effect" evaluation, and practical suggestions for prompt intervention and first safety measures seem to be the most promising and attractive features for exploitation.

    • Complex and fast-running dispersion models that can take into account the characteristics of the domain (elevation, obstacles, land use and so on) still represent an advantage that most other competitive systems lack, but the model library needs to be enlarged and improved in order to cover all the basic potential accident scenarios: HITERM's cost and performance capabilities can be attractive only if the model library is completed and well integrated with a basic hazardous-materials database.

    • GIS support and system integration are much better than in any other system currently in operation or development, but public authorities are orienting their choices towards cheaper and easily accessible systems that integrate other potential applications (mainly ArcView and ArcInfo); the cost of the HITERM system could therefore present a difficulty in dissemination and exploitation.

    • Integration with MSDS and more specialised public databases needs to be implemented in HITERM, with internal protocols completely transparent to the users, run and integrated by DSS rules.

    • Competitive systems are developing routines and DSS for population behaviour and traffic control in case of an emergency evacuation which are not taken care of in HITERM; implementation efforts should be made in that direction.

    On the one hand, consistent follow-up of the points addressed above could lead to successful exploitation of the system; on the other hand, the cost and complexity of HITERM suggest first selecting a subset of the functionality to address the potential markets in industrial risk analysis and technological emergency management, represented by public authorities and significant industrial players.

    Conclusions

    In summary, the validation phase of the project has been successfully completed. Technical feasibility, reliability, and performance have met or surpassed the expectations as formulated in the user requirements analysis.

    The basic objective of demonstrating the potential of high-performance computing for decision support in a technically demanding domain like technological risk management has been met. A flexible systems architecture was developed that supports the use of high-performance computing in an incremental, scalable way that is very cost-effective and can adapt to growing demand efficiently.

    Shortcomings of the Demonstrator are largely due to lack of critical data (e.g., the chemical data base) or some additional models (surface and groundwater) deemed desirable in some applications.

    These problems, however, are of a quantitative rather than a fundamental, qualitative nature, and there is every indication that they can be solved with a reasonable level of effort. This is foreseen in the continuing exploitation phase, converting HITERM results into a marketable product.



    12. Dissemination and Exploitation

    HITERM is designed for two major user groups:
    • private industry (primarily the chemical industry, but also the transportation sector, power generation, heavy industry, etc.)

    • public administrations responsible for emergency management and civil protection including fire brigades, the national competent authorities under Seveso II (96/82/EC).

    As a commercial product, HITERM represents a bundle of several software components that can be licensed, together with consultancy services that can be offered in support of, or by using, the software. This includes performing contract studies with the system, but also the compilation of the necessary data and technical support for end users, including the possibility to provide computational (high-performance computing) services to end users.

    The range of exploitation options therefore includes:

    • licensing of the software components to end users or value-added distributors;
    • providing consultancy services on the basis of the software;
    • providing consultancy and user support for end users;
    • offering computing services for the high-performance components;
    • exploitation of the development components and experience gained in related products and projects.

    The HITERM exploitation plan constitutes a Deliverable of the project; while it provides an overview of the main concepts, plans, and strategies envisioned by the project consortium for the exploitation of the project results, it will not disclose any detailed financial information or confidential business information of the partners.

    Parts of the exploitation plan, and in particular the business plan, will therefore be expressed in qualitative or at best semi-quantitative terms.

    The markets and target user groups

    HITERM is designed as a flexible system that is fully data driven. It can therefore be adapted to any language and regulatory framework with reasonably small effort.

    The target market of HITERM is global. However, for practical reasons the market introduction will have to follow a more gradual approach:

    • introduction in the case study countries Italy, Portugal, Switzerland through the respective partners and using the case study Demonstrator installations as reference systems;
    • the second step will concentrate on the other EU countries;
    • in a third step, the candidate countries will be targeted;
    • and subsequent steps can then attempt expansion into the CIS and other eastern and central European states, and ultimately world wide.
    The target user groups for the HITERM project results can be grouped into two main types:
    • private industry (primarily the chemical industry, but also the transportation sector, power generation, heavy industry, etc.): any industrial sector that is either subject to the Seveso II Directive, or involves major technological risks (chemical emergencies, explosion, fires, structural failure etc.) that may require specific emergency preparedness and management tools;

    • public administrations responsible for emergency management and civil protection, including fire brigades, the national competent authorities under Seveso II (96/82/EC) in the EU, and comparable institutions in other countries. Please note that while the HITERM project and demonstrator concentrate on chemical emergencies as defined in the Executive Summary version of the Requirements and Constraints Report, the derived product can again address the entire range of technological and environmental emergencies where the basic methodology is applicable.

    The size of this market is considerable: concentrating again on the chemical industry only, in the US alone about 250,000 chemical industrial plants are subject to the latest regulations on emergency planning and management. The European common market is of a comparable size. Globally, this would encompass, for the chemical industry alone, a market of 500,000 to 1,000,000 chemical plants and enterprises that are potential users of an emergency management system like HITERM.

    The number of public institutions is of the same order of magnitude, if not larger. While national and regional bodies concerned with emergency management number in the thousands, the local level, and in particular fire-fighting command centers, again numbers in the hundreds of thousands world wide.

    Therefore, the potential market for a system like HITERM is of a considerable size.

    Competitor Analysis

    For the analysis of competing products, a survey of existing software systems for emergency planning and management, as well as a series of ongoing EU-sponsored R&D projects, was undertaken. The current status of the systems identified is summarised in Deliverable D12, Exploitation Plan; see also Moskowitz et al. (1995) and Bouchart et al. (1995) for recent compilations of risk-related computer codes.

    Detailed information on ongoing research and development projects funded under the Fourth Framework Programme by the European Union can be found on the CORDIS server, http://apollo.cordis.lu, and on the respective home pages of the various programmes such as ESPRIT and TELEMATICS.

    In summary ...

    While the analysis of competing products is necessarily incomplete and cursory, the following basic features relevant for the positioning of the HITERM results seem to emerge:
    • most products are based on relatively old models and concentrate on vapor cloud dispersion;
    • integration with data bases, GIS, or on-line monitoring is the exception;
    • no expert system (other than simple decision tables) seems to be available;
    • no explicit treatment of uncertainty or stochastic modeling could be identified in emergency management applications;
    • graphical user interfaces are generally poor or absent;
    • the high degree of integration characteristic of HITERM seems unique;
    • no systems that use high-performance computing other than for pure research purposes could be identified.

    The lack of the above features, at least in terms of their integration into a single product, therefore clearly defines the competitive advantage for HITERM.

    Product packaging: software and services

    As a commercial product, the results of the HITERM project represent:

    • a bundle of several software components that can be licensed together or individually (the framework system RiskWare and its component screening models; the parallel high-performance computing models; the wireless communication tools);
    • consultancy services that can be offered in support of, or by using, the software.

    This includes performing contract studies with the system, but also the compilation of the necessary data and technical support for end users, including the possibility of providing computational (high-performance computing) services to end users.

    As one possible form of packaging, ESS is planning to integrate the results of the project with its RiskWare software system. Other software developers (primarily GMD and LNEC for the parallel models and wireless communication tools) will receive license fees for any copy of their software bundled with RiskWare for a third-party client; the consultant partners in Portugal, Italy, and Switzerland will act both as distributors and as consultants providing local user support and technical services, with or for the RiskWare software.

    Finally, as an optional extension of the consultancy in support of end users, the optional HPCN components of RiskWare can be offered by the project partners, or qualified future distribution and support partners, as a computational service. This would allow end-users to minimize their investment, training, and long-term maintenance efforts by outsourcing these components to an external service provider.

    While this option is conceptually very attractive, and technically and commercially sound, there may be issues of confidentiality that may make it difficult to implement both in the industrial and public administration environment.

    Please note that the bundling within the RiskWare framework is only one possibility: other partners, or ESS, may choose to bundle any or all of the software components in other frameworks and products, with reciprocal licensing arrangements.

    The GMD, for example, will continue to exploit the HITERM developments in future projects, and provide continuing support and consultancy on a case-by-case basis. Research-oriented but externally funded projects will try to build on HITERM components such as the parallel models, sensitivity analysis, and parallel implementation techniques, together with the remote client/server execution of models on powerful hardware, triggered by a web request.

    Commercially oriented exploitation and continuing support for end users is foreseen under a number of constructs, currently under discussion, that will involve spin-off companies that can license and then commercially exploit GMD developed products.

    Technical constraints: data, costs, infrastructure

    As already discussed in the Requirements and Constraints Analysis report, a technically demanding system like HITERM faces a number of constraints that are important to consider in any exploitation planning.

    Data

    For the practical application of the HITERM system, the availability of data may be a constraint. This includes:

    • geographical and orographic background data, climate data;
    • risk related data (chemical plants, population, intervention forces);
    • administrative and organisational data: rules and procedures for intervention;
    • hazardous chemicals data.

    Since the compilation of some of these data may be expensive, HITERM as a product has to:
    • operate with a minimum of data;
    • facilitate incremental building of its data bases;
    • include data compilation as a bundled service.

    Costs

    Cost considerations are a major constraint both for public authorities and for industrial enterprises, in particular small and medium-sized enterprises.

    HITERM as a product must therefore:

    • offer a low-cost entry-level configuration;
    • be easily upgradeable if and when more performance is required.

    Infrastructure

    Constraints on the availability of HPCN infrastructure (massively parallel computers, clusters with fast LAN connections, fast external network connections) are to be expected in most potential applications. This is related both to costs and to institutional constraints that do not make the introduction of "exotic" technology easy. In addition, HITERM as a product will have to address the general dominance of Microsoft-based PC equipment as the computing platform of choice in the overwhelming majority of potential client sites.

    Therefore, the following issues must be addressed:

    • simple entry-level configurations
    • flexible upgrade options through cluster solutions
    • porting to Windows NT as the basic (client) platform or further development of Java clients.

    Marketing strategy

    The marketing strategy for HITERM/RiskWare is based on a phased approach (see the description of the market above).

    The initial phase will concentrate on the direct exploitation of the project and the Demonstrator cases in Italy, Portugal and Switzerland. It will focus on the national partners in these three countries and their existing professional contacts and clients.

    Since the national partners already operate successfully in this market, no further market studies seem necessary. The primary marketing mechanisms will be:

    • exploitation of existing business contacts of the partners;
    • presentations of the Demonstrator at exhibitions, conferences, and technology fairs;
    • mailings to potential users with individual follow up;
    • as accompanying measures, publication of articles, features, and editorials describing the system in appropriate technical journals;
    • continuing use of the Internet as an advertising medium.

    In a second phase, the marketing will require identifying strategic partners in various countries. Due to the very important (and comparatively time-consuming) consultancy component, and the need for customisation to national regulatory frameworks, institutional structures, language, etc., building up a network of local support partners is essential.

    Business plan options

    In principle, there are several options for a business strategy for HITERM; they include:

    • concentration on a small number of high-profit projects;
    • building a support and distribution network capable of supporting a high-volume but relatively low-cost market;
    • licensing to national or regional distribution partners or value-added resellers with minimum direct involvement;
    • seeking strategic partnerships with established players in the market.

    These strategies are of course not mutually exclusive: they can be combined, differentiated geographically, and evolve depending on market response and first experiences.

    For the last point, strategic partnerships with developers of similar systems, several initiatives have been started:

    • initial contacts with DNV (developers of the SAFETI system and related products, including models like PHAST);

    • discussions with TEMARS (developer of SIGEMI in Italy) with the goal of integrating the chemical data base, accident data base, and simple screening models from SIGEMI into RiskWare/SIGEMI for the Italian market;

    • first contacts with ASSOMINERARIA (coordinator of the SINGER project) to explore possibilities for integration, since SINGER and RiskWare/HITERM have complementary capabilities.

    Exploitation and IPR issues

    The exploitation of the HITERM results is regulated by the terms and conditions of the Consortium Agreement signed by the HITERM partners at the beginning of the project.

    The relevant conditions of the Consortium Agreement are as follows:

      Ownership

      Foreground shall be owned by the Contractor(s) generating it.

      Access Rights

      Access Rights granted for Foreground or Background shall be subject, where appropriate, to suitable arrangements determined by the Contractor to ensure their use only for the purpose for which they are granted and may be subject to appropriate undertakings as to confidentiality.

      Access Rights for Background shall be conditional upon the Contractor being free to grant such rights.

      Access Rights shall not, unless expressly agreed, confer any right to sub-license.

      Proprietary information which is to be treated confidentially shall be duly marked.

      Access rights for exploitation

      Each Contractor shall be entitled to exploit all the Foreground, including to procure the manufacture of products by third parties for exploitation by the Contractor at its risk and account and shall grant each other Access Rights for exploitation of Foreground on a royalty-free basis.

      Any Contractor not normally undertaking commercial activities or unable itself to commercialise its Foreground may grant above Access Rights on, instead of royalty-free conditions, fair and reasonable financial or similar conditions which have regard to the Contractor's contribution to the Project and the commercialisation potential of the Foreground. Any Contractor applying this paragraph shall not use the Foreground in commercial activities.

      Each Contractor shall grant Access Rights for its Background necessary for the exploitation of Foreground to the other Contractors in this Contract subject to major business interests, provided they do not result in abusive restrictions to the exploitation of Foreground, under favorable conditions.

    Exploitation proceeds in two parallel but related tracks:

    • commercial exploitation by the industrial (developer) partners;
    • in-house and academic exploitation (Petrogal, GMD, LNEC, FCCN).

    The commercial partners (ASIT, SYRECO, ESS) are directly exploiting HITERM by marketing RiskWare and related services. RiskWare is a proprietary system owned and distributed by ESS.

    The data sets for the three Demonstrators are owned by the respective case study partners.

    ASIT and SYRECO can distribute RiskWare as value-added resellers in their own respective projects. Continuing free licenses are granted to ASIT and SYRECO for marketing purposes.

    Software developed by GMD (parallel models) and LNEC (GPS/GSM integration) constitutes optional components of RiskWare, and can be licensed from GMD and LNEC respectively. A proposed exploitation strategy and licensing agreement for the GMD is available as APPENDIX 3 to this report.



    13. References and Bibliography

    Al-Wali K. I., Samson P. J. (1996) 
    Preliminary Sensitivity Analysis of Urban Airshed Model Simulations to Temporal and Spatial Availability of Boundary Layer Wind Measurements. Atmospheric Environment, Vol. 30, No. 12, 2027-2042.
    Allwine K.J., Whiteman C.D. (1985) 
    MELSAR: A Mesoscale Air Quality Model for Complex Terrain: Volume 1 - Overview, Technical Description and Users Guide. Pacific Northwest Laboratory, Richland (PNL-5460). 
    Booch, G. (1991)
    Object Oriented Design with Applications. Benjamin/Cummings, California, USA. ISBN 0-8053-0091-0
    Bouchart,D.C., Ambrose, R.B.Jr., Barnwell, T.O.Jr., and Disney, D.W. (1995)
    Environmental Modeling Software at the U.S. Environmental Protection Agency's Center for Exposure Modeling. In: G.E.G. Beroggi and W.A. Wallace [Eds.] Computer Supported Risk Management. Kluwer Academic Publishers. Dordrecht. The Netherlands. pp. 321-360.
    Briggs G.A. (1984) 
    Plume Rise and Buoyancy Effects, Atmospheric Science and Power Production, D. Randerson, Editor, DOE/TIC-27601, Office of Scientific and Technical Information, US Dep. of Energy. 
    Douglas G.S., Kessler R.C., Carr L. (1990) 
    User's Manual for the Diagnostic Wind Model. EPA-450/4-90-007C. 
    Ermak L.D. (1991) 
    User's Manual for SLAB: An Atmospheric Dispersion Model for Denser-Than-Air Releases. National Technical Information Services (NTIS), DE91- 008443, Springfield, VA. 
    Fedra, K. and Winkelbauer, L. (1999)
    A hybrid expert system, GIS and simulation modeling for environmental and technological risk management. In: Environmental Decision Support Systems and Artificial Intelligence, Technical Report WS-99-07, pp 1-7, AAAI Press, Menlo Park, CA.
    Fedra, K. (1997)
    Integrated Risk Assessment and Management: Overview and State-of-the-Art. p3-18. In: Ale, B.J.M, Janssen, M.P.M., and Pruppers, M.J.M [eds] Risk 97 Book of Papers. Proceeding of the International Conference Mapping Environmental Risks and Risk Comparison, Amsterdam, 21-24 October 1997. RIVM, Bilthoven.
    Garnatz Th., Haack U., Sander M., Schröder-Preikschat W. (1996) 
    Experiences made with the Design and Development of a Message-Passing Kernel for a Dual-Processor-Node Parallel Computer. In Proceedings of the Twenty-Ninth Annual Hawaii International Conference on System Sciences. (Maui, Hawaii, January 3-6, 1996). IEEE Computer Society Press.
    Geist A., Beguelin A., Dongarra J., Jiang W., Manchek R., Sunderam V. (1994) 
    PVM: Parallel Virtual Machine, A User's Guide and Tutorial for Networked Parallel Computing. The MIT Press, Cambridge, Massachusetts.
    Gerharz I., Lux Th., Sydow A. (1997) 
    Inclusion of Lagrangian Models in the DYMOS System. In Proceedings of the IMACS World Congress 1997. (Berlin, Germany, August 25-29). W&T, Berlin, 6: 53-58.
    Gerharz, I., Mieth, P., Unger, S. (2000)
    A software system for environmental risk management - the HITERM approach, Systems Analysis Modeling Simulation, (to be published)
    Giloi W.K., Brüning U. (1991) 
    Architectural Trends in Parallel Supercomputers. In Proceedings of the Second NEC International Symposium on Systems and Computer Architectures. (Tokyo, Japan, August). Nippon Electric Corp.
    Goodin W.R., McRae G.J., Seinfeld J.H. (1980) 
    An objective analysis technique for constructing three-dimensional urban scale wind fields. J. Applied Meteorol., 19: 10-16. 
    Janicke L. (1991) 
    Ausbreitungsmodell Lasat. Handbuch Version 1.10. Ingenieur-Büro Dr. Lutz Janicke, Überlingen. 
    Kawamura P. I., Mackay D. (1987) 
    J. Hazardous Materials, 15, 343-364.
    Legg B.J., Raupach M.R. (1982) 
    Markov-Chain Simulation of Particle Dispersion in Inhomogeneous Flows: The Mean Drift Velocity Induced by a Gradient in Eulerian Velocity Variance. Boundary-Layer Meteorology, 24: 3-13. 
    Liu M.K., Yocke M.A. (1980) 
    Siting of wind turbine generators in complex terrain. J. Energy, 4: 10-16. 
    Moskowitz, P.D., Pardi, R.R., DePhillips, M.P., Meinhold, A.F. and Irla, B. (1995)
    Computer Models Used to Support Cleanup Decision Making at Hazardous and Radioactive Waste Sites. In: G.E.G. Beroggi and W.A. Wallace [Eds.] Computer Supported Risk Management. Kluwer Academic Publishers. Dordrecht. The Netherlands. pp. 275-319.
    Mieth, P.; Unger, S.; Gerharz, I. (1999)
    A model based tool for environmental risk management after accidental atmospheric release of toxic substances, In: MODSIM 99 - International Congress on Modeling and Simulation (Proceedings), Vol. 3, Oxley, L.; Scrimgeour, F.; Jakeman, A. (eds.), 562-572
    Mieth, P.; Unger, S.; Gerharz, I.; Jugel, M. L. (1999)
    HITERM: ein Arbeitsplatz für das Störfall-Management [HITERM: a workstation for incident management], Der GMD-Spiegel, Nr. 1/2, 45-47.
    O'Brien J.J. (1970) 
    Alternative solutions to the classical vertical velocity profile. J. Applied Meteorol., 9: 197-203. 
    Obukhov, A.M. (1959) 
    Description of Turbulence in Terms of Lagrangian Variables. Advances in Geophysics, 6: 113 -116. 
    Rumbaugh, J. et al. (1991)
    Object Oriented Modelling and Design. Prentice Hall, NJ, USA. ISBN 0-13-629841-9.
    Smith F.B. (1968) 
    Conditioned Particle Motion in a Homogeneous Turbulent Field. Atmospheric Environment, 2: 491-508. 
    Taylor G.I. (1921) 
    Diffusion by Continuous Movements. London Mathematical Society, 20: 196-211. 
    Thomson D.J. (1987) 
    Criteria for the Selection of Stochastic Models of Particle Trajectories in Turbulent Flows. Journal of Fluid Mechanics, 180: 529-586. 
    TNO (1997)
    Yellow Book (CPR 14E), Third Edition, 2 Volumes, 820 pages. TNO Institute of Environmental Sciences, Energy Research and Process Innovation, Apeldoorn, The Netherlands.
    Unger, S. ; Gerharz, I. ; Mieth, P. ; Wottrich, S. (1998)
    HITERM - High-Performance Computing for Technological Risk Management, Transactions of the Society for Computer Simulation, Vol. 15, 3, 109-114
    Überhuber C. (1995) 
    Computernumerik. Bd. 1, Springer Verlag, Berlin Heidelberg.
    Yamada T., Bunker S., Niccum N. (1987) 
    Simulation of the ASCOT Brush Creek Data by a Nested-grid, Second Moment Turbulence-closure Model and a Kernel Concentration Estimator. In Proceedings of the AMS 4th Conference on Mountain Meteorology, (Seattle, WA, August 24-28). 175-179.
    Zani, F. (1998)
    HITERM High Performance Technological and environmental Risk Management: strumenti informatici on-line di supporto alla pianificazione e gestione delle emergenze industriali. VGR 98 Conference, Pisa (I) 6-8 October


© Copyright 1995-2019 by:   ESS   Environmental Software and Services GmbH AUSTRIA