A hybrid expert system, GIS and simulation modeling for environmental and technological risk management
Fedra, K. and Winkelbauer, L.
Acknowledgements: HITERM is being funded, in part, by the Commission of the European Communities, DG III, under the ESPRIT Programme, High-Performance Computing and Networking.
The HITERM project
HITERM (http://www.ess.co.at/HITERM) is an international research project under the European ESPRIT technology programme on high-performance computing and networking (HPCN) for decision support applications. The project integrates high-performance computing on parallel machines and workstation clusters with a decision support framework based on a hybrid expert system. Application examples are drawn from the domain of technological risk assessment and management, in particular chemical emergencies in fixed installations or transportation accidents (Fedra 1998; Fedra and Weigkricht 1995). To integrate the various information resources in an operational decision support system, a flexible client-server architecture is used (Figure 1), based on TCP/IP and http. The central system, which runs the RTXPS expert system as the overall framework, is connected to a number of (conceptual) servers that provide high-performance computing and data acquisition tasks, as well as a number of clients that include mobile clients in the field.
The architecture features two interlinked expert system (ES) strategies, which draw upon an object data base of risk objects and a GIS, as well as a set of simulation models implemented in a distributed client-server environment that includes links to real-time data acquisition, e.g., for meteorological data. Explicit treatment and propagation of uncertainty is made possible by the use of Monte Carlo methods, in part implemented on parallel compute servers.
The RTXPS framework maintains the dialogue with the user, e.g., an operative in an incident command center. The real-time expert system controls communication with the various actors involved in an emergency situation, provides guidance and advice based on several data bases including Material Safety Data Sheets for hazardous substances, and triggers various simulation models to simulate the evolution of an emergency and predict human health and environmental impacts. The expert system compiles all necessary input information for the models and checks it for completeness, consistency, and plausibility. Based on the available information and some simple screening and ranking methods, it then triggers the most appropriate model or set of models, interprets the results, and translates them into guidance and advice for the operators. Embedded simulation models include a detailed source model for different release types including pool evaporation; atmospheric dispersion using either a multi-puff, multi-layer Eulerian, or a Lagrangian approach based on a 3D diagnostic wind field model; fire and explosion models; and a stochastic soil contamination routine. Real-time control and logging of data availability, user inputs and decisions, model results, and communication activities make it possible to use the system for operational management and training purposes, as well as for planning-oriented risk assessment tasks.
Decision support and information flow
From a conceptual point of view, the central object of the approach is an emergency scenario. Once started on the occasion of an incident or training session, the RTXPS framework queries the user for the type of incident and selects the appropriate knowledge base. The system, following a rule-based implementation of an operations manual or standard operating procedure, elicits relevant information from a number of information sources. This can include asking the user, retrieving data from the various data bases, prompting the user to establish communication channels to field personnel, or polling remote data acquisition systems.
Based on the developing context of the emergency scenario, the expert system may trigger a number of models that predict the likely evolution of the emergency and its impacts. The selection of the most appropriate model is based on the context or the results of previous modeling steps. As an example, the source model generates information on the total mass evaporated or directly escaping into the atmosphere and thus available for atmospheric dispersion, the mass fraction infiltrating into the soil, and the probabilities for fire and explosion. Based on these results and their respective probability distributions, the models are triggered in sequence with the most likely or dangerous impact scenario simulated first.
Figure 1: The systems architecture
Using Monte Carlo methods based on a priori probability distributions for relevant input parameters, which are adapted based on the information compiled, individual models are run for a large number of parameter input samples so that output probability density functions can be constructed. From these, the expert system selects either a 95% "worst case" scenario for further propagation, or the user can select a specific result or probability range.
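The sampling-and-selection step can be sketched as follows. The model function, parameter names, and distribution ranges here are hypothetical placeholders for illustration, not the actual HITERM source model:

```python
import random
import statistics

def run_source_model(evaporation_rate, pool_area, wind_speed):
    """Hypothetical stand-in for a source model: returns the total mass
    (kg) released to the atmosphere for one parameter sample."""
    return evaporation_rate * pool_area * 600.0 * (1.0 + 0.1 * wind_speed)

def monte_carlo(n_runs=1000, percentile=0.95):
    """Sample the a priori input distributions, run the model once per
    sample, and return the mean and the `percentile` worst case."""
    results = []
    for _ in range(n_runs):
        rate = random.uniform(0.02, 0.08)           # kg/m^2/s (assumed range)
        area = max(random.gauss(150.0, 25.0), 0.0)  # m^2 (assumed distribution)
        wind = random.uniform(1.0, 6.0)             # m/s
        results.append(run_source_model(rate, area, wind))
    results.sort()
    return statistics.mean(results), results[int(percentile * (n_runs - 1))]

mean, worst = monte_carlo()
print(f"mean: {mean:.0f} kg, 95% worst case: {worst:.0f} kg")
```

The sorted sample vector stands in for the output probability density function; picking the 95th-percentile element corresponds to the "worst case" selection described above.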
Following the computation of likely impacts, the expert system performs an assessment using the population data base by computing the number of people exposed to concentrations, pressures, or radiative heat fluxes above certain thresholds. On the basis of these spatially and temporally distributed impacts, further advice, e.g., for evacuation or the definition of exclusion and safety zones, is generated. In general, the expert system starts with a set of worst case assumptions, which it evaluates and provides advice for; it then attempts to eliminate possible scenarios, starting with the most dangerous ones, until the actual scenario can be confirmed and eventually controlled.
However, in parallel to this internally driven approach, the system must at any time be ready to accept external information asynchronously, update the emergency scenario based on field information or real-time data acquisition, and re-evaluate its strategy accordingly.
The RTXPS framework
Rule-based expert systems can either be goal driven, using backward chaining to test whether some hypothesis is true, or data driven, using forward chaining to draw new conclusions from existing data. Forward chaining implies that upon assertion of new knowledge all relevant inductive rules are fired exhaustively, effectively making all knowledge about the current state explicit within the state. Forward chaining may be regarded as progress from a known state (the original knowledge) towards a goal state. Backward chaining systems work from a goal state back to the original state; no rules are fired upon assertion of new knowledge. When an unknown predicate about a known piece of knowledge is detected, all rules relevant to the knowledge in question are fired until the question is answered or quiescence is reached.
In the RTXPS environment, forward chaining is used to guide the user from one state to the next based on the user's inputs (i.e., new knowledge). These inputs are obtained from various sources, one of them being rule-based backward chaining.
Several modifications to the general concept of forward chaining have been implemented in RTXPS to overcome the inherent inefficiency of this approach. The two most important modifications are that (a) rule firing stops immediately once enough evidence has been obtained to move on to the next state, which significantly improves the performance of the overall system (very important in real-time systems and for keeping the user alert); and (b) states themselves are defined and used as (action) objects directly in the forward chaining rules, making the development of the knowledge base and the transitions between states more transparent for the user and the knowledge engineer.
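Modification (a) can be illustrated with a minimal forward-chaining loop; the rule representation and state names below are invented for illustration and do not reflect the actual RTXPS syntax:

```python
def forward_chain(facts, rules, current_state):
    """Fire the rules relevant to the current state; stop at the first
    rule whose conditions hold and which names a successor state
    (the early-stop modification described above)."""
    for rule in rules:
        if rule["state"] != current_state:
            continue
        if all(facts.get(k) == v for k, v in rule["if"].items()):
            facts.update(rule.get("assert", {}))   # make new knowledge explicit
            return rule["next_state"]              # early stop: enough evidence
    return current_state                           # quiescence: remain in state

# Hypothetical emergency-protocol rules:
rules = [
    {"state": "alarm", "if": {"incident_confirmed": True},
     "assert": {"log": "incident verified"}, "next_state": "identify_train"},
    {"state": "alarm", "if": {"incident_confirmed": False},
     "next_state": "close_incident"},
]
print(forward_chain({"incident_confirmed": True}, rules, "alarm"))
# -> identify_train
```

Because states appear directly in the rules (modification (b)), each rule reads as an explicit state transition rather than an anonymous inference step.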
RTXPS operates in terms of ACTIONS, which are triggered by the forward chaining Rules; these Rules operate in a knowledge base domain that is shared between forward and backward chaining strategies, so that backward chaining inference can affect the forward chaining Rules and vice versa. The shared information is based on Descriptors, which are the variables both inference strategies work with. Descriptors can be purely symbolic (nominal or ordinal) or hybrid, the latter combining a set of ordinal symbolic values with a cardinal numerical representation in terms of ranges. For a description of the backward chaining system see Fedra and Winkelbauer, 1994.
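A hybrid Descriptor, combining ordinal symbolic values with cardinal numeric ranges, could be represented along these lines; the class layout and example ranges are assumptions for illustration, not the RTXPS data structure:

```python
from dataclasses import dataclass, field

@dataclass
class Descriptor:
    """Sketch of a hybrid Descriptor: each ordinal symbolic value is
    paired with a cardinal numeric range [low, high)."""
    name: str
    values: dict = field(default_factory=dict)  # symbol -> (low, high)

    def classify(self, number):
        """Map a numeric value onto its symbolic class."""
        for symbol, (low, high) in self.values.items():
            if low <= number < high:
                return symbol
        return "undefined"

wind = Descriptor("wind_speed", {
    "calm":     (0.0, 2.0),    # m/s (illustrative class boundaries)
    "moderate": (2.0, 8.0),
    "strong":   (8.0, 50.0),
})
print(wind.classify(5.3))   # -> moderate
```

A purely symbolic Descriptor would simply omit the numeric ranges; the hybrid form lets numeric model output and symbolic rule conditions refer to the same variable.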
ACTIONS consist of a hypertext part that maintains the user dialogue, and a number of functions that are triggered either automatically or manually by the user. The functions include triggers for simulation models, the backward chaining expert system, or external communication tasks such as data acquisition from monitoring systems, automatic dialing for phone connections, or sending automatically generated fax messages. All ACTIONS are logged with their time stamp, together with all instantiations and assignments of Descriptor values. Since ACTIONS can depend on external objects that may or may not be available at any point in time (like a telephone connection), they can be set pending; a timer is then started that reactivates the ACTION once it expires.
The built-in functions of an ACTION can include the backward-chaining expert system. The trigger is a request to provide the current value for a Descriptor. This can be done by either direct editing or by starting the rule-based inference. The system then uses a set of alternative methods enumerated in the respective Descriptor definition to obtain or update the Descriptor value in the current context. The inference engine compiles all necessary information for the appropriate Rules' input conditions recursively, evaluates the Rules, and eventually updates the target Descriptor. A typical use of this inference process is to assist the user in specifying scenario parameters: here the system collects circumstantial evidence to derive an informed guess where no hard data are available.
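The recursive inference described above can be sketched as follows; the rule format, Descriptor names, and fallback behaviour are invented for illustration and do not reflect the actual RTXPS knowledge representation:

```python
def infer(descriptor, rules, facts, ask):
    """Return the value of `descriptor`, deriving it via backward
    chaining where possible and falling back on `ask` (e.g., a user
    prompt) when no rule concludes on it."""
    if descriptor in facts:
        return facts[descriptor]
    for rule in rules:
        if rule["then"][0] != descriptor:
            continue
        # Recursively obtain every input Descriptor of the rule.
        if all(infer(d, rules, facts, ask) == v for d, v in rule["if"].items()):
            facts[descriptor] = rule["then"][1]
            return facts[descriptor]
    facts[descriptor] = ask(descriptor)   # no rule applies: ask the user
    return facts[descriptor]

# Hypothetical rule: a damaged container of a liquid implies a pool release.
rules = [
    {"if": {"substance_state": "liquid", "container_damaged": "yes"},
     "then": ("release_type", "pool")},
]
facts = {"substance_state": "liquid"}
value = infer("release_type", rules, facts, ask=lambda d: "yes")
print(value)   # -> pool
```

The `ask` callback is where circumstantial evidence enters: when no hard data are available, the user (or a default) supplies the missing input Descriptor, and the inference still reaches an informed guess for the target.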
Other ACTION functions trigger special editors to obtain information on more complex risk objects (such as trains, plants, etc.) which require specific dialogue windows for consistent editing of the attributes of the risk objects, and provide additional functionality such as links to on-line databases. An example of this kind of editor is the train editor (Figure 7), which is described in the application example below.
Coupling to GIS and models
Another set of functions in the ACTIONS triggers the GIS and the simulation models which are used to assess the danger and the impact of the potential risk sources.
Currently, the following models and model groups are coupled to RTXPS:
The PVM model group:
Monte Carlo implementation of the (parallel) spill and pool-evaporation model; the main parameters (defined by external sensitivity analysis) are sampled in a Monte Carlo framework; from the resulting distribution of solutions, the mean and the 95% level are used as input to the Lagrangian model to generate two solutions.
The SOURCE model:
computes the dynamic source term for the atmospheric dispersion models and for soil infiltration, and determines the probabilities of fire and explosion from the respective input data values (available mass); the user can specify the level of uncertainty for the input parameters (expressed as a percentage around the mean), the type of a priori distribution to be sampled, and the number of Monte Carlo runs. All these values are provided as defaults, but can be modified on demand. By default, the system selects a source term for the dispersion models from these results based on the 95th percentile; alternatively, the user can select an arbitrary class range from the mass distribution for the subsequent dispersion computations.
Probability of fire and explosion:
Based on the temperature range relative to the flashpoint of the substance (for fire), and the local concentration over the pool relative to the lower and upper explosive limits (for explosion), the probabilities of fire and explosion are calculated as a function of the duration of fire or explosion conditions.
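The underlying conditions can be sketched as simple threshold tests; the function names, the linear scaling with duration, and the numeric values are assumptions for illustration, not the HITERM formulation:

```python
def fire_possible(air_temperature_c, flashpoint_c):
    """Fire condition: ambient temperature at or above the substance's
    flashpoint."""
    return air_temperature_c >= flashpoint_c

def explosion_possible(concentration_vol_pct, lel, uel):
    """Explosion condition: vapour concentration between the lower and
    upper explosive limits (LEL/UEL)."""
    return lel <= concentration_vol_pct <= uel

def event_probability(condition_duration_s, reference_duration_s=600.0):
    """Assumed scaling: probability grows linearly with the time the
    condition persists, capped at 1."""
    return min(condition_duration_s / reference_duration_s, 1.0)

# Illustrative values (not taken from a real data sheet):
print(fire_possible(25.0, flashpoint_c=-11.0))     # True
print(explosion_possible(3.0, lel=1.4, uel=7.6))   # True
print(event_probability(300.0))                    # -> 0.5
```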
Another example of the direct representation of uncertainty in the simulation models is implemented for the determination of response times for soil contamination. The simple screening model estimates the time a given substance will need to reach the groundwater table, based on viscosity, soil permeability, and the distance to the groundwater table. For the simulation, the user can again override the defaults for the uncertainty around the input parameters, soil permeability and viscosity.
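A screening estimate of this kind, with user-adjustable uncertainty around the inputs, could look like the following; the functional form (travel time proportional to depth and viscosity, inversely proportional to permeability) and all numeric values are assumptions for illustration:

```python
import random

def response_time_hours(depth_m, permeability_m_s, viscosity_rel):
    """Assumed screening form: effective seepage velocity is permeability
    divided by relative viscosity; travel time = depth / velocity."""
    velocity = permeability_m_s / viscosity_rel     # m/s
    return depth_m / velocity / 3600.0              # hours

def response_time_pdf(depth_m, k_mean, mu_mean, uncertainty=0.3, n=500):
    """Sample permeability and viscosity uniformly within +/- `uncertainty`
    around their means; return the sorted distribution of travel times."""
    times = []
    for _ in range(n):
        k = random.uniform(k_mean * (1 - uncertainty), k_mean * (1 + uncertainty))
        mu = random.uniform(mu_mean * (1 - uncertainty), mu_mean * (1 + uncertainty))
        times.append(response_time_hours(depth_m, k, mu))
    times.sort()
    return times

times = response_time_pdf(depth_m=3.0, k_mean=1e-5, mu_mean=1.5)
print(f"median: {times[len(times)//2]:.0f} h, 95%: {times[int(0.95*len(times))]:.0f} h")
```

The sorted sample gives the operator both a typical response time and a conservative (e.g., 5th-percentile fastest arrival) bound for planning interventions.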
All these models have their individual user interfaces and dialog functions, but communicate their results through the Descriptors to RTXPS, where the backward chaining mechanism, together with the GIS, can then be used to condense complex system results (e.g., the output of a spatially distributed, dynamic, multi-parameter model) into a simple and directly understandable statement through linguistic classification.
Another use of the backward chaining capabilities of the expert system is to provide a synthesis of large model-generated data volumes. The chain of models used to simulate an accident scenario may easily generate data volumes on the order of gigabytes. These should, however, be summarized in a few simple variables such as the number of people exposed, the level of exposure, the area contaminated, the estimated material damage, and a rough classification of the accident: these classifications are needed to trigger the appropriate responses.
Starting from the dynamic model results, specific aggregate parameters are computed as a post-processing step or while the model is running, updating values for maxima of threshold related parameters.
For example, in the case of the atmospheric dispersion models, the critical parameters are the extent of the area covered, the population exposed in this area, and time factors such as the time until the first houses are reached by the cloud, and the duration of the exposure.
Starting with the model result and a (default or substance specific) concentration threshold, the system computes the area of the plume that exceeds the threshold, the populated area, and the intersection. Based on the known or estimated population density, two key parameters, namely the area exposed and the population exposed are computed and indicated.
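On a gridded representation, this overlay reduces to a cell-by-cell test; the grid layout, threshold, and population figures below are invented for illustration:

```python
def exposure_summary(concentration, population, threshold, cell_area_km2):
    """For each grid cell, test whether the plume concentration exceeds
    the threshold; accumulate the exposed area and the population living
    in the exceeding cells."""
    area_exposed = 0.0
    people_exposed = 0
    for conc_row, pop_row in zip(concentration, population):
        for conc, pop in zip(conc_row, pop_row):
            if conc >= threshold:
                area_exposed += cell_area_km2
                people_exposed += pop
    return area_exposed, people_exposed

# Hypothetical 3x3 plume (concentration units) and population grid:
conc = [[0.0, 0.4, 0.9],
        [0.2, 0.7, 1.2],
        [0.0, 0.1, 0.5]]
pop  = [[  0,  120,  300],
        [ 50,  400,  800],
        [  0,   10,   60]]
print(exposure_summary(conc, pop, threshold=0.5, cell_area_km2=0.25))
# -> (1.0, 1560)
```

The intersection of the plume with the populated area is implicit here: cells with zero population contribute to the exposed area but not to the exposed population.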
In addition to the model-derived values (which set the corresponding Descriptors in the expert system), a user-defined threshold value is used in this evaluation. It can be derived either from a set of rules, or from the hazardous chemicals data base (e.g., based on the Seveso II classification). In the simplest case, the user can set the threshold value directly with the expert system's editing functions.
In the next step, the expert system attempts a classification of the emergency in terms of public health effects, environmental damages, and material damages. In terms of the backward chaining inference procedure, these three Descriptors are Target Descriptors, i.e., they are at the top of the respective inference trees. Each of them has a set of associated Rules that use Descriptors as their inputs. The Descriptor values are set by the model output in the step above, but can, in principle, be overwritten interactively if the user repeats the (automatically triggered) inference procedure. If all the necessary data (Descriptor values) to reach a conclusion are available, the expert system will directly arrive at the results and display them, in a symbolic format, in the accident summary display.
An application example
A typical application example is the management of a train accident involving hazardous cargo. The user is an officer at an incident command center operated by the railway system. After receiving an external alarm by phone, usually from the police, the user selects the appropriate option in the system, i.e., emergency management: RTXPS now provides all prompts in the form of hypertext messages on the main console dialog window (Figure 6). As a first step, the user is asked to verify the nature of the emergency (train accident), and is then prompted to record the contact details of the caller for verification. The corresponding ACTION pops up the necessary editors and logs the answers. Based on the source of the alarm, a communication protocol is initiated that involves relaying the information gathered to various groups such as local fire brigades, railway operation centers, etc.
With the approximate location of the accident (obtained from the initial call) and the time, the railway operation center is queried for a train identification. In parallel, the system will zoom in on its map display to the area of the accident location. With the train identification, obtained by phone, the Rules now trigger an automatic download of train cargo information from an on-line database. If no hazardous material is on the train, the incident is logged and closed. If there is hazardous material on the train, the information is relayed to the field team, and an additional unit of fire fighters specialized in chemical spills is alerted.
With the detailed information of the train location (from the train operators) and its cargo (from the cargo information database), the system now constructs a representation of the train and all potential sources of risk (Figure 7). This is based on substance specific data, obtained from the embedded hazardous chemical data base and the on-line cargo information data base.
The expert system now scans all railway cars and performs an initial risk assessment. The car with the highest potential risk is selected for detailed analysis. If, however, more detailed information about the state of the accident (e.g., cars damaged, visibly leaking, or burning) becomes available from the intervention forces in the field, this information is entered in the knowledge base through the corresponding Descriptors, and the risk ranking of the cars is revised accordingly.
For a given railway car selected automatically based on the risk ranking or manually by the user, the next ACTION then triggers a SOURCE model that estimates the nature and amount of a hazardous material release. Since the necessary input parameters include the local meteorological conditions, an automatic download of local meteorological data is attempted; should this fail, the user is prompted to obtain this information from the intervention forces by radio.
Depending on the physical and chemical properties of a substance, and its transport conditions (temperature, pressure), the release characteristics (gaseous and/or liquid, flow over time) are then estimated. With a pool evaporation model where appropriate, the total mass spilled is partitioned into an atmospheric release and a liquid fraction that may infiltrate into the soil. The probabilities of fire and explosion are monitored based on flammability and explosion limits of temperature and concentrations. Interventions can be simulated explicitly by specifying a cut-off time for the release and the evaporation. The SOURCE model is run in a Monte Carlo framework so that PDFs for the source terms for the consequence models are generated (Figure 4).
The results are summarized and reported to the user, who may relay them to the intervention team. Based on the release characteristics, the next set of Rules selects the most appropriate consequence model: atmospheric dispersion, fire, explosion, or soil infiltration. Where necessary to obtain the required much-better-than-real-time performance, a high-performance compute server is used in client-server mode (Unger et al., 1998). The expert system checks, for each model, whether all the required input data are available and within plausible ranges. If not, an appropriate message is issued and an editor with the possibility to start a rule-based deduction for parameter estimation is provided. Once the input is complete, consistent, and plausible, the model is triggered. Its results are displayed to the user in a graphical, spatially distributed, and dynamic (animated) display, depending on the dimensionality of the underlying model (Figure 5).
Model results are then combined with the population data base to estimate casualties. These, together with the area exposed above no-effect thresholds, are used to update the risk assessment for a given source. The system then returns to the train display, ready to accept new information from the user, or to evaluate the next source according to the updated risk ranking. Where the estimated consequences of a potential or observed source involve casualties or the necessity for evacuation, the system provides the necessary information to the operator, who in turn informs the intervention forces in the field. This communication process can be simplified by using mobile clients that connect the field teams directly to the expert system. The communication, in addition to the hypertext prompts, includes various editor tools (Figure 8) and the option to explain the expert system's reasoning step by step (Figure 9).
The expert system keeps cycling between gathering scenario information, updating the status description and ranking of potential or actual risk sources, evaluating their impact, and providing advice on this basis until there is no more possible source, i.e., everything is safe and the incident is under control. A final set of Rules will then prepare the required reports and distribute them over the appropriate communication channels.
The combination of forward chaining, to implement a context-sensitive operations protocol with real-time elements, and backward chaining, to provide support for data compilation and estimation, has proven very effective for the implementation of a real-time environmental decision support system. By combining this hybrid expert systems approach with powerful simulation models, GIS, and multi-media display formats, it becomes possible to bring together advanced analytical tools with an easy-to-use decision support framework for complex and mission-critical application domains.
The project HITERM is supported by the European Commission under the auspices of the specific programme for research and technical development in the field of information technologies.
© Copyright 1995-2016 by: ESS Environmental Software and Services GmbH AUSTRIA