Project On-line Deliverables: D06.1

Decision Support and Expert Systems:   Implementation Report  

    Programme name: ESPRIT
    Domain: HPCN
    Project acronym: HITERM
    Contract number: 22723
    Project title: High-Performance Computing and Networking
    for Technological and Environmental Risk Management
    Project Deliverable: D06.1
    Related Work Package:   WP 6
    Type of Deliverable: Technical Report
    Dissemination level: project internal
    Document Author: Kurt Fedra, ESS
    Edited by: Kurt Fedra, ESS
    Document Version: 1.4
    First Availability: 1998 03 30
    Last Modification: 1999 12 28


    HITERM is designed to provide HPCN-based decision support for technological risk analysis, including both risk assessment and risk management aspects.

    As a Decision Support System it uses HPCN technologies to:

    • generate decision-relevant information fast and reliably;
    • explicitly address uncertainty as the determining feature of risk analysis;
    • deliver this information to the user quickly and in a directly usable format.

    HITERM implements several decision support paradigms in parallel and integrated with each other, reflecting both the complex nature of the application domain and the scope of the intended applications, which range from strategic planning (risk assessment) to training and real-time emergency management.

    The DSS paradigms applied are, basically:

    • comparative analysis and (multi-criteria) selection, based on scenario analysis, primarily applicable in the planning domain;
    • rule-based classification, applicable to both the planning and real-time domains;
    • uncertainty analysis and sensitivity analysis, which are applied to the criteria used in both comparison and classification;
    • and real-time rule-based guidance, applicable to the real-time training and emergency management domain.

    These paradigms are implemented, respectively, through:

    • a set of tools for the direct comparison of HPCN simulation generated options or scenarios and associated statistical analysis;
    • a backwards chaining rule-based expert system linked to the simulation models;
    • Monte Carlo Analysis and a Direct Differentiation approach (covered in Deliverable D05, Model Calibration and Uncertainty Analysis);
    • a real-time forward chaining rule-based expert system for emergency management support.

    The Implementation Report
    • reviews the basic concepts and an approach to strategic planning for technological risk management, based on a discrete multi-criteria choice paradigm;

    • discusses scenario analysis as the primary vehicle to generate decision alternatives;

    • presents comparative evaluation and classification together with uncertainty analysis, sensitivity analysis, and the concept of robustness used in the comparative evaluation;

    • presents a reference point approach to discrete multi-criteria decision analysis as an efficient and easy-to-use tool;

    • presents the extensions to turn these elements into an effective real-time guidance system, based on an innovative integration of a forward-chaining expert system with a backward chaining approach and the HITERM simulation models.

    In addition, a complete presentation of the Swiss Case Study on railway transportation as an application example is given in a separately published Research Report:

    Fedra, K. and Winkelbauer, L. (1999)
    A hybrid expert system, GIS and simulation modeling for environmental and technological risk management. In: Environmental Decision Support Systems and Artificial Intelligence, Technical Report WS-99-07, pp 1-7, AAAI Press, Menlo Park, CA.
    This research paper is also available on-line.

Table of Contents

  • DSS: an introduction
  • A DSS approach for planning
    • Scenario Analysis
    • Comparative Evaluation
    • Sensitivity, Uncertainty, and Robustness
    • Discrete Multi-Criteria Optimization
  • A DSS approach for emergency management
  • Rule-based Guidance
  • Generic DSS tools: Rule-based Classification
  • Benefits and Functions

DSS: an introduction

The ultimate objective of a computer based decision support system for technological and environmental risk management is to improve planning and operational decision making processes by providing useful and scientifically sound information to the actors involved in these processes, including public officials, planners and scientists, industrial operators, emergency management personnel and civil defense forces.

This information must be:

  • timely in relation to the dynamics of the decision problem: this is particularly challenging in real-time decision making situations such as in emergency management;
  • accurate in relation to the information requirements;
  • directly understandable and usable;
  • easily obtainable, i.e., cheap in relation to the costs implied by the problem.

Decision support is a very broad concept, involving both rather descriptive information systems that simply demonstrate alternatives, and more formal, normative or prescriptive optimization approaches that design them. Any decision problem can be understood as revolving around a choice between alternatives.

These alternatives are analyzed and ultimately ranked according to a number of criteria by which they can be compared; these criteria are checked against the objectives and constraints (our expectations), involving possible trade-offs between conflicting objectives. An alternative that meets the constraints and scores highest on the objectives is then chosen. If no such alternative exists in the choice set, the constraints have to be relaxed, criteria have to be deleted (or possibly added), and the trade-offs redefined.

This choice process may be iterative and at leisure, when the decision maker(s) can experiment with criteria, objectives, and constraints to develop their preference structures and reduce the set of alternatives step by step until a preferred solution is found.

Or this choice process may be a dynamic, real-time sequence of numerous small but interrelated decisions in the form of (alternative) actions that need to be taken under sometimes extreme pressure of time, and under considerable uncertainty.

However, the key to an optimal choice is in having a set of options to choose from that does indeed contain an optimal - or at least satisfactory - solution. Thus, the generation or design of alternatives is a most important, if not the most important step. In a modeling framework, this means that the generation of scenarios must be easy so that a sufficient repertoire of choices can be drawn upon.

The selection process is then based on a comparative analysis of the ranking and elimination of (infeasible) alternatives from this set. For spatially distributed and usually dynamic models -- natural resource management problems most commonly fall into this category -- this process is further complicated, since the number of dimensions (or criteria) that can be used to describe each alternative is potentially very large. Since only a relatively small number of criteria can usefully be compared at any one time (due to the limits of the human brain rather than computers), it seems important to be able to choose almost any subset of criteria out of this potentially very large set of criteria for further analysis, and modify this selection if required.
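A minimal sketch of such criteria-subset screening (all scenario names, criteria, and bounds below are illustrative assumptions, not HITERM data or code):

```python
# Hypothetical sketch: screen scenario alternatives against a user-chosen
# subset of criteria, eliminating infeasible ones from the choice set.

def screen(alternatives, bounds):
    """Keep alternatives whose selected criteria fall within [lo, hi] bounds.

    alternatives: dict name -> dict of criterion -> value
    bounds:       dict criterion -> (lo, hi); only these criteria are checked
    """
    feasible = {}
    for name, criteria in alternatives.items():
        if all(lo <= criteria[c] <= hi for c, (lo, hi) in bounds.items()):
            feasible[name] = criteria
    return feasible

scenarios = {
    "S1": {"cost": 10.0, "population_exposed": 1200, "area_km2": 4.5},
    "S2": {"cost": 14.0, "population_exposed":  300, "area_km2": 2.0},
    "S3": {"cost":  8.0, "population_exposed": 5000, "area_km2": 9.0},
}

# The analyst compares only a small subset of criteria at any one time:
kept = screen(scenarios, {"population_exposed": (0, 2000), "cost": (0, 12.0)})
print(sorted(kept))   # only S1 satisfies both bounds
```

Tightening or relaxing the bounds, or swapping in a different criteria subset, re-screens the same choice set interactively.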

While this classical approach is most applicable for planning situations where the decision maker is at leisure to contemplate alternatives, the management of an emergency situation in real-time does not offer these luxuries: here efficient decisions have to be taken in a minimum of time, under considerable psychological pressures, and often under large uncertainty.

Real-time decision support can therefore build on the above concepts of comparative analysis, but must implement them in an efficient and effective way that minimizes time and effort by the decision maker (or rather operator) during an emergency.

Instead of offering and manipulating many alternatives subject to the decision maker's preferences, here we must present a best alternative (or a very small set of efficient alternatives with a clear ranking and clear trade-offs), i.e., strategies, plans, or a set of actions in the form of rather firm guidance, but with enough context information to allow the operator to exercise ultimate judgement and decision power. This is based on a set of a priori defined strategies and options that are adapted dynamically depending on context, i.e., the characteristics and circumstances of the evolving emergency.

A DSS approach for planning

In HITERM, the decision support approach chosen for the strategic planning applications is primarily constrained by the characteristics of the underlying system. These are:

  • dynamic, with a typical time resolution in the order of minutes;
  • spatially distributed, with spatial resolution ranging from the street level (meters) to the regional air quality grid (100 m);
  • highly non-linear and involving time-delays and memory in the cause-effect relationships, e.g., cumulative exposure.

These problem characteristics preclude any straightforward optimization approach.

Consequently, HITERM uses an approach centered on

  • Scenario Analysis and the
  • comparative evaluation of scenarios.
This can lead, eventually, if the set of alternatives is reasonably large and complex (i.e., of high attribute dimensionality), to
  • discrete multi-criteria optimization.

Scenario Analysis

In a DSS framework, Scenario Analysis supports the user in exploring a number of WHAT-IF questions. The scenario is the set of initial conditions and driving variables (including any explicit decision variables) that completely characterize the system behavior, which is expressed as a set of output or performance variables.

Control or Decision Variables

The control variables or decision parameters the user can set to define a scenario include:

  • industrial site or location along one of the networks (transportation or pipelines);
  • accident conditions (e.g., spill, fire, explosion);
  • receiving environment (air, soil and groundwater, surface water);
  • meteorological conditions (primarily wind and temperature) or hydrological conditions (flow);
  • mitigation measures.

Editing functions

An important aspect here is the translation of the more or less technical (and sometimes cryptic) model data requirements into concepts and terms that are directly problem relevant and directly understandable to the user. A general concept used is the specification of most user defined values in relative terms and as a selection from a list of predefined, valid and meaningful options.

The editing (and estimation of parameters) within HITERM is supported by an embedded expert system that can be used to ensure

  • completeness
  • consistency
  • plausibility
of any or all user inputs.

All parameters are represented by Descriptors which are terms used in the expert systems Knowledge Base. Descriptors are Objects that have several Methods available to determine or update their value in a given context (the scenario). One such method is to ask the user through an interactive dialog box.

The complete syntax of a Descriptor definition is described in part 2 of this Deliverable, the Technical Manual.

A concrete (but simple) example is the Descriptor used for the volume of a storage container:

U m3
### applies to storage containers
V  very_small[   1.0,  5.0,    8.]
V  small     [   8.0, 10.0,  20.0]
V  medium    [  20.0, 50.0, 100.0]
V  large     [ 100.0, 500., 800.0]
V  very_large[ 800.0,1000.,2000.0]
Q What is the volume of the container or vessel ?

The upper limit that can be used is, however, constrained by the selection of the source (e.g., a storage container or transportation vehicle) and the maximum mass of a hazardous substance it contains.
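The Descriptor mechanism can be sketched as follows (an illustrative Python model, not the actual HITERM implementation; the class layout and method names are assumptions):

```python
# Illustrative sketch of a Descriptor: symbolic values, each with a
# [minimum, default, maximum] numeric range (the V records above), plus
# the Question text used when the user is asked for a value.

class Descriptor:
    def __init__(self, name, unit, classes, question):
        self.name = name
        self.unit = unit
        self.classes = classes        # list of (label, (lo, default, hi))
        self.question = question

    def classify(self, value):
        """Return the symbolic label whose [lo, hi] range contains value."""
        for label, (lo, default, hi) in self.classes:
            if lo <= value <= hi:
                return label
        return None                   # outside all declared ranges

container_volume = Descriptor(
    "container_volume", "m3",
    [("very_small", (1.0, 5.0, 8.0)),
     ("small",      (8.0, 10.0, 20.0)),
     ("medium",     (20.0, 50.0, 100.0)),
     ("large",      (100.0, 500.0, 800.0)),
     ("very_large", (800.0, 1000.0, 2000.0))],
    "What is the volume of the container or vessel ?")

print(container_volume.classify(60.0))   # -> medium
```

The same object serves both directions: a user's symbolic choice supplies a default numeric value, and a numeric model result maps back to a symbolic class.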

Performance or Impact Variables

The performance variables measure the overall behavior of the system (in terms of a set of partly implicit and partly explicit objectives) in an aggregate form. This is clearly necessary for simple reasons of cognitive limitations. Examples include:

  • total area covered;
  • total population exposed;
  • or an arbitrary spatially integrated (summed over n land parcels or model grid cells) environmental impact function of the type

        I = Σi  Wi a (Ci / Co)^b

    where Ci represents the estimated immission in land parcel (grid cell) i, Co is a reference or no-effects threshold, and Wi represents a landuse-dependent weight or penalty. The coefficients a and b describe the dose-effect behaviour of the hazardous substance in question.


An additional important function provided by the user interface is the visualization of the scenario parameters, i.e., the current status of an emergency, and the related model results. Due to the relatively large number of variables (spatially distributed, dynamic, multi-parameter) graphical and symbolic representation is used to summarize numerous, and in particular spatially distributed and dynamic data.

Details on the user interface and visualization aspects in HITERM are described in Deliverable D04, Visualization and Multi-Media.

In summary, simple scenario analysis results in a single (set of) result(s), that is (implicitly or explicitly) compared against a set of (absolute) objectives (expectations) and constraints such as environmental or health standards.

Comparative Evaluation

Comparative evaluation requires that the performance variables of more than one scenario (minimally two for direct pairwise comparison) are displayed to the user simultaneously. For the spatially distributed (network or domain-grid specific data) this is accomplished by displaying equivalent data sets in two or four parallel windows. For the performance variables, this is accomplished by the parallel display, tabular and graphical, of the respective values.

The example shows two scenarios (in fact, one simulated and one observed) from a dynamic 3D simulation model, with a display of the delta (simple arithmetic difference on a cell-by-cell basis) and a statistical evaluation window.

In both cases, graphical and numerical, the side-by-side display can be augmented by the calculation and display of relative and absolute differences (deltas) of the respective scenario performance variables, for example, as a map of differential (increases and decreases) immission from two pollutant concentration maps representing two separate accident scenarios.

In summary, comparative scenario analysis results in direct comparison of two (or a set of) result(s), that are explicitly compared against each other and interpreted in terms of improvement or deterioration of performance variables vis a vis the objectives and constraints.

Discrete Multi-Criteria Optimization

Since each scenario is described by more than one performance variable or criterion, the direct comparison does not necessarily result in a clear ranking structure: improvements in some criteria may be offset by deterioration in others. This can only be resolved (and resulting in an eventual ranking and selection) through the introduction of a preference structure that defines the trade-offs between objectives.

The basic optimization problem can be formulated as:

    max { q = f(x) : x ∈ Xo }

where x = (x1, ..., xn) is the vector of decision variables (the scenario parameters), and f = (f1, ..., fk) defines the vector of objective functions. Xo defines the set of feasible alternatives that satisfy the constraints:

    Xo = { x : g(x) <= 0 }

In the case of numerous scenarios with multiple criteria, we can define the partial ordering of two solutions q' and q'':

    q' dominates q''   iff   q'i >= q''i   for all i = 1, ..., k

where at least one of the inequalities is strict. A solution for the overall problem is a Pareto-optimal solution: one that is not dominated by any other feasible solution.

As an overall decision support tool, we can now use a discrete multi-criteria approach to find an efficient strategy (scenario) that satisfies all the actors and stake holders involved in the traffic and environmental management decision processes.

The preferences of decision makers can conveniently be defined in terms of a reference point, that indicates one (arbitrary but preferred) location in the solution space. Normalizing the solution space in terms of achievement or degree of satisfying each of the criteria between nadir and utopia allows us to find the nearest available Pareto solution efficiently by a simple distance calculation.
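The reference point selection described above can be sketched in a few lines (an illustrative example; the criteria, values, and the choice of a Chebyshev distance norm are assumptions, not HITERM code):

```python
# Illustrative sketch: normalize each criterion to achievement in [0, 1]
# between nadir (worst) and utopia (best), then pick the feasible
# alternative nearest to the reference point by a simple distance.

def nearest_to_reference(alternatives, nadir, utopia, reference=None):
    """Return the name of the alternative nearest to the reference point.

    All criteria are assumed to be maximized. If no reference point is
    given, the utopia point (achievement 1.0 everywhere) is used.
    """
    crits = list(nadir)
    if reference is None:
        reference = {c: 1.0 for c in crits}   # implicit utopia reference

    def achievement(values):
        return {c: (values[c] - nadir[c]) / (utopia[c] - nadir[c])
                for c in crits}

    def distance(values):
        a = achievement(values)
        return max(abs(reference[c] - a[c]) for c in crits)  # Chebyshev norm

    return min(alternatives, key=lambda name: distance(alternatives[name]))

nadir  = {"safety": 0.0,  "economy": 0.0}
utopia = {"safety": 10.0, "economy": 10.0}
options = {
    "A": {"safety": 9.0, "economy": 2.0},
    "B": {"safety": 6.0, "economy": 7.0},
    "C": {"safety": 2.0, "economy": 9.0},
}
print(nearest_to_reference(options, nadir, utopia))   # -> B (balanced)
```

Moving the reference point toward one criterion shifts the selection accordingly; for example, a safety-heavy reference point would favor alternative A.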

Since decision and solution space are of relatively high dimensionality, the direct comparison of a larger number of alternatives becomes difficult in cognitive terms. The data sets describing the scenarios can be displayed in simple scattergrams, using a user defined set of criteria for the (normalized) axes. Along these axes, constraints in terms of minimal and maximal acceptable values of the performance variable in question can be set, leading to a screening and reduction of alternatives.

As an implicit reference point, the utopia point can be used. Consequently, unless the user overrides this default by specifying an explicit reference point, the system always has a solution (the feasible alternative nearest to the reference point) that can be indicated and highlighted on the scattergrams and in a listing of named alternatives.

In parallel, graphical representations of the spatially distributed parameters can be shown as thematic maps. The visualization tools based on GIS and multi-media formats of the SIMTRAP server system support a more intuitive and holistic understanding of alternatives, which aids the definition of a reference point and thus supports the decision making process.

Sensitivity, Uncertainty, and Robustness

Deliverable D05 provides an overview of Model Calibration and Uncertainty Analysis.

From the DSS point of view, uncertainty is simply another of the (multiple) criteria that needs to be displayed and evaluated. In many cases the uncertainty, e.g., represented by a set of Monte Carlo solutions, can be reduced to a set of discrete scenarios by selecting, for example, different probability levels (or risk levels) in the reconstructed probability distribution.

As an example, given a set of Monte Carlo solutions that we can consider as samples, we obtain a frequency distribution of the number of people exposed. Fitting an appropriate probability distribution (with an appropriate transformation), we can now select a value at two standard deviations from the mean, interpreted as a 95% probability level. Thus, based on the results of the Monte Carlo analysis, the user can select and compare different probability levels of exceeding, or staying under, certain threshold values of key criteria.

As an alternative, the concept of expected value can be used to characterize the probabilistic nature of the simulation results.
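The reduction of a Monte Carlo ensemble to discrete probability levels and an expected value can be sketched as follows (the sample distribution and its parameters are made-up illustrative numbers; the log transformation stands in for the "appropriate transformation" mentioned above):

```python
# Illustrative sketch: reduce a set of Monte Carlo solutions to
# (a) a value at two standard deviations from the fitted mean and
# (b) the expected-value alternative.

import math
import random
import statistics

random.seed(42)
# Stand-in Monte Carlo results: number of people exposed in each of 2000 runs.
samples = [random.lognormvariate(7.0, 0.4) for _ in range(2000)]

# Fit a distribution on a log scale (a suitable transformation for a
# skewed, strictly positive variable such as an exposure count).
logs = [math.log(s) for s in samples]
mu, sigma = statistics.mean(logs), statistics.stdev(logs)

# Value at two standard deviations from the mean of the fitted
# distribution, interpreted as a ~95% probability level.
level_2sd = math.exp(mu + 2.0 * sigma)

# The expected-value alternative.
expected = statistics.mean(samples)

print(f"expected exposure: {expected:8.0f} people")
print(f"95% level:         {level_2sd:8.0f} people")
```

Each selected probability level yields one discrete scenario that can then enter the comparative evaluation like any other.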

Representation of Uncertainty

Explicit representation and treatment of uncertainty is implemented at several stages in HITERM.

The examples include:

  • PVM model group: Monte-Carlo implementation of the (parallel) spill and pool-evaporation model; the main parameters (defined by external sensitivity analysis) are sampled in a Monte-Carlo framework; from the resulting distribution of solutions, the mean and 95% level are used as input to the Lagrangian model to generate two solutions.

  • SOURCE model

    computes the dynamic source term for the atmospheric dispersion models and soil infiltration, and determines the probabilities for fire and explosion from the respective input data values (available mass); the user can determine the level of uncertainty for the input parameters (expressed as a percentage around the mean), the type of a priori distribution to be sampled, and the number of Monte-Carlo runs. All these values are provided as defaults, but can be modified on demand.

The results are shown in four parallel windows:
  • upper left: ensemble of solutions for evaporation rate against time;
  • lower left: ensemble of cumulative mass evaporated;
  • lower right: frequency distribution of total mass;
  • upper right: frequency distribution of evaporation rate for a selected class of the mass distribution.
By default, the system selects a source term for the dispersion models from these results based on the 95th percentile; alternatively, the user can select an arbitrary class range from the mass distribution for the subsequent dispersion computations.

Probability of fire and explosion

During the run, the probabilities of fire and explosion are indicated in two separate windows. This is based on the temperature range versus the flashpoint of the substance for fire, and on the local concentration over the pool versus the upper and lower explosivity limits.

These values are reported at the end of a run as a probability for fire and explosion, depending on the duration of fire or explosion conditions.


Another example of the direct representation of uncertainty in the simulation models is implemented for the determination of response times for soil contamination.

The simple screening model estimates the time a given substance will need to reach the groundwater table, based on viscosity, soil permeability, and the distance to the groundwater table.

For the simulation, the user can again override the defaults for the uncertainty around the input parameters, soil permeability and viscosity.

The display shows the a priori distributions of the two main input parameters, viscosity and soil permeability, and the frequency distribution of results below. These indicate:

  • the deterministic solution
  • a worst case 95% value
  • the mean value of the frequency distribution
  • the median value of the frequency distribution
of the response time (the time until the substance reaches the groundwater table).
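The four reported values can be sketched as follows. The travel-time formula used here (time = distance × viscosity / permeability) is a stand-in assumption for illustration only, as are all parameter values; it is not the actual HITERM screening model:

```python
# Illustrative sketch: Monte Carlo screening of the groundwater response
# time, reporting the deterministic solution, a worst-case level, and the
# mean and median of the frequency distribution.

import random
import statistics

def response_time(distance_m, viscosity, permeability):
    # Stand-in formula: slower (more viscous) and less permeable -> longer.
    return distance_m * viscosity / permeability

random.seed(1)
distance = 5.0                    # m to the groundwater table (illustrative)
visc0, perm0 = 1.0e-3, 1.0e-5     # nominal input values (illustrative units)

deterministic = response_time(distance, visc0, perm0)

# Monte Carlo: sample the two uncertain inputs +/- 50% around the mean
# from a uniform a priori distribution (the user-settable defaults).
runs = sorted(
    response_time(distance,
                  random.uniform(0.5 * visc0, 1.5 * visc0),
                  random.uniform(0.5 * perm0, 1.5 * perm0))
    for _ in range(1000))

# Short travel times are the worst case: the 5th percentile of the time
# corresponds to 95% confidence that the substance arrives no sooner.
worst_95 = runs[int(0.05 * len(runs))]

print(f"deterministic: {deterministic:8.0f} s")
print(f"worst case:    {worst_95:8.0f} s")
print(f"mean:          {statistics.mean(runs):8.0f} s")
print(f"median:        {statistics.median(runs):8.0f} s")
```

The spread between the worst-case and median values gives the operator a direct feel for how much response time can safely be assumed.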

A DSS approach to Emergency Management

The Real-time Expert System

The real-time Accident Management part of HITERM is driven by a real-time, forward-chaining expert system (RTXPS). This constitutes the top layer of control, above the simulation model layer that primarily represents the planning component.

The implementation in HITERM is based on a set of ACTIONS (similar to XPS Descriptors) and forward chaining Rules.

ACTIONS are triggered by the Rules, which use the status of other ACTIONS and EMERGENCY PARAMETERS: data including externally obtained information and Descriptors, which can be derived through model runs, the expert system, or data base queries. The EMERGENCY PARAMETERS define the dynamic context describing the evolving emergency.

An ACTION can have four different status values:

  • ready
  • pending
  • ignored
  • done
For the status pending, a timer interval (to avoid being asked about a pending request at too short intervals) can be set in the ACTION declaration.

An ACTION declaration looks like:

ACTION Some_Action
A alias_name
V  ready / pending / ignored / done /
P 180   # timer set to 180 seconds
Q For a \value{accident_type} you have
Q to specify the total mass or spill volume involved;
Q you can enter this directly, or refer to the
Q plant and container database of \value{accident_site}:
F get_descriptor_value(spill_volume)
ENDACTION

ACTION declarations are stored in the file Actions in the system's KnowledgeBase (by default located in the directory $datapath/KB).

The declaration of an ACTION object is blocked between the ACTION and ENDACTION keywords.

The ACTION keyword record is followed by the unique name of the ACTION (Some_Action), which may be followed by an optional alias after the A keyword.

The V keyword introduces the array of legal values or states.

P denotes the timer for pending requests in seconds.

Q records contain the textual (hypertext syntax) part of the ACTION's REQUEST. Please note that the \value{Descriptor} function can be embedded in the text of the ACTION REQUEST; this will automatically insert the current value of the respective Descriptor in the text.

One or more optional F records enumerate functions that the action will trigger automatically, and in sequence, when the user presses the DO IT button in the ACTION REQUEST DIALOGUE window.

The associated Rule would look like:

IF   accident_type == chemical_spill
AND  [Descriptor] [operator] [value]
OR   [Descriptor] [operator] [value]
AND  [ACTION]     [operator] [value]
OR   [ACTION]     [operator] [value]
THEN Some_Action => ready

where, depending on the value of some Descriptors and/or the status of some ACTIONs, the status of the ACTION Some_Action is set to ready and thus displayed to the operator.

The general syntax of ACTIONs and Rules is similar to the Rules and Descriptors of the current Knowledge Base and inference engine in the embedded XPS (backward-chaining) expert system. The main difference is that the THEN part of the Rule can only trigger (SET TO ready) an ACTION and NOT assign values.
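The triggering mechanism can be sketched in a few lines (an illustrative model of the behaviour described above; the rule representation, names, and timer handling are assumptions, not RTXPS code):

```python
# Illustrative sketch: Rules set ACTIONs to "ready"; ACTIONs marked "done"
# or "ignored" deactivate the Rules that refer to them; "pending" ACTIONs
# are only reconsidered after their timer interval (the P record) elapses.

import time

class Action:
    def __init__(self, name, timer=0):
        self.name = name
        self.status = None          # not yet triggered by any Rule
        self.timer = timer          # pending re-ask interval, seconds
        self.pending_since = None

actions = {"Some_Action": Action("Some_Action", timer=180)}
descriptors = {"accident_type": "chemical_spill"}
log = []                            # the ACTION LOG: (name, new status)

def set_status(name, status):
    action = actions[name]
    action.status = status
    if status == "pending":
        action.pending_since = time.time()
    log.append((name, status))

# One Rule: IF accident_type == chemical_spill THEN Some_Action => ready
rules = [
    {"if": lambda: descriptors["accident_type"] == "chemical_spill",
     "then": "Some_Action"},
]

def forward_chain():
    for rule in rules:
        target = actions[rule["then"]]
        if target.status in ("done", "ignored"):
            continue                # pre-processor excludes finished ACTIONs
        if target.status == "pending" and \
           time.time() - target.pending_since < target.timer:
            continue                # defer until the timer has elapsed
        if rule["if"]():
            set_status(rule["then"], "ready")

forward_chain()
print(actions["Some_Action"].status)      # the Rule fires: ready
set_status("Some_Action", "done")         # operator confirms completion
forward_chain()                           # the Rule is now excluded
print(actions["Some_Action"].status)      # stays done
```

Each pass over the rule base corresponds to one pre-processor run; the log accumulates the time-ordered status changes, as the ACTION LOG does.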


The operating sequence is started by entering the module, where an ACTION Zero_Action (its default status is ready) is displayed; it requests the user to press a START button that starts, and possibly sets, the Accident-Time clock, running in tandem with the real-time clock.

Marking the Zero_Action as DONE will trigger the first round of Rule pre-processing (selection) and forward chaining through the rule-base;

THEN first_action => ready
first_action could then for example:
  • require the operator to call a specific phone number,
  • press the red alarm button, or
  • define the type of the emergency.

Once the user has completed the ACTION REQUEST, and verified its results, he marks it as DONE in the ACTION DIALOGUE, which will also mark all Rules that can trigger the ACTION as done.

The next pre-processor run will then select only the subset of Rules that are active (excluding any Rules referring to ACTIONS marked done or ignored).

A special case is the ACTION status of pending, where the time-stamp, set when PENDING was selected by a user in the ACTION DIALOGUE, must be compared before including or excluding a Rule in the current run.

An example of the usefulness of this feature would be a phone call that should be made, but cannot be completed because of a busy connection: it can be deferred without interrupting the continuing operation of the system. The period for which this ACTION should be deferred until the next trial is defined in the P field of the ACTION declaration, in seconds.

All status changes of ACTIONS are entered in an ACTION LOG together with the time stamp of their status change, to provide a complete log of the sequential operations of the system.

ACTIONS may request activities of the operator that may or may not involve the system directly. They may include:

  • external activities may include, for example, communication by phone or fax with other institutions, or obtaining relevant information (EMERGENCY PARAMETERS) from external sources;
  • internal activities will involve the use of the system such as entering information or using specific tools such as models to determine more EMERGENCY PARAMETERS.
For internal activities, the ACTION DIALOGUE may offer the option to DO IT; this button will then trigger the appropriate function with the necessary parameters automatically, for example, opening an editor dialog box to enter a value (an EMERGENCY PARAMETER), starting an inference chain (backward chaining) to estimate such a value, or starting a model.

Upon successful return from this operation, the user then marks the ACTION as done, which will trigger a new run of Rule filtering and evaluation, leading the user through the successive steps of the emergency management procedure as defined by the rules, and influenced by the changing context of the EMERGENCY PARAMETERS.

The procedure ends when either:

  • LAST_ACTION is triggered by a Rule (this would require the operator to verify the return to normal conditions), or
  • there are no more applicable ACTIONS that are ready.

At this point, the ACTION LOG provides a complete and chronological record of the entire sequence of events for printing and a post-mortem analysis.

The user interface consists of three main elements:

  • the ACTION DIALOGUE box with its four response buttons:
    • DO IT (if active, this will satisfy the request by triggering one of the built-in functions, like starting the expert system or a model, to obtain the requested information);
    • PENDING (defers a task for later execution);
    • IGNORE (actively eliminates or skips a task);
    • DONE (confirms the successful completion of an ACTION REQUEST);
  • the EMERGENCY PARAMETERS listing that provides the evolving summary description of the emergency (please note that the MAP WINDOW will display related (geo)graphical information);
  • the ACTIONS LOG that records each status change of an ACTION REQUEST with its time stamp in a scrolling window.

A generic DSS tool: Rule-based Classification

A major problem in any computer-based system is to obtain reliable and accurate information from the user, and to translate efficiently between the language easily understood by the user and the technical information requirements and formats used by the computer system.

The general logic and syntax of the backward-chaining embedded expert system approach in HITERM was described above under Editing functions, within the framework of Scenario Analysis. This covers the domain of user input.

A similar approach can be used, however, to classify complex system results (e.g., the results from a spatially distributed, dynamic, multi-parameter model) into a simple and directly understandable statement through linguistic classification.

In the embedded expert system, this is accomplished through the hybrid nature of the basic concept, the Descriptor object; its possible states (legal values) can be expressed both symbolically and numerically, as the example below illustrates:

    T S
    U ha
    V none        [  0,  0,  0]
    V very_small  [  0,  2,  3]
    V small       [  3,  5,  8]
    V considerable[  8, 10, 20]
    V large       [ 20, 50, 80]
    V very_large  [ 80,100,300]
    Q What is the total area affected by this accident,
    Q i.e., above a no-effects threshold ?

Synthesis of model output

Another use of the backwards chaining expert system is to provide a synthesis of large model-generated data volumes. The chain of models used to simulate an accident scenario may easily generate data volumes in the order of gigabytes. These should, however, be summarized in a few simple variables such as the number of people exposed, the level of exposure, the area contaminated, estimated material damage, and a rough classification of the accident: these classifications are needed to trigger the appropriate responses.

Starting from the dynamic model results, specific aggregate parameters are computed as a post-processing step or while the model is running, updating values for maxima of threshold related parameters.

In the case of the atmospheric dispersion models, the critical parameters are the extent of the area covered, the population exposed in this area, and time factors such as the time until the first houses are reached by the cloud, and the duration of the exposure.

Starting with the model result and a (default or substance-specific) concentration threshold, the system computes the area of the plume that exceeds the threshold (shown in yellow), the populated area (shown in blue), and the intersection (shown in red). Based on the known or estimated population density, two key parameters, namely the area exposed and the population exposed, are computed and indicated (see above).
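The computation of these two key parameters can be sketched as follows (the grid, cell size, threshold, and population numbers are made-up illustrative values, not HITERM data):

```python
# Illustrative sketch: intersect the above-threshold plume area with the
# population distribution on a common grid to derive the area exposed and
# the population exposed.

# Concentration field and population (persons per cell) on the same 4x4
# grid; each cell covers 1 ha.
conc = [
    [0.0, 0.2, 0.6, 0.9],
    [0.0, 0.4, 1.2, 1.5],
    [0.0, 0.1, 0.8, 1.1],
    [0.0, 0.0, 0.3, 0.5],
]
pop = [
    [  0,   0,  50, 120],
    [  0,   0,   0,  80],
    [ 30,   0,  10,   0],
    [  0,  20,   0,   0],
]
threshold = 0.7          # no-effects concentration threshold
cell_area_ha = 1.0

area_exposed = 0.0
population_exposed = 0
for i in range(4):
    for j in range(4):
        if conc[i][j] > threshold:            # plume above threshold ("yellow")
            area_exposed += cell_area_ha
            population_exposed += pop[i][j]   # intersection with population ("red")

print(area_exposed, population_exposed)       # -> 5.0 210
```

These two aggregates then set the corresponding Descriptors in the expert system for the classification step that follows.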

In addition to the model derived values (which are setting the corresponding Descriptors in the expert system), a user-defined threshold value is used in this evaluation. This can either be derived from a set of rules, or from the hazardous chemicals data base (e.g., based on the Seveso II classification).

In the simplest case, the user can directly set that threshold value with the expert system's editing functions.

The behaviour of the editing function is data driven: if the only Method defined for the corresponding Descriptor is the Question to the user, a simple editing box will be used. If, however, one or more Rules are defined in the Descriptor declaration, the user will be offered the option to perform a rule-based inference. For a detailed description of the data formats of the Knowledge Base files, refer to the Technical Manual of deliverable D06.

In the next step, the expert system attempts a classification of the emergency in terms of:

  • Public health effects,
  • Environmental damages, and
  • Material damages.

In terms of the backward chaining inference procedure, these three Descriptors are Target Descriptors, i.e., they are at the top of the respective inference trees.

Each of them has a set of associated Rules that use Descriptors as their inputs. The Descriptor values are set by the model output in the step above, but can, in principle, be overridden interactively if the user repeats the (automatically triggered) inference procedure. If all the necessary data (Descriptor values) to reach a conclusion are available, the expert system directly arrives at the results and displays them, in a symbolic format, in the Accident Summary Box.
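The inference step can be sketched as a minimal backward chainer; the rule contents below (linking population exposure to public health effects) are invented for illustration and do not reproduce the actual HITERM Knowledge Base:

```python
def backward_chain(target, rules, facts, ask):
    """Resolve a Target Descriptor, recursively resolving rule inputs
    and asking the user (via `ask`) only for values no rule can supply."""
    if target in facts:                      # value set by the model or user
        return facts[target]
    for conditions, conclusion in rules.get(target, []):
        if all(backward_chain(d, rules, facts, ask) == v
               for d, v in conditions.items()):
            facts[target] = conclusion       # a rule fired: record conclusion
            return conclusion
    facts[target] = ask(target)              # no rule fired: open the Dialog Box
    return facts[target]

# Illustrative rule: one condition mapping to one conclusion.
rules = {"public_health_effects": [({"population_exposed": "high"}, "severe")]}
```

If `population_exposed` is already set by the model output, the target is resolved without user interaction; otherwise the `ask` callback plays the role of the Dialog Box described below.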

If any of the necessary input values are missing, the Dialog Box is used to obtain the information from the user. The value can be entered either by choosing one of the symbolic labels and its associated default numerical value, by using the slider tool of the dialog box, or by directly typing in the required numerical value. The default value, if defined or set, is indicated in the blue header bar; in the above case, it is set dynamically from an interpretation of the model output results. The small information icon in the header leads to a hypertext page that provides additional background information on the Descriptor in question to assist in setting its value.

Benefits and functions

The integration of the expert system into the HITERM framework has several major benefits, related to the different functions of the rule-based expert system:

  • Rule-based guidance: the expert system can implement any checklist-type procedure that guides the user through a number of subsequent steps, as in working through the steps of an emergency manual. This takes a major load and responsibility off the operator and simplifies the procedure, thus making it less error-prone.

  • Time awareness: the real-time components are time aware, and all components are context sensitive. This means that time can be used explicitly in the rules and in any evaluation, reflecting the realities of an emergency management situation where it is essential to know not only WHAT will happen, but also WHEN.

  • Context sensitivity: an emergency situation is characterised by a large number of interdependent developments; every decision has to be taken within the context of all the information available at that point. This context sensitivity is implemented through the rules of the expert system, which make any conclusion and advice conditional on any number of input variables (the context) the designer deems relevant.

  • Natural language interface: the rules of the expert system are formulated in the near-natural-language syntax of production rules, and follow first-order logic. This makes the rules and their processing easy to understand and follow.

  • Explanation facility: to build user confidence, but also as a major didactic element, the expert system can explain its function and conclusions step by step, backtracking the inference chain one rule at a time.

  • Symbolic representation: the possibility of describing system attributes in symbolic terms, and thus approximately, matches the information available in an emergency situation, where detailed quantitative estimates or measurements are simply not feasible. The symbolic classification and the use of ranges facilitate efficient interaction in the absence of hard and reliable data.

© Copyright 1995-2018 by:   ESS   Environmental Software and Services GmbH AUSTRIA