OPTAIR: Multi-criteria optimization for air quality
management and emission control
Supported by the Austrian Research Foundation FFG,
Project No. 814799 and the Province of Lower Austria
3. The approach, proposed technical solutions
The optimization of emission control measures to meet air quality standards
as well as emission targets requires a combination of
- Domain specific knowledge in an appropriate representation that can be integrated into numerical analysis to support effective, cost efficient and sustainable environmental management and policy;
- An efficient approach to the multi-criteria optimization of complex, non-differentiable systems that defy traditional approaches of mathematical programming.
The optimization approach proposed here (Fedra, 2000) is based
on a two-tiered combination of a first satisficing step (Byron, 1998)
followed by a discrete multi-criteria (reference point) DSS (Wierzbicki, 1998)
that operates as a post-processor on the feasible set identified in the first step.
The first step uses the full dynamic, 3D and non-linear emission and air quality models
to generate alternatives given any number of control instruments and strategies.
Feasible solutions are obtained efficiently by a combination of genetic algorithms,
machine learning, and domain specific heuristics;
in addition, the generation of feasible alternatives for the subsequent
DMC optimization is based on the alternative use of different models
to increase computational efficiency by orders of magnitude:
- For ozone: CAMx (full resolution, 3D) and PBM (simplified, dynamic box model);
- For all conservative pollutants including dust: CAMx (full resolution, dynamic)
and AERMOD (simplified, steady-state, which also supports direct source apportionment).
This will take advantage of the very different computational requirements
of these models: a first screening level search will be
performed with the simpler model, the most promising solutions
then subjected to the detailed analysis with the full resolution simulation model
to verify that all constraints are met at the full functional
and spatial resolution, i.e., the solution is feasible.
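The two-tiered use of models described above can be sketched as follows: a fast, simplified model screens a large candidate set, and only the most promising control strategies are re-checked with the full-resolution model. The model callables here are hypothetical stand-ins, not the actual CAMx/AERMOD interfaces.

```python
# Sketch of the two-tiered screening: a cheap surrogate model ranks
# candidate control strategies; only a shortlist is then verified with
# the expensive full-resolution simulation.

def screen_alternatives(candidates, fast_model, full_model,
                        is_feasible, n_promising=10):
    """Return the shortlisted candidates that stay feasible at full resolution."""
    # Step 1: screening-level search with the simplified model
    # (lower score = more promising).
    ranked = sorted(candidates, key=fast_model)
    shortlist = ranked[:n_promising]
    # Step 2: verify each shortlisted candidate with the detailed model.
    return [c for c in shortlist if is_feasible(full_model(c))]
```

The surrogate is only trusted for ranking; feasibility is always confirmed at full resolution.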
The satisficing approach uses any number of constraints that can be defined
in terms of directly meaningful regulatory and socio-economic criteria
derived from the model runs. Possible criteria can be derived from the
ambient concentration values (compliance with air quality standards),
emission values, and impacts derived from the ambient concentrations.
Constraints can be defined for any temporal or spatial scope,
and will primarily include compliance with the Air Quality Framework Directive 96/62/EC
and daughter directives, but also emission and impact estimates,
e.g., population exposure.
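A compliance constraint of the kind described above can be expressed directly on the concentration values a model run produces. The sketch below tests a receptor's daily mean series against an annual mean limit and a maximum number of daily exceedances, in the style of the EU limit values; the numeric defaults are placeholders, not normative values.

```python
# Illustrative constraint check on ambient concentration output:
# an annual mean limit plus a cap on the number of daily exceedances.

def is_compliant(daily_means, annual_limit=40.0,
                 daily_limit=50.0, max_exceedances=35):
    """True if the daily mean series meets both limit-value constraints."""
    annual_mean = sum(daily_means) / len(daily_means)
    exceedance_days = sum(1 for c in daily_means if c > daily_limit)
    return annual_mean <= annual_limit and exceedance_days <= max_exceedances
```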
An important class of criteria is related to emission control costs.
The generation of feasible alternatives is based on a set of instruments
for emission control, but also on land use and location
of emission sources and sensitive receptor areas.
Instruments primarily affect emissions, either directly, by changing fuels
or combustion parameters, or indirectly, e.g., by reducing demand
for energy services: transportation demand (urban structure, public
transportation, direct constraints such as road pricing) or heating and
cooling demand (better building insulation). Each instrument carries an
associated cost function (investment, operational costs).
Alternatives are described in terms of pairs of parameter vectors:
- The decision parameters, i.e., the instruments that were applied;
- The performance parameters, which are summarised in a number of regulatory
and economic criteria that result from the impact assessment of the
emissions resulting from the decision parameters given a meteorological scenario,
usually over an entire year, or a particular worst case episode (e.g., for ozone).
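One possible encoding of this pair of parameter vectors is sketched below; the field names are illustrative, not the project's actual data model.

```python
# An alternative as the pair of vectors described above: the decisions
# (instruments applied) and the resulting performance criteria.

from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Alternative:
    decisions: Tuple[float, ...]    # instrument settings applied
    performance: Tuple[float, ...]  # criteria values (e.g., cost, exposure)
```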
In a second step, the set of feasible alternatives,
for a given superset of criteria, can now be automatically partitioned into
- A dominated subset, eliminated from further consideration, and
- A non-dominated subset, the Pareto-optimal set or frontier, in the N-dimensional performance space.
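The partition into dominated and non-dominated subsets can be sketched as follows, with all criteria assumed to be minimized: an alternative is dominated if some other alternative is at least as good on every criterion and strictly better on at least one.

```python
# Sketch of the automatic partition of feasible alternatives
# (performance vectors) into non-dominated and dominated subsets.

def dominates(a, b):
    """True if performance vector a dominates b (all criteria minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_partition(alternatives):
    """Split alternatives into (non_dominated, dominated) lists."""
    non_dominated, dominated = [], []
    for i, a in enumerate(alternatives):
        if any(dominates(b, a) for j, b in enumerate(alternatives) if j != i):
            dominated.append(a)
        else:
            non_dominated.append(a)
    return non_dominated, dominated
```

The quadratic scan is adequate for the discrete sets produced in the first phase; more elaborate sorting schemes exist but are not needed here.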
From the Pareto set, a final efficient solution is selected by finding
the feasible, non-dominated alternative that is closest to UTOPIA,
or a reference point displaced from UTOPIA but better reflecting
the decision maker's aspirations and trade-offs.
This second phase of the optimization procedure is implemented as
an interactive, discrete multi-criteria DSS (reference point methodology,
e.g., Wierzbicki, 1998). The DMC tools will perform the following main functions:
- The automatic filtering of the non-dominated subset or Pareto-optimal
solutions from the set of all feasible solutions, given the masterlist of criteria to be considered.
- The automatic identification of an efficient solution relative (nearest)
to UTOPIA as the default reference point. This is based on the normalization
of the dimensions in the performance space, so that every alternative can
be expressed in terms of its relative achievement between
NADIR and UTOPIA for a given set of active criteria.
- The interactive determination of an efficient alternative or compromise solution.
- Post-optimal analysis of the solution space structure.
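The normalization and reference point selection described above can be sketched as follows: each criterion is rescaled so that UTOPIA (the best value in the set) maps to 0 and NADIR (the worst) maps to 1, and the alternative nearest the reference point is chosen. UTOPIA is the default reference; squared Euclidean distance is used here as one simple choice of achievement measure.

```python
# Sketch of the reference point step over a discrete Pareto set of
# performance vectors (all criteria minimized).

def select_efficient(pareto_set, reference=None):
    """Return the alternative nearest the reference point in normalized space."""
    n = len(pareto_set[0])
    utopia = [min(p[k] for p in pareto_set) for k in range(n)]  # best per criterion
    nadir = [max(p[k] for p in pareto_set) for k in range(n)]   # worst per criterion
    if reference is None:
        reference = [0.0] * n  # UTOPIA in normalized coordinates

    def normalize(p):
        return [(p[k] - utopia[k]) / ((nadir[k] - utopia[k]) or 1.0)
                for k in range(n)]

    return min(pareto_set,
               key=lambda p: sum((x - r) ** 2
                                 for x, r in zip(normalize(p), reference)))
```

Passing a displaced reference point steers the selection toward the decision maker's aspirations, which is the interactive use described above.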
3.1 Genetic algorithms, machine learning, adaptive heuristics
The key objective in the first, satisficing phase of the optimization
is to generate a large number of feasible solutions by choosing efficient
combinations of instruments that are likely to meet the constraints.
To increase search performance, we propose to test the following mechanisms:
- Setting a priori probabilities for specific instrument/source combinations;
the selection of a given instrument is based on a Monte Carlo method.
However, the probability that a given instrument or instrument/source
combination is being tested can be configured by a simple a priori
weight factor in the optimization scenario configuration interface that represents domain knowledge.
- Analysing the cross-variance, correlation or contingency of instruments,
we can identify groups that together have a higher probability
of leading to successful model runs.
These groupings (as synthetic alleles) can be systematically
exploited with standard genetic algorithms, while retaining their interdependency.
- Machine learning algorithms like ID3 (Quinlan 1979),
ID5R, or simulated neural networks can be used to identify
efficient search strategies. They would provide guidance for
local heuristic search around any random test case by modifying
the a priori probabilities of instruments given the pattern of constraint violations.
- Domain specific heuristics are a "simpler" version of the same principle:
using first order production rules, we can formulate strategies
that will guide the design of a new computational experiment
based on the observed pattern of performance and/or violation of constraints.
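Three of the mechanisms above can be sketched together: a weighted Monte Carlo draw of instruments, a genetic crossover that keeps synthetic alleles intact, and a production-rule adjustment of the a priori weights in response to observed constraint violations. All instrument names, rules, and weight values are illustrative assumptions, not the project's actual configuration.

```python
import random

def apply_rules(weights, violations, rules, boost=2.0):
    """Production-rule step: boost the a priori weight of every instrument
    the (illustrative) rule base links to an observed constraint violation."""
    adjusted = dict(weights)
    for violated in violations:
        for instrument in rules.get(violated, []):
            adjusted[instrument] = adjusted.get(instrument, 1.0) * boost
    return adjusted

def draw_instruments(weights, n_draws, rng=random):
    """Monte Carlo step: draw instruments proportional to their a priori weights."""
    instruments = list(weights)
    return rng.choices(instruments,
                       weights=[weights[i] for i in instruments],
                       k=n_draws)

def group_crossover(parent_a, parent_b, groups, rng=random):
    """Genetic-algorithm step: crossover that swaps whole groups of
    correlated instruments (synthetic alleles), so that interdependent
    instruments are inherited together."""
    child = list(parent_a)
    for group in groups:
        if rng.random() < 0.5:  # inherit this whole allele from parent_b
            for idx in group:
                child[idx] = parent_b[idx]
    return child
```

In combination, the rules reshape the sampling distribution after each batch of model runs, while the group-preserving crossover recombines the surviving solutions without breaking up instrument interdependencies.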