Configuring the Decision Support System and simulating its use

This quick-start tutorial demonstrates how to configure the Decision Support System and simulate its use. The example involves artificial and easily comprehensible data. The modelled decision-making process makes the following assumptions:

  • A simple 2D decision problem is considered.
  • The process is modelled as interactive.
  • The process involves a robust preference learning strategy named ERS (http://ssrn.com/abstract=5415565).
  • The decision maker’s preferences are assumed to be consistent with an L-norm.
  • The interactive procedure randomly selects two pairs of solutions to be presented to the decision maker for evaluation.
  • The decision maker’s responses are simulated using an artificial decision maker consistent with the L-norm.

The results of the processing are visualized and reported in the console. Note that, although this is a quick-start tutorial, at least a moderate understanding of the concepts described in “Tutorial Series on Java Framework for Evolutionary Computation and Decision Making. Tutorial 4: Decision Support module” is recommended. The complete source code for this tutorial can be found in the Projects module: y2025.SoftwareX_JECDM.QuickStart1 (requires framework version 1.7.0 or later). In what follows, the relevant commented code blocks are presented, and the expected results are visualized for convenience. Note that a list of essential imports can be found at the end of this tutorial.

Overview of the source code:

The code starts by defining three elements:
1) a random number generator,
2) the considered criteria,
3) normalization functions mapping the considered objective space into a normalized hypercube:
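The framework’s own classes are not reproduced here, but the idea behind the third element can be sketched in plain Java: a min-max normalization that linearly maps each objective into [0, 1]. The class name, method name, and the assumed objective range [0, 100] are hypothetical, chosen only for illustration:

```java
import java.util.Arrays;
import java.util.Random;

public class SetupSketch {

    // Linear min-max normalization for one criterion: maps [min, max] onto [0, 1].
    static double normalize(double value, double min, double max) {
        return (value - min) / (max - min);
    }

    public static void main(String[] args) {
        Random rng = new Random(0); // fixed seed for reproducibility
        int criteria = 2;           // a simple 2D decision problem
        // Assume both objectives take values in [0, 100]; map one sample
        // point into the normalized hypercube [0, 1]^2.
        double[] point = {rng.nextDouble() * 100.0, rng.nextDouble() * 100.0};
        double[] normalized = new double[criteria];
        for (int i = 0; i < criteria; i++)
            normalized[i] = normalize(point[i], 0.0, 100.0);
        System.out.println(Arrays.toString(normalized));
    }
}
```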

Then, a set of artificial alternatives is generated:
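As a minimal stand-in for the framework’s alternative-generation code (the names below are hypothetical), artificial alternatives can be drawn uniformly from the normalized hypercube:

```java
import java.util.Random;

public class AlternativesSketch {

    // Generate n artificial alternatives with m performance values each,
    // drawn uniformly from the normalized hypercube [0, 1]^m.
    static double[][] generate(int n, int m, Random rng) {
        double[][] alternatives = new double[n][m];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++)
                alternatives[i][j] = rng.nextDouble();
        return alternatives;
    }

    public static void main(String[] args) {
        double[][] a = generate(100, 2, new Random(0)); // 100 alternatives, 2 criteria
        System.out.println(a.length + " alternatives generated");
    }
}
```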

The following code starts configuring the decision support system. This process begins by defining a refiner object, used to pre-process the input alternatives:
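The refiner’s exact behaviour is framework-specific and is not reproduced here. One common pre-processing step, shown purely as an illustrative sketch (assuming all criteria are minimized), is to filter out Pareto-dominated alternatives:

```java
import java.util.ArrayList;
import java.util.List;

public class RefinerSketch {

    // True if a dominates b under minimization: a is no worse on every
    // criterion and strictly better on at least one.
    static boolean dominates(double[] a, double[] b) {
        boolean strictlyBetter = false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] > b[i]) return false;
            if (a[i] < b[i]) strictlyBetter = true;
        }
        return strictlyBetter;
    }

    // Keep only the non-dominated alternatives.
    static List<double[]> refine(List<double[]> input) {
        List<double[]> out = new ArrayList<>();
        for (double[] a : input) {
            boolean dominated = false;
            for (double[] b : input)
                if (dominates(b, a)) { dominated = true; break; }
            if (!dominated) out.add(a);
        }
        return out;
    }
}
```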

Next, an interaction trigger that specifies an interaction pattern is defined:
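An interaction pattern can be as simple as a fixed-frequency rule. The sketch below is hypothetical; its parameters were chosen merely to be consistent with the renders shown later (interactions around the 10th and 15th iterations), and the tutorial’s actual trigger configuration may differ:

```java
public class TriggerSketch {

    // Hypothetical fixed-frequency pattern: interact every 'interval'
    // iterations, with the first interaction at iteration 'start'.
    static boolean shouldInteract(int iteration, int start, int interval) {
        return iteration >= start && (iteration - start) % interval == 0;
    }

    public static void main(String[] args) {
        for (int t = 1; t <= 20; t++)
            if (shouldInteract(t, 10, 5))
                System.out.println("Interaction triggered at iteration " + t);
    }
}
```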

Then, a procedure for creating a reference subset of alternatives to be evaluated by the decision-maker is defined:
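As an illustrative sketch (not the framework’s API), one building block of such a procedure is drawing a pair of two distinct alternative indices at random; the actual configuration may assemble more pairs per interaction:

```java
import java.util.Random;

public class ReferenceSetSketch {

    // Randomly pick a pair of two distinct alternative indices from 0..n-1.
    static int[] randomPair(int n, Random rng) {
        int i = rng.nextInt(n);
        int j = rng.nextInt(n - 1);
        if (j >= i) j++; // shift to guarantee j != i while keeping uniformity
        return new int[]{i, j};
    }

    public static void main(String[] args) {
        int[] pair = randomPair(10, new Random(0));
        System.out.println("Reference pair: " + pair[0] + " vs " + pair[1]);
    }
}
```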

This tutorial assumes cooperation with an artificial decision maker whose value system is modelled with an L-norm:
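The essence of such an artificial decision maker can be sketched as follows: a weighted L-norm value function and a rule that prefers the alternative with the smaller value. The names, the reference point (the ideal point 0 of the normalized space), and the minimization convention are assumptions made for illustration only:

```java
public class ArtificialDmSketch {

    // Weighted L-norm value function measured from the ideal point 0 in the
    // normalized space (minimization: lower value = more preferred).
    static double lNorm(double[] a, double[] w, double p) {
        if (Double.isInfinite(p)) { // Chebyshev (L-infinity) case
            double max = 0.0;
            for (int i = 0; i < a.length; i++) max = Math.max(max, w[i] * a[i]);
            return max;
        }
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) sum += Math.pow(w[i] * a[i], p);
        return Math.pow(sum, 1.0 / p);
    }

    // The artificial decision maker prefers the alternative with the
    // smaller value (returns 0 for the first, 1 for the second).
    static int preferred(double[] a, double[] b, double[] w, double p) {
        return lNorm(a, w, p) <= lNorm(b, w, p) ? 0 : 1;
    }
}
```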

Then, the code defines three core components related to preference learning in the configured system:
1) the assumed preference model,
2) the preference learning algorithm,
3) the inconsistency handler:
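ERS itself is considerably more involved; the hypothetical sketch below only conveys the core idea behind robust preference learning: sampling model instances (here, weight vectors of an L-2 norm) and keeping those compatible with the decision maker’s pairwise comparisons. A comment marks where an inconsistency handler would step in:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class LearningSketch {

    // One pairwise comparison: the decision maker preferred 'better' over 'worse'.
    record Comparison(double[] better, double[] worse) {}

    // Value under one model instance: weighted L-2 norm (lower = better).
    static double value(double[] a, double[] w) {
        double s = 0.0;
        for (int i = 0; i < a.length; i++) s += Math.pow(w[i] * a[i], 2.0);
        return Math.sqrt(s);
    }

    // Rejection sampling: draw random weight vectors and keep those that
    // reproduce every comparison in the history (a simplified stand-in
    // for a robust preference learning algorithm such as ERS).
    static List<double[]> sampleCompatible(List<Comparison> history, int trials, Random rng) {
        List<double[]> compatible = new ArrayList<>();
        for (int t = 0; t < trials; t++) {
            double w0 = rng.nextDouble();
            double[] w = {w0, 1.0 - w0}; // 2D weights summing to 1
            boolean ok = true;
            for (Comparison c : history)
                if (value(c.better(), w) > value(c.worse(), w)) { ok = false; break; }
            if (ok) compatible.add(w);
            // If no compatible instance can be found, an inconsistency
            // handler would react here, e.g., by relaxing the history.
        }
        return compatible;
    }
}
```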

The following code instantiates the decision support system as defined above. Also, it prepares two plots that will be used to visualize the alternatives space along with the decision maker’s feedback, and the learnt preference model instances compatible with the feedback:

Next, the system’s execution is simulated in a loop. The system communicates with external components via the decision-making context:

For those iterations in which an interaction was triggered, a report on the processing is printed, and the alternatives that are potentially optimal for the decision maker are determined:
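One common notion of potential optimality, sketched here with hypothetical names and an L-2 norm model, deems an alternative potentially optimal if it attains the best value for at least one model instance compatible with the decision maker’s feedback:

```java
import java.util.HashSet;
import java.util.Set;

public class PotentialOptimalitySketch {

    // Value under one model instance: weighted L-2 norm (lower = better).
    static double value(double[] a, double[] w) {
        double s = 0.0;
        for (int i = 0; i < a.length; i++) s += Math.pow(w[i] * a[i], 2.0);
        return Math.sqrt(s);
    }

    // Indices of alternatives that are best for at least one compatible model.
    static Set<Integer> potentiallyOptimal(double[][] alts, double[][] models) {
        Set<Integer> result = new HashSet<>();
        for (double[] w : models) {
            int best = 0;
            for (int i = 1; i < alts.length; i++)
                if (value(alts[i], w) < value(alts[best], w)) best = i;
            result.add(best);
        }
        return result;
    }
}
```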

The following code updates the first plot depicting the objective space:

The following code updates the second plot, depicting the compatible model space, and creates a joint render to be stored on disk:

Example console output associated with the report on the system’s processing in the 15th iteration (when the second interaction is expected to happen) is illustrated below:

Expected render generated in the 10th iteration:

Expected render generated in the 15th iteration:

Used imports: