Optimization Methods

Palisade offers two optimization add-in applications for Microsoft Excel: Evolver and RISKOptimizer. Evolver is a separate add-in in the Palisade DecisionTools Suite, whereas RISKOptimizer is integrated into the @RISK application.

Evolver is intended for optimization of deterministic models, i.e., models with no uncertainty. RISKOptimizer, on the other hand, combines optimization and simulation to find optimal solutions to models with uncertainty. This section provides background information on the optimization methods used by Evolver. Because there is some overlap between Evolver and RISKOptimizer methods, the discussion also includes some mention of RISKOptimizer.

Optimization Overview

Traditional Excel-based optimization has focused on deterministic models. Such models consist of:

  • A “target” cell whose value needs to be minimized or maximized.
  • A set of inputs or “adjustable cells” whose values are variable but can be controlled.
  • A set of constraints that must be satisfied, often specified with expressions such as TotalCost<=100 or A11>=0.

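The three parts above can be sketched outside of Excel as well. The following Python fragment is purely illustrative; the function, bounds, and constraint are invented for this sketch and the brute-force grid search stands in for what a real optimizer does far more intelligently:

```python
# A minimal sketch of a deterministic optimization model, mirroring the
# three parts above. All names and numbers here are illustrative.

def total_cost(x, y):
    # "Target" cell: the value we want to minimize.
    return (x - 3) ** 2 + (y - 5) ** 2

# "Adjustable cells": inputs we control, with allowable ranges.
bounds = {"x": (0, 10), "y": (0, 10)}

# Constraints any candidate solution must satisfy, analogous to
# spreadsheet expressions such as TotalCost<=100 or A11>=0.
def feasible(x, y):
    return x + y <= 10 and x >= 0 and y >= 0

# Exhaustive search over a coarse integer grid, just to show the pieces
# working together; real optimizers avoid enumerating every combination.
best = min(
    (total_cost(x, y), x, y)
    for x in range(11)
    for y in range(11)
    if feasible(x, y)
)
print(best)  # (0, 3, 5): cost 0 at x=3, y=5
```

Even in this toy form, the structure is the same as in the spreadsheet: one target value, a handful of controllable inputs, and a feasibility test that screens out disallowed combinations.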
During an optimization, the adjustable cells are varied across the allowable ranges specified in the model. For each candidate set of adjustable cell values, the model is recalculated and a new value for the target cell is generated. When the optimization completes, it has found an optimal solution: the combination of adjustable cell values that yields the best value for the target cell while satisfying all of the constraints.

Some optimization models are much more difficult to solve than others. For difficult problems, such as a model for finding the shortest route between 200 cities, it is not feasible to examine every possible solution. Such an approach would require weeks or months of calculations on the most powerful computers.

To solve such problems, it is necessary to search through a subset of all possible solutions. This is accomplished with an algorithm. An algorithm is a step-by-step description of how to solve a problem.

One type of algorithm is a hill-climbing algorithm:

  1. Start at any initial point.
  2. Move away from this point in some direction to get to a new point.
  3. If the target value at the new point is better, stay and repeat step 2. If it is worse, go back to the previous point and try again.

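The three steps above can be sketched in a few lines of Python. This is a deliberately bare-bones maximizer, not Evolver's algorithm; the objective function, step size, and iteration budget are all arbitrary illustrative choices:

```python
import random

def f(x):
    # Illustrative objective with a single peak at x = 2.
    return -(x - 2.0) ** 2

def hill_climb(f, x, step=0.1, iters=1000):
    for _ in range(iters):
        # Step 2: move away from the current point in some direction.
        candidate = x + random.choice([-step, step])
        # Step 3: keep the new point only if the target value improves;
        # otherwise stay where we are and try again.
        if f(candidate) > f(x):
            x = candidate
    return x

random.seed(0)
best = hill_climb(f, x=0.0)
print(round(best, 1))  # climbs to the peak near x = 2
```

Note how much is left unspecified by the three steps: here the "direction" is a coin flip and the step size is fixed, and other choices would give a different (possibly much better or worse) search.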
Numerous hill-climbing variations have been developed and implemented in optimization software, but the devil is in the details, especially in step 2. It isn’t at all clear how to choose the direction, or how far to move in that direction, to get the next solution. Unfortunately, a specific algorithm that works well on one problem or type of problem does not always work well on other problems.

Hill-climbing algorithms have one other critical drawback. They tend to find only local optimal solutions, not global optimal solutions. Imagine a terrain where there are many hilltops but only one that is highest. This highest hilltop represents the desired solution - the global maximum - but because hill-climbing algorithms tend to search only locally when making the move in step 2, they can easily find a lower hilltop, a “local maximum”, and then stop, never reaching the global maximum.
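The local-maximum trap is easy to reproduce. In the sketch below, a simple greedy climber is applied to an invented terrain with two hilltops; which peak it finds depends entirely on where it starts:

```python
def terrain(x):
    # Illustrative terrain with two hilltops: a lower one at x = -2
    # (height 4) and the global maximum at x = 3 (height 9).
    return max(4 - (x + 2) ** 2, 9 - (x - 3) ** 2)

def climb(f, x, step=0.1, iters=500):
    for _ in range(iters):
        left, right = x - step, x + step
        # Greedily move to whichever neighbor is higher.
        height, x_next = max((f(left), left), (f(right), right), (f(x), x))
        if x_next == x:
            break  # Neither neighbor is higher: a hilltop, but maybe only a local one.
        x = x_next
    return x

print(round(climb(terrain, -4.0), 1))  # stops on the lower hilltop near x = -2
print(round(climb(terrain,  1.0), 1))  # reaches the global maximum near x = 3
```

Starting at x = -4, the climber walks uphill to the lower peak and stops, because every local move from there leads downhill; only the luckier starting point reaches the true maximum.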

Although hill-climbing algorithms can be modified so that they don’t necessarily get stuck at local hilltops, they are naturally most successful on problems guaranteed to have a single local (and global) hilltop. There are many such problems, but there are also many that do not have this attractive property.