Simplex algorithm

In mathematical optimization theory, the simplex algorithm of George Dantzig is the fundamental technique for the numerical solution of the linear programming problem. A variation commonly used in nonlinear regression programs is the Nelder–Mead method (also called the simplex method or downhill simplex method) of Nelder and Mead (1965), a numerical method for many-dimensional problems that belongs to the more general class of search algorithms.

In both cases, the method uses the concept of a simplex, which is a polytope of N + 1 vertices in N dimensions; a line segment on a line, a triangle on a plane, a tetrahedron in three-dimensional space and so forth.

Problem input

The simplex algorithm requires the linear programming problem to be in augmented form. The problem can then be written as follows in matrix form:

Maximize Z in:
\[
\begin{bmatrix} 1 & -\mathbf{c}^T & 0 \\ 0 & \mathbf{A} & \mathbf{I} \end{bmatrix}
\begin{bmatrix} Z \\ \mathbf{x} \\ \mathbf{x}_s \end{bmatrix} =
\begin{bmatrix} 0 \\ \mathbf{b} \end{bmatrix}
\]
\[
\mathbf{x}, \, \mathbf{x}_s \ge 0
\]

where x are the variables from the standard form, x_s are the slack variables introduced in the augmentation process, c contains the optimization coefficients, A and b describe the system of constraint equations, and Z is the variable to be maximized.

The system is typically underdetermined, since the number of variables exceeds the number of equations. The difference between the number of variables and the number of equations gives us the degrees of freedom associated with the problem. Any solution, optimal or not, will therefore include a number of variables of arbitrary value. The simplex algorithm uses zero as this arbitrary value, and the number of variables with value zero equals the degrees of freedom.

Variables with nonzero values are called basic variables, and variables with value zero are called nonbasic variables in the simplex algorithm.

This form simplifies finding the initial basic feasible solution (BF), since all variables from the standard form can be chosen to be nonbasic (zero), while all new variables introduced in the augmented form are basic (nonzero); their values can be calculated trivially, \(\mathbf{x}_{s\,i} = b_i\), since the right half of the augmented problem matrix is the identity.
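
To make this concrete, the following minimal sketch (in Python with NumPy, using invented toy data chosen purely for illustration) builds the augmented-form matrices for a small standard-form problem and reads off the trivial initial BF solution:

    import numpy as np

    # Hypothetical toy problem (illustration only):
    #   maximize  Z = 3*x1 + 2*x2
    #   subject to  x1 +   x2 <= 4
    #               x1 + 3*x2 <= 6,   x1, x2 >= 0
    c = np.array([3.0, 2.0])              # optimization coefficients
    A = np.array([[1.0, 1.0],
                  [1.0, 3.0]])            # constraint coefficients
    b = np.array([4.0, 6.0])              # right-hand sides

    m, n = A.shape

    # Augmented form: one slack variable per constraint, so that
    # [A | I] [x; x_s] = b  with  x, x_s >= 0.
    A_aug = np.hstack([A, np.eye(m)])

    # Initial BF solution: all original variables nonbasic (zero), all
    # slacks basic with x_s = b (the right half of [A | I] is the identity).
    x = np.zeros(n + m)
    x[n:] = b
    print(A_aug @ x)                      # equals b: the constraints hold

With two equations and four variables, this problem has two degrees of freedom, matching the two nonbasic (zero) variables of the initial solution.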

Algorithm

  • Choose an initial BF as shown above
  • While the solution is nonoptimal:
    • Determine the direction of highest gradient
    Choose the variable associated with the c coefficient of highest positive magnitude. This nonbasic variable, which we call the entering basic variable, will be increased to help maximize Z.
    • Determine the maximum step length
    Use the equation \(\begin{bmatrix} \mathbf{A} & \mathbf{I} \end{bmatrix} \begin{bmatrix} \mathbf{x} \\ \mathbf{x}_s \end{bmatrix} = \mathbf{b}\) to determine which basic variable reaches zero first as the entering basic variable is increased. This variable, which we call the leaving basic variable, then becomes nonbasic. This operation (the ratio test) involves only a single division for each basic variable.
    • Rewrite the problem
    Rewrite c, A and b to make the problem matrix diagonal for the existing and new basic variables. This involves some simple Gaussian elimination (pivoting).
    • Check for improvement
    Repeat the procedure until no further improvement is possible, meaning all the coefficients of c are negative. The procedure also terminates if all coefficients are zero, in which case the algorithm has walked in a circle and revisited a previous state. A sketch of one complete iteration loop appears after this list.
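
As an illustration, here is a minimal sketch of these steps in Python with NumPy. It assumes a maximization problem with b >= 0 (so the slack basis is feasible), uses Dantzig's classic largest-coefficient pivot rule, and has no safeguards against degeneracy or round-off; it is a sketch, not a production implementation.

    import numpy as np

    def simplex(c, A, b):
        """Maximize c @ x subject to A @ x <= b, x >= 0, assuming b >= 0."""
        m, n = A.shape
        # Tableau [A | I | b]; the objective row holds -c, so the entering
        # variable (highest positive c coefficient) is its most negative entry.
        T = np.hstack([A, np.eye(m), b.reshape(-1, 1)])
        z = np.concatenate([-c, np.zeros(m + 1)])
        basis = list(range(n, n + m))       # the slacks form the initial BF solution

        while True:
            j = int(np.argmin(z[:-1]))      # entering basic variable
            if z[j] >= 0:
                break                       # no coefficient improves Z: optimal
            col = T[:, j]
            if np.all(col <= 0):
                raise ValueError("problem is unbounded")
            # Ratio test: one division per basic variable; the first basic
            # variable to reach zero leaves the basis.
            ratios = np.where(col > 0, T[:, -1] / np.where(col > 0, col, 1.0), np.inf)
            i = int(np.argmin(ratios))      # row of the leaving basic variable
            # Gaussian elimination so that column j becomes a unit column.
            T[i] /= T[i, j]
            for r in range(m):
                if r != i:
                    T[r] -= T[r, j] * T[i]
            z -= z[j] * T[i]
            basis[i] = j

        x = np.zeros(n + m)
        x[basis] = T[:, -1]
        return x[:n], z[-1]                 # optimal point and objective value

    # On the toy data from the earlier sketch this pivots once, from the
    # origin to the vertex (4, 0), and returns Z = 12.
    x_opt, z_opt = simplex(np.array([3.0, 2.0]),
                           np.array([[1.0, 1.0], [1.0, 3.0]]),
                           np.array([4.0, 6.0]))
    print(x_opt, z_opt)

Storing -c in the objective row is just a sign convention; it makes "the c coefficient of highest positive magnitude" coincide with "the most negative entry of the row".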

Description

[Figure: Simplex description. A series of linear inequalities defines a polytope as a feasible region. The simplex algorithm begins at a starting vertex and moves along the edges of the polytope until it reaches the vertex of the optimal solution.]

A linear programming problem consists of a collection of linear inequalities on a number of real variables and a fixed linear functional which is to be maximized (or minimized). Some further details are given in the linear programming article.

In geometric terms, we are considering a closed convex polytope P, defined as the intersection of a number of half-spaces in n-dimensional Euclidean space, where each half-space consists of the points lying to one side of a hyperplane. The objective is to maximize a linear functional L; if we consider the hyperplanes H(c) defined by L(x) = c as c increases, these form a parallel family. If the problem is well-posed, we want to find the largest value of c such that H(c) intersects P (if there is no such largest value of c, this is not a reasonable question for optimization as it stands). In this case we can show that the optimum value of c is attained on the boundary of P. Methods for finding this optimum point on P can either improve a feasible point by moving through the interior of P (so-called interior point methods), or start and remain on the boundary.
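
As a small worked instance of this picture, take the hypothetical toy problem from the sketches above:

\[
L(x) = 3x_1 + 2x_2, \qquad
P = \{\, x \ge 0 : x_1 + x_2 \le 4, \; x_1 + 3x_2 \le 6 \,\}.
\]

The hyperplanes \(H(c) = \{ x : 3x_1 + 2x_2 = c \}\) form a family of parallel lines. Increasing \(c\) slides \(H(c)\) across \(P\), and the largest \(c\) for which \(H(c)\) still meets \(P\) is

\[
c = L(4, 0) = 12,
\]

attained at the boundary vertex \((4, 0)\) of \(P\).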

The simplex algorithm falls into the latter class of methods. The idea is to move along the facets of P in search of the optimum, from point to point. Note that the optimum cannot occur at a mid-point, for example in the middle of an edge lying as a line segment on the boundary of P, unless the whole relevant 'face' is parallel to H. Therefore it is possible to look solely at moves skirting the edges of the polytope, ignoring the interior. The algorithm specifies how this is to be done.

We start at some vertex of the polytope, and at every iteration we choose an adjacent vertex such that the value of the objective function does not decrease. If no such vertex exists, we have found a solution to the problem. But usually, such an adjacent vertex is nonunique, and a pivot rule must be specified to determine which vertex to pick. Various pivot rules exist.
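
For example, Dantzig's original rule, used in the sketch above, enters the variable whose objective-row coefficient is most negative, while Bland's rule, which is known to prevent cycling, always picks the lowest-indexed candidate. The following minimal sketch expresses Bland's rule against the tableau conventions of the earlier code (those conventions are this article's illustration, not a standard API):

    import numpy as np

    def blands_entering(z):
        """Among columns with a negative objective-row coefficient,
        pick the one with the smallest index (None means optimal)."""
        for j, zj in enumerate(z[:-1]):
            if zj < 0:
                return j
        return None

    def blands_leaving(T, j, basis):
        """Ratio test with Bland's tie-break: among rows attaining the
        minimum ratio, pick the smallest-index basic variable
        (None means the problem is unbounded in this direction)."""
        best, best_ratio = None, np.inf
        for i in range(T.shape[0]):
            if T[i, j] > 0:
                ratio = T[i, -1] / T[i, j]
                if ratio < best_ratio or (ratio == best_ratio and basis[i] < basis[best]):
                    best, best_ratio = i, ratio
        return best

Bland's rule can take more iterations than Dantzig's rule in practice, but it guarantees termination even on degenerate problems.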

In 1972, Klee and Minty [1] gave an example of a linear programming problem in which the polytope P is a distortion of an n-dimensional cube. They showed that the simplex method as formulated by Dantzig visits all 2^n vertices before arriving at the optimal vertex. This shows that the worst-case complexity of the algorithm is exponential time. Similar examples have been found for other pivot rules. It is an open question whether there is a pivot rule with polynomial worst-case complexity.

Nevertheless, the simplex method is remarkably efficient in practice. Attempts to explain this employ the notion of average complexity or (recently) smoothed complexity.

Other algorithms for solving linear programming problems are described in the linear programming article.

References

  • Greenberg, Harvey J., Klee–Minty Polytope Shows Exponential Time Complexity of Simplex Method, University of Colorado at Denver (1997).

Note

[1] Greenberg cites: V. Klee and G. J. Minty, "How Good is the Simplex Algorithm?", in O. Shisha (ed.), Inequalities, III, pp. 159–175, Academic Press, New York, NY, 1972.
