# Approximation theory

In mathematics, approximation theory is concerned with how functions can best be approximated with simpler functions, and with quantitatively characterising the errors introduced thereby. What is meant by *best* and *simpler* will depend on the application.

One problem of particular interest is that of approximating a function in a computer's mathematical library, using operations that can be performed on the computer or calculator (e.g. addition and multiplication), such that the result is as close to the actual function as possible. This is typically done with polynomial or rational (ratio of polynomials) approximations.

The objective is to make the approximation as close as possible to the actual function, typically with an accuracy close to that of the underlying computer's floating point arithmetic. This is accomplished by using a polynomial of high degree, and/or narrowing the domain over which the polynomial has to approximate the function. Narrowing the domain can often be done through the use of various addition or scaling formulas for the function being approximated. Modern mathematical libraries often reduce the domain into many tiny segments and use a low-degree polynomial for each segment.
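As a concrete illustration of domain narrowing, consider exp: the identity $e^x = 2^k e^r$, with $k$ chosen so that $r = x - k\ln 2$ lies in $[-\ln 2/2, +\ln 2/2]$, means a polynomial only ever has to approximate exp on that narrow interval. A minimal sketch in Python (the helper name `exp_reduced` and the `poly` argument are illustrative, not from any particular library):

```python
import math

def exp_reduced(x, poly):
    """Evaluate exp(x) using a polynomial `poly` that only needs to
    approximate exp on the narrow interval [-ln(2)/2, +ln(2)/2]."""
    k = round(x / math.log(2))        # choose k so the remainder is small
    r = x - k * math.log(2)           # r lies in [-ln(2)/2, +ln(2)/2]
    return math.ldexp(poly(r), k)     # poly(r) * 2**k, an exact scaling
```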

Once the domain and degree of the polynomial are chosen, the polynomial itself is chosen in such a way as to minimize the worst-case error. That is, the goal is to minimize the maximum value of $|P(x)-f(x)|$, where P(x) is the approximating polynomial and f(x) is the actual function. For well-behaved functions, the optimum Nth degree polynomial will lead to an error curve that oscillates back and forth between $+\epsilon$ and $-\epsilon$ a total of N+2 times, giving a worst-case error of $\epsilon$. (It is possible to make contrived functions f(x) for which this property doesn't hold, but in practice it's generally true.) Example graphs, for N=4, showing the error in approximating log(x) and exp(x), are shown below.

[Figure: Error between the optimal polynomial and log(x) (red), and between the Chebyshev approximation and log(x) (blue), over the interval [2, 4]. Vertical divisions are $10^{-5}$. Maximum error for the optimal polynomial is $6.07 \times 10^{-5}$.]

[Figure: Error between the optimal polynomial and exp(x) (red), and between the Chebyshev approximation and exp(x) (blue), over the interval [-1, 1]. Vertical divisions are $10^{-4}$. Maximum error for the optimal polynomial is $5.47 \times 10^{-4}$.]

Note that, in each case, the number of extrema is N+2, that is, 6. Two of the extrema are at the end points of the interval. The red curves, for the optimal polynomial, are level, that is, they oscillate between $+\epsilon$ and $-\epsilon$ exactly.
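The worst-case error of any candidate polynomial can be checked numerically by sampling on a dense grid. A minimal sketch (the function name is illustrative); this measures $\max |P(x)-f(x)|$ but does not by itself prove optimality:

```python
import numpy as np

def max_error(P, f, a, b, samples=100_001):
    """Approximate the worst-case error max |P(x) - f(x)| on [a, b]
    by evaluating both functions on a dense grid."""
    x = np.linspace(a, b, samples)
    return np.max(np.abs(P(x) - f(x)))
```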

If an Nth degree polynomial leads to an error function that attains the values $+\epsilon$ and $-\epsilon$, in alternation, at N+2 points, that polynomial is optimal. Sketch of proof: if some Nth degree polynomial Q had a worst-case error smaller than $\epsilon$, then P-Q = (P-f)-(Q-f) would alternate in sign at those N+2 points, and so would have at least N+1 roots; but a nonzero polynomial of degree N can have at most N roots, so no such Q exists.

## Chebyshev Approximation

One can obtain polynomials very close to the optimal one by expanding the given function in terms of Chebyshev polynomials and then cutting off the expansion at the desired degree. This is similar to the Fourier analysis of the function, using the Chebyshev polynomials instead of the usual trigonometric functions.

If one calculates the coefficients in the Chebyshev expansion for a function:

$f(x)\sim \sum_{n=0}^{\infty} c_n T_n(x)$

and then cuts off the series after the $T_N$ term, one gets an Nth degree polynomial approximating f(x).

The reason this polynomial is nearly optimal is that, for functions with rapidly converging power series, the total error arising from cutting off the series after some term is close to the first term after the cutoff; that is, the first term after the cutoff dominates all later terms. The same is true if the expansion is in terms of Chebyshev polynomials. If a Chebyshev expansion is cut off after $T_N$, the error will take a form close to a multiple of $T_{N+1}$. The Chebyshev polynomials have the property that they are level: they oscillate between +1 and -1 on the interval [-1, 1], and $T_{N+1}$ has N+2 level extrema. This means that the error between f(x) and its Chebyshev expansion out to $T_N$ is close to a level function with N+2 extrema, so it is close to the optimal Nth degree polynomial.
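A minimal sketch of this truncated Chebyshev expansion, assuming f is defined on [-1, 1] (rescale other intervals first). The coefficients are computed by sampling f at M Chebyshev nodes and using the discrete orthogonality of the $T_n$; the function name is illustrative:

```python
import numpy as np

def chebyshev_coeffs(f, N, M=64):
    """Approximate the first N+1 Chebyshev coefficients c_0..c_N of f
    on [-1, 1] by sampling at M Chebyshev nodes (M should exceed N)."""
    theta = np.pi * (np.arange(M) + 0.5) / M
    fx = f(np.cos(theta))                     # f at the Chebyshev nodes
    c = np.array([2.0 / M * np.sum(fx * np.cos(n * theta))
                  for n in range(N + 1)])
    c[0] /= 2.0                               # the n = 0 term gets half weight
    return c

# Truncating after T_4 gives the near-optimal degree-4 approximation of exp:
approx = np.polynomial.chebyshev.Chebyshev(chebyshev_coeffs(np.exp, 4))
```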

In the graphs above, note that the blue error function is sometimes better than (inside of) the red function, but sometimes worse, meaning that it is not quite the optimal polynomial. Note also that the discrepancy is less serious for the exp function, which has an extremely rapidly converging power series, than for the log function.

## Remes' algorithm

Remes' algorithm is used to produce an optimal polynomial P(x) approximating a given function f(x) over a given interval. It is an iterative algorithm that converges to a polynomial that has an error function with N+2 level extrema. By the theorem above, that polynomial is optimal.

Remes' algorithm uses the fact that one can construct an Nth degree polynomial that leads to level and alternating error values, given N+2 test points.

Given N+2 test points $x_{1}$, $x_{2}$, ..., $x_{N+2}$ (where $x_{1}$ and $x_{N+2}$ are presumably the end points of the interval of approximation), these equations need to be solved:

$P(x_{1})-f(x_{1})=+\epsilon$
$P(x_{2})-f(x_{2})=-\epsilon$
$P(x_{3})-f(x_{3})=+\epsilon$
$\vdots$
$P(x_{N+2})-f(x_{N+2})=\pm \epsilon$

The right-hand sides alternate in sign.

That is,

$P_{0}+P_{1}x_{1}+P_{2}x_{1}^{2}+P_{3}x_{1}^{3}+\cdots +P_{N}x_{1}^{N}-f(x_{1})=+\epsilon$
$P_{0}+P_{1}x_{2}+P_{2}x_{2}^{2}+P_{3}x_{2}^{3}+\cdots +P_{N}x_{2}^{N}-f(x_{2})=-\epsilon$
$\vdots$

Since $x_{1}$, ..., $x_{N+2}$ were given, all of their powers are known, and $f(x_{1})$, ..., $f(x_{N+2})$ are also known. That means that the above equations are just N+2 linear equations in the N+2 unknowns $P_{0}$, $P_{1}$, ..., $P_{N}$, and $\epsilon$. Given the test points $x_{1}$, ..., $x_{N+2}$, one can solve this system to get the polynomial P and the number $\epsilon$.
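A sketch of this linear-solve step, assuming NumPy; the function name `solve_level_system` is illustrative. The unknown vector holds $P_0, \ldots, P_N$ followed by $\epsilon$, and the sign column alternates as in the equations above:

```python
import numpy as np

def solve_level_system(f, pts):
    """Given N+2 test points, solve for the N+1 polynomial coefficients
    and the level error eps, so that P(x_i) - f(x_i) = (-1)^i * eps."""
    pts = np.asarray(pts, dtype=float)
    m = len(pts)                                  # m = N + 2 points
    A = np.vander(pts, m - 1, increasing=True)    # columns 1, x, ..., x^N
    signs = (-1.0) ** np.arange(m)                # +1, -1, +1, ...
    A = np.column_stack([A, -signs])              # unknowns: P_0..P_N, eps
    sol = np.linalg.solve(A, f(pts))
    return sol[:-1], sol[-1]                      # coefficients, eps

# With the test points from the example below, eps comes out near 4.43e-4:
coeffs, eps = solve_level_system(np.exp, [-1, -0.7, -0.1, 0.4, 0.9, 1])
```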

The graph below shows an example of this, producing a 4th degree polynomial approximating $e^{x}$ over [-1, 1]. The test points were set at -1, -0.7, -0.1, +0.4, +0.9, and 1. Those values are shown in green. The resultant value of $\epsilon$ is $4.43 \times 10^{-4}$.

[Figure: Error of the polynomial produced by the first step of Remes' algorithm, approximating $e^{x}$ over the interval [-1, 1]. Vertical divisions are $10^{-4}$.]

Note that the error graph does indeed take on the values $\pm \epsilon$ at the 6 test points, including the end points, but that those points are not extrema. If the 4 interior test points had been extrema (that is, the function P(x)-f(x) had maxima or minima there), the polynomial would be optimal.

The second step of Remes' algorithm consists of moving the test points to the approximate locations where the error function had its actual local minima or maxima. For example, one can tell from looking at the graph that the point at -0.1 should have been at about -0.28. The way to do this in the algorithm is to use a single round of Newton's method. Since one knows the first and second derivatives of P(x)-f(x), one can calculate approximately how far a test point has to be moved so that the derivative will be zero.

Calculating the derivatives of a polynomial is straightforward. One must also be able to calculate the first and second derivatives of f(x). Remes' algorithm requires an ability to calculate $f(x)\,$, $f'(x)\,$, and $f''(x)\,$ to extremely high precision. The entire algorithm must be carried out to higher precision than the desired precision of the result.
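A sketch of this point-moving step, again with illustrative names; `f1` and `f2` stand for the first and second derivatives of f, which the text above assumes are available. Each interior test point takes one Newton step toward a zero of the error's derivative:

```python
import numpy as np

def move_points(coeffs, pts, f1, f2):
    """One Newton step per interior test point, pushing each toward a
    local extremum of E(x) = P(x) - f(x), i.e. toward E'(x) = 0."""
    P = np.polynomial.Polynomial(coeffs)          # coeffs in increasing powers
    dP, d2P = P.deriv(1), P.deriv(2)
    new = np.array(pts, dtype=float)
    for i in range(1, len(pts) - 1):              # the end points stay fixed
        x = new[i]
        new[i] = x - (dP(x) - f1(x)) / (d2P(x) - f2(x))
    return new
```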

After moving the test points, the linear equation part is repeated, getting a new polynomial, and Newton's method is used again to move the test points again. This sequence is continued until the result converges to the desired accuracy. The algorithm converges very rapidly. Convergence is quadratic for well-behaved functions: if the test points are within $10^{-15}$ of the correct result, they will be approximately within $10^{-30}$ of the correct result after the next round.
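Tying the two sketches above together, the whole iteration alternates the linear solve and the Newton relocation until the test points stop moving (the round count and tolerance here are illustrative):

```python
import numpy as np

def remez_iterate(f, f1, f2, pts, rounds=20, tol=1e-12):
    """Alternate solving the level system and relocating the test points
    (using the solve_level_system and move_points sketches above)."""
    for _ in range(rounds):
        coeffs, eps = solve_level_system(f, pts)
        new_pts = move_points(coeffs, pts, f1, f2)
        if np.max(np.abs(new_pts - pts)) < tol:
            break
        pts = new_pts
    return coeffs, eps

# Example: degree-4 minimax approximation of exp on [-1, 1]
# (exp is its own first and second derivative):
coeffs, eps = remez_iterate(np.exp, np.exp, np.exp,
                            [-1, -0.7, -0.1, 0.4, 0.9, 1])
```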

Remes' algorithm is typically started by choosing the extrema of the Chebyshev polynomial $T_{N+1}$ as the initial test points, since the final error function will be similar to that polynomial.
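In the notation of the sketches above, that initialization is one line: the N+2 extrema of $T_{N+1}$ on [-1, 1] sit at $x_k = \cos(k\pi/(N+1))$ for $k = 0, \ldots, N+1$ (rescale for other intervals):

```python
import numpy as np

N = 4
pts = np.cos(np.arange(N + 2) * np.pi / (N + 1))[::-1]   # ascending, -1 to +1
```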