Ordinary differential equation

ODE redirects here. For the real-time physics engine, see Open Dynamics Engine.

In mathematics, and particularly in analysis, an ordinary differential equation (or ODE) is a relation that contains functions of only one independent variable, and one or more of its derivatives with respect to that variable. See differential calculus and integral calculus for basic calculus background.

Many scientific theories can be expressed clearly and concisely in terms of ordinary differential equations. For instance, the law of radioactive decay of a single isotope of an element states that its rate of loss of mass is proportional to its mass. If t represents time and u(t) represents the mass of the isotope at time t, then the law of decay states that

\frac{du}{dt} = -\alpha u,

where \alpha is a constant that depends upon the particular isotope.
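As a quick check, the decay law can be handed to a computer algebra system; this sketch uses Python's sympy (an illustrative choice, not part of the original text):

```python
import sympy as sp

t = sp.symbols('t')
alpha = sp.symbols('alpha', positive=True)
u = sp.Function('u')

# the decay law du/dt = -alpha * u
decay = sp.Eq(u(t).diff(t), -alpha*u(t))
sol = sp.dsolve(decay, u(t))
# sol has the form u(t) = C1*exp(-alpha*t): exponential decay
```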

Another example of an ordinary differential equation is Newton's second law of motion of a single particle, which states that f = ma, where f is the applied force, m is the mass of the particle, and a is the acceleration of the particle due to the force. If motion is constrained to a straight line, t measures the elapsed time, and the unknown function x(t) specifies the position of the particle along the line, then the velocity v of the particle is given by the first derivative of x with respect to t:

v = \frac{dx}{dt}.

Similarly, the acceleration of the particle a is given by the second derivative of x:

a = \frac{dv}{dt} = \frac{d^2x}{dt^2}.

Thus Newton's second law implies the differential equation

m\frac{d^2x}{dt^2} = f(x).

In general, the force depends upon the position of the particle, and thus the unknown function x appears on both sides of the differential equation, as is indicated in the notation f(x).

Important theorems in the field of ODEs include broad existence and uniqueness theorems and, for ODEs in the plane, the Poincaré–Bendixson theorem.

Definition

Let y represent an unknown function of x, and let

y',\ y'',\ \dots,\ y^{(n)}

denote the derivatives

\frac{dy}{dx},\ \frac{d^2y}{dx^2},\ \dots,\ \frac{d^ny}{dx^n}.

An ordinary differential equation (ODE) is an equation involving

x,\ y,\ y',\ y'',\ \dots.

The order of a differential equation is the order n of the highest derivative that appears. If the highest derivative appears only in integer powers, then the degree of the equation is the highest power of the highest derivative.

A solution of an ODE is a function y(x) whose derivatives satisfy the equation. Such a function is not guaranteed to exist and, if it does exist, is usually not unique. A general solution of an nth-order equation is a solution containing n arbitrary constants, corresponding to n constants of integration. A particular solution is derived from the general solution by setting the constants to particular values. A singular solution is a solution that cannot be derived from the general solution.

When a differential equation of order n has the form

F\left(x,\ y,\ y',\ y'',\ \dots,\ y^{(n)}\right) = 0

it is called an implicit differential equation whereas the form

F\left(x,\ y,\ y',\ \dots,\ y^{(n-1)}\right) = y^{(n)}

is called an explicit differential equation.

A differential equation that does not depend explicitly on x is called autonomous, and one with no terms depending only on x is called homogeneous.

General application

An important special case is when the equations do not involve x. These differential equations may be represented as vector fields. This type of differential equation has the property that space can be divided into equivalence classes based on whether two points lie on the same solution curve. Since the laws of physics are believed not to change with time, the physical world is governed by such differential equations. (See also symplectic topology for abstract discussion.)

In the case where the equations are linear, the original equation can be solved by breaking it down into smaller equations, solving those, and then adding the results back together. Unfortunately, many of the interesting differential equations are non-linear, which means that they cannot be broken down in this way. There are also a number of techniques for solving differential equations using a computer (see numerical ordinary differential equations).

Ordinary differential equations are to be distinguished from partial differential equations where y is a function of several variables, and the differential equation involves partial derivatives.

Existence and nature of solutions

The problem of solving a differential equation is to find the function y whose derivatives satisfy the equation. For example, the differential equation

y'' + y = 0

has the general solution

y = A\cos x + B\sin x,

where A, B are constants determined from boundary conditions.
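The claim is easy to verify symbolically; a minimal sympy sketch (assuming sympy is available):

```python
import sympy as sp

x, A, B = sp.symbols('x A B')
y = A*sp.cos(x) + B*sp.sin(x)

# y'' + y vanishes identically for every choice of A and B
residual = sp.simplify(y.diff(x, 2) + y)
assert residual == 0
```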

In general, an nth-order equation allows the values of x, y, and the n-1 lower-order derivatives of y to be fixed arbitrarily; the equation can then be solved (at least conceptually) for y^{(n)}. If the equation has finite degree d, this is a polynomial equation in y^{(n)} with at most d roots. There can therefore be as many as d possible values of y^{(n)} at any given point and for any values of the lower-order derivatives, though there may be ranges of these points and values where there are fewer solutions (or none at all). A Lipschitz condition on the equation is needed to guarantee that the solution through given initial data is unique.

Thus, in the previous example (a second-order, first-degree equation), any point in the plane and any slope through that point can be selected and yield a unique solution (since the single root for y'' exists for any value of y). Note in particular that infinitely many solutions pass through any given point; this is a general characteristic of equations of order higher than one.

[Figure: some solutions to (y')^2 + xy' - y = 0. Particular solutions in blue; the singular solution in green; the hybrid solution described in the text in red.]

Consider now

(y')^2 + xy' - y = 0

with general solution

y = Ax + A^2.

This is a first-order, second-degree equation, thus any point can have at most two solutions passing through it, corresponding to the two roots of y' in the quadratic equation that results after fixing x and y. Studying the discriminant x^2 + 4y of this quadratic leads to the conclusion that only a single solution exists along the parabola y = -\frac{1}{4}x^2 (where the discriminant is zero) and that no solution exists below this parabola (where both roots are complex).

The parabola in this problem is an example of a cusp locus: a curve along which two or more roots of the equation for the derivative are identical. Along such a locus it is possible to move from one general solution to another while still obeying the differential equation; thus the presence of cusp loci introduces the possibility of singular solutions. In this example, the parabola y = -\frac{1}{4}x^2 is such a singular solution; it satisfies the original differential equation, and a full set of solutions must include such possibilities as the hybrid solution:

y = \begin{cases} x + 1, & \text{if } x < -2 \\ -\frac{1}{4}x^2, & \text{if } -2 \le x < 2 \\ -x + 1, & \text{if } 2 \le x \end{cases}

where the cusp locus has been used to connect two particular solutions; note that the first derivative (the only derivative to appear in the differential equation) is continuous at the transitions.

(Johnson, Chapter 5)
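Each piece of the hybrid solution can be checked directly against (y')^2 + xy' - y = 0; a short sympy sketch (assuming sympy):

```python
import sympy as sp

x = sp.symbols('x')

# the two tangent lines and the singular parabola from the hybrid solution
pieces = [x + 1, -sp.Rational(1, 4)*x**2, -x + 1]
for y in pieces:
    yp = y.diff(x)
    # each piece satisfies (y')^2 + x y' - y = 0 identically
    assert sp.simplify(yp**2 + x*yp - y) == 0
```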

Types of differential equations with some history

The influence of geometry, physics, and astronomy, starting with Newton and Leibniz, and further manifested through the Bernoullis, Riccati, and Clairaut, but chiefly through d'Alembert and Euler, has been very marked, especially on the theory of linear partial differential equations with constant coefficients.

Homogeneous linear ODEs with constant coefficients

The first method of integrating linear ordinary differential equations with constant coefficients is due to Euler, who realized that solutions have the form e^{zx}, for possibly-complex values of z. Thus

\frac{d^ny}{dx^n} + A_1\frac{d^{n-1}y}{dx^{n-1}} + \cdots + A_n y = 0

has the form

z^n e^{zx} + A_1 z^{n-1} e^{zx} + \cdots + A_n e^{zx} = 0

so dividing by e^{zx} gives the nth-degree polynomial equation

F(z) = z^n + A_1 z^{n-1} + \cdots + A_n = 0.

In short, the terms

\frac{d^ky}{dx^k} \quad (k = 1, 2, \dots, n)

of the original differential equation are replaced by z^k. Solving the polynomial gives n values of z, z_1, \dots, z_n. Substituting those values into e^{z_i x} gives a basis for the solution; any linear combination of these basis functions will satisfy the differential equation.

This equation F(z) = 0 is the "characteristic" equation considered later by Monge and Cauchy.


Example
y'''' - 2y''' + 2y'' - 2y' + y = 0

has the characteristic equation

z^4 - 2z^3 + 2z^2 - 2z + 1 = 0.

This has zeros i, −i, and 1 (with multiplicity 2). The solution basis is then

e^{ix},\ e^{-ix},\ e^x,\ xe^x.

This corresponds to the real-valued solution basis

\cos x,\ \sin x,\ e^x,\ xe^x.
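The root-finding step can be automated; here sympy's `roots` returns each zero of the characteristic polynomial together with its multiplicity (a sketch, assuming sympy):

```python
import sympy as sp

z = sp.symbols('z')
charpoly = z**4 - 2*z**3 + 2*z**2 - 2*z + 1

# {root: multiplicity} -- expect 1 with multiplicity 2, plus the pair +/- i
r = sp.roots(charpoly, z)
```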

If z is a (possibly complex) zero of F(z) of multiplicity m and k \in \{0, 1, \dots, m-1\}, then y = x^k e^{zx} is a solution of the ODE. Together these functions make up a basis of the ODE's solution space.

If the A_i are real, then real-valued solutions are preferable. Since the non-real values of z come in conjugate pairs, so do their corresponding basis functions y; replace each such pair with the linear combinations Re(y) and Im(y).

A case that involves complex roots can be solved with the aid of Euler's formula.

  • Example: Given y'' - 4y' + 5y = 0. The characteristic equation is z^2 - 4z + 5 = 0, which has zeros 2+i and 2−i. Thus the solution basis \{y_1, y_2\} is \{e^{(2+i)x}, e^{(2-i)x}\}. Now y is a solution iff y = c_1 y_1 + c_2 y_2 for c_1, c_2 \in \mathbb{C}.

Because the coefficients are real,

  • we are likely not interested in the complex solutions
  • our basis elements are mutual conjugates

The linear combinations

u_1 = \mathrm{Re}(y_1) = \frac{y_1 + y_2}{2} = e^{2x}\cos(x) and
u_2 = \mathrm{Im}(y_1) = \frac{y_1 - y_2}{2i} = e^{2x}\sin(x)

will give us a real basis \{u_1, u_2\}.
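A direct check that u_1 and u_2 really solve the equation (sympy sketch):

```python
import sympy as sp

x = sp.symbols('x')
u1 = sp.exp(2*x)*sp.cos(x)
u2 = sp.exp(2*x)*sp.sin(x)

for u in (u1, u2):
    # y'' - 4y' + 5y should vanish identically for each basis element
    assert sp.simplify(u.diff(x, 2) - 4*u.diff(x) + 5*u) == 0
```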

Linear ODEs with constant coefficients

Suppose instead we face

\frac{d^ny}{dx^n} + A_1\frac{d^{n-1}y}{dx^{n-1}} + \cdots + A_n y = f(x).

For later convenience, define the characteristic polynomial

P(v) = v^n + A_1 v^{n-1} + \cdots + A_n.

We find the solution basis \{y_1, y_2, \ldots, y_n\} as in the homogeneous (f = 0) case. We now seek a particular solution y_p by the method of variation of parameters. Let the coefficients of the linear combination be functions of x:

y_p = u_1 y_1 + u_2 y_2 + \cdots + u_n y_n.

Using the operator notation D = d/dx and a slight abuse of notation, the ODE in question is P(D)y = f; so

f = P(D)y_p = P(D)(u_1 y_1) + P(D)(u_2 y_2) + \cdots + P(D)(u_n y_n).

With the constraints

0 = u_1' y_1 + u_2' y_2 + \cdots + u_n' y_n
0 = u_1' y_1' + u_2' y_2' + \cdots + u_n' y_n'
\vdots
0 = u_1' y_1^{(n-2)} + u_2' y_2^{(n-2)} + \cdots + u_n' y_n^{(n-2)}

the parameters pass through the operator, leaving only a single extra term:

f = u_1 P(D)y_1 + u_2 P(D)y_2 + \cdots + u_n P(D)y_n + u_1' y_1^{(n-1)} + u_2' y_2^{(n-1)} + \cdots + u_n' y_n^{(n-1)}.

But P(D)y_{j}=0, therefore

f = u_1' y_1^{(n-1)} + u_2' y_2^{(n-1)} + \cdots + u_n' y_n^{(n-1)}.

This, with the constraints, gives a linear system in the u'_{j}. This much can always be solved; in fact, combining Cramer's rule with the Wronskian,

u_j' = \frac{W_j}{W(y_1, y_2, \ldots, y_n)},

where W_j is the determinant obtained from the Wronskian W(y_1, \ldots, y_n) by replacing its jth column with (0, 0, \ldots, 0, f)^T.

The rest is a matter of integrating u'_{j}.

The particular solution is not unique; y_p + c_1 y_1 + \cdots + c_n y_n also satisfies the ODE for any set of constants c_j.

See also variation of parameters.

Example: Suppose y'' - 4y' + 5y = \sin(kx). We take the solution basis found above, \{e^{(2+i)x}, e^{(2-i)x}\}.

W = \begin{vmatrix} e^{(2+i)x} & e^{(2-i)x} \\ (2+i)e^{(2+i)x} & (2-i)e^{(2-i)x} \end{vmatrix} = e^{4x}\begin{vmatrix} 1 & 1 \\ 2+i & 2-i \end{vmatrix} = -2ie^{4x}

u_1' = \frac{1}{W}\begin{vmatrix} 0 & e^{(2-i)x} \\ \sin(kx) & (2-i)e^{(2-i)x} \end{vmatrix} = -\frac{i}{2}\sin(kx)e^{(-2-i)x}

u_2' = \frac{1}{W}\begin{vmatrix} e^{(2+i)x} & 0 \\ (2+i)e^{(2+i)x} & \sin(kx) \end{vmatrix} = \frac{i}{2}\sin(kx)e^{(-2+i)x}.

Using the list of integrals of exponential functions

u_1 = -\frac{i}{2}\int \sin(kx)e^{(-2-i)x}\,dx = \frac{ie^{(-2-i)x}}{2(3+4i+k^2)}\left((2+i)\sin(kx) + k\cos(kx)\right)

u_2 = \frac{i}{2}\int \sin(kx)e^{(-2+i)x}\,dx = \frac{ie^{(i-2)x}}{2(3-4i+k^2)}\left((i-2)\sin(kx) - k\cos(kx)\right).

And so

y_p = \frac{i}{2(3+4i+k^2)}\left((2+i)\sin(kx) + k\cos(kx)\right) + \frac{i}{2(3-4i+k^2)}\left((i-2)\sin(kx) - k\cos(kx)\right)
= \frac{(5-k^2)\sin(kx) + 4k\cos(kx)}{(3+k^2)^2 + 16}.

(Notice that u_1 and u_2 contained exponential factors that canceled those in y_1 and y_2; that is typical.)

For interest's sake, this ODE has a physical interpretation as a driven damped harmonic oscillator; yp represents the steady state, and c_{1}y_{1}+c_{2}y_{2} is the transient.
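The particular solution above can be verified by substitution; a sympy sketch (note that (3+k^2)^2 + 16 = k^4 + 6k^2 + 25):

```python
import sympy as sp

x, k = sp.symbols('x k')
yp = ((5 - k**2)*sp.sin(k*x) + 4*k*sp.cos(k*x)) / ((3 + k**2)**2 + 16)

# substitute into y'' - 4y' + 5y and compare with the forcing sin(kx)
residual = sp.simplify(yp.diff(x, 2) - 4*yp.diff(x) + 5*yp - sp.sin(k*x))
assert residual == 0
```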

Linear ODEs with variable coefficients

Method of undetermined coefficients

Main article: Method of undetermined coefficients

The method of undetermined coefficients (MoUC) is useful in finding a solution for y_p. Given the ODE P(D)y = f(x), find another differential operator A(D) such that A(D)f(x) = 0. This operator is called the annihilator, and the method of undetermined coefficients is thus also known as the annihilator method. Applying A(D) to both sides of the ODE gives a homogeneous ODE \big(A(D)P(D)\big)y = 0, for which we find a solution basis \{y_1, \ldots, y_n\} as before. The original nonhomogeneous ODE is then used to construct a system of equations restricting the coefficients of the linear combination to satisfy the ODE.

Undetermined coefficients is not as general as variation of parameters in the sense that an annihilator does not always exist.

Example: Given y'' - 4y' + 5y = \sin(kx), P(D) = D^2 - 4D + 5. The simplest annihilator of \sin(kx) is A(D) = D^2 + k^2. The zeros of A(z)P(z) are \{2+i, 2-i, ik, -ik\}, so the solution basis of A(D)P(D) is \{y_1, y_2, y_3, y_4\} = \{e^{(2+i)x}, e^{(2-i)x}, e^{ikx}, e^{-ikx}\}.

Setting y = c_1 y_1 + c_2 y_2 + c_3 y_3 + c_4 y_4 we find

\sin(kx) = P(D)y = P(D)(c_1 y_1 + c_2 y_2 + c_3 y_3 + c_4 y_4)
= c_1 P(D)y_1 + c_2 P(D)y_2 + c_3 P(D)y_3 + c_4 P(D)y_4
= 0 + 0 + c_3(-k^2 - 4ik + 5)y_3 + c_4(-k^2 + 4ik + 5)y_4
= c_3(-k^2 - 4ik + 5)(\cos(kx) + i\sin(kx)) + c_4(-k^2 + 4ik + 5)(\cos(kx) - i\sin(kx))

giving the system

i = (k^2 + 4ik - 5)c_3 + (-k^2 + 4ik + 5)c_4
0 = (k^2 + 4ik - 5)c_3 + (k^2 - 4ik - 5)c_4

which has solutions

c_3 = \frac{i}{2(k^2 + 4ik - 5)}, \quad c_4 = \frac{i}{2(-k^2 + 4ik + 5)}

giving the solution set

y = c_1 y_1 + c_2 y_2 + \frac{i}{2(k^2 + 4ik - 5)}y_3 + \frac{i}{2(-k^2 + 4ik + 5)}y_4
= c_1 y_1 + c_2 y_2 + \frac{4k\cos(kx) - (k^2 - 5)\sin(kx)}{(k^2 + 4ik - 5)(k^2 - 4ik - 5)}
= c_1 y_1 + c_2 y_2 + \frac{4k\cos(kx) + (5 - k^2)\sin(kx)}{k^4 + 6k^2 + 25}.
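The values of c_3 and c_4 can be checked against the linear system obtained by matching the coefficients of sin(kx) and cos(kx); a sympy sketch:

```python
import sympy as sp

k = sp.symbols('k')
I = sp.I
c3 = I/(2*(k**2 + 4*I*k - 5))
c4 = I/(2*(-k**2 + 4*I*k + 5))

# the system from matching coefficients of sin(kx) and cos(kx)
eq1 = (k**2 + 4*I*k - 5)*c3 + (-k**2 + 4*I*k + 5)*c4 - I
eq2 = (k**2 + 4*I*k - 5)*c3 + (k**2 - 4*I*k - 5)*c4
assert sp.simplify(eq1) == 0
assert sp.simplify(eq2) == 0
```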

Method of variation of parameters

Main article: Method of variation of parameters.

As explained above, the general solution to a non-homogeneous, linear differential equation y''(x) + p(x)y'(x) + q(x)y(x) = g(x) can be expressed as the sum of the general solution y_h(x) to the corresponding homogeneous, linear differential equation y''(x) + p(x)y'(x) + q(x)y(x) = 0 and any one solution y_p(x) to y''(x) + p(x)y'(x) + q(x)y(x) = g(x).

Like the method of undetermined coefficients, described above, the method of variation of parameters is a method for finding one solution to y''(x)+p(x)y'(x)+q(x)y(x)=g(x), having already found the general solution to y''(x)+p(x)y'(x)+q(x)y(x)=0. Unlike the method of undetermined coefficients, which fails except with certain specific forms of g(x), the method of variation of parameters will always work; however, it is significantly more difficult to use.

For a second-order equation, the method of variation of parameters makes use of the following fact:

Fact

Let p(x), q(x), and g(x) be functions, and let y_{1}(x) and y_{2}(x) be solutions to the homogeneous, linear differential equation y''(x)+p(x)y'(x)+q(x)y(x)=0. Further, let u(x) and v(x) be functions such that u'(x)y_{1}(x)+v'(x)y_{2}(x)=0 and u'(x)y_{1}'(x)+v'(x)y_{2}'(x)=g(x) for all x, and define y_{p}(x)=u(x)y_{1}(x)+v(x)y_{2}(x). Then y_{p}(x) is a solution to the non-homogeneous, linear differential equation y''(x)+p(x)y'(x)+q(x)y(x)=g(x).

Proof

y_p(x) = u(x)y_1(x) + v(x)y_2(x)

y_p'(x) = u'(x)y_1(x) + u(x)y_1'(x) + v'(x)y_2(x) + v(x)y_2'(x)
= 0 + u(x)y_1'(x) + v(x)y_2'(x)

y_p''(x) = u'(x)y_1'(x) + u(x)y_1''(x) + v'(x)y_2'(x) + v(x)y_2''(x)
= g(x) + u(x)y_1''(x) + v(x)y_2''(x)

y_p''(x) + p(x)y_p'(x) + q(x)y_p(x)
= g(x) + u(x)y_1''(x) + v(x)y_2''(x) + p(x)u(x)y_1'(x) + p(x)v(x)y_2'(x) + q(x)u(x)y_1(x) + q(x)v(x)y_2(x)
= g(x) + u(x)\big(y_1''(x) + p(x)y_1'(x) + q(x)y_1(x)\big) + v(x)\big(y_2''(x) + p(x)y_2'(x) + q(x)y_2(x)\big)
= g(x) + 0 + 0 = g(x)

Usage

To solve the second-order, non-homogeneous, linear differential equation y''(x)+p(x)y'(x)+q(x)y(x)=g(x) using the method of variation of parameters, use the following steps:

  1. Find the general solution to the corresponding homogeneous equation y''(x)+p(x)y'(x)+q(x)y(x)=0. Specifically, find two linearly independent solutions y_{1}(x) and y_{2}(x).
  2. Since y_{1}(x) and y_{2}(x) are linearly independent solutions, their Wronskian W(x) = y_{1}(x)y_{2}'(x)-y_{1}'(x)y_{2}(x) is nonzero, so we can compute -g(x)y_{2}(x)/W(x) and g(x)y_{1}(x)/W(x). If the former equals u'(x) and the latter v'(x), then u and v satisfy the two constraints given above: that u'(x)y_{1}(x)+v'(x)y_{2}(x)=0 and that u'(x)y_{1}'(x)+v'(x)y_{2}'(x)=g(x). (This can be checked by multiplying through by W(x) and comparing coefficients.)
  3. Integrate -(g(x)y_{2}(x))/({y_{1}(x)y_{2}'(x)-y_{1}'(x)y_{2}(x)}) and ({g(x)y_{1}(x)})/({y_{1}(x)y_{2}'(x)-y_{1}'(x)y_{2}(x)}) to obtain u(x) and v(x), respectively. (Note that we only need one choice of u and v, so there is no need for constants of integration.)
  4. Compute y_{p}(x)=u(x)y_{1}(x)+v(x)y_{2}(x). The function y_{p} is one solution of y''(x)+p(x)y'(x)+q(x)y(x)=g(x).
  5. The general solution is c_{1}y_{1}(x)+c_{2}y_{2}(x)+y_{p}(x), where c_{1} and c_{2} are arbitrary constants.
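The five steps can be collected into a small routine; a sympy sketch (the function name and the test equation y'' + y = x are illustrative choices, not from the text):

```python
import sympy as sp

x = sp.symbols('x')

def particular_solution(y1, y2, g):
    """Variation of parameters for y'' + p y' + q y = g,
    given two independent homogeneous solutions y1 and y2."""
    W = sp.simplify(y1*y2.diff(x) - y1.diff(x)*y2)   # step 2: Wronskian
    u = sp.integrate(-g*y2/W, x)                     # step 3: integrate u'
    v = sp.integrate(g*y1/W, x)                      #          and v'
    return sp.simplify(u*y1 + v*y2)                  # step 4: y_p = u y1 + v y2

# y'' + y = x has the obvious particular solution y_p = x
yp = particular_solution(sp.cos(x), sp.sin(x), x)
```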
Higher-order equations

The method of variation of parameters can also be used with higher-order equations. For example, if y_{1}(x), y_{2}(x), and y_{3}(x) are linearly independent solutions to y'''(x)+p(x)y''(x)+q(x)y'(x)+r(x)y(x)=0, then there exist functions u(x), v(x), and w(x) such that u'(x)y_{1}(x)+v'(x)y_{2}(x)+w'(x)y_{3}(x)=0, u'(x)y_{1}'(x)+v'(x)y_{2}'(x)+w'(x)y_{3}'(x)=0, and u'(x)y_{1}''(x)+v'(x)y_{2}''(x)+w'(x)y_{3}''(x)=g(x). Having found such functions (by solving algebraically for u'(x), v'(x), and w'(x), then integrating each), we have y_{p}(x)=u(x)y_{1}(x)+v(x)y_{2}(x)+w(x)y_{3}(x), one solution to the equation y'''(x)+p(x)y''(x)+q(x)y'(x)+r(x)y(x)=g(x).

Example

Solve y'' + y = \sec x, extending the earlier example y'' + y = 0. Recall \sec x = \frac{1}{\cos x} = f. The characteristic equation z^2 + 1 = 0 has roots z = \pm i, which yield y_c = C_1\cos x + C_2\sin x (so y_1 = \cos x, y_2 = \sin x). The derivatives of the parameters are

u' = \frac{-y_2 f}{W} = \frac{-\sin x}{\cos x} = -\tan x
v' = \frac{y_1 f}{W} = \frac{\cos x}{\cos x} = 1

where the Wronskian

W(y_1, y_2) = \begin{vmatrix} \cos x & \sin x \\ -\sin x & \cos x \end{vmatrix} = 1

has been used.

Upon integration,

u = -\int \tan x\,dx = \ln\left|\cos x\right| + C
v = \int 1\,dx = x + C

Computing y_p and the general solution y_G:

y_p = uy_1 + vy_2 = \cos x\,\ln\left|\cos x\right| + x\sin x
y_G = y_c + y_p = C_1\cos x + C_2\sin x + x\sin x + \cos x\,\ln\left|\cos x\right|
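Substituting the particular solution back into y'' + y = \sec x confirms the computation; a sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')
yp = sp.cos(x)*sp.log(sp.cos(x)) + x*sp.sin(x)

# y_p'' + y_p should reduce to sec x = 1/cos x
residual = sp.simplify(yp.diff(x, 2) + yp - 1/sp.cos(x))
assert residual == 0
```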

General solution method for first-order linear ODEs

Example
f' + 3f = 2

with the initial condition

f(0) = 2.

Using the general solution method:

f = e^{-3x}\left(\int 2e^{3x}\,dx + \kappa\right).

The integration is done from 0 to x, giving:

f = e^{-3x}\left(\tfrac{2}{3}\left(e^{3x} - e^0\right) + \kappa\right).

Then we can reduce to:

f = \tfrac{2}{3}\left(1 - e^{-3x}\right) + e^{-3x}\kappa.

The initial condition f(0) = 2 then gives \kappa = 2.

For a first-order linear ODE, with coefficients that may or may not vary with t:

x'(t) + p(t)x(t) = r(t).

Then,

x = e^{-a(t)}\left(\int r(t)e^{a(t)}\,dt + \kappa\right)

where \kappa is the constant of integration, and

a(t) = \int p(s)\,ds.
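The general formula translates directly into code; a sympy sketch (the function name is an illustrative choice):

```python
import sympy as sp

t, kappa = sp.symbols('t kappa')

def solve_first_order_linear(p, r):
    """x = e^{-a(t)} ( int r e^{a(t)} dt + kappa ), with a(t) = int p dt."""
    a = sp.integrate(p, t)
    return sp.simplify(sp.exp(-a)*(sp.integrate(r*sp.exp(a), t) + kappa))

# the earlier example x' + 3x = 2; the kappa term carries the integration constant
xsol = solve_first_order_linear(3, 2)
```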

Proof

This proof comes from Johann Bernoulli. Let

x' + px = r.

Suppose for some unknown functions u(t) and v(t) that x = uv.

Then

x' = u'v + uv'

Substituting into the differential equation,

u'v + uv' + puv = r

Now, the most important step: since u and v together give us one degree of freedom to spare, we are free to require the two parts to vanish separately and write

u'v + puv = 0
uv' = r

Since v is not zero, the top equation becomes

u' + pu = 0

The solution of this is

u = e^{-\int p\,dt}

Substituting into the second equation

v = \int re^{\int p\,dt}\,dt + C

Since x = uv, for arbitrary constant C

x = e^{-\int p\,dt}\left(\int re^{\int p\,dt}\,dt + C\right)

First order differential equation with constant coefficients

As an illustrative example, consider a first order differential equation with constant coefficients:

a\frac{dx}{dt} + bx = Af(t).

This equation is particularly relevant to first order systems such as RC circuits and mass-damper systems.

After nondimensionalization, the equation becomes

\frac{d\chi}{d\tau} + \chi = F(\tau).

In this case, p(\tau) = 1 and r(\tau) = F(\tau).

Hence its solution by inspection is

\chi(\tau) = e^{-\tau}\left(\int F(\tau)e^{\tau}\,d\tau + C\right).
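For a unit-step forcing F(\tau) = 1 with \chi(0) = 0, the integral solution gives \chi = 1 - e^{-\tau}; a forward-Euler sketch (the step size and horizon are arbitrary choices, not from the text) reproduces this numerically:

```python
import numpy as np

dtau = 1e-3
tau = np.arange(0.0, 5.0, dtau)
chi = np.zeros_like(tau)

# forward Euler for chi' = F - chi with F = 1 and chi(0) = 0
for i in range(1, len(tau)):
    chi[i] = chi[i-1] + dtau*(1.0 - chi[i-1])

exact = 1.0 - np.exp(-tau)   # closed form, with C chosen so chi(0) = 0
err = np.max(np.abs(chi - exact))
```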

Singular solutions

The theory of singular solutions of ordinary and partial differential equations was a subject of research from the time of Leibniz, but only since the middle of the nineteenth century did it receive special attention. A valuable but little-known work on the subject is that of Houtain (1854). Darboux (starting in 1873) was a leader in the theory, and in the geometric interpretation of these solutions he opened a field which was worked by various writers, notably Casorati and Cayley. To the latter is due (1872) the theory of singular solutions of differential equations of the first order as accepted circa 1900.

Reduction to quadratures

The primitive attempt in dealing with differential equations had in view a reduction to quadratures. As it had been the hope of eighteenth-century algebraists to find a method for solving the general equation of the nth degree, so it was the hope of analysts to find a general method for integrating any differential equation. Gauss (1799) showed, however, that the differential equation meets its limitations very soon unless complex numbers are introduced. Hence analysts began to substitute the study of functions, thus opening a new and fertile field. Cauchy was the first to appreciate the importance of this view. Thereafter the real question was to be, not whether a solution is possible by means of known functions or their integrals, but whether a given differential equation suffices for the definition of a function of the independent variable or variables, and if so, what are the characteristic properties of this function.

The Fuchsian theory

Two memoirs by Fuchs (Crelle, 1866, 1868) inspired a novel approach, subsequently elaborated by Thomé and Frobenius. Collet was a prominent contributor beginning in 1869, although his method for integrating a non-linear system was communicated to Bertrand in 1868. Clebsch (1873) attacked the theory along lines parallel to those followed in his theory of Abelian integrals. As the latter can be classified according to the properties of the fundamental curve which remains unchanged under a rational transformation, so Clebsch proposed to classify the transcendent functions defined by the differential equations according to the invariant properties of the corresponding surfaces f = 0 under rational one-to-one transformations.

Lie's theory

From 1870 Lie's work put the theory of differential equations on a more satisfactory foundation. He showed that the integration theories of the older mathematicians can, by the introduction of what are now called Lie groups, be referred to a common source; and that ordinary differential equations which admit the same infinitesimal transformations present comparable difficulties of integration. He also emphasized the subject of transformations of contact (Berührungstransformationen).
