# Lyapunov stability

**Lyapunov stability** applies only to unforced (no control input) dynamical systems. It is used to study the behaviour of a dynamical system under initial perturbations around an equilibrium point.

Let us consider that the origin is an equilibrium point (EP) of the system $\dot{x} = f(x)$, i.e. $f(0) = 0$.

The system is said to be stable about the equilibrium point "in the sense of Lyapunov" if for every $\varepsilon > 0$ there is a $\delta > 0$ such that:

$$\|x(0)\| < \delta \implies \|x(t)\| < \varepsilon \quad \text{for all } t \ge 0$$

The system is said to be asymptotically stable if, in addition, $x(t) \to 0$ as $t \to \infty$.


## Lyapunov stability theorems

Lyapunov stability theorems give only sufficient conditions: finding a suitable function proves stability, but failing to find one proves nothing.

### Lyapunov second theorem on stability

Consider a function $V(x) : \mathbb{R}^{n} \to \mathbb{R}$ such that

- $V(x) > 0$ for all $x \neq 0$, with $V(0) = 0$ (positive definite)
- $\dot{V}(x) < 0$ for all $x \neq 0$ along the system's trajectories (negative definite)

Then $V(x)$ is called a Lyapunov function candidate and, if both conditions hold, the system is asymptotically stable in the sense of Lyapunov.
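As a minimal numerical sketch of checking these two conditions (this example is not from the original text: the scalar system $\dot{x} = -x^{3}$ and the candidate $V(x) = x^{2}$ are chosen here purely for illustration):

```python
import numpy as np

# Hypothetical illustration: scalar system x' = -x**3 with candidate V(x) = x**2.
f = lambda x: -x**3            # system dynamics (chosen for this sketch)
V = lambda x: x**2             # Lyapunov function candidate
Vdot = lambda x: 2 * x * f(x)  # dV/dt along trajectories: V'(x) * f(x) = -2*x**4

# Check both conditions on a grid of nonzero states.
grid = np.linspace(-2.0, 2.0, 401)
grid = grid[grid != 0.0]                 # exclude the equilibrium itself
print(bool(np.all(V(grid) > 0)))         # positive definite (V(0) = 0 trivially)
print(bool(np.all(Vdot(grid) < 0)))      # negative definite along trajectories
```

A grid check like this is of course no proof, but it is a quick sanity test before attempting an analytical argument.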

It is easier to visualise this method of analysis by thinking of a physical system (e.g. a vibrating spring and mass) and considering the energy of such a system. If the system loses energy over time and the energy is never restored, then eventually the system must grind to a stop and reach some final resting state. This final state is called the attractor. However, finding a function that gives the precise energy of a physical system can be difficult, and for abstract mathematical systems, economic systems or biological systems, the concept of energy may not be applicable.

Lyapunov's realisation was that stability can be proven without requiring knowledge of the true physical energy, provided a Lyapunov function can be found that satisfies the above constraints.

## Stability for state space models

A state space model $\dot{x} = Ax$ is asymptotically stable if

$$A^{T}M + MA = -N$$

has a solution where $M = M^{T} > 0$ and $N = N^{T} > 0$ (positive definite matrices). (The relevant Lyapunov function is $V(x) = x^{T}Mx$.)
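The Lyapunov equation above is linear in the entries of $M$, so it can be solved by vectorisation. A small sketch (the matrices $A$ and $N$ below are made up for this example):

```python
import numpy as np

# Hypothetical example: solve A^T M + M A = -N for a stable A via the
# Kronecker-product identity vec(AXB) = (B^T kron A) vec(X) (column-major vec).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # eigenvalues -1 and -2, so A is Hurwitz
N = np.eye(2)                  # any symmetric positive definite choice works

I = np.eye(2)
# vec(A^T M + M A) = (I kron A^T + A^T kron I) vec(M)
K = np.kron(I, A.T) + np.kron(A.T, I)
M = np.linalg.solve(K, -N.flatten(order="F")).reshape(2, 2, order="F")

print(M)                                   # [[1.25 0.25], [0.25 0.25]]
print(np.linalg.eigvalsh(M))               # both positive: M is positive definite
assert np.allclose(A.T @ M + M @ A, -N)    # M solves the Lyapunov equation
```

Since $M$ turns out symmetric positive definite, $V(x) = x^{T}Mx$ certifies asymptotic stability of this $A$.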

### Example

Consider the Van der Pol oscillator equation in reverse time:

$$\ddot{y} + \mu \left(1 - y^{2}\right) \dot{y} + y = 0$$

Let $x_1 = y$, $x_2 = \dot{y}$ so that the corresponding system is

$$\dot{x}_1 = x_2, \qquad \dot{x}_2 = -x_1 - \mu \left(1 - x_1^{2}\right) x_2$$

Let us choose as a Lyapunov function

$$V(x) = \frac{x_1^{2} + x_2^{2}}{2}$$

which is clearly positive definite. Its derivative is

$$\dot{V} = x_1 \dot{x}_1 + x_2 \dot{x}_2 = x_1 x_2 - x_1 x_2 - \mu \left(1 - x_1^{2}\right) x_2^{2} = -\mu \left(1 - x_1^{2}\right) x_2^{2}$$

If the parameter $\mu$ is positive, $\dot{V}$ is negative semi-definite in the region $x_1^{2} < 1$, and stability is asymptotic for $x_1^{2} < 1$.
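This can be checked numerically. The sketch below (my own, not from the text) assumes the reverse-time form $\dot{x}_1 = x_2$, $\dot{x}_2 = -x_1 - \mu(1 - x_1^2)x_2$, with $\mu = 1$ and an initial condition inside $x_1^2 < 1$, both chosen for illustration:

```python
# Sketch: integrate the reverse-time Van der Pol system with mu = 1 (assumed)
# from a state inside x1^2 < 1 and check that V = (x1^2 + x2^2)/2 decreases.
mu = 1.0
x1, x2 = 0.5, 0.5            # initial condition chosen inside x1^2 < 1
V = lambda a, b: 0.5 * (a * a + b * b)
V0 = V(x1, x2)

dt = 1e-3
for _ in range(20000):       # explicit Euler up to t = 20
    dx1 = x2
    dx2 = -x1 - mu * (1.0 - x1 * x1) * x2
    x1, x2 = x1 + dt * dx1, x2 + dt * dx2

print(V(x1, x2) < V0)        # True: the energy-like V has decayed
```

Note that the sublevel set $V \le V_0 = 0.25$ lies inside $x_1^2 < 1$, so the trajectory never leaves the region where $\dot{V} \le 0$.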

## Barbalat's lemma and stability of time-varying systems

Assume that $f$ is a function of time only.

- $\dot{f}(t) \to 0$ as $t \to \infty$ does not imply that $f(t)$ has a limit as $t \to \infty$.

- $f(t)$ having a limit as $t \to \infty$ does not imply that $\dot{f}(t) \to 0$.

- If $f(t)$ is lower bounded and decreasing ($\dot{f} \le 0$), then it converges to a limit. But this does not say whether or not $\dot{f} \to 0$ as $t \to \infty$.

Barbalat's lemma says that if $f(t)$ has a finite limit as $t \to \infty$ and if $\dot{f}$ is uniformly continuous (or $\ddot{f}$ is bounded), then $\dot{f}(t) \to 0$ as $t \to \infty$.
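The second fact above can be illustrated numerically. The classic counterexample $f(t) = e^{-t}\sin(e^{2t})$ (chosen here; it is a standard one, not taken from this text) converges to 0, yet its derivative $\dot{f}(t) = -e^{-t}\sin(e^{2t}) + 2e^{t}\cos(e^{2t})$ is unbounded:

```python
import math

# Illustration: f(t) = exp(-t)*sin(exp(2t)) has limit 0 as t -> infinity,
# yet its derivative does not tend to zero (it grows like 2*exp(t)).
f = lambda t: math.exp(-t) * math.sin(math.exp(2 * t))
fdot = lambda t: (-math.exp(-t) * math.sin(math.exp(2 * t))
                  + 2 * math.exp(t) * math.cos(math.exp(2 * t)))

ts = [i * 0.01 for i in range(500, 1000)]     # sample t in [5, 10]
print(max(abs(f(t)) for t in ts))             # tiny: f -> 0
print(max(abs(fdot(t)) for t in ts))          # large: fdot is unbounded
```

The uniform-continuity hypothesis in Barbalat's lemma is exactly what rules out such rapidly oscillating derivatives.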

**But why do we need Barbalat's lemma?**

Usually, it is difficult to analyze the *asymptotic* stability of time-varying systems because it is hard to find Lyapunov functions with a *negative definite* derivative.

**What's the big deal about it? We have invariant-set theorems when $\dot{V}$ is only negative semi-definite (NSD).**

Agreed! We know that in the case of autonomous (time-invariant) systems, if $\dot{V}$ is negative semi-definite (NSD), it is still possible to determine the asymptotic behaviour by invoking invariant-set theorems.

But this flexibility is not available for *time-varying* systems.

This is where Barbalat's lemma comes into the picture. It says:

If a scalar function $V(x, t)$ satisfies the following conditions:

(1) $V(x, t)$ is lower bounded

(2) $\dot{V}(x, t)$ is negative semi-definite (NSD)

(3) $\dot{V}(x, t)$ is uniformly continuous in time (satisfied if $\ddot{V}$ is finite)

then $\dot{V}(x, t) \to 0$ as $t \to \infty$.

**But how does it help in determining asymptotic stability?**

There is a nice example on page 127 of Slotine and Li's book *Applied Nonlinear Control*.

Consider the non-autonomous system

$$\dot{e} = -e + g \, w(t)$$

$$\dot{g} = -e \, w(t)$$

This is non-autonomous because the input $w$ is a function of time. Let us assume that the input $w(t)$ is bounded.

If we take $V = e^{2} + g^{2}$, then

$$\dot{V} = 2e(-e + g w) + 2g(-e w) = -2e^{2} \le 0$$

This says that $V(t) \le V(0)$ by the first two conditions, and hence $e$ and $g$ are bounded. But it does not say anything about the convergence of $e$ to zero. Moreover, we cannot apply the invariant-set theorem, because the dynamics are non-autonomous.

Now let's use Barbalat's lemma:

$$\ddot{V} = -4e\dot{e} = -4e\,(-e + g\,w(t))$$

This is bounded because $e$, $g$ and $w$ are bounded, so $\dot{V}$ is uniformly continuous. By Barbalat's lemma, $\dot{V} = -2e^{2} \to 0$ as $t \to \infty$, and hence $e \to 0$. If we are interested in error convergence, then our problem is solved.
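A short simulation sketch of this example (the bounded input $w(t) = \sin t$ and the initial conditions are my own choices, not from the book):

```python
import math

# Sketch of the example above with w(t) = sin(t) (a bounded input chosen here):
# e' = -e + g*w(t), g' = -e*w(t); V = e^2 + g^2 should not increase and e -> 0.
e, g = 1.0, 1.0
V0 = e * e + g * g

dt = 1e-3
t = 0.0
for _ in range(50000):              # explicit Euler up to t = 50
    w = math.sin(t)
    de = -e + g * w
    dg = -e * w
    e, g = e + dt * de, g + dt * dg
    t += dt

print(e * e + g * g < V0)           # True: V has decreased
print(abs(e) < 0.05)                # True: the error e has converged toward zero
```

Note that the simulation shows $e \to 0$, exactly the conclusion Barbalat's lemma delivers, even though $\dot{V}$ is only negative semi-definite.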