# Basis (linear algebra)

In linear algebra, a basis is a minimal set of vectors that, when combined linearly, can produce every vector in a given space. More precisely, a basis of a vector space is a set of linearly independent vectors that span the whole space.

## Definition

Let B be a subset of a vector space V. A linear combination is a finite sum of the form

$\displaystyle a_1 v_1 + \cdots + a_n v_n,$

where $v_1, \ldots, v_n$ are distinct vectors of B and $a_1, \ldots, a_n$ are scalars. The vectors in B are linearly independent if the only linear combinations adding up to the zero vector are those with $a_1 = \cdots = a_n = 0$. The set B is a generating set if every vector in V is a linear combination of vectors in B. Finally, B is a basis if it is a generating set of linearly independent vectors.
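
In finite dimensions both defining conditions can be checked computationally. As a minimal sketch (assuming NumPy; the three vectors are an illustrative choice), arrange the candidate vectors as the columns of a matrix and compare ranks:

```python
import numpy as np

# Candidate vectors in R^3, written as the columns of a matrix.
B = np.column_stack([[1, 0, 0],
                     [1, 1, 0],
                     [1, 1, 1]])

n = B.shape[0]                        # dimension of the ambient space
rank = np.linalg.matrix_rank(B)

# The columns are linearly independent iff the rank equals the number
# of columns, and they generate R^n iff the rank equals n.
print("linearly independent:", rank == B.shape[1])   # True
print("generating set:      ", rank == n)            # True
print("basis:               ", rank == B.shape[1] == n)
```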

## Properties

Again, B denotes a subset of a vector space V. Then, B is a basis if and only if any of the following equivalent conditions is met:

• B is a minimal generating set of V, i.e., it is a generating set but no proper subset of B is.
• B is a maximal set of linearly independent vectors, i.e., it is a linearly independent set but no other linearly independent set contains it as a proper subset.
• Every vector in V can be expressed as a linear combination of vectors in B in a unique way.

One can prove that every vector space has a basis; for spaces that are not finitely generated, the proof requires Zorn's lemma. All bases of a vector space have the same cardinality (number of elements), called the dimension of the vector space. The latter result is known as the dimension theorem, and its proof requires the ultrafilter lemma, a strictly weaker form of the axiom of choice.

## Examples

• The vectors $e_1, e_2, \ldots, e_n$ are linearly independent and generate $\mathbb{R}^n$. Therefore, they form a basis for $\mathbb{R}^n$, and the dimension of $\mathbb{R}^n$ is n. This basis is called the standard basis.
• Let V be the real vector space generated by the functions $e^t$ and $e^{2t}$. These two functions are linearly independent, so they form a basis for V.
• Let $\mathbb{R}[x]$ denote the vector space of real polynomials; then $(1, x, x^2, \ldots)$ is a basis of $\mathbb{R}[x]$. The dimension of $\mathbb{R}[x]$ is therefore equal to $\aleph_0$.

## Basis extension

Between any linearly independent set and any generating set there is a basis. More formally: if L is a linearly independent set in the vector space V and G is a generating set of V containing L, then there exists a basis of V that contains L and is contained in G. In particular (taking G = V), any linearly independent set L can be "extended" to form a basis of V. These extensions are not unique.
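
In finite dimensions this extension can be carried out by a greedy sweep: keep each vector of G that is not already a linear combination of the vectors kept so far. A minimal sketch of that idea, assuming NumPy (the helper name extend_to_basis and the example vectors are illustrative):

```python
import numpy as np

def extend_to_basis(L, G):
    """Greedily extend the linearly independent list L to a basis,
    drawing vectors from the generating list G (which contains L)."""
    basis = list(L)
    for v in G:
        candidate = basis + [v]
        # Keep v only if it raises the rank, i.e. is independent
        # of the vectors selected so far.
        if np.linalg.matrix_rank(np.column_stack(candidate)) == len(candidate):
            basis.append(v)
    return basis

# Extend L = {(1,1,0)} to a basis of R^3, using a generating set
# consisting of L together with the standard basis vectors.
L = [np.array([1, 1, 0])]
G = L + [np.array(e) for e in ([1, 0, 0], [0, 1, 0], [0, 0, 1])]
for v in extend_to_basis(L, G):
    print(v)   # (1,1,0), (1,0,0), (0,0,1)
```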

## Proving that a set is a basis

As an easy example, let us show that the vectors (1,1) and (-1,2) form a basis for $\mathbb{R}^2$. The following proof methods require increasing amounts of sophistication and decreasing amounts of effort.

### By brute force

We have to prove that these two vectors are linearly independent and that they generate $\mathbb{R}^2$.

Part I: To prove that they are linearly independent, suppose that there are numbers a,b such that:

$\displaystyle a(1,1)+b(-1,2)=(0,0).$

Then:

$\displaystyle (a-b,a+2b)=(0,0),$
so
$\displaystyle a-b=0$
and
$\displaystyle a+2b=0.$

Subtracting the first equation from the second, we obtain:

$\displaystyle 3b=0,$
so
$\displaystyle b=0.$

The first equation then gives:

$\displaystyle a=0.$

Part II: To prove that these two vectors generate $\mathbb{R}^2$, we let (a,b) be an arbitrary element of $\mathbb{R}^2$ and show that there exist numbers x,y such that:

$\displaystyle x(1,1)+y(-1,2)=(a,b).$

Then we have to solve the equations:

$\displaystyle x-y=a$
$\displaystyle x+2y=b.$

Subtracting the first equation from the second, we get:

$\displaystyle 3y=b-a,$
and then
$\displaystyle y=(b-a)/3,$
and finally
$\displaystyle x=y+a=((b-a)/3)+a.$
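
Both parts of this computation can also be checked numerically. A minimal sketch, assuming NumPy (the right-hand side (a,b) = (5,4) is an arbitrary illustrative choice):

```python
import numpy as np

# The candidate basis vectors as the columns of a matrix.
M = np.column_stack([(1, 1), (-1, 2)])

# Part I: independence. The only solution of M @ c = 0 is c = 0
# exactly when M has full column rank.
print(np.linalg.matrix_rank(M) == 2)         # True

# Part II: generation. Solve x*(1,1) + y*(-1,2) = (a,b)
# for an arbitrary right-hand side, here (a,b) = (5,4).
a, b = 5, 4
x, y = np.linalg.solve(M, np.array([a, b]))
print(np.isclose(y, (b - a) / 3))            # True, matching the formula above
print(np.isclose(x, y + a))                  # True
```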

### By the dimension theorem

Since (-1,2) is clearly not a multiple of (1,1) and since (1,1) is not the zero vector, these two vectors are linearly independent. Since the dimension of $\mathbb{R}^2$ is 2, the two vectors already form a basis of $\mathbb{R}^2$ without needing any extension.

### By the invertible matrix theorem

Simply compute the determinant

$\displaystyle \det\begin{bmatrix}1&-1\\1&2\end{bmatrix}=3\neq0.$

Since the above matrix has a nonzero determinant, its columns form a basis of $\mathbb{R}^2$. See: invertible matrix.
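
The same criterion is easy to check numerically; a minimal sketch, assuming NumPy:

```python
import numpy as np

M = np.array([[1, -1],
              [1,  2]])

# A square matrix is invertible iff its determinant is nonzero,
# in which case its columns form a basis of R^2.
d = np.linalg.det(M)
print(d)                        # 3.0 (up to floating-point error)
print(not np.isclose(d, 0))    # True: the columns are a basis
```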

## Ordered bases

A basis is just a set of vectors with no given ordering. For many purposes it is convenient to work with an ordered basis. For example, when working with a coordinate representation of a vector it is customary to speak of the "first" or "second" coordinate, which makes sense only if an ordering is specified for the basis. For finite-dimensional vector spaces one typically indexes a basis $\{v_i\}$ by the first n integers.

Suppose V is an n-dimensional vector space over a field F. A choice of an ordered basis for V is equivalent to a choice of a linear isomorphism from the coordinate space $F^n$, with its standard basis, to V. To see this, let

$\displaystyle A : F^n \to V$

be a linear isomorphism. Define an ordered basis $\{v_i\}$ for V by

$\displaystyle v_i = A(e_i) \quad \text{for } 1 \le i \le n,$

where $\{e_i\}$ is the standard basis for $F^n$. Conversely, given any ordered basis $\{v_i\}$ for V, define a linear map $A : F^n \to V$ by

$\displaystyle A(x) = \sum_{i=1}^n x_i v_i.$

It is not hard to check that A is an isomorphism. Thus ordered bases for V are in one-to-one correspondence with linear isomorphisms $F^n \to V$.
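
Concretely, in coordinates the isomorphism A is represented by the matrix whose columns are the basis vectors. A minimal sketch in $\mathbb{R}^2$, assuming NumPy and reusing the ordered basis from the previous section:

```python
import numpy as np

# Ordered basis (v1, v2) of R^2; A maps coordinates to vectors.
v1, v2 = np.array([1, 1]), np.array([-1, 2])
A = np.column_stack([v1, v2])        # A @ e_i = v_i

coords = np.array([2, 3])            # x = (x_1, x_2) in F^n
vector = A @ coords                  # A(x) = x_1*v1 + x_2*v2
print(vector)                        # [-1  8]

# A is invertible, so the coordinates are recovered uniquely: this is
# the one-to-one correspondence described above.
print(np.linalg.solve(A, vector))    # [2. 3.]
```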

## Related notions

The phrase Hamel basis is sometimes used to refer to a basis as defined above, in which the fact that all linear combinations are finite is crucial. A set B is a Hamel basis of a vector space V if it is linearly independent and every member of V is a linear combination of just finitely many members of B.

In Hilbert spaces and other Banach spaces, there is a need to work with linear combinations of infinitely many vectors. In an infinite-dimensional Hilbert space, a set of mutually orthogonal vectors can never span the whole space through its finite linear combinations. What is called an orthonormal basis is a set of mutually orthogonal unit vectors that "span" the space via sometimes-infinite linear combinations. Except in the finite-dimensional case, this concept is not purely algebraic and is distinct from a Hamel basis; it is also more generally useful. An orthonormal basis of an infinite-dimensional Hilbert space is therefore not a Hamel basis.

In topological vector spaces, quite generally, one may define infinite sums (infinite series) and express elements of the space as certain infinite linear combinations of other elements. To keep the two notions distinct, bases defined through finite linear combinations are called Hamel bases and those allowing infinite ones Schauder bases, when the context requires it. The corresponding dimensions are likewise known as the Hamel dimension and the Schauder dimension.

### Example

In the study of Fourier series, one learns that the functions {1} ∪ { sin(nx), cos(nx) : n = 1, 2, 3, ... } form, after rescaling to unit norm, an "orthonormal basis" of the set of all complex-valued functions that are square-integrable on the interval [0, 2π], i.e., functions f satisfying

$\displaystyle \int_0^{2\pi} \left|f(x)\right|^2\,dx<\infty.$

These functions are linearly independent, and every function that is square-integrable on that interval is an "infinite linear combination" of them. That means that

$\displaystyle \lim_{n\rightarrow\infty}\int_0^{2\pi}\left|\left(a_0+\sum_{k=1}^n a_k\cos(kx)+b_k\sin(kx)\right)-f(x)\right|^2\,dx=0$

for suitable coefficients $a_k$, $b_k$. But most square-integrable functions cannot be represented as finite linear combinations of these basis functions, which therefore do not form a Hamel basis. Every Hamel basis of this space is much bigger than this merely countably infinite set of functions. Hamel bases of spaces of this kind are of little if any interest; orthonormal bases of these spaces are important to Fourier analysis.
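
This mean-square convergence can be illustrated numerically. A minimal sketch, assuming NumPy (the test function f(x) = x and the grid size are arbitrary choices): the partial-sum error, i.e. the integral in the limit above, shrinks as more terms are kept.

```python
import numpy as np

N = 20000
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
dx = 2 * np.pi / N
f = x.copy()                 # a square-integrable test function: f(x) = x

def l2_error(n):
    """Squared L2 distance between f and its n-th Fourier partial sum."""
    a0 = np.sum(f) * dx / (2 * np.pi)        # constant (mean) term
    s = np.full_like(x, a0)
    for k in range(1, n + 1):
        # Fourier coefficients by numerical integration (Riemann sums).
        ak = np.sum(f * np.cos(k * x)) * dx / np.pi
        bk = np.sum(f * np.sin(k * x)) * dx / np.pi
        s += ak * np.cos(k * x) + bk * np.sin(k * x)
    return np.sum((s - f) ** 2) * dx         # the integral in the limit above

# The error decreases toward zero as n grows, as the limit asserts.
for n in (1, 5, 25):
    print(n, l2_error(n))
```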