# Tensor product

In mathematics, the tensor product, denoted by $\displaystyle \otimes$ , may be applied in different contexts to vectors, matrices, tensors, vector spaces, algebras and modules. In each case the significance of the symbol is the same: the most general bilinear operation.

A representative case is the Kronecker product of any two rectangular arrays, considered as matrices.

Example:

$\displaystyle \begin{bmatrix}b_1 \\ b_2 \\ b_3 \\ b_4\end{bmatrix} \otimes \begin{bmatrix}a_1 & a_2 & a_3\end{bmatrix} = \begin{bmatrix}a_1b_1 & a_2b_1 & a_3b_1 \\ a_1b_2 & a_2b_2 & a_3b_2 \\ a_1b_3 & a_2b_3 & a_3b_3 \\ a_1b_4 & a_2b_4 & a_3b_4\end{bmatrix}$

Resultant rank = 2, resultant dimension = 4×3 = 12.

Here rank denotes the number of requisite indices, while dimension counts the number of degrees of freedom in the resulting array.

## Tensor product of two tensors

There is a general formula for the product of two (or more) tensors: when a pair of tensors is juxtaposed (placed side by side), they combine by mere aggregation to form a new tensor, which is called the tensor product of the pair. The number of independent components multiplies. In components,

$\displaystyle (V \otimes U)_{i_1 i_2 \ldots i_n \, j_1 j_2 \ldots j_m} = V_{i_1 i_2 \ldots i_n} U_{j_1 j_2 \ldots j_m}$ .

We are assuming here orthogonal tensors, with no distinction of covariant and contravariant indices, for simplicity.

The parameters introduced above work out like this:

$\displaystyle \mathrm{rank}( U \otimes V )=\mathrm{rank}(U)+\mathrm{rank}(V)$
$\displaystyle \mathrm{dim}( U \otimes V )=\mathrm{dim}(U) \mathrm{dim}(V)$

### Example

Let U be a tensor of type (1,1) with components $\displaystyle U^\alpha {}_\beta$ , and let V be a tensor of type (1,0) with components $\displaystyle V^\gamma$ . Then

$\displaystyle U^\alpha {}_\beta V^\gamma = (U \otimes V)^\alpha {}_\beta {}^\gamma$

and

$\displaystyle V^\mu U^\nu {}_\sigma = (V \otimes U)^{\mu \nu} {}_\sigma$ .

The tensor product inherits all the indices of its factors.

## Kronecker product of two matrices

Main article: Kronecker product.

With matrices this operation is usually called the Kronecker product, a term used to make clear that the result has a particular block structure imposed upon it, in which each element of the first matrix is replaced by the second matrix, scaled by that element. For matrices $\displaystyle U$ and $\displaystyle V$ this is:

$\displaystyle U \otimes V = \begin{bmatrix} u_{11}V & u_{12}V & \cdots \\ u_{21}V & u_{22}V \\ \vdots & & \ddots \end{bmatrix} = \begin{bmatrix} u_{11}v_{11} & u_{11}v_{12} & \cdots & u_{12}v_{11} & u_{12}v_{12} & \cdots \\ u_{11}v_{21} & u_{11}v_{22} & & u_{12}v_{21} & u_{12}v_{22} \\ \vdots & & \ddots \\ u_{21}v_{11} & u_{21}v_{12} \\ u_{21}v_{21} & u_{21}v_{22} \\ \vdots \end{bmatrix}$ .

## Tensor product of multilinear maps

Given multilinear maps $\displaystyle f(x_1,...x_k)$ and $\displaystyle g(x_1,... x_m)$ their tensor product is the multilinear function

$\displaystyle (f \otimes g) (x_1,...x_{k+m})=f(x_1,...x_k)g(x_{k+1},... x_{k+m})$

## Tensor product of vector spaces

The tensor product $\displaystyle V \otimes W$ of two vector spaces V and W has a formal definition by the method of generators and relations. The equivalence class of $\displaystyle (v,w)$ under these relations (given below) is called a tensor and is denoted by $\displaystyle v \otimes w$ . By construction, one can prove several identities between tensors and form an algebra of tensors.

To construct $\displaystyle V \otimes W$ , take the vector space generated by $\displaystyle V \times W$ and apply (factor out the subspace generated by) the following multilinear relations:

• $\displaystyle (v_1+v_2)\otimes w=v_1\otimes w+v_2\otimes w$
• $\displaystyle v\otimes (w_1+w_2)=v\otimes w_1+v\otimes w_2$
• $\displaystyle cv\otimes w=v\otimes cw=c(v\otimes w)$

where $\displaystyle v,v_i,w,w_i$ are vectors from the appropriate spaces, and $\displaystyle c$ is from the underlying field.

We can then derive the identity

$\displaystyle 0v\otimes w=v\otimes 0w=0(v\otimes w)=0$ ,

the zero in $\displaystyle V \otimes W$ .

The resulting tensor product $\displaystyle V \otimes W$ is itself a vector space, which can be verified by directly checking the vector space axioms. Given bases $\displaystyle \{v_i\}$ and $\displaystyle \{w_j\}$ for V and W respectively, the tensors of the form $\displaystyle v_i \otimes w_j$ form a basis for $\displaystyle V \otimes W$ . The dimension of the tensor product is therefore the product of the dimensions of the original spaces; for instance $\displaystyle \mathbb{R}^m \otimes \mathbb{R}^n$ has dimension $\displaystyle mn$ .
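As a small worked instance of the basis statement, take $V = \mathbb{R}^2$ with basis $\{e_1, e_2\}$ and $W = \mathbb{R}^3$ with basis $\{f_1, f_2, f_3\}$; expanding an elementary tensor by the relations above expresses it in the six basis tensors $e_i \otimes f_j$:

```latex
(2e_1 + e_2) \otimes (f_1 - f_3)
  = 2\,e_1 \otimes f_1 - 2\,e_1 \otimes f_3
  + e_2 \otimes f_1 - e_2 \otimes f_3,
\qquad
\dim(\mathbb{R}^2 \otimes \mathbb{R}^3) = 2 \cdot 3 = 6.
```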

## Universal property of tensor product

The space of all bilinear maps from $\displaystyle V\times W$ to $\displaystyle \mathbb R$ is naturally isomorphic to the space of all linear maps from $\displaystyle V \otimes W$ to $\displaystyle \mathbb R$ . This is built into the construction; $\displaystyle V\otimes W$ has only the relations that are necessary to ensure that a homomorphism from $\displaystyle V\otimes W$ to $\displaystyle \mathbb R$ will be bilinear.

The tensor product in fact satisfies the universal property of being a fibered coproduct.

## Tensor product of Hilbert spaces

The tensor product of two Hilbert spaces is another Hilbert space, which is defined as described below.

### Definition

Let H1 and H2 be two Hilbert spaces with inner products 〈·,·〉1 and 〈·,·〉2, respectively. Construct the tensor product of H1 and H2 as vector spaces as explained above. We can turn this vector space tensor product into an inner product space by defining

$\displaystyle \langle\phi_1\otimes\phi_2,\psi_1\otimes\psi_2\rangle = \langle\phi_1,\psi_1\rangle_1 \, \langle\phi_2,\psi_2\rangle_2 \quad \mbox{for all } \phi_1,\psi_1 \in H_1 \mbox{ and } \phi_2,\psi_2 \in H_2$

and extending by linearity. Finally, take the completion under this inner product. The result is the tensor product of H1 and H2 as Hilbert spaces.

### Properties

If H1 and H2 have orthonormal bases {φk} and {ψl}, respectively, then {φk ⊗ ψl} is an orthonormal basis for H1 ⊗ H2.

### Examples and applications

The following examples show how tensor products arise naturally.

Given two measure spaces X and Y, with measures μ and ν respectively, one may look at L2(X × Y), the space of functions on X × Y that are square integrable with respect to the product measure μ × ν. If f is a square integrable function on X, and g is a square integrable function on Y, then we can define a function h on X × Y by h(x,y) = f(x)g(y). The definition of the product measure ensures that all functions of this form are square integrable, so this defines a bilinear mapping L2(X) × L2(Y) → L2(X × Y). Linear combinations of functions of the form f(x)g(y) are also in L2(X × Y). It turns out that the set of linear combinations is in fact dense in L2(X × Y), if L2(X) and L2(Y) are separable. This shows that L2(X) ⊗ L2(Y) is isomorphic to L2(X × Y), and it also explains why we need to take the completion in the construction of the Hilbert space tensor product.

Similarly, we can show that L2(X; H), denoting the space of square integrable functions X → H, is isomorphic to L2(X) ⊗ H if this space is separable. The isomorphism maps f(x) ⊗ φ ∈ L2(X) ⊗ H to f(x)φ ∈ L2(X; H). We can combine this with the previous example and conclude that L2(X) ⊗ L2(Y) and L2(X × Y) are both isomorphic to L2(X; L2(Y)).

Tensor products of Hilbert spaces arise often in quantum mechanics. If some particle is described by the Hilbert space H1, and another particle is described by H2, then the system consisting of both particles is described by the tensor product of H1 and H2. For example, the state space of a quantum harmonic oscillator is L2(R), so the state space of two oscillators is L2(R) ⊗ L2(R), which is isomorphic to L2(R2). Therefore, the two-particle system is described by wave functions of the form φ(x1, x2). A more intricate example is provided by the Fock spaces, which describe a variable number of particles.

## Relation with the dual space

Note that the space $\displaystyle (V \otimes W)^\star$ (the dual space of $\displaystyle V \otimes W$ containing all linear functionals on that space) corresponds naturally to the space of all bilinear functionals on $\displaystyle V \times W$ . In other words, every bilinear functional is a functional on the tensor product, and vice versa. There is a natural isomorphism between $\displaystyle V^\star \otimes W^\star$ and $\displaystyle (V \otimes W)^\star$ . So, the tensors of the linear functionals are bilinear functionals. This gives us a new way to look at the space of bilinear functionals, as a tensor product itself.

## Types of tensors, e.g., alternating

Linear subspaces of the bilinear operators (or in general, multilinear operators) determine natural quotient spaces of the tensor space, which are frequently useful. See wedge product for the first major example. Another would be the treatment of algebraic forms as symmetric tensors.

## Tensor product for computer programmers

### Array programming languages

Array programming languages may have this pattern built in. For example, in APL the tensor product is expressed as $\displaystyle \circ . \times$ (for example $\displaystyle A \circ . \times B$ or $\displaystyle A \circ . \times B \circ . \times C$ ). In J the tensor product is the dyadic form of */ (for example a */ b or a */ b */ c).

Note that J's treatment also allows the representation of some tensor fields (as a and b may be functions instead of constants -- the result is then a derived function, and if a and b are differentiable, then a*/b is differentiable).

However, these kinds of notation are not universally present in array languages. Other array languages may require explicit treatment of indices (for example, Matlab), and/or may not support higher-order functions such as the Jacobian derivative (for example, Fortran/APL).

### C language

If a, b, and c are rank-one tensors (i.e. one-dimensional arrays), with indices i, j, k, respectively, then the tensor product of them is a rank-three tensor (i.e. three-dimensional array):

    for (int i = 0; i < i_dim; i++)
        for (int j = 0; j < j_dim; j++)
            for (int k = 0; k < k_dim; k++)
                result[i][j][k] = a[i] * b[j] * c[k];


If a is a rank-two tensor and b is a rank-one tensor, with indices i & j, and k, respectively, then the tensor product of them is a rank-three tensor:

    for (int i = 0; i < i_dim; i++)
        for (int j = 0; j < j_dim; j++)
            for (int k = 0; k < k_dim; k++)
                result[i][j][k] = a[i][j] * b[k];


### SQL

If a is a rank-two tensor and b is a rank-one tensor, with indices i & j, and k, respectively, then the tensor product of them is a rank-three tensor:

    select
        a.i as i,
        a.j as j,
        b.k as k,
        a.value * b.value as value
    from
        a
        cross join b;

Every row of a is paired with every row of b (a cross join, since no join condition is needed), mirroring how the tensor product pairs every component of one factor with every component of the other.