Linear Algebra#

Although this course does not assume you have a strong physics background, a solid grounding in linear algebra is essential. These notes cover the important material you will be expected to understand deeply before participating in the course. If anything is unfamiliar, please refresh your understanding prior to the course, or ask one of the instructors for a review.

Please see Linear Algebra for additional resources, including an excellent set of videos.

Terminology#

We will freely use the following terminology throughout these notes and in the course. You will be expected to understand what these terms mean and their implications. For example, you should know that if the linear operator \(\op{H}\) is Hermitian, then its eigenvalues are real, and its eigenvectors can be chosen to form a complete, orthonormal basis. In equations:

\[\begin{gather*} \op{H}\ket{n} = \ket{n}\lambda_n, \qquad \lambda_n \in \mathbb{R}, \qquad \braket{m|n} = \delta_{mn}. \end{gather*}\]
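As a quick numerical check (a minimal sketch with NumPy; the \(2\times 2\) matrix is just an example), numpy.linalg.eigh is specialized for hermitian matrices and returns real eigenvalues together with orthonormal eigenvectors:

import numpy as np

# An example hermitian matrix: H equals its conjugate transpose.
H = np.array([[1.0, 2 - 1j],
              [2 + 1j, 3.0]])
assert np.allclose(H, H.conj().T)

# eigh returns real eigenvalues and orthonormal eigenvectors (the columns of V).
lams, V = np.linalg.eigh(H)
assert np.allclose(V.conj().T @ V, np.eye(2))   # <m|n> = δ_mn
assert np.allclose(H @ V, V @ np.diag(lams))    # H|n> = |n> λ_n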

Some of these terms will be described here, but you may need to refer elsewhere to complete your understanding. We will restrict ourselves to finite-dimensional vector spaces in this course, so some of the terminology will be redundant (e.g. for finite-dimensional spaces, Hermitian and self-adjoint mean the same thing).

  • Set.

  • Vector, column vector.

  • Co-vector, row vector.

  • Dyad.

  • Vector space (complex).

  • Subspace.

  • Rank.

  • Linear and anti-linear operator.

  • Inner product \(\braket{a|b}\).

  • Basis (pl. bases).

  • Components.

  • Complete.

  • Orthogonal.

  • Orthonormal.

  • Transpose \(\mat{A}^T\) and Hermitian conjugate \(\mat{A}^\dagger = (\mat{A}^{T})^*\).

  • Symmetric and anti-symmetric, Hermitian, self-adjoint.

  • Orthogonal matrix, unitary matrix.

  • Singular value decomposition (SVD): \(\mat{A} = \mat{U}\mat{D}\mat{V}^\dagger\).

  • QR decomposition \(\mat{A} = \mat{Q}\mat{R}\). I.e. Gram-Schmidt orthogonalization.

  • LU (or LR) decomposition \(\mat{A} = \mat{L}\mat{U}\). I.e. Gaussian elimination.

  • Trace and determinant.

  • Eigenvectors and eigenvalues.

  • Tensor product.

  • Cartesian product.

  • Partial trace.

Notation#

We will use the following “bra-ket” or “braket” notation due to Dirac, which is ubiquitous and very useful in quantum mechanics.

  • \(\ket{\psi}\): An abstract vector or simply, a vector. This represents a physical object: think of a physical arrow in space, in contradistinction to a “list of numbers”. Although one can treat a “list of numbers” as an abstract column vector, this is better thought of as the components of an abstract vector in a particular basis. This distinction causes quite a bit of confusion, and is the focus of the section Vectors and Components below.

  • \(\bra{\psi}\): An abstract co-vector or simply, a co-vector. This is a linear operator that takes vectors into numbers, and is the Hermitian conjugate of the vector \(\ket{\psi}\):

    \[\begin{gather*} \bra{\psi} = \ket{\psi}^\dagger. \end{gather*}\]

    Although one can think of \(\bra{\psi}\) as a row vector, it is better to think of \(\bra{f}\) as a linear operator that takes vectors into numbers.

  • \(\braket{f|g}\): Inner product. Much of the power of Dirac’s notation comes from the expression of the inner product as an operator product of bras and kets. This inner product can be thought of as the dot product of a row vector and column vector.

  • \(\norm{\ket{\psi}} = \sqrt{\braket{\psi|\psi}}\): Norm or \(L_2\) norm. We will only make use of the \(L_2\) norm in this course:

    \[\begin{gather*} \norm{\ket{\psi}} = \sqrt{\sum_{n} \abs{\psi_n}^2}. \end{gather*}\]
  • \(\{\ket{n}\}\): Orthonormal basis. More carefully, we should specify the number of basis vectors such as:

    \[\begin{gather*} \{\ket{n} \;|\; n=0, 1, \dots, N-1\} = \{\ket{0}, \ket{1}, \dots, \ket{N-1}\}. \end{gather*}\]

    We will assume that all bases have been properly orthogonalized and normalized so that

    \[\begin{gather*} \braket{m|n} = \delta_{nm} = \begin{cases} 1 & m = n,\\ 0 & \text{otherwise}, \end{cases} \end{gather*}\]

    where \(\delta_{mn}\) is the Kronecker delta. This assumption greatly simplifies the notation for things like the components of a vector (see below). When working with concrete vectors, you may need to first orthonormalize your bases using the Gram-Schmidt procedure, or (equivalently) the QR decomposition.

  • \(\psi_n = \braket{n|\psi}\): The components of the vector \(\ket{\psi}\) in the orthonormal basis \(\{\ket{n}\}\). Here \(\bigl\{\psi_n \; |\; n\in\{0, 1, \dots, N-1\}\bigr\}\) is the “list of numbers” version of a vector you might be used to from elementary linear algebra courses.

Vectors and Components#

Students sometimes have difficulty working with quantum mechanics because of confusion between a vector and the components of a vector. To be clear, when we write a vector \(\vect{v} \equiv \ket{v}\) as:

\[\begin{gather*} \ket{v} \= \begin{pmatrix} 1\\ -2\\ 3 \end{pmatrix} \end{gather*}\]

we really have in mind a standard orthonormal basis \(\{\uvect{i}=\ket{i}, \uvect{j}=\ket{j}, \uvect{k}=\ket{k}\}\)

\[\begin{gather*} \ket{i} \= \begin{pmatrix} 1\\ 0\\ 0 \end{pmatrix}, \qquad \ket{j} \= \begin{pmatrix} 0\\ 1\\ 0 \end{pmatrix}, \qquad \ket{k} \= \begin{pmatrix} 0\\ 0\\ 1 \end{pmatrix}. \end{gather*}\]

such that

\[\begin{gather*} \ket{v} = 1\ket{i} - 2\ket{j} + 3\ket{k}. \end{gather*}\]

Note

In my work, I try to write this as

\[\begin{gather*} \ket{v} = \ket{i}1 - \ket{j}2 + \ket{k}3. \end{gather*}\]

Although not common, this helps me keep track of things. The reason is that I think of \(\ket{i}\) etc. as column vectors – matrices with shape \(N\times 1\) – and I think of the coefficients (numbers) as \(1\times 1\) matrices. Thus, I can think of \(\ket{i}1\) as matrix multiplication between a \(N\times 1\) matrix and a \(1 \times 1\) matrix, which makes sense in terms of the dimensions, and gives an \(N \times 1\) matrix as a result.

Similarly, I think of the long bar on the left side of the ket as corresponding to the first index of the matrix, which is “long” in the sense that it takes \(N\) values, while the point on the right corresponds to the second index, which takes only \(1\) value.

Thus, when I look at \(\braket{f|g}\), the points on both sides tell me that this is a \(1\times 1\) matrix, or a number, while the dyad \(\ket{f}\bra{g}\) has long bars on both sides, and so is clearly an \(N \times N\) matrix.

Following this idea, I write the eigenvalue problem as:

\[\begin{gather*} \underbrace{\op{H}}_{N\times N}\;\underbrace{\ket{\psi}}_{N\times 1} = \underbrace{\ket{\psi}}_{N\times 1}\;\underbrace{\lambda}_{1\times 1}. \end{gather*}\]

I think of the matrix \(\mat{H}\) moving “through” the ket from left to right, and turning into the eigenvalue \(\lambda\).

Thus, when I diagonalize a matrix \(\mat{H} = \mat{U}\mat{D}\mat{U}^\dagger\), I remember that this is really just a collection of eigenvalue problems where the columns of \(\mat{U}\) are the eigenvectors:

\[\begin{gather*} \mat{H}\mat{U} = \mat{U}\mat{D},\\ \mat{H} \overbrace{\begin{pmatrix} &\\ \ket{u_0} & \ket{u_1} & \dots\\ & \end{pmatrix}}^{\mat{U}} = \begin{pmatrix} &\\ \mat{H}\ket{u_0} & \mat{H}\ket{u_1} & \dots\\ & \end{pmatrix} = \begin{pmatrix} &\\ \ket{u_0}\lambda_0 & \ket{u_1}\lambda_1 & \dots\\ & \end{pmatrix}\\ = \underbrace{\begin{pmatrix} &\\ \ket{u_0} & \ket{u_1} & \dots\\ & \end{pmatrix}}_{\mat{U}} \underbrace{\begin{pmatrix} \lambda_0\\ & \lambda_1\\ & & \ddots \end{pmatrix}}_{\mat{D}} \end{gather*}\]

Mathematicians will insist that vectors \(\ket{v}\) be treated as abstract objects, and that the “list of numbers” only makes sense after one has specified a basis. Thus, expressions like the one above should be regarded with suspicion (hence the \(\=\) notation).

It is worthwhile developing a nose for such suspect relations, and keeping this in mind can help clarify what is going on:

\[\begin{gather*} \ket{v} \= \begin{pmatrix} 1\\ -2\\ 3 \end{pmatrix} = \begin{pmatrix} \braket{i|v}\\ \braket{j|v}\\ \braket{k|v} \end{pmatrix}. \end{gather*}\]

For the purposes of this course, however, we shall always assume the existence of a standard orthonormal basis, for example, with the \(z\)-axis pointing “up”, the \(x\)-axis pointing “to the right”, and the \(y\)-axis pointing “into the page” (or “into the board” in the classroom).

Thus, we shall freely write

\[\begin{gather*} \ket{v} = \begin{pmatrix} 1\\ -2\\ 3 \end{pmatrix} \end{gather*}\]

without qualification \(\=\), with the knowledge that this description depends on our implicit choice of the standard orthonormal basis.
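For example (a minimal sketch with NumPy; the rotated basis is just an illustration), the same abstract vector has different components in different orthonormal bases:

import numpy as np

v = np.array([1.0, -2.0, 3.0])      # components in the standard basis {i, j, k}

# A different orthonormal basis: i and j rotated by 90° about the z-axis.
e0 = np.array([0.0, 1.0, 0.0])
e1 = np.array([-1.0, 0.0, 0.0])
e2 = np.array([0.0, 0.0, 1.0])
B = np.array([e0, e1, e2]).T        # basis vectors as the columns of B

components = B.T @ v                # v_n = <e_n|v>
print(components)                   # [-2. -1.  3.]
assert np.allclose(B @ components, v)   # reconstruct |v> = Σ_n |e_n> v_n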

Special Matrices#

In quantum mechanics, two general classes of matrices play a central role: hermitian matrices and unitary matrices. These are the complex generalizations of symmetric and orthogonal matrices respectively.

Hermitian Matrices#

Hermitian matrices are square matrices that are the same as their hermitian conjugate:

\[\begin{gather*} \mat{H} = \mat{H}^\dagger = (\mat{H}^{T})^*. \end{gather*}\]

E.g., for a 3×3 matrix:

\[\begin{gather*} \mat{H} = \begin{pmatrix} a & b & c\\ d & e & f\\ g & h & k \end{pmatrix}, \qquad \mat{H}^\dagger = \begin{pmatrix} a^* & d^* & g^*\\ b^* & e^* & h^*\\ c^* & f^* & k^* \end{pmatrix}, \end{gather*}\]

thus, we can write the most general 3×3 hermitian matrix as:

\[\begin{gather*} \mat{H} = \begin{pmatrix} a & b+c\I & d+e\I\\ b-c\I & f & g+h\I\\ d-e\I & g-h\I & k \end{pmatrix}, \end{gather*}\]

where \(a, b, \dots, k\) are real. In particular, note that the diagonal entries are real.

For quantum mechanics, there are two key properties of hermitian matrices:

  1. Their eigenvalues are real: \(\lambda_i \in \mathbb{R}\).

  2. Their eigenvectors can be chosen to form a complete orthonormal basis.

In quantum mechanics, physical quantities are associated with matrices as follows:

Physical systems are represented by normalized state vectors \(\ket{\psi}\) with \(\braket{\psi|\psi} = 1\). Physical quantities are represented by hermitian matrices, e.g. \(\mat{H} = \mat{H}^\dagger\). The result of measuring \(\mat{H}\) for a state \(\ket{\psi}\) is one of the eigenvalues \(\lambda_n\) of \(\mat{H}\), obtained with probability \(p_n = \abs{\braket{n|\psi}}^2\). After obtaining a measurement of \(\lambda_n\), the physical system will be left in the corresponding eigenstate \(\ket{n}\).

By restricting physical quantities to be associated with hermitian matrices, we ensure that measured values are real, and that all events are properly accounted for with unit probability \(1 = p_1 + p_2 + \cdots\) of measuring at least one of the outcomes. (Note: there are extensions of quantum mechanics that consider non-hermitian matrices, and these have become quite popular recently; they could form the basis for an interesting project if someone is interested. They generally represent “open” systems where particles can leave, so the total probability of measuring one of the outcomes may be less than 1.)
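Here is a small numerical illustration of these rules (a sketch with NumPy; the observable and state are random examples): the probabilities \(p_n = \abs{\braket{n|\psi}}^2\) built from the eigenvectors of a hermitian matrix sum to one for a normalized state.

import numpy as np

rng = np.random.default_rng(seed=0)

# A random hermitian "observable" and a random normalized state.
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (M + M.conj().T) / 2
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)

lams, V = np.linalg.eigh(H)        # possible outcomes λ_n and eigenvectors |n>
p = np.abs(V.conj().T @ psi)**2    # p_n = |<n|ψ>|²
assert np.isclose(p.sum(), 1)      # the outcomes exhaust all possibilities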

Positive Operator#

A positive operator (or positive semi-definite matrix) is an operator (matrix) \(\mat{A}\) for which

\[\begin{gather*} \braket{\psi|\mat{A}|\psi} \geq 0 \end{gather*}\]

is real and non-negative for all vectors \(\ket{\psi}\) in the space. If the vector space is complex, then this condition implies that \(\mat{A} = \mat{A}^\dagger\) is hermitian.
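A quick numerical check (a sketch with NumPy; forming \(\mat{B}^\dagger\mat{B}\) is just one convenient way to construct a positive matrix): positivity can be verified by checking that all eigenvalues are non-negative.

import numpy as np

rng = np.random.default_rng(seed=1)
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = B.conj().T @ B                   # B†B is always positive (semi-definite)

assert np.allclose(A, A.conj().T)                # hermitian on a complex space
assert np.all(np.linalg.eigvalsh(A) >= -1e-12)   # eigenvalues non-negative (to roundoff)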

Unitary Matrices#

Unitary matrices are square matrices whose hermitian conjugate is their inverse:

\[\begin{gather*} \mat{U}^\dagger = \mat{U}^{-1}, \qquad \mat{U}^\dagger \mat{U} = \mat{1}. \end{gather*}\]

The most important properties are:

  1. The columns of a unitary matrix form a complete orthonormal basis.

  2. The eigenvalues of a unitary matrix are phases \(e^{\I \theta}\).

  3. The eigenvectors of a unitary matrix can be chosen to form a complete orthonormal basis.

  4. \(\mat{U}\mat{U}^\dagger = \mat{1}\).

Unitary matrices play a central role in quantum mechanics because they preserve the norm of a state \(\ket{\psi}\), which is associated with conservation of probability. I.e. if \(\ket{\phi} = \mat{U}\ket{\psi}\) where \(\mat{U}\) is unitary, then

\[\begin{gather*} \braket{\phi|\phi} = \braket{\psi|\mat{U}^\dagger\mat{U}\psi} = \braket{\psi|\mat{1}\psi} = \braket{\psi|\psi}. \end{gather*}\]

The propagator in quantum mechanics – the linear operator that takes one state into another state in the future – is a unitary matrix \(\mat{U}\). In general, we will represent the action of a quantum computer (without measurement) as a large unitary matrix \(\mat{U}\).
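Here is a short numerical check of these properties (a sketch with NumPy; taking the Q factor of the QR decomposition of a random complex matrix is just one convenient way to produce a unitary matrix):

import numpy as np

rng = np.random.default_rng(seed=2)

# A random unitary matrix from the QR decomposition of a random complex matrix.
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, _ = np.linalg.qr(M)
assert np.allclose(U.conj().T @ U, np.eye(3))
assert np.allclose(U @ U.conj().T, np.eye(3))

# Unitaries preserve the norm: |φ> = U|ψ> has <φ|φ> = <ψ|ψ>.
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
assert np.isclose(np.linalg.norm(U @ psi), np.linalg.norm(psi))

# The eigenvalues of a unitary matrix are phases e^{iθ}.
assert np.allclose(np.abs(np.linalg.eigvals(U)), 1)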

Projection Matrices#

Projection matrices also play an important role, providing a way of decomposing vectors into their components. Their key property is that they are idempotent:

\[\begin{gather*} \mat{P}^2 = \mat{P}. \end{gather*}\]

I.e. once you project a vector into a subspace, projecting it again does not change anything.

For example, suppose

\[\begin{gather*} \newcommand{\proj}[2][]{\mat{P}^{#1}_{\ket{#2}}} \ket{\psi} = \ket{a}a + \ket{b}b \end{gather*}\]

where \(\ket{a}\) and \(\ket{b}\) are linearly independent unit vectors. Projection matrices \(\proj{a}\) and \(\proj{b}\) that act as

\[\begin{gather*} \proj{a}\ket{\psi} = \ket{a}a, \qquad \proj{b}\ket{\psi} = \ket{b}b, \end{gather*}\]

are useful:

\[\begin{gather*} \proj{a} = \frac{\ket{a}\!\bra{a}\proj[\perp]{b}}{\braket{a|\proj[\perp]{b}|a}}, \qquad \proj{b} = \frac{\ket{b}\!\bra{b}\proj[\perp]{a}}{\braket{b|\proj[\perp]{a}|b}}, \end{gather*}\]

where \(\proj[\perp]{a}\) is the projector into the subspace orthogonal to \(\ket{a}\) etc.

We usually work with orthogonal vectors, \(\braket{a|b} = 0\), or orthonormal vectors, which in addition satisfy \(\braket{a|a} = \braket{b|b} = 1\); this greatly simplifies the formula:

\[\begin{gather*} \proj{a} = \frac{\ket{a}\!\bra{a}}{\braket{a|a}} = \ket{a}\!\bra{a} \text{ if } \braket{a|a} = 1. \end{gather*}\]

Note

If the vectors are not orthogonal, \(\braket{a|b} \neq 0\), then one must be careful because several properties enjoyed by orthogonal projectors do not hold:

  1. If the vectors are not orthogonal, then the projectors depend on the full set of vectors. I.e. our formula for \(\proj{a}\) depends on \(\ket{b}\). If the vectors are orthogonal, then this dependence vanishes.

  2. If the vectors are not orthogonal, then the projectors are not hermitian, and therefore not positive operators on complex vector spaces.

For these reasons, it is usually preferable to change your metric so that your vectors become orthonormal.

To do.

Demonstrate how these projectors can be simply constructed using a different metric.

import numpy as np
from qiskit.visualization import array_to_latex

def normalize(psi):
    return np.divide(psi, np.linalg.norm(psi))

a = np.array([1, 0])
b = np.array([1, 1])   # not orthogonal to a

psi = 1.2*a - 3.4*b
bhat = normalize(b)

I = np.eye(2)

# Projectors onto the subspaces orthogonal to a and b.
P_perp_a = I - np.outer(a, a)
P_perp_b = I - np.outer(bhat, bhat)

# Non-orthogonal projectors from the formula above (the overall
# normalization of a and b cancels between numerator and denominator).
P_a = (np.outer(a, a) @ P_perp_b) / (a @ P_perp_b @ a)
P_b = (np.outer(b, b) @ P_perp_a) / (b @ P_perp_a @ b)

assert np.allclose(P_a, P_a @ P_a)   # idempotent
assert np.allclose(P_b, P_b @ P_b)
assert np.allclose(P_a @ psi, 1.2*a)
assert np.allclose(P_b @ psi, -3.4*b)
display(array_to_latex(P_a, prefix=r"\proj{a} = "))
display(array_to_latex(P_b, prefix=r"\proj{b} = "))
\[\begin{split} \proj{a} = \begin{bmatrix} 1 & -1 \\ 0 & 0 \\ \end{bmatrix} \end{split}\]
\[\begin{split} \proj{b} = \begin{bmatrix} 0 & 1 \\ 0 & 1 \\ \end{bmatrix} \end{split}\]

Matrix Exponential#

Hermitian matrices and unitary matrices are closely related through the matrix exponential and matrix logarithm. I.e. if \(\mat{H} = \mat{H}^{\dagger}\) is a hermitian matrix, then the matrix exponential of \(\I \mat{H}\) is unitary:

\[\begin{gather*} \mat{U} = e^{\I\mat{H}}. \end{gather*}\]

Stated another way, the matrix exponential of an anti-hermitian matrix is unitary.

Note

In quantum mechanics, a special hermitian matrix \(\mat{H}\) called the Hamiltonian represents the energy of the system and defines the Schrödinger equation

\[\begin{gather*} \I\hbar \pdiff{}{t}\ket{\psi(t)} = \mat{H}\ket{\psi(t)}. \end{gather*}\]

If the Hamiltonian \(\mat{H}\) does not depend on time, then the matrix exponential of this with an additional numerical factor of \(t/\I \hbar\) gives the explicit solution:

\[\begin{gather*} \mat{U}_t = e^{\mat{H}t/\I\hbar}, \qquad \ket{\psi(t)} = \mat{U}_t\ket{\psi(0)}. \end{gather*}\]

The unitary matrix \(\mat{U}_t\) is called the propagator and encodes the complete solution to the Schrödinger equation when the Hamiltonian \(\mat{H}\) does not depend on time.

In quantum computing, we generally work directly with the unitary propagator \(\mat{U}_t\), which represents the action of our computer. The problem for physicists is to find the appropriate physical system to get a Hamiltonian \(\mat{H}\) that exponentiates to the appropriate \(\mat{U}_t\).

Although computing matrix exponentials efficiently is somewhat complicated in general, the matrix exponential of a hermitian matrix is quite straightforward. The idea works for any analytic function \(f(\mat{H})\) whose Taylor series converges:

\[\begin{gather*} f(x) = f_0 + f_1x + f_2\frac{x^2}{2} + \cdots + f_n\frac{x^n}{n!} + \cdots\\ f(\mat{H}) = f_0\mat{1} + f_1\mat{H}+ f_2\frac{\mat{H}^2}{2} + \cdots + f_n\frac{\mat{H}^n}{n!} + \cdots. \end{gather*}\]

This can in principle be applied to any matrix, but we can get a simple form for hermitian matrices. Using the fact that the eigenvectors of the hermitian matrix, \(\mat{H}\ket{n} = \ket{n}\lambda_n\), can be chosen to form a complete orthonormal basis \(\{\ket{n}\}\), we can write these as the columns of a unitary matrix \(\mat{V}\):

\[\begin{gather*} \mat{V} = \begin{pmatrix} \ket{1} & \ket{2} & \cdots & \ket{N} \end{pmatrix}. \end{gather*}\]

Using this, we can write:

\[\begin{gather*} \mat{H}\mat{V} = \mat{V}\mat{D}, \qquad \mat{H} = \mat{V}\mat{D}\mat{V}^\dagger, \end{gather*}\]

where \(\mat{D}\) is diagonal

\[\begin{gather*} \mat{D} = \begin{pmatrix} \lambda_1 \\ & \lambda_2 \\ & & \cdots \\ & & & \lambda_N \end{pmatrix}. \end{gather*}\]

This is called diagonalization or we say that we are diagonalizing \(\mat{H}\). We can thus express any of the powers \(\mat{H}^n\) as:

\[\begin{gather*} \mat{H}^n = (\mat{V}\mat{D}\mat{V}^\dagger) (\mat{V}\mat{D}\mat{V}^\dagger)\cdots (\mat{V}\mat{D}\mat{V}^\dagger) = \mat{V}\mat{D}\mat{D}\cdots \mat{D}\mat{V}^\dagger = \mat{V}\mat{D}^n\mat{V}^\dagger. \end{gather*}\]

Hence, we can write

\[\begin{gather*} f(\mat{H}) = \mat{V}f(\mat{D})\mat{V}^\dagger = \mat{V} \begin{pmatrix} f(\lambda_1) \\ & f(\lambda_2) \\ & & \cdots \\ & & & f(\lambda_N) \end{pmatrix} \mat{V}^\dagger \end{gather*}\]

by factoring out the \(\mat{V}\) and \(\mat{V}^\dagger\) remaining on the left and right of each term in the series.

We now see that the \(f(\mat{H})\) and \(\mat{H}\) have the same eigenvectors \(\ket{n}\) and that the eigenvalues of \(f(\mat{H})\) are just the \(f(\lambda_n)\). Thus, if \(\mat{H} = \mat{H}^\dagger\) is hermitian with real eigenvalues \(\lambda_n\), then the matrix exponential of \(\I\mat{H}\) has eigenvalues that are phases \(e^{\I\lambda_n}\).
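The following sketch implements this recipe for \(f(x) = e^{\I x}\) with NumPy, and compares it against scipy.linalg.expm (assumed available) as an independent check:

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(seed=3)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2                       # a random hermitian matrix

# Diagonalize H = V D V†, then f(H) = V f(D) V† with f(x) = exp(ix).
lams, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(1j * lams)) @ V.conj().T

assert np.allclose(U, expm(1j * H))            # agrees with a direct matrix exponential
assert np.allclose(U.conj().T @ U, np.eye(4))  # exp(iH) is unitary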

Review#

Matrix Decompositions#

QR Decomposition#

Given a set of vectors \(\ket{a_0}\), \(\ket{a_1}\), etc., one can follow the Gram-Schmidt procedure to produce an orthonormal basis. To simplify the notation, we define the “normalization” operator \(\newcommand{\N}{\mathcal{N}}\N(\ket{a})\)

\[\begin{gather*} \N(\ket{a}) = \frac{\ket{a}}{\norm{\ket{a}}} = \frac{\ket{a}}{\sqrt{\braket{a|a}}}. \end{gather*}\]

We can now express the Gram-Schmidt procedure as a sequence of steps, each projecting the remaining basis vectors into the subspace orthogonal to the vectors already constructed and then normalizing:

\[\begin{align*} \ket{0} &= \N(\ket{a_0}) = \frac{\ket{a_0}}{\sqrt{\braket{a_0|a_0}}} \\ \ket{1} &= \N\Bigl((\op{1} - \ket{0}\bra{0})\ket{a_1}\Bigr) = \frac{\ket{a_1} - \ket{0}\braket{0|a_1}}{\norm{\ket{a_1} - \ket{0}\braket{0|a_1}}}\\ \ket{2} &= \N\Bigl((\op{1} - \ket{0}\bra{0} - \ket{1}\bra{1})\ket{a_2}\Bigr)\\ &\;\;\vdots \end{align*}\]

The QR decomposition numerically encodes the results of this process in terms of a unitary matrix \(\mat{Q}\) and an upper triangular matrix \(\mat{R}\):

\[\begin{gather*} \mat{A} = \mat{Q}\mat{R}\\ \underbrace{ \begin{pmatrix} &\\ \ket{a_0} & \ket{a_1} & \dots\\ & \end{pmatrix}}_{\mat{A}} = \underbrace{ \begin{pmatrix} &\\ \ket{0} & \ket{1} & \dots\\ & \end{pmatrix} }_{\mat{Q}} \underbrace{ \begin{pmatrix} R_{00} & R_{01} & R_{02} & \dots\\ & R_{11} & R_{12} & \dots\\ & & R_{22} & \dots\\ & & & \ddots \end{pmatrix}}_{\mat{R}}\\ \begin{aligned} \ket{a_0} &= \ket{0}R_{00}, \qquad R_{00} = \sqrt{\braket{a_0|a_0}},\\ \ket{a_1} &= \ket{0}R_{01} + \ket{1}R_{11}\\ \ket{a_2} &= \ket{0}R_{02} + \ket{1}R_{12} + \ket{2}R_{22}\\ &\;\;\vdots \end{aligned} \end{gather*}\]

Recall that the columns of a unitary matrix \(\mat{Q}\) form an orthonormal basis.
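Numerically (a sketch with NumPy; the matrix is an arbitrary random example), numpy.linalg.qr performs this factorization:

import numpy as np

rng = np.random.default_rng(seed=4)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

Q, R = np.linalg.qr(A)
assert np.allclose(Q.conj().T @ Q, np.eye(4))   # columns of Q are orthonormal
assert np.allclose(R, np.triu(R))               # R is upper triangular
assert np.allclose(Q @ R, A)

# The first column of Q is the normalized first column of A (up to a phase),
# just as in the first step of the Gram-Schmidt procedure above.
a0 = A[:, 0] / np.linalg.norm(A[:, 0])
assert np.isclose(abs(np.vdot(Q[:, 0], a0)), 1)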

Singular Value Decomposition#

Any matrix \(\mat{A}\) can be decomposed into its singular value decomposition or SVD:

\[\begin{gather*} \mat{A} = \mat{U}\mat{D}\mat{V}^{\dagger}, \end{gather*}\]

where \(\mat{U}\) and \(\mat{V}\) are unitary, and \(\mat{D}\) is diagonal with non-negative entries \(\sigma_n\) called the singular values.

Note: \(\mat{A}\) need not be square:

\[\begin{gather*} \underbrace{\mat{A}}_{2\times 3} = \underbrace{ \begin{pmatrix} &\\ \ket{u_1} & \ket{u_2} \\ & \end{pmatrix} }_{\mat{U}} \underbrace{ \begin{pmatrix} \sigma_1 & 0 & 0\\ 0 & \sigma_2 & 0\\ \end{pmatrix} }_{\mat{D}} \underbrace{ \begin{pmatrix} & \bra{v_1} &\\ & \bra{v_2} &\\ & \bra{v_3} & \end{pmatrix} }_{\mat{V}^\dagger},\\ \underbrace{\mat{A}}_{3\times 2} = \underbrace{ \begin{pmatrix} &\\ \ket{u_1} & \ket{u_2} & \ket{u_3} \\ & \end{pmatrix} }_{\mat{U}} \underbrace{ \begin{pmatrix} \sigma_1 & 0\\ 0 & \sigma_2\\ 0 & 0 \end{pmatrix} }_{\mat{D}} \underbrace{ \begin{pmatrix} & \bra{v_1} &\\ & \bra{v_2} &\\ \end{pmatrix} }_{\mat{V}^\dagger} \end{gather*}\]

The SVD can be computed by finding the eigenvalues and eigenvectors of the two hermitian matrices \(\mat{A}^{\dagger}\mat{A}\) and \(\mat{A}\mat{A}^\dagger\): their eigenvectors give the columns of \(\mat{V}\) and \(\mat{U}\) respectively, and the singular values are the square roots of their (shared) non-negative eigenvalues.
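A minimal numerical check with numpy.linalg.svd (the \(2\times 3\) shape is chosen to match the first example above):

import numpy as np

rng = np.random.default_rng(seed=5)
A = rng.normal(size=(2, 3)) + 1j * rng.normal(size=(2, 3))

U, s, Vdag = np.linalg.svd(A)        # full SVD: U is 2×2, Vdag is 3×3
D = np.zeros((2, 3))
D[:2, :2] = np.diag(s)               # singular values on the "diagonal" of D

assert np.all(s >= 0)                                  # singular values are non-negative
assert np.allclose(U.conj().T @ U, np.eye(2))          # U is unitary
assert np.allclose(Vdag.conj().T @ Vdag, np.eye(3))    # V is unitary
assert np.allclose(U @ D @ Vdag, A)

# The squared singular values are the eigenvalues of A A† (and the non-zero
# eigenvalues of A†A).
assert np.allclose(np.sort(s**2), np.sort(np.linalg.eigvalsh(A @ A.conj().T)))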

Direct Sum and Tensor Product#

Consider two vectors \(\ket{a}\) (with \(N_a\) components) and \(\ket{b}\) (with \(N_b\) components). We can form two new vectors from these:

  1. The direct sum \(\ket{a} \oplus \ket{b}\) which has \(N_a + N_b\) components.

  2. The tensor product \(\ket{a} \otimes \ket{b}\) which has \(N_a N_b\) components.

The first can be visualized as stacking the vectors on top of each other:

\[\begin{gather*} \ket{a} \oplus \ket{b} = \begin{pmatrix} \ket{a}\\ \ket{b} \end{pmatrix}. \end{gather*}\]

The direct sum is useful when we decompose a vector space into subspaces, for example, when using projection matrices. The corresponding matrices are block diagonal:

\[\begin{gather*} \mat{A} \oplus \mat{B} = \begin{pmatrix} \mat{A} \\ & \mat{B} \end{pmatrix} \end{gather*}\]
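For example (a minimal sketch with NumPy; scipy.linalg.block_diag would assemble the same block-diagonal matrix):

import numpy as np

a = np.array([1.0, 2.0])              # N_a = 2 components
b = np.array([3.0, 4.0, 5.0])         # N_b = 3 components

ab = np.concatenate([a, b])           # direct sum: N_a + N_b = 5 components

A = np.diag([1.0, 2.0])
B = np.diag([3.0, 4.0, 5.0])
AB = np.block([[A, np.zeros((2, 3))],
               [np.zeros((3, 2)), B]])   # block-diagonal A ⊕ B

# The block-diagonal matrix acts on each subspace independently.
assert np.allclose(AB @ ab, np.concatenate([A @ a, B @ b]))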

The tensor product is a little more tricky to visualize:

\[\begin{gather*} \ket{a} = \begin{pmatrix} a_1\\ a_2\\ \vdots\\ a_{N_a} \end{pmatrix}, \qquad \ket{b} = \begin{pmatrix} b_1\\ b_2\\ \vdots\\ b_{N_b} \end{pmatrix}, \\ \ket{a}\otimes\ket{b} = \begin{pmatrix} a_1\ket{b}\\ a_2\ket{b}\\ \vdots\\ a_{N_a}\ket{b} \end{pmatrix} = \begin{pmatrix} a_1b_1\\ a_1b_2\\ \vdots\\ a_1b_{N_b}\\ a_2b_1\\ a_2b_2\\ \vdots\\ a_2b_{N_b}\\ \vdots\\ a_{N_a}b_1\\ a_{N_a}b_2\\ \vdots\\ a_{N_a}b_{N_b} \end{pmatrix}. \end{gather*}\]

It is easier to think of \(\ket{a}\otimes\ket{b}\) in terms of grouped indices:

\[\begin{gather*} [\ket{a}\otimes\ket{b}]_{ij} = a_{i}b_{j}. \end{gather*}\]

We can generalize to matrices, first in index form:

\[\begin{gather*} [\mat{A}\otimes\mat{B}]_{ik,jl} = A_{ij}B_{kl}. \end{gather*}\]

Note the grouping carefully. Visually, this corresponds to

\[\begin{gather*} \mat{A} = \begin{pmatrix} a_{11} & a_{12} & \cdots\\ a_{21} & a_{22} & \cdots\\ \vdots & \vdots & \ddots\\ \end{pmatrix}, \qquad \mat{B} = \begin{pmatrix} b_{11} & b_{12} & \cdots\\ b_{21} & b_{22} & \cdots\\ \vdots & \vdots & \ddots\\ \end{pmatrix}, \\ \mat{A}\otimes\mat{B} = \begin{pmatrix} a_{11}\mat{B} & a_{12}\mat{B} & \cdots\\ a_{21}\mat{B} & a_{22}\mat{B} & \cdots\\ \vdots & \vdots & \ddots \end{pmatrix}. \end{gather*}\]

These satisfy the following properties:

\[\begin{gather*} (\mat{A}\ket{a})\otimes(\mat{B}\ket{b}) = (\mat{A}\otimes \mat{B})(\ket{a}\otimes\ket{b}),\\ \Tr \mat{A}\otimes\mat{B} = (\Tr \mat{A})(\Tr\mat{B}). \end{gather*}\]

We also talk about the tensor product of vector spaces, so that if \(\ket{a} \in \mathcal{H}_a\) and \(\ket{b} \in \mathcal{H}_b\), then \(\ket{a}\otimes\ket{b} \in \mathcal{H}_a \otimes \mathcal{H}_b\). If we have an orthonormal basis \(\{\ket{a_i}\}\) for \(\mathcal{H}_a\) and an orthonormal basis \(\{\ket{b_j}\}\) for \(\mathcal{H}_b\), then the basis \(\ket{ij} = \ket{a_i}\otimes\ket{b_j}\) is an orthonormal basis for \(\mathcal{H}_a\otimes \mathcal{H}_b\).

It is useful to be able to work with the tensor product numerically. The function numpy.kron(), named for the Kronecker product, directly implements the tensor product, but I find it clearer to explicitly work with indices using numpy.einsum().

First note that we can “combine indices” using numpy.reshape(). Thus, if we have a matrix \(A_{i,j}\):

\[\begin{gather*} \mat{A} = \begin{pmatrix} A_{00} & A_{01} & A_{02}\\ A_{10} & A_{11} & A_{12} \end{pmatrix}, \end{gather*}\]

this will “ravel” to a vector

\[\begin{gather*} \begin{pmatrix} A_{00} \\ A_{01} \\ A_{02} \\ A_{10} \\ A_{11} \\ A_{12} \end{pmatrix}. \end{gather*}\]

although with a single index (so it appears flat):

A = np.array([
    [1000, 1001, 1002],
    [1010, 1011, 1012]])
display(A)
display(A.ravel())
array([[1000, 1001, 1002],
       [1010, 1011, 1012]])
array([1000, 1001, 1002, 1010, 1011, 1012])

Here is the tensor product demonstrating the index ordering:

\[\begin{gather*} [\mat{A}\otimes \mat{B}]_{ik,jl} = A_{ij}B_{kl} \end{gather*}\]
A = np.array([
    [0, 1],
    [2, 3]])
B = np.array([
    [10, 20],
    [30, 40]])
display(np.kron(A, B))
display(np.einsum('ij,kl->ikjl', A, B).reshape((4,4)))
array([[  0,   0,  10,  20],
       [  0,   0,  30,  40],
       [ 20,  40,  30,  60],
       [ 60,  80,  90, 120]])
array([[  0,   0,  10,  20],
       [  0,   0,  30,  40],
       [ 20,  40,  30,  60],
       [ 60,  80,  90, 120]])

Schmidt Decomposition#

The Schmidt decomposition is an important decomposition of a tensor product space which states that for any vector \(\ket{w} \in \mathcal{H}_a \otimes \mathcal{H}_b\) we can find a set of orthonormal bases \(\{\ket{u_i}\}\) and \(\{\ket{v_j}\}\) for \(\mathcal{H}_a\) and \(\mathcal{H}_b\) respectively such that

\[\begin{gather*} \ket{w} = \sum_{i=1}^{\min(m,n)} \alpha_i \ket{u_i}\otimes\ket{v_{i}} \end{gather*}\]

where \(m = \dim\mathcal{H}_a\) and \(n = \dim\mathcal{H}_b\), so the sum runs over the dimension of the smaller space.
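Numerically, one standard way to obtain the Schmidt decomposition is to reshape the components of \(\ket{w}\) into an \(m\times n\) matrix and take its SVD: the singular values are the coefficients \(\alpha_i \geq 0\), and the columns of \(\mat{U}\) and rows of \(\mat{V}^\dagger\) provide the bases \(\{\ket{u_i}\}\) and \(\{\ket{v_i}\}\). A sketch with NumPy (the dimensions and random state are arbitrary):

import numpy as np

rng = np.random.default_rng(seed=6)
m, n = 2, 3                               # dim H_a = 2, dim H_b = 3
w = rng.normal(size=m * n) + 1j * rng.normal(size=m * n)
w /= np.linalg.norm(w)

# Reshape the components w_{ij} into an m×n matrix and take its SVD.
W = w.reshape(m, n)
U, alpha, Vdag = np.linalg.svd(W, full_matrices=False)

# Reassemble |w> = Σ_i α_i |u_i> ⊗ |v_i> using np.kron for the tensor product.
w_rebuilt = sum(alpha[i] * np.kron(U[:, i], Vdag[i, :]) for i in range(min(m, n)))
assert np.allclose(w_rebuilt, w)
assert np.isclose(np.sum(alpha**2), 1)    # <w|w> = Σ_i α_i² = 1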