Lie Algebras and the Lie Bracket

Contents: The Tangent Space at the Identity · The Lie Bracket · Structure Constants and the Algebra of Rotations · Looking Ahead

The Tangent Space at the Identity

In The Matrix Exponential, we proved that every one-parameter subgroup of a matrix Lie group \(G\) has the form \(\gamma(t) = \exp(tA)\) for a unique matrix \(A = \gamma'(0)\). We interpreted \(A\) as the "velocity vector" of the curve \(\gamma\) at the identity, and observed that the collection of all such velocity vectors forms a vector space. We now formalize this observation.

The Lie Algebra of a Matrix Lie Group

Definition: Lie Algebra of a Matrix Lie Group

Let \(G\) be a matrix Lie group. The Lie algebra of \(G\) is the set \[ \mathfrak{g} = \{ A \in M_n(\mathbb{C}) : \exp(tA) \in G \text{ for all } t \in \mathbb{R} \}. \] Equivalently, \(\mathfrak{g}\) consists of all velocity vectors \(\gamma'(0)\) of smooth curves \(\gamma : \mathbb{R} \to G\) with \(\gamma(0) = I\). We call \(\mathfrak{g}\) the tangent space of \(G\) at the identity and write \(\mathfrak{g} = T_I G\).

The notation \(\mathfrak{g}\) uses Fraktur (Gothic) script, which is the standard convention for Lie algebras. The Lie algebra of a group denoted by an uppercase Roman letter is denoted by the corresponding lowercase Fraktur letter: the Lie algebra of \(G\) is \(\mathfrak{g}\), of \(H\) is \(\mathfrak{h}\), and so on. For named groups, we use the corresponding lowercase name: the Lie algebra of \(GL(n, \mathbb{R})\) is \(\mathfrak{gl}(n, \mathbb{R})\), of \(SO(n)\) is \(\mathfrak{so}(n)\), etc.

The two characterizations in the definition — via the exponential map and via tangent vectors of curves — are equivalent by the one-parameter subgroup theorem: a matrix \(A\) satisfies \(\exp(tA) \in G\) for all \(t\) if and only if \(A\) is the velocity vector \(\gamma'(0)\) of the one-parameter subgroup \(\gamma(t) = \exp(tA)\).

Theorem: The Lie Algebra is a Real Vector Space

Let \(G\) be a matrix Lie group. Then \(\mathfrak{g}\) is a real vector subspace of \(M_n(\mathbb{C})\). That is, \(\mathfrak{g}\) is closed under real scalar multiplication and addition.

Proof:

Scalar multiplication. Let \(A \in \mathfrak{g}\) and \(c \in \mathbb{R}\). We must show \(cA \in \mathfrak{g}\), i.e., \(\exp(t(cA)) \in G\) for all \(t \in \mathbb{R}\). But \(\exp(t(cA)) = \exp((tc)A)\), and since \(A \in \mathfrak{g}\), we have \(\exp(sA) \in G\) for all \(s \in \mathbb{R}\). Taking \(s = tc\), we conclude \(\exp(t(cA)) \in G\) for all \(t\).

Addition. Let \(A, B \in \mathfrak{g}\). We must show \(A + B \in \mathfrak{g}\), i.e., \(\exp(t(A + B)) \in G\) for all \(t \in \mathbb{R}\). The key tool is the Lie product formula (also known as the Trotter product formula): \[ \exp(t(A + B)) = \lim_{N \to \infty} \left(\exp\!\left(\frac{tA}{N}\right) \exp\!\left(\frac{tB}{N}\right)\right)^{\!N}. \] For each finite \(N\), the right-hand side is a product of elements of \(G\) (since \(A, B \in \mathfrak{g}\) implies \(\exp(tA/N), \exp(tB/N) \in G\)), hence belongs to \(G\). The limit exists in \(M_n(\mathbb{C})\) (this is a standard result in matrix analysis), and since \(G\) is a closed subset of \(M_n(\mathbb{C})\), the limit also belongs to \(G\). Therefore \(\exp(t(A + B)) \in G\) for all \(t\), so \(A + B \in \mathfrak{g}\). \(\square\)
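The Lie product formula can be checked numerically. The sketch below (using SciPy's `expm`; the particular matrices \(A\), \(B\) and the value of \(N\) are illustrative choices, not from the text) compares the Trotter approximation against the exact exponential:

```python
import numpy as np
from scipy.linalg import expm

# Two non-commuting matrices (illustrative choice)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])

t = 1.0
exact = expm(t * (A + B))

# Lie product formula: (exp(tA/N) exp(tB/N))^N -> exp(t(A+B)) as N -> infinity
N = 10_000
step = expm(t * A / N) @ expm(t * B / N)
approx = np.linalg.matrix_power(step, N)

print(np.linalg.norm(approx - exact))  # small, decreasing like O(1/N)
```

The error decays like \(1/N\), in line with the first-order correction term \([A, B]/(2N)\) in the exponent.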

The proof of closure under addition illustrates a recurring theme: the closedness condition in the definition of a matrix Lie group is not merely a technical convenience — it is the condition that allows limits of group elements to remain in the group, which in turn ensures that the Lie algebra is a vector space.

The Classical Lie Algebras

In The Matrix Exponential, we established which linear conditions on \(A\) ensure that \(\exp(A)\) lands in each classical group. We now identify these conditions as defining the corresponding Lie algebras. For each classical group, the proof that the stated set equals \(\mathfrak{g}\) proceeds in two directions: if \(A\) satisfies the linear condition, then \(\exp(tA) \in G\) for all \(t\) (the "if" direction, already proved in the previous page); conversely, if \(\exp(tA) \in G\) for all \(t\), then differentiating at \(t = 0\) forces \(A\) to satisfy the linear condition (the "only if" direction).

Definition: \(\mathfrak{gl}(n, \mathbb{R})\)

The Lie algebra of \(GL(n, \mathbb{R})\) is \[ \mathfrak{gl}(n, \mathbb{R}) = M_n(\mathbb{R}), \] the space of all \(n \times n\) real matrices, with no additional constraint. This follows immediately from the fact that \(\exp(tA) \in GL(n, \mathbb{R})\) for all \(t\) and all \(A \in M_n(\mathbb{R})\), since \(\det(\exp(tA)) = e^{t\,\mathrm{tr}(A)} \neq 0\). The dimension is \(n^2\).

Definition: \(\mathfrak{sl}(n, \mathbb{R})\)

The Lie algebra of \(SL(n, \mathbb{R})\) is \[ \mathfrak{sl}(n, \mathbb{R}) = \{ A \in M_n(\mathbb{R}) : \mathrm{tr}(A) = 0 \}, \] the space of traceless real matrices. The dimension is \(n^2 - 1\).

Proof:

(If) If \(\mathrm{tr}(A) = 0\), then \(\det(\exp(tA)) = e^{t\,\mathrm{tr}(A)} = e^0 = 1\), so \(\exp(tA) \in SL(n, \mathbb{R})\) for all \(t\). Hence \(A \in \mathfrak{sl}(n, \mathbb{R})\).

(Only if) If \(\exp(tA) \in SL(n, \mathbb{R})\) for all \(t\), then \(\det(\exp(tA)) = 1\) for all \(t\), i.e., \(e^{t\,\mathrm{tr}(A)} = 1\) for all \(t\). Differentiating at \(t = 0\) gives \(\mathrm{tr}(A) = 0\). \(\square\)
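The identity \(\det(\exp(tA)) = e^{t\,\mathrm{tr}(A)}\) driving both directions can be verified numerically; the traceless matrix below is an arbitrary illustrative choice:

```python
import numpy as np
from scipy.linalg import expm

# An illustrative traceless matrix: tr(A) = 1 + (-1) = 0
A = np.array([[1.0, 2.0],
              [3.0, -1.0]])
assert np.isclose(np.trace(A), 0.0)

# det(exp(tA)) = e^{t tr(A)} = 1 for every t, so exp(tA) lies in SL(2, R)
dets = [np.linalg.det(expm(t * A)) for t in (0.5, 1.0, 2.0)]
print(dets)  # each entry equals 1 up to roundoff
```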

Definition: \(\mathfrak{so}(n)\)

The Lie algebra of both \(O(n)\) and \(SO(n)\) is \[ \mathfrak{so}(n) = \{ A \in M_n(\mathbb{R}) : A^\top = -A \}, \] the space of skew-symmetric (or antisymmetric) real matrices. The dimension is \(n(n-1)/2\).

Proof:

(If) If \(A^\top = -A\), then \(\exp(tA)^\top = \exp(tA^\top) = \exp(-tA) = \exp(tA)^{-1}\), so \(\exp(tA) \in O(n)\) for all \(t\). Moreover, every skew-symmetric matrix has zero diagonal and hence zero trace, so \(\det(\exp(tA)) = e^{t\,\mathrm{tr}(A)} = e^{0} = 1\) for all \(t\), and therefore \(\exp(tA) \in SO(n)\) for all \(t\).

(Only if) Suppose \(\exp(tA) \in O(n)\) for all \(t\), i.e., \(\exp(tA)^\top \exp(tA) = I\) for all \(t\). Differentiating both sides with respect to \(t\) at \(t = 0\): \[ \frac{d}{dt}\Big|_{t=0} \bigl[\exp(tA)^\top \exp(tA)\bigr] = A^\top \cdot I + I \cdot A = A^\top + A = 0. \] Therefore \(A^\top = -A\). \(\square\)
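A quick numerical check of the "if" direction: exponentiating a randomly generated skew-symmetric matrix (an illustrative construction, not from the text) lands in \(SO(n)\):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M - M.T                        # skew-symmetric by construction: A.T == -A
R = expm(A)

print(np.allclose(R.T @ R, np.eye(4)))    # True: R is orthogonal
print(np.isclose(np.linalg.det(R), 1.0))  # True: in fact R lies in SO(4)
```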

The fact that \(O(n)\) and \(SO(n)\) have the same Lie algebra reflects a general principle: the Lie algebra captures only the local structure of a group near the identity. Since \(SO(n)\) is the connected component of \(O(n)\) containing the identity, and the exponential map generates only a neighborhood of the identity, both groups yield the same tangent space.

Definition: \(\mathfrak{u}(n)\)

The Lie algebra of \(U(n)\) is \[ \mathfrak{u}(n) = \{ A \in M_n(\mathbb{C}) : A^* = -A \}, \] the space of skew-Hermitian matrices. The dimension is \(n^2\) (as a real vector space: the diagonal entries are purely imaginary, giving \(n\) real parameters, and the strictly upper-triangular entries are arbitrary complex numbers, giving \(2 \cdot \frac{n(n-1)}{2} = n(n-1)\) real parameters; total \(n + n(n-1) = n^2\)).

Definition: \(\mathfrak{su}(n)\)

The Lie algebra of \(SU(n)\) is \[ \mathfrak{su}(n) = \{ A \in M_n(\mathbb{C}) : A^* = -A,\; \mathrm{tr}(A) = 0 \}, \] the space of traceless skew-Hermitian matrices. The dimension is \(n^2 - 1\).

The proofs for \(\mathfrak{u}(n)\) and \(\mathfrak{su}(n)\) are entirely analogous to those for \(\mathfrak{so}(n)\) and \(\mathfrak{sl}(n)\), replacing the transpose \(A^\top\) with the conjugate transpose \(A^*\).
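The analogous check for the unitary groups, with a randomly generated skew-Hermitian matrix (an illustrative construction):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = M - M.conj().T                   # skew-Hermitian: A* = -A
A -= (np.trace(A) / 3) * np.eye(3)   # remove the (purely imaginary) trace
U = expm(A)

print(np.allclose(U.conj().T @ U, np.eye(3)))  # True: U is unitary
print(np.isclose(np.linalg.det(U), 1.0))       # True: det U = 1, so U is in SU(3)
```

Note that subtracting \((\mathrm{tr}(A)/n)\,I\) preserves skew-Hermiticity because the trace of a skew-Hermitian matrix is purely imaginary.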

The following table collects the classical Lie algebras alongside the groups from which they arise. Compare this with the summary table in Matrix Lie Groups and the linear-to-nonlinear correspondence in The Matrix Exponential:

| Group \(G\) | Lie Algebra \(\mathfrak{g}\) | Defining Condition on \(A \in \mathfrak{g}\) | \(\dim_{\mathbb{R}} \mathfrak{g}\) |
| --- | --- | --- | --- |
| \(GL(n, \mathbb{R})\) | \(\mathfrak{gl}(n, \mathbb{R})\) | (no constraint) | \(n^2\) |
| \(SL(n, \mathbb{R})\) | \(\mathfrak{sl}(n, \mathbb{R})\) | \(\mathrm{tr}(A) = 0\) | \(n^2 - 1\) |
| \(O(n)\) / \(SO(n)\) | \(\mathfrak{so}(n)\) | \(A^\top = -A\) | \(n(n-1)/2\) |
| \(U(n)\) | \(\mathfrak{u}(n)\) | \(A^* = -A\) | \(n^2\) |
| \(SU(n)\) | \(\mathfrak{su}(n)\) | \(A^* = -A,\; \mathrm{tr}(A) = 0\) | \(n^2 - 1\) |

Explicit Basis for \(\mathfrak{so}(3)\)

The Lie algebra \(\mathfrak{so}(3)\) — the tangent space of the rotation group \(SO(3)\) at the identity — is a 3-dimensional real vector space. A natural basis consists of the infinitesimal generators introduced in The Matrix Exponential: \[ E_1 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix}, \quad E_2 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix}, \quad E_3 = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}. \] Each \(E_i\) is skew-symmetric (\(E_i^\top = -E_i\)), confirming \(E_i \in \mathfrak{so}(3)\). They are linearly independent and \(\dim \mathfrak{so}(3) = 3(3-1)/2 = 3\), so \(\{E_1, E_2, E_3\}\) is a basis.
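These generators can be entered directly; the snippet below (illustrative) confirms their skew-symmetry and shows that exponentiating \(E_3\) produces a rotation about the \(z\)-axis:

```python
import numpy as np
from scipy.linalg import expm

E1 = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
E2 = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
E3 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])

# Each generator is skew-symmetric, hence lies in so(3)
for E in (E1, E2, E3):
    assert np.allclose(E.T, -E)

# exp(theta * E3) is the rotation by theta about the z-axis
theta = np.pi / 2
Rz = expm(theta * E3)
print(np.round(Rz @ np.array([1., 0., 0.]), 6))  # e1 is rotated to e2: [0. 1. 0.]
```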

Recall that the hat map \(\boldsymbol{\omega} = (\omega_1, \omega_2, \omega_3)^\top \mapsto \hat{\boldsymbol{\omega}}_\times = \omega_1 E_1 + \omega_2 E_2 + \omega_3 E_3\) provides an isomorphism \(\mathbb{R}^3 \xrightarrow{\;\sim\;} \mathfrak{so}(3)\) of vector spaces. We will soon see that this isomorphism also respects additional algebraic structure: the cross product on \(\mathbb{R}^3\) corresponds to the Lie bracket on \(\mathfrak{so}(3)\).

The Lie Bracket

A Lie algebra is more than just a vector space — it carries an algebraic operation that encodes the non-commutativity of the parent group at the infinitesimal level. In The Matrix Exponential, we saw that \(\exp(A)\exp(B) = \exp(A + B)\) if and only if \(A\) and \(B\) commute, and that the first-order correction involves the commutator \([A, B] = AB - BA\). We now show that this commutator equips the Lie algebra with a fundamental algebraic structure.

Closure of the Lie Algebra under the Commutator

The first question is: if \(A\) and \(B\) belong to the Lie algebra \(\mathfrak{g}\), does their commutator \([A, B] = AB - BA\) also belong to \(\mathfrak{g}\)? Note that this is not obvious — \(\mathfrak{g}\) is defined as a subspace of \(M_n(\mathbb{C})\), and the product of two elements of \(\mathfrak{g}\) need not lie in \(\mathfrak{g}\) (for instance, the product of two skew-symmetric matrices is generally not skew-symmetric). The answer is yes, and the proof is illuminating.

Theorem: Closure under the Commutator

Let \(G\) be a matrix Lie group with Lie algebra \(\mathfrak{g}\). If \(A, B \in \mathfrak{g}\), then \([A, B] = AB - BA \in \mathfrak{g}\).

Proof:

We establish the identity \[ [A, B] = \left.\frac{d}{dt}\right|_{t=0} \exp(tA)\, B\, \exp(-tA). \] To verify this, expand the right-hand side using \(\exp(tA) = I + tA + O(t^2)\) and \(\exp(-tA) = I - tA + O(t^2)\): \[ \begin{align*} \exp(tA)\, B\, \exp(-tA) &= (I + tA + O(t^2))\, B\, (I - tA + O(t^2)) \\ &= B + t(AB - BA) + O(t^2) \\ &= B + t[A, B] + O(t^2). \end{align*} \] Differentiating at \(t = 0\) gives \([A, B]\) as claimed.

Now we show \([A, B] \in \mathfrak{g}\). Since \(A \in \mathfrak{g}\), we have \(\exp(tA) \in G\) for all \(t\). Since \(B \in \mathfrak{g}\), we have \(\exp(sB) \in G\) for all \(s\). Therefore the conjugation \[ \exp(tA)\,\exp(sB)\,\exp(-tA) \in G \quad \text{for all } t, s \in \mathbb{R} \] (as a product of three elements of the group \(G\)). For fixed \(t\), the map \(s \mapsto \exp(tA)\,\exp(sB)\,\exp(-tA)\) is a one-parameter subgroup of \(G\) (it is a continuous homomorphism from \((\mathbb{R}, +)\) to \(G\), since the group property in \(s\) is inherited from \(\exp(sB)\)). By the one-parameter subgroup theorem, it has the form \(\exp(s \cdot C(t))\) for some \(C(t) \in \mathfrak{g}\). Differentiating with respect to \(s\) at \(s = 0\) gives: \[ C(t) = \exp(tA)\, B\, \exp(-tA) \in \mathfrak{g} \quad \text{for all } t. \] Since \(\mathfrak{g}\) is a (closed) vector subspace of \(M_n(\mathbb{C})\), it is closed under differentiation of curves: the derivative of a curve lying in \(\mathfrak{g}\) also lies in \(\mathfrak{g}\). Therefore: \[ [A, B] = \left.\frac{d}{dt}\right|_{t=0} C(t) \in \mathfrak{g}. \qquad \square \]
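The key identity \([A, B] = \frac{d}{dt}\big|_{t=0} \exp(tA)\, B\, \exp(-tA)\) lends itself to a finite-difference check; the random skew-symmetric matrices below are illustrative:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M1, M2 = rng.standard_normal((2, 3, 3))
A, B = M1 - M1.T, M2 - M2.T              # two random elements of so(3)

bracket = A @ B - B @ A
assert np.allclose(bracket.T, -bracket)  # the commutator is again skew-symmetric

# central difference of t -> exp(tA) B exp(-tA) at t = 0
h = 1e-6
curve = lambda t: expm(t * A) @ B @ expm(-t * A)
deriv = (curve(h) - curve(-h)) / (2 * h)
print(np.linalg.norm(deriv - bracket))   # small: the derivative equals [A, B]
```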

Definition and Properties

Having established that the commutator preserves the Lie algebra, we now formalize it as an algebraic operation.

Definition: Lie Bracket (Matrix Case)

Let \(\mathfrak{g}\) be the Lie algebra of a matrix Lie group. The Lie bracket is the operation \([\,\cdot\,,\,\cdot\,] : \mathfrak{g} \times \mathfrak{g} \to \mathfrak{g}\) defined by \[ [X, Y] = XY - YX. \]

Theorem: Properties of the Lie Bracket

For all \(X, Y, Z \in \mathfrak{g}\) and \(a, b \in \mathbb{R}\):

(a) Bilinearity: \[ \begin{align*} [aX + bY,\, Z] &= a[X, Z] + b[Y, Z], \\ [Z,\, aX + bY] &= a[Z, X] + b[Z, Y]. \end{align*} \]

(b) Antisymmetry: \[ [X, Y] = -[Y, X]. \] In particular, \([X, X] = 0\) for all \(X \in \mathfrak{g}\).

(c) Jacobi identity: \[ [X,\, [Y, Z]] + [Y,\, [Z, X]] + [Z,\, [X, Y]] = 0. \]

Proofs:

(a) Bilinearity follows directly from the linearity of matrix multiplication in each factor: \[ \begin{align*} [aX + bY, Z] &= (aX + bY)Z - Z(aX + bY) \\ &= a(XZ - ZX) + b(YZ - ZY) = a[X, Z] + b[Y, Z]. \end{align*} \] The second identity is proved identically.

(b) Immediate: \[ [X, Y] = XY - YX = -(YX - XY) = -[Y, X]. \] Setting \(Y = X\) gives \([X, X] = -[X, X]\), hence \([X, X] = 0\).

(c) We expand each term. Writing \([X, [Y, Z]] = X(YZ - ZY) - (YZ - ZY)X = XYZ - XZY - YZX + ZYX\) and cyclically permuting: \[ \begin{align*} [X, [Y, Z]] &= XYZ - XZY - YZX + ZYX, \\ [Y, [Z, X]] &= YZX - YXZ - ZXY + XZY, \\ [Z, [X, Y]] &= ZXY - ZYX - XYZ + YXZ. \end{align*} \] Adding these three expressions, every term cancels: \(XYZ\) appears once with \(+\) (first line) and once with \(-\) (third line); similarly for all other terms. The sum is zero. \(\square\)
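The cancellation in the Jacobi identity can be confirmed numerically for random matrices (an illustrative sanity check, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
X, Y, Z = rng.standard_normal((3, 4, 4))

def br(P, Q):
    """Matrix commutator [P, Q] = PQ - QP."""
    return P @ Q - Q @ P

jacobi = br(X, br(Y, Z)) + br(Y, br(Z, X)) + br(Z, br(X, Y))
print(np.linalg.norm(jacobi))  # zero up to floating-point roundoff
```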

The Jacobi identity is the Lie-algebraic analogue of associativity. While the Lie bracket is not associative — in general, \([X, [Y, Z]] \neq [[X, Y], Z]\) — the Jacobi identity provides a weaker constraint that governs how brackets interact. It can be rewritten as \[ [X, [Y, Z]] = [[X, Y], Z] + [Y, [X, Z]], \] which states that the operation \(\mathrm{ad}(X) : Y \mapsto [X, Y]\) is a derivation with respect to the bracket: it satisfies a Leibniz-type rule. This perspective will become central when we study the adjoint representations in The Lie Correspondence.

The Abstract Definition

The three properties above — bilinearity, antisymmetry, and the Jacobi identity — characterize Lie algebras in full generality.

Definition: Lie Algebra (Abstract)

A Lie algebra over a field \(\mathbb{F}\) is a vector space \(\mathfrak{g}\) over \(\mathbb{F}\) equipped with a bilinear operation \([\,\cdot\,,\,\cdot\,] : \mathfrak{g} \times \mathfrak{g} \to \mathfrak{g}\) (the Lie bracket) satisfying:

  1. Antisymmetry: \([X, Y] = -[Y, X]\) for all \(X, Y \in \mathfrak{g}\).
  2. Jacobi identity: \([X, [Y, Z]] + [Y, [Z, X]] + [Z, [X, Y]] = 0\) for all \(X, Y, Z \in \mathfrak{g}\).

Every Lie algebra of a matrix Lie group is a Lie algebra in this abstract sense, with \(\mathbb{F} = \mathbb{R}\) and the bracket \([X, Y] = XY - YX\). The converse — that every finite-dimensional (abstract) Lie algebra is isomorphic to a matrix Lie algebra — is a deep result known as Ado's theorem. Its proof is far beyond our scope, but the theorem assures us that the matrix setting loses no generality.

Structure Constants and the Algebra of Rotations

A Lie algebra is determined, relative to a chosen basis, by its structure constants — the coefficients that express each bracket of basis elements as a linear combination of basis elements. We compute these for \(\mathfrak{so}(3)\) and discover a striking connection to the cross product.

Structure Constants

Let \(\mathfrak{g}\) be a finite-dimensional Lie algebra with basis \(\{E_1, \dots, E_d\}\). Since the bracket \([E_i, E_j]\) belongs to \(\mathfrak{g}\), it can be written as a linear combination of basis elements: \[ [E_i, E_j] = \sum_{k=1}^{d} c_{ij}^{\,k}\, E_k. \] The scalars \(c_{ij}^{\,k}\) are called the structure constants of \(\mathfrak{g}\) with respect to the basis \(\{E_i\}\). By bilinearity, the bracket of any two elements is determined by these constants: if \(X = \sum_i x_i E_i\) and \(Y = \sum_j y_j E_j\), then \[ [X, Y] = \sum_{i,j} x_i y_j\, [E_i, E_j] = \sum_{i,j,k} x_i y_j\, c_{ij}^{\,k}\, E_k. \] The structure constants encode the Lie algebra completely (relative to the chosen basis).

From antisymmetry, the structure constants satisfy \(c_{ij}^{\,k} = -c_{ji}^{\,k}\), and the Jacobi identity imposes further quadratic relations among them.

The Bracket of \(\mathfrak{so}(3)\)

We now compute the Lie bracket for the basis \(\{E_1, E_2, E_3\}\) of \(\mathfrak{so}(3)\) introduced in the previous section.

Computation:

We compute \([E_1, E_2] = E_1 E_2 - E_2 E_1\) by direct matrix multiplication: \[ E_1 E_2 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \] \[ E_2 E_1 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \] \[ [E_1, E_2] = E_1 E_2 - E_2 E_1 = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} = E_3. \] By analogous (or cyclic) computations: \[ [E_1, E_2] = E_3, \qquad [E_2, E_3] = E_1, \qquad [E_3, E_1] = E_2. \] Together with the antisymmetry relations \([E_2, E_1] = -E_3\), \([E_3, E_2] = -E_1\), \([E_1, E_3] = -E_2\), and the vanishing brackets \([E_i, E_i] = 0\), these determine the full bracket on \(\mathfrak{so}(3)\).

The structure constants of \(\mathfrak{so}(3)\) with respect to \(\{E_1, E_2, E_3\}\) are therefore \[ c_{ij}^{\,k} = \varepsilon_{ijk}, \] where \(\varepsilon_{ijk}\) is the Levi-Civita symbol: \(\varepsilon_{123} = \varepsilon_{231} = \varepsilon_{312} = +1\), \(\varepsilon_{213} = \varepsilon_{132} = \varepsilon_{321} = -1\), and \(\varepsilon_{ijk} = 0\) whenever two indices coincide.
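The claim \(c_{ij}^{\,k} = \varepsilon_{ijk}\) can be verified by projecting each bracket \([E_i, E_j]\) onto the basis. The projection below uses the (easily checked) fact that the \(E_k\) are orthogonal with \(\mathrm{tr}(E_k^\top E_k) = 2\) in the Frobenius inner product; indices run from 0 for consistency with Python:

```python
import numpy as np

E = [np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]]),
     np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]]),
     np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])]

def eps(i, j, k):
    """Levi-Civita symbol on indices 0, 1, 2."""
    return ((i - j) * (j - k) * (k - i)) / 2

for i in range(3):
    for j in range(3):
        C = E[i] @ E[j] - E[j] @ E[i]        # the bracket [E_i, E_j]
        for k in range(3):
            c = np.trace(E[k].T @ C) / 2     # project onto E_k (||E_k||_F^2 = 2)
            assert np.isclose(c, eps(i, j, k))
print("structure constants of so(3) agree with the Levi-Civita symbol")
```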

The Cross Product Isomorphism

The bracket relations \([E_1, E_2] = E_3\), \([E_2, E_3] = E_1\), \([E_3, E_1] = E_2\) are identical to the relations defining the cross product on \(\mathbb{R}^3\): \(\mathbf{e}_1 \times \mathbf{e}_2 = \mathbf{e}_3\), \(\mathbf{e}_2 \times \mathbf{e}_3 = \mathbf{e}_1\), \(\mathbf{e}_3 \times \mathbf{e}_1 = \mathbf{e}_2\). This is not a coincidence.

The hat map \(\boldsymbol{\omega} \mapsto \hat{\boldsymbol{\omega}}_\times\) from \(\mathbb{R}^3\) to \(\mathfrak{so}(3)\) is an isomorphism of Lie algebras: \[ \widehat{\boldsymbol{\omega}_1 \times \boldsymbol{\omega}_2} = [\hat{\boldsymbol{\omega}}_{1,\times},\, \hat{\boldsymbol{\omega}}_{2,\times}] \] for all \(\boldsymbol{\omega}_1, \boldsymbol{\omega}_2 \in \mathbb{R}^3\). In words: the cross product on \(\mathbb{R}^3\) is the Lie bracket on \(\mathfrak{so}(3)\), transferred via the hat map.
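The homomorphism property of the hat map can be checked numerically for random vectors (the function name `hat` is our own):

```python
import numpy as np

def hat(w):
    """Hat map R^3 -> so(3): hat(w) @ v equals np.cross(w, v)."""
    return np.array([[0., -w[2], w[1]],
                     [w[2], 0., -w[0]],
                     [-w[1], w[0], 0.]])

rng = np.random.default_rng(0)
w1, w2 = rng.standard_normal((2, 3))

lhs = hat(np.cross(w1, w2))                  # hat of the cross product
rhs = hat(w1) @ hat(w2) - hat(w2) @ hat(w1)  # Lie bracket of the hats
print(np.allclose(lhs, rhs))  # True
```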

This isomorphism \((\mathbb{R}^3, \times) \cong (\mathfrak{so}(3), [\,\cdot\,,\,\cdot\,])\) is an exceptional phenomenon: it relies on the fact that both spaces are 3-dimensional and that the Levi-Civita symbol is totally antisymmetric. There is no analogous cross-product isomorphism for \(\mathfrak{so}(n)\) when \(n \neq 3\) (since \(\dim \mathfrak{so}(n) = n(n-1)/2 \neq n\) for \(n \neq 3\)).

Angular Velocity and the Equation of Rotation

The cross product isomorphism gives physical meaning to the Lie algebra \(\mathfrak{so}(3)\). In mechanics, the angular velocity of a rotating body is a vector \(\boldsymbol{\omega} = (\omega_1, \omega_2, \omega_3)^\top \in \mathbb{R}^3\). Via the hat map, this corresponds to the skew-symmetric matrix \(\hat{\boldsymbol{\omega}}_\times \in \mathfrak{so}(3)\). The kinematic equation of a rotating rigid body is the ODE on \(SO(3)\): \[ \frac{dR}{dt} = \hat{\boldsymbol{\omega}}_\times\, R, \] where \(R(t) \in SO(3)\) describes the orientation of the body at time \(t\) and \(\boldsymbol{\omega}\) is the angular velocity expressed in the spatial frame (the body-frame convention instead writes \(\dot{R} = R\,\hat{\boldsymbol{\omega}}^b_\times\), with the two related by \(\boldsymbol{\omega} = R\,\boldsymbol{\omega}^b\)). This is an ODE on the Lie group \(SO(3)\), driven by a time-varying element of the Lie algebra \(\mathfrak{so}(3)\). When \(\boldsymbol{\omega}\) is constant, the solution is \(R(t) = \exp(t\,\hat{\boldsymbol{\omega}}_\times)\,R(0)\) — a one-parameter subgroup applied to the initial orientation.
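For constant \(\boldsymbol{\omega}\), the closed-form solution above can be verified numerically (the helper `hat` and the chosen angular velocity are illustrative):

```python
import numpy as np
from scipy.linalg import expm

def hat(w):
    """Hat map R^3 -> so(3)."""
    return np.array([[0., -w[2], w[1]],
                     [w[2], 0., -w[0]],
                     [-w[1], w[0], 0.]])

omega = np.array([0., 0., 1.])   # constant angular velocity about the z-axis
R0 = np.eye(3)                   # initial orientation

def R(t):
    # closed-form solution of dR/dt = hat(omega) R for constant omega
    return expm(t * hat(omega)) @ R0

Rt = R(np.pi / 2)                         # a quarter turn
assert np.allclose(Rt.T @ Rt, np.eye(3))  # R(t) stays in SO(3)
print(np.round(Rt @ np.array([1., 0., 0.]), 6))  # e1 carried to e2
```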

The structure constants of \(\mathfrak{so}(3)\) — the Levi-Civita symbol — encode the physics of gyroscopic precession. The relation \([E_1, E_2] = E_3\) means that combining infinitesimal rotations about the \(x\)- and \(y\)-axes produces an infinitesimal rotation about the \(z\)-axis. This is the mathematical content of the right-hand rule.

Abelian and Non-Abelian Lie Algebras

A Lie algebra \(\mathfrak{g}\) is called abelian if \([X, Y] = 0\) for all \(X, Y \in \mathfrak{g}\). This occurs precisely when the corresponding Lie group is locally commutative (i.e., commutative in a neighborhood of the identity).

Examples:

(a) \(\mathfrak{gl}(1, \mathbb{R}) = \mathbb{R}\) is abelian: the bracket of two real numbers is \([a, b] = ab - ba = 0\). The group \(GL(1, \mathbb{R}) = \mathbb{R} \setminus \{0\}\) is commutative (multiplication of real numbers is commutative).

(b) \(\mathfrak{so}(2) \cong \mathbb{R}\) is abelian: it is 1-dimensional, so every bracket vanishes: \([cJ, dJ] = cd\,[J, J] = 0\), where \(J = \bigl(\begin{smallmatrix} 0 & -1 \\ 1 & 0 \end{smallmatrix}\bigr)\). The group \(SO(2) \cong S^1\) is commutative (rotations of the plane commute).

(c) \(\mathfrak{so}(3)\) is non-abelian: \([E_1, E_2] = E_3 \neq 0\). This reflects the non-commutativity of 3D rotations, which we first encountered in the context of the dihedral group \(D_n\) and its non-commuting generators. The passage from \(D_n\) (discrete, non-abelian) to \(SO(3)\) (continuous, non-abelian) preserves the essential non-commutativity — and the Lie bracket provides the precise infinitesimal measure of this non-commutativity.

Looking Ahead

We have defined the Lie algebra \(\mathfrak{g}\) of a matrix Lie group \(G\) as the tangent space at the identity, computed the classical Lie algebras, and discovered that the commutator \([X, Y] = XY - YX\) equips \(\mathfrak{g}\) with the structure of an abstract Lie algebra. The worked example of \(\mathfrak{so}(3)\) revealed that the Lie bracket encodes rotational physics — the structure constants are the Levi-Civita symbol, and the bracket is the cross product.

A fundamental question remains: to what extent does the Lie algebra determine the Lie group? We have seen that different groups can share the same Lie algebra (\(O(n)\) and \(SO(n)\) both have Lie algebra \(\mathfrak{so}(n)\)). How much group information is captured by the algebra, and how much is lost?

In the next page, we answer this question through the Lie group–Lie algebra correspondence. The Baker-Campbell-Hausdorff formula will show that the group multiplication near the identity is entirely encoded by the Lie bracket, establishing that \(\mathfrak{g}\) determines the local structure of \(G\). We will examine the dramatic example of \(SU(2)\) and \(SO(3)\) — groups with isomorphic Lie algebras but different global topology, connected by a 2:1 covering map. Finally, the adjoint representations \(\mathrm{Ad}\) and \(\mathrm{ad}\) will show how the group acts on its own Lie algebra, bridging the theory toward Representation Theory.