From Matrices to Operators
Remember, in Linear Algebra, we studied linear transformations \(T: \mathbb{R}^n \to \mathbb{R}^m\). We learned that a transformation is linear if it preserves addition and scalar multiplication: \[ T(\alpha x + \beta y) = \alpha T(x) + \beta T(y). \]
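To make this concrete, here is a minimal numerical sketch of the linearity property for a matrix map. The specific matrix and vectors are arbitrary illustrations (drawn at random), not anything from the text:

```python
import numpy as np

# A hypothetical linear map T: R^3 -> R^2, represented by a random matrix A.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))

def T(x):
    return A @ x

x, y = rng.standard_normal(3), rng.standard_normal(3)
alpha, beta = 2.0, -0.5

# Linearity: T(alpha*x + beta*y) == alpha*T(x) + beta*T(y)
lhs = T(alpha * x + beta * y)
rhs = alpha * T(x) + beta * T(y)
print(np.allclose(lhs, rhs))  # True (up to floating-point error)
```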
In Functional Analysis, we keep this exact same algebraic definition, but we change the stage from finite-dimensional vectors (\(\mathbb{R}^n\)) to infinite-dimensional function spaces (like Banach and Hilbert spaces). We call these maps linear operators.
The Infinite-Dimensional Trap
Here is the critical difference: In finite dimensions (matrices), every linear map is "well-behaved" — it is automatically continuous. A small change in input always results in a small change in output.
In infinite dimensions, this is no longer true. A linear operator can be "wild" and discontinuous.
Consider the differentiation operator \(T(f) = f'\) acting on smooth functions on \([0, 2\pi]\), where we measure the "size" of a function by the sup norm \(\|f\| = \max_t |f(t)|\). Let \(f_k(t) = \sin(kt)\).
The "size" (maximum value) of the input is always \(\|f_k\| = 1\), regardless of \(k\). However, applying the operator gives: \[ T(f_k) = \frac{d}{dt}\sin(kt) = k\cos(kt) \] The "size" of the output is \(\|Tf_k\| = k\). As \(k \to \infty\), the output explodes to infinity even though the input stays small.
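A quick numerical sketch makes the blow-up visible. We estimate the sup norms on \([0, 2\pi]\) by sampling on a fine grid (the grid size is an arbitrary choice, and the derivative \(k\cos(kt)\) is written out by hand rather than computed numerically):

```python
import numpy as np

# Sample f_k(t) = sin(k t) and its derivative k cos(k t) on [0, 2*pi].
t = np.linspace(0, 2 * np.pi, 100_001)

for k in [1, 10, 100, 1000]:
    f_k = np.sin(k * t)          # input: sup norm stays ~1 for every k
    Tf_k = k * np.cos(k * t)     # output: sup norm equals k
    print(k, np.max(np.abs(f_k)), np.max(np.abs(Tf_k)))
```

The input norm column stays pinned near 1 while the output norm column grows without bound, which is exactly what unboundedness means: no single constant \(C\) can satisfy \(\|Tf\| \le C\|f\|\) for all \(f\).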
This means the differentiation operator is linear but unbounded (and thus discontinuous). Because calculus and optimization require continuity, we must restrict our focus to the class of operators that behave well: bounded linear operators.
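For contrast, here is a sketch of an operator that *is* bounded: the integration operator \((Tf)(t) = \int_0^t f(s)\,ds\) on \(C[0,1]\). Since \(|Tf(t)| \le t \cdot \max|f| \le \max|f|\), we get \(\|Tf\| \le \|f\|\). The trapezoidal discretization below is an illustrative choice, not part of the original text:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 10_001)

def T(f_vals):
    # Cumulative trapezoidal sums approximate (Tf)(t) = integral_0^t f(s) ds.
    dt = t[1] - t[0]
    return np.concatenate(([0.0], np.cumsum((f_vals[1:] + f_vals[:-1]) / 2) * dt))

for k in [1, 10, 100]:
    f = np.sin(k * t)                 # same family of inputs, ||f|| ~ 1
    print(k, np.max(np.abs(T(f))))    # output norm stays <= 1, unlike T(f) = f'
```

Feeding in the same family \(\sin(kt)\) that broke differentiation, the output norm never exceeds the input norm: this is the "well-behaved" continuity that bounded linear operators guarantee.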