So when I got to grad school, certain gaps in my (very unconventional) math education revealed themselves, and one of them was that I didn’t know anything about vector space duality. I took a graduate linear algebra course that first semester, but I think what actually caught me up to speed was my peers. I have a vivid memory of a conversation with Alex Blumenthal in which he said something to the effect of:
“You know what an inner product really is though?? It’s an isomorphism to the dual!!!”
So, that was dope. Thanks, Alex!
But, once I thought about it, something bothered me a little. Suppose $V$ is a real, finite-dimensional vector space. An inner product on $V$ is a symmetric, positive definite bilinear form $B: V \times V \to \mathbb{R}$. But any nondegenerate bilinear form $B$ yields an isomorphism to the dual. The map given by $v \mapsto B(v, \cdot)$ for any $v \in V$, is a map $V \to V^*$ by $B$’s linearity in the second argument, is furthermore a linear map by the linearity of $B$ in the first argument, and is injective by the nondegeneracy of $B$. Since $V$ and $V^*$ have the same dimension, this proves it is an isomorphism. So, why does an inner product have to be positive definite and symmetric?
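To make this concrete, here is a small numeric sketch of my own (the matrix $S$ and the helper names are hypothetical, not from the post): representing a bilinear form $B$ on $\mathbb{R}^2$ by a matrix $S$ via $B(v, w) = v^T S w$, the functional $B(v, \cdot)$ has coefficient vector $S^T v$, so $v \mapsto B(v, \cdot)$ is an isomorphism exactly when $S$ is nonsingular — symmetric or not.

```python
# A hypothetical example of my own (not from the post).  Represent a bilinear
# form B on R^2 by a matrix S via B(v, w) = v^T S w; then the functional
# B(v, .) has coefficient vector S^T v, so v -> B(v, .) is the linear map
# with matrix S^T, injective exactly when S is nonsingular.

def bilinear(S, v, w):
    """B(v, w) = v^T S w."""
    return sum(v[i] * S[i][j] * w[j] for i in range(2) for j in range(2))

def det2(S):
    return S[0][0] * S[1][1] - S[0][1] * S[1][0]

# Nondegenerate but NOT symmetric: still yields an isomorphism to the dual.
S = [[1.0, 2.0], [0.0, 1.0]]
assert det2(S) != 0 and S[0][1] != S[1][0]

# Coefficients of the functional B(v, .) are S^T v ...
v = [3.0, -1.0]
St = [[S[j][i] for j in range(2)] for i in range(2)]
coeffs = [sum(St[i][j] * v[j] for j in range(2)) for i in range(2)]

# ... and they really do reproduce B(v, w) on the standard basis vectors.
for w in ([1.0, 0.0], [0.0, 1.0]):
    assert bilinear(S, v, w) == sum(c * x for c, x in zip(coeffs, w))
print("non-symmetric nondegenerate form still gives an isomorphism to the dual")
```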
Well, ok, look, if you want to use the inner product to define a metric (and, you know, who doesn’t?) then fine, I can see why these properties are needed. So, I’ve lived happily with the definition of inner products ever since. But still, this feels like geometry or analysis or something. Alex’s elegant explanation for what an inner product “really is” was pure algebra. So, some part of me always wondered: is there something special, from a purely algebraic point of view, about the isomorphism that comes from an inner product (versus the isomorphisms that come from bilinear forms that are merely nondegenerate)?
Well, it took the intervening, what, 12 and a half years? But I figured it out. At least to my satisfaction.
I still see positive definiteness as essentially an analytic or geometric thing, not algebra. It would lose meaning if we switched to vector spaces over a positive-characteristic field, which from an algebraic point of view is not really a different setting. So for the purposes of this inquiry, I view positive definiteness just as a stand-in for nondegeneracy, which it implies. So the real mystery for me was always symmetry. What makes the isomorphism to the dual that arises from a symmetric bilinear form different from ones that arise from forms that are not symmetric?
I will answer below, but to set the stage, let’s talk about some other, more naive, ways to construct an isomorphism $V \to V^*$. For starters, because they’re both finite-dimensional vector spaces of equal dimension, one could just pick a basis $e_1, \dots, e_n$ for $V$ and a basis $f_1, \dots, f_n$ for $V^*$, and then linearly extend the map $e_i \mapsto f_i$. This would certainly work.
Only slightly less naively, one could stop after the choice of basis $e_1, \dots, e_n$ for $V$, and realize that this choice already allows the specification of an isomorphism. The chosen basis uniquely identifies a dual basis $e^1, \dots, e^n$ for $V^*$, characterized by the property that $e^i(e_j) = \delta_{ij}$ (Kronecker delta). So there is no need to make an independent choice of basis for $V^*$; one can linearly extend the map $e_i \mapsto e^i$ and be done with it.
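As a quick sanity check (my own numeric example, not from the post), here is the dual-basis computation for a concrete basis of $\mathbb{R}^2$: the rows of $A^{-1}$, acting on vectors by the dot product, satisfy the Kronecker-delta property.

```python
# My own numeric example: compute the dual basis of a chosen basis of R^2
# and check the Kronecker-delta property e^i(e_j) = delta_ij.

def inv2(X):
    """Inverse of a nonsingular 2x2 matrix."""
    a, b, c, d = X[0][0], X[0][1], X[1][0], X[1][1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1.0, 2.0], [1.0, 3.0]]                      # chosen basis = columns of A
basis = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]  # e_1, e_2 as vectors
Ainv = inv2(A)
dual = [Ainv[0], Ainv[1]]  # e^1, e^2: row i of A^{-1}, acting by dot product

for i in range(2):
    for j in range(2):
        val = sum(dual[i][k] * basis[j][k] for k in range(2))
        assert val == (1.0 if i == j else 0.0)
print("e^i(e_j) = delta_ij verified")
```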
All of this I have known for years, probably for the same 12.5 years since Alex clued me in in the first place. What I only just very recently figured out is this:
The isomorphisms that arise from the second method are the same ones that arise from symmetric bilinear forms!!!
In other words, yes, any nondegenerate bilinear form will give you an isomorphism to the dual; but only the symmetric ones will map a basis of $V$ to the dual basis!
I realized this in the context of representation theory. If $V$ is an irreducible representation of a group $G$ (over $\mathbb{C}$, say), then it is isomorphic as a representation to its dual representation $V^*$ iff there is a nondegenerate bilinear form on $V$ that is invariant under $G$. Because of irreducibility, this form (when it exists) will be unique up to scalar, and either symmetric or antisymmetric. In the former case, $G$ is a subgroup of an orthogonal group; in the latter, it’s a subgroup of a symplectic group. In both cases, there exist bases for $V$ and $V^*$ according to which the matrices representing $G$ (with respect to those bases) are identical (this is what it means for $V$ and $V^*$ to be isomorphic), but only in the former case can these bases be chosen to be dual to each other.
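A tiny illustration of the orthogonal case (a hypothetical example of my own, not from the post): the rotation $R$ of $\mathbb{R}^2$ by 90° preserves the standard symmetric form, and its dual representation matrix $(R^{-1})^T$ equals $R$ itself, so the standard basis and its dual basis carry identical matrices.

```python
# Illustration of the orthogonal case, with a hypothetical example of my own:
# the rotation of R^2 by 90 degrees preserves the standard symmetric form
# (the identity matrix), i.e. R^T I R = I, so it lies in an orthogonal group.
R = [[0.0, -1.0], [1.0, 0.0]]          # rotation by +90 degrees
Rinv = [[0.0, 1.0], [-1.0, 0.0]]       # its inverse: rotation by -90 degrees

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# R is orthogonal: it preserves the symmetric form given by the identity.
assert matmul(transpose(R), R) == [[1.0, 0.0], [0.0, 1.0]]

# The dual representation acts by g -> (g^{-1})^T; here that is R again,
# so the same matrix represents the action on V and on V*.
Rdual = transpose(Rinv)
assert Rdual == R
print("orthogonal case: dual representation matrix equals the original")
```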
That said, I feel like this blog post ought to contain a justification for my excited, italicized claim above. And I can give you that justification in a totally elementary-linear-algebra, just-eff-with-matrices kind of a way!
Let’s just take $V = \mathbb{R}^n$, viewed as a space of column vectors. Then an arbitrary basis for $V$ is given by the columns of a(n arbitrary) nonsingular square matrix $A$. Writing elements of $V^*$ also as column vectors (so that we can map between $V$ and $V^*$ using matrix multiplication), the dual basis is given by the columns of $(A^{-1})^T$. Indeed, the rows of $A^{-1}$ are evidently dual to the columns of $A$ since the matrix product $A^{-1}A$ is the identity; but I want to write the dual basis in terms of column vectors, so I am obligated to take a transpose.
All this said, the matrix of the transformation that sends our chosen basis to its dual basis is the product $(A^{-1})^T A^{-1}$, because $A^{-1}$ sends our chosen basis to the standard basis, and $(A^{-1})^T$ then sends the standard basis to the dual basis of our chosen one. And the matrix $(A^{-1})^T A^{-1}$ is evidently symmetric! 🦍
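Here is that computation carried out numerically for a concrete $2 \times 2$ example (the basis matrix $A$ is my own arbitrary choice):

```python
# The same computation in code, for a concrete (hypothetical) basis matrix A.

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(X):
    a, b, c, d = X[0][0], X[0][1], X[1][0], X[1][1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1.0, 2.0], [1.0, 3.0]]        # columns form an arbitrary basis of R^2
Ainv = inv2(A)
M = matmul(transpose(Ainv), Ainv)   # M = (A^{-1})^T A^{-1}

# M sends the chosen basis (columns of A) to the dual basis (columns of
# (A^{-1})^T): M A = (A^{-1})^T.
assert matmul(M, A) == transpose(Ainv)

# And M is symmetric, as claimed.
assert M == transpose(M)
print("(A^{-1})^T A^{-1} is symmetric and maps the basis to its dual")
```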


