sl(2) for Combinatorialists
There is a long and terrific story to tell about Lie theory, and I wish I could do it justice, but there’s far too much to say in a single post. What I have today is merely one application of one Lie algebraic idea, which ends up being a useful theoretical and practical tool in enumerative combinatorics.
The long and short of it is that the representation theory of a spooky object called $\mathfrak{sl}_2(\mathbb{C})$ can be hijacked by combinatorialists to prove that certain sequences of positive integers are symmetric and unimodal. Typically symmetry is obvious but unimodality is quite hard to establish, so this technology does make things somewhat neater. The other big tool I know about for proving things about unimodality or related properties like log-concavity is the theory of stable polynomials, which is also rather algebraic.
Lie algebras
Fix the field $\mathbb{C}$. Some of the following math can be done over other fields and even over general rings, but $\mathbb{C}$ is good enough for the combinatorialist and hence for this post. Formally, a Lie algebra is a vector space $\mathfrak{g}$ together with a special bilinear map $[\cdot,\cdot]\colon \mathfrak{g} \times \mathfrak{g} \to \mathfrak{g}$ called the Lie bracket. The Lie bracket must be antisymmetric, in that $[x, y] = -[y, x]$ for all $x, y \in \mathfrak{g}$, and it must also satisfy the Jacobi identity,
$$[x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0$$
for all $x, y, z \in \mathfrak{g}$.
Lie algebras arise in a natural way from fantastic objects called Lie groups, which are essentially groups with smooth manifold structure. There is an enormous amount of theory on this topic, of which I will be needing rather little, and most of what I will talk about today can be done without invoking any of the deep Lie theory underlying everything, but I thought I would record at least a taste of what lies beneath.
Any associative $\mathbb{C}$-algebra $A$ gives rise to a Lie algebra on $A$, by taking the Lie bracket to be the commutator $[x, y] = xy - yx$. In particular, the matrix algebra $\operatorname{End}(V)$ of endomorphisms of a finite-dimensional vector space $V$ gives a Lie algebra denoted $\mathfrak{gl}(V)$. When the particular vector space is irrelevant we often abbreviate $\mathfrak{gl}_n = \mathfrak{gl}(\mathbb{C}^n)$.
The Lie algebra $\mathfrak{sl}_n$ is a sub–Lie algebra of $\mathfrak{gl}_n$, consisting of those matrices with zero trace. The trace functional is not multiplicative, so $\mathfrak{sl}_n$ is not a subalgebra of the associative algebra of matrices, but it is true that $\operatorname{tr}(xy) = \operatorname{tr}(yx)$, so that $\operatorname{tr}([x, y]) = 0$ and then $\mathfrak{sl}_n$ is closed under the Lie bracket.
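To see the trace argument in action, here is a quick numerical sanity check (a NumPy sketch on arbitrary random matrices, nothing canonical about them): the commutator bracket is antisymmetric, satisfies the Jacobi identity, and always has trace zero, which is exactly why $\mathfrak{sl}_n$ is closed under it.

```python
import numpy as np

def bracket(x, y):
    """Commutator bracket [x, y] = xy - yx."""
    return x @ y - y @ x

rng = np.random.default_rng(0)
x, y, z = (rng.standard_normal((3, 3)) for _ in range(3))

# Antisymmetry: [x, y] = -[y, x]
assert np.allclose(bracket(x, y), -bracket(y, x))

# Jacobi identity: [x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0
jacobi = bracket(x, bracket(y, z)) + bracket(y, bracket(z, x)) + bracket(z, bracket(x, y))
assert np.allclose(jacobi, 0)

# tr(xy) = tr(yx), so every commutator is traceless and the traceless
# matrices are closed under the bracket.
assert np.isclose(np.trace(bracket(x, y)), 0)
```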
To better discuss $\mathfrak{sl}_2$, let
$$e = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad f = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \qquad h = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$$
Then $\{e, f, h\}$ is a basis for $\mathfrak{sl}_2$, and we can compute that $[h, e] = 2e$, $[h, f] = -2f$, and $[e, f] = h$. In fact, $e$ and $f$ together generate $\mathfrak{sl}_2$ as a Lie algebra, in that the only subset of $\mathfrak{sl}_2$ closed under finite linear combinations and brackets, and containing $e$ and $f$, is all of $\mathfrak{sl}_2$.
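These relations are easy to verify directly from the matrices above; a minimal sketch:

```python
import numpy as np

e = np.array([[0, 1], [0, 0]])
f = np.array([[0, 0], [1, 0]])
h = np.array([[1, 0], [0, -1]])

def bracket(x, y):
    return x @ y - y @ x

assert (bracket(h, e) == 2 * e).all()   # [h, e] = 2e
assert (bracket(h, f) == -2 * f).all()  # [h, f] = -2f
assert (bracket(e, f) == h).all()       # [e, f] = h
```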
Representation theory
A representation of a Lie algebra $\mathfrak{g}$ is a linear map $\varphi\colon \mathfrak{g} \to \mathfrak{gl}(V)$ for some vector space $V$, such that $\varphi([x, y]) = \varphi(x)\varphi(y) - \varphi(y)\varphi(x)$. One famous representation of any Lie algebra is the adjoint representation $\operatorname{ad}\colon \mathfrak{g} \to \mathfrak{gl}(\mathfrak{g})$, where $\operatorname{ad}(x)(y) = [x, y]$. We’re going to investigate the representation theory of $\mathfrak{sl}_2$1 and an arguably combinatorially useful property will fall out.
Let $\varphi\colon \mathfrak{g} \to \mathfrak{gl}(V)$ be a representation. A subspace $W \subseteq V$ is $\mathfrak{g}$-invariant if it is $\varphi(x)$-invariant for all $x \in \mathfrak{g}$, that is, if $\varphi(x)(W) \subseteq W$. $V$ is irreducible if the only nontrivial invariant subspace is $V$ itself.
One would hope, as in the representation theory of finite groups, that every complex finite-dimensional representation of a Lie algebra is a direct sum of irreducibles. This doesn’t work out unless $\mathfrak{g}$ is semisimple. The definition is a bit involved and doesn’t motivate itself, but it’s not wrong to say that $\mathfrak{g}$ is semisimple iff it is a direct sum of simple Lie algebras, which are those where the only nontrivial subspace $\mathfrak{h} \subseteq \mathfrak{g}$ such that $[\mathfrak{g}, \mathfrak{h}] \subseteq \mathfrak{h}$ is $\mathfrak{g}$ itself. Point is, $\mathfrak{sl}_2$ is semisimple.
One common abuse of notation is to make $\varphi$ implicit by declaring that $V$ is a representation of $\mathfrak{g}$ and that $x \cdot v = \varphi(x)(v)$ for $x \in \mathfrak{g}$ and $v \in V$. Never having been one to rock the boat, I’ll do the same when discussing representations of $\mathfrak{sl}_2$.
Because $e$ and $f$ generate $\mathfrak{sl}_2$, representations of $\mathfrak{sl}_2$ are determined by the images $E = \varphi(e)$ and $F = \varphi(f)$. The coherence conditions they have to satisfy are $[H, E] = 2E$ and $[H, F] = -2F$, where of course $H = [E, F]$. By the bilinearity and antisymmetry of the bracket, any pair of maps $E, F$ satisfying these (two) equations forms a representation of $\mathfrak{sl}_2$.
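This gives a completely mechanical way to check a candidate representation: take two matrices $E$ and $F$, set $H = [E, F]$, and test the two bracket conditions. A small sketch (the helper name `is_sl2_rep` is mine, not standard):

```python
import numpy as np

def bracket(x, y):
    return x @ y - y @ x

def is_sl2_rep(E, F, tol=1e-9):
    """Check whether matrices E, F define a representation of sl_2,
    i.e. with H = [E, F] we have [H, E] = 2E and [H, F] = -2F."""
    H = bracket(E, F)
    return (np.allclose(bracket(H, E), 2 * E, atol=tol)
            and np.allclose(bracket(H, F), -2 * F, atol=tol))

# The defining 2x2 representation passes:
e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])
assert is_sl2_rep(e, f)
```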
If we take for granted that every representation of $\mathfrak{sl}_2$ decomposes as a direct sum of irreducible representations, often abbreviated irreps, then it suffices to understand the irreps. Once we have that knowledge I can explain what it’s good for and how a combinatorialist might use it. (At this point you can skip to the next section if you believe me and don’t care why.)
So let $V$ be a finite-dimensional irrep of $\mathfrak{sl}_2$. By semisimplicity, we can use a principle called the preservation of Jordan decomposition2. This tells us that $h$ acts diagonalizably on $V$, since it itself is diagonal in $\mathfrak{sl}_2$, and likewise $e$ and $f$ act nilpotently. Because $h$ is diagonalizable, let’s decompose $V = \bigoplus_\lambda V_\lambda$ into $h$-eigenspaces $V_\lambda = \{ v \in V : h \cdot v = \lambda v \}$ for $\lambda \in \mathbb{C}$. The eigenvalues $\lambda$ that have nontrivial $V_\lambda$ are called weights and the $V_\lambda$ are called weight spaces.
If $v \in V_\lambda$, then
$$h \cdot (e \cdot v) = e \cdot (h \cdot v) + [h, e] \cdot v = \lambda\,(e \cdot v) + 2\,(e \cdot v) = (\lambda + 2)(e \cdot v),$$
so $e \cdot V_\lambda \subseteq V_{\lambda + 2}$, and likewise $f \cdot V_\lambda \subseteq V_{\lambda - 2}$. For this reason $e$ and $f$ are often called raising and lowering operators, respectively.
If some $\lambda$ has a nontrivial $V_\lambda$, then $\bigoplus_{k \in \mathbb{Z}} V_{\lambda + 2k}$ is an invariant subrepresentation of $V$, and hence equals $V$ by irreducibility. So by finite dimensionality, these eigenvalues show up in an unbroken line as in $\lambda, \lambda + 2, \lambda + 4, \ldots, \lambda + 2m$.
Let $v_0 \in V_\lambda$ be a vector of lowest weight $\lambda$. Then consider the cyclic subspace $\operatorname{span}\{v_0, e \cdot v_0, e^2 \cdot v_0, \ldots\}$. Obviously, $f \cdot v_0 = 0$ by lowest weight. By induction we can show $f \cdot (e^k \cdot v_0) = -k(\lambda + k - 1)\, e^{k-1} \cdot v_0$, for
$$f \cdot (e^k \cdot v_0) = e \cdot f \cdot (e^{k-1} \cdot v_0) - h \cdot (e^{k-1} \cdot v_0) = \bigl( -(k-1)(\lambda + k - 2) - (\lambda + 2k - 2) \bigr)\, e^{k-1} \cdot v_0 = -k(\lambda + k - 1)\, e^{k-1} \cdot v_0.$$
It follows that this cyclic subspace is a subrepresentation, and by irreducibility, $V$ is equal to this subrepresentation. But now, because $V$ is finite-dimensional, we can do some numerological magic. $e^{n+1} \cdot v_0 = 0$ for some least $n$, and then $0 = f \cdot (e^{n+1} \cdot v_0) = -(n+1)(\lambda + n)\, e^n \cdot v_0$.
Well, $e^n \cdot v_0$ is a nonzero vector, so the coefficient $-(n+1)(\lambda + n)$ must be zero. If $V$ is nontrivial, then $n \geq 1$ and hence $\lambda = -n$ is a strictly negative integer!
Finally, recall that $V$ is spanned by $v_0, e \cdot v_0, \ldots, e^n \cdot v_0$, so that the weight spaces of $V$ have dimensions $1, 1, \ldots, 1$, starting at $V_\lambda = V_{-n}$. Also, since $e^n \cdot v_0$ gives the last nonzero vector, the highest nontrivial weight space is $V_n$.
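The derivation pins the $(n+1)$-dimensional irrep down completely: on the basis $v_0, e \cdot v_0, \ldots, e^n \cdot v_0$, the operator $e$ shifts basis vectors up, $h$ acts diagonally with eigenvalues $-n, -n+2, \ldots, n$, and substituting $\lambda = -n$ into the formula above gives $f \cdot (e^k \cdot v_0) = k(n - k + 1)\, e^{k-1} \cdot v_0$. Here is a sketch that builds these matrices and double-checks the bracket relations:

```python
import numpy as np

def bracket(x, y):
    return x @ y - y @ x

def irrep(n):
    """The (n+1)-dimensional irrep of sl_2 on basis v_0, e.v_0, ..., e^n.v_0."""
    E = np.zeros((n + 1, n + 1))
    F = np.zeros((n + 1, n + 1))
    H = np.diag([2 * k - n for k in range(n + 1)]).astype(float)
    for k in range(n):
        E[k + 1, k] = 1.0                # e . (e^k v_0) = e^{k+1} v_0
        F[k, k + 1] = (k + 1) * (n - k)  # f . (e^{k+1} v_0) = (k+1)(n-k) e^k v_0
    return E, F, H

for n in range(6):
    E, F, H = irrep(n)
    assert np.allclose(bracket(E, F), H)
    assert np.allclose(bracket(H, E), 2 * E)
    assert np.allclose(bracket(H, F), -2 * F)
    # Weights form the unbroken string -n, -n+2, ..., n.
    assert np.array_equal(np.diag(H), [2 * k - n for k in range(n + 1)])
```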
Now we know all that we need to know about irreps of $\mathfrak{sl}_2$.
Symmetry and unimodality
Taking stock of what just happened, we see that there is one irrep of any particular dimension $n + 1$, and its weight spaces have dimensions $1, 1, \ldots, 1$, symmetrically arranged around the 0-eigenspace. It follows that any finite-dimensional representation is isomorphic to a direct sum of these. That is to say, if $V = \bigoplus_k V_k$ is a representation of $\mathfrak{sl}_2$, graded by its weight spaces, and $d_k = \dim V_k$ is the dimension of the $k$-th weight space, then the following is true:
$$d_k = d_{-k} \qquad \text{and} \qquad d_k \geq d_{k+2} \text{ for all } k \geq 0.$$
These two sequences, $(\ldots, d_{-4}, d_{-2}, d_0, d_2, d_4, \ldots)$ and $(\ldots, d_{-3}, d_{-1}, d_1, d_3, \ldots)$, have two properties that are referred to as symmetry, meaning that they can be reflected about their center and remain equal, and unimodality, meaning that they rise monotonically to some peak and subsequently fall monotonically.
Because of this, if you would like to prove that some sequence of positive integers is symmetric and unimodal, it would suffice to find a representation of $\mathfrak{sl}_2$ with suitable weight spaces. The encoding for a sequence $(a_0, a_1, \ldots, a_n)$ is usually to have $a_k$ be the dimension of the $(2k - n)$-th weight space.3 To show you some interesting examples, I’ll use a couple of bits of technology, but in principle I could give the coefficients explicitly and that would suffice.
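For concreteness, here is the target property as a tiny checker (a sketch; the function names are my own):

```python
def is_symmetric(seq):
    return list(seq) == list(reversed(seq))

def is_unimodal(seq):
    seq = list(seq)
    peak = seq.index(max(seq))  # rise to the peak, then fall
    return (all(seq[i] <= seq[i + 1] for i in range(peak))
            and all(seq[i] >= seq[i + 1] for i in range(peak, len(seq) - 1)))

assert is_symmetric([1, 3, 3, 1]) and is_unimodal([1, 3, 3, 1])
assert not is_unimodal([1, 3, 1, 3, 1])
```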
Given any finite set $X$ with $|X| = n$, there is a natural representation of $\mathfrak{sl}_2$, called the Boolean representation, on the free vector space whose basis is indexed by subsets of $X$. Denote the basis vector of $S \subseteq X$ by $x_S$. Then the representation of $\mathfrak{sl}_2$ is given by
$$e \cdot x_S = \sum_{i \in X \setminus S} x_{S \cup \{i\}}, \qquad f \cdot x_S = \sum_{i \in S} x_{S \setminus \{i\}}, \qquad h \cdot x_S = (2|S| - n)\, x_S.$$
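Here is a sketch of this representation as explicit matrices, with subsets of an $n$-element set encoded as bitmasks; it verifies the bracket relations and that the weight-space dimensions are the binomial coefficients $\binom{n}{k}$:

```python
import numpy as np
from math import comb

def boolean_rep(n):
    """Raising/lowering operators on subsets of [n], encoded as bitmasks."""
    N = 1 << n
    E = np.zeros((N, N))
    F = np.zeros((N, N))
    for S in range(N):
        for i in range(n):
            if not S & (1 << i):
                E[S | (1 << i), S] += 1   # e adds an element
            else:
                F[S & ~(1 << i), S] += 1  # f removes an element
    return E, F

n = 5
E, F = boolean_rep(n)
H = E @ F - F @ E

# H is diagonal with entries 2|S| - n, so the weight spaces are spanned by
# the size-k subsets and their dimensions are the binomial coefficients.
sizes = np.array([bin(S).count("1") for S in range(1 << n)])
assert np.allclose(H, np.diag(2 * sizes - n))
assert np.allclose(H @ E - E @ H, 2 * E)
assert np.allclose(H @ F - F @ H, -2 * F)
for k in range(n + 1):
    assert (sizes == k).sum() == comb(n, k)
```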
Let $V$ be a representation of $\mathfrak{sl}_2$ and suppose it has an action by some group $G$ as well. If the $\mathfrak{sl}_2$-rep is equivariant with respect to the $G$-action, in that $g \cdot (x \cdot v) = x \cdot (g \cdot v)$ for all $g \in G$ and $x \in \mathfrak{sl}_2$, then there exists a subrepresentation on the $G$-invariant vectors, i.e. on the vector space
$$V^G = \{ v \in V : g \cdot v = v \text{ for all } g \in G \}.$$
To wit, if $O$ is some orbit of $G$ on the basis, then $\sum_{b \in O} b \in V^G$, so in some sense $V^G$ is the space of orbits of $G$.
Now, we can see a couple of examples.
First, let $g_{n,k}$ be the number of isomorphism classes of $n$-vertex, $k$-edge graphs. Clearly the sequence $\bigl(g_{n,0}, g_{n,1}, \ldots, g_{n,\binom{n}{2}}\bigr)$ is symmetric, via complementation, but unimodality is far, far harder to show combinatorially. Instead, we’ll use $\mathfrak{sl}_2$!
Let $X$ be the edge set of the complete graph $K_n$. The symmetric group $S_n$ has a natural action on $X$, by permuting the vertices of $K_n$ and bringing the edges along. This induces an action on the subsets of $X$, which can be interpreted as the set of all graphs on the vertex set $[n]$. Notably, two graphs are in the same orbit iff they are isomorphic. It follows that the dimensions of the weight spaces of the $S_n$-invariant subspace are precisely the $g_{n,k}$’s above, so by the invariant subrepresentation of the Boolean representation of $\mathfrak{sl}_2$ on $X$, the sequence is symmetric and unimodal.
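For small $n$ the sequence can be computed directly by brute force, which is a nice sanity check on the claim (the function name is mine, and the canonical-form search is only feasible for small $n$):

```python
from itertools import combinations, permutations

def graph_orbit_counts(n):
    """Number of isomorphism classes of n-vertex graphs with k edges,
    for k = 0, ..., C(n, 2), by brute-force orbit counting."""
    edges = list(combinations(range(n), 2))
    perms = list(permutations(range(n)))
    counts = [0] * (len(edges) + 1)
    seen = set()
    for m in range(1 << len(edges)):
        graph = frozenset(e for i, e in enumerate(edges) if m & (1 << i))
        # Canonical form: lexicographically least relabelling of the edge set.
        canon = min(
            tuple(sorted(tuple(sorted((p[u], p[v]))) for u, v in graph))
            for p in perms
        )
        if canon not in seen:
            seen.add(canon)
            counts[len(graph)] += 1
    return counts

g5 = graph_orbit_counts(5)
print(g5)              # [1, 1, 2, 4, 6, 6, 6, 4, 2, 1, 1]
assert g5 == g5[::-1]  # symmetric
# With symmetry, rising to the middle is enough for unimodality.
assert all(g5[i] <= g5[i + 1] for i in range(len(g5) // 2))
```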
As a second example, let $p_{a,b}(k)$ be the number of integer partitions of $k$ with at most $a$ parts, each of which has size at most $b$. It’s a classical result of $q$-combinatorics that this is the coefficient of $q^k$ in the Gaussian polynomial
$$\binom{a+b}{a}_q = \frac{(1 - q^{a+b})(1 - q^{a+b-1}) \cdots (1 - q^{b+1})}{(1 - q^a)(1 - q^{a-1}) \cdots (1 - q)}.$$
Let $V$ be the Boolean representation of the grid $X = [a] \times [b]$, with one row of $b$ cells for each of the $a$ potential parts, and let $G$ be the wreath product of two symmetric groups. If you don’t know what this group is, then know that its action on $X$ is exactly to permute the cells within each row, and then also to permute the rows afterwards. (If you’re reading the Wikipedia article, then this is an action induced by the imprimitive action.)
Given a proper definition, it is not hard to show each orbit of $G$ on the subsets of $X$ contains the Ferrers diagram of exactly one partition, and hence $G$ acts on $V$ such that $V^G$ is a vector space with basis indexed by these bounded partitions. The partitions of $k$ all fall into the weight space with eigenvalue $2k - ab$, so the sequence $(p_{a,b}(0), p_{a,b}(1), \ldots, p_{a,b}(ab))$ is symmetric and unimodal. This is but one of many proofs of the celebrated unimodality of the coefficients of $\binom{a+b}{a}_q$.
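To see these coefficients concretely, here is a sketch that computes them by counting partitions in an $a \times b$ box (the function names are mine) and checks symmetry and unimodality:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions_in_box(k, a, b):
    """Number of partitions of k with at most a parts, each of size at most b."""
    if k == 0:
        return 1
    if k < 0 or a == 0 or b == 0:
        return 0
    # Either no part equals b, or remove one part equal to b.
    return partitions_in_box(k, a, b - 1) + partitions_in_box(k - b, a - 1, b)

def gaussian_coefficients(a, b):
    """Coefficients of the Gaussian polynomial binom(a+b, a)_q."""
    return [partitions_in_box(k, a, b) for k in range(a * b + 1)]

coeffs = gaussian_coefficients(4, 5)
assert coeffs == coeffs[::-1]                                            # symmetric
assert all(coeffs[i] <= coeffs[i + 1] for i in range(len(coeffs) // 2))  # unimodal
assert sum(coeffs) == 126                                                # = binom(9, 4)
```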
It is not very hard to give explicit coefficients for this particular representation, actually. Writing partitions $\lambda$ as their multiplicity vectors $(m_1, m_2, \ldots, m_b)$, where $m_j$ counts the parts of $\lambda$ equal to $j$, it turns out that $e$ and $f$ act by explicit weighted sums over the multiplicity vectors obtained by growing or shrinking a single part by one. At first glance it may appear that such a formula is not well-defined, but only valid partitions will show up in summands with nonzero coefficients, so all is well.
Let’s talk about posets now
Because posets and lattices are my favourite thing on this blog, I would be remiss if I were not to mention a very obvious connection to posets.
A poset $P$ is graded if it can be partitioned into disjoint ranks $P_0, P_1, \ldots, P_n$, such that the only covering relations are between adjacent ranks $P_i$ and $P_{i+1}$. In such a situation, you could prove that $P$ is rank-symmetric and rank-unimodal by finding a representation of $\mathfrak{sl}_2$ on the free vector space $\mathbb{C}P$ whose weight spaces are the free subspaces $\mathbb{C}P_i$.
If you additionally require that the representation of $\mathfrak{sl}_2$ respect the poset structure (by saying that $e$ and $f$ only raise or lower along covering relations, i.e. whenever $q$ appears with nonzero coefficient in $e \cdot p$ for $p, q \in P$, then $q$ covers $p$, and dually for $f$), then we say that $P$ carries that representation of $\mathfrak{sl}_2$. In this case, you prove not only that $P$ is rank-symmetric and unimodal, but that it has a third property: that any union of $k$ antichains is at most as large as the union of the $k$ largest ranks.
This is called the strong Sperner property, and those of you who have heard of Sperner theory are probably already groaning and closing your browser window, so I promise I won’t say much more about it. Essentially, this property is saying that there are no clever collections of large antichains, and if you want a bunch of them you might as well take from the ranks. In some sense it guarantees that your poset is not very lopsided.
A poset has the Peck property if it is rank-symmetric, rank-unimodal, and strongly Sperner. By a theorem of Proctor, a poset is Peck iff it carries a representation of $\mathfrak{sl}_2$.
The representations given as examples above are actually carried by posets. The first is carried by the poset of $n$-vertex isomorphism classes of graphs, ordered by the subgraph relation, and the second is carried by the lattice of bounded partitions, equivalently viewed as the lattice of order ideals in a product of two chains.
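As the simplest illustration of carrying, the Boolean representation from earlier is carried by the Boolean lattice of all subsets of $X$; this is not one of the two examples above, just the easiest case in which to check that the raising and lowering operators only move along covering relations. A sketch of that check, reusing the bitmask encoding:

```python
import numpy as np

n = 4
N = 1 << n
E = np.zeros((N, N))
F = np.zeros((N, N))
for S in range(N):
    for i in range(n):
        if not S & (1 << i):
            E[S | (1 << i), S] = 1   # e adds element i
            F[S, S | (1 << i)] = 1   # f removes element i

def covers(q, p):
    """q covers p in the Boolean lattice: q is p plus exactly one element."""
    extra = q & ~p
    return q & p == p and extra != 0 and extra & (extra - 1) == 0

# e only raises, and f only lowers, along covering relations.
assert all(covers(q, p) for q in range(N) for p in range(N) if E[q, p] != 0)
assert all(covers(p, q) for q in range(N) for p in range(N) if F[q, p] != 0)
```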
1. By “investigate” I mean I’m just going to say some things which aren’t technically wrong, and you can look them up if you don’t believe me; and by “we” I mean I’m reading some Lie rep theory notes curated by a friend of mine, Michael Baker, and pilfering just enough of the relevant presentation to make me feel bad if I didn’t say anything. ↩
2. Of course, this depends on semisimplicity. Properly, this is called the preservation of Jordan–Chevalley decomposition, and a precise statement and proof can probably be found in something like Fulton and Harris. ↩
3. It is more convenient to index from 0 when dealing with the Boolean representation, for the same reason that both $\varnothing$ and $X$ are subsets of $X$. ↩