Lie algebras 15: Standard bases

Recall that \mathfrak{sl}(2) = \langle x,y,h \mid [x,y]=h,[h,x]=2x,[h,y]=-2y \rangle, and any \mathfrak{sl}(2)-module has a ‘standard’ basis (corresponding to the choice of generators x,y), as described here. Now if \Phi is an irreducible root system, and L = L_0 \oplus \bigoplus_{\alpha \in \Phi} L_\alpha is the corresponding simple Lie algebra, then various subspaces of L are \mathfrak{sl}(2)-modules in many ways, but the message here is that there exists a basis that is standard (up to multiplication by \pm 1) for all these module structures simultaneously. More precisely,

Theorem: There exist vectors e_\alpha \in L_\alpha, \alpha \in \Phi such that

  • [e_\alpha, e_{-\alpha}] = h_\alpha, where h_\alpha \in L_0 is the coroot corresponding to \alpha, i.e. the element of [L_\alpha, L_{-\alpha}] normalized by \langle \alpha, h_\alpha \rangle = 2.
  • For any \alpha and any \beta \neq \pm \alpha, the set \{e_{\beta + k \alpha} \mid k \in \mathbb{Z},\ \beta + k\alpha \in \Phi\} is a standard basis, up to \pm, of the module over \mathrm{span} \{ e_\alpha, e_{-\alpha}, h_\alpha \} \simeq \mathfrak{sl}(2).
  • For any \alpha let \sigma_\alpha := \exp \{\mathrm{ad} \, e_\alpha\} \exp \{-\mathrm{ad} \, e_{-\alpha}\} \exp \{\mathrm{ad} \, e_\alpha\} be the corresponding reflection. Then \sigma_\alpha e_\beta = \pm e_{\sigma_\alpha \beta}.

To clarify the ideas underlying the proof, I should state a few lemmas.

Lemma 1: Let (e_k) be the standard basis for an \mathfrak{sl}(2)-module. Then the standard reflection \sigma := \exp x \exp(-y) \exp x reverses it, up to signs:

\displaystyle \sigma e_k = \pm e_{-k}

I don’t see a better way than to do this ‘by hand’, using the known formulas for the action of x and y. The calculation boils down to a certain binomial sum, which can be shown to be equivalent to the Vandermonde identity. Anyway, it can be found in Graham-Knuth-Patashnik…
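
As a sanity check, here is a small numerical experiment. The normalization of the standard basis here is an assumption of mine (x v_i = (m-i+1) v_{i-1}, y v_i = (i+1) v_{i+1} on the (m+1)-dimensional module, indexing by position rather than weight), which may differ from the one used earlier in this series, but the signed reversal is clearly visible:

    # Check: sigma = exp(x) exp(-y) exp(x) reverses the standard basis up to
    # sign.  Assumed normalization on the (m+1)-dimensional irreducible
    # module (v_i has weight m - 2i):
    #   x v_i = (m - i + 1) v_{i-1},   y v_i = (i + 1) v_{i+1}.
    import numpy as np
    from scipy.linalg import expm

    def sigma_matrix(m):
        n = m + 1
        x, y = np.zeros((n, n)), np.zeros((n, n))
        for i in range(1, n):
            x[i - 1, i] = m - i + 1   # x v_i = (m - i + 1) v_{i-1}
            y[i, i - 1] = i           # y v_{i-1} = i v_i
        h = x @ y - y @ x
        assert np.allclose(np.diag(h), [m - 2 * i for i in range(n)])
        return expm(x) @ expm(-y) @ expm(x)

    for m in range(1, 7):
        s = np.round(sigma_matrix(m)).astype(int)
        for i in range(m + 1):        # column i must be (+-1) at row m - i
            assert abs(s[m - i, i]) == 1 and np.count_nonzero(s[:, i]) == 1
    print(np.round(sigma_matrix(3)).astype(int))  # a signed anti-diagonal

Incidentally, the signs are unavoidable: \sigma^2 acts on the module of highest weight m as (-1)^m, so for odd m no rescaling of the basis makes \sigma e_k = e_{-k} on the nose.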

Let \Delta be a basis for \Phi. Let’s define a path between 0 and a positive root \alpha to be any sequence (\alpha_k, k = 0,\dots,n) such that \alpha_0 = 0, \alpha_n = \alpha, \alpha_k \in \Phi for k > 0, and \alpha_k - \alpha_{k-1} \in \Delta. Let’s define a homotopy between two paths (\alpha_{0,k}) and (\alpha_{m,k}) to be a sequence of paths (\alpha_{i,k}), i = 0,\dots,m, such that for every i the paths (\alpha_{i,k}) and (\alpha_{i+1,k}) differ in exactly one place j, in the only way possible: \alpha_{i+1,j} = \alpha_{i,j-1} + \alpha_{i,j+1} - \alpha_{i,j}.

Lemma 2: Any two paths are homotopic.

Proof: Proceed by induction on the length of the paths. Suppose that (\alpha_k) and (\beta_k) are two paths joining 0 and \alpha_n = \beta_n. If \alpha_{n-1} = \beta_{n-1}, then we are done by the induction hypothesis. If not, observe that \gamma := \alpha_{n-1} + \beta_{n-1} - \alpha_n is again a root; this can be shown directly by examining root systems of rank at most 3. (For n = 2 it may happen that \gamma = 0 instead, but then the two paths already differ by a single elementary homotopy; a height count shows this is the only degenerate case.) By the induction hypothesis, \alpha_0,\dots,\alpha_{n-1} is homotopic to \alpha_0,\dots,\gamma,\alpha_{n-1}, and \beta_0,\dots,\beta_{n-1} is homotopic to \beta_0,\dots,\gamma,\beta_{n-1}. Again by induction, the initial parts 0,\dots,\gamma are homotopic, and finally, (\gamma,\alpha_{n-1},\alpha_n) \mapsto (\gamma,\beta_{n-1},\alpha_n) is an elementary homotopy. QED.
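
The proof reduces everything to rank at most 3, but even a rank-2 brute force is reassuring. A minimal sketch (root coordinates are taken w.r.t. \Delta, and the positive roots of \mathsf{A}_2 and \mathsf{G}_2 are hard-coded):

    # Enumerate all paths from 0 to each positive root and check that
    # elementary homotopies connect any two of them (A2 and G2).
    def closure(pos):
        return pos | {(-a, -b) for (a, b) in pos}

    A2 = closure({(1, 0), (0, 1), (1, 1)})
    G2 = closure({(1, 0), (0, 1), (1, 1), (2, 1), (3, 1), (3, 2)})
    DELTA = [(1, 0), (0, 1)]

    def paths_to(root, roots):
        # each step adds a simple root, so coordinates only grow
        out, stack = [], [((0, 0),)]
        while stack:
            p = stack.pop()
            if p[-1] == root:
                out.append(p)
                continue
            for d in DELTA:
                q = (p[-1][0] + d[0], p[-1][1] + d[1])
                if q in roots and q[0] <= root[0] and q[1] <= root[1]:
                    stack.append(p + (q,))
        return out

    def neighbours(p, roots):
        # one elementary homotopy: p[j] -> p[j-1] + p[j+1] - p[j]
        for j in range(1, len(p) - 1):
            q = (p[j - 1][0] + p[j + 1][0] - p[j][0],
                 p[j - 1][1] + p[j + 1][1] - p[j][1])
            if q != p[j] and q in roots:
                yield p[:j] + (q,) + p[j + 1:]

    for roots in (A2, G2):
        for root in [r for r in roots if r[0] >= 0 and r[1] >= 0]:
            ps = paths_to(root, roots)
            seen, todo = {ps[0]}, [ps[0]]
            while todo:               # flood fill along homotopies
                for q in neighbours(todo.pop(), roots):
                    if q not in seen:
                        seen.add(q)
                        todo.append(q)
            assert seen == set(ps), root
    print("any two paths are homotopic in A2 and G2")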

Proof of Theorem: Let \Delta be a basis, let e_\beta, \beta \in \Delta, be arbitrary nonzero vectors, and define inductively e_{\alpha + \beta} := c_{\alpha \beta} [e_\alpha, e_\beta] for \beta \in \Delta, where the constants c_{\alpha \beta} are chosen to satisfy the ‘standard basis’ requirement for both the e_\alpha- and the e_\beta-module structures. To check that this is possible, it is sufficient to consider two-dimensional root systems. We should also check that this definition of e_\alpha does not depend on the way \alpha is written as a sum of simple roots, i.e. on the choice of a path from 0 to \alpha. This can be done via homotopy: to verify each elementary step of a homotopy, it is sufficient to consider root systems of rank at most 3. Finally, choose e_{-\alpha} so that [e_\alpha, e_{-\alpha}] = h_\alpha. QED.

Semisimple modules

Definition 1: A left module M over a ring \mathbb{A} is called semisimple if one of the following equivalent conditions is satisfied:

  • M is a sum of simple submodules.
  • M is a direct sum (strictly speaking, a coproduct) of simple modules.
  • Every submodule of M has a complement.

Proof of equivalence: 1 \Rightarrow 2: select a maximal subset of indices whose sum is direct;
2 \Rightarrow 3: the complement may be chosen to be a sum of coordinate submodules;
3 \Rightarrow 2: a maximal submodule not containing a distinguished element has a simple complement; proceed by transfinite induction. Finally, 2 \Rightarrow 1 is immediate. QED.

The first powerful result on semisimple modules is Schur’s lemma. It may be stated in different ways, but I think the main point here is that it describes submodules of semisimple modules and morphisms between them. The solution to these two related problems is suggested by the following decomposition:

\displaystyle M \simeq \bigoplus_P P \otimes_{\mathrm{End}_{\mathbb A} P} M_P

Here P runs over the pairwise nonisomorphic simple modules, each viewed also as a right module over its endomorphism ring, and the M_P are left vector spaces over those rings. Of course, we could expand these tensor products into direct sums, but I claim that it is this tensor product structure that actually matters. We can obtain these M_P in an invariant, functorial way: M_P := \mathrm{Hom}_{\mathbb A} (P, M), with \mathrm{End}_{\mathbb A} P acting on P from the right.
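
Here is a small numerical sketch of this decomposition, for an example of my choosing: \mathbb{A} = \mathbb{C}[S_3] acting on M = \mathbb{C}^3 \oplus \mathbb{C}^3, two copies of the permutation representation. Each M_P = \mathrm{Hom}_{\mathbb A}(P, M) is computed as the solution space of the intertwining equations for the two generators of S_3:

    # M_P = Hom_A(P, M) as the null space of the intertwining equations
    # rho_M(g) X = X rho_P(g), g running over generators of S3.
    import numpy as np

    def perm(p):                       # permutation matrix on C^3
        m = np.zeros((3, 3))
        for i, j in enumerate(p):
            m[j, i] = 1
        return m

    gens = [perm([1, 0, 2]), perm([1, 2, 0])]          # (12) and (123)
    M = [np.kron(np.eye(2), g) for g in gens]          # C^3 (+) C^3

    irreps = {                                         # the simples of S3
        "trivial": [np.eye(1), np.eye(1)],
        "sign": [-np.eye(1), np.eye(1)],
        "standard": [np.array([[-1., 1.], [0., 1.]]),  # basis e1-e2, e2-e3
                     np.array([[0., -1.], [1., -1.]])],
    }

    def hom_dim(P):
        d = P[0].shape[0]              # X is a 6 x d matrix, vectorized
        eqs = np.vstack([np.kron(np.eye(d), gM) - np.kron(gP.T, np.eye(6))
                         for gM, gP in zip(M, P)])
        sv = np.linalg.svd(eqs, compute_uv=False)
        return eqs.shape[1] - int((sv > 1e-9).sum())

    dims = {name: hom_dim(P) for name, P in irreps.items()}
    print(dims)                        # {'trivial': 2, 'sign': 0, 'standard': 2}
    assert sum(P[0].shape[0] * dims[n] for n, P in irreps.items()) == 6

The dimension count 6 = 1 \cdot 2 + 1 \cdot 0 + 2 \cdot 2 is exactly the formula above, with \dim M_P read off as the multiplicity.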

Theorem 1 (“Schur’s lemma”):

  • \mathrm{End}_{\mathbb A} P are division algebras.
  • Submodules N \subset M correspond to subspaces N_P \subset M_P.
  • Morphisms f: N \to M are (projective limits of) sums of 1_P \otimes f_P, f_P \in \mathrm{Hom}_{\mathrm{End}_{\mathbb A} P} (N_P, M_P).

So M_P must be thought of as modules of “P-points” of M, and both morphisms and submodules are defined “pointwise”. This theorem follows easily from the modern textbook version of Schur’s lemma, which is itself trivial, but I just wanted to emphasize the slightly different language.

Schur’s lemma describes endomorphisms, that is, the centralizer of the left action of \mathbb{A}. The next useful result is the density theorem, which describes the bicentralizer.

Theorem 2: The bicentralizer of the left action of \mathbb{A} is the closure of the image of \mathbb{A} in the “strong topology”, or equivalently: for any x_1,\dots,x_n \in M and any endomorphism f \in \mathrm{End}_{\mathrm{End}_\mathbb{A} M} M there exists an element \alpha \in \mathbb{A} such that \alpha x_i = f x_i for all i.

Proof: Start with n = 1. Let x \in M; we have to show that f x \in \mathbb{A} x. A nice observation is that every \mathbb{A}-submodule, and \mathbb{A} x in particular, is the kernel of an element of \mathrm{End}_\mathbb{A} M (namely, of the projection onto a complement). Consequently, it must be respected by f.
The general case follows by applying the case n = 1 to the diagonal vector (x_1,\dots,x_n) \in M^{\oplus n}: endomorphisms of M^{\oplus n} are matrices over \mathrm{End}_\mathbb{A} M, so the diagonal extension of f still lies in the bicentralizer. QED.

A nice consequence is that if M is finitely generated over \mathrm{End}_\mathbb{A} M, then the image of \mathbb{A} happens to be closed already, so \mathbb{A} has a nice description in terms of matrix algebras over division rings.
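
A numerical illustration of this finite situation, with an algebra of my choosing: \mathbb{A} = M_2(\mathbb{C}) acting diagonally on M = \mathbb{C}^2 \oplus \mathbb{C}^2. The centralizer and bicentralizer are computed as null spaces of commutation equations, and the bicentralizer comes out equal to \mathbb{A}:

    # Centralizer and bicentralizer of A = M_2(C) acting diagonally on C^4.
    import numpy as np

    def commutant(mats, n):
        """Basis of {X : XT = TX for all T in mats} (column-major vec)."""
        eqs = np.vstack([np.kron(t.T, np.eye(n)) - np.kron(np.eye(n), t)
                         for t in mats])
        u, sv, vt = np.linalg.svd(eqs)
        rank = int((sv > 1e-9).sum())
        return [v.reshape(n, n, order="F") for v in vt[rank:]]

    elem = []                                  # the four elementary 2x2 matrices
    for i in range(2):
        for j in range(2):
            e = np.zeros((2, 2))
            e[i, j] = 1
            elem.append(e)
    A = [np.kron(np.eye(2), e) for e in elem]  # block-diagonal embedding

    A1 = commutant(A, 4)                       # A'  (should be 4-dimensional)
    A2 = commutant(A1, 4)                      # A'' (should be A again)
    print(len(A1), len(A2))                    # 4 4

    span = np.vstack([a.reshape(-1) for a in A]).T
    for b in A2:                               # each A''-basis vector lies in A
        x = np.linalg.lstsq(span, b.reshape(-1), rcond=None)[0]
        assert np.linalg.norm(span @ x - b.reshape(-1)) < 1e-9

Here M is generated over \mathrm{End}_\mathbb{A} M by two vectors, so the image of \mathbb{A} must be closed, i.e. \mathbb{A}^{\prime\prime} = \mathbb{A}; and indeed both come out 4-dimensional, with \mathbb{A}^{\prime\prime} inside the span of \mathbb{A}.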

Definition 2: A ring is called semisimple if it is a semisimple left module over itself, and simple if it is semisimple and it has no nontrivial two-sided ideals.

Lemma: Any module over a semisimple ring is semisimple.

The proof is almost trivial: every module is a quotient of a free one, a free module is a sum of copies of \mathbb{A}, and sums and quotients of semisimple modules are semisimple.

Here it is funny how the presence of 1 makes everything finite. Indeed, it is easy to prove that the decomposition of \mathbb{A} as a left module over itself leads to a decomposition into a direct sum of rings, and since 1 must be a finite sum, this decomposition has to be finite.
And what happens to otherwise semisimple rings without 1, like, say, the ring of finite-rank operators on an infinite-dimensional vector space? The answer is that adjoining 1 turns it into a non-complemented ideal, so that we lose semisimplicity at once!

Theorem 3 (Artin-Wedderburn): Any simple ring is the algebra of matrices over a division ring. More explicitly, it is the bicentralizer of its own action on its (unique up to isomorphism) simple module.

The proof follows easily from our previous observations.

Theorem 4: Let \mathbb{A} be a ring with no nontrivial two-sided ideals. Then it equals the bicentralizer of its action on any nonzero left ideal.

Proof: Let i:\mathbb{A} \to \mathbb{A}^{\prime\prime} be the canonical morphism into the bicentralizer of \mathbb{A} acting on a nonzero left ideal L. Then clearly \ker i is a proper two-sided ideal, hence i is injective. On the other hand, i(L) is a left ideal in \mathbb{A}^{\prime\prime}, since multiplication by \mathbb{A}^{\prime\prime} from the left commutes with multiplication by L from the right. Also, L\mathbb{A} is a nonzero two-sided ideal, so L\mathbb{A} = \mathbb{A}. Hence \mathbb{A}^{\prime\prime} = \mathbb{A}^{\prime\prime} i(L) i(\mathbb{A}) = i(L) i(\mathbb{A}) = i(\mathbb{A}). QED.

Remark: The funny thing is that the absence of nontrivial two-sided ideals does not imply semisimplicity. In fact, a ring without nontrivial two-sided ideals may have no simple left ideals at all (and if it had one, the sum of all simple left ideals would be a nonzero two-sided ideal, hence the whole ring, which would then be simple). One example is the algebra of bounded operators on a Hilbert space modulo compact operators.

Lie algebras 14: The Weyl group is Coxeter

Definition: A Coxeter group is a group G with a presentation

\displaystyle G = \langle s_i, i \in I \mid s_i^2 = 1, (s_i s_j)^{m_{ij}} = 1 \rangle,

where m_{ij} \in \{2,3,\dots,\infty\}, and m_{ij} = \infty means just no relation at all.

Examples include groups generated by reflections in Euclidean spaces (or pseudo-Euclidean, or affine, or hyperbolic, …).

Here I sketch a geometric proof that the Weyl group of a root system admits a Coxeter presentation, with the reflections in the walls of a Weyl chamber as generators. At the same time this gives another proof of Theorem 3 in the previous post.

Theorem. Let \Phi be a root system with basis \Delta. Then \{\sigma_\alpha, \alpha \in \Delta\} generate a Coxeter group.

Sketch of proof: Any two reflections generate a dihedral group, so the \sigma_\alpha clearly satisfy the relations, with m_{\alpha \beta} depending in an obvious way on the angle between \alpha and \beta. The only thing to prove is that these relations are the only ones. In fact, we will prove more. If C_1,\dots,C_n is a circular sequence of neighbouring Weyl chambers, with C_n = C_1, let t_i be the reflection in the wall between C_i and C_{i+1}, so that C_{i+1} = t_i C_i; then t_i = t_{i-1} \dots t_1 \sigma_i t_1^{-1} \dots t_{i-1}^{-1}, where \sigma_i is the reflection in the corresponding wall of C_1. It follows that t_i t_{i-1} \dots t_1 = \sigma_1 \dots \sigma_{i-1} \sigma_i, so a closed loop of chambers is just a product of the generators in a “wrong” order. The whole point is that we can contract this loop into a trivial one using only the Coxeter relations. Indeed, a generic homotopy of a loop on a sphere will only pass through codimension-two faces of the Weyl chambers, and each such crossing turns into a Coxeter relation. More technically, we may form a two-dimensional polyhedral complex consisting of the Weyl chambers, the walls between them, and the relations between walls, and show that it is simply connected (it is a two-dimensional subcomplex of a sphere). It has a covering by the complex constructed from the Cayley graph of the group with the same relations, and since the base is simply connected, the covering must be trivial. QED.
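
A small computational companion (rank 2, with explicit matrices of my choosing): build the two simple reflections, find the order m_{\alpha\beta} of their product from the angle, and close them under multiplication to recover the dihedral Weyl groups:

    # Rank-2 Weyl groups from two simple reflections: the product of the
    # generators has order m determined by the angle, and |W| = 2m.
    import numpy as np

    def reflection(a):
        a = np.asarray(a, dtype=float)
        return np.eye(2) - 2 * np.outer(a, a) / (a @ a)

    systems = {                        # simple roots at angles 2pi/3, 3pi/4, 5pi/6
        "A2": [(1, 0), (-1 / 2, np.sqrt(3) / 2)],
        "B2": [(1, 0), (-1, 1)],
        "G2": [(1, 0), (-3 / 2, np.sqrt(3) / 2)],
    }

    for name, (a, b) in systems.items():
        s1, s2 = reflection(a), reflection(b)
        r, m = s1 @ s2, 1              # r is a rotation; find its order
        p = r
        while not np.allclose(p, np.eye(2)):
            p, m = p @ r, m + 1
        group, frontier = [np.eye(2)], [np.eye(2)]
        while frontier:                # close {s1, s2} under multiplication
            g = frontier.pop()
            for s in (s1, s2):
                h = g @ s
                if not any(np.allclose(h, k) for k in group):
                    group.append(h)
                    frontier.append(h)
        print(name, "m =", m, "|W| =", len(group))   # (3,6), (4,8), (6,12)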

Lie algebras 13: A few lemmas

Now we are going to fix a basis \Delta and the corresponding decomposition \Phi = \Phi_+ \sqcup \Phi_-. We call the roots in \Delta simple. We will also use the notation \prec for the induced partial ordering: \alpha \prec \beta \Leftrightarrow \beta - \alpha \in \Phi_+. Another piece of notation: \mathrm{ht}, the height of a root, is the sum of its coordinates with respect to \Delta.

Lemma 1: For every positive root \alpha there is a chain 0 \prec \alpha_1 \prec \dots \prec \alpha_n = \alpha such that \alpha_{k+1} - \alpha_k \in \Delta.

Proof: There exists \beta \in \Delta such that \langle \alpha,\beta \rangle > 0: otherwise \Delta \cup \{\alpha\} would be a set of vectors in a half-space with pairwise non-acute angles, hence linearly independent (see the ‘funny observation’ in Lie algebras 12), which is impossible. If \alpha is itself simple, there is nothing to prove; otherwise \alpha - \beta \in \Phi_+ and \mathrm{ht} (\alpha - \beta) = \mathrm{ht} \, \alpha - 1. Proceed by induction. QED.
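
The proof is really an algorithm: keep subtracting a simple root that makes an acute angle with what is left. A sketch for \mathsf{G}_2 (the coordinates w.r.t. \Delta and the Gram matrix of the simple roots are hard-coded, \alpha_1 short, \alpha_2 long):

    # Lemma 1 as an algorithm: descend from each positive root of G2 to 0
    # by subtracting simple roots, staying inside Phi_+ the whole way.
    POS = {(1, 0), (0, 1), (1, 1), (2, 1), (3, 1), (3, 2)}
    GRAM = [[2, -3], [-3, 6]]          # inner products of the simple roots
    DELTA = [(1, 0), (0, 1)]

    def inner(u, v):
        return sum(GRAM[i][j] * u[i] * v[j] for i in range(2) for j in range(2))

    def chain(alpha):
        out = [alpha]
        while sum(alpha) > 0:          # the height of alpha
            beta = next(b for b in DELTA if inner(alpha, b) > 0)
            alpha = (alpha[0] - beta[0], alpha[1] - beta[1])
            assert alpha in POS or alpha == (0, 0)
            out.append(alpha)
        return out[::-1]

    for a in sorted(POS, key=sum):
        print(a, ":", chain(a))        # e.g. (3, 2) descends through (3, 1), ...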

Lemma 2: If \alpha is a simple root then \sigma_\alpha permutes roots in \Phi_+ \setminus \{\alpha\}.

Proof: If \beta \in \Phi_+ \setminus \{\alpha\} then it has a strictly positive coordinate with respect to some simple root other than \alpha. Since \sigma_\alpha only changes the \alpha-coordinate, this coordinate of \sigma_\alpha \beta is still positive, hence \sigma_\alpha \beta is positive; and \sigma_\alpha \beta \neq \alpha, since otherwise \beta = -\alpha. QED.

Corollary: Let \delta := \frac{1}{2} \sum_{\beta \in \Phi_+} \beta. Then \sigma_\alpha \delta = \delta - \alpha for every simple \alpha. These \delta’s look like a nice embedding of the Cayley graph!

Lemma 3: Let \alpha_i be simple roots, and \sigma_i := \sigma_{\alpha_i}. Then if \sigma_1 \dots \sigma_{t-1} \alpha_t \prec 0, we have \sigma_s \dots \sigma_t = \sigma_{s+1} \dots \sigma_{t-1} for some 1 \le s < t.

Proof: Clearly \sigma_{\sigma\beta} = \sigma \sigma_\beta \sigma^{-1}. Let s be the first moment at which \alpha_t becomes negative: \sigma_s \dots \sigma_{t-1} \alpha_t \prec 0 while \sigma_{s+1} \dots \sigma_{t-1} \alpha_t \succ 0. Then due to Lemma 2 we must have \sigma_{s+1} \dots \sigma_{t-1} \alpha_t = \alpha_s. It follows that \sigma_s = \sigma_{s+1} \dots \sigma_{t-1} \sigma_t \sigma_{t-1} \dots \sigma_{s+1}, which is the claim after rearranging. QED.
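
The proof is constructive and easy to run. A sketch for \mathsf{A}_2 with a word of my choosing (reflections as 2 \times 2 matrices; negativity of a root is tested via its \Delta-coordinates):

    # Lemma 3 in action: take a non-reduced word in W(A2), find s as in the
    # proof, and check sigma_s ... sigma_t = sigma_{s+1} ... sigma_{t-1}.
    import numpy as np

    simples = [np.array([1.0, 0.0]), np.array([-0.5, np.sqrt(3) / 2])]
    B = np.column_stack(simples)

    def refl(a):
        return np.eye(2) - 2 * np.outer(a, a) / (a @ a)

    def prod(mats):
        out = np.eye(2)
        for m in mats:
            out = out @ m
        return out

    def negative(v):                   # all Delta-coordinates <= 0
        return max(np.linalg.solve(B, v)) < 1e-9

    word = [0, 1, 0, 1, 0]             # sigma_1 sigma_2 sigma_1 sigma_2 sigma_1
    S = [refl(simples[i]) for i in word]

    # the hypothesis of the lemma first holds at this t (word is non-reduced)
    t = next(k for k in range(1, len(word) + 1)
             if negative(prod(S[:k - 1]) @ simples[word[k - 1]]))
    alpha_t = simples[word[t - 1]]
    # s = the first moment alpha_t becomes negative, as in the proof
    s = max(j for j in range(1, t) if negative(prod(S[j - 1:t - 1]) @ alpha_t))
    assert np.allclose(prod(S[s - 1:t]), prod(S[s:t - 1]))
    print("t =", t, ", s =", s, ": the word shortens by two letters")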

Corollary: If \sigma = \sigma_1 \dots \sigma_t is a minimal decomposition then \sigma \alpha_t \prec 0. (Indeed, \sigma \alpha_t = -\sigma_1 \dots \sigma_{t-1} \alpha_t; if this were positive, then \sigma_1 \dots \sigma_{t-1} \alpha_t \prec 0, and Lemma 3 would shorten the word.)

Theorem 1: The Weyl group acts transitively on Weyl chambers and bases.

Proof: Let \Delta and \Delta^\prime be two bases (for chambers, use the correspondence between bases and Weyl chambers). We proceed by induction on |\Phi_+(\Delta) \setminus \Phi_+(\Delta^\prime)|; if this set is empty, the bases coincide. If there is a root \alpha \in \Phi_+(\Delta) \setminus \Phi_+(\Delta^\prime), then there must be a simple root with the same property (if all of \Delta lay in \Phi_+(\Delta^\prime), so would all of \Phi_+(\Delta)), so let \alpha be simple. Then \sigma_\alpha(\Phi_+(\Delta)) \setminus \Phi_+(\Delta^\prime) = (\Phi_+(\Delta) \setminus \Phi_+(\Delta^\prime)) \setminus \{\alpha\} due to Lemma 2, so passing to \sigma_\alpha \Delta decreases the count. QED.
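
The induction is an algorithm for carrying one basis to another; here is a sketch for \mathsf{A}_2 with \Delta^\prime = -\Delta (three reflections are needed, matching the longest element of the Weyl group):

    # Theorem 1 as an algorithm: move Delta to Delta' = -Delta in A2 by
    # simple reflections, removing one root of the difference set per step.
    import numpy as np

    a1, a2 = np.array([1.0, 0.0]), np.array([-0.5, np.sqrt(3) / 2])
    roots = [a1, a2, a1 + a2, -a1, -a2, -(a1 + a2)]

    def refl(a):
        return np.eye(2) - 2 * np.outer(a, a) / (a @ a)

    def positives(delta):              # indices of roots positive w.r.t. delta
        B = np.column_stack(delta)
        return {i for i, r in enumerate(roots)
                if min(np.linalg.solve(B, r)) > -1e-9}

    delta, target = [a1, a2], [-a1, -a2]
    steps = 0
    while positives(delta) != positives(target):
        # pick a simple root of delta that is not positive for the target
        k = next(k for k in range(2)
                 if not any(np.allclose(delta[k], roots[i])
                            for i in positives(target)))
        s = refl(delta[k])
        delta = [s @ d for d in delta]
        steps += 1
        print("difference is now", len(positives(delta) - positives(target)))
    print("done in", steps, "reflections")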

Actually, it is obvious from the geometric viewpoint: we can get from one Weyl chamber to any other, since we can always “jump” to a neighbouring Weyl chamber via the reflection in the wall between them.

Theorem 2: For any root \alpha \in \Phi there exists a basis \Delta containing \alpha.

Proof: Let v be a vector that is orthogonal to \pm\alpha and to no other root. Then for any decomposition \alpha = \beta + \gamma into roots, \beta and \gamma have inner products of opposite signs with v. If we perturb v a little so that \langle \alpha, v \rangle > 0, this property remains true, so that \alpha is not decomposable into v-positive roots. It follows from the characterization of bases (Theorem 1 of the previous post) that \alpha becomes an element of a basis. QED.

Corollary: For any basis \Delta the Weyl group is generated by \{\sigma_\alpha, \alpha \in \Delta\}.

Theorem 3: The action of \mathcal{W} on Weyl chambers has trivial stabilizer.

Proof: Suppose \sigma \neq 1 fixes the fundamental Weyl chamber; then \sigma \Phi_+ = \Phi_+. But if \sigma = \sigma_1 \dots \sigma_t is a minimal decomposition, the Corollary to Lemma 3 gives \sigma \alpha_t \prec 0, a contradiction. QED.

It’s easy but I think I’m going to get stuck here unless I force myself out. Somehow this happens every time something is hidden beneath the carpet: for example, when ideas from another theory are needed implicitly. What is it now? Coxeter groups? Or am I just being lazy again? Anyway, shut up, write it down and move on.

Lie algebras 12: Bases in root systems

Definition 1: A subset \Delta \subset \Phi of a root system is a basis if

  • It is a basis of the underlying Euclidean space
  • \Phi = \Phi_+ \sqcup \Phi_-, where \Phi_\pm is the set of roots whose coordinates are all nonnegative/nonpositive.

First of all, a simple observation.

Remark 1: If \Delta is a basis then for any two distinct \alpha,\beta \in \Delta we have \langle \alpha,\beta \rangle \le 0. Indeed, otherwise \alpha - \beta is a root (by the Lemma of the previous post), and it can be neither positive nor negative.

Theorem 1: For every vector \psi that is not orthogonal to any root, consider \Phi_\pm := \{\alpha \in \Phi \mid \langle \alpha, \psi \rangle \gtrless 0\}. Then the set \Delta of positive roots \alpha \in \Phi_+ that are not decomposable into a positive combination of other positive roots is a basis. Conversely, any basis can be obtained this way.

Proof: \Rightarrow: Once again, for any two distinct \alpha,\beta \in \Delta the angle is non-acute (\langle \alpha,\beta \rangle \le 0), since otherwise either \alpha = (\alpha - \beta) + \beta or \beta = (\beta - \alpha) + \alpha would be a positive decomposition.
Now I need the following funny observation: any set of vectors having positive inner products with \psi and pairwise non-acute angles is linearly independent. Indeed, a nontrivial linear dependence can be rewritten as \varphi := \sum_{\alpha} \lambda_\alpha \alpha = \sum_{\beta} \mu_\beta \beta, a pair of nonnegative decompositions with disjoint sets of nonzero coefficients; then

\displaystyle \Vert \varphi \Vert^2 = \sum_{\alpha,\beta} \lambda_\alpha \mu_\beta \langle \alpha,\beta \rangle \le 0,

which implies \varphi = 0. On the other hand, since the coefficients are nonnegative and \langle \alpha, \psi \rangle > 0, the equality \langle \varphi,\psi \rangle = 0 forces all the coefficients to vanish. Finally, every root of \Phi_+ is a nonnegative combination of indecomposable ones (induct on \langle \alpha, \psi \rangle), so \Delta is indeed a basis.
\Leftarrow: Any vector \psi in the interior of the cone dual to the cone of \Delta-positive vectors will do. QED.

Remark 2: The cone mentioned at the end of the proof is a connected component of the complement of \bigcup_{\alpha \in \Phi} \alpha^\perp, called a Weyl chamber. It is clear from Theorem 1 that bases are in one-to-one correspondence with Weyl chambers.
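
Theorem 1 runs exactly as written: pick a generic \psi, keep the \psi-positive roots, and throw away the decomposable ones. A sketch for \mathsf{G}_2 (my choice of explicit Euclidean coordinates and of \psi):

    # A basis of G2 from a generic vector psi: the indecomposable
    # psi-positive roots.
    import numpy as np

    a1, a2 = np.array([1.0, 0.0]), np.array([-1.5, np.sqrt(3) / 2])
    pos = [a1, a2, a1 + a2, 2 * a1 + a2, 3 * a1 + a2, 3 * a1 + 2 * a2]
    roots = pos + [-r for r in pos]

    psi = np.array([1.0, 0.1])         # orthogonal to no root
    plus = [r for r in roots if r @ psi > 0]
    assert len(plus) == 6              # half of the roots are psi-positive

    def decomposable(r):
        return any(np.allclose(r, p + q) for p in plus for q in plus)

    delta = [r for r in plus if not decomposable(r)]
    print("basis:", [tuple(np.round(d, 3)) for d in delta])   # exactly two roots

    B = np.column_stack(delta)         # sanity: every psi-positive root is a
    for r in plus:                     # nonnegative integer combination of delta
        c = np.linalg.solve(B, r)
        assert np.all(c > -1e-9) and np.allclose(c, np.round(c))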

Definition 2: The Weyl group of a root system is the group \mathcal{W} generated by the reflections \sigma_\alpha, \alpha \in \Phi.

Theorem 2: \mathcal{W} acts transitively on Weyl chambers.

Proof: If C and C^\prime are two Weyl chambers, we can always join them by a path (C=C_1,\dots,C_n=C^\prime) of neighbouring Weyl chambers. Since any Weyl chamber is mapped to any of its neighbours by the reflection in their common wall, the action is transitive. QED.

Lie algebras 11: Root systems in dimension 2

Definition: A root system is a finite set \Phi of nonzero vectors in a Euclidean space, such that

  • \Phi spans the whole space.
  • For each \alpha \in \Phi among the vectors c \alpha only \pm \alpha lie in \Phi.
  • \Phi is closed under reflections \sigma_\alpha := 1 - 2 \frac{\alpha \otimes \alpha}{\Vert \alpha \Vert^2}.
  • The coefficients n_{\alpha \beta} := 2 \frac{\langle \alpha, \beta \rangle}{\Vert \alpha \Vert^2} are integers.

The last two conditions impose certain limitations on the structure of the set \{\beta + c \alpha\} \cap \Phi: at least, its gaps are integral multiples of \alpha, and it must be symmetric under \sigma_\alpha. However, we will see that much more can be said. The following simple lemma gives a complete description of all possible pairs of vectors in \Phi and, as a consequence, of all two-dimensional root systems. It would be appropriate to draw a picture, but I’m too lazy to do it here.

Lemma: Let \alpha, \beta \in \Phi be roots such that \beta \neq \pm \alpha and \Vert \beta \Vert \le \Vert \alpha \Vert. Moreover, assume that \langle \alpha,\beta \rangle \le 0. Then the only possible options for their lengths and angle are as follows:

\displaystyle \begin{matrix} \angle(\alpha,\beta) & \Vert\alpha\Vert^{2}/\Vert\beta\Vert^{2} & \text{Generated root system} \\ \pi/2 & \ast & \mathsf{A}_{1}\times\mathsf{A}_{1} \\ 2\pi/3 & 1 & \mathsf{A}_{2} \\ 3\pi/4 & 2 & \mathsf{B}_{2} \\ 5\pi/6 & 3 & \mathsf{G}_{2} \end{matrix}

The root systems generated by these pairs are the only possible two-dimensional root systems.

Proof: Since \Vert \beta \Vert \le \Vert \alpha \Vert and \beta \neq \pm\alpha, we have |n_{\alpha \beta}| < 2. Hence either n_{\alpha \beta} = 0, and the vectors are orthogonal, or n_{\alpha \beta} = -1. In the latter case \langle \alpha,\beta \rangle = -\frac{1}{2} \Vert \alpha \Vert^2. Now note that -n_{\beta \alpha} = \Vert \alpha \Vert^2 / \Vert \beta \Vert^2 is an integer, and \Vert \alpha \Vert \le 2 \Vert \beta \Vert due to the Cauchy-Schwarz inequality. It follows that \Vert \alpha \Vert^2 / \Vert \beta \Vert^2 \in \{1,2,3,4\}, but the value 4 forces equality in Cauchy-Schwarz, hence \alpha = -2 \beta, violating one of the axioms. QED.

Corollary: For any \beta \neq \pm \alpha the set \{\beta + c \alpha\} \cap \Phi is a symmetric arithmetic progression of length \le 4.
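
Both the lemma and the corollary are easy to confirm by machine: start with a pair of roots at one of the allowed angles, close it under the reflections it generates, then inspect the \alpha-string through \beta. A sketch (the four pairs from the table are hard-coded):

    # Close a pair of roots under reflections; count the roots generated and
    # check the string corollary: consecutive, of length at most 4.
    import numpy as np

    def refl(a, v):
        return v - 2 * (a @ v) / (a @ a) * a

    pairs = {                          # the four rows of the table
        "A1xA1": (np.array([1.0, 0.0]), np.array([0.0, 1.0])),
        "A2": (np.array([1.0, 0.0]), np.array([-0.5, np.sqrt(3) / 2])),
        "B2": (np.array([1.0, 0.0]), np.array([-1.0, 1.0])),
        "G2": (np.array([1.0, 0.0]), np.array([-1.5, np.sqrt(3) / 2])),
    }

    for name, (a, b) in pairs.items():
        roots, grew = [a, b], True
        while grew:                    # close under all available reflections
            grew = False
            for r in list(roots):
                for s in list(roots):
                    v = refl(r, s)
                    if not any(np.allclose(v, w) for w in roots):
                        roots.append(v)
                        grew = True
        string = [k for k in range(-6, 7)
                  if any(np.allclose(b + k * a, w) for w in roots)]
        assert len(string) <= 4 and string == list(range(string[0], string[-1] + 1))
        print(name, len(roots), "roots; beta-string along alpha:", string)

The printed sizes 4, 6, 8, 12 are exactly the two-dimensional systems \mathsf{A}_1 \times \mathsf{A}_1, \mathsf{A}_2, \mathsf{B}_2, \mathsf{G}_2 from the lemma.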