# Low Dimensional Topology

## June 12, 2014

### A celebration of diagrammatic algebra

Filed under: 3-manifolds,Combinatorics,Misc.,Quantum topology — dmoskovich @ 5:24 am

Relaxing from my forays into information and computation, I’ve recently been glancing through my mathematical sibling Kenta Okazaki’s thesis, published as:

K. Okazaki, The state sum invariant of 3–manifolds constructed from the $E_6$ linear skein.
Algebraic & Geometric Topology 13 (2013) 3469–3536.

It’s a wonderful piece of diagrammatic algebra, and I’d like to tell you a bit about it!

The two main constructions of 3-dimensional topological quantum field theories are:

1. Reshetikhin-Turaev invariants: These are computed from surgery presentations of 3-manifolds.
2. Turaev-Viro invariants: These are based on triangulations of 3-manifolds.

Turaev-Viro invariants are defined using $6j$-symbols coming from representations of quantum groups. When everything is ‘nice’ enough, the Turaev-Viro invariant equals the square of the absolute value of a corresponding Reshetikhin-Turaev invariant, and its computation reduces to a Reshetikhin-Turaev computation. But there’s a natural extension of Turaev-Viro invariants, due to Ocneanu, which uses other types of $6j$-symbols, such as $6j$-symbols of subfactors. In particular, the $6j$-symbol of the $E_6$ subfactor does not come from any Reshetikhin-Turaev invariant, and so it must be computed directly. Quantum closed 3-manifold invariants associated to $6j$-symbols of the $E_6$ subfactor are true state-sum invariant land!!

The study of subfactors, and also of knots, challenges the classical paradigm of algebra as the science of manipulating strings of symbols. Namely, relevant algebras are algebras of diagrams drawn on the plane. To veer off on a philosophical tangent for a moment:

Before ‘algebra of strings’, if you wanted to solve something like $(x-10)^2=81x$, you had to write something monstrous like:

> If some one say: “You divide ten into two parts: multiply the one by itself; it will be equal to the other taken eighty-one times.” Computation: You say, ten less thing, multiplied by itself, is a hundred plus a square less twenty things, and this is equal to eighty-one things. Separate the twenty things from a hundred and a square, and add them to eighty-one. It will then be a hundred plus a square, which is equal to a hundred and one roots. Halve the roots; the moiety is fifty and a half. Multiply this by itself, it is two thousand five hundred and fifty and a quarter. Subtract from this one hundred; the remainder is two thousand four hundred and fifty and a quarter. Extract the root from this; it is forty-nine and a half. Subtract this from the moiety of the roots, which is fifty and a half. There remains one, and this is one of the two parts.

This is from Al-Khwarizmi’s Compendious Book on Calculation by Completion and Balancing. Without ‘algebra of strings’ itself, you couldn’t even do that. Conceptual advances which make algebra effective include appropriate notation (credit to Al-Qalasadi in the fifteenth century), thinking in terms of algebraic structures, and completing them. For example, to ‘balance’ terms from one side of an equation to the other, you need zero and negative numbers (so that having five apples and giving away two is the same as having minus two apples and receiving five), and you need fractions… even if the final answer is known to be a positive integer and even if only positive integers make sense in context! As an aside, I think that concepts such as negative probability and negative information can be understood analogously.
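In modern notation, with $x$ standing for the ‘thing’ (one of the two parts of ten), Al-Khwarizmi’s recipe is just completing the square; his steps line up term by term:

```latex
\begin{align*}
(10-x)^2 &= 81x\\
100 - 20x + x^2 &= 81x && \text{``a hundred plus a square less twenty things''}\\
100 + x^2 &= 101x && \text{``a hundred and one roots''}\\
x &= \tfrac{101}{2} - \sqrt{\bigl(\tfrac{101}{2}\bigr)^2 - 100}
  && \text{``halve the roots \dots\ extract the root''}\\
  &= \tfrac{101}{2} - \sqrt{2450\tfrac14}
   = \tfrac{101}{2} - \tfrac{99}{2} = 1.
\end{align*}
```

The other root, $x = 100$, is discarded because the two parts must sum to ten; indeed $9^2 = 81 \cdot 1$ checks out.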

But then came the idea, whose origins are discussed in this mathoverflow question and which was popularized in topology by Kauffman HERE, that one should really be able to concatenate algebraic symbols not only on the left and right, but also from above and below and indeed from any direction. That algebra should be done not “along a line”, but rather in the whole plane. For “higher algebra” you might need even more dimensions! And diagrammatic algebra was born.

So how can you use diagrammatic algebra to compute an invariant? You compute a diagrammatic quantity for a presentation of your object. Local moves on your presentations, such as Pachner moves on triangulations, induce local moves on your diagrams. Your goal is now to prove that, using the local moves, you can reduce your diagram to some sort of “normal form”. And then that “normal form” is your invariant! This plan fits into the Kuperberg programme for understanding state-sum invariants, which is:

1. Find a presentation for your skein module (your diagrammatic algebra of diagrams modulo your moves) in terms of generators and relations.
2. Use this presentation to prove properties of your invariant (and to compute it!).

Bigelow had already found a presentation for the relevant $E_6$ planar algebra here:

Bigelow, S., Skein theory for the ADE planar algebras. Journal of Pure and Applied Algebra 214(5) (2010), 658-666.

Okazaki modifies Bigelow’s presentation, and using his modified presentation, he shows that the $E_6$ planar algebra in question is $1$-dimensional, so that any diagram reduces to a scalar multiple of the empty diagram (update: Okazaki just posted a simplified version of this proof HERE). This means that the $E_6$ state sum invariant (Turaev-Viro-Ocneanu invariant) can be computed by writing down the diagram associated to $6j$-symbols of the $E_6$ subfactor for the triangulated closed $3$-manifold in question, and recursively applying local moves until an empty diagram is obtained.
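A toy sketch of this “everything reduces to a scalar times the empty diagram” phenomenon (this is emphatically not the $E_6$ skein, just the simplest imaginable analogue): suppose diagrams are disjoint closed loops, and the single local move deletes a loop at the cost of a scalar factor $\delta$. Then the evaluation of any diagram is $\delta^n$, computed by recursively applying the move until the empty diagram remains.

```python
# Toy "skein evaluation": a diagram is just a number of disjoint loops,
# and the one local move is  (loop) -> delta * (empty).
# Mimics how a 1-dimensional skein module lets you reduce any diagram
# to a scalar multiple of the empty diagram.

def evaluate(num_loops, delta):
    """Recursively apply the local move until the empty diagram remains."""
    if num_loops == 0:
        return 1.0               # the empty diagram evaluates to 1
    # remove one loop, pick up a factor of delta, and recurse
    return delta * evaluate(num_loops - 1, delta)

# A diagram with 3 loops evaluates to delta**3:
print(evaluate(3, 2.0))  # -> 8.0
```

In the real $E_6$ case the moves are far subtler (they rewrite small pieces of trivalent graphs, not whole loops), but the overall shape of the computation is the same.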

Given that the $E_6$ linear skein is a non-trivial diagrammatic algebraic object, Okazaki’s paper might represent the most archetypal piece of diagrammatic algebra I’ve ever seen. It’s 57 pages full of computations, some of which look a bit like this:

At the end of the paper, he computes the invariant for some lens spaces, and he’s done many more computations since. But anyway, it’s all just a beautiful testament to the power of diagrammatic algebra, a celebration of diagrammatic algebra. I believe that diagrammatic algebra will continue to expand and will soon enter all of the sciences… What would Peirce, who envisioned a diagrammatic algebra in the 1880s as his “chef d’oeuvre”, an outline of the mathematics of the future (see HERE), have made of all the wonderful work on skein modules that we see today? What would he have made of this paper of Okazaki?

A casual question to all of you: what’s the most aesthetically pleasing diagrammatic algebraic computation you know?

1. Here is a question from a non-topologist (I work in number theory): what makes these computations “diagrammatic” in the sense you describe? Obviously they involve diagrams, but you say that diagrammatic algebra means moving beyond strings of symbols to symbols arranged in the plane. The equations you’ve posted look to me like traditional algebra (strings of symbols) with the diagrams simply functioning as complicated symbols. In what sense are these diagrams more than just fancy algebraic symbols obeying a set of algebraic laws?

I hope this question makes sense. I know it’s a little vague, but any intuition you could offer would be appreciated.

Comment by Luke Wassink — June 12, 2014 @ 3:56 pm

The thing to understand is that the expressions in the angle brackets $\langle\ \rangle$ are evaluations (complex numbers), while the graphs you see occur as local pieces of bigger diagrams. So for example, the local picture:

contains five 1-valent vertices. These are all connected (‘concatenated’, we say) with 1-valent vertices in other graphs. So the image above would never occur in isolation; it would occur only as a subimage of a larger graph, such as:

(ignore labels… I snipped these images out of different parts of Okazaki’s paper, so the labels happen not to match).

So the featured equation is living inside a skein module. Okazaki defined this skein module in Definition 2.1. Its elements are formal sums over the complex numbers of certain graphs. Relations allow you to replace any one of these graphs with a linear combination of other graphs, all of which coincide with the first graph except in a small region, where they differ as pictured.

The analogy is to combinatorial group theory. In combinatorial group theory, a group is presented by generators (symbols), which we string together (concatenation); we then quotient the free group thus obtained by rewrite moves (relations), which allow us to replace one string by another wherever it occurs (e.g. $\ldots ab\ldots \leftrightarrow \ldots ba\ldots$ if your group is abelian).
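To make that rewrite-move picture concrete, here is a deliberately tiny sketch (my own illustration, not anything from Okazaki’s paper): in the free abelian monoid on $\{a,b\}$, repeatedly applying the local move $\ldots ba\ldots \to \ldots ab\ldots$ sorts any word into a normal form, and two words represent the same element exactly when their normal forms agree.

```python
# Rewriting ...ba... -> ...ab... until no move applies sorts the word.
# The sorted word is the normal form in the free abelian monoid on {a, b},
# just as a skein-module diagram is rewritten to a normal form by local moves.

def normal_form(word):
    w = list(word)
    changed = True
    while changed:                  # keep applying the local move...
        changed = False
        for i in range(len(w) - 1):
            if w[i] == "b" and w[i + 1] == "a":   # found a 'ba'
                w[i], w[i + 1] = "a", "b"         # the rewrite move
                changed = True
    return "".join(w)               # ...until no move applies

print(normal_form("baba"))                         # -> 'aabb'
print(normal_form("abab") == normal_form("bbaa"))  # -> True
```

For skein modules, the words become planar diagrams and the moves become the pictured local relations, but the logic of “rewrite until you reach a normal form” is the same.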

The situation for skein modules is entirely analogous. We have ‘generating diagrams’, which can be concatenated, e.g. by fusing a 1-valent vertex in one of them with a 1-valent vertex in another (if you think of each edge as representing a string (why not?), then your elements are sort-of graphs of strings which come together and split apart at vertices, and concatenation is exactly as mentioned). The free structure thus obtained is then quotiented by rewrite moves, which are the relations. The whole thing occurs in a diagram in the plane. It’s a wonderfully natural and fruitful conception, actually!

Comment by dmoskovich — June 13, 2014 @ 12:14 am

2. Interesting, thanks for the reply!

Comment by Luke Wassink — June 13, 2014 @ 3:40 pm
