V.F.R. Jones, Some Unitary Representations of Thompson's Groups F and T, arXiv:1412.7740.

Links occur as braid closures, and so links can be studied via braid theory. This is the starting point for the Jones polynomial, and it's very nice, because braids form a group (a good algebraic property) which is orderable (even biorderable) and automatic (even biautomatic). But here are some complaints I have:

- Combing a link introduces an artificial structure- a `timeline' where strands always `move forward through time'- which a link doesn't naturally have. By trading links for braids, maybe we're missing out on some essential *je ne sais quoi* that makes a link a link.
- More concretely, all known quantum link invariants come from Lie (bi)algebra constructions. But not all subfactors arise this way; the Haagerup subfactor, for example, does not. If we believe that quantum invariants should correspond to finite index subfactors and not just to e.g. quantum groups, then surely braids are the wrong way to go.
- Everyone and their cousin uses braid groups nowadays.

Jones’s manuscript provides something new: a way to obtain links from elements of Thompson groups, and elements of Thompson groups from links. His algorithm seems just as natural as combing tangles to obtain braids. And it leads directly to new polynomial link invariants.

Thompson groups may be just as nice as braid groups. At the very least, they’re new in this context. It’s an open problem whether they are automatic or amenable- as far as I know, they certainly might be. Indeed, they might be *much nicer*, because, unlike braid groups on more than two strands, Thompson’s group F doesn’t contain a subgroup isomorphic to the free group on two generators.

As far as I know, Thompson groups have so far shown up in mathematics only as counterexamples. But in Jones’s paper, they naturally pop out of the structure of planar algebras. There’s nothing artificial about the correspondence between knots and Thompson group elements.

**Edit**: Yemon Choi points out a very nice interpretation of Thompson group elements which both makes sense in this context and in which the Thompson group is not a counterexample. The reference is HERE. What I said about Thompson groups appearing only as counterexamples is even more false than that- they’ve featured before in quantum topology (and I wonder what the relation is, if any, with the construction in Jones’s new preprint).

**Silly convention**: Elements of braid groups are called braids. I suggest that elements of Thompson groups be called Thompsons. Indeed, two (dependent) elements of the Thompson group are used to construct links, and so, to be precise: What is at hand is truly a case of Thompson and Thompson.

**Edit**: Scott Carter points out that “Thompson and Thompson” ought to be Thomson and Thompson. Which makes sense to me- the link is a diagrammatic inner product of one Thompson with another Thompson acted on by a representation. Perhaps it’s reasonable to call a new polynomial invariant the `Thomson and Thompson Polynomial’ after all, with the `p’ reminding us of the action of the representation.

Jones investigates the scaling limit of tangle diagrams, by letting the boundary points of the tangle fill out the boundary of the diagram’s disc (tangles with more and more strands). It wouldn’t make much sense, at least diagrammatically, to do this all at once- what would a tangle with uncountably many endpoints look like?- so what Jones does instead is to present a directed-set construction, increasing the number of tangle endpoints inductively by gluing on small tangles. This mimics the block spin renormalization procedure from physics on a diagrammatic level. Jones credits Dylan Thurston with having suggested this idea to him.

Because the only way to add endpoints to a tangle is with caps or with cups, the number of intervals between tangle endpoints is always even. Jones chooses the interval endpoints (tangle endpoints) to correspond to dyadic rationals, that is to numbers of the form a/2^n for integers a and n. The group of PL homeomorphisms of [0,1] whose non-differentiable points are dyadic rationals and whose slopes are all powers of 2 is Thompson’s group F, and this is how Thompson groups enter the picture.
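As a sanity check on this definition, here is a small illustrative script (my own, not from Jones's paper) that tests whether a piecewise-linear map, given by its list of breakpoints, satisfies the conditions defining F:

```python
from fractions import Fraction

def power_of_two(n):
    # True when the positive integer n equals 2^k for some k >= 0
    return n > 0 and n & (n - 1) == 0

def is_dyadic(q):
    # a rational in lowest terms is dyadic iff its denominator is a power of 2
    return power_of_two(q.denominator)

def in_thompson_F(breakpoints):
    """breakpoints: increasing list of (x, y) pairs (ints or Fractions),
    including the endpoints (0,0) and (1,1), describing a PL map of [0,1]."""
    pts = [(Fraction(x), Fraction(y)) for x, y in breakpoints]
    if pts[0] != (0, 0) or pts[-1] != (1, 1):
        return False
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if not (x0 < x1 and y0 < y1):
            return False                  # must be an increasing homeomorphism
        if not (is_dyadic(x0) and is_dyadic(y0)):
            return False                  # breakpoints must be dyadic
        slope = (y1 - y0) / (x1 - x0)
        if not (power_of_two(slope.numerator) and power_of_two(slope.denominator)):
            return False                  # slopes must be 2^k, k possibly negative
    return True
```

For example, the standard generator of F with breakpoints (0,0), (1/2,1/4), (3/4,1/2), (1,1) passes this test, while any map with a slope of 3 fails.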

The preprint opens up vast new vistas, and the relationship between links and Thompson groups which it points out is at once so powerful and so natural (*e.g.* the analogous construction for a different planar algebra gives the Tutte polynomial of a planar graph) that it would be difficult to imagine the new polynomials which his construction defines *not* once again reshaping major parts of quantum topology.

Two challenges are presented at the end of the preprint:

- Links are obtained from Thompsons by an *inner product*, that is, by pairing a tangle obtained from the Thompson with a mirror image of another tangle obtained in a different way from the same Thompson. This reminds me a little of Heegaard splittings, although of course that’s a very different idea. Anyway, the Thompson index of a link is the smallest number of leaves of a Thompson (presented as a pair of rooted binary trees) required for this construction. Investigate bounds for this index.
- Find and prove a Markov theorem for Thompson group presentations of links. This has got to be a major goal. I would be totally amazed if the `Markov moves’ for Thompsons were anything other than the set of moves on pairs of trees corresponding to the Reidemeister moves, as Jones himself suggests. This would make Thompsons *much more* natural group-element representations of tangles than braids are.

Another obvious question is how to compute the polynomial invariant defined in this preprint, at least for small examples, and to establish whether it satisfies a reasonable-looking skein relation.

To complain (never forget to complain!), like the Jones polynomial, the construction of this invariant also uses planarity in an essential way, and therefore it is difficult to see how its construction might extend to virtual and to welded tangles.

I thank Ian Agol for drawing my attention to this preprint.


For the record: When I was in graduate school, I had no intentions of doing anything but becoming a math professor. Things didn’t change during my postdoc either. In fact, even in the spring of 2009, as the financial crisis was eviscerating the job market and unemployment was staring me grimly in the face (until Bus Jaco somehow convinced the right people to let me fill the vacancy created when Joseph Maher left OSU, but that’s another story…) my last-minute applications were all for one-year visiting lecturer positions.

At the time, my assessment of the private sector vs. academia was pretty bleak: your salary is higher, but the price you pay for that is longer hours at a mind-numbing job with a micro-managing boss. But it turns out things aren’t actually that extreme. In fact, there are a lot of nice things about the private sector that make the comparison much more subtle, even if you take money completely out of the picture.

First, let’s talk about working hours: Many youngsters (and non-academics) point to the flexible schedules and long holidays as perks of academia. But by the time you make it into the ranks of the tenure-track, it becomes clear that the flexibility just means that you get to pick which 60 (80?) hours a week you work. And vacations are the times when you get to work on your real work (research), or else feel guilty for not doing all the writing that you didn’t get done during the semester.

In the private sector, on the other hand, policies regarding time vary quite a bit. I’ve heard that there are some jobs where you’re expected to be in the office 12 hours a day, whether or not you have anything to do. But there are many more companies (including Google and most of its peers) that value work-life balance and have policies that explicitly try to discourage their employees from working harder than is healthy for them. They don’t just do this because they’re nice people – they do it because they want to prevent burn-out, which in the long run counteracts any extra productivity that would come from an 80-hour week. (Don’t believe me? There are books about this.) I spend around 40 hours a week in the office and almost never bring work home. Some of my new colleagues work longer hours than that, but overall I think the attitude towards work/life balance is much healthier.

So this brings up an interesting point: As great as it is to be completely in charge of your own day-to-day and longer-term activities, there are actually some benefits to having a manager who is aware of and has a stake in your daily work: Namely, they can take a more objective view of what you’re doing and help to make corrections, such as pushing back against your self-applied pressure to work harder than is healthy. Granted, not all managers will do this, but if you find the right job at a good company, they likely will. (Google makes all managers go through training on this sort of thing, and I expect there are similar policies at many other companies.)

Next, about the “mind numbing” work: No, the work that I do is not as deep or as beautiful as the mathematics that I previously got to learn and think about. It doesn’t require the sorts of insights that come to you by surprise at random times and double your pulse rate even though you haven’t moved. It doesn’t require thinking about a problem during every spare moment for days or weeks or months. But it is quite interesting, and it exercises most of the mental muscles that one works so hard to develop in a math graduate program.

In particular, it turns out that (at least in my experience) writing a computer program is a lot like writing a math paper, or a proof: You start with an overview of the thing that you want to produce, at a high-level of abstraction. Then you break it up into smaller pieces, and work out the details of each piece at a low-level of abstraction. Then you start putting these pieces together, slowly working from lower levels of abstraction to higher levels. Sometimes, you realize that they don’t fit exactly right, so you have to drop down to a lower level of abstraction to change some of the components before working your way back up.
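As a toy illustration of that process (an invented example, not from any real codebase), one might stub out the high level first and then fill in each piece at a lower level of abstraction:

```python
def word_frequencies(text):
    """Top level first: the plan is normalise, then split, then count."""
    return count(tokenize(normalize(text)))

# Now drop down a level of abstraction and work out each piece.

def normalize(text):
    # lower-case so that 'The' and 'the' are counted together
    return text.lower()

def tokenize(text):
    # split on whitespace and strip surrounding punctuation
    return [w.strip('.,;:!?') for w in text.split()]

def count(words):
    # tally occurrences of each word
    freq = {}
    for w in words:
        freq[w] = freq.get(w, 0) + 1
    return freq
```

If the pieces turn out not to fit (say, `tokenize` needs to know about hyphenation), you drop back down, change a component, and work your way back up — just as described above.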

It turns out that the ability to switch between different levels of abstraction like this is highly prized and relatively rare, even though it’s sometimes taken for granted in academia. It’s basically the same skill that you need to read and write proofs in mathematics (which is the reason I believe that math majors/Ph.D.s can make better software engineers than computer science students. But don’t tell anyone I said that…) Programming is basically this process with some syntax (which is the easy part) layered on top.

In mathematics research, the fun part is coming up with the ideas and the painful part is organizing them into a paper. In software engineering, coming up with the ideas that will solve a given problem (or determining that it can’t be solved) is usually pretty straightforward, if not trivial: you put together ideas that have already been worked out by others. But implementing them (i.e. writing source code) is quite interesting, and significantly less painful (at least in my experience) than writing papers. Getting the pieces of a computer program to fit together tends to take a lot more time, even for very simple systems, because you can’t gloss over the details as much as you do in a math paper, but that also makes it more interesting.

So the difference between a mathematics research project and developing a complex piece of software is (again) a trade-off. For me, it comes down to how much you enjoy grappling with nearly unsolvable problems, versus how much you dislike writing papers. (And if you actually like writing papers, then lucky you…) There’s also something to be said for being able to know, at the outset of a project, that there is a solution and roughly how long it will take to finish it.

And, of course, in the private sector you generally don’t get to teach. For some this may be a drawback, and for others, maybe not. For me it was yet another trade-off – there were a lot of things I liked about teaching, and other things I didn’t. One thing I didn’t like was the constant tension between time/energy spent on teaching vs. research.

Finally, no discussion of an academic career would be complete without a discussion of committees. I actually enjoyed much of the time I spent in committee meetings discussing important questions about how the department should be run. These were usually decisions that needed to be made, and they required the sort of experience and insight that could only come from faculty members. Sometimes we spent more time than we needed to splitting hairs, or discussing issues that didn’t deserve the amount of time we devoted to them. But overall, I think most of the committees I had a chance to experience served an important purpose.

At the same time, they required a lot of time, and that time commitment increases every year as you lose the protections designed to give pre-tenure faculty time for research. So it’s not uncommon, as your academic career progresses, for research (which for me was one of the main draws of an academic career) to become a smaller and smaller part of your day-to-day activities.

In the private sector, on the other hand, decisions tend to get made more quickly and with less discussion. I have very few meetings at Google, and most last less than half an hour. Other people (such as managers) spend much more time in meetings than I do, but they work hard to keep them short and to the point. In general, a lot of the types of decisions that are left to faculty in academia are delegated to specialists or managers in the private sector. This obviously has both benefits (in terms of time) and drawbacks (in terms of control).

So, that should give you a rough idea of the factors I considered when I decided to leave academia. I want to stress that these are all trade-offs, and how you weigh them is a personal decision. There are many mathematicians who would be happier in academia, but there are also many who would do better in the private sector.

While it may be scary to think of all the bright young minds that academia could lose to the private sector, my experience suggests that there won’t be a shortage among the mathematicians who stay. In fact, I would argue that having such a high ratio of job seekers to jobs is much worse for the morale of the field, makes it easier for administrators to replace tenure lines with contingent faculty positions, and ends up forcing many promising young mathematicians to leave against their will anyway. If more recent Ph.D.s choose to go into the private sector, then the ones who stay will tend to be the ones who are particularly devoted to teaching (on top of their excellent research programs), which is what our universities need. Having the remaining math Ph.D.s go out into the world (willingly) and show how capable they are will both help justify the NSF grants that contributed to their education and encourage more students to enter math graduate programs, or become math majors.

When I was a graduate student, I didn’t take an objective look at the option of entering the private sector, and I don’t think many of my peers did either. But my impression is that attitudes about this are starting to change, both among students and faculty. I hope that in the future, every young mathematician will give serious thought to both options, and will be able to make a well informed decision.


That friend is also an electrical engineer and knows some things about signal processing. This was important to me — we had some external criterion (from outside of mathematics) for determining whether or not the insights from Persistent Homology were interesting or not.

So I said “okay!” Not really knowing what I was getting myself into.

We got to work, over some rather cool, soggy Victoria winter days.

The idea was to take piles of data from music, put various standard metrics on them, feed them into software that computes the barcodes, and analyze the output, to see if the barcodes see anything that we did not already know about the data. The answer turns out to be yes.

Sometimes the barcodes saw some rather subtle and insightful things. Sometimes they saw some subtle and relatively mundane things. Let me tell you about a few.

First, a quick summary of persistent homology. Jesse has talked about this quite a bit on the blog, but if you’ve forgotten or missed it, the idea goes back to Vietoris.

Given a finite metric space X, you form a family (parametrized by a real number ε≥0) of simplicial complexes X(ε) whose vertex set is X. You give X(ε) an edge between two vertices if the distance between them is less than ε, and more generally you give it a simplex if the pairwise distances between all the prospective vertices are less than ε. The family X(ε) forms a filtration of a contractible simplicial complex X(∞), so the homology of the spaces X(ε) is a family of abelian groups and maps induced by the inclusions, which eventually “dies” once the parameter ε is larger than the diameter of the metric space X. At the other extreme, X(0) is a finite discrete space, so its homology is that of a finite set of points. The homology classes that exist for large ε-intervals are called persistent, and one imagines them as describing somewhat relevant shapes in your data. The barcodes essentially represent these intervals over which homology classes live.
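Degree-0 persistence is easy to compute by hand: it is single-linkage clustering, i.e. Kruskal's minimum-spanning-tree algorithm in disguise. Here is a minimal sketch (my own illustration, not the software we actually used):

```python
from itertools import combinations

def h0_barcode(points, dist):
    """Dimension-0 barcode of the Vietoris-Rips filtration.

    Every point is born at epsilon = 0; a connected component dies at the
    epsilon where an edge first merges it into another component.  This is
    exactly Kruskal's minimum-spanning-tree algorithm with union-find."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    edges = sorted((dist(points[i], points[j]), i, j)
                   for i, j in combinations(range(n), 2))
    bars = []
    for eps, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri
            bars.append((0.0, eps))       # one component dies at this epsilon
    bars.append((0.0, float('inf')))      # the last component never dies
    return bars
```

Higher-dimensional homology needs real software, of course, but this already produces the kind of H_0 barcode discussed below.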

The subject of Persistent Homology is advancing fairly rapidly at present, but there are still many unsolved foundational problems in the field. For a person who envies the successes of Milnor, Thom and Serre in setting up the foundations of algebraic and differential topology, I can’t imagine a better field to go into.

Anyhow, back to our computations. One of the more interesting computations we looked into was the homology of certain points in the *space of rhythms*. Here we think of rhythms as the periodic beating of a drum. To make a *space* from rhythms, we consider the finite-subset space of a circle, typically denoted exp(S^1). A point of exp(S^1) is a finite (non-empty) subset of the unit circle. The metric on exp(S^1) is the Hausdorff distance: the largest distance from a point of one set to the nearest point of the other, maximised over both sets. One thinks of a point in exp(S^1) as an explicit periodic beating of a drum, with one point for each beat, and the beat repeats itself every 2π units of time. This does not quite suffice, because two essentially-identical beats can be phase shifts of each other, so our periodic rhythm space is the metric quotient exp(S^1)/SO_2, where SO_2 acts on the circle in the natural (linear) way. This model for beats ignores many things — for instance, the loudness and duration of a drum strike, and any notion of different types of drums.
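For concreteness, here is a sketch of that metric (my own code; I parametrise the circle by [0,1) rather than [0,2π), and handle the quotient by SO_2 by brute-force search over a grid of rotations, which suffices for rhythms quantised to a small number of pulses):

```python
def circle_dist(a, b):
    # arc-length metric on the circle, with phases measured in [0, 1)
    t = abs(a - b) % 1.0
    return min(t, 1.0 - t)

def hausdorff(A, B):
    # Hausdorff distance between two finite subsets of the circle
    def one_sided(X, Y):
        return max(min(circle_dist(x, y) for y in Y) for x in X)
    return max(one_sided(A, B), one_sided(B, A))

def rhythm_distance(A, B, steps=720):
    """Distance in the quotient exp(S^1)/SO_2: minimise the Hausdorff
    distance over a discretised set of rotations of A."""
    return min(hausdorff([(a + k / steps) % 1.0 for a in A], B)
               for k in range(steps))
```

For example, the 16-pulse son clave, with strikes at pulses 0, 3, 6, 10, 12, is the point {0, 3/16, 6/16, 10/16, 12/16}, and any rotation of it lies at distance zero from it in this quotient.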

As our data set, we took a table of Afro-Cuban rhythms. Here are the barcodes.

There are a few nifty things about this computation. There are no homology classes other than in dimension 0. So these barcodes say the data begins as a collection of isolated points, and then after a certain threshold (near ε=0.06) a transition occurs, and the simplicial complex becomes contractible. This strongly suggests the data is a metric tree. We checked, and it turns out the data is indeed a metric tree. The tree appears to be the genetic tree for how Afro-Cuban rhythms evolved (I don’t know this branch of music well enough to know for sure, but that’s my hunch). Specifically, the centre of the tree of Afro-Cuban rhythms is known as son clave (or clave son), which is thought to be the first Afro-Cuban rhythm. It would appear the remaining rhythms evolved from it by making individual changes — doubling a drum strike here, shifting one there, and so on.
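The metric-tree hunch can be tested directly: a finite metric space embeds in a metric tree if and only if it satisfies the four-point condition (Buneman). A quick sketch of the check (my own illustration, not the check we actually ran):

```python
from itertools import combinations

def is_tree_metric(D, tol=1e-9):
    """Four-point condition: for every quadruple x, y, z, w, among the
    three pairwise sums
        d(x,y)+d(z,w),  d(x,z)+d(y,w),  d(x,w)+d(y,z)
    the two largest must be equal.  D is a symmetric distance matrix."""
    n = len(D)
    for x, y, z, w in combinations(range(n), 4):
        s = sorted([D[x][y] + D[z][w],
                    D[x][z] + D[y][w],
                    D[x][w] + D[y][z]])
        if s[2] - s[1] > tol:
            return False
    return True
```

A star-shaped tree metric passes this test, while, say, the four corners of a square with the graph metric of a 4-cycle fail it.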

Other metric spaces we considered were things like the space of pairs of notes, where one note occurs immediately after another in a composition. The feature we saw most often in these barcodes was a composer’s tendency to “return” to a theme note, with little departures here and there.

On a more topological side, there was the fun observation that a certain “octave-reduced space of 3-note melodies” is homeomorphic to S^1 x S^2, so the homology of S^2 sometimes appears naturally when studying melodies in this manner.

There are several databases out there of various condensed forms of all world music — close to everything recorded in human history. It’s interesting to speculate about what the shape of that data would be. It would be interesting to discover whether there is much relatively unexplored territory in this space, and if so, whether it remains unexplored because we lack the imagination to find it, or because it’s all too atonal. More pessimistically, it could be a gaussian distribution centred on Britney Spears.

This leads to one of my personal favourite questions: what kind of normality tests are there for data, using persistent homology?


A groundbreaking paper which made a deep impression on a lot of people, including me, was Cochran-Orr-Teichner’s Knot concordance, Whitney towers and signatures. This paper revealed an unexpected geometric filtration of the topological knot concordance group, which formed the basis for much of Tim Cochran’s subsequent work with collaborators, and the work of many other people.

In this post, in memory of Tim, I will say a few words about roughly what all of this is about.

It would be very nice to be able to equip the space of knots with a good algebraic structure. Somehow, the natural binary operations on knots seem to be the satellite operations, of which the connect sum may be considered a special (degenerate) case.

Unfortunately, unless a knot is trivial, there is no knot which can be `satellited’ onto it to obtain the trivial knot. Thus, the set of knots under connect sum, or indeed under any satellite operation, does not form a group.

But the set of knots can be quotiented by an equivalence relation called concordance, and concordance classes of knots do form a group under the connect sum. Concordance takes place in an ambient 4-dimensional space, and so it provides an avenue for knot theory to be used to study 4-dimensional topology. For most of the world, this is the ultimate motivation to study link concordance. This point of view is beautifully laid out in Freedman-Quinn’s Topology of 4-Manifolds.

The way the field has gone, every conjecture about how `good’ the structure of the link concordance group is has turned out to be wrong. Almost every paper which has come out about knot concordance in the last 20 years, as far as I know, has been a negative result. Cochran’s work has been instrumental in showing `how bad things are’. The group isn’t trivial, and its non-triviality is detected by the Casson-Gordon invariants. The next step was taken in Cochran-Orr-Teichner; Casson-Gordon invariants do not detect knots up to concordance. They’re just the first step in an infinite geometric filtration of invariants, which is non-trivial at every step.

Stavros Garoufalidis suggested a long time ago that the Cochran-Orr-Teichner filtration should be investigated through the lens of quantum topology. This was a major research interest of mine at one point, and to the best of my knowledge, nobody has yet achieved this aim. I remain convinced that this is an interesting avenue of research worthy of future investigation.

A recent paper of Tim Cochran which captured my imagination was his joint work with his mathematical daughter Shelly Harvey on The Geometry of the Knot Concordance Space. In it, Cochran and Harvey suggest viewing the topological knot concordance space as a metric space in various different ways, and suggest investigating its coarse geometry. Again, the structure isn’t neat- it isn’t quasi-isometric to a finite product of hyperbolic spaces- but it is possible to address the question of whether it is what the authors call a `fractal space’, that is, roughly, a space which admits a natural system of self-similarities. The conjecture that the knot concordance space is a fractal space looks intuitively highly plausible to me, and the investigation of its coarse geometry looks like a marvelous research project which will surely lead to many fruitful results in the future, both positive and negative.

And apart from all of his fantastic and groundbreaking ideas, Tim was an inspiring teacher, lecturer, and colleague: a true powerhouse of good mathematics.

RIP, Tim Cochran.


Feedback is very welcome (as are “how do I…?” questions), especially for a brand new port such as this.


The question asks whether, rather than searching for Reidemeister moves to simplify a knot diagram, we could instead search for “big Reidemeister moves” in which we view a section which passes underneath the whole knot (only undercrossing) or over the whole knot (only overcrossing) as a single unit, and we replace it by another undersection (or oversection) which has the same endpoints.

This question (or more generally, the question of how to efficiently simplify knot diagrams in practice) loosely relates to a fantasy about being able to photograph a knot with a smartphone, and for the phone to be able to identify it and to tag it with the correct knot type. Incidentally, I’d like to also draw attention to a question by Ryan Budney on the topic of computer vision identification of knots, which is a topic I speculated about here:

A core question to which all of this relates is:

And perhaps more generally, are there any very hard ambient isotopies of knots?


Gilmer, P.M. and Masbaum, G., Maslov Index, Mapping Class Groups, and TQFT, Forum Math. 25 (2013), 1067-1106.

It makes me think a lot about just what the anomaly `actually means’…

I’ll start with some vague philosophical musings. I’m quite taken with the information physics idea that everything is information, and I think that Chern-Simons theory should really be all about information as well. But I’m not sure how. A google search turns up loads of physics papers with keywords “anomaly”, “Chern Simons”, and “entropy” in close proximity, so I’m sure that some physicists know the whole story, but I don’t. Maybe somebody could explain it in the comments?

There’s a theme in physics which says that the `interesting’ information content of naturally occurring systems on `things with boundaries’ is contained entirely on the boundary and not in the interior. Manifestations of this theme include the holographic principle, which roughly claims that the maximal entropy in a region scales like the surface area of the boundary of that region instead of like its volume (so that the entire information content of a black hole lies on its event horizon), and area laws, which roughly claim that the amount of quantum entanglement between particles in a region and in its complement depends on the area of the boundary of the region and not on the volume of the region.

Because every closed oriented 3-manifold bounds an oriented 4-manifold, this physics theme suggests a way in which physics might be unreasonably effective in low dimensional topology. Namely, a physically interesting information measure on a bounded 4-manifold ought to give rise to a 3-manifold invariant. This is sort-of the meta-intuition I have for why we have Topological Quantum Field Theory (TQFT) invariants of 3-manifolds. My vague feeling is that because Fisher information and the Chern-Simons action both have something to do with curvature, perhaps Chern-Simons Theory and quantum 3-manifold invariants have a clear and legitimate information-physics interpretation (if you know what it is, please tell me!).

Be that as it may, it turns out that quantum invariants coming from 4-dimensional TQFTs tend not quite to give numerical topological invariants of their 3-dimensional boundaries. We need an integer’s worth of extra information from the interior of the bounding 4-manifold to get a numerical 3-manifold invariant. This `extra information from the interior’ is called the anomaly. The anomaly ought not even to exist for `physically interesting information’ according to the naive interpretation of the physicists’ theme outlined above. Maybe that’s why it’s called an *anomaly*- because a physicist would wish that it not exist. A lot of surveys and entry-level texts seem to gloss over the anomaly, maybe partly for that reason.

It seems to be only recently that anomalies are becoming respectable. Perhaps this is due to Lurie’s higher categorical formalisms, which shed some light on anomalies, and perhaps to work on TQFTs for manifolds with boundaries and corners, and perhaps to interest in “type II superstring orientifolds” in which anomalies are both tricky and important, but in any event, there does seem to be a resurgence of interest in anomalies. It seems that an anomaly should be considered an (invertible) field theory itself. This paper is the most interesting recent paper I have seen on the subject… maybe I’ll talk about it another time.

Back on the subject of Chern-Simons TQFTs, or rather Reshetikhin-Turaev TQFTs, our setting is a closed oriented 3-manifold bounding an oriented 4-manifold. This 4-manifold matters only up to cobordism (i.e. two cobordant 4-manifolds are considered equivalent from the point of view of our TQFT, because, while there may be a wee bit of information in a dimension 4 interior, there is no relevant information in a dimension 5 interior). Cobordism classes of oriented 4-manifolds are classified by the signature (an integer), and I think that this is the secret reason that the signature keeps popping up in all kinds of formulae for quantum invariants of 3-manifolds.

So really, the domain of our TQFT ought to be a pair consisting of a 3-manifold and an integer, or a 3-manifold with an integer’s worth of extra structure. How to usefully specify that integer? There are a variety of approaches- p_1-structures, 2-framings, various choices of Lagrangian thisses or thats, Masbaum-Roberts explicit methods… There’s an algebraic approach as well, in which we trade our 3-manifold for a mapping class group element. Remember how every 3-manifold has a Heegaard splitting? This constructs our 3-manifold by gluing together two genus g handlebodies using an element of the mapping class group of the genus g surface. The TQFT induces a representation of the mapping class group which is only projective but not linear, because of the anomaly. Gauge-fixing/ choosing a cobordism class of a bounding 4-manifold/ fixing the anomaly corresponds to choosing a central extension of the mapping class group. And it turns out that the mapping class group has a universal central extension, and (unsurprisingly) the cohomology class of this extension is a generator of a cohomology group which is isomorphic to the integers (the most famous such generator is the Meyer cocycle, and the second most famous is its cohomological negative, the Maslov cocycle).
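In symbols, the statement that the representation is projective rather than linear reads as follows (standard formalism, not Gilmer-Masbaum's specific conventions; here $\Gamma_g$ denotes the genus $g$ mapping class group):

```latex
% rho is defined only up to phase: composing mapping classes
% produces a 2-cocycle lambda valued in U(1)
\rho(g)\,\rho(h) \;=\; \lambda(g,h)\,\rho(gh), \qquad \lambda(g,h)\in U(1).
% The class [\lambda] \in H^2(\Gamma_g; U(1)) obstructs making \rho linear;
% \rho lifts to an honest linear representation of a central extension
1 \longrightarrow \mathbb{Z} \longrightarrow \widetilde{\Gamma}_g
  \longrightarrow \Gamma_g \longrightarrow 1,
% and choosing such an extension is the algebraic counterpart of choosing
% the integer (the signature of a bounding 4-manifold) discussed above.
```
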

So the whole problem of fixing the anomaly has been algebraized, and the goal has now become to describe explicit elements of the universal central extension of the mapping class group, which are the algebraic objects which have now replaced “3-manifold together with a cobordism class of 4-manifolds which it bounds”.

That’s pretty much the goal of Gilmer-Masbaum. Some major steps which were outlined in Walker’s iconic TQFT notes are worked out explicitly. This is at long last a careful treatment of a TQFT anomaly. I know more than I knew before.

Now that we have a technically coherent and careful treatment of the anomaly in the Chern-Simons context, which seems more or less amenable to concrete computation (I haven’t followed through the details carefully enough to strengthen that statement), the next thing I’d love to read would be a survey-level treatment of the anomaly, which explains all of the different approaches to fixing it, the strengths and weaknesses of each, and how they relate to one another.

I’d also really like to understand how quantum invariants measure information (entropy), and in particular what information is measured by the anomaly. And what is the conceptual reason that Chern-Simons theory violates the theme that all interesting information lies on the boundary? Or maybe it doesn’t? I wish I understood more.

]]>

This is great! Knots looking cool in semi-mainstream media!

A completely unrelated thing I’m chuffed about is that, a week ago, I broke my 2004 record of 550km in 10 days by walking 600km, Osaka to Tokyo, in nine and a half days. The trick? Walk slowly but without resting, and sleep less.

]]>

There are several new features, such as:

1. rigorous certification of hyperbolicity (using angle structures and linear programming);
2. fast and automatic census lookup over much larger databases;
3. much stronger simplification and recognition of fundamental groups;
4. new constructions, operations and decompositions for triangulations;
5. and more—see the Regina website for details.

You will find (1) and (2) on the Recognition tab, (3) on the Algebra tab, and (4) in the Triangulation menu.

If you work with hyperbolic manifolds then you may be happy to know that Regina now integrates more closely with SnapPy / SnapPea. In particular, if you import a SnapPea triangulation then Regina will now preserve SnapPea-specific data such as fillings and peripheral curves, and you can use this data with Regina’s own functions (e.g., for computing boundary slopes for spun-normal surfaces) as well as with the in-built SnapPea kernel (e.g., to fill cusps or view tetrahedron shapes). Try File -> Open Example -> Introductory Examples, and take a look at the figure eight knot complement or the Whitehead link complement for examples.

Finally, a note for Debian and Ubuntu users: the repositories have moved, and you will need to set them up again as per the installation instructions (follow the relevant Install link from the GNU/Linux downloads table).

Enjoy!

- Ben, on behalf of the developers.

]]>

A binary operation $\triangleright$ is **distributive** over another operation $\ast$ if $x \triangleright (y \ast z) = (x \triangleright y) \ast (x \triangleright z)$. If $x \triangleright (y \triangleright z) = (x \triangleright y) \triangleright (x \triangleright z)$ then the operation is said to be **self-distributive**. Examples of self-distributive operations include conjugation $x \triangleright y = xyx^{-1}$, conditioning (assume $X$ and $Y$ are both Gaussian so that such a binary operation makes sense, essentially as covariance intersection), and convex combinations $x \triangleright y = \lambda x + (1-\lambda)y$ with $\lambda \in (0,1)$ fixed (say), and $x$ and $y$ elements of a real vector space.
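As a quick numerical sanity check of the convex-combination example (a sketch of my own; `tri` and `lam` are made-up names, not from any library), self-distributivity can be verified directly:

```python
# Illustrative check: the convex combination x > y = lam*x + (1-lam)*y
# satisfies x > (y > z) == (x > y) > (x > z) for any fixed lam.
lam = 0.3  # any fixed weight in (0, 1)

def tri(x, y):
    """The operation x 'triangleright' y, as a convex combination."""
    return lam * x + (1 - lam) * y

x, y, z = 2.0, 5.0, 11.0
lhs = tri(x, tri(y, z))          # x > (y > z)
rhs = tri(tri(x, y), tri(x, z))  # (x > y) > (x > z)
assert abs(lhs - rhs) < 1e-12    # self-distributivity holds
```

The same check fails for associativity: `tri(tri(x, y), z)` and `tri(x, tri(y, z))` differ in general.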

Two nice survey papers about self-distributivity are:

- J. Przytycki, Distributivity versus associativity in the homology theory of algebraic structures. arXiv:1109.4850.
- M. Elhamdadi, Distributivity in Quandles and Quasigroups. arXiv:1209.6518

I won’t survey these papers today- instead I’ll relate some abstract philosophical musings on the topic of associativity vs. distributivity.

Algebraic topology detects information not only about associative structures like groups, but also about self-distributive structures like quandles. I wonder to what extent distributivity can stand in for associativity. Might our associative age give way to a distributive age? Will future science make essential use of distributive structures like quandles, racks, and their generalizations? At the moment, such structures appear prominently only in low dimensional topology.

I think that there is a philosophical difference between an *associative world* and a *distributive world*. The associative world is a geometric world; a world in which space and time are important and fundamental concepts. The distributive world seems different to me. I think that it is a quantum world without space and time, in which only information exists.

Analogous to mass being a manifestation of energy via $E = mc^2$, so energy may be viewed as a manifestation of information via Shannon/Boltzmann entropy. From a physics perspective, there exists the `future physics’ idea that space and time might be emergent, and that the only truly fundamental physical quantity is information. Vedral has written a book expounding this point of view. If this idea takes hold, then future fundamental physics will include information physics, and I believe that its underlying mathematics will belong not to the associative world, but rather to the distributive world. I speculate that information physics will some day make essential use of quandles, racks, and related structures.

The associative world is more familiar, so I’ll begin with a survey of the history of the distributive world, followed by a brief survey of both worlds. Then I’d like to compare and contrast them.

But perhaps there is more in heaven and earth than is dreamt of in associative philosophy. The person credited with this observation is the great American logician C.S. Peirce, who in 1880 concluded:

These are other cases of the distributive principle… These formulae, which have hitherto escaped notice, are not without interest.

For the next century or so, like stray ants who don’t follow paths to established food sources, there were occasional bursts of realization that distributivity might be fundamental. Notable among the mavericks is M. Takasaki. Alone and isolated as a fresh Japanese math PhD in Harbin during wartime, Takasaki defined an involutive quandle in 1942 as an abstraction of the geometric idea of a symmetric linear transformation. Takasaki envisioned his self-distributive `keis’ as alternatives to groups, but his dream is still largely unrealized. In 1959 another group of mavericks, John Conway and Gavin Wraith, discovered quandles and racks, whose operations were abstractions of the conjugation operation in group theory. But it was only in 1982, with the work of Joyce, and another great independent discoverer, Matveev, that quandles and racks entered the mathematical consciousness. Other independent thinkers who discovered or rediscovered such structures (racks, in this case) include Brieskorn and Kauffman. There were ideas about using quandles in the context of geometry (Takasaki), singularity theory (Brieskorn), and symmetric spaces (Joyce), but I think that quandles and suchlike only really ever took hold in low dimensional topology.

From the knot theorist’s perspective, quandles and racks were popularized by Fenn, Rourke, and Sanderson’s 1992 discovery of rack cohomology (the quandle version is due to Carter et al., and the history is explained in Przytycki’s survey). It turns out that algebraic topology works just fine when associativity is replaced by distributivity, and quandle cocycles yield computable knot invariants. Algebraic topology of quandles and racks has become a bit of a subfield inside low dimensional topology, and this is more or less the only quasi-popular use of quandles of which I am aware.

Note: quandles and racks are only part of the mathematical consciousness of low-dimensional topologists! Physicists, biologists, chemists, computer scientists, engineers, and the rest of humanity don’t really know what a quandle is. I think that we’re a few steps ahead of the pack.

Viewed broadly enough, I think that every associative operation is an abstraction of one or more of the following archetypes:

**Addition**: The archetypal geometric picture for addition is concatenation of segments of specified lengths. To add natural numbers $a$ and $b$, start with a number line, represent the number $a$ by the segment $[0,a]$, mark a second point at distance $b$ from the point $a$ in the positive direction, representing $b$ as $[a, a+b]$, and concatenate the two line segments to represent $a+b$ by the concatenated directed segment $[0, a+b]$. Associativity is seen in the geometry (the space), in that $(a+b)+c$ and $a+(b+c)$ are both represented by the same directed segment $[0, a+b+c]$.

**Multiplication**: The archetypal geometric picture for multiplication is to fill a cycle by a cell. To multiply natural numbers $a$ and $b$, represent $a$ by the directed segment $[0,a]$ along the x-axis and represent $b$ by the directed segment $[0,b]$ along the y-axis, and form the rectangle with these two segments as sides. The product $ab$ is visualized as the area of the rectangle (the 2-cell) in the upper right quadrant whose boundary is the above rectangle. Associativity is seen from the fact that $(ab)c$ and $a(bc)$ both measure the volume of the same box in Euclidean 3-space.

In the associative world, it makes sense to represent objects by 0-cells and maps by 1-cells. Data structures can sensibly be represented using labeled graphs. A composition of maps from an object represented by a vertex $v$ to an object represented by a vertex $w$ on a graph is represented by a path on the graph between $v$ and $w$. It makes sense to represent a composition of maps in this way thanks to associativity- there is no need for brackets along the path. Maps between maps can be represented by directed higher cells, sort of like our geometric picture for multiplication. Again, this makes sense thanks to associativity.

The claim that I am making is that formalisms such as category theory and graph theory are native to the associative world. So too classical probability theory. Probabilities are added and multiplied, and they are always between $0$ and $1$. So too, the theory of computable functions relies on associative compositions.

Let’s consider the following archetypes for distributive operations:

**Convex combination**: Our first archetype is $x \triangleright y = \lambda x + (1-\lambda)y$, with $x$ and $y$ elements of a real vector space, and $\lambda \in (0,1)$ fixed.

**Conjugation**: The second archetype is $x \triangleright y = xyx^{-1}$.

Neither of these operations is associative in general. For example, for conjugation,

$(x \triangleright y) \triangleright z = (xyx^{-1})\,z\,(xyx^{-1})^{-1} \neq x(yzy^{-1})x^{-1} = x \triangleright (y \triangleright z)$.
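To make the contrast concrete, here is a brute-force check of my own (representing $S_3$ by permutation tuples; all names are made up) that conjugation is self-distributive on a whole group yet fails to be associative:

```python
from itertools import permutations

def mul(p, q):
    # composition (p*q)(i) = p(q(i)); a permutation is a tuple of images
    return tuple(p[q[i]] for i in range(len(p)))

def inv(p):
    out = [0] * len(p)
    for i, v in enumerate(p):
        out[v] = i
    return tuple(out)

def tri(x, y):
    return mul(mul(x, y), inv(x))  # conjugation: x y x^{-1}

S3 = list(permutations(range(3)))
# self-distributivity holds for every triple...
assert all(tri(a, tri(b, c)) == tri(tri(a, b), tri(a, c))
           for a in S3 for b in S3 for c in S3)
# ...but associativity fails for some triple
assert any(tri(tri(a, b), c) != tri(a, tri(b, c))
           for a in S3 for b in S3 for c in S3)
```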

Both operations have natural archetypes in the world of information (their best-known archetypes are in low dimensional topology, of course). One archetype for convex combination is from Bayesian statistics. I estimate the mean of data based on a sample, and I obtain a number $x$. But I have a prior belief that the mean should actually be $\mu$. Based on external information (*e.g.* the number of elements in the sample and my choice of standard of `absolute credibility’), I compute a constant $\lambda \in (0,1)$, and my updated estimate becomes $\mu \triangleright x = \lambda \mu + (1-\lambda)x$. Fusion operations of this sort are self-distributive.
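The update described above can be sketched numerically. Here is a toy version of my own (the numbers and the function name `fuse` are invented for illustration), with $\lambda$ computed by precision-weighting two Gaussian estimates:

```python
# Toy Bayesian fusion: combine a prior belief mu with a sample estimate x.
# The weight lam on the prior comes from the two (assumed Gaussian) variances.
def fuse(mu, var_prior, x, var_sample):
    lam = var_sample / (var_prior + var_sample)        # weight on the prior
    mean = lam * mu + (1 - lam) * x                    # a convex combination
    var = 1.0 / (1.0 / var_prior + 1.0 / var_sample)   # precisions add
    return mean, var

# a vague prior (variance 4) fused with a sharp sample estimate (variance 1)
mean, var = fuse(mu=10.0, var_prior=4.0, x=14.0, var_sample=1.0)
assert abs(mean - 13.2) < 1e-9  # pulled most of the way toward the sample
assert abs(var - 0.8) < 1e-9    # the fused estimate is sharper than either
```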

I can view convex combination as `mixing'; I mix $\lambda$ units of $x$ with $1-\lambda$ units of $y$.

An archetype for conjugation might be quantum interference, where the quantum evolution of a density operator conjugates it by a unitary operator, $\rho \mapsto U \rho U^{\dagger}$. So `interaction’ is convex combination, and `evolution’ is conjugation…
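A minimal sketch of my own of this conjugation archetype, using plain 2×2 complex matrices as nested lists (no library assumed): evolving a density matrix by a unitary preserves its trace, i.e. total probability.

```python
import math

def matmul(A, B):
    # 2x2 matrix product over nested lists
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    # conjugate transpose
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

theta = 0.7
U = [[complex(math.cos(theta)), complex(-math.sin(theta))],
     [complex(math.sin(theta)), complex(math.cos(theta))]]  # a rotation is unitary
rho = [[complex(0.75), complex(0.1)],
       [complex(0.1), complex(0.25)]]                       # density matrix, trace 1

evolved = matmul(matmul(U, rho), dagger(U))  # rho -> U rho U^dagger
trace = evolved[0][0] + evolved[1][1]
assert abs(trace - 1) < 1e-12  # conjugation preserves total probability
```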

It doesn’t make much sense to represent words in **D**istributive **N**on-**A**ssociative (DNA) structures using concatenated edges in labeled graphs, because concatenating edges would not correspond to a well-defined composition of operations (because of non-associativity). There are still notions of Cayley graphs for quandles and racks (e.g. Chapter 4 of Winker’s thesis); I don’t feel qualified to comment on these.

The natural way to represent words in DNA structures, I would think, would be to walk along (modified) tangle diagrams. A Reidemeister III move on tangle diagrams coloured by distributive structures makes sense, because $x \triangleright (y \triangleright z) = (x \triangleright y) \triangleright (x \triangleright z)$:

One idea behind tangle machines is to make use of this fact to do distributive algebra on tangles. So, while for an associative operation one might diagrammatically represent a product in some way like this:

In a distributive world we might represent it maybe like this:

Is there a DNA (**D**istributive **N**on-**A**ssociative) analogue to category theory, where morphisms distribute but don’t have associative composition? I wonder… I also wonder whether quantum probability, suitably formulated using convex combination and conjugation operations, would be a valid DNA analogue to probability. If we take Reidemeister 2 seriously, and apply it to the DNA structure of Gaussian distributions whose operation is conditioning, we have to define `unconditioning’ X by Y, and the resulting probability might be negative. Classically this makes no sense, but from a quantum perspective it’s fine, and even natural; it feeds my confirmation bias for the philosophical thesis we are considering. Consider the following quote by Feynman:

The only difference between a probabilistic classical world and the equations of the quantum world is that somehow or other it appears as if the probabilities would have to go negative.

Most quantum topology of tangles is actually associative, in that we speak of the *category of tangles*, whose operation is stacking. Morphisms are tangles with tops and bottoms:

Stacking is an associative operation. Via a TQFT formalism, braided monoidal bla bla bla categories give rise to tangle invariants and to knot invariants.

Dror Bar-Natan suggested that this might not really be the right way to think about tangles. Tangles should not have `tops’ and `bottoms’- such information certainly does not exist topologically. Instead, endpoints of tangles should be marked points around a disc (more generally a disjoint union of spheres with holes):

Surprisingly, this disc, which (partially following Bar-Natan) I think we should call the `firmament’, is quite important: See Dror’s “cosmic coincidences” talk.

You then concatenate by connecting two endpoints, and extending the firmament appropriately. This way of thinking is behind Dror’s Khovanov homology work, and current work on various w-knotted objects by him and collaborators.

A major difference between the “stacking” worldview and the “circuit algebra” worldview is that the former views a tangle as a morphism from data stored in the `boundary points at the bottom’ to data stored in the `boundary points at the top’. So a tangle encodes an operator (reference: Chapter 3 of Ohtsuki’s book Quantum Invariants). But in the latter worldview, a tangle just encodes some relationship between a bunch of data at endpoints. In this worldview, a tangle cannot encode a mapping in any meaningful sense- this worldview does not support the idea of operator invariants of tangles. This worldview isn’t imposing any non-topological artificial structure on tangles. All it has are the Reidemeister moves, including Reidemeister III. So tangles in this sense are a distributive-world structure.

As an example, let’s consider a single crossing. When tangles express morphisms to be stacked, this `represents’ an R-matrix, a linear transformation from a vector space to itself. Bottom happens before top, and there’s an implicit time axis. But with no up-down information, a crossing represents a transition from one undercrossing arc to the other by way of an overcrossing arc, $y \mapsto x \triangleright y$, where $x$ labels the overcrossing arc. No braided monoidal categories anywhere in sight.
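This crossing rule genuinely computes invariants. A sketch of my own, using the standard dihedral quandle $x \triangleright y = 2y - x \pmod 3$: counting colourings of the trefoil’s three arcs, with one crossing relation of the form (over-arc) $\triangleright$ (incoming under-arc) = (outgoing under-arc), distinguishes the trefoil from the unknot.

```python
from itertools import product

def tri(x, y):
    # dihedral quandle on Z/3: x > y = 2y - x (mod 3)
    return (2 * y - x) % 3

# Trefoil: arcs a, b, c, one relation per crossing.
trefoil = sum(1 for a, b, c in product(range(3), repeat=3)
              if tri(b, a) == c and tri(c, b) == a and tri(a, c) == b)
unknot = 3  # a single arc may take any of the 3 colours
assert trefoil == 9 and unknot == 3  # 9 != 3, so the trefoil is knotted
```

This is Fox 3-colouring in quandle language: the count of colourings is unchanged by all three Reidemeister moves.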

Having tops and bottoms to tangles is nice because associative structures tend to be more amenable to explicit computation. Computing in a quandle is usually very hard, perhaps **because** the Turing machine formalism itself belongs to the associative world. My vague thought is that we can probably do a lot better in the future using different sorts of (probabilistic?) tools… but that’s a speculation for another day. I also think that distributed and parallel computing could provide better ways to compute in distributive structures, and may in turn have distributive algebraic models (Marius Buliga has some work in this direction: e.g. Chemlambda, joint with Louis Kauffman).

Although people have begun looking at the distributive world only quite recently, it’s already rife with terminology. The more this world is explored, the more terminology there will be, so I’d just like to point out some parallels. Consider the following axioms on a set $Q$ with a set of binary operations $\triangleright$:

**Idempotence**: $a \triangleright a = a$ for all $a \in Q$ and for all operations $\triangleright$ in the set.

**Injectivity**: If $a \triangleright x = a \triangleright y$ for some $a \in Q$ and some operation $\triangleright$, then $x = y$.

**Distributivity**: $a \triangleright (b \triangleright' c) = (a \triangleright b) \triangleright' (a \triangleright c)$ for all $a, b, c \in Q$ and all operations $\triangleright, \triangleright'$ in the set.

If the set of operations contains only one element $\triangleright$, and assuming that the map $x \mapsto a \triangleright x$ is also surjective for all $a$, we have the following cases.

- If $\triangleright$ is both distributive and idempotent then you’re looking at a *spindle*.
- If $\triangleright$ is distributive and injective then you’re looking at a *rack*.
- If all three, then you’re looking at a *quandle*.
- Only distributive, and you’ve got yourself a *shelf*.
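For a single operation on a finite set, these definitions can be checked mechanically. A small classifier sketch of my own (the names `axioms`, `classify`, and `tri` are invented), using the left-distributive conventions above:

```python
def axioms(Q, tri):
    dist = all(tri(a, tri(b, c)) == tri(tri(a, b), tri(a, c))
               for a in Q for b in Q for c in Q)
    idem = all(tri(a, a) == a for a in Q)
    # on a finite set, x -> a > x injective and surjective <=> a full orbit
    bij = all(len({tri(a, x) for x in Q}) == len(Q) for a in Q)
    return dist, idem, bij

def classify(Q, tri):
    dist, idem, bij = axioms(Q, tri)
    if not dist:
        return "not even a shelf"
    if idem and bij:
        return "quandle"
    if bij:
        return "rack"
    if idem:
        return "spindle"
    return "shelf"

# Example: the dihedral quandle on Z/3, a > b = 2b - a (mod 3)
assert classify(range(3), lambda a, b: (2 * b - a) % 3) == "quandle"
```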

Lots of operations and you might add words like *multi-*, so you have multiracks, multiquandles, multishelves… or maybe G-families of quandles, or irq’s, or whatever.

Staring at these DNA structures, though, they look quite parallel to familiar associative structures. Injectivity parallels invertibility of elements (*i.e.* it tells us that the map $x \mapsto a \triangleright x$ is left-invertible) and distributivity parallels associativity. I’m not sure what the parallel associative concept to idempotence is (idempotence involves both the element $a$ and the operation $\triangleright$), but I think it might be orthogonality, because $a \triangleright a = a$ reminds me of relations in orthogonal groups. Also, conjugation distributes over convex combination, but not vice versa. We might therefore think of convex combination as being parallel to addition, and conjugation as parallel to multiplication. So, using the adjective `DNA’ for `distributive non-associative’, a quandle might be a `DNA orthogonal group’, a rack might be a `DNA group’, and if you have both conjugation and convex combination, maybe you have a `DNA near-field‘.

Why would you use a structure like that? Well, as an example of how it might be useful, here’s an AND gate without trivalent vertices, where two fixed elements stand in for the digits $0$ and $1$ correspondingly. Some of the operations are convex combinations, and one is conjugation.

It seems to be very natural to consider structures where the set of operations has lots of elements- it doesn’t inhibit their algebraic topology, it occurs naturally in our archetypes (in the Bayesian probability archetype, to expect all `new’ information to have the same credibility is unnatural; see also Buliga’s work on irq’s, emergent algebras, and related structures- all DNA structures- HERE and HERE), and it allows us to construct various topological invariants, such as invariants of knotted handlebodies (“A G-family of quandles and handlebody-knots” by A. Ishii, M. Iwakiri, Y. Jang, and K. Oshiro).

The term `DNA’ suggests that distributive non-associative structures are in some way fundamental (like DNA is fundamental to cells in living organisms), and I think that they are. There are some simple transforms between the associative world and the distributive world too: given a group, you can look at its associated conjugation quandle; conversely, the automorphisms of a quandle form a group. In another direction, you can represent a tangle diagram by a graph, for example by representing each arc as a vertex, drawing edges from the vertex representing the overcrossing arc to the two vertices representing the undercrossing arcs, and drawing an edge between the undercrossings. By doing this you’ve thrown away all your symmetries- graphs are rigid, and there are no Reidemeister moves on graphs. This construction is also partially reversible.
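Both transforms just mentioned can be exhibited on a small example. A brute-force sketch of my own (all names invented): the conjugation quandle of $S_3$, and a verification that its quandle automorphisms contain the identity and are closed under composition, i.e. form a group.

```python
from itertools import permutations

def mul(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

def inv(p):
    out = [0] * len(p)
    for i, v in enumerate(p):
        out[v] = i
    return tuple(out)

G = list(permutations(range(3)))  # the group S3

def tri(x, y):
    return mul(mul(x, y), inv(x))  # the conjugation quandle on S3

# brute-force the quandle automorphisms: bijections f with f(x > y) = f(x) > f(y)
autos = set()
for images in permutations(G):
    f = dict(zip(G, images))
    if all(f[tri(x, y)] == tri(f[x], f[y]) for x in G for y in G):
        autos.add(tuple(images))

identity = tuple(G)
assert identity in autos
for a in autos:  # closed under composition, so the automorphisms form a group
    fa = dict(zip(G, a))
    for b in autos:
        fb = dict(zip(G, b))
        assert tuple(fa[fb[g]] for g in G) in autos
```

At minimum, conjugation by each group element gives a quandle automorphism, so `autos` contains (at least) an isomorphic copy of the inner automorphism group.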

I think there’s a whole distributive world waiting to be discovered, and we’re just looking at the tip of the iceberg. I can’t wait to see these distributive structures play a role outside low dimensional topology, in other parts of mathematics and in other sciences!

]]>