A recently published paper by Gelca and Uribe, which is also the topic of a book by Gelca and some nice slides, constructs the MOO (Murakami-Ohtsuki-Okada) invariant from theta functions completely classically, essentially without using anything quantum at all (although the representation theory behind it was originally developed for quantum-mechanical purposes). Thus, like the Alexander polynomial and the linking number, MOO is seen to be quantum but also classical.

There is also a more analytic, heat-equation-based way of seeing the same thing due to Andersen, but I haven’t read Andersen’s paper and therefore I can’t say anything about that.

Amongst the most useful functions in mathematics are the trigonometric functions. They arise as cross-sections of the complex exponential function, they are periodic with period 2π (I’ve become a proponent of tau), and they allow you to parametrize the points on a circle.

The formula for the theta function, as formulated by Jacobi (in one standard convention), is

$$\theta(z;\tau)=\sum_{n\in\mathbb{Z}}e^{\pi i n^{2}\tau+2\pi i n z}.$$

Theta functions can be thought of as sort of complex analogues of trigonometric functions. They are not doubly periodic (otherwise they would have to be constant, by Liouville’s Theorem), but they are as close to being doubly periodic as possible, satisfying

$$\theta(z+m+n\tau;\tau)=e^{-\pi i n^{2}\tau-2\pi i n z}\,\theta(z;\tau),$$

where $m$ and $n$ are integers, and $\tau$ has positive imaginary part. Note that adding an integer to the $z$-input of a theta function leaves it unchanged. Riemann taught us that we should think of such a function as a multi-valued function on the torus, that is, the complex plane quotiented by the lattice $\mathbb{Z}+\tau\mathbb{Z}$.
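For concreteness, in one standard convention (not necessarily the one Gelca and Uribe use), the quasi-periodicity in the $\tau$-direction can be checked directly from the series by completing the square and reindexing the sum:

```latex
\theta(z;\tau) = \sum_{n\in\mathbb{Z}} e^{\pi i n^{2}\tau + 2\pi i n z}
\;\Longrightarrow\;
\theta(z+\tau;\tau)
  = \sum_{n\in\mathbb{Z}} e^{\pi i (n+1)^{2}\tau - \pi i \tau + 2\pi i n z}
  \overset{m=n+1}{=} e^{-\pi i \tau - 2\pi i z}\sum_{m\in\mathbb{Z}} e^{\pi i m^{2}\tau + 2\pi i m z}
  = e^{-\pi i \tau - 2\pi i z}\,\theta(z;\tau).
```

Iterating gives the factor for $z\mapsto z+n\tau$, while periodicity under $z\mapsto z+1$ is immediate, since $e^{2\pi i n}=1$.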

Personally, I really like theta functions. In an alternate life I’d like to study theta functions. In another alternate life, I’d like to study K-theory. These are all things I think are really beautiful, but which I never properly sank myself into.

Anyway, the theta function is genuinely periodic in the integer direction. If we’re adding $m+n\tau$ to the $z$-argument, which is $m$ times the meridian of the torus plus $n$ times its longitude, then we can throw the $m$ away, contracting the meridian to a point. Geometrically, this is like viewing the torus as the boundary of a solid torus, and this is where handlebodies, and later Heegaard splittings, begin to enter the picture.

Riemann generalized theta functions to higher genera, and somebody (Mumford?) generalized them to depend on a natural number $N$ (in Riemann or Jacobi’s case $N=1$) to yield *theta functions with characteristic*, so that adding $N$ times the meridian to $z$ acts as the identity. It is these Riemann theta functions with characteristic which are related to the MOO invariant.

Gelca and Uribe’s basic idea is to think of theta functions as oriented framed multicurves in handlebodies. These basically encode the curves around which the theta function is periodic. Thus, the torus meridian is the Jacobi theta function, the Jacobi theta function with characteristic $j$ is $j$ times the meridian, and in higher genus, a theta function is just an oriented multicurve which represents some element in a lagrangian subspace of the first homology of the surface.

Theta functions are naturally acted on by two groups, the Heisenberg group and the modular group (AKA the mapping class group of the surface).

Acting by the Heisenberg group multiplies the theta function by some exponential function, which can in turn be represented as an oriented multicurve representing an element of the **other** lagrangian. This is where the framing comes in- the theta function is periodic in the contractible direction, and the multiplicative factor is captured by its framing.

Because we only really care about the homology class of the oriented framed multicurve we can quotient by a load of relations and what we really have here is a skein module called the *reduced linking number skein module*.

This whole construction is wonderfully elucidated by Example 5.6 on Pages 240-241 of Gelca’s book, but all sources of this I can find are behind paywalls, I can’t be bothered to scan and then cut and paste, and I can’t be bothered to redraw it… I apologise.

The action of the mapping class group turns out to be more or less what you’d expect, as long as you remember that it is the class of the multicurve in the skein module, not the multicurve itself, which represents the theta function. Namely: push the framed multicurve representing the theta function to the boundary of the handlebody in all possible ways, Dehn-twist the picture, and push the result back into the handlebody, and you get the correct theta function (times whatever multiplicative factor).

Now things get a bit more interesting. There’s an identity in the theory of theta functions called the exact Egorov identity, which turns out to say that handlesliding the multicurve representing the theta function (or rather a certain theta series) over the curve representing an element of the mapping class group gives the identity. In other words, invariance under handleslides.

And now for MOO. The reduced linking number skein module modulo the action of the mapping class group turns out to be isomorphic to the field of complex numbers. Viewing the theta function multicurve as a surgery link, such that surgery on $S^3$ along this link gives our manifold, the image of this link is the MOO invariant. It is invariant under handleslides by the exact Egorov identity. We have thus constructed the MOO invariant without a shred of quantum field theory.

I’ll conclude with some random thoughts:

- Yoshida has a paper in Annals proposing to reformulate the Reshetikhin-Turaev invariants in terms of “higher level theta functions”. Hansen and I wrote some notes on this paper… Teleman pointed out a gap. I’m currently doubtful that such a simple formula has any chance to hold.
- I have wondered for a long time whether and in what sense 3-manifolds “exist”. Surfaces certainly “exist” because they arise inevitably from other objects which “exist”, for example as closed Riemann surfaces. But it’s never been clear to me that 3-manifolds “exist” in the same way, and reading Poincaré’s original papers, it wasn’t clear to Poincaré either.
At first I was excited about the Gelca-Uribe paper, because it seemed they were pulling 3-manifolds “out of a hat”, and that their Heegaard splittings were an inevitable consequence of the theory of theta functions. I no longer feel that this is the case. Their whole theory is really about skein modules, and can be reformulated in diagrammatic algebraic terms without reference to topology.

- Continuing on from that last thought, somebody ought to reformulate their paper completely diagrammatically! Doing so might provide new ways of thinking about aspects of the theory of theta functions, and so might be useful in a wider sense (I dream). For example, the exact Egorov identity corresponds to invariance under handleslides. But we know that handleslides are unzips of embedded theta graphs. So if embedded curves correspond to theta functions, what do embedded theta graphs correspond to? What are embedded trivalent graphs in analytic language? What are unzips?
- It would be cool to explain the whole idea in 2-3 pages with no reference to fancy concepts and formulae, and with nothing quantum in sight. The core idea (which I interpret as sort of a diagrammatic calculus for theta functions) is so simple and elegant that I believe it deserves to be better known.


- Major improvements to the link and planar diagram component, including link simplification, random links, and better documentation.
- Basic support for spun normal surfaces.
- New extra features when used inside of Sage:
  - HIKMOT-style rigorous verification of hyperbolic structures, contributed by Matthias Goerner.
  - Many basic knot/link invariants, contributed by Robert Lipshitz and Jennet Dickinson.
  - Sage-specific functions are now more easily accessible as methods of Manifold and better documented.
  - Improved number field recognition, thanks to Matthias.
- Better compatibility with OS X Yosemite and Windows 8.1.
- Development changes:
- Major source code reorganization/cleanup.
- Source code repository moved to Bitbucket.
- Python modules now hosted on PyPI, simplifying installation.

All available at the usual place.


Deraux, M. & Falbel, E. 2015 Complex hyperbolic geometry of the figure-eight knot.

Geometry & Topology 19, 237–293.

In it, the authors study a very different geometric structure for the figure-eight knot complement, as the manifold at infinity of a complex hyperbolic orbifold.

Geometrization contains a characterization of manifolds that admit a geometry modeled on real hyperbolic 3-space. CR-geometry, on the other hand, is about manifolds which admit a geometry that is locally modeled on the CR structure (the largest subbundle of the tangent bundle that is invariant under the complex structure) of $S^3$, viewed as the boundary of the unit ball in $\mathbb{C}^2$. Such a structure is called *uniformizable*, roughly, if our manifold M has a discrete cover which sits inside $S^3$, with the CR-structure on M lifting to the standard CR-structure on $S^3$ (I think).

Quotients of $S^3$, such as lens spaces, give the simplest examples of manifolds with uniformizable spherical CR-structures. A nice question, which is a bit reminiscent of a part of Geometrization, is to classify topologically which manifolds admit such structures, and how many such structures there are.

CR structures are very good to have around; CR-structures are the kind of structures you get on real hypersurfaces in complex manifolds. There is a whole theory around them which parallels Riemannian geometry, and there seems to be a lot of deep analysis going on (which I know next to nothing about) around trying to understand various fundamental operators and their spectra (sub-Laplacian, Kohn Laplacian…).

This paper gives a really interesting example of a manifold with a uniformizable spherical CR-structure, namely the figure-eight knot complement. This manifold played an important motivational role in the development of real hyperbolic geometry, and the hope is that here too it will provide a good motivational example to study, which is simple enough to work with “by hand” but complicated enough to exhibit somehow “generic” behaviour.

A quite fascinating research programme! Looking at a class of manifolds as manifolds at infinity of complex hyperbolic orbifolds! I look forward to reading this paper and to learning more about this!


V.F.R. Jones,

Some Unitary Representations of Thompson’s Groups F and T, arXiv:1412.7740.

Links occur as braid closures, and so links can be studied via braid theory. This is the starting point for the Jones polynomial, and it’s very nice, because braids form a group (good algebraic property) which is orderable (even biorderable) and automatic (even biautomatic). But here are some complaints I have:

- Combing a link introduces an artificial structure- a `timeline’ where strands always `move forward through time’- which a link doesn’t naturally have. By trading links for braids, maybe we’re missing out on some essential *je ne sais quoi* that makes a link a link.
- More concretely, all known quantum link invariants come from Lie (bi)algebra constructions. But not all subfactors arise this way. For example, the Haagerup subfactor does not. If we believe that quantum invariants should correspond to finite index subfactors and not just to e.g. quantum groups, then surely braids are the wrong way to go.
- Everyone and their cousin uses braid groups nowadays.

Jones’s manuscript provides something new: a way to obtain links from elements of Thompson groups, and elements of Thompson groups from links. His algorithm seems just as natural as combing tangles to obtain braids. And it leads directly to new polynomial link invariants.

Thompson groups may be just as nice as braid groups. At the very least, they’re new in this context. It’s an open problem whether Thompson’s group F is automatic or amenable- though orderability, at least, is known: F is even bi-orderable. Indeed, Thompson groups might be *much nicer* than braid groups, because, unlike braid groups on three or more strands, F doesn’t contain a subgroup isomorphic to the free group on two generators.

As far as I know, Thompson groups have only shown up in mathematics so far as counterexamples. But in Jones’s paper, they naturally pop out of the structure of planar algebras. There’s nothing artificial about the correspondence between knots and Thompson group elements.

**Edit**: Yemon Choi points out a very nice interpretation of Thompson group elements which both makes sense in context and in which the Thompson group is not a counterexample. The reference is HERE. What I said about Thompson groups only being counterexamples is even more false than that- they’ve featured before in quantum topology (and I wonder what the relation is, if any, with the construction in Jones’s new preprint).

**Silly convention**: Elements of braid groups are called braids. I suggest that elements of Thompson groups be called Thompsons. Indeed, two (dependent) elements of the Thompson group are used to construct links, and so, to be precise: What is at hand is truly a case of Thompson and Thompson.

**Edit**: Scott Carter points out that “Thompson and Thompson” ought to be Thomson and Thompson. Which makes sense to me- the link is a diagrammatic inner product of one Thompson with another Thompson acted on by a representation $\pi$. Perhaps it’s reasonable to call a new polynomial invariant the `Thomson and Thompson Polynomial’ after all, with the `p’ reminding us of the action of $\pi$.

Jones investigates the scaling limit of tangle diagrams, by letting the number of boundary points of the tangle fill out the boundary disc of the diagram (tangles with more and more strands). It wouldn’t make much sense, at least diagrammatically, to do this all at once- what would a tangle with uncountably many endpoints look like?- but what Jones does instead is to present a directed set construction, increasing the number of tangle endpoints inductively by gluing on tangles built from caps and cups. This mimics the block spin renormalization procedure from physics on a diagrammatic level. Jones credits Dylan Thurston with having suggested this idea to him.

Because the only way to add endpoints to a tangle is with caps or with cups, the number of intervals between tangle endpoints is always even. Jones chooses the interval endpoints (tangle endpoints) to correspond to dyadic rationals, that is to numbers of the form $a/2^n$ for integers $a$ and $n$. The group of PL homeomorphisms of $[0,1]$ whose points of non-differentiability are dyadic rationals and whose slopes are all powers of $2$ is Thompson’s group $F$, and this is how Thompson groups enter the picture.
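The defining conditions on these PL maps are easy to verify with exact rational arithmetic. Here is a minimal sketch (function names and conventions are mine, not Jones’s) checking that a PL map of [0,1], given by its breakpoints, lies in Thompson’s group F:

```python
from fractions import Fraction

def is_dyadic(q):
    """A rational is dyadic iff its reduced denominator is a power of 2."""
    d = Fraction(q).denominator
    return d & (d - 1) == 0

def is_power_of_two(q):
    """True iff the positive rational q equals 2**k for some integer k."""
    q = Fraction(q)
    if q <= 0:
        return False
    return (q.numerator & (q.numerator - 1)) == 0 and \
           (q.denominator & (q.denominator - 1)) == 0

def in_thompson_F(breakpoints):
    """Check whether the PL map of [0,1] with the given breakpoints
    [(x0, y0), ..., (xn, yn)] lies in Thompson's group F: it must fix
    0 and 1, break only at dyadic rationals, and have slopes that are
    powers of 2 on every linear piece."""
    pts = [(Fraction(x), Fraction(y)) for x, y in breakpoints]
    if pts[0] != (0, 0) or pts[-1] != (1, 1):
        return False
    if not all(is_dyadic(x) and is_dyadic(y) for x, y in pts):
        return False
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x1 <= x0 or not is_power_of_two((y1 - y0) / (x1 - x0)):
            return False
    return True
```

For example, the standard generator of F with breakpoints (0,0), (1/2,1/4), (3/4,1/2), (1,1) (slopes 1/2, 1, 2) passes this check, while a map breaking at 1/3 does not.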

The preprint opens up vast new vistas, and the relationship between links and Thompson groups which it points out is at once so powerful and so natural (*e.g.* the analogous construction for a different planar algebra gives the Tutte polynomial of a planar graph) that it would be difficult to imagine the new polynomials which his construction defines *not* once again reshaping major parts of quantum topology.

Two challenges are presented at the end of the preprint:

- Links are obtained from Thompsons by an *inner product*, that is, by pairing a tangle obtained from the Thompson with a mirror image of another tangle obtained in a different way from the same Thompson. This sort-of reminds me of things like Heegaard splittings, by the way, although of course that’s a very different idea. Anyway, the $F$-index of a link is the smallest number of leaves of a Thompson (presented as a pair of rooted binary trees) required for this construction. Investigate bounds for the $F$-index.
- Find and prove a Markov Theorem for Thompson group presentations. This has got to be a major goal. I would be totally amazed if the `Markov moves’ for Thompsons were anything other than the set of moves on pairs of trees corresponding to the Reidemeister moves, as Jones himself suggests. This would make Thompsons *much more* natural group element representations of tangles than braids are.

Another obvious question is how to compute the polynomial invariant defined in this preprint, at least for small examples, and to establish whether it satisfies a reasonable-looking skein relation.

To complain (never forget to complain!), like the Jones polynomial, the construction of this invariant also uses planarity in an essential way, and therefore it is difficult to see how its construction might extend to virtual and to welded tangles.

I thank Ian Agol for drawing my attention to this preprint.


For the record: When I was in graduate school, I had no intentions of doing anything but becoming a math professor. Things didn’t change during my postdoc either. In fact, even in the spring of 2009, as the financial crisis was eviscerating the job market and unemployment was staring me grimly in the face (until Bus Jaco somehow convinced the right people to let me fill the vacancy created when Joseph Maher left OSU, but that’s another story…) my last-minute applications were all for one-year visiting lecturer positions.

At the time, my assessment of the private sector vs. academia was pretty bleak: your salary is higher, but the price you pay for that is longer hours at a mind-numbing job with a micro-managing boss. But it turns out things aren’t actually that extreme. In fact, there are a lot of nice things about the private sector that make the comparison much more subtle, even if you take money completely out of the picture.

First, let’s talk about working hours: Many youngsters (and non-academics) point to the flexible schedules and long holidays as one of the perks of academia. But by the time you make it into the ranks of the tenure-track, it becomes clear that the flexibility just means that you get to pick which 60 (80?) hours a week you work. And vacations are the times when you get to work on your real work (research) or else feel guilty for not doing all the writing that you didn’t get done during the semester.

In the private sector, on the other hand, policies regarding time vary quite a bit. I’ve heard that there are some jobs where you’re expected to be in the office 12 hours a day, whether or not you have anything to do. But there are many more companies (including Google and most of its peers) that value work-life balance and have policies that explicitly try to discourage their employees from working harder than is healthy for them. They don’t just do this because they’re nice people – they do it because they want to prevent burn-out, which in the long run counteracts any extra productivity that would come from an 80-hour week. (Don’t believe me? There are books about this.) I spend around 40 hours a week in the office and almost never bring work home. Some of my new colleagues work longer hours than that, but overall I think the attitude towards work/life balance is much healthier.

So this brings up an interesting point: As great as it is to be completely in charge of your own day-to-day and longer-term activities, there are actually some benefits to having a manager who is aware of and has a stake in your daily work: Namely, they can take a more objective view of what you’re doing and help to make corrections, such as pushing back against your self-applied pressure to work harder than is healthy. Granted, not all managers will do this, but if you find the right job at a good company, they likely will. (Google makes all managers go through training on this sort of thing, and I expect there are similar policies at many other companies.)

Next, about the “mind numbing” work: No, the work that I do is not as deep or as beautiful as the mathematics that I previously got to learn and think about. It doesn’t require the sorts of insights that come to you surprisingly at random times, and double your pulse rate even though you haven’t moved. It doesn’t require thinking about a problem during every spare moment for days or weeks or months. But it is quite interesting, and exercises most of the mental muscles that one works so hard to develop in a math graduate program.

In particular, it turns out that (at least in my experience) writing a computer program is a lot like writing a math paper, or a proof: You start with an overview of the thing that you want to produce, at a high-level of abstraction. Then you break it up into smaller pieces, and work out the details of each piece at a low-level of abstraction. Then you start putting these pieces together, slowly working from lower levels of abstraction to higher levels. Sometimes, you realize that they don’t fit exactly right, so you have to drop down to a lower level of abstraction to change some of the components before working your way back up.

It turns out that the ability to switch between different levels of abstraction like this is highly prized and relatively rare, even though it’s sometimes taken for granted in academia. It’s basically the same skill that you need to read and write proofs in mathematics (which is the reason I believe that math majors/Ph.D.s can make better software engineers than computer science students. But don’t tell anyone I said that…) Programming is basically this process with some syntax (which is the easy part) layered on top.

In mathematics research, the fun part is coming up with the ideas and the painful part is organizing them into a paper. In software engineering, coming up with the ideas that will solve a given problem (or determining that it can’t be solved) is usually pretty straightforward, if not trivial, by putting together ideas that have already been worked out by others. But implementing them (i.e. writing source code) is quite interesting, and significantly less painful (at least in my experience) than writing papers. Getting the pieces of a computer program to fit together tends to take a lot more time, even for very simple systems, because you can’t gloss over the details as much as you do in a math paper, but that also makes it more interesting.

So the difference between a mathematics research project and developing a complex piece of software is (again) a trade-off. For me, it comes down to how much you enjoy grappling with nearly unsolvable problems, versus how much you dislike writing papers. (And if you actually like writing papers, then lucky you…) There’s also something to be said for being able to know, at the outset of a project, that there is a solution and roughly how long it will take to finish it.

And, of course, in the private sector you generally don’t get to teach. For some this may be a drawback, and for others, maybe not. For me it was yet another trade-off – there were a lot of things I liked about teaching, and other things I didn’t. One thing I didn’t like was the constant tension between time/energy spent on teaching vs. research.

Finally, no discussion of an academic career would be complete without a discussion of committees. I actually enjoyed much of the time I spent in committee meetings discussing important questions about how the department should be run. These were usually decisions that needed to be made, and required the sort of experience and insights that could only come from faculty members. Sometimes we spent more time than we needed to splitting hairs, or discussing issues that didn’t deserve the amount of time we devoted to them. But overall, I think most of the committees I had a chance to experience served an important purpose.

At the same time, they required a lot of time, and that time commitment increases every year, as you lose the protections designed to give pre-tenure faculty time for research. So it’s not uncommon, as your academic career progresses, for research (which for me was one of the main draws of an academic career) to become a smaller and smaller part of your day-to-day activities.

In the private sector, on the other hand, decisions tend to get made more quickly and with less discussion. I have very few meetings at Google, and most last less than half an hour. Other people (such as managers) spend much more time in meetings than I do, but they work hard to keep them short and to the point. In general, a lot of the types of decisions that are left to faculty in academia are delegated to specialists or managers in the private sector. This obviously has both benefits (in terms of time) as well as drawbacks (in terms of control.)

So, that should give you a rough idea of the factors I considered when I decided to leave academia. I want to stress that these are all trade-offs, and how you weigh them is a personal decision. There are many mathematicians who would be happier in academia, but there are also many who would do better in the private sector.

While it may be scary to think of all the bright young minds that academia could lose to the private sector, my experience suggests that there won’t be a shortage among the mathematicians who stay. In fact, I would argue that having such a high ratio of job seekers to jobs is much worse for the morale of the field, makes it easier for administrators to replace tenure lines with contingent faculty positions, and ends up forcing many promising young mathematicians to leave against their will anyway. If more recent Ph.D.s choose to go into the private sector, then the ones who stay will tend to be the ones who are particularly devoted to teaching (on top of their excellent research programs) which is what our universities need. Having the remaining math Ph.D.s go out into the world (willingly) and show how capable they are will both help justify the NSF grants that contributed to their education and encourage more students to enter math graduate programs, or become math majors.

When I was a graduate student, I didn’t take an objective look at the option of entering the private sector, and I don’t think many of my peers did either. But my impression is that attitudes about this are starting to change, both among students and faculty. I hope that in the future, every young mathematician will give serious thought to both options, and will be able to make a well informed decision.


That friend is also an electrical engineer and knows some things about signal processing. This was important to me — we had some external criterion (from outside of mathematics) for determining whether or not the insights from Persistent Homology were interesting.

So I said “okay!” Not really knowing what I was getting myself into.

We got to work, over some rather cool, soggy Victoria winter days.

The idea was to take piles of data from music, put various standard metrics on them, feed them into software that computes the barcodes, and analyze the output, to see if the barcodes see anything that we did not already know about the data. The answer turns out to be yes.

Sometimes the barcodes saw some rather subtle and insightful things. Sometimes they saw some subtle and relatively mundane things. Let me tell you about a few.

First, a quick summary of persistent homology. Jesse has talked about this quite a bit on the blog, but if you’ve forgotten or missed it, the idea goes back to Vietoris.

Given a finite metric space X, you form a family (parametrized by a non-negative real number ε≥0) of simplicial complexes X(ε) whose vertex set is X. You give X(ε) an edge between two vertices whenever their distance is less than ε; similarly, you give it a simplex whenever the pairwise distances between all the prospective vertices are less than ε. The family X(ε) forms a filtration of a contractible simplicial complex X(∞), so the homology of the spaces X(ε) is a family of abelian groups together with the maps induced by inclusion, which eventually “dies” when the parameter ε is larger than the diameter of the metric space X. At the other end, the homology of X(0) is that of a finite discrete space. The homology classes that exist for large ε-intervals are called persistent, and one imagines them as describing somewhat relevant shapes in your data. The barcodes essentially represent the intervals over which homology classes live.
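The 0-dimensional part of this story is easy to compute by hand: the H0 barcode is exactly single-linkage clustering, computable with a union-find structure. A minimal sketch (function names are mine; real computations in higher dimensions use a library such as Ripser or Dionysus):

```python
import itertools

def h0_barcode(points, dist):
    """Compute the 0-dimensional persistence barcode of a finite metric
    space via union-find: each point is born at epsilon = 0, and a
    component dies at the length of the edge that merges it into
    another component in the Vietoris-Rips filtration."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    # All edges of the complete graph, sorted by length (filtration order).
    edges = sorted(
        (dist(points[i], points[j]), i, j)
        for i, j in itertools.combinations(range(n), 2)
    )

    bars = []
    for eps, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri
            bars.append((0.0, eps))  # a component born at 0 dies at eps
    bars.append((0.0, float("inf")))  # one component persists forever
    return sorted(bars, key=lambda b: b[1])
```

For instance, four points 0, 1, 2, 10 on a line give bars dying at 1, 1, 8, and ∞: two short bars, one long bar recording the gap, and the everlasting component.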

The subject of Persistent Homology is advancing fairly rapidly at present, but there are still many unsolved foundational problems in the field. If a person envies Milnor, Thom and Serre their successes in setting up the foundations of algebraic and differential topology, I can’t imagine a better field to go into.

Anyhow, back to our computations. One of the more interesting computations we looked into was the homology of certain points in the *space of rhythms*. Here we think of rhythms as the periodic beating of a drum. To make a *space* from rhythms, we consider the finite-subset space of a circle. Typically this is denoted exp(S^1). A point of exp(S^1) is a finite (non-empty) subset of the unit circle. The metric on exp(S^1) is the Hausdorff distance: the largest distance from a point of either set to the nearest point of the other set. One thinks of a point in exp(S^1) as an explicit periodic beating of a drum, with one point for each beat, and the beat repeats itself every 2π units of time. This does not suffice, because two essentially-identical beats can be phase shifts of each other, so our periodic rhythm space is the metric quotient exp(S^1)/SO_2, where SO_2 acts on the circle in the natural (linear) way. So this model for beats ignores many things — for instance, the loudness and duration of a drum strike are ignored. There is no notion of different types of drums in this model, and so on.
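To make the rhythm-space metric concrete, here is a rough sketch (the names are mine, and minimizing the Hausdorff distance over a finite grid of rotations is only an approximation to the true quotient metric on exp(S^1)/SO_2):

```python
import math

TAU = 2 * math.pi  # rhythms repeat every 2*pi units of time

def circle_dist(a, b):
    """Arc-length distance between two angles on the unit circle."""
    d = abs(a - b) % TAU
    return min(d, TAU - d)

def hausdorff(A, B):
    """Hausdorff distance between two finite subsets of the circle:
    the largest distance from a point of either set to the nearest
    point of the other set."""
    return max(
        max(min(circle_dist(a, b) for b in B) for a in A),
        max(min(circle_dist(b, a) for a in A) for b in B),
    )

def rhythm_dist(A, B, steps=720):
    """Approximate distance in exp(S^1)/SO_2: minimize the Hausdorff
    distance over a grid of phase shifts of the second rhythm."""
    return min(
        hausdorff(A, [(b + k * TAU / steps) % TAU for b in B])
        for k in range(steps)
    )
```

On this model, a rhythm and any phase shift of it have (approximately) distance zero, as they should.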

As our data set, we took a table of Afro-Cuban rhythms. Here are the barcodes.

There are a few nifty things about this computation. There are no homology classes other than in dimension 0. So these barcodes say the data begins as a collection of isolated points, and then after a certain threshold (near ε=0.06) a transition occurs, and the simplicial complex becomes contractible. This strongly suggests the data is a metric tree. We checked, and it turns out the data is a metric tree. The tree appears to be the genetic tree for how Afro-Cuban rhythms evolved (I don’t know this branch of music well enough to know for sure, but that’s my hunch). Specifically, the centre of the tree of Afro-Cuban rhythms is known as son clave (or clave son), which is thought to be the first Afro-Cuban rhythm. It would appear the remaining rhythms evolved from this, by making individual changes — doubling a drum strike here, or shifting one there, etc.
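The check that a finite metric space really is a metric tree can be made precise with the classical four-point condition. A small sketch (names mine), which is how one could mechanize the check we did by hand:

```python
import itertools

def is_tree_metric(points, dist, tol=1e-9):
    """Four-point condition: a finite metric space embeds in a metric
    tree iff, for every quadruple x, y, z, w, the two largest of the
    three pairings d(x,y)+d(z,w), d(x,z)+d(y,w), d(x,w)+d(y,z)
    are equal (here, up to a numerical tolerance)."""
    for x, y, z, w in itertools.combinations(points, 4):
        sums = sorted([
            dist(x, y) + dist(z, w),
            dist(x, z) + dist(y, w),
            dist(x, w) + dist(y, z),
        ])
        if sums[2] - sums[1] > tol:
            return False
    return True
```

Points on a line (a path is a tree) pass the check, while the four corners of a Euclidean square fail it.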

Other metric spaces we considered were things like the space of pairs of notes, where one note occurs immediately after another in a composition. The feature we saw most often in these barcodes was a composer’s tendency to “return” to a theme note, with little departures here and there.

On a more topological side, there were some fun observations, such as that a certain “octave-reduced space of 3-note melodies” is homeomorphic to S^1 x S^2, so the homology of S^2 sometimes appears naturally when studying melodies in this manner.

There are several databases out there of various condensed forms of all world music — close to everything recorded in human history. It’s interesting to speculate about what the shape of that data would be. It would be interesting to discover whether there is much relatively unexplored territory in this space — and if there is, is it unexplored because we lack the imagination to find it, or because it’s all too atonal? More pessimistically, it could be a Gaussian distribution centred on Britney Spears.

This leads to one of my personal favourite questions: what kind of normality tests are there for data, using persistent homology?


A groundbreaking paper which made a deep impression on a lot of people, including me, was Cochran-Orr-Teichner’s Knot concordance, Whitney towers and L^2-signatures. This paper revealed an unexpected geometric filtration of the topological knot concordance group, which formed the basis for much of Tim Cochran’s subsequent work with collaborators, and the work of many other people.

In this post, in memory of Tim, I will say a few words about roughly what all of this is about.

It would be very nice to be able to equip the space of knots with a good algebraic structure. Somehow, the natural binary operations on knots seem to be the satellite operations, of which the connect sum may be considered a special (degenerate) case.

Unfortunately, unless a knot K is the trivial knot, there is no knot which can be `satellited' to K to obtain a trivial knot. Thus, the set of knots under connect sum, or indeed under any satellite operation, does not form a group.

But the set of knots can be quotiented by an equivalence relation called concordance, and concordance classes of knots do form a group under the connect sum. Concordance takes place in an ambient 4-dimensional space, and so it provides an avenue for knot theory to be used to study 4-dimensional topology. For most of the world, this is the ultimate motivation to study link concordance. This point of view is beautifully laid out in Freedman-Quinn’s Topology of 4-Manifolds.
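For the record, the standard definition being invoked here (stated in the topological, locally flat category) is:

```latex
% Knots $K_0, K_1 \subset S^3$ are concordant if they cobound an annulus
% properly embedded in a product:
K_0 \sim K_1 \iff \exists \text{ a locally flat embedding }
A \colon S^1 \times [0,1] \hookrightarrow S^3 \times [0,1]
\text{ with } A(S^1 \times \{i\}) = K_i \times \{i\}, \; i = 0, 1.
```

The unknot's class is the identity, connect sum induces the group operation, and the inverse of the class of K is the class of the reversed mirror image of K.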

The way the field has gone, every conjecture about how `good' the structure of the link concordance group is has turned out to be wrong. Almost every paper on knot concordance in the last 20 years, as far as I know, has been a negative result. Cochran's work has been instrumental in showing `how bad things are'. The group isn't trivial, and its non-triviality is detected by the Casson-Gordon invariants. The next step was taken in Cochran-Orr-Teichner: Casson-Gordon invariants do not suffice to detect knots up to concordance. They're just the first step in an infinite geometric filtration of invariants, which is non-trivial at every step.

Stavros Garoufalidis suggested a long time ago that the Cochran-Orr-Teichner filtration should be investigated through the lens of quantum topology. This was a major research interest of mine at one point, and to the best of my knowledge, nobody has yet achieved this aim. I remain convinced that this is an interesting avenue of research worthy of future investigation.

A recent paper of Tim Cochran's which captured my imagination was his joint work with his mathematical daughter Shelly Harvey, The Geometry of the Knot Concordance Space. In it, Cochran and Harvey suggest viewing the topological knot concordance space as a metric space in various different ways, and suggest investigating its coarse geometry. Again, the structure isn't neat (it isn't quasi-isometric to a finite product of hyperbolic spaces), but it is possible to address the question of whether it is what the authors call a `fractal space': roughly, a space which admits a natural system of self-similarities. The conjecture that the knot concordance space is a fractal space looks intuitively highly plausible to me, and the investigation of the coarse geometry of the knot concordance space looks to me like a marvelous research project which will surely lead to many fruitful results in the future, both positive and negative.

And apart from all of his fantastic and groundbreaking ideas, Tim was an inspiring teacher, lecturer, and colleague: a true powerhouse of good mathematics.

RIP, Tim Cochran.

]]>

Feedback is very welcome (as are “how do I…?” questions), especially for a brand new port such as this.

]]>

The question asks whether, rather than searching for Reidemeister moves to simplify a knot diagram, we could instead search for “big Reidemeister moves” in which we view a section which passes underneath the whole knot (only undercrossing) or over the whole knot (only overcrossing) as a single unit, and we replace it by another undersection (or oversection) which has the same endpoints.
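One way to begin searching for such moves is to locate the candidate undersections in a combinatorial presentation of the diagram. As a toy sketch (the Gauss-code encoding below is my own illustrative choice, not taken from the question), a maximal run of consecutive undercrossings along the knot is exactly a candidate undersection:

```python
def undersections(gauss_code):
    """Split a Gauss code (a list of (crossing_label, 'O' or 'U') pairs,
    read cyclically along the knot) into maximal runs of undercrossings.

    Each run is a candidate 'undersection' for a big Reidemeister move:
    it could be replaced by any other arc with the same endpoints that
    also passes entirely under the rest of the diagram.
    """
    runs, current = [], []
    for label, sign in gauss_code:
        if sign == 'U':
            current.append(label)
        elif current:
            runs.append(current)
            current = []
    if current:
        # the code is cyclic: merge a trailing run into a leading one
        if runs and gauss_code[0][1] == 'U':
            runs[0] = current + runs[0]
        else:
            runs.append(current)
    return runs

# trefoil, O1 U2 O3 U1 O2 U3: every undersection is a single crossing
trefoil = [(1, 'O'), (2, 'U'), (3, 'O'), (1, 'U'), (2, 'O'), (3, 'U')]
print(undersections(trefoil))
# → [[2], [1], [3]]
```

The hard part of the question, of course, is not finding the undersections but deciding which replacement arcs simplify the diagram.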

This question (or more generally, the question of how to efficiently simplify knot diagrams in practice) loosely relates to a fantasy about being able to photograph a knot with a smartphone, and for the phone to be able to identify it and tag it with the correct knot type. Incidentally, I'd also like to draw attention to a question by Ryan Budney on the topic of computer vision identification of knots, which is a topic I speculated about here:

A core question to which all of this relates is:

And perhaps more generally, are there any very hard ambient isotopies of knots?

]]>

Gilmer, P.M. and Masbaum, G., Maslov Index, Mapping Class Groups, and TQFT, Forum Math. 25 (2013), 1067–1106.

It makes me think a lot about just what the anomaly `actually means’…

I’ll start with some vague philosophical musings. I’m quite taken with the information physics idea that everything is information, and I think that Chern-Simons theory should really be all about information as well. But I’m not sure how. A google search turns up loads of physics papers with keywords “anomaly”, “Chern-Simons”, and “entropy” in close proximity, so I’m sure that some physicists know the whole story, but I don’t. Maybe somebody could explain it in the comments?

There’s a theme in physics which says that the `interesting’ information content of naturally occurring systems on `things with boundaries’ is contained entirely on the boundary and not in the interior. Manifestations of this theme include the holographic principle, which roughly claims that the maximal entropy in a region scales like the surface area of the boundary of that region instead of like its volume (so that the entire information content of a black hole lies on its event horizon), and area laws, which roughly claim that the amount of quantum entanglement between particles in a region and in its complement depends on the area of the boundary of the region and not on the volume of the region.

Because every closed oriented 3-manifold bounds an oriented 4-manifold, this physics theme suggests a way in which physics might be unreasonably effective in low dimensional topology. Namely, a physically interesting information measure on a 4-manifold with boundary ought to give rise to a 3-manifold invariant. This is sort-of the meta-intuition I have for why we have Topological Quantum Field Theory (TQFT) invariants of 3-manifolds. My vague feeling is that because Fisher information and the Chern-Simons action both have something to do with curvature, perhaps Chern-Simons theory and quantum 3-manifold invariants have a clear and legitimate information-physics interpretation (if you know what it is, please tell me!).
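The cobordism fact being used here is classical (it goes back to Thom and Rokhlin):

```latex
\Omega_3^{\mathrm{SO}} = 0
\quad \Longrightarrow \quad
\text{every closed oriented } 3\text{-manifold } M \text{ satisfies }
M = \partial W \text{ for some compact oriented } 4\text{-manifold } W.
```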

Be that as it may, it turns out that quantum invariants coming from these TQFTs tend not quite to give numerical topological invariants of 3-manifolds. We need an integer worth of extra information from the interior of a bounding 4-manifold to get a numerical 3-manifold invariant. This `extra information from the interior' is called the anomaly. The anomaly ought not to even exist for `physically interesting information' according to the naive interpretation of the physicist's theme outlined above. Maybe that's why it's called an *anomaly*: because a physicist would wish that it not exist. A lot of surveys and entry-level texts seem to gloss over the anomaly, maybe partly for that reason.

It seems to be only recently that anomalies are becoming respectable. Perhaps this is due to Lurie's higher categorical formalism, which sheds some light on anomalies, perhaps to work on TQFTs for manifolds with boundaries and corners, and perhaps to interest in “type II superstring orientifolds”, in which anomalies are both tricky and important; but in any event, there does seem to be a resurgence of interest in anomalies. It seems that an anomaly should be considered an (invertible) field theory itself. This paper is the most interesting recent paper I have seen on the subject… maybe I'll talk about it another time.

Back on the subject of Chern-Simons TQFTs, or rather Reshetikhin-Turaev TQFTs, our setting is a closed oriented 3-manifold bounding an oriented 4-manifold. This 4-manifold matters only up to cobordism (i.e. two cobordant 4-manifolds are considered equivalent from the point of view of our TQFT, because, while there may be a wee bit of information in a 4-dimensional interior, there is no relevant information in a 5-dimensional interior). Cobordism classes of oriented 4-manifolds are classified by the signature (an integer), and I think that this is the secret reason that the signature keeps popping up in all kinds of formulae for quantum invariants of 3-manifolds.
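Explicitly, the classifying invariant is the signature of the intersection form on middle homology:

```latex
\sigma \colon \Omega_4^{\mathrm{SO}} \xrightarrow{\;\cong\;} \mathbb{Z}, \qquad
[W] \longmapsto \operatorname{sign}\bigl( H_2(W;\mathbb{R}) \times H_2(W;\mathbb{R}) \to \mathbb{R} \bigr),
```

the signature of the symmetric intersection pairing given by counting intersections of surfaces representing second homology classes.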

So really, the domain of our TQFT ought to be a pair consisting of a 3-manifold and an integer, or a 3-manifold with an integer's worth of extra structure. How to usefully specify that integer? There are a variety of approaches: p_1-structures, 2-framings, various choices of Lagrangian thises or thats, Masbaum-Roberts' explicit methods… There's an algebraic approach as well, in which we trade our 4-manifold for a mapping class group element. Remember how every 3-manifold has a Heegaard splitting? This constructs our 3-manifold by gluing together two genus g handlebodies using an element of the mapping class group MCG(Σ_g). The TQFT induces a representation of the mapping class group which is only projective, not linear, because of the anomaly. Gauge-fixing / choosing a cobordism class of bounding 4-manifold / fixing the anomaly corresponds to choosing a central extension of the mapping class group. And it turns out that MCG(Σ_g) has a universal central extension, and (unsurprisingly) the cohomology class of this extension is a generator of a cohomology group which is isomorphic to the integers (the most famous such generator is the Meyer cocycle, and the second most famous is its cohomological negative, the Maslov cocycle).
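Schematically (with λ a fixed root of unity depending on the theory, and c a 2-cocycle in the cohomology class discussed above), the anomaly is exactly the failure of the representation to be linear:

```latex
% the TQFT representation is only projective:
\rho(a)\,\rho(b) \;=\; \lambda^{\,c(a,b)}\,\rho(ab), \qquad a, b \in \mathrm{MCG}(\Sigma_g),
% and the 2-cocycle $c$ classifies a central extension
1 \longrightarrow \mathbb{Z} \longrightarrow \widetilde{\mathrm{MCG}}(\Sigma_g)
  \longrightarrow \mathrm{MCG}(\Sigma_g) \longrightarrow 1 .
```

Lifting ρ to an honest linear representation of the extension is what “fixing the anomaly” amounts to algebraically.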

So the whole problem of fixing the anomaly has been algebraized, and the goal has now become to describe explicit elements of the universal central extension of MCG(Σ_g), which are the algebraic objects that have now replaced “a 3-manifold together with a cobordism class of 4-manifolds which it bounds”.

That’s pretty much the goal of Gilmer-Masbaum. Some major steps which were outlined in Walker’s iconic TQFT notes are worked out explicitly. This is, at long last, a careful treatment of a TQFT anomaly. I know more than I knew before.

Now that we have a technically coherent and careful treatment of the anomaly in the Chern-Simons context, which seems more or less amenable to concrete computation (I haven’t followed through the details carefully enough to strengthen the above sentence), the next thing I’d love to read would be a survey-level treatment of the anomaly, which explains all of the different approaches to fixing it, the strengths and weaknesses of each, and how they relate to one another.

I’d also really like to understand how quantum invariants measure information (entropy), and in particular what information is measured by the anomaly. And what is the conceptual reason that Chern-Simons theory violates the theme that all interesting information lies on the boundary? Or maybe it doesn’t? I wish I understood more.

]]>