# Low Dimensional Topology

## November 9, 2010

### Gauthier update

Filed under: Uncategorized — dmoskovich @ 11:09 am

Update: Renaud Gauthier has retracted the claim of an error in the foundations of the LMO construction, and has withdrawn both preprints from arXiv.

A few weeks ago, we discussed a preprint by Renaud Gauthier which claims a fatal error in the construction of the LMO invariant, having to do with invariance under the Kirby II move of a certain renormalization of the framed Kontsevich invariant which is used to construct the LMO. It also claims to correct that error with an alternative normalization; I do not discuss that claim here. In this post, I would like to take stock, to explain Gauthier's claim and Massuyeau's response, and to tell you where I think we stand now.

### LMO review

The foundation of 3-manifold quantum topology is Edward Witten's "Quantum Field Theory and the Jones Polynomial", in which he constructs a Jones polynomial for 3-manifolds at the physical level of rigour. A whole mini-discipline of mathematics sprang up around understanding Witten's invariant mathematically, and we're still a long way from achieving that goal. If we restrict and restrict and restrict, until we only consider integral homology 3-spheres and the contribution of the trivial flat connection to Witten's invariant, then (modulo various minor issues) Witten's invariant becomes (equivalent to) something called the Reshetikhin-Turaev invariant.
These invariants (Witten's invariant, the Reshetikhin-Turaev invariant, and indeed the LMO invariant) are built by exploiting the fact that a 3-manifold can be represented via Dehn surgery as a framed link in $S^3$. Topologically, this works by considering $S^3\times [0,1]$ with the framed link sitting in $S^3\times \{1\}$, and attaching a 2-handle to each link component as specified by the framing. The top boundary of the space which you end up with is your 3-manifold $M$. For intuition, think one dimension down, where you would be attaching 1-handles to pairs of points on the top boundary of "disk cross unit interval" to obtain some surface as the new top boundary.
Kirby's theorem tells us that two framed links present the same 3-manifold $M$ if and only if they are related by blow-ups (Kirby I) and by Kirby II moves (handleslides):

The recipe for constructing a 3-manifold invariant from a surgery presentation $L$ (i.e. from a framed link) is to take your favourite link invariant, to extend it to a framed link invariant, and to mod out its values by the relations induced by the Kirby moves. The best case is when your framed link invariant is already invariant under Kirby moves. Quantum invariants are local (recall, as we discussed here, that this is perhaps their defining property), so the above recipe can be carried out explicitly.
The universal finite-type invariant for links is the Kontsevich invariant. You can extend it to a framed link invariant $\hat{Z}_f$. There’s a standard way to make it invariant under Kirby I. But- and this is the key point- $\hat{Z}_f$ is not invariant under Kirby II. A key insight of Le, Hitoshi and Jun Murakami, and Ohtsuki (I don’t know whose insight it was first… but one of these people) is that $\hat{Z}_f$ DOES become invariant under Kirby II if you renormalize by summing in a certain element $\nu$ for each component of $L$, which converts $\hat{Z}_f$ into a new framed link invariant $\check{Z}_f$. Everything works out wonderfully and you get the LMO invariant, which is the universal quantum invariant for 3-manifolds. It fits in well with 3-manifold topology: it recovers the Reshetikhin-Turaev invariant and the Alexander polynomial, and its degree 1 part recovers the Casson invariant.
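For reference, the renormalization and the way it feeds into the 3-manifold invariant can be sketched in formulas. This is only a sketch assuming the standard LMMO conventions; the notation $\ell$, $U_\pm$ and $\sigma_\pm$ is mine rather than from the discussion above: $\ell$ is the number of components of $L$, $U_\pm$ are the $\pm 1$-framed unknots, and $\sigma_\pm$ are the numbers of positive and negative eigenvalues of the linking matrix of $L$.

```latex
% Renormalization: one factor of \nu summed in per link component,
% which is what buys invariance under Kirby II (handleslides).
\check{Z}_f(L) \;=\; \nu^{\otimes \ell}\, \hat{Z}_f(L)

% Kirby I (blow-ups) is then handled by dividing out the values
% on the (+1)- and (-1)-framed unknots U_{\pm}:
Z(M) \;=\; \frac{\check{Z}_f(L)}
  {\check{Z}_f(U_+)^{\sigma_+}\,\check{Z}_f(U_-)^{\sigma_-}}
```

The point at issue in this post is only the first line: whether the per-component factor should be $\nu$ (as in LMMO) or $\nu^{-1}$ (as Gauthier claimed).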

### Gauthier’s claim

Renaud Gauthier claimed that the normalization of $\check{Z}_f$ is wrong, and that it is not invariant under Kirby II moves. The claim was then that each component should have been summed in with an element $\nu^{-1}$ instead of with $\nu$; and that doing so makes the framed Kontsevich invariant of $L$ invariant under Kirby II.
There's something which we have to understand about Kirby II before evaluating this claim: namely, that the handleslide is not a well-defined move between links. Consider two link components $K_1$ and $K_2$ which are a long way apart, and between them are lions, tigers, and bears. To slide $K_1$ over $K_2$, first a part of it must be brought near a part of $K_2$, and there are many ways of doing this. Two different approach paths may lead to two different links. Consider:

To upgrade Kirby II to a well-defined move on links, which is a prerequisite for calculating the correct normalization, a path between a point on $K_1$ and a point on $K_2$ must be specified. LMO do this by equipping $K_1$ and $K_2$ with points which are infinitesimally close together. Gauthier doesn't do this, and he arrives at different results. Observe:

In Gauthier, $K_1$ and $K_2$ are far apart (Proof of Proposition 4.1.1, pages 58-59):

But in LMO they are close together:

Massuyeau said this in a comment to the last post; and it accounts for the discrepancy between Gauthier's results in Section 4, and LMO.
This is unsurprising given that, computationally, the LMO construction is compatible with the Reshetikhin-Turaev invariant, and therefore with Witten’s invariant. It would shake our very view of physics if it were to contain a substantial error.

### The end of the story?

Despite the apparent failure of Gauthier’s claim of an LMO error, this affair does raise a number of important and disturbing points:

1. The language of LMO is very confusing. The calculations always seem to hang by a narrow thread, and the fact that everything cancels out at the end seems little short of a miracle. A better language is sorely needed; and LMO needs to be rewritten in that language. Zsuzsanna Dancso is doing this (at least for LMMO).
2. I don’t know a conceptual reason why $\hat{Z}_f$ needs to be renormalized, and why the renormalization factor has to be $\nu$ for each link component (as opposed to $\nu^{-1}$ or to something entirely different). Thus our confidence that the LMO normalization is correct relies on computational verification as opposed to on understanding. This is unsatisfactory.
3. Perhaps the LMO normalization is not unique? It is a priori possible that many different renormalizations of the framed Kontsevich invariant could give rise to different 3-manifold invariants. None of these (except for $\check{Z}_f$) would be compatible with physics, but so what? It doesn't mean they can't exist!

These are the important foundational questions which Gauthier’s preprint raises. They can no longer be ignored.

1. For point (2), there is a good explanation. If you look at what’s happening on the physics side, this factor of $\nu$ comes from the invariant of the solid torus. Namely, if you do surgery on a knot, you’re cutting out a solid torus and gluing it back in in another way. The invariant of the glued manifold is the pairing of the invariant of the complement and the invariant of the solid torus. The invariant of the complement is essentially the non-renormalized Kontsevich integral, and the invariant of the solid torus is that of the unknot, namely $\nu$. You have to think a little more to see that the pairing is the right pairing.

I mostly worked all this out some years ago, but never had an excuse to write it down.

Comment by Dylan Thurston — November 9, 2010 @ 11:26 am

• That makes a lot of sense actually! I’d love to read the details.
Gauthier’s preprint is a blessing to the quantum topological community, because it motivates us to take the “folklore” and to write it down; and to try to simplify and to explain the key steps in the construction of the LMO invariant.

Comment by dmoskovich — November 9, 2010 @ 11:50 am

• Dror points out (on his "Academic Pensieve") that this is the wrong explanation: I shouldn't have to invoke 3-manifolds to explain a knot theory fact. (Maybe my explanation is OK from the LMO (surgery) point of view, but for the LMMO (handleslide) paper it's overkill.)

Alternate related facts include the invariant of the theta graph, which has nu^{-1/2} on each edge, or the behaviour of the unzip move, which similarly has various powers of nu. I know all these factors and can prove them, but I can’t say I have a coherent explanation why they are what they are.

Comment by Dylan Thurston — November 10, 2010 @ 1:01 am

2. I would like to point to some (honest) mistake on the part of the blogger here; the picture following “In Gauthier, K_1 and K_2…” is not in my paper, it actually arose from a series of e-mail exchanges I’ve had with Daniel Moskovich. Actually, the picture resembles something we’ve been discussing but as written it is wrong. The \neq sign should read “does not map under band sum moves to…”, and the \nu’s are doubled, but that’s just a detail.

The statement arose from my contention that \check{Z}_f= \nu \hat{Z}_f is not the right normalization. I took the simple example of 2 unframed unknots. Then \check{Z}_f(O O)=\nu^2 \otimes \nu^2 maps to \check{Z}_f(O band summed over O)=\nu^2 \otimes \nu^2. Daniel suggested that I write \check{Z}_f(O O)=\nu \otimes \nu \hat{Z}_f(O O) with \hat{Z}_f(O O)=\Delta \nu instead. Ok. Then you do a band sum move on the resulting tangle chord diagram in \check{Z}_f(O O) to get what you find in the subsequent schematic representation on the blog. I got the one above by sticking with \check{Z}_f(O O)=\nu^2 \otimes \nu^2 following the same prescription of doing a band sum move on Jacobi diagrams to show we do not get the same result, though we are starting from the same point. That shows there is a problem with eliminating the \Delta \nu upon band summing. Note however this does not mean that such band sum moves on Jacobi diagrams are well-defined!! I am not saying it’s Ok to do band sum moves on chord diagrams, I’m just using a certain line of thought and see what we get.

As a matter of fact, if the statement of Le, Murakami and Ohtsuki is true regarding the behavior of \check{Z}_f under band sum moves, meaning chords on the link component that will be band summed over are doubled in the expression for \check{Z}_f of the band summed link, then \check{Z}_f(O O)=\nu^2 \otimes \nu^2 does map to \nu^4 \otimes \nu^2. I did not say this is true. The only thing I’m saying is, if they are correct, then that’s got to be correct. The second line above however contradicts their statement about the doubling of chords on the link component that’s being band summed over.

At the heart of this problem are two schools of thought: yours essentially consists in computing the band sum move on tangle chord diagrams by doing a band sum on link components, and then putting on such a resulting band summed link \nu's, associators and \Delta's at appropriate places. Those chords arise from the computation of the framed Kontsevich integral of individual elementary tangles. Then once you have engineered a band sum move on tangle chord diagrams, the idea is that the tangle chord diagram you started with is a summand of \check{Z}_f(L), and the one you get at the end is a summand of \check{Z}_f(L'), with the same coefficient. Playing with tangle chord diagrams to compute the change of \check{Z}_f induced by a band sum move on link components, you unfortunately have to collect strands together, consider q-tangles, use associators, etc… and graft \nu's, \Phi's and \Delta's at key places, to hopefully get something correct. This I find extremely risky, and consider it to be misleading. What I do instead is that I stick with Morse links, compute \hat{Z}_f(L), compute \hat{Z}_f(L'), and ask myself what normalization of \hat{Z}_f gives a statement such as that of Le, Murakami et al. It turns out the only normalization that works is \tilde{Z}_f=\nu^{-1} \hat{Z}_f.

You may ask why do that? Simply because initially, in "A 3-fold invariant via the Kontsevich integral", the paper that pretty much started it all, Proposition 1 says that if \check{Z}_f(L)=\sum c |O=, then \check{Z}_f(L')=\sum c times the straight strand wrapping around the circle and the chords = doubled. From the proof, it's not quite clear what's happening. Part computation, part engineering, as in putting \nu's, \Phi's and \Delta's at select places. In Ohtsuki's book "Quantum Invariants: a study of knots…", his proof shows what is really happening: use the long chords lemma, use \hat{Z}_f \Delta = \Delta \hat{Z}_f on the band summed link minus the band part, and use \hat{Z}_f(band part)=1 \otimes \nu^{-1} \Delta \nu, obtained from doing computations of \hat{Z}_f of that band part of the link. Bottom line: the initial Proposition 1 is really saying compute \hat{Z}_f(L), compute \hat{Z}_f(L'), renormalize by \nu, and observe that going from the first expression to the second, it turns out(!) the chords ending on the link component that will be band summed over are doubled in the expression for \check{Z}_f(L'). That's what I did. I computed \hat{Z}_f(L) and \hat{Z}_f(L'), and contended that an appropriately computed \hat{Z}_f(L') (not engineered!) shows that the only normalization that makes a statement like chords on some link component that will be band summed over maps to \Delta of those chords on the resulting link after band sum move is \tilde{Z}_f=\nu^{-1} \hat{Z}_f.

On the handle slide discussion, it is true that different choices of band sums will lead to different isotopy classes of band summed links. Nevertheless, staying within the very convenient class where the band is really the simplest one can imagine, if one works with \hat{Z}_f (which is an isotopy invariant), it is not necessary to have link components involved in the band sum move procedure infinitely close. Likewise, "far apart" has no meaning for \hat{Z}_f. If one works with tangle chord diagrams however, it is a must to have strands bracketed together, which introduces \Phi's and all kinds of other complications. That essentially amounts to computing \hat{Z}_f of a representative of the class we have picked here.

I find working with \hat{Z}_f of Morse links more palatable. Engineering maps on Jacobi diagrams I find a most risky business. I am not saying it is wrong. It's just that it's too much on one plate. Starting small, if I want to do the band sum move of one unframed unknot over another unframed unknot, the Jacobi diagrams engineering above says we go from \nu^2 \otimes \nu^2 to \nu \otimes \nu \Delta \nu. The first Jacobi is for \check{Z}_f(O O), ok, but the second is for \check{Z}_f(O O), and I don't get that. So it seems for that simple example, we don't get the expected result \nu^2 \otimes \nu^2. Moreover neither statement makes the statement of Le and Murakami about the doubling of chords come true. If one considers \tilde{Z}_f however, the statement is trivially true as it reads O O maps to O O. I will not go into the proof of why \tilde{Z}_f works out fine here, it would take too long, but consider this: \hat{Z}_f(K # K') = \nu^{-1}\hat{Z}(K) \hat{Z}_f(K'). Multiplying by \nu^{-1} on both sides, we get \tilde{Z}_f(K # K')=\tilde{Z}_f(K) \tilde{Z}_f(K'). The band sum is (somewhat) similar to a connected sum. I prove \tilde{Z}_f(K # \Delta K')=\tilde{Z}_f(K) \tilde{Z}_f(\Delta K')=\tilde{Z}_f(K) \Delta \tilde{Z}_f(K'), where \Delta K' means push K' off itself using its framing to get a second copy, and K # \Delta K' really means connected sum of K with the second copy of K'. Observe that this equality is exactly what Le and Murakami were advocating for a doubling of chords on the band summed link component.

Comment by Renaud Gauthier — November 10, 2010 @ 10:40 pm

• Your preprint makes two claims:

1. The normalization $\check{Z}_f= \nu \hat{Z}_f$ is wrong.
2. The normalization $\tilde{Z}_f= \nu^{-1} \hat{Z}_f$ is right.

I'm debating the first claim; at the moment I have no intelligent argument against the second. The first claim, embodied in Proposition 4.1.1, is essentially that Ohtsuki's Proposition 10.1 is incorrect. Your argument is that Proposition 10.1 would imply the diagram after "In Gauthier, $K_1$ and $K_2$ are far apart". Therefore your proposition claims that Proposition 10.1 leads to a contradiction. Massuyeau pointed out that this conclusion is flawed, because implicit in Ohtsuki's Figure 10.2 is the unique bracketing for which it makes sense. That's the point I am making… I'm arguing against your argument for Proposition 10.1 being incorrect… if it were correct (and if your second claim (which I have yet to evaluate) were false, and if no other normalization would work), then LMO would collapse, and quantum 3-manifold topology would be thrown into disarray.

Comment by dmoskovich — November 11, 2010 @ 12:19 pm

3. Mr. Gauthier,

You have some honest mistake somewhere, probably in under-appreciating the fact that band-sum is not well defined, neither at the level of links nor at the level of chord diagrams. This means that one has to be extra careful with the statement of what is being proven; arguably LMMO are not, but their end result, ${\check Z}$, is correct. I hope you will be able to pinpoint your mistake soon.

1. While associators and parenthesized tangles are good and jolly (and they work!), you are right that they are not necessary for this discussion. Much of the background work needed in order to understand doubling etc. (and more – namely knotted trivalent graphs, KTGs) was done a couple years ago by my student Zsuzsanna Dancso (see arXiv:0811.4615), following my own class notes from 2007 (http://katlas.math.toronto.edu/drorbn/index.php?title=The_Kontsevich_Integral_for_Knotted_Trivalent_Graphs) and other work I’ve done years before.

2. The language of KTGs is anyway a much better language to use in order to discuss LMMO, as using it band-slides becomes well-defined and all ambiguities disappear. Ms. Dancso and I are nearing the completion of a further short article on KTGs. That article is being written with a different motivation in mind, but as the issue of LMMO is being revisited these days, we will include a short and pointed discussion. In short, among other things we use Kontsevich-only (no associators) techniques to establish a language in which LMMO can be stated with no ambiguities. This done, we unambiguously agree with the LMMO normalization ${\check Z}$.

Comment by Dror Bar-Natan — November 11, 2010 @ 8:39 am

4. The paper which Dror mentions in part 2 of his comment is unfinished, but the relevant part (Section 4, which builds on Section 3) is written and can be found here:
http://www.math.toronto.edu/zsuzsi/research/ktgs.pdf

(Of course, being a draft, this cannot be fully trusted, however we are confident that the normalization of LMMO works, as it has a very simple proof in the KTG language.)

Comment by Zsuzsanna Dancso — November 12, 2010 @ 2:04 pm

5. Dear ALL,

To set the record straight, I am now convinced, by computations that I have done, that Le, Murakami, Ohtsuki et al. are right in making the statements they made about the behavior of \check{Z}_f under band sum moves. The confusion arose because the packs of chords in those statements implicitly contained contributions from associators that were not drawn. What I have derived is an invariant that is well-behaved under band sum moves, and this is derived by just using the isotopy invariance of \hat{Z}_f, computing its values on links before and after band sum moves, and seeing what normalization makes the renormalized object well-behaved. What Le, Murakami et al. did is start from \hat{Z}_f of some link, focus on the resulting Jacobi diagrams, bring components involved in the band sum move together, and do a band sum move on the resulting chord diagram. The normalization that results in a well-behaved invariant under band sum moves in that case is given by their \check{Z}_f, which indeed is well-behaved once associators are taken into consideration. So there are two pictures, one by sticking with isotopy classes of links and doing computations involving \hat{Z}_f, the other by moving to chord diagrams land, considering q-tangles, putting in associators, etc… in which case I do agree, band sums other than those as described on the blog are inconsistent, and the only one that works is the one given by Le, Murakami et al. implicitly containing contributions from associators. In light of these considerations, I now retract my two papers that point to mistakes in Le, Murakami and Ohtsuki's work, insofar as they are actually right once everything is taken into consideration.

renaud

Comment by Renaud Gauthier — November 21, 2010 @ 4:23 pm

• Bravo! I think this is the optimal response.
There is a good question which this matter raises: experts' intuition aside, why should there be only one normalization which gives rise to a 3-manifold invariant? And why should it be the LMMO normalization, as opposed to the much more reasonable-looking $\nu^{-1}$ normalization, or something else entirely? I don't think we have a satisfying conceptual answer to this question, but I wish that we did.

Comment by dmoskovich — November 21, 2010 @ 8:31 pm

6. I am anonymous. Renaud Gauthier's statement is true. He retracted his statement of an error in the construction of LMO because he was literally diminished and scared, and there was a potential threat that his math career would be ended by the politics of "powerful mathematicians" that are hiding the truth about the errors committed. Renaud not only points out the error but gives a correction for it. The correction that Renaud Gauthier offered has been rejected and not received with a smile, because that would mean many will be proven wrong too and years of their research may go to trash. It is time for justice between mathematicians. There are many math geniuses that have been crushed by the "powerful mathematicians" out there that are not capable of accepting that there is an error in their works, thus leading generations to mediocre math calculations. Will that be our future legacy for centuries to come? I do not want it to remain that way. The claim of an error in the foundations of the LMO construction must be acknowledged and published for the benefit of those mathematicians that want to take their work seriously. Renaud Gauthier's work should be recovered.

Comment by Dorian stockhelsnki — June 8, 2016 @ 1:59 pm

• This comment makes me very sad. Gauthier is a really smart and objective guy, and everyone is wrong sometimes.
Ohtsuki was my PhD supervisor and I was a student of Bar-Natan, so I know the “main players” well, and I also heard the discussion between Gauthier and others on Skype; I think we all understood just where the error was in Gauthier’s work, and there was no politics of powerful mathematicians.
He's right that it wasn't explained as well as it could have been, and this is remedied in subsequent work, e.g. the linked paper by Dancso.

Comment by dmoskovich — December 3, 2016 @ 12:18 pm
