By the way, if you happen to know of any other good geometry/topology blogs that aren’t in our blog roll (on the right side of the page), please feel free to include the link in a comment so I can add it.


There will be some travel funding available for graduate students and early career mathematicians. Before the conference, there will be graduate student workshops, led by Jessica Purcell, who has been doing a lot of very cool work on WYSIWYG geometry/topology, and Alex Zupan, who has been proving a lot of nice results about thin position and bridge surfaces. The graduate student workshop is August 5-7, and the conference is August 8-10. I’m looking forward to it and hope to see you there.


Recall that a train track *T* in a surface *S* is a subsurface of *S* endowed with a certain type of singular foliation by intervals. We say that a loop *l* is *carried* by *T* if it is contained in the subsurface and transverse to the intervals in the foliation. Away from the singularities, the intervals in the foliation make up parallel strips, and the singularities define junctions where they come together. If we follow a loop around the train track, the transverse condition implies that each time we enter one of the parallel strips, we have no choice but to follow it to the junction at the other end. However, when the loop enters a junction, it will often have a “choice” of whether to take the branch to the left or right. So as we follow the loop around, depending on the “choices” the loop makes of how to turn, it may end up crossing only some of the parallel bands and missing others. We say that a carried loop *covers* the train track *T* if the loop intersects every fiber, or equivalently if it crosses every band of parallel fibers.

We now have three types of loops in the surface *S* defined by the train track *T*: the loops that cover *T*, the loops that are carried by *T* but don’t cover it, and the loops that aren’t carried at all. This is where an important observation comes in: It turns out that for a train track *T* whose complement in *S* is a collection of triangles, if a loop *l* covers *T* and a second loop *m* is disjoint from *l*, then *m* must be carried by (though not necessarily cover) *T*. (I don’t know who first noticed this. It’s in Masur and Minsky’s [1] work, but may go back much earlier.)

To see why this is true, note that the loop *m* cannot cut across *T* parallel to the interval fibers because then it would have to cross *l*. Moreover, any arcs of *m* outside of *T* will be contained in the triangular complementary regions and can be pushed into *T* in a canonical way. If you look carefully at what *m* can do inside of *T* without intersecting *l*, you’ll quickly conclude that it’s possible to isotope *m* so that it’s transverse to all the fibers and thus carried by *T*.

What this means in terms of the three classes of loops is that no edge in the curve complex connects a loop that isn’t carried by *T* to a loop that covers *T*. In particular, any path from a covering loop to an uncarried loop has to pass through a loop that is carried but doesn’t cover. So, in other words, the set of non-covering carried loops forms a buffer between the covering loops and the uncarried loops.

This is the buffer that I mentioned at the beginning of the post. But now the question is, how can we place these buffers next to each other to make wider ones? The key to this is to construct a second train track *U* such that *T* is “carried” by *U*. By this I mean that the subsurface defined by *T* is contained in the subsurface defined by *U* and every interval fiber in *T* is contained in an interval fiber in *U*. Note that it follows immediately from definitions that every loop carried by *T* will be carried by *U*.

We’ll next add an extra condition that’s slightly more subtle: We want every loop that is carried by *T* to cover *U*. Note the difference there: We’re asking for a stronger condition on *U*. At first, this may seem like too much to ask for, since there will be infinitely many loops carried by *T* and we don’t want to have to check each one in *U*. But in fact, we usually only need to check a finite number of things. In particular, we can often arrange things so that every band of parallel fibers in *T* covers *U*, i.e. intersects every interval fiber in *U*. Because any loop carried by *T* must follow at least one of the parallel bands of fibers in *T*, this guarantees that any loop carried by *T* will cover *U*.
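To make the finite check concrete, here is a minimal sketch (the matrix encoding and function names are my own illustration, not from any paper): record, for each band of *T*, how many times it runs through each band of *U*, and check that every row is strictly positive.

```python
# Sketch: M[i][j] counts how many times band i of T runs through
# band j of U (a hypothetical encoding of the carrying map T -> U).

def band_covers_U(row):
    # A band of T covers U when it meets every band of U at least once.
    return all(count > 0 for count in row)

def every_carried_loop_covers_U(M):
    # If every band of T covers U, then any loop carried by T, which
    # must traverse at least one band of T, covers U as well.
    return all(band_covers_U(row) for row in M)

# Example: two bands of T crossing three bands of U.
M_good = [[2, 1, 1], [1, 3, 1]]
M_bad = [[2, 0, 1], [1, 3, 1]]  # first band of T misses a band of U
print(every_carried_loop_covers_U(M_good))  # True
print(every_carried_loop_covers_U(M_bad))   # False
```

The point of the encoding is just that the infinite family of carried loops reduces to finitely many row checks.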

In some cases, we may not be able to guarantee that every band in *T* covers *U*, but we may still be able to find a subset of the bands in *T* such that every carried loop must cross one of these bands, and each of these bands covers *U*.

If we can find a train track *U* with this condition, then we can compare any loop *l* that covers *T* to a loop *n* that is not carried by *U*. Since *n* is also not carried by *T*, any path from *l* to *n* must pass through a loop *k* that is carried by *T*. The above condition implies that *k* covers *U*, so the path must also contain another loop *m* that is carried by *U*, but does not cover *U*. Thus any path from *l* to *n* must pass through at least two other loops, making its length at least three.

We can repeat the process again by constructing a train track *V* that carries *U* and has the same filling property. Any loop that is not carried by *V* will be distance at least four from any loop that covers *T*. As we build more train tracks in this way, we can find loops that are farther and farther apart. (This is one way to show that the complex of curves has infinite diameter.)

In my paper with Yoav Moriah, we construct a sequence of such train tracks in the bridge surface of a certain type of knot, with the structure of the train tracks determined by a certain type of diagram of the knot. We’re then able to show that every bridge disk below the bridge surface covers one of the train tracks early in the sequence, while no disk above the bridge surface is carried by the last train track in the series. By the above argument, this gives us a lower bound on the distance between the two disk sets. (In practice, we use a slightly different version of the carried condition, which allows the complement of the train track to be any polygon, not just a triangle.) An explicit construction gives an upper bound for the distance. As it turns out, these two bounds are the same, so we’re able to calculate the exact distance for this class of knot diagrams.


We can form a train track on a torus by taking two essential loops in the torus that intersect once, then smoothing the intersection, as in the Figure below. (I’m drawing the torus as a square with opposite sides identified.) There are two possible ways to smooth the intersection, and for now we’ll just arbitrarily pick one. (Later on, we’ll come back to look at the difference between the two smoothings.) The resulting graph isn’t a train track, but we can turn it into a train track by taking a regular neighborhood of it, then giving the neighborhood a foliation by intervals perpendicular to the original graph. The original graph (shown in the middle of the Figure) is called a *train track diagram*.

The question I want to explore in this post is: What loops in the torus are carried by this train track? The answer will be in terms of the slopes of the carried loops. Recall that the universal cover of the torus is the plane. In every isotopy class of essential loops, there is a representative that lifts to a straight line in the universal cover. In fact, there’s an infinite family of such loops that lift to different lines in the plane, but all these lines have the same slope. This slope is what we call the *slope* of the (isotopy class of the) loop in the torus. In the Figure above, the blue loop has slope 0 and the red loop has slope 1/0 or ∞. Note that both of these loops are carried by the train track. (Or, more precisely, they’re isotopic to loops that are carried by the train track.)

In general, we can calculate the absolute value of the slope of a loop by dividing the number of times it intersects the horizontal boundary of the square by the number of times it intersects the vertical boundary. (You can check that this formula holds for the red and blue loops.) For any slope other than 0 and ∞, we can figure out the sign as follows: If an arc has one endpoint on the left side of the square and the other endpoint on the top then the loop has positive slope. If an arc has one endpoint on the left and its other endpoint on the bottom then the loop has negative slope. (It’s not too hard to check that a loop that intersects the sides of the square minimally can’t have both types of arcs.)
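As a quick sanity check, the rule above can be written out in a few lines (the function and argument names are my own):

```python
from fractions import Fraction

def torus_slope(h_hits, v_hits, left_to_top):
    # |slope| = (# intersections with the horizontal sides of the square)
    #           / (# intersections with the vertical sides);
    # the sign is read off from which way the arcs cut across the corner.
    if v_hits == 0:
        return "1/0"  # the vertical loop: slope infinity
    s = Fraction(h_hits, v_hits)
    # left-to-top arcs give positive slope, left-to-bottom give negative
    return s if (s == 0 or left_to_top) else -s

# The blue loop misses the horizontal sides and crosses a vertical side once:
print(torus_slope(0, 1, False))  # 0
# The red loop crosses a horizontal side once and misses the vertical sides:
print(torus_slope(1, 0, True))   # 1/0
print(torus_slope(3, 2, True))   # 3/2
```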

There are many other loops in the torus, in addition to the red and blue loops above, that are carried by this particular train track. Examples with slopes , and , respectively, are shown in red in the Figure below.

All these loops have positive slopes, and in fact, you can see that no arc from the left side of the square to the bottom of the square can be carried by this train track. So this means that this train track can only carry positive slopes.

On the other hand, we can put in as many copies of either the vertical or the horizontal arc as we want. We can also put in as many arcs as we want from the left side to the top side, and the same number from the bottom to the right side. By choosing the number of such arcs carefully, we can get the intersections between the resulting loops and the sides of the square to be whatever we want. (If the number of intersections with the top is greater than the number with the bottom, we’ll only use vertical arcs. Otherwise, we’ll only use horizontal arcs.) So, every loop with positive slope will be carried by this train track.

To make this clear, let me summarize what we’ve learned: The train track that we constructed carries all the loops with positive slopes, as well as the loops with slopes 0 and ∞. Going back to the beginning of the post, note that if we had chosen to smooth the intersection between the original two loops in the opposite way, the resulting train track would have carried all the negative slope loops, as well as 0 and ∞. So, we can think of a train track as a way to separate the loops in a surface into two different classes: the loops that are carried and the loops that aren’t.

One way that this gets really interesting is when we consider what these two classes look like in the curve complex for the surface. This approach is one of the main tools used in Masur and Minsky’s work on the curve complex [1], particularly their proof that curve complexes of surfaces are Gromov δ-hyperbolic.

Recall that the curve complex for a surface *S* is the simplicial complex whose vertices represent isotopy classes of essential, simple closed curves in *S* and whose faces span sets of isotopy classes with pairwise-disjoint representatives. The curve complex for a torus is pretty boring: Any two disjoint essential loops in a torus are parallel (and thus isotopic) to each other, so there are no edges in this curve complex: it’s just an infinite collection of discrete vertices.

So instead, one generally works with the Farey graph for the torus. Much like the curve complex, the vertices of the Farey graph represent isotopy classes of essential loops in the torus. In particular, each vertex represents a rational number (a slope) including ∞, and in fact we can arrange the vertices in order by slope along a circle. Since there are no pairs of disjoint loops, we connect any two vertices representing loops that intersect in a single point.
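The edge condition has a tidy arithmetic form: the loops of slopes p/q and r/s intersect exactly once precisely when |ps − rq| = 1. A minimal check (the helper name is mine):

```python
def farey_adjacent(p, q, r, s):
    # Slopes p/q and r/s span an edge of the Farey graph exactly when
    # the corresponding torus loops meet once, i.e. |p*s - r*q| == 1.
    return abs(p * s - r * q) == 1

print(farey_adjacent(0, 1, 1, 0))  # True: slopes 0 and 1/0 meet once
print(farey_adjacent(1, 2, 1, 3))  # True
print(farey_adjacent(1, 3, 2, 3))  # False: these loops meet three times
```

More generally, |ps − rq| is the geometric intersection number of the two loops, which is why this single determinant condition characterizes the edges.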

Similarly, we include in the Farey graph all the triangles bounded by cycles of three edges. I’ll leave it as an exercise for the reader to check that for every pair of loops in the torus that intersect in exactly one point, there are exactly two other loops such that each of these loops intersects each of the original two loops in a single point. (The two new loops will intersect each other in two points.) So, in other words, each edge in the Farey graph is in the boundary of exactly two triangles. This tells us that the triangles form a surface. In fact, the surface that they form is the disk bounded by the circle along which we placed the vertices in the previous paragraph.

Six of these triangles are shown in the figure on the right, with the slopes corresponding to their vertices indicated as fractions. For each edge in the Farey graph, we can calculate the third vertex representing one of the adjacent triangles as follows: The numerator of the new slope is the sum of the numerators of the original two, and the denominator is the sum of their denominators. Similarly, to get the vertex defining the other triangle, we subtract the numerators and denominators. (To see why this works, you can think about the normal loops and Haken sums that I mentioned in another post from a while back.)
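In code, the two third vertices adjacent to a Farey edge come straight out of that rule (a sketch; slopes are written as numerator/denominator pairs):

```python
def third_vertices(p, q, r, s):
    # For the Farey edge between slopes p/q and r/s, the third vertices
    # of the two adjacent triangles come from adding and from subtracting
    # the numerators and the denominators.
    add = (p + r, q + s)
    sub = (p - r, q - s)
    return add, sub

# The edge between 0/1 and 1/1 bounds triangles with third vertices
# 1/2 and -1/0, i.e. the slope-infinity vertex:
print(third_vertices(0, 1, 1, 1))  # ((1, 2), (-1, 0))
```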

Notice that the triangles in this picture are different sizes, and in fact they get smaller as the numerators and denominators get bigger. But in reality, the edges of the Farey graph should all be the same length. So, you should think about this circle like the boundary of the hyperbolic plane, and the triangles as being ideal triangles. This isn’t exactly right either, since the edges in the Farey graph have finite length, unlike the edges of ideal triangles. But the Farey graph will have the same symmetry group as a tessellation of the hyperbolic plane by ideal triangles.

The Farey graph is closer in structure to a tree. In fact, we can construct a tree by putting a vertex at the center of each triangle and connecting two vertices whenever the corresponding triangles share an edge. The Farey graph will be quasi-isometric to this tree (though if you don’t know what quasi-isometric means, don’t worry about it.) In the same way that each edge in a tree cuts the tree into two separate trees, each edge in the Farey graph cuts the Farey graph (which is really a cell complex) into two disconnected sets of triangles.

Now, let’s go back to the train track from the beginning of this post. Recall that the set of loops carried by the train track consisted of all loops with positive slopes, as well as the loops with slopes 0 and ∞. These loops make up the right half-circle of the Farey graph. In particular, the subcomplex of the Farey graph spanned by the loops carried by this train track is exactly one of the two components that we get if we cut along the edge spanned by 0 and ∞.

Note that when we constructed this train track, we started with any two loops in the torus that intersect in one point, or equivalently, any edge in the Farey graph. We then had a choice of two different ways to smooth the vertex where they intersect into a pair of switches in the train track. If we had made the other choice with our original two loops, we would have gotten a train track that carried all negative slopes, i.e. the other component defined by the edge between 0 and ∞. By symmetry, if we had started with a different pair of loops, the two possible train tracks that we could construct from them would similarly define the two different components that we get by cutting the Farey graph along this new edge. (Note that one can also show that every “reasonable” train track in the torus can be constructed from two loops in this way.)

The point of all this is that the different train tracks on the torus can be thought of as defining all the different ways of cutting the Farey graph along single edges. Train tracks in higher genus surfaces play a very similar role, though it’s more complicated because the curve complexes of these surfaces are much less tree-like (though they’re still δ-hyperbolic, which is close). In particular, you can’t separate these complexes by removing a single edge, or indeed any finite collection of simplices. But train tracks still define subsets of loops that are very nice with respect to the curve complex structure.

The reason this turns out to be useful is that it is often possible to prove things about the types of loops that are carried by a given train track, which can then be translated into the language of the curve complex. This is one of the main techniques in Masur and Minsky’s papers on the curve complex [1], and on disk sets of handlebodies [2]. It also proved very useful in my work with Yoav Moriah [3] and his earlier work with Martin Lustig [4]. But a discussion along those lines will have to wait for a future post.


To the user, the only difference between Manifold and ManifoldHP is the extra precision; see here for details.

**Q:** How does this differ from the program Snap or the corresponding features of SnapPy?

**A:** Snap computes hyperbolic structures to whatever precision you specify, not just 212 bits. However, only some aspects of that structure can be accessed at the higher precision. In contrast, with ManifoldHP every part of the SnapPea kernel uses the high-precision structure. Eventually, we hope to add a ManifoldAP which allows for arbitrary precision throughout the kernel.

**Q:** Are there any negatives to using ManifoldHP over Manifold?

**A:** Yes, ManifoldHP is generally slower by a factor of 10 to 100. Multiplying two quad-double numbers requires at least 10 ordinary double multiplications, so some of this is inevitable.

**Q:** What is one place where the extra precision really helps?

**A:** Computing Dirichlet domains and subsidiary things like the length spectrum. ManifoldHP can find the Dirichlet domain of a typical 15-crossing knot exterior, but Manifold can’t.


R. Fenn, Tackling the trefoils.

One of the main questions in Knot Theory is the Tabulation Problem: Tabulate all knots up to *n* crossings for the largest *n* you can manage! Many modern tabulation techniques are quite high-tech algebraic/topological, using geometric structures on knot complements and the like, but sometimes simple combinatorial techniques will do the job. A combinatorial invariant views a knot as a knot diagram, which it views in turn as a network of crossings concatenated together in a plane, *i.e.* as a special sort of a tangle diagram. Knot diagrams are considered equivalent if they differ by a finite sequence of Reidemeister moves.

To classify knots, we identify knot invariants. A `good’ knot invariant should be both powerful and easy to compute.

Historically, the first knot invariants to strike this balance were the knot colourings, first considered by Tietze, and eventually explained in entirely elementary terms by Fox in 1956. The invariant is a yes-no answer to the following question:

Can you colour your knot by three colours such that all three colours are used, and at each crossing either one colour meets itself, or all three colours meet?

If you answer `yes’, then the knot is *3-colourable*.

The trefoil is 3-colourable:

The unknot is not 3-colourable, because it has a diagram with only one arc, and therefore with only one colour. Three-colourability is easily seen to be a knot invariant, because it is conserved by Reidemeister moves:

In fact, with different language and in the fundamental group world, the above was the original proof that the trefoil is knotted!

Of course, there’s only so far a yes-no `boolean’ invariant can go: it sifts knots into two equivalence classes, but doesn’t do more than that. The Figure Eight knot, for example, is not 3-colourable, which distinguishes it from the trefoil but not from the unknot.

A large set of boolean invariants is much better than just one, though. By varying our palette of colours, we can distinguish many more knots. We can colour knots with 5 colours instead of 3 (*e.g.* the Figure Eight knot *is* 5-colourable), play with the colouring rule so as to colour with general quandles, and our power to separate knots goes right up. As a matter of fact, a recent preprint by W. Edwin Clark, Mohamed Elhamdadi, Masahico Saito, and Timothy Yeatman distinguishes all 2977 prime oriented knots with up to 12 crossings, up to reversal and mirror image, using just 26 quandles!
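Fox colourings are easy enough to count by brute force. Here is a minimal sketch using planar-diagram (PD) codes: at each crossing the two over-edges share a colour, and the two under-edge colours must average to it mod *p*. The PD codes below are the standard ones for the trefoil and figure-eight knots; the function names are my own.

```python
from itertools import product

# PD code: each crossing is (a, b, c, d), with a, c the under-edges and
# b, d the two halves of the over-strand at that crossing.
TREFOIL = [(1, 4, 2, 5), (3, 6, 4, 1), (5, 2, 6, 3)]
FIGURE_EIGHT = [(4, 2, 5, 1), (8, 6, 1, 5), (6, 3, 7, 4), (2, 7, 3, 8)]

def count_fox_colourings(pd, p):
    # Count assignments of colours in Z/p to edges satisfying, at each
    # crossing, col[b] == col[d] and col[a] + col[c] == 2*col[b] (mod p).
    edges = sorted({e for crossing in pd for e in crossing})
    total = 0
    for colours in product(range(p), repeat=len(edges)):
        col = dict(zip(edges, colours))
        if all(col[b] == col[d] and (col[a] + col[c] - 2 * col[b]) % p == 0
               for (a, b, c, d) in pd):
            total += 1
    return total

def colourable(pd, p):
    # Nontrivially p-colourable: more colourings than the p constant ones.
    return count_fox_colourings(pd, p) > p

print(count_fox_colourings(TREFOIL, 3))       # 9: the trefoil is 3-colourable
print(colourable(FIGURE_EIGHT, 3))            # False
print(colourable(FIGURE_EIGHT, 5))            # True
```

This matches the discussion above: the trefoil is 3-colourable, while the Figure Eight knot is 5-colourable but not 3-colourable.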

W.E. Clark, M. Elhamdadi, M. Saito, T. Yeatman, Quandle Colorings of Knots and Applications.

But what about distinguishing a knot from its mirror image, or a knot from its reverse, using colourings? Here, a quandle is no longer sufficient, because you can reflect and reverse any quandle colouring, so that any colouring of a knot uniquely induces a colouring of its reverse and of its mirror image.

So how can we distinguish a knot from its mirror image? Do we need heavy machinery? (Actually the Kauffman polynomial is not at all bad; but never mind).

It turns out that a slightly extended notion of a knot colouring does the job, at least for the left-hand trefoil and for the right-hand trefoil. Namely, the following colouring does the job:

I’ll explain what the colouring is, then how it distinguishes these two knots. First, make a parallel copy of each trefoil. Colour the outermost one in the usual way with three colours . As for the inner one, colour it also , but with the rule that it interacts only with the outer trefoil, changing colours whenever it passes over or under it on a different colour. Actually, we could just make the lower one always pass under the upper one, and make all of its `crossings’ virtual; but that’s a cosmetic quibble.

A quick standard verification shows that this notion of a doubled 3-colouring is invariant under Reidemeister moves. Note that the parallel trefoils are “connected”, and you can’t move one without moving the other.

So that’s the colouring… how do we now extract a knot invariant from it? Simple. Ignore all doubled crossings where only a single colour participates for the inner knot (there aren’t any in the above picture, but a Reidemeister 1 move for example would create one), and all doubled crossings where the upper and lower colour agree for both undercrossing arcs (there is one such crossing in each knot). The set of coloured doubled crossings that are left over is a knot invariant, up to global automorphism of the colours (renaming “blue” as “green” and “green” as “blue”, for example), setting two copies of the same crossing equal to minus itself (itself with crossing sign reversed), and cancelling over-crossings with under-crossings with the same pattern.

As an invariant for coloured knots, this coincides with the `coloured untying invariant’ from my thesis:

D. Moskovich, Surgery untying of coloured knots.

It’s beautiful that Fenn can compute it combinatorially, generalize it hugely, and use it to distinguish the right-hand trefoil from the left-hand trefoil! Bravo!!!


Now a banker has found another duplicate in yet another table of 3-manifolds. This time it was Ben Burton, and the duplicate appears in the Hildebrand-Weeks cusped hyperbolic census.

I’m exaggerating a little. Ben is no longer a banker. But the duplication is real. The problem is that SnapPea’s Epstein-Penner decomposition code sometimes has trouble with highly symmetric manifolds. If the Epstein-Penner decomposition is not a triangulation, SnapPea attempts to build a subdivision. Unfortunately, it does not always build such a subdivision canonically. In this particular case, SnapPea generated distinct 5-tetrahedron subdivisions of a cube. Somehow the duplication was missed.

The problem resulted in a repeated manifold in the census.


The Thurston Legacy Conference Organizers write:

The conference “What’s Next? The mathematical legacy of Bill Thurston” will be held at Cornell University from Monday, June 23rd, to Friday, June 27th, 2014.

Bill Thurston made fundamental contributions to topology, geometry, and dynamical systems. But beyond these specific accomplishments he introduced new ways of thinking about and of seeing mathematics that have had a profound influence on the entire mathematical community. He discovered connections between disciplines that led to the creation of entirely new fields. The goal of this meeting is to bring together mathematicians from a broad spectrum of areas to describe recent advances and explore future directions motivated by Thurston’s transformative ideas.

The program will feature talks by Ian Agol, Mladen Bestvina, Michel Boileau, Danny Calegari, Benson Farb, Etienne Ghys, Rick Kenyon, Francois Labourie, Tan Lei, Vlad Markovic, Dusa McDuff, Curtis McMullen, John Milnor, Yair Minsky, Yi Ni, Alan Reid, Mitsuhiro Shishikura, Dennis Sullivan, Jeffrey Weeks, Anna Wienhard, Dani Wise, and Anton Zorich.

Additional activities will include a mathematical film festival, a presentation by Kelly Delp about her work with Thurston on constructing models of surfaces, software presentations by Nathan Dunfield and Rich Schwarz, exhibits of digital artwork and mathematical models, curated by Sarah Koch and Dylan Thurston, a public lecture by Jeff Weeks and a panel discussion on the topic of communicating mathematics.

The website for the Conference is http://www.math.cornell.edu/~thurston/.

Registration for the Conference will open January 2014 and the website will provide additional information on housing options, conference social activities, and other information about the Ithaca area.

We anticipate that some financial support will be available to help with travel and accommodation expenses. Information on this and the form for requesting support will be available when registration opens in January. Young researchers and underrepresented groups are especially encouraged to apply.

General inquiries should be addressed to ThurstonLegacyConference2014@math.cornell.edu

We look forward to seeing you in Ithaca!

Thurston Legacy Conference Organizers


For millennia, the Inca used knots in the form of quipu to communicate information. Let’s think how we might attempt to do the same.

So on the left we have Alice and on the right we have Bob. Alice is going to communicate with Bob by sending Bob partial information. In her hand, Alice holds a knot coloured by a quandle . Let’s assume for simplicity that for every and in there exists in such that . Alice sends Bob a collection of colours in for arcs in a knot diagram for . These colours `sit’ at dots (representing arcs) on the knot diagram, and these dots cannot be `isotopied’ under an overcrossing *i.e.* the following move is NOT an equivalence:

But I think that we should allow the following:

So what we’ve actually got is a `partially coloured knot’, that is a knot together with a choice of a subset of arcs on that knot marked by dots. Two of these are equivalent if they are related by a sequence of Reidemeister moves, the move mentioned earlier involving pairs of dots around a crossing, an automorphism of , plus the following moves when all colours involved are known (note that this is not the same thing as having dots on the arcs, because you can sometimes deduce colours that you’re not actually sent by Alice):

So dots have to be sent to dots, the colours have to match up, the diagram has to match up, but the set of dots is unordered.

Bob receives the colours at the dots in . So now Bob knows , but perhaps not its colouring. Given two such packets of information, Bob can confuse one with the other if the partially coloured knots he receives are equivalent. So the question is how many non-confusable packets of information we can send using knot and quandle .

Let’s start with the 3-coloured trefoil. For we can send only one bit, because the symmetry of the trefoil confuses each of its arcs with each of its other arcs, and any two elements of are related by an automorphism of . For , we may send either two identically coloured bits on the two arcs involved in a Reidemeister 1 move (which gives no information about the colouring of the trefoil), or we may send two colours of two different arcs. So we have two different packets we can send, the length of the packets is 2, and the capacity of such a channel is . For , there are three mutually distinguishable packets we may send, so the capacity of such a channel is .

In general, define the Shannon Capacity of the -coloured knot to be the maximum (or the supremum) over of the th root of the number of mutually distinguishable packets of size which Alice may send Bob. This is similar to the definition of a Shannon capacity for a graph, except that now we have made explicit the `agent' (the overcrossing arc) which allows us to confuse the `patients' (the undercrossing arcs), instead of it just being an edge in a graph. We also have a notion of equivalence of descriptions of the structure of the information (via Reidemeister moves etc.), which the notion of Shannon Capacity of a graph did not have.
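Using the packet counts worked out above for the 3-coloured trefoil (one distinguishable packet of size 1, two of size 2, three of size 3), the definition can be sketched as follows; the function name is my own, and taking a maximum over finitely many sizes only gives a lower bound for the supremum.

```python
def capacity_lower_bound(packet_counts):
    # packet_counts maps a packet size k to the number of mutually
    # distinguishable packets of that size; the Shannon capacity is the
    # supremum over k of the k-th root of that count.
    return max(n ** (1.0 / k) for k, n in packet_counts.items())

# Counts for the 3-coloured trefoil from the discussion above:
print(capacity_lower_bound({1: 1, 2: 2, 3: 3}))  # cube root of 3
```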

To compute the Shannon Capacity of a knot, we would need to know a lot about its symmetries and about the structure of . So it will be a hard problem in general, which shouldn't worry us too much, because even for graphs the problem is hard. Indeed, the Shannon capacity of the 7-cycle is an open problem, as mentioned here.

A quite natural extension of this idea would be for Alice not to have to send all of , but rather to allow her to send `pseudocrossings’, that are crossings in which over-under information is suppressed. There’s a very nice recent arXiv preprint about such objects, which they call `pseudoknots’.

There’s nothing special about knots in the above discussion- we could generalize with no effort at all to links, tangles, and virtual knotted and w-knotted objects of any sort. For me, the really interesting version (whose definition is entirely analogous) is for tangle machines, discussed HERE and HERE.

Recall that the basic building block of a tangle machine is an *interaction*:

If we were to delete all of the dotted lines (and assume we knew all the agents in some sense), we’d have just a graph, in which two vertices could be confused if connected by an edge. Confusion and deduction are two sides of the same coin. The dotted lines tell us how that deduction takes place: we can know an output given both the input and the agent, or we can know the input given both the output and the agent, or we can know the agent given both the input and the output.

It looks to me as though this notion of Shannon capacity could come up when statistically detecting tangle machines inside data (I’ll post more about this in the future). We might detect certain colours and crossings with certainty, but others we’re not sure about. I’m imagining Shannon capacity as being an ingredient in some sort of a measure for how much we know about the tangle machine (or about the knot) from this partial data, but this is pure fantasy at the moment. Of course, without a solid application such as this in mind, the definition is fluid, and I doubt that the above is a `final’ definition of Shannon Capacity of a coloured knot in any useful sense. I’d love to know what is!

I love this idea of partially coloured knots as “thumb drives”. I wonder how it might be useful…


Yesterday I received correspondence from a certain Kenneth A. Perko Jr., whose name perhaps you have heard before. Its contents are too delicious not to share: knot theory’s favourite urban legend is completely false!

Excited, Ken Perko shot off a paper to PAMS, containing only a title and a list of figures demonstrating an ambient isotopy. His paper entered the Guinness Book of World Records as the “shortest mathematics paper of all time”, and Ken Perko obtained immortality.

This is the Perko pair:

What a story! The human drama, the “math for the masses” aspect that a complete amateur could make a massive mathematical discovery by playing with some string, the beautiful magenta pair of knots, the importance of attention to detail and using all your senses (not just your head)! What a shame that virtually everything written above turns out to be false!

By now, Ken Perko was studying law at Harvard Law School (he quotes Reidemeister: “das mathematische Denken ist nur der Anfang des Denkens”, or “mathematical thought is just the beginning of thought”). But he was still deeply interested in low dimensional topology, frequently publishing notable results. How cool is that! In 1973, noting that his covering linkage invariants did not distinguish the from the , he pulled out his yellow legal pad and worked out an explicit sequence of Reidemeister moves relating the two diagrams (or rather, one with the mirror image of the other). His celebrated paper on the topic was rejected by TAMS for being too short (poor TAMS), but was picked up by PAMS [5]. It is in fact 2 pages long (plus tables).

What makes this story even more interesting is that the Perko pair in fact falsified what was at the time a commonly-accepted “theorem” of Little, which had been quoted as fact for almost a century. Perko explains:

The duplicate knot in tables compiled by Tait-Little [4], Conway [2], and Rolfsen-Bailey-Roth [6], is not just a bookkeeping error. It is a counterexample to an 1899 “Theorem” of C.N. Little (Yale PhD, 1885), accepted as true by P.G. Tait [4], and incorporated by Dehn and Heegaard in their important survey article on “Analysis situs” in the German Encyclopedia of Mathematics [3].

Little’s `Theorem’ was that any two reduced diagrams of the same knot possess the same writhe (number of positive crossings minus number of negative crossings). The Perko pair have different writhes, and so Little’s “Theorem”, if true, would have proven them to be distinct!

Perko continues:

Yet still, after 40 years, learned scholars do not speak of Little’s false theorem, describing instead its decapitated remnants as a Tait Conjecture, and indeed one subsequently proved correct by Kauffman, Murasugi, and Thistlethwaite.

Whoa!!

I think they are missing a valuable point. History instructs by reminding the reader not merely of past triumphs, but of terrible mistakes as well.

And the final nail in the coffin is that **the image above isn’t of the Perko pair**!!! It’s the `Weisstein pair’ and mirror , described by Perko as “those magenta colored, almost matching non-twins that add beauty and confusion to the Perko Pair page of Wolfram Web’s Math World website. In a way, it’s an honor to have my name attached to such a well-crafted likeness of a couple of Buddhist prayer wheels, but it certainly must be treated with the caution that its color suggests by anyone seriously interested in mathematics.”

The reason for this error was that the was deleted from subsequent editions of Rolfsen’s knot tables, so the there is actually . Tamper with knot numberings at your peril!

The real Perko pair (accept no imitations!) is this:

1. J.S. Birman, *On the Jones polynomial of closed 3-braids*, Inventiones Math. **81** (1985), 287-294, at 293.

2. J.H. Conway, *An enumeration of knots and links, and some of their algebraic properties*, Proc. Conf. Oxford, 1967, p. 329-358 (Pergamon Press, 1970).

3. M. Dehn and P. Heegaard, Enzyk. der Math. Wiss. III AB 3 (1907), p. 212: “Die algebraische Zahl der Ueberkreuzungen ist fuer die reduzierte Form jedes Knotens bestimmt.”

4. C.N. Little, *Non-alternating +/- knots*, Trans. Roy. Soc. Edinburgh **39** (1900), page 774 and plate III. This paper describes itself at p. 771 as “Communicated by Prof. Tait.”

5. K.A. Perko, Jr. *On the classification of knots*, Proc. Amer. Math. Soc. **45** (1974), 262-266.

6. D. Rolfsen, *Knots and links* (Publish or Perish, 1976).
