A binary operation $\triangleright$ is **distributive** over another operation $\ast$ if $(a \ast b) \triangleright c = (a \triangleright c) \ast (b \triangleright c)$. If $\ast = \triangleright$ then the operation is said to be **self-distributive**: $(a \triangleright b) \triangleright c = (a \triangleright c) \triangleright (b \triangleright c)$. Examples of self-distributive operations include conjugation $a \triangleright b = b^{-1}ab$, conditioning $X \triangleright Y = (X \mid Y)$ (assume X and Y are both Gaussian so that such a binary operation makes sense, essentially as covariance intersection), and linear combinations $a \triangleright b = ta + (1-t)b$ with $t \in (0,1)$ fixed (say), and $a$ and $b$ elements of a real vector space.
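
To make the conjugation example concrete, here is a quick sanity check (a sketch in Python, my own illustration; permutations of three letters stand in for a small non-abelian group):

```python
from itertools import permutations

# Elements of the symmetric group S_3, as tuples: p maps i to p[i].
S3 = list(permutations(range(3)))

def compose(p, q):
    """(p * q)(i) = p(q(i))"""
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0, 0, 0]
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def conj(a, b):
    """The conjugation operation a > b = b^{-1} a b."""
    return compose(inverse(b), compose(a, b))

# Self-distributivity: (a > b) > c == (a > c) > (b > c) for all 216 triples.
assert all(conj(conj(a, b), c) == conj(conj(a, c), conj(b, c))
           for a in S3 for b in S3 for c in S3)
print("conjugation is self-distributive on S_3")
```

The same loop with `compose` in place of `conj` fails, of course: composition is associative but not self-distributive.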

Two nice survey papers about self-distributivity are:

- J. Przytycki, Distributivity versus associativity in the homology theory of algebraic structures. arXiv:1109.4850.
- M. Elhamdadi, Distributivity in Quandles and Quasigroups. arXiv:1209.6518.

I won’t survey these papers today – instead I’ll relate some abstract philosophical musings on the topic of associativity vs. distributivity.

Algebraic topology detects information not only about associative structures like groups, but also about self-distributive structures like quandles. I wonder to what extent distributivity can stand in for associativity. Might our associative age give way to a distributive age? Will future science make essential use of distributive structures like quandles, racks, and their generalizations? At the moment, such structures appear prominently only in low dimensional topology.

I think that there is a philosophical difference between an *associative world* and a *distributive world*. The associative world is a geometric world; a world in which space and time are important and fundamental concepts. The distributive world seems different to me. I think that it is a quantum world without space and time, in which only information exists.

Analogous to mass being a manifestation of energy via $E = mc^2$, so energy may be viewed as a manifestation of information via Shannon/Boltzmann entropy. From a physics perspective, there exists the `future physics’ idea that space and time might be emergent, and that the only true fundamental physical quantity is information. Vedral has written a book expounding this point of view. If this idea takes hold, then future fundamental physics will include information physics, and I believe that its underlying mathematics will belong not to the associative world, but rather to the distributive world. I speculate that information physics will some day make essential use of quandles, racks, and related structures.

The associative world is more familiar, so I’ll begin with a survey of the history of the distributive world, followed by a brief survey of both worlds. Then I’d like to compare and contrast them.

But perhaps there is more in heaven and earth than is dreamt of in associative philosophy. The person credited with this observation is the great American logician C.S. Peirce, who in 1880 concluded:

These are other cases of the distributive principle… These formulae, which have hitherto escaped notice, are not without interest.

For the next century or so, like stray ants who don’t follow paths to establish food sources, there were occasional bursts of realization that distributivity might be fundamental. Notable among the mavericks is M. Takasaki. Alone and isolated as a fresh Japanese math PhD in Harbin during wartime, Takasaki defined an involutive quandle in 1942 as an abstraction of the geometric idea of a symmetric linear transformation. Takasaki envisioned his self-distributive `keis’ as alternatives to groups, but his dream is still largely unrealized. In 1959 another pair of mavericks, John Conway and Gavin Wraith, discovered quandles and racks whose operations were abstractions of the conjugation operation in group theory. But it was only in 1982, with the work of Joyce, and another great independent discoverer Matveev, that quandles and racks entered the mathematical consciousness. Other independent thinkers who discovered or rediscovered such structures (racks, in this case) include Brieskorn and Kauffman. There were ideas about using quandles in the context of geometry (Takasaki), singularity theory (Brieskorn), and symmetric spaces (Joyce), but I think that quandles and suchlike only really ever took hold in low dimensional topology.

From the knot theorist’s perspective, quandles and racks were popularized by Fenn, Rourke, and Sanderson’s 1992 discovery of rack cohomology (the quandle version is due to Carter et al., and the history is explained in his survey). It turns out that algebraic topology works just fine when associativity is replaced by distributivity, and quandle cocycles yield computable knot invariants. Algebraic topology of quandles and racks has become a bit of a subfield inside low dimensional topology, and this is more or less the only quasi-popular use of quandles of which I am aware.

Note: quandles and racks are only part of the mathematical consciousness of low-dimensional topologists! Physicists, biologists, chemists, computer scientists, engineers, and the rest of humanity don’t really know what a quandle is. I think that we’re a few steps ahead of the pack.

Viewed broadly enough, I think that every associative operation is an abstraction of one or more of the following archetypes:

**Addition**: The archetypal geometric picture for addition is concatenation of segments of specified lengths. To add natural numbers $a$ and $b$, start with a number line, represent the number $a$ by the segment $[0,a]$, mark a second point at distance $b$ from the point $a$ in the positive direction representing $b$ as $[a,a+b]$, and concatenate the two line segments to represent $a+b$ by the concatenated directed segment $[0,a+b]$. Associativity is seen in the geometry (the space), in that $(a+b)+c = a+(b+c)$, and both are represented by the same directed segment $[0,a+b+c]$.

**Multiplication**: The archetypal geometric picture for multiplication is to fill a cycle by a cell. To multiply natural numbers $a$ and $b$, represent $a$ by the directed segment $[0,a]$ along the x-axis and represent $b$ by the directed segment $[0,b]$ along the y-axis, and form the rectangle with vertices $(0,0)$, $(a,0)$, $(a,b)$, and $(0,b)$. The product $ab$ is visualized as the area of the rectangle (the 2-cell) in the upper right quadrant whose boundary is the above rectangle. Associativity is seen from the fact that $(ab)c$ and $a(bc)$ both measure the volume of the same box in Euclidean 3-space.

In the associative world, it makes sense to represent objects by 0-cells and maps by 1-cells. Data structures can sensibly be represented using labeled graphs. A composition of maps from an object represented by a vertex $v$ to an object represented by a vertex $w$ on a graph is represented by a path on the graph between $v$ and $w$. It makes sense to represent a composition of maps in this way thanks to associativity – there is no need for brackets along the path. Maps between maps can be represented by directed higher cells, sort of like our geometric picture for multiplication. Again, this makes sense thanks to associativity.

The claim that I am making is that formalisms such as category theory and graph theory are native to the associative world. So too classical probability theory. Probabilities are added and multiplied, and they are always between $0$ and $1$. So too, the theory of computable functions relies on associative compositions.

Let’s consider the following archetypes for distributive operations:

**Convex combination**: Our first archetype is $a \triangleright b = ta + (1-t)b$ with $a$ and $b$ elements of a real vector space, and $t \in (0,1)$ fixed.

**Conjugation**: The second archetype is $a \triangleright b = b^{-1}ab$.

Neither of these operations is associative in general. For example, for convex combination,

$(a \triangleright b) \triangleright c = t^2 a + t(1-t)b + (1-t)c$, while $a \triangleright (b \triangleright c) = ta + t(1-t)b + (1-t)^2 c$,

and these differ for generic $t$.

Both operations have natural archetypes in the world of information (their best-known archetypes are in low dimensional topology of course). One archetype for convex combination is from Bayesian statistics. I estimate the mean of data based on a sample, and I obtain a number $x$. But I have a prior belief that the mean should actually be $y$. Based on external information (*e.g.* the number of elements in the sample and my choice of standard of `absolute credibility’), I compute a constant $t \in (0,1)$, and my updated estimate becomes $x \triangleright y = tx + (1-t)y$. Fusion operations of this form satisfy $(x \triangleright y) \triangleright z = (x \triangleright z) \triangleright (y \triangleright z)$.
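
A sketch of this fusion update (the numbers and the fixed weight $t = 0.7$ are my own illustration, not a prescription from Bayesian theory):

```python
def fuse(x, y, t=0.7):
    """Convex-combination fusion: x > y = t*x + (1-t)*y."""
    return t * x + (1 - t) * y

sample_mean, prior_mean = 4.2, 5.0
updated = fuse(sample_mean, prior_mean)   # 0.7*4.2 + 0.3*5.0 = 4.44

# Self-distributive: (x > y) > z == (x > z) > (y > z) ...
x, y, z = 4.2, 5.0, 3.1
assert abs(fuse(fuse(x, y), z) - fuse(fuse(x, z), fuse(y, z))) < 1e-12

# ... but not associative: (x > y) > z != x > (y > z) in general.
assert abs(fuse(fuse(x, y), z) - fuse(x, fuse(y, z))) > 1e-6
```

The self-distributivity here is just the algebraic identity $t^2x + t(1-t)y + (1-t)z$ computed two ways.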

I can view convex combination as `mixing'; I mix $t$ units of $x$ with $1-t$ units of $y$.

An archetype for conjugation might be quantum interference, where quantum evolution conjugates a density operator by a unitary operator: $\rho \mapsto U \rho U^{-1}$. So `interaction’ is convex combination, and `evolution’ is conjugation…

It doesn’t make much sense to represent words in **D**istributive **N**on-**A**ssociative (DNA) structures using concatenated edges in labeled graphs, because concatenating edges would not correspond to a well-defined composition of operations (because of non-associativity). There are still notions of Cayley graphs for quandles and racks (e.g. Chapter 4 of Winker’s thesis); I don’t feel qualified to comment on these.

The natural way to represent words in DNA structures, I would think, would be to walk along (modified) tangle diagrams. A Reidemeister III move on tangle diagrams coloured by distributive structures makes sense, because $(a \triangleright b) \triangleright c = (a \triangleright c) \triangleright (b \triangleright c)$:

One idea behind tangle machines is to make use of this fact to do distributive algebra on tangles. So, while for an associative operation one might diagrammatically represent a product in some way like this:

In a distributive world we might represent $a \triangleright b$ maybe like this:

Is there a DNA (**D**istributive **N**on-**A**ssociative) analogue to category theory, where morphisms distribute but don’t have associative composition? I wonder… I also wonder whether quantum probability, suitably formulated using convex combination and conjugation operations, would be a valid DNA analogue to probability. If we take Reidemeister 2 seriously, and apply it to the DNA structure of Gaussian distributions whose operation is conditioning, we have to define `unconditioning’ X by Y, and the resulting probability might be negative. Classically this makes no sense, but from a quantum perspective it’s fine, and even natural; it feeds my confirmation bias for the philosophical thesis we are considering. Consider the following quote by Feynman:

The only difference between a probabilistic classical world and the equations of the quantum world is that somehow or other it appears as if the probabilities would have to go negative.

Most quantum topology of tangles is actually associative, in that we speak of the *category of tangles*, whose operation is stacking. Morphisms are tangles with tops and bottoms:

Stacking is an associative operation. Via a TQFT formalism, braided monoidal bla bla bla categories give rise to tangle invariants and to knot invariants.

Dror Bar-Natan suggested that this might not really be the right way to think about tangles. Tangles should not have `tops’ and `bottoms’ – such information certainly does not exist topologically. Instead, endpoints of tangles should be marked points around a disc (more generally a disjoint union of spheres with holes):

Surprisingly, this disc, which (partially following Bar-Natan) I think we should call the `firmament’, is quite important: See Dror’s “cosmic coincidences” talk.

You then concatenate by connecting two endpoints, and extending the firmament appropriately. This way of thinking is behind Dror’s Khovanov homology work, and current work on various w-knotted objects by him and collaborators.

A major difference between the “stacking” worldview and the “circuit algebra” worldview is that the former views a tangle as a morphism from data stored in the “boundary points at the bottom” to data stored in the “boundary points at the top”. So a tangle encodes an operator (reference: Chapter 3 of Ohtsuki’s book Quantum Invariants). But in the latter worldview, a tangle just encodes some relationship between a bunch of data at endpoints. In this worldview, a tangle cannot encode a mapping in any meaningful sense – this worldview does not support the idea of operator invariants of tangles. This worldview isn’t imposing any non-topological artificial structure on tangles. All it has are the Reidemeister moves, including Reidemeister III. So tangles in this sense are a distributive-world structure.

As an example, let’s consider a single crossing. When tangles express morphisms to be stacked, this `represents’ an R-matrix representing a linear transformation from a vector space to itself. Bottom happens before top, and there’s an implicit time axis. But with no up-down information, it represents a transition from one undercrossing arc $a$ to the other $a \triangleright b$ by way of an overcrossing arc $b$. No braided monoidal categories anywhere in sight.

Having tops and bottoms to tangles is nice because associative structures tend to be more amenable to explicit computation. Computing in a quandle is usually very hard, perhaps **because** the Turing machine formalism itself belongs to the associative world. My vague thought is that we can probably do a lot better in the future using different sorts of (probabilistic?) tools… but that’s a speculation for another day. I also think that distributed and parallel computing could provide better ways to compute in distributive structures, and may in turn have distributive algebraic models (Marius Buliga has some work in this direction: e.g. Chemlambda, joint with Louis Kauffman).

Although people have begun looking at the distributive world only quite recently, it’s already rife with terminology. The more this world is explored, the more terminology there will be, so I’d just like to point out some parallels. Consider the following axioms on a set $Q$ with a set of binary operations $B$:

**Idempotence**: $a \triangleright a = a$ for all $a \in Q$ and for all $\triangleright \in B$.

**Injectivity**: If $a \triangleright c = b \triangleright c$ for some $c \in Q$ and $\triangleright \in B$, then $a = b$.

**Distributivity**: $(a \triangleright b) \triangleright' c = (a \triangleright' c) \triangleright (b \triangleright' c)$ for all $a,b,c \in Q$ and all $\triangleright, \triangleright' \in B$.

If $B$ contains only one element $\triangleright$, and assuming that $a \mapsto a \triangleright c$ is also surjective for all $c \in Q$, we have the following cases.

- If $\triangleright$ is both distributive and idempotent then you’re looking at a *spindle*.
- If $\triangleright$ is distributive and injective then you’re looking at a *rack*.
- If all three, then you’re looking at a *quandle*.
- Only distributive and you’ve got yourself a *shelf*.
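
For a concrete instance of this taxonomy, the *dihedral quandle* on $\mathbb{Z}_5$, with operation $a \triangleright b = 2b - a \pmod 5$, satisfies all three axioms; a brute-force check (a sketch in Python, my own illustration):

```python
n = 5
Q = range(n)

def op(a, b):
    """Dihedral quandle operation on Z_n: a > b = 2b - a (mod n)."""
    return (2 * b - a) % n

# Idempotence: a > a == a.
assert all(op(a, a) == a for a in Q)

# Injectivity (in fact bijectivity) of a -> a > c for each fixed c.
assert all(len({op(a, c) for a in Q}) == n for c in Q)

# Self-distributivity: (a > b) > c == (a > c) > (b > c).
assert all(op(op(a, b), c) == op(op(a, c), op(b, c))
           for a in Q for b in Q for c in Q)
print("Z_5 with a > b = 2b - a is a quandle")
```

Dropping the idempotence check would make the same code a rack-checker, and so on down the list.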

Lots of operations and you might add words like *multi-*, so you have multiracks, multiquandles, multishelves… or maybe G-families of quandles, or irq’s, or whatever.

Staring at these DNA structures though, they look quite parallel to familiar associative structures. Injectivity parallels invertibility of elements (*i.e.* it tells us that the map $a \mapsto a \triangleright c$ is left-invertible) and distributivity parallels associativity. I’m not sure what the parallel associative concept to idempotence is (idempotence involves both the element $a$ and the operation $\triangleright$), but I think it might be orthogonality, because $a \triangleright a = a$ reminds me of relations in orthogonal groups. Also, conjugation distributes over convex combination, but not conversely. We might therefore think of convex combination as being parallel to addition, and conjugation as parallel to multiplication. So, using the adjective `DNA’ for `distributive non-associative’, a quandle might be a `DNA orthogonal group’, a rack might be a `DNA group’, and if you have both conjugation and convex combination, maybe you have a `DNA near-field’.

Why would you use a structure like that? Well, as an example of how it might be useful, here’s an AND gate without trivalent vertices, where two fixed elements stand in for the digits $0$ and $1$ correspondingly. The operations used are convex combinations and conjugation.

It seems to be very natural to consider structures where $B$ has lots of elements – it doesn’t inhibit their algebraic topology, it occurs naturally in our archetypes (in the Bayesian probability archetype, to expect all `new’ information to have the same credibility is unnatural; see also Buliga’s work on irq’s, emergent algebras, and related structures – all DNA structures – HERE and HERE), and it allows us to construct various topological invariants such as invariants of knotted handlebodies (“A G-family of quandles and handlebody-knots” by A. Ishii, M. Iwakiri, Y. Jang, and K. Oshiro).

The term `DNA’ suggests that distributive non-associative structures are in some way fundamental (like DNA is fundamental to cells in living organisms), and I think that they are. There are some simple transforms between the associative world and the distributive world too: Given a group, you can look at its associated conjugation quandle. Conversely, automorphisms of a quandle form a group. In another direction, you can represent a tangle diagram by a graph, for example by representing each arc as a vertex, drawing edges from the vertex representing the overcrossing to the two vertices representing undercrossings, and drawing an edge between the undercrossings. By doing this you’ve thrown away all your symmetries – graphs are rigid and there are no Reidemeister moves on graphs. This construction is also partially reversible.
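
The group-to-quandle direction is easy to demonstrate. Below is a sketch (Python, my own helper names) that builds the conjugation quandle of $S_3$ and checks that each translation $a \mapsto a \triangleright c$ is a bijection respecting the quandle operation, so that these translations generate a group of quandle automorphisms:

```python
from itertools import permutations

# The group S_3 as permutation tuples, with composition and inversion.
S3 = list(permutations(range(3)))
mul = lambda p, q: tuple(p[q[i]] for i in range(3))
inv = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))

# The associated conjugation quandle: a > b = b^{-1} a b.
conj = lambda a, b: mul(inv(b), mul(a, b))

# Quandle axioms hold: idempotence and self-distributivity.
assert all(conj(a, a) == a for a in S3)
assert all(conj(conj(a, b), c) == conj(conj(a, c), conj(b, c))
           for a in S3 for b in S3 for c in S3)

# Each translation a -> a > c is a bijection of the quandle that
# respects the operation, i.e. a quandle automorphism.
for c in S3:
    assert sorted(conj(a, c) for a in S3) == sorted(S3)          # bijection
    assert all(conj(conj(a, b), c) == conj(conj(a, c), conj(b, c))
               for a in S3 for b in S3)                          # homomorphism
print("translations of the conjugation quandle of S_3 are automorphisms")
```

Note that the homomorphism condition for a translation is literally the distributivity axiom restated, which is the point.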

I think there’s a whole distributive world waiting to be discovered, and we’re just looking at the tip of the iceberg. I can’t wait to see these distributive structures play a role outside low dimensional topology, in other parts of mathematics and in other sciences!


One major problem with this story, and with similar stories, is that the knot diagrams have to be photographed (and thus identified) by hand. The pictures are not always easy to interpret (e.g. distinguishing overcrossings from undercrossings):

Also resolution might be low, objects might be in the way…

This is a computer vision problem as opposed to a math problem – but wouldn’t it be nice if a computer could recognise a knot type from a suboptimal picture? If you could snap a picture of yourself standing in front of a knot making bunny ears behind it, and your computer would automatically tag it with the correct knot type? Furthermore, wouldn’t it be nice if a computer could recognise your knot on the basis of many noisy pictures, perhaps taken from different angles?

In computer vision, there is a concept of a geon. A geon is a fundamental shape, such as a sphere or a cube, which a computer or the human brain can recognise from any angle even if the resolution is low and even if there are other objects in the way. The Recognition by components (RBC) theory asserts that vision is a bottom-up process which works by combining geons.

Geons have always been defined geometrically. A. Carmi suggested to me that topological geons should also exist. Indeed- a human can recognise a trefoil in any “reasonable” (i.e. fairly close to “minimal energy”) configuration, from any angle, even at low resolution and even if there are objects in the way. A computer ought to be able to do the same thing; and actually much more.

Computer vision is the most intensively researched field in applied computer science. It contains a huge body of research; all geometric and analytical as far as I know. Would it help to introduce some low-dimensional topology? Could topological geons such as knots and links help computers to see the world better? This would be a further manifestation of a “low-dimensional topology of information”!


Here is a lovely, simple theorem.

Given a non-trivial link in the 3-sphere with all pairwise linking numbers equal to zero, it is impossible to put that link into a position

where every component is a round circle.

Definition: A link in S^3 is “round” if every component is the intersection of an affine-linear 2-dimensional subspace of R^4 with S^3.

The idea for the proof is that if all the components of a link are round, the linking number of any two components is either 0 or ±1, depending on whether or not the affine-linear 2-discs they bound in D^4 intersect. If the pairwise linking numbers are zero, the discs do not intersect, so shrinking the radii produces an animation where the link component radii go to zero while the link components remain disjoint.

A corollary of this observation is that the Borromean rings (and the Whitehead link, etc) can not be put into a position where every component is round — this holds true in R^3 as well as S^3, since stereographic projection preserves round circles.

Although the Borromean rings can not be realized by round circles in R^3, they can be realized by ellipses. Haefliger used a higher-dimensional version of the ellipsoidal Borromean rings to construct his exotic smooth embedding of S^3 in S^6, so this is an idea that “has legs.”

Here is one elliptical embedding of the Borromean rings in R^3:

x^2 + 2y^2 = 1, z=0

y^2 + 2z^2 = 1, x=0

z^2 + 2x^2 = 1, y=0
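
One can check numerically that these ellipses are pairwise disjoint and pairwise unlinked (by symmetry it suffices to check one pair), by evaluating the Gauss linking integral on polygonal approximations. A rough numerical sketch, with my own parametrizations of the equations above and a Hopf link as a sanity check:

```python
import numpy as np

n = 400
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
s = 1 / np.sqrt(2)

# Two of the three ellipses:  A: x^2+2y^2=1, z=0   B: y^2+2z^2=1, x=0
A = np.stack([np.cos(t), s * np.sin(t), np.zeros(n)], axis=1)
B = np.stack([np.zeros(n), np.cos(t), s * np.sin(t)], axis=1)

def gauss_link(P, Q):
    """Discretized Gauss linking integral of two closed polygonal curves."""
    dP = np.roll(P, -1, axis=0) - P
    dQ = np.roll(Q, -1, axis=0) - Q
    r = P[:, None, :] - Q[None, :, :]                  # pairwise displacements
    cross = np.cross(dP[:, None, :], dQ[None, :, :])   # dP_i x dQ_j
    num = np.einsum('ijk,ijk->ij', r, cross)
    den = np.linalg.norm(r, axis=2) ** 3
    return (num / den).sum() / (4 * np.pi)

# The two ellipses are disjoint...
assert np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2).min() > 0.1
# ...and unlinked: pairwise linking number 0, as in the theorem above.
assert abs(gauss_link(A, B)) < 0.05

# Sanity check: two round circles forming a Hopf link have linking number +-1.
H1 = np.stack([np.cos(t), np.sin(t), np.zeros(n)], axis=1)
H2 = np.stack([1 + np.cos(t), np.zeros(n), np.sin(t)], axis=1)
assert abs(abs(gauss_link(H1, H2)) - 1) < 0.05
```

Of course the linking numbers see nothing of the Borromean property itself; that lives in higher-order invariants (Milnor's triple linking number).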

You might ask “what does all this have to do with spaces of knots?” It’s about time we got to that.

Much time has been spent in geometric topology on relatively foundational problems, like classification problems. Manifolds up to diffeomorphism. Rigid hyperbolic structures. Various cobordism relationships between manifolds, surgery relationships, and so on. These are relatively discrete-ish problems. There are times when that’s less of the case. Cerf theory, sweep-outs, singularity theory, open book decompositions and Teichmüller theory all have aspects of the spaces-of-things philosophy, where one studies families.

In spaces of knots, the objects of study tend to be things like the space of all C^1-smooth embeddings S^1 -> S^3 with the C^1-metric topology. That’s the topology where one takes as the distance between two smooth embeddings f,g : S^1 -> S^3 the maximum over z in the circle S^1 of |f(z)-g(z)| + |f'(z)-g'(z)|; it is sometimes called the Whitney topology. So in this topology two such embeddings are close only when there is a “small” isotopy from one to the other.
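
As a sketch, the C^1 distance can be approximated on a finite sample of the circle (the curves, the perturbation, and the sample size below are my own illustration):

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)

# f: the round unit circle in the xy-plane of R^3, with its derivative.
f  = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
df = np.stack([-np.sin(t), np.cos(t), np.zeros_like(t)], axis=1)

# g: the same circle with a small out-of-plane wobble.
eps = 0.01
g  = np.stack([np.cos(t), np.sin(t), eps * np.sin(3 * t)], axis=1)
dg = np.stack([-np.sin(t), np.cos(t), 3 * eps * np.cos(3 * t)], axis=1)

# C^1 distance: sup over z of |f(z)-g(z)| + |f'(z)-g'(z)|,
# approximated by a maximum over the sample points.
c1_dist = np.max(np.linalg.norm(f - g, axis=1) + np.linalg.norm(df - dg, axis=1))
print(f"approximate C^1 distance: {c1_dist:.4f}")
```

Here the answer is close to eps times sqrt(10), about 0.0316: the sup of eps|sin u| + 3 eps|cos u|. Note that the derivative term triples the contribution of the wobble, which is exactly why C^1-close embeddings cannot differ by a tight little knotted kink.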

One of the natural reasons to study spaces of knots comes not from foundational 3-manifold theory questions, but from mechanical engineering (considered broadly!). Specifically, continuum mechanics: subjects like elasticity and plasticity. These subjects study materials and their behaviours under different stresses and conditions. The connection to spaces of knots is the idea of thinking of a physical process as a dynamical system on a state space, a space of all possible configurations. Knots are one of the most basic examples of infinite-dimensional state spaces that allow for deformable objects. A more typical continuum mechanics problem would involve 2-dimensional continua, like the study of how a plastic shopping bag deforms when it’s carrying groceries, or the dynamics of human flesh, or the dynamics of a big canvas tent. On the extreme end, general relativity is very close to continuum mechanics. On the more pragmatic end, high-dimensional state spaces are increasingly important in subjects like robotics where one has to plan the motion of a complex object. In that sense, spaces of knots could be viewed as a “baby” case of a much wider collection of problems.

A `physical’ dynamical system on spaces of knots is given by the electrostatic potential. The idea would be to imagine a knot as being an elastic band embedded in S^3, and one places a uniform electric charge along that elastic band. The elastic band is made of rubber, so the charges do not move relative to the rubber. One can write down the corresponding differential equations and construct various potential functions on the space of knots Emb(S^1,S^3); see for example the work of Jun O’Hara at Tokyo Metropolitan University. Knowledge of the homotopy-type of spaces of knots tells you about what kind of critical points your potential function must have (and conversely), via traditional subjects such as Morse theory.

Here is one of the most direct connections with low-dimensional interests. An open problem in knot theory is whether or not there is an efficient algorithm to determine if a knot is trivial, say, starting from a knot diagram. The Haken algorithm has nice implementations in Regina, but it’s exponential run-time. And although it gives one access to the isotopy to the trivial knot provided it verifies the knot is trivial, it isn’t the most convenient access one could hope for.

Consider the subspace UK of Emb(S^1,S^3) consisting of knots that are isotopic to the trivial knot. We know via Allen Hatcher’s work in the 1980’s that UK has the homotopy-type of the subspace of parametrized great circles, i.e. UK has the homotopy-type of S^3 x S^2. From this we can conclude that there exists a smooth, real-valued function UK -> R where the only critical points are the global minima, those being the great circles. At present we only know such a “potential function” exists in the weak Zermelo-Fraenkel sense. Due to the nature of Hatcher’s proof, we do not know the *form* of such a function. If the potential function had a nice geometric or physical interpretation (something like an electrostatic potential, for example) then perhaps the gradient-flow could be turned into an efficient mechanism to recognise trivial knots. By and large the issue of finding critical points of physically-defined potential functions Emb(S^1,S^3) -> R is an open problem. But as Hatcher shows in the final section of his paper (linked above), if you had such a potential function, you could give a new proof of the Smale Conjecture. The electrostatic potential is not the only potential function that could potentially be used in a new proof of the Smale Conjecture; the Menger curvature is another seemingly-reasonable candidate, and has its own appeal.

The person that really got the study of spaces of knots off the ground and into peoples’ imaginations is Victor Vassiliev. Vassiliev had been studying singularity theory with Arnold, in the spirit of how Arnold used singularity theory to describe the (co)homology of configuration spaces. One can think of a configuration space of points in the plane as the space of embeddings of a finite set into the plane, Emb({1,2,…,n}, R^2). That embedding space sits in the space of all maps Maps({1,2,…,n}, R^2), which is just R^{2n}. So the configuration space is the complement of a “discriminant” space, sometimes also called the “diagonal”, where the points in R^2 are required to have some collisions. Similarly, the embedding space Emb(S^1, S^3) is a subspace of the mapping space Map(S^1,S^3), whose homotopy-type is known: it is S^3 x \Omega S^3. So if one is content to study the (co)homology of Emb(S^1,S^3), one can study it via Spanier-Whitehead duality. This turns the relatively tricky problem of studying the (co)homology of Emb(S^1,S^3) into the somewhat more tractable problem of studying the singular maps S^1 -> S^3. The singular maps are “more tractable” precisely because they form a stratified space. You can count the double points, triple points, etc; similarly you can count the places where the derivative is zero, giving a filtration. This gives you a non-homogeneous object to work with, and suddenly there are details to study. Vassiliev went quite far with this perspective, giving a spectral sequence that converges to the (co)homology of Emb(S^1,S^n) for n at least 4. In the 3-dimensional case, it’s unclear precisely what the Vassiliev spectral sequence says about the homotopy-type of Emb(S^1,S^3), and that is an open problem. The invariants of H_0(Emb(S^1,S^3)) that it produces are known as “Vassiliev invariants” or “finite type invariants”. It remains an open problem whether or not one can distinguish knots via Vassiliev invariants.
Due to the nature of their definition in terms of double points, one might expect that the key property of Vassiliev invariants is how they depend on crossing changes. You would be right!

There are some wonderful connections, though. For example, the first non-trivial finite-type invariant of knots is called “the type-two invariant”. It has many interpretations, my favourite being a signed count of the number of families of “satanic circles” intersecting the knot: these are the round circles that intersect the knot in 5 points making a pentagram. See Daniel’s write-up, linked, for details. This interpretation also “has legs”. The type-2 invariant of knots, from the perspective of Vassiliev, is a cohomology class defined in H^{2n-6}(Emb(S^1,S^n)) for all n>2. So it is an isotopy invariant in dimension 3, but it is also a non-trivial cohomology class in all higher dimensions as well, having a fundamental interpretation. Here 2n-6 is the dimension of the first non-trivial homotopy class in Emb(S^1,S^n) that does not come from the homotopy of the free loop space on S^n. Moreover, the type-2 invariant faithfully detects this homotopy/homology class. Just as in dimension 3, it is a signed count of the number of “satanic circles” on the knot. This result appears rather tersely, here. If you want to work out the proof you’ll have to understand the relation with the long-knot space, outlined in the linked paper, first.

I’m starting to hope questions such as “do Vassiliev invariants distinguish knots” are perhaps answerable in the near future. There are a variety of ways to attack this problem but I’m increasingly drawn to a relatively formal perspective. I don’t want to bore you with too much operads verbiage, but let me tell you about the geometric-topology input to this perspective. The homotopy-type of the space of smooth embeddings Emb(S^1, S^3) has a rather beautiful description in the language of operads (operads are something like topological monoids, and are a general language for operations on spaces). The most immediate analogy I can think of would be to consider the subgroups of braid groups that preserve a system of closed curves in the punctured disc. They clearly have semi-direct product descriptions. The space of knots is comparable to that, with the key ingredient being an operad that codifies satellite operations. I call it the Splicing Operad, in reference to Larry Siebenmann’s work on JSJ-decompositions of homology 3-spheres. A key theorem that allows one to compute the homotopy-type of the splicing operad (and Emb(S^1,S^3)) is:

Given an (n+1)-component hyperbolic link L in S^3, with the components denoted (L_0,L_1,…,L_n)=L, if we know the n-component sublink (L_1,…,L_n) is the trivial link, then one can isotope L into a position in S^3 so that each of the components L_1 through L_n are round circles, and if we let G be the group of orientation-preserving isometries of S^3 that restrict to homeomorphisms of L, and which restrict to homeomorphisms also of L_0, then we can ensure that the restriction of G to S^3 \setminus L is the full group of orientation-preserving hyperbolic isometries on the exterior which preserve the L_0 cusp (and which extend to continuous functions on S^3).

So this theorem is something of a partial converse to the stated theorem at the beginning of this article. While one can’t put the Borromean rings into a position where all 3 components are round circles, one can equivariantly put the Borromean rings into a position where two of the three components are round.


K. Okazaki, The state sum invariant of 3-manifolds constructed from the $E_6$ linear skein.

Algebraic & Geometric Topology 13 (2013) 3469–3536.

It’s a wonderful piece of diagrammatic algebra, and I’d like to tell you a bit about it!

The two main constructions of 3-dimensional topological quantum field theories are:

**Reshetikhin-Turaev invariants**: These are computed from surgery presentations of 3-manifolds.

**Turaev-Viro invariants**: These are based on triangulations of 3-manifolds.

Turaev-Viro invariants are defined using $6j$-symbols coming from representations of quantum groups. When everything is `nice’ enough, the Turaev-Viro invariant equals the square of the absolute value of a corresponding Reshetikhin-Turaev invariant, and its computation reduces to a Reshetikhin-Turaev computation. But there’s a natural extension of Turaev-Viro invariants due to Ocneanu which uses other types of $6j$-symbols, such as $6j$-symbols of subfactors. In particular, the $6j$-symbol of the $E_6$ subfactor does not come from any Reshetikhin-Turaev invariant, and so it must be computed directly. Quantum closed 3-manifold invariants associated to $6j$-symbols of the $E_6$ subfactor are true state-sum invariant land!!

The study of subfactors, and also of knots, challenges the classical paradigm of algebra as the science of manipulating strings of symbols. Namely, relevant algebras are algebras of diagrams drawn on the plane. To veer off on a philosophical tangent for a moment:

Before `algebra of strings’, if you wanted to solve something like x^2 = 81(10 - x), you had to write something monstrous like:

If some one say: “You divide ten into two parts: multiply the one by itself; it will be equal to the other taken eighty-one times.” Computation: You say, ten less thing, multiplied by itself, is a hundred plus a square less twenty things, and this is equal to eighty-one things. Separate the twenty things from a hundred and a square, and add them to eighty-one. It will then be a hundred plus a square, which is equal to a hundred and one roots. Halve the roots; the moiety is fifty and a half. Multiply this by itself, it is two thousand five hundred and fifty and a quarter. Subtract from this one hundred; the remainder is two thousand four hundred and fifty and a quarter. Extract the root from this; it is forty-nine and a half. Subtract this from the moiety of the roots, which is fifty and a half. There remains one, and this is one of the two parts.

This is from Al-Khwarizmi’s *Compendious Book on Calculation by Completion and Balancing*. Without `algebra of strings’ itself, you couldn’t even do that. Conceptual advances which make algebra effective include appropriate notation (credit to Al-Qalasadi in the fifteenth century), thinking in terms of algebraic structures, and completing them. For example, to `balance’ terms from one side of an equation to another, you need to have zero and negative numbers (so that having five apples and giving you two is the same as having minus two apples and receiving five), and you need to have fractions… even if the final answer is known to be a positive integer and if only positive integers make sense in context! As an aside, I think that concepts such as negative probability and negative information can be understood analogously.
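For fun: Al-Khwarizmi’s rhetorical recipe really is an algorithm- `halving the roots’ is exactly the quadratic formula for x^2 + 100 = 101x. Here it is transcribed step for step (the function name and parameters are mine):

```python
import math

def al_khwarizmi_roots(squares=1, roots=101, number=100):
    """Solve x^2 + number = roots * x, following the quoted recipe."""
    moiety = roots / 2                   # "Halve the roots; the moiety is fifty and a half"
    squared = moiety ** 2                # "Multiply this by itself": 2550.25
    remainder = squared - number         # "Subtract from this one hundred": 2450.25
    root = math.sqrt(remainder)          # "Extract the root from this": 49.5
    return moiety - root, moiety + root  # "Subtract this from the moiety": 1

x, y = al_khwarizmi_roots()
# x = 1 is "one of the two parts" of ten; the other part is 9,
# and indeed 9 * 9 = 81 = 81 * 1, as the problem demands.
```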

But then came the idea, whose origins are discussed in this mathoverflow question and which was popularized in topology by Kauffman HERE, that one should really be able to concatenate algebraic symbols not only on the left and right, but also from above and below and indeed from any direction. That algebra should be done not “along a line”, but rather in the whole plane. For “higher algebra” you might need even more dimensions! And diagrammatic algebra was born.

So how can you use diagrammatic algebra to compute an invariant? You compute a diagrammatic quantity for a presentation of your object. Local moves on your presentations, such as Pachner moves on triangulations, induce local moves on your diagrams. Your goal is now to prove that, using the local moves, you can reduce your diagram to some sort of “normal form”. And then that “normal form” is your invariant! This plan fits into the Kuperberg programme for understanding state-sum invariants, which is:

- Find a presentation for your skein module (your diagrammatic algebra of diagrams modulo your moves) in terms of generators and relations.
- Use this presentation to prove properties of your invariant (and to compute it!).
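To make the `reduce to a normal form’ idea concrete, here is a toy skein evaluation- entirely my own toy, not Okazaki’s planar algebra. A crossingless diagram of circles is encoded as a balanced-parenthesis string, and the only local move deletes an innermost circle at the cost of a scalar factor delta; every diagram reduces to a scalar multiple of the empty diagram, and that scalar is the `invariant’:

```python
def evaluate(diagram, delta=2.0):
    """Toy skein evaluation: each matched pair of parentheses is a
    closed circle.  The local move deletes an innermost circle '()'
    and multiplies the running scalar by delta; the normal form is
    the empty diagram."""
    scalar = 1.0
    while "()" in diagram:
        diagram = diagram.replace("()", "", 1)  # one local move
        scalar *= delta
    assert diagram == "", "input was not a balanced-parenthesis diagram"
    return scalar

# Two nested circles next to a third circle: three circles in total.
print(evaluate("(())()"))  # 8.0 when delta = 2
```

The point of the toy: the order of the local moves doesn’t matter, so the final scalar is well defined- which is exactly the kind of statement the Kuperberg programme asks you to prove for real skein modules.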

Bigelow had already found a presentation for the relevant planar algebra here:

Bigelow, S., Skein theory for the ADE planar algebras.

Journal of Pure and Applied Algebra214(5) (2010), 658-666.

Okazaki modifies Bigelow’s presentation, and using his modified presentation, he shows that the planar algebra in question is 1-dimensional, so that any diagram reduces to a scalar multiple of the empty diagram (**update**: Okazaki just posted a simplified version of this proof HERE). This means that the state sum invariant (Turaev-Viro-Ocneanu invariant) can be computed recursively by writing down the diagram associated to 6j-symbols of the subfactor for the triangulated closed 3-manifold in question, and recursively applying local moves until an empty diagram is obtained.

Given that the linear skein is a non-trivial diagrammatic algebraic object, Okazaki’s paper might represent the most archetypal piece of diagrammatic algebra I’ve ever seen. It’s 57 pages full of computations, some of which look a bit like this:

At the end of the paper, he computes the invariant for some lens spaces, and he’s done many more computations since. But anyway, it’s all just a beautiful testament to the power of diagrammatic algebra- a celebration of diagrammatic algebra. I believe that diagrammatic algebra will continue to expand and will soon enter all of the sciences… What would Peirce, who envisioned a diagrammatic algebra in the 1880s as his “chef d’oeuvre”, an outline of the mathematics of the future (see HERE), have made of all the wonderful work on skein modules that we see today? What would he have made of this paper of Okazaki?

A casual question to all of you- what’s the most aesthetically pleasing diagrammatic algebraic computation you know?


Information theory is a big, amorphous, multidisciplinary field which brings together mathematics, engineering, and computer science. It studies *information*, which typically manifests itself mathematically via various flavours of entropy. Another side of information theory is algorithmic information theory, which centers around notions of complexity. The mathematics of information theory tends to be analytic. Differential geometry plays a major role. Fisher information treats information as a geometric quantity, studying it by studying the curvature of a statistical manifold. The subfield of information theory centred around this worldview is known as information geometry.

But Avishy Carmi and I believe that information geometry is fundamentally topological. Geometrization shows us that the essential geometry of a closed 3-manifold is captured by its topology; analogously we believe that fundamental aspects of information geometry ought to be captured topologically. Not by the topology of the statistical manifold, perhaps, but rather by the topology of *tangle machines*, which is quite similar to the topology of tangles or of virtual tangles.

We have recently uploaded two preprints to ArXiv in which we define tangle machines and some of their topological invariants:

Tangle machines I: Concept

Tangle machines II: Invariants

I’ve posted about an earlier phase of this work HERE and HERE.

Our terminology is inspired by classical computer science- the term “tangle machine” imitates “Turing machine”, our connected components are “processes”, our strands are “registers”, and our crossings are “interactions”. Tangle machines are envisioned as a diagrammatic calculus for information theory, in a big amorphous multidisciplinary sense, which captures an underlying topological nature of information manipulation and transfer.

In what sense is information topological?

Information manipulation should fundamentally be *causal*, by which I mean that one unit of information x causes another unit of information y to change (we’ll call its updated state y ▷ x). By how much? That depends on your method of measurement. In what direction? That depends on your chosen (perhaps arbitrary) system of coordinates. But the plain fact of causation, that x causes y to change to y ▷ x, doesn’t depend on any of that. I’d like to draw such an *interaction* as a crossing:

**Note:** Statistics gives us the tools to detect such causal interactions inside real-world data, in which one piece of information triggers a transition between two pieces of information. This means we can actually detect tangle machines inside *e.g.* Google Trends search data! As a single-interaction example, given graphs of numbers of searches and nothing else, we can detect with statistical significance that the iPhone 5 caused Samsung to update from the S2 to the S3. Some graphics for another detection example are given below.

Our information is in the form of colours on strands (*i.e.* in registers). For example, each strand might be coloured by a real number representing the entropy of a `typical sequence’ of zeros and ones. I’m imagining `information’ sitting as colours on each of the strands, with each crossing representing an information fusion or its converse.

Note also that information plays a dual role *e.g.* in the classical paradigms of computing, such as a universal Turing machine. On the one hand, information is something that is manipulated by a computer, such as the input or the output of a computation. Such information is called a *patient*. On the other hand, the computer programme that does the manipulation is itself information. Information in this capacity is called an *agent*. A computer programme can modify another computer programme, so that an agent in one context may be a patient in another context. A labeled digraph (the classical diagrammatic language for such things) does not capture this dual nature of information. But a strand in a tangle diagram may be an overstrand mediating between an *input understrand* and an *output understrand* in one crossing, and it may be an understrand itself in another crossing.

We claim that interactions, *i.e.* crossings, satisfy the Reidemeister relations, and that these represent fundamental properties of information fusion. Indeed, Reidemeister 1 tells us that information cannot generate new information from itself and so, for example, that the entropy of a closed system cannot drop, and in fact can’t increase either, without outside intervention. This seems to contradict the second law of thermodynamics, but thanks to *e.g.* the Poincaré recurrence theorem I think it’s actually fine.

Reidemeister 2 is what information theorists call `causal invertibility’, telling us that we can recover the input y from the output y ▷ x and the agent x, and that updating by x and then discounting by x - adding information and then taking it away- brings us back to where we started as though we had done nothing.

And Reidemeister 3, which comes from distributivity, tells us that if we find a common cause z for an interaction in which x causes y to change to y ▷ x, then that doesn’t change our causal relationships: x ▷ z still causes y ▷ z to change to (y ▷ x) ▷ z. In information theory, this is equivalent to *no double counting*. If we update y ▷ z by x ▷ z then we obtain (y ▷ z) ▷ (x ▷ z) = (y ▷ x) ▷ z, so z is counted towards the result `just once’.
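These three properties are exactly the quandle axioms, and for a concrete colour set they can be machine-checked. A sketch using the dihedral quandle on Z/n, the structure behind Fox n-colourings (the operation x ▷ y = 2y − x mod n is standard; the notation `op(x, y)` for “x updated by y” is mine):

```python
from itertools import product

def make_dihedral(n):
    """Dihedral quandle on Z/n: x updated by y is 2y - x (mod n)."""
    return lambda x, y: (2 * y - x) % n

def is_quandle(op, elements):
    """Check the three axioms matching Reidemeister 1, 2, 3."""
    # R1: information cannot generate new information from itself.
    r1 = all(op(x, x) == x for x in elements)
    # R2 (causal invertibility): updating by y is a bijection, so the
    # input can be recovered from the output and the agent.
    r2 = all(len({op(x, y) for x in elements}) == len(elements)
             for y in elements)
    # R3 (no double counting): self-distributivity.
    r3 = all(op(op(x, y), z) == op(op(x, z), op(y, z))
             for x, y, z in product(elements, repeat=3))
    return r1 and r2 and r3

print(is_quandle(make_dihedral(3), range(3)))  # True
```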

One fun spinoff of this approach is that several classical information theoretical algorithms, such as Kalman filtering and covariance intersection, can be developed using Reidemeister move invariance for suitable choices of what we mean precisely when we say `information’. We can imagine a little Kalman filter fusing and discounting estimators, or a little covariance intersection, sitting at each crossing.
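For concreteness, here is a minimal sketch of scalar covariance intersection, one of the fusion rules just mentioned (the function and the fixed weight w = 1/2 are my simplifications- in practice w is optimized):

```python
def covariance_intersection(m1, P1, m2, P2, w=0.5):
    """Fuse two scalar estimates (mean, variance) by covariance
    intersection: an inverse-variance (information) combination with
    weight w, which remains consistent even when the two estimates
    are correlated in an unknown way."""
    info = w / P1 + (1 - w) / P2            # fused information
    P = 1.0 / info                          # fused variance
    m = P * (w * m1 / P1 + (1 - w) * m2 / P2)
    return m, P

m, P = covariance_intersection(0.0, 4.0, 2.0, 4.0)
print(m, P)  # 1.0 4.0: the midpoint, with no overconfident variance shrink
```

One can picture exactly this computation sitting at a crossing, with the overstrand supplying one of the two estimates.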

So what does this all give us? We now have a coordinate free `topological’ language with which to discuss fusion and discounting of information. Moreover, we can describe the same network in many different ways, which differ by finite sequences of Reidemeister moves. Different equivalent tangle machines may have different local performance- to fuse and then to discount may consume time and resources, although topologically we’ve done nothing. So tangle machines become a formalism for choosing between different ways to realize `the same’ network of information manipulation.

These ideas become quite concrete in the quantum physical context of adiabatic quantum computation (this is our Section 5.2). Here, the colours represent Hamiltonians, and the interaction updates a Hamiltonian H′ by a Hamiltonian H to the linear combination (1−s)H′ + sH, where s ∈ [0,1]. As we move s from 0 to 1 (this is the computation), the updated colour evolves from H′ to H. Concatenate many such interactions, and the machine describes a controlled evolution (*quantum annealing*) of an initial groundstate of a Hamiltonian towards a final state, where we are forcing the Hamiltonian to pass through each state described by a groundstate of a Hamiltonian which overcrosses its strand. The effect may be to speed up adiabatic quantum computations! Farhi et al. have a recent preprint in which they discuss such speedups by `inserting intermediate Hamiltonians’, and it seems to be known that you can speed up canonical quantum algorithms such as the Grover algorithm in this way. Essentially, the idea is that the speed of the computation is inversely proportional to the minimum distance between the lowest two eigenvalues of the Hamiltonian along the evolution path. The `straight line annealing’ of `classical adiabatic quantum computing’ is like walking in a straight line between two points on hilly ground- you might have to climb over hills and down gullies, and, despite being a straight line, it may be a strenuous path. I wouldn’t necessarily want to hike between two peaks of a high mountain range by going in a straight line! Tangle machines give a diagrammatic language to describe the process of choosing the traverse.
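The gap heuristic is easy to see in a toy model (the 2×2 Hamiltonians below are my own choice for illustration, not from our paper): along the straight-line path H(s) = (1−s)H0 + sH1, the two lowest eigenvalues pinch together at an avoided crossing, and the minimum gap- which controls the annealing speed- is set by the off-diagonal coupling.

```python
import math

def gap_2x2(a, b, c):
    """Spectral gap of the symmetric matrix [[a, b], [b, c]]:
    eigenvalues are (a+c)/2 +/- sqrt(((a-c)/2)**2 + b**2)."""
    return 2.0 * math.sqrt(((a - c) / 2.0) ** 2 + b ** 2)

def min_gap(g, steps=1000):
    """Minimum gap along the straight-line path
    H(s) = (1-s)*diag(0, 1) + s*([[1, g], [g, 0]]) = [[s, s*g], [s*g, 1-s]]."""
    return min(gap_2x2(k / steps, (k / steps) * g, 1.0 - k / steps)
               for k in range(steps + 1))

print(min_gap(0.0))  # 0.0: without coupling the levels cross and annealing stalls
print(min_gap(0.1))  # ~0.0995: a small coupling opens a gap of roughly g at s ~ 1/2
```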

From another perspective, information manipulation might be just another word for *computation*. The word `computation’ is a charged word, which, like `information’, doesn’t have clear mathematical meaning. One way to ascribe the word `computation’ mathematical meaning would be to define computation as the operation of a Turing machine. Can a tangle machine simulate a Turing machine?

Let’s define the computation of a tangle machine to be `input colours into a set of registers A, and read off colours from another set of registers B’. Assume that the colours of A uniquely determine the colours of B.

This notion of computation, unlike many others, makes no mention of the notion of `time’.

Let’s choose our set of colours to be 0, 1, and 2, represented as red, green, and blue correspondingly. A colour acts trivially on itself, but switches the two colours other than itself- this is a Fox 3-colouring.

We first simulate a NOT gate, exchanging 0 and 1. The arrow on the left is the input, and on the right is the output. The strand without the arrow is fixed at 2, and is neither input nor output, but is merely part of the gate.

We next simulate a multiplexer, which copies a register labeled 0 or 1. This gate has one input and two outputs.

To simulate an AND gate, we need one more piece (and its inverse). This is a trivalent vertex which accepts two colours as input, and outputs their minimum. This sends (0,0), (0,1), and (1,0) to zero, and it sends (1,1) to one.

And that’s it! With an AND gate, a NOT gate, and a multiplexer, we can compute any recursively computable function, and tangle machines (in this new extended sense, which I haven’t properly defined) are Turing complete.
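A sketch of these gates in code. The colour action (2·agent − input) mod 3 is the standard dihedral implementation of `fix yourself, swap the other two’; composing OR out of the pieces via De Morgan is my own addition for illustration:

```python
def cross(input_colour, agent):
    """One crossing: the overstrand colour acts on the understrand.
    (2*agent - input) mod 3 fixes the agent's own colour and swaps
    the other two, as a Fox 3-colouring demands."""
    return (2 * agent - input_colour) % 3

def NOT(x):
    """The strand fixed at colour 2 swaps 0 and 1."""
    return cross(x, 2)

def AND(x, y):
    """The trivalent vertex: output the minimum of the two inputs."""
    return min(x, y)

def OR(x, y):
    """De Morgan, using AND and NOT; the multiplexer supplies the
    extra copies of x and y that this expression consumes."""
    return NOT(AND(NOT(x), NOT(y)))

print([AND(x, y) for x in (0, 1) for y in (0, 1)])  # [0, 0, 0, 1]
print([OR(x, y) for x in (0, 1) for y in (0, 1)])   # [0, 1, 1, 1]
```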

I haven’t explicitly written down the Reidemeister moves that such a machine satisfies in this blog post.

Tangle machines can further simulate a Turing machine, tape and all, and can further simulate neural networks. If we allow the overcrossing colour to also be updated, so our colour set is not a quandle but is instead a *biquandle*, we can also simulate machine learning.

Besides our approach, there are other quite different approaches which relate low dimensional topology with the theory of computation (although I don’t know other works relating low dimensional topology with information theory). One approach is to view the tangle itself as a unit of data, and to compute by applying rewrite rules. This approach originates with Kauffman, who is, I think, the father of applying low dimensional topology to computing. It is tremendously exciting, with possible applications in internet architecture and in biology. A recent preprint by Kauffman and Buliga outlines one such idea. Marius Buliga has a very nice research blog in which he explains his research programme. There is also another approach of Kauffman in which colours evolve along a braid. I think that this concept of computation is similar in spirit to ours (with knots instead of with tangle machines), in that his crossings are also performing computations. Meredith and Snyder have an approach in which they encode knot diagrams using Milner’s π calculus, with a view to using process calculi formalisms to find and compute knot invariants. In this approach also, crossings play the role of switches. Then there are the category theory approaches, outlined nicely in Baez and Stay’s survey.

Thanks to Lou Kauffman and to Marius Buliga, for useful feedback regarding tangle machines and computation.

Just as today there is information geometry, Avishy and I strongly believe that there will one day be information topology: the low dimensional topology of information.


By the way, if you happen to know of any other good geometry/topology blogs that aren’t in our blog roll (on the right side of the page), please feel free to include the link in a comment so I can add it.


There will be some travel funding available for graduate students and early career mathematicians. Before the conference, there will be graduate student workshops, led by Jessica Purcell, who has been doing a lot of very cool work on WYSIWYG geometry/topology, and Alex Zupan, who has been proving a lot of nice results about thin position and bridge surfaces. The graduate student workshop is August 5-7, and the conference is August 8-10. I’m looking forward to it and hope to see you there.


Recall that a train track *T* in a surface *S* is a subsurface of *S* endowed with a certain type of singular foliation by intervals. We say that a loop *l* is *carried* by *T* if it is contained in the subsurface and transverse to the intervals in the foliation. Away from the singularities, the intervals in the foliation make up parallel strips, and the singularities define junctions where they come together. If we follow a loop around the train track, the transverse condition implies that each time we enter one of the parallel strips, we have no choice but to follow it to the junction at the other end. However, when the loop enters a junction, it will often have a “choice” of whether to take the branch to the left or right. So as we follow the loop around, depending on the “choices” the loop makes of how to turn, it may end up only crossing some of the parallel bands, and missing others. We say that a carried loop *covers* the train track *T* if the loop intersects every fiber, or equivalently if it crosses every band of parallel fibers.

We now have three types of loops in the surface *S* defined by the train track *T*: There are the loops that cover *T*, the loops that are carried by *T*, but don’t cover it, and the loops that aren’t carried at all. This is where an important observation comes in: It turns out that for a train track *T* whose complement in *S* is a collection of triangles, if a loop *l* covers *T* and a second loop *m* is disjoint from *l*, then *m* must be carried by (though not necessarily cover) *T*. (I don’t know who first noticed this. It’s in Masur and Minsky’s [1] work, but may go back much earlier.)

To see why this is true, note that the loop *m* cannot cut across *T* parallel to the interval fibers because then it would have to cross *l*. Moreover, any arcs of *m* outside of *T* will be contained in the triangular complementary regions and can be pushed into *T* in a canonical way. If you look carefully at what *m* can do inside of *T* without intersecting *l*, you’ll quickly conclude that it’s possible to isotope *m* so that it’s transverse to all the fibers and thus carried by *T*.

What this means in terms of the three classes of loops is that no edge in the curve complex connects a loop that isn’t carried by *T* to a loop that covers *T*. In particular, any path from a covering loop to an uncarried loop has to pass through a loop that is carried but doesn’t cover. So, in other words, the set of non-covering carried loops forms a buffer between the covering loops and the uncarried loops.

This is the buffer that I mentioned at the beginning of the post. But now the question is, how can we place these buffers next to each other to make wider ones? The key to this is to construct a second train track *U* such that *T* is “carried” by *U*. By this I mean that the subsurface defined by *T* is contained in the subsurface defined by *U* and every interval fiber in *T* is contained in an interval fiber in *U*. Note that it follows immediately from definitions that every loop carried by *T* will be carried by *U*.

We’ll next add an extra condition that’s slightly more subtle: We want every loop that is carried by *T* to cover *U*. Note the difference there: We’re asking for a stronger condition on *U*. At first, this may seem like too much to ask for, since there will be infinitely many loops carried by *T* and we don’t want to have to check each one in *U*. But in fact, we usually only need to check a finite number of things. In particular, we can often arrange things so that every band of parallel fibers in *T* covers *U*, i.e. intersects every interval fiber in *U*. Because any loop carried by *T* must follow at least one of the parallel bands of fibers in *T*, this guarantees that any loop carried by *T* will cover *U*.

In some cases, we may not be able to guarantee that every band in *T* covers *U*, but we may still be able to find a subset of the bands in *T* such that every carried loop must cross one of these bands, and each of these bands covers *U*.

If we can find a train track *U* with this condition, then we can compare any loop *l* that covers *T* to a loop *n* that is not carried by *U*. Since *n* is also not carried by *T*, any path from *l* to *n* must pass through a loop *k* that is carried by *T*. The above condition implies that *k* covers *U*, so the path must also contain another loop *m* that is carried by *U*, but does not cover *U*. Thus any path from *l* to *n* must pass through at least two other loops, making its length at least three.

We can repeat the process again by constructing a train track *V* that carries *U* and has the same filling property. Any loop that is not carried by *V* will be distance at least four from any loop that covers *T*. As we build more train tracks in this way, we can find loops that are farther and farther apart. (This is one way to show that the complex of curves has infinite diameter.)

In my paper with Yoav Moriah, we construct a sequence of such train tracks in the bridge surface of a certain type of knot, with the structure of the train tracks determined by a certain type of diagram of the knot. We’re then able to show that every bridge disk below the bridge surface covers one of the train tracks early in the sequence, while no disk above the bridge surface is carried by the last train track in the series. By the above argument, this gives us a lower bound on the distance between the two disk sets. (In practice, we use a slightly different version of the carried condition, which allows the complement of the train track to be any polygon, not just a triangle.) An explicit construction shows an upper bound for the distance. As it turns out, these two bounds are the same, so we’re able to calculate the exact distance for this class of knot diagrams.


We can form a train track on a torus by taking two essential loops in the torus that intersect once, then smoothing the intersection, as in the Figure below. (I’m drawing the torus as a square with opposite sides identified.) There are two possible ways to smooth the intersection, and for now we’ll just arbitrarily pick one. (Later on, we’ll come back to look at the difference between the two smoothings.) The resulting graph isn’t a train track, but we can turn it into a train track by taking a regular neighborhood of it, then giving the neighborhood a foliation by intervals perpendicular to the original graph. The original graph (shown in the middle of the Figure) is called a *train track diagram*.

The question I want to explore in this post is: What loops in the torus are carried by this train track? The answer will be in terms of the slopes of the carried loops. Recall that the universal cover of the torus is the plane. In every isotopy class of essential loops, there is a representative that lifts to a straight line in the universal cover. In fact, there’s an infinite family of such loops that lift to different lines in the plane, but all these lines have the same slope. This slope is what we call the *slope* of the (isotopy class of the) loop in the torus. In the Figure above, the blue loop has slope 0 and the red loop has slope 1/0 or ∞. Note that both of these loops are carried by the train track. (Or, more precisely, they’re isotopic to loops that are carried by the train track.)

In general, we can calculate the absolute value of the slope of a loop by dividing the number of times it intersects the horizontal boundary of the square by the number of times it intersects the vertical boundary. (You can check that this formula holds for the red and blue loops.) For any slope other than 0 and ∞, we can figure out the sign as follows: If an arc has one endpoint on the left side of the square and the other endpoint on the top then the loop has positive slope. If an arc has one endpoint on the left and its other endpoint on the bottom then the loop has negative slope. (It’s not too hard to check that a loop that intersects the sides of the square minimally can’t have both types of arcs.)
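The recipe above fits in a few lines of code (the function and the `left_to_top` encoding of the sign rule are mine):

```python
from fractions import Fraction

def slope(h, v, left_to_top=True):
    """Slope of a loop on the torus from its intersection pattern:
    |slope| = (# crossings of the horizontal boundary) /
              (# crossings of the vertical boundary),
    positive if its arcs run from the left side to the top,
    negative if they run from the left side to the bottom.
    v == 0 encodes the vertical loop, of slope 1/0 = 'infinity'."""
    if v == 0:
        return "infinity"
    s = Fraction(h, v)
    return s if left_to_top else -s

print(slope(0, 1))         # 0: the horizontal (blue) loop
print(slope(1, 0))         # infinity: the vertical (red) loop
print(slope(3, 2, False))  # -3/2
```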

There are many other loops in the torus, in addition to the red and blue loops above, that are carried by this particular train track. Examples with other slopes are shown in red in the Figure below.

All these loops have positive slopes, and in fact, you can see that no arc from the left side of the square to the bottom of the square can be carried by this train track. So this means that this train track can only carry positive slopes.

On the other hand, we can put in as many copies of either the vertical or the horizontal arc as we want. We can also put in as many arcs as we want from the left side to the top side, and the same number from the bottom to the right side. By choosing the number of such arcs carefully, we can get the intersections between the resulting loops and the sides of the square to be whatever we want. (If the number of intersections with the top is greater than the number with the bottom, we’ll only use vertical arcs. Otherwise, we’ll only use horizontal arcs.) So, every loop with positive slope will be carried by this train track.

To make this clear, let me summarize what we’ve learned: The train track that we constructed carries all the loops with positive slopes, as well as the loops with slopes 0 and ∞. Going back to the beginning of the post, note that if we had chosen to smooth the intersection between the original two loops in the opposite way, the resulting train track would have carried all the negative slope loops, as well as 0 and ∞. So, we can think of a train track as a way to separate the loops in a surface into two different classes: the loops that are carried and the loops that aren’t.

One way that this gets really interesting is when we consider what these two classes look like in the curve complex for the surface. This approach is one of the main tools used in Masur and Minsky’s work on the curve complex [1], particularly their proof that curve complexes of surfaces are Gromov δ-hyperbolic.

Recall that the curve complex for a surface *S* is the simplicial complex whose vertices represent isotopy classes of essential, simple closed curves in *S* and whose faces span sets of isotopy classes with pairwise-disjoint representatives. The curve complex for a torus is pretty boring: Any two disjoint essential loops in a torus are parallel (and thus isotopic) to each other, so there are no edges in this curve complex- it’s just an infinite collection of discrete vertices.

So instead, one generally works with the Farey graph for the torus. Much like the curve complex, the vertices of the Farey graph represent isotopy classes of essential loops in the torus. In particular, each vertex represents a rational number (a slope) including ∞, and in fact we can arrange the vertices in order by slope along a circle. Since there are no pairs of disjoint loops, we connect any two vertices representing loops that intersect in a single point.

Similarly, we include in the Farey graph all the triangles bounded by loops of three edges. I’ll leave it as an exercise for the reader to check that for every pair of loops in the torus that intersect in exactly one point, there are exactly two other loops such that each of these loops intersects each of the original two loops in a single point. (The two new loops will intersect each other in two points.) So, in other words, each edge in the Farey graph is in the boundary of exactly two triangles. This tells us that the triangles form a surface. In fact, the surface that they form is the disk bounded by the circle along which we placed the vertices in the previous paragraph.

Six of these triangles are shown in the figure on the right, with the slopes corresponding to their vertices indicated as fractions. For each edge in the Farey graph, we can calculate the third vertex representing one of the adjacent triangles as follows: The numerator of the new slope is the sum of the numerators of the original two, and the denominator is the sum of their denominators. Similarly, to get the vertex defining the other triangle, we subtract the numerators and denominators. (To see why this works, you can think about the normal loops and Haken sums that I mentioned in another post from a while back.)
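Representing a slope as a pair (p, q) with ∞ = (1, 0), both the one-point-intersection condition and the triangle rule above become one-liners, and the “exactly two triangles” claim can be spot-checked by brute force. (A sketch; the determinant condition |ps − qr| = 1 for two slopes on the torus to intersect once is standard, and the encoding is mine.)

```python
from math import gcd

def meet_once(a, b):
    """Loops of slopes p/q and r/s meet in one point iff |ps - qr| = 1."""
    (p, q), (r, s) = a, b
    return abs(p * s - q * r) == 1

def triangle_vertices(a, b):
    """The two slopes completing a Farey edge into its two triangles:
    add, and subtract, numerators and denominators."""
    (p, q), (r, s) = a, b
    return (p + r, q + s), (p - r, q - s)

# The edge between 0 = 0/1 and infinity = 1/0 bounds the triangles
# with third vertex of slope 1 and slope -1:
print(triangle_vertices((0, 1), (1, 0)))  # ((1, 1), (-1, 1))

# Brute-force check: among all reduced slopes with |p|, q <= 20,
# exactly two complete the edge 1/1 -- 1/2 into a triangle, namely
# the mediant 2/3 and the difference 0/1.
candidates = [(p, q) for p in range(-20, 21) for q in range(0, 21)
              if gcd(abs(p), q) == 1 and (q > 0 or p == 1)]
third = [c for c in candidates
         if meet_once(c, (1, 1)) and meet_once(c, (1, 2))]
print(sorted(third))  # [(0, 1), (2, 3)]
```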

Notice that the triangles in this picture are different sizes, and in fact they get smaller as the numerators and denominators get bigger. But in reality, the edges of the Farey graph should all be the same length. So, you should think about this circle like the boundary of the hyperbolic plane, and the triangles as being ideal triangles. This isn’t exactly right either, since the edges in the Farey graph have finite length, unlike the edges of ideal triangles. But the Farey graph will have the same symmetry group as a tessellation of the hyperbolic plane by ideal triangles.

The Farey graph is closer in structure to a tree. In fact, we can construct a tree by putting a vertex at the center of each triangle and connecting two vertices whenever the corresponding triangles share an edge. The Farey graph will be quasi-isometric to this tree (though if you don’t know what quasi-isometric means, don’t worry about it.) In the same way that each edge in a tree cuts the tree into two separate trees, each edge in the Farey graph cuts the Farey graph (which is really a cell complex) into two disconnected sets of triangles.
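Under the same (p, q) encoding of slopes, a sketch of distances in the Farey graph by breadth-first search. (I truncate to slopes p/q with q ≤ n and |p| ≤ nq, which can only overestimate true Farey distances, but the truncated and true distances agree on small examples like these.)

```python
from collections import deque
from math import gcd

def farey_vertices(n):
    """Reduced slopes p/q with 1 <= q <= n and |p| <= n*q, plus 1/0."""
    verts = [(1, 0)]  # infinity
    verts += [(p, q) for q in range(1, n + 1)
              for p in range(-n * q, n * q + 1) if gcd(abs(p), q) == 1]
    return verts

def farey_distance(a, b, n=6):
    """BFS distance between slopes a and b; edges join slopes whose
    loops intersect once, i.e. |ps - qr| = 1."""
    verts = farey_vertices(n)
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        v, d = queue.popleft()
        if v == b:
            return d
        (p, q) = v
        for w in verts:
            (r, s) = w
            if w not in seen and abs(p * s - q * r) == 1:
                seen.add(w)
                queue.append((w, d + 1))
    return None

print(farey_distance((0, 1), (1, 0)))  # 1: slopes 0 and infinity meet once
print(farey_distance((1, 3), (1, 0)))  # 2: e.g. via the slope 0 = 0/1
```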

Now, let’s go back to the train track from the beginning of this post. Recall that the set of loops carried by the train track consisted of all loops with positive slopes, as well as the loops with slopes 0 and ∞. These loops make up the right half-circle of the Farey graph. In particular, the subcomplex of the Farey graph spanned by the loops carried by this train track is exactly one of the two components that we get if we cut along the edge spanned by 0 and ∞.

Note that when we constructed this train track, we started with any two loops in the torus that intersect in one point, or equivalently, any edge in the Farey graph. We then had a choice of two different ways to smooth the vertex where they intersect into a pair of switches in the train track. If we had made the other choice with our original two loops, we would have gotten a train track that carried all negative slopes, i.e. the other component defined by the edge between 0 and ∞. By symmetry, if we had started with a different pair of loops, the two possible train tracks that we could construct from them would similarly define the two different components that we get by cutting the Farey graph along this new edge. (Note that one can also show that every “reasonable” train track in the torus can be constructed from two loops in this way.)

The point of all this is that the different train tracks on the torus can be thought of as defining all the different ways of cutting the Farey graph along single edges. Train tracks in higher genus surfaces play a very similar role, though it’s more complicated because the curve complexes of these surfaces are much less tree-like (though they’re still delta hyperbolic, which is close.) In particular, you can’t separate these complexes by removing a single edge, or indeed any finite collection of simplices. But train tracks still define subsets of loops that are very nice with respect to the curve complex structure.

The reason this turns out to be useful is that it is often possible to prove things about the types of loops that are carried by a given train track, which can then be translated into the language of the curve complex. This is one of the main techniques in Masur and Minsky’s papers on the curve complex [1], and on disk sets of handlebodies [2]. It also proved very useful in my work with Yoav Moriah [3] and his earlier work with Martin Lustig [4]. But a discussion along those lines will have to wait for a future post.


To the user, the only difference between Manifold and ManifoldHP is the extra precision; see here for details.

**Q:** How does this differ from the program Snap or the corresponding features of SnapPy?

**A:** Snap computes hyperbolic structures to whatever precision you specify, not just 212 bits. However, only some aspects of that structure can be accessed at the higher precision. In contrast, with ManifoldHP every part of the SnapPea kernel uses the high-precision structure. Eventually, we hope to add a ManifoldAP which allows for arbitrary precision throughout the kernel.

**Q:** Are there any negatives to using ManifoldHP over Manifold?

**A:** Yes, ManifoldHP is generally slower by a factor of 10 to 100. Multiplying two quad-double numbers requires at least 10 ordinary double multiplications, so some of this is inevitable.

**Q:** What is one place where the extra precision really helps?

**A:** Computing Dirichlet domains and subsidiary things like the length spectrum. ManifoldHP can find the Dirichlet domain of a typical 15-crossing knot exterior, but Manifold can’t.
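To give a flavour of what’s under the hood (this is a pure-Python sketch of the idea behind quad-double arithmetic, not SnapPy code): a “double-double” stores a value as an unevaluated sum of two doubles, using Knuth’s and Dekker’s error-free transforms; a quad-double is the same trick with four doubles, which is why one quad-double multiplication costs many ordinary ones.

```python
def two_sum(a, b):
    """Knuth's error-free addition: s + err == a + b exactly."""
    s = a + b
    bb = s - a
    err = (a - (s - bb)) + (b - bb)
    return s, err

def split(a):
    """Dekker's splitting of a double into two 26-bit halves."""
    t = 134217729.0 * a  # 2**27 + 1
    hi = t - (t - a)
    return hi, a - hi

def two_prod(a, b):
    """Dekker's error-free multiplication: p + err == a * b exactly."""
    p = a * b
    ah, al = split(a)
    bh, bl = split(b)
    err = ((ah * bh - p) + ah * bl + al * bh) + al * bl
    return p, err

s, e = two_sum(1.0, 2.0 ** -60)
# s == 1.0 and e == 2**-60: the tail a plain double would discard survives.

p, e = two_prod(1.0 + 2.0 ** -30, 1.0 + 2.0 ** -30)
# (1 + 2**-30)**2 = 1 + 2**-29 + 2**-60; p holds 1 + 2**-29 and e holds 2**-60.
```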
