For those of you who aren’t on regina-announce: Regina 4.96 came out last weekend.
There are several new features, such as:
- rigorous certification of hyperbolicity (using angle structures and linear programming);
- fast and automatic census lookup over much larger databases;
- much stronger simplification and recognition of fundamental groups;
- new constructions, operations and decompositions for triangulations;
- and more—see the Regina website for details.
You will find the first two features on the Recognition tab, the third on the Algebra tab, and the fourth in the Triangulation menu.
If you work with hyperbolic manifolds then you may be happy to know that Regina now integrates more closely with SnapPy / SnapPea. In particular, if you import a SnapPea triangulation then Regina will now preserve SnapPea-specific data such as fillings and peripheral curves, and you can use this data with Regina’s own functions (e.g., for computing boundary slopes for spun-normal surfaces) as well as with the in-built SnapPea kernel (e.g., to fill cusps or view tetrahedron shapes). Try File -> Open Example -> Introductory Examples, and take a look at the figure eight knot complement or the Whitehead link complement for examples.
Finally, a note for Debian and Ubuntu users: the repositories have moved, and you will need to set them up again as per the installation instructions (follow the relevant Install link from the GNU/Linux downloads table).
- Ben, on behalf of the developers.
A binary operation $\ast$ is associative if $a \ast (b \ast c) = (a \ast b) \ast c$ for all $a$, $b$, $c$. Examples of associative operations include addition, multiplication, connect-sum, disjoint union, and composition of maps.
A binary operation $\ast$ is distributive over another operation $\circ$ if $a \ast (b \circ c) = (a \ast b) \circ (a \ast c)$. If $a \ast (b \ast c) = (a \ast b) \ast (a \ast c)$ then the operation is said to be self-distributive. Examples of self-distributive operations include conjugation $a \ast b = aba^{-1}$, conditioning (assume X and Y are both Gaussian so that such a binary operation makes sense, essentially as covariance intersection), and linear combinations $a \ast b = \lambda a + (1 - \lambda)b$ with $\lambda$ fixed (say), and $a$, $b$ elements of a real vector space.
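As a quick sanity check (my own illustration, not from the survey papers below), here is a short Python sketch verifying left self-distributivity, $a \ast (b \ast c) = (a \ast b) \ast (a \ast c)$, for two of these operations: conjugation in the symmetric group $S_3$ and convex combinations of real numbers. The tuple encoding of permutations is an arbitrary choice.

```python
# Checking left self-distributivity, a*(b*c) == (a*b)*(a*c), numerically.
from itertools import product

# Permutations of {0,1,2} as tuples: p[i] is the image of i.
S3 = [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]

def compose(p, q):
    # (p . q)[i] = p[q[i]]  (apply q first, then p)
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0, 0, 0]
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def conj(a, b):
    # conjugation: a * b = a b a^{-1}
    return compose(compose(a, b), inverse(a))

# Conjugation is self-distributive for every triple in S_3:
assert all(conj(a, conj(b, c)) == conj(conj(a, b), conj(a, c))
           for a, b, c in product(S3, repeat=3))

# So are convex combinations a * b = lam*a + (1-lam)*b on the reals:
lam = 0.3
comb = lambda a, b: lam * a + (1 - lam) * b
a, b, c = 1.0, 2.0, 5.0
assert abs(comb(a, comb(b, c)) - comb(comb(a, b), comb(a, c))) < 1e-12
print("self-distributivity holds")
```

Note that neither operation is associative: conjugation and convex combination fail the associativity check that addition and composition pass, which is exactly the trade-off the surveys below explore.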
Two nice survey papers about self-distributivity are:
- J. Przytycki, Distributivity versus associativity in the homology theory of algebraic structures. arXiv:1109.4850.
- M. Elhamdadi, Distributivity in Quandles and Quasigroups. arXiv:1209.6518.
I won’t survey these papers today; instead I’ll relate some abstract philosophical musings on the topic of associativity vs. distributivity.
Algebraic topology detects information not only about associative structures like groups, but also about self-distributive structures like quandles. I wonder to what extent distributivity can stand in for associativity. Might our associative age give way to a distributive age? Will future science make essential use of distributive structures like quandles, racks, and their generalizations? At the moment, such structures appear prominently only in low dimensional topology. (more…)
I don’t know about you, but when I tell non-mathematicians what knot theory is, I often find myself telling a story about identifying a knotted protein by its knottedness: something about different proteins tending to be bendy to differing degrees, so that certain types of protein tend to form knots with higher writhe than others, and that this helps biologists and chemists to distinguish proteins which they would otherwise need a lot of time and money and an electron microscope to tell apart.
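The writhe in this story is a very concrete quantity: for an oriented knot diagram, it is just the sum of the signs ($\pm 1$) of the crossings. A minimal sketch (the crossing data below is an assumed toy encoding of the standard diagrams, not anything from the post):

```python
# Writhe of an oriented knot diagram: the sum of the signs (+1 or -1)
# of its crossings.
def writhe(crossing_signs):
    return sum(crossing_signs)

# Standard 3-crossing diagram of the right-handed trefoil: all crossings positive.
right_trefoil = [+1, +1, +1]
# Standard 4-crossing diagram of the figure eight knot: two of each sign.
figure_eight = [+1, +1, -1, -1]

print(writhe(right_trefoil))  # 3
print(writhe(figure_eight))   # 0
```

The figure eight example also shows why writhe alone is a crude fingerprint: it depends on the diagram, not just the knot, and amphichiral knots like the figure eight have standard diagrams with writhe zero.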
One major problem with this story, and with similar stories, is that the knot diagrams have to be photographed (and thus identified) by hand. The pictures are not always easy to interpret (e.g. distinguishing overcrossings from undercrossings):
Also, the resolution might be low, objects might be in the way…
This is a computer vision problem as opposed to a math problem, but wouldn’t it be nice if a computer could recognise a knot type from a suboptimal picture? If you could snap a picture of yourself standing in front of a knot making bunny ears behind it, and your computer would automatically tag it with the correct knot type? Furthermore, wouldn’t it be nice if a computer could recognise your knot on the basis of many noisy pictures, perhaps taken from different angles? (more…)
Over the past 10-12 years, geometric topology has entered a new era. Most of the foundational problems are solved, and only a fairly isolated collection of foundational problems remains. To my mind, the two most representative ones are the smooth 4-dimensional Poincaré hypothesis, and getting a better understanding of the homotopy type of the group of diffeomorphisms of the n-sphere (especially for n=4, but for large n as well). I want to talk about what I’d call second-order problems in low-dimensional topology: less foundational in nature and more oriented towards other goals, like relating low-dimensional topology to other areas of science. Specifically, this is an attempt to describe the “spaces of knots” subject in a way that might entice low-dimensional topologists to think about the subject.
Relaxing from my forays into information and computation, I’ve recently been glancing through my mathematical sibling Kenta Okazaki’s thesis, published as:
K. Okazaki, The state sum invariant of 3–manifolds constructed from the linear skein.
Algebraic & Geometric Topology 13 (2013) 3469–3536.
It’s a wonderful piece of diagrammatic algebra, and I’d like to tell you a bit about it! (more…)
Is information geometric, or is it fundamentally topological?
Information theory is a big, amorphous, multidisciplinary field which brings together mathematics, engineering, and computer science. It studies information, which typically manifests itself mathematically via various flavours of entropy. Another side of information theory is algorithmic information theory, which centres around notions of complexity. The mathematics of information theory tends to be analytic, and differential geometry plays a major role: Fisher information treats information as a geometric quantity, studied via the curvature of a statistical manifold. The subfield of information theory centred around this worldview is known as information geometry.
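To make the “information as curvature” slogan concrete, here is a tiny numerical sketch (my own illustration, not from the post): for a Gaussian $N(\mu, \sigma^2)$, the Fisher information in the parameter $\mu$ is $1/\sigma^2$, which is the expected negative second derivative (curvature) of the log-likelihood. The particular numbers are arbitrary.

```python
# Fisher information of N(mu, sigma^2) with respect to mu, computed as the
# (negative) curvature of the log-likelihood.  For the Gaussian this curvature
# is constant in x, so no expectation over x is needed.
import math

def log_lik(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def curvature(x, mu, sigma, h=1e-4):
    # central-difference second derivative of the log-likelihood in mu;
    # exact up to roundoff here, since log_lik is quadratic in mu
    return (log_lik(x, mu + h, sigma) - 2 * log_lik(x, mu, sigma)
            + log_lik(x, mu - h, sigma)) / h**2

sigma = 2.0
fisher = -curvature(x=0.7, mu=1.0, sigma=sigma)
assert abs(fisher - 1 / sigma**2) < 1e-4
print(fisher)  # approximately 0.25, i.e. 1/sigma^2
```

Information geometry takes this number as the metric tensor on the statistical manifold of Gaussians; the question in this post is whether what really matters in such constructions is the geometry or the underlying topology.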
But Avishy Carmi and I believe that information geometry is fundamentally topological. Geometrization shows us that the essential geometry of a closed 3-manifold is captured by its topology; analogously we believe that fundamental aspects of information geometry ought to be captured topologically. Not by the topology of the statistical manifold, perhaps, but rather by the topology of tangle machines, which is quite similar to the topology of tangles or of virtual tangles.
We have recently uploaded two preprints to the arXiv in which we define tangle machines and some of their topological invariants:
Tangle machines I: Concept
Tangle machines II: Invariants (more…)
Along with not writing many posts over the last year, I also haven’t been reading many math blogs. But I just stumbled across Alex Sisto’s blog, and wanted to share the link. He has a number of really nice posts related to curve complexes, mapping class groups, and even a trefoil knot complement cake. If you haven’t read it before, you should go and read it now.
By the way, if you happen to know of any other good geometry/topology blogs that aren’t in our blog roll (on the right side of the page), please feel free to include the link in a comment so I can add it.
I just wanted to point everyone’s attention to an upcoming conference, The Thin Manifold, organized by my long-time collaborators Scott Taylor and Maggy Tomova. The main theme of the conference will be thin position for knots and three-manifolds, with many of the talks focusing on the sort of hands-on, cut-and-paste geometric topology that I’ve been writing about on this blog.
There will be some travel funding available for graduate students and early career mathematicians. Before the conference, there will be graduate student workshops, led by Jessica Purcell, who has been doing a lot of very cool work on WYSIWYG geometry/topology, and Alex Zupan, who has been proving a lot of nice results about thin position and bridge surfaces. The graduate student workshop is August 5-7, and the conference is August 8-10. I’m looking forward to it and hope to see you there.
In my last post, I described how a train track on a surface determines a collection of loops in a surface, namely the loops that are carried by the track. Looking at these loops from the perspective of the Farey graph for the torus, this set consists of the loops corresponding to vertices in one of the components that results from cutting the Farey graph along a certain edge. In the curve complex, train tracks define partitions that are almost as simple, though they are necessarily more complicated because there is no one simplex that separates the complex. Still, this type of partition comes in very useful for calculating distances in the curve complex (and was central to my recent preprint with Yoav Moriah), but to see how that works, we need something a bit stronger. In this post, I’ll explain how we can turn the partition defined by a train track into two sets of curves with a buffer between them. By placing these buffers next to each other, we can build larger gaps that imply a lower bound on the distance between certain loops in the curve complex.
A little over a year ago, I started writing a series of posts on train tracks and normal loops, then got distracted by other things. In the meantime, I wrote a paper with Yoav Moriah involving train tracks and curve complex distances, which gave me a whole new perspective on what train tracks really mean, more in line with much of Masur and Minsky’s work. So, I want to resuscitate the series of posts on train tracks, but in a slightly different direction than where I was headed before. I’ll start by looking at a very simple case: train tracks on a torus. If you need a review of what train tracks are (the mathematical object, not the literal ones), you can reread my earlier post.