In today’s post, I will define tangle machines. In subsequent posts, I’ll realize them topologically and describe how we study them and more about what they mean.

To connect to what we already know, here is a rough first approximation: a tangle machine is an algebraic structure obtained by taking a knot diagram coloured by a rack, then building a graph whose vertices correspond to the arcs of the diagram and whose edges correspond to crossings (the overcrossing arc is a single unit- it “acts on” one undercrossing arc to change its colour and to convert it into another undercrossing arc). Such considerations give rise to a combinatorial diagrammatic-algebraic setup, and tangle machines are what comes from taking this setup seriously. One dream is that this setup is well-suited to modeling mutually interacting processes which satisfy a natural `conservation law’- and, in a very applied direction, to actually identifying tangle machines inside data.

To whet your appetite, below is a pretty figure illustrating a knot hiding inside a synthetic collection of phase transitions between anyons (an artificial and unrealistic collection; the hope is to find such things inside real-world data):

### A single interaction

The basic unit I’d like to consider is

Here, an __input__ is acted on by an __operator__ to become an __output__ . This process is called an __interaction__.

In the above picture there are 3 registers (*i.e.* vertices of a graph), each of which contains a colour $x$, $y$, or $z$ in a set of colours $Q$. The interaction is by way of a binary operation $\triangleright$ on $Q$. So, for instance, if $Q$ were $\mathbb{Z}_5$ (integers modulo 5), and if $\triangleright$ were defined as $x \triangleright y \doteq 2y - x \bmod 5$, then

$1 \triangleright 2 = 3$

would be an example of an interaction.
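This toy example is easy to play with in code; a minimal sketch (the function name `op` is my own, not notation from the post):

```python
# The Z_5 example: the set of colours is the integers mod 5, and the
# binary operation is x ▷ y = 2y - x (mod 5).

def op(x, y):
    """x ▷ y: the input colour x acted on by the operator colour y."""
    return (2 * y - x) % 5

# One interaction: input 1, operator 2, output 1 ▷ 2 = 3.
print(op(1, 2))  # prints 3
```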

Note that there is no `time’ parameter in the definition of an interaction. The input does not occur `before’ the output.

I want you to think of registers as representing real-world objects (elementary particles in physics, employees in a firm, physical units of computer memory) and of their colours in $Q$ as representing the relevant data which they contain and in which we are interested (wave function, wave field, role, phase, state). So

means that the operator register causes the input register to become the output register, whose colour becomes $x \triangleright y$.

An alternative figure for the interaction is

that depicts a line (the input) passing through a cylinder (the operator) in dimension 4, to become the output.

### Conservation law

The operation $\triangleright$ on $Q$ must satisfy one axiom, the __conservation law__. It says that, for each $y \in Q$, the function sending each $x$ to $x \triangleright y$ is an automorphism of $(Q, \triangleright)$. In particular, $\cdot \triangleright y$ sends the set of elements of $Q$ bijectively onto itself (for any colour $z$ there exists a colour $x$ such that $x \triangleright y = z$). This requirement is called *global conservation* or *Reidemeister 2*. It is also a bijection on relations, which means that $x \triangleright y = z$ if and only if $(x \triangleright w) \triangleright (y \triangleright w) = z \triangleright w$. This requirement is called *local conservation* or *Reidemeister 3*.

So $(Q, \triangleright)$ is a mathematical structure called a rack or an *automorphic set*. The above description, due to Brieskorn, makes it clear how natural the axiom is ($Q$ acting on itself loses no information), and how it has nothing at all to do with topology, even if later we’ll see that global and local conservation correspond to invariance of a diagram under the classical Reidemeister moves. There is no creatio ex nihilo- every state must have arisen from another state, and every relation must have arisen from another relation, bijectively.
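For the $\mathbb{Z}_5$ example above, both halves of the conservation law can be checked exhaustively; a sketch, not code from the post:

```python
# x ▷ y = 2y - x (mod 5), the Z_5 example from earlier.
def op(x, y):
    return (2 * y - x) % 5

Q = list(range(5))

# Global conservation (Reidemeister 2): for each operator y,
# the map x ↦ x ▷ y permutes Q, i.e. is a bijection onto itself.
for y in Q:
    assert sorted(op(x, y) for x in Q) == Q

# Local conservation (Reidemeister 3): acting by any w preserves
# relations: (x ▷ y) ▷ w == (x ▷ w) ▷ (y ▷ w).
for x in Q:
    for y in Q:
        for w in Q:
            assert op(op(x, y), w) == op(op(x, w), op(y, w))

print("conservation law holds")
```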

Global conservation means in particular that $\triangleright$ has a set-theoretical inverse operation, which we call $\triangleright^{-1}$. Thus our possible interactions are

not to mention cases in which some or all of the registers in an interaction happen to coincide, such as

I also want to have a `nothing happens’ edge to work with:

A special property which is satisfied in all of the applications we are currently studying, and which is worthy of special consideration both philosophically and mathematically, is the property that $x \triangleright x = x$ for all $x \in Q$. In other words, the action of an operator on itself is trivial- a colour cannot add information to itself (one setting in which I could imagine this failing would be applications to machine learning- maybe a self-interaction could add information to the machine. But I haven’t thought about this enough to have a convincing specific example of this happening). If the property $x \triangleright x = x$ is satisfied for all $x$ in a rack $Q$, then $Q$ is said to be a quandle.
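For the $\mathbb{Z}_5$ example, both the inverse operation and the quandle axiom can be verified directly; a minimal sketch (the names `op` and `op_inv` are mine, not notation from the post):

```python
# The Z_5 example x ▷ y = 2y - x (mod 5).

def op(x, y):
    """x ▷ y: the input x acted on by the operator y."""
    return (2 * y - x) % 5

def op_inv(z, y):
    """z ▷⁻¹ y: the unique x with x ▷ y = z.  Here x = 2y - z (mod 5),
    so ▷ happens to coincide with its own inverse."""
    return (2 * y - z) % 5

# Global conservation lets us undo any interaction...
assert all(op_inv(op(x, y), y) == x for x in range(5) for y in range(5))

# ...and the quandle axiom x ▷ x = x holds: self-action is trivial.
assert all(op(x, x) == x for x in range(5))
```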

### Multiplication

The output of one interaction can be the input of another. So we can `multiply’ or `compose’ interactions to form patterns such as

These correspond to a braid which closes to a coloured trefoil, and to a coloured trefoil knot, respectively (see if you can figure out how before it is explained below).
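To make “colouring by a rack” concrete: the trefoil has three arcs, and each of its three crossings imposes one interaction. Searching for colourings by the dihedral rack on $\mathbb{Z}_3$ (the classical Fox 3-colouring; my choice of example, since the trefoil admits nontrivial colourings mod 3) takes only a few lines:

```python
# Colour the trefoil's three arcs a, b, c by the dihedral rack on Z_3,
# where x ▷ y = 2y - x (mod 3).  The crossing relations below are read
# off a standard trefoil diagram.
from itertools import product

def op(x, y, n=3):
    return (2 * y - x) % n

colourings = [
    (a, b, c)
    for a, b, c in product(range(3), repeat=3)
    if op(a, b) == c and op(b, c) == a and op(c, a) == b
]

print(len(colourings))  # 9: three constant colourings plus six others
```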

Notice that operators are themselves coloured, and so it makes sense to use them as inputs or as outputs for interactions. Objects which act on other objects may in turn be acted on.

I’m going to want to call the structures discussed above *tangle machines*, and my next step is going to be to justify that language, and related terminology which may be used to discuss them.

Let’s think of the Cayley graph of $(Q, \triangleright)$ as an automaton. This graph has the elements of $Q$ as its vertices, and an edge labeled $y$ between $x$ and $x \triangleright y$ for all $x, y \in Q$. A __process__ is a walk on this automaton (I think that this is consistent with automata-theory usage of the word `process’- am I right?). The start point of the walk is its __initial register__ and its endpoint is its __terminal register__. A machine is a collection of mutually interacting processes.
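In code, a walk on this automaton is just iterated application of $\triangleright$; the helper below is my own sketch, again for the $\mathbb{Z}_5$ example:

```python
# States of the automaton are colours in Z_5; reading operator y from
# state x moves the machine to state x ▷ y = 2y - x (mod 5).

def op(x, y):
    return (2 * y - x) % 5

def run_process(initial, operators):
    """Walk the automaton from the initial register's colour,
    applying each operator in turn; return the terminal colour."""
    state = initial
    for y in operators:
        state = op(state, y)
    return state

print(run_process(1, [2, 4]))  # 1 ▷ 2 = 3, then 3 ▷ 4 = 0
```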

Precisely:

Definition:

A __machine__ is a graph each of whose connected components is either a line graph or a cycle, together with a partially-defined function $\phi$ (which assigns to each edge the vertex corresponding to the operator in the interaction across that edge, with a plus sign if it’s $\triangleright$ and a minus sign if it’s $\triangleright^{-1}$; this sign is called $\mathrm{sign}(e)$) and a function $\rho$ (uploading a colour into each register) such that, if $e$ is an edge from $v$ to $w$, then we have:

- $\rho(w) = \rho(v) \triangleright \rho(\phi(e))$, if $\mathrm{sign}(e) = +$;
- $\rho(w) = \rho(v) \triangleright^{-1} \rho(\phi(e))$, if $\mathrm{sign}(e) = -$;
- $\rho(w) = \rho(v)$, if $\phi$ is undefined on $e$.
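The definition transcribes almost literally into a consistency check; a sketch under my own encoding (edges as tuples, with a `None` operator for the `nothing happens’ edges):

```python
# A machine over the Z_5 example: edges are tuples (v, w, operator,
# sign), where sign is +1 for ▷ and -1 for ▷⁻¹, and operator is None
# on a `nothing happens' edge.  rho uploads a colour into each register.

def op(x, y):
    return (2 * y - x) % 5

def op_inv(z, y):
    return (2 * y - z) % 5

def is_machine(edges, rho):
    """Check that the colouring rho is consistent with every edge."""
    for v, w, operator, sign in edges:
        if operator is None:
            ok = rho[w] == rho[v]
        elif sign == +1:
            ok = rho[w] == op(rho[v], rho[operator])
        else:
            ok = rho[w] == op_inv(rho[v], rho[operator])
        if not ok:
            return False
    return True

# One interaction: register 0 (colour 1), acted on by register 2
# (colour 2), becomes register 1 (colour 1 ▷ 2 = 3).
print(is_machine([(0, 1, 2, +1)], {0: 1, 1: 3, 2: 2}))  # True
```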

### Reidemeister moves

The conservation law implies that some local modifications of a machine do not change its information content. For instance:

This relation says that if we perform an operation and then immediately undo it, it’s the same as not having done anything at all. This move is called *Reidemeister 2*, because it will be expressed as a sort of Reidemeister 2 move when we pass to a different sort of diagram.

Local conservation implies that:

The left hand side imparts the same information as the right hand side.

And, more generally:

What happens here is that the `z’ label moves all the way over the local picture, without adding or taking away any information.

### Philosophical aside

Because I’m trying to sell you the idea that machines pre-exist, I owe you a philosophical explanation for the Reidemeister moves in the context of machines and processes. This serves to elucidate the type of phenomena which machines might perhaps be used to model.

I think that Reidemeister 2 tells us that interactions in machines are causation rather than correlation. In statistics, we measure correlation between variables- but finding such a correlation does not tell us that one causes the other, or which causes which. There is a nice discussion on Wikipedia. Conversely, in our machines the vertices are oriented clockwise or counterclockwise, as determined by whether the operation is $\triangleright$ or $\triangleright^{-1}$, and this does describe causality- it determines a causal relation between the input, which, acted on by the operator, becomes the output; and it tells us that given the output and the operator we can reconstruct the input. This tells us that the phenomena which machines can model should be phenomena in which the interactions are causal, and in which, given the output and the operator, we know the input. Causality is often viewed as being a time-related phenomenon- the cause must precede the effect- but that isn’t really true. Causality is order (orientation of vertices) as opposed to direction (orientation of edges).

I interpret Reidemeister 3 philosophically as telling us that, if we have determined a `causal web’ of causes and effects (one interaction is a sufficient `web’ for me), and then we find a more fundamental cause for everything (an operator which acts on everything), then this still preserves the causal web. So it’s a stability property of the phenomenon- the causal relationships which we have found inside the data remain causal relationships when we find more fundamental causes as well. Our model does not melt away when we peer deeper into a system.

### Tangle diagrams and topological realization

I’ll say more about this next time, but tangle machines have diagrams which look a lot like knot diagrams. A picture is worth a thousand words:

You should think of this as representing a 4-dimensional picture (at least, if $Q$ is a quandle). So the thick line is a 2-sphere in $\mathbb{R}^4$, and the line is crossing `through’ it:

We also want to allow a line segment to `inflate’ into a sphere with nothing passing through it, and to allow such a thing to deflate (otherwise there is no Reidemeister 2).

So for an example of a diagram:

This is quite similar to Bar-Natan’s balloons and hoops, except that we have a lot of balloons all lined up, instead of just one per component- thanks to Dror for pointing this out!

Note (and we’ll discuss this next time) that Reidemeister 2 for machines differs from a Reidemeister 2 move in knot theory in an interesting way (as does Reidemeister 3)! Namely, there is no ordering of operations along the `overstrand’ (the operator). Our Reidemeister 2 preserves information. In contrast, Reidemeister 2 in knot theory **does** add information: it separates the overstrand into three segments (`before’, `during’, and `after’ the local picture), and so induces an ordering of points on the overstrand which did not exist before the move was performed. This exemplifies the difference in flavour between tangle machines and classical knot theory.

Next time I’ll say more about topological realizations, and about invariants for these things.
