I see topological objects as natural receptacles for information. Any knot invariant is information: perhaps a knot with crossing number $n$ is a fancy way of writing the number $n$, or a knot with Alexander polynomial $\Delta(t)$ is a fancy way of carrying the information $\Delta(t)$. A few days ago, I was reading Tom Leinster's nice description of the Shannon capacity of a graph, and I was wondering whether we could also define the Shannon capacity of a knot. Avishy Carmi and I think that we can (and the knots I care about are coloured), and although the idea is rather raw, I'd like to record it here, mainly for my own benefit.

For millennia, the Inca used knots in the form of quipu to communicate information. Let's think about how we might attempt to do the same.

So on the left we have Alice and on the right we have Bob. Alice is going to communicate with Bob by sending Bob partial information. In her hand, Alice holds a knot $K$ coloured by a quandle $Q$. Let's assume for simplicity that for every $x$ and $y$ in $Q$ there exists $z$ in $Q$ such that $x \triangleright z = y$. Alice sends Bob a collection of colours in $Q$ for arcs in a knot diagram $D$ for $K$. These colours 'sit' at dots (representing arcs) on the knot diagram, and these dots cannot be 'isotopied' under an overcrossing, *i.e.* the following move is NOT an equivalence:

But I think that we should allow the following:

So what we've actually got is a 'partially coloured knot', that is, a knot together with a choice of a subset of arcs on that knot marked by dots. Two of these are equivalent if they are related by a sequence of Reidemeister moves, the move mentioned earlier involving pairs of dots around a crossing, an automorphism of $Q$, plus the following moves when all colours involved are known (note that this is not the same thing as having dots on the arcs, because you can sometimes deduce colours that you're not actually sent by Alice):

So dots have to be sent to dots, the colours have to match up, the diagram has to match up, but the set of dots is unordered.

Bob receives the colours at the dots. So now Bob knows the underlying knot, but perhaps not its colouring. Given two such packets of information, Bob can confuse one with the other if the partially coloured knots he receives are equivalent. So the question is how many non-confusable packets of information we can send using a given knot and quandle.
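As a concrete toy model (my own, since nothing above fixes a particular quandle), here is a minimal Python sketch of the dihedral quandle $R_n$ that underlies $n$-colourings, together with a check of the simplifying assumption that any colour can be carried to any other by some overcrossing colour:

```python
# A sketch, not part of the post: the dihedral quandle R_n, with operation
# x > z = 2z - x (mod n), is the quandle behind n-colourings of knots.
# We check the simplifying assumption from the text: for every pair of
# colours x, y there is an overcrossing colour z carrying x to y.

def dihedral_op(n):
    """Quandle operation of R_n: colour x acted on by overcrossing colour z."""
    return lambda x, z: (2 * z - x) % n

def connecting_colours(n):
    """For each pair (x, y), the overcrossing colours z with x > z == y."""
    op = dihedral_op(n)
    return {(x, y): [z for z in range(n) if op(x, z) == y]
            for x in range(n) for y in range(n)}

# For odd n, 2 is invertible mod n, so the connecting colour is unique.
assert all(len(zs) == 1 for zs in connecting_colours(3).values())
```

For even $n$ the assumption fails (in $R_4$, for instance, no overcrossing colour carries 0 to 1), which is presumably one reason the simplifying hypothesis is worth stating at all.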

Let's start with the 3-coloured trefoil. For $n=1$ we can send only one distinguishable packet, because the symmetry of the trefoil confuses each of its arcs with each of its other arcs, and any two elements of the quandle are related by one of its automorphisms. For $n=2$, we may send either two identically coloured dots on the two arcs involved in a Reidemeister 1 move (which gives no information about the colouring of the trefoil), or we may send two colours of two different arcs. So we have two different packets we can send, the length of the packets is 2, and the capacity of such a channel is $\sqrt{2}$. For $n=3$, there are three mutually distinguishable packets we may send, so the capacity of such a channel is $\sqrt[3]{3}$.
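As a sanity check on the arithmetic (and nothing more — this only compares the first three packet sizes, not the full supremum), the roots feeding into the capacity can be tabulated:

```python
# Toy computation: packet counts N_n for the 3-coloured trefoil as counted
# in the discussion above (N_1 = 1, N_2 = 2, N_3 = 3), and the n-th roots
# that compete in the capacity. These counts are taken from the text, not
# recomputed here.

counts = {1: 1, 2: 2, 3: 3}

roots = {n: N ** (1.0 / n) for n, N in counts.items()}
# n = 2 gives 2^(1/2) ~ 1.414, while n = 3 gives 3^(1/3) ~ 1.442,
# so among these three sizes the cube root wins.
best_n = max(roots, key=roots.get)
```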

In general, define the Shannon capacity of a coloured knot to be the maximum (or the supremum) over $n$ of the $n$th root of the number of mutually distinguishable packets of size $n$ which Alice may send Bob. This is similar to the definition of the Shannon capacity of a graph, except that now we have made explicit the 'agent' (the overcrossing arc) which allows us to confuse the 'patients' (the undercrossing arcs), instead of it just being an edge in a graph. We also have a notion of equivalence of descriptions of the structure of the information (via Reidemeister moves etc.), which the notion of the Shannon capacity of a graph did not have.
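In symbols — writing $K$ for the knot, $Q$ for the quandle, and $N_n$ for the number of mutually distinguishable packets of size $n$ (my notation, not fixed above) — the definition reads:

```latex
% Shannon capacity of a coloured knot; N_n is the number of mutually
% distinguishable packets of size n that Alice may send Bob.
\[
  c(K, Q) \;=\; \sup_{n \ge 1} \sqrt[n]{N_n}
\]
% For the 3-coloured trefoil, the counts above give N_1 = 1, N_2 = 2,
% N_3 = 3, so c(\text{trefoil}, R_3) \ge \sqrt[3]{3}.
```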

To compute the Shannon capacity of a knot, we would need to know a lot about its symmetries and about the structure of its colouring quandle. So it will be a hard problem in general, which shouldn't worry us too much, because even for graphs the problem is hard. Indeed, the Shannon capacity of the 7-cycle is an open problem, as mentioned here.
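The hardness on the graph side can be made concrete with the 5-cycle — my illustration, not part of the post. The Shannon capacity of a graph $G$ is $\sup_k \alpha(G^{\boxtimes k})^{1/k}$, and already one strong power of $C_5$ beats the independence number (Lovász famously proved the capacity of $C_5$ is exactly $\sqrt{5}$):

```python
from itertools import combinations, product

# Brute-force illustration: alpha(C_5) = 2, but the strong square C_5 x C_5
# contains an independent set of size 5 (e.g. {(i, 2i mod 5)}), and none of
# size 6. Hence the Shannon capacity of C_5 is at least 5^(1/2) > 2.

def cycle_edges(n):
    return {frozenset((i, (i + 1) % n)) for i in range(n)}

def strong_adjacent(u, v, edges):
    """Adjacency in the strong product: distinct vertices whose coordinates
    are equal or adjacent in each factor."""
    ok = lambda a, b: a == b or frozenset((a, b)) in edges
    return u != v and ok(u[0], v[0]) and ok(u[1], v[1])

def has_independent_set(vertices, adjacent, k):
    return any(all(not adjacent(u, v) for u, v in combinations(S, 2))
               for S in combinations(vertices, k))

n = 5
edges = cycle_edges(n)
verts = list(product(range(n), repeat=2))
adj = lambda u, v: strong_adjacent(u, v, edges)

assert has_independent_set(verts, adj, 5)
assert not has_independent_set(verts, adj, 6)
```

The same brute force is hopeless for deciding the 7-cycle question, since that requires unboundedly high strong powers.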

A quite natural extension of this idea would be for Alice not to have to send all of the knot diagram, but rather to allow her to send 'pseudocrossings', that is, crossings in which over-under information is suppressed. There's a very nice recent arXiv preprint about such objects, which its authors call 'pseudoknots'.

There's nothing special about knots in the above discussion: we could generalize with no effort at all to links, tangles, and virtual knotted and w-knotted objects of any sort. For me, the really interesting version (whose definition is entirely analogous) is for tangle machines, discussed HERE and HERE.

Recall that the basic building block of a tangle machine is an *interaction*:

If we were to delete all of the dotted lines (and assume we knew all the agents in some sense), we'd have just a graph, in which two vertices could be confused if connected by an edge. Confusion and deduction are two sides of the same coin. The dotted lines tell us how that deduction takes place: we can know an output given both the input and the agent, or we can know the input given both the output and the agent, or we can know the agent given both the input and the output.
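Taking the dihedral quandle as a stand-in operation (my choice of example, not the post's), the three deductions at an interaction can be sketched as:

```python
# The three deductions at an interaction, in the dihedral quandle R_n with
# x > z = 2z - x (mod n), for odd n (so that 2 is invertible mod n).
# Knowing any two of (input, agent, output) determines the third.

def output_from(x, z, n):
    """Output from input x and agent z: y = x > z = 2z - x (mod n)."""
    return (2 * z - x) % n

def input_from(y, z, n):
    """Input from output y and agent z: x = 2z - y (mod n)."""
    return (2 * z - y) % n

def agent_from(x, y, n):
    """Agent from input x and output y: solve 2z = x + y (mod n)."""
    inv2 = pow(2, -1, n)  # modular inverse; requires Python 3.8+ and odd n
    return (inv2 * (x + y)) % n

# The three deductions are mutually consistent.
n = 3
for x in range(n):
    for z in range(n):
        y = output_from(x, z, n)
        assert input_from(y, z, n) == x
        assert agent_from(x, y, n) == z
```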

It looks to me as though this notion of Shannon capacity could come up when statistically detecting tangle machines inside data (I'll post more about this in the future). We might detect certain colours and crossings with certainty, but others we're not sure about. I'm imagining Shannon capacity as being an ingredient in some sort of a measure for how much we know about the tangle machine (or about the knot) from this partial data, but this is pure fantasy at the moment. Of course, without a solid application such as this in mind, the definition is fluid, and I doubt that the above is a 'final' definition of the Shannon capacity of a coloured knot in any useful sense. I'd love to know what it is!

