In my last entry on SOGs, I described a definition of complexity for these sequences, suggested by Dave Bachman. The complexity comes from a lexicographic ordering on the complexities of the locally maximal generalized Heegaard splittings in the sequence, just as the measure of complexity for the splittings comes from a lexicographic ordering on their thick levels. Just as Scharlemann and Thompson derived topological information about the thick levels by minimizing the complexity of generalized splittings, Dave has suggested that we should be able to derive topological information about the generalized splittings in the SOG by minimizing its complexity. Rather than start off with the statement of what Dave says a minimized SOG should look like, I want to look for ways to minimize a SOG, and then at the end I’ll see if I believe Dave’s claims.
Recall that consecutive splittings in a SOG are related by either weak reduction (moving a 1-handle past a 2-handle) or destabilization (canceling a trivial 1-handle with a 2-handle). If you’re having trouble picturing this, think of a knot in the 3-sphere that’s in Morse position with respect to a height function on the sphere. As the level sets pass through the knot, we will count the number of points of intersection at the local maxima. We can define a complexity for Morse positions of the knot by a lexicographic ordering on these numbers of intersections. We can reduce this complexity by pushing a minimum up past a disjoint maximum (a weak reduction) or canceling an adjacent maximum and minimum (a destabilization). Most (though not necessarily all) of what I’m about to write works just as well in this situation.
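To make the knot picture concrete, here is a small toy model (my own encoding, not notation from the post): read the knot’s critical points from bottom to top as a list, where a ‘min’ adds 2 intersection points with the level sets and a ‘max’ removes 2. The complexity of a Morse position is then the multiset of intersection counts at the local maxima of the count function, sorted in decreasing order and compared lexicographically.

```python
# Toy model of Morse positions of a knot: a word in 'min'/'max',
# read bottom to top.  Each 'min' raises the intersection count with
# level sets by 2, each 'max' lowers it by 2.

def complexity(word):
    """Sorted (descending) tuple of local maxima of the running count."""
    c, cs = 0, [0]
    for w in word:
        c += 2 if w == 'min' else -2
        cs.append(c)
    return tuple(sorted((cs[i] for i in range(1, len(cs) - 1)
                         if cs[i] > cs[i - 1] and cs[i] > cs[i + 1]),
                        reverse=True))

w0 = ['min', 'min', 'min', 'max', 'max', 'max']   # three nested bridges
print(complexity(w0))                             # (6,)

# Weak reduction: push the third minimum up past the first maximum.
# (The model ignores the disjointness condition that makes this legal.)
w1 = ['min', 'min', 'max', 'min', 'max', 'max']
print(complexity(w1))                             # (4, 4)

# Destabilization: cancel an adjacent maximum/minimum pair outright.
w2 = ['min', 'min', 'max', 'max']
print(complexity(w2))                             # (4,)

assert complexity(w0) > complexity(w1) > complexity(w2)
```

Note that the weak reduction trades one big local maximum for two smaller ones, so under the lexicographic ordering it still counts as progress even though the list of thick levels got longer.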
Consider a locally maximal generalized Heegaard splitting G_1 in a minimal SOG. Right before it we have a generalized splitting G_0 and right after it we have G_2. If we can find a SOG from G_0 to G_2 in which each intermediate step has complexity strictly lower than that of G_1, then we can replace the subsequence G_0, G_1, G_2 by this other sequence. The new sequence might be longer, but because of the lexicographic ordering it will count as less complex.
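The point that a longer detour can still be less complex is exactly the behavior of lexicographic comparison on sorted lists, which is worth seeing once explicitly. The tuples below are made-up placeholder complexities, not computed from actual splittings:

```python
# Hedged sketch: take the complexity of a SOG to be the
# sorted-descending list of the complexities of its locally maximal
# splittings, compared lexicographically.

def sog_complexity(maxima):
    """Sort the local-maximum complexities in decreasing order."""
    return sorted(maxima, reverse=True)

# Original SOG: the worst local maximum, G_1, has complexity (7,).
old_sog = sog_complexity([(5,), (7,), (6, 3)])

# Replacement: G_1 swapped for three maxima all strictly below (7,).
new_sog = sog_complexity([(5,), (6, 5), (6, 4), (6, 2), (6, 3)])

# Longer list, but lexicographically smaller.
print(new_sog < old_sog)   # True
```

Python compares lists and tuples lexicographically out of the box, which is why the comparison on the last line does the right thing with no extra machinery.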
We get each of G_0 and G_2 from G_1 by reducing a thick surface in G_1. (To save space, I’m going to use the term reduce to mean either a weak reduction or a destabilization.) If the two reductions take place in distinct thick surfaces of G_1 then, starting from G_0, we can first perform the reduction that originally took G_1 to G_2 (since the disks that allow this reduction exist in G_0 as well) and then undo the reduction that originally took G_1 to G_0; undoing it now takes us to G_2. The intermediate splitting in this new path has complexity strictly less than G_1, so the original SOG was not minimal.
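This commuting argument can be checked in the knot picture from above, again encoding the critical points bottom to top as a ‘min’/‘max’ word (the encoding and the destabilize helper are my own illustration). Reductions at distinct thick levels act on disjoint parts of the word, so they can be performed in either order:

```python
# Toy check that reductions at distinct thick levels commute.

def destabilize(word, i):
    """Cancel the adjacent min/max pair at positions i and i+1."""
    assert {word[i], word[i + 1]} == {'min', 'max'}
    return word[:i] + word[i + 2:]

# A position with two thick levels, each admitting a destabilization.
G1 = ['min', 'min', 'max', 'max', 'min', 'min', 'max', 'max']
G0 = destabilize(G1, 1)   # reduction A, at the first thick level
G2 = destabilize(G1, 5)   # reduction B, at the second thick level

# B is still available in G0 (its pair has just shifted left by two),
# and the two orders of reduction agree on the intermediate position:
assert destabilize(G0, 3) == destabilize(G2, 1)
```

In this toy example G_1 has thick levels meeting the level sets in (4, 4) points while the intermediate position, with both reductions done, counts only (2, 2), so the detour from G_0 through it to G_2 stays strictly below G_1.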
Thus the moves right before and after G_1 must take place in the same thick surface of G_1. In fact, if there were a way to reduce one of the other thick surfaces of G_1, we could perform that reduction, then undo the reduction that originally took G_1 to G_0, then perform the reduction that takes us to G_2, then undo the reduction we just added. This new path is longer, but its complexity is lower because the complexities of the intermediate splittings are all lower than that of G_1. Thus in G_1, all but one of the thick surfaces can’t be reduced – i.e. they’re strongly irreducible.
So there’s the first important property of our local maxima – all but one of the thick surfaces is strongly irreducible. Of course, that on its own is not terribly useful – G_1 could be a Heegaard splitting with exactly one thick level. Thus we need to analyze the one thick surface that may be weakly reducible. However, this entry is already on the long side, so I’ll leave that for next time.