
Chern-Simons theory is concerned with canonically associating functions to representations. In a typical topological context, we would be looking at the moduli space of representations of a knot group into a group such as . The associated functions are topological invariants, and such invariants are of primary interest in quantum topology. Examples of invariants arising as or from such functions are the Jones polynomial and the Alexander polynomial (indirectly; what actually shows up is the square root of analytic torsion).

I find it deeply unsatisfying that Chern-Simons theory for topologists (indeed, all of quantum topology) happens over the complex numbers (some stuff can happen over or , but that's as far as it goes). There is no conceptual justification for introducing complex numbers that I can see; for technical reasons, we just seem to need the complex structure on the moduli space in order to be able to prove anything. There have been various attempts to study quantum topology over other fields or over rings such as the integers, but as far as I know the results are weak. For example, a fundamental result in quantum knot topology, that Vassiliev invariants are uniquely specified by weight systems, is only known over a few fields, as lamented e.g. by Bar-Natan.

In analytic number theory the goal is once again to canonically associate functions to representations. The parallel nature of the task is striking: the role of the knot complement is taken by an arithmetic scheme , and the role of the group is played by a so-called motivic sheaf which is uniquely built up from a representation of the arithmetic fundamental group of . On the other side, the properties the canonical functions must satisfy nicely parallel those of Chern-Simons theory.

The functions thus constructed, assuming that they exist, are topological invariants of . No general construction for these functions is known, but L-functions, whose constructions have been via ad-hoc methods, are examples. In fact, two major conjectures in analytic number theory, the Iwasawa Main Conjecture and the Hasse-Weil Conjecture, can both be framed as conjectures that such a canonical assignment of functions to representations exists.

I ought to mention parenthetically that quantum topological analytic number theory has already happened, when Le and Murakami used the Kontsevich invariant to discover relations between multiple zeta values that had not been known previously, which analytic number theorists have since assimilated (see this recent survey by Furusho).

Arithmetic Chern-Simons promises to be exciting for both communities. For topologists, we may dream of a more flexible version of Chern-Simons theory which works over more general rings than just the complex numbers (although sadly we have a dearth of good conjectures in this direction at the moment). For number theorists, perhaps quantum topology can provide ideas to help attack some conjectures of interest. One may fantasize that, by way of these goals, we will gain an understanding of how knots and 3-manifolds fit into the main body of the big picture of mathematics, and how they might act as special model cases not only in topology, but perhaps also in number theory.


A. Hope Jahren, She Wanted to Do Her Research. He Wanted to Talk ‘Feelings.’, New York Times, March 4, 2016.

What makes this piece especially interesting for me is that it's written so that one understands the harasser, and is made to realize that “it could be me”. The pattern she describes sounds more common than one might like to admit, and the person writing the e-mail would almost certainly not be cognizant that it constitutes harassment. A male TA, professor, or supervisor, using the excuse of an altered state of mind (hasn't slept, drank too much), e-mails a love confession to a female student or colleague in a way that blames her; it is a total power play, and it is creepy and maybe a bit threatening (although of course he doesn't see it that way). A wrong response to this first e-mail might mean that the victim gets harassed for a long time.

The author says that this first e-mail must be answered by firmly telling him (not asking him) to stop. But, Jahren laments, it never, never stops. While surely Jahren’s suggestion is sensible, a firm, “Dude, I have zero romantic interest in you. In addition you might want to read this piece by Jahren,” might, I think, be even more effective.

What do you all think? How prevalent is this type of sexual harassment in mathematics, and what can be done to effectively nip such harassment patterns in the bud?


A.Y. Carmi and D.M., Statistics Limits Nonlocality, arXiv:1507.07514.

It offers a statistical explanation for a Physics inequality called Tsirelson’s bound (perhaps to be compared to a known explanation called Information Causality). Behind the fold I will sketch how it works.

A *binary channel* is a pair of Bernoulli (0- and 1-valued) random variables and , representing *input* and *output*, together with a conditional probability function representing *noise*. A channel is typically described by telling a story about how is constructed from and some additional random resources; but mathematically it's really just the conditional probability function.
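As a concrete illustration (a minimal sketch; the function names and the symmetric-noise model are my own choices, not the preprint's), a binary channel really is nothing but its conditional probability function, and any “story” realizing that function is the same channel:

```python
import random

_rng = random.Random(0)  # fixed seed so the demo is reproducible

def channel_law(x, y, eps):
    """The conditional probability P(output = y | input = x) of a
    binary symmetric channel that flips its input with probability eps."""
    return 1 - eps if y == x else eps

def transmit(x, eps):
    """One 'story' realizing that law: XOR the input with a noise bit."""
    return x ^ (_rng.random() < eps)

# Empirically, the story reproduces the law:
flips = sum(transmit(0, 0.2) for _ in range(10_000)) / 10_000  # ~ 0.2
```

Any other mechanism (different randomness, different construction) defining the same `channel_law` is, mathematically, the same channel.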

Usually it is a realization of , *i.e.* a zero or a one, that is the message we would like to send through the channel. So the random variables of channels usually represent distributions of a realization. But I’d like to consider a different setting, in which the message through the channel is all of . In other words, the message is the real number . The parameter contains an infinite amount of information (all values in its binary expansion, for instance), as opposed to the content of a sample that is one bit. So Bob’s “task” is to estimate the parameter to the best of his ability. To do this, he is allowed to sample a predetermined number of times.

I would like to partition what may happen into three (realistic) cases:

- There is no channel between and because . A fortiori, a finite number of samples of tell us nothing about .
- There is a channel between and , *i.e.* , but what is being broadcast through the channel cannot be distinguished from noise. More precisely, consider Fisher information, a mathematical quantity measuring how much samples of a random variable tell us about a parameter. It measures this via the Cramér-Rao Theorem, which tells us that the variance of any estimate which Bob can construct of based on the information at his disposal is bounded from below by one over the Fisher information. Our -valued random variables have variance bounded above by (the variance of a Bernoulli random variable is , whose maximum is at ), therefore the Fisher information of under is “no information”. Thus Bob would learn just as much about Alice's variable by tossing a fair coin as he would by listening to the output of the channel.
- Alice and Bob are communicating!
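The Cramér-Rao step in the second case can be made concrete. A minimal sketch for a Bernoulli parameter (writing `p` for Alice's parameter, which is my notation, not the preprint's):

```python
def fisher_info_bernoulli(p):
    # Fisher information of a single Bernoulli(p) sample about p:
    # I(p) = 1 / (p * (1 - p)); smallest (namely 4) at p = 1/2.
    return 1.0 / (p * (1 - p))

def cramer_rao_bound(p, n):
    # Cramér-Rao: the variance of any unbiased estimator of p built
    # from n independent samples is at least 1 / (n * I(p)).
    return 1.0 / (n * fisher_info_bernoulli(p))

def bernoulli_variance(p):
    # p * (1 - p), maximized (value 1/4) at p = 1/2.
    return p * (1 - p)
```

The sample mean achieves this bound: its variance is exactly p(1 - p)/n = 1/(n I(p)), so no estimator Bob builds can do better.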

I would like to draw your attention to the second case, in which there is a channel but the information broadcast through the channel is indistinguishable from noise. The situation is analogous to a long game of Chinese whispers, in which one person whispers a message to another until the final person announces the message to the entire group. A massive such game played in 2012 resulted in “Alice’s” message “Life must be lived as play” (a paraphrase of a quote from Plato) being relayed to “Bob” as “He bites snails”. In a long enough game, with probability one, Bob will receive only noise despite a channel undeniably existing.
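The Chinese-whispers picture is easy to quantify. If each whisper is modelled as a binary symmetric channel with flip probability `eps` (a stand-in model of my own, not from the preprint), the end-to-end bias decays geometrically; a sketch:

```python
def compose(e1, e2):
    # Flip probability of two binary symmetric channels in series:
    # the message is corrupted iff exactly one of the two stages flips.
    return e1 * (1 - e2) + e2 * (1 - e1)

def chain_flip_prob(eps, n):
    # Flip probability after n whispers, each with flip probability eps.
    p = 0.0
    for _ in range(n):
        p = compose(p, eps)
    return p

# The bias 1 - 2 * chain_flip_prob(eps, n) equals (1 - 2 * eps) ** n,
# so a long enough chain is indistinguishable from a fair coin, even
# though every individual link is a perfectly good channel.
```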

In a certain context, Physicists refer to Case I (nonexistence of a channel) as “Locality”, in that Alice and Bob are effectively isolated from one another. But I think that Case II is also “Locality” according to my intuitive understanding of the term. If a tree falls in a forest and no one is around to hear, does it make a sound? If a sample of cannot be used to analyze , in what sense is it paradoxical that and are dependent?

But the word “Locality” is taken to refer to Case I, therefore I’ll refer to Cases I and II together (in the physical context I’m just about to describe) as “Information Locality”.

In Newtonian Mechanics an object can only be in one place at one time. An arresting feature of Quantum Mechanics is that there is a sense in which an object can be located in two places at once. More precisely:

Nonlocality: A pair of quantum systems which are shown not to be physically interacting may be impossible to describe as independent entities.

Such a pair of inseparable quantum systems perforce must be described as one system which is in two places at once. The archetype of nonlocality is a pair of distant agents, Alice and Bob, each of whom holds one half of a singlet. A measurement performed on Alice's particle appears to have an instantaneous effect on Bob's particle, and vice versa. The strength of this perceived effect is quantified by a real number called the *Bell-CHSH correlation*. If (“Bell's Inequality”) then we are in a *local* setting: Alice's system may be fully described independently of Bob's system, and these two systems fully describe the joint system. Bell's Theorem tells us that Alice and Bob's halves can no longer be described as independent entities governed exclusively by local influences when exceeds .

Bell's Theorem is proved using only Probability Theory, and as such is independent of the functional analysis formalism of Quantum Mechanics. Why is this important? Besides aesthetic considerations, reliance on Probability Theory alone is welcome in the context of the search for a Grand Unified Theory uniting Quantum Mechanics with General Relativity. The mathematical formalism of Quantum Mechanics (functional analysis) is different from the mathematical formalism of General Relativity (differential geometry), so we would expect a grand unified theory to be described by a mathematical formalism which envelopes both of these formalisms and more; in particular, we would not expect it to be based on functional analysis.

Bell’s Inequality is indeed violated experimentally. Nonlocality is real. Newtonian mechanics alone cannot describe the quantum world.

How large can be?

Within the Hilbert-space formalism of quantum mechanics, Tsirelson showed that . Tsirelson’s bound is supported experimentally.

We would love to understand Tsirelson’s bound in a broader context (*e.g.* probability or statistics), so that the same upper bound on continues to hold if and when the functional analytic formalism of Quantum Mechanics is replaced by a more abstract language.

The basic building block of the so-called “context-free approach” to nonlocality is a pair of boxes, one held by Alice and one by Bob. These boxes abstract the notion of entangled particles. Into each box you can insert either a zero or a one, and the box responds by instantaneously spitting out either a zero or a one. Call Alice’s box input and her box output , and call Bob’s box input and his box output . We assume various marginals such as and to be random variables.

The Bell-CHSH correlation is now defined as the conditional probability

Thus, defines a binary channel from to . Addition and multiplication are modulo . This is kind-of weird but also kind-of cool: the channel in the Bell-CHSH setting isn't between its “Alice” and its “Bob”, but rather between the product of Alice and Bob's inputs and the sum of their outputs.
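To see why this channel is interesting, one can check by brute force that no pair of local deterministic strategies lets Alice and Bob satisfy the winning condition (outputs summing to the product of the inputs, mod 2) on more than 3 of the 4 input pairs. A small sketch (the encoding of strategies as lookup tables is my own):

```python
from itertools import product

def win_prob(fa, fb):
    # fa, fb: deterministic strategies, i.e. tuples mapping the input
    # bit to the output bit for Alice and Bob respectively.
    wins = sum((fa[x] ^ fb[y]) == x * y
               for x, y in product((0, 1), repeat=2))
    return wins / 4

strategies = list(product((0, 1), repeat=2))
best_classical = max(win_prob(fa, fb)
                     for fa in strategies for fb in strategies)
# best_classical == 0.75, the classical (local) bound; shared quantum
# entanglement raises this to (2 + 2 ** 0.5) / 4, about 0.854, which
# corresponds to Tsirelson's bound.
```

The reason 4/4 is impossible classically: summing the four constraints mod 2 makes each output appear twice (and so cancel), while the products of the inputs sum to 1.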

Having noted that can be used to define a channel, we can generalize to the case of multiple boxes. Each one of Alice’s boxes has a corresponding box on Bob’s side, and the coordination strength between the pair of boxes is quantified by .

The classical protocol for multiple boxes is called an “oblivious transfer” and is detailed in a paper by van Dam. Alice and Bob each hold in front of them an infinite family of boxes, such that each box of Alice’s is correlated with a box of Bob’s with Bell-CHSH parameter (the same for each matching pair of boxes). Alice holds an information source which is a Bernoulli random variable with mean . We imagine as encoding a message, perhaps in the digits of its binary expansion (because it’s a real number, it contains infinite information). Alice independently samples values from (the interesting case is in the limit ).

We specialize to the case . Using the oblivious transfer protocol, which takes advantage of the full power of Alice and Bob's boxes, we compress into a single bit which Alice sends through a channel to Bob, who receives it as . Using his boxes, Bob decompresses the bit he receives into , which are also independent identically distributed (iid) and which we may consider as realizations of a Bernoulli random variable whose mean is their sample average (the variable depends on but we suppress this from the notation). We now have a noisy channel with input and with output .

Almost all of our actual work was figuring out the reformulation above: with everything well-defined and phrased in terms of channels, the computations are routine.

A quick computation shows that . Thus the channel between and disconnects in the limit. Conversely, the Fisher information about in is computed to be . This term stays between zero and one (one instead of as above, because our random variables are -valued) for all only when Tsirelson's bound holds.
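The precise formulae are in the preprint; as a purely illustrative sketch (the decay rates below are my assumed stand-ins, writing E for the Bell-CHSH parameter, chosen only to exhibit the threshold behaviour, since E²/8 equals 1 exactly when E = 2√2), one can see why a bound of this shape separates the cases:

```python
def channel_bias(E, n):
    # Assumed stand-in: the end-to-end correlation of the channel
    # decays geometrically in the number of boxes n, so for E < 4
    # the channel disconnects in the limit.
    return (E / 4.0) ** n

def fisher_term(E, n):
    # Assumed stand-in: a Fisher-information term behaving like
    # (E**2 / 8) ** n.  Since (2 * sqrt(2)) ** 2 / 8 == 1, it stays
    # between zero and one for every n exactly when E <= 2 * sqrt(2)
    # (Tsirelson's bound), and blows up otherwise.
    return (E ** 2 / 8.0) ** n
```

Under these stand-ins the dichotomy is visible: below Tsirelson's bound the channel dies and the Fisher information stays tame; beyond it the channel still dies while the Fisher information diverges.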

[Note that we’re assuming that in the above formulae, which is essentially a technicality.]

In other words, within the context of the oblivious transfer protocol, Tsirelson’s bound is interpreted as a necessary and sufficient condition for information locality. This interprets Tsirelson’s bound entirely in terms of statistics.

Note that if Tsirelson’s bound were violated we would have a strange “Case 4” (which is actually Case I and Case III at once) in the limit in our 3-way division. Namely, in this limit the channel would disconnect but Bob would nevertheless receive full information of . We suggest that such a case ought not to occur in the real world.

Informally, our result states that Bob may infer nontrivial information about if and only if Tsirelson’s bound is violated. A thought experiment which sharpens this point (but which isn’t in the present version of our preprint) is presented below.

Let’s consider the special case in which either (Alice samples by flipping a fair coin) or (Alice’s samples are either all or all ), and Bob’s task is to determine which of these is the case. Is Alice sending random bits, or are her samples all the same? Say that the reality is . Bob’s null hypothesis is that and his alternative hypothesis is that . He conducts the likelihood ratio test in an attempt to rule out the null hypothesis. So he computes the likelihood of the null hypothesis divided by the likelihood of the alternative hypothesis: if the ratio is zero then the test succeeds, and if it is one then the test fails.

A computation shows that the likelihood ratio in this case is asymptotically , so that the test succeeds in the limit (and Bob can infer the value of ) if and only if Tsirelson’s bound is violated (and remember that the channel is disconnected in this case).
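A sketch of Bob's likelihood ratio test (my notation: `eps` is the effective flip probability of the end-to-end channel, with 0 < eps < 1, and Bob tests the null hypothesis “fair coin” against the alternative “all ones”):

```python
import math

def log_likelihood_ratio(received_bits, eps):
    """log of [likelihood of the null (fair coin) / likelihood of the
    alternative (all ones, seen through a channel flipping w.p. eps)]."""
    n = len(received_bits)
    k = sum(received_bits)  # number of ones Bob received
    log_null = n * math.log(0.5)  # under the null every bit has prob 1/2
    log_alt = k * math.log(1 - eps) + (n - k) * math.log(eps)
    return log_null - log_alt

# If the channel is pure noise (eps = 1/2) the ratio is identically 1
# (log-ratio 0): the test fails and Bob learns nothing.  If eps < 1/2,
# then on an all-ones reality the log-ratio drifts to minus infinity
# with n: the test succeeds in the limit.
```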

Our result is related to a known criterion called Information Causality. Is it sufficiently novel compared to this criterion? I can’t express an opinion… “sufficiency of novelty” isn’t well-defined. Below I describe the relationship of our work with Information Causality.

The original paper which formulated Tsirelson’s bound in terms of information was Information Causality as a Physical Principle by Pawlowski, Paterek, Kaszlikowski, Scarani, Winter, and Zukowski. In the same context as the one we work in, that paper formulates a principle called Information Causality, which roughly states that the maximum information Bob can have about Alice’s bits is because he was only sent one bit by Alice (the bit ). So Bob can infer at most the amount of information in bits that Alice actually classically sent him. Nonlocality cannot be used to construct a superluminal telegraph.

Here is a rough formulation of Information Causality:

Information Causality: The amount of information potentially available to Bob about Alice’s bits is bounded above by the number of bits Alice sends to Bob through a classical channel.

And here, beside it, is a formulation of the principle we would like to suggest instead.

Statistical No-Signaling: No information can pass through a channel whose output is independent of its input (Case I).

This is equivalent to Tsirelson’s bound via the case of our experiment described above. Namely, the channel correlation converges to zero (so the channel disconnects at infinity) while the Fisher information stays bounded (in fact stays between zero and one) if and only if Tsirelson’s bound holds.

The “Information Causality quantity” is the mutual information of and . The result of that paper is that violation of Tsirelson’s bound allows violation of Information Causality. But not the converse: they cannot prove that violation of Information Causality implies violation of Tsirelson’s bound.

Concretely, the information causality quantity is less than or equal to a term which we interpret as Fisher information, which is less than or equal to if and only if Tsirelson’s bound holds.

In the appendix to their paper:

In what sense, then, is our result more than a trivial restatement of Information Causality? Well, first of all, it’s technically a different mathematical result. It implies information causality (in the context of the protocols we both consider, anyway) but is not implied by it in any obvious way.

Perhaps I can argue that Statistical No-Signaling is more fundamental. Information Causality involves a whole Alice-and-Bob story (and I’m not sure how to formulate it rigorously mathematically), whereas Statistical No-Signaling is a general statistical statement: if and are independent random variables then you cannot learn by sampling a countable number of times (in oblivious transfer, is constructed via a limiting construction). Mathematically you can (if Tsirelson’s bound were violated, then by oblivious transfer), but the statement is that Physically you can’t. It suggests that although we use words like Locality and No-Signaling, perhaps they shouldn’t mean what we think they mean.

Our `story’ is different also. We’ve interpreted the intermediate term as Fisher information so that the task we are discussing is a statistical inference task as opposed to measuring mutual information between two strings. So, for us, Tsirelson’s bound is related to the Central Limit Theorem, by which we can characterize the convergence the sample mean of to as grows. Fundamental physics relates to fundamental statistics. Aesthetically, I like that.

Because the CLT is so mathematically fundamental, many criteria can be formulated that will follow from it. Actually, I think that Statistical No-Signaling might philosophically be closer to Macroscopic Locality than to Information Causality, because we’re saying that a physical system “becomes local” as the number of boxes grows to infinity. I don’t know how to rigorously derive Macroscopic Locality from our work, though.

Still on the “story” front, Fisher information gives us the 3-way division discussed at the beginning of this post, interpreting the three relevant ranges of the Bell-CHSH parameter . If there is no channel. If there is a channel but its output is indistinguishable from noise. If then there is communication through a channel. Information Causality doesn’t interpret the different ranges in any meaningful way, to the best of my understanding.

In addition, functionally, we have a thought experiment with a binary outcome which succeeds if and only if Tsirelson’s bound is violated, and I’m not sure how you could do that with Information Causality.

So why would I (D.M.) be looking at physics-y things like this?

Well, personally I think that the fundamental laws of nature ought to be distributive and nonassociative (this is an irrational bias, I know). One thing that this implies to me is that we should attempt to work as much as possible at the level of measures of information (*e.g.* Fisher information) rather than working in terms of vectors, functions, strings, and so on. We’ve worked on understanding information flow in a low dimensional topological context in previous work, such as arXiv:1409.5505.

I’d like to suggest the philosophical idea that joint distributions are largely a fiction. We can’t meaningfully speak about joint distributions of distant objects we can’t instantaneously compare. But marginals, conditional probabilities, are real. Nonlocality is a setting in which that is how things work. Estimators based on conditional probabilities behave like quandle elements (that’s in arXiv:1409.5505), so my dream is to link this all up to low-dimensional topology. But I’m not there yet.

If you want to know more, please read the preprint. Feedback is welcome!!


The Blanchfield pairing on the Alexander module occurs in various places in knot theory, including in quantum topology. Levine’s 1977 argument for its expression in terms of the Seifert matrix doesn’t make easy reading (the authors suggest it’s incomplete; I can’t judge), and it is notoriously difficult to prove that the Blanchfield pairing is Hermitian. The authors deal deftly with both problems using a more modern but clearly sensible toolbox. Time to rewrite the textbooks.

I wish there were more papers like this. Some aspects of low dimensional topology could use a careful, sensible, modern reboot such as that of this paper.


The Simple Loop Conjecture fits into that family of statements such as Dehn’s Lemma and the Sphere Theorem which translate statements about fundamental groups into statements about 3-manifolds. Such theorems allow us to trade 3-manifolds for their fundamental groups (which are much simpler mathematical objects).

Consider a 2-sided immersion of a closed orientable surface into a closed 3-manifold . The Simple Loop Conjecture states that if is not injective then there is an essential simple closed curve in that represents an element in the kernel of . If were an embedding then this would follow from Papakyriakopoulos’s loop theorem. “To be an embedding” doesn’t translate to an algebraic property, so the Simple Loop Conjecture is more of a “ to 3-manifolds” statement than the loop theorem. It allows us to replace non--injective immersions by immersions of lower genus surfaces by surgery, paralleling passage to a normal subgroup; so it really does translate between algebra and topology. I asked an MO question regarding applications.

One of the nice things about the simple loop theorem is that the target really does seem to need to be a 3-manifold group or something similar. There have been several attempts to generalize, for instance to consider representations into , but they have all failed.

Joel Hass proved the conjecture for Seifert-fibered spaces using geometrical techniques in 1987, and Hyam Rubinstein and Shicheng Wang proved the conjecture in 1998 for non-trivial graph manifolds, which for them meant the graph manifold was also not Sol. Mayer Landau tells me that Rubinstein-Wang’s technique might work also for Sol manifolds, but he is not sure… Anyway, Zemke does something different.

The most interesting case for the Simple Loop Conjecture is of course for hyperbolic manifolds. There, it’s known to be false in higher dimensions.

Thanks to Mayer Landau for drawing my attention to this preprint and for explaining its significance.


Question: What is an alternating knot?

The preprints are:

- Joshua Evan Greene, Alternating links and definite surfaces, arXiv:1511.06329
- Joshua Howie, A characterisation of alternating knot exteriors, arXiv:1511.04945

This post will briefly introduce the problem; I look forward to reading the solutions themselves!

An alternating knot *diagram* is a diagram of a knot in which crossings alternate over and under along the knot. A knot is alternating if it has an alternating diagram. The Tait conjectures are about properties of alternating knots, and there are various other theorems in topology, *e.g.* in Floer homology, in which alternating knots are special. Given that a knot diagram is a mere combinatorial shadow of a topological object (a knot in 3-space), it seems that there ought to be a simple topological characterization of alternating knots in 3-space, with no mention of diagrams. Unfortunately there has not been such a characterization… until now!

Josh G.’s characterization works in a more general setting than Josh H.’s, but they both come down to the same idea: properties of pairings on the first homologies of checkerboard surfaces. In Josh G.’s case (which works also for links), there is a pairing on the first homology of a surface in a homology sphere, and the checkerboard surfaces of the knot are isotopic rel. boundary to two surfaces, one of which has a positive definite pairing, and the other a negative definite pairing.

This is so simple and elegant!! How could it have taken so long to discover?!

Both solutions yield very similar exponential-time algorithms to determine whether or not a knot is alternating (this would be much harder via the diagram).

The results of the two Joshuas might open the door to a geometric topological proof of Tait’s flyping conjecture. This would very nicely steal the thunder of quantum topology: the great classical triumph of quantum topology has always been the proof of the Tait conjectures, which had no other known proof.

Thanks to Dave Futer for calling my attention to these preprints!


The main problem that most newly minted math PhDs have is that they don’t know what non-academic jobs are out there, and what they might be well suited to. I certainly had that problem. So the first step is to find out. There are a few companies and a few types of jobs that specifically look to hire math PhDs, and you’ll see some of these advertising on mathjobs.org. But I found this to be too narrow a list and one that I didn’t find particularly appealing – the most obvious ones are the NSA and computerized trading companies.

Instead, I had much better luck investigating jobs that were looking for people from any background who had the types of skills that I thought had made me a good mathematician. This meant thinking in terms of general skills/abilities such as communication, understanding abstract/complex systems and managing complex (collaborative) projects. That opened the door to a much broader range of jobs, including both technical and non-technical jobs. Programming/software engineering (what I ended up with) is on this list, but it’s far from the only one.

Of course, that still leaves the problem of finding the jobs that meet these criteria. What I found most useful for this was something that the book I mentioned above calls an *informational interview*. The idea is simple: you ask someone who has an interesting-sounding job to have a short conversation about their career. It helps if someone you know introduces you to them, and LinkedIn can be handy for finding such connections. You ask about what they do during the day, how they like it, how they got the job, etc. I know this sounds a bit hokey, but it’s also really interesting, and it turns out many people like talking about themselves.

The informational interview is explicitly no-strings-attached, i.e. it’s not a job interview and there’s no expectation that the person you talk to will help you get a job. But because it’s no-strings, people tend to be happy to do this. And because it’s a minimal investment for you, it’s a good way to explore options that you might not have thought you’d be interested in. I talked to a lot of different people in this phase, and probably wouldn’t have ended up where I am now otherwise; when I started my job search I was focused on jobs that were directly related to machine learning. But then I talked to an acquaintance from my undergraduate days who works at Google. He introduced me to a former computer science professor who had just started at Google, and who convinced me to apply.

The people that you have informational interviews with may also point you to specific job openings, and may even offer to refer you to someone who’s involved in the hiring decision. Perhaps not surprisingly, it turns out that applying for jobs on job boards is less effective than getting personal referrals, even if it’s from someone that you’ve only talked to over the phone. This may feel strange compared to the academic job search, where all applications go through an official process such as mathjobs.org. But keep in mind that mathematics is a small world. On the hiring committee at OSU, for the vast majority of applicants, someone on the committee personally knew at least one of the applicant’s letter writers, if not their PhD adviser. Outside of academia, it doesn’t work like that, so many companies use personal referrals to make up for it.

(Unfortunately, this reliance on personal referrals is a factor in the lack of diversity in the tech industry: since people tend to spend their time with others with similar backgrounds, individuals will tend to refer job candidates who are similar to them. My comments above are not intended to justify or defend this system. I’m just describing my understanding of how things work.)

But even without personal referrals, if you present yourself in the right way (particularly for jobs such as software engineer and data scientist that are in sufficient demand), old-fashioned job applications can still be pretty effective. For one job I applied to the old-fashioned way, the recruiter e-mailed me back within an hour to schedule a phone call, though it turned out they needed someone to start before my semester was over.

Presenting yourself in the right way turns out to be the second tricky part. There are a few simple but essential things like understanding the difference between a resume and a CV, making sure your resume is at most two pages (or better yet one) and focusing it on skills related to the job in question, rather than on unrelated academic merits. It’s easy to get used to the fact that every job in a math department is pretty much the same – some combination of teaching, research and service – making it easy to point to previous experience as evidence that you’ll do well in the job you’re applying to. In the private sector, it’s more common to get a job in a position that you’ve never had before. Fitting an academic background into a non-academic resume takes some creativity, but again the key is to think in terms of broad skills. That means things like communication (teaching, research talks) and managing complex projects (your dissertation, long-distance collaborations). These will help demonstrate that you’ll be able to learn whatever it takes to succeed at the job, even though you’re going to be starting from scratch.

As an example, Google’s interview process consists mostly of working out programming problems where the only background knowledge requirement is (roughly) an undergraduate data structures and algorithms course. (Take a look at the official list of study resources.) What makes the interview hard is that you have to think on your feet about a problem you haven’t seen before, and the interviewers pay close attention to how you think (out loud) through it. So, having lots of experience coding may give you some advantage, but not a huge one. I found that my years of working on hard math problems, plus a few months of computer science cramming, was a surprisingly good preparation. (Standard Disclaimer: The opinions expressed here are my own, and have not been reviewed or approved by my employer.) The process is far from perfect, and has its own biases, but it minimizes the impact of one’s background and experience. And while not all employers have this type of interview process (though many software companies do), many are willing to overlook lack of experience for the sake of potential.

There are also some things you can do to get more direct experience to put on your resume. Many companies are starting to offer internships, even for PhD students (including Google). You can get programming experience by contributing to an open source project. To show off your data science skills, you can compete in a Kaggle competition. (Those are the ones I know of – if you know of any that I missed, leave a comment below!)

OK, so once you’ve worked out all your non-academic skills, and written your resume accordingly, the final step is to figure out how you will avoid setting off the red flags that some employers watch for in applicants coming from academia. In general, the folks who review your resume will have no doubt that you’re smart based on your math background, but they will also be acutely aware that there’s such a thing as being too smart for your (or their) own good. Many employers feel that hiring a “bad” candidate is much more costly than passing on a half dozen “good” ones, so they spend a lot of energy looking for red flags. When considering an academic, there are certain red flags they may think they see even if you did nothing to indicate them. So, it’s not enough to avoid the red flags – you need to actively provide a counter-narrative.

Here are the things I know of that employers may be expecting you to say, and that they may hear even if you don’t say them: (Did I miss any?)

- I couldn’t make it as an academic, so I’ll settle for a private sector job, but I don’t have to like it.
- I just want to think about fun, abstract problems, whether or not they’re useful.
- This job is going to be much easier than being a professor, so I won’t have to work very hard.
- I’m clearly smarter than everyone who currently works for you so I’ll just tell them all what to do.

Now, I know you wouldn’t actually think, let alone say, any of these things. (You wouldn’t, right?) But better than not saying them is to say things that completely refute them (and you have to mean it when you say them!). Have a solid explanation for why you want to leave academia, one which focuses on the positive aspects of the private sector. (See my previous post.) Talk about how you want to work on things that have a real-world impact, how you’re looking forward to the challenge of adapting to a completely different environment, and how you’ll enjoy being a member of a team and learning from your much more experienced colleagues.

In the end, the process of getting a non-academic job can be long and complex. At the beginning, it feels completely hopeless, but the more you learn and the more non-academics you talk to, the better it gets. And here’s the kicker: There are a lot of jobs out there where the supply and demand dynamics are completely the opposite of academia – where employers are desperately seeking qualified applicants. Once you find your way there, and see what it’s like applying for a job where you’re NOT one of 500 applicants for a single position, it’s completely worth it.


N. Dunfield, A knot without a nonorientable essential spanning surface, arXiv:1509.06653

The Neuwirth Conjecture, posed by Neuwirth in 1963, asks roughly whether every knot can be embedded in a surface in a way analogous to how a torus knot embeds in an unknotted torus. A weaker version, the “Weak Neuwirth Conjecture”, asks whether the knot group of any non-trivial knot in the 3-sphere can be expressed as a free product of free groups amalgamated along some subgroup. This was proven by Culler and Shalen in 1984, but their result says nothing about the ranks of the free groups involved. The Neuwirth Conjecture would determine these ranks in terms of the genus of the surface. Thus, the Neuwirth Conjecture is an important conjecture for the structure theory of knot groups.

The Neuwirth Conjecture has been proven for many classes of knots, all via basically the same construction using a nonorientable essential spanning surface. The “Strong Neuwirth Conjecture” of Ozawa and Rubinstein asserts that this construction is always applicable because such a surface always exists.

Dunfield’s counterexample, verified using SnapPea, shows that a different technique will be needed to prove the Neuwirth Conjecture. Neuwirth’s Conjecture has just become even more alluring and interesting!


Theo Johnson-Freyd, Heisenberg-picture quantum field theory, arXiv:1508.05908

It argues for a different category-theoretical formalism for TQFT than the ‘Schrödinger-picture’ Atiyah-Segal-type axiomatization that we are used to. The ‘Heisenberg-picture’ functor it proposes has as its target a category whose top level consists of pointed vector spaces instead of numbers, and whose second-to-top level consists of associative algebras instead of vector spaces. The preprint argues that this formalism is better physically motivated, and one might dream that it is better suited to analyzing “semiclassical limit” conjectures such as the AJ conjecture and its variants.

I’m very happy to see this sort of playing-around with the foundations of TQFT, which I am quite prepared to believe are too rigid. I expect there should be a useful Dirac picture as well, along with other alternative axiomatizations. Let’s see where this all leads!
