Posted on 1st June 2021
I attended the BMC in Glasgow (sadly remotely: I actually had a hotel booked for the original 2020 date, before Covid-19 overwhelmed the UK) and, amongst the plenary talks, I perhaps most enjoyed the one by Kevin Buzzard, which is now available online. It is a variation on a talk I'd seen before, and Prof Buzzard says a number of things which resonated with me.
I have long slightly despaired at the publishing system in Mathematics (or academia more generally), especially around "accuracy". We like to think of mathematical proofs as watertight, correct statements, but much of the published literature is full of mistakes, ranging from minor typos and misleading statements, through correct results with wrong proofs, to outright incorrect results. Buzzard points out (see the 23:00 mark) the case of two papers in the Annals which are mutually contradictory, with neither having any sort of (in)formal erratum or retraction. Now, the talk slightly overstates the case: as a cursory glance at the (now completely open) zbMATH pages will show, this is not a "secret". But still, if perhaps the most prestigious journal in Mathematics features this, it should come as no surprise what lurks in lesser publications.
Buzzard is interested in computer-assisted/verified proofs, and talks about his "mid-life crisis" (see 25:00), and how he got over it. He seems (if I can paraphrase) to say that he eventually reassured himself with two points:
it's always been like this. Eventually things get cleaned up, proofs simplified, mistakes noticed;
the "experts" generally know what is correct.
I find this view a bit disturbing. Very little credit (maybe the word "kudos" is better; see The Algebraist) seems to be given to researchers who spend time writing survey articles or books. Think how much credit we give to being the "first" to prove something, and how a "new proof of a known result" is something almost to apologise for. In almost the same sentence as Buzzard expounds about experts, he also admits that he himself is no longer an expert (in the sense he is speaking of). To relegate the issue of "correctness" to a few mathematicians who are at the very top of their game, and necessarily geographically and intellectually concentrated, seems to ignore a huge swathe of mathematics. That is, if you work in a fashionable area at a Golden Triangle university, maybe this does suffice; but it would cover almost none of the areas in which I could consider myself research active, and almost none of the hundreds of new articles I glance at each year.
This is all hard. I think we should, collectively, be more humble about mistakes, be more willing to issue corrections, and take time to try to write clearly. I definitely think more kudos should be given to those who do the difficult work of trying to synthesise existing work into a more coherent whole. I have for a while been concerned by what you might call "Mathematical Technical Debt": cutting-edge mathematics is carried out by writing short, technical research papers which, for reasons of space, or a (misguided, in my opinion) feeling that too-obvious remarks make the work less "impressive", are often lacking in motivation, and are written to communicate (only) to other experts (that word again). This can put up a huge barrier to entry: for example, an important role of a PhD supervisor is to curate the literature for the student. It makes me worry that whole areas of mathematics are slowly dying as, literally, the only people who really understand them retire, or move on to other topics.
Where I do completely agree with Buzzard (29:30 mark) is on the split between "beautiful ideas" and "fiddly technical details". I too find the "fiddly details" really interesting (I am, after all, an analyst). Two comments:
Buzzard seems almost to apologise for "reading and checking every lemma". But in my own work, and in working with collaborators, I have too often had the experience of absolutely believing that I (or we) had proved something, only to find a "small gap" which completely resists being filled. I've seen whole papers retracted on the arXiv due to a single lemma being wrong. Surely everyone has had the experience of finding that small gap, working frantically for a week, and managing to rescue the overall proof. But we delude ourselves if we think that will always happen, and human nature means that the author of a paper is far more likely to overlook a small gap than a less-invested referee. Referees should check details!
The idea of some people "having the ideas" and leaving it to "others" to fill in the gaps is maybe less problematic than Buzzard seems to suggest (I'm not sure whether Buzzard is expressing his own ethical worries here, or reporting on others'). However, again the issue of "kudos" comes up: adding those annoying little details to "someone else's work" is just not seen as important by too many mathematicians.
The rest of Kevin's talk is really interesting, and well worth a watch. I suspect it might be some time before I can teach my undergraduates to use computer proof systems, however... (More seriously, while I strongly believe there is a correlation between students who are good at abstract proof and students who are good at programming, I am not convinced (and here I claim no originality) of any causation: my own interest in programming and the way I think about proof, abstraction, and problem solving are deeply intertwined, and I suspect there is no easy way to teach proof through programming.)
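For readers who haven't met a proof assistant: below is a minimal sketch, in Lean 4 (the system Buzzard works with), of the sort of thing such a system checks. It proves commutativity of addition on the naturals by induction, from (almost) first principles. Treat it as illustrative rather than definitive: in practice one would simply invoke the library lemma Nat.add_comm, and I am assuming the core lemma Nat.add_succ is available under that name.

```lean
-- A toy example of machine-checked mathematics: addition on Nat is
-- commutative, proved by induction on the second argument.
-- (Lean 4, core library only; Mathlib users would write `Nat.add_comm`.)

-- n + 0 = n holds by definition, since `+` recurses on its second argument.
theorem add_zero' (n : Nat) : n + 0 = n := rfl

-- 0 + n = n, by contrast, genuinely needs induction.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ n ih => rw [Nat.add_succ, ih]

theorem succ_add' (m n : Nat) : Nat.succ m + n = Nat.succ (m + n) := by
  induction n with
  | zero => rfl
  | succ n ih => rw [Nat.add_succ, ih, Nat.add_succ]

-- The "fiddly" auxiliary lemmas above are exactly what make this go through.
theorem add_comm' (m n : Nat) : m + n = n + m := by
  induction n with
  | zero => rw [add_zero', zero_add']
  | succ n ih => rw [Nat.add_succ, ih, succ_add']
```

Even in this toy, note how much of the work is in the auxiliary lemmas: the gap between "obviously true" and "accepted by the machine" is, I think, precisely the gap between "beautiful ideas" and "fiddly technical details" which Buzzard describes.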