Suppose that while writing software, or doing research, or whatever, you need to choose one of several mutually exclusive options. The basis-point gap between how often you pick the platonically correct option and how often an AI model does compounds multiplicatively, because any project of sufficient complexity contains many such decisions. Picking the correct option 80% of the time might work in the short term, but errors accumulate fast: at 80% per decision, ten independent decisions all come out right only about 11% of the time. And the closer you get to 100%, the more each basis point is worth, because what matters is how quickly the residual error rate shrinks: going from 90% to 91% cuts errors from 10% to 9% of decisions, a larger relative reduction than going from 80% to 81%. So increasing correctness by 100 basis points is more valuable when your baseline is 90% than when it is 80%.
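A minimal sketch of the compounding argument, under a simplifying assumption not stated in the text: decisions are independent and equally weighted, each correct with probability p. The function names here are illustrative, not from any library.

```python
def p_all_correct(p: float, n: int) -> float:
    """Probability that all n independent decisions are correct."""
    return p ** n

def expected_streak(p: float) -> float:
    """Expected number of correct decisions before the first error
    (geometric distribution: p / (1 - p))."""
    return p / (1 - p)

# 80% per-decision accuracy collapses over many decisions:
print(p_all_correct(0.80, 10))  # ~0.107
print(p_all_correct(0.90, 10))  # ~0.349

# A 100-basis-point gain buys a longer error-free run at a higher baseline:
print(expected_streak(0.81) - expected_streak(0.80))  # ~0.26 extra decisions
print(expected_streak(0.91) - expected_streak(0.90))  # ~1.11 extra decisions
```

The streak-length framing makes the nonlinearity explicit: expected run length is p/(1-p), which grows hyperbolically as p approaches 1, so the same basis-point gain is worth far more near the top.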
Practically, this means that even if human experts retain only a few basis points of advantage over AI models, humans may contribute more value in the limit than a naive per-decision comparison would predict.