Let’s say we’re writing a web app. We want the app to be as fast as possible for end users, so we write benchmarks for common tasks within the app and run them against the browsers we expect our audience to be using. Over time the app becomes faster because of gradual improvements to its code based on feedback from the benchmarks.
A different team of engineers works on a web browser. To make the browser as fast as possible for end users, they benchmark it against real web apps. Let’s say that the browser engineers decide to benchmark against the app we wrote earlier. This team implements optimizations tailored to how our web app works, improving the benchmark scores.
It might seem like this iterative approach is beneficial: eventually we arrive at an equilibrium where any change to the browser or the app would reduce performance. But reaching that equilibrium doesn’t mean we’re as fast as possible.
Independent groups working like this over time are performing a “hill climbing” optimization. In the short term, they find a local maximum for performance. In the long term, each group will likely miss opportunities for improvement.
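To see why hill climbing stalls, here’s a minimal sketch in TypeScript. The `score` function is entirely made up, standing in for real benchmark results; the point is only that a search which accepts nothing but improvements settles on the nearest peak, even when a higher one exists elsewhere.

```ts
// A toy "benchmark score" with two peaks: a local maximum near x = 2
// and a higher, global maximum near x = 8. (Purely hypothetical.)
function score(x: number): number {
  return 3 * Math.exp(-((x - 2) ** 2)) + 5 * Math.exp(-((x - 8) ** 2));
}

// Greedy hill climbing: try a small step in each direction and keep
// whichever one improves the score; stop when no step helps.
function hillClimb(start: number, step = 0.1): number {
  let x = start;
  while (true) {
    const candidates = [x - step, x + step];
    const better = candidates.find((c) => score(c) > score(x));
    if (better === undefined) return x; // no improving move: we're on a peak
    x = better;
  }
}

// Starting near the small peak, the search converges to x ≈ 2 (score ≈ 3)
// and never discovers the higher peak at x ≈ 8 (score ≈ 5).
console.log(hillClimb(0));        // ≈ 2
console.log(score(hillClimb(0))); // ≈ 3
```

In the browser-and-app analogy, each incremental change that survives the benchmark is one accepted step; neither team ever takes the temporarily worse step, such as a redesign, that would be needed to reach a higher peak.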
This web app example is a gross oversimplification. App engineers benchmark different approaches against framework APIs; hardware engineers base their designs around the capabilities of popular compilers; compiler engineers write optimizations for popular patterns within a language. Feedback loops form at each level of the increasingly complex software toolchain.
To avoid getting stuck at a local maximum, we need to periodically look across boundaries. This means looking both below familiar abstractions and above them, to see what others are building on top of our work.