Theoretical computer science has provided some examples of "the price of abstraction." The two most prominent are for Gaussian elimination and sorting. Namely:
- It is known that Gaussian elimination is optimal for, say, computing the determinant if you restrict operations to rows and columns as a whole¹. Obviously Strassen's algorithm does not obey that restriction, and it is asymptotically better than Gaussian elimination.
- In sorting, if you treat the elements of the list as black boxes that can only be compared and moved around, then we have the standard $\Omega(n \log n)$ information-theoretic lower bound. Yet fusion trees beat this bound by, as far as I understand it, clever use of multiplication.
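To make the first point concrete, here is a minimal sketch of the $2 \times 2$ base case of Strassen's algorithm: seven scalar multiplications instead of the eight used by the definition. Note that products such as $(a+d)(e+h)$ mix entries from different rows and columns, which a model restricted to whole-row and whole-column operations cannot express; applied recursively to block matrices, this yields the asymptotic improvement.

```python
def strassen_2x2(A, B):
    """Multiply 2x2 matrices (nested lists) using 7 multiplications.

    The seven products mix entries across rows and columns, stepping
    outside the row/column-operation abstraction.
    """
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```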
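Fusion trees themselves are intricate, but the way an integer model escapes the comparison abstraction can be illustrated with a much simpler stand-in (not fusion trees): counting sort uses keys as array indices, an operation the comparison model forbids, and sorts bounded integers in linear time with zero element-to-element comparisons.

```python
def counting_sort(xs, key_bound):
    """Sort integers drawn from range(key_bound) in O(n + key_bound) time.

    No element is ever compared to another: each key is used directly
    as an index into the counts array, which the black-box comparison
    model does not permit.
    """
    counts = [0] * key_bound
    for x in xs:
        counts[x] += 1
    out = []
    for value, count in enumerate(counts):
        out.extend([value] * count)
    return out
```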
Are there other examples of the price of abstraction?
To be a bit more formal, I'm looking for examples where a lower bound is known unconditionally in some weak model of computation, but is known to be violated in a stronger model. Furthermore, the weakness of the weak model should come in the form of an abstraction, which admittedly is a subjective notion. For example, I do not consider the restriction to monotone circuits to be an abstraction. Hopefully the two examples above make clear what I'm looking for.
¹ Klyuyev, V. V., and N. I. Kokovkin-Shcherbak: On the minimization of the number of arithmetic operations for the solution of linear algebraic systems of equations. Translation by G. I. Tee: Technical Report CS 24, June 14, 1965, Computer Science Dept., Stanford University. Available online.