A resource overhang is a development jolt waiting to happen. Eliezer Yudkowsky on hard AI takeoff (from December 2008):
… hominid brain size increased by a factor of five over the course of around five million years. You might want to think very seriously about the contrast between that idiom, and a successful AI being able to expand onto five thousand times as much hardware over the course of five minutes — when you are pondering possible hard takeoffs, and whether the AI trajectory ought to look similar to human experience.
A subtler sort of hardware overhang, I suspect, is represented by modern CPUs having a 2 GHz serial speed, in contrast to neurons that spike 100 times per second on a good day. The “hundred-step rule” in computational neuroscience is a rule of thumb that any postulated neural algorithm which runs in realtime has to perform its job in fewer than 100 serial steps one after the other. We do not understand how to efficiently use the computer hardware we have now to do intelligent thinking. But the much-vaunted “massive parallelism” of the human brain is, I suspect, mostly cache lookups to make up for the sheer awkwardness of the brain’s serial slowness — if your computer ran at 200 Hz, you’d have to resort to all sorts of absurdly massive parallelism to get anything done in realtime. I suspect that, if correctly designed, a midsize computer cluster would be able to get high-grade thinking done at a serial speed much faster than human, even if the total parallel computing power was less.
So that’s another kind of overhang: because our computing hardware has run so far ahead of AI theory, we have incredibly fast computers we don’t know how to use for thinking; getting AI right could produce a huge, discontinuous jolt, as the speed of high-grade thought on this planet suddenly dropped into computer time.
A still subtler kind of overhang would be represented by human failure to use our gathered experimental data efficiently.
(via)
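The numbers in the quote are easy to put side by side. A quick Python sketch, using only the figures Yudkowsky cites (the growth-rate comparison is my framing, not his):

```python
import math

# Back-of-the-envelope numbers from the quoted passage. The figures are
# Yudkowsky's; the rate comparison below is illustrative, not his.

# Hominid brain growth vs. the hypothetical AI hardware expansion.
brain_factor = 5                          # brain size grew ~5x...
brain_seconds = 5e6 * 365.25 * 24 * 3600  # ...over ~5 million years
ai_factor = 5000                          # AI expands onto 5000x the hardware...
ai_seconds = 5 * 60                       # ...over 5 minutes

# Compare exponential growth rates (log-growth per second).
brain_rate = math.log(brain_factor) / brain_seconds
ai_rate = math.log(ai_factor) / ai_seconds
print(f"AI expansion outpaces hominid brain growth by ~{ai_rate / brain_rate:.1e}x")

# Serial-speed contrast: ~2 GHz CPU clock vs. ~100 Hz neuron spiking.
cpu_hz, neuron_hz = 2e9, 100
print(f"CPU/neuron serial-speed ratio: {cpu_hz / neuron_hz:.0e}")  # ~2e7

# The hundred-step rule: a realtime neural algorithm gets ~100 serial
# steps, while a 2 GHz core gets ~2e9 serial steps in the same second.
print(f"Serial steps per second of realtime thought: brain ~100, CPU ~{cpu_hz:.0e}")
```

Even if every figure here is off by an order of magnitude, the conclusion barely moves: a single modern core has roughly seven orders of magnitude more serial steps per second than a neuron, which is the overhang being described.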