In an earlier post (March 20), I discussed Moore’s Law and its relation to the history of computing. Once again I feel compelled to return to the topic—this time, to discuss its impact, not on computer science and technology, but on its historians. Put simply, historians of technology, including me, find Moore’s Law unnerving. The existence of an exponential growth curve that has held nearly constant since the 1960s goes against a basic tenet of the history of technology, namely that technical change (call it “progress” if you will) is not an autonomous force of nature, but rather “…a contingent construction shaped by political forces…” in the words of Tom Misa, Director of the Charles Babbage Institute at the University of Minnesota. Consider all that has happened in world politics, culture, and science since 1960. Consider the cultural upheavals in the United States alone during the single decade of the 1960s. None of these events left a visible mark on the curve of chip density that Gordon Moore first noticed midway through that decade. What, then, drives chip density, and, by implication, all that has resulted in computing from the steady exponential increase in storage capacity and processor speeds over nearly half a century?
Tom wrote those words in response to an article I had written, in which I suggested that perhaps the emperor of Social Construction has no clothes. I don’t actually believe that; I was being provocative. Perhaps in partial response, the Babbage Institute has begun a study of Moore’s Law, which, I hope, will shed light on this question. I hope they invite me. But if they don't, I'll understand.