What past tech revolutions teach us about AI

Those of us who have lived through the rise of transformational technologies have a strong sense of déjà vu right now. If you are old enough to remember the early days of the internet or of desktop computing, you will recall that when those technologies began to diffuse widely, it was obvious to careful observers that something big was happening. It wasn't just hype. What was not obvious, and what almost no one got right at the time, was exactly how and when they would transform our economy and our way of life. That made those times both heady and uncertain.

Artificial Intelligence (AI) feels like exactly such a moment. Many early adopters, including me and many of my fellow economists, see huge benefits from using it in our own work. We sense an exciting possibility: AI may be a general-purpose technology, one so widely applicable, and with such large productivity benefits, that it changes the way everyone does business. Earlier general-purpose technologies include the telephone, electricity, the assembly line, the internal combustion engine, desktop computing and the internet.

Unfortunately, history teaches us that the early years of a new general-purpose technology are fraught with uncertainty. Inventing a standout technology is one thing; discovering and developing the full range of its applications takes many years. We are still reaping the benefits of the computer revolution, and it took most of the 1980s and 1990s for its effects to become truly widespread. Nearly 40 years elapsed between the commercialization of electricity and the wide adoption of the electric motor. With AI, we are still right at the beginning.

There is a wide range of predictions about where AI is taking us: from sweeping productivity gains and mass job displacement to claims that it's mostly hype. So far, I think we can reject the claim that it's hype. There are certainly good reasons to look for productivity gains and to expect employment effects. How quickly they will come and how widespread they will be remain unclear.

The history of general-purpose technologies also teaches us that enthusiasm overshoots reality. Major technological shifts attract capital, talent and speculation. Overinvestment is not a bug; it is a recurring feature, as entrepreneurs vie to explore what is possible and hit on a big idea. Railroads in the 19th century, electrification, the dot-com boom and now AI all show a similar pattern.

This matters for businesses and investors today. There will be winners and losers as the potential of AI is explored. Some firms will spend heavily on AI initiatives that never pay off. Others will adopt cautiously and miss opportunities. This is creative destruction in real time. Knowing this doesn't allow us to pick winners with confidence, but it should make us skeptical of claims that every AI investment is obviously smart or that transformation will be immediate.

The right stance toward AI, then, is neither hype nor dismissal. It is to accept uncertainty. We can be confident that something significant is underway while admitting we don’t know exactly where the pitfalls and breakthroughs will be.

For business leaders, that means experimenting seriously but selectively, watching carefully where value materializes and where it does not, and understanding the different risks of moving too early and of moving too late. ●

David Clingingsmith is associate professor of economics at the Case Western Reserve University Weatherhead School of Management.
