For those of us in the prediction business, it's useful to revisit past predictions and look for patterns in what they got right and wrong.
Back in the mid-90s, a lot of people thought the Internet was overhyped. Here's one example, from a 1995 Newsweek column:
Do our computer pundits lack all common sense? The truth is no online database will replace your daily newspaper, no CD-ROM can take the place of a competent teacher and no computer network will change the way government works…. What the Internet hucksters won't tell you is that the Internet is one big ocean of unedited data, without any pretense of completeness. Lacking editors, reviewers or critics, the Internet has become a wasteland of unfiltered data.
Today, it's easy to find people expressing similar skepticism about emerging technologies: the Internet of Things, robotics, 3D printing, Bitcoin, and so on.
What the skeptics overlook is that platforms open to third-party developers share a characteristic: it's hard to think of their important use cases before they're built, and just as hard to find a platform whose important use cases didn't eventually emerge after it was built.
Just look at the founding years of top websites. Google: 1998. Wikipedia: 2001. YouTube: 2005. Twitter: 2006. No wonder it was so hard to imagine these services early on. It took years to imagine them even after the Internet had gone mainstream.