There are a lot of problems in software that aren’t solved well in a ubiquitous product (think PIMs, Personal Information Managers; they all suck royally despite everybody’s best efforts, and the OSAF Chandler project has taken years trying to redesign the very concept, with little to show for it to date). But there are precious few problems that haven’t been solved at all.
In fact, a ton of things that are being held up these days as “innovations” are rehashes of old concepts from the 90s, or the 80s, or sometimes even the 70s. Today this came into sharp focus when I saw this bit from a circa 1999 document on the Semantic Web by the Right Reverend Tim Berners-Lee. Here he asks, then answers, a question:
<blockquote> Surely all first-order or higher-order predicate calculus based systems (such as KIF) have failed historically to have wide impact?
The same was true of hypertext systems between 1970 and 1990, ie before the Web. Indeed, the same objection was raised to the Web, and the same reasons apply for pressing on with the dream. </blockquote>
Then, while searching for some theory on append-only databases (such as would be used in a revision-control system), I came across this 1994 piece on “collaborative filtering.” That report in turn points to earlier work on “Information Tapestry” from the early 1990s.
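The core idea in those early collaborative-filtering papers — predict what a user will like from the ratings of users who resemble them — is simple enough to sketch in a few lines. Here's a minimal user-based version using cosine similarity; the users, items, and ratings are made up for illustration, and this is not how Tapestry itself worked (Tapestry used hand-written filter queries over annotations).

```python
from math import sqrt

def cosine_sim(u, v):
    """Cosine similarity over the items two users have both rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    norm_u = sqrt(sum(u[i] ** 2 for i in common))
    norm_v = sqrt(sum(v[i] ** 2 for i in common))
    return dot / (norm_u * norm_v)

def predict(ratings, user, item):
    """Predict `user`'s rating for `item` as a similarity-weighted
    average of the ratings from everyone else who rated it."""
    num = den = 0.0
    for other, their_ratings in ratings.items():
        if other == user or item not in their_ratings:
            continue
        s = cosine_sim(ratings[user], their_ratings)
        num += s * their_ratings[item]
        den += s
    return num / den if den else None

# Toy data -- entirely hypothetical.
ratings = {
    "alice": {"emacs": 5, "vi": 3, "pine": 4},
    "bob":   {"emacs": 5, "vi": 3, "pine": 4, "mutt": 2},
    "carol": {"emacs": 1, "vi": 5, "mutt": 5},
}
```

Calling `predict(ratings, "alice", "mutt")` blends bob's and carol's ratings of `mutt`, weighted by how closely each one's tastes track alice's — which is, at bottom, what Digg and Delicious were doing a decade and a half later.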
So: 1970-1990, hypertext exists and is studied in CS departments. 1995, Netscape IPO. Early 1990s, collaborative filtering exists and is studied in research labs. 2006-2007, rise of Digg and Delicious.
I think there’s a very strong case to be made that VCs should stop looking for “innovation,” per se, and start looking for 10-to-20-year-old CS master’s theses that touch on an emerging market space…