Futurist Prediction Accuracy

I’ve been reading a lot of predictions from people who are looking to understand what problems humanity will face 10-50 years out (and sometimes longer) in order to work in areas that will be instrumental for the future, and I’ve been wondering how accurate these predictions of the future are. The timeframe of predictions that are so far out means that only a tiny fraction of people making those kinds of predictions today have a track record, so, if we want to evaluate which predictions are plausible, we need to look at something other than track record.

The idea behind the approach of this post was to look at predictions from an independently chosen set of predictors (Wikipedia’s list of well-known futurists) whose predictions are old enough to evaluate in order to understand which prediction techniques worked and which ones didn’t work, allowing us to then (mostly in a future post) evaluate the plausibility of predictions that use similar methodologies.

Unfortunately, every single predictor from the independently chosen set had a poor record and, on spot checking some predictions from other futurists, it appears that futurists often have a fairly poor track record of predictions. So, in order to contrast techniques that worked with techniques that didn’t, I sourced predictors that have a decent track record from my memory, a non-independent source which introduces quite a few potential biases.

Something that gives me more confidence than I’d otherwise have is that I avoided reading independent evaluations of prediction methodologies until after I did the evaluations for this post and wrote 98% of the post and, on reading other people’s evaluations, I found that I generally agreed with Tetlock’s “Superforecasting” on what worked and what didn’t work despite using a wildly different data set.

In particular, people who were into “big ideas”, who used a few big hammers on every prediction combined with a cocktail party idea level of understanding of the particular subject to explain why a prediction about the subject would fall to the big hammer, generally fared poorly, whether or not their favored big ideas were correct. Some examples of “big ideas” would be “environmental doomsday is coming and hyperconservation will pervade everything”, “economic growth will create near-infinite wealth (soon)”, “Moore’s law is supremely important”, “quantum mechanics is supremely important”, etc. Another common trait of poor predictors is the lack of anything resembling serious evaluation of past predictive errors, making improving their intuition or methods impossible (unless they do so in secret). Instead, poor predictors often pick a few predictions that were accurate or at least vaguely sounded similar to an accurate prediction and use those to sell their next generation of predictions to others.

By contrast, people who had (relatively) accurate predictions had a deep understanding of the problem and also tended to have a record of learning lessons from past predictive errors. Due to the differences in the data sets between this post and Tetlock’s work, the details are quite different here. The predictors that I found to be relatively accurate had deep domain knowledge and, implicitly, had access to a huge amount of information that they filtered effectively in order to make good predictions. Tetlock was studying people who made predictions about a wide variety of areas that were, in general, outside of their areas of expertise, and what he found was that the accurate forecasters really dug into the data and deeply understood its limitations, which allowed them to make relatively accurate predictions. But, although the details of how people operated are different, at a high level, the approach of really digging into specific knowledge was the same.

Because this post is so long, it opens with a very short summary of each predictor, followed by a moderately long summary of each predictor. Then we’ll have a summary of what techniques and styles worked and what didn’t work, with the full details of the prediction grading and comparisons to other evaluations of predictors in the appendix.

  • Ray Kurzweil: 7% accuracy
    • Relies on: exponential or super exponential progress that is happening must continue; predicting the future based on past trends continuing; optimistic “rounding up” of facts and interpretations of data; panacea thinking about technologies and computers; cocktail party ideas on topics being predicted
  • Jacque Fresco: predictions mostly too far into the future to judge, but seems very low for judgeable predictions
    • Relies on: panacea thinking about human nature, the scientific method, and computers; certainty that human values match Fresco’s values
  • Buckminster Fuller: too few predictions to rate, but seems very low for judgeable predictions
    • Relies on: cocktail party ideas on topics being predicted to an extent that’s extreme even for a futurist
  • Michio Kaku: 3% accuracy
    • Relies on: panacea thinking about “quantum”, computers, and biotech; exponential progress of those
  • John Naisbitt: predictions too vague to score; mixed results in terms of big-picture accuracy, probably better than any futurist here other than Dixon, but this is not comparable to the percentages given for other predictors
    • Relies on: trend prediction based on analysis of newspapers
  • Gerard K. O’Neill: predictions mostly too far into the future to judge, but seems very low for judgeable predictions
    • Relies on: doing the opposite of what other futurists had done incorrectly, could be described as “trying to buy low and sell high” based on looking at prices that had gone up a lot recently; optimistic “rounding up” of facts and interpretations of data in areas O’Neill views as underrated; cocktail party ideas on topics being predicted
  • Patrick Dixon: 10% accuracy; also much better at “big picture” predictions than any other futurist here (but not in the same league as non-futurist predictors such as Yegge, Gates, etc.)
    • Relies on: extrapolating existing trends (but with much less optimistic “rounding up” than almost any other futurist here); exponential progress; stark divide between “second millennial thinking” and “third millennial thinking”
  • Alvin Toffler: predictions mostly too vague to score; of non-vague predictions, Toffler had an incredible knack for naming a trend as very important and likely to continue right when it was about to stop
    • Relies on: exponential progress that is happening must continue; a medley of cocktail party ideas inspired by speculation about what exponential progress will bring
  • Steve Yegge: 50% accuracy; general vision of the future generally quite accurate
    • Relies on: deep domain knowledge, font of information flowing into Amazon and Google; looking at what’s trending
  • Bryan Caplan: 100% accuracy
    • Relies on: taking the “other side” of bad bets/predictions people make and mostly relying on making very conservative predictions
  • Bill Gates / Nathan Myhrvold / old MS leadership: timeframe of predictions too vague to score, but uncanny accuracy on a vision of the future as well as the relative importance of various technologies
    • Relies on: deep domain knowledge, discussions between many people with deep domain knowledge, font of information flowing into Microsoft

Ray Kurzweil

Ray Kurzweil has claimed an 86% accuracy rate on his predictions, a claim which is often repeated, such as by Peter Diamandis, who writes:

Of the 147 predictions that Kurzweil has made since the 1990’s, fully 115 of them have turned out to be correct, and another 12 have turned out to be “essentially correct” (off by a year or two), giving his predictions a stunning 86% accuracy rate.

The article, titled “A Google Exec Just Claimed The Singularity Will Happen by 2029”, opens with “Ray Kurzweil, Google’s Director of Engineering, is a well-known futurist with a high-hitting track record for accurate predictions.” and cites this list of predictions on Wikipedia. 86% is an astoundingly good track record for non-obvious, major predictions about the future. This claim seems to be the source of other people claiming that Kurzweil has a high accuracy rate, such as here and here. I checked the accuracy rate of the Wikipedia list Diamandis cited myself (using archive.org to get the list from when his article was published) and found a somewhat lower accuracy of 7%.

Fundamentally, the thing that derailed so many of Kurzweil’s predictions is that he relied on the idea of exponential and accelerating growth in basically every area he could imagine, and even in a number of areas that have had major growth, the growth didn’t keep pace with his expectations. His basic thesis is that not only do we have exponential growth due to progress (improved technology, etc.), but improvement in technology feeds back into itself, causing an increase in the rate of exponential growth, so we have double exponential growth (as in e^x^x, not 2*e^x) in many important areas, such as computer performance. He repeatedly talks about this unstoppable exponential or super exponential growth, e.g., in his 1990 book, The Age of Intelligent Machines, he says “One reliable prediction we can make about the future is that the pace of change will continue to accelerate” and he discusses this again in his 1999 book, The Age of Spiritual Machines; his 2001 essay on accelerating technological growth, titled “The Law of Accelerating Returns”; his 2005 book, The Singularity is Near; etc.
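To make the single/double exponential distinction concrete, here’s a toy comparison. The rate constants below are invented for illustration and aren’t taken from Kurzweil’s writing; the point is only the qualitative shape of the two curves:

```python
import math

# Toy comparison of ordinary exponential growth vs. "double exponential"
# growth, where the growth rate itself grows exponentially over time.
# The rate constants (0.35, 0.12) are arbitrary illustrative choices.
def single_exponential(t, r=0.35):
    return math.exp(r * t)

def double_exponential(t, r=0.12):
    return math.exp(math.exp(r * t))

for t in (0, 10, 20, 30):
    print(t, single_exponential(t), double_exponential(t))
```

Even with a much smaller inner rate, the double exponential eventually dwarfs any single exponential, which is why the difference between the two matters so much for long-range extrapolation: a modeling choice that’s nearly invisible over a decade dominates the prediction over several decades.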

One thing that’s notable is despite the vast majority of his falsifiable predictions from earlier work being false, Kurzweil continues to use the same methodology to generate new predictions each time, which is reminiscent of Andrew Gelman’s discussion of forecasters who repeatedly forecast the same thing over and over again in the face of evidence that their old forecasts were wrong. For example, in his 2005 The Singularity is Near, Kurzweil notes the existence of “S-curves”, where growth from any particular “thing” isn’t necessarily exponential, but, as he did in 1990, concludes that exponential growth will continue because some new technology will inevitably be invented which will cause exponential growth to continue and that “The law of accelerating returns applies to all of technology, indeed to any evolutionary process. It can be charted with remarkable precision in information-based technologies because we have well-defined indexes (for example, calculations per second per dollar, or calculations per second per gram) to measure them”.

In 2001, he uses this method to plot a graph and then predicts unbounded life expectancy by 2011 (the quote below isn’t unambiguous on life expectancy being unbounded, but it’s unambiguous if you read the entire essay or his clarification on his life expectancy predictions, where he says “I don’t mean life expectancy based on your birthdate, but rather your remaining life expectancy”):

Most of you (again I’m using the plural form of the word) are likely to be around to see the Singularity. The expanding human life span is another one of those exponential trends. In the eighteenth century, we added a few days every year to human longevity; during the nineteenth century we added a couple of weeks each year; and now we’re adding almost a half a year every year. With the revolutions in genomics, proteomics, rational drug design, therapeutic cloning of our own organs and tissues, and related developments in bio-information sciences, we will be adding more than a year every year within ten years.

Kurzweil pushes the date this is expected to happen back by more than one year per year (the last citation I saw on this was a 2016 prediction that we would have unbounded life expectancy by 2029), which is characteristic of many of Kurzweil’s predictions.

Quite a few people have said that Kurzweil’s methodology is absurd because exponential growth can’t continue indefinitely in the real world, but Kurzweil explains why he believes this is untrue in his 1990 book, The Age of Intelligent Machines:

A remarkable aspect of this new technology is that it uses almost no natural resources. Silicon chips use infinitesimal amounts of sand and other readily available materials. They use insignificant amounts of electricity. As computers grow smaller and smaller, the material resources utilized are becoming an inconsequential portion of their value. Indeed, software uses virtually no resources at all.

That we’re entering a world of natural resource abundance because resources and power are irrelevant to computers hasn’t been correct so far, but luckily for Kurzweil, many of the exponential and double exponential processes he predicted would continue indefinitely stopped long before natural resource limits would come into play, so this wasn’t a major reason Kurzweil’s predictions have been wrong, although it would be if his predictions were less inaccurate.

At a meta level, one issue with Kurzweil’s methodology is that he has a propensity to “round up” to make growth look faster than it is in order to fit the world to his model. For example, in “The Law of Accelerating Returns”, we noted that Kurzweil predicted unbounded lifespan by 2011 based on accelerating lifespan growth, claiming in 2001 that “now we’re adding almost a half a year every year”. However, life expectancy growth in the U.S. (which, based on his comments, seems to be most of what Kurzweil writes about) was only 0.2 years per year overall and 0.1 years per year in longer lived demographics, and worldwide life expectancy growth was 0.3 years per year. While it’s technically true that you can round 0.3 to 0.5 if you’re rounding to the nearest 0.5, that’s a very unreasonable thing to do when trying to guess when unbounded lifespan will happen because the high rate of worldwide increase in life expectancy was mostly coming from “catch up growth”, where there was a large reduction in things that caused “unnaturally” shortened lifespans.

If you want to predict what’s going to happen at the high end, it makes more sense to look at high-end lifespans, which were increasing much more slowly. Another way in which Kurzweil rounded up to get his optimistic prediction was to select a framing that made it look like we were seeing extremely rapid growth in life expectancies. But if we simply plot life expectancy over time since, say, 1950, we can see that growth is mostly linear-ish trending to sub-linear (and this is true even if we cut the graph off when Kurzweil was writing in 2001), with some super-linear periods that trend down to sub-linear. Kurzweil says he’s a fan of using indexes, etc., to look at growth curves, but in this case where he can easily do so, he instead chooses to pick some numbers out of the air because his “standard” methodology of looking at the growth curves results in a fairly boring prediction of lifespan growth slowing down, so there are three kinds of rounding up in play here (picking an unreasonably optimistic number, rounding up that number, and then selectively not plotting a bunch of points on the time series to paint the picture Kurzweil wants to present).
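To see how much the rounded-up starting number does for the prediction, here’s a small sketch. It grants Kurzweil a generous hypothetical assumption his thesis needs (that the yearly gain itself grows by a constant factor every year, here 1.1x, a number I made up) and asks when the gain would exceed one year per year:

```python
# How long until yearly life expectancy gains exceed one year per year,
# starting from different measured rates? The 1.1x-per-year acceleration
# is a hypothetical, generous assumption; 0.5 is Kurzweil's claimed
# starting rate, while 0.2-0.3 are the measured rates discussed above.
def years_until_escape(rate, annual_growth_factor):
    """Years until the yearly gain itself exceeds 1.0, assuming the gain
    grows by a fixed multiplicative factor each year."""
    years = 0
    while rate <= 1.0:
        rate *= annual_growth_factor
        years += 1
    return years

for start in (0.2, 0.3, 0.5):
    print(start, years_until_escape(start, 1.1))
```

Even under this generous model, starting from the measured 0.2 instead of the claimed 0.5 more than doubles the time to “escape velocity” (17 years vs. 8), so the choice of starting number does most of the work in the prediction.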

Kurzweil’s “rounding up” is also how he came up with the predictions that, among other things, computer performance/size/cost and economic growth would follow double exponential trajectories. For computer cost / transistor size, Kurzweil plotted, on a log scale, a number of points on the silicon scaling curve, plus one very old point from the pre-silicon days, when transistor size was on a different scaling curve. He then fits what appears to be a cubic to this and, since a cubic “wants to” either have high growth or high anti-growth in the future, and the pre-silicon point pulls the cubic fit very far down in the past, the cubic fit must “want to” go up in the future, and Kurzweil rounds up this cubic growth to exponential. This was also very weakly supported by the transistor scaling curve at the time Kurzweil was writing. As someone who was following ITRS roadmaps at the time, my recollection is that ITRS set a predicted Moore’s law scaling curve and semiconductor companies raced to beat the curve, briefly allowing what appeared to be super-exponential scaling since they would consistently beat the roadmap, which was indexed against Moore’s law. However, anyone who actually looked at the details of what was going on or talked to semiconductor engineers instead of just looking at the scaling curve would’ve known that people generally expected super-exponential scaling to be temporary and unsustainable, and that the end of Dennard scaling, as well as interconnect delay-dominated (as opposed to transistor delay-dominated) high-performance processors, were imminent, meaning that exponential scaling of transistor sizes would not lead to the historical computer performance gains that had previously accompanied transistor scaling; this expectation was so widespread that it was discussed in undergraduate classes at the time.
Anyone who spent even the briefest amount of time looking into semiconductor scaling would’ve known these things at the time Kurzweil was talking about how we were entering an era of double exponential scaling, and would’ve thought that we would be lucky to even have general single exponential scaling of computer performance. But since Kurzweil looks at the general shape of the curve and not the mechanism, none of this knowledge informed his predictions, and since Kurzweil rounds up the available evidence to support his ideas about accelerating acceleration of growth, he was able to find a selected set of data points that supported the curve fit he was looking for.
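The curve-fitting pitfall described above is easy to reproduce. Here’s a sketch with made-up numbers (log10 of something like calculations per second per dollar): the silicon-era points lie on a straight line on the log scale (plain exponential growth), while a single pre-silicon point sits far below that line’s backward extrapolation, which is enough to make a cubic fit predict super-exponential growth:

```python
import numpy as np

# Made-up data: silicon-era points on a straight line (on a log scale),
# plus one pre-silicon point from a different, slower scaling regime
# that sits far below the backward extrapolation of the silicon trend.
years = np.array([1900.0, 1970.0, 1985.0, 2000.0])
log_perf = np.array([-30.0, 0.0, 3.0, 6.0])

linear = np.polyfit(years[1:], log_perf[1:], 1)  # silicon era only
cubic = np.polyfit(years, log_perf, 3)           # old outlier included

# The ancient low point drags the cubic far down in the deep past, so the
# fit "wants to" curve upward in the future: extrapolated to 2020, the
# cubic already runs ahead of the straight silicon-era trend.
print(np.polyval(linear, 2020))  # ~10.0
print(np.polyval(cubic, 2020))   # ~10.9
```

The gap widens the further out you extrapolate (by 2070 the cubic is 16 log-units above the linear trend in this toy), which is how a single out-of-regime data point plus a flexible fit can manufacture “accelerating acceleration” from data that is, within the regime that matters, plain exponential.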

We’ll see this kind of rounding up done by other futurists discussed here, as well as longtermists discussed in the appendix, and we’ll also see some of the same themes over and over again, particularly exponential growth and the idea that exponential growth will lead to even faster exponential growth due to improvements in technology causing an acceleration of the rate at which technology improves.

Jacque Fresco

In 1969, Jacque Fresco wrote Looking Forward. Fresco claims it’s possible to predict the future by knowing what values people will have in the future and then using that to derive what the future will look like. Fresco doesn’t describe how one can know the values people will have in the future and assumes people will have the values he has, which one might describe as 60s/70s hippy values. Another major mechanism he uses to predict the future is the idea that people of the future will be more scientific and apply the scientific method.

He writes about how “the scientific method” is only applied in a limited fashion, which led to thousands of years of slow progress. But, unlike in the 20th century, in the 21st century, people will be free from bias and apply “the scientific method” in all areas of their life, not just when doing science. People will be fully open to experimentation in all aspects of life and all people will have “a habitual open-mindedness coupled with a rigid insistence that all problems be formulated in a way that permits factual checking”.

This will, among other things, lead to complete self-knowledge of one’s own limitations for all people as well as an end to unhappiness due to suboptimal political and social structures.

The third major mechanism Fresco uses to derive his predictions is the idea that computers will be able to solve basically any problem one can imagine and that manufacturing technology will also progress similarly.

Each of the major mechanisms that are in play in Fresco’s predictions are indistinguishable from magic. If you can imagine a problem in the domain, the mechanism is able to solve it. There are other magical mechanisms in play as well, generally what was in the air at the time. For example, behaviorism and operant conditioning were very trendy at the time, so Fresco assumes that society at large will be able to operant condition itself out of any social problems that might exist.

Although most of Fresco’s predictions are technically not yet judgeable because they’re about the far future, for the predictions he makes whose time has come, I didn’t see one accurate prediction.

Buckminster Fuller

Fuller is best known for inventing the geodesic dome, although geodesic domes were actually built by Walther Bauersfeld decades before Fuller “invented” them. Fuller is also known for a variety of other creations, like the Dymaxion car, as well as his futurist predictions.

I couldn’t find a great source of a very long list of predictions from Fuller, but I did find this interview, where he makes a number of predictions. Fuller basically free associates with words, making predictions by riffing off of the English meaning of the word (e.g., see the teleportation prediction) or sometimes an even vaguer link.

Predictions from the video:

  • We’ll be able to send people by radio because atoms have frequencies and radio waves have frequencies so it will be possible to pick up all of our frequencies and send them by radio
  • Undeveloped countries (as opposed to highly developed countries) will be able to get the most advanced technologies “via the moon”
    • We’re going to put people on the moon for a year, which will require putting something like mile diameter of earth activity into a little black box weighing 500 lbs so that the moon person will be able to operate locally as if they were on earth
    • This will result in everyone realizing they could just get a little black box and they’ll no longer need local sewer systems, water, power, etc.
  • Humans will be fully automated out of physical work
    • The production capability of China and India will be irrelevant and the only thing that will matter is who can “get” the consumers from China and India
  • There will be a realistic accounting system of what wealth is, which is really about energy due to the law of conservation of energy, which also means that wealth won’t deteriorate and get lost
    • Wealth can only increase because energy can’t be created or destroyed and when you do an experiment, you can only learn more, so wealth can only be created
    • This will make the entire world successful

For those who’ve heard that Fuller predicted the creation of Bitcoin, that last prediction about an accounting system for wealth is the one people are referring to. Typically, people who say this haven’t actually listened to the interview where he states the whole prediction and are themselves using Fuller’s free association method: Bitcoin comes from spending energy to mine Bitcoin and Fuller predicted that the future would have a system of wealth based on energy, therefore Fuller predicted the creation of Bitcoin. If you actually listen to the interview, Bitcoin doesn’t even come close to satisfying the properties of the system Fuller describes, but that doesn’t matter if you’re doing Fuller-style free association.

In this post, Fuller has fewer predictions graded than almost anyone else, so it’s unclear what his accuracy would be if we had a list of, say, 100 predictions, but the predictions I could find have a 0% accuracy rate.

Michio Kaku

Among people on Wikipedia’s futurist list, Michio Kaku is probably relatively well known because, as part of his work on science popularization, he’s had a nationally (U.S.) syndicated radio show since 2006 and he frequently appears on talk shows and is interviewed by news organizations.

In his 1997 book, Visions: How Science Will Revolutionize the 21st Century, Kaku explains why predictions from other futurists haven’t been very accurate and why his predictions are different:

… most predictions of the future have floundered because they have reflected the eccentric, often narrow viewpoints of a single individual.

The same is not true of Visions. In the course of writing numerous books, articles, and science commentaries, I have had the rare privilege of interviewing over 150 scientists from various disciplines during a ten-year period.

On the basis of these interviews, I have tried to be careful to delineate the time frame over which certain predictions will or will not be realized. Scientists expect some predictions to come about by the year 2020; others will not materialize until much later—from 2050 to the year 2100.

Kaku also claims that his predictions are more accurate than many other futurists because he’s a physicist and thinking about things in the ways that physicists do allows for accurate predictions of the future:

It is, I think, an important distinction between Visions, which concerns an emerging consensus among the scientists themselves, and the predictions in the popular press made almost exclusively by writers, journalists, sociologists, science fiction writers, and others who are consumers of technology, rather than by those who have helped to shape and create it. … As a research physicist, I believe that physicists have been particularly successful at predicting the broad outlines of the future. Professionally, I work in one of the most fundamental areas of physics, the quest to complete Einstein’s dream of a “theory of everything.” As a result, I am constantly reminded of the ways in which quantum physics touches many of the key discoveries that shaped the twentieth century.

In the past, the track record of physicists has been formidable: we have been intimately involved with introducing a host of pivotal inventions (TV, radio, radar, X-rays, the transistor, the computer, the laser, the atomic bomb), decoding the DNA molecule, opening new dimensions in probing the body with PET, MRI, and CAT scans, and even designing the Internet and the World Wide Web.

He also specifically calls out Kurzweil’s predictions as absurd, saying Kurzweil has “preposterous predictions about the decades ahead, from vacationing on Mars to banishing all diseases.”

Although Kaku finds Kurzweil’s predictions ridiculous, his predictions rely on some of the same mechanics Kurzweil relies on. For example, Kaku assumes that materials / commodity prices will tank in the then-near future because the advance of technology will make raw materials less important and Kaku also assumes the performance and cost scaling of computer chips would continue on the historical path it was on during the 70s and 80s. Like most of the other futurists from Wikipedia’s list, Kaku also assumed that the pace of scientific progress would rapidly increase, although his reasons are different (he cites increased synergy between the important fields of quantum mechanics, computer science, and biology, which he says are so important that “it will be difficult to be a research scientist in the future without having some working knowledge of” all of those fields).

Kaku assumed that UV lithography would run out of steam and that we’d have to switch to X-ray or electron lithography, which would then run out of steam, requiring us to switch to a fundamentally different substrate for computers (optical, molecular, or DNA) to keep performance and scaling on track, but advances in other fundamental computing substrates have not materialized quickly enough for Kaku’s predictions to come to pass. Kaku assigned very high weight to things that have what he considers “quantum” effects, which is why, for example, he cites the microprocessor as something that will be obsolete by 2020 (they’re not “quantum”) whereas fiber optics will not be obsolete (they rely on “quantum” mechanisms). Although Kaku pans other futurists for making predictions without having a real understanding of the topics they’re discussing, it’s not clear that Kaku has a better understanding of many of the topics being discussed even though, as a physicist, Kaku has more relevant background knowledge.

The combination of assumptions above that didn’t pan out leads to a fairly low accuracy rate for Kaku’s predictions in Visions.

I didn’t finish Visions, but the prediction accuracy rate of the part of the book I read (from the beginning until somewhere in the middle, to avoid cherry picking) was 3% (arguably 6% if you give full credit to the prediction I gave half credit to). He made quite a few predictions I didn’t score in which he said something “may” happen. Such a prediction is, of course, unfalsifiable because the statement is true whether or not the event happens.

John Naisbitt

Anyone who’s a regular used book store bargain bin shopper will have seen this name on the cover of Megatrends, which must be up there with Lee Iacocca’s autobiography as one of the most common bargain bin fillers.

Naisbitt claims that he’s able to accurately predict the future using “content analysis” of newspapers, which he says was used to provide great insights during WWII and has been widely used by the intelligence community since then, but hadn’t been commercially applied until he did it. Naisbitt explains that this works because there’s a fixed amount of space in newspapers (apparently newspapers can’t be created or destroyed nor can newspapers decide to print significantly more or less news or have editorial shifts in what they decide to print that are not reflected by identical changes in society at large):

Why are we so confident that content analysis is an effective way to monitor social change? Simply stated, because the news hole in a newspaper is a closed system. For economic reasons, the amount of space devoted to news in a newspaper does not change significantly over time. So, when something new is introduced, something else or a combination of things must be omitted. You cannot add unless you subtract. It is the principle of forced choice in a closed system.
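For what it’s worth, the mechanics of Naisbitt’s claim are simple to sketch. In the toy below (topic counts are invented; real content analysis would measure column-inches across many newspapers), a topic’s share of a fixed-size “news hole” is read as the trend signal, and the “forced choice” property is just that the shares sum to one:

```python
from collections import Counter

# Toy Naisbitt-style "content analysis": within a fixed news hole, a
# topic's share of coverage is read as a trend signal. Topic counts
# here are invented for illustration.
def topic_shares(article_topics):
    counts = Counter(article_topics)
    total = sum(counts.values())
    return {topic: n / total for topic, n in counts.items()}

year_1970 = ["industry"] * 8 + ["information"] * 2
year_1980 = ["industry"] * 5 + ["information"] * 5

s70, s80 = topic_shares(year_1970), topic_shares(year_1980)
# Because the hole is fixed, information's gain is industry's loss.
print(s70["information"], s80["information"])
```

Of course, the measurement only tracks social change to the extent that editorial choices mirror society one-for-one, which is exactly the assumption questioned above.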

Unfortunately, it’s not really possible to judge Naisbitt’s predictions because he almost exclusively deals in vague, horoscope-like, predictions which can’t really be judged as correct or incorrect. If you just read Megatrends for the flavor of each chapter and don’t try to pick out individual predictions, some chapters seem quite good, e.g., “Industrial Society -> Information Society”, but some are decidedly mixed even if you very generously grade his vague predictions, e.g., “From Forced Technology to High Tech / High Touch”. This can’t really be compared to the other futurists in this post because it’s much easier to make vague predictions sound roughly correct than to make precise predictions correct but, even so, if reading for a general feel of what direction the future might go, Naisbitt’s predictions are much more on the mark than those of any other futurist discussed.

That being said, as far as I read in his book, the one concrete prediction I could find was incorrect, so if you want to score Naisbitt comparably to the other futurists discussed here, you might say his accuracy rate is 0% but with very wide error bars.

Gerard K. O’Neill

O’Neill has two relatively well-known non-fiction futurist books, 2081 and The Technology Edge. 2081 was written in 1980 and predicts the future 100 years from then. The Technology Edge discusses what O’Neill thought the U.S. needed to do in 1983 to avoid being obsoleted by Japan.

O’Neill spends a lot more space on discussing why previous futurists were wrong than any other futurist under discussion. O’Neill notes that “most [futurists] overestimated how much the world would be transformed by social and political change and underestimated the forces of technological change” and cites Kipling, Verne, Wells, Haldane, and Bellamy as people who did this. O’Neill also says that “scientists tend to overestimate the chances for major scientific breakthroughs and underestimate the effects of straightforward developments well within the boundaries of existing knowledge” and cites Haldane again on this one. O’Neill also cites spaceflight as a major miss of futurists past, saying that they tended to underestimate how quickly spaceflight was going to develop.

O’Neill also says that it’s possible to predict the future without knowing the exact mechanism by which the change will occur. For example, he claims that the automobile could’ve been safely predicted even if the internal combustion engine hadn’t been invented because steam would’ve also worked. But he also goes on to say that there are things it would’ve been unreasonable to predict, like the radio, TV, and electronic communications: even though the foundations for those were discovered in 1865, the time interval between a foundational discovery and its application is “usually quite long”. He cites 30-50 years from quantum mechanics to integrated circuits, 100+ years from relativity to faster than light travel, and 50+ years from the invention of nuclear power without “a profound impact”.

I don’t think O’Neill ever really explains why his predictions are of the “automobile” kind in a convincing way. Instead, he relies on doing the opposite of what he sees as mistakes others made. The result is that he predicts huge advancements in space flight, saying we should expect large scale space travel and colonization by 2081, presaged by wireless transmission of energy by 2000 (referring to energy beamed down from satellites) and interstellar probes by 2025 (presumably something of a different class than the Voyager probes, which were sent out in 1977).

In 1981, he said “a fleet of reusable vehicles of 1990s vintage, numbering much less than today’s world fleet of commercial jet transports, would be quite enough to provide transport into space and back again for several hundred million people per year”, predicting that something much more advanced than the NASA Space Shuttle would be produced shortly afterwards. Continuing that progress, “by the year 2010 or thereabouts there will be many space colonies in existence and many new ones being constructed each year”.

Most of O’Neill’s predictions are for 2081, but he does make the occasional prediction for a time before 1981. All of the falsifiable ones I could find were incorrect, giving him an accuracy rate of approximately 0% but with fairly wide error bars.

Patrick Dixon

Dixon is best known for writing Futurewise, but he has quite a few books with predictions about the future. In this post, we’re just going to look at Futurewise, because it’s the most prediction-oriented book Dixon has that’s old enough that we ought to be able to make a call on a decent number of his predictions (Futurewise is from 1998; his other obvious candidate, The Future of Almost Everything is from 2015 and looks forward a century).

Unlike most other futurists featured in this post, Dixon doesn’t explicitly lay out in Futurewise itself why you should trust his predictions, although he sort of implicitly does so in the acknowledgements, where he mentions having interacted with many very important people:

I am indebted to the hundreds of senior executives who have shaped this book by their participation in presentations on the Six Faces of the Future. The content has been forged in the realities of their own experience.

And although he doesn’t explicitly refer to himself, he also says that business success will come from listening to folks who have great vision:

Those who are often right will make a fortune. Trend hunting in the future will be a far cry from the seventies or eighties, when everything was more certain. In a globalized market there are too many variables for back-projection and forward-projection to work reliably .. That’s why economists don’t make good futurologists when it comes to new technologies, and why so many boards of large corporations are in such a mess when it comes to quantum leaps in thinking beyond 2000.

Second millennial thinking will never get us there … A senior board member of a Fortune 1000 company told me recently: ‘I’m glad I’m retiring so I don’t have to face these decisions’ … ‘What can we do?’ another senior executive declares …

Later, in The Future of Almost Everything, Dixon lays out the techniques that he says worked when he wrote Futurewise, which “has stood the test of time for more than 17 years”. Dixon says:

All reliable, long-range forecasting is based on powerful megatrends that have been driving profound, consistent and therefore relatively predictable change over the last 30 years. Such trends are the basis of every well-constructed corporate strategy and government policy … These wider trends have been obvious to most trend analysts like myself for a while, and have been well described over the last 20–30 years. They have evolved much more slowly than booms and busts, or social fads.

And lays out trends such as:

  • fall in costs of production of most mass-produced items
  • increased concern about environment/sustainability
  • fall in price of digital technology, telecoms and networking
  • rapid growth of all kinds of wireless/mobile devices
  • ever-larger global corporations, mergers, consolidations

Dixon declines to mention trends he predicted that didn’t come to pass (such as his prediction that increased tribalism will mean that most new wealth is created in small firms of 20 or fewer employees which will mostly be family owned, or his prediction that the death of “old economics” means that we’ll be able to have high economic growth with low unemployment and no inflationary pressure indefinitely), or cases where the trend progression caused Dixon’s prediction to be wildly incorrect, a common problem when making predictions off of exponential trends because a relatively small inaccuracy in the rate of change can result in a very large change in the final state.
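To make the last point concrete, here’s a minimal sketch (my illustration, not Dixon’s numbers; the 40%/30% rates and 30-year horizon are hypothetical) of how a modest error in an assumed exponential growth rate compounds into a very large error in the predicted end state:

```python
# Sketch: small rate errors compound over a long prediction horizon.
def extrapolate(initial, annual_rate, years):
    """Project a quantity forward assuming constant exponential growth."""
    return initial * (1 + annual_rate) ** years

horizon = 30                                   # years out
predicted = extrapolate(1.0, 0.40, horizon)    # forecaster assumes 40%/year
actual = extrapolate(1.0, 0.30, horizon)       # reality delivers 30%/year

print(f"predicted growth: {predicted:,.0f}x")
print(f"actual growth:    {actual:,.0f}x")
print(f"overestimate:     {predicted / actual:.1f}x")
```

A 10-percentage-point error in the rate, which would be a very good forecast by futurist standards, still leaves the 30-year prediction off by roughly an order of magnitude.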

Dixon’s website is full of endorsements for him, with implicit and explicit claims that he’s a great predictor of the future, as well as more general statements such as “Patrick Dixon has been ranked as one of the 20 most influential business thinkers alive today”.

Back in Futurewise, Dixon relies heavily on the idea of a stark divide between “second millennial thinking” and “third millennial thinking”, which repeatedly comes up in his text. Like nearly everyone else under discussion, Dixon also extrapolates out from many existing trends to make predictions that didn’t pan out, e.g., he looked at the falling price of phone lines and predicted that people would end up with a huge number of phone lines in their home by 2005, and that screens getting thinner would mean that we’d have “paper-thin display sheets” in significant use by 2005. This kind of extrapolation sometimes works and Dixon’s overall accuracy rate of 10% is quite good compared to the other “futurists” under discussion here.

However, when Dixon explains his reasoning in areas I have some understanding of, he seems to be operating at the buzzword level, so that when he makes a correct call, it’s generally for the wrong reasons. For example, Dixon says that software will always be buggy, which seems true, at least to date. However, his reasoning for this is that new computers come out so frequently (he says “less than 20 months” — a reference to the 18 month timeline in Moore’s law) and it takes so long to write good software (“at least 20 years”) that programmers will always be too busy rewriting software to run on the new generation of machines (due to the age of the book, he uses the example of “brand new code … written for Pentium chips”).

It’s simply not the case that most bugs or even, as a fraction of bugs, almost any bugs are due to programmers rewriting existing code to run on new CPUs. If you really squint, you can see things like Android devices having lots of security bugs due to the difficulty of updating Android and backporting changes to older hardware, but those kinds of bugs are both a small fraction of all bugs and not really what Dixon was talking about.

Similarly, on how computer backups will be done in the future, Dixon basically correctly says that home workers will be vulnerable to data loss and people who are serious about saving data will back up data online, “back up data on-line to computers in other cities as the ultimate security”.

But Dixon’s stated reason for this is that workstations already have large disk capacity (>= 2GB) and floppy disks haven’t kept up (< 2MB), so it would take thousands of floppy disks to do backups, which is clearly absurd. However, even at the time, Zip drives (100MB per portable disk) were common and, although it didn’t take off, the same company that made Zip drives also made 1GB “Jaz” drives. And, of course, tape backup was also used at the time and is still used today. This trend has continued to this day; large portable disks are available, and quite a few people I know transfer or back up large amounts of data on portable disks. The reason most people don’t do disk/tape backups isn’t that it would require thousands of disks to back up a local computer (if you look at the computers people typically use at home, most people could back up their data onto a single portable disk per failure domain and even keep multiple versions on one disk), but that online/cloud backups are more convenient.
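Putting rough numbers on this (the ~2GB and ~2MB figures are from Dixon’s text; the Zip and Jaz capacities are approximate period figures) shows that the floppy comparison was a strawman even in 1998:

```python
# Back-of-the-envelope: media needed to back up a ~2GB workstation disk.
import math

disk_mb = 2 * 1024   # ~2GB workstation disk, per Dixon
floppy_mb = 2        # floppy capacity Dixon cites
zip_mb = 100         # Iomega Zip disk, widely available at the time
jaz_mb = 1024        # Iomega Jaz disk

print(math.ceil(disk_mb / floppy_mb))  # floppies needed: absurd, as Dixon says
print(math.ceil(disk_mb / zip_mb))     # Zip disks needed: annoying but feasible
print(math.ceil(disk_mb / jaz_mb))     # Jaz disks needed: trivial
```

The floppy arithmetic is right, but contemporaries already had media that made local backup practical, which is why the floppy-count argument doesn’t explain why people later moved to online backup.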

Since Dixon’s reasoning was incorrect (at least in the cases where I’m close enough to the topic to understand how applicable the reasoning was), it seems that when Dixon is correct, it can’t be for the stated reason and Dixon is either correct by coincidence or because he’s looking at the broader trend and came up with an incorrect rationalization for the prediction. But, per the above, it’s very difficult to actually correctly predict the growth rate of a trend over time, so without some understanding of the mechanics in play, one could also say that a prediction that comes true based on some rough trend is also correct by coincidence.

Alvin Toffler / Heidi Toffler

Like most others on this list, Toffler claims some big prediction wins

The Tofflers claimed on their website to have foretold the breakup of the Soviet Union, the reunification of Germany and the rise of the Asia-Pacific region. He said in the People’s Daily interview that “Future Shock” envisioned cable television, video recording, virtual reality and smaller U.S. families.

In this post, we’ll look at Future Shock, Toffler’s most famous work, written in 1970.

According to a number of sources, Alvin Toffler’s major works were co-authored by Heidi Toffler. In the books themselves, Heidi Toffler is acknowledged as someone who helped out a lot, but not as an author, despite the remarks elsewhere about co-authorship. In this section, I’m going to refer to Toffler in the singular, but you may want to mentally substitute the plural.

Toffler claims that we should understand the present not only by understanding the past, but also by understanding the future:

Previously, men studied the past to shed light on the present. I have turned the time-mirror around, convinced that a coherent image of the future can also shower us with valuable insights into today. We shall find it increasingly difficult to understand our personal and public problems without making use of the future as an intellectual tool. In the pages ahead, I deliberately exploit this tool to show what it can do.

Toffler generally makes vague, wishy-washy statements, so it’s not really reasonable to score Toffler’s concrete predictions because so few predictions are given. However, Toffler very strongly implies that past exponential trends are expected to continue or even accelerate and that the very rapid change caused by this is going to give rise to “future shock”, hence the book’s title:

I coined the term “future shock” to describe the shattering stress and disorientation that we induce in individuals by subjecting them to too much change in too short a time. Fascinated by this concept, I spent the next five years visiting scores of universities, research centers, laboratories, and government agencies, reading countless articles and scientific papers and interviewing literally hundreds of experts on different aspects of change, coping behavior, and the future. Nobel prizewinners, hippies, psychiatrists, physicians, businessmen, professional futurists, philosophers, and educators gave voice to their concern over change, their anxieties about adaptation, their fears about the future. I came away from this experience with two disturbing convictions. First, it became clear that future shock is no longer a distantly potential danger, but a real sickness from which increasingly large numbers already suffer. This psycho-biological condition can be described in medical and psychiatric terms. It is the disease of change .. Earnest intellectuals talk bravely about “educating for change” or “preparing people for the future.” But we know virtually nothing about how to do it … The purpose of this book, therefore, is to help us come to terms with the future— to help us cope more effectively with both personal and social change by deepening our understanding of how men respond to it

The big hammer that Toffler uses everywhere is extrapolation of exponential growth, with the implication that this is expected to continue. On the general concept of extrapolating out from curves, Toffler’s position is very similar to Kurzweil’s: if you can see a trend on a graph, you can use that to predict the future, and the ability of technology to accelerate the development of new technology will cause innovation to happen even more rapidly than you might naively expect:

Plotted on a graph, the line representing progress in the past generation would leap vertically off the page. Whether we examine distances traveled, altitudes reached, minerals mined, or explosive power harnessed, the same accelerative trend is obvious. The pattern, here and in a thousand other statistical series, is absolutely clear and unmistakable. Millennia or centuries go by, and then, in our own times, a sudden bursting of the limits, a fantastic spurt forward. The reason for this is that technology feeds on itself. Technology makes more technology possible, as we can see if we look for a moment at the process of innovation. Technological innovation consists of three stages, linked together into a self-reinforcing cycle. … Today there is evidence that the time between each of the steps in this cycle has been shortened. Thus it is not merely true, as frequently noted, that 90 percent of all the scientists who ever lived are now alive, and that new scientific discoveries are being made every day. These new ideas are put to work much more quickly than ever before.

The first four major examples of this from the book are:

  • Population growth rate (doubling time of 11 years), which will have to create major changes
  • Economic growth (doubling time of 15 years), which will increase the amount of stuff people own (this is specifically phrased as amount of stuff and not wealth)
    • It’s very strongly implied that this will continue for at least 70 years
  • Speed of travel; no doubling time is stated, but the reader is invited to extrapolate from the following points: human running speed millions of years ago, 100 mph in the 1880s, 400 mph in 1938, 800 mph by 1958, 4000 mph very shortly afterwards (18000 mph when orbiting the earth)
  • Reduced time from conception of an idea to the application, used to support the idea that growth will accelerate

As we just noted above, when discussing Dixon, Kurzweil, etc., predicting the future by extrapolating out exponential growth is fraught. Toffler somehow manages to pull off the anti-predictive feat of naming a bunch of trends which were about to stop, some of which already had the writing on the wall when Toffler was writing.

Toffler then extrapolates from the above and predicts that the half-life of everything will become shorter, which will overturn how society operates in a variety of ways.

For example, companies and governments will replace bureaucracies with “adhocracies” sometime between 1995 and 2020. The concern that people will feel like cogs as companies grow larger is obsolete because, in adhocracy, the increased pace of everything will cause the entire concept of top-down command and control to disappear. While it’s true that some companies have less top-down direction than would’ve been expected in Toffler’s time, many also have more, enabled by technology that allows employers to keep stricter tabs on employees than ever before, making people more of a cog than ever before.

Another example is that Toffler predicted human colonization of the Ocean, “The New Atlantis”, “long before the arrival of A.D. 2000”.

Fabian Giesen points out that, independent of the accuracy of Toffler’s predictions, Venkatesh Rao’s Welcome to the Future Nauseous explains why “future shock” didn’t happen in areas of very rapid technological development.

People from the Wikipedia list who weren’t included

  • Laurie Anderson
    • I couldn’t easily find predictions from her, except some song lyrics that allegedly predicted 9/11, but in a very “horoscope” sort of way
  • Arthur Harkins
    • His Wikipedia entry was later removed for notability reasons and it was already tagged as non-notable at the time
  • Stephen Hawking
    • The predictions I could find are generally too far out to grade and are really more suggestions as to what people should do than predictions. For example, the Wikipedia futurist list above links to a 2001 prediction that humans will be left behind by computers / robots unless genetic engineering is used to allow humans to keep up, and it also links to a 2006 prediction that humans need to expand to other planets to protect the species
  • Thorkil Kristensen
    • I couldn’t easily find a set of English language predictions from Kristensen. Thorkil Kristensen is associated with but not an author of The Limits to Growth, a 1970s anti-growth polemic
  • David Sears
    • Not notable enough to have a Wikipedia page, then or now
  • John Zerzan
    • Zerzan seems like more of someone who’s calling for change in society due to his political views than a “futurist” who’s trying to predict the future

Steve Yegge

As I mentioned at the start, none of the futurists from Wikipedia’s list had very accurate predictions, so we’re going to look at a couple other people from other sources who aren’t generally considered futurists to see how they rank.

We previously looked at Yegge’s predictions here, which were written in 2004 and were generally about the next 5-10 years, with some further out. There were nine predictions (technically ten, but one isn’t really a prediction). If grading them as written, which is how futurists have been scored, I would rank these at 4.5/9, or about 50%.

You might argue that this is unfair because Yegge was predicting the relatively near future, but if we look at relatively near future predictions from futurists, their accuracy rate is generally nowhere near 50%, so I don’t think it’s unfair to compare the number in some way.

If you want to score these like people often score futurists, where they get credit for essentially getting things directionally correct, then I’d say that Yegge’s score should be between 7/9 and 8/9, depending on how much partial credit he gets for one of the questions.

If you want to take a more holistic “what would the world look like if Yegge’s vision were correct vs. the world we’re in today”, I think Yegge also does quite well there, with the big miss being that Lisp-based languages have not taken over the world, the success of Clojure notwithstanding. This is quite different from the futurists here, who generally had a vision of many giant changes that didn’t come to pass, e.g., if we look at Kurzweil’s vision of the world, by 2010, we would’ve had self-driving cars, a “cure” for paraplegia, widespread use of AR, etc., by 2011, we would have unbounded life expectancy, and by 2019 we would have pervasive use of nanotechnology including computers having switched from transistors to nanotubes, effective “mitigations” for blindness and deafness, fairly widely deployed fully realistic VR that can simulate sex via realistic full-body stimulation, pervasive self-driving cars (predicted again), entirely new fields of art and music, etc., and all that these things imply, which is a very different world than the world we actually live in.

And we see something similar if we look at other futurists, who predicted things like living underground, living under the ocean, etc.; most predicted many revolutionary changes that would really change society, a few of which came to pass. Yegge, instead, predicted quite a few moderate changes (as well as some places where change would be slower than a lot of people expected) and changes were slower than he expected in the areas he predicted, but only by a bit.

Yegge described his methodology for the post above as:

If you read a lot, you’ll start to spot trends and undercurrents. You might see people talking more often about some theme or technology that you think is about to take off, or you’ll just sense vaguely that some sort of tipping point is occurring in the industry. Or in your company, for that matter.

I seem to have many of my best insights as I’m writing about stuff I already know. It occurred to me that writing about trends that seem obvious and inevitable might help me surface a few not-so-obvious ones. So I decided to make some random predictions based on trends I’ve noticed, and see what turns up. It’s basically a mental exercise in mining for insights

In this essay I’ll make ten predictions based on undercurrents I’ve felt while reading techie stuff this year. As I write this paragraph, I have no idea yet what my ten predictions will be, except for the first one. It’s an easy, obvious prediction, just to kick-start the creative thought process. Then I’ll just throw out nine more, as they occur to me, and I’ll try to justify them even if they sound crazy.

He’s not really trying to generate the best predictions, but still did pretty well by relying on his domain knowledge plus some intuition about what he’s seen.

In the post about Yegge’s predictions, we also noted that he’s made quite a few successful predictions outside of his predictions post:

Steve also has a number of posts that aren’t explicitly about predictions that, nevertheless, make pretty solid predictions about how things are today, written way back in 2004. There’s It’s Not Software, which was years ahead of its time about how people write “software”, how writing server apps is really different from writing shrinkwrap software in a way that obsoletes a lot of previously solid advice, like Joel’s dictum against rewrites, as well as how service oriented architectures look; the Google at Delphi (again from 2004) correctly predicts the importance of ML and AI as well as Google’s very heavy investment in ML; an old interview where he predicts “web application programming is gradually going to become the most important client-side programming out there. I think it will mostly obsolete all other client-side toolkits: GTK, Java Swing/SWT, Qt, and of course all the platform-specific ones like Cocoa and Win32/MFC/”; etc. A number of Steve’s internal Google blog posts also make interesting predictions, but AFAIK those are confidential.

Quite a few of Yegge’s predictions would’ve been considered fairly non-obvious at the time and he seemed to still have a fairly good success rate on his other predictions (although I didn’t try to comprehensively find them and score them, I sampled some of his old posts and found the overall success rate to be similar to the ones in his predictions post).

With Yegge and the other predictors who were picked so that we can look at some accurate predictions, there is, of course, a concern that there’s survivorship bias in picking these predictors. I suspect that’s not the case for Yegge because he continued to be accurate after I first noticed that he seemed to have accurate predictions, so it’s not just that I picked someone who had a lucky streak after the fact. Also, especially in some of his Google-internal G+ comments, he made fairly high-dimension comments that ended up being right for the reasons he suggested, which provides a lot more information about how accurate his reasoning was than simply winning a bunch of coin flips in a row. This comment about depth of reasoning doesn’t apply to Caplan, below, because I haven’t evaluated Caplan’s reasoning, but it does apply to MS leadership circa 1990.

Bryan Caplan

Bryan Caplan reports that his track record is 23/23 = 100%. He is much more precise in specifying his predictions than anyone else we’ve looked at and tries to give a precise bet that will be trivial to adjudicate as well as betting odds.

Caplan started making predictions/bets around the time the concept that “betting is a tax on bullshit” became popular (the idea being that a lot of people are willing to say anything but will quiet down if asked to make a real bet, and those that don’t will pay a real cost if they make bad real bets). Caplan seems to have a strategy of acting as a tax man on bullshit, in that he generally takes the safe side of bets that people probably shouldn’t have made. Andrew Gelman says:

Caplan’s bets are an interesting mix. The first one is a bet where he offered 1-to-100 odds so it’s no big surprise that he won, but most of them are at even odds. A couple of them he got lucky on (for example, he bet in 2008 that no large country would leave the European Union before January 1, 2020, so he just survived by one month on that one), but, hey, it’s ok to be lucky, and in any case even if he only had won 21 out of 23 bets, that would still be impressive.

It seems to me that Caplan’s trick here is to show good judgment on what pitches to swing at. People come at him with some strong, unrealistic opinions, and he’s been good at crystallizing these into bets. In poker terms, he waits till he has the nuts, or nearly so. 23 out of 23 . . . that’s a great record.

I think there’s significant value in doing this, both in the general “betting is a tax on bullshit” sense as well as, more specifically, if you have high belief that someone is trying to take the other side of bad bets and has good judgment, knowing that the Caplan-esque bettor has taken the position gives you decent signal about the bet even if you have no particular expertise in the subject. For example, if you look at my bets, even though I sometimes take bets against obviously wrong positions, I much more frequently take bets I have a very good chance of losing, so just knowing that I took a bet provides much less information than knowing that Caplan took a bet.

But, of course, taking Caplan’s side of a bet isn’t foolproof. As Gelman noted, Caplan got lucky at least once, and Caplan also seems likely to lose the Caplan and Tabarrok v. Bauman bet on global temperature. For that particular bet, you could also make the case that he’s expected to lose since he took the bet with 3:1 odds, but a lot of people would argue that 3:1 isn’t nearly long enough odds to take that bet.

The methodology that Caplan has used to date will never result in a positive prediction of some big change until the change is very likely to happen, so this methodology can’t really give you a vision of what the future will look like in the way that Yegge or Gates or another relatively accurate predictor who takes wilder bets could.

Bill Gates / Nathan Myhrvold / MS leadership circa 1990 to 1997

A handful of memos that were released to the world due to the United States Department of Justice (https://en.wikipedia.org/wiki/United_States_Department_of_Justice) case against Microsoft laid out the vision Microsoft executives had about how the world would develop, with or without Microsoft’s involvement. These memos don’t lay out concrete predictions with timelines and therefore can’t be scored in the same way futurist predictions were scored in this post. If rating these predictions on how accurate their vision of the future was, I’d rate them similarly to Steve Yegge (who scored 7/9 or 8/9), but the predictions were significantly more ambitious, so they seem much more impressive when controlling for the scope of the predictions.

Compared to the futurists we discussed, there are multiple ways in which the predictions are much more detailed (and therefore more impressive for a given level of accuracy, on top of being more accurate). One is that MS execs have a much deeper understanding of the things under discussion and how they impact each other. “Our” futurists often discuss things at a high level and, when they discuss things in detail, they make statements that make it clear that they don’t really understand the topic and often don’t really know what the words they’re writing mean. MS execs of the era pretty clearly had a deep understanding of the issues in play, which let them make detailed predictions that our futurists wouldn’t make, e.g., while protocols like FTP and IRC will continue to be used, the near future of the internet is HTTP over TCP and the browser will become a “platform” in the same way that Windows is a “platform”, one that’s much more important and larger than any OS (unless Microsoft is successful in taking action to stop this from coming to pass, which it was not, despite MS execs foreseeing the exact mechanisms that could cause MS to fail to own the internet). MS execs used this level of understanding to make predictions about the kinds of larger things that our futurists discuss, e.g., the nature of work and how that will change.

Actually having an understanding of the issues in play and not just operating with a typical futurist buzzword level understanding of the topics allowed MS leadership to make fairly good guesses about what the future would look like.

A topic for another post is how MS leadership had a more effective vision for the future than leadership at old-line competitors (Novell, IBM, AT&T, Yahoo, Sun, etc.) and how this resulted in MS turning into a $2T company while their competitors became, at best, irrelevant and most didn’t even succeed at becoming irrelevant and ceased to exist. Reading through old MS memos, it’s clear that MS really kept tabs on what competitors were doing and they were often surprised at how ineffective leadership was at their competitors, e.g., on Novell, Bill Gates says “Our traditional competitors are just getting involved with the Internet. Novell is surprisingly absent given the importance of networking to their position”; Gates noted that Frankenberg, then-CEO of Novell, seemed to understand the importance of the internet, but Frankenberg only joined Novell in 1994 and left in 1996 and spent much of his time at Novell reversing the direction the company had taken under Noorda, which didn’t leave Novell with a coherent position or plan when Frankenberg “resigned” two years into the pivot he was leading.

In many ways, a discussion of what tech execs at the time thought the future would look like and what paths would lead to success is more interesting than looking at futurists who basically don’t understand the topics they’re talking about, but I started this post to look at how well futurists understood the topics they discussed and didn’t know, in advance, that their understanding of the topics they discuss and resultant prediction accuracy would be so poor.

Common sources of futurist errors

  • Not learning from mistakes
    • Good predictors tend to be serious at looking at failed past predictions and trying to calibrate
  • Reasoning from a cocktail party level understanding of a topic
    • Good predictors tend to engage with ideas in detail
  • Pushing one or a few “big ideas”
  • Generally assuming high certainty about the future
    • Worse yet: assuming high certainty of scaling curves, especially exponential scaling curves
  • Panacea thinking
  • Only seeing the upside (or downside) of technological changes
  • Starting from evidence-free assumptions
Not learning from mistakes

The futurists we looked at in this post tend to rate themselves quite highly and, after the fact, generally claim credit for being great predictors of the future, so much so that they’ll even tell you how you can predict the future accurately. And yet, after scoring them, the most accurate futurist (among the ones who made concrete enough predictions that they could be scored) came in at 10% accuracy with generous grading that gave them credit for predictions that accidentally turned out to be correct when they mispredicted the mechanism by which the prediction would come to pass (a strict reading of many of their predictions would reduce the accuracy further because they said the prediction would happen because of their predicted mechanism, which it did not, rendering the prediction as stated false).

There are two tricks that these futurists have used to be able to make such lofty claims. First, many of them make vague predictions and then claim credit if anything vaguely resembling the prediction comes to pass. Second, almost all of them make a lot of predictions and then only tally up the ones that came to pass. One way to look at a 4% accuracy rate is that you really shouldn’t rely on that person’s predictions. Another way is that, if they made 500 predictions, they’re a great predictor because they made 20 accurate predictions. Since almost no one will bother to go through a list of predictions to figure out the overall accuracy when someone does the latter, making a huge number of predictions and then cherry picking the ones that were accurate is a good strategy for becoming a renowned futurist.

But if we want to figure out how to make accurate predictions, we’ll have to look at other people’s strategies. There are people who do make fairly good, generally directionally accurate, predictions, as we noted when we looked at Steve Yegge’s prediction record. However, they tend to be harsh critics of their predictions, as Steve Yegge was when he reviewed his own prediction record, saying:

I saw the HN thread about Dan Luu’s review of this post, and felt people were a little too generous with the scoring.

It’s unsurprising that a relatively good predictor of the future scored himself lower than I did because taking a critical eye to your own mistakes and calling yourself out for mistakes that are too small for most people to care about is a great way to improve. We can see this in communications from Microsoft leadership as well, e.g., calling themselves out for failing to predict that a lack of backwards compatibility would doom major efforts like OS/2 and LanMan. Doing what most futurists do and focusing on the predictions that worked out without looking at what went wrong isn’t such a great way to improve.

Cocktail party understanding

Another thing we see among people who make generally directionally correct predictions, as in the Steve Yegge post mentioned above, Nathan Myhrvold’s 1993 “Road Kill on the Information Highway”, Bill Gates’s 1995 “The Internet Tidal Wave“, etc., is that the person making the prediction actually understands the topic. In all of the above examples, it’s clear that the author of the document has a fairly strong technical understanding of the topics being predicted and, in the general case, it seems that people who have relatively accurate predictions are really trying to understand the topic, which is in stark contrast to the futurists discussed in this post, almost all of whom display clear signs of having a buzzword-level understanding of the topics they’re discussing.

There’s a sense in which it isn’t too difficult to make correct predictions if you understand the topic and have access to the right data. Before joining a huge megacorp and then watching the future unfold, I thought documents like “Road Kill on the Information Highway” and “The Internet Tidal Wave” were eerily prescient, but once I joined Google in 2013, a lot of trends that weren’t obvious from the outside seemed fairly obvious from the inside.

For example, it was obvious that mobile was very important for most classes of applications, so much so that most applications that were going to be successful would be “mobile first” applications where the web app was secondary, if it existed at all, and from the data available internally, this should’ve been obvious going back at least to 2010. Looking at what people were doing on the outside, quite a few startups in areas where mobile was critical were operating with a 2009 understanding of the future even as late as 2016 and 2017, where they focused on having a web app first and had no mobile app and a web app that was unusable on mobile.

This isn’t to say that the problem is trivial — many people with access to the same data still generally make incorrect predictions. A famous example is Ballmer’s prediction that “There’s no chance that the iPhone is going to get any significant market share. No chance.” Ballmer and other MS leadership had access to information at least as good as what MS leadership had a decade earlier, but many of their predictions were no better than those of the futurists we discussed here.

Assuming high certainty

Another common cause of incorrect predictions was having high certainty. That’s a general problem that’s magnified when making predictions from looking at past exponential growth and extrapolating to the future both because mispredicting the timing of a large change in exponential growth can have a very large impact and also because relatively small sustained changes in exponential growth can also have a large impact. An example that exposed these weaknesses for a large fraction of our futurists was their interpretation of Moore’s law, which many interpreted as a doubling of every good thing and/or halving of every bad thing related to computers every 18 months. That was never what Moore’s law predicted in the first place, but it was a common pop-conception of Moore’s law. One thing that’s illustrative about that is that predictors who were writing in the late 90s and early 00s still made these fantastical Moore’s law “based” predictions even though it was such common knowledge that both single-threaded computer performance and Moore’s law would face significant challenges that this was taught in undergraduate classes at the time. Any futurist who spent a few minutes talking to an expert in the area or even an undergrad would’ve seen that there’s a high degree of uncertainty about computer performance scaling, but most of the futurists we discuss either don’t do that or ignore evidence that would add uncertainty to their narrative.
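The sensitivity of long-range extrapolations to small rate changes or mistimed inflections is easy to see with a toy calculation (the growth rates below are illustrative made-up numbers, not real semiconductor data):

```python
# Toy illustration of why high-certainty exponential extrapolation is fragile:
# a modestly different sustained growth rate, or a mistimed trend break,
# changes a 20-year forecast by an enormous factor.

def extrapolate(start, annual_factor, years):
    """Compound `start` by `annual_factor` once per year for `years` years."""
    return start * annual_factor ** years

base = extrapolate(1.0, 1.41, 20)    # ~doubling every 2 years, sustained 20 years
slower = extrapolate(1.0, 1.30, 20)  # modestly slower sustained growth
# Trend break: 10 years of doubling-every-2-years, then 10 years of 5%/year
broken = extrapolate(extrapolate(1.0, 1.41, 10), 1.05, 10)

print(f"sustained: ~{base:.0f}x, slower: ~{slower:.0f}x, trend break: ~{broken:.0f}x")
```

With these invented rates, the three scenarios end up roughly 965x, 190x, and 51x, i.e., differences that look small year-over-year compound into more than an order of magnitude over two decades, which is why a point prediction with high certainty is so much weaker than a prediction over a distribution of outcomes.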

For computer scaling in particular, it would’ve been possible to make a reasonable prediction about 2022 computers in, say, 2000, but it would’ve had to have been a prediction about the distribution of outcomes, with a lot of weight on severely reduced performance gains in the future and some weight on a portfolio of possibilities that could’ve resulted in continued large gains. Someone making such a prediction would’ve had to, implicitly or explicitly, be familiar with ITRS semiconductor scaling roadmaps of the era as well as the causes of recent misses (my recollection from reading roadmaps back then was that, in the short term, companies had actually exceeded recent scaling predictions, but via mechanisms that were not expected to be scalable into the future) as well as things that could unexpectedly keep semiconductor scaling on track. Furthermore, such a predictor would also have to be able to evaluate architectural ideas that might have panned out in order to rule them out or assign them a low probability, such as dataflow processors, the basket of techniques people were working on to increase ILP in an attempt to move from the regime Tjaden and Flynn discussed in their classic 1970 and 1973 papers on ILP to something closer to the bound discussed by Riseman and Foster in 1972 and later by Nicolau and Fisher in 1984, etc.

Such a prediction would be painstaking work for someone who isn’t in the field because of the sheer number of different things that could have impacted computer scaling. Instead of doing this, futurists relied heavily on the pop-understanding they had about semiconductors. Kaku was notable among the futurists under discussion for taking seriously the idea that Moore’s law wasn’t smooth sailing in the future, but he incorrectly predicted when UV/EUV would run out of steam and also incorrectly had high certainty that some kind of more “quantum” technology would save computer performance scaling. Most other futurists who discussed computers used a line of reasoning like Kurzweil’s, who said that we can predict what will happen with “remarkable precision” due to the existence of “well-defined indexes”:

The law of accelerating returns applies to all of technology, indeed to any evolutionary process. It can be charted with remarkable precision in information-based technologies because we have well-defined indexes (for example, calculations per second per dollar, or calculations per second per gram) to measure them

Another thing to note here is that, even if you correctly predict an exponential curve of something, understanding the implications of that precise fact also requires an understanding of the big picture, which was shown by people like Yegge, Gates, and Myhrvold but not by the futurists discussed here. An example of roughly getting a scaling curve right but mispredicting the outcome was Dixon on the number of phone lines people would have in their homes. Dixon at least roughly correctly predicted the declining cost of phone lines but incorrectly predicted that this would result in people having many phone lines in their house despite also believing that digital technologies and cell phones would have much faster uptake than they did. With respect to phones, another missed prediction, one that came from not understanding the mechanism, was his prediction that the falling cost of phone calls would mean that tracking phone calls would be so expensive relative to the cost of calls that phone companies wouldn’t track individual calls.

For someone who has a bit of understanding about the underlying technology, this is an odd prediction. One reason the prediction seems odd is that the absolute cost of tracking who called whom is very small and the rate at which humans make and receive phone calls is bounded at a relatively low rate, so even if the cost of metadata tracking were very high compared to the cost of the calls themselves, the absolute cost of tracking metadata would still be very low. Another way to look at it would be to look at the number of bits of information transferred during a phone call vs. the number of bits of information necessary to store call metadata and the cost of storing that long enough to bill someone on a per-call basis. Unless medium-term storage became relatively more expensive than network by a mind-bogglingly large factor, it wouldn’t be possible for this prediction to be true, and Dixon also implicitly predicted exponentially falling storage costs via his predictions on the size of available computer storage, with a steep enough curve that this criterion shouldn’t be satisfied; and even if it were somehow satisfied, the cost of storage would still be so low as to be negligible.
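The back-of-envelope version of this comparison is short (all numbers below are rough assumptions for illustration, not figures from Dixon or anyone else):

```python
# Compare the data carried by a phone call to the data needed to log it.
# Assumed numbers: a classic 64 kbit/s PCM voice channel, a 5-minute call,
# and a ~50-byte call-detail record (caller, callee, start time, duration).

call_minutes = 5
voice_kbps = 64
call_bits = call_minutes * 60 * voice_kbps * 1000  # seconds * bits/second

metadata_bytes = 50
metadata_bits = metadata_bytes * 8

ratio = call_bits / metadata_bits
print(f"one {call_minutes}-minute call carries ~{ratio:,.0f}x more data "
      f"than the record needed to bill it")  # ~48,000x with these assumptions
```

With these assumed numbers, the call itself carries tens of thousands of times more data than its billing record, so metadata tracking could only dominate the cost of calls if storing a record were mind-bogglingly more expensive per bit than carrying the call.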

Panacea thinking

Another common issue is what Waleed Khan calls panacea thinking, where the person assumes that the solution is a panacea that is basically unboundedly great and can solve all problems. We can see this for quite a few futurists who were writing up until the 70s, where many assumed that computers would be able to solve any problem that required thought, computation, or allocation of resources and that resource scarcity would become irrelevant. But it turns out that quite a few problems don’t magically get solved because powerful computers exist. For example, the 2008 housing crash created a shortfall of labor for housing construction that only barely got back to historical levels just before covid hit. Having fast computers neither prevented this nor fixed this problem after it happened because the cause of the problem wasn’t a shortfall of computational resources. Some other topics to get this treatment are “nanotechnology”, “quantum”, “accelerating growth” / “decreased development time”, etc.

A closely related issue that almost every futurist here fell prey to is only seeing the upside of technological advancements, resulting in a kind of techno utopian view of the future. For example, in 2005, Kurzweil wrote:

The current disadvantages of Web-based commerce (for example, limitations in the ability to directly interact with products and the frequent frustrations of interacting with inflexible menus and forms instead of human personnel) will gradually dissolve as the trends move robustly in favor of the electronic world. By the end of this decade, computers will disappear as distinct physical objects, with displays built in our eyeglasses, and electronics woven in our clothing, providing full-immersion visual virtual reality. Thus, “going to a Web site” will mean entering a virtual-reality environment—at least for the visual and auditory senses—where we can directly interact with products and people, both real and simulated.

Putting aside the bit about how non-VR interfaces to computers would disappear before 2010, it’s striking how Kurzweil assumes that technological advancement will mean that corporations make experiences better for consumers instead of providing the same level of experience at a lower cost or a worse experience at an even lower cost.

Although that example is from Kurzweil, we can see the same techno utopianism in the other authors on Wikipedia’s list with the exception of Zerzan, whose predictions I didn’t tally up because prediction wasn’t really his shtick. For example, a number of other futurists combined panacea thinking with techno utopianism to predict that computers would cause things to operate with basically perfect efficiency without human intervention, allowing people at large to live a life of leisure. Instead, the benefits to the median person in the U.S. are subtle enough that people debate whether or not life has improved at all for the median person. And on the topic of increased efficiency, a number of people predicted an extreme version of just-in-time delivery that humanity hasn’t even come close to achieving and described its upsides, but no futurist under discussion mentioned the downsides of a world-wide distributed just-in-time manufacturing system and supply chain, which include increased fragility and decreased robustness; these downsides notably impacted quite a few industries from 2020 through at least 2022 due to covid, despite the worldwide system not being anywhere near as just-in-time or fragile as a number of futurists predicted.

Though not discussed here because they weren’t on Wikipedia’s list of notable futurists, there are pessimistic futurists such as Jaron Lanier and Paul Ehrlich. From a quick informal look at relatively well-known pessimistic futurists, it seems that pessimistic futurists haven’t been more accurate than optimistic futurists. Many made predictions that were too vague to score and the ones who didn’t tended to predict catastrophic collapse or overly dystopian futures which haven’t materialized. Fundamentally, dystopian thinkers made the same mistakes as utopian thinkers. For example, Paul Ehrlich fell prey to the same issues utopian thinkers fell prey to and he still maintains that his discredited book, The Population Bomb, was fundamentally correct, just like utopian futurists who maintain that their discredited work is fundamentally correct.

Evidence free assumptions

Another major source of errors among these futurists was making an instrumental assumption without any supporting evidence for it. A major example of this is Fresco’s theory that you can predict the future by starting from people’s values and working back from there, but he doesn’t seriously engage with the idea of how people’s values can be predicted. Since those are pulled from his intuition without being grounded in evidence, starting from people’s values creates a level of indirection, but doesn’t fundamentally change the problem of predicting what will happen in the future.

Fin

A goal of this project is to look at current predictors to see who’s using methods that have historically had a decent accuracy rate, but we’re going to save that for a future post. I normally don’t like splitting posts up into multiple parts, but since this post is 30k words (the number of words in a small book, and more words than most pop-sci books have once you remove the pop stories) and evaluating futurists is relatively self-contained, we’re going to stop with that (well, with a bit of an evaluation of some longtermist analyses that overlap with this post in the appendix).

In terms of concrete takeaways, you could consider this post a kind of negative result that supports the very boring idea that you’re not going to get very far if you make predictions on topics you don’t understand, whereas you might be able to make decent predictions if you have (or gain) a deep expertise of a topic and apply well-honed intuition to predict what might happen. We’ve looked at, in some detail, a number of common reasoning errors that cause predictions to miss at a high rate and also taken a bit of a look into some things that have worked for creating relatively accurate predictions.

A major caveat about what’s worked is that while using high-level techniques that work poorly is a good way to generate poor predictions, using high-level techniques that work well doesn’t mean much because the devil is in the details and, as trite as this is to say, you really need to think about things. This is something that people who are serious about looking at data often preach, e.g., you’ll see this theme come up on Andrew Gelman’s blog as well as in Richard McElreath’s Statistical Rethinking. McElreath, in a lecture targeted at social science grad students who don’t have a quantitative background, likens statistical methods to a golem. A golem will mindlessly do what you tell it to do, just like statistical techniques. There’s no substitute for using your brain to think through whether or not it’s reasonable to apply a particular statistical technique in a certain way. People often seem to want to use methods as a talisman to ward off incorrectness, but that doesn’t work.

We see this in the longtermist analyses we examine in the appendix, which claim to be more accurate than “classical” futurist analyses because they, among other techniques, state probabilities, which the literature on forecasting (e.g., Tetlock’s Superforecasting) says one should do. But the analyses fundamentally use the same techniques as the futurist analyses we looked at here and then add a few things on top that are also things that people who make accurate predictions do. This is backwards. Things like probabilities need to be a core part of modelling, not something added afterwards. This kind of backwards reasoning is a common error when doing data analysis and I would caution readers who think they’re safe against errors because their analyses can, at a high level, be described roughly similarly to good analyses.

Thanks to nostalgebraist, Arb Research (Misha Yagudin, Gavin Leech), Laurie Tratt, Fabian Giesen, David Turner, Yossi Kreinin, Catherine Olsson, Tim Pote, David Crawshaw, Jesse Luehrs, @TyphonBaalAmmon, Jamie Brandon, Hillel Wayne, Qualadore Qualadore, Sophia, Justin Blank, and @s__video for comments/corrections/discussion (and probably some others I forgot because this post is so long and I’ve gotten so many comments).

Appendix: brief notes on Superforecasting

  • Very difficult to predict more than 3-5 years out; people generally don’t do much better than random
    • Later in the book, 10 years is cited as a basically impossible timeframe, but this is scoped to certain kinds of predictions (the earlier statement of 3-5 years is more general): “Taleb, Kahneman, and I agree there is no evidence that geopolitical or economic forecasters can predict anything ten years out beyond the excruciatingly obvious—“there will be conflicts”—and the odd lucky hits that are inevitable whenever lots of forecasters make lots of forecasts. These limits on predictability are the predictable results of the butterfly dynamics of nonlinear systems. In my EPJ research, the accuracy of expert predictions declined toward chance five years out. And yet, this sort of forecasting is common, even within institutions that should know better”
    • One possibility is that people like Bill Gates are right due to hindsight bias, but that doesn’t seem correct w.r.t., e.g., being at Google making it obvious that mobile was the only way forward circa 2010
  • Ballmer prediction: “There’s no chance that the iPhone is going to get any significant market share. No chance.”
  • Very important to precisely write down forecasts
  • “big idea” predictors inaccurate (as in, heavily rely on one or a few big hammers, like “global warming”, “ecological disaster”, “Moore’s law”, etc., to drive everything)
  • Specific knowledge predictors (relatively) accurate; relied heavily on probabilistic thinking, used different analytical tools as appropriate
  • Good forecasters are fluent with numbers, generally aced numerical proficiency test given to forecasters, think probabilistically
  • Good forecasters not particularly high IQ; typical non-superforecaster IQ from the forecaster population was 70%-ile; typical superforecaster IQ was 80%-ile

See also this Tetlock interview with Tyler Cowen if you don’t want to read the whole book, although the book is a very quick read because it’s written in the standard pop-sci style, with a lot of anecdotes/stories.

On the people we looked at vs. the people Tetlock looked at, the predictors we looked at are operating in a very different style from the folks studied in the studies that led to the Superforecasting book. Both futurists and tech leaders were trying to predict a vision for the future whereas superforecasters were asked to answer very specific questions.

Another major difference among the accurate predictors is that the accurate predictors we looked at (other than Caplan) had very deep expertise in their fields. This may be one reason for the difference in timelines here, where it appears that some of our predictors can predict things more than 3-5 years out, contra Tetlock’s assertion. Another difference is in the kind of thing being predicted — a lot of the predictions we’re looking at here are fundamentally whether or not a trend will continue or if a nascent trend will become a long-running trend, which seems easier than a lot of the questions Tetlock had his forecasters try to answer. For example, in the opening of Superforecasting, Tetlock gives predicting the Arab Spring as an example of something that would’ve been practically impossible — while the conditions for it had been there for years, the proximal cause of the Arab Spring was a series of coincidences that would’ve been impossible to predict. This is quite different from and arguably much more difficult than someone in 1980 guessing that computers will continue to get smaller and faster, leading to handheld computers more powerful than supercomputers from the 80s.

Appendix: other evaluations

Of the evaluations above, the only intersection with the futurists evaluated here is Kurzweil. Holden Karnofsky says:

A 2013 project assessed Ray Kurzweil’s 1999 predictions about 2009, and a 2020 followup assessed his 1999 predictions about 2019. Kurzweil is known for being interesting at the time rather than being right with hindsight, and a large number of predictions were found and scored, so I consider this study to have similar advantages to the above study. … Kurzweil is notorious for his very bold and contrarian predictions, and I’m overall inclined to call his track record something between “mediocre” and “fine” – too aggressive overall, but with some notable hits

Karnofsky’s evaluation of Kurzweil being “fine” to “mediocre” relies on these two analyses done on LessWrong and then uses a very generous interpretation of the results to conclude that Kurzweil’s predictions are fine. Those two posts rate predictions as true, weakly true, cannot decide, weakly false, or false. Karnofsky then compares the number of true + weakly true to false + weakly false, which is one level of rounding up to get an optimistic result; another way to look at it is that any level other than “true” is false when read as written. This issue is magnified if you actually look at the data and methodology used in the LW analyses.

In the second post, the author, Stuart Armstrong, indirectly noted that there were actually no predictions that were, by strong consensus, very true when he noted that the “most true” prediction had a mean score of 1.3 (1 = true, 2 = weakly true … , 5 = false) and the second highest rated prediction had a mean score of 1.4. Although Armstrong doesn’t note this in the post, if you look at the data, you’ll see that the third “most true” prediction had a mean score of 1.45 and the fourth had a mean score of 1.6, i.e., if you round to the nearest prediction score, only 3 out of 105 predictions score “true” and 32 are >= 4.5 and score “false”. Karnofsky reads Armstrong’s post as scoring 12% of predictions true, but the post effectively makes no comment on what fraction of predictions were scored true and the 12% came from summing up the total number of each rating given.

I’m not going to say that taking the mean of each question is the only way one could aggregate the numbers (taking the median or modal values could also be argued for, as well as some more sophisticated scoring function, an extremizing function, etc.), but summing up all of the votes across all questions results in a nonsensical number that shouldn’t be used for almost anything. If every rater rated every prediction or there was a systematic interleaving of who rated what questions, then the number could be used for something (though not as a score for what fraction of predictions are accurate), but since each rater could skip any questions (although people were instructed to start rating at the first question and rate all questions until they stop, people did not do that and skipped arbitrary questions), aggregating the number of each score given is not meaningful and actually gives very little insight into what fraction of questions are true. There’s an air of rigor about all of this; there are lots of numbers, standard deviations are discussed, etc., but the way most people, including Karnofsky, interpret the numbers in the post is incorrect. I find it a bit odd that, with all of the commentary of these LW posts, few people spent the one minute (and I mean one minute literally — it took me a minute to read the post, see the comment Armstrong made which is a red flag, and then look at the raw data) it would take to look at the data and understand what the post is actually saying, but as we’ve noted previously, almost no one actually reads what they’re citing.
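A toy example (with invented ratings, not the actual LW data) shows how pooling all votes across questions can diverge from scoring each prediction by its own mean when raters skip arbitrary questions:

```python
# Invented ratings on the LW posts' scale: 1 = true ... 5 = false.
# Raters skipped questions, so predictions have different numbers of votes.
ratings = {
    "pred_a": [1, 2, 2],    # mean 1.67 -> rounds to "weakly true", not "true"
    "pred_b": [1, 3, 3],    # mean 2.33 -> "weakly true", not "true"
    "pred_c": [5, 5, 5],    # mean 5.0  -> "false"
}

# Pooled tally: fraction of *votes* that are "true" (score 1)
all_votes = [v for votes in ratings.values() for v in votes]
pooled_true = all_votes.count(1) / len(all_votes)

# Per-prediction: a prediction counts as "true" if its mean rounds to 1
per_pred_true = sum(
    round(sum(v) / len(v)) == 1 for v in ratings.values()
) / len(ratings)

print(f"pooled: {pooled_true:.0%} of votes are 'true', "
      f"but {per_pred_true:.0%} of predictions score 'true'")
```

Here 22% of votes are “true” votes even though zero predictions have a consensus rating of “true”; which number you get depends entirely on which raters happened to vote on which questions, which is why summing votes across questions says very little about the fraction of predictions that are accurate.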

Coming back to Karnofsky’s rating of Kurzweil as fine to mediocre, this relies on two levels of rounding: first, doing the wrong kind of aggregation on the raw data to round an accuracy of perhaps 3% up to 12%, and second, rounding up again by doing the comparison mentioned above instead of looking at the number of true statements. If we use a strict reading and look at the 3%, the numbers aren’t so different from what we see in this post. If we look at Armstrong’s other post, there are too few raters to really produce any kind of meaningful aggregation. Armstrong rated every prediction, one person rated 68% of predictions, and no one else even rated half of the 172 predictions. The 8 predictors rated 506 predictions, so the number of ratings is equivalent to having 3 raters rate all predictions, but the results are much noisier due to the arbitrary way people decided to pick predictions. This issue is much worse for the 2009 predictions than the 2019 predictions due to the smaller number of raters combined with the sparseness of most raters, making this data set fairly low fidelity; if you want to make a simple inference from the 2019 data, you’re probably best off using Armstrong’s ratings and discarding the rest (there are non-simple analyses one could do, but if you’re going to do that, you might as well just rate the predictions yourself).

Another fundamental issue with the analysis is that it relies on aggregating votes from a population that’s heavily drawn from Less Wrong readers and the associated community. As we discussed here, it’s common to see the most upvoted comments in forums like HN, lobsters, LW, etc., be statements that can clearly be seen to be wrong with no specialized knowledge and a few seconds of thought (and an example is given from LW in the link), so why should an aggregation of votes from the LW community be considered meaningful?

On the Karnofsky / Arb Research evaluation, Karnofsky appears to use a less strict scoring than I do and once again optimistically “rounds up”. The Arb Research report scores each question as “unambiguously wrong”, “ambiguous or near miss”, or “unambiguously right” but Karnofsky’s scoring removes the ambiguous and near miss results, whereas my scoring only removes the ambiguous results, the idea being that a near miss is still a miss. Accounting for those reduces the scores substantially but still leaves Heinlein, Clarke, and Asimov with significantly higher scores than the futurists discussed in the body of this post. For the rest, many of the predictions that were scored as “unambiguously right” are ones I would’ve declined to rate for similar reasons to predictions which I declined to rate (e.g., a prediction that something “may well” happen was rated as “unambiguously right” and I would consider that unfalsifiable and therefore not include it). There are also quite a few “unambiguously right” predictions that I would rate as incorrect using a strict reading similar to the readings that you can see below in the detailed appendix.

Another place where Karnofsky rounds up is that Arb Research notes that ‘The predictions are usually very vague. Almost none take the form “By Year X technology Y will pass on metric Z”’. This makes the prediction accuracy from the futurists Arb Research looked at not comparable to precise predictions of the kind Caplan or Karnofsky himself makes, but Karnofsky directly uses those numbers to justify why his own predictions are accurate without noting that the numbers are not comparable. Since the non-comparable numbers were already rounded up, there are two levels of rounding here (more on this later).

As noted above, some of the predictions are ones that I wouldn’t rate because I don’t see where the prediction is, such as this one (this is the “exact text” of the prediction being scored, according to the Arb Research spreadsheet), which was scored “unambiguously right”

application of computer technology to professional sports be counterproduc- tive? Would the public become less interested in sports or in betting on the outcome if matters became more predictable? Or would there always be enough unpredictability to keep interest high? And would people derive particular excitement from beat ing the computer when low-ranking players on a particular team suddenly started

This seems like a series of questions about something that might happen, but wouldn’t be false if none of these happened, so would not count as a prediction in my book.

Similarly, I would not have rated the following prediction, which Arb also scored “unambiguously right”

its potential is often realized in ways that seem miraculous, not because of idealism but because of the practical benefits to society. Thus, the computer’s ability to foster human creativity may well be utilized to its fullest, not because it would be a wonderful thing but because it will serve important social functions Moreover, we are already moving in the

Another kind of prediction that was sometimes scored “unambiguously right” that I declined to score was predictions of the form “this trend that’s in progress will become somewhat {bigger / more important}”, such as the following:

The consequences of human irresponsibility in terms of waste and pollution will become more apparent and unbearable with time and again, attempts to deal with this will become more strenuous. It is to be hoped that by 2019, advances in technology will place tools in our hands that will help accelerate the process whereby the deterioration of the environment will be reversed.

On Karnofsky’s larger point, that we should trust longtermist predictions because futurists basically did fine and longtermists are taking prediction more seriously and trying harder and should therefore generate better predictions, that’s really a topic for another post, but I’ll briefly discuss it here because of the high intersection with this post. There are two main pillars of this argument. First, that futurists basically did fine which, as we’ve seen, relies on a considerable amount of rounding up. And second, that the methodologies longtermists are using today are considerably more effective than what futurists did in the past.

Karnofsky says that the futurists he looked at “collect casual predictions – no probabilities given, little-to-no reasoning given, no apparent attempt to collect evidence and weigh arguments”, whereas Karnofsky’s summaries use (among other things):

  • Reports that Open Philanthropy employees spent thousands of hours on, systematically presenting evidence and considering arguments and counterarguments.
  • A serious attempt to take advantage of the nascent literature on how to make good predictions; e.g., the authors (and I) have generally done calibration training, and have tried to use the language of probability to be specific about our uncertainty.

We’ve seen that, when evaluating futurists with an eye towards evaluating longtermists, Karnofsky heavily rounds up in the same way Kurzweil and other futurists do, to paint the picture he wants to create. There’s also the matter of his summary of a report on Kurzweil’s predictions being incorrect because he didn’t notice that the author of that report used a methodology that produced nonsense numbers favorable to the conclusion Karnofsky favors. It’s true that Karnofsky and the reports he cites do the superficial things that the forecasting literature notes are associated with more accurate predictions, like stating probabilities. But for this to work, the probabilities need to come from understanding the data. If you take a pile of data, interpret it incorrectly, and then round the interpretation up further to support a particular conclusion, throwing a probability on it at the end is not likely to make it accurate. Although he doesn’t use these words, a key thing Tetlock notes in his work is that people who round things up or down to conform to a particular agenda produce low-accuracy predictions. Since Karnofsky’s errors and rounding lean heavily in one direction, that seems to be happening here.

We can see this in other analyses as well. Although digging into material other than futurist predictions is outside the scope of this post, nostalgebraist has done this, and he said (in a private communication that he gave me permission to mention) that Karnofsky’s summary of https://openphilanthropy.org/research/could-advanced-ai-drive-explosive-economic-growth/ is substantially more optimistic about AI timelines than the underlying report, in that there’s at least one major concern raised in the report that’s not brought up as a “con” in Karnofsky’s summary. nostalgebraist later wrote this post, where he (implicitly) notes that the methodology used in a report he examined in detail is fundamentally not so different from what the futurists we discussed used. There are quite a few things that may make the report appear credible (it’s hundreds of pages of research, there’s a complex model with many moving parts, etc.), but when it comes down to it, most of the model is irrelevant and the model boils down to a few simple variables. If you do a sensitivity analysis on the most important variable (how much Moore’s law will improve FLOPS/$), the output behavior doesn’t make any sense: Moore’s law running out of steam after “conventional” improvements give us a 144x improvement would give us a 34% chance of transformative AI (TAI) by 2100, a 144*6x increase gives a 52% chance, and a 144*600x increase gives a 66% chance (and with the predicted 60000x improvement, there’s a 78% chance). So the model is, at best, highly flawed unless you believe that going from a 144x improvement to a 144*6x improvement in computer cost gives almost as much increase in the probability of TAI as going from a 144*6x to a 144*60000x improvement in computer cost.
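To make those sensitivity numbers concrete, here’s a small sketch (my own illustration, using only the (multiplier, probability) pairs quoted above, not anything from the report itself) that computes how much probability the model gains per factor of 10 of additional compute improvement:

```python
import math

# (FLOPS/$ multiplier, P(TAI by 2100)) pairs quoted above, from the
# sensitivity analysis on the report's most important variable
points = [
    (144, 0.34),          # Moore's law stalls after "conventional" improvements
    (144 * 6, 0.52),
    (144 * 600, 0.66),
    (144 * 60000, 0.78),  # the report's predicted improvement
]

# Probability gained per factor of 10 ("decade") of extra compute,
# between consecutive assumptions
for (m0, p0), (m1, p1) in zip(points, points[1:]):
    decades = math.log10(m1 / m0)
    rate = (p1 - p0) / decades
    print(f"{m0}x -> {m1}x: +{p1 - p0:.2f} over {decades:.1f} decades, "
          f"{rate:.2f}/decade")
```

The first factor of 6 moves the output roughly three times as much per decade of compute as the later factors of 100 do, which is the nonsensical behavior described above.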

The part about all of this that makes it fundamentally the same thing the futurists here did is that the estimate of the FLOPS/$ improvement, which is instrumental for this prediction, is pulled from thin air by someone who is not a deep expert in semiconductors, computer architecture, or a related field that might inform the estimate.

As Karnofsky notes, a number of things were done in an attempt to make this estimate reliable (“the authors (and I) have generally done calibration training, and have tried to use the language of probability”) but, when you come up with a model where a single variable controls most of the variance and the estimate for that variable is picked out of thin air, all of the modeling work actually reduces my confidence in the estimate. If you say that, based on your intuition, you think there’s some significant probability of TAI by 2100 (10% or 50% or 80% or whatever number you want), I’d say that sounds plausible but wouldn’t place any particular faith in the estimate. But if you take a model that produces nonsense results and then pick an arbitrary input to the model that you have no good intuition about to arrive at an 80% chance, you’ve basically picked a random number that happens to be 80%. If a model’s output is highly determined by a number you pick with low confidence, the model can’t produce an 80%-confidence estimate, but that’s what happened here.
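As an illustration of that last point (again my own sketch, not anything from the report): if you interpolate between the sensitivity points quoted earlier and treat the key input as uncertain across the quoted range, the model’s output is a wide spread rather than a point estimate:

```python
import math
import random

random.seed(0)

# (log10 of FLOPS/$ multiplier, P(TAI by 2100)) pairs quoted earlier
curve = [(math.log10(144), 0.34),
         (math.log10(144 * 6), 0.52),
         (math.log10(144 * 600), 0.66),
         (math.log10(144 * 60000), 0.78)]

def p_tai(log_mult):
    """Piecewise-linear interpolation over the quoted sensitivity points."""
    for (x0, y0), (x1, y1) in zip(curve, curve[1:]):
        if x0 <= log_mult <= x1:
            return y0 + (y1 - y0) * (log_mult - x0) / (x1 - x0)
    raise ValueError("outside quoted range")

# If all you know is that the input lies somewhere in the quoted range,
# the output spans most of 34%..78%
lo, hi = curve[0][0], curve[-1][0]
samples = [p_tai(random.uniform(lo, hi)) for _ in range(10_000)]
print(f"output spread: {min(samples):.2f} to {max(samples):.2f}")
```

Quoting a single high number from a model like this conveys a precision that the input doesn’t support.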

At one point, in summarizing the report, Karnofsky says

I consider the “evolution” analysis to be very conservative, because machine learning is capable of much faster progress than the sort of trial-and-error associated with natural selection. Even if one believes in something along the lines of “Human brains reason in unique ways, unmatched and unmatchable by a modern-day AI,” it seems that whatever is unique about human brains should be re-discoverable if one is able to essentially re-run the whole history of natural selection. And even this very conservative analysis estimates a ~50% chance of transformative AI by 2100

But it’s impossible to make any kind of “very conservative” estimate when the estimate comes from a model which produces nonsense. As noted above, if we take one of the major variables and set it to a low but arguably plausible number, the total estimate drops into the 30s, so 50% doesn’t make sense as a “very conservative” estimate, and that’s only looking at plausibly pessimizing one variable that goes into the model.

More generally, the whole methodology is backwards — if you have deep knowledge of a topic, it can be valuable to put a number down to convey the certainty of your knowledge to other people, and if you don’t have deep knowledge but are trying to understand an area, it can be valuable to state your uncertainties so that you know when you’re just guessing. But here, we have a fairly confidently stated estimate (nostalgebraist notes that Karnofsky says “Bio Anchors estimates a >10% chance of transformative AI by 2036, a ~50% chance by 2055, and an ~80% chance by 2100.”) that’s based on a nonsense model that relies on a variable picked out of thin air. Naming a high probability after the fact and then naming a lower number and saying that’s conservative, when it’s based on this kind of modeling, is just window dressing.

Back to other evaluators: on Justin Rye’s evaluations, I would grade the predictions “as written” and therefore more strictly than he did, and would end up with lower scores.

For the predictors we looked at in this document who mostly or nearly exclusively give vague predictions, I declined to give them anything like a precise numerical score. To be clear, I think there’s value in trying to score vague predictions and near misses, but that’s a different exercise than what this document did, so the scores aren’t directly comparable.

A number of people have said that predictions by people who make bold predictions, the way Kurzweil does, are actually pretty good. After all, if someone makes a lot of bold predictions and they’re all off by 10 years, that person will have useful insights even if they lose all their bets and get taken to the cleaners in prediction markets. However, that doesn’t mean that someone who makes bold predictions should always “get credit for” making bold predictions. For example, in Kurzweil’s case, 7% accuracy might not be bad if he uniformly predicted really bold stuff like unbounded life span by 2011. However, that only applies if the hits as well as the misses are bold predictions, which was not the case in the sampled set of predictions for Kurzweil here. Of Kurzweil’s predictions evaluated in this document, the correct ones tended to be very boring, e.g., there will be no giant economic collapse that stops economic growth, cochlear implants will be in widespread use in 2019 (predicted in 1999), etc.

The former is a Caplan-esque bet against people who were making wild predictions that there would be severe or total economic collapse. There’s value in bets like that, but it’s also not surprising when such a bet is successful. For the latter, the data I could quickly find on cochlear implant rates showed that implant rates increased slowly and linearly from the time Kurzweil made the bet until 2019. I would call that a correct prediction, but the prediction is basically just betting that nothing drastically drops cochlear implant rates, making that another Caplan-esque safe bet and not a bet that relies on the ideas about the law of accelerating returns that Kurzweil’s wild bets rely on.

If someone makes 40 boring bets of which 7 are right and another person makes 40 boring bets and 22 wild bets and 7 of their boring bets and 0 of their wild bets are right (these are arbitrary numbers as I didn’t attempt to classify Kurzweil’s bets as wild or not other than the 7 that were scored as correct), do you give the latter person credit for having “a pretty decent accuracy given how wild their bets were”? I would say no.
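Working through the arithmetic of that hypothetical (the numbers are arbitrary, as noted above):

```python
# Person A: 40 boring bets, 7 right.
# Person B: the same 40 boring bets plus 22 wild bets, all of which miss.
a_accuracy = 7 / 40
b_accuracy = 7 / (40 + 22)
print(f"A: {a_accuracy:.1%}, B: {b_accuracy:.1%}")  # A: 17.5%, B: 11.3%
```

The wild misses can only lower B’s overall accuracy, so crediting B for boldness means crediting bets that all missed.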

On the linked HN thread, a futurist scored themselves 5 out of 10, but most HN commenters scored the same person at 0 out of 10 or, generously, at 1 out of 10, with the general comment that this person and other futurists tend to score themselves much too generously:

sixQuarks: I hate it when “futurists” cherry pick an outlier situation and say their prediction was accurate – like the bartender example.

karaterobot: I wanted to say the same thing. He moved the goal posts from things which “would draw hoots of derision from an audience from the year 2022” to things which there has been some marginal, unevenly distributed, incremental change to in the last 10 years, then said he got it about 50% right. More generally, this is the issue I have with futurists: they get things wrong, and then just keep making more predictions. I suppose that’s okay for them to do, unless they try to get people to believe them, and make decisions based on their guesses.

chillacy: Reminded me of the ray [kurzweil] predictions: extremely generous grading.

Appendix: other reading

Appendix: detailed information on predictions

Ray Kurzweil

4/59 for rated predictions. If you also count the ones I didn’t include but that one could arguably include, then 7/62.

This list comes from Wikipedia’s bulleted list of Kurzweil’s predictions at the time Peter Diamandis, Kurzweil’s co-founder of SingularityU, cited it to bolster the claim that Kurzweil has an 86% prediction accuracy rate. Off the top of my head, this misses quite a few predictions that Kurzweil made, such as life expectancy being “over one hundred” by 2019 and 120 by 2029 (prediction made in 1999), life expectancy becoming unbounded (increasing at one year per year) by 2011 (prediction made in 2001), and a computer beating the top human in chess by 2000 (prediction made in 1990).

It’s likely that Kurzweil’s accuracy rate would change somewhat if we surveyed all of his predictions, but it seems extremely implausible for the rate to hit 86% and, more broadly, looking at Kurzweil’s vision of what the world would be like, it also seems impossible that we live in a world that’s generally close to Kurzweil’s imagined future.
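As a quick arithmetic check on the gap between the scores in this appendix and the claimed rate:

```python
# Scored rates from this appendix vs. the claimed 86% accuracy
strict_correct, strict_total = 4, 59   # rated predictions only
loose_correct, loose_total = 7, 62     # counting the arguable inclusions
print(f"strict: {strict_correct / strict_total:.1%}")  # strict: 6.8%
print(f"loose:  {loose_correct / loose_total:.1%}")    # loose:  11.3%
# Hitting the claimed 86% on the strict set would require roughly
# 0.86 * 59 ≈ 51 correct predictions rather than 4
print(round(0.86 * strict_total))  # 51
```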

  • 1985
    • Voice activated typewriter / speech writer by 1985 (founded a company to build this in 1982)
    • No. Not true in any meaningful sense. Speech to text with deep learning, circa 2013, was accurate enough that it could be used, with major corrections, on a computer, but it would’ve been hopeless for a typewriter
  • “Early 2000s” (wikipedia notes that this is listed before 2010 in Kurzweil’s chronology, so this should be significantly before 2010 unless the book is very poorly organized)
    • Translating telephones allow people to speak to each other in different languages.
    • No. Today, this works poorly and translations are comically bad, though it can sort of work in a “help a tourist get around” sort of way with deep learning; it was basically hopeless in 2010
    • Machines designed to transcribe speech into computer text allow deaf people to understand spoken words.
    • No. Per above, very poor in 2010
    • Exoskeletal, robotic leg prostheses allow the paraplegic to walk.
    • No. Maybe some prototype existed, but this still isn’t meaningfully deployed in 2022
    • Telephone calls are routinely screened by intelligent answering machines that ask questions to determine the call’s nature and priority.
    • Definitely not in 2010. This arguably exists in 2022, although I think it would be a stretch to call phone trees “intelligent” since they generally get confused if you don’t do the keyword matching they’re looking for
    • “Cybernetic chauffeurs” can drive cars for humans and can be retrofitted into existing cars. They work by communicating with other vehicles and with sensors embedded along the roads.
    • No.
  • “Early 21st century” (wikipedia notes that this is listed before 2010 in Kurzweil’s chronology, so this should be significantly before 2010 unless the book is very poorly organized)
    • The classroom is dominated by computers. Intelligent courseware that can tailor itself to each student by recognizing their strengths and weaknesses. Media technology allows students to manipulate and interact with virtual depictions of the systems and personalities they are studying.
    • No. If you really want to make a stretch argument, you could say this about 2022, but I’d still say no for 2022
    • A small number of highly skilled people dominates the entire production sector. Tailoring of products for individuals is common.
    • No. You could argue that, as written, the 2nd part of this was technically satisfied, but only in a trivial way compared to the futurist vision Kurzweil was predicting
    • Drugs are designed and tested in simulations that mimic the human body.
    • No.
    • Blind people navigate and read text using machines that can visually recognize features of their environment.
    • Not in 2010. Deep learning unlocked some of this later, though, and continues to improve
  • 2010
    • PCs are capable of answering queries by accessing information wirelessly via the Internet.
    • Yes
  • 2009
    • Most books will be read on screens rather than paper.
    • No
    • Most text will be created using speech recognition technology.
    • No
    • Intelligent roads and driverless cars will be in use, mostly on highways.
    • No
    • People use personal computers the size of rings, pins, credit cards and books.
    • No. One of these was true (books), but the prediction is an “and” and not an “or”
    • Personal worn computers provide monitoring of body functions, automated identity and directions for navigation.
    • No. Arguably true with things like a Garmin band some athletes wear around the chest for heart rate, but not true when the whole statement is taken into account or in the spirit of the prediction
    • Cables are disappearing. Computer peripheries use wireless communication.
    • No. Even in 2022, cables generally haven’t come close to disappearing and, unfortunately, wireless peripherals generally work poorly (Gary Bernhardt, Ben Kuhn, etc.)
    • People can talk to their computer to give commands.
    • Yes. I would say this one is actually a “no” in spirit if you look at Kurzweil’s futurist vision, but it was technically true that this was possible in 2009, although it worked quite poorly
    • Computer displays built into eyeglasses for augmented reality are used
    • No. You can argue that someone, somewhere, was using these, but pilots were using head mounted displays in 1999, so it’s nonsensical to argue that limited uses like that constitute a successful prediction of the future
    • Computers can recognize their owner’s face from a picture or video.
    • No
    • Three-dimensional chips are commonly used.
    • No
    • Sound producing speakers are being replaced with very small chip-based devices that can place high resolution sound anywhere in three-dimensional space.
    • No
    • A $1,000 computer can perform a trillion calculations per second.
    • Undefined. Technically true, but using peak ops to measure computer performance is generally considered too silly to do by people who know much about computers. In this case, for this to merely be a bad benchmark and not worthless, the kind of calculation would have to be defined.
    • There is increasing interest in massively parallel neural nets, genetic algorithms and other forms of “chaotic” or complexity theory computing.
    • No. There was a huge uptick in interest in neural nets in 2012 due to the “Alexnet” paper, but note that this prediction is an “and” and would’ve been untrue even in the “or” form in 2009
    • Research has been initiated on reverse engineering the brain through both destructive and non-invasive scans.
    • Undefined. Very vague and could easily argue this either way
    • Autonomous nanoengineered machines have been demonstrated and include their own computational controls.
    • Unknown (to me). I don’t really care to try to look this one up since the accuracy rate of these predictions is so low that whether or not this one is accurate doesn’t matter and I don’t know where I’d look this one up
  • 2019
    • The computational capacity of a $4,000 computing device (in 1999 dollars) is approximately equal to the computational capability of the human brain (20 quadrillion calculations per second).
    • Undefined. Per above prediction on computational power, raw ops per second is basically meaningless
    • The summed computational powers of all computers is comparable to the total brainpower of the human race.
    • Undefined. First, you need a non-stupid metric to compare these by
    • Computers are embedded everywhere in the environment (inside of furniture, jewelry, walls, clothing, etc.).
    • No. There are small computers, but this is arguing they’re ubiquitously inside common household items, which they’re not
    • People experience 3-D virtual reality through glasses and contact lenses that beam images directly to their retinas (retinal display). Coupled with an auditory source (headphones), users can remotely communicate with other people and access the Internet.
    • No
    • These special glasses and contact lenses can deliver “augmented reality” and “virtual reality” in three different ways. First, they can project “heads-up-displays” (HUDs) across the user’s field of vision, superimposing images that stay in place in the environment regardless of the user’s perspective or orientation. Second, virtual objects or people could be rendered in fixed locations by the glasses, so when the user’s eyes look elsewhere, the objects appear to stay in their places. Third, the devices could block out the “real” world entirely and fully immerse the user in a virtual reality environment.
    • No. You need different devices for these use cases and for the HUD use case, the field of view is small and images do not stay in place regardless of the user’s perspective or orientation
    • People communicate with their computers via two-way speech and gestures instead of with keyboards. Furthermore, most of this interaction occurs through computerized assistants with different personalities that the user can select or customize. Dealing with computers thus becomes more and more like dealing with a human being.
    • No. Some people sometimes do this, but I’d say this implies with “instead” that speech and gestures have replaced keyboards, which they have not
    • Most business transactions or information inquiries involve dealing with a simulated person.
    • No
    • Most people own more than one PC, though the concept of what a “computer” is has changed considerably: Computers are no longer limited in design to laptops or CPUs contained in a large box connected to a monitor. Instead, devices with computer capabilities come in all sorts of unexpected shapes and sizes.
    • No if you literally use the definition of “most people” and consider a PC to be a general purpose computing device (which a smartphone arguably is), but probably yes for people at, say, 90%-ile wealth and above in the U.S. or other high-SES countries
    • Cables connecting computers and peripherals have almost completely disappeared.
    • No
    • Rotating computer hard drives are no longer used.
    • No
    • Three-dimensional nanotube lattices are the dominant computing substrate.
    • No
    • Massively parallel neural nets and genetic algorithms are in wide use.
    • No. Note the use of “and” here
    • Destructive scans of the brain and noninvasive brain scans have allowed scientists to understand the brain much better. The algorithms that allow the relatively small genetic code of the brain to construct a much more complex organ are being transferred into computer neural nets.
    • No
    • Pinhead-sized cameras are everywhere.
    • No
    • Nanotechnology is more capable and is in use for specialized applications, yet it has not yet made it into the mainstream. “Nanoengineered machines” begin to be used in manufacturing.
    • Unknown (to me). I don’t really care to try to look this one up since the accuracy rate of these predictions is so low that whether or not this one is accurate doesn’t matter and I don’t know where I’d look this one up
    • Thin, lightweight, handheld displays with very high resolutions are the preferred means for viewing documents. The aforementioned computer eyeglasses and contact lenses are also used for this same purpose, and all download the information wirelessly.
    • No. Ironically, a lot of people prefer things like Kindles for viewing documents, but they’re quite low resolution (a 2019 Kindle has a resolution of 800×600); many people still prefer paper for viewing documents for a variety of reasons
    • Computers have made paper books and documents almost completely obsolete.
    • No
    • Most learning is accomplished through intelligent, adaptive courseware presented by computer-simulated teachers. In the learning process, human adults fill the counselor and mentor roles instead of being academic instructors. These assistants are often not physically present, and help students remotely. Students still learn together and socialize, though this is often done remotely via computers.
    • No
    • All students have access to computers.
    • No. True in some places, though.
    • Most human workers spend the majority of their time acquiring new skills and knowledge.
    • No
    • Blind people wear special glasses that interpret the real world for them through speech. Sighted people also use these glasses to amplify their own abilities. Retinal and neural implants also exist, but are in limited use because they are less useful.
    • No
    • Deaf people use special glasses that convert speech into text or signs, and music into images or tactile sensations. Cochlear and other implants are also widely used.
    • Yes? I think this is actually a no in terms of whether or not Kurzweil’s vision was realized, but these are possible and it isn’t the case that no one was using these. I’m bundling the cochlear implant prediction in here because it’s so boring. It was arguably already true when the prediction was made in 1999, and reaching the usage rate it did in 2019 basically just required continued slow linear growth of the implant rate, i.e., people not rejecting the idea of cochlear implants outright and nothing else superseding cochlear implants.
    • People with spinal cord injuries can walk and climb steps using computer-controlled nerve stimulation and exoskeletal robotic walkers.
    • No
    • Computers are also found inside of some humans in the form of cybernetic implants. These are most commonly used by disabled people to regain normal physical faculties (e.g. Retinal implants allow the blind to see and spinal implants coupled with mechanical legs allow the paralyzed to walk).
    • No, at least not at the ubiquity implied by Kurzweil’s vision
    • Language translating machines are of much higher quality, and are routinely used in conversations.
    • Yes, but mostly because this prediction is basically meaningless (language translation was of a “much higher quality” in 2019 than 1999)
    • Effective language technologies (natural language processing, speech recognition, speech synthesis) exist
    • Yes, although arguable
    • Access to the Internet is completely wireless and provided by wearable or implanted computers.
    • No
    • People are able to wirelessly access the Internet at all times from almost anywhere
    • No. This might feel true inside a big city, but is obviously untrue even on a road trip that stays on the U.S. interstate highway system and becomes even less true if you drive away from the interstate and less true once again if you go to places that can’t be driven to
    • Devices that deliver sensations to the skin surface of their users (e.g. tight body suits and gloves) are also sometimes used in virtual reality to complete the experience. “Virtual sex”—in which two people are able to have sex with each other through virtual reality, or in which a human can have sex with a “simulated” partner that only exists on a computer—becomes a reality. Just as visual- and auditory virtual reality have come of age, haptic technology has fully matured and is completely convincing, yet requires the user to enter a V.R. booth. It is commonly used for computer sex and remote medical examinations. It is the preferred sexual medium since it is safe and enhances the experience.
    • No
    • Worldwide economic growth has continued. There has not been a global economic collapse.
    • Yes
    • The vast majority of business interactions occur between humans and simulated retailers, or between a human’s virtual personal assistant and a simulated retailer.
    • No? Depends on what “simulated retailers” means here. In conjunction with how Kurzweil talks about simulations, VR, haptic devices that are fully immersive, etc., I’d say this is a “no”
    • Household robots are ubiquitous and reliable
    • No
    • Computers do most of the vehicle driving—-humans are in fact prohibited from driving on highways unassisted. Furthermore, when humans do take over the wheel, the onboard computer system constantly monitors their actions and takes control whenever the human drives recklessly. As a result, there are very few transportation accidents.
    • No
    • Most roads now have automated driving systems—networks of monitoring and communication devices that allow computer-controlled automobiles to safely navigate.
    • No
    • Prototype personal flying vehicles using microflaps exist. They are also primarily computer-controlled.
    • Unknown (to me). I don’t really care to try to look this one up since the accuracy rate of these predictions is so low that whether or not this one is accurate doesn’t matter and I don’t know where I’d look this one up
    • Humans are beginning to have deep relationships with automated personalities, which hold some advantages over human partners. The depth of some computer personalities convinces some people that they should be accorded more rights
    • No
    • A growing number of humans believe that their computers and the simulated personalities they interact with are intelligent to the point of human-level consciousness, experts dismiss the possibility that any could pass the Turing Test. Human-robot relationships begin as simulated personalities become more convincing.
    • No
    • Interaction with virtual personalities becomes a primary interface
    • No? Depends on what “primary interface” means here, but I think not given Kurzweil’s overall vision
    • Public places and workplaces are ubiquitously monitored to prevent violence and all actions are recorded permanently. Personal privacy is a major political issue, and some people protect themselves with unbreakable computer codes.
    • No. True of some public spaces in some countries, but untrue as stated.
    • The basic needs of the underclass are met
    • No. Not even true when looking at some high-SES countries, like the U.S., let alone the entire world
    • Virtual artists—creative computers capable of making their own art and music—emerge in all fields of the arts.
    • No. Maybe arguably technically true, but I think not even close in spirit in 2019

The list above only uses the bulleted predictions from Wikipedia under the section that has per-timeframe sections. If you pull in other ones from the same page that could be evaluated, which includes predictions like “‘nanotechnology-based’ flying cars would be available [by 2026]”, this doesn’t hugely change the accuracy rate (and actually can’t, due to the relatively small number of other predictions).

Jacque Fresco

The foreword to Fresco’s book gives a pretty good idea of what to expect from Fresco’s predictions:

Looking forward is an imaginative and fascinating book in which the authors take you on a journey into the culture and technology of the twenty-first century. After an introductory section that discusses the “Things that Shape Your Future,” you will explore the whys and wherefores of the unfamiliar, alarming, but exciting world of a hundred years from now. You will see this society through the eyes of Scott and Hella, a couple of the next century. Their living quarters are equipped with a cybernator, a seemingly magical computer device, but one that is based on scientific principles now known. It regulates sleeping hours, communications throughout the world, an incredible underwater living complex, and even the daily caloric intake of the “young” couple. (They are in their forties but can expect to live 200 years.) The world that Scott and Hella live in is a world that has achieved full weather control, has developed a finger-sized computer that is implanted in the brain of every baby at birth (and the babies are scientifically incubated; the women of the twenty-first century need not go through the pains of childbirth), and that has perfected genetic manipulation that allows the human race to be improved by means of science. Economically, the world is Utopian by our standards. Jobs, wages, and money have long since been phased out. Nothing has a price tag, and personal possessions are not needed. Nationalism has been surpassed, and total disarmament has been achieved; educational technology has made schools and teachers obsolete. The children learn by doing, and are independent in this friendly world by the time they are five.

The chief source of this greater society is the Correlation Center, “Corcen,” a gigantic complex of computers that serves but never enslaves mankind. Corcen regulates production, communication, transportation and all other burdensome and monotonous tasks of the past. This frees men and women to achieve creative challenging experiences rather than empty lives of meaningless leisure. Obviously this book is speculative, but it is soundly based upon scientific developments that are now known.

As mentioned above, Fresco makes the claim that it’s possible to predict the future and that, to do so, one should start with the values people will have in the future. Many predictions are about “the 21st century”, so they can arguably be defended as still potentially accurate, although, given the way the book talks about the stark divide between “the 20th century” and “the 21st century”, we should already have seen the changes mentioned in the book since we’re no longer in “the 20th century” and the book makes no reference to a long period of transition in between. Fresco does make some specific statements about things that will happen by particular dates, which are covered later. For “the 21st century”, his predictions from the first section of his book are:

  • There will be no need for laws, such as a law against murder, because humans will no longer do things like murder (which only happens “today” because “our sick society” conditions people to commit depraved acts)
    • “Today we are beginning to identify various things which condition us to act as we do. In the future the factors that condition human beings to kill or do other things that harm fellow human beings will be understood and eliminated”
    • The entire section is very behaviorist and assumes that we’ll be able to operant condition people out of all bad behaviors
  • Increased understanding of human nature will lead to
    • Total freedom, including no individual desire for conformity
    • Total economic abundance, which will lead to the end of “competitiveness, acquisitiveness, thriftiness”, etc.
    • Total freedom from disease
    • Deeper feelings of love and friendship “to an extent that can not be understood by those who live in the twentieth-century world of scarcity”
    • Total lack of guilt about sex
    • Appreciation of all kinds of natural beauty, as opposed to “the narrow standards of the ‘beauty queen’ mentality of today,” as well as eschewing any kind of artificial beauty
    • Complete self-knowledge and a lack of any repression, which will “produce a new dimension of relaxed living that is almost unknown today”
    • Elevation of the valuing of others to the same level at which people value themselves or their local communities, i.e., complete selflessness and an end to anything resembling tribalism or nationalism
    • All people will be “multidimensional” and sort of good at everything
    • This is contrasted with “For the first time all men and women will live a multidimensional life, limited only by their imagination. In the twentieth century we could classify people by saying, “He is good in sports. She is an intellectual. He is an artist.” In the future all people will have the time and the facilities to accept the fantastic variety of challenges that life offers them”

As mentioned above, the next part of Fresco’s prediction is about how science will work. He writes about how “the scientific method” is only applied in a limited fashion, which led to thousands of years of slow progress. But, unlike in the 20th century, in the 21st century, people will be free from bias and apply “the scientific method” in all areas of their life, not just when doing science. People will be fully open to experimentation in all aspects of life and all people will have “a habitual open-mindedness coupled with a rigid insistence that all problems be formulated in a way that permits factual checking”.

This will, among other things, lead to complete self-knowledge of one’s own limitations for all people as well as an end to unhappiness due to suboptimal political and social structures:

The success of the method of science in solving almost every problem put to it will give individuals in the twenty-first century a deep confidence in its effectiveness. They will not be afraid to experiment with new ways of feeling, thinking, and acting, for they will have observed the self-corrective aspect of science. Science gives us the latest word, not the last word. They will know that if they try something new in personal or social life, the happiness it yields can be determined after sufficient experience has accumulated. They will adapt to changes in a relaxed way as they zigzag toward the achievement of their values. They will know that there are better ways of doing things than have been used in the past, and they will be determined to experiment until they have found them. They will know that most of the unhappiness of human beings in the mid-twentieth century was not due to the lack of shiny new gadgets; it was due, in part, to not using the scientific method to check out new political and social structures that could have yielded greater happiness for them

After discussing, at a high level, the implications for people and society, Fresco gets into specifics, saying that doing everything with computers, in what Fresco calls a “cybernated” society, could be achieved by 1979, giving everyone a post-tax income of $100k/yr in 1969 dollars (about $800k/yr in 2022 dollars):

How would you like to have a guaranteed life income of $100,000 per year—with no taxes? And how would you like to earn this income by working a three-hour day, one day per week, for a five-year period of your life, providing you have a six-months vacation each year? Sound fantastic? Not at all with modern technology. This is not twenty-first-century pie-in-the-sky. It could probably be achieved in ten years in the United States if we applied everything we now know about automation and computers to produce a cybernated society. It probably won’t be done this rapidly, for it would take some modern thinking applied in an intelligent crash program. Such a crash program was launched to develop the atomic bomb in a little over four years.
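The dollar conversion above is a standard CPI adjustment. Here’s a minimal sketch; note that the CPI figures below are approximate annual averages I’ve filled in as an assumption (the post itself only states the final ~$800k figure):

```python
# Approximate CPI-U annual averages; these specific values are my
# rough fill-ins for illustration, not figures from the post.
CPI = {1969: 36.7, 2022: 292.7}

def adjust(amount: float, from_year: int, to_year: int) -> float:
    """Convert a dollar amount between years using the CPI ratio."""
    return amount * CPI[to_year] / CPI[from_year]

# Fresco's $100k/yr in 1969 dollars is roughly $800k/yr in 2022 dollars
assert 750_000 < adjust(100_000, 1969, 2022) < 850_000
```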

Other predictions about “cybernation”:

  • Manufacturing will be fully automated, to the point that people need to do no more than turn on the factory to have everything run (and maintain itself)
    • This will lead to “maximum efficiency”
  • Since there will be no need for human labor, the price of items like t-shirts will be so low that they’ll be free since there’s no need for items to cost anything when the element of human labor is removed
  • The elimination of human labor will lead to a life of leisure for everyone
  • Fresco notes that his previous figure of $100k/yr (1969 dollars) is meaningless and could just as easily be $1M/yr (1969 dollars) since everything will be free
  • A “cybernetically” manufactured item produced anywhere on earth will be able to be delivered anywhere on earth within 24 hours

Michio Kaku

  • By 2005
  • “During the 21st century”, implied to mean throughout the century rather than only something that happens at its very end
    • “it will be difficult to be a research scientist in the future without having some working knowledge of [quantum mechanics, computer science, and biology]” due to increasing “synergy” and “cross-fertilization” between these fundamental fields
    • Silicon computer chips will hit a roadblock that will be unlocked via DNA research allowing for computation on organic molecules
    • Increased pace of scientific progress due to “intense synergy”
  • In 2020
    • Commodity prices down 60% (from 1997 prices) due to wealth becoming based on knowledge, trade being global, and markets being linked electronically, continuing a long-term trend of reduced commodity prices
      • No. The CRB commodity price index was up in 2020 compared to 1997 and is up further in 2022
    • Microprocessors as cheap as “scrap paper” due to Moore’s law scaling continuing with no speedbump until 2020 (10 cents in 2000, 1 cent in 2010, 1/10th of a cent in 2020)
      • No. The Moore’s law scaling curve changed and microprocessors did not, in general, cost 1 cent in 2010 or 1/10th of a cent in 2020
    • The above “will give us smart homes, cars, TVs, clothes, jewelry, and money”
      • No, due to “and” and comments implying total ubiquity, but actually a fairly good directional prediction
    • “We will speak to our appliances, and they will speak back”
      • No, due to the implied ubiquity here, but again directionally pretty good
    • “the Internet will wire up the entire planet and evolve into a membrane consisting of millions of computer networks, creating an ‘intelligent planet’”
      • No, due to “intelligent planet”
    • Moore’s law / silicon scaling will continue until 2020, at which point “quantum effects will necessarily dominate and the fabled Age of Silicon will end”
      • No
    • Advances in DNA sequencing will continue until roughly 2020 (before it stops); “literally thousands of organisms will have their complete DNA code unraveled”
      • Maybe? Not sure if this was hundreds or thousands; also, the lack of complete sequencing of the human genome project when it was “complete” may also have some analogue here? I didn’t score this one because I don’t have the background for it
    • “it may be possible for anyone on earth to have their personal DNA code stored on a CD”
      • Not counting this as a prediction because it’s non-falsifiable due to the use of “may”
    • “Many genetic diseases will be eliminated by injecting people’s cells with the correct gene.”
      • No
    • “Because cancer is now being revealed to be a series of genetic mutations, large classes of cancers may be curable at last, without invasive surgery or chemotherapy”
      • Not counting this as a prediction because it’s non-falsifiable due to the use of “may”
    • In or near 2020, bottlenecks in DNA sequencing will stop progress of DNA sequencing
      • No
    • In or near 2020, bottlenecks in silicon will stop advances in computer performance
      • No; computer performance slowed its advancement long before 2020 and then didn’t stop in 2020
    • The combination of the two above will (after 2020) require optical computers, molecular computers, DNA computers, and quantum computers for progress to advance in biology and computer science
      • No. Maybe some of these things will be critical in the future, but they’re not necessary conditions for advancements in computing and biology in or around 2020
    • Focus of biology will shift from sequencing DNA to understanding the functions of genes
      • I’m not qualified to judge this one
    • something something may prove the key to solving key diseases
      • Not counting this as a prediction because it’s non-falsifiable due to the use of “may”
    • [many predictions based around the previous prediction that microprocessors would be as cheap as scrap paper, 1/10th of a cent or less, that also ignore the cost of everything around the processor]
      • No
    • A variety of non-falsifiable “may” predictions about self-driving car progress by 2010 and 2020
    • VR will be “an integral part of the world”
      • No
    • People will use full-body suits and electric-field sensors
      • No
    • Exploring simulations in virtual reality will be a critical part of how science proceeds
      • No
    • A lot of predictions about how computers “may” be critical to a variety of fields
      • Not counting this as a prediction because it’s non-falsifiable due to the use of “may”
    • Semiconductor lithography below .1 um (100 nm) will need to switch from UV to X-rays or electrons
      • No; modern 5nm processes use EUV
    • Some more “may” and “likely” non-falsifiable predictions
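Kaku’s chip-price forecast above is a straight exponential extrapolation. The curve implied by his stated points (10 cents in 2000, 1 cent in 2010, 1/10th of a cent in 2020) is a 10x decline per decade; the function below is my reconstruction of that curve, not a formula Kaku gives:

```python
def kaku_chip_price(year: int) -> float:
    """Microprocessor price in dollars under Kaku's implied curve:
    10 cents in 2000, falling 10x per decade."""
    return 0.10 * 0.1 ** ((year - 2000) / 10)

assert abs(kaku_chip_price(2010) - 0.01) < 1e-9   # 1 cent in 2010
assert abs(kaku_chip_price(2020) - 0.001) < 1e-9  # 1/10th of a cent in 2020
```

Actual prices left this curve well before 2020, which is why the cluster of downstream predictions built on top of it also failed.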

That gives a correct prediction rate of 3%. I stopped reading at this point, so I may have missed a number of correct predictions. But, even if the rest of the book were full of correct predictions, the overall correct prediction rate would likely still be low.

There were also a variety of predictions that I didn’t include because they were statements that were true in the present. For example

If the dirt road of the Internet is made up of copper wires, then the paved information highway will probably be made of laser fiber optics. Lasers are the perfect quantum device, an instrument which creates beams of coherent light (light beams which vibrate in exact synchronization with each other). This exotic form of light, which does not occur naturally in the universe, is made possible by manipulating the electrons making quantum jumps between orbits within an atom

This doesn’t seem like much of a prediction since, when the book was written, the “information highway” already used a lot of fiber. Throughout the book, there’s a lot of mysticism around quantum-ness, which is, for example, on display above and cited as a reason that microprocessors would become obsolete by 2020 (they’re not “quantum”) while fiber optics wouldn’t (it’s quantum).

John Naisbitt

Here are a few quotes that get at the methodology of Naisbitt’s hit book, Megatrends:

For the past fifteen years, I have been working with major American corporations to try to understand what is really happening in the United States by monitoring local events and behavior, because collectively what is going on locally is what is going on in America.

Despite the conceits of New York and Washington, almost nothing starts there.

In the course of my work, I have been overwhelmingly impressed with the extent to which America is a bottom-up society, that is, where new trends and ideas begin in cities and local communities—for example, Tampa, Hartford, San Diego, Seattle, and Denver, not New York City or Washington, D.C. My colleagues and I have studied this great country by reading its local newspapers. We have discovered that trends are generated from the bottom up, fads from the top down. The findings in this book are based on an analysis of more than 2 million local articles about local events in the cities and towns of this country during a twelve-year period.

Out of such highly localized data bases, I have watched the general outlines of a new society slowly emerge.

We learn about this society through a method called content analysis, which has its roots in World War II. During that war, intelligence experts sought to find a method for obtaining the kinds of information on enemy nations that public opinion polls would have normally provided.

Under the leadership of Paul Lazarsfeld and Harold Lasswell, later to become well-known communication theorists, it was decided that we would do an analysis of the content of the German newspapers, which we could get—although some days after publication. The strain on Germany’s people, industry, and economy began to show up in its newspapers, even though information about the country’s supplies, production, transportation, and food situation remained secret. Over time, it was possible to piece together what was going on in Germany and to figure out whether conditions were improving or deteriorating by carefully tracking local stories about factory openings, closings, and production targets, about train arrivals, departures, and delays, and so on. … Although this method of monitoring public behavior and events continues to be the choice of the intelligence community—the United States annually spends millions of dollars in newspaper content analysis in various parts of the world—it has rarely been applied commercially. In fact, The Naisbitt Group is the first, and presently the only, organization to utilize this approach in analyzing our society.

Why are we so confident that content analysis is an effective way to monitor social change? Simply stated, because the news hole in a newspaper is a closed system. For economic reasons, the amount of space devoted to news in a newspaper does not change significantly over time. So, when something new is introduced, something else or a combination of things must be omitted. You cannot add unless you subtract. It is the principle of forced choice in a closed system.

In this forced-choice situation, societies add new preoccupations and forget old ones. In keeping track of the ones that are added and the ones that are given up, we are in a sense measuring the changing share of the market that competing societal concerns command.

Evidently, societies are like human beings. A person can keep only so many problems and concerns in his or her head or heart at any one time. If new problems or concerns are introduced, some existing ones are given up. All of this is reflected in the collective news hole that becomes a mechanical representation of society sorting out its priorities.
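Naisbitt’s “share of the news hole” idea reduces to counting topic mentions in a fixed-size pool of articles. A toy sketch of that counting exercise (the articles and topics here are entirely invented for illustration):

```python
# Toy content analysis in the style Naisbitt describes: the "news hole"
# is treated as fixed, so a topic's share of it proxies societal attention.
articles_by_year = {
    1970: ["factory opens", "factory expands", "school budget vote"],
    1980: ["computer classes begin", "computer jobs grow", "factory closes"],
}

def topic_share(articles: list[str], topic: str) -> float:
    """Fraction of articles mentioning the topic."""
    return sum(topic in a for a in articles) / len(articles)

# In the toy data, "factory" loses share of the news hole over time
assert topic_share(articles_by_year[1970], "factory") > \
       topic_share(articles_by_year[1980], "factory")
```

The forced-choice framing in the quote is exactly this: shares must sum over a fixed pool, so one topic’s gain is another’s loss.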

Naisbitt rarely makes falsifiable predictions. For example, on the “information society”, Naisbitt says

In our new information society, the time orientation is to the future. This is one of the reasons we are so interested in it. We must now learn from the present how to anticipate the future. When we can do that, we will understand that a trend is not destiny; we will be able to learn from the future the way we have been learning from the past.

This change in time orientation accounts for the growing popular and professional interest in the future during the 1970s. For example, the number of universities offering some type of futures-oriented degree has increased from 2 in 1969 to over 45 in 1978. Membership in the World Future Society grew from 200 in 1967 to well over 30,000 in 1982, and the number of popular and professional periodicals devoted to understanding or studying the future has dramatically increased from 12 in 1965 to more than 122 in 1978.

This could be summed up as “in the future, people will think more about the future”. Pretty much any case one might make that Naisbitt’s claims ended up being true or false could be argued against.

In the chapter on the “information society”, one of the most specific predictions is

New information technologies will at first be applied to old industrial tasks, then, gradually, give birth to new activities, processes, and products.

I’d say that this is false in the general case, but it’s vague enough that you could argue it’s true.

A rare falsifiable comment is this prediction about the price of computers

The home computer explosion is upon us, soon to be followed by a software implosion to fuel it. It is projected that by the year 2000, the cost of a home computer system (computer, printer, monitor, modem, and so forth) should only be about that of the present telephone-radio-recorder-television system.

From a quick search, it seems that the reference devices cost something like $300 in 1982? That would be about $535 in 2000 dollars, which wasn’t really a reasonable price for a computer, let alone a computer plus the peripherals mentioned and implied by “and so forth”.

Gerard K. O’Neill

We discussed O’Neill’s predictions on space colonization in the body of this post. This section contains a bit on his other predictions.

On computers, O’Neill says that in 2081 “any major central computer will have rapid access to at least a hundred million million words of memory (the number ‘1’ followed by 14 zeros). A computer of that memory will be no larger than a suitcase. It will be fast enough to carry out a complete operation in no more time than it takes light to travel from this page to your eye, and perhaps a tenth of that time”, which is saying that a machine will have 100TWords of RAM or, to round things up simply, let’s say 1PB of RAM and a clock speed of something between 300 MHz and 6 GHz, depending on how far away from your face you hold a book.
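O’Neill’s figures are easy to sanity-check. The sketch below assumes an 8-byte word and treats “one operation per light-travel time” as a clock rate; both assumptions are mine, matching the rounding in the text above:

```python
C = 299_792_458  # speed of light, m/s

# 10^14 words of memory at an assumed 8 bytes/word is 0.8 PB,
# which rounds up to the 1 PB figure in the text.
memory_bytes = 10**14 * 8
assert memory_bytes == 8 * 10**14

def clock_rate_hz(distance_m: float, fraction: float = 1.0) -> float:
    """Clock rate if one operation takes `fraction` of the light
    travel time over `distance_m`."""
    return C / (fraction * distance_m)

# Book held ~1 m away, one op per light-travel time: ~300 MHz
assert round(clock_rate_hz(1.0) / 1e6) == 300
# Book ~0.5 m away, one op in a tenth of that time: ~6 GHz
assert round(clock_rate_hz(0.5, 0.1) / 1e9) == 6
```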

On other topics, O’Neill predicts we’ll have fully automated manufacturing, people will use 6 times as much energy per capita in 2081 as in 1980, pollution other than carbon dioxide will be a solved problem, coal plants will still be used, most (50% to 95%) of energy will be renewable (with the caveat that “ground-based solar” is a “myth” that can never work, and that wind, tide, and hydro are all forms of solar that, even with geothermal thrown in, can’t reasonably provide enough energy), and that solar power from satellites is the answer to then-current and future energy needs.

In The Technology Edge, O’Neill makes predictions for the 10 years following the book’s publication in 1983. O’Neill says “the book is primarily based on interviews with chief executives”. It was written at a time when many Americans were concerned about the impending Japanese dominance of the world. O’Neill says

As an American, I cannot help being angry — not at the Japanese for succeeding, but at the forces of timidity, shortsightedness, greed, laziness and misdirection here in America that have mired us down so badly in recent years, sapped our strength and kept us from equal achievements.

As we will see, opportunities exist now for the opening of whole new industries that can become even greater than those we have lost to the Japanese. Are we to delay and lose those too?

An interview about the book summarized O’Neill’s list of critical industries as

microengineering, robotics, genetic engineering, magnetic flight, family aircraft, and space science. If the U.S. does not compete successfully in these areas, he warns, it will lose the technological and economic leadership it has enjoyed.

This seems like a big miss, with both serious false positives and false negatives. O’Neill failed to cite industries that ended up being important to the then-continued U.S. dominance of the world economy, e.g., software, and also predicted that space and flight were much more important than they turned out to be.

On the specific mechanism, O’Neill also generally misses, e.g., in the book, O’Neill cites the lack of U.S. PhD production and people heading directly into industry as a reason the U.S. was falling behind and would continue to fall behind Japan, but in a number of important industries, like software, a lot of the major economic/business contributions have been made by people going to industry without a PhD. The U.S. didn’t need to massively increase PhD production in the decades following 1983 to stay economically competitive.

There’s quite a bit of text dedicated to a commonly discussed phenomenon at the time: how Japanese companies were going to wipe the floor with American and European companies because they’re able to make and execute long-term plans, unlike American companies. I’ll admit that it’s a bit of a mystery to me how short-term thinking has worked so well for American companies, at least to date.

Patrick Dixon

Dixon opens with:

The next millennium will witness the greatest challenges to human survival ever in human history, and many of them will face us in the early years of its first century …

The future has six faces, each of which will have a dramatic effect on all of us in the third millennium … [Fast, Urban, Radical, Universal, Tribal, Ethical, which spells out FUTURE]

Out of these six faces cascade over 500 key expectations, specific predictions as logical workings-out of these important global trends. These range from inevitable to high probability to lower probability — but still significant enough to require strategic planning and personal preparation.

  • In the third millennium, things reminiscent of the previous millennium will be outdated by [variously, 2004, 2005, 2020, 2025], e.g., “the real winners will be those who tap into this huge shift — and help define it. What television producer will want to produce second millennial TV? What clothes designer dare risk his annual collection being labeled as a rehash of tired late twentieth-century fashions? …”
    • No, late 20th century fashion is very “in” right now and other 20th century fashions were “in” a decade ago
  • “Pre-millenialists tend to see 2000 to 2010 as just another decade. The trends of the eighties and nineties continue, just more of the same. Post-millennialists are very different. They are products of the third millennium. They live in it. They are twenty-first century people, a new age. Expect to see one of the greatest generation gaps in recent history”
    • Subjective, but no. Dixon assigns huge importance to the millennium counter turning over and says things like “Few people have woken up so far to the impact of the millennium. My children are the M generation. Their entire adult existence will be lived in the third millennium … Expect to see the M factor affect every aspect of life on earth … The human brain makes sense of the past by dividing it into intervals: the day… month… year. Then there are decades and centuries … And four time-events are about to hit us in the same instant. New year, decade, century, and millennium”, but the counter turning over doesn’t appear to have caused any particularly drastic changes.
  • “Expect to see millennial culture clashes between opposing trends, a world increasingly of extremes with tendencies to intolerance as groups fight to dominate the future”
    • Basically yes, although his stated reasoning (not quoted) as to why this should happen at the turn of the century (as opposed to any other time) is nonsensical as it applies to all of history.
  • Market dominance / power will become less important as “micromarkets” become more important
    • No; the bit about smaller markets existing was correct, but huge players, the big $1T companies of what Dixon calls “the third millennium” (Apple, Microsoft, Google, and Amazon), have a huge amount of power over these markets, and this has not reduced either the economic or cultural importance of what Dixon calls “dominance”
  • Expect more “wild cards” over “the next 20 years” [from 1998 to 2018], such as “war, nuclear accident or the unplanned launch of of nuclear weapons, vast volcanic eruptions or plagues or even a comet collision with enormous destructive power”
    • No; this would’ve sounded much better if it had included covid, but if we look at the 20 years prior to the book being published, there was the fall of the Soviet Union, Tiananmen Square, etc., which isn’t obviously less “wild card-y” than what we saw from 1998 to 2018
  • Less emphasis on economic growth, due to increased understanding that wealth doesn’t make people happy
    • No; Dixon was writing not too long after peak “growth is unsustainable and should be deliberately curtailed to benefit humanity”

That’s the end of the introduction. Some of these predictions are arguably too early to call since, in places, Dixon writes as if Futurewise is about the entire “third millennium”, but Dixon also notes that drastic changes are expected in the first years and decades of the 21st century, and these generally have not come to pass, both in the specific cases where Dixon calls out particular timelines and in the cases where he doesn’t. In general, I’m trying to only include predictions where it seems that Dixon is referring to the 2022 timeframe or before, but his general vagueness makes it difficult to make the right call 100% of the time.

The next chapter is titled “Fast” and is about the first of the six “faces” of the future.

  • “Expect further rapid realignments [analogous to the fall of the Soviet Union], with North Korea at the top of the list as the last outpost of Stalinism … North Korea could crash at any moment, spilling thousands of starving refugees into China, South Korea, and Japan”
    • No; there’s been significant political upheaval in many places (Thailand, Arab Spring, Sudan, etc.); North Korea hasn’t been in the top 10 political upheavals list, let alone at the top of the list
  • “Expect increasing North-South tension as emerging economies come to realize that abolishing all trade and currency restrictions in a rush for growth also places their countries at the mercy of rumors, hunches, and market opinion”
    • No to there being a particular increase in North-South tension
  • “Expect a growing backlash against globalisation, with some nations reduced to ‘economic slavery’ by massive, destabilising, currency flows”
    • No, due to the second part of this sentence, although highly subjective
  • [A bunch of unscored predictions that are gimmes about vague things continuing to happen, such as “expect large institutions to continue to make (and lose) huge fortunes trying to outguess volatile markets in these countries”]
    • On the example prediction, that’s quite vague and could be argued either way on the spirit of the prediction, but is very easy to satisfy as stated since it only requires (for example) two hedge funds to make major bets on volatility that either win or lose; there’s a list of similar “predictions” that seem extremely easy to satisfy as written that I’m not going to include
  • “Expect increasingly complex investment instruments to be developed, so that a commodity [from the context, this is clearly referring to actual commodities markets and not things like mortgages] sometimes rises or falls dramatically as a large market intervention is made, linked to a completely different and apparently unrelated event”
    • Yes, although this trend was definitely already happening and well-known when Dixon wrote his book, making this a very boring prediction
  • “Management theory is still immature … expect that to change over the next two decades as rigorous statistical and analytical tools are devised to prove or disprove the key elements of success in management methods”
    • No; drastically underestimates the difficulty of rigorously quantifying the impact of different management methods in a way that only someone who hasn’t done serious data analysis would do
    • Yes, although this statement would be more compelling with less stated detail
  • “Expect ‘management historians’ to become sought after, analyzing industrial successes and failures during the previous Industrial Revolution and at the turn of the twentieth century”
    • No; some people do this kind of work, but they’re not particularly sought after. The context of the statement implies they’d be sought after by CEOs or other people trying to understand how to run actual businesses, which is generally not the case
  • “Expect consumer surveys and market research to be sidelined by futurology-based customer profiles. Market research only tells you what people want today. What’s so smart about that…”
    • No; not that people don’t try to predict trends, but the context for this prediction incorrectly implies that market research is trivial (“anyone can go out and ask the same questions, so where’s the real competitive edge?”), that in the computerized world brands are irrelevant, etc., all of which are incorrect, and of course the flat statement that market research and present-day measurement are obsolete is simply wrong.
  • Flat-rate global “calls” with no long-distance charges
    • Yes as written, since you can call people anywhere with quite a few apps, so I’ll give Dixon this one, although the context implies that his reasoning was totally incorrect. For one thing, he seems to be talking about phone calls and thinks traditional phone calls will be important, but he also makes some incorrect statements about telecom cost structures, such as “measuring the time and distance of every call is so expensive as a proportion of total call costs” (which he predicted would happen because the cost of calls themselves would fall, causing the cost of metadata tracking to dominate the cost of the calls themselves; even if that came to pass, the cost of tracking how long a call was and where it was to would be tiny and, in fact, my phone bill still tracks this information even though I’m not charged for it because the cost is so small that it would be absurd not to track it, other than for privacy reasons)
  • “Expect most households in wealthy nations to have several phone numbers by 2005 … this means that most executives will have access to far more telephone lines at home than they do at work today for their personal use”
    • No; there’s a way to read this as some kind of prediction that was correct, but from the context, Dixon is clearly talking about people having a lot of phone numbers and phone lines and makes a statement elsewhere that implies explosive growth in the number of landline phone numbers and lines people will have at home
  • Mobile phones used in most places landline phones are used today
    • Yes; basically totally on the nose, although he has a story about a predicted future situation that isn’t right due to some incorrect guesses about how interfaces would play out
  • Many emerging economies will go straight to mobile and leapfrog existing technology
  • Ubiquitous use of satellite phones by traveling execs / very important people by 2005
    • No; many execs, VPs, etc., still impacted by incomplete cell coverage and no sat phone in 2005
  • “The next decade” [by 2008], cell phones will seamlessly switch to satellite coverage when necessary
  • Phone trees will have switched from “much hated push-button systems to voice recognition” by 2002, with seamless basically perfect recognition by 2005
    • No; these systems are now commonplace in 2022, but many people I know find them to be significantly worse than push-button systems
  • Computational power per “PC” will continue to double every 18 months indefinitely [there’s a statement that implies this will continue at least through 2018, but there’s no implication that this will level off at any time after that]
    • No; even at the time, people had already observed that performance scaling was moving to a slower growth curve
  • Future small displays will be able to be magnified
    • No, or not yet anyway (if the prediction means that software zoom will be possible, that was possible and even built into operating systems well before the book was published, so that’s not really a prediction about the future)
  • “Paper-thin display sheets by 2005”
  • Projection displays will be in common use, replacing many uses of CRTs
    • No; projectors are used today, but in many of the same applications they were used in at the time the book was written
  • Many CRT use cases will be replaced by lasers projected onto the retina
    • No, or not yet anyway; even if this happens at some point, I would rate this as a no since this section was about what would kill the CRT and this technology was not instrumental in killing the CRT
  • Digital cameras rival film cameras in terms of image quality by 2020
    • Yes; technically yes as written, but the phrasing implies that digital cameras will have only just caught up to film cameras in 2020, when this actually happened quite a long time ago, so I’d say that Dixon was wrong but made this prediction vague enough that it happens to be correct as written
  • For consumer use, digital cameras replace 35mm film by 2010
    • Yes; but same issue as above where Dixon really underestimated how quickly digital cameras would improve
  • “Ultra high definition TV cameras” replace film “in most situations” by 2005
  • Software will always be buggy because new chips will be released at a pace that means that programmers can’t keep up with bug fixes because they need to re-write the software for new chips.
    • Yes, although the reason was completely wrong, despite it being obviously true that software bugginess would continue for quite some time. I’m going to include more of Dixon’s text here since a lot of readers are programmers who will have opinions on why computers are buggy and will be able to directly evaluate Dixon’s reasoning with no additional context: “Software will always be full of bugs. Desktop computers today are so powerful that even if technology stands still it will take the world’s programmers at least 20 years to export their capability to the full. The trouble is that they have less than 20 months – because by then a new generation of machines will be around … So brand new code was written for Pentium chips. The bugs were never sorted out in the old versions and bugs in the new ones will never be either, for the same reason”.
    • Dixon’s reasoning as to why software is buggy is completely wrong. It is not because Intel releases a new chip and programmers have to abandon their old code and write code for the new chip. This level of incorrectness of reasoning generally holds for Dixon’s comments even when he really nails a prediction and doesn’t include some kind of “because” that invalidates the prediction
  • Computer disaster recovery will become more important, resulting in lawsuits against backup companies being a major feature of the next century
    • No; not that there aren’t any lawsuits, but lawsuits over backup data loss aren’t a major feature of this century
  • Home workers will be vulnerable to data loss, will eventually “back up data on-line to computers in other cities as the ultimate security”
    • Yes, although the reasoning here was incorrect. Dixon concluded this due to the ratio of hard disk sizes (>= 2GB) to floppy disk sizes (<= 2 MB), which caused him to conclude that local backups are impossible (would take more than 1000 floppy disks), but even at the time Dixon was writing, cheap, large, portable disks were available (zip drives, etc.) and tape backups were possible
  • Much greater expenditure on anti virus software, with “monthly updates” of antivirus software, and anti virus companies creating viruses to force people to buy anti virus software
    • No; MS basically obsoleted commercial anti virus software for what was, by far, the largest platform where users bought anti virus software by providing it for free with Windows
  • New free zones or semi-states will be created to bypass online sales tax and countries will retaliate against ISPs that provide content served from these tax havens
  • Sex industry will be a major driver of internet technologies and technology in general “for the next 30 years” (up through 2028)
    • No; porn was a major driver of internet technology up to the mid 90s by virtue of being a huge fraction of internet commerce, but this was already changing when Dixon was writing the book (IIRC, mp3 surpassed sex as the top internet search term in 1999) and the non-sex internet economy dwarfs the sex internet economy, so sex sites are no longer major drivers of tech innovation, e.g., youtube’s infra drives cutting edge work in a way that pornhub’s infra has no need to
  • The internet will end income tax as we know it by 2020 because transactions will be untraceable
  • By 2020, sales and property taxes will have replaced income tax due to the above
  • All new homes in western countries will be “intelligent” in 2010, which includes things like the washing machine automatically calling a repair person to get repaired when it has a problem, etc.
    • No; I’ve lived in multiple post-2010 builds and none of them have been “intelligent”
  • Pervasive networking via power outlets by 2005, allowing you to plug into any power outlet “in every building anywhere in the world” to get networking
  • PC or console as “smart home” brains by one of the above timelines
  • Power line networking eliminates other network technologies in the home
  • No more ordering of food by 2000; scanner in rubbish bin will detect when food is used up and automatically order food
    • No; nonsensical idea even if such scanners were reliable and ubiquitous since the system would only know what food was used, not what food the person wants in the future
  • World will be dominated by the largest telecom companies
    • No; Dixon’s idea was that the importance of the internet and networks would mean that telecom companies would dominate the world, an argument analogous to when people say software companies must grow in importance because software will grow in importance; instead, telecom became a commodity
  • Power companies will compete with telecoms and high voltage lines will carry major long haul traffic by 2001
  • Internet will replace the telephone
  • Mobile phone costs drop so rapidly that they’re free by 2000
    • No; arguably yes because some cell phone providers were providing phones free with contract at one point, but once total costs were added up, these weren’t cheaper than non-contract phones where those were available
  • Phones with direct retinal displays and voice recognition very soon (prototypes already exist)
  • The end of books; replaced by digital books with “more than a hundred paper-thin electronic pages. Just load the text you want, settle back and enjoy”
    • No; display technology isn’t there, and it’s unclear why something like a Kindle should have Dixon’s proposed design instead of just having a one-page display
  • Cheap printing causes print on demand in the home to also be a force in the end of books
    • No; a very trendy idea in the 90s (whether in the home or at local shops), though
  • Growth in internet radio; “expect thousands of amateur disc jockeys, single-issue activists, eccentrics and misfits to be broadcasting to audiences of only a few tens to a few hundred from garages or bedrooms with virtually no equipment other than a hi-fi, a PC, modem, and a microphone, possibly with TV camera”
    • No; drastically underestimated how many people would broadcast and/or stream
  • Mainstream TV companies will lose prime time viewership
    • Not scoring this prediction because it’s an extremely boring prediction; as Dixon notes in the book, this had already started happening years before he wrote the book
  • By 2010, doctors will de facto be required to defer to computers for diagnoses because computer diagnoses will be so much better than human diagnoses that the legal liability for overruling the computer with human judgement will be prohibitive
  • Surgeons will be judged on how many people die during operations, which will cause surgeons to avoid operating on patients with likely poor outcomes
  • Increased education; “several graduate or postgraduate courses in a lifetime”
  • Paper credentials devalued, replaced by emphasis on “skills not created by studying books”
  • Governments set stricter targets for literacy, education, etc.
    • No, or at least not in general for serious targets that are intended to be met
  • Many lawsuits from people who received poor education
  • Return to single-sex schools, at least regionally in some areas
  • “complete rethink about punishment and education, with the recognition that a no-touch policy isn’t working”, by 2005
  • Collapse of black-white integration in schooling in U.S. cities
  • College libraries become irrelevant
    • No, or no more so than when the book was written, anyway
  • Ubiquitous video phones and video phone usage by 2005
  • Dense multimedia and VR experiences in grocery stores
  • General consolidation of retail, except for “corner shops”, which will survive as car-use restrictions “begin to bite”, circa 2010 or so
  • Blanket loyalty programs at grocery stores replaced by customized per-person programs
  • VR dominates arcades and theme parks by 2010
  • “all complex prototyping [for manufacturing]” done in VR by 2000
  • Rapid prototyping from VR images
  • Pervasive use of voice recognition will cause open offices to get redesigned by 2002
  • Speech recognition to have replaced typing to the extent that typing is considered obsolete and inefficient by 2008, except in cases where silence is necessary
  • Accurate handwriting recognition will exist but become irrelevant by 2008, obsoleted by speech recognition
  • Traditional banking wiped out by the internet
  • “millions” of people will buy and sell directly to and from each other via online marketplaces
    • Not counting this because ebay alone already had 2 million users when the book was published
  • Traditional brokerage services will become less important over time; more trading will happen via cheap or discount brokerages, online
    • Yes, but an extremely boring prediction that was already coming to pass when the book was written
  • Pervasive corporate espionage, an increase over prior eras, made possible by bugs becoming smaller and easier to place, etc.
    • No? Hard to judge this one, though
  • Pervasive internal corporate surveillance (microphones and hidden cameras everywhere, including the homes of employees), to fight corporate espionage
    • No
  • Retina scans commonly used to verify identity
    • No
  • Full self-driving cars, networked with each other, etc.
  • Cars physically linked together to form trains on the road
  • Widespread tagging of humans with identity chips by 2010
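As an aside, the floppy-disk arithmetic behind the online-backup prediction above is easy to check; a minimal sketch, using the 2 GB and 2 MB figures Dixon quotes:

```python
# Dixon's implied arithmetic: a full backup of a >= 2 GB hard disk
# onto <= 2 MB floppy disks takes more than 1000 disks.
hard_disk_mb = 2 * 1024  # 2 GB, the hard disk size Dixon quotes
floppy_mb = 2            # the floppy size Dixon quotes
floppies_needed = hard_disk_mb / floppy_mb
print(floppies_needed)   # 1024.0
```

The arithmetic itself holds up; the error, as noted above, was ignoring the larger removable media (zip drives, tape) that already existed.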

This marks the end of the “Fast” chapter. From having skimmed the rest of the book, the hit rate isn’t really higher later nor is the style of reasoning any different, so I’m going to avoid doing a prediction-by-prediction grading. Instead, I’ll just mention a few highlights (some quite accurate, but mostly not; these aren’t included in the prediction accuracy rate):

  • Extremely limited water supply by 2020, with widespread water metering, recycling of used bathwater, etc.; water so limited that major nations have conflicts over water and water is a major foreign policy instrument by 2010; waterless cleaning of fabrics, etc., by 2025
  • Return to “classic” pop-Christian American family and cultural values, increased stigmatization of single parent households, etc., by 2020
  • Major prohibition movement against smoking, drinking, psychedelic drugs, etc.
  • Increased risk of major disease epidemics due to higher global population and increased mobility
  • Due to increasing tribalism, most new wealth created by companies with <= 20 employees, of which >= 75% are family owned or controlled and started with family money
  • Increased global free trade
  • Death of “old economics” allow for (for example) low unemployment with no inflationary pressure due to combination of globalization pushing down wages and computerization causing productivity increases
  • Travel will have virtually no friction by 2000 due to increased automation; you’ll be able to buy a plane ticket online, go to the airport, where a scanner will scan you as you walk through security without delay; you’ll even be able to skip the ticket buying process and just walk directly onto a plane, at which point a system will scan an embedded smart-card in your watch or skin will allow the system to seamlessly deduct the payment from your bank account
  • End of left/right politics and rise of single-issue politics and parties
  • Environmentalism the single biggest political issue
  • Destruction of ozone layer causes people to avoid sun; vacations in sunny areas and beaches no longer popular
  • Very accurate weather predictions by 2008, due to newly collected data allowing accurate forecasting
  • Nuclear power dead, with zero or close to zero active reactors by 2030
  • Increased concern over damage / cancer from “electromagnetic fields”
  • Noise canceling technology wipes out unpleasant noise in cars and homes
  • Widespread market for human cloning, with people often raising a genetic clone of themselves instead of conceiving traditionally
  • Have the capability to design custom viruses / plagues that target particular organs or racial groups by 2010
  • Comprehensive reform of U.S. legal system to reduce / eliminate spurious lawsuits by 2010
  • Major growth of religions; particularly Islam and Christianity
    • Globally, as well as in the U.S., where the importance of Christianity will give rise to things like “the Christian Democratic Party” and an increasing number of Christian schools
  • The internet helps guarantee freedom against authoritarian regimes, which can censor newspapers, radio, and TV, but not the internet
  • Total globalization will cause a new world religion to be created which doesn’t come from old ideas and will market itself as dogmatic, exclusive, and superior to old religions
  • New world order with international laws and international courts; international trade impossible otherwise
  • “Cyberspace” has its own governance, with a “cyber-government” and calls for democracy where each email address gets a vote; nation-level governance over “cyberspace” “cannot and will not last, nor will any other benevolent dictatorship of non-elected, unrepresentative authority”

Overall accuracy: 8/79 ≈ 10%

Toffler

Intro to Future Shock:

Another reservation has to do with the verb “will.” No serious futurist deals in “predictions.” These are left for television oracles and newspaper astrologers. … Yet to enter every appropriate qualification in a book of this kind would be to bury the reader under an avalanche of maybes. Rather than do this, I have taken the liberty of speaking firmly, without hesitation, trusting that the intelligent reader will understand the stylistic problem. The word “will” should always be read as though it were preceded by “probably” or “in my opinion.” Similarly, all dates applied to future events need to be taken with a grain of judgment.

[Chapter 1 is about how future shock is going to be a big deal in the future and how we’re presently undergoing a revolution]

Despite the disclaimer in the intro, there are very few concrete predictions. The first that I can see is in the middle of chapter two and isn’t even really a prediction, but is a statement that very weakly implies world population growth will continue at the same pace or accelerate. Chapter 1 has a lot of vague statements about how severe future shock will be, and then Chapter 2 discusses how the world is changing at an unprecedented rate and cites a population doubling time of eleven years to note how much this must change the world, since it would require the equivalent of a new Tokyo, Hamburg, Rome, and Rangoon every eleven years, illustrating how shockingly rapidly the world is changing. There’s a nod to the creation of future subterranean cities, but it’s stated weakly enough that it can’t really be called a prediction.

There’s a similar implicit prediction that economic growth will continue with a doubling time of fifteen years, meaning that by the time someone is thirty, the amount of stuff (and it’s phrased as amount of stuff and not wealth) will have quadrupled and then by the time someone is seventy it will have increased by a factor of thirty two. This is a stronger implicit prediction than the previous one since the phrasing implies this growth rate should continue for at least seventy years and is perhaps the first actual prediction in the book.
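Toffler’s compounding is easy to make concrete; a minimal sketch of the doubling-time arithmetic (the fifteen-year doubling time is Toffler’s figure; the growth-factor formula 2^(t/T) is just standard exponential growth):

```python
# Growth factor after `years` years of growth with a fixed doubling time.
def growth_factor(years, doubling_time=15):
    return 2 ** (years / doubling_time)

print(round(growth_factor(30)))     # 4: two doublings by age thirty
print(round(growth_factor(70), 1))  # 25.4
```

Note that 2^(70/15) is closer to 25 than to Toffler’s thirty-two; thirty-two corresponds to five full doublings, i.e., seventy-five years, so his figure appears to round up to the next whole doubling.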

Another such prediction appears later in the chapter, on the speed of travel, which took millions of years to reach 100 mph in the 1880s, only fifty-eight more years to reach 400 mph in 1938, then twenty to double again, and then not much more time before rockets could propel people at 4000 mph and people circled the earth at 18000 mph. Strictly speaking, no prediction is made as to the speed of travel in the future, but since the two chapters are about how this increased rate of change will, in the future, cause future shock, it would be silly to cite examples where exponential growth is expected to level off as reasons the future is going to cause future shock; implicit in the citation is that the speed of travel will continue to grow.
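The doubling times implied by that travel-speed series can be computed directly; a quick sketch using the figures quoted above:

```python
import math

# Years per doubling implied by going from speed v0 to v1 over `years` years.
def doubling_time(v0, v1, years):
    return years / math.log2(v1 / v0)

print(doubling_time(100, 400, 58))  # 29.0: the 1880s to 1938
print(doubling_time(400, 800, 20))  # 20.0: the next doubling
```

The doubling time itself was shrinking, which is exactly the super-exponential pattern Toffler extrapolates forward.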

Toffler then goes on to cite a series of examples where, at previous times in history, the time between having an idea and applying the idea was large, shrinking as we get closer to the present, where it’s very low because “we have, with the passage of time, invented all sorts of social devices to hasten the process”.

Through Chapter 4, Toffler continued to avoid making concrete, specific predictions, but also implied that buildings would be more temporary and, in the United States specifically, there would be an increase in tearing down old buildings (e.g., ten year old apartment buildings) to build new ones because new buildings would be so much better than old ones that it wouldn’t make sense to live in old buildings, and that schools will move to using temporary buildings that are quickly dismantled after they’re no longer necessary, perhaps often using geodesic domes.

Also, a general increase in modularity, with parts of buildings being swapped out to allow more rapid changes during the short, 25-year life of a modern building.

Another implied prediction is that everything will be rented instead of owned, with specific examples cited of cars and homes, with an extremely rapid growth in the rate of car rentership over ownership continuing through the 70s in the then-near future.

Through Chapter 5, Toffler continued to avoid making specific predictions, but very strongly implies that the amount of travel people will do for mundane tasks such as commuting will hugely increase, making location essentially irrelevant. As with previous implied predictions, this is based on a very rapid increase in what Toffler views as a trend and is implicitly a prediction of the then very near future, citing people who commute 50k miles in a year and 120 miles in a day and citing stats showing that miles traveled have been increasing. When it comes to an actual prediction, Toffler makes the vague comment

among those I have characterized as “the people of the future,” commuting, traveling, and regularly relocating one’s family have become second nature.

Which, if read very strictly, is technically not a prediction about the future, although the implication is that people in the future will commute and travel much more.

In a similar implicit prediction, Toffler implies that, in the future, corporations will order highly skilled workers to move to whatever location most benefits the corporation and they’ll have no choice but to obey if they want to have a career.

In Chapter 6, in a rare concrete prediction, Toffler writes

When asked “What do you do?” the super-industrial man will label himself not in terms of his present (transient) job, but in terms of his trajectory type, the overall pattern of his work life.

Some obsolete example job types that Toffler presents are “machine operator”, “sales clerk”, and “computer programmer”. Implicit in this section is that career changes will be so rapid and so frequent that the concept of being “a computer programmer” will be meaningless in the future. It’s also implied that the half-life of knowledge will be so short that people will no longer accumulate useful knowledge over the course of their careers and that people, especially in management, shouldn’t expect to move up with age and may be expected to move down with age as their knowledge becomes obsolete and they end up in “simpler” jobs.

It’s also implied that more people will work for temp agencies, replacing what would previously have been full-time roles. The book is highly U.S. centric and, in the book, this is considered positive for workers (it will give people more flexibility) without mentioning any of the downsides (lack of benefits, etc.). The chapter has some actual explicit predictions about how people will connect to family and friends, but the predictions are vague enough that it’s difficult to say if the prediction has been satisfied or not.

In chapter 7, Toffler says that bureaucracies will be replaced by “adhocracies”. Where bureaucracies had top down power and put people into well-defined roles, in adhocracies, roles will change so frequently that people won’t get stuck in defined roles. Toffler notes that a concern some people have about the future is that, since organizations will get larger and more powerful, people will feel like cogs, but this concern is unwarranted because adhocracy will replace bureaucracy. This will also mean an end to top-down direction because the rapid pace of innovation in the future won’t leave time for any top down decision making, giving workers power. Furthermore, computers will automate all mundane and routine work, leaving no more need for bureaucracy because bureaucracy will only be needed to control large groups of people doing routine work and has no place in non-routine work. It’s implied that “in the next twenty-five to fifty years [we will] participate in the end of bureaucracy”. As Toffler was writing in 1970, his timeframe for that prediction is 1995 to 2020.

Chapter 8 takes the theme of everything being quicker and turns it to culture. Toffler predicts that celebrities, politicians, sports stars, famous fictional characters, best selling books, pieces of art, knowledge, etc., will all have much shorter careers and/or durations of relevance in the future. Also, new, widely used, words will be coined more rapidly than in the past.

Chapter 9 takes the theme of everything accelerating and notes that social structures and governments are poised to break down under the pressure of rapid change, as evidenced by unrest in Berlin, New York, Turin, Tokyo, Washington, and Chicago. It’s possible this is what Toffler is using to take credit for predicting the fall of the Soviet Union?

Under the subheading “The New Atlantis”, Toffler predicts an intense race to own the bottom of the ocean and the associated marine life there, with entire new industries springing up to process the ocean’s output. “Aquaculture” will be as important as “agriculture”, new textiles, drugs, etc., will come from the ocean. This will be a new frontier, akin to the American frontier, people will colonize the ocean. Toffler says “If all this sounds too far off it is sobering to note that Dr. Walter L. Robb, a scientist at General Electric has already kept a hamster alive under water by enclosing it in a box that is, in effect, an artificial gill–a synthetic membrane that extracts air from the surrounding water while keeping the water out.” Toffler gives the timeline for ocean colonization as “long before the arrival of A.D. 2000”.

Toffler also predicts control over the weather starting in the 70s, that “It is clearly only a matter of years” before women are able to birth children “without the discomfort of pregnancy”.

I stopped reading at this point because the chapters all seem very similar to each other, applying the same reasoning to different areas and the rate of accuracy of predictions didn’t seem likely to increase in later chapters.
