Jason Swanson shares his thoughts with us about “the cone of plausibility” in this blog post for our Emerging Fellows program. The views expressed are those of the author and not necessarily those of the APF or its other members.
In my colleague Julian Valkieser’s latest blog post, he wrote about the start-up Mapegy, the programming language R, and Big Data analysis as they relate to building systems models and their possible applications in foresight. It was a fascinating post, and I look forward to reading more of his analysis, as I am excited about the uses for Big Data in the foresight field. The potential for Big Data to be disruptive is massive, and one of the fields it could disrupt is foresight itself.
With the development of R and start-ups like Mapegy, along with the generation and capture of ever more data and new tools for analysis, our ability to analyze massive data sets is growing by leaps and bounds. Analysis of complex data sets combined with predictive analytics is allowing us to create increasingly accurate models and to predict outcomes and behaviors. By now most people are familiar with the story of Target using data analysis to correctly predict that one of its customers was pregnant. A more recent example is HealthMap, a project of Harvard Medical School and Boston Children’s Hospital, which flagged the Ebola outbreak nine days before the World Health Organization began reporting irregular spikes in cases.
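To make the idea of spike detection concrete, here is a minimal sketch, in the spirit of HealthMap-style surveillance, of flagging a count that deviates sharply from a historical baseline. This is purely illustrative: the numbers are hypothetical, and real systems like HealthMap use far richer data sources and methods.

```python
# Toy anomaly detector: flag a "spike" when a recent count exceeds the
# historical mean by more than a chosen number of standard deviations.
# All values are hypothetical, for illustration only.
from statistics import mean, stdev

def spike_detected(history, recent, threshold=2.0):
    """Return True if `recent` exceeds the baseline mean of `history`
    by more than `threshold` standard deviations."""
    mu = mean(history)
    sigma = stdev(history)
    return recent > mu + threshold * sigma

baseline = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]  # hypothetical daily case counts
print(spike_detected(baseline, 4))   # within normal variation -> False
print(spike_detected(baseline, 12))  # well above baseline -> True
```

Even a crude rule like this illustrates the underlying point: as the data grow richer and the models more sophisticated, the line between expected variation and genuine surprise can be drawn more and more precisely.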
While neither of these is a long-range prediction, as we capture and analyze ever larger data sets, our ability to predict outcomes and behaviors accurately, at least in the near term, improves. Even though futurists are not in the prediction business, will the ability to accurately assess the near term cancel out the need for long-range thinking in multiple narratives? Furthermore, would an increasing reliance on Big Data analysis and prediction affect not only the business side of foresight, but also the study and practice of foresight itself? Would the cone of plausibility shrink as we develop the ability to analyze larger data sets with increasingly sophisticated tools? Would we see a rise in wild cards?
While I can only speculate on these questions, one possible implication is that as we gain the ability to use data analysis and models to predict outcomes with greater accuracy, the cone of plausibility could shrink. The highest-probability outcome or behavior might become a major piece, or the central piece, of a baseline future, with the variability in the models’ outcomes or behaviors serving as, or greatly influencing, the alternative futures. Those probabilities could create or influence the bounds of the cone of plausibility. Greater accuracy, even in the near term, could act to focus or tighten the cone, in effect shrinking the bounds of plausibility.
As the cone of plausibility shrinks, there might also be a rise in wild cards, specifically Type 2 wild cards. In his article “A New Methodology for Anticipating STEEP Surprises,” Dr. Oliver Markley defines Type 2 wild cards as “having high probability and high impact as seen by experts if present trends continue, but low credibility for non-expert stakeholders of importance.” If the bounds of plausibility were to tighten, some alternative futures that in the past might have been considered plausible could fall outside those bounds. In doing so, those same alternative futures could lose credibility with non-expert stakeholders of importance and, if their impact were judged great enough, be classified as Type 2 wild cards. Where the potential impact is too low for a scenario to count as a wild card, a new term may be needed for alternative futures that fall outside the bounds of the predictive models.
It will be interesting to see the effect that Big Data will have on the foresight field. Will clients shy away from long-term thinking in favor of near- or short-term prediction? Will increasingly accurate models add to or possibly alter our foresight toolboxes? How is the futures community currently utilizing Big Data and predictive analytics?
Markley, O. (2010). A new methodology for anticipating STEEP surprises. Technological Forecasting & Social Change, 78(6), 19-19. Retrieved December 1, 2014, from http://www.imaginalvisioning.com/wp-content/uploads/2010/08/Anticipating_STEEP_Surprises-TFSC2.pdf