Jason Swanson shares his thoughts about Future Shock for Futurists in this blog post for our Emerging Fellows program. The views expressed are those of the author and not necessarily those of the APF or its other members.
Roughly this time last year, Ben Wittes of the Brookings Institution wrote about what he called the “Intelligence Legitimacy Paradox.” Wittes argued, “…the threat environment America faces is growing ever more complicated and multifaceted, and the ability to meet it is growing ever-more-deeply dependent on first-rate intelligence. Yet at precisely the same time, the public has grown deeply anxious about our intelligence authorities and our intelligence community is facing a profound crisis of legitimacy over its basic authorities to collect.”
Wittes’s explanation for this paradox is technology. Technology has allowed weak nations and non-state actors to play “in the big leagues of international power politics”. Even as technology contributes to the USA’s threat matrix, “…technological change is also the fundamental reason for the intelligence legitimacy crisis. The more ubiquitously communications technology spreads and the more integrated it all becomes globally, after all, the more that surveillance of the bad guys—in all their complexity—requires the intelligence community to surveil systems that we all use every day too. In other words, the same technologies that are making the threat picture more complicated, more diverse, and more bewildering are also bringing the intelligence process into closer day-to-day contact with people living their daily lives. These technologies also require intelligence agencies, to be effective, to touch giant volumes of material, most of which is utterly anodyne. The more the community does these things, as it must, the more people it offends and the more legitimacy problems it creates for itself.”
As a Futurist, I find the “Intelligence Legitimacy Paradox” fascinating. Technological advances have made for an increasingly complicated threat matrix, yet at the same time they give our security agencies the tools to mine for first-rate intelligence. Leaving aside the issues surrounding the authority to collect data and information, I wonder if technological acceleration might one day create a paradox or dilemma for the futures field?
As mentioned above, Wittes’s explanation for the paradox was technology, but more precisely, the core of his idea is technological acceleration. With more and more data being generated and shared, agencies must sift through vast piles of information to find first-rate intelligence, scanning more broadly, probing more deeply, and coming into closer contact with those creating and sharing the data than ever before. As technological change continues to accelerate, the amount of data we generate will continue to grow. In 2015, we are expected to create and share eight zettabytes of information. How much is a zettabyte? 1 zettabyte = 1 trillion gigabytes. And that amount will rise, along with the ease of sharing the data that we create. As technology accelerates, Wittes’s “Intelligence Legitimacy Paradox” might become even more pressing in the future, with more and more data being generated, an ever more complicated and evolving threat matrix, closer touch points with data creators, and a greater need for quality data in the ever-expanding sea of information.
So where might this leave the futures field? To be clear, most of us are not dealing with security risks or impending violence; rather, we face increasingly complex and rapid changes to the present, a more complicated and multifaceted “threat matrix” to current reality in the form of rapidly approaching futures. Much like the intelligence community, our field must also contend with technological acceleration. As researchers, we put a premium on quality information, or what Wittes calls “first-rate intelligence.” If the information we use in our work is of poor quality, we can assume the output will also be poor, or, to borrow a phrase, “garbage in, garbage out.”
As more and more data is created and shared, there is an issue of quantity versus quality that any researcher must contend with. For Futurists in particular, this has the potential to be both a blessing and a curse. With the acceleration of data generation, we are able to use increasingly rich streams of information to gain insights and generate images of the future. Beyond trends and drivers of change, these data streams also put us in touch with novel ideas and other signals. With more data being generated and shared over time, we might expect to come into contact with greater numbers of novel ideas and signals. This is where I see a potential issue. While not quite a paradox like the “Intelligence Legitimacy Paradox”, the issue I see arising might be called something to the effect of “Future Shock for Futurists”. This is where accelerating technological change, specifically the exponential growth over time in the amount of data being generated and shared, combined with accelerating social change, creates a situation in which novel ideas and signals are no longer novel but commonplace, or, where they remain novel, their shelf life is extremely short, creating the potential for an echo chamber of sorts within the field. What happens to our signals and signposts if they move from novel to accepted idea in a matter of weeks rather than years? Would that affect your practice?
Longer term, the issue of increased data creation may be solved as data-analytics tools such as R become easier to use, helping us make sense of this growing sea of information. It stands to reason that web analytics will also provide increased brokering and curation services for information delivery, in the form of a stronger filter bubble. Nearer term, we might continue to use primary research, social networks (being mindful of our own filter bubbles there!), and other tools to ride the growing wave of data, paying attention to the rate at which ideas move from seemingly crazy ramblings to accepted social fact.
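By way of illustration only, here is a minimal R sketch of the kind of simple counting such tools make easy. Everything in it is hypothetical: the file scan_hits.csv, its date and text columns, and the example search term merely stand in for whatever scanning archive a practitioner might keep. It counts monthly mentions of a candidate signal, a crude way to watch how quickly an idea moves from rare to commonplace.

# Illustrative sketch: how often does a candidate "signal" term appear over time?
# Assumes a hypothetical CSV, scan_hits.csv, with columns: date, text.
hits <- read.csv("scan_hits.csv", stringsAsFactors = FALSE)
hits$date <- as.Date(hits$date)

term <- "lab-grown meat"  # hypothetical signal to track
hits$mention <- grepl(term, hits$text, ignore.case = TRUE)

# Count mentions by month to see how quickly the idea spreads.
hits$month <- format(hits$date, "%Y-%m")
trend <- aggregate(mention ~ month, data = hits, FUN = sum)

plot(as.Date(paste0(trend$month, "-01")), trend$mention, type = "b",
     xlab = "Month", ylab = paste("Mentions of", term))

A rising curve that flattens quickly would suggest the signal has already gone mainstream; the interesting question for the field is how short that window is becoming.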
How has increased data generation affected your practice? Do you see a downside to the increased creation and sharing of data? How might the hyper-acceleration of ideas, where an idea might move from novel conception to mainstream inception, affect the field?
© Jason Swanson 2015