Emerging Fellows

To be continued – Benefits of Big Data – from predictions to foresight

Posted By Administration, Monday, March 30, 2015
Updated: Saturday, February 23, 2019

Julian Valkieser shares his thoughts with us about “Benefits of Big Data” in this blog post for our Emerging Fellows program. The views expressed are those of the author and not necessarily those of the APF or its other members.

In my previous articles, I mentioned some examples in which large amounts of data are used to make predictions about the future. Mostly, these are very specific and limited to a narrow domain. After all, real-world influences are very complex: the greater the variety of influences, the less accurate big-data predictions become.

Next I want to mention other examples in which big data is used to create short- and medium-term forecasts. At first glance, this has little to do with futurists, foresight, and long-term forecasting, but in my opinion it represents a baseline for the future practice of futurists, as I will explain at the end of the article. First, two examples of big-data forecasting.

The Berlin-based start-up SO1 claims to be able to predict your behavior as a supermarket shopper very accurately from customer data. With targeted offers and discounts, it can nudge you to switch from your favorite brand. This works on the principle we already know from Amazon: “Customers Who Bought This Item Also Bought”. Of course, SO1’s business also suggests a frightening scenario: each customer might be offered a different price for the same product, and I think no one wants that. Assuming SO1’s claims about its algorithm hold, this is a good indication of how well human behavior can already be predicted.
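The “also bought” principle mentioned above boils down to co-occurrence counting over purchase baskets. Here is a minimal sketch in Python; the baskets and item names are invented for illustration, and SO1’s actual algorithm is certainly far more sophisticated:

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase baskets; item names are invented for illustration.
baskets = [
    {"coffee", "milk", "sugar"},
    {"coffee", "milk"},
    {"coffee", "milk"},
    {"coffee", "sugar"},
    {"tea", "milk"},
]

# Count how often each pair of items appears in the same basket.
co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1

def also_bought(item):
    """Items most often bought together with `item`, best first."""
    scores = Counter()
    for (a, b), n in co_counts.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return [x for x, _ in scores.most_common()]

print(also_bought("coffee"))  # → ['milk', 'sugar']
```

Real systems refine these raw counts with normalization and per-customer weighting, but the co-occurrence table is the core of the idea.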

Another example, from the German edition of Technology Review: Thomas Chadefaux of Trinity College Dublin analyzed social media channels and the Google News Archive from 1900 to 2011 for specific signal words, to find out whether weak signals in the media precede crises and violent confrontations. With a probability of 85%, he could predict crises, like those in Armenia, Iran, or Iraq, up to one year in advance. The current problem: he is looking backward. How his algorithm develops in the future remains to be seen. Nevertheless, his is a name to watch.
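Chadefaux’s actual model is not reproduced here, but the core idea of tracking signal words in a news archive over time can be sketched as follows; the word list and articles are invented for illustration:

```python
# Toy sketch of signal-word counting over a news archive, in the spirit
# of the approach described above. Words and articles are invented.
SIGNAL_WORDS = {"tension", "troops", "ultimatum", "mobilization"}

articles_by_year = {
    2009: ["talks continue amid mild tension", "trade deal signed"],
    2010: ["tension rises as troops mass", "ultimatum issued",
           "mobilization reported"],
}

def signal_score(year):
    """Average number of signal words per article in a given year."""
    texts = articles_by_year[year]
    hits = sum(
        sum(1 for word in text.split() if word in SIGNAL_WORDS)
        for text in texts
    )
    return hits / len(texts)

print(signal_score(2009))  # → 0.5
print(signal_score(2010))  # noticeably higher: the "weak signal" rises
```

A real system would then test whether a rising score in year N correlates with conflict in year N+1, which is where the retrospective 85% figure comes from.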

In summary, I would like to explain why I consider these examples of big-data prediction so important for futurists and foresight. Classical foresight methods are used to prepare a company for future influences and circumstances; this is also, for example, the theme of so-called HROs (High Reliability Organizations).

Many companies now base their short- and medium-term strategic decisions on big data. For long-term, and accordingly much more complex, decisions, big data by itself is not enough. Here the classical futurist steps in. Starting from the scenarios and trigger events derived from big data, the futurist can add creative eventualities that big-data analytics has not enumerated. The futurist of the future will essentially be asked to ground the discussion in big data and then, using classical methods, work out the eventualities for which a company should prepare beyond its main focus. An HRO works similarly: eventualities are outlined, and for each one, with a given weighting, a process is defined, e.g. how to react. Examples of HROs are hospitals, fire stations, or aircraft carriers.

Tags:  big data  foresight  futurist 


Benefits of Big Data – predictions vs. foresight

Posted By Administration, Monday, February 16, 2015
Updated: Saturday, February 23, 2019

Julian Valkieser shares his thoughts with us about “Benefits of Big Data” in this blog post for our Emerging Fellows program. The views expressed are those of the author and not necessarily those of the APF or its other members.

In my last articles, I mentioned the power of Big Data. My blog colleague Jason picked this up and added his own thoughts. In his last article, he showed wonderfully how technology has already overturned and renewed business models and efficiency in other sectors. Something comparable could happen in the field of futurists and industry foresight as well.

Now, there are foresight methods that work well, or even best, with uncertainty. Delphi interviews, for instance, are planned precisely, e.g. interviewees are pre-selected. But this does not mean that the statements can be processed into hard facts about a future reality. And they should not be. That is the exciting thing about scenarios: they offer a way to stimulate the imagination and to derive recommendations for action.

But again, you try to keep the “cone of plausibility” as narrow as possible (see Jason’s blog). You look for certain experts. You push certain issues. This is done in order to build the scenario on reasonable grounds.

Now you can imagine how neutral subjective responses to subjective questions really are. Anyone who has read “Thinking, Fast and Slow” by Daniel Kahneman knows what I mean. And right here, data comes into play: information could passively express the motives and interests of groups. I already indicated this in my last article.

In that article, I also pointed out that you only get the most out of big data if you apply the prediction to a trigger event. One extracts motives and interests out of big data for one or more so-called trigger events: events that can be predicted relatively easily in the near future based on data, because the circumstances are, or at least should be, less complex. Based on these trigger events, you can build a scenario. In principle, this is nothing new; only the underlying information is extracted from big data instead of from interviews and subjective insights.

Let’s take an example. A major mobile phone company has 50 million customers. Each customer has a phone and moves around every day with it turned on, in this case between different radio towers (see triangulation). Let’s suppose further that the company receives 20-100 pieces of movement information per customer. Provided the company may cache this information for a longer period of time, the result is a huge body of data about how people move, how long they stay in which locations, and so on. Of course, each individual might now worry about privacy. But the individual is not of interest; it’s about the mass.
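The kind of aggregate the phone company could compute can be sketched as follows. The tower names and ping records are invented; note that only per-tower aggregates are kept, not individual trajectories, which is exactly the point made above:

```python
# Hypothetical ping records: (customer_id, tower_id). In reality these
# would come from triangulation between radio towers.
pings = [
    (1, "tower_A"), (1, "tower_B"),
    (2, "tower_A"), (2, "tower_A"),
    (3, "tower_C"),
]

# How many distinct customers pass each tower? A proxy for foot traffic.
visitors = {}
for customer, tower in pings:
    visitors.setdefault(tower, set()).add(customer)

traffic = {tower: len(people) for tower, people in visitors.items()}
print(traffic)  # → {'tower_A': 2, 'tower_B': 1, 'tower_C': 1}
```

At the scale of 50 million customers, such per-tower counts are the raw material for the infrastructure questions below, without any individual ever being singled out.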

Imagine what you could do with this information. Road authorities could optimize logistics. Infrastructure projects could be optimized. Where should the new stadium be built? How should the highway be dimensioned? How many trains must be put on this track?

In a growing urban environment, where sheer masses of people are moving, all these data are exciting as the basis for trigger events and scenarios.

And finally, here is another wonderful example of these ideas. Eric Fischer evaluated geo-tagging data from photo cameras. He compared where locals and tourists take pictures in certain cities around the world and displayed this information on maps.
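One plausible way to separate locals from tourists, sketched here with invented photo records, is to look at how long a user’s photos in a city span; this heuristic is an assumption for illustration, not Fischer’s published method:

```python
from datetime import date

# Invented geo-tagged photo records: (user, city, date taken).
photos = [
    ("ann", "Berlin", date(2014, 1, 5)),
    ("ann", "Berlin", date(2014, 9, 20)),
    ("bob", "Berlin", date(2014, 7, 1)),
    ("bob", "Berlin", date(2014, 7, 4)),
]

def classify(user, city, threshold_days=30):
    """A user whose photos in a city span more than `threshold_days`
    is treated as a local, otherwise as a tourist."""
    dates = [d for u, c, d in photos if u == user and c == city]
    span = (max(dates) - min(dates)).days
    return "local" if span > threshold_days else "tourist"

print(classify("ann", "Berlin"))  # → local
print(classify("bob", "Berlin"))  # → tourist
```

Plotting each group’s photo locations in different colors then yields exactly the kind of maps described above.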

Tags:  big data  foresight  scenarios 


Why should I take a step into "R"?

Posted By Administration, Monday, November 24, 2014
Updated: Friday, February 22, 2019

Julian Valkieser shares his thoughts with us about “R language” in this blog post for our Emerging Fellows program. The views expressed are those of the author and not necessarily those of the APF or its other members.

Of course, the topic of “Big Data” has already come up a few times on the Profuturist blog, and we all know roughly what it involves. Our activity on the Internet keeps rising, and we produce data: massive amounts of data. Worldwide, 3 billion people are already online, and we spend much of our time there. The amount of data created is forecast to rise to a stunning 107,958 petabytes per month by 2018. That is equivalent to over 100 million hard drives with a capacity of 1 terabyte each, a capacity most of us would never fill on our own.
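As a quick sanity check on that figure, converting petabytes per month into 1-terabyte drives:

```python
# 107,958 petabytes per month, expressed as 1-terabyte hard drives
# (using decimal units, 1 PB = 1,000 TB).
petabytes_per_month = 107_958
terabytes_per_petabyte = 1_000

drives = petabytes_per_month * terabytes_per_petabyte
print(f"{drives:,} one-terabyte drives per month")  # → 107,958,000
```

So “over 100 million” drives per month is indeed the right order of magnitude.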

Companies like Google act on and work with this data. Of course, Google is not focused solely on this one business model and is spreading in different directions, but a focus can be seen: Google is also moving more and more offline. Why?

The data created online is relatively negligible compared to the data that can still be collected from the physical world. Behavior patterns online are certainly interesting, e.g. for e-commerce, but behavior and properties offline are much more interesting. The greatest benefit would be, first, to analyze all the information that can be obtained and, second, to be able to deduce something from it. Exciting!

Here I want to present an example aimed specifically at research-intensive areas: the start-up “Mapegy” from Berlin, Germany.

Mapegy is, by its own definition, the compass for the high-tech world. One possible application would be the following. Let’s imagine:

I am interested in a specific topic and would like to evaluate it. Now big data comes into play. Let’s take the example of a patent analysis. With tools like Mapegy, I could easily figure out who the important stakeholders in a particular technology development are, how each relates to the others, and what influence each has. One method of representation is maps: stakeholders and technological developments are illustrated as a kind of landscape. The larger an island, the more stakeholders gather around a particular development. The higher a mountain, the more patents a stakeholder has filed. The closer two islands lie to each other, the stronger their relationship. With this kind of visual analytics, it is quite easy to illustrate how one subject area is connected to others.
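The map metaphor can be reduced to a few simple counts: patents per stakeholder (mountain height), stakeholders per technology field (island size), and shared stakeholders between fields (island distance). A toy sketch; the assignees and fields below are invented, and this only mimics the idea behind tools like Mapegy:

```python
from collections import Counter
from itertools import combinations

# Invented patent records: (assignee, technology field).
patents = [
    ("AcmeCorp", "batteries"), ("AcmeCorp", "batteries"),
    ("AcmeCorp", "motors"),
    ("Voltix", "batteries"), ("Voltix", "charging"),
]

# "Mountain height": number of patents filed per assignee.
height = Counter(assignee for assignee, _ in patents)

# "Island size": which assignees gather around each technology field.
fields = {}
for assignee, field in patents:
    fields.setdefault(field, set()).add(assignee)

# "Island distance": fields sharing more assignees lie closer together.
closeness = {
    (a, b): len(fields[a] & fields[b])
    for a, b in combinations(sorted(fields), 2)
}

print(height["AcmeCorp"])        # → 3
print(len(fields["batteries"]))  # → 2
print(closeness[("batteries", "motors")])  # → 1 (shared: AcmeCorp)
```

Rendering `height`, `fields`, and `closeness` as elevations, island areas, and distances yields exactly the landscape described above.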

And that is the sticking point. A lot of data is already available, but only correct processing and representation make this data useful.

“R is a free software programming language and software environment for statistical computing and graphics. The R language is widely used among statisticians and data miners for developing statistical software and data analysis. Polls and surveys of data miners are showing R’s popularity has increased substantially in recent years.” (Wikipedia)

Someone who can program in “R” is well paid, even at the upper end of the scale, and not without reason. The ability to understand a context and deduce recommendations for action, not only in the economy but also in science and research, such as biotechnology and of course pharmaceuticals, is a higher aim in business and decision processes.

If you already understand some small connections, you can use them to build a network and perhaps even explain the behavior of systems. In this specific example, it would be human behavior. Of course, the influencing factors are still too complex to make reliable predictions from the available data collections. But the more powerful computational resources become, the closer we get to being able to analyze all the factors.
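A “small connection” of this kind is often just a correlation between two series. As a taste of the statistical computing R is built for, here is a hand-rolled Pearson correlation, shown in Python rather than R for consistency with the other sketches; the yearly figures are invented:

```python
import statistics

# Invented yearly index figures for two falling cost curves.
chip_price = [100, 80, 64, 51, 41]  # hypothetical chip price index
test_cost = [50, 30, 15, 7, 3]      # hypothetical genetic-test cost index

# Pearson correlation coefficient, computed by hand to stay
# dependency-free (in R this would be a one-liner: cor(x, y)).
mp, mt = statistics.mean(chip_price), statistics.mean(test_cost)
cov = sum((p - mp) * (t - mt) for p, t in zip(chip_price, test_cost))
r = cov / (
    sum((p - mp) ** 2 for p in chip_price) ** 0.5
    * sum((t - mt) ** 2 for t in test_cost) ** 0.5
)
print(round(r, 2))  # a strong positive correlation, close to 1
```

Spotting such correlations is the first step; building a network of them, and then explaining system behavior, is the harder task described above.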

Mapegy is an example of visualizing relationships and influencing factors via big-data analysis. The cost of genetic testing, for example, is an indicator of how quickly data analysis will change in the coming years: in recent years these costs have fallen faster than the price of computer chips relative to Moore’s Law. In my next article, I will go further into developments in big-data analysis with “R”.

Tags:  big data  R language  tec 
