Member Blogs
This community-wide blog showcases blogs by APF members on topics they select. The views, thoughts, and opinions expressed in this section belong solely to the authors, and do not necessarily reflect the official policy or position of APF.





An Eye to the New Year: Dialogue of an Almanac-Pedlar and a Passer-by

Posted By Website Admin, Wednesday, February 21, 2018
Updated: Saturday, March 9, 2019

The following is a member post written by Emilio Mordini and originally posted on his blog. He discusses his views on the coming year and its predicted medical breakthroughs. Be sure to click through to his original post to further the discussion. The views expressed are those of the author and not necessarily those of the APF or its other members.

January unavoidably brings predictions. Old and new media rival one another to publish forecasts and more mundane prophecies around the new year; old-fashioned horoscopes, scientific foresight, and experts’ opinions all work well in January. No one would be willing to relive, without any change, even one of the last twenty years of his life – as Giacomo Leopardi ironically observed in his Dialogue of an Almanac-Pedlar and a Passer-by – so we all prefer a future left to chance, although it is very unlikely that the future will be happier than the past.

Medical Breakthroughs

A special class of new year predictions regards medical breakthroughs. There are many of these medical predictions (including politically oriented ones); one of the most accredited is a list published since 2010 by a panel of physicians and scientists convened by the Cleveland Clinic. These Top 10 Medical Innovations are announced in October, but they are usually “re-discovered” and amplified in January. In 2018, they include, in order of relevance: 1) Hybrid Closed-Loop Insulin Delivery System; 2) Neuromodulation to Treat Obstructive Sleep Apnea; 3) Gene Therapy for Inherited Retinal Diseases; 4) The Unprecedented Reduction of LDL Cholesterol; 5) The Emergence of Distance Health; 6) Next Generation Vaccine Platforms; 7) Arsenal of Targeted Breast Cancer Therapies; 8) Enhanced Recovery After Surgery; 9) Centralized Monitoring of Hospital Patients; 10) Scalp Cooling for Reducing Chemotherapy Hair Loss.


Ranked only sixth in this list, “Next Generation Vaccine Platforms” is thus considered less significant than, e.g., “Neuromodulation to Treat Obstructive Sleep Apnea” and “Unprecedented Reduction of LDL Cholesterol”; the families touched by the roughly 5 million child deaths worldwide still caused in 2016 by communicable and infectious diseases would probably disagree. The real issue, however, lies not in this clichéd comment (the families could, from a global perspective, ultimately be wrong) but in what the panel describes as a medical breakthrough. According to the Cleveland Clinic panel, “in 2018, innovators will be upgrading the entire vaccine infrastructure to develop new vaccines more rapidly and break ground on novel mechanisms to better deliver vaccines to vast populations (…) Companies are finding faster ways to develop flu vaccines using tobacco plants, insects, and nanoparticles. Oral, edible and mucosally delivered vaccines, intranasal vaccines, and vaccine chips are being developed. In 2018, a bandage-sized patch for flu vaccine is expected to be on the market. These new ways of developing, shipping, storing and vaccinating are anticipated to help stave off current and future diseases and epidemics”. Similar arguments are also proposed by Innovation House, which ranks “Next generation vaccination” 5th in its list of 2018 top medical breakthroughs; by Health24, which ranks “Next-generation vaccines” 9th; and even by the list of 2018 breakthrough technologies proposed by MIT Technology Review. All these arguments rest on an old rhetorical expedient – first described by Aristotle – called the “enthymeme,” an argument in which a critical supporting fact is omitted or only implicitly suggested.
If one looks more in-depth, the argument about next-generation vaccines first states that significant technical innovations in vaccines are expected in 2018; second, it affirms that this will dramatically contribute to avoiding infectious diseases and epidemics. What is missing? An explicit statement about the relationship between “new ways of developing, shipping, storing, and vaccinating” and the occurrence of diseases in real human, environmental, and zoonotic communities. The argument takes for granted that this relationship is linear, while it is not.

Vaccines are hardly drugs aimed at curing an individual; they are a way to modify the state of immunization of a population. A vaccination campaign fights an infectious disease by increasing the number of hosts in a community who are resistant (immune) to the particular microorganism that produces the disease. This modification is not a goal per se; rather, it is instrumental to preventing, controlling, or eliminating the infectious disease. More targeted, better distributed, more easily delivered vaccines are welcome, but 1) we know that this is only one element, hardly the most important one, in real-life vaccination; 2) several critical societal, economic, and cultural variables are more important in determining the failure or success of vaccination campaigns; 3) ultimately, the relationship between infections and diseases is rather complex, and in many cases it is very difficult to predict the real evolution of the disease from modifications of the immunity status of a population. Vaccines do not provide any certainty to the single individual, because individual immune response depends on the whole host-pathogen interaction, including overall health conditions, state of nutrition, pathogenic variability in hosts, the germ load, and so on. The mere fact that one has been vaccinated does not guarantee against developing the disease – nor is vaccination the only way to avoid it; further preventive measures are always necessary, and they are sometimes even more important than vaccination. Failing to communicate this central tenet could easily jeopardize any vaccination campaign.

Vaccination Objectives

Vaccinations have two possible objectives: 1) the elimination, or 2) the containment of the disease. Elimination means the complete removal of the disease and its causal agent from a geographical area. It requires universal vaccination, which is very difficult to achieve and can even be risky at the individual level. For instance, universal infantile vaccination raises the age at which the disease appears (the “building up of susceptibles”), and some infectious diseases may have a more severe course in adulthood (e.g., mumps), with a grave impact on population health. Moreover, the objective of drastically raising “herd immunity,” inherent to any elimination campaign, depends on many factors falling outside vaccine technology, such as modes of transmission, interspecies transmission, the degree of genetic and antigenic variation, and so on. If these concepts are not communicated and well understood, vaccines could raise excessive expectations, easily followed by disillusionment and skepticism. Containment aims instead to reduce morbidity and mortality to “acceptable” levels. In such cases, which are more frequent, selective vaccination of groups and individuals is the most appropriate strategy. With containment too, communication strategies are a critical variable, because targeted groups and individuals must be convinced to take a preventive measure (vaccination) that is not asked of the majority of the population.
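How strongly “herd immunity” depends on transmissibility can be made concrete with the standard textbook threshold formula (roughly 1 − 1/R0 of the population must be immune, where R0 is the basic reproduction number). This is a minimal sketch; the R0 figures are rough illustrative values from the epidemiology literature, not data from this post:

```python
# Herd immunity threshold from the basic reproduction number R0.
# Standard approximation: once a fraction 1 - 1/R0 of the population is
# immune, each case infects, on average, fewer than one other person.
def herd_immunity_threshold(r0: float) -> float:
    if r0 <= 1:
        return 0.0  # the disease dies out on its own
    return 1.0 - 1.0 / r0

# Rough, illustrative R0 values.
for disease, r0 in [("seasonal flu", 1.3), ("mumps", 4.5), ("measles", 15.0)]:
    print(f"{disease}: ~{herd_immunity_threshold(r0):.0%} must be immune")
```

The steep climb from flu-like to measles-like transmissibility is one reason elimination campaigns are so demanding: highly transmissible diseases leave very little room for unvaccinated pockets.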

In conclusion, if the objective of medical breakthroughs should not be the mere modification of some biological parameters but the real health conditions of a human population, it would have made more sense to consider current trends in data science and predictive analytics for influencing individual and collective behaviors. Like it or not, these are the real breakthrough that promises to change, for good or ill, the global vaccination scenario in 2018 and beyond.

Tags: medicine, technology, vaccination


Disaster Superheroes–Wearable Technology and Sensory Enhancements

Posted By Administration, Sunday, November 5, 2017
Updated: Saturday, March 9, 2019

Dennis Draeger

The following is a member post written by Dennis Draeger and originally posted on the disaster preparedness website, Prepare with Foresight. It is a trend alert that playfully looks at how some emerging technologies might be used during disasters. The views expressed are those of the author and not necessarily of the APF or its members.

Often survivalists focus on saving themselves for a variety of reasons. No one can afford to buy enough food to feed a whole community, and no one can afford to buy a shelter that will fit the whole community. Worse yet, no one can force their community to prepare for disasters.

However, there are a number of ways that preppers can become superheroes for their communities in case of disaster. Technology is driving much of this opportunity, and wearables are one of the dominant technology types making it practical. Using wearables, we can augment our senses to enable us to thrive in disasters. Whether you are searching for survivors or leading them to safety, you will want these devices to help you navigate safely.

The Dentist Chair

First, let’s cover a bit of explanation and history about sensory substitution. Sensory substitution is substituting one sense for another. It is about tricking one of your senses to communicate to your brain in a similar way to another sense. For example, Marvel’s Daredevil sees with his ears because of his sonar sense.

Paul Bach-y-Rita conducted experiments to help people with blindness to see objects. He sat them in a dentist chair equipped with a machine that poked them at multiple points on their lower back. Bach-y-Rita connected a video camera to the machine. The machine communicated the shape of an object to the person in the dentist chair. After training and time to acclimate to the technology, the participants were able to effectively “see” the objects displayed to the camera.
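The mechanism is easy to sketch in code: downsample each camera frame into a coarse grid of actuator intensities, one per tactile “poker”. This toy version is my own illustration (the grid sizes and averaging scheme are not the original apparatus), but it shows the camera-to-touch mapping idea:

```python
# Toy camera-to-tactile mapping in the spirit of Bach-y-Rita's chair:
# downsample a high-resolution frame into a coarse grid of actuator
# intensities by averaging the pixel block each actuator covers.
def frame_to_tactile(frame, grid_rows, grid_cols):
    rows, cols = len(frame), len(frame[0])
    out = []
    for gr in range(grid_rows):
        row = []
        for gc in range(grid_cols):
            r0, r1 = gr * rows // grid_rows, (gr + 1) * rows // grid_rows
            c0, c1 = gc * cols // grid_cols, (gc + 1) * cols // grid_cols
            block = [frame[r][c] for r in range(r0, r1) for c in range(c0, c1)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A 4x4 "frame" with a bright square in the top-left corner.
frame = [[255, 255, 0, 0],
         [255, 255, 0, 0],
         [0,   0,   0, 0],
         [0,   0,   0, 0]]
print(frame_to_tactile(frame, 2, 2))  # [[255.0, 0.0], [0.0, 0.0]]
```

Only the top-left actuator fires, so the sitter feels a “bright patch” in that corner; with training, sequences of such patterns become recognizable shapes.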

Bach-y-Rita started this back in the 1960s. He was the first to artificially augment senses to do unconventional things. Sensory substitution is still a novel idea. However, wearable technology is making it practical.

Spidey Senses

One of the most obvious examples of wearables augmenting your senses for disaster situations is the SpiderSense suit. To read the rest of the article, please visit the website that originally published it.

Prepare with Foresight has the whole 1200-word article.

Tags: artificial augment, disaster, technology


The New Digital Divide

Posted By Administration, Thursday, December 8, 2016
Updated: Saturday, March 9, 2019

This is a blog post about the digital divide from a member of the APF, Emilio Mordini. It was originally posted to his LinkedIn account. He also regularly updates his Blogger account which is more focused on medical topics. The views in the article belong to the author and do not necessarily represent the APF or its other members.

The expression “digital divide” dates back to the mid-1990s and refers “to the gap between individuals, households, businesses and geographic areas at different socio-economic levels with regard both to their opportunities to access information and communication technologies (ICTs) and to their use of the Internet for a wide variety of activities” (OECD, 2001, Understanding the Digital Divide, OECD Publications, Paris, p. 5).

For more than two decades, scholars and policy makers have discussed the digital divide in terms of inequalities between people who can benefit from digital resources and people who lack these opportunities and skills. The concept has been used to describe the technology gap between rich and poor populations and individuals; to depict disparities in technology access between young and old; and to capture inequalities in ICT usage between people living in cities and in rural areas, between the educated and the uneducated, and between low- and high-income countries. However, that type of digital divide has now ended, and a new digital divide is rapidly emerging.

According to the latest OECD survey, young people all over the world have roughly equal access to the Internet, whether they are rich or poor, educated or not. Yet what changes is how they use it. While richer teenagers, and teenagers living in richer countries, tend to use the Internet to search for information and read news, poorer teenagers, and teenagers living in poorer countries, prefer to chat, play video games, or surf Facebook. Moreover, disadvantaged students spend more time online than advantaged students, which contradicts the conventional wisdom about the old “digital divide”: the socio-economically disadvantaged are the ones who are more often online.

It is somewhat obvious that patterns of Internet usage change according to a user’s socio-economic status; what is less obvious is the way in which inequalities shape these patterns. The current digital divide does not concern access to technology but its usage. While advantaged people conceive of the Internet as a tool for exploring the world, disadvantaged people think of it chiefly as a game and an instrument for establishing and developing social relations. What is the reason for this disparity?

The OECD report provides a simple answer to this question: “the use of online media – they argue – depends on the student’s own level of skills, motivation, and support from family, friends and teachers, which vary across socio-economic groups…socio-economic differences in the use of the Internet and in the ability to use ICT tools for learning are strongly related to the differences observed in more traditional academic abilities”. In other words, according to the OECD, poor and uneducated people use the Internet in the way in which they conceptualize it, which is determined by their baseline knowledge. Yet this is not an explanation; it is a truism. Of course people use the Internet in the way in which they conceptualize it, but the question is exactly why disadvantaged people think of the Internet chiefly in terms of Facebook, social media, and video games. Is there any reason for this phenomenon?

Society has not yet realized the epochal change which occurred due to the digital revolution. Most of us are still thinking in terms of the industrial society. Who was the “poor” in the industrial society? The proletariat, those –  in Marxist theory – whose only possession was their labor.   The proletariat is the social class that sells the new merchandise “invented” by the industrial revolution, that is to say, “human labor”. The proletariat does not exist any longer, yet poor people still exist. Who is the “poor” in the digital society? Those people whose only possession is the new merchandise “invented” by the digital revolution, that is to say, “data”.

Discussing the digital divide, we are often victims of an illusion because we have difficulty in understanding that those who use the Internet only as a game and an instrument to establish social relations are not active users. Rather, they are passive data providers. They are the mines, not the miners.

Therefore, the discrepancy between disadvantaged and advantaged people on the Internet can now be described as follows: advantaged people are mostly “purchasing” data, while disadvantaged people are mostly “selling” their own data. This is the new digital divide. Marx would probably have commented, “hic Rhodus, hic salta”.

Tags: digital divide, digitisation, technology


Towards the future of technology for education

Posted By Administration, Thursday, July 7, 2016
Updated: Saturday, March 9, 2019

The blog post below is reblogged from Bryan Alexander’s own site. The views expressed are those of the author and not necessarily of the APF or its members.

On June 16th I gave the closing keynote at the New Media Consortium’s annual conference. It was a big talk, with tons of images, ranting, and ideas crammed into a very busy hour.

It meant a great deal to me to address an organization which meant so much. I cut loose in this talk, making 95% of it new just for the occasion, taking a lot of risks and challenging the audience. I’d like to share recordings and material here for your use and/or feedback. So sit back and watch, listen, or read.

Here’s the NMC’s video recording:

The slides are on Slideshare.

And here are the prepared remarks. I riffed on them at points, which you can see in the video above. I’ve added several of the images as I referred to them directly, plus a very short bibliography at the very end:

“It is a signal honor to address an organization – a community – that has meant so much to me for more than a decade. NMC is a source of inspiration, learning, challenges, and many friendships. In honor of the futures work long conducted by the NMC, allow me to take you on a futuring journey for the next hour.

Here’s my plan, what we’ll be exploring:
1. Some quick introductory notes
2. The short-term future
3. Some medium-term futures
4. Towards the longer term
5. What to do

1. Introductory notes

I’m going to focus this talk on the ways technology might develop in the future. This entails a risk, that of technological determinism. This assumes that technological developments drive some non-technological changes – for our purposes, to education and society. Think of how train tracks and rolling stock can enable yet constrain human actions. A related assumption: people will keep developing and playing with tech. More simply put: I’ll take the persistent drive for technological invention seriously.

I won’t be talking much about Black Swans, like a possible Singularity, or airborne Ebola, or a WWI-scale disaster, or everyone’s favorite, the zombie apocalypse. Also, I won’t dwell on most non-technological contexts (economics, policy, demographics), unusually for me.

Is the future we’re making a good one or a bad one? Americans like to see technologies and futures in terms of starkly opposed utopian and dystopian poles. I’d like to make things more nuanced, stretching futures across a utopia – reality – dystopia spectrum.

Two guides will help us forward, starting with history. We have a good sense, now, of how humans tend to create and react to new technologies, and we can extrapolate from that knowledge. Our second guide is science fiction, which informs much of today’s talk. Not only has sf been giving us visions of possible futures for more than a century, offering cognitive tools for imagining the future, but technologists and designers are increasingly influenced by what sf has already imagined. In short, if you’re not reading science fiction, you’re not ready for the rest of the 21st century.

2. Short term, to 2021

We are living through a remarkable time when revolutions are rippling through traditional education. An unprecedented boom in human creativity thanks to the digital revolution is returning storytelling and story-sharing capabilities to people around the world. And powerful changes in economics, demographics, and globalization, not to mention technology, are reshaping education. Some of schooling as we know it might not survive the decade.

Technological development rushes on. VR is now in place, with applications in gaming, storytelling, and visualization. Watch the costs drop and accessibility rise. Content is starting to appear. AR is developing broadly, for basic visualizations across many different hardware platforms. What’s next? AR and VR connect and intertwine, as the digital and nondigital worlds are thoroughly interlaced. Think Mixed Reality. Think computing in space. Watch Microsoft HoloLens and Magic Leap.

Meanwhile, 3d printing is growing rapidly. In education, we’ve seen it move from engineering to libraries. Think: 3d printing across the curriculum. 3d printing is also allied to new learning spaces. A DIY ethos contributes to the growth of Makerspaces and the Maker movement.

Those spaces and technologies link up with the often-heralded transition from consumption to co-creation and production, which continues. Think: student as producer, student as maker.

Meanwhile, hardware continues to shrink as Moore’s Law keeps going. For example, my alma mater, UM, produced a combination camera, data storage, and WiFi connection the size of a grain of rice – just last year. Let’s assume hardware keeps shrinking. This will let us embed hardware throughout our environment. It will let us do more with projected displays and flexible interfaces. Contact lenses as interfaces could well appear. Mark Weiser’s dream of ubiquitous computing is coming true.
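The shrinkage argument is just compound doubling. A back-of-envelope sketch (the two-year doubling period is the conventional Moore’s-law figure, not a measurement from this talk):

```python
# Moore's-law style extrapolation: density doubling every ~2 years.
def density_factor(years: float, doubling_period: float = 2.0) -> float:
    return 2 ** (years / doubling_period)

print(f"x{density_factor(5):.1f} in 5 years")    # ~x5.7
print(f"x{density_factor(10):.0f} in 10 years")  # x32
```

A 32x density gain in a decade is what turns a grain-of-rice sensor into something small enough to disappear into walls, clothing, and contact lenses.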

One way of describing this world of small, embedded, invisible, and environmental hardware is the Internet of Things. This is already occurring through an enormous infrastructure build out, including: expanding into the IPv6 internet protocol; developing new middleware, OSes; building out data ownership and control systems. This should lead us to rethink privacy, data ownership and control, safety tradeoffs, and the public/ private dynamic.

At a technical level, will we rethink what a file is? Imagine an ecosystem mostly composed of streams, not documents in directories; points and flows, not files.

Will there be hyperlinks in the internet of everything? What happens to the web in a world of ubiquitous, often invisible computing? There are many incentives not to develop the web. For example, mobile apps, streaming video, AAA video games, the LMS, and paywalls all offer alternatives to the open web of Sir Tim Berners-Lee’s invention. Perhaps the web of 2021 will become like US community TV, trawled by a few humans and increasing numbers of AIs. Or perhaps, as Kevin Kelly suggests, we’ll see the IoE hyperlinked and Googleable. Perhaps we’ll improve our ability to search and link across time, connecting to a site’s prior states, hyperlinking the emerging history of the web.

While we shrink some hardware devices, we send others into the air. Drones are changing public and private spaces around the world. There are peaceful uses in delivery, photography, research, and art. Some hobbyists have figured out how to add new devices to drones, such as shotguns and chainsaws. Others, like the US Pentagon, have created still more uses in war and espionage. Drones were once directly piloted; now some are semi-autonomous, or fully autonomous, acting on their own. Already ethicists and insurance companies debate the implications of drone crimes, asking who is responsible for injuries and deaths at the metaphorical hands of a literal machine. And drones are automating jobs: the Japanese firm Komatsu uses them on construction projects to feed data to automated trucks and digging machines.

So many future trends are historical trends that won’t die, or that seem to cease only to lurch back into life later on. Some of you may remember the p2p architectures dating back to the 1990s. The blockchain is a new realization of that concept. Not only has blockchain led to bitcoin, an interesting, messy, and potentially transformative financial development, but now, through Ethereum, it supports decentralized autonomous organizations (DAOs): distributed, automated enterprises. One such DAO already functions as a fundraising and fund-dispersal firm.
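For readers who haven’t looked under the hood, the core blockchain data structure is just a hash chain: each block commits to its predecessor’s hash, so tampering with any block invalidates everything after it. A minimal sketch of that idea (no mining, consensus, or networking):

```python
import hashlib

# Each block stores its predecessor's hash; altering any block breaks
# the chain of commitments from that point forward.
def make_block(prev_hash: str, payload: str) -> dict:
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    return {"prev": prev_hash, "payload": payload, "hash": digest}

def valid(chain) -> bool:
    for i, block in enumerate(chain):
        expected = hashlib.sha256(
            (block["prev"] + block["payload"]).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("0" * 64, "genesis")]
chain.append(make_block(chain[-1]["hash"], "alice pays bob"))
print(valid(chain))                 # True
chain[0]["payload"] = "tampered"    # rewrite history...
print(valid(chain))                 # False
```

Real blockchains add consensus on top of this structure so that no single party controls which chain counts as history; that is what makes DAOs possible.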

Meanwhile, for the next five years let’s expect more of the boring old stuff: social media, crowdsourcing, crowdfunding, open source, data analytics, mobile computing, gaming, gamification, virtualization, digitization, digital storytelling, always-on media capture, always-on surveillance, hacking… There’s more, of course. There always is.

That’s all in the short term. The next 5 years. We already know all about this stuff.

3. Medium term

Let’s look ahead 10 years. To 2026.

[Image: Facebook’s ten-year roadmap, via Business Insider]

Facebook is already looking ahead to that point and planning. Note what they want to nail down by then:

[Image: detail of Facebook’s ten-year roadmap, via Business Insider]

Automation: to get to 2026, let’s just assume progress, and let’s consider artificial intelligence. Not at the level of a cataclysmic, world-rebooting Singularity – just extrapolations of current trends, along the lines marked out by McAfee and Brynjolfsson. I’ll assume Moore’s law continues, and add in that quantum computing starts to appear at consumer and enterprise levels. We start talking about a Fourth Industrial Revolution. Let’s grant further, steady growth in deep learning and advanced neural networks. Count Google’s victory over the game of Go as a milestone, and Siri’s uncanny abilities as a baseline.

Then we have to rethink how we design the digital world. Maybe all of it. How does more advanced AI force us to reconceive data standards and publication, information architecture, archiving, for starters?

[Image: from a mobile-first to an AI-first world]

As it advances, AI starts taking up human functions. We humans generate a vast and growing hoard of data; this is fodder for machines. Projects appear every day to take advantage of improving machine analysis – for instance, services that aim to improve your health by diving deeply into your gut to better understand its microbial life. We’ve already seen criminal analytics automated – which already has problems. Machine-to-machine functions keep rising, such as high-frequency trading, which has already advanced beyond regulators’ abilities to constrain it. Already we’ve seen flash crashes: economic incidents driven by the conversation among programs.

Looking ahead to 2026, imagine increasing segments of human life automated as machine-to-machine functions. We could see the emergence of a posthuman order in our lives.

Let’s add robots to the mix, since automation means both AI and robots. The combination is extending into more human labor functions. This can supplement labor shortfalls (Japan, China) or replace labor with capital (everywhere). Robots + AI + 3d printing could mean deglobalization, as we relocalize production, especially through customization and creativity.

More: we’re seeing the development of affective, emotional computing, as the Horizon Report notes. For example, we could develop emotional analysts. When will they be on par with a human baseline of emotional assessment? When will they go beyond it, and how do we handle that? On another line, what does good machine translation do to professional translators and second-language teaching? If we combine automation with the IoE and MR, should we anticipate the appearance of intelligent, even sentient, tools?

Today we’re seeing the automation of more job functions and entire jobs. Sometimes machines replace human functions, physical or mental; sometimes expert systems do. Since 1990, for the first time in centuries, automation has outmoded jobs without creating new ones, perhaps leading to rising unemployment. Imagine a 2026 with persistent 10% or 20% unemployment. What does education mean in such a world?

We’re also seeing the development of automated creativity, already operational in writing (finance, sports, weather) and images. This image is a screen cap from a neural net recreating a classic movie – 2001 – on its own terms:

This next image was created by Google’s DeepDream, which turned my original photo of our pre-conference session into mild psychedelia:

[Image: NMC 2016 scenarios group photo, processed by Google DeepDream]

We’re also seeing automated assistants. For example, tools for analyzing one’s writing, which can help us edit and revise more effectively – without a teacher. We’ve seen IBM’s Watson help point to new avenues of medical research, and legal AIs help with document analysis. By 2026 will we see an AI acknowledged, or even credited as coauthor for a scholarly article?

How should we expect creativity itself to change with automation? The history of human interaction with technology suggests we should, as humans love to revise old forms and create new ones with each invented medium. So look to new ways of making art, different forms of storytelling, fresh takes on gaming, and, maybe, new forms of creativity in 2026 we lack the words to describe in 2016.

Hang on. There are plenty of reasons to resist such an automation-shaped scenario.

Objection: Humans want contact!

Answer: except when we don’t. Introverts overdose daily on human contact. People don’t necessarily prefer human interaction for unpleasant tasks. Geeks, and an increasingly geeky culture, are famously comfortable with computer-mediated experience. Generally speaking, younger folks are happier with the digital than their elders.

Objection: Automation is too expensive!

Answer: capital continues to accumulate in this economy. That’s one part of rising inequality (cf. Thomas Piketty’s r > g inequality). And technology prices drop, historically.
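Piketty’s point can be shown with one line of compounding: when the return on capital r outpaces economic growth g, the capital-to-income ratio keeps rising. The rates below are illustrative, roughly in the ballpark of Piketty’s historical figures, not figures from this talk:

```python
# When the return on capital r exceeds economic growth g, wealth
# compounds faster than incomes, so the capital/income ratio rises.
def capital_income_ratio_growth(years: int, r: float = 0.05,
                                g: float = 0.015) -> float:
    return ((1 + r) / (1 + g)) ** years

print(f"ratio multiplier after 30 years: x{capital_income_ratio_growth(30):.2f}")
```

At these rates, the capital/income ratio nearly triples in a generation, which is why capital keeps accumulating to fund automation even as technology prices drop.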

Objection: I’m scared of machines doing bad things to me and my children!

Answer: what happens when the machines are safer and better than humans? Think of self-driving cars, while human drivers kill tens of thousands each year. Or robots in hospitals, where human error kills hundreds of thousands every year.

4. Long term

Let’s look ahead even further. Try 2050. And let’s be open to the full range of possibilities.

What’s happening in the long range horizon is truly disruptive. We’re seeing grand challenges loom like science fiction plotlines. The specter of automation threatens to radically reformat the world of work and society, changing the world our students will inhabit while supplanting teaching and learning. And that’s just for starters.

Consider the new silicon order, and the different ways AI could unfold. Nick Bostrom at Oxford has done speculative research into the different ways AI could grow and shape the world, ranging from benign to malign to simply strange. Stephen Hawking warns us against proceeding too quickly, against allowing a dangerous force to erupt across our deeply networked world; imagine how much more threatening his warning becomes in an IoE world. There’s the dystopia of a world ruled by inhuman AI, like the classic movie The Forbin Project. Then there’s the utopian vision of Iain Banks: imagine benign, grand, and administrative AI that simply works to improve human life. That’s a continuum of silicon-ordered 2050s.

Consider the new social order. Given sufficient automation, how do humans organize together in post-2016 forms? We might not see new jobs appear. Income inequality could accelerate to 19th-century levels. In which case, we could see two new worlds of work.

On the one hand, the mass of humans work part-time at low wages, living at a subsistence level, otherwise engaged and entertained by a rich and endless digital environment. Above them are the 1%, often deeply skilled, the owners and managers of the new digital order. There isn’t much middle class between them. Call it the new Gilded Age, or neofeudalism.

On the other hand, automation unleashes a new era in human prosperity, of digital delights and technology-enabled offline goods. New political regulations and social orders transfer enough wealth to the majority of people to enable them to lead rich and rewarding lives, combining productive work with reflective leisure – what one British organization half-jokingly referred to as Fully Automated Luxury Communism. Again, that's a continuum of the 2050s.

Perhaps we combine and synthesize these movements. Technology doesn’t replace humans but extends and enriches us. We work and play in ever-closer relationship to the digital world. We are both metaphorically and literally cyborgs.

Let’s go further. These technological advances let us hack life. At the same time as we develop silicon technology, we apply digital tools and concepts to the biological realm. New tools, like CRISPR, give us the ability to shape offspring – to edit life – with increasing precision and power. Open source biology gives new insights into life forms – and shares that knowledge widely. Consider a recent paper in World Neurosurgery, “Brainjacking: implant security issues in invasive neuromodulation”. Or consider another paper, on creating macromolecules to reduce the spread of infection within a body.

The new humanity: consider more deeply what happens when we apply these technologies to humanity. What happens to our sense of what it means to be human?

What we think of as “human” may change beyond recognition. We’re already there in 2016, with bionics and widespread, legal, even mandated psychopharmaceuticals. We’re experimenting with brain-controlled machines and nutraceuticals. We’re starting to print tissue and organ replacements. Precision medicine via bioinformatics, new imaging technologies, and nanotech medicine are coming online. New devices give some measure of sight to some blind people. A Stony Brook team used targeted light to alter acetylcholine in the brains of mammals, removing some emotional memories. We can conceive of editing human DNA via CRISPR and gene drives. Some populations live decades longer than they did just two generations ago; if life extension becomes even basically successful, by 2050 will we see 100 become the new 60? Meanwhile, biological indicators are increasingly used in security: retina scanning, gait recognition.

With such innovations, after such knowledge, what happens to our sense of what it means to be human?

How does public health change? Does health care become the leading American industry? What’s the public interest in editing people’s minds and bodies?

Beyond human life, we could experience a new nature. As one marine biologist, Ruth Gates, explains her new role: “Really, what I am is a futurist. Our project is acknowledging that a future is coming where nature is no longer fully natural.” None of our technological innovations occurs in a vacuum. As we alter life and grow the digital world, we also alter the earth. As we change humanity, we alter nature. We may, by 2050, speak of a new Earth.

Already some use the term Anthropocene to describe the planet after the year 1900. The Northwest Passage is now open. Multiple nations are engaged in a geopolitical rush for the north polar region, which is opening up into a new world.

That’s just the start. What happens when snows and permafrost retreat northwards, opening up lands for farming? When hot climates turn arid and desertification begins? Do more cities become like Las Vegas, artificial creations maintained solely by massive infrastructural investment? When do people flee such cities? What changes will occur in the planetary ecosystem when we produce hybrid and novel forms of life?

In a parallel to the transformation wrought by infusing human bodies and societies with increasing numbers of machines, what happens to the natural world when that world is suffused with small, networked, data-gathering devices? What happens to the thin layer of life wrapping the Earth’s rocky mantle when we achieve nanotechnology at industrial scale, or nanotechnology at consumer scale? Will digital connectivity laminate or subsume the biosphere?

In one of his novels, Iain Banks describes the infusion of computation into the world through tiny, networked devices. Others have used the term computronium to name the new material that results; Banks coins the sharper word Smatter (smart matter). By 2050, will we produce such smart matter in labs? Or in garages? Or in forests?

What would we call this world, revised by humans and post-human technologies? Donna Haraway offers the perhaps tongue-in-cheek term “Chthulucene”.

By 2050, in short, we are hacking the world. Humans change humans, humans change the world, the new world changes humans, and so it goes. By 2050 we’ve hacked the world, and keep on doing so.

Should we envision this as a renaissance? Perhaps this new world is one where human creativity and identity is reborn through an expansion of our powers and capacities, fraught with all kinds of dangers and disasters. Perhaps 2050 is a time of human rebirth.

Maybe a new politics appears by 2050. Think of this combination: drones, perhaps perpetually aloft thanks to solar power, combined with big data, IoE-based surveillance, and data analytics backed by AI, could yield a dictator’s ecstatic dream of total social control. Does this system elicit a new politics in thirty-five years? Perhaps some will idolize heroes of our time, like Edward Snowden and Alexandra Elbakyan, while others abhor them as dangerous criminals. What kind of politics is described by their fans and opponents?

A new politics: for example, in 2016 a proposal appeared for casting some urban areas as Rebel Cities, spaces where surveillance is disallowed. Would such spaces be fruitful ground for shooters like the one in Orlando, as well as for creative expression? Would Rebel Cities descend into chilling cycles of escalating violence and terror, or create new forms of social amity? By 2050 has this range of thinking about surveillance become the new left-right, blue-red political bedrock?

Or, instead, after we hack the earth and transform our population, is our politics described as what Bruce Sterling calls “cities full of old people, afraid of the sky”?

Hang on. What could stop some or all of these developments from happening?

Objection: Moore’s law could slow down or stop, which might ratchet down the pace of technological innovation and production a bit.

Answer: the pace might slow, but the end state still occurs. Alternatively, we could shift energies from digital technologies to robotics and quantum computing.
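To see why a slowdown changes the timing more than the end state, here is a back-of-the-envelope sketch of naive Moore's-law extrapolation. This is illustrative only, not from the talk: the baseline year, the doubling period, and the function name are assumptions chosen for the example.

```python
def moore_projection(base_count, base_year, target_year, doubling_years=2.0):
    """Project a quantity forward by simple exponential doubling,
    as in naive Moore's-law extrapolation."""
    doublings = (target_year - base_year) / doubling_years
    return base_count * 2 ** doublings

# With a 2-year doubling period, 2016 to 2050 spans 17 doublings:
# a growth factor of 2**17 = 131072.
fast = moore_projection(1, 2016, 2050, doubling_years=2.0)

# Slow the doubling period to 4 years and the factor drops sharply,
# yet growth remains exponential - the end state arrives later, not never.
slow = moore_projection(1, 2016, 2050, doubling_years=4.0)
```

Even the "slow" curve still compounds; that is the sense in which a stalling Moore's law delays rather than cancels these scenarios.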

Objection: we could turn our postindustrial economy into one based on the principle of no growth. After all, as Edward Abbey famously observed, growth for growth’s sake is the ideology of the cancer cell.

Answer: you first. Seriously, try to convince people that they don’t need any more economic growth. Think of the vast equity issues involved in telling the developing world to stop. Or doing this without redistributing wealth.

Objection: a resource crash could knock these futures offline.

Answer: true.

Objection: we could voluntarily stop developing technologies.

Answer: “giving up the gun” rarely works, historically, with the rare exception of state power used against the crossbow.

Objection: a new anti-technological politics could arise, urging us to return to an older form of humanity. NeoLuddites? anti-intellectuals? New Humanists?

Answer: it’s possible, and something to watch for. But too many people see themselves benefitting from technologies. That would take a remarkable cultural turn.

Objection: could a religious movement against new technologies arise? Frank Herbert gave us such a vision in his classic novel Dune, where a kind of crusade blocks AIs from working for centuries.

Answer: it’s possible, and something to watch for. But most religions are happy to use new technologies in the end, so we would have to anticipate a genuinely new religious movement.

Objection: various Black Swans could occur, such as an extraordinarily massive solar event or EMP strikes from some foe or the clathrate gun firing.

Answer: true. That’s the nature of very unlikely, high impact events. Will our technological society build enough resilience into its new Earth?

But before we leave, let’s go even further.

Imagine 2075.

The humans we knew from the year 2000 are a vanishingly rare type, studied by descendants of anthropologists. Artificial intelligence busily works around and above the globe, redesigning life. The biosphere has gained and lost species and entire biomes. The Earth… is transformed. Education and creativity? Something else entirely.

Some inspired and creative AI and semi-human teams launch mixed reality reenactments of life in 2016.

5. What is to be done?

How can we anticipate and act strategically in the face of such potential transformation?

We are so not ready.

We currently suffer under a bad mix: the weird simultaneity of a popular and well-funded embrace of technology with strong anti-science sentiment and unreason. Academic disciplines are not necessarily prepared (think of how 2008 caught macroeconomics flat-footed, and what 2016 is doing to political science). We are radically divided over what constitutes human nature, just as we start to hack it. In the United States, we enjoy political sclerosis and dystopian reaction.

We have many political leaders skeptical of, if not actively opposed to, civil liberties in the digital world: Trump and Clinton; Cameron; China’s gamified autocracy. Journalism is less free to report now than it was a decade ago, according to a Reporters Sans Frontières report; Turkey arrested journalists on World Press Freedom Day. Meanwhile, American TV “news” is a planetary and historical embarrassment. We maintain a horrible legacy of prejudice restricting human growth and creativity. And inequality is starting to approach nineteenth-century levels.

So given all of that, what shouldn’t we do?

Don’t think about it.

Evade the issue by thinking of retirement. (Present generations don’t have a good record of leaving the world in good shape for the young.)

What is to be done instead?

The blindingly obvious: collaborate with each other, across institutions, sectors, nations, populations, professions. Work through inter-institutional groups (like NMC!). Use social media. Use open resources, and be open in turn. Read and watch science fiction.

The not so obvious, and challenging: rethink everything in terms of automation’s possibilities. Think of what can be replaced. Become a cyborg. Use futures methods.

The more challenging: Lead! You’re best placed on campuses and other institutions to inform people in context. Get political. Imagine different worlds and inhabiting them – yourselves, your institution, your children and the generation to come.

You. Help. Make. The. Future.

It isn’t something just done to you, delivered like gifts from a cargo cult. You help make the future.

Every decision you make contributes. When you craft a creative work, or teach in a certain way, or nudge a campus in one direction, or support a political candidate, or tell a story, or dream out loud, or influence younger folks, you help co-create what is coming next. Don’t be passive – it’s too late! You’re already making it happen. You are all – each of you – practicing futurists and world-makers. Do so with open eyes, and the flame of creative possibility roaring in your heart.

Thank you.


Renata Avila, “Ciudades Rebeldes – hacia una red global de barrios y ciudades rechazando la vigilancia” (“Rebel Cities – toward a global network of neighborhoods and cities rejecting surveillance”).

Erik Brynjolfsson and Andrew McAfee, The Second Machine Age.

Andrea Castillo, “Can a Bot Run a Company?”

Alison Cook-Sather, Catherine Bovill, and Peter Felten, Engaging Students as Partners in Learning and Teaching: A Guide for Faculty (2014).

Kristi DePaul, “Robot Writers, Open Education, and the Future of EdTech” (2015).

Lori Dorn, “The First Aerial Illuminated Drone Show in the United States Takes Place Over the Mojave Desert”.

“Fully automated luxury communism: a utopian critique”.

Donna Haraway, “Anthropocene, Capitalocene, Chthulucene: Staying with the Trouble”.

Michio Kaku, Physics of the Future.

Rebecca Keller, “The Rise of Manufacturing Marks the Fall of Globalization”.

Kevin Kelly, The Inevitable.

“Komatsu to use drones for automated digging in the U.S.”

Ray Kurzweil,

Brooke McCarthy, “Flex-Foot Cheetah”.

Alexis Madrigal, “‘The Future Is About Old People, in Big Cities, Afraid of the Sky’”.

Babak Parviz, “Augmented Reality in a Contact Lens”.

Brandt Ranj, “Goldman Sachs says VR will be bigger than TV in 10 years”.

David Rose, Enchanted Objects.

Edward Snowden, “Inside the Assassination Complex”.

Avianne Tan, “Legally Blind 5th Grader Sees Mother for 1st Time Through Electronic Glasses”.

Tags:  education  future  technology 
