Emerging Fellows

What sparks could ignite the next great-power conflagration?

Posted By Administration, Sunday, July 15, 2018
Updated: Monday, February 25, 2019

Craig Perry has written his seventh installment in our Emerging Fellows program. His entire series explores the potential for another great-power war. This piece asks what might ignite the next great-power conflagration. The views expressed are those of the author and not necessarily those of the APF or its other members.

America’s great-power rivals are increasingly pursuing strategic ends through nonmilitary means, betting that competition short of conflict will advance their interests without risking nuclear annihilation. Yet they are also gearing up to project military force abroad and to defend themselves should the United States intervene on behalf of its interests and allies. This raises the very real possibility that Russian or Chinese adventurism—and miscalculations over American willingness or ability to respond militarily—could inadvertently trigger the next great-power war. Unfortunately, growing doubts about longstanding U.S. commitments to its allies and international norms are making this tragic outcome far more likely.

Russia has reemerged in the past decade as a formidable military power, capable of defeating neighboring states such as Georgia and Ukraine while seizing the initiative farther afield in Syria. Its theater ballistic missiles and sophisticated air and coastal defense systems dominate the Black Sea and Baltic regions, posing a worrying threat to America’s NATO allies. Similarly, the People’s Republic of China has vastly improved its offensive capabilities in recent years, projecting naval power far beyond its littoral areas while holding its renegade offshore province, Taiwan, at ever-greater risk.

These developments have substantially increased the likelihood of American forces coming into conflict with their great-power counterparts. For example, not long after Russian mercenaries launched an ill-fated attack on a U.S. outpost in Syria earlier this year, the United States and Russia nearly came to blows over the Syrian regime’s use of chemical weapons. Just a month later, China deployed a nuclear-capable bomber to the disputed Paracel Islands, then dispatched warships to challenge the U.S. Navy’s freedom of navigation in the region. As such brinkmanship becomes more common, the likelihood of a serious—and potentially escalatory—military confrontation will only grow.

This problem is particularly acute wherever the United States maintains alliances within its rivals’ historical spheres of influence. In Europe, Moscow could quickly defeat the meager NATO forces forward-deployed to the Baltic States—former Soviet republics sandwiched between mainland Russia and its Kaliningrad exclave—while making it exceedingly difficult for the United States and its allies to retake this territory without triggering nuclear war. Meanwhile in Asia, Beijing has set a mid-century deadline for national reunification, with the People’s Liberation Army reportedly planning to accomplish this goal as early as 2020. The PLA is already poised to overwhelm Taiwanese defenses with little warning, and disrupt U.S. carrier and airbase operations as far away as Okinawa and Guam through a combination of kinetic, cyber, and electronic warfare. In both cases, America’s near-peer adversaries are positioned to seize the initiative in their own backyards while severely complicating Washington’s ability to come to the aid of its allies.

All of this presupposes, of course, that the United States remains fully committed to its far-flung network of alliances, which have been a cornerstone of its foreign policy success since World War II. The 2016 election of a U.S. commander-in-chief who repeatedly questions the value of NATO and other foreign entanglements, however, has fundamentally challenged assumptions of American resolve. President Trump’s pronouncements naturally undermine confidence in U.S. security guarantees, and this growing uncertainty may eventually embolden Russia or China to call America’s bluff. The ramifications of such a gamble would be catastrophic: if the U.S. military responds as promised, it would plunge the world into the next great-power war; if it does not, the international system that has underpinned global peace and prosperity for the better part of a century would come to an ignominious end. Either way, the future is shaping up to be a much different place than the “Pax Americana” of yesteryear.


© Craig Perry 2018

Tags:  NATO  politics  war 

 

Ugh, Scenarios? That’s Not Very Innovative…

Posted By Administration, Tuesday, July 3, 2018
Updated: Monday, February 25, 2019

Adam Cowart is one of our Emerging Fellows, and this is his eighth article written for the program. In it, he explores scenarios. The views expressed are not necessarily those of the APF or its members.

Previously we have challenged what truly counts as innovation (depending on how you parse the definition, maybe nothing!) and examined the challenges of systems-wide innovation. We can now introduce four possible scenarios (among many, many others) to begin to conceptualize what the future of innovation might look like:

1. Baseline. The baseline scenario has organizations, governments, and economies continuing to muddle through. Some continue to sub-optimize and merely exist, others innovate to various degrees, and others are shape-shifters, able to leap from one new industry to another, evolving as they go. Over time, we become more fluent in the efficiency and quality of innovation activities, using measurements such as the research quotient (RQ), developed by Anne Marie Knott, which relates innovation inputs to outputs to gauge success (a rough illustration follows this list). Innovation as disruption is still seen as the gold standard, though large, mature firms still struggle with whether to embrace or suppress these disruptions. Innovation is still largely situated.

2. Systems-Bound Mutual Self-Interest. A greater understanding of systems, and of systems literacy, coupled with virtual and software-based platforms and physical cross-disciplinary innovation spaces, points to less sub-optimization and less unproductive situated innovation. Instead, organizations will bond together in mutual self-interest (perhaps making some a bit uneasy, as this self-organization looks a lot like vertical integration and monopolization). Definitions of competition and antitrust will be rewritten, and new measurements of social good will come to eclipse GDP as the primary measure of innovation and economic health.

3. Double-Down on Sub-Optimization and Quick Profit. The insatiable thirst for quarterly profits points to a future of sub-optimization that shows no signs of disappearing. In an effort to stimulate the real economy, governments implement varying policies of charging demurrage on the financial market, levying fees on or taxing capital that isn’t being used in “productive” ways. This will increase the capital available for reinvestment. However, instead of increasing innovation and productivity, it could simply “feed the beast” of sub-optimization, especially when coupled with the relentless focus on quarterly returns and on “maximizing value”. In a world where capital must be reinvested in the firm, the resulting behavior could involve a great deal of tire spinning or, more precisely, short-term cycles of excessive value extraction meant to replicate share buy-back schemes and large dividend payouts. This, coupled with rapid rise-and-fall “fad” industries, makes for rapid boom-bust cycles globally. Think of it as shifting the burden in overdrive.

4. The Rapid S-Curve Economy. Organizations become increasingly amorphous, shape-shifters constantly seeking out new emergent industries, colonizing them rapidly, then moving on once the industry matures. The industries themselves also mature rapidly. Organizations become almost nomadic. Whether their reserves of capital for innovation and expansion are large or scarce, organizations move quickly to create and capitalize on opportunities; one or two players prove successful and monopolize the space; the rest move on to find greener pastures.
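As a rough illustration of the RQ-style measurement mentioned in the baseline scenario, the sketch below estimates an R&D elasticity of revenue from a log-log production function. This is a minimal toy sketch, not Knott's actual methodology or dataset: the firm-year figures and the simplified specification are invented purely for illustration.

```python
# Illustrative sketch only (not Knott's actual method or data): estimate an
# RQ-style measure as the R&D elasticity of revenue in a log-log production
# function using ordinary least squares. All figures are hypothetical.
import numpy as np

# Toy firm-year observations: revenue, capital, labour, and R&D spend.
revenue = np.array([120.0, 150.0, 190.0, 240.0, 300.0])
capital = np.array([80.0, 90.0, 100.0, 115.0, 130.0])
labour = np.array([50.0, 55.0, 62.0, 70.0, 80.0])
rnd = np.array([5.0, 7.0, 10.0, 14.0, 20.0])

# ln(revenue) = a + b*ln(capital) + c*ln(labour) + rq*ln(R&D)
X = np.column_stack([np.ones_like(revenue),
                     np.log(capital), np.log(labour), np.log(rnd)])
coeffs, *_ = np.linalg.lstsq(X, np.log(revenue), rcond=None)
rq = coeffs[-1]  # percentage change in revenue from a 1% change in R&D
print(f"Estimated RQ-style R&D elasticity of revenue: {rq:.2f}")
```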

What to do? The answer may come in the form of government policy. Governments are still the primary investors in risky innovative endeavours. When will investment programs and structures be put in place to fund systems-wide innovation, even if it remains sub-optimized within the nation-state?

Perhaps this is all just an elaborate way of saying that innovation is messy, often fails, and even when successful we can’t be sure of what positive and negative externalities it will cause. Ultimately, sub-optimization is inherently a product of innovation. By correcting one problem we create others. Not all problems are created equal, of course, and we would prefer to have certain problems over others – for example, all the side-effects associated with medications we choose are an unfortunate but necessary trade-off. We are both selfish in the sense that we view progress from our own situated perspective, and we are utilitarian in the sense that we generally hope that any negative outcomes of innovation will be less than the derived benefit.

We are left with a future where the nature and impact, not to mention the consistency and payback, of innovation (creating results by doing new things) are unclear. Will we move towards increased sub-optimization, or will we create and adopt the social technologies we need to generate deep structural results?


© Adam Cowart 2018

Tags:  economics  innovation  system 

 

Superposition and Frosting a Cake

Posted By Administration, Monday, July 2, 2018
Updated: Monday, February 25, 2019

Adam Cowart is one of our Emerging Fellows, and this is his seventh article written for the program. In it, he compares superposition to innovation. The views expressed are not necessarily those of the APF or its members.

We previously challenged what is truly innovative versus sub-optimization in some shape or form. We can now delve deeper into assumptions about innovation today. If we start with standpoint theory and situated knowledge, we can drill down a little further to reach the concept of situated imagination. Situated imagination contends that knowledge depends upon our location: physical, economic, and social, among other situating coordinates. Thus, imagination is an amalgamation, an alchemy of self and collective conditions, which influences not only meaning but also the concept of the “not-yet real”, in the words of Jean-Paul Sartre.

In order to “do different things” we must be able to “think differently”. Therefore, moving beyond sub-optimization first requires an acknowledgement of what we could call “situated innovation”. Situated innovation, the phenomenon of innovation being inherently connected to a particular observer or group of observers, is perhaps the most significant hidden cost in the global economy. Exercises in empathy can help to re-situate the observer, but typically this is done in a manner that is itself inherently sub-optimizing.

While our discussion has largely revolved around the issue of space (across silos, organizations, and countries), time is another dimension in which we sub-optimize. The most obvious example is shifting the burden of climate change onto future generations. But the issue also manifests in, for example, inter-generational organizational problems. Consider, too, the sub-optimization of the past. Though less obvious than the impact on the future, many cost-benefit analyses and projects were built around past projections of a future that is now the present. By sub-optimizing within the present, we underutilize the past, which can have ripple effects into the future. Decisions are forgotten; goals are lost and abandoned. For every idea that survives, countless others falter at some point. A Darwinian perspective on ideas and past innovations is nothing more than permission to hold a bias towards old ideas.

Finally, beyond space and time, there is matter. Agential realism and “spacetimemattering”, concepts developed by Karen Barad, provide another perspective on matter and meaning in innovation activities. Agential realism, at its core, is about the inherent entanglement of all things. Any act of observation creates an “agential cut” in which we include some things and exclude a number of other possibilities. This temporary “cut” allows us to view a chunk of a thing, of matter, in isolation, in order to gain a greater understanding of the thing we wish to observe. Hence, our observations of the world are inherently performative. Here, too, we see the challenge of innovation and the limits of our innovative capacities. Observationally, we must isolate an object as either a thing or a process. By doing this, we ignore its connected (here, entangled) state of being.

Think of it like this: You are making a cake. Typically, you would wait until it cooled and then frost the whole cake. However, once this particular cake has cooled, you can no longer observe the cake as a whole. You see a piece of the cake and cut out that piece, cover it in frosting, and return the piece to the cake. Then you go about trying to find other pieces of the cake. Unable to see the whole cake at once, you are left with cutting, frosting, returning. In all likelihood, you will only frost a small portion of the cake. Even if you somehow manage to cut off and frost every piece of the cake, it will look more than a little strange! Later that evening, most guests at the party, blessed with the ability to observe the whole cake at once, will agree this is a far from optimum cake. Rather, the cake looks like it was made by Frankenstein.


© Adam Cowart 2018

Tags:  economics  innovation  realism 

 

Redesigning Society for Prosperity

Posted By Administration, Sunday, July 1, 2018
Updated: Monday, February 25, 2019

Daniel Riveong has written his fourth installment in our Emerging Fellows program. Here, he explores the evolving concept of prosperity. The views expressed are those of the author and not necessarily those of the APF or its other members.

We are at a critical juncture in our understanding of prosperity. We no longer have an unshaken belief that prosperity is based on economic development or industrialization. The United Nations’ Sustainable Development Goals (SDGs) have helped us reassess the belief that prosperity is mainly an economic goal. These goals have expanded our definition of prosperity towards a holistic improvement of well-being in areas such as health, education, and nutrition.

As we begin to free ourselves from a GDP-focused view of prosperity, we are gaining greater freedom to design society for a more holistic approach to prosperity. To explore these new possibilities, we should revisit indigenous views towards social values, commerce, community, and governance. Indeed, we can draw from a few readily available examples.

In the Andean region of South America, Ecuador and Bolivia have reimagined their social contracts by integrating the concept of Buen Vivir into their constitutions. Buen Vivir (“good living”) is the Spanish phrase for a worldview shared among Andean peoples. While it has no single definition, Buen Vivir emphasizes collective well-being that is in harmony with nature and culturally sensitive. The concept was integrated into the Ecuadorian constitution in 2008 and into the Bolivian constitution in 2009.

Both countries have interpreted Buen Vivir in different ways. The Ecuadorian constitution guarantees a healthy and an economically balanced way of living. This includes granting nature legal rights that can be enforced through the court system. In contrast, the Bolivian constitution views Buen Vivir through the lens of social justice and political-economic redistribution. The harmony of Buen Vivir is achieved through limiting land ownership size and elevating the political power of village and indigenous communities.

Indigenous concepts not only offer more holistic visions for social contracts but also alternative ways of thinking about work and capitalism. The Igbo people of Nigeria have created a system of apprenticeship that focuses on entrepreneurship and self-sufficiency. It’s more than an education system; it’s a unique venture capital system. In the Igbo tradition, children, usually after primary school, are sent to work for an owner of a trade or shop for 5 to 10 years. At the end of the apprenticeship, owners are obliged to help the apprentices set up their own businesses (called a “settlement”). This apprenticeship model offers a different way to think about business, economics, and education. The Igbo tradition ensures inter-generational equity through enabling entrepreneurship and self-sufficiency.

While we can find many sources of inspiration from the Global South, we need to understand how they can work, be re-interpreted, and be scaled in different societies and contexts. The different Ecuadorian and Bolivian approaches to Buen Vivir are one example of the challenges of interpretation. In the case of the Igbo apprenticeship, we need to imagine what a regulated, digitalized, and scaled-up version of this apprenticeship could look like. Now that we have greater freedom to rethink society, these are the exciting new challenges we must focus on.


© Daniel Riveong 2018

Tags:  development  economics  prosperity 

 

Welcome to Earth, the Sub-Optimized Planet

Posted By Administration, Wednesday, June 20, 2018
Updated: Monday, February 25, 2019

Adam Cowart is one of our Emerging Fellows, and this is his sixth article written for the program. In it, he explores the potential of sub-optimized ecosystems. The views expressed are not necessarily those of the APF or its members.

What role can – or will – innovation play in the future of the real economy? To begin, let’s start with the simplest definition of innovation: innovation is “creating results by doing new things.” The “doing new things” part is fairly self-explanatory on the surface: it is the process of doing something in a way in which it was not previously done. (Often, of course, we may have done something, stopped doing it, and then started doing it again, either aware or unaware of having done it in the past.) But the “creating results” piece of the definition is less clear. What, precisely, is a result? A positive outcome? And, if so, a positive outcome for whom? We can say that, today, an innovative result is one in which we either save money, make money, or provide some sort of social or environmental good. But are those the results we should be aiming for? Do we care, or even understand, how these results impact and influence other components of the global system? And, for our purposes here, how will our definition of “results” evolve in the future? The point here is not to focus on types of results reporting, such as triple-bottom-line or happiness-index-type measurements, but on the deeper structural challenges to “real” innovation.

There is no question that what we might call “innovation-offset” occurs across a system or multiple subsystems. How often does an innovation in one area generate an offsetting process inefficiency or product redundancy in another? Put another way, if you innovate in one area, to the detriment of another area, are you really innovating at all? And how would you even know?

Two common terms used to describe this phenomenon are sub-optimization and shifting the burden. Sub-optimization commonly refers to silo-type thinking within an organization. This leads to non-value-added activities, redundancy, or diminished returns. Shifting the burden, on the other hand, is a term generally meant to describe a tendency to focus on resolving surface-level, symptomatic issues, pushing costs and negative externalities onto others.

What we might call “sub-optimization ecosystems” are now a vital part of the real economy: not only their maintenance, but also the perpetual attempts to circumnavigate them. By focusing on the self-interest of the firm at the expense of the larger system, we are inherently sub-optimizing. We ostensibly innovate within a department, across a division, across a firm, across an industry, across multiple industries, then across whole economies. There is no escape.

Our attention has been focused on what we can call the migratory patterns of money: how its shapes and structures tend to manifest. And here we see a profitable innovation ecosystem, where activity and expenditure are their own reward. But wait, isn’t innovation inherently messy? An iterative process of trial and error? “Fail fast to succeed sooner”?

Consider what percentage of global GDP is dedicated to activities that solve problems by creating new problems elsewhere. Cycles of pointless zero-sum innovation. At a time of complexity, instead of adopting the tools of social innovation and systems change, we have doubled down on the mercurial dark arts of sub-optimization, masked as real innovation.


© Adam Cowart 2018

Tags:  economics  environment  innovation 

 

Why aren’t nukes ever enough?

Posted By Administration, Monday, June 18, 2018
Updated: Monday, February 25, 2019

Craig Perry has written his sixth installment in our Emerging Fellows program. His entire series explores the potential for another great-power war. This piece asks an important question about nukes and their effectiveness as a deterrent. The views expressed are those of the author and not necessarily those of the APF or its other members.

For all their destructive potential, nuclear weapons ushered in an unprecedented era of global stability after 1945, deterring the great powers from the kinds of internecine conflicts that risk their mutual destruction. But this period hasn’t been entirely peaceful, either, as states and non-state actors have sporadically waged more limited wars the old-fashioned way—that is, utilizing “conventional” weapons—whenever they calculate the odds of nuclear escalation are low. Consequently, powers great and small have continued to arm themselves with military capabilities of ever-increasing speed and lethality, determined to gain a decisive advantage on some future battlefield—an unfortunate function of survival in our anarchic international system.

For much of the Cold War, the United States made little effort to match the Soviet Union’s massive conventional-warfare superiority in Europe, calculating that its nuclear arsenal would be enough to offset any Soviet military advantage. Beginning in the late 1970s, however, the Pentagon embarked on a new offset strategy incorporating technological breakthroughs in precision-guided munitions, radar-evading stealth aircraft, and space-based communications and navigation. Rather than rely on the traditional American way of war—attrition and annihilation—this revolution in military affairs allowed relatively small numbers of highly nimble American and allied forces to defeat numerically superior adversaries, as dramatically demonstrated during such operations as Desert Storm (1991) and Iraqi Freedom (2003), while sharply reducing civilian casualties and collateral damage.

Some scholars attribute the collapse of the Soviet Union in part to its failed efforts to keep up with the West in this expensive, high-tech arms race—and for decades afterward the United States had no peers in terms of conventional military capabilities. But a funny thing happened on the way to American global hegemony: while Washington diverted resources away from cutting-edge investments after 9/11, Moscow and Beijing slowly but surely began closing the capability gap through a combination of indigenous know-how, industrial espionage, and lessons learned from U.S. military operations. In recent years, Russia and China have developed increasingly effective air defense systems to blunt America’s signature warfighting advantage, and deployed sophisticated missile systems on a variety of platforms to complicate U.S. ground and maritime operations near their territory. Such anti-access, area-denial measures complement their markedly improved power-projection capabilities now on display in Syria and the South China Sea, respectively.

Not to be outdone, the U.S. Department of Defense recently embarked on a third offset strategy to harness innovations in artificial intelligence, automation, additive manufacturing, and other fields. While traditional weapons acquisition processes have become increasingly unaffordable—with more and more money spent procuring fewer and fewer high-end aircraft, ships, and armored vehicles—this latest approach hopes to reduce costs by disaggregating marquee platforms into more specialized networked systems leveraging off-the-shelf commercial technology. Of course, this same technology is accessible to America’s rivals as well, suggesting U.S. forces will soon need to develop new defenses against the very drone swarms and other “futuristic” weaponry they are currently developing, in a seemingly never-ending cycle.

Unfortunately, such military modernization has the potential to make great-power conflict more likely, their credible nuclear deterrents notwithstanding. Both Russia and China perceive America’s superior conventional capabilities—coupled with its expanding anti-ballistic missile networks in Europe and Asia—as destabilizing, since they could facilitate preemptive U.S. attacks targeting their nuclear arsenals. Meanwhile, each country is developing its own expeditionary forces capable of quickly seizing nearby territory, then (theoretically) holding out against an anticipated U.S.-led conventional counterattack—which may embolden them to resolve a greater variety of regional disputes militarily, especially where they judge the United States unwilling to intervene at the risk of nuclear war.

This combination of mutual distrust and localized military parity is increasing the likelihood of strategic miscalculation, and undermining the logic of nuclear deterrence that has constrained great-power competition for nearly three-quarters of a century. While it remains unlikely that the United States, Russia, or China will launch large-scale attacks on each other in the coming decades, they could very well become embroiled in regional conflicts that devolve into direct military confrontation among the great powers—conflicts with the potential for a much wider global conflagration.

© Craig Perry 2018

Tags:  attack  politics  war 

 

Do we need new levers to close infrastructure investment gaps?

Posted By Administration, Tuesday, June 5, 2018
Updated: Monday, February 25, 2019

Daniel Bonin’s fourth post in our Emerging Fellows program concerns infrastructure investment gaps. The views expressed are those of the author and not necessarily those of the APF or its other members.

Forecasts suggest that infrastructure investment gaps are here to stay, especially when it comes to transportation and electricity. How can we close these gaps by 2050? The World Economic Forum distinguishes three major levers: (a) reduce demand, (b) build new infrastructures and (c) optimize existing infrastructures. There may be two additional important levers on our way to 2050 that stem from the growing connectedness of people and intelligent things, paired with self-executing contracts (i.e. smart contracts). The first lever is novel pricing models for infrastructure usage or, more precisely, an “intelligent user-charge principle”. The second relates to the effective altruism movement, the idea of identifying the areas where one additional euro can create the most impact.

Today, we are used to all kinds of pricing structures to pay for products and services. Product purchases, freemium models, flat rates or pay-per-use are part of everyday life. There is a second class of pricing structures that benefit from growing connectedness and powerful algorithms. These have just started to become a normal part of life. There is Uber’s surge pricing, based on market demand levels. Contingency pricing, which is common in the manufacturing or energy sectors, conditions the amount to be paid on the performance of the contractor. Any Mac user who tries to book a flight should be familiar with differential pricing, pricing based on the type of customer. So why shouldn’t we pay a dynamically adapted price each time we “consume” infrastructures? Think of paying a certain amount of money per kilometer travelled. We could be reimbursed for travelling during off-peak hours or pay extra to have priority during peak hours. Depending on how the fuel economy of our car ranks compared to the median fuel economy, or on whether we share a ride or not, we would pay a dynamic price per kilometer travelled. And who has not yet dreamed about punishing the SatNav for inefficient routing? There could even be subsidies to foster the adoption of new environmentally friendly technologies. What about subsidies for vehicles that pro-actively improve the air quality in cities, or discounts for socially disadvantaged families? The possibilities are endless and so are the synergies. If we track the kilometers we travel, we could also identify the roads that are working at a particularly high level of capacity utilization and allocate maintenance and expansion investments accordingly. This is similar to effective altruism, which tries to identify areas where a certain amount of money can generate the largest societal impact.
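Below is a minimal, purely illustrative sketch of how such an intelligent user charge might be computed per kilometer. The base rate, surcharge factors, and discounts are hypothetical assumptions, not figures from this post or from any real scheme.

```python
# Illustrative sketch only: a toy per-kilometer infrastructure charge combining
# the adjustments described above (off-peak rebates, peak surcharges, fuel-economy
# and ride-sharing discounts). All rates and thresholds are hypothetical.

def per_km_charge(base_rate: float,
                  peak: bool,
                  fuel_economy_l_per_100km: float,
                  median_fuel_economy: float,
                  shared_ride: bool) -> float:
    """Return a dynamic price per kilometer travelled."""
    rate = base_rate
    # Peak-hour surcharge versus off-peak reimbursement.
    rate *= 1.5 if peak else 0.8
    # Vehicles more efficient than the median pay less; gas-guzzlers pay more.
    rate *= fuel_economy_l_per_100km / median_fuel_economy
    # Sharing a ride splits the burden on the infrastructure.
    if shared_ride:
        rate *= 0.5
    return round(rate, 4)

# Example: a 10 km off-peak trip in an efficient, shared vehicle.
print(10 * per_km_charge(base_rate=0.05, peak=False,
                         fuel_economy_l_per_100km=4.5,
                         median_fuel_economy=6.0,
                         shared_ride=True))
```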

This may sound like sci-fi, but if we take all the buzz around the Internet of Things, distributed ledgers such as blockchain, and artificial intelligence seriously, why shouldn’t the intelligent user-charge principle and effective altruism be feasible? Yes, there are various obstacles. For instance, we would need to quantify and balance a tremendous number of aspects. There are ethical questions, such as how eligibility for discounts should be calculated. Is it OK to favour densely populated areas, and how can we make sure that infrastructures with low capacity usage are not side-lined? These are questions and trade-offs we need to answer in any case amid budget constraints. I believe that both the user-charge principle and effective-altruism-driven infrastructure planning can help us find fairer answers and pose the right questions.

Another argument for considering both levers is that, on our way to a more sustainable world, certain trends undermine the ability of our existing infrastructure funding mechanisms to function properly. Firstly, we will need a lot of resources to roll out intelligent transport and energy infrastructure. Secondly, e-mobility and mobility services with lower vehicle ownership have a negative impact on today’s infrastructure funding mechanisms. While e-mobility and more efficient transport systems reduce externalities and perhaps even infrastructure demand, they also reduce revenues from fuel taxes. The impact of e-mobility is even more far-reaching, as this new paradigm will increase the investment needs of our energy grids. With lower private car ownership under a mobility-as-a-service paradigm, countries with historically high motorization rates will see their tax revenues from car registration dwindle. In order to secure a steady monetary stream for infrastructure maintenance and expansion, we need to establish policy that links taxes to the actual usage of infrastructures. These examples also raise another question: with all the change in ways of living and technology between now and 2050, are we overestimating future infrastructure demand, or does the Jevons paradox hold true? I will try to address this question in the next post.
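To make the fuel-tax erosion argument concrete, here is a minimal sketch of how revenue falls as the electric share of a vehicle fleet grows. Fleet size, mileage, fuel economy, and tax rate are hypothetical assumptions chosen only to illustrate the direction of the effect.

```python
# Illustrative sketch only: how fuel-tax revenue erodes as the EV share of a
# fleet grows, motivating a usage-based charge. All figures are hypothetical.

def annual_fuel_tax_revenue(vehicles: int,
                            ev_share: float,
                            km_per_vehicle: float = 12_000,
                            litres_per_100km: float = 6.0,
                            tax_per_litre: float = 0.65) -> float:
    """Fuel-tax revenue from the remaining combustion-engine vehicles."""
    ice_vehicles = vehicles * (1 - ev_share)
    litres = ice_vehicles * km_per_vehicle * litres_per_100km / 100
    return litres * tax_per_litre

for share in (0.0, 0.25, 0.5, 0.75):
    rev = annual_fuel_tax_revenue(vehicles=1_000_000, ev_share=share)
    print(f"EV share {share:.0%}: fuel tax revenue ≈ €{rev / 1e6:.0f}m")
```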


© Daniel Bonin 2018

Tags:  economics  infrastructure  technology 

 

Literacy for the Year 20018

Posted By Administration, Monday, June 4, 2018
Updated: Monday, February 25, 2019

Monica Porteanu has written her fifth installment in our Emerging Fellows program. Here, she explores the evolving meaning of the word literacy. The views expressed are those of the author and not necessarily those of the APF or its other members.

At its origin, the term “literacy” meant “the ability to read and write.” Although it was first recorded in the 19th century, coinciding with the beginning of the industrial era, specialists have studied its evolution starting from much earlier times. One of the ways they have tracked literacy is through signatures on marriage certificates. When discussing literacy and the industrial revolution, economic historians such as E.G. West note that “the evidence on literacy and schooling is interdependent.” His study suggests that “literacy specialists usually describe figures of schooling as ‘indirect evidence’ of literacy. Schooling specialists, meanwhile, regard literacy as ‘indirect evidence’ of schooling.”

With the transition from the industrial to the knowledge era, the term has evolved to mean “competence or knowledge in a specific area.” For example, in addition to reading and writing, organizations such as the OECD discuss numeracy and financial literacy. Futurists advocate for future literacy. Others address health or science literacy, with reading and writing now considered fundamental literacies.

What does this mean for the era we are in? Arguably, we are still attempting to understand what that era is. The knowledge era seems to have been first identified by Peter Drucker in 1959, when he introduced the concept of the knowledge worker. Today, the term seems more relevant than ever. Some suggest that 2018 belongs to the era of humans, the Anthropocene, while the World Economic Forum points to the fourth industrial revolution. Eras, nevertheless, seem to be better defined after the fact. Literacies, however, seem to prepare us for what’s to come. Asking what these literacies are, regardless of how we choose to name an era, seems timely. Today’s literacies include technological and informational competencies, which have been essential for a while now. That trend appears set to continue, but for how long? With the current aim of developing systems that are highly usable by humans, will there continue to be a need for deep technological literacy?

What might the literacies for tomorrow look like? Would they still be interdependent with schooling, as noted for the industrial era? How could one prepare for a future when the current rate of change is high already?

While possible but unknown tomorrows unfold in our imagination, they have two things in common: (1) they are uncertain; and (2) preparing for them is ambiguous. At the same time, most of us have difficulty dealing with ambiguity. In fact, social psychologist Geert Hofstede identified “uncertainty avoidance” as one of the six dimensions of culture. His study across about 100 nations reveals that, globally, human comfort with ambiguity sits, on average, at about 36%. The other 64% of the time, humans seem to prefer to control the future. The stronger that preference, the more rigid the codes of belief and behaviour, and the greater the intolerance.

But how could one control the future? At best, we can prepare. Getting ready for next year seems attainable with current competencies. Bracing for five years from now would involve a multitude of assumptions, and potentially changes in competencies. How about ten years out? Or fifty? The longer the time horizon, the more we tend to joke about it and the less we seem to care how far ahead we are talking. At the same time, such long views encourage us to push our thinking beyond what we know today.

In this context, let’s say the year is 20018, eighteen thousand years from now. What literacies would prepare us for 20018? By then, we could be back in an agrarian era, or Martians, or non-existent altogether. Who knows? Breathing might become the literacy of those times. What would prepare us for such eras seems to be our ability to deal with the ambiguity awaiting us. Shouldn’t we think about ambiguity as a critical literacy? How might such a literacy be developed?

Some might argue that comfort with ambiguity develops through critical-thinking or resilience practice or training events. Such events, though, still set the expectation of a determined outcome by the end, such as earning a grade, a job, or an advancement. Others might point to life events or religion as good teachers of ambiguity.

Overall, the current schooling system cannot train us to be at ease in ambiguous environments. Neither can most existing societal dimensions, including economics, politics, and governance. Isn’t it time to address that?

© Monica Porteanu 2018

Tags:  economics  education  politics 

 

Morality First, Knowledge Second?

Posted By Administration, Thursday, May 24, 2018
Updated: Monday, February 25, 2019

Polina Silakova’s fifth post in our Emerging Fellows program explores the role of morality and manners amid disruptive technologies. The views expressed are those of the author and not necessarily those of the APF or its other members.

If you have ever travelled around Vietnam, you might have noticed at the main entrance of some schools the motto, previously ubiquitous in the communist era: “Tien hoc le, hau hoc van”. Its direct meaning refers to the importance of learning proper manners in human relations first, and only then starting to learn the other things you would normally learn at school. Loosely, it can be translated as “morality comes before knowledge”. It has served as a good call for millions of Vietnamese students, and really, it would not hurt anyone to be reminded of it. We wonder whether this prioritisation is still applicable to our world of rapidly growing technologies.

The past couple of months offered us some food for thought on the evolution of business ethics in the light of technological progress.

– Facebook makes money from selling our data, which it gets in exchange for letting us share this very data free of charge – is that a fair deal? While regulators are only attempting to catch up with the technicalities of this business model, Facebook continues to benefit from this knowledge gap.
– The first pedestrian was killed by an autonomous car that Uber had approved for public roads, even with a vehicle operator behind the steering wheel. Was it human complacency with an autonomous vehicle offering a relaxing ride? Did the launch of the system happen too early, rushed by the appetite for a quicker return on investment? Or was it a lack of maturity in this field that prevented good judgement on whether the system was ready for operation?

What previously was black-and-white good or bad has now shifted into a grey area.

While in these cases Facebook users and the testers of Uber’s driverless technology might be victims of ignorance and a lack of caution, some other innovations make us concerned about how the ethics of consumers might evolve in our future market society. Augmented reality and cruel video games; robots and the sex industry; more generally, robots as household servants (or slaves?). One could say that whatever people choose to do in their free time is their business, but wouldn’t it be naive to assume that changes in our own morality will have no implications for society?

A further twist to these already ambiguous scenarios came out of a study on human-robot interaction conducted by researchers from MIT and Stanford. Their experiments have shown that when people work with autonomous robots and errors occur, humans tend to blame the robots rather than themselves. Interestingly, when success occurs, we humans take the credit more often than we give it to the machine. In other words, our habit of shifting responsibility for mistakes from ourselves to others remains unchanged when we deal with autonomous tech-friends instead of our familiar colleagues.

This poses further questions about the implications for ethics in a high-tech post-capitalistic world. Who will take responsibility for decisions made by a board that consists of both humans and AI? One of the first non-human board directors – VITAL – already gets to vote in board meetings alongside five human directors at a venture capital firm in Hong Kong. While VITAL only takes decisions on investments, where its skill in scanning large volumes of data comes in particularly handy, we can only imagine how this might play out with advances in deep learning. Will we still be sure that the machine is acting in the company’s interests? And if reality shows the opposite, who is to blame?

How will ethical decision-making evolve in the future? Will it be something a majority demands? Something the powerful agree on? Or something that AI would recommend as the least harmful option? What is clear is that it is becoming increasingly dependent upon how much we know about technology and its implications for society. Knowledge starts to inform morality and we should challenge ourselves to stay up to speed to make sure we take decisions that meet our moral standards.

© Polina Silakova 2018

Tags:  Facebook  society  technology 

 

The End of a (Virtual) Way of Life

Posted By Administration, Monday, May 21, 2018
Updated: Monday, February 25, 2019

Adam Cowart is one of our Emerging Fellows, and this is his fifth article written for the program. In it, he explores the virtual spaces where we currently base our economy.

10,000 years ago, our economies were largely mobile and borderless. We roamed, we hunted, we foraged. One of the earliest clashes of economic models was when land ownership and borders, spurred on by the Agricultural Revolution, disrupted the nomadic lives of a decreasing proportion of the population. From the earliest evolutionary days of humanity until now, hunting and gathering was the dominant societal and economic model for approximately 90% of our history. Today, nomadic peoples number around 40 million.

Parallels between nomadic hunter-gatherer societies and contemporary Generation X knowledge workers were drawn in the late 1990s. But these observations were more often than not quirky, meant to emphasize a “new” way of working, not to mention to indulge in a bit of generational bashing that has not evolved much now that it is being applied to millennials.

Foraging societies are typically characterized as placing little value on fixed resources (i.e. land): they are collective, generally non-hierarchical societies with immediate-return economies. They derive benefits from their activities immediately, rather than in a delayed-return economy, where benefits accrue over time and are often associated with property rights of some sort.

Virtual foraging is such an innate activity that we do not even consider it as such – no different than our nomadic ancestors. We “search” the web for what we are looking for, we hunt, and we roam across countries and worlds. While traditionally this has meant searching for food, what real difference is there between finding food and finding information that can be monetized to purchase food? Virtual foraging and knowledge work do not typically take place in an immediate-return economic environment, but the other characteristics of a foraging society are evident: non-hierarchical groups without fixed resources, exploring open spaces.

Much has been written about censorship and net neutrality. There is still a very strong assumption that the virtual world is an open, borderless world. But as we increasingly migrate – and colonize – virtual spaces, will this continue? The bulk of the conversation has been at the micro level. We typically point to Big Brother-type influencers. Nefarious government organizations monitoring and censoring us, or corporations manipulating us. The issue is never us – it’s someone else. At the macro level, we see echoes of our old ways of living and working. Vast open plains, forests, and oceans. A limitless world for us to wander and forage within. And the relatively brisk pace at which we have begun to colonize, divide, and weaponize this space.

We have an unnerving ability to replicate our collective behaviors across time and space. It took thousands of years to erect barriers and borders on earth, and less than 20 years to begin the process in the virtual world. This is our capitalist model in its truest form: find or create space, break it up into pieces, monetize those pieces, move on. Capitalism-free “safe spaces” have been created, but the walls erected around them, like all walls, don’t last for long. That capitalism is a relatively recent phenomenon in human existence is both reassuring and frightening.

What else is left? Creating or finding more space. Making the intangible tangible. Taking the unreal and making it real. If it is unmonetized, monetize it. When the first group of settlers head for Mars, it should not surprise anyone if one of those settlers has already incorporated a new business. “Martian Fencing Ltd.” You know, just in case.

© Adam Cowart 2018

Tags:  economics  technology  virtual reality 

 