Emerging Fellows

As much as we might deny it, we always trust the bank

Posted By Administration, Sunday, April 8, 2018
Updated: Sunday, February 24, 2019

Nichola Cooper’s third post in our Emerging Fellows program explores trust, blockchain technology, and banks. The views expressed are those of the author and not necessarily those of the APF or its other members.

The future of trust is topical. A sustained spate of political and financial calamities has accelerated the decline of global trust levels and spurred interest in, and development of, decentralised technologies and peer-to-peer networks. This blog post marks the first in a series exploring how that trust is expressed in discernible changes in social organising patterns, engagement with technology, and financial markets.

We begin with Bitcoin. You might have heard of it? It is commonly thought that its creation was a reaction to the global financial crisis, born of a desire to circumvent unscrupulous bankers and prevent bad lending practices. This is not quite true.

In fact, the Bitcoin protocol was designed to resolve the double-spend problem of digital currencies. Unlike physical money, which is reasonably difficult to counterfeit, digital currencies can be replicated quite easily – a digital coin is basically a file on your computer that you can email to a friend. There is nothing stopping you and your friend from both copying (counterfeiting) the file and sending it multiple times across the network. The Bitcoin blockchain prevents double-spending by verifying each transaction on a public ledger secured by a proof-of-work algorithm, which makes digital currencies far more viable as a medium of exchange. This proof-of-work is why it has become a common refrain that the blockchain negates the need for trust, or even creates it.
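For readers who want to see the mechanics, below is a minimal Python sketch of a hashcash-style proof-of-work puzzle. It is an illustration only: Bitcoin itself hashes a structured block header twice with SHA-256 against a 256-bit numeric target, so the string data and `difficulty` parameter here are simplified stand-ins.

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce such that SHA-256(block_data + nonce) starts with
    `difficulty` leading zeros. Finding the nonce is costly; checking
    it is instant, and that asymmetry is what secures the ledger."""
    target = "0" * difficulty
    nonce = 0
    while not hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest().startswith(target):
        nonce += 1
    return nonce

# Toy "transaction"; any peer can re-hash the result to verify it instantly.
print(proof_of_work("alice pays bob 1 coin"))
```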

This, too, is not wholly true. There is an increasingly prevalent inverse relationship between trust in institutions and trust in peers. Unlike Bitcoin’s creation, the decline in trust in centralised institutions can be attributed to corporate malfeasance and bankers’ chicanery. But whilst transactions on decentralised networks skirt institutions, they are not inherently trustworthy for that reason alone.

Despite excited claims that we evidently trust technology more than institutions, I suggest that blockchains are simply an artefact of greater trust in peers. In the cloud of blockchain and cryptocurrency confusion, we have forgotten that Bitcoin’s famous integrity is designed and maintained by a community of users – people just like us. In other words, social trust is not shifting to technology, but to ourselves.

In financial transactions, we deal with three particular kinds of trust: institution-based, characteristic-based and process-based. Institution-based trust is self-explanatory: we trust the authority in the transaction, usually a bank or government. Characteristic-based trust is awarded to someone who reminds us of ourselves. Process-based trust occurs when precedent indicates reciprocity in an exchange. For example, if I go to shake your hand, I trust you will reach out to take my hand and return my handshake.

It naturally follows that trust in our peers increases when we lose trust in central institutions and don’t understand technology. Part of our fascination with cryptocurrencies is a yearning to stick it to the man while making a quick buck. The dominant, practical part of ourselves, however, simultaneously wants to be protected from risk.

It’s all fun and games as long as the price of Bitcoin keeps going up. But it isn’t. Bitcoin’s price has lost 27% in the time it took to write this post, with commentators blaming market volatility on banks’ demands for regulatory intervention. Academics have observed that these demands are motivated by challenges to banks’ power and legitimacy: technology that disintermediates them implies their irrelevance.

Yet the evolution of money has consistently shown that approval by a central authority is necessary: gold is valued by a jeweller, money dispensed by a bank, tax paid and refunded by a government agency. As much as we might wish digital currency success, its use as a medium of exchange probably depends on support from governments or central banks, as seen in the movements towards cashless societies in Denmark, Sweden, Norway, India, and Venezuela.

As much as we might yearn otherwise, we always trust the bank.




© Nichola Cooper 2018

Tags:  blockchain  economics  technology 


The Future of Paracosm Economies

Posted By Administration, Tuesday, March 27, 2018
Updated: Sunday, February 24, 2019

Adam Cowart is one of our Emerging Fellows, and this is his fourth article written for the program. In it, he explores the concept of a paracosm economy.

In this blog series, we’ve been exploring just how real the real economy will be in the future. Not just the inherent “realness” of the economy, but the relevance of the real. Will the real economy continue to exist in any meaningful way in the future? The answer, at least in this particular blog, is an emphatic “No!”

Paracosm refers to an imaginary world, usually a very elaborate one, developed by a child early in life, which may or may not stay with them into adulthood. Psychiatrists have used the term to denote a process of understanding loss and tragedy in early childhood by retreating into the imagination. The historical image of this is well known: a Victorian-era child sits despondent in a garden somewhere, the only adult who ever really loved them now dead; they are wearing formal “adult” clothes in no way suited to garden exploring; they are pale, forlorn, at the mercy of a world devoid of happiness. And their only escape will be an active imagination, a world of characters and high drama, a world just barely in their control.

Indeed, most early examples of paracosms and their creators (paracosmists) are the usual crowd: Emily Brontë and her paracosm “Gondal”, J.R.R. Tolkien and the languages of Middle-earth (the imaginary characters would emerge some time after the imaginary languages they spoke), Henry Darger, the “outsider” artist who invented the world of the Vivian Girls in his teens, and many others. Paracosms are considered a sign of high intelligence in children, an example of “worldplay”.

Beyond the rather obvious economic value of the imagination in contributing to books, film, and art in the physical world, what do paracosms have to do with the economy? The answer lies in how we reconceive that image of the precocious child. They are no longer wearing frilly Victorian garments, spending hours alone in a vast garden finding respite from disapproving servants. They have taken their imaginations online, and are increasingly being given the tools to construct their imaginative worlds – not out of words, inanimate toys, or the rocks and sticks lying about the garden, but in the virtual world.

Consider a few ongoing trends. Prosumerism, where we generate our own products. The end of growth, which presumably means children yet to be born will not enjoy the abundance that we currently fail to fully appreciate. And, of course, the multi-streamed and nefarious ways in which companies are trying to tap into (and latch onto) the hearts, minds, and imaginations of children at the earliest age possible.

A nearly infinite area of future growth will be our imaginations. We often look at “developing” nations as under-exploited areas of opportunity. Meanwhile, every child is walking around with a world of undercapitalized voices in their head that could become its own nation, its own economy.

Imagine two warring moons in a distant galaxy, and the market potential for their military-industrial complex. Imagine a happily married couple – she a talking car, he a unicorn – navigating the exciting but expensive world of reproductive medicine to help them start a family of their own. The paradigm shift at play is the move from the current virtual market, which relies on humans exchanging on behalf of their avatars, to a virtual world of virtual exchange between multiple avatars created by a single human – likely with no human knowledge of the exchange.

What will the rise and fall of these imagined civilizations look like? Will they continue to exist after we are gone (regardless of their future growth potential)? A soulless universe without a creator, existing only for the pursuit of profit? Or will they be tied to us as if by a virtual umbilical cord?

 


© Adam Cowart 2018

Tags:  art  economics  technology 


What factors might prevent Peak Boomer from occurring in 2035?

Posted By Administration, Monday, March 26, 2018
Updated: Sunday, February 24, 2019

Laura Dinneen has written her second installment in our Emerging Fellows program. Here, she questions how the effects of an impending Peak Boomer situation could be mitigated. The views expressed are those of the author and not necessarily those of the APF or its other members.

In my previous post, we talked about our accelerating, ageing global society, as Baby Boomers continue to flood the over-65 age group. Using the latest projections from the United Nations Population Division, we estimated 2035 to be the year of Peak Boomer: the point at which the ageing population’s rate of acceleration begins to diminish.

How certain can we be of the UN’s population projections and the year at which we will hit Peak Boomer? The maths behind the projections is certainly solid, using an accounting framework for the three major demographic components of change: mortality, fertility and migration. But any major deviation from these estimated components could blow the Peak Boomer projection off course.

Today’s ageing population, and the Peak Boomer prediction of 2035, are determined by high post-WW2 fertility levels, the decline in fertility since then, and the likelihood that these Boomers will survive to older ages.

The first component that could affect the Peak Boomer prediction, then, is mortality. Crude death rates (deaths per 1,000 population) have been decreasing globally, from 19.1 in 1950 to their lowest point of 7.7 in 2010. However, the projections do not continue to fall past this point; in fact, they rise again. The actual figures in more developed countries rose from 2010 to 2015 and are set to continue doing so. Why has there been a rise in mortality rates, and in particular in the crude death rate of high-income countries? Our ageing population may hold the answer here. With more strain being put on societies’ health and social care systems by our growing aged population, the increased healthcare requirements alone may be enough to significantly impair the system as it stands. If we add in restrictions on funding, austerity measures and other increasing demands on healthcare provision in many jurisdictions, we get a perfect storm in which supply can’t meet demand.
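The measure itself is simple arithmetic, which the short Python sketch below makes concrete; the figures passed in are rounded, illustrative values rather than official UN data.

```python
def crude_death_rate(deaths: float, population: float) -> float:
    """Deaths per 1,000 population in a given year."""
    return deaths / population * 1000

# Illustrative, rounded global figures (not official UN data):
print(round(crude_death_rate(56e6, 7.3e9), 1))  # -> 7.7
```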

Another issue adding to the stress on the healthcare and social support system is the fact that the older population itself is ageing, with an increasing share aged 80 years or over. Driven again by the Boomer cohort, between 2030 and 2050 the share of the older population aged 80 years or over is expected to rise from today’s 14% to more than 20%. Might this pressure on the system cause a tipping point that could bring the Peak Boomer date closer than predicted? That scenario might come about gradually, but another consideration is the outbreak of a new or mutated disease. Epidemics that we are ill-equipped to fight could cause a more rapid change in population structure, particularly as much older people are more susceptible to infection and more vulnerable to the effects of disease.

One major cause of population ageing is fertility decline. In most of the world, fertility rates have been falling since the Baby Boom, with the exception of Africa, where fertility only started falling around 1970. The assumption is that fertility will continue to decline, as it has since then, albeit at a slower rate. But what if there were a sudden increase in fertility? A new societal pressure to breed? A mutation or medical advances resulting in a vast increase in fertility, or in twins and triplets? A major political or cultural occurrence similar to what sparked the post-WW2 Baby Boom? Any significant increase in fertility over the next ten years could affect the Peak Boomer prediction by changing the age distribution in society yet again and slowing down its acceleration.

The final component of demographic change is migration. Migration between nations does nothing to change the global Peak Boomer prediction. However, there are significant differences in the rate of ageing across the populations of the world, some driven by migration, which I will explore further in the next article.


© Laura Dinneen 2018

Tags:  economics  generation  society 


What are ways the Global South might redefine prosperity?

Posted By Administration, Friday, March 23, 2018
Updated: Sunday, February 24, 2019

Daniel Riveong has written his second installment in our Emerging Fellows program. Here, he questions the nature of prosperity. The views expressed are those of the author and not necessarily those of the APF or its other members.

The stunning economic successes of Asian countries like South Korea and China have been touted as proof that economic growth is possible for all, not just Western countries. Implied in the celebration of their success is that economic growth drives prosperity, generates happiness, and raises living standards. While countries like Ethiopia seek to replicate the Asian success stories, environmental degradation, fears over job automation, and rising inequality are challenging this narrative: economic success is now both more difficult and less relevant in the face of dangerous environmental consequences.

If explosive GDP growth is neither plausible nor desirable, where does this leave policymakers, and how might we measure our prosperity in new ways? Must rising living standards be rooted in Western-minded developmentalism? This question is of pressing concern to developing countries, home to 85% of humanity – over 6 billion people. If we must look beyond the West and the Asian Tigers to redefine prosperity, how might we do so? Where do we look?

The past decades have seen many attempts to look beyond GDP as a measure of a country’s prosperity and improving living standards. The best-known alternative measures of development come from the United Nations, which developed the Human Development Index (HDI) and the Sustainable Development Goals (SDGs). More recently, economist Kate Raworth has proposed the “doughnut economics” framework, which balances life’s essentials (as defined by the UN’s SDGs) against the limits of Earth’s life-support systems (fertile soil, stable climate, etc.).

If we shift our focus to the Global South, we can find far more radical rethinkings of prosperity. At the 2018 World Government Summit, the Indonesian Minister of National Development Planning, Bambang Brodjonegoro, spoke of how the SDGs have been reinterpreted through Indonesia’s cultural lens. The 17 goals were reframed across spiritual, environmental, and human dimensions, drawing from Indonesia’s Hindu and Muslim beliefs:


• Improving People-to-God relationship (Hablum minallah)
• Improving People-to-People relationship (Hablum minannas)
• Improving People-to-Nature relationship (Hablum minal’alam)


The three relationships above are called Tri Hita Karana (“Three Reasons for Prosperity”) among Indonesia’s Hindus. Such a worldview from a high-ranking government official – specifically the minister of national development planning – speaks volumes about how the narrower, Western idea that “economic growth is good” is being supplemented by more culturally specific values.

In the United Arab Emirates, we find an even more ambitious, culturally driven rethinking of prosperity in the establishment of the Ministry of Happiness. The ministry’s mission is to drive “government policy to create social good and satisfaction” and to “make the country amongst the top five happiest countries in the world by 2021.” To achieve this vision, the UAE monitors metrics like divorce rates to track family cohesion, and adherence to Muslim values to assess the strength of its national identity.

The UAE’s interpretation of a happy society and Indonesia’s view of development challenge the traditional materialist view of prosperity. Both emphasize a culturally specific perspective on what makes a successful society. Prosperity is no longer just about building gleaming skyscrapers or eliminating hunger; it can also mean flourishing cultural traditions and strong families. Indeed, the challenges of climate change, inequality, and automation throughout the world will perhaps inspire each society to rethink prosperity in more cultural and human terms.



© Daniel Riveong 2018

Tags:  economics  prosperity  society 


Artificial Intelligence and Us

Posted By Administration, Thursday, March 22, 2018
Updated: Sunday, February 24, 2019

Monica Porteanu has written her third installment in our Emerging Fellows program. Here, she questions the effects of artificial intelligence on society. The views expressed are those of the author and not necessarily those of the APF or its other members.

“Will AI take over the world?” is a common question across many news outlets these days. “Artificial Intelligence will best humans at everything by 2060, experts say,” predicts one of them. “More than 70% of US fears robots taking over our lives, survey finds,” reports another. Most of all, “how long will it take for your job to be automated?” seems to be the question on everyone’s mind. Opposing views are also present, with headlines such as “The great tech panic: robots won’t take all our jobs.” How do we reconcile these views of what Artificial Intelligence is and can be?

The term “Artificial Intelligence” was coined in the 1950s to describe the ability of machines to perform tasks at a human level of intelligence. Today, the definition encompasses more nuanced meanings, especially when considering the level of human cognition involved. In this regard, there seem to be four categories: (1) automation; (2) machine learning using artificial neural networks; (3) deep learning; and (4) beyond.

Automation represents a low-cognition process that is repeatable, with well-defined sequences of actions pre-programmed into machine behaviour. The machine is a passive executor of what it is instructed to accomplish. Its ability to complete complex computations quickly and without error is superior to that of humans. Automation can be applied on a large scale, with numerous examples ranging from manufacturing production lines to, more recently, interactions with customers, such as onboarding operations. It has the most concrete social impact, as it does take away jobs as we know them today. However, it also opens the opportunity for humans to do what they are better at than machines: empathy, critical thinking, and creativity. The key to staying ahead of automation is, as Garry Kasparov puts it, “human ambition.”

Machine learning using artificial neural networks requires a more sophisticated, yet still moderate, level of cognition. The machine can mimic repeatable but personalized activities, learning from each interaction and utilizing increasing amounts of data. It reacts to events based on what it was instructed to accomplish. In other words, it can present a solution to a problem as posed, recommend tasks, or take simple actions. For example, it can automatically set up preferences at home, adjust ambient environment parameters based on those preferences, turn appliances on and off, or keep track of our grocery list. This stage has developed in leaps and bounds during the last decade or so, achieving results in image, face, and speech recognition and even digitization. However, the machine still has difficulty perceiving at a level comparable to a human. Although we are still irritated by recommendations gone wrong or irrelevant replies from chatbots, we allow this type of artificial intelligence into our lives without yet understanding its concrete positive and negative impacts.

The leap to deep learning debuted only a few years ago. With big visions at the forefront, deep learning aims to build a machine’s capacity to solve problems without being told how. Such machines mimic the brain through layers of artificial neurons that connect with and send signals to each other across the network. Initial results are astounding: the machine has been able to beat humans at Go, the complex ancient Chinese game whose number of possible positions surpasses the number of atoms in the universe. However, it seems we have yet to uncover what is happening inside these deep neural networks. Scientists are currently investigating adversarial examples, in which the difference between what a human and a machine see is extreme (e.g., turtle versus gun).
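The “layers of artificial neurons” described above can be made concrete with a minimal forward pass in Python. This is a toy sketch: the weights below are random placeholders, whereas a real network learns them from data and stacks many more layers.

```python
import numpy as np

def relu(x):
    """A common activation: a neuron 'fires' only for positive input."""
    return np.maximum(0, x)

def forward(x, layers):
    """Send a signal through successive layers of artificial neurons:
    each layer weighs its inputs, adds a bias, and passes the result on."""
    for weights, bias in layers:
        x = relu(weights @ x + bias)
    return x

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 3)), np.zeros(4)),   # 3 inputs -> 4 neurons
          (rng.normal(size=(2, 4)), np.zeros(2))]   # 4 neurons -> 2 outputs
print(forward(np.array([1.0, 0.5, -0.2]), layers))
```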

Beyond deep learning lies an area for even bigger dreams in which, perhaps, machines will surpass the capacity of the human brain, able to create symbol systems (e.g., language, money, time, religion, governance) and, with that, structurally alter every aspect of life as we know it.

It seems we are now somewhere in the development of the second category, machine learning, and in the early stages of the third, deep learning.

We have been warned that “Artificial Intelligence will best humans at everything by 2060.” With so many contradictory opinions, though, one could wonder: what will human capacity be in 2060? How will our brain functions evolve, and with that, where will our creativity, empathy, ambition, and critical thinking take us?

 



© Monica Porteanu 2018

Tags:  artificial intelligence  machine  society 


With Competition Like This, Who Needs Conflict?

Posted By Administration, Tuesday, March 20, 2018
Updated: Sunday, February 24, 2019

Craig Perry has written his fourth installment in our Emerging Fellows program. His series explores the potential for another great-power war; this piece looks at how competition could minimize conflict. The views expressed are those of the author and not necessarily those of the APF or its other members.

“If you call what’s going on now a hybrid war, let it be hybrid war. It doesn’t matter: It’s war.” – Dmitry Peskov, Kremlin spokesperson

What if the great powers really could subdue their enemies without fighting, as Sun Tzu suggested? This appears to be what Russian agents were up to in 2016 when they allegedly meddled in America’s presidential elections. According to the U.S. intelligence community and federal prosecutors, Moscow’s goals were to undermine public faith in democracy and influence the selection of the next U.S. commander-in-chief, presumably with the aim of weakening a superpower rival—or better yet, installing a favored candidate in the White House. Russians apparently were up to similar tricks in the latest French, German, and Montenegrin elections as well.

Then again, the United States and its allies are hardly innocent when it comes to interfering in other countries’ affairs—so it should come as no surprise that Moscow blames the West for much of the world’s instability, from Arab Spring uprisings to “color revolutions” across the former Soviet Union. In 2013, Gen. Valeriy Gerasimov, chief of the Russian general staff, framed these turbulent events as a new form of warfare, where political, economic, informational, humanitarian, and other nonmilitary measures are often more effective than traditional weapons. The very rules of war have changed, he concluded, and Russia’s military must adapt accordingly.

The Kremlin has clearly embraced these modern rules of war in recent years, pursuing an aggressive, whole-of-government approach to achieving its foreign policy goals while avoiding escalation into full-blown state-on-state conflicts. This strategy of indirect action typically begins with so-called “information confrontation,” a combination of old-fashioned propaganda and modern cyber operations to shape perceptions and manipulate the behavior of target audiences. Russia’s intelligence services might then mix it up with subversive “active measures,” while the military and its proxies—ethnic compatriots, private military contractors, or even “little green men”—stand ready to up the ante while obscuring Moscow’s involvement.

Not to be outdone, China updated its own military doctrine in 2003 to incorporate nonmilitary means of influence. The People’s Liberation Army’s “three warfares” strategy—encompassing public opinion, psychological, and legal warfare—is intended to control public narratives and influence perceptions to advance China’s interests while compromising the ability of opponents to respond. This approach offers China a new form of “non-kinetic” weaponry that can be combined in highly synergistic ways. For example, to advance its territorial claims in the East and South China Seas, Beijing is advancing spurious legal arguments, deploying civilian flotillas, and broadcasting propaganda portraying itself as a victim of foreign powers. Sun Tzu would be proud.

There is debate in national security circles over what to call these new forms of warfare—and whether they are really all that new. Pundits have coined terms such as “gray zone conflicts” and “hybrid warfare” to describe what others chalk up to time-honored doctrinal concepts like information operations and irregular warfare. The 2018 U.S. National Defense Strategy offered yet another buzz phrase for this phenomenon: “competition short of armed conflict.”

Whatever we call it, there is no doubt that nonmilitary methods of warfare are becoming more commonplace, for a variety of reasons. Compared to traditional combat operations, they are relatively inexpensive, deceptively innocuous, and difficult to attribute, particularly in the cyber domain. They also carry a limited risk of escalation, as even the most audacious provocations seldom trigger an armed response—especially against a nuclear power. Perhaps most importantly, these subtle, indirect approaches can sometimes affect strategic centers of gravity—such as government decision-making and political legitimacy—that are difficult to target directly with military force. Future advances in communications technology, big-data analytics, and artificial intelligence will only further enable such competition below the threshold of conflict.

This is certainly a worrying trend, as these tactics have the potential to exacerbate social divisions, undermine confidence in democratic governance, and blur distinctions between civilian and military combatants and targets. On the other hand, the more confident great powers are in their ability to secure national interests through nonmilitary means, the more likely they are to pursue less violent and risky courses of action. In other words, competition short of conflict could very well reduce the risk of future great-power wars.

 



© Craig Perry 2018

Tags:  economics  politics  war 


The Cycles of Life

Posted By Administration, Wednesday, March 14, 2018
Updated: Sunday, February 24, 2019

Polina Silakova’s third post in our Emerging Fellows program explores Spiral Dynamics in the context of the ouroboros as a symbol for the cycles of life. The views expressed are those of the author and not necessarily those of the APF or its other members.

Ouroboros – a snake eating its own tail. In many ancient cultures across the globe, this symbol represented the infinite cycle of the renewal of life, the rebirth of the Earth, the continuous development of consciousness. We have grown used to the renewable nature of the world and learned how to benefit from it. Over centuries, humanity has focused on getting better, faster, more efficient – taking everything we do to the next level. In search of a better life, we have tamed many forms of energy, speed and, thanks to advancements in health care, even time. What is the driving force that makes us continuously strive for more, creating demand in oversaturated markets and often unnecessary exploitation of the Earth?

Apparently, the answer may lie in Darwin’s theory of evolution by natural selection. Canadian evolutionary psychologists have run multiple experiments demonstrating the link between the natural-selection modules that help species survive in the wild and our consumption habits. The survival module, sexual selection, kin selection, and reciprocity – all these mechanisms from the jungle still operate in our daily lives, covertly guiding our behaviour in grocery stores and shopping malls. They help us make the “safest” choices, as dictated by millions of years spent in a continuous fight for survival. Examples might be purchasing several flavours of the same type of food instead of one (to make sure we won’t die in the event of one being poisonous), buying things that make us look more attractive (to be selected by a sexual partner with more promising genes), or acquiring possessions that help a desirable group identify us as part of their tribe. Eventually, we often end up with an amount of stuff far beyond what we need.

If this behaviour has been in our genes for generations, does it mean that as consumers we will always be guided primarily by these instincts? Maslow addresses this in his “Theory of Human Motivation”, where he links our motivations to the needs we have at a particular point in time or stage of life. But allegedly, Maslow himself admitted that another theory does a better job of explaining the psychology of human development. In contrast to Maslow’s focus on the individual, Graves’ theory of Spiral Dynamics explains the social and psychological development of both a person and humanity in general.

This data-based approach to psychology charts the transformations in values and worldviews that humanity has gone through to the present, as well as the ones emerging. The initial eight levels (later updated with an additional one) represent different ways in which people think about things and respond to the world around them. The theory suggests that as the challenges we face change, so does our response to them, supported by an evolving consciousness. As in a video game, upper levels, emerging in the context of new challenges, prompt ways of thinking and worldviews that did not exist before.

No level is better than the others, and all of them coexist at any point in time. For simplicity, each level was assigned a colour. For this conversation, the most interesting is the shift between two levels in the mid-upper part of the spiral: orange and green. Orange is about striving for success, competition, autonomy, working for abundance and reward – think capitalism, Wall Street, show business or battles for “likes” on social media. Green, tired of the egocentric orange it follows, is open to collaboration and inclusion. It values harmony, empathy, and sensitivity, and becomes environmentally conscious. Some estimates suggest that worldwide, for every person with green attributes there are about three people with orange attributes.

We also see lots of pseudo-green in the orange world, with some companies using sustainability and CSR messages more as marketing tools than as values that truly guide them. What matters is that, at the end of the day, together with the actions of those genuinely concerned about the future of the planet, this might be slowing the speed of destruction in the age of the Anthropocene.

The question remains, though: which will happen quicker – will we collectively break through to the next cycle of human consciousness, or cross a biophysical threshold to a point of no return? How much of our own tail have we eaten already? And do we need a disaster – a problem of the next level of complexity – to activate our next cycle of consciousness?

© Polina Silakova 2018

Tags:  economics  politics  society 


Rewriting freedom of speech

Posted By Administration, Tuesday, March 13, 2018
Updated: Sunday, February 24, 2019

Monica Porteanu’s second post in our Emerging Fellows program concerns the weaponization of free speech. The views expressed are those of the author and not necessarily those of the APF or its other members.

Many can only dream of having the freedom to express their opinion. The more fortunate among us might take it for granted. Others might see it through biased lenses. I acknowledge mine, which originate from growing up in a former communist country. In that world, information dissemination meant precisely two hours of evening TV programming. Any form of expression had to link back to the doctrine, regurgitating “glorious” dictatorship propaganda. Seeking information beyond that meant treason. Asking for a passport just to explore a different culture stamped one as being against the system. Censorship was deeply entrenched in everyday life. It should come as no surprise that after the Cold War this world sought freedom of access and expression – but with little knowledge of how to achieve it after 50 years of communist rule. As a result, even today, almost 30 years on, the discovery of free speech continues. Experimentation with extreme polarization is tolerated, and, perhaps unknowingly, essential aspects of freedom, well-being, and even human rights are affected. Foul language, the objectification of those who are different, and talk shows exploiting fear are examples.

Everywhere else, however, the question of what free speech may become also seems open. The internet has opened massive channels of online communication. It has increased our acceptance of sharing more about ourselves, even intertwining our private and public selves. To stay informed, and to speak up when we see fit, we use a multitude of devices and apps, freely giving our consent to share pieces of our private information. More recently, though, the online life that we thought was public has increasingly become an island we inhabit together with those like us. Software is becoming more and more sophisticated, learning our preferences and presenting us with the information it thinks we want. In the process, it isolates us in our own world. Not realizing we live with a different flavour of privacy, we stay always online, expecting everything to happen now, instantly disseminating whatever resonates with us, and thus reinforcing the data silo that forms our world.
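To see how such personalization can isolate us, consider a deliberately naive Python sketch of a tag-matching recommender. The article names and topic tags are hypothetical, and real systems are far more elaborate, but the feedback loop is the same: the more we click on one topic, the narrower the recommendations become.

```python
from collections import Counter

def recommend(history: list[str], catalogue: dict[str, set[str]], k: int = 3) -> list[str]:
    """Score each unseen item by how many topic tags it shares with the
    user's reading history, so past clicks steer all future suggestions."""
    liked = Counter(tag for item in history for tag in catalogue[item])
    unseen = [item for item in catalogue if item not in history]
    return sorted(unseen, key=lambda item: -sum(liked[t] for t in catalogue[item]))[:k]

catalogue = {  # hypothetical articles and their topic tags
    "a": {"politics", "economy"}, "b": {"politics", "celebrity"},
    "c": {"science"}, "d": {"politics"}, "e": {"sports"},
}
print(recommend(["a", "d"], catalogue))  # "politics" items float to the top
```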

The paradox of the bubble is that it still drives the fragmentation of our attention, using methods that have their own chapters in economics, politics, and social realms. Some are fair; some are not. Most of these methods have emerged recently, and our language is catching up by adding their contemporary meanings to the dictionary. For example, “someone who posts inflammatory messages online to provoke emotional responses” is called an internet troll. Sadly, though, our “always on,” instant, share-all expectations have become a medium for expanding trolling into the physical world, making it ubiquitous. With this, it seems, everything goes – including the extreme, hateful, and harmful speech that the media have recently termed the “weaponization” of free speech.

We have the freedom of assembly, religion, and speech, or the freedom to marry and love, but is this enough? OCAD Professor Suzanne Stein argues for a “greater form of freedom: freedom from harm.”

The question is: what do we do about it? Perhaps zooming into this proliferation of trolling, online anonymity, and social media bubbles, and their connection with the fight for our attention, would provide an answer. In a world in which we’ve become omnipresent, attention fragmentation seems to be the contemporary version of the divide-and-conquer paradigm. Attention holds the key to today’s competitive advantage, starting at the individual level. At the same time, the more we realize our attention is selective yet limited, the more we seem to crave it while giving it freely away, at the expense of freedom itself. Has the freedom of attention become a basic need, and a right?



© Monica Porteanu 2018

Tags:  freedom  politics  speech 


The Trajectory of the Nation-State

Posted By Administration, Wednesday, March 7, 2018
Updated: Sunday, February 24, 2019

David Roselle’s second post in our Emerging Fellows program concerns the apparent decline of the nation-state. The views expressed are those of the author and not necessarily those of the APF or its other members.


Tags:  economics  nation  politics 


War: What Is It Good For?

Posted By Administration, Tuesday, March 6, 2018
Updated: Sunday, February 24, 2019

This is Craig Perry’s third post for our Emerging Fellows program. Both of his previous articles centered on questions about war. With this latest post, he continues to give readers much to consider. The views expressed are those of the author and not necessarily those of the APF or its other members.

Anarchy is a feature of our international system. Not a bug. With no supranational authority capable of policing relations between sovereign states, competition for resources and influence can naturally lead to conflict. Great powers will sometimes resort to war—despite the risk and expense this entails—when their other, nonmilitary instruments of national power come up short. As Carl von Clausewitz famously observed nearly two centuries ago, war is simply a continuation of politics by other means—namely, acts of violence to compel opponents to fulfill one’s will. The nature of war hasn’t changed much in the intervening years, even as armed conflict has become less endemic in world affairs.

If we were to construct a taxonomy of reasons great powers wage war, it might closely resemble Maslow’s hierarchy of needs. Regime survival in the face of an existential threat is the most fundamental excuse for conflict, followed closely by the protection of national sovereignty or territorial integrity. Great powers may also go to war to preserve or expand their spheres of influence over less-powerful neighbors, while the defense of allies against a mutual adversary lies higher up the proverbial pyramid. At the top of the hierarchy, we would find enforcement of international law and humanitarian intervention, arguably the most enlightened—or rather, least indefensible—pretexts for war.

Whatever its causes, however, war is always an ugly business, and great powers have long sought to constrain its excesses by codifying laws governing armed conflict. In the aftermath of World War II, the victorious Allied Powers even resolved “to save succeeding generations from the scourge of war” once and for all by establishing the United Nations, whose charter provided mechanisms for the pacific settlement of disputes and maintenance of international peace and security. Yet the UN Charter also enshrined the inherent right of states to defend themselves from armed attack, and granted the great powers of the day—in their roles as permanent members of the UN Security Council—sweeping authority to determine when a state of war exists; intervene militarily to restore the peace; and prevent the UN from acting against their interests. Such concessions were necessary to gain buy-in for the UN project, but this design flaw has guaranteed that future generations would continue to experience the scourge of war.

The question of what, exactly, constitutes war has taken on increased urgency in recent decades, as combatants have found ever more innovative ways to wreak havoc. Article 5 of the North Atlantic Treaty, for example, commits NATO members to consider an armed attack against any one of them as an attack on them all. Yet the only time in its history the Alliance has invoked this provision was not in response to a conventional military assault, but rather after the 9/11 terrorist attacks. In 2014, the Alliance further warned that cyberattacks could also trigger Article 5 if they reached a threshold that threatened NATO’s prosperity, security, or stability. Meanwhile, NATO has quietly dropped the qualifier “armed” when describing its Article 5 obligations in most of its communiqués—a tacit admission, perhaps, that war can come in all shapes and sizes.

Indeed, while all the great powers maintain formidable forces capable of conducting offensive military operations, they are also fielding new tools and techniques to compel opponents to fulfill their will short of armed conflict—that is, without egregiously violating international law or crossing a collective-defense threshold. Consequently, future wars may not reflect the thinking of Clausewitz so much as the ancient Chinese strategist Sun Tzu, who wrote: “The supreme art of war is to subdue the enemy without fighting.”



© Craig Perry 2018

Tags:  economics  environment  politics 
