Emerging Fellows

The Bank of Facebook?

Posted By E. Alex Floate, Thursday, September 12, 2019

Alex Floate, a member of our Emerging Fellows program, devotes his ninth blog post to Facebook's digital currency initiative. The views expressed are those of the author and not necessarily those of the APF or its other members.

 

When Facebook first premiered its idea for a digital currency, Libra, in 2019, reactions ranged from eye-rolling to predictions of a vast conspiracy to control the world. Most pundits could not understand how or why a social media platform would create, or need, a currency. Calls for regulation or legal action to quash the idea were immediate.

 

However, within just a few years of its introduction, more governments turned to nationalism or authoritarianism, and the Trumpian trade and currency wars of the early 2020s disrupted global markets. Despite their mutual need for trade and economic growth, the use of national currencies became a point of contention as the world fragmented into tribes.

 

The group most affected was entrepreneurs in countries once called the third world. For them, globalization had brought a connection to the wider world of consumers, especially as 7G broadband and localized electrical grids were built out. The initiatives begun by China in the 2010s to build out transportation infrastructure in Africa were bearing fruit by the mid-2020s. Small companies and farmers found they could market and ship directly to global customers. Into this currency void stepped the technology companies that were enabling the global marketplace.

 

Although Facebook was first to market with a digital currency, Amazon, Alibaba, and a few regional upstarts soon used their positions as marketplaces to promote in-house digital currencies as a means of global trade. They created ways to earn additional currency, such as exchanging personal information, writing reviews, acting as beta or market testers, or selling and buying on the marketplace. Members could earn additional Amazonians or Alibablers to spend within their respective marketplaces, and even at many outside venues.

 

As the reach of the tech companies expanded beyond supply chains and into services, their ability to negotiate payment for goods and services in their own currencies grew. This allowed many companies to offer the option of being paid in corporate or domestic scrip. Corporate scrip became highly preferable because the companies issuing it could better manage inflationary and deflationary pressures. Additionally, by restricting it from most secondary markets, they took away the ability to manipulate the currency through speculation from those seeking to profit off others' misery.

 

Companies also created social scoring systems like the one China implemented in the late 2010s. These systems were more reward than punishment and sought to incentivize behavior in line with a company's values and social conscience. By tracking individuals throughout their day, including their conversations and actions, a company could determine whether they were acting as good citizens of the planet.

 

When a person's interactions were friendly, helpful, informative, or advanced civility and relationships, their score could increase. Energy and water usage, recycling, and the use of public or personally powered transportation were also monitored and rewarded accordingly. The opposite of these positive actions lowered the score. Higher scores were rewarded with extra currency, merchandise, or social benefits such as offers of more prestigious jobs or responsible public positions. By 2040 these new scoring systems had replaced nearly all other methods of determining financial trust for an individual or organization.

 

By the year 2050, most smaller countries had outsourced their treasury functions to either Amazon or Alibaba. For most of these countries, the new currencies offered access, stability, and an opportunity to grow their economies. Entering the global marketplace on an equal footing allowed many countries to shake off the 'third world' label, and this was especially true on the continent of Africa.

 

© 2019 E Alex Floate

Tags:  digital economy  economics  Facebook 

 

Morality First, Knowledge Second?

Posted By Administration, Thursday, May 24, 2018
Updated: Monday, February 25, 2019

Polina Silakova's fifth post in our Emerging Fellows program explores the role of morality and manners amid disruptive technologies. The views expressed are those of the author and not necessarily those of the APF or its other members.

If you have ever travelled around Vietnam, you might have noticed at the main entrance of some schools a motto that was ubiquitous in the communist era: "Tien hoc le, hau hoc van". Its direct meaning refers to the importance of learning proper manners in human relations first, and only then learning the other things you would normally learn at school. Loosely, it can be translated as "morality comes before knowledge". In the past it served as a good call for millions of Vietnamese students and, really, it would not hurt anyone to be reminded of it. We wonder whether this prioritisation still applies to our world of rapidly growing technologies.

The past couple of months offered us some food for thought on the evolution of business ethics in the light of technological progress.

– Facebook makes money from selling our data, which it gets in exchange for letting us share that very data free of charge. Is it a fair deal? While regulators are only beginning to catch up with the technicalities of this business model, Facebook continues to benefit from the knowledge gap.
– The first pedestrian was killed by an autonomous car that Uber had approved for public roads, even with a vehicle operator behind the steering wheel. Was it human complacency with an autonomous vehicle offering a relaxing ride? Did the launch of the system happen too early, rushed by the appetite for a quicker return on investment? Or was it a lack of maturity in the field that prevented good judgement about whether the system was ready for operation?

What was previously black and white, good or bad, has now shifted into a grey area.

While in these cases Facebook users and Uber's driverless-technology testers might be victims of ignorance and a lack of caution, other innovations raise concerns about how consumer ethics might evolve in our market society: augmented reality and cruel video games; robots and the sex industry; more generally, robots as household servants (or slaves?). One could say that whatever people choose to do in their free time is their business, but wouldn't it be naive to assume that changes in our own morality will have no implications for society?

A further twist to these already ambiguous scenarios came out of a study on human-robot interaction conducted by researchers from MIT and Stanford. Their experiments showed that when people work with autonomous robots and errors occur, humans tend to blame the robots rather than themselves. Interestingly, when a success occurs, we humans take the credit more often than we give it to the machine. In other words, our habit of shifting responsibility for mistakes from ourselves to other people remains unchanged when we deal with autonomous tech-friends instead of our familiar colleagues.

This poses further questions about the implications for ethics in a high-tech, post-capitalistic world. Who will take responsibility for decisions made by a board that consists of both humans and AI? One of the first non-human board directors, VITAL, already gets to vote in board meetings alongside five human directors at a venture capital firm in Hong Kong. While VITAL only takes decisions on investments, where its skill in scanning large volumes of data comes in particularly handy, we can only imagine how this might play out with advances in deep learning. Will we still be sure that the machine is acting in the company's interests? And if reality shows the opposite, who is to blame?

How will ethical decision-making evolve in the future? Will it be something a majority demands? Something the powerful agree on? Or something AI recommends as the least harmful option? What is clear is that it increasingly depends on how much we know about technology and its implications for society. Knowledge is starting to inform morality, and we should challenge ourselves to stay up to speed so that the decisions we take meet our moral standards.

© Polina Silakova 2018

Tags:  Facebook  society  technology 
