
Techistentialism: Could Superstupidity be as Dangerous as Superintelligence?

In 2017, we named our strategic foresight practice Techistential,[1] a play on “technology” and “existential.” Today, humanity faces both technological and existential conditions that can no longer be separated. We define this phenomenon as “Techistentialism.”[2]



Our existential condition is an uncertain one, considering the inherent dualities, paradoxes, and tensions of life. Our techistential condition is no different.


Martin Heidegger, the German existential philosopher, long challenged the view that we truly master technology, or that we can solve whatever collateral issues arise as technology evolves. As technology continues to unfold, it may reveal itself to be beyond our direction. If technology grows beyond our control, it is no longer merely a human activity.[3]


It is the paradox of technology, magic at one end and hazards at the other, that gives technology its unique status.


At the very least, technology’s existential risks lie in Heidegger’s observation that “it drives out every other possibility of revealing.” 


Technology is so dominant that it can eclipse all other ways we understand the world, for better or worse.


Through the lens of existential philosophy, we each have the agency to explore contingencies, serendipity, and emergence. Contingency is the idea that possible events are uncertain. Choice exists because of contingency. Our freedom as individuals is determined through our own choices and actions. If everything were predetermined, if life were fixed by design, we would lack both choice and power.


Techistentialism is Existentialism 2.0: Decision-Making in Our Technological World



Today, technology is shaping society by influencing decision-making and enabling manipulation at scale. Simultaneously, it impinges on our individual existence as acting agents. Through AI, technology is challenging us in a realm historically specific to humans. As AI continues to develop, machines will become increasingly autonomous in making decisions. It is here that the use of technology confronts the existential dimension. Here, we stand on the edge of our free will and our fundamental concepts of choice. Computationally rational technology is no longer neutral because it drives out contingency and choice.


Standing on the shoulders of Heidegger and fellow philosopher Søren Kierkegaard, it was Jean-Paul Sartre who most powerfully articulated the human condition: “existence precedes essence.” By this, Sartre meant that our agency emerges through choice. While existence is indeterminate and thus unknowable, we are always defining our essence as it emerges and, in doing so, moving in a direction of our own choosing. But if technology is determining outcomes on our behalf, our agency is curtailed and our choices may no longer be our own.


Techistentialism is our attempt to apply this philosophical perspective to sense-making and decision-making in our contemporary technocratic environment.


Given rapid advances in AI, the fundamental issue relates to both the potential reach of AI and our relationship with AI. We need not speculate about artificial general intelligence (AGI) or a superintelligent machine to wonder whether machines might still come to challenge us. The issue at hand is understanding our own capabilities in relation to a machine’s computational rationality.


With this in mind, we observe that AI is rapidly advancing up the decision-making value chain. Humans should remain wary of an inadvertent reliance on prescriptive algorithms -- those that go beyond the pattern recognition of descriptive algorithms to actually recommend courses of action. We should not underestimate the scope and severity of the de-skilling that comes from delegating our decision-making to algorithms, or how quickly reliance can slip into dependence.
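To make the descriptive/prescriptive distinction concrete, here is a minimal Python sketch. It is purely illustrative, assuming a hypothetical equipment-monitoring scenario; none of the names or thresholds refer to any real system.

```python
# Hypothetical sketch: a descriptive algorithm summarizes a pattern,
# while a prescriptive one recommends a course of action.
SAFE_LIMIT = 80.0  # assumed operating threshold, degrees C

def describe(temperatures: list[float]) -> list[float]:
    """Descriptive: recognizes a pattern, here readings over the limit."""
    return [t for t in temperatures if t > SAFE_LIMIT]

def prescribe(temperatures: list[float]) -> str:
    """Prescriptive: goes a step further and recommends what to do.
    This is the step where judgment is delegated to the algorithm."""
    if describe(temperatures):
        return "shut the unit down for inspection"
    return "continue normal operation"

print(prescribe([72.4, 75.1, 91.3]))  # -> shut the unit down for inspection
```

The de-skilling risk sits in that last step: once the recommendation is produced automatically, the human habit of asking why it was produced tends to atrophy.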


The question is not how much machines will augment human decision-making, but whether humans will remain involved in the process at all. If humans fail to sufficiently develop our capabilities, rapidly learning machines could surpass us. To shift the relationship between humans and machines, AI does not have to reach AGI. It just needs to become better than us at handling complex systems. To mitigate this existential possibility, we must become Anticipatory and Antifragile, with the Agility to bridge short-term and long-term decision-making (AAA).[4]




Existential Risk, Catastrophes, and Extinction


More recently than the existential philosophers of the 19th and 20th centuries, Nick Bostrom defined an existential risk as “…one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development.”[5] While human extinction is the most obvious existential catastrophe in relation to AI, there is a wide spectrum between existential impacts and extinction. The curtailing of humanity’s agency and choice is a concrete existential risk.


Could Superstupidity Be as Dangerous as Superintelligence?


As AI advances, incomprehensibility can reach even higher levels as converging technologies generate highly complex, unpredictable systems. As multiple AI systems interact, it becomes increasingly difficult to discern how algorithms make decisions, which exposes us to both human and machine errors. “Stupid” machines in nonlinear environments can be dangerous, especially since the idea that machines cannot have goals is a myth (an infrared-seeking missile has a goal based on what it is programmed to achieve: to track, follow, and strike a heat-emitting target).
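As a toy illustration that goal-directed behavior requires no intelligence, consider this hypothetical Python sketch of a control loop that greedily climbs toward the strongest signal on a one-dimensional track. Everything in it is invented for illustration.

```python
# A programmed "goal" in a few lines: step toward the stronger neighboring
# signal until no neighbor is stronger. The loop pursues its target without
# any comprehension -- and will happily lock onto a local maximum.

def seek(signal: list[float], position: int) -> int:
    """Greedily move toward the strongest nearby signal; stop at a peak."""
    while True:
        left = signal[position - 1] if position > 0 else float("-inf")
        right = signal[position + 1] if position < len(signal) - 1 else float("-inf")
        best = max(left, signal[position], right)
        if best == signal[position]:
            return position  # goal reached: no neighbor is stronger
        position += 1 if best == right else -1

heat = [0.1, 0.2, 0.3, 0.9, 0.4]  # simulated heat signature along a track
print(seek(heat, position=0))      # -> 3, the index of the hottest point
```

The loop “wants” nothing, yet it reliably pursues a target; that is all a goal needs to be.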


Complex systems in technology (robots, supercomputers, power and nuclear plants, communications, healthcare, semi-autonomous lethal weapons) all have many moving parts and interacting subsystems that can be prone to catastrophic failure, and every day we develop more powerful computers. This raises a couple of key questions:


Have we developed an overreliance on increasingly complex and dynamic systems, which are unpredictable and can fail? 


How easy would it be for autonomous machines, or humans, to make a consequential, maybe even irreversible, mistake that goes undetected?


At its extremes, could superstupidity be as much of an existential catastrophic risk as artificial superintelligence? Superstupidity could take on multiple features, including overtrust in and overreliance on the underlying “intelligence” of these systems. For instance, believing that AI can be a proxy for our own understanding and decision-making as we delegate more power to algorithms may be superstupid. Further, consider AI or data ineptitude: what might appear as incompetence may simply be algorithms acting on bad data, and more or better data may not help machines make improved decisions -- a limitation humans do not seem to share.
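A hypothetical garbage-in, garbage-out sketch makes the point. Reusing the toy prescriptive rule from the earlier sketch, a single faulty data feed yields a confidently wrong recommendation even though the algorithm itself is unchanged; all values here are invented.

```python
# Garbage in, garbage out: the algorithm is not "incompetent" -- it is
# faithfully acting on bad data, and more of the same bad data won't help.
SAFE_LIMIT = 80.0

def prescribe(temperatures: list[float]) -> str:
    if any(t > SAFE_LIMIT for t in temperatures):
        return "shut the unit down for inspection"
    return "continue normal operation"

true_readings = [95.2, 96.8, 94.1]  # the unit really is overheating
stuck_sensor = [20.0, 20.0, 20.0]   # a faulty feed reports room temperature

print(prescribe(true_readings))  # -> shut the unit down for inspection
print(prescribe(stuck_sensor))   # -> continue normal operation (wrong)
```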


Determining whether AI is on the road to superintelligence or superstupidity may not matter as much as ensuring that humanity does not end up relying on AI without a solid understanding of the consequences. 


Maybe the existential risk is not machines taking over the world or reaching human-level intelligence, but rather the opposite, where human beings start thinking and responding like idle machines -- unable to connect the emerging dots of our complex, systemic world.


Is Humanity Stupid Enough to Rely on Machines?


In the future, we may realize our main worry is not AI suddenly turning evil, but accidents, misalignment, or shortsightedness. If humans fail to become sufficiently AAA,[6] rapidly learning machines could surpass us.


Asking whether our own creations will reach or surpass human intelligence may be the wrong question, as reaching human intelligence is not a prerequisite for AI to cause irreversible damage, and doing dumb things can be as dangerous as superintelligence. Superstupidity can counter any level of intelligence.


Idiocracy, a 2006 dark comedy set in the distant future of 2505, depicts a humanity that has relinquished control of society to advanced technology systems managed by multinational corporations. As these AI systems evolve, humans themselves become increasingly superstupid and entirely dependent on the controlling technology. The movie acts as a satirical warning -- today, we must ensure it does not become prophetic.


Update Education and Skills for Relevancy


To make sure that Idiocracy is not a harbinger of the future, updating our education system should now become an existential priority. Education’s effectiveness in problem-solving should be evaluated on whether it can help humanity become relevant and future-ready for our complex 21st century. We should inspire passion, nurture curiosity, emphasize uncertainty, develop range, and foster critical thinking, using Socratic questioning to examine assumptions.


Most importantly, we need to form a new relationship with inquiry, experimentation, and failure (which goes hand in hand with creativity). We must harness curiosity and diverse perspectives because today’s standard knowledge will never solve tomorrow’s surprises. These features should help us problem-solve out of the most complex, systemic, and existential risks.


Just as we have made the “language” of math a requirement, learners should now become fluent in technology’s uses, abuses, and impacts. Proper interaction with technology -- including knowing truth from fiction, information from disinformation, and entertainment from addiction -- will separate those who find themselves enslaved by our new technologies from those who harness them for their own aims.


We must recognize that education does not end with formal schooling, nor is it confined to the classroom. It is instead a constant, lifelong process of learning, unlearning, and relearning -- from the playground all the way to the boardroom.


Editor’s note: Roger Spitz is the lead author of the bestselling book, The Definitive Guide to Thriving on Disruption, from which this article is derived.


 

About the Author:


Roger Spitz (M.Sc., FCA, APF) is an international bestselling author, President of Techistential (Climate & Foresight Strategy), and Chair of the Disruptive Futures Institute in San Francisco. He also serves on several boards, sustainability committees, and academic institutions worldwide.


Currently writing his fifth book, Spitz is the author of the influential four-volume collection The Definitive Guide to Thriving on Disruption. He teaches and publishes extensively on systemic change, uncertainty, and decision-making in complex environments.


A writer, speaker, and investor in AI, Spitz chairs the Techistential Center for Human & Artificial Intelligence. He has been appointed to a number of AI & Ethics councils, and is known for coining the term “Techistentialism.”


Spitz is also a partner of Vektor (Palo Alto, London), an impact VC fund investing in the future of mobility. As former Global Head of Technology M&A at BNP Paribas, Spitz advised on over 50 transactions with a combined deal value of $25bn.


References and Notes:


1.    Techistential is a global Climate & Foresight Strategy practice based in San Francisco. Roger Spitz is the founder and CEO of Techistential. See www.techistential.ai.


2.    Spitz, R., & Zuin, L. (2022). The Definitive Guide to Thriving on Disruption: Vol. I. Reframing and Navigating Disruption. Disruptive Futures Institute.


3.    Heidegger, M. (1954). The Question Concerning Technology.


4.    Spitz, R. (2020). The Future of Strategic Decision-Making. Journal of Futures Studies. https://jfsdigital.org/2020/07/26/the-future-of-strategic-decision-making/


5.    Bostrom, N. (2013). Existential Risk Prevention as Global Priority. Global Policy 4(1), 15-31. https://doi.org/10.1111/1758-5899.12002


6.   Spitz, R., & Zuin, L. (2022). The Definitive Guide to Thriving on Disruption: Vol. II. Essential Frameworks for Disruption and Uncertainty. Disruptive Futures Institute.
