
Re-assessing the IT Revolution in the Age of AI - Reflections from Richard Slaughter

Updated: Jul 25

From IT to AI - how can we re-appraise the IT revolution in the modern era?

By James Balzer


In a thought-provoking webinar hosted by the Association of Professional Futurists (APF), Richard Slaughter delivered a sobering reappraisal of the so-called IT revolution. 


The webinar offered a deep dive into how the promises of digital progress, once heralded as evolutionary milestones in human development, may have masked deeper social, economic, and existential threats - particularly as AI enters the mainstream.


Slaughter, a long-standing voice in the futures studies community, called for a clear-eyed reassessment of how we think about the trajectory of technological change, particularly amid the emergence of AI. The prevailing narrative of IT as a force for human empowerment - fuelling innovation, enabling global communication, and unlocking efficiencies - has become increasingly inadequate. The emergence of AI, he argued, has forced us to confront the darker undercurrents of this revolution: fragility, inequality, disempowerment, and control.


From Promise to Peril: The Four Phases of the Internet

Slaughter outlined four overlapping phases of the internet era, each of which reflects a changing relationship between society and information technologies:


  1. Enabling (Hopeful Beginnings): In its early days, the internet was celebrated as a tool for empowerment - democratising information, connecting people across geographies, and enhancing productivity. There was a genuine sense of optimism that digital technology would uplift societies and enrich civic life.

  2. Growing Dependencies (Ambiguous Outcomes): Over time, however, dependence on digital platforms grew. Social media became an essential part of both personal and professional life, and cloud-based systems began to underpin entire economies. But with this integration came ambiguity - benefits were mixed with new forms of surveillance, addiction, and inequality.

  3. Increased Vulnerability (Dangerous Systems): As our reliance deepened, so did our exposure to risk. Data breaches, misinformation campaigns, algorithmic discrimination, and mental health crises made it clear that the tools we once embraced could also harm us.

  4. Systemic Threat (Possibly Lethal): Now, Slaughter warns, we face a scenario in which our digital systems pose existential risks - not only through AI’s disruptive capacity but by accelerating social fragmentation, undermining trust in institutions and enabling systemic fragilities that threaten the foundations of democracy and civil society.


Challenging the Technoscientific Worldview

An important concept Slaughter elucidated was the danger of the “technoscientific worldview”, which frames technological innovation as an apolitical, values-free matter, separate from the social, economic and political systems that define it. This technoscientism treats technology as a purely technical domain - the preserve of engineers and scientists - ignoring technology’s role in shaping human values, identities, worldviews and futures.


Futures thinking, in contrast, emphasises that technologies are not neutral tools. They are embedded in human cultures and choices. Their impacts depend not only on their design but on the purposes for which they are used, the systems into which they are introduced, and the worldviews that justify their expansion.


Neoliberalism and the Retreat of Regulation

One of the key reasons the technoscientific worldview has spiralled out of control, according to Slaughter, is the retreat of the regulatory state. Over the past four decades, the rise of neoliberal ideology - championing free markets, small government, and deregulation - has severely weakened society’s ability to anticipate, assess, and constrain harmful technological externalities. A prime example is the 1995 abolition of the US Office of Technology Assessment (OTA).


In the case of AI, this has meant a lack of guardrails to ensure ethical, inclusive, and socially beneficial development. Instead, powerful firms have raced ahead with minimal oversight, often shaping public discourse, market incentives, and even policymaking itself. This has allowed a narrow set of commercial interests to define the direction and priorities of technological innovation.


This neoliberal lens ignores the deep need to regulate and critique the socio-political and socio-technical dimensions of AI, instead treating it as a purely economic and technical domain - in line with the technoscientific worldview.


Capitalism, Power, and the Technological Messiah Complex

Slaughter referenced the work of critical thinkers like Shoshana Zuboff (The Age of Surveillance Capitalism) and Wendy Liu (Abolish Silicon Valley) to explore the deeper systemic forces behind the digital revolution. Both authors have exposed how today’s tech industry prioritises profit over people, extracting personal data, manipulating behaviour, and accelerating economic inequality.


Moreover, the belief systems of Silicon Valley’s elite - a "technological salvationism" of sorts - are profoundly disconnected from the realities of most people. For many tech billionaires, the end of this world and the beginning of a new, supposedly perfect one is just around the corner, achievable through transhumanism, virtual utopias or space colonisation. But this narrative, Slaughter argued, offers nothing meaningful for the billions who must continue to live in an increasingly unstable, unequal, and algorithmically governed world.


Unmasking the Myths of AI

Central to Slaughter’s critique was the need to demystify artificial intelligence itself. AI, he suggested, is a misnomer. What we call "artificial intelligence" is better understood as "machine intelligence" - systems that generate outputs based on data and statistical models but possess no mind, no consciousness, and no capacity for meaning-making. The intelligence we perceive is projected by us. These tools do not “understand” in any human sense. They do not care, reflect, or have values.


Despite this, society increasingly treats AI systems as if they are autonomous agents, often delegating critical decisions to them. The anthropomorphism of AI leads to overconfidence in its capabilities and underestimation of its risks. More fundamentally, it shifts accountability away from the people, institutions, and ideologies shaping its design and deployment.


A Call for Re-assessment and Agency

In closing, Slaughter urged participants not to accept the current trajectory of digital technology as inevitable. The IT revolution - and now the AI revolution - can and must be re-assessed. We must move beyond awe and abstraction, toward a grounded understanding of how technology interacts with power, values, and responsibility.


Rather than placing blind faith in the next innovation or app, we must ask deeper questions: Who benefits? Who bears the costs? What kind of future is being built, and for whom?


Only by engaging with these questions can we begin to chart a more humane, accountable, and sustainable path forward.


