AI and Futures in Oceania - Assessing the Synergies & Prospects
- APF Community
- Aug 25
By James Balzer
In an era where governments are increasingly called upon to respond to complex, multidimensional challenges—from decarbonisation to digitalisation—there is growing recognition that traditional approaches to policymaking are no longer sufficient.
As someone who works at the intersection of futures thinking and policy innovation, I’ve long sought tools that can help decision-makers see beyond their institutional blind spots. The rise of generative AI provides a unique opportunity to blend the strategic competencies of futures practice with the analytical prowess of AI. Specifically, Large Language Models (LLMs) enable lateral thinking that goes beyond the traditional ‘bounded rationality’ of policymakers, complementing the principles and methods of futures practice.
So as a futures practitioner in Oceania, what is the future of synthesising generative AI and futures methodologies in our region?
Why Oceania Needs a Futures Lens
In Oceania and the wider Indo-Pacific, the stakes of poor long-term planning are higher than ever. Climate vulnerability, geopolitical uncertainty, and rapid technological disruption are colliding to produce a policymaking environment that is volatile and unpredictable.
But in Oceania specifically, the urgency is uniquely acute. The region is at the frontline of environmental upheaval: rising sea levels threaten entire nations, biodiversity is under siege, and climate migration is already underway. Socially, many Pacific Island communities across Oceania experience fragile governance systems, and their small and dispersed populations face challenges of market access, supply chain precarity, and external dependency, which make them vulnerable to shocks and reduce local policy flexibility.
But with volatility comes opportunity. As Interweave and others have documented, the key to unlocking this opportunity in Australian policymaking lies in integrated perspectives: seeing not only across time, but also across disciplines, sectors, and geographies. This is the promise of futures thinking. However, thinking across disciplines and perspectives must produce tangible applications and outcomes, not just academic abstraction and ideation.
From Lecture Hall to Policy Lab: Teaching Complexity with AI
Luckily, there is an emergent landscape of policy practitioners realising this benefit, including the AI CoLab, a cross-agency experiment within the Australian Public Service (APS). The AI CoLab is funded by the APS Capability Reinvestment Fund and is a platform for public servants to use AI for cross-departmental codesign and collaboration. It naturally attracts the application of futures principles to complex policy matters, especially through the use of AI. This includes systems thinking and collective intelligence, as the APS pushes to “put people and business at the centre of policy and services.”
In a recent AI CoLab event I attended, I was pleased to see how generative AI is transforming public sector decision-making in Canberra, including at its intersection with strategic foresight. From Huw McKay’s comparison of LLM use to the “Escher Prompt” to Chloe Tallentire’s application of LLMs in the Risk Policy team at the Australian Department of Finance to build stronger foresight capacities, Canberra policymakers are already pushing the envelope on innovative strategic decision-making.
Dragonfly Thinking: A Case Study of Intersecting Futures and AI
I had the pleasure of lecturing on the application of LLMs to decarbonisation pathways in the Indonesian Just Energy Transition Partnership (JETP), looking at how LLMs informed my 3 Horizons analysis for Indonesia’s net zero transition. Lecturing on this topic isn’t straightforward. You must help learners grapple with multiple timelines (short-, medium- and long-term), acknowledge competing perspectives (from fossil fuel workers to solar entrepreneurs), and surface systemic trade-offs that are easy to ignore in linear models. What I needed wasn’t just a tool to organise data; it was a way to think through complexity without flattening it.
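To give a flavour of what this looks like in practice, below is a minimal sketch of how a 3 Horizons prompt might be scaffolded in Python for a general-purpose LLM. The horizon framings, the prompt wording and the `ask_llm` stub are illustrative assumptions on my part, not the actual prompts or tooling used in the JETP work.

```python
# Illustrative sketch only: a generic 3 Horizons prompt scaffold for an LLM.
# The horizon framings and the ask_llm() stub are hypothetical, not the
# actual prompts or tooling used in the JETP analysis described above.

HORIZONS = {
    "H1 (short term)": "the current system and its immediate stresses",
    "H2 (medium term)": "transitional innovations and institutional adaptations",
    "H3 (long term)": "the emerging future system and structural transformation",
}

def build_three_horizons_prompt(policy_problem: str, perspectives: list[str]) -> str:
    """Assemble a single prompt asking the model to reason across all three horizons."""
    lines = [f"Policy problem: {policy_problem}", ""]
    for horizon, framing in HORIZONS.items():
        lines.append(f"{horizon}: describe {framing}, including key uncertainties.")
    lines.append("")
    lines.append("For each horizon, consider these perspectives: " + ", ".join(perspectives))
    lines.append("Surface trade-offs and tensions between horizons rather than giving a single answer.")
    return "\n".join(lines)

def ask_llm(prompt: str) -> str:
    """Placeholder: wire this up to whatever LLM client you use."""
    raise NotImplementedError("Connect to your preferred LLM API here.")

prompt = build_three_horizons_prompt(
    "Indonesia's net zero transition under the Just Energy Transition Partnership",
    ["fossil fuel workers", "solar entrepreneurs", "provincial governments"],
)
print(prompt)
```

The point of a scaffold like this is not automation but discipline: forcing every conversation with the model to hold all three horizons, and multiple perspectives, in view at once.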
The LLM-based tool I specifically talked about comes from Australian startup Dragonfly Thinking, which offers novel lenses on this complex challenge.
Inspired by the compound eye of a dragonfly—thousands of lenses working in unison to produce a seamless, 360-degree view—Dragonfly Thinking allows users to input policy problems and then interrogate them through the dynamic relationships between risk, reward and resilience.
In short:
Risk refers to the potential downsides, vulnerabilities or harms that may emerge across different scenarios or decision pathways.
Reward captures the potential benefits, gains or opportunities that could be achieved through particular actions or strategies.
Resilience reflects the capacity of systems, institutions or societies to absorb shocks, adapt to change, and transform in the face of uncertainty.
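To make the three lenses concrete, here is a small, hypothetical sketch in Python of how a policy question might be structured across them and across time horizons. The field names, the example entries and the naive `tensions` helper are my own illustrations and do not represent Dragonfly Thinking’s actual data model or interface.

```python
# Illustrative sketch only: representing a policy question through the
# risk / reward / resilience lenses across time horizons.
# The structure and example entries are hypothetical, not Dragonfly
# Thinking's actual data model.
from dataclasses import dataclass, field

@dataclass
class LensAssessment:
    risk: list[str] = field(default_factory=list)        # potential downsides and vulnerabilities
    reward: list[str] = field(default_factory=list)      # potential benefits and opportunities
    resilience: list[str] = field(default_factory=list)  # capacity to absorb shocks and adapt

@dataclass
class PolicyQuestion:
    problem: str
    horizons: dict[str, LensAssessment] = field(default_factory=dict)

    def tensions(self) -> list[str]:
        """Naive illustration: pair each short-term risk with each long-term reward."""
        short_term = self.horizons.get("short", LensAssessment())
        long_term = self.horizons.get("long", LensAssessment())
        return [f"Trade-off: '{r}' (now) vs '{w}' (later)"
                for r in short_term.risk for w in long_term.reward]

question = PolicyQuestion(
    problem="Accelerating coal retirement in Indonesia's power sector",
    horizons={
        "short": LensAssessment(risk=["job losses in coal regions"],
                                reward=["early emissions cuts"],
                                resilience=["social protection schemes"]),
        "long": LensAssessment(risk=["stranded grid assets"],
                               reward=["cheaper renewable electricity"],
                               resilience=["diversified regional economies"]),
    },
)
for t in question.tensions():
    print(t)
```

Even a toy structure like this makes the underlying point: the value lies in pairing what is risky now with what is rewarding later, rather than scoring options on a single axis.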
Another standout example was Eric Nguyen’s use of Dragonfly Thinking to review a 50+ page supply chain security analysis at the Department of Infrastructure. But Eric’s key insight wasn’t just about improving the efficiency of the work; what transformed wasn’t only its pace, but the mindset with which he approached it:
From linear thinking ➝ to lateral exploration
From crafting perfect prompts ➝ to engaging in structured dialogue
From one-off projects ➝ to continuous evolution
From using a tool ➝ to thinking with it, collaboratively
In this manner, instead of producing one-size-fits-all answers, the LLM surfaces synergies, tensions, and trade-offs between these dimensions across multiple time horizons. In the context of Indonesia’s energy transition, this meant we could visualise and model the short-term shocks, medium-term institutional adaptations, and long-term opportunities for structural transformation—while retaining a justice lens throughout.
In the context of supply chain security analysis, it meant thinking across sectors in a coherent manner, developing directionality through a set of ‘missions’ for Australian infrastructure that helps chart horizons for change.
A New Way to See the Future in Policymaking - Beyond Hierarchies and Into Systems in Oceania
What’s particularly exciting about AI is that it’s not just a teaching tool. It’s becoming an enabler of real public sector innovation in Australia. As demonstrated at the AI CoLab, AI is being used to break down silos between departments and support integrated thinking on everything from supply chain security to decarbonisation.
This is notable in a policymaking culture that has long struggled with hierarchical decision-making, fragmented responsibilities, and a risk-averse ethos. By offering a practical interface for deliberating trade-offs, grounded in real-time interaction with language models but attuned to human priorities like justice, sustainability and trust, this approach brings futures thinking into the mainstream. It empowers policymakers, researchers, and even students to make better decisions now, in light of what they value later.
Human-AI Collaboration, Not Substitution
One of the most important philosophical underpinnings of the intersection of AI and futures thinking is the recognition that AI cannot solve everything. As Anthea Roberts argues, the goal is not to think faster, but to think better. That means taking multiple perspectives seriously, whether those of a public servant in Canberra or Wellington, or of a community advocate in Suva or Port Vila.
It also means recognising that LLMs, like humans, have biases and limitations. But when paired thoughtfully with human judgment, they can surface angles and possibilities that are easy to miss. This blends naturally with the strategic pillars of futures thinking, especially in moments where policymakers are navigating the edge between present constraints and future possibilities.
In this way, for futurists in policymaking, the rise of AI is emblematic of a broader shift. It is not just transforming AI use in Australia’s public sector. It is reimagining how we think about thinking itself—how we reason, how we decide, and how we collaborate for the long term.