Augmented vs. Artificial Intelligence: A Key Challenge for Democratic Societies

09/25/2025
By Robbin Laird

Recently, I talked with John Blackburn, chairman of the Institute for Integrated Economic Research (IIER) – Australia, about their new project looking at the impact of Artificial Intelligence on Australian society and how to shape an effective way ahead to ensure that AI is used to augment human intelligence rather than degrade it. The title of the study effort is: A Pandemic of the Mind? The Impact of AI on National Security and Resilience.

The prologue to the study effort provides clear insight into the focus of the project. It reads: “Artificial Intelligence’s (AI) most profound impacts may not be technological, but cognitive, affecting how humans process information, make decisions, and maintain the mental capacities essential for the effective functioning of a democratic society and for national security. The metaphor of a ‘Pandemic of the Mind’ captures the systemic, spreading nature of cognitive changes that transcend individual choices to become societal vulnerabilities.”

The distinction between augmented and artificial intelligence represents far more than a semantic choice: it embodies a fundamental decision about the future of human cognition and democratic governance. As advanced AI systems rapidly integrate into society, democratic nations face a critical juncture: Will these technologies augment human capabilities and strengthen democratic institutions, or will they accelerate cognitive decline and undermine the intellectual foundations upon which democratic societies depend?

Recent years have witnessed explosive advances in large language models and agentic AI. Surprisingly, national resilience research in Australia, despite extensive consultation and systems analysis, did not originally consider AI as a major risk vector. This oversight points to how quickly AI emerged into mainstream concern and how difficult it is for societies to anticipate the waves of innovation that can reshape everything from national security to democratic culture.

Public debate often centers on cyber threats, disinformation, or infrastructure sabotage as the main risks stemming from AI. However, Blackburn argues that the most profound impact may be cognitive: AI affects how individuals process information, make decisions, and sustain the mental skills needed for effective citizenship in a democracy. In other words, technology can erode the very capacities that democracies depend on.

To understand AI’s potential impact, we focused on the cognitive landscape into which these technologies are arriving. As Blackburn underscored, democratic societies already face a troubling decline in critical thinking capabilities. Studies across OECD countries document falling adult and child literacy levels, with researchers attributing much of this decline to smartphone-dominated, short-form media consumption. Social media platforms optimize for distraction, delivering dopamine rushes through brief content that undermines the sustained attention required for complex reasoning.

As Blackburn emphasized, this “post-literacy digital environment” particularly affects long-form reading, which academic research identifies as crucial for building vocabulary, analytical ability, and linear reasoning. The implications extend beyond individual capability to democratic participation itself. When, by some estimates, more than 40% of the population struggles to read at levels sufficient for understanding legal or societal issues, the foundational premise of informed democratic participation faces severe strain.

Global mental health statistics compound these concerns, with youth suicide rates and psychological distress reaching levels that have prompted legislative responses, including minimum age requirements for social media access. This cognitive crisis provides the context into which AI technologies are being introduced: not healthy, intellectually robust societies, but populations already experiencing measurable cognitive decline.

Blackburn calls this a “pandemic of the mind”: a systemic transformation that transcends individuals and becomes a collective vulnerability in democracy itself. The challenge is not simply to fend off mis/disinformation, but to reverse a slide towards post-literacy and social fragmentation.

Two interconnected cycles drive this phenomenon. The first, cognitive atrophy, begins when individuals delegate mental tasks to AI systems. This cognitive offloading initially appears beneficial, reducing mental workload and increasing apparent productivity. However, sustained delegation can weaken underlying cognitive capabilities, creating what might be termed “cognitive debt”: the accumulated cost of relying on artificial systems for complex mental tasks.

As individuals lose confidence in their independent cognitive abilities, they increase their reliance on AI tools, further accelerating skill degradation. Universities already report concerning examples: graduates who cannot compose basic emails without AI assistance, or who struggle with fundamental analytical tasks when technological support becomes unavailable. This creates a dependency cycle where reduced cognitive capability necessitates greater AI dependence, which further erodes independent thinking capacity.

The second cycle involves what Blackburn highlights as Australia’s foreign dependency. The superior quality and accessibility of foreign AI platforms, primarily from the United States, drives widespread adoption across government, industry, and civil society. This adoption pattern discourages investment in sovereign AI capabilities, creating technological dependence that mirrors other strategic vulnerabilities. Australia now imports 90% of its critical pharmaceutical components and 90% of its liquid fuels. Unmanaged AI adoption could create a similar dependency in yet another critical infrastructure component.

Democratic societies are marked by generational shifts and growing cognitive divides. Younger generations, more exposed to digital media, are often less prepared for nuanced reasoning or critical debate. Education systems have struggled to adapt; current trends indicate a crisis in both basic literacy and advanced analytic training.

The implication, Blackburn argues, is that national security, effective governance, and innovation all require a renewed investment in education, cognitive skills, and intellectual rigor. AI must not be allowed to substitute for these skills, but should instead augment them.

The concept of augmented intelligence offers a pathway beyond this dilemma. Rather than replacing human cognition, augmented intelligence enhances human capabilities while preserving essential cognitive skills. This approach treats AI systems as sophisticated tools that amplify human intelligence rather than substitute for it.

As Blackburn emphasized, in practice, augmented intelligence resembles bringing a brilliant but inexperienced intern onto a team. The AI system can rapidly process vast amounts of information, identify patterns across large datasets, and provide diverse analytical perspectives. However, it lacks contextual understanding, makes significant errors, and requires human oversight to function effectively. When properly integrated into human teams, AI can help avoid groupthink by providing alternative viewpoints, serving as a “red team” to challenge assumptions, or identifying risks and gaps in human analysis.

Blackburn argued that a team-based approach addresses AI’s inherent limitations while leveraging its strengths. Large language models, when queried with complex problems, often require six to eight iterations with different analytical parameters to generate comprehensive responses. For highly complex issues, ten to twelve queries may be necessary to capture the full range of possible insights. Few users employ such a systematic approach, instead accepting initial responses that may capture only a fraction of the AI system’s analytical potential and may contain ‘hallucinations’ that go unidentified.
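The iterative-query practice described above amounts to a corroboration loop: pose the same question under several analytical framings and trust only the claims that recur across responses. A minimal sketch in Python, where `query_model` is a hypothetical stub standing in for a real model API call (the framings and claim names are illustrative, not from the study):

```python
import random

# Hypothetical stand-in for a real LLM API call: returns the set of
# "claims" a model produced for a given prompt framing. In practice
# this would wrap an actual model client.
def query_model(question, framing, seed):
    rng = random.Random(seed)
    base_claims = {"claim_a", "claim_b", "claim_c"}
    # Each framing may also surface a one-off statement that no other
    # framing corroborates -- a possible hallucination.
    noise = {f"uncorroborated_{framing}"} if rng.random() < 0.5 else set()
    return base_claims | noise

def multi_query(question, framings, min_support=2):
    """Run the same question under several framings; keep claims
    supported by at least `min_support` responses, flag the rest."""
    support = {}
    for i, framing in enumerate(framings):
        for claim in query_model(question, framing, seed=i):
            support[claim] = support.get(claim, 0) + 1
    corroborated = {c for c, n in support.items() if n >= min_support}
    flagged = {c for c, n in support.items() if n < min_support}
    return corroborated, flagged

# Six framings, in the spirit of the six-to-eight iterations noted above.
framings = ["risks", "benefits", "second-order effects",
            "counterarguments", "evidence gaps", "stakeholders"]
kept, flagged = multi_query("Impact of AI on civic literacy?", framings)
```

The design choice here mirrors the article's point: a single query yields one unvetted answer, while systematic re-querying lets uncorroborated claims surface for human review rather than pass unnoticed.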

The choice between augmented and artificial intelligence carries profound implications for democratic society’s future. Artificial intelligence approaches that minimize human involvement may initially appear more efficient, but they risk creating cognitive dependencies that undermine democratic participation and national resilience.

AI will not “cause” cognitive decline or democratic backsliding, but it will strongly reinforce pre-existing trajectories. The hope for democracies rests in the intelligent, deliberate, and disciplined use of augmented intelligence. This means re-embedding AI within collaborative, critical, and accountable teams; investing in sovereign capacity; and sustaining a relentless focus on education, debate, and civic engagement.

The deliberate development of augmented intelligence systems offers the possibility of a cognitive renaissance, in which AI tools enhance human capabilities while preserving the critical thinking, moral reasoning, and participatory engagement that democratic societies require.

Success will require unprecedented coordination between educational institutions, government agencies, and private sector actors. It demands recognition that AI deployment is not merely a technological challenge but a fundamental question about human development and democratic governance.

Most importantly, it requires the intellectual humility to acknowledge that preserving human cognitive capabilities alongside technological advancement represents not a limitation but the essential foundation upon which democratic societies’ future prosperity and security depend.

Agentic AI is a form of artificial intelligence characterized by autonomous decision-making and independent action toward achieving defined goals, often with minimal human supervision or intervention. Unlike traditional rule-based automation or generative AI, which largely follow instructions or generate content on command, agentic AI leverages networks of AI agents capable of reasoning, learning, and adapting dynamically within complex environments.