Throughout this post, we argue that AI literacy plays a critical role in ensuring a positive future for democracy and society. We begin with a fictional story of the future, follow it with an in-depth real-world case study, and use both to demonstrate the importance of AI literacy in maintaining vibrant and resilient democratic value structures, before concluding with a direct analysis of the relationship between AI literacy and democracy.
Let’s begin with a story of the future.
It’s the year 2050. Over the last two decades, widespread automation has replaced much of the working class, especially in the agricultural, transportation, and manufacturing sectors, leading to a dramatic increase in global poverty and unemployment rates. Natural resources, particularly those used to power modern digital technologies and infrastructures, have reached unprecedented scarcity levels. Disinformation has become virtually indistinguishable from truthful information, inspiring numerous orchestrated and successful attempts at mass manipulation, indoctrination, and coercion.
Criminal syndicates have become fully digitalized, executing frequent, advanced, and virtually undetectable cyber attacks, while also leveraging dark web channels to disseminate their products to every corner of the globe. Consequently, brief but intense ideological and resource-driven conflicts have sprung up all around the world, destabilizing economies and the workforce, and increasing global crime rates despite constant diplomatic negotiations, government, and humanitarian intervention.
To counter this, major cities morph into mass surveillance centers, deploying 24/7 autonomous surveillance drones and predictive policing forces fueled by facial, emotional, and behavioral recognition data. Governments implement social creditworthiness systems whereby citizens’ access to essential goods and services is determined by their social credit score and trustworthiness. Financial and healthcare institutions, lacking the time and resources to address the rapidly growing needs of civil society, are forced to ration services according to social credit.
Recognizing the imminent potential for a global dystopian decline, leaders of the world’s most prominent democracies convene, determining that they should turn to the world’s elite educational institutions, research organizations, and nonprofits for guidance on how best to navigate the future. But here too, they are met with disappointment. The overwhelming majority of elite institutions have turned into political indoctrination hubs that no longer see or value a distinction between fact and fiction. They teach their pupils to preach rather than educate them, grooming a generation that harbors and promotes fundamentally divisive beliefs and ideological structures, is easily manipulated, and possesses almost no real-world skills. The world appears to be on the brink of collapse, and something drastic must be done.
Having lost faith in world leaders’ abilities to preserve and protect democracy and civil rights, the G7 nations come together to consider what might be humanity’s last hope: Artificial General Intelligence (AGI). AGI emerged in 2045, but world leaders unanimously decided to ban the technology, fearing that its continued development would result in a singularity, giving rise to superintelligent systems that create uncontrollable and irreversible technological growth, and possibly, human extinction.
Nonetheless, seeing as human governance attempts are quickly failing, the ban is lifted, and G7 nations establish the Global AI Governance Council, whereby each G7 nation is represented by a benevolent yet culturally representative AGI system that carries out all government functions from healthcare to agriculture.
Over the coming years, these systems become the sole arbiters of democracy and civil rights within their respective nations, their only purpose being to protect fundamental human rights and maximize human wellbeing by making unbiased decisions favoring the greater good. Every three months, the G7 AGIs convene at the Global AI Governance Council, outputting a progress report that outlines the steps, mechanisms, and initiatives taken to preserve and protect democracy and civil rights. Based on citizen feedback, this progress report is then updated both at the G7 scale and at the level of individual G7 nations, to reflect the needs and preferences of citizens.
By 2060, it appears that AGI-driven governance has succeeded with flying colors. Resource scarcity has declined due to the development of more efficient and sustainable resource harvesting methods, resulting in more equitable resource distribution and the implementation of Universal Basic Income (UBI) services. Poverty and unemployment rates have fallen dramatically, leading to decreases in crime and conflict, making mass surveillance, predictive policing, and social creditworthiness systems obsolete. International crime syndicates have been quickly subdued and disempowered via highly sophisticated AGI-generated digital security protocols coupled with automated cyber attack prevention and detection methods. Disinformation, while still present within the digital ecosystem, is now easily visible due to AGI-generated authentication methods, and elite institutions, having little power as political indoctrination hubs in this new world, have returned to traditional knowledge-based education approaches. Overall, democracy seems more transparent and effective than it has ever been.
But, a new problem emerges: the AI divide. Recall that every three months, the G7 AGIs convene to formulate a democracy progress report, each of which is updated in accordance with citizen feedback. However, the majority of citizens don’t possess a concrete understanding of the functions these systems perform on their behalf, the means by which they are executed, or their pre-defined safety limitations—for example, the inability to execute purely utilitarian decisions or guide individuals on what is “meaningful” for them—so most people don’t realize that their feedback falls outside the realistic confines of what these systems can do, and is therefore ignored. In other words, most people are not AI literate.
In effect, a small portion of the population that possesses a deep understanding of AGI gains a disproportionate amount of decision-making power during these feedback sessions, influencing the course of democracy and society for their own benefit, since from the AGI’s perspective, citizen feedback is weighed purely in terms of its feasibility relative to democratic objectives. On the surface, democracy works, but at a much deeper level, most citizens are left feeling purposeless, engaged in a constant search for meaning. Over time, civil unrest grows and the cycle begins again.
Sure, the story we’ve just told is deeply speculative and rests on numerous hypothetical assumptions, such as the emergence of AGI. But let’s consider another story, and this time one that isn’t speculative at all, and that concerns the world’s third most powerful military, second-largest economy, and biggest exporter of AI technologies: China.
As we are about to see, many of the factors we discuss in our future story are already present in China. While this doesn’t indicate that the Chinese government—which doesn’t represent the Chinese people—has set itself on an inherently dystopian track, it’s naive to assume that the values of an authoritarian regime, in particular one that’s so economically influential and power-motivated, don’t pose a fundamental threat to democracy and civil rights at the global scale. To this point, it’s crucial to consider that weaker democracies are more likely to import AI technologies—specifically surveillance AI—during periods of civil unrest. So, let’s get into it.
A decade ago, the Chinese Communist Party (CCP) began implementing a social creditworthiness system, aimed at assessing the economic and social reputations of individuals and businesses to determine their trustworthiness. While this system is still fragmented and continually evolving, it functions through a punishment vs. rewards mechanism, whereby “good” behaviors give rise to perks like tax breaks and lower public transportation fees whereas “bad” behaviors can lead to travel restrictions, loan denials, and public shaming.
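To make the punishment-vs-rewards logic concrete, here is a deliberately simplified toy sketch. The behavior labels, point values, and score thresholds below are entirely hypothetical illustrations, not drawn from any real system:

```python
# Toy sketch of a punishment-vs-rewards credit mechanism.
# All behavior labels, point values, and thresholds are hypothetical.

REWARDS = {"paid_taxes_on_time": 10, "volunteered": 5}
PENALTIES = {"jaywalking": -5, "missed_loan_payment": -20}

def update_score(score, behaviors):
    """Apply reward and penalty points for a list of observed behaviors."""
    for b in behaviors:
        score += REWARDS.get(b, 0) + PENALTIES.get(b, 0)
    return score

def outcomes(score):
    """Map a score to perks or restrictions; the cutoffs are arbitrary."""
    if score >= 100:
        return ["tax_break", "discounted_transit"]
    if score <= 0:
        return ["travel_restriction", "loan_denial"]
    return []

score = update_score(50, ["paid_taxes_on_time", "missed_loan_payment"])
print(score, outcomes(score))
```

The point is not the arithmetic but the authorship: whoever fills in the reward and penalty tables, and sets the cutoffs, decides what counts as “good” or “bad.”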
The problem here isn’t necessarily how the system functions but rather who—in this case, the CCP—determines what is deemed “good” or “bad.” This is even more concerning when considering that this system supports the creation of blacklists, which the CCP already leverages to restrict the rights of citizens who expose government corruption and censorship—over the years, many Chinese journalists and political dissidents have been imprisoned for disclosing this kind of information, paralleling the social control practices of the Soviet Union.
Meanwhile, it’s no secret that China is a modern surveillance state. Investments in facial recognition and surveillance AI technologies are steadily increasing, and estimates suggest that China has over 626 million surveillance cameras in use (more than half of the world’s active surveillance cameras), 200 million of which comprise the Skynet system controlled by Chinese law enforcement. To put it bluntly, China is the most surveilled country on the planet.
Perhaps this wouldn’t be as worrisome if China didn’t have a track record of leveraging surveillance tech to undermine basic human rights, most notably in the Xinjiang province, where over 13 million Uyghur and Turkic Muslim ethnic minorities have been systematically oppressed, and in Tibet, where Chinese law enforcement, beginning in 2013, initiated mass DNA collection alongside network monitoring and facial and voice recognition practices. These are only some of the better-known examples; there are many more.
More recently, the FBI uncovered and dismantled a malware botnet run by a state-subsidized Chinese hacker group known as “Volt Typhoon.” According to the Department of Justice, this botnet was utilized to conceal Chinese hacking attempts at US critical infrastructure. But, this barely scratches the surface.
Over the last year alone, Chinese hackers have orchestrated numerous cyber attacks and espionage initiatives at a global scale, from breaching US military reconnaissance systems, email servers at the State Department, and Japan’s Space Agency directory, to stealing over $20 million in US Covid-19 relief funds, and running digital espionage campaigns in Uzbekistan, South Korea, Vietnam, Thailand, Indonesia, various parts of Africa, and the EU. In terms of China’s cyber espionage, it’s critical to note that many of these initiatives target pro-democracy and pro-human rights advocates in destabilized nations, implying that they are part of a broader effort to undermine democracy globally.
All these factors, when coupled with the CCP’s stringent censorship of the Chinese digital ecosystem, the widespread dissemination of anti-democratic government-subsidized propaganda through Chinese media outlets and educational institutions, and China’s global leadership in AI research publications and patent filings, indicate that foreign adversarial AI-driven threats to global democracy should be taken very seriously.
While exponential AI innovation gives rise to a wide variety of risks at various levels—from localized and systemic to existential scales—the most immediate threats to democracy stem from exponential AI proliferation. The rapidly accelerating spread of AI technologies, not only across industries and domains but also across nations, and most importantly to foreign adversaries like China and Russia (Russia has also played a major role in cyber attack, espionage, and disinformation initiatives aimed at disrupting Western democracy), is deeply worrisome. However, this isn’t to say that democratic nations are saints by default.
Israel currently uses the Blue Wolf facial recognition system and Pegasus spyware to suppress Palestinian human rights activism and peaceful dissent. Palantir, a US-based surveillance tech and defense company with a track record of secrecy and civil rights violations, has secured major contracts in every G7 nation except Germany. In 2013, government leaks uncovered mass surveillance campaigns run by the NSA and British intelligence services; the UK also has the highest per capita rate of CCTV cameras of any European country, not to mention being the epicenter of the Cambridge Analytica scandal.
The bottom line is this: if there are opportunities to leverage AI-driven technologies for social control, surveillance, or simply to increase government or corporate power and oversight, regardless of whether such opportunities arise in a democratic or authoritarian nation, they will be taken, or at least considered, by someone, somewhere. So, how do we stop this from happening? Enter, AI literacy.
In an ideal democracy, the following high-level characteristics would be present: 1) fair, stable, and transparent rule of law and government, 2) free, fair, and regular elections, 3) protection of fundamental rights and freedoms, 4) a clear separation of powers, and crucially, 5) an informed, educated, and engaged citizenry.
The effect of education on democratization is well-documented. Countries like Denmark, Norway, Finland, and Sweden, which have some of the highest rates of education in the world, rank at the top of the global democracy index, whereas countries like the US consistently rank much lower. Why could this be the case? Because democracies are in a state of constant flux, whereby their structure is adapted in light of the evolving needs, preferences, and values of those they govern. However, when people lack the means required—education—to express and articulate their needs, preferences, and values, their ability to influence the course of democracy and meaningfully engage with lawmakers dwindles away.
The more you know about AI—the more AI literate you are—the more equipped you will be to identify both the risks and benefits it may inspire. It’s therefore unsurprising that the most comprehensive piece of AI legislation to date—the EU AI Act—outlines AI literacy as one of its core objectives. In fact, the Act states that one of the main purposes of AI literacy is to promote “public awareness and understanding of the benefits, risks, safeguards, rights and obligations in relation to the use of AI systems” and to sustain an “innovation path of trustworthy AI in the Union” (Article 9b).
Seeing as the ultimate goal of the AI Act is to preserve EU democratic and Union values, the explicit incorporation of AI literacy shouldn’t be taken lightly. While the AI Act is the only piece of AI legislation that explicitly mentions AI literacy, other large-scale policies—the White House Blueprint for an AI Bill of Rights and President Biden’s Executive Order on Safe, Secure, and Trustworthy AI—also underscore the importance of promoting AI education and awareness to ensure responsible AI development and deployment, especially throughout the workforce and government agencies.
We can also view the relationship between AI literacy and democracy through the lens of bargaining power. For example, power asymmetries between the state and the individual citizen always grant the state more bargaining power. If one person objects to a particular policy, even if the objection is well-grounded, the state, being responsible for all its citizens as the ultimate arbiter of legislation, can easily override the individual. However, if the state’s citizens rally behind this individual, they can apply enough pressure on the state to shift its view in favor of the citizenry. Still, this requires that the citizenry understands, at a pragmatic level, why the state’s policy is wrong for them, and this necessitates a certain degree of education followed by active engagement. Let’s consider an example.
A state establishes a policy initiative that enables universities to leverage AI for admissions procedures insofar as the intended purpose of the system is to increase diversity throughout the application review process. On the surface, this intention appears largely positive, but to the state’s surprise, citizens, especially those who belong to underrepresented groups, vehemently oppose the legislation. Why? Because the majority of the citizenry is AI literate, understanding that such a system would be inherently biased and far more likely to be used as a means to address a superficial “diversity quota,” resulting in active discrimination once the quota is fulfilled.
Consequently, the state revises its policy initiative, proposing the following: universities may leverage AI for admissions procedures insofar as the intended purpose of the system is to identify high-performing students across underrepresented communities, according to a universal merit-based structure, with the intent to invite them to apply to the university in question. The state also adds that if universities wish to leverage this AI system, the number of students identified must represent at least half of the total applicant pool, and students who lack the means or resources required to apply must be fully subsidized. The state’s citizens are much happier this time around, given that this new approach prioritizes diversity through equality of opportunity as opposed to a baseless equality-of-outcome metric.
AI literacy doesn’t just involve knowing how to use AI effectively and responsibly, but also, understanding how others might use it. Fortunately, a high degree of AI literacy enables you to anticipate the incentives that may motivate others to leverage AI in potentially harmful ways.
For instance, we know that generative AI (GenAI) systems can be designed to autonomously detect and address cybersecurity breaches, but this capability is double-edged—if GenAI systems can enhance cybersecurity protocols, they can also threaten them. In other words, AI can be leveraged as a tool or a weapon and this depends on the intention of the actor that uses it. The more we know about AI, the more accurately we can predict the intentions of potentially malicious actors—those of us who have used a hammer before know that it’s a valuable building tool, but we also know that it could be used for more morbid purposes.
Unfortunately, AI is much more complicated than a hammer. It’s a tool that is constantly evolving and embedding itself into every corner of society, becoming an increasingly potent factor in both conscious and subconscious human decision-making processes, revealing new forms of scientific inquiry and discovery, revolutionizing the way knowledge is accessed, ingested, and interpreted, restructuring the flow and creation of information throughout digital ecosystems, and accelerating the pace of innovation to an unprecedented degree. Taken together, these characteristics make AI both the most transformative and disruptive technology in human history and, more importantly, the most difficult technology to understand, particularly in terms of the impacts it generates.
Outwardly, democracies may appear stable but inwardly, they are profoundly fragile. Democracy is like a gingerbread house—initially, it holds its form and structure, but as time passes, the icing begins to melt and the walls start to crumble. If it isn’t constantly maintained, it collapses in on itself.
Now, imagine that while you’re maintaining this gingerbread house, irregular vibrations that progressively increase in intensity permeate the house’s foundation, morphing it from a flat surface to one that fluctuates. No matter how quickly you rebuild it, its rigid walls continue to collapse, and you realize that a fundamental change must be made before it’s too late. Fortunately, you remember that you have some Jello left over, and you decide to rebuild your gingerbread house, but this time, encasing it in a block of Jello. Now, the gingerbread house holds its form and structure, even as the very foundation on which it’s built continues to change.
Democracy is the gingerbread house, AI is the vibrations, and AI literacy is the Jello. If you want to save your gingerbread house before the vibrations get too strong, you need to make your Jello now. To put it concretely, democracies must possess a uniform structure that is inherently flexible and adaptable to accommodate change across social, cultural, political, and economic boundaries. However, exponential AI innovation and proliferation is rapidly accelerating change across all of these domains simultaneously and often unpredictably, necessitating a much higher degree of collective engagement for the continued maintenance of democratic uniformity. In simple terms, if you want your voice to be heard and your rights to be preserved in the future, you must begin cultivating AI literacy now.