Redefining the Nature of Work: What Leaders Can Do to Motivate Collective AI Literacy (Part 2)

Introduction

In addressing the imperative of collective AI literacy for a prosperous future of work, we focus on the crucial role of leadership in motivating this transition. While highlighting the perils of a workforce only partially literate in AI, we pivot to how leaders can steer away from a risk-centric approach that hampers collective action. Drawing from psychological and economic theories, we argue that risk perception can prompt self-interested behaviors detrimental to collaborative goals. By balancing the risks and benefits of AI and implementing AI literacy incentive structures, leaders can shift the workforce mindset from fear to opportunity, emphasizing human-AI cooperation and human-centric AI principles. This approach, grounded in insights from self-determination theory and behavioral economics, is pivotal in fostering an AI literate workforce, which is essential for innovation, economic growth, and a shared prosperous future.

Shifting the Focus: Balancing Risks and Benefits 

In the first essay of this series, we demonstrated the importance of cultivating AI literacy at the individual level. However, at the population level, collective AI literacy will play a crucial role in ensuring a prosperous future of work. A world where only a minority of the workforce is AI literate could increase inequality, reduce opportunity, and stifle innovation and productivity. An AI literate workforce would be more adaptable and resilient, continually generating value even as AI becomes progressively more sophisticated, capable, and widespread. As Connor Wright, Partnerships Manager at the Montreal AI Ethics Institute, claims, “We need to see AI as infrastructure,” and we need to embed ourselves within this infrastructure to grow with it, rather than beside it. Simply put, collective AI literacy makes the workforce an integral part of AI infrastructure.

“We need to see AI as infrastructure.”

What’s more, collective AI literacy is crucial to shaping a future in which AI benefits all of humanity. A prosperous future of work, while a central dimension of that broader future, won’t guarantee it. Socio-economic inequality may someday be eliminated through mechanisms such as Universal Basic Income and decentralized governance, but this doesn’t imply that people will continue to find meaning and purpose in their lives—something already difficult for many of us today—nor that bad actors will be prevented from using AI to cause harm. Therefore, leaders must emphasize the benefits of collective AI literacy not just for the future of work, but for the future of humanity—keep this point in mind while reading, since we will return to it at the end of this essay. This is a difficult task, but Paul Marca, VP of Learning and Dean of Ikigai Labs Academy, offers a grain of optimism: “You can create a breadcrumb trail to meaningful education if you can get people involved and engaged.”

In reality, collective action—when a large group of people works together to accomplish a common objective—is notoriously hard to achieve, and it’s unlikely to occur in the absence of strong leadership. Leaders help set and define objectives and incentive structures, profoundly influencing how individuals behave in a group. As a result, leaders tend to possess the most consequential decision-making power. Therefore, when considering what version of the future of work to strive toward, it’s imperative that leaders understand not only the risks AI innovation may pose but also the benefits.

“The opportunity to learn from different examples, different models, different education disciplines is really important and provides a richness of perspective that will make you better suited to make tough decisions as an executive or leader,” says Marca.  

“Be curious and playful—just ask questions and see how it [AI] can respond—get comfortable exploring.”

If leaders encourage overly risk-centric attitudes, they may actually cultivate conditions that disincentivize individuals from working toward a shared and economically prosperous future of work. Leaders need to encourage their workforce to “be curious and playful—just ask questions and see how it [AI] can respond—get comfortable exploring,” suggests Stephen Smith, Co-Founder and CEO at SimplyPut AI. A little fear can be healthy, but too much fear is stifling and counterproductive. Research across game theory and social and evolutionary psychology has demonstrated that when risk is salient, individuals are more likely to act in their self-interest. We explore some examples below:

  • “Fight or Flight”: Most animals, including humans, exhibit “fight or flight” behavior in response to perceived danger—in this context, danger represents a survival risk, which motivates a stress response.21 This response triggers survival instincts, which drive self-preservation behaviors. For instance, during the early stages of COVID-19, widespread consumer-driven “panic purchases” quickly depleted several critical global supply chains, such as those for pharmaceuticals and medical devices.16 This kind of problem is formally referred to as the tragedy of the commons.
  • Risk Perception and Social Dilemmas: Research on social dilemmas, such as the prisoner’s dilemma, shows that when individuals perceive more risk in cooperative behavior (e.g., the risk of being exploited by others), they tend to choose to act in their self-interest to mitigate potential losses.1,18,19 In other words, if the benefits of cooperation are not clearly articulated and emphasized, individuals will act selfishly. 
  • Game Theory: Research throughout economic game theory shows that when individuals are engaged in games that involve risk, such as betting or investment games, they’re more likely to act in accordance with self-interest to maximize their expected utility.19 Expected utility weighs the value of each possible outcome by the probability that it will occur (see the sketch following this list). However, establishing concrete probabilities for future of work scenarios is extremely challenging, and given the current widespread focus on AI risk mitigation, it’s likely that such probability estimates will be driven by risk-averse rather than risk-neutral attitudes.
  • Stress-motivated behavior: Stress, which can manifest as a behavioral response to risk, can lead to self-centered behavior.4 When people experience stress, it often motivates them to focus solely on their own needs, causing them to overlook potential solutions to their problems that involve communication or collaboration with others. 
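
To make the expected-utility logic concrete, here is a minimal sketch in Python. The payoffs and probabilities are illustrative numbers of our own choosing, not figures from the cited research; the point is only to show how lowering the perceived odds of reciprocation tips a rational actor from cooperation toward self-interest:

    # Hypothetical payoffs: cooperating pays off if reciprocated,
    # but costs you if the other party exploits you. Defecting
    # (acting in self-interest) yields a safe, modest payoff.
    COOPERATE_RECIPROCATED = 10
    COOPERATE_EXPLOITED = -5
    DEFECT = 2

    def expected_utility_of_cooperating(p_reciprocation: float) -> float:
        """Each outcome's payoff weighted by its probability."""
        return (p_reciprocation * COOPERATE_RECIPROCATED
                + (1 - p_reciprocation) * COOPERATE_EXPLOITED)

    # When risk feels salient, people implicitly lower p_reciprocation.
    for p in (0.9, 0.5, 0.3):
        eu = expected_utility_of_cooperating(p)
        choice = "cooperate" if eu > DEFECT else "defect"
        print(f"perceived odds of reciprocation {p:.0%}: "
              f"EU(cooperate) = {eu:+.1f} -> {choice}")

With these numbers, cooperation remains the rational choice until the perceived probability of reciprocation drops below roughly 47%, at which point defecting maximizes expected utility: precisely the shift the research above describes.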

“If you’re just the idea of yourself and something comes along and threatens it, you’re going to have big problems. But, if you can evolve and change as the environment changes around you, you’re going to be in a much better spot to handle it.” 

Humans intuitively focus on risk under conditions of uncertainty.13 It’s therefore unsurprising that the majority of government and corporate initiatives on the future of AI take this angle.3,5,6,11,15 Nonetheless, an overly risk-centric approach amplifies potent cognitive biases, including negativity and confirmation bias,14 the affect and availability heuristics,17,22 and loss aversion,9 all of which can cause individuals to overestimate risks and underestimate or overlook benefits, thereby increasing the likelihood of selfish behavior. In the insightful words of Tony Doran, Co-Founder of SimplyPut AI, “If you’re just the idea of yourself and something comes along and threatens it, you’re going to have big problems. But, if you can evolve and change as the environment changes around you, you’re going to be in a much better spot to handle it.”

Leaders therefore have a responsibility to identify and manage AI risks as they emerge, while also continually fostering positive visions of the future of work and AI. In simple terms, leaders need to balance the potential risks of AI-driven impacts on the future of work against the potential benefits and “look externally or orthogonally to different industries in order to gain insights about what’s happening,” suggests Marca. In doing so, they can begin shifting the perspectives of their workers from an “I have everything to lose” to an “I have everything to gain” mindset, or as one F500 healthcare company executive told us, leaders need to adopt “a concerted well-rounded effort to bring people along and help them realize what they’re missing.” For instance, which of these two claims is more persuasive: 1) cultivate AI literacy, because if you don’t, you will have nothing of value to offer in the future, or 2) cultivate AI literacy, because it will allow you to develop new skills, source novel opportunities, and continue to provide value in the future?

One thing is clear: a prosperous future of work, where humans benefit from AI and continue to provide economic value, can’t emerge without at least some degree of collective action. To reiterate, collective action requires group-level cooperation motivated by the achievement of a common goal. However, cooperation is fragile, so we need people to guide us and keep us in check. In this age of exponential innovation, our leaders have never been more critical.

Positive Visions for the Future of Work 

Mainstream discussions of the impacts of AI innovation on the future of work tend to adopt a “doom and gloom” perspective. In an attempt to cut through some of this negative hype, we illustrate a series of realistic outlooks on the future of work and AI that offer an optimistic perspective. We hope that by illustrating these possible outcomes, all of which will require some degree of collective AI literacy, leaders will realize the value of framing AI literacy initiatives in terms of the benefits they inspire rather than the risks they mitigate. In essence, leaders should be asking themselves and their workforce, “What kind of augmentative tool can AI be to what I’m doing, and how can I use it to augment what I’m doing to give me more time to do something else?” as Wright puts it.

“What kind of augmentative tool can AI be to what I’m doing, and how can I use it to augment what I’m doing to give me more time to do something else?”

  • Human-AI Cooperation: Human skills and AI capabilities need not be mutually exclusive. Even the most advanced AI systems still struggle with problem-solving in novel contexts, emotional intelligence, and critical thinking, often requiring human expertise, judgment, and good taste that carefully considers the nuances of a given situation. Humans can leverage AI to increase their productivity and foster innovation, but ultimately, most AI-driven decision-making will still require human oversight since even the most sophisticated models continue to lack transparency and explainability. According to Smith, “Embracing AI and understanding how it can be a great partner for the human in the loop is the right mentality.” Moreover, the arrival of generative AI inspires opportunities for human-AI cooperation, in the form of co-creation of original content, the synthesis of novel ideas, and the development and execution of business objectives and laboratory experiments, to name a few examples.
  • Human Augmentation: Some AI systems, such as DeepMind’s AlphaFold—a program that predicts protein structures with remarkable accuracy—will continue to revolutionize the pace and scale of scientific discovery. For instance, as AlphaFold becomes more powerful, researchers could leverage it to build synthetic biological structures or biotechnologies. Such technologies might someday be used to augment human capabilities, both cognitive and physical. Future augmented humans could then design and build technologies that their predecessors were incapable of envisioning or constructing themselves. They could discover how to make humans impervious to all known diseases or, through the development of sophisticated brain-computer interfaces, create mechanisms by which humans can form digitally connected collective superintelligences or hiveminds. It’s worth noting that this version of the future, if taken seriously, is several decades if not centuries away.
  • Human-Centric AI: Given the increasing global emphasis on ethical AI, namely systems that maintain fairness, transparency, accountability, safety, and privacy, AI companies are experiencing potent societal and regulatory pressures to build systems whose primary objective is to empower and benefit humans. This could also result in companies that are more closely aligned with fundamental human values, with their primary goal being to create technologies that benefit humanity. 
  • Meaningful Work: “You can’t worry about AI taking away your current responsibility, you should view it as opening up new opportunities,” claims Smith. Through the automation of mundane or repetitive tasks, AI will empower humans to pursue more meaningful and impactful work by enabling them to exercise higher-level judgment and creative thinking. Human-AI collaboration will enhance human skill sets while also facilitating the development of novel skill sets that allow humans to cultivate new and interesting opportunities. 
  • Better Work Relationships: Collaborative platforms such as Slack, Notion, and Trello are being adopted at scale due to their ability to enhance workplace collaboration and communication, and as Marca suggests, “value is potentially in human interactions, in providing engagement.” A future where employees communicate over dispersed networks mediated by AI therefore appears plausible, especially since many of these platforms have already started integrating built-in AI features. AI-mediated communication could enhance teamwork, help employees find the right person to talk to about a given problem, cultivate diversity of ideas, and increase inclusivity, among many other benefits.

“Having a love of learning as part of the lexicon for organizations and for individuals is absolutely vital.”

  • Education and Continuous Learning: The rapid evolution of AI necessitates a workforce that demonstrates adaptability alongside a commitment to continuous learning, and Marca agrees that “having a love of learning as part of the lexicon for organizations and for individuals is absolutely vital.” The constantly changing AI environment encourages a culture of lifelong learning and upskilling, where humans continually adapt and find new ways to integrate AI into their work.
  • Economic Growth and Efficiency: The total additional value that generative AI could deliver to the global economy ranges between $2.6 trillion and $4.4 trillion annually, and leading up to 2040, generative AI could increase the rate of labor productivity growth by up to 0.6% annually.12 This massive influx of value will facilitate not only economic growth but also the creation of new industries and markets, and consequently, new employment opportunities.
  • Collective Economic Prosperity: Increased productivity and efficiency, optimized resource management, better access to information and to financial and educational resources, job creation, streamlined innovation—AI could enhance all of these things, laying the groundwork for a future in which there’s less inequality and more opportunity, giving rise to or improving public institutions such that they’re more inclusive, diverse, and representative of people’s interests and values. Such institutions might be more inclined or even fundamentally motivated to share AI-generated wealth across society.
  • Flexible Work Arrangements and Labor Markets: Humans are highly adaptable and creative, which allows them to create new professions and skills alongside large-scale technological change. The widespread adoption of remote work arrangements during the COVID-19 pandemic and the emergence of novel professions like AI writers, artists, prompt engineers, and ethicists demonstrate this point.

Many complex socio-economic, political, and environmental factors will impact whether or not these scenarios come to fruition. However, if a population possesses at least some degree of collective AI literacy, it will be better equipped to enact and capitalize on such scenarios when opportunities to do so emerge. 

The possession of AI-specific skill sets, a current understanding of the limits and capabilities of AI systems, and an adaptable mindset will be critical in shaping and executing positive outcomes as work evolves, especially when considering that, as indicated by one F500 healthcare company executive, “The nature of job functions will change because AI will augment things.” Although AI literacy may seem obviously essential to the future of work, most people will nonetheless require guidance and motivation—humans are expert procrastinators.2 Consequently, we must ask ourselves: what can leaders do to motivate collective AI literacy?

Motivating Collective AI Literacy 

“Once you have established a strategy, you need to provide education so that people can leverage these tools to get their work done—and then think differently about the roles they’re engaging in. The question is, do you have a workforce that’s flexible and adaptable, and what do you do with those people who are maybe not as flexible and adaptable?”

Even when people recognize that it’s in their best interest to take action, they often lack the motivation to do so. The best course of action may be uncertain or require substantial effort, the amount of available information might be overwhelming, a belief that things will simply “work themselves out” might be present, or conversely, a fear of making mistakes could be paralyzing. The process of cultivating AI literacy is vulnerable to all of these pitfalls, and even though AI literate individuals will possess a competitive edge in the future of work, most workers will require intrinsic motivation—motivation to do something for its own sake, rather than for instrumental reasons—in their journey to AI literacy.

“Once you have established a strategy, you need to provide education so that people can leverage these tools to get their work done—and then think differently about the roles they’re engaging in. The question is, do you have a workforce that’s flexible and adaptable, and what do you do with those people who are maybe not as flexible and adaptable?” inquires Marca. 

Leaders are responsible for the well-being of those they lead, but they typically don’t have the time or resources required for one-on-one leadership guidance. If leaders want to cultivate a resilient and adaptable workforce in the age of AI, they’ll need to develop mechanisms by which to intrinsically motivate workers to cultivate AI literacy, and as Marca claims, “We need to educate those who are in the workforce to become resilient in the age of AI both in terms of their job as well as in terms of the company opportunity.” Intrinsic motivation is difficult to instill; however, leaders have a variety of high-utility psychological tools at their disposal in this respect.

One such tool is self-determination theory,7 which posits that for people to be fulfilled by the work they do, three core psychological needs must be satisfied: competence, autonomy, and relatedness (i.e., meaningful connections with others). Therefore, leaders can frame discussions and initiatives around AI literacy to directly address these core needs. When individuals experience self-determined motivation, they display higher productivity, organizational commitment, and job satisfaction.7

Moreover, the distribution of future skill sets will likely skew in favor of high-level cognitive and emotional skills. These skills are more likely to be developed and sustained when driven by self-determined motivation, and they also increase an individual’s ability to cope with uncertainty. The question leaders should ask, in essence, is: how will the future of work fulfill, or fail to fulfill, the psychological needs proposed by self-determination theory?

Leaders can draw additional insights from behavioral and agency theory. Behavioral theory10 emphasizes that talented workers are critical in driving enterprise development. However, as many leaders know, identifying and cultivating the right talent can be genuinely difficult, especially when employees lack the intrinsic motivation to unlock their potential. To this point, behavioral theory suggests that leaders should adopt a “people-oriented” approach, ensuring a strong relationship between their organization and its employees through clear and consistent internal communication and initiatives that involve employees in management procedures. Relating this back to self-determination theory: using this approach, leaders can enhance their ability to satisfy workers’ needs for competence, autonomy, and relatedness, resulting in improved intrinsic motivation and worker adaptability.

Agency theory,8 on the other hand, addresses the principal-agent problem—goal misalignment between the principal (a manager or leader) and the agent (an employee). For example, a leader may want to optimize employee output and productivity, while an employee might prefer to do as little work as possible for the same salary. These problems typically arise due to information asymmetries or inadequate managerial oversight, and they can lead employees to pursue opportunistic behaviors that ultimately harm or undermine the organization in question.

However, eliminating information asymmetries between leaders and employees may not always be feasible or realistic, especially in large organizations, and increasing managerial oversight could negatively affect employees’ feelings of competence and autonomy. Consequently, a different approach might be more useful, namely one where leaders develop incentive structures that motivate workers to align their goals with the goals of the organization.20  

“Leveraging AI as a tool is a necessary skill that the workforce needs to adopt.”

What do these theories tell us about how leaders should motivate collective AI literacy within their organizations? 

  • Leaders need to realize that their workers have fundamental human needs that must be fulfilled: Framing AI literacy goals and initiatives so that they directly address these needs could improve workers’ intrinsic motivation to become AI literate. At a fundamental level, “For individuals, the question is how do I understand what’s happening so I can keep my job,” states Marca. 
  • Leaders should adopt a “people-oriented” approach: “Where do you want the tool to begin and where do you want it to end for the human?” questions Doran. Including employees in the discussion on AI literacy, listening to their concerns, and demonstrating why AI literacy is valuable for them, not just the organization, will be core aspects of motivating collective AI literacy.
  • Although organizational goals may be beneficial to employees, employees may still behave opportunistically in the absence of robust incentive structures: Developing AI literacy is a labor-intensive long-term solution, and while it will benefit both the individual and the organization, most people are fixated on short-term payoffs, hence the importance of incentive structures. These incentive structures should promote the idea that “Leveraging AI as a tool is a necessary skill that the workforce needs to adopt,” as one F500 healthcare company executive comments. 

Specific AI-Literacy Incentives for Leaders to Consider 

“The real key isn’t only to be able to personally use AI, but to trust it to be released to a broader organization and believe that they’re going to use it in the correct way—that’s what we’re doing with data at SimplyPut.”

While the previously discussed shifts in mindset can motivate leaders to prioritize collective AI literacy within their organization, actually achieving it will depend on whether concrete incentives are in place that highlight the benefits of AI literacy for employees. Importantly, leaders should think critically about how they apply these incentives since their effectiveness will vary on a case-by-case basis.

“The real key isn’t only to be able to personally use AI, but to trust it to be released to a broader organization and believe that they’re going to use it in the correct way—that’s what we’re doing with data at SimplyPut,” says Smith. Fellow Co-Founder Doran also adds, “Trust for us isn’t as simple as ‘the AI did it’, it’s also social. So we show that the human that gave that example for the AI to use is translated all the way through to the end user.” 

Below we list several AI literacy incentives for leaders to consider:

  • Provide equal opportunities for AI education and training at minimal or no cost to ensure that all employees within your organization have the ability to upskill and re-skill as necessary. The more accessible AI education and training are, the more likely people will be to pursue them.
  • Reward the completion of AI education and training initiatives to encourage employees to maintain and update their AI-skills repertoire in accordance with recent AI innovations and integrations. Rapid changes in the AI landscape will require frequent upskilling, and sometimes people will need an extra nudge to pursue these initiatives.
  • Hold internal and external AI competitions to motivate both employees and members of the public to come up with novel and effective AI-driven solutions to complex problems. Reward the winners of these competitions to encourage further AI development and experimentation. 
  • Commit to ethical and responsible AI to demonstrate to employees, regulators, and the public that you carefully consider the risks of AI integration within your organization and that you will actively prevent any AI harms or threats to human well-being. Leaders should consider what “trust looks like in a technological sense, in a social sense, and based on context,” according to Doran. Examples of this include transparent data collection methods and regular audits of AI systems—if people feel less threatened by AI they’ll be more likely to engage with it. 
  • Reward responsible AI experimentation and punish irresponsible AI experimentation. Employees may not always be able to recognize which AI use cases are appropriate, and setting examples early on could allow employees to identify in which contexts AI experimentation is both useful and appropriate. 
  • Run public awareness campaigns that articulate the benefits of AI literacy by collaborating with other organizations/platforms that provide AI education and training opportunities and publicizing the results of these campaigns in an easily digestible format. Leaders should “engage with people that don’t understand AI or don’t know about it, so they’re not afraid to come across as being ignorant, and are open to having a dialogue,” claims one F500 healthcare company executive.
  • Demonstrate how AI literacy will enhance human autonomy by emphasizing that AI literate individuals are more equipped to influence the course of AI innovation and increase the probability that AI will be used to augment rather than replace human labor functions. The more AI literate individuals are, the more prepared they’ll be to understand AI regulations, risks and benefits, use cases, and internal AI initiatives, resulting in a higher degree of individual decision-making power. 
  • Hold regular AI-literacy feedback sessions by consistently engaging internal and external stakeholders in discussions on AI literacy. Encourage stakeholders to voice their concerns in terms of what works and what doesn’t for them, and ensure the AI literacy frameworks are updated and revised per such concerns and the most recent AI innovations. 
  • Foster a work culture that centers on continuous learning by rewarding the completion of sponsored AI education and learning initiatives and by recognizing and rewarding when employees pursue such initiatives on their own. Employees should also be encouraged to propose their own AI education and training initiatives and receive the proper funding and support to pursue them where appropriate. 
  • Foster a work culture that embraces failure and uncertainty by encouraging employees who are hesitant to use AI to experiment with the technology without fear of repercussions. Be sure to clearly articulate which forms of AI experimentation are considered responsible to prevent potential risks and harms. 
  • Ensure the presence of mechanisms that prevent information overload by providing employees with easily digestible resources and allowing them to “start small” on their journey toward AI literacy. Internal AI review boards and experts should also be leveraged to guide employees on where to look for and take advantage of AI-driven opportunities. 
  • Be patient and compassionate by understanding that learning and adaptation rates and strategies vary between individuals. Some people will need more time and resources to become AI literate than others, and this doesn’t indicate that they’re less valuable employees. Patient and compassionate leaders will be able to cultivate a more loyal and engaged workforce that is better prepared to overcome future AI challenges. 

Leaders may not need to implement every single one of these incentives to run successful AI literacy campaigns internally. However, given how varied motivation levels between individuals can be, it’s unlikely that simply articulating the benefits of AI literacy will be enough to ensure a sufficiently high degree of collective engagement. Therefore, leaders need to think carefully about how they incentivize AI literacy among their workforce, and should continually investigate both financial and behavioral incentive structures. 

A Final Note: The Future and a Path to Collective AI Literacy

What Might the Future Look Like? 

Collective AI literacy will lay the foundation for developing a population-level common sense understanding and intuition of AI.

Collective AI literacy is crucial to ensuring a future in which AI benefits all of humanity. To understand why, consider the idea of exponential learning. Imagine two people, person A and person B, both of whom have never used a phone before. Person A is given a flip phone and person B receives a smartphone. Both are allowed to practice using the phones for as much time as they need to feel comfortable. Once they’re done practicing, they’re each given a modern-day laptop, which, like the phone, neither of them has ever used. They’re not allowed to practice with the laptop, and they must complete a series of tasks, such as navigating the internet and signing up for a social media account, within a two-hour timeframe. Of these two individuals, which one will be more likely to complete the tasks given to them? 

If you answered that person B would be more likely to complete the allotted tasks, you’d be right. But why? The modern-day smartphone and laptop share many functional features—many of the skills required to operate a smartphone are transferable to operating a laptop. This means that person B will have a much quicker learning curve than person A, and what’s more, person B will be able to accelerate their acquisition of new skills more rapidly than person A. In essence, person B might start with a 10x learning advantage over person A, but this advantage will rapidly increase as person B builds on the skills they’ve acquired while person A is stuck learning the fundamentals. From this point forward, it won’t take person B much time to reach a 100x or even 1000x learning advantage over person A. In other words, person A may never be able to catch up to person B.
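
This compounding dynamic is easy to see in a toy model. The following Python sketch uses illustrative numbers of our own choosing (the starting skill levels and growth rates are hypothetical): both people learn exponentially, but person B’s transferable skills let them learn at a faster rate, so the gap itself compounds rather than staying fixed at 10x:

    # Toy model of transferable-skill compounding (hypothetical numbers).
    def skills(periods: int, start: float, rate: float) -> float:
        """Skill level after compounding growth over `periods` periods."""
        return start * (1 + rate) ** periods

    A_START, A_RATE = 1.0, 0.1   # person A: still learning the fundamentals
    B_START, B_RATE = 10.0, 0.5  # person B: 10x head start, faster transfer

    for t in (0, 8, 15):
        advantage = skills(t, B_START, B_RATE) / skills(t, A_START, A_RATE)
        print(f"after {t:2d} periods, B's advantage is ~{advantage:,.0f}x")

With these assumed rates, B’s advantage grows from 10x at the start to roughly 120x after eight periods and over 1,000x after fifteen, mirroring the trajectory described above.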

The simple thought experiment above highlights the following idea: if an individual begins cultivating AI literacy now, especially while the most advanced AI systems are still in the early stages of development and deployment, they’ll be exponentially more capable of reaping the benefits and opportunities that these systems provide as they become more powerful. Moreover, given that AI is itself an exponential technology, exponential learning will be required to keep up with AI innovations. 

However, while individual AI literacy is valuable, it doesn’t guarantee that humanity will collectively benefit from AI. If only a minority of the population is AI literate, this increases the possibility of a deeply unequal future, namely one in which power is concentrated among a small fraction of AI literate individuals. If, on the other hand, a majority of the population is AI literate, the benefits inspired by future AI systems are far more likely to be shared and evenly distributed, and future AI innovations are more likely to be intentionally designed to benefit the collective good. In essence, a collectively AI literate population will have the decision-making power required to influence the course of AI innovation for the benefit of humanity.

To drive this point home, we illustrate what we think is a plausible vision of the future:

A decade from now, AI R&D breakthroughs may enable the creation of AI models that are orders of magnitude more sophisticated and capable than current AI systems. Such systems might have an unprecedented degree of emotional and cultural intelligence, the ability to solve complex problems in novel or changing environments, and creative capabilities that equal those of humans, in addition to dramatically enhanced abilities for generalization, personalization, adaptation, common sense reasoning, and the design, development, and execution of long-term plans, scientific experiments, and objectives. 

These future AIs might not only drive innovation but, in fact, become an integral part of it, streamlining the development and discovery of fundamentally revolutionary technologies at a pace, scale, and intensity that humanity has never before experienced. Such systems may not have the capacity for subjective experience, agency, or self-awareness, although they will likely match or exceed human intelligence in many more contexts than they do today.

If only a minority of the population is AI literate leading up to this point, they’ll be able to understand and leverage AI in ways that no one else can, such as for eugenics—and few will have the skills or knowledge necessary to stop them. For example, future AI systems may be able to optimize gene editing, enhance embryo selection through genetic analysis, and provide personalized gene therapy, which could allow individuals to genetically engineer the traits of their offspring to create a generation of humans that are impervious to most diseases, have longer lifespans, and are far stronger and more intelligent than today’s humans. To go one step further, such future humans may use their skills to build even more sophisticated innovations, discovering ways to integrate technology with the human body and mind, and developing new tools only they can use. 

As time goes on, these “humans” will become further and further removed from what it means to be human. Moreover, since such individuals would be by far the most capable members of society, they would quickly ascend the ranks and eventually hold all decision-making power. If they can no longer grasp or resonate with what it means to be human for the rest of us, how can we guarantee that they’ll make decisions that continue to benefit us? History and evolutionary biology have shown that as organisms become more powerful, they also become more dominant, exploiting their environment and sometimes even manipulating it for their own needs, just as humans have done through modern infrastructure and industrialization.

On the other hand, if a significant majority of the population is AI literate, most people will be able to understand how to leverage AI. This means they’ll be able to both utilize existing AI capabilities and identify potential use cases. Though opportunists will always be present and consistently attempt to further their well-being at the cost of others—such as by leveraging AI for eugenics—doing so behind closed doors will be much more difficult. In other words, everyone will know that AI can be used for eugenics, and although some won’t have a problem with this, most will find it morally distasteful if not downright unacceptable. However, while AI-driven eugenics is certainly a bad thing, specific practices, such as genetically engineering embryos for immunity to diseases that cause immense suffering at a global scale (e.g., malaria or cancer), might be the next step in ensuring a prosperous future for all of humanity. Conversely, people might simply decide that anything to do with leveraging AI for eugenics should be prohibited.

If the majority of people are AI literate, even if they’re not certain about how their governments and corporations plan to use AI, they will be able to more accurately anticipate the risks and benefits that AI may inspire across known and unknown use cases.

We simply don’t know what long-term decisions corporations and governments will make regarding how AI is used and implemented as it becomes more powerful, especially when such decisions are confidential. However, we do know that knowledge is power—if the majority of people are AI literate, even if they’re not certain about how their governments and corporations plan to use AI, they will be able to more accurately anticipate the risks and benefits that AI may inspire across known and unknown use cases. By leveraging a fundamental understanding of how and where AI can be used, people could put pressure on their governments and corporations to align their AI development and integration strategies with objectives that benefit the whole of society. In essence, AI literate individuals will be able to vote, in an informed manner, on AI regulations, understand whether AI is developed and used responsibly or irresponsibly, identify novel AI use cases, and ensure that AI innovation progresses safely and ethically. 

Finally, you may be asking yourself: how could a collectively AI literate population accurately anticipate how individual malicious actors may leverage AI for purposes that undermine collective well-being? Consider the following comparison: the predictive capabilities of today’s AI systems can be improved by enhancing the quality and quantity of the data on which they’re trained. The human mind is similar, except that much of the data we’re trained on comes in the form of collective knowledge that’s passed down through generations. This underlies the universal human capacity for common sense reasoning and intuition—we don’t need to understand gravity to know that a ball will always fall downwards, just as we know that we can’t jump from a second-story window without incurring a serious injury.

Collective AI literacy will lay the foundation for developing a population-level common sense understanding and intuition of AI. Nonetheless, intuition and common sense reasoning are sometimes unreliable, leading people to pursue inadvertently harmful objectives, like drinking too much during stressful times or believing information simply because it comes from someone they trust. Still, while our common sense and intuition regarding AI may not always be correct, it will always be shared among us. For instance, you may not cope with stress by drinking, but you know that others will. In the same vein, you may find it morally unacceptable to use AI for eugenics, but you know that others won’t. Collective AI literacy will enable a shared understanding of how AI might be used and implemented, especially as the technology continues to advance, making it easier to identify potent benefits and risks as they emerge.

Embarking on a Path to Collective AI Literacy Through Employee Responsibility

AI’s workplace application can give rise to negative social and political consequences, such as job loss among low- and middle-skilled workers, widened income gaps, and legal challenges like data privacy and algorithmic discrimination. Seeing as employees are key assets in any organization, they must be treated well and therefore provided with the necessary training and resources—AI literacy capabilities—to continue providing value as AI innovations make the future of work more uncertain. For the future of work to provide shared economic prosperity, leaders will need to motivate their workforces to develop AI literacy.

While AI integration does increase companies’ production efficiency, only improvements in employee responsibility have been shown to increase innovation output and efficiency.

The path to collective AI literacy won’t be easy; however, the interventions suggested in this essay target a critical aspect of corporate social responsibility frameworks, namely, employee responsibility. Employee responsibility entails treating workers well through the provision of both financial and non-financial rewards, an inclusive workplace, and novel opportunities for learning and self-improvement. While AI integration does increase companies’ production efficiency, only improvements in employee responsibility have been shown to increase innovation output and efficiency.23

How AI literacy initiatives are framed will significantly impact employees’ motivation to become AI literate. To realize the value of AI literacy at the organizational level, leaders must recognize the importance of placing their employees at the heart of their AI literacy initiatives. The clearer the benefits of AI literacy are to employees, the more likely it is that they’ll be motivated to develop it. “By keeping an open mind and exploring what’s out there you’re able to cut through that hype and to cut through what is genuine gold within the AI economy,” remarks Wright. 

Consequently, organizations with high degrees of collective AI literacy will improve employee responsibility, increasing the probability of sustained innovation and profit, even as AI continues to disrupt and transform the work landscape. 

Before concluding, let’s play devil’s advocate for a moment. Some leaders may be asking themselves, “Why should I contribute to a future of work that provides shared economic prosperity when doing so might compromise the competitive edge that my company has?”

In short, a future of work that provides shared economic prosperity is the only version of a future where your company is guaranteed to continue profiting and innovating (insofar as it’s aligned with relevant norms and regulations). An economically unequal future of work would enable a select few organizations to reap enormous profits, while others are quickly outcompeted. The former version supports the development of a robust, adaptable, and resilient workforce that possesses elevated skill sets and continues to generate value and novel opportunities for innovation, thereby providing companies with a large talent pool to draw from when building their competitive edge. The latter version would likely result in a workforce where only the minority possesses the necessary talent required for continued innovation, making talent much more expensive and difficult to identify. 

All in all, if leaders begin cultivating collective AI literacy now, they’re implicitly future-proofing their organizations. But, it doesn’t stop there—individuals, organizations, and society at large could all benefit substantially from collective AI literacy. To conclude in Wright’s words, “It’s not just scientists and biologists getting excited about neural networks and AI, it’s the general population of a whole country.” 

References

*note: references are ordered alphabetically by author name.

  1. The Evolution of Cooperation (Axelrod & Hamilton, 1981)
  2. Psychology of Procrastination (American Psychological Association, 2010)
  3. Emerging AI Risks Underscore Urgent Need for Responsible AI (BCG, 2023)
  4. Threat‐Induced Anxiety and Selfishness in Resource Sharing: Behavioral and Neural Evidence (Cui et al., 2023)
  5. The General Data Protection Regulation (EU Commission, 2018)
  6. The AI Act (EU Commission, 2023)
  7. Understanding and Shaping the Future of Work with Self-Determination Theory (Gagné et al., 2022)
  8. Theory of the Firm: Managerial Behavior, Agency Costs and Ownership Structure (Jensen & Meckling, 1976)
  9. Prospect Theory: An Analysis of Decision under Risk (Kahneman & Tversky, 1979)
  10. The Human Problems of an Industrial Civilization (Mayo, 1934)
  11. Confronting the Risks of Artificial Intelligence (McKinsey, 2019)
  12. The Economic Potential of Generative AI: The Next Productivity Frontier (McKinsey, 2023)
  13. Risk, Unexpected Uncertainty, and Estimation Uncertainty: Bayesian Learning in Unstable Settings (Payzan-LeNestour & Bossaerts, 2011)
  14. Negativity Bias, Negativity Dominance, and Contagion (Rozin & Royzman, 2001)
  15. Senator Wiener Introduces Safety Framework in Artificial Intelligence Legislation (Safety in Artificial Intelligence Act, 2023)
  16. Global Supply Chains in a Post-Pandemic World (Shih, 2020)
  17. The Affect Heuristic (Slovic et al., 2006)
  18. The Prisoner’s Dilemma (Stanford Encyclopedia of Philosophy, 2019)
  19. Game Theory (Stanford Encyclopedia of Philosophy, 2023)
  20. Agency Theory and Variable Pay Compensation Strategies (Stroh et al., 1996)
  21. How the Fight or Flight Response Works (The American Institute for Stress, 2019)
  22. Availability: A Heuristic for Judging Frequency and Probability (Tversky & Kahneman, 1973)
  23. AI Technology Application and Employee Responsibility (Wang, Xing & Zhang, 2023)
