In addressing the imperative of collective AI literacy for a prosperous future of work, we focus on the crucial role of leadership in motivating this transition. After highlighting the perils of a workforce only partially literate in AI, we pivot to how leaders can steer away from a risk-centric approach that hampers collective action. Drawing on psychological and economic theories, we argue that risk perception can prompt self-interested behaviors detrimental to collaborative goals. By balancing the risks and benefits of AI and implementing AI literacy incentive structures, leaders can shift the workforce mindset from fear to opportunity, emphasizing human-AI cooperation and human-centric AI principles. This approach, grounded in insights from self-determination theory and behavioral economics, is pivotal to fostering an AI literate workforce, which in turn is essential for innovation, economic growth, and a shared prosperous future.
In the first essay of this series, we demonstrated the importance of cultivating AI literacy at the individual level. However, at the population level, collective AI literacy will play a crucial role in ensuring a prosperous future of work. A world where only a minority of the workforce is AI literate could increase inequality, reduce opportunity, and stifle innovation and productivity. An AI literate workforce would be more adaptable and resilient, continually generating value even as AI becomes progressively more sophisticated, capable, and widespread. As Connor Wright, Partnerships Manager at the Montreal AI Ethics Institute, claims, “We need to see AI as infrastructure,” and we need to embed ourselves within this infrastructure to grow with it, rather than beside it. Simply put, collective AI literacy makes the workforce an integral part of AI infrastructure.
What’s more, collective AI literacy is crucial to shaping a future in which AI benefits all of humanity. A prosperous future of work, while a central dimension of this broader future, won’t guarantee it. Socio-economic inequality may someday be eliminated through mechanisms such as Universal Basic Income and decentralized governance, but this doesn’t imply that people will continue to find meaning and purpose in their lives—this is already difficult for many of us today—nor that bad actors will be prevented from using AI to cause harm. Therefore, leaders must emphasize the benefits of collective AI literacy not just for the future of work, but for the future of humanity—keep this point in mind while reading, since we will return to it at the end of this essay. This is a difficult task, but Paul Marca, VP of Learning and Dean of Ikigai Labs Academy, offers a grain of optimism: “You can create a breadcrumb trail to meaningful education if you can get people involved and engaged.”
In reality, collective action—when a large group of people works together to accomplish a common objective—is notoriously hard to achieve, and it’s unlikely to occur in the absence of strong leadership. Leaders help set and define objectives and incentive structures, profoundly influencing how individuals behave in a group. As a result, leaders tend to possess the most consequential decision-making power. Therefore, when considering what version of the future of work to strive toward, it’s imperative that leaders understand not only the risks AI innovation may pose but also the benefits it may generate.
“The opportunity to learn from different examples, different models, different education disciplines is really important and provides a richness of perspective that will make you better suited to make tough decisions as an executive or leader,” says Marca.
If leaders encourage overly risk-centric attitudes, they may actually cultivate conditions that disincentivize individuals from working toward a shared and economically prosperous future of work. Leaders need to encourage their workforce to “be curious and playful—just ask questions and see how it [AI] can respond—get comfortable exploring,” suggests Stephen Smith, Co-Founder and CEO at SimplyPut AI. A little fear can be healthy, but too much fear can be stifling and counterproductive. Research across game theory and social and evolutionary psychology has demonstrated that when risk is salient, individuals are more likely to act in their self-interest. We explore some examples below:
Humans intuitively focus on risk under conditions of uncertainty.13 It’s therefore unsurprising that the majority of government and corporate initiatives on the future of AI take this angle.3,5,6,11,15 Nonetheless, an overly risk-centric approach amplifies potent biases, including negativity and confirmation bias,14 the affect and availability heuristics,17,22 and loss aversion,9 all of which can cause individuals to overestimate risks and underestimate or overlook benefits, thereby increasing the likelihood of selfish behavior. In the insightful words of Tony Doran, Co-Founder of SimplyPut AI, “If you’re just the idea of yourself and something comes along and threatens it, you’re going to have big problems. But, if you can evolve and change as the environment changes around you, you’re going to be in a much better spot to handle it.”
Leaders therefore have a responsibility to identify and manage AI risks as they emerge, while also continually fostering positive visions of the future of work and AI. In simple terms, leaders need to balance the potential risks of AI-driven impacts on the future of work with the potential benefits and “look externally or orthogonally to different industries in order to gain insights about what’s happening,” suggests Marca. In doing so, they can begin shifting the perspectives of their workers from an “I have everything to lose” to an “I have everything to gain” mindset, or as one F500 healthcare company executive told us, leaders need to adopt “a concerted well-rounded effort to bring people along and help them realize what they’re missing.” For instance, which of these two claims is more persuasive: 1) cultivate AI literacy, because if you don’t, you will have nothing of value to offer in the future, or 2) cultivate AI literacy, because it will allow you to develop new skills, source novel opportunities, and continue to provide value in the future?
One thing is clear: a prosperous future of work, where humans benefit from AI and continue to provide economic value, can’t emerge without at least some degree of collective action. To reiterate, collective action requires group-level cooperation motivated by the achievement of a common goal. However, cooperation is fragile, so we need people to guide us and keep us in check. In this age of exponential innovation, our leaders have never been more critical.
Mainstream discussions of the impacts of AI innovation on the future of work tend to adopt a “doom and gloom” perspective. To cut through some of this negative hype, we illustrate a series of realistic outlooks on the future of work and AI that offer an optimistic perspective. We hope that by illustrating these possible outcomes, all of which will require some degree of collective AI literacy, leaders will realize the value of framing AI literacy initiatives in terms of the benefits they inspire rather than the risks they mitigate. In essence, leaders should be asking themselves and their workforce, “What kind of augmentative tool can AI be to what I’m doing, and how can I use it to augment what I’m doing to give me more time to do something else?” as Wright puts it.
Many complex socio-economic, political, and environmental factors will impact whether or not these scenarios come to fruition. However, if a population possesses at least some degree of collective AI literacy, it will be better equipped to enact and capitalize on such scenarios when opportunities to do so emerge.
The possession of AI-specific skill sets, a current understanding of the limits and capabilities of AI systems, and an adaptable mindset will be critical in shaping and executing positive outcomes as work evolves, especially when considering that, as indicated by one F500 healthcare company executive, “The nature of job functions will change because AI will augment things.” Although AI literacy may appear to be obviously essential to the future of work, most people will nonetheless require guidance and motivation—humans are expert procrastinators.2 Consequently, we must ask ourselves, what can leaders do to motivate collective AI literacy?
Even when people recognize that it’s in their best interest to take action, they often lack the motivation to do so. The best course of action may be uncertain or require substantial effort, the amount of available information may be overwhelming, people may believe things will simply “work themselves out,” or, conversely, a fear of making mistakes may be paralyzing. The process of cultivating AI literacy is vulnerable to all of these obstacles, and even though AI literate individuals will possess a competitive edge in the future of work, most workers will require intrinsic motivation—motivation to do something for its own sake, rather than for instrumental reasons—on their journey to AI literacy.
“Once you have established a strategy, you need to provide education so that people can leverage these tools to get their work done—and then think differently about the roles they’re engaging in. The question is, do you have a workforce that’s flexible and adaptable, and what do you do with those people who are maybe not as flexible and adaptable?” inquires Marca.
Leaders are responsible for the well-being of those they lead, but they typically don’t have the time or resources required for one-on-one leadership guidance. If leaders want to cultivate a resilient and adaptable workforce in the age of AI, they’ll need to develop mechanisms by which to intrinsically motivate workers to cultivate AI literacy, and as Marca claims, “We need to educate those who are in the workforce to become resilient in the age of AI both in terms of their job as well as in terms of the company opportunity.” Intrinsic motivation is difficult to instill; however, leaders have a variety of high-utility psychological tools at their disposal in this respect.
One such tool is self-determination theory,7 which posits that for people to be fulfilled by the work they do, three core psychological needs must be satisfied: the need for competence, autonomy, and relatedness (i.e., meaningful connections with others). Therefore, leaders can frame discussions and initiatives around AI literacy to directly address these core needs. When individuals experience self-determined motivation they display higher productivity, organizational commitment, and job satisfaction.7
Moreover, the distribution of future skill sets will likely skew in favor of high-level cognitive and emotional skills. These skills are more likely to be developed and sustained when driven by self-determined motivation, and they also increase an individual’s ability to cope with uncertainty. The question leaders should therefore be asking is: how will the future of work fulfill, or fail to fulfill, the psychological needs proposed by self-determination theory?
Leaders can draw additional insights from behavioral and agency theory. Behavioral theory10 emphasizes that talented workers are critical in driving enterprise development. However, as many leaders know, identifying and cultivating the right talent can be genuinely difficult, especially when employees lack the intrinsic motivation to unlock their potential. To this point, behavioral theory suggests that leaders should adopt a “people-oriented” approach, ensuring a strong relationship between their organization and its employees through clear and consistent internal communication and initiatives that involve employees in management procedures. Relating this back to self-determination theory, this approach enhances leaders’ ability to satisfy workers’ needs for competence, autonomy, and relatedness, resulting in improved intrinsic motivation and worker adaptability.
Agency theory,8 on the other hand, addresses the principal-agent problem—goal misalignment between the principal (a manager or leader) and the agent (an employee). For example, a leader may want to maximize employee output and productivity, while an employee might prefer to do as little work as possible for the same salary. These problems typically arise from information asymmetries or inadequate managerial oversight, and they can lead employees to pursue opportunistic behaviors that ultimately harm or undermine the organization in question.
However, eliminating information asymmetries between leaders and employees may not always be feasible or realistic, especially in large organizations, and increasing managerial oversight could negatively affect employees’ feelings of competence and autonomy. Consequently, a different approach might be more useful, namely one where leaders develop incentive structures that motivate workers to align their goals with the goals of the organization.20
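To make the incentive-alignment idea concrete, here is a minimal toy model of the principal-agent problem, sketched in Python. All of the numbers below (effort levels, costs, wages, bonus shares) are hypothetical assumptions chosen purely for illustration; they are not parameters drawn from the agency theory literature:

```python
# A toy model of the principal-agent problem. All values are hypothetical
# assumptions for illustration, not figures from agency theory research.

EFFORT_LEVELS = [0, 1, 2, 3]             # hypothetical units of effort
EFFORT_COST = {0: 0, 1: 2, 2: 5, 3: 9}   # agent's personal cost of each level

def output(effort: int) -> float:
    """Value the organization captures from the agent's effort."""
    return 10.0 * effort

def agent_utility(effort: int, flat_wage: float, bonus_share: float) -> float:
    """Agent's payoff: flat wage, plus a share of output, minus effort cost."""
    return flat_wage + bonus_share * output(effort) - EFFORT_COST[effort]

def best_effort(flat_wage: float, bonus_share: float) -> int:
    """Effort level a purely self-interested agent would choose."""
    return max(EFFORT_LEVELS, key=lambda e: agent_utility(e, flat_wage, bonus_share))

# Flat salary only: pay doesn't depend on effort, so minimal effort
# maximizes the agent's utility -- the goal misalignment described above.
print(best_effort(flat_wage=20, bonus_share=0.0))  # -> 0

# Incentive structure: tying part of pay to output makes higher effort
# the agent's self-interested optimum, aligning agent and principal.
print(best_effort(flat_wage=10, bonus_share=0.5))  # -> 3
```

Read through the lens of AI literacy: if part of an employee’s reward (a bonus, promotion criteria, protected learning time) is tied to demonstrable AI skills, cultivating those skills becomes the self-interested choice rather than an act of goodwill.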
What do these theories tell us about how leaders should motivate collective AI literacy within their organizations?
While the previously discussed shifts in mindset can motivate leaders to prioritize collective AI literacy within their organization, achieving it will depend on whether concrete incentives are in place that make the benefits of AI literacy tangible for employees. Importantly, leaders should think critically about how they apply these incentives, since their effectiveness will vary on a case-by-case basis.
“The real key isn’t only to be able to personally use AI, but to trust it to be released to a broader organization and believe that they’re going to use it in the correct way—that’s what we’re doing with data at SimplyPut,” says Smith. Fellow Co-Founder Doran adds, “Trust for us isn’t as simple as ‘the AI did it’, it’s also social. So we show that the human that gave that example for the AI to use is translated all the way through to the end user.”
Below we list several AI literacy incentives for leaders to consider:
Leaders may not need to implement every single one of these incentives to run successful AI literacy campaigns internally. However, given how much motivation varies from one individual to the next, simply articulating the benefits of AI literacy is unlikely to ensure a sufficiently high degree of collective engagement. Therefore, leaders need to think carefully about how they incentivize AI literacy among their workforce, and should continually investigate both financial and behavioral incentive structures.
Collective AI literacy is crucial to ensuring a future in which AI benefits all of humanity. To understand why, consider the idea of exponential learning. Imagine two people, person A and person B, both of whom have never used a phone before. Person A is given a flip phone and person B receives a smartphone. Both are allowed to practice using the phones for as much time as they need to feel comfortable. Once they’re done practicing, they’re each given a modern-day laptop, which, like the phone, neither of them has ever used. They’re not allowed to practice with the laptop, and they must complete a series of tasks, such as navigating the internet and signing up for a social media account, within a two-hour timeframe. Of these two individuals, which one will be more likely to complete the tasks given to them?
If you answered that person B would be more likely to complete the allotted tasks, you’d be right. But why? The modern-day smartphone and laptop share many functional features, so many of the skills required to operate a smartphone transfer to operating a laptop. This means that person B starts much further along the learning curve than person A and, more importantly, can build new skills on top of transferable ones, accelerating their learning. In essence, person B might start with a 10x learning advantage over person A, but this advantage will rapidly increase as person B builds on the skills they’ve acquired while person A is stuck learning the fundamentals. From this point forward, it won’t take person B much time to reach a 100x or even 1000x learning advantage over person A. In other words, person A may never be able to catch up to person B.
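To see the arithmetic behind this intuition, here is a minimal sketch of the compounding dynamic in Python. The head start and per-period growth rates are hypothetical assumptions chosen to illustrate the thought experiment, not empirical estimates of learning speed:

```python
# A minimal sketch of compounding learning advantage. The 10x head start
# and growth rates are illustrative assumptions, not measured quantities.

def skill_after(periods: int, initial_skill: float, growth_rate: float) -> float:
    """Skill compounds each period: existing skill accelerates new learning."""
    skill = initial_skill
    for _ in range(periods):
        skill *= 1 + growth_rate
    return skill

# Person B starts 10x ahead AND compounds faster, because more of their
# existing skills transfer to each new tool they encounter.
person_a = skill_after(periods=10, initial_skill=1.0, growth_rate=0.2)
person_b = skill_after(periods=10, initial_skill=10.0, growth_rate=0.5)

print(f"Person A: {person_a:.1f}")                   # ~6.2
print(f"Person B: {person_b:.1f}")                   # ~576.7
print(f"B's advantage: {person_b / person_a:.0f}x")  # ~93x, up from 10x
```

Extend `periods` to 20 and the gap passes 800x: the advantage grows without bound because the faster learner compounds on an ever-larger base, which is the sense in which person A may never catch up.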
The simple thought experiment above highlights the following idea: if an individual begins cultivating AI literacy now, especially while the most advanced AI systems are still in the early stages of development and deployment, they’ll be exponentially more capable of reaping the benefits and opportunities that these systems provide as they become more powerful. Moreover, given that AI is itself an exponential technology, exponential learning will be required to keep up with AI innovations.
However, while individual AI literacy is valuable, it doesn’t guarantee that humanity will collectively benefit from AI. If only a minority of the population is AI literate, this increases the possibility of a deeply unequal future, namely one in which power is concentrated among a small fraction of AI literate individuals. If, on the other hand, a majority of the population is AI literate, the benefits generated by future AI systems will have to be shared and more evenly distributed, and, hopefully, future AI innovations will be intentionally designed to benefit the collective good. In essence, a collectively AI literate population will have the decision-making power required to influence the course of AI innovation for the benefit of humanity.
To drive this point home, we illustrate what we think is a plausible vision of the future:
A decade from now, AI R&D breakthroughs may enable the creation of AI models that are orders of magnitude more sophisticated and capable than current AI systems. Such systems might have an unprecedented degree of emotional and cultural intelligence, the ability to solve complex problems in novel or changing environments, and creative capabilities that equal those of humans, in addition to dramatically enhanced abilities for generalization, personalization, adaptation, common sense reasoning, and the design, development, and execution of long-term plans, scientific experiments, and objectives.
These future AIs might not only drive innovation but, in fact, become an integral part of it, streamlining the development and discovery of fundamentally revolutionary technologies at a pace, scale, and intensity that humanity has never before experienced. Such systems may not have the capacity for subjective experience, agency, or self-awareness, although they will likely match or exceed human intelligence in many more contexts than they do today.
If only a minority of the population is AI literate leading up to this point, they’ll be able to understand and leverage AI in ways that no one else can, such as for eugenics—and few will have the skills or knowledge necessary to stop them. For example, future AI systems may be able to optimize gene editing, enhance embryo selection through genetic analysis, and provide personalized gene therapy, which could allow individuals to genetically engineer the traits of their offspring to create a generation of humans that are impervious to most diseases, have longer lifespans, and are far stronger and more intelligent than today’s humans. To go one step further, such future humans may use their skills to build even more sophisticated innovations, discovering ways to integrate technology with the human body and mind, and developing new tools only they can use.
As time goes on, these “humans” will become further and further removed from what it means to be human. Moreover, since such individuals would be by far the most capable members of society, they would quickly ascend the ranks and eventually hold all decision-making power. If they can no longer grasp or resonate with what it means to be human for the rest of us, how can we guarantee that they’ll make decisions that continue to benefit us? History and evolutionary biology have shown that as organisms become more powerful, they also become more dominant, exploiting their environments and sometimes even reshaping them for their own needs, just as humans have done through modern infrastructure and industrialization.
On the other hand, if a significant majority of the population is AI literate, most people will be able to understand how to leverage AI. This means they’ll be able to both utilize existing AI capabilities and identify potential use cases. Though opportunists will always be present and consistently attempt to further their well-being at the cost of others—such as by leveraging AI for eugenics—doing so behind closed doors will be much more difficult. In other words, everyone will know that AI can be used for eugenics, and although some won’t have a problem with this, most will find it morally distasteful if not downright unacceptable. However, while AI-driven eugenics is certainly a bad thing, specific practices, such as genetically engineering embryos for immunity to diseases that cause immense suffering at a global scale (e.g., malaria or cancer), might be the next step in ensuring a prosperous future for all of humanity. Conversely, people might simply decide that any use of AI for eugenics should be prohibited.
We simply don’t know what long-term decisions corporations and governments will make regarding how AI is used and implemented as it becomes more powerful, especially when such decisions are confidential. However, we do know that knowledge is power—if the majority of people are AI literate, even if they’re not certain about how their governments and corporations plan to use AI, they will be able to more accurately anticipate the risks and benefits that AI may inspire across known and unknown use cases. By leveraging a fundamental understanding of how and where AI can be used, people could put pressure on their governments and corporations to align their AI development and integration strategies with objectives that benefit the whole of society. In essence, AI literate individuals will be able to vote, in an informed manner, on AI regulations, understand whether AI is developed and used responsibly or irresponsibly, identify novel AI use cases, and ensure that AI innovation progresses safely and ethically.
Finally, you may be asking yourself, how could a collectively AI literate population accurately anticipate how individual malicious actors may leverage AI for purposes that undermine collective well-being? Consider the following comparison: the predictive capabilities of today’s AI systems can be improved by enhancing the quality and quantity of the data on which they’re trained. The human mind is similar, except that much of the data we’re trained on comes in the form of collective knowledge that’s passed down through generations. This underlies the universal human ability for common sense reasoning and intuition—we don’t need to understand gravity to know that a ball will always fall downwards, just as we know that we can’t jump from a second-story window without incurring a serious injury.
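For technically inclined readers, the data-scaling claim above can be illustrated with a minimal sketch. The model, data, and numbers below are entirely synthetic and illustrative; they are not tied to any particular AI system:

```python
# Illustration: a model's predictive error typically falls as the quantity
# of (reasonably clean) training data grows. Synthetic data, simple
# least-squares fit; for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def fit_and_test(n_train: int) -> float:
    """Fit y = a*x + b on n_train noisy samples; return mean test error."""
    x = rng.uniform(0, 10, n_train)
    y = 3.0 * x + 2.0 + rng.normal(0, 2.0, n_train)  # true line plus noise
    a, b = np.polyfit(x, y, deg=1)                   # least-squares fit

    x_test = rng.uniform(0, 10, 1000)
    y_test = 3.0 * x_test + 2.0
    return float(np.mean(np.abs((a * x_test + b) - y_test)))

for n in (10, 100, 1000, 10000):
    print(f"{n:>6} training samples -> mean error {fit_and_test(n):.3f}")
# Error shrinks as training data grows (roughly in proportion to 1/sqrt(n)).
```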
Collective AI literacy will lay the foundation for developing a population-level common sense understanding and intuition of AI. Nonetheless, intuition and common sense reasoning are sometimes unreliable, leading people toward inadvertently harmful behaviors, like drinking too much during stressful times or believing information simply because it comes from someone they trust. Still, while our common sense and intuition regarding AI may not always be correct, it will always be shared among us. For instance, you may not cope with stress by drinking, but you know that others will. In the same vein, you may find it morally unacceptable to use AI for eugenics, but you know that others won’t. Collective AI literacy will enable a shared understanding of how AI might be used and implemented, especially as the technology continues to advance, making it easier to identify potential benefits and risks as they emerge.
AI’s application in the workplace can give rise to negative social and political consequences, such as job losses among low- and middle-skilled workers, widened income gaps, and legal challenges like data privacy violations and algorithmic discrimination. Because employees are key assets in any organization, they must be treated well, which means providing them with the necessary training and resources—AI literacy capabilities—to continue providing value as AI innovations make the future of work more uncertain. For the future of work to provide shared economic prosperity, leaders will need to motivate their workforces to develop AI literacy.
The path to collective AI literacy won’t be easy; however, the interventions suggested in this essay target a critical aspect of corporate social responsibility frameworks, namely, employee responsibility. Employee responsibility entails treating workers well through the provision of both financial and non-financial rewards, an inclusive workplace, and novel opportunities for learning and self-improvement. While AI integration does increase companies’ production efficiency, only improvements in employee responsibility have been shown to increase innovation output and efficiency.23
How AI literacy initiatives are framed will significantly impact employees’ motivation to become AI literate. To realize the value of AI literacy at the organizational level, leaders must recognize the importance of placing their employees at the heart of their AI literacy initiatives. The clearer the benefits of AI literacy are to employees, the more likely it is that they’ll be motivated to develop it. “By keeping an open mind and exploring what’s out there you’re able to cut through that hype and to cut through what is genuine gold within the AI economy,” remarks Wright.
Consequently, organizations with high degrees of collective AI literacy will improve employee responsibility, increasing the probability of sustained innovation and profit, even as AI continues to disrupt and transform the work landscape.
Before concluding, let’s play devil’s advocate for a moment. Some leaders may be asking themselves, “Why should I contribute to a future of work that provides shared economic prosperity when doing so might compromise the competitive edge that my company has?”
In short, a future of work that provides shared economic prosperity is the only version of a future where your company is guaranteed to continue profiting and innovating (insofar as it’s aligned with relevant norms and regulations). An economically unequal future of work would enable a select few organizations to reap enormous profits, while others are quickly outcompeted. The former version supports the development of a robust, adaptable, and resilient workforce that possesses elevated skill sets and continues to generate value and novel opportunities for innovation, thereby providing companies with a large talent pool to draw from when building their competitive edge. The latter version would likely result in a workforce where only the minority possesses the necessary talent required for continued innovation, making talent much more expensive and difficult to identify.
All in all, if leaders begin cultivating collective AI literacy now, they’re implicitly future-proofing their organizations. But, it doesn’t stop there—individuals, organizations, and society at large could all benefit substantially from collective AI literacy. To conclude in Wright’s words, “It’s not just scientists and biologists getting excited about neural networks and AI, it’s the general population of a whole country.”
*note: references are ordered alphabetically by author name, with links provided where appropriate.