The more AI literate individuals are, the more valuable they’ll be as the future of work evolves. AI literate professionals can identify, create, and capitalize on AI-driven opportunities. Moreover, adopting an “AI as a tool” mindset allows individuals to focus on expanding the utility of the collaborative human-AI relationship, fostering continuous growth and learning, resulting in a wider opportunity landscape. Additionally, we argue that sourcing opportunities depends on the quality of the questions we ask, and in this context, generative AI can be a powerful tool, namely in terms of question optimization. Following this, we present a series of actionable steps individuals can take to cultivate AI literacy, concluding by exploring how AI can act as a catalyst that enhances human thought and understanding.
In the first essay of this series, we demonstrated the importance of individual AI literacy and presented evidence in favor of three positive trends concerning AI and the future of work:
In our second essay, we went one step further, emphasizing the role of collective AI literacy in shaping a future that promotes shared economic prosperity. Current trends may indicate a positive outlook for the future of work and AI, but only those who are AI literate can capitalize on the opportunities that these trends produce.
To this point, one F500 healthcare company executive we interviewed suggests that “it’s almost like an absorption into your culture where you let AI augment based on what it does for you.” But, to understand what AI can do for you, you need to become AI literate; otherwise, you may find yourself stumbling around in the dark, pursuing opportunities that appear attractive but are ultimately futile.
On the other hand, as we consider how AI literacy may fuel opportunities in the future of work, we must remind ourselves that AI, like all other technologies created by humans, is a tool, and as Paul Marca, VP of Learning and Dean of Ikigai Labs Academy succinctly puts it, “The tool is only going to be as good as the person behind it.” Though this may appear obvious, it sheds light on four important characteristics that are universal to any tool:
Viewing AI through this lens strengthens the case for AI literacy. Concerning the first two characteristics, AI literate professionals will be able to optimize their use of AI systems, maximizing the utility they gain from them by 1) maintaining a continual learning mindset that allows them to adapt and improve their AI skills, and 2) identifying which models or applications are best suited to certain contexts, tasks, or use cases. In terms of the latter two characteristics, AI literate individuals have an increased ability to 1) identify novel contexts in which certain AI models or tools can provide value, and 2) suggest ways to expand the AI capabilities repertoire that allow for more streamlined and targeted task execution.
Moreover, recall two points made in the second essay in this series. First, AI literacy enables exponential learning, fostering more streamlined adaptation to AI innovations. For instance, let’s assume AI-literate individuals are currently 10x more productive than their non-AI-literate counterparts. As AI innovation progresses, these individuals will be better equipped to build on their existing AI skills and knowledge base, accelerating their acquisition of new skills and knowledge—AI literacy can promote a positive feedback loop. In other words, it may not take long for those who are AI literate to reach a point at which they’re 100x or even 1000x more productive than their non-AI-literate counterparts. “AI is a language, and it’s a language that we’ve got to master—to be an employee in the age of AI, you’ve got to be literate, and it’s sort of on you,” says Marca.
Second, we hope that as the proportion of AI literate people increases—cultivating a higher degree of collective AI literacy—novel AI innovations will be intentionally designed to address the collective well-being of humanity. In other words, a collectively AI literate population will have more power to influence the course of responsible AI innovation, resulting in AI that we can trust. Developing collective AI literacy is a crucial part of, “the game of setting the right context so that the right trust is made between the human and the tool [AI],” as Tony Doran, Co-Founder of SimplyPut AI, advises.
Now, reconsider the three previously mentioned positive trends for the future of work. For these trends to become a reality, professionals will require the ability to identify and create AI-driven opportunities for themselves, and in doing so, ensure that they continue to provide value in an employment landscape where work requirements are changing rapidly. Moreover, the “AI as a tool” mindset highlights the utility of the collaborative human-AI relationship—people who learn how to leverage AI now will develop a competitive edge over those who don’t. In the near term, which seems more likely: that many of us will be replaced by AI, or that we’ll be replaced by humans who’ve learned to optimize their use of AI?
Opportunities don’t present themselves equally to everyone—the majority of people need to source their opportunities. However, the process of discovering opportunities can be extremely difficult, even when they’re plentiful. People may not know where to look, the amount of available information might be overwhelming, and it may be difficult to distinguish actionable from prospective opportunities. In essence, discovering opportunity requires the ability to ask the right kinds of questions.
Historically, the success of this inquisitive process depended upon one’s ability to identify and leverage connections within their professional network, conduct industry and job-related research, figure out which skill sets will continue to provide value, and remain up-to-date on the most recent innovations and trends. Before the advent of advanced generative AI (GenAI), particularly multi-modal Large Language Models (LLMs), the burden of this time-consuming and labor-intensive process would have fallen on the individual.
“If you define AI in terms of doing something that was originally just reserved for humans, then a calculator is a form of AI. So, I don’t think many people realized what AI was until they started using ChatGPT,” remarks Connor Wright, Partnerships Manager at the Montreal AI Ethics Institute.
Now, however, regular people have the ground-breaking ability to leverage the vast stores of human knowledge on which LLMs are trained to inform how best to go about building their future career trajectories. As one company claims, “GenAI has made data conversational, and as such, an active participant in the decision-making process.”10
GenAI can dramatically improve the novelty, diversity, and relevance of the questions we ask.5 However, as Wright cautions, “The inception of the idea and the creativity is always with the human—ChatGPT always has to be prompted to give you something.” We know GenAI is a powerful tool, but harnessing its potential depends on how well we use it. Fortunately, we have experience in this domain, hence the insights and recommendations illustrated below:
GenAI will continue to enhance the way that people formulate and ask questions. However, those who wish to leverage this capability to anticipate how they could benefit from novel and existing opportunities as the future of work evolves will require a basic level of AI literacy—this includes some prompt engineering skills and a general awareness of the current limits and capabilities of the most advanced AI models. And, to this latter point, Marca questions, “How do you establish a baseline around what AI can do?” Our answer is simple: understanding what AI can do for you, especially in consequential intellectual pursuits like asking better questions, depends on how AI literate you are.
“People will have 4, 5, 6 jobs in their careers—that’s one every 5 to 7 years, depending on how long you’re in the workforce. And, that requires a commitment to learning so that you can pivot and move and grow—LLMs help me grow faster and adapt more quickly,” advocates Marca.
However, AI literacy isn’t only about cultivating the ability to leverage AI to ask better questions. At a much deeper level, it’s about creating a knowledge base that empowers individuals to find and execute their purpose as the future of work becomes progressively more uncertain. There’s no limit to AI literacy, but there are concrete steps that individuals can take to develop it.
The core characteristics of an AI literate professional can be roughly summarized as follows (for a more detailed summary of these characteristics, see here):
We can also view these core characteristics as distinct objectives for anyone wishing to cultivate AI literacy, breaking them down in terms of the steps required to achieve them.
Objective 1: A basic functional understanding of AI. This objective can be broken down into two parts: theory and practice. Theory concerns the ability to differentiate between different kinds of AI systems, whereas practice concerns the ability to recognize where such systems can be applied for real-world problem-solving.
Objective 2: The possession of AI-specific skills. This objective can also be broken down into two parts: 1) rudimentary AI skills, and 2) advanced AI skills. Rudimentary AI skills concern the ability to independently leverage the basic capabilities of existing AI systems (e.g., text summary, document analysis, research synthesis, etc.). Advanced AI skills concern the ability to leverage AI capabilities in novel, creative, or complex contexts (e.g., synthesis of novel ideas, creative ideation, development of business objectives and strategies, etc.). For more information on these skills, see here.
Objective 3: An up-to-date knowledge of AI applications and use cases. Of all the objectives we discussed, this is the easiest one to achieve. For people to maintain current knowledge of AI applications and use cases, they must consistently track the most recent innovations and developments throughout the AI landscape. To do so, they can explore a variety of mediums, including popular tech media outlets, podcasts, blog posts, and newsletters from reputable research institutes like the National Science Foundation or The Future of Life Institute, as well as online community platforms dedicated to AI literacy, like COAI. For additional information, see here.
Objective 4: An adaptable mindset via continuous learning and education. The ability to reach this objective is predicated on two things: 1) the ability to reach the three prior objectives we discussed, and 2) the ability to accept change. Though humans are highly adaptable creatures, they often resist change.7 Below, we identify some of the main psychological factors and mechanisms that commonly give rise to change resistance:
The process of cultivating AI literacy will be continual and iterative. Though it may be difficult to initiate and maintain, we urge individuals to stick with it. AI literacy will directly correspond with the ability to leverage AI systems to capitalize on and create opportunities as the future of work continues to evolve. The more AI-literate individuals are, the more valuable they’ll be down the line.
In his seminal work, What Is Called Thinking?, the 20th-century German philosopher Martin Heidegger explores the nature of thought and understanding.6 In doing so, he illustrates a series of concepts relevant to our current discussion. By elaborating on these concepts, we highlight the catalytic effects of GenAI on human thought and understanding, namely in terms of how GenAI may be leveraged to tame the ego and ensure a continuous learning mindset. In other words, “This technology [AI] is so powerful that people who are stuck with an idea of themselves that is static will be threatened—the learning mindset is a way to get outside of this threat,” claims Doran.
Moreover, while human-AI interaction may sometimes be eerily similar to human-human interaction, an AI doesn’t judge a human in the same way that other humans do. Despite how smart AI might be, it doesn’t ridicule, gossip, or make fun of us for asking “dumb” questions or coming up with weird ideas—when we interact with AI, our ego isn’t threatened. Wright echoes these words using a more cautionary tone, “One thing that I fear when it comes to these systems is that they can lead us to overestimate how human the system is—if I was telling somebody who’s an AI novice to look out for one thing at the moment, it would be anthropomorphism.”
Before we dive in, however, we want to stress that ego isn’t an inherently bad thing—without it, no great scientist, explorer, thinker, or revolutionary would’ve ever had the courage and ambition to pursue their goals. However, ego can impair people’s decision-making by preventing them from exploring alternative sources of information, listening to others, and acknowledging their weaknesses. We need to “understand that if a statement is untrue, we can explore it—you are different from the idea that you actually have,” as Doran counsels. Therefore, in this context, GenAI is best leveraged to keep the human ego in check, not to dismantle or eliminate it.
Language as a critical feature of thought: Heidegger suggests that the language we use is integral to the way that we think. Language provides an opportunity for communication and the ability to engage with the world. In essence, language allows us to understand not only our existence, but also others—by “others” we don’t only mean other humans, but everything that comprises our physical environment, knowledge, and experiences. In simple terms, language is the tool that humans use to frame how they think about anything and everything, even in cases where it might be insufficient. Therefore, human language often reflects individual lived experiences, especially in terms of how individuals perceive themselves throughout their lives—language can be viewed as a reflection of ego.
Fortunately, GenAI is perfectly suited to combating this problem. There are four basic meta-prompts users can employ to better balance the language of their prompts: 1) run a sentiment analysis on the prompt provided and de-bias it accordingly; 2) identify any language in the prompt, at the level of individual words or whole sentences, that carries an emotional valence; 3) if any material in the prompt takes the form of an opinion or belief, provide a counterargument for each one; and 4) rewrite the prompt from the perspective of the previously identified counterarguments.
These prompts are rudimentary, and they can be tailored and enhanced with more concrete details and parameters. However, they will help users become more aware of the language they use and, subsequently, of how they think about the world in relation to themselves and others.
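For readers who prefer a concrete starting point, the four meta-prompts above can be kept as reusable templates. The sketch below is our own illustration—`balance_prompts` and the template wording are assumptions, not the API of any particular product—showing how a draft prompt could be wrapped in each of the four language-balancing requests:

```python
# Sketch: wrap a user's draft prompt in the four language-balancing
# meta-prompts described above. Function name and template wording are
# illustrative assumptions, not a specific product's API.

BALANCE_TEMPLATES = [
    "Run a sentiment analysis on the following prompt and de-bias it "
    "accordingly:\n\n{draft}",
    "Identify any language in the following prompt, at the level of "
    "individual words or whole sentences, that carries an emotional "
    "valence:\n\n{draft}",
    "If any material in the following prompt takes the form of an opinion "
    "or belief, provide a counterargument for each one:\n\n{draft}",
    "Rewrite the following prompt from the perspective of the "
    "counterarguments identified in the previous step:\n\n{draft}",
]

def balance_prompts(draft: str) -> list[str]:
    """Return the four meta-prompts, in order, ready to send to any chat model."""
    return [t.format(draft=draft) for t in BALANCE_TEMPLATES]
```

Sending these to a chat model in sequence—and reading the responses before revising the draft—is one lightweight way to surface the emotional and ego-laden language a prompt carries.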
The limitations of rationality: Rational thinking is the ability to reason logically. Heidegger is critical of the modern world’s emphasis on rational thought, believing that it precludes us from understanding the more profound characteristics of our humanity and existence. For instance, the commonly invoked phrase “There’s no such thing as a dumb question” encapsulates this point—most of us are hesitant to ask questions that appear irrational or nonsensical because we’re afraid of being wrong or of how others will perceive us (our ego is threatened). In hesitating, we overlook the fact that humans are both cognitively and emotionally intelligent—some of the best questions are motivated by intuitive emotional responses to certain experiences or stimuli.
While GenAI systems don’t themselves “think,” they can inspire users to explore different approaches to their thought processes, as well as perspectives on their understanding of a given issue or problem that they typically would not have arrived at through logical deliberation.
Fortunately, AI systems don’t judge us for the questions we ask—no matter how ridiculous they seem—so they objectively pose no threat to our ego. If we recognize this fact, we can free ourselves up to push the boundaries of rational thinking without fear of ridicule. As Marca exclaims, “What if you could digest, using LLMs, all of Abraham Lincoln’s work and animate his face to create a conversation with a smart chatbot? Suddenly, Abraham Lincoln is talking back to you!”
Let’s also consider a more concrete example: one can logically deliberate about the evolutionary origin and utility of love in terms of social bonding and reciprocity. For instance, one could prompt an AI to “Provide an argument driven by evolutionary theory that explores the utility of love in terms of social bonding and reciprocity.” Robust empirical evidence, even if it doesn’t always point in the same direction or is open to interpretation, makes it possible to answer this question concretely. However, some questions, such as “Are dreams a gateway into another dimension?” lack concrete answers—most of us don’t take such questions seriously because they’re regarded as irrational and unscientific despite being highly interesting.
AI doesn’t care what questions we ask and, by design, must always provide an answer. However, for AI to generate answers that push the boundaries of rational thought, the prompt above could be adjusted to the following: “Assume that we know dreams are a gateway to another dimension, but we don’t know what principles enable this phenomenon. If you had to make an educated guess as to what these principles might be, what would they be?” In simple terms, leveraging AI to help us push the boundaries of rational thought requires that 1) we prompt it to accept an irrational idea as reality, and 2) we prompt it to consider the necessary conditions that would be required to justify this idea. Obviously, there are many ways to enhance this process through more sophisticated prompting, but this technique offers a solid starting point.
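The two-step technique—accept the premise, then ask for the conditions that would justify it—can be captured as a simple template. The helper below is a hypothetical sketch (the function name and wording are ours), generalizing the dream example to any premise:

```python
def boundary_pushing_prompt(premise: str) -> str:
    """Compose a prompt that 1) asks the model to accept an 'irrational'
    idea as reality, and 2) asks it to guess at the principles that
    would be required to justify that idea."""
    return (
        f"Assume that we know the following is true: {premise} "
        "We don't know what principles enable this phenomenon. "
        "If you had to make an educated guess as to what these "
        "principles might be, what would they be?"
    )

# Example: the dream question from above.
prompt = boundary_pushing_prompt("dreams are a gateway to another dimension.")
```

Any premise can be substituted; the fixed framing does the work of suspending the model's default tendency to dismiss the question as unanswerable.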
The unthought: Heidegger believes that one of the most critical characteristics of thinking concerns the exploration of the “unthought,” namely, those ideas, concepts, or notions that aren’t fully understood or explored. The “unthought” can take the form of irrational ideas, such as the ones mentioned above, and novel or non-paradigmatic rational ideas, such as Evolutionary Theory, which although widely accepted now, was considered radical when it emerged. Importantly, the unthought also concerns the ability to establish connections between disparate ideas and concepts, especially those that appear unrelated. However, most people are reluctant to explore the unthought because it either requires too much work or makes us vulnerable to criticism from others, thereby threatening the ego.
When leveraging GenAI to explore the unthought, both the prompting technique mentioned above and the earlier recommendations for improving the quality of the questions we ask can prove useful. These techniques are well-suited to helping us form new and interesting ideas, but they may be limited in their ability to help us materialize connections between disparate concepts.
Thankfully, there is a way to prompt AI for this purpose, but it requires a few components: 1) describe the ideas you have in as much detail as possible, 2) describe, as best as you can, what you think the possible connections are between these ideas, 3) identify and rank which of these connections you think are most plausible and least plausible, 4) explicitly state that you would like to identify alternative connections beyond the ones you have provided as well as feedback on the feasibility of the ones you have provided, and 5) request that the output includes all sources used to generate the answer, to ensure that further research is possible. Each of these components can be structured as individual prompts that are iteratively refined through human-AI interaction, or, if the user is confident in their prompting skills, combined into one cohesive prompt.
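When the user opts for the single cohesive prompt, the five components above have a natural order. The sketch below (our own illustration; `unthought_prompt` and its section wording are assumptions) assembles them into one prompt string:

```python
def unthought_prompt(ideas: list[str], links: list[str],
                     ranking: list[str]) -> str:
    """Assemble the five components described above into one cohesive
    prompt: detailed ideas, guessed connections, a plausibility ranking,
    a request for alternatives plus feedback, and a request for sources."""
    sections = [
        "Here are my ideas, described in as much detail as I can:",
        *(f"- {idea}" for idea in ideas),
        "Here are the connections I think might exist between them:",
        *(f"- {link}" for link in links),
        "Ranked from most to least plausible, those connections are:",
        *(f"{n}. {link}" for n, link in enumerate(ranking, start=1)),
        "Please identify alternative connections beyond the ones I've "
        "provided, and give feedback on the feasibility of mine.",
        "Include all sources used to generate your answer so that "
        "further research is possible.",
    ]
    return "\n".join(sections)
```

Each section could equally be sent as its own iteratively refined prompt; the single-string version simply trades interactivity for convenience.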
The historical nature of thinking: Heidegger suggests that the way we think is influenced by those who preceded us. He emphasizes the role that culture plays in shaping our thought processes and the idea that thinking involves a continual dialogue with the past. Through this dialogue with the past, we identify ideas and thinkers that we resonate with. However, disentangling ourselves from these ideas and thinkers, especially when they’re wrong about something, can be genuinely difficult. In other words, the historical nature of thinking makes it difficult for people to accept when they’re wrong—acknowledging one’s mistakes requires the taming of one’s ego and the biases that come with it.
There are two specific approaches for leveraging AI in this context: 1) input an argument that you feel is strong and prompt AI to de-bias it, and 2) using the same argument, prompt AI to generate counterarguments that aggressively dismantle every claim that you make using established facts and expert perspectives.
Based on the output you receive for the second approach, you can identify the strengths and weaknesses of your argument by reference to the strengths and weaknesses of the counterarguments provided. Understanding how strong the counterarguments are may require some human judgment; however, the exact process mentioned above can be repeated with AI-generated counterarguments as individual or chained prompts. In other words, you can ask AI to come up with counterarguments to the counterarguments it generated—in theory, if your initial arguments were strong, the second round of AI-generated counterarguments should mirror your initial arguments. If it does not, it doesn’t necessarily mean that your initial arguments were weak, though it could indicate they failed to consider important characteristics, facts, or perspectives.
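The counterargument-of-counterargument loop is straightforward to mechanize. The sketch below is illustrative, not a prescribed implementation: `counterargument_rounds` and the prompt wording are our assumptions, and `ask` stands in for whichever chat-model client the reader actually uses:

```python
from typing import Callable

def counterargument_rounds(argument: str, ask: Callable[[str], str],
                           rounds: int = 2) -> list[str]:
    """Run alternating rounds of AI-generated counterarguments. Round 1
    attacks the user's argument; round 2 attacks those counterarguments
    and, if the original argument was strong, should roughly mirror it.
    `ask` is any function that sends a prompt to a chat model and
    returns its reply."""
    history = [argument]
    for _ in range(rounds):
        prompt = (
            "Generate counterarguments that rigorously challenge every "
            "claim in the following text, using established facts and "
            "expert perspectives:\n\n" + history[-1]
        )
        history.append(ask(prompt))
    return history[1:]  # only the generated rounds, oldest first
```

Plugging a stub in for `ask` makes the control flow testable without a model; comparing the final round against the original argument remains a human-judgment step.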
We recognize that this last section may have been philosophically heavy for readers, but this doesn’t discount the importance of the concepts discussed above—an awareness of these concepts allows individuals to better tap into the catalytic potential of AI to expand the ways in which they think about and understand the world, especially as AI continues to progress.
Through human-AI interaction, humans can tame their egos, correspondingly widening their perspectives, cultivating a deeper understanding of their existence by dismantling their biases, exploring novel ideas, making sense of their lived experiences, and ultimately, maintaining a continuous learning mindset. In Marca’s words, “If you love to learn something and you’re learning, you’re learning how to learn, and that may be the most important tool that you ever take away from any learning experience.”
Importantly, while we’ve suggested specific prompting techniques to address each one of the Heideggerian concepts above, these techniques need not be mutually exclusive and may actually be transferable in many different contexts.
In terms of opportunities in the future of work, the AI-driven expansion of human thought and understanding could significantly enhance individuals’ abilities to identify novel opportunities in novel contexts. By entertaining a wide array of perspectives and thought processes, people become better equipped, both cognitively and emotionally, to source, identify, and capitalize on valuable and actionable opportunities as they emerge.
*note: references are ordered alphabetically by author name, with links provided where appropriate.