Investing in Innovation: How AI Literacy Fuels Opportunities in the Future of Work (Part 3)

Introduction

The more AI literate individuals are, the more valuable they’ll be as the future of work evolves. AI literate professionals can identify, create, and capitalize on AI-driven opportunities. Moreover, adopting an “AI as a tool” mindset allows individuals to focus on expanding the utility of the collaborative human-AI relationship, fostering continuous growth and learning, resulting in a wider opportunity landscape. Additionally, we argue that sourcing opportunities depends on the quality of the questions we ask, and in this context, generative AI can be a powerful tool, namely in terms of question optimization. Following this, we present a series of actionable steps individuals can take to cultivate AI literacy, concluding by exploring how AI can act as a catalyst that enhances human thought and understanding.

AI is Just Another Tool

In the first essay of this series, we demonstrated the importance of individual AI literacy and presented evidence in favor of three positive trends concerning AI and the future of work:

  1. AI will facilitate the creation of new jobs. 
  2. Businesses are increasingly prioritizing AI skills. 
  3. AI is more likely to augment rather than automate human labor. 

In our second essay, we went one step further, emphasizing the role of collective AI literacy in shaping a future that promotes shared economic prosperity. Current trends may indicate a positive outlook for the future of work and AI, but only those who are AI literate can capitalize on the opportunities that these trends produce. 

“AI is a language, and it’s a language that we’ve got to master—to be an employee in the age of AI, you’ve got to be literate, and it’s sort of on you.”

To this point, one Fortune 500 healthcare company executive we interviewed suggests that “it’s almost like an absorption into your culture where you let AI augment based on what it does for you.” But to understand what AI can do for you, you need to become AI literate; otherwise, you may find yourself stumbling around in the dark, pursuing opportunities that appear attractive but are nonetheless futile. 

At the same time, as we consider how AI literacy may fuel opportunities in the future of work, we must remind ourselves that AI, like all other technologies created by humans, is a tool, and as Paul Marca, VP of Learning and Dean of Ikigai Labs Academy, succinctly puts it, “The tool is only going to be as good as the person behind it.” Though this may appear obvious, it sheds light on four important characteristics that are universal to any tool: 

  1. Tools are designed for a specific purpose, meaning they can be used incorrectly. 
  2. Some tools are easy to use and intuitive, while others require lots of training and practice. 
  3. Though some tools may be designed for a specific purpose, it’s not uncommon for novel use cases that deviate from that purpose to emerge. 
  4. Tools can be improved or changed to suit a different purpose. 

Viewing AI through this lens strengthens the case for AI literacy. Concerning the first two characteristics, AI literate professionals will be able to optimize their use of AI systems, maximizing the utility they gain from them by 1) maintaining a continual learning mindset that allows them to adapt and improve their AI skills, and 2) identifying which models or applications are best suited to certain contexts, tasks, or use cases. In terms of the latter two characteristics, AI literate individuals have an increased ability to 1) identify novel contexts in which certain AI models or tools can provide value, and 2) suggest ways to expand the AI capabilities repertoire that allow for more streamlined and targeted task execution. 

It may not take long for those who are AI literate to reach a point at which they’re 100x or even 1000x more productive than their non-AI literate counterparts.

Moreover, recall two points made in the second essay in this series. First, AI literacy enables exponential learning, fostering more streamlined adaptation to AI innovations. For instance, let’s assume AI-literate individuals are currently 10x more productive than their non-AI-literate counterparts. As AI innovation progresses, these individuals will be more equipped to build on their already existing AI skills and knowledge base, accelerating their acquisition of new skills and knowledge—AI literacy can promote a positive feedback loop. In other words, it may not take long for those who are AI literate to reach a point at which they’re 100x or even 1000x more productive than their non-AI-literate counterparts. “AI is a language, and it’s a language that we’ve got to master—to be an employee in the age of AI, you’ve got to be literate, and it’s sort of on you,” says Marca. 

Second, we hope that as the proportion of AI literate people increases—cultivating a higher degree of collective AI literacy—novel AI innovations will be intentionally designed to address the collective well-being of humanity. In other words, a collectively AI literate population will have more power to influence the course of responsible AI innovation, resulting in AI that we can trust. Developing collective AI literacy is a crucial part of “the game of setting the right context so that the right trust is made between the human and the tool [AI],” as Tony Doran, Co-Founder of SimplyPut AI, advises. 

Now, reconsider the three previously mentioned positive trends for the future of work. For these trends to become a reality, professionals will require the ability to identify and create AI-driven opportunities for themselves, and in doing so, ensure that they continue to provide value in an employment landscape where work requirements are changing rapidly. Moreover, the “AI as a tool” mindset highlights the utility of the collaborative human-AI relationship—people who learn how to leverage AI now will develop a competitive edge over those who don’t. In the near term, which seems more likely: that many of us will be replaced by AI, or that we’ll be replaced by humans who’ve learned to optimize their use of AI? 

Sourcing Opportunity: Leveraging Generative AI to Improve the Questions We Ask

Opportunities don’t present themselves equally to everyone—the majority of people need to source their opportunities. However, the process of discovering opportunities can be extremely difficult, even when they’re plentiful. People may not know where to look, the amount of available information might be overwhelming, and it may be difficult to distinguish actionable from prospective opportunities. In essence, discovering opportunity requires the ability to ask the right kinds of questions. 

“The inception of the idea and the creativity is always with the human—ChatGPT always has to be prompted to give you something.”

Historically, the success of this inquisitive process depended upon one’s ability to identify and leverage connections within their professional network, conduct industry and job-related research, figure out which skill sets will continue to provide value, and remain up-to-date on the most recent innovations and trends. Before the advent of advanced generative AI (GenAI), particularly multi-modal Large Language Models (LLMs), the burden of this time-consuming and labor-intensive process would have fallen on the individual.

“If you define AI in terms of doing something that was originally just reserved for humans, then a calculator is a form of AI. So, I don’t think many people realized what AI was until they started using ChatGPT,” remarks Connor Wright, Partnerships Manager at the Montreal AI Ethics Institute. 

Now, however, regular people have the ground-breaking ability to leverage the vast stores of human knowledge on which LLMs are trained to inform how best to go about building their future career trajectories. As one company10 claims, “GenAI has made data conversational, and as such, an active participant in the decision-making process.”

GenAI can dramatically improve the novelty, diversity, and relevance of the questions we ask.5 However, as Wright cautions, “The inception of the idea and the creativity is always with the human—ChatGPT always has to be prompted to give you something.” We know GenAI is a powerful tool, but harnessing its potential depends on how well we use it. Fortunately, we have experience in this domain, hence the insights and recommendations illustrated below:  

  • Streamlining the ability to ask complex questions: Companies like Google and OpenAI have recently integrated AI-assisted search capabilities. Before this innovation, complex search queries had to be broken down into parts to reveal immediately useful answers. For instance, suppose a user asks, “As a double major in English and math, which industries might offer the most lucrative career trajectories for me?” A quick Google search would reveal plenty of information, but the user would still have to identify what information is relevant to them. Consequently, they might break the question into two parts: “What are the most lucrative career trajectories for English majors?” and “What are the most lucrative career trajectories for math majors?” By contrast, AI-assisted search can tackle complex search queries, synthesizing relevant information to produce a holistic answer, where users would’ve typically had to do this themselves. 
  • Fostering diversity of thought: Human conversations often begin by exploring some topic, eventually deviating from it to explore related sub-domains, culminating in a more well-rounded perspective that addresses potential nuance and novelty around the original topic. Seeing as GenAI is typically operated through a conversational interface, users can prompt the system to explore different perspectives on their questions and answers while also challenging any preconceived notions or biases they might have. For instance, users could prompt a model to break down the most lucrative career trajectories for double majors in English and Math across industries and nations. If they wish to go a step further, they could also prompt the model to provide perspectives on their qualifications and career trajectories from the viewpoint of well-established industry specialists. 
  • Enhancing question novelty: AI is great at identifying patterns in structured and unstructured datasets, uncovering trends or phenomena that humans struggle to identify, and allowing us to ask more targeted and original questions. For instance, a user could prompt a model to analyze the most common career trajectories for English majors from 2000 to the present day. The model may reveal that the proportion of English majors in corporate leadership positions has been steadily increasing since 2000, inspiring the user to ask the following question: “What steps can I take to demonstrate my leadership capabilities in a professional setting?” Through an iterative refinement process, users can improve the novelty of their questions while increasing the probability of innovative “category jumping” questions that apply insights from one area to a completely different one.
  • Summarizing knowledge and identifying key concepts: Digital technologies have brought a wealth of information to our fingertips but just because this information is accessible doesn’t mean it’s easy to interpret or work through. LLMs demonstrate impressive capabilities for information summary and the identification of key concepts nested within that information. When paired with AI-assisted search capabilities, users can streamline their abilities to identify and integrate critical pieces of information into the questions they ask, such as relevant keywords and research findings. 
  • Suggesting alternative phrasing: How a question is framed significantly influences its answer. For example, the question “What commonalities exist between careers in finance and manufacturing?” would yield very different results from the question “What careers operate at the intersection between finance and manufacturing?” In this respect, users can prompt GenAI models to suggest alternatives to the questions they ask as they ask them, generating more targeted and useful answers. Moreover, when questions are unclear or ambiguous, models will still produce an answer, but they’ll usually indicate to users that ambiguities must be clarified, or alternatively, output a series of additional questions and recommendations that may be of interest. 
  • Encouraging depth and specificity: Simple questions typically yield simple answers. The more specific and detailed questions are, the more informative their answers will be. For instance, a user could ask an LLM “Why is the sky blue?” The LLM might respond with a brief explanation outlining the phenomenon of Rayleigh scattering. The user may then ask the question “Why does Rayleigh scattering make the sky appear to be blue?” prompting a much more detailed response that considers factors such as wavelength-dependent scattering efficiency and the resulting dominance of blue light. Prolonged interaction with LLMs can implicitly encourage users to increase the depth and specificity of their questions as they progressively gain more informative answers.
  • Increasing question frequency: A survey by the Harvard Business Review discovered that 79% of respondents increased their question frequency when interacting with GenAI models.5 GenAI’s ability to answer questions quickly and successively encourages people to continue asking them. However, it’s important to note that this process still requires human judgment because 1) more questions don’t always translate to better questions, and 2) GenAI outputs can be unreliable yet persuasive, so users should cross-reference important information with reputable sources. 
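To make the iterative side of these techniques concrete, here is a minimal sketch of a question-refinement loop in Python. It assumes a function called `ask_llm`, which is a hypothetical stand-in for whatever chat-completion API a reader actually uses; the stub below simply echoes its prompt so the scaffolding runs on its own.

```python
# Sketch of an iterative question-refinement loop, as described above.
# `ask_llm` is a hypothetical stand-in for any chat-completion API; the
# stub below echoes the prompt so the scaffolding runs standalone.

def ask_llm(prompt: str) -> str:
    return f"[model answer to: {prompt}]"  # swap in a real API call

def refine_question(seed_question: str, rounds: int = 3) -> list[str]:
    """Chain refinement prompts to progressively deepen a seed question."""
    history = [seed_question]
    for _ in range(rounds):
        answer = ask_llm(history[-1])
        # Ask the model to propose a deeper, more specific follow-up.
        follow_up = ask_llm(
            f"Given the question '{history[-1]}' and the answer '{answer}', "
            "suggest one more specific and novel follow-up question."
        )
        history.append(follow_up)
    return history

questions = refine_question(
    "Which industries suit a double major in English and math?"
)
```

With a real model behind `ask_llm`, each entry in `questions` would be progressively more targeted than the last—the human still judges which refinements are worth pursuing.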

Understanding what AI can do for you, especially in consequential intellectual pursuits like asking better questions, depends on how AI literate you are.

GenAI will continue to enhance the way that people formulate and ask questions. However, those who wish to leverage this capability to anticipate how they could benefit from novel and existing opportunities as the future of work evolves will require a basic level of AI literacy—this includes some prompt engineering skills and a general awareness of the current limits and capabilities of the most advanced AI models. And, to this latter point, Marca asks, “How do you establish a baseline around what AI can do?” Our answer is simple: understanding what AI can do for you, especially in consequential intellectual pursuits like asking better questions, depends on how AI literate you are. 

“People will have 4, 5, 6 jobs in their careers—that’s one every 5 to 7 years, depending on how long you’re in the workforce. And, that requires a commitment to learning so that you can pivot and move and grow—LLMs help me grow faster and adapt more quickly,” advocates Marca. 

However, AI literacy isn’t only about cultivating the ability to leverage AI to ask better questions. At a much deeper level, it’s about creating a knowledge base that empowers individuals to find and execute their purpose as the future of work becomes progressively more uncertain. There’s no limit to AI literacy, but there are concrete steps that individuals can take to develop it. 

Becoming AI Literate: A Series of Actionable Steps 

The core characteristics of an AI literate professional can be roughly summarized as follows (for a more detailed summary of these characteristics, see here): 

  1. A basic functional understanding of AI.
  2. The possession of AI-specific skills.
  3. An up-to-date knowledge of current AI applications and use cases.
  4. An adaptable mindset via continuous learning and education.

We can also view these core characteristics as distinct objectives for anyone wishing to cultivate AI literacy, breaking them down in terms of the steps required to achieve them. 

Objective 1: A basic functional understanding of AI. This objective can be broken down into two parts: theory and practice. Theory concerns the ability to differentiate between different kinds of AI systems, whereas practice concerns the ability to recognize where such systems can be applied for real-world problem-solving. 

  • Theory requires a general technical understanding of AI. This understanding can be subdivided into 1) the main kinds of learning paradigms in which AI systems are grounded (e.g., reinforcement learning vs. deep learning or supervised vs. unsupervised learning), 2) the tasks for which certain systems are suited (e.g., classification vs. prediction), and 3) the strengths and weaknesses of the most advanced AI models (e.g., information synthesis vs. information hallucination). For a more in-depth discussion of these characteristics, see the first and final essay in this series. 
  • Individuals can build their theoretical understanding by 1) exploring the comprehensive and easily digestible reports provided by major organizations and research institutes such as McKinsey, Boston Consulting Group, IBM, Google, and several others, 2) enrolling in online introductory courses provided by accredited academic institutions and platforms such as Harvard, Stanford, MIT, and Coursera, 3) reading the blogs of leading AI organizations such as DeepMind, OpenAI, and Anthropic, 4) leveraging LLMs’ AI-assisted search and summarization capabilities to find additional useful resources and break them down accordingly, and 5) joining community-based online platforms such as COAI and Learn AI Together. There are many more steps individuals could take to build up their theoretical understanding, but these offer a strong starting point. 
  • Practice concerns only one vital step: experimentation with as many different AI models, tools, and platforms as possible. However, the process of experimentation shouldn’t be random—a content creator might choose to experiment with text-to-image generators whereas a banker would find AI-powered financial analysis tools more useful. 

Objective 2: The possession of AI-specific skills. This objective can also be broken down into two parts: 1) rudimentary AI skills, and 2) advanced AI skills. Rudimentary AI skills concern the ability to independently leverage the basic capabilities of existing AI systems (e.g., text summarization, document analysis, research synthesis, etc.). Advanced AI skills concern the ability to leverage AI capabilities in novel, creative, or complex contexts (e.g., synthesis of novel ideas, creative ideation, development of business objectives and strategies, etc.). For more information on these skills, see here. 

  • To develop rudimentary AI skills, users should first identify what they want to use AI for. A writer may use AI for editing and research, an educator may use it for personalized learning and grading, and a doctor may use it for image screening and treatment identification. Seeing as AI is more likely to augment rather than automate, rudimentary AI skills should be developed with the idea of improving upon the primary human skill sets that are required to excel in a given profession. Therefore, if individuals can identify the most important skills for them to possess in their profession, they can then begin experimenting with AI to find the best ways to enhance them, increasing the value they provide. 
  • Developing advanced AI skills could require a counterintuitive shift in mindset—though we’ve expressed that AI should initially be viewed as a tool, advanced AI skills may come more easily to us if we view AI as an extension of our intelligence. When we think of something as a tool, we limit our understanding of it to the context in which it applies or the purpose for which it’s designed. Conversely, when we think of something as an extension of our intelligence, the possible array of applications is limited only by our intellect and physical circumstances. In other words, humans can think and reason about virtually anything, whereas tools are relevant and applicable in certain contexts. 

Objective 3: An up-to-date knowledge of AI applications and use cases. Of all the objectives we discuss, this is the easiest one to achieve. For people to maintain current knowledge of AI applications and use cases, they must consistently track the most recent innovations and developments throughout the AI landscape. To do so, they can explore a variety of mediums, including popular tech media outlets, podcasts, blog posts, and newsletters from reputable research institutes like the National Science Foundation or The Future of Life Institute, as well as online community platforms dedicated to AI literacy, like COAI. For additional information, see here. 

  • However, we offer a word of caution: like any new and powerful technology, AI is subject to hype and sensationalism, which can easily distort or manipulate our views of this technology. When tracking the most recent AI innovations and developments, we strongly suggest that individuals cross-reference all the information they come across, even when it’s produced by reputable media outlets or institutions. 

Objective 4: An adaptable mindset via continuous learning and education. The ability to reach this objective is predicated on two things: 1) the ability to reach the three prior objectives we discussed, and 2) the ability to accept change. Though humans are highly adaptable creatures, they often resist change.7 Below, we identify some of the main psychological factors and mechanisms that commonly give rise to change resistance: 

  • Confirmation bias, cognitive dissonance, information overload, habit formation, mental fatigue, loss aversion, threats to one’s values or way of life, fear of uncertainty, and previous life experiences. 
  • These mechanisms and factors typically manifest themselves in the form of implicit biases—biases that operate subconsciously. Fortunately, however, existing research1,2,3,4,7,8 provides evidence in favor of the claim that developing an awareness and deliberate need to address these biases can significantly reduce their effect on behavior. Therefore, we suggest that individuals familiarize themselves with these psychological factors and mechanisms, and maintain an open mindset whereby they regularly question the legitimacy of their resistance to AI-inspired changes. 
  • Many of the mechanisms and factors mentioned above also contribute to a broader phenomenon: people might resist learning because it threatens their ego, either through intimidation, fear, or by being forced to confront outdated skills. Therefore, when cultivating AI literacy and attempting to maintain a continuous learning mindset, people must remind themselves that 1) it’s okay to start small, 2) there is nothing wrong with being a student, and 3) whether you like it or not, AI is here to stay. 

The process of cultivating AI literacy will be continual and iterative. Though it may be difficult to initiate and maintain, we urge individuals to stick with it. AI literacy will directly correspond with the ability to leverage AI systems to capitalize on and create opportunities as the future of work continues to evolve. The more AI-literate individuals are, the more valuable they’ll be down the line.  

Taming the Ego: The Catalytic Effects of Generative AI on Human Thought and Understanding

“This technology [AI] is so powerful that people who are stuck with an idea of themselves that is static will be threatened—the learning mindset is a way to get outside of this threat.”

In his seminal work, What Is Called Thinking?, the 20th-century German philosopher Martin Heidegger explores the nature of thought and understanding.6 In doing so, he illustrates a series of concepts relevant to our current discussion. By elaborating on these concepts, we highlight the catalytic effects of GenAI on human thought and understanding, namely in terms of how they may be leveraged to tame the ego and ensure a continuous learning mindset. In other words, “This technology [AI] is so powerful that people who are stuck with an idea of themselves that is static will be threatened—the learning mindset is a way to get outside of this threat,” claims Doran. 

Moreover, while human-AI interaction may sometimes be eerily similar to human-human interaction, an AI doesn’t judge a human in the same way that other humans do. Despite how smart AI might be, it doesn’t ridicule, gossip, or make fun of us for asking “dumb” questions or coming up with weird ideas—when we interact with AI, our ego isn’t threatened. Wright echoes these words using a more cautionary tone, “One thing that I fear when it comes to these systems is that they can lead us to overestimate how human the system is—if I was telling somebody who’s an AI novice to look out for one thing at the moment, it would be anthropomorphism.”

Before we dive in, however, we want to stress that ego isn’t an inherently bad thing—without it, no great scientist, explorer, thinker, or revolutionary would’ve ever had the courage and ambition to pursue their goals. However, ego can obscure people’s ability to optimize decision-making by preventing them from exploring alternative sources of information, listening to others, and acknowledging their weaknesses. We need to “understand that if a statement is untrue, we can explore it—you are different from the idea that you actually have,” as Doran counsels. Therefore, in this context, GenAI is best leveraged to keep the human ego in check, not to dismantle or eliminate it. 

Heideggerian Concepts to Consider

Language as a critical feature of thought: Heidegger suggests that the language we use is integral to the way that we think. Language provides an opportunity for communication and the ability to engage with the world. In essence, language allows us to understand not only our existence, but also others—by “others” we don’t only mean other humans, but everything that comprises our physical environment, knowledge, and experiences. In simple terms, language is the tool that humans use to frame how they think about anything and everything, even in cases where it might be insufficient. Therefore, human language often reflects individual lived experiences, especially in terms of how individuals perceive themselves throughout their lives—language can be viewed as a reflection of ego.

Fortunately, GenAI is perfectly suited to combatting this problem. There are four basic prompts users can consider to better balance the language of their prompts: 1) run a sentiment analysis on the prompts provided and de-bias them accordingly, 2) identify any language in this prompt, either at the level of individual words or sentences, that has an emotional valence, 3) if there is any material in the prompt that takes the form of an opinion or belief, provide a counter argument for each opinion or belief, and 4) provide alternatives to this prompt written from the perspective of the previously identified counter-arguments. 

These prompts are all rudimentary and they can be tailored and enhanced through more concrete details and parameters. However, they will help users become more aware of the language they use and subsequently, how they think about the world in relation to themselves and others. 
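As a rough illustration, the four steps above can be run as a fixed prompt pipeline. The sketch below is an assumption-laden illustration: `ask_llm` is a hypothetical placeholder for any chat-completion API, and the step wording simply mirrors the four prompts described above.

```python
# Sketch: applying the four de-biasing prompts above as a fixed pipeline.
# `ask_llm` is a hypothetical placeholder for any chat-completion API.

DEBIAS_STEPS = [
    "Run a sentiment analysis on the following prompt and de-bias it: ",
    "Identify any words or sentences with an emotional valence in: ",
    "For each opinion or belief in this prompt, provide a counterargument: ",
    "Rewrite this prompt from the perspective of those counterarguments: ",
]

def ask_llm(prompt: str) -> str:
    return f"[model output for: {prompt}]"  # swap in a real API call

def debias(draft_prompt: str) -> list[str]:
    """Run each de-biasing step over the draft and collect the outputs."""
    return [ask_llm(step + draft_prompt) for step in DEBIAS_STEPS]

outputs = debias("Remote work is obviously better for everyone.")
```

Reviewing the four outputs side by side is where the self-awareness comes from—the point is not to accept the model’s rewrite wholesale, but to notice which of your own words carried unexamined valence.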

The limitations of rationality: Rational thinking is the ability to think logically about things. Heidegger is critical of the modern world’s emphasis on rational thought, believing that it precludes us from understanding the more profound characteristics of our humanity and existence. For instance, the commonly invoked phrase, “There’s no such thing as a dumb question” encapsulates the nature of this point—most of us are hesitant to ask questions that appear irrational or nonsensical because we’re afraid of being wrong or of how others will perceive us (our ego is threatened). In doing so, we overlook that humans are both cognitively and emotionally intelligent—some of the best questions are motivated by intuitive emotional responses to certain experiences or stimuli. 

While GenAI systems don’t themselves “think,” they can inspire users to explore different approaches to their thought processes, as well as perspectives on their understanding of a given issue or problem that they typically would not have arrived at through logical deliberation. 

Fortunately, AI systems don’t judge us for the questions we ask—no matter how ridiculous they seem—so they objectively pose no threat to our ego. If we recognize this fact, we can free ourselves up to push the boundaries of rational thinking without fear of ridicule. As Marca exclaims, “What if you could digest, using LLMs, all of Abraham Lincoln’s work and animate his face to create a conversation with a smart chatbot? Suddenly, Abraham Lincoln is talking back to you!”

Let’s also consider a more concrete example: one can logically deliberate about the evolutionary origin and utility of love in terms of social bonding and reciprocity. For instance, one could prompt an AI to “Provide an argument driven by evolutionary theory that explores the utility of love in terms of social bonding and reciprocity.” Robust empirical evidence, even if it doesn’t always point in the same direction or is open to interpretation, makes it possible to answer this question concretely. However, some questions, such as “Are dreams a gateway into another dimension?” lack concrete answers—most of us don’t take such questions seriously because they’re regarded as irrational and unscientific despite being highly interesting. 

AI doesn’t care what questions we ask and, by design, must always provide an answer. However, for AI to generate answers that push the boundaries of rational thought, the prompt above could be adjusted to the following: “Assume that we know dreams are a gateway to another dimension, but we don’t know what principles enable this phenomenon. If you had to make an educated guess as to what these principles might be, what would they be?” In simple terms, leveraging AI to help us push the boundaries of rational thought requires that 1) we prompt it to accept an irrational idea as reality, and 2) we prompt it to consider the necessary conditions that would be required to justify this idea. Obviously, there are many ways to enhance this process through more sophisticated prompting, but this technique offers a solid starting point. 
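This two-part structure (grant the premise, then ask for its enabling conditions) is easy to template. The helper below is a hypothetical sketch—any premise can be dropped in, and the resulting string is what you would paste into a chat interface or pass to an API.

```python
# Sketch: the two-part "boundary-pushing" prompt structure described above:
# 1) grant an unconventional premise, 2) ask for its enabling principles.

def boundary_pushing_prompt(premise: str) -> str:
    """Build a prompt that asks the model to reason within a granted premise."""
    return (
        f"Assume that we know {premise}, but we don't know what principles "
        "enable this phenomenon. If you had to make an educated guess as to "
        "what these principles might be, what would they be?"
    )

prompt = boundary_pushing_prompt("dreams are a gateway to another dimension")
```

Swapping in other premises ("plants respond to music," "cities behave like organisms") reuses the same scaffold without rewriting the prompt each time.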

The unthought: Heidegger believes that one of the most critical characteristics of thinking concerns the exploration of the “unthought,” namely, those ideas, concepts, or notions that aren’t fully understood or explored. The “unthought” can take the form of irrational ideas, such as the ones mentioned above, and novel or non-paradigmatic rational ideas, such as evolutionary theory, which, although widely accepted now, was considered radical when it emerged. Importantly, the unthought also concerns the ability to establish connections between disparate ideas and concepts, especially those that appear unrelated. However, most people are reluctant to explore the unthought because it either requires too much work or makes us vulnerable to criticism from others, thereby threatening the ego. 

When leveraging GenAI to explore the unthought, both the prompting technique mentioned above and the recommendations for improving the quality of the questions we ask can prove useful. These techniques are well-suited to helping us form new and interesting ideas, but they may be limited in their ability to help us materialize connections between disparate concepts. 

Thankfully, there is a way to prompt AI for this purpose, but it requires a few components: 1) describe the ideas you have in as much detail as possible, 2) describe, as best as you can, what you think the possible connections are between these ideas, 3) identify and rank which of these connections you think are most plausible and least plausible, 4) explicitly state that you would like to identify alternative connections beyond the ones you have provided as well as feedback on the feasibility of the ones you have provided, and 5) request that the output includes all sources used to generate the answer, to ensure that further research is possible. Each of these components can be structured as individual prompts that are iteratively refined through human-AI interaction, or, if the user is confident in their prompting skills, combined into one cohesive prompt. 
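For readers who prefer the single-prompt route, the five components can be assembled mechanically. The sketch below is illustrative only—the `connection_prompt` helper and its example inputs are our own invention, not a prescribed format, and the resulting string would be pasted into any chat interface.

```python
# Sketch: assembling the five components above into one cohesive prompt.

def connection_prompt(ideas, proposed_links, ranking):
    """Build a single prompt from the five components described above."""
    sections = [
        "Ideas, described in as much detail as possible:\n"
        + "\n".join(f"- {idea}" for idea in ideas),
        "Possible connections between these ideas:\n"
        + "\n".join(f"- {link}" for link in proposed_links),
        "My ranking, from most to least plausible:\n"
        + "\n".join(f"{n}. {item}" for n, item in enumerate(ranking, 1)),
        "Please identify alternative connections beyond the ones provided, "
        "and give feedback on the feasibility of mine.",
        "Include all sources used to generate your answer so that further "
        "research is possible.",
    ]
    return "\n\n".join(sections)

prompt = connection_prompt(
    ideas=["LLMs lower the cost of research", "career pivots are common"],
    proposed_links=["Cheaper research makes pivoting less risky"],
    ranking=["Cheaper research makes pivoting less risky"],
)
```

Keeping the components as separate sections makes it easy to refine one of them (say, the ranking) between iterations without rebuilding the whole prompt.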

The historical nature of thinking: Heidegger suggests that the way we think is influenced by those who preceded us. He emphasizes the role that culture plays in shaping our thought processes and the idea that thinking involves a continual dialogue with the past. Through this dialogue with the past, we identify ideas and thinkers that resonate with us. However, disentangling ourselves from these ideas and thinkers, especially when they’re wrong about something, can be difficult. In other words, the historical nature of thinking makes it hard for people to accept when they’re wrong—acknowledging one’s mistakes requires the taming of one’s ego and the biases that come with it. 

There are two specific approaches for leveraging AI in this context: 1) input an argument that you feel is strong and prompt AI to de-bias it, and 2) use the same argument and prompt AI to generate counterarguments that aggressively dismantle every claim you make, using established facts and expert perspectives. 

Based on the output you receive from the second approach, you can identify the strengths and weaknesses of your argument by reference to the strengths and weaknesses of the counterarguments provided. Judging how strong the counterarguments are may require some human judgment; however, the same process can be repeated with AI-generated counterarguments as individual or chained prompts. In other words, you can ask AI to come up with counterarguments to the counterarguments it generated—in theory, if your initial arguments were strong, the second round of AI-generated counterarguments should mirror your initial arguments. If it does not, it doesn’t necessarily mean that your initial arguments were weak, though it could indicate they failed to consider important characteristics, facts, or perspectives. 
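The chained-counterargument process can be sketched as a small loop. This is a hedged sketch under stated assumptions: `generate` stands in for whatever GenAI call you actually use (any function that takes a prompt string and returns text), and the prompt wording is illustrative.

```python
def counterargument_rounds(argument, generate, rounds=2):
    """Chain counterargument prompts: each round critiques the previous output.

    `generate` is a placeholder for the caller's GenAI interface.
    Returns the full history, starting with the original argument,
    so each round can be compared against the initial claims.
    """
    history = [argument]
    current = argument
    for _ in range(rounds):
        prompt = (
            "Aggressively dismantle every claim in the following text using "
            "established facts and expert perspectives:\n\n" + current
        )
        # Feed each round's counterarguments back in as the next target.
        current = generate(prompt)
        history.append(current)
    return history
```

With `rounds=2`, `history[1]` holds the counterarguments to your argument and `history[2]` the counterarguments to those counterarguments; per the reasoning above, a strong initial argument should be mirrored in `history[2]`.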

Wrap Up

We recognize that this last section may have been philosophically heavy, but that doesn’t discount the importance of the concepts discussed above: an awareness of these concepts allows individuals to better tap into the catalytic potential of AI to expand the ways in which they think about and understand the world, especially as AI progresses. 

Through human-AI interaction, humans can tame their egos, correspondingly widening their perspectives, cultivating a deeper understanding of their existence by dismantling their biases, exploring novel ideas, making sense of their lived experiences, and ultimately, maintaining a continuous learning mindset. In Marca’s words, “If you love to learn something and you’re learning, you’re learning how to learn, and that may be the most important tool that you ever take away from any learning experience.” 

Importantly, while we’ve suggested specific prompting techniques to address each one of the Heideggerian concepts above, these techniques need not be mutually exclusive and may actually be transferable in many different contexts. 

In terms of opportunities in the future of work, the AI-driven expansion of human thought and understanding could significantly enhance individuals’ abilities to identify novel opportunities in novel contexts. By entertaining a wide array of perspectives and thought processes, people become better equipped, both cognitively and emotionally, to source, identify, and capitalize on valuable and actionable opportunities as they emerge. 

References 

*note: references are ordered alphabetically by author name, with links provided where appropriate.

  1. Interventions Designed to Reduce Implicit Prejudices and Implicit Stereotypes in Real World Contexts: a Systematic Review (Fitzgerald et al., 2019)
  1. Effects of Procedural and Distributive Justice on Reactions to Pay Raise Decisions (Folger & Konovsky, 2017)
  1. Resistance to Change: The Rest of the Story (Ford, Ford & D’Amelio, 2008)
  1. Perspective-taking: decreasing stereotype expression, stereotype accessibility, and in-group favoritism (Galinsky & Moskowitz, 2000)
  1. AI Can Help You Ask Better Questions — and Solve Bigger Problems (Gregersen & Bianzino, 2023)
  1. What is Called Thinking (Heidegger, 1952) 
  1. We Are Hardwired to Resist Change (Pennington, 2018)
  1. Bringing automatic stereotyping under control: implementation intentions as efficient means of thought control (Stewart & Payne, 2008)
  1. 6 Positive AI Visions for the Future of Work (World Economic Forum, 2021)
  1. Researchers and Analysts: Enhancing Knowledge and Insights with GenAI-powered Answers (Zoghbi, 2023)
