AI and Human Thought Part 1: LLMs and the Nature of Thought
We explore AI's profound impact on human cognition and experience, revealing its potential both to enhance and to diminish key thought processes, and emphasize the critical importance of AI literacy in harnessing AI's benefits and navigating its risks.
Written by Sasha Cadariu
Introduction
Throughout this post, the first in our AI and human thought series, we examine the impact of AI, specifically Large Language Models (LLMs), on human thought and experience. In doing so, we highlight LLMs' potential to enhance cognitive abilities like problem-solving, abstract reasoning, and creativity, while also emphasizing how LLM use might diminish key mechanisms and abilities integral to human thought. Importantly, our discussion takes an AI literacy perspective, stressing certain AI skills, tactics, and AI-specific mindsets as critical to optimizing the value AI can provide in specific contexts. We also warn against overreliance on AI and conclude by suggesting that the future integration of AI with other technologies will indiscriminately reshape human existence and cognition.
What is AI thinking?
What is thinking? For centuries, humanity’s brightest minds have attempted to provide a concrete answer to this question, and in doing so, have revealed—sometimes inadvertently—the deeply complex, interconnected, multifaceted, and mysterious nature of human cognition. Despite this mystery and the ongoing questions of what constitutes human consciousness, perception, and creativity, humanity has maintained a fascination with building machines that can think as humans do.
Current state-of-the-art AI systems like ChatGPT, Gemini, and Claude, though they would easily pass a Turing test—an admittedly poor measure of whether machines can think like humans—and display capabilities surpassing those of the average human across several domains, are still fundamentally limited in what they can do. Nonetheless, the emergence of these systems has reawakened the world to the immense power of AI, thrusting us into an AI summer with no clear end in sight.
The commercialization, popularization, and impressive repertoire of capabilities of Large Language Models (LLMs) have reignited the existential flame, prompting many AI researchers, philosophers, scientists, and futurists to seriously reconsider what might happen if and when AI reaches human-level or artificial general intelligence (AGI)—though LLMs are unlikely to get us there. This is a critical question to address, but its notoriety in AI discourse can draw attention away from questions that are equally, if not more, important. One such question—the one this post seeks to address—concerns the possible effects of human-AI interaction on the evolution of human thought. Simply put, how can AI change the way that we think?
The empirical basis for answering this question is thin at best. Several studies¹ have compared and contrasted the process by which AI systems arrive at certain outputs with various elements of human cognition, but virtually none have systematically examined the effects of human-AI interaction on human cognition. So, if there's no direct empirical basis for answering this question, why ask it? Because AI, like any other technology, is a tool. Throughout human history, humans have built tools to help them solve problems, which has allowed them to build more sophisticated tools to solve even more complex problems, and so on. In other words, embedded in the nature of any tool is the capacity for change, which inspires humans to think about the problems they face in different, novel, and sometimes concerning ways.
For AI, the question of how it will change human thought is perhaps more important than it would be for other kinds of technologies, especially when considering its unique capacity to assist with or drive human decision-making. Moreover, AI, like social media, is a digital information technology that operates in the realm of the Internet of Things (IoT), which reaches every corner of the globe. In this respect, the advent of social media has clearly demonstrated that information technologies can and do indiscriminately impact human thought, as evidenced by well-documented phenomena such as the steep increase in rates of anxiety and depression among young people alongside rapidly accelerating political instability, ideological censorship, and the sustenance of surveillance capitalism. Had people been proactive and addressed the question of how social media impacts human cognition during its early stages, these problems might have been more easily resolved, enabling an even wider array of social media benefits than we see today.
Throughout the remainder of this post, we’ll discuss several key cognitive functions essential to the way that humans think, and consider how AI, specifically LLMs, can be used to enhance or diminish them. In other words, whether we can leverage LLMs to improve the way that we think will depend upon how AI literate we are. For the purposes of this discussion, AI literacy refers to the skills individuals must possess to operate AI systems effectively, which includes understanding when not to use AI.
Seven Mechanisms and Abilities Integral to Human Thought
LLMs are powerful tools for cognitive enhancement, and might even be understood as an extension of human intelligence rather than a replacement of it. In this respect, we've identified seven higher-order abilities and mechanisms ingrained within human thought processes that we argue LLMs could significantly enhance (or diminish, as we'll discuss later on):
Complex problem solving: The ability to solve a potentially ambiguous or novel problem with multiple possible solutions by drawing from different perspectives, thought processes, and ideas.
Abstract reasoning: The ability to reason about abstract concepts, such as ideas, principles, or theories.
Long-term planning: The ability to define, execute, and adapt plans over a long period of time and in consideration of uncertainty.
Social bonding and connectedness: The formation and sustenance of human relationships through communication, cooperation, and empathy.
Language: The mechanism through which the majority of human thoughts and ideas are concretely expressed and made understandable to others.
Morality: The mechanism through which humans determine how to value ideas and actions, namely in terms of their impacts on human well-being and lived experience.
Creativity: The ability to do things differently, whether it concerns solving a novel problem, exploring an uncharted territory, or coming up with a new music genre.
There are certainly many other facets of human cognition that LLM use could significantly impact. Still, we believe these to be the most important because they constitute the foundation of both crystallized (what we know) and fluid (how we use what we know) intelligence. For example, social bonding, language, morality, and creativity are all essential to an individual's crystallized intelligence: through social interaction, we learn about our knowledge gaps and how to fill them; through language, we communicate what we've learned, cement it, and gain access to the ideas of others; through morality, we determine what else we should learn to live in accordance with our values and social norms; and through creativity, we push the boundaries of what we think we know already. As for fluid intelligence, it's through complex problem-solving that we leverage crystallized knowledge to inform judgment in novel or ambiguous situations; through abstract reasoning that we make sense of such situations by reference to the foundational principles and ideas that underlie them; and through long-term planning that we manage and mitigate these situations and their impacts.
Another way to understand the distinction between fluid and crystallized intelligence is through the lens of system 1 (quick, intuitive thinking) vs. system 2 (slow, deliberate thinking) processing. Examples of system 1 processing include activities like riding a bike, driving a car, or laughing at a joke—all emotional responses inherently fall within the category of system 1 processing, though not all system 1 processes are emotionally driven. By contrast, examples of system 2 processing include things like learning to ride a bike or drive a car, solving a math problem, writing an essay, or cooking a new dish. Importantly, with some tasks, once we practice them enough, they can shift from system 2 to system 1 processing—learning to ride a bike was difficult at first, but now, you don't even think about it. Conversely, once you've solved a math problem, solving another similar problem might be easier, but arriving at an answer will still require some time—not all learned tasks automatically fall under system 1 processing, though subtasks, such as the methodology by which to solve a math problem, might.
When leveraging AI to enhance how we think, the system 1 vs. system 2 distinction is useful for a few reasons: 1) AI can help us identify and address the biases embedded in our crystallized knowledge to mitigate the role of system 1 processing in human critical thinking processes (e.g., refusing to listen to someone whose opinions are distasteful to you), 2) AI can accelerate the rate at which we learn new things, helping us shift certain complex tasks and/or subtasks into the realm of system 1 processing, 3) AI can adopt the role of crystallized intelligence in situations where humans require a high degree of system 2 processing, such as by synthesizing vast amounts of information for humans to analyze, and 4) the limitations of AI systems, for instance, their lack of emotional and cultural intelligence, can inform the areas in which human thought is still valuable.
Now that we’ve covered the seven mechanisms and abilities integral to human thought, as well as the motivation and reasoning underpinning our selection of them, let’s move on to the heart of our discussion. For better or worse, how can AI, specifically LLMs, change the way that we think?
AI as a Cognitive Diminisher
Leveraging LLMs to enhance the way we think is a critical skill to develop. However, whether it proves effective depends not only on an individual’s understanding of prompt engineering (which we discuss in the following section) but also on their ability to grasp what AI can’t do. Understanding the foundational limits of current state-of-the-art LLMs informs how we use them, and more importantly, dictates our conception of which of our cognitive functions deserve to be preserved vs. supplemented vs. replaced (e.g., complex problem solving vs. information analysis vs. information synthesis). To this point, we illustrate several of the most prevalent LLM limitations:
Hallucinations: The tendency to produce inaccurate and/or untruthful outputs that appear legitimate. We can't take anything an LLM says at face value; we need to think critically about its outputs and scrutinize them against real-world evidence.
Domain-specific knowledge: Despite the wealth of human knowledge on which LLMs are trained, displaying deep-seated expertise in highly specialized, jargon-laden fields remains a major difficulty. If you're an expert in your field, LLMs can make for a great virtual assistant, but always remember that you're the expert and that you're accountable for your own work.
Inherent persuasiveness: An LLM will always produce an output, regardless of whether it's correct. If someone confidently answers every question you ask, you may be more inclined to assume their answers are correct. Failing to account for the inherent persuasiveness of LLM outputs can make you cognitively lazy and vulnerable, especially if you choose to leverage them in a professional setting.
Dependency on input quality: LLMs may be easy to use, but they're not easy to optimize. Output quality heavily depends on input quality—the more specific and knowledgeable users are with their prompts, the more likely they are to receive good outputs. Most users have no idea how much information they can include in a prompt—a highly detailed and clearly articulated prompt with a well-defined scope, examples, and parameters (i.e., guidelines or rules you want the LLM to follow when generating an output) will yield a much better output than a simple question or statement (a minimal illustration follows this list).
Sequential and hierarchical planning: Language contains a lot of information about the world, but that doesn’t translate into an understanding of how complex actions and events might unfold over time, especially when chains of events/actions must follow a certain order for an objective to be achieved. While LLMs can inform some functional elements of the planning ideation process, a more complete world model is necessary in this context.
Non-linear narratives: At their core, most LLMs work by predicting the next most probable unit of text by reference to the sequence of text units that preceded it. If a text is non-linear, anticipating what comes next requires a degree of abstraction, whereby unifying high-level ideas and themes are identified. LLMs are great at pattern recognition, but comprehending the concepts behind these patterns is something they still struggle with profoundly—an LLM might be able to detect a non-linear pattern; however, understanding the reasons for its existence is another thing entirely. As humans, we're imbued with abstract reasoning capabilities, which allow us to grasp the high-level themes and ideas unifying non-linear narratives.
Emotional, cultural, and moral intelligence: LLMs can analyze sentiment and cultural trends but they fundamentally lack an understanding of human emotion and culture, primarily due to their inability to feel and/or experience things. These limitations also preclude LLMs from developing a moral compass—though they may be able to reason about morally-laden issues by drawing from the philosophical work on which they’re trained, they can’t actually understand what makes something good or bad. In other words, try to avoid taking targeted cultural, emotional, or moral advice from LLMs since it will likely be generic and/or inaccurate, failing to grasp the nuances of human existence.
Common sense knowledge: Humans obtain common sense knowledge through lived experience, which LLMs simply don’t have. This kind of knowledge can play a major role in higher-order functions like complex problem-solving and long-term planning by motivating the implicit assumptions that sometimes underlie these processes. LLMs might mimic some common sense knowledge, like knowing not to touch a stove when it’s hot, but only because they’ve been trained on texts that describe it, not because they “understand” that getting burned hurts.
Understanding implicit knowledge: Language is insufficient for developing a world model—our world model is continually informed and updated, not only by language but by all our sensory experiences simultaneously. This means that humans can make implicit assumptions about the world—rooted in lived experience—when communicating with each other, whereas LLMs can’t. Though common sense knowledge can frequently take the form of implicit knowledge, implicit knowledge tends to be more specific—for example, you know that driving to your friend’s place during the afternoon rush hour will take an extra 30 minutes because you’ve made the drive with and without traffic several times.
Understanding visual context: A linguistic description of an image does not indicate an understanding of that image. For example, describing the layout of your living room is different from understanding why you've chosen that particular layout. Understanding visual context doesn't only depend on vision but, even more importantly, on embodiment—the ability to act in the physical world by manipulating objects and understanding the relationships between them.
Interpreting ambiguity: Comprehending material that can be interpreted in numerous different ways or contains conflicting and mixed viewpoints requires more nuanced abilities, ranging beyond pattern recognition and prediction. Humans often face ill-defined problems and vague situations, though they can often overcome them through creativity and inventiveness, by exploring alternative approaches and maintaining an open mindset.
Causal reasoning: Predictive inference doesn't equate to causal inference—correlation is not causation. LLMs predict what's most likely to come next based on what came before it, but they don't possess an understanding of why this might be the case. Exercise caution when leveraging LLMs to try to understand the "why" behind a given question or hypothesis.
Memory retention: LLMs struggle to retain information in long conversations or texts. Unlike humans, LLMs don't possess a heightened capacity for working memory—the ability to leverage recently acquired information in real time. For instance, asking an LLM to reference earlier parts of a conversation may lead to hallucinations or misrepresentations of previously discussed ideas; however, progress is being made to overcome these limitations.
Generalization beyond training data: Understanding how to navigate novel situations or complex changing environments requires information beyond what most LLMs have been trained on. This limitation is especially relevant to the role LLMs might play in assisting humans with complex problem-solving and long-term planning, both of which are frequently laden with uncertainty and ambiguity.
User modeling: Though LLMs can somewhat personalize their individual interactions with users by reference to their preferences, history, and context, the ability to craft a deep user profile that draws information from all previous interactions is not yet possible. In other words, LLMs don’t know us, even though they can be very convincing.
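To make the input-quality dependency concrete, here is a minimal sketch contrasting a bare question with a structured prompt that sets a scope, parameters, and an example of the desired feedback format. The editing scenario and all wording are invented for illustration; send either string to whichever LLM you use and compare the results.

```python
# Two prompts for the same request. The vague one leaves everything to the
# model; the detailed one pins down scope, parameters, and an example.

vague_prompt = "Can you improve my report?"

detailed_prompt = """You are an experienced technical editor.
Task: Suggest improvements to the report excerpt below.
Scope: Focus only on clarity and structure; do not change any figures.
Parameters:
- Return at most 5 suggestions, ordered by impact.
- For each suggestion, quote the sentence it applies to.
Example of the feedback format I want:
- "Revenue grew a lot." -> Quantify it: "Revenue grew 12% quarter-over-quarter."

Excerpt:
{excerpt}
"""

print(detailed_prompt.format(excerpt="Revenue grew a lot. Costs were fine."))
```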
Whether our use of LLMs enhances or diminishes the way we think depends on our understanding of LLMs' limitations, which forms a crucial dimension of AI literacy. Knowing what a tool can't do—or where it falls short—allows us to determine the use cases for which it may prove unreliable or inappropriate, or, more concretely, the areas in which human abilities remain unmatched and necessary. In this respect, each description above alludes to some critical human ability—abstract reasoning, long-term planning, or complex problem solving, for instance—that remains relevant precisely because of a given LLM limitation. If you intend to leverage LLMs but don't have the time or interest to explore the specifics of how to optimize them, you should, at the very least, be aware of their limitations. The reality is this: if you make a poor decision grounded in your LLM use, the world will not see you as the victim of AI, but rather, as a victim of your own judgment.
There is another important point to consider here: humans are prone to overreliance on technology. For example, we know that climate change is a real phenomenon, yet most of us are unwilling to give up our cars and stop using plastics because we’d have to majorly reimagine the way that we live, and that would be hard. Even at a more basic level, imagine asking someone to give up their smartphone and use a flip phone instead—today, how many people do you know who would willfully embrace this change?
Technology makes our lives way easier, but it also exposes our vulnerabilities, which become more numerous and intense as technology embeds itself deeper and deeper into the fabric of reality and society—technology use often takes the form of a collective action problem (e.g., if everyone agreed not to use smartphones, most people wouldn't have any trouble giving them up). Understanding LLMs' limitations as we leverage them allows us to build on our capabilities while minimizing our vulnerabilities.
AI as a Cognitive Enhancer
Prompt engineering is the process of optimizing prompts to elicit a desired output—recall that LLM outputs are heavily dependent on user inputs (i.e., prompts). In this section, we’ll discuss some specific purposes for which we can design LLM prompts to streamline the performance of various subtasks associated with the functions of complex problem-solving, long-term planning, abstract reasoning, social bonding, language, morality, and creativity. Before we dive in, however, we urge readers to keep in mind the LLM limitations described previously—because they will play a major role in influencing the scope and structure of the prompts we design—and an additional prompt engineering hack: provide concrete examples with your prompts, especially when they are action-oriented.
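As a concrete illustration of the examples-in-prompts tactic, here is a minimal few-shot sketch: two worked examples are embedded in the prompt so the model can infer the expected format before handling the real input. The tasks and the rewriting format are invented for illustration.

```python
# Few-shot prompting: the model sees two input/output pairs, then the
# real input, and is expected to continue the pattern.

few_shot_prompt = """Rewrite each task description as a single actionable step.

Task: "We should probably think about backups at some point."
Step: "Schedule automated daily backups by Friday."

Task: "Customer emails keep piling up."
Step: "Assign one owner to triage the support inbox each morning."

Task: "{task}"
Step:"""

print(few_shot_prompt.format(task="Nobody knows who updates the website."))
```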
Now, let’s get into the nitty-gritty details.
Complex Problem Solving
Complex problem-solving is not unique to humans; however, to say that humans face more complex problems than any other species wouldn't be far-fetched—humans are the only species that builds technology to solve problems, and in doing so, creates even more problems. LLMs won't solve complex problems for us, but they can help us break them down, consider alternative viewpoints, establish problem-solving objectives, and facilitate a more profound understanding of the context or environment in which the problem occurs.
Problem breakdown: Provide a detailed written summary of the problem you're facing and prompt the LLM to break it down in terms of the key themes, actions, and/or objectives you've identified. Providing one or several examples of the kind of problem breakdown you're looking for can also significantly enhance the LLM's output quality and depth (see the sketch after this list).
Identifying key problem-solving objectives: Follow the same steps as above, but prompt the LLM to conduct a problem breakdown whereby each section of the problem corresponds with an actionable goal. Following this, prompt the LLM to hierarchically categorize each goal with respect to a high-level objective—this tends to work better when you reiterate the process for each problem section rather than the whole problem all at once.
Problem-solving strategy: Once you have defined problem-solving objectives, you can then begin building a problem-solving strategy—recall that the problem breakdown unveils actionable goals. Now, you can prompt the LLM to develop a sequence of steps or actions for each of these actionable goals, to ensure that they align with high-level objectives. Be careful though, since LLMs’ planning abilities can be unreliable and any steps or actions proposed will likely require further human review and validation.
Interdisciplinary approaches: Complex problems have multiple solutions, and understanding which one is best can require an interdisciplinary approach—once you have drafted your problem breakdown and problem-solving strategy, include them in your prompt, and further prompt the LLM to compare and contrast your approach with similar approaches spanning different disciplines and perspectives.
Environmental context: Describe in detail 1) the context in which the problem you are facing occurs and 2) the problem itself, then prompt the LLM to extrapolate from this context what other factors might be influencing the problem. However, don't take these factors for granted; be sure to closely review them.
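The first three techniques above chain together naturally. Below is a minimal Python sketch of that chain; ask_llm is a hypothetical placeholder for whichever LLM client you use, and the support-team problem is an invented example.

```python
# Breakdown -> objectives -> strategy, with each prompt consuming the
# previous output.

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in: replace with a real call to your LLM provider."""
    return f"[LLM response to: {prompt[:60]}...]"

problem = (
    "Our support team is overwhelmed: ticket volume doubled in six months "
    "and average response time tripled."
)

breakdown = ask_llm(
    f"Break the following problem down into key themes, actions, and objectives:\n{problem}"
)
objectives = ask_llm(
    "For each section of this breakdown, state one actionable goal, then group "
    f"the goals under high-level objectives:\n{breakdown}"
)
strategy = ask_llm(
    "Propose a sequence of steps for each actionable goal, ensuring alignment "
    f"with its high-level objective:\n{objectives}"
)
```

Because each intermediate output stays inspectable, this structure leaves room for the human review and validation the post calls for at every step.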
Abstract Reasoning
Since the dawn of civilization, humans have maintained a fascination with questions and ideas bigger than themselves, which has powered many of our greatest discoveries, from space travel to AI. However, making sense of these kinds of thoughts, especially when they aren't wholly grounded in or substantiated by physical reality, can be challenging, particularly for those who are hard-skill oriented. In this respect, LLMs can simplify certain parts of the abstract reasoning process and help with a variety of related subtasks, namely concretizing abstract information.
Thematic analysis: LLMs are great at pattern recognition—when you're struggling to grasp the thematic elements ingrained in a difficult abstract text, such as a philosophy dissertation, you can feed parts of the text (large texts should be broken down into chunks) into the LLM and request that high-level themes be identified and extracted. Once you've identified all high-level themes, you can include them in another prompt and ask the LLM to analyze them to reveal ongoing thematic connections and related insights (see the sketch after this list).
Sentiment analysis: The emotional undertones of a text can inform the foundation on which abstract ideas and concepts are built—by following the same steps as above, you can prompt LLMs to conduct sentiment analyses, whereby results are then mapped onto any key themes that were previously identified.
High-level overviews and key takeaways: Sometimes, things appear to be more abstract than they actually are, mainly because abstract ideas can be convoluted and confusing. When dealing with this kind of material, you can feed it into an LLM and request a high-level overview accompanied by key takeaways, but be sure to provide a detailed outline of the parameters you want the LLM to follow in generating its answer—what are the objectives of the overview and what should the key takeaways reveal?
Relating disparate ideas and concepts: Understanding how disparate ideas relate to each other, even when they’re concrete, can be cognitively exhausting. When you have a few ideas you think might be abstractly related to one another, simply prompt an LLM to suggest some possible connections—if you already have a general conception of the relational structure between ideas, you may be able to gain more targeted insights by describing this structure in as much detail as you can in your prompt.
Outlining a chain of thought: Once you’ve determined the abstract connections between various concrete ideas and/or how abstract ideas relate to each other, you can then prompt an LLM to outline a chain of thought. Seeing as this chain of thought will be predicated upon logic, it’s critical that you possess a general understanding of the logic you want to uphold and describe it accordingly—in steps if possible—in your prompt.
Grounding abstraction in tangible concepts: We can’t yet empirically prove the existence of human consciousness, yet we often question its nature, because the experience of it is self-evident—most abstract ideas have roots in reality. Prompting an LLM with an abstract idea and then asking it to trace back its origin can help reveal the extent to which it’s grounded in reality or some tangible concept.
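For the chunked thematic analysis described above, here is a minimal sketch; ask_llm and chunk are hypothetical helpers, the chunk size is an arbitrary assumption, and the placeholder text stands in for a real long document.

```python
# Split a long text into chunks, extract themes per chunk, then ask for
# cross-chunk thematic connections.

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in: replace with a real call to your LLM provider."""
    return f"[LLM response to: {prompt[:60]}...]"

def chunk(text: str, size: int = 2000) -> list[str]:
    """Naive fixed-size chunking; a real pipeline might split on sections."""
    return [text[i:i + size] for i in range(0, len(text), size)]

dissertation = "The dialectic of appearance and essence... (imagine a book-length text here)"

themes = [
    ask_llm(f"Identify and extract the high-level themes in this passage:\n{part}")
    for part in chunk(dissertation)
]
synthesis = ask_llm(
    "Analyze these themes and reveal thematic connections across them:\n" + "\n".join(themes)
)
```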
Long-Term Planning
Many of humanity’s greatest creations and advancements could not have occurred without long-term planning—building a skyscraper, a rocketship or even AI requires the coordination of enormous amounts of time and resources, which is typically achieved through long-term planning. However, leveraging LLMs to assist with long-term planning can be tricky due to their limitations in sequential/hierarchical planning and dealing with ambiguity. Nonetheless, LLMs can still prove highly useful in this context at both the early and mature stages of the planning process.
During the early planning stage, LLMs can be useful for:
Determining a desired outcome, identifying planning stages, subdividing plans into manageable chunks corresponding with each pre-identified planning stage, and establishing benchmarks: The first three prompt techniques previously outlined under complex problem solving can also be applied here: 1) describe your desired outcome(s) in as much detail as possible and prompt the LLM to suggest a plan structure, 2) prompt the LLM to break down your plan into key planning stages by reference to desired outcomes, and 3) prompt the LLM to suggest benchmarks that correspond with each planning stage and desired outcome. Seeing as plans take the form of real-world actions, and LLMs lack a world model, human oversight and validation will be essential components of this process.
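Here is a minimal sketch of that three-prompt sequence; ask_llm is a hypothetical placeholder for a real LLM call, and the desired outcome is an invented example.

```python
# Early-stage planning chain: outcome -> plan structure -> stages -> benchmarks.

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in: replace with a real call to your LLM provider."""
    return f"[LLM response to: {prompt[:60]}...]"

outcome = (
    "Launch a self-serve onboarding flow that cuts support tickets by 30% "
    "within a year."
)

structure = ask_llm(f"Given this desired outcome, suggest a plan structure:\n{outcome}")
stages = ask_llm(f"Break this plan into key planning stages tied to the desired outcome:\n{structure}")
benchmarks = ask_llm(f"Suggest benchmarks corresponding to each planning stage:\n{stages}")
# Human review remains essential: validate every suggested stage and benchmark.
```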
During mature planning stages, LLMs can be leveraged to:
Identify potential areas of uncertainty: Once you’ve established a long-term plan, you can then feed it back into the LLM, and request that potential pain points, bottlenecks, vulnerabilities, and/or areas of uncertainty be identified. Providing a detailed summary of the context in which you intend to execute this plan, which includes the time and resources you require, can also enhance the quality and utility of LLM outputs.
Evaluate the level of uncertainty: To evaluate the level of uncertainty within certain parts of your plan, you need to describe the uncertainty (i.e., risk) threshold you are comfortable with. You need to be extremely specific here, and provide concrete metrics in your prompt if you have them—what kinds of uncertainty can you handle/not handle (e.g., risk to reputation vs. risk of harm to consumers), and what factors will you use to measure uncertainty (e.g., ROI vs. threat of harm vs. compliance)? Once you’ve answered these questions, you can prompt the LLM to analyze the level of uncertainty you face.
Analyze benchmark completion: To ensure that your plan actually works, you'll need a way to determine whether benchmarks are reached. To do this, you can follow the same process as above, but your questions should target 1) what determines whether a benchmark has been successfully reached (e.g., "X" amount of dollars made), and 2) what metrics are used to evaluate benchmark success (e.g., "X" amount of dollars were made, but several people were harmed in the process).
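A minimal sketch of an uncertainty-evaluation prompt along these lines appears below; all thresholds, metrics, and plan stages are invented placeholders, included only to show the level of specificity the technique calls for.

```python
# The risk threshold and evaluation metrics are spelled out explicitly
# rather than left for the model to guess.

risk_prompt = """Here is my long-term plan:
{plan}

Analyze the level of uncertainty in each stage, using these constraints:
- Acceptable: reputational risk with likelihood under roughly 10%.
- Unacceptable: any risk of harm to consumers, at any likelihood.
- Measure uncertainty against: projected ROI, threat of harm, and compliance exposure.
- Flag any stage where you lack enough information to judge, rather than guessing.
"""

print(risk_prompt.format(plan="Stage 1: pilot with 50 users. Stage 2: regional rollout."))
```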
Social Bonding
At the surface level, it’s not exactly clear how LLMs could enhance social bonding among humans. However, when we consider the fact that humans typically form social bonds through shared experiences with others, LLMs can emerge as a useful empathy-enhancing tool (insofar as close attention is paid to potential biases in LLM outputs).
Building a more complete worldview: Despite how digitally connected we think we are to others around the world, the majority of us still exist very narrowly—our vision of the world is far from complete. Fortunately, LLMs can be one of our best allies here. By including general, non-identifying information on our upbringing, social status, culture, and lived experience in a prompt, and then asking an LLM to point us toward sources of information that differ from what we think we know, we can correspondingly expand our worldview. If you do decide to go this route, be sure not to include any sensitive personal information in your prompts, to protect your privacy as a user.
Making sense of others' experiences: Empathy works well when we share experiences with others—when we act out of empathy in the absence of shared experience, the consequences can end up producing more harm than good. In this context, we can 1) describe the profile of someone (e.g., an individual or group) we know but struggle to relate to (like we did above for ourselves), 2) outline a similar experience that we've both had, and 3) ask the LLM to envision a few possible ways in which the other's experience might differ from our own. Once more, be careful with the information you decide to include, and take note of the fact that historical and systemic biases still have a strong foothold in LLMs.
Challenging your own biases: LLMs can be instrumental in helping us minimize the role that certain biases play in our decision-making. For one, LLMs themselves are biased—being curious while interacting with LLMs and considering whether outputs may be biased in some way can be a great way to deepen our understanding of the biases we hold, especially because LLM biases mirror human biases. Second, we can explicitly ask LLMs to de-bias the content that we feed them, whether it's something we've written or otherwise—if you know you have a tendency toward a specific kind of bias, you can also include this in your prompt to signal to the LLM what it should look out for.
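To illustrate that explicit de-biasing request, here is a minimal sketch; the named bias, output format, and passage are all invented for illustration.

```python
# The known tendency is named in the prompt so the model knows what to
# look for, and the output format keeps the critique actionable.

debias_prompt = """Review the passage below for bias. I know I tend toward
confirmation bias, so pay special attention to places where I only cite
evidence that supports my existing view.

For each instance you find:
1. Quote the biased phrasing.
2. Name the likely bias.
3. Suggest a more balanced rewording.

Passage:
{passage}
"""

print(debias_prompt.format(passage="Every study agrees remote work is strictly better."))
```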
Language
Language forms the essence of human communication, and is increasingly important in a world where face-to-face interactions are becoming progressively less common—humans use a variety of complex social cues to communicate, but when most communication is digitized, these kinds of social cues will be much more difficult to interpret, and many will be lost. In this respect, LLMs are remarkably powerful tools that humans can leverage to refine their language and ensure that their ideas and feelings are clearly communicated.
Text editing: LLMs are fantastic editing tools and can perform a variety of tasks, from simple formatting and SEO optimization to more complex content-editing tasks like argumentative feedback and structural critique. Understanding how to prompt LLMs for editing purposes depends on whether you can provide a detailed explanation of what you want to gain from a particular kind of edit or form of feedback—you can increase the degree of output sophistication by outlining specific parameters for the LLM to follow when providing feedback (e.g., focus on word choice and sentence structure). For higher-level editing, you can also prompt an LLM to interpret the text you feed it and score it according to an editing rubric that prioritizes key areas of importance to you, such as clarity, cohesiveness, tone, argumentative structure, and so on (see the rubric sketch after this list). If you choose to go this route, you will need to either use an existing rubric or create your own, clearly defining each measure you select alongside relevant examples.
Text summarization: LLMs’ text-summarization capabilities are unparalleled, which is critical in a world that’s so heavily saturated with information. By prompting LLMs to break texts down into manageable chunks, summarize them, and extract key takeaways, you can dramatically increase the amount of information you ingest without having to spend a fraction of the time you would normally. Leveraging summarization capabilities can also be useful in more complex or abstract contexts, such as when numerous different kinds of content or broad themes and ideas must be summarized and categorized accordingly.
Knowing your audience: For any kind of writing, knowing your audience is essential, regardless of whether the audience is general or specific—some ideas and concepts are universal, while others only appeal to certain groups or individuals. By describing the concepts and ideas you wish to address in your writing, and then prompting LLMs to suggest a few different audience demographics that might be of relevance, you can gain a much better understanding of who to target. To know how to target your audience—how to structure your ideas and how to find the right tone—you can select an audience demographic, describe it in detail in your prompt, include the ideas and concepts you wish to address, and ask the LLM to suggest some alternatives and/or changes to your ideas that might make them more impactful and interesting to your target audience.
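To make the rubric-scoring idea under text editing concrete, here is a minimal sketch; the rubric dimensions, definitions, and 1-5 scale are illustrative assumptions, not a prescribed standard.

```python
# Build a rubric where each measure is defined with an example, then ask
# the model to score a text against it and justify every score.

rubric = {
    "clarity": "Ideas are stated plainly; e.g., 'We lost money' beats 'Negative revenue dynamics emerged.'",
    "cohesiveness": "Paragraphs follow logically; transitions connect each point to the last.",
    "tone": "Register matches a professional blog: direct, not stiff.",
    "argumentative structure": "Claims are supported; counterpoints are acknowledged.",
}

rubric_text = "\n".join(f"- {name} (1-5): {definition}" for name, definition in rubric.items())

editing_prompt = f"""Score the text below on each rubric measure, justify each
score with a quoted example from the text, and suggest one concrete fix per measure.

Rubric:
{rubric_text}

Text:
{{text}}
"""

print(editing_prompt.format(text="Our product is good. People like it. Buy it."))
```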
Morality
Morality penetrates every human domain, from science to politics. Today, morality has admittedly made itself at home in some places where it doesn't quite belong; however, this speaks to the larger importance of this mechanism: morality often determines whether certain ideas or actions are worth pursuing. You don't have to agree with a moral structure to find value in moral reasoning—there are a lot of weak moral arguments out there—and this is where LLMs can prove highly useful.
Understanding moral reasoning: When humans justify things, it's usually via their moral compass—understanding the moral reasons others might invoke for their actions is another thing entirely. LLMs lack morality, but they are trained on the works of many of the world's most renowned philosophers, ethicists, and thinkers, and can therefore be leveraged to uncover moral reasoning structures across situations we're not familiar with. To take advantage of this capability, you can prompt an LLM with a moral claim like, "Poverty shouldn't exist," and then ask it to adopt the moral perspective of a given viewpoint or individual. For instance, what moral reasons might a capitalist offer to substantiate this claim, and how might these reasons contrast with those offered by Confucius, Aristotle, and Nietzsche? (A sketch of this prompt follows this list.)
Aligning moral reasoning with reality: Morality is about what should happen, whereas reality is about what does happen—sometimes, what should happen simply can't happen. Aligning moral reasoning with reality depends on whether moral objectives can realistically be achieved. To leverage an LLM for this purpose, first describe the moral objectives you wish to uphold and the real-world context of the problem you're addressing (i.e., real-world objectives and limitations), then conclude the prompt by asking the LLM to determine whether it's possible to map your proposed moral objectives onto that context. For this to work well, a lot of detail is required, and specific parameters, such as "if a moral objective can't be easily achieved in reality, identify the reasons why," should also be included.
Refining your own moral compass: Leveraging LLMs in the above-stated ways can ultimately result in a more refined moral compass. However, if you want to get even more targeted with it, you can describe your core values in detail and prompt an LLM to critique each one by reference to contradictory moral arguments, to uncover weaknesses or vulnerabilities in your moral reasoning structure. You can also ask an LLM to evaluate your core values with respect to the evolution of human morality, both at the level of civilization and individual societies around the world.
Understanding where morality doesn't belong: Most things are morally-laden, but not all of them deserve to be. For instance, history shouldn't be omitted or rewritten because it's deemed "offensive," and well-respected academics shouldn't have their careers destroyed when they politely disagree with social norms. LLMs can't tell us in what kinds of situations morality belongs, but if you use them in the ways we discussed above, they will certainly enhance your moral judgment, and allow you to navigate these kinds of situations with more compassion, consideration, critical thinking, and objectivity than you otherwise would.
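Here is a minimal sketch of the perspective-taking prompt described under moral reasoning, built around the post's own example claim; the exact wording is an assumption.

```python
# Moral perspective-taking: the model reconstructs reasoning structures
# from its training data rather than rendering genuine moral judgments.

moral_prompt = """Consider the moral claim: "Poverty shouldn't exist."

1. What moral reasons might a capitalist offer to substantiate this claim?
2. How would Confucius, Aristotle, and Nietzsche each reason about it?
3. Contrast the capitalist's reasons with each philosopher's reasoning,
   and name the underlying moral framework each response reflects.

Remember: you are reconstructing reasoning structures, not issuing
genuine moral judgments.
"""

print(moral_prompt)
```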
Creativity
The threat AI poses to human creativity is real, but to say that AI will only hurt human creativity would be both untrue and unfair to the many artists and creatives leveraging AI in their work today. At its core, creativity is about expanding the way we think, and LLMs can be integral to this process by helping push the boundaries of our preconceived notions.
Dismantling bias: To think outside the box, we need to know what the box is made of, and it’s usually our biases—what we automatically assume we can’t or aren’t allowed to do. To leverage LLMs to increase creative freedom and dismantle bias, you can follow the same approach laid out under the social bonding section, but be sure to tailor your prompts to biases specific to your intended creative output.
Thinking bigger: Sometimes, we don't want to break down the box, we just want to expand it, so there's more room for us to play inside. Seeing as most LLMs are trained on the wealth of text-based human knowledge, we can leverage them to reconnect with some of humanity's greatest minds, discoveries, and creative endeavors, to push ourselves to think bigger. If you have a great idea for a painting, prompt an LLM to evaluate it from the perspective of a well-known painter whose style resembles your own; if you've made a major discovery but don't know how to handle its ramifications, prompt an LLM to explore whether similar discoveries have been made previously and what means were taken to address them. The array of possibilities here is essentially infinite.
Getting things done differently: The box is fine, but it’s old and outdated—we need to make a new box, which means we need to figure out what materials we need. In other words, our creative objective remains the same, but the approach we use changes. Leveraging LLMs to explore the potential benefits and drawbacks of different creative ideation strategies, such as design thinking, mind mapping, or storyboarding, in a context-specific manner can be highly informative. Utilizing LLMs for this purpose requires a clear description of the proposed creative context—your creative skills, tools, medium, and objective—coupled with a description of what you hope to gain by exploring different creative strategies (i.e., parameters like “I want to improve step X of my creative process”). For those wanting to take a deeper dive into this kind of LLM-driven creative strategy formation—which is likely to require repeated (iterative) prompting and chaining (running prompts in sequence)—you can go a step further by prompting an LLM to synthesize multiple different creative frameworks and fine-tune an ultimate strategy suited to your specific creative needs.
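For the iterative prompting and chaining mentioned above, here is a minimal sketch; ask_llm is a hypothetical placeholder for a real LLM call, and the creative context is an invented example.

```python
# Creative strategy formation by chaining: compare frameworks for a given
# context, then synthesize them into one tailored strategy.

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in: replace with a real call to your LLM provider."""
    return f"[LLM response to: {prompt[:60]}...]"

context = (
    "Skills: watercolor, digital illustration. Medium: children's book art. "
    "Objective: a more consistent visual style. "
    "Parameter: I want to improve the concept-sketch step of my creative process."
)

comparison = ask_llm(
    "Compare design thinking, mind mapping, and storyboarding as creative "
    f"ideation strategies for this context:\n{context}"
)
synthesis = ask_llm(
    "Synthesize these frameworks into a single strategy tailored to the "
    f"context above:\n{comparison}"
)
```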
There are obviously many other ways in which LLMs can be leveraged as AI tools for cognitive enhancement. Still, the prompting techniques, strategies, and approaches we discuss in this section offer numerous high-utility, high-impact starting points for LLM-driven improvements in human thought processes. Before concluding, we also remind readers of a critical point: LLM prompting approaches are not necessarily mutually exclusive—even though we discussed specific prompting techniques in relation to certain cognitive faculties, many of these techniques are transferable, so readers should maintain an open mindset when developing their prompt engineering skills repertoire.
Conclusion
LLMs have reached a remarkable level of popularity with the general public because it doesn’t take much to use them—all you really need is an internet connection and the capacity for language, which many people, even those in less wealthy countries, already have. However, the ease of use and accessibility of LLMs can be tricky to navigate if you’re not AI literate. If you don’t know how to create a well-structured prompt or understand the tasks for which LLMs are poorly suited, adopting the superficial “let’s just ask ChatGPT” approach is likely to prove more destructive than productive.
Sure, AI will simplify many things, but it'll also complexify many others—as the arbiters of our own lives, we must hold ourselves accountable for our use of technology. The choice of how much we delegate to or rely on AI is ultimately ours. If we want to leverage LLMs to make ourselves smarter, we also need to understand how they can make us dumber.
Moving forward, part two of this series will look further into the future, venturing beyond LLMs and speculating about how the likely synthesis of AI with numerous different kinds of technologies, from entire homes to wearable devices, will change how humans exist in the world. This fundamental change will reshape our lived experience and subsequently inform the development of a new kind of world model—one that humanity has yet to experience or entertain. To some, this world may appear far off and not worth thinking about, and if you happen to fall into this camp, we urge you to consider the following point: humans suck at grasping the concept of exponential growth.
AI is an exponential technology—one year's worth of progress made next year may be equivalent to five years' worth of progress under current conditions. We may not be able to predict, with certainty, where AI will be even a few years from now. But we do know one thing for sure: AI will inspire much more change than we expect. The ability to embrace this change will not only be a matter of mindset but also a direct consequence of how AI literate we are.