Introduction

In this final post in our series on AI and human thought, we consider the current course, evolution, and implications of the human-AI relationship, viewing it through the lens of AI literacy. We begin by breaking down AI’s two-fold function as a resource and a tool, then examine what it means to be a “user” in the age of AI, and conclude by consolidating our understanding of the human-AI relationship.

We began this series by examining several ways in which Large Language Models (LLMs) can be leveraged to enhance or diminish human thought processes, highlighting the importance of AI literacy as a mechanism that enables effective and responsible AI use. 

In our second post, we widened our perspective and ventured into less familiar territory, envisioning a world where AI is omnipresent and considering its impacts on human thought and experience. There, we drew the distinction between technologies of necessity and technologies of convenience, and demonstrated why AI literacy is critical as a skill that allows us to determine to what degree reliance on AI is appropriate.

In this post, we’ll build on these ideas, diving into an equally interesting yet distinct concept: the current and future trajectory of the human-AI relationship. As in our previous posts, we’ll dissect this topic through the lens of AI literacy, focusing on the pragmatic value that AI literacy offers in fostering a continual and proactive understanding of this very relationship. Before we tackle this subject, however, we need to consider what human relationships with tools and technology have looked like in the past.


For most of human history, humans have been “users” or “creators” of tools. For instance, we used pots to cook, hammers to build, weapons to hunt, fight, and protect, cars and planes to travel, phones to communicate, TVs to entertain, and computers to help us solve complex problems. Likewise, we’ve leveraged technological resources like hydroelectric power and coal to fuel our factories and create new tools and resources, such as solar panels and wind turbines, that we then utilize to replace or improve upon earlier ones. The point is that tools were always created to be used by someone, somewhere, for some purpose.

Each of the examples above constitutes a technology with a pragmatic, task-oriented purpose, which underlies its classification as a tool. However, technologies are not tools by default; in fact, the value of a tool is defined by whether it can be used in a certain context to accomplish a necessary task or objective. Beyond the worth of its parts, an engine is worthless if it doesn’t power something, and a weapon is unnecessary in the absence of any need for protection or survival. Similarly, if the world we lived in were purely analog, electricity would be useless until the creation of technologies that harness it. So, why is this relevant to AI?

To answer the question we’ve just posed, we’ll begin this discussion by outlining why AI can function as both a tool and resource—but we’ll also go one step further, arguing that this two-fold function is fundamentally different from those of other technologies like solar panels and wind turbines. Extrapolating from this argument, we’ll then explore how the two-fold function of AI shifts our understanding of humans as tool “users” to something else. Following this, we’ll break down why AI literacy plays an integral role in substantiating an informed understanding of the human-AI relationship, both today and in the future. 

AI as a Tool and a Resource 

As we’ve seen with the advent of frontier AI models like ChatGPT and Gemini, AI systems, especially generalist agents (agentic models capable of performing a wide variety of tasks across different domains), can be extremely valuable tools: they help us streamline research, data analysis, and the identification of actionable real-world insights, while also driving creative ideation, supporting strategy execution, and assisting with workflow management. In such cases, AI produces immediate pragmatic value; by utilizing it, we directly increase our ability to accomplish a task, reach an objective, or solve a problem.


Nonetheless, AI can also provide indirect value as a resource that’s leveraged to enhance existing tools or create new ones. In this respect, recall all the examples of “smart” technologies we reviewed in our previous post, like smart mirrors and vacuums. Alternatively, consider technologies like AlphaFold and FrameDiff, designed for protein structure prediction and de novo protein design, respectively, which will likely play a major role in the development of future gene therapies, pharmaceutical treatments, and scientific research. In other words, AI can provide value as an independent product or as a component of a product; keep this in mind as we move forward.

In a technological context, resources are typically defined by their ability to power technologies, whereas technologies are defined by their ability to harness resources effectively. For instance, a wind turbine harnesses wind to produce electricity; the more effective the turbine, the more efficiently it converts wind power into electric power. Now, let’s say we want to optimize wind turbine efficiency, so we build a predictive AI model trained on massive amounts of wind turbine data from all around the country. When we feed it our own turbine data, the model proposes a new wind turbine design that’s supposedly 20% more efficient than its predecessor. After the engineering team reviews the new design, the project is approved and work begins. In this example, is AI a resource or a tool?


The answer is both. Technologies can become resources when co-dependencies emerge, but this doesn’t mean they lose their status as tools. In the example above, AI is a tool because we directly use it to improve upon the previous wind turbine design, but it’s also a resource being harnessed to streamline the development of a design that would otherwise have taken much longer to create. This characteristic isn’t unique to AI: dams become resources when they’re used to power things like factories or cities, whereas cars emerge as valuable resources in rural communities where the nearest hospital or school might be far away.
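To make the turbine scenario concrete, here’s a minimal sketch of what such a predictive model might look like, assuming scikit-learn and entirely synthetic data. The features (blade length, hub height, rated power), the efficiency formula, and the candidate design are hypothetical stand-ins; in practice, a model like this would score proposed designs rather than generate them outright.

```python
# Minimal sketch: predicting turbine efficiency from design features.
# All data, feature choices, and numbers here are synthetic and illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(seed=42)

# Stand-in for "wind turbine data from all around the country":
# columns = blade length (m), hub height (m), rated power (kW).
X = rng.uniform(low=[30, 60, 1500], high=[80, 140, 5000], size=(1000, 3))

# Hypothetical ground-truth efficiency with a little measurement noise.
y = 0.30 + 0.002 * X[:, 0] + 0.0005 * X[:, 1] + rng.normal(0, 0.01, 1000)

model = GradientBoostingRegressor().fit(X, y)

# Score a proposed design before handing it off to the engineering team.
candidate = np.array([[72.0, 120.0, 4200.0]])
print(f"Predicted efficiency: {model.predict(candidate)[0]:.3f}")
```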


Nonetheless, there’s still something strangely unique about AI’s two-fold function. Simply put, the value that AI provides as a tool or resource isn’t bound to any specific context. A dam in the desert is pointless, just like a car in a city with great public transport infrastructure; by contrast, if we were to take our wind turbine AI and retrain it on enough power grid data, we’d obtain similarly insightful recommendations for efficiency improvements in power grid design. We can also extend this reasoning to generalist agents, which, by definition, are designed to be generally capable of performing a wide variety of tasks across different contexts. A tool like ChatGPT is both instrumentally and intrinsically valuable, serving as a means to an end and an end in itself: you can leverage it to streamline research into a complex topic while also using it as a thought partner that you bounce ideas and concepts off of in the absence of a clear goal or objective.

In a similar vein, let’s return to the idea that AI can provide value as a standalone product or as a component of a product. Once more, this property isn’t unique to AI: street lights help us safely navigate our cities and roads at night, but we also have headlights in our cars and many different kinds of lighting fixtures in our homes. Where AI differs from other technologies with this property is that, when it functions as a component of a product, it can monumentally increase the value of the very product it’s a part of.

For instance, regardless of whether light is used on the street or in the home, the value it provides is constant—it allows us to see better, and it would also be absurd to purchase a car without headlights or a home with no lighting fixtures. That being said, consider another example. If all you’re interested in is keeping track of time, a regular watch might be the right choice for you, but if you have the budget and want to track your fitness activities, sleep, and overall health, a smartwatch with built-in AI features is objectively a much better option. Alternatively, consider the case of autonomous cars as a thought experiment—why would we be trying to create fully autonomous cars if we didn’t think they would prove far more valuable than conventional cars? The point is this: if you find a way to integrate AI into a product, you fundamentally alter the nature of that product—no one reasonably buys a smartwatch purely because it tells time. 

From Users to Something Else 

Now that we’ve discussed what makes AI’s two-fold function as a tool and resource unique, we can begin examining the consequences of this argument, namely what they mean for the human-AI relationship. We’ll address two specific forms of AI technologies here: 1) generalist agents, and 2) products with built-in AI features. While these two kinds of technologies most clearly illustrate the points we wish to make, it’s important to note that we’re not implying that the arguments in this section are non-transferable to other types of AI or AI-driven technologies. Nonetheless, let’s get to it. 


Historically, the term “user” described individuals as components of a larger system, whereas today, it’s vaguely understood as a word for a person who uses a digital product or service. Regardless, the term continues to imply a form of interaction with a system or product whereby the system or product always represents a means to an end, working for the user rather than with the user; we’ll return to this claim later on.

For example, you might join a social media site to widen your array of available marketing channels for the product you’re launching, or you might simply be interested in exploring different kinds of content and connecting with friends. Sure, social media can be valuable to different people for different reasons, but its universal attractiveness stems from a specific quality: the ability to provide a personalized experience for each individual who uses it. This quality, fueled by machine learning and AI, is present in virtually every consumer-centric digital information technology, from search engines, e-commerce sites, and news platforms to edge devices like wearables and even electric vehicles. Ironically, the personalized digital experience is only viable because users themselves have been commodified as part of it: without user data, the personalization capabilities that such platforms and technologies provide would be severely limited, if not entirely inadequate.


So, in a technological context, it seems that a user isn’t just someone who uses digital information technology as a means to an end. More precisely, a user is someone who benefits from a personalized digital experience at some cost, most notably privacy; consider how odd it would be to refer to a chef as a user of knives or a doctor as a user of a stethoscope. In a nutshell, users don’t actually have to do anything requiring a higher level of thought or action, other than joining a platform, buying a product, or liking a post, to realize the benefits of the personalized digital experience. But when it comes to generalist agents, things start to get muddy.

While generalist agents like ChatGPT, Claude, Gemini, Falcon 180B, and Llama are fundamentally limited in their abilities for personalization, this will surely change as further advancements in the generative AI ecosystem occur. Assuming generalist agents will soon possess high-level personalization capabilities, what might make the term user unsuitable? 

For one, interacting with generalist agents, irrespective of their personalization capabilities, requires something more than passive engagement. In rudimentary cases like inquiring about a historical fact or interpreting basic data, the value that generalist agents provide might be immediate—a simple prompt is often enough to elicit actionable insights in low-level cases. However, when it comes to high-level complex tasks like building a sales pitch, defining a marketing strategy, identifying connections between abstract concepts, or refining a piece of long-form content, a lot more work is needed. 

Moreover, for those who regularly leverage such models, it’s clear that we often turn to them with no specific goal or objective in mind, finding value in their role as a thought partner, a source of creative inspiration, or simply a tool that helps us reflect on our own lives, career choices, politics, and pretty much anything else of interest.

Additionally, while generalist agents are easy to use, they’re not necessarily easy to optimize. Social media, e-commerce sites, and search engines all optimize your experience for you, but with generalist agents, the quality of your inputs is a major factor influencing the quality of the model’s outputs. If you don’t know how to create well-structured prompts that are coherent, set clear parameters for the model to follow, and provide illustrative examples, you’ll find that your experience with generalist agents proves sub-optimal. In other words, you need to actively learn how to optimize your use of these technologies within specific contexts, which can require a lot of experimentation and, most importantly, AI literacy.
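To illustrate what a well-structured prompt can look like in practice, here’s a minimal sketch using the OpenAI Python client. The model name, role, and prompt content are illustrative assumptions; what matters is the shape of the prompt: a coherent role, clear parameters, and an example for the model to pattern-match on.

```python
# Minimal sketch of a well-structured prompt using the OpenAI Python client.
# Model name, role, and content are illustrative assumptions; the shape of
# the prompt (role, parameters, example) is the point.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; substitute any capable model
    messages=[
        {
            # 1) A coherent role and context for the model.
            "role": "system",
            "content": "You are a marketing strategist for B2B software companies.",
        },
        {
            # 2) Clear parameters (audience, format, constraints) and
            # 3) an illustrative example the model can pattern-match on.
            "role": "user",
            "content": (
                "Draft three taglines for a data-privacy product.\n"
                "Audience: compliance officers at mid-size banks.\n"
                "Constraints: under 10 words each; no jargon.\n"
                "Example of the tone we want: 'Your data, finally boring.'"
            ),
        },
    ],
)
print(response.choices[0].message.content)
```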

All that being said, the question of how to modify or replace the term user in this context is perhaps more easily answered by considering what role generalist agents play in our lives. Are they assistants, collaborators, research aides, digital companions, interactive knowledge bases, or some combination of all of these roles? 

Given their versatility, generalist agents can fulfill all these roles and others to some degree, illustrating their two-fold property as tools and resources, and implying that their broader role aligns with that of a thought partner, deliberator, or even intellectual explorer. If this is true, then the role humans play in their interactions with generalist agents might be more accurately described as that of an active user (this won’t always be the case, and we’ll come back to this idea shortly). This distinction may appear trivial, but consider its ramifications: it would allow us to clearly distinguish between technologies that require a more active vs. passive form of user engagement, which would in turn allow regulators, industry specialists, safety researchers, and philosophers to craft policies, ethics frameworks, and risk management approaches that are much more targeted and standardized, without having to rely on generic, ill-defined terms like users, consumers, or affected persons.

Moreover, this distinction would also allow us to identify when AI is being used as a resource or a tool. For example, in the case of products with built-in AI features, like wearables or other smart appliances, AI functions as a resource, powering the personalization capabilities that such technologies offer. If you find that you benefit from built-in AI features without having to do anything tangible to realize these benefits—other than purchasing the product and using it as you see fit—this is a signal that you’re in the position of a passive user. But passive users aren’t solely defined by their use of technologies with personalization capabilities, seeing as this distinction also allows us to peer into the nature of our relationship with generalist agents. 

If you leverage a generalist agent as you would a search engine, posing simple questions like “Why do dogs bark?” or “What was Abraham Lincoln known for?”, then the agent is working for you rather than with you; recall our earlier point in this respect. When a generalist agent represents nothing more than a means to an end, the term passive user is appropriate: if you find that you rarely write prompts longer than a few words or a sentence or two when interacting with models like ChatGPT, you’re likely in this position. For those who maintain long-form conversations with generalist agents, providing highly detailed prompts and taking full advantage of their diverse capabilities, the term active user is best.
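For readers who want to gauge where their own usage falls, the toy heuristic sketched below makes the distinction concrete. The function name, thresholds, and example prompts are arbitrary assumptions rather than a validated measure of engagement, and real engagement is obviously richer than prompt length alone, but the sketch captures the spirit of the passive/active divide.

```python
# A toy heuristic for the passive/active user distinction. The thresholds
# are arbitrary assumptions, not a validated measure of engagement.
def classify_user(prompts: list[str]) -> str:
    """Label a session 'passive' or 'active' based on its prompts."""
    if not prompts:
        return "passive"
    avg_words = sum(len(p.split()) for p in prompts) / len(prompts)
    # Short, one-shot, search-style queries suggest passive use; longer,
    # multi-turn, detailed exchanges suggest active collaboration.
    if len(prompts) <= 2 and avg_words < 15:
        return "passive"
    return "active"

print(classify_user(["Why do dogs bark?"]))  # passive
print(classify_user([
    "Here's my draft sales pitch: ... critique its structure and tone.",
    "Good points. Now rewrite the opening for a skeptical CFO, "
    "keeping the data-driven framing we discussed.",
    "Compare that version against the original on clarity and persuasiveness.",
]))  # active
```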

Evidently, as the repertoire of generalist agents’ capabilities continues to expand and more people learn to optimize their use of these models, it may admittedly become more difficult to distinguish between the terms we’ve suggested. Things will get even more complicated once generalist agents themselves become components of a product, perhaps even reaching a point where they’re embedded in the infrastructure and fabric of society. Nonetheless, this very problem highlights the necessity of clear terms that outline humanity’s relationship with AI, especially as it evolves. Fortunately, this is a solvable problem, and it’s through AI literacy that we’ll discover our solutions.

AI Literacy: Understanding the Human-AI Relationship

Understanding the human-AI relationship through the lens of AI literacy requires that we focus on two specific components of AI literacy: 1) the ability to leverage state-of-the-art commercially available AI systems like Gemini, Stable Diffusion, and ChatGPT effectively (hereafter, we’ll refer to such systems as advanced AI), and 2) the ability to understand where AI might provide value, whether as a component of a product or in the form of a real-world use case. 

For now, the evolutionary trajectory of AI can’t be disentangled from the evolutionary trajectory of humanity (provided that we don’t reach a singularity anytime soon). In other words, technological evolution is simultaneously a direct product and artifact of human evolution—the Industrial Revolution couldn’t have happened without the Agricultural Revolution, which itself was predicated upon the invention of new crop rotation techniques, improvements in selective breeding, and the development of novel farming tools, to name a few relevant factors. 


Consequently, the future trajectory of AI development will be a product of both independent research advancements, likely made by major tech companies and research and academic institutions, and humanity’s collective use of AI, which will inspire market, social, and game-theoretic pressures that select for certain kinds of AI use cases or value propositions. However, there’s no way to guarantee that AI research advancements and collectively-favored AI use cases will, by default, secure a safe and beneficial AI-driven future. If unconvinced by this line of argument, consider the invention of the atomic bomb, which preceded the creation of the world’s first nuclear power plant by almost a decade, despite being inspired by the same set of nuclear reactor experiments run in 1942. Like nuclear power, AI is deeply prone to dual-use cases, some of which could generate catastrophic consequences for humanity, and it’s for this reason that AI literacy is crucial.


To understand how to shape a safe and beneficial AI-driven future, we need to understand what it is that we really want from AI, and we need to be specific. But we can’t do this without some frame of reference that outlines our relationship with AI today. In this respect, the current parameters of the human-AI relationship are defined by what the most sophisticated AI users can do with the most advanced commercially available AI systems to produce unexpected, high-impact AI use cases or tail events. Simply put, changes to the collective understanding of a topic, whether it’s AI or education, are most likely to occur when the extremes become reality; had the atomic bomb never been dropped, humanity might never have realized the horrific scope and consequences of a global nuclear war.


Fortunately, the fundamental collective human tendency to instigate scalable change in response to extreme events cuts both ways—deeply positive tail events or highly impactful scenarios can also cause profound shifts in the collective mindset. Nonetheless, the main point is this: the deeper your understanding of advanced AI is, the more able you’ll be to identify potential high-impact AI use cases and applications, whether they emerge as fine-tuned models or built-in components of a product. Moreover, it’s exactly these types of cases that will produce the most tangible and immediate effects on the human-AI relationship, which will subsequently influence the design, function, and intended purpose of future AI applications, for better or worse. 

We recognize that the level of AI literacy required for fostering a continual understanding of the human-AI relationship is perhaps even higher than it would be for something like learning to optimize your use of generalist agents or anticipating where AI might be integrated at the edge. But this doesn’t make it any less crucial, especially since the process of cultivating AI literacy, like any other educational endeavor, must build upon previous knowledge and experience. So, before concluding, let’s entertain a few specific objectives highlighting what humans might want from AI; we’ll call these AI desires:

  • Maximizing profit by eliminating workflow inefficiency.
  • Automating mundane, time-consuming, and dangerous tasks.
  • Elevating the opportunity to pursue pleasurable experiences.
  • Leveling the intellectual playing field.  
  • Encouraging social interaction and bonding. 
  • Alleviating mental and physical health problems through personalized interventions. 
  • Accelerating the rate of scientific research and discovery. 
  • Exploring unknown corners of the world and universe. 
  • Securing a competitive military advantage.
  • Dehumanizing warfare. 
  • Ensuring the equitable and unbiased distribution of critical goods and services. 
  • Laying the socio-economic groundwork for universal basic income.
  • Enabling the development of ethical and representative governance structures. 
  • Improving public safety and disaster response. 
  • Strengthening national security and intelligence practices. 
  • Orchestrating covert mass surveillance and social control initiatives. 
  • Promoting a concentration of power at the top. 
  • Fostering global connectivity and interdependence. 
  • Gaining deeper insights into human nature and consciousness. 
  • Redefining the boundaries of human creativity. 
  • Reducing cognitive load for complex decision-making. 

We’ll refrain from commenting on the AI desires for now, not because we don’t have anything to say, but because we wish to encourage readers to think critically about these desires on their own and to profoundly examine what the universal human-AI relationship would look like if such desires came to fruition. Are we right to have these AI desires, do we need new ones, and if so, what might they look like and how can we ensure that their value is internalized across cultural, political, economic, and national boundaries? And, on a more nuanced note, how might apparently “positive” AI desires generate negative consequences? 
