Google is doubling down on its AI-powered search experience with four significant upgrades to AI Mode, the feature that replaces traditional link-heavy results with AI-generated summaries and overviews. These enhancements signal the company’s commitment to making artificial intelligence a central part of how people find and interact with information online.
AI Mode fundamentally changes the search experience by providing conversational, contextual responses instead of lists of website links. While this approach can streamline information gathering, it has also sparked debate about its impact on web publishers and raised concerns about AI accuracy. Google’s latest updates appear designed to answer those skeptics while expanding the feature’s practical applications.
The four new capabilities span image analysis, project planning, live camera integration, and webpage assistance—each targeting different user needs and workflows.
Ask questions about your uploaded images
Google is expanding its image analysis capabilities beyond mobile devices to desktop browsers. Previously available only through the Google app on Android and iOS, the capability now lets users upload images directly through AI Mode on desktop and ask specific questions about what they see.
The process works by switching to AI Mode in Google Search, uploading an image, and posing questions about its contents. Google’s AI analyzes the visual elements, cross-references relevant online information, and provides detailed responses along with source links for verification.
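AI Mode itself has no developer API, but readers who want to experiment with the same image-plus-question pattern programmatically can approximate it with Google's Gemini API. The sketch below is a minimal, illustrative example, assuming the google-generativeai Python package; the model name, file path, and question are placeholders rather than anything exposed by AI Mode.

```python
# Minimal sketch: ask a question about a local image using the Gemini API.
# This illustrates the general image-Q&A pattern; it is not AI Mode itself.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

model = genai.GenerativeModel("gemini-1.5-flash")  # example model name
image = Image.open("circuit_diagram.png")          # hypothetical file

# A single request combines the image with a natural-language question.
response = model.generate_content(
    [image, "What does this diagram show, and what are the labeled components?"]
)
print(response.text)
```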
This feature will soon extend beyond static images. Google plans to support PDF uploads in the coming weeks, followed by integration with Google Drive files in the coming months. This expansion could prove particularly valuable for professionals who need to quickly extract information from documents, diagrams, or visual materials without manually reviewing entire files.
Use Canvas to design comprehensive project plans
Canvas is Google’s interactive workspace feature for real-time content creation, from coding to web design. The latest update integrates Canvas with AI Mode to help users build detailed project plans and study guides dynamically.
To access this feature, users launch AI Mode and select the “Create Canvas” option. Google’s AI then assembles relevant information that users can modify and customize according to their specific requirements. Future updates will allow users to upload notes and reference materials to further personalize their plans.
This capability addresses a common workplace challenge: transforming scattered ideas and requirements into structured, actionable plans. Rather than starting from blank templates, users can leverage AI to create comprehensive frameworks that they can then refine based on their particular needs.
Canvas in AI Mode will roll out in the coming weeks, but only to users who have enrolled in Google’s AI Mode Labs experiment in the United States. When available, the “Create Canvas” option appears automatically when users request help with planning or organizational tasks.
Combine Search Live with Google Lens for real-time exploration
Google is merging two of its AI-powered tools—Google Lens and Search Live—to create a more interactive search experience. Google Lens uses device cameras to identify and analyze real-world objects, while Search Live enables conversational exploration of search results.
The combined feature allows users to point their phone’s camera at any object and engage in real-time conversations about what they’re seeing. Users open the Google Lens app, aim their camera at an item of interest, tap the “Live” icon, and begin asking questions. The AI responds with relevant information and can maintain an ongoing dialogue as users explore different aspects of the subject.
This integration transforms passive visual search into an interactive discovery tool. Instead of simply identifying objects, users can now dig deeper into topics, ask follow-up questions, and explore related concepts—all while maintaining the visual context through their camera feed.
The feature is currently rolling out to US users enrolled in the AI Mode Labs experiment, with broader availability expected in the coming months.
Ask Google AI about your current webpage
The fourth enhancement builds on the existing integration between Google Lens and the Chrome browser to make webpage analysis more accessible. Currently, users must manually select Google Lens from Chrome’s menu to analyze webpage content. The upcoming update streamlines this process significantly.
Soon, users will be able to click directly on Chrome’s address bar and select “Ask Google about this page” for instant AI analysis. This triggers an AI Overview in the browser’s side panel, highlighting key information and insights about the current webpage.
The feature supports follow-up questions through AI Mode integration, accessible either by selecting AI Mode at the top of search results or clicking the “Dive deeper” button at the bottom of the overview. This creates a more fluid research experience, allowing users to explore webpage content without leaving their current browsing session.
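There is likewise no public API for this Chrome feature, but the underlying pattern, fetching a page's text, asking a model for the key points, and then posing follow-up questions, can be approximated with the same Gemini API. The sketch below is illustrative only; the URL, model name, and truncation limit are assumptions, not details of Google's implementation.

```python
# Sketch of page-level Q&A: fetch a page, reduce it to plain text, ask for key
# points, then continue the conversation with a follow-up question.
import requests
from bs4 import BeautifulSoup
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")  # example model name

html = requests.get("https://example.com/article", timeout=10).text
page_text = BeautifulSoup(html, "html.parser").get_text(separator="\n", strip=True)

# A chat session keeps the page in context for follow-up questions.
chat = model.start_chat()
overview = chat.send_message(
    "Summarize the key points of this page:\n\n" + page_text[:20000]  # arbitrary cap
)
print(overview.text)

# Follow-up question, loosely analogous to "Dive deeper" in AI Mode.
follow_up = chat.send_message("Which claims on the page cite external sources?")
print(follow_up.text)
```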
Broader implications for search behavior
These four updates reflect Google’s strategic vision for AI-integrated search, moving beyond simple query responses toward more interactive, context-aware experiences. Each feature addresses different aspects of how people gather and process information—from visual analysis to project planning to real-time exploration.
The rollout strategy, focusing initially on AI Mode Labs participants, suggests Google is taking a measured approach to deployment. This allows the company to gather user feedback and refine the features before broader release, potentially addressing some of the concerns that have emerged around AI-powered search accuracy and reliability.
For businesses and professionals, these enhancements could significantly change research and planning workflows. The ability to analyze uploaded documents, create structured project plans, and get instant webpage insights may reduce the time spent on information gathering and organization tasks.
However, the features also raise questions about the evolving relationship between AI-mediated search and direct website engagement. As Google makes it easier to get answers without visiting source websites, publishers and content creators may need to adapt their strategies for reaching audiences in an AI-first search environment.