The most advanced phones on the market come with the promise of making your life easier through new generative AI features: composing texts, creating images from prompts and removing unwanted people or objects from your photos. Now the tech industry has set its sights on the next generation of artificial intelligence, hyping so-called agentic AI, in other words, AI agents catering to your needs. Companies say these agents will change how we use devices, offering novel suggestions and recommendations and acting as something like a bonus brain.
Put simply, the makers of your phones and favorite tech gadgets claim AI agents will be supercharged versions of Siri and other voice assistants, able to take inputs from personal apps, data and web searches and give truly nuanced answers. It’s smarter AI that will supposedly deliver on promises current assistants can’t keep, like anticipating needs and comprehending complex questions.
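As a concrete illustration, here is a minimal sketch in Python of what that kind of multi-source answer could look like. Every function and data source below is an invented stand-in for this example, not any vendor’s actual API.

```python
# Hypothetical sketch: an "agent" answers one nuanced question by pulling
# context from several personal sources, something today's single-shot
# voice assistants don't do. All sources here are stand-in stubs.

def read_calendar() -> list[str]:
    # Stand-in for a real calendar API.
    return ["Fri 7pm: dinner with Sam", "Sat 9am: gym"]

def read_messages() -> list[str]:
    # Stand-in for a messages API.
    return ["Sam: somewhere quiet this time?"]

def web_search(query: str) -> list[str]:
    # Stand-in for a web search call.
    return [f"Top result for '{query}': three quiet bistros near you"]

def answer(question: str) -> str:
    # The agent fuses all three contexts before responding,
    # instead of answering the literal question in isolation.
    context = read_calendar() + read_messages() + web_search(question)
    return f"Q: {question}\nContext considered: {context}"

print(answer("Where should we eat Friday?"))
```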
Qualcomm, MediaTek and others have been openly including AI agents in their feature forecasting for the years ahead. Those chipmakers mentioned the technology when introducing their top-tier processors headed for next year’s premium Android phones.
“When we speak of agents and [generative] AI-driven agents for your personal devices, we’re talking of software that can basically be contextualized to you and your needs, and then advise you within the context you operate personally — your daily life, your calendar, your needs, whatever it is,” said Lari Hämäläinen, senior partner and analyst for McKinsey & Company.
AI agents have one mission, Hämäläinen said: “How do we automate and bring convenience to people’s lives?”
The AI du jour, generative AI, debuted on premium smartphones late last year with the Google Pixel 8 lineup, followed by this year’s Samsung Galaxy S24 series and, finally, Apple Intelligence on the iPhone 15 Pro and iPhone 16 models. In the years to come, generative AI features are expected to trickle down to cheaper devices, eventually being bundled into around 70% of phones by 2028, according to a recent IDC report. Analysts wouldn’t even spitball a window for when AI agents could arrive.
In the interim, Google’s Gemini AI and its Project Astra (which recognizes objects through phone cameras or smart glasses) don’t outright mention AI agents, but the capabilities they’re building toward look like a bridge between today’s AI and the agents that define the next generation. The tech giant recently unveiled its next augmented reality concept headset made in partnership with Samsung, Project Moohan, which pairs AR and AI to guide wearers through the world, commenting on what they’re looking at. Google’s continued experiments with AI software and hardware could acclimate users to trusting artificial intelligence with queries and tasks.
Still, AI agents are just ideas: promises from companies eager to jump on the AI bandwagon. Skepticism is appropriate. People are starting to incorporate AI tools like ChatGPT and Midjourney into their at-home workflows, but those tools have yet to be integrated into the workforce at large, let alone into people’s lives on the go. Here’s how agentic AI on computers and smartphones could further change our day-to-day lives.
AI agents could digitally solve your problems like a human assistant
As a forthcoming technology, AI agents don’t yet have a settled definition. But experts believe AI this advanced will be able to go further than Siri or Google Assistant, which are limited to answering one question at a time, and work through complex, multistep requests. They may even be able to negotiate with other AI agents on your behalf. As with generative AI and 5G before it, this newest trend has yet to find a “killer app” that would make it a must-have innovation.
The first time I heard about AI agents was from MediaTek, describing its newest Dimensity 9400 mobile silicon, which includes an Agentic AI Engine. That doesn’t mean phones with the chip will get AI agents; rather, it’s a toolset to help device makers and developers build their own AI agents and applications. Developers could harness the tech to supercharge in-app searches, or use personal data to predict behavior and anticipate needs, perhaps nudging users toward actions that fit their usual schedule. It’s one of the early steps toward getting app makers to integrate or build their own AI agents as the technology takes shape in the years to come.
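To make the “predict behavior and anticipate needs” idea tangible, here is a toy Python sketch of schedule-based anticipation: a frequency model over past actions that only speaks up when a habit is clear. The event log, action names and 50% threshold are all invented for the example; MediaTek has not published an API like this.

```python
# Toy sketch of schedule-based anticipation: learn which action the user
# usually takes in each hour of the day, then suggest it proactively.
# The log format and the 0.5 threshold are invented for illustration.
from collections import Counter, defaultdict

# (hour_of_day, action) pairs logged from past behavior
history = [
    (8, "open_maps"), (8, "open_maps"), (8, "open_podcasts"),
    (12, "order_lunch"), (12, "order_lunch"),
]

by_hour: dict[int, Counter] = defaultdict(Counter)
for hour, action in history:
    by_hour[hour][action] += 1

def suggest(hour: int) -> str | None:
    """Return the habitual action for this hour, if one dominates."""
    counts = by_hour.get(hour)
    if not counts:
        return None
    action, n = counts.most_common(1)[0]
    return action if n / sum(counts.values()) >= 0.5 else None

print(suggest(8))   # -> "open_maps" (2 of 3 mornings)
print(suggest(15))  # -> None (no habit at 3pm)
```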
MediaTek’s chipmaking rival Qualcomm wasn’t far behind. At its Snapdragon Summit in Maui in October, the company laid out several ways AI agents could change your habits: searching through your apps for relevant information, tailoring advice to your schedule and even offering suggestions before you make requests, said Durga Malladi, Qualcomm senior vice president and general manager of technology planning and edge solutions.
“We’re just entering the era of more proactive computing with pervasive AI constantly running in the background and anticipating your next move, figuring out what you might be doing next and getting input solutions before you even ask for them,” Malladi said.
Ultimately, Qualcomm sees these AI agents as ways to supplant apps entirely. People will simply ask questions, and the agents will do all the work for them, providing clear answers to requests. “[Apps are] still there, but they’re in the background,” Malladi told CNET Senior European Correspondent Katie Collins in October.
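Here is a rough Python sketch of what “apps in the background” could look like: apps register capabilities with the agent, the user simply asks, and the agent routes the request and returns a plain answer. The keyword matching below is a deliberate oversimplification (a real agent would parse intent with a language model), and all names are invented for the example.

```python
# Sketch of "apps in the background": apps register capabilities with the
# agent, the user just asks, and the agent routes the request and returns
# a plain answer without the user ever opening an app.

def weather_app(request: str) -> str:
    return "Cloudy, 58F, light rain after 6pm."  # stand-in response

def calendar_app(request: str) -> str:
    return "You're free after 5pm today."        # stand-in response

CAPABILITIES = {
    ("weather", "rain", "forecast"): weather_app,
    ("free", "schedule", "meeting"): calendar_app,
}

def ask(request: str) -> str:
    words = request.lower()
    for keywords, handler in CAPABILITIES.items():
        if any(k in words for k in keywords):
            return handler(request)  # the app runs, but stays invisible
    return "Sorry, no app can handle that yet."

print(ask("Will it rain tonight?"))    # weather app answers, unopened
print(ask("Am I free this evening?"))  # calendar app answers, unopened
```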
Read more: Your Apps Are on Borrowed Time. AI Agents Are on the Way
That tracks with what analysts expect in the coming years, but it raises questions about how much an AI needs to know about you to take action on your behalf. Trusting AI agents to spend money or handle scheduling requires more than just access to credentials, said Avi Greengart, president and lead analyst at Techsponential; the agents need deep knowledge of your preferences to make decisions you’d agree with. In short, users need to trust their AI.
“This isn’t just a technological problem but a personal and cultural one as well,” Greengart said.
AI agents on the go in tomorrow’s cars
While generative AI has only recently spread through consumer gadgets, big tech companies have been investing in AI car tech for years. The holy grail of automotive AI remains self-driving technology, especially as robotaxis flood the streets of cities like San Francisco. But now some tech companies are thinking about how AI agents can integrate with cars, too.
Qualcomm debuted a pair of auto-focused chips at Snapdragon Summit and announced that Mercedes-Benz and Li Auto would use the silicon in future models. The chipmaker laid out how its silicon could handle the functions of your life on wheels, using the neural processing unit to manage the flurry of sensors in tomorrow’s cars: not just the ones outside vehicles scanning the roads, but the ones inside that can track your movements. If you say, “Roll down that window,” and point, your car should follow along.
But eventually, Qualcomm believes, AI agents will make it into cars to augment those experiences, too. Car-based chips have more processing power, will be able to sync with and expand the capabilities of the driver’s network of devices, and could deliver more complex solutions. Qualcomm’s Malladi gave a hypothetical example: On a drive home, you might ask your agent to make a dinner reservation, and it would consult your preferences and local options before booking.
“[The agent] responds to you saying ‘here’s the reservation that I found, and by the way, you didn’t ask for it, but I’m going to read it out to you because you’re driving to that restaurant and sent an SMS to your wife saying just meet him there,’” Malladi said at the Snapdragon Summit, emphasizing the difference between reactive AI and an AI that makes inferences and takes action. “In this case, the agent is anticipating what your next question could be.”
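A rough Python sketch of that kind of inference chain follows. The context flags and follow-up rules are hard-coded here to mirror Malladi’s hypothetical; a real agent would derive next steps from a planning model rather than fixed rules.

```python
# Rough sketch of proactive follow-up, mirroring the reservation example:
# after completing the asked-for task, the agent checks context and
# volunteers the logical next steps. The rules are hard-coded only for
# illustration.

def make_reservation(restaurant: str) -> str:
    return f"7:30pm table at {restaurant} confirmed"  # stand-in booking

def proactive_followups(context: dict) -> list[str]:
    followups = []
    if context["driving_to_reservation"]:
        followups.append("Read the confirmation aloud (hands are busy).")
    if context["spouse_not_notified"]:
        followups.append("Text your wife: 'Meet me at the restaurant.'")
    return followups

context = {"driving_to_reservation": True, "spouse_not_notified": True}
print(make_reservation("the bistro downtown"))
for step in proactive_followups(context):
    print("Unasked, the agent also:", step)
```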
It would be wise to treat claims that the car could become a center of AI agent functionality with caution, analyst Hämäläinen said. Putting aside the self-interest Qualcomm and auto companies have in hyping the AI auto experience, people simply don’t spend enough time in cars to warrant that role, and companies could hardly build their AI strategy around a device that many millions of people don’t own.
“What is very true is that the car is an extremely central and important device and an asset, and that will have a lot of AI compute in them. I do not think it will be that tied to the personal agent experiences,” Hämäläinen said.
He gave Apple as an example of a company that can count on its iPhones as the center of an AI experience. “Most people are not tied like that into their cars. They’re much more tied to their phones because the phone is always with them. The car is not,” Hämäläinen said.
What AI agents mean for how you use phones
While generative AI is slowly filtering into phones, its applications have been scattered across varied use cases: text generation, improved Siri and voice assistant responses, and photo-editing tricks like expanding images beyond their original borders. The expansive promises companies have made for phone-based generative AI in real-life situations haven’t come to pass yet.
As CNET continues forecasting, we turn to what AI agents could be capable of: bigger and more exciting applications that help users handle more complex responsibilities. Juggling last-minute tasks, building itineraries around personal preferences and sending reminders to stay on schedule are some of the ways we think AI agents could help phone owners in their day-to-day lives.
But there’s no consensus on when AI agents will enter the mainstream. There are technical obstacles to realizing the technology, from improving mobile chips to refining large language models so they produce coherent outputs. More important, Hämäläinen noted, are the issues of scale involved in making AI agents work on everyone’s phone. It’s one thing for OpenAI to field a day’s worth of ChatGPT requests; it’s another to meet the AI demand from the hundreds of millions of phones that ship every year to users who are on them all the time.
“The [phone] usage is constant. When you’re on your phone, you’re on the phone like five hours a day, right? So it’s very demanding,” he said. “Companies like OpenAI or Microsoft with Copilot, they don’t have to deal with that level of instant demand all the time.”
At its Snapdragon Summit, Qualcomm was bullish that augmented reality would go hand in hand with the future of AI agents, especially coordinating what AR glasses see with user questions. Given how nascent that gadget niche is, with the Ray-Ban Meta Smart Glasses as the only mainstream model, it could be a while before those kinds of interfaces — including gesture control, a la the Apple Vision Pro — become widespread. Google’s Project Astra is pushing this marriage of AI and visual interfaces forward, and though it’s still in the testing stages, it could offer another software solution for glasses hardware to embrace.
Analysts predict consumers will adapt sooner to a different interface: relying more on voice controls. Alexa owners and folks with Google Assistant-directed smart homes are already getting used to this way of interacting with gadgets, which is handy when your hands are full and doubles as an accessibility feature. But the bigger reason consumers will get used to it is that it’s already on the devices in their pockets and running through wireless earbuds as tech giants continue upgrading Siri and other voice assistants. Perhaps the road to AI agents lies in letting go of the tactile phone experience as people know it today.
“I think what we’ll see is that the device experience will get a lot automated as well, and you can just prompt your phone based on a voice command,” Hämäläinen said. “I think you can expect a lot that you today handle by [tapping] your finger will get done automatically.”
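As a closing illustration, here is what replacing taps with a spoken prompt could look like under the hood: one utterance expanded into the sequence of UI actions a user would otherwise perform by hand. The intent table and action names are invented for this sketch; a real system would parse free-form speech with a language model rather than a lookup table.

```python
# Sketch of voice-first automation: one spoken prompt expands into the
# tap sequence the user would otherwise perform by hand. The intent map
# and action names are invented for illustration.

INTENT_TO_ACTIONS = {
    "send my eta": [
        "open_maps", "read_arrival_time",
        "open_messages", "compose('ETA 6:40pm')", "send",
    ],
    "dim the lights": [
        "open_home_app", "select('living room')", "set_brightness(30)",
    ],
}

def handle_voice(prompt: str) -> None:
    actions = INTENT_TO_ACTIONS.get(prompt.lower().strip())
    if actions is None:
        print(f"No automation for: {prompt!r}")
        return
    for step in actions:  # each step replaces a manual tap
        print(f"[agent] {step}")

handle_voice("Send my ETA")
```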