Modern Life Today
Home Security

Forget the Chatbots. AI’s True Potential Is Cheap, Fast and on Your Devices

By Press Room, December 26, 2025

When I tap the app for Anthropic’s Claude AI on my phone and give it a prompt — say, “Tell me a story about a mischievous cat” — a lot happens before the result (“The Great Tuna Heist”) appears on my screen.

My request gets sent to the cloud — a computer in a big data center somewhere — to be run through Claude’s Sonnet 4.5 large language model. The model assembles a plausible response using advanced predictive text, drawing on the massive amount of data it’s been trained on. That response is then routed back to my iPhone, appearing word by word, line by line, on my screen. It’s traveled hundreds, if not thousands, of miles and passed through multiple computers on its journey to and from my little phone. And it all happens in seconds.

This system works well if what you’re doing is low-stakes and speed isn’t really an issue. I can wait a few seconds for my little story about Whiskers and his misadventure in a kitchen cabinet. But not every task for artificial intelligence is like that. Some require tremendous speed. If an AI device is going to alert someone to an object blocking their path, it can’t afford to wait a second or two.
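To make that concrete, here is a rough latency-budget sketch. Every number below is an illustrative assumption, not a measurement:

```python
# Illustrative latency budget: cloud round trip vs. on-device inference.
# All figures are rough assumptions made up for the sake of comparison.

def cloud_latency_ms(network_rtt_ms=80, queue_ms=50, inference_ms=300):
    """Total time for a request that travels to a data center and back."""
    return network_rtt_ms + queue_ms + inference_ms

def on_device_latency_ms(inference_ms=60):
    """Total time when the model runs locally: no network hop at all."""
    return inference_ms

ALERT_BUDGET_MS = 100  # an obstacle alert is useless if it arrives late

print(cloud_latency_ms())      # 430 ms: misses the budget
print(on_device_latency_ms())  # 60 ms: fits comfortably
```

Even with a fast network, the fixed cost of the round trip alone can blow a 100-millisecond budget, which is why safety-critical features need to run locally.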

Other requests require more privacy. I don’t care if the cat story passes through dozens of computers owned by people and companies I don’t know and may not trust. But what about my health information, or my financial data? I might want to keep a tighter lid on that.




Speed and privacy are two major reasons why tech developers are increasingly shifting AI processing away from massive corporate data centers and onto personal devices such as your phone, laptop or smartwatch. There are cost savings too: There’s no need to pay a big data center operator. Plus, on-device models can work without an internet connection. 

But making this shift possible requires better hardware and more efficient — often more specialized — AI models. The convergence of those two factors will ultimately shape how fast and seamless your experience is on devices like your phone.

Mahadev Satyanarayanan, known as Satya, is a professor of computer science at Carnegie Mellon University. He’s long researched what’s known as edge computing — the concept of handling data processing and storage as close as possible to the actual user. He says the ideal model for true edge computing is the human brain, which doesn’t offload tasks like vision, recognition, speech or intelligence to any kind of “cloud.” It all happens right there, completely “on-device.”

“Here’s the catch: It took nature a billion years to evolve us,” he told me. “We don’t have a billion years to wait. We’re trying to do this in five years or 10 years, at most. How are we going to speed up evolution?”

You speed it up with better, faster, smaller AI running on better, faster, smaller hardware. And as we’re already seeing with the latest apps and devices — including those expected at CES 2026 — that shift is well underway.

AI is probably running on your phone right now

On-device AI is far from novel. Remember in 2017 when you could first unlock your iPhone by holding it in front of your face? That face recognition technology used an on-device neural engine — it’s not gen AI like Claude or ChatGPT, but it is a fundamental form of artificial intelligence.

Today’s iPhones use a much more powerful and versatile on-device AI model. It has about 3 billion parameters — the learned numerical weights a language model adjusts during training and uses to make predictions. That’s small compared with the big general-purpose models most AI chatbots run on. DeepSeek-R1, for example, has 671 billion parameters. But the iPhone model isn’t intended to do everything. Instead, it’s built for specific on-device tasks such as summarizing messages. Like the facial recognition that unlocks your phone, these are features that can’t afford to rely on an internet connection to reach a model in the cloud.
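Parameter counts translate directly into memory, which is why model size matters so much on a phone. A back-of-the-envelope sketch (byte counts are approximations that depend on precision and quantization):

```python
# Back-of-the-envelope memory footprint for models of different sizes.
# Bytes per parameter depends on precision: 2 bytes at fp16, 0.5 at 4-bit.

def model_size_gb(params_billion, bytes_per_param):
    """Approximate weight storage in gigabytes."""
    return params_billion * 1e9 * bytes_per_param / 1e9

on_device = model_size_gb(3, 2)     # ~3B-param model at fp16:  6.0 GB
quantized = model_size_gb(3, 0.5)   # same model, 4-bit:        1.5 GB
frontier  = model_size_gb(671, 2)   # 671B-param model at fp16: 1342.0 GB

print(on_device, quantized, frontier)
```

At 4-bit precision a 3-billion-parameter model fits in about 1.5 GB, plausible on a modern phone; a 671-billion-parameter model would not fit on any consumer device.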

Apple has boosted its on-device AI capabilities — dubbed Apple Intelligence — to include visual recognition features, like letting you look up things you took a screenshot of.  

On-device AI models are everywhere. Google’s Pixel phones run the company’s Gemini Nano model on its custom Tensor G5 chip. That model powers features such as Magic Cue, which surfaces information from your emails, messages and more — right when you need it — without you having to search for it manually.

Developers of phones, laptops, tablets and the hardware within them are building devices with AI in mind. But it goes beyond those. Think about smartwatches and glasses, which offer far less space than even the thinnest phone.

“The system challenges are very different,” said Vinesh Sukumar, head of generative AI and machine learning at Qualcomm. “Can I do all of it on all devices?”

Right now, the answer is usually no. The solution is fairly straightforward. When a request exceeds the model’s capabilities, it offloads the task to a cloud-based model. But depending on how that handoff is managed, it can undermine one of the key benefits of on-device AI: keeping your data entirely in your hands.

More private and secure AI

Experts repeatedly point to privacy and security as key advantages of on-device AI. In a cloud situation, data is flying every which way and faces more moments of vulnerability. If it remains on an encrypted phone or laptop drive, it’s much easier to secure.

The data employed by your devices’ AI models could include things like your preferences, browsing history or location information. While all of that is essential for AI to personalize your experience based on your preferences, it’s also the kind of information you may not want falling into the wrong hands.

“What we’re pushing for is to make sure the user has access and is the sole owner of that data,” Sukumar said.

There are a few different ways offloading information can be handled to protect your privacy. One key factor is that you’d have to give permission for it to happen. Sukumar said Qualcomm’s goal is to ensure people are informed and have the ability to say no when a model reaches the point of offloading to the cloud.

Another approach — and one that can work alongside requiring user permission — is to ensure that any data sent to the cloud is handled securely, briefly and temporarily. Apple, for example, uses technology it calls Private Cloud Compute. Offloaded data is processed only on Apple’s own servers, only the minimum data needed for the task is sent and none of it is stored or made accessible to Apple. 

AI without the AI cost

AI models that run on devices come with an advantage for both app developers and users in that the ongoing cost of running them is basically nothing. There’s no cloud services company to pay for the energy and computing power. It’s all in your phone. Your pocket is the data center.

That’s what drew Charlie Chapman, developer of a noise machine app called Dark Noise, to using Apple’s Foundation Models Framework for a tool that lets you create a mix of sounds. The on-device AI model isn’t generating new audio, just selecting different existing sounds and volume levels to make one mix.

Because the AI is running on-device, there’s no ongoing cost as you make your mixes. For a small developer like Chapman, that means there’s less risk attached to the scale of his app’s user base. “If some influencer randomly posted about it and I got an incredible amount of free users, it doesn’t mean I’m going to suddenly go bankrupt,” Chapman said.


On-device AI’s lack of ongoing costs allows small, repetitive tasks like data entry to be automated without huge costs or computing contracts, Chapman said. The downside is that the on-device models differ based on the device, so developers would have to do even more work to ensure their apps work on different hardware.

The more AI tasks are handled on consumer devices, the less AI companies have to spend on the massive data center buildout that has every major tech company scrambling for cash and computer chips. “The infrastructure cost is so huge,” Sukumar said. “If you really want to drive scale, you do not want to push that burden of cost.”

The future is all about speed

Especially when it comes to functions on devices like glasses, watches and phones, so much of the genuine usefulness of AI and machine learning isn’t like the chatbot I used to make a cat story at the beginning of this article. It’s things like object recognition, navigation and translation. Those require more specialized models and hardware — but they also require more speed.

Satya, the Carnegie Mellon professor, has been researching different uses of AI models and whether they can work accurately and quickly enough using on-device models. When it comes to object image classification, today’s technology is doing pretty well — it’s able to deliver accurate results within 100 milliseconds. “Five years ago, we were nowhere near able to get that kind of accuracy and speed,” he said.
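A latency target like that is easy to verify empirically. A generic timing harness in Python (`classify` here is a placeholder that just sleeps; a real test would call the actual on-device model):

```python
import time

# Generic harness for checking an inference call against a latency budget.
# `classify` is a placeholder standing in for a real on-device model.

def classify(image) -> str:
    time.sleep(0.01)  # pretend inference takes ~10 ms
    return "cat"

def meets_budget(fn, arg, budget_ms=100.0, runs=5):
    """Return True if the worst observed latency stays under the budget."""
    worst = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        fn(arg)
        worst = max(worst, (time.perf_counter() - start) * 1000)
    return worst <= budget_ms

print(meets_budget(classify, "photo.jpg"))
```

Measuring the worst case over several runs, rather than the average, matters for features like obstacle alerts, where a single slow result is a failure.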

But for four other tasks — object detection, instance segmentation (the ability to recognize objects and their shapes), activity recognition and object tracking — devices still need to offload to a more powerful computer somewhere else.

“I think in the next number of years, five years or so, it’s going to be very exciting as hardware vendors keep trying to make mobile devices better tuned for AI,” Satya said. “At the same time we also have AI algorithms themselves getting more powerful, more accurate and more compute-intensive.”

The opportunities are immense. Satya said devices in the future might be able to use computer vision to alert you before you trip on uneven pavement, or remind you who you’re talking to and provide context around your past communications with them. These kinds of things will require more specialized AI and more specialized hardware.

“These are going to emerge,” Satya said. “We can see them on the horizon, but they’re not here yet.”


