As we head into 2025, consumers could be forgiven for thinking that “AI” has become a dirty word, particularly in home tech.
LG is the latest brand to unveil its new term for home AI, called “Affectionate Intelligence,” a frankly dystopian phrase for an advanced voice assistant that responds to more natural language and controls smart home devices for you.
And, of course, there’s Apple Intelligence (emphasis on Apple, not Artificial) which, if the rumors are true, may help control Apple security cameras or face-recognizing doorbells in the coming years. From Google’s Gemini and “agents” to Vivint’s Salesforce-powered “Agentforce” or Swann Security’s “SwannBuddy,” companies keep trying to get artificial intelligence inside our homes — they just don’t want to call it that.
While I’m a fan of AI algorithms in tech like security cameras, I can also understand why companies are struggling to find a palatable angle for this new wave of ubiquitous AI features. Ever since Ray Bradbury’s The Veldt and the countless stories that sprang up after it, people have been leery of all-powerful computers controlling their homes and manipulating their loved ones, with disastrous results.
Home is where you are free to be vulnerable and private: Big Brother is a poor fit. Voice assistants and smart speakers are already a bridge too far for some, and the new wave of AI leans on those always-listening tools even harder.
So home tech companies are left in a difficult position. They want a world where you talk to your house every morning like it’s your chatty butler. That’s also a world where sometimes error-prone AI is always able to listen and has its fingers in every possible appliance and device. Creators would like to sidestep that uneasy issue and quickly claim that this is a fine, safe, even cuddly development — but it’s far from a finished conversation.
3 issues with welcoming AI throughout our smart homes
LG’s Affectionate Intelligence and various other AI revamps aren’t likely to cause us physical harm — and with respect to Bradbury, getting eaten by lions remains especially unlikely. Some, like SimpliSafe’s or ADT’s facial recognition, are really trying to keep us safer or at least more informed. The real problems are more insidious and, as usual, they come down to our privacy and data.
Data collection
First, there’s data collection. While you may be saying, “Hey LG (or Google, or Siri, etc.) I’m headed to bed, please turn everything off for me,” your home AI isn’t just hearing your command. It’s gathering information about when you said it, what you wanted done and the words you used, maybe even the tone you used them in.
That’s valuable data for companies to continue training their AIs, build customer profiles and possibly repackage to sell to advertisers, and not everyone allows you to peek behind the scenes. We’ve already had this tug-of-war with voice assistants and the many, many settings you need to change so companies can’t harvest your data: Now it’s round two, with even more at stake.
Data vulnerabilities
Second, all that integration through AI can also create more data vulnerabilities. While direct smart home hacking is rare, there’s always a black market for databases with personal information. And we also have to be careful of threats like security employees spying through our cameras or strangers suddenly getting an inside look at our homes. Protocols like Matter are trying to make things safer, but Matter still hasn’t extended to video cameras and other security vulnerabilities remain a concern.
Inaccuracy
Third, conversational AI and chatbots remain annoyingly inaccurate. They can hallucinate things that aren’t happening. They get important questions about home safety wrong. And on the object recognition side, they frequently confuse pets with humans. Putting them in charge of your smart home and expecting these kinds of AI models to get things right is a dubious proposition, which could create rather than relieve stress.
AI has a lot of questions to answer before it does a home takeover
Personally, I think conversational AI has its place in the smart home, especially if you already use voice assistants to control your devices. But we can’t let that serve as a backdoor for more data vulnerabilities or the collection of personal information we’d rather keep to ourselves. That means lots of testing and questions about data storage for tech hubs like CNET and a good dose of wariness for homeowners inviting AI into their thermostats, lights, locks and appliances.
Facial recognition services with their advanced algorithms are a trickier subject, especially since other people rarely have a choice in who creates face profiles for them (all you need is a quick peek at someone with a Nest doorbell, for example, to whip up a face contact for them).
Face profiles have unique privacy issues but are becoming an integral part of the AI-powered smart home. That’s already starting to clash with U.S. biometric privacy laws, which are restricting this kind of technology from Portland, Oregon, to the state of Illinois. That battle is just beginning, and we’ll be covering it more in the coming year.
Home AI is waiting for you to chat with it about your security, your plans for the day, your habits and your worries. In return, it wants to take care of all the smart home details for you. But there’s a lot of benefit to managing your home routines yourself, and questions about just how this happy-faced AI will work out in the long term. For now, we’re taking it slow.