OpenAI CEO Sam Altman confirmed this week that the company is creating a brand-new AI-first device. He says it will stand in stark contrast to the clutter and chaos of our phones and apps. He even compared using it to "sitting in the most beautiful cabin by the lake and in the mountains and just enjoying the peace and quiet." But a device that understands you in context, analyzing your habits, moods, and daily routines, would know you more intimately than most people's loved ones do, let alone any piece of equipment.
That image obscures a very different reality: a device designed to monitor your life constantly, collecting details about where you are, what you are doing, how you speak, and the ambient sounds around you. An electronic companion absorbing every nuance of your behavior and adapting to your life may start to feel normal, until you remember where all that data has to go in order to produce those insights.
Summoning calm from such a device is like closing your eyes and hoping you're invisible. The surveillance may be voluntary, but it is comprehensive. The promise of peace of mind starts to look like a clever cover for surrendering privacy, and worse. Round-the-clock contextual understanding is not the same as peace.
AI is looking at you
Solitude and peace rest on a sense of security. A device that claims to deliver peace of mind by dissolving those boundaries only leaves you exposed. Altman's lakeside cabin analogy is seductive. Who hasn't dreamed of escaping the constant ping of notifications, the flashing ads, and the algorithmic chaos of modern apps, and retreating into peaceful solitude? But peace built on constant observation is an illusion.
This is not just gadget-skepticism. There is a deep paradox here: the more context-aware and responsive the device becomes, the more it knows about you. And the more it knows, the greater the potential for intrusion.
The version of calm Altman is selling depends on a vague kind of trust. We must entrust all of our data to the right people, and trust that the algorithm and the company behind it will always handle our personal information with respect and care. We must trust that they will never turn that data into leverage, never use it to influence our thoughts, our decisions, our politics, our shopping habits, our relationships.
That is a big ask, even before considering Altman's track record on intellectual property.
See and take
Altman has repeatedly defended the use of copyrighted works for AI training without permission from, or compensation to, their creators. In a 2023 interview, he acknowledged that AI models were "harvesting work from all over the internet," including copyrighted material, without explicit permission, simply ingesting it en masse as training data. He tried to frame it as a problem that could only be solved "after we come up with some kind of economic model that works for people." He acknowledged that many creatives were upset, but offered only vague promises that something better might come someday.
He said giving creators the chance to opt in and share in the revenue could be "cool" if they wanted it, but declined to guarantee that such a model would ever materialize. If ownership and consent are optional extras for creators, why should consumers expect to be treated any differently?
Remember that within hours of its launch, Sora 2 was flooded with clips using copyrighted characters and famous franchises without permission, causing a legal backlash. The company quickly reversed course, announcing it would give copyright holders “more granular control” and move to a voluntary consent model for likenesses and characters.
That reversal can be framed as responsibility. But it is also a tacit admission that the original plan was to treat everyone's creative work as free raw material: content as something the company owns, rather than something it respects.
In both art and personal data, Altman seems to believe that large-scale access matters more than consent. A device that promises peace of mind by eliminating friction and smoothing your digital life is also a device that mediates and controls that life. Convenience is not the same as comfort.
I'm not saying all AI assistants are evil. But treating AI as a toolbox is not the same as making it a trusted agent in every corner of my life. Some will argue that with good design, real safeguards can be built in. But that argument assumes an ideal future, run by ideal people. History is not on our side.
The device Altman and OpenAI plan to sell may well be great at all sorts of things, and may even be worth the privacy trade-off. But let's be clear about the trade-off. That tranquil lake may double as a camera lens; just don't pretend the lens isn't there.