At CES 2026, LG announced a new wave of speakers in its xboom range, once again produced in collaboration with will.i.am. Last year's models produced some real hits – we gave the xboom Stage 301 and the xboom Grab glowing reviews, so I was looking forward to seeing the new range.
LG has talked a lot about how the sound profile was really tuned and decided with will.i.am, and I got to talk to him at the show about what that really means – what kind of technical involvement he had with the speakers, and how he approaches sound tuning.
But it's not just will.i.am's music background that interests me – he's been a huge technology guy for over a decade. He was one of the key influences and investors in the original launch of Beats, and has launched a bunch of his own tech products, including audio tech in the form of Buttons, and more recently, an AI-based personalized radio service called RAiDiO.FYI.
This is your second round of speakers that you've done with LG. Did you have anything in mind for what you wanted to do with a new range?
The Stage 301 was for me the hero last year, so [I wanted to] expand that POV – that design aesthetic, where it's positioned like your traditional stage monitor, with the precision and audio dynamics of a studio monitor, and the portability of a Bluetooth speaker.
So for the buskers out there, we have a bigger xboom from the perspective of the Stage 501. Expanding that is what I'm really excited about.
This collection we have, I learned a lot from – like the xboom Grab and how you can personalize it and accessorize it. Usually, speakers are put in bags, so this year I have a capsule collection of ‘boombags', to hopefully inspire other designers out there to see speakers as something to wear.
And we've experienced that in the past. My other go-around in audio with Beats, people wore their buds. They didn't just wear the buds; they put them around their neck like jewelry – and so, borrowing from that, I have my boombags, and we're going to do the same for a new item, the xboom Mini. The xboom Mini is powerful, it's cute, and it allows for other ways to express how to wear it. I have a boombelt coming that's designed around the Mini.
What's the process like of working with LG, telling them what you'd like to get out of the collaboration, and how do they feed stuff back to you?
That's an awesome collaboration where we fine-tune, co-design, and have iterations. When it comes to the audio tuning of the speaker, me and my team, that's our expertise. I've done 20-plus years of sound design and EQ-ing and mixing, not just for Black Eyed Peas but for artists from Mary J. Blige to Mariah Carey to Whitney Houston to Michael Jackson, John Legend, Rolling Stones, U2…
My audio engineering team really gets in there to make sure everything sounds right, and to outperform the competitors as far as [recreating] the studio experience. How do we give that studio experience and allow everyone to feel the way we feel when we're in the studio – especially on the Stage 501 and 301.
LG has talked a lot about working on the sound profile with you, and you're talking about making it sound ‘right'. So what does that mean to you, making something sound right? What is the kind of profile you're looking for?
When you're making music, it's a little different because it's about what you want it to sound like. And then when you finish making a song, you go into the studio and you A/B it. A/B-ing means you play a song and you're like, ‘Hm, I like this kick drum. I like this bass.' You have the freedom to make it sound how you want, within the guardrails: these frequencies should be ignored, these frequencies should be heightened.
Now, when you're making a speaker, you gotta think of all songs, all genres. You can't think of just, like, ‘Yo, this is gonna be great for trap music,' cause it's gonna sound horrible for classical music.
I mean, there are people who make a speaker that way. You can do it if you want.
Yeah, but you shouldn't. You should have some perspective.
That's the reason why I keep everyone in mind that I worked with, from the Rolling Stones – thank you so much for letting me work with you guys – U2 – thank you, Larry and Bono and Adam – thank you, John Legend, Britney, you know.
So when I'm making the speaker, I think of all the artists that I've ever worked with, I can't just be genre-specific. That hasn't been a reflection of my career.
So the team that works on feedback for the speakers' sound profile is also part of your team for producing and mixing music?
Yeah, so there's a whole science to spectrum analyzing and listening to the competition's speakers. Like, seeing ‘Oh, what frequencies are they putting out in bass, what are they doing over here with the treble? What are they doing here on the highs? What are they doing here on the compression? What are they doing on the limiter? Where's the power going to, how much power is going to that freaking magnet pushing it.' We know all that shit, and then we compare it.
My go-to song is ‘Take Five'. I start with this song. It's the best song to tune any audio to.
Is this one of your favorite songs, or do you just particularly think this is the sweet spot for tuning?
One: It's my favorite song. The rim, the ride, the acoustic drum kit is the most perfect recording of that soft drum playing, isolated by itself. And then the piano that comes in, and then the bass – but it's not low bass, it's an acoustic bass, so I get to hear it well. It's so well mixed. And then the sax feels like it's… like I can hear the spit on the reed.
Even when I do Black Eyed Peas music, this is my song for getting the ear and the room tune.
Is this a song where, if you're listening to a speaker you're comparing to or you're testing something you're interested to hear, you can tell within three seconds whether you like the signature because you know the track inside out?
Yeah, yeah, yeah, but after I get that for the highs and the mids, then I'll go to the lows. I pick a song for just boom and bass. Then I'll look for distortion – like, what song is distorted – to make sure that when you turn it up, you're not adding extra distortion to it.
Then I do my rock songs, and then what songs have a distorted SM57 mic, where that doesn't mess up the mids, especially on a mid-range speaker like this. And then how does that transfer into how we tune into the xboom Buds.
With the spectrum analyzer, it's not just your own ears. Thomas and Dylan and our team are coming with the spectrometers and bringing that frequency math. They're like, ‘I know you like that, but look what's happening over here on the data.' I'm like, ‘Really? All right, but what's the medium here?' You're dealing with things that you cannot hear, but the spectrum's telling you.
I've been dealing with that my whole entire music career, and bringing that knowledge to the xboom series is fantastic.
So when you and the team are working with LG, it's in-depth, frequency-level technical feedback?
Yeah. For example, before we came to CES, I went to Korea. I flew to Korea to do the 501, and I'm like ‘Wow, this is great'. You listen to the 501 that sounds awesome, but then you listen to the xboom Mini, and you wanna apply the same type of imagery, like frequency language, like the personality of the 501 and 301. How does that translate [to the Mini]?
And a lot of times, if you dial [the 501's signature] in and you bring it here [to the Mini], this starts to fart. To get it right, Thomas needs to bring his freaking spectrum equipment. I've already done my part, but there's some knowledge that my guy needs to apply here. I've got it so far on the 501, the 301, but to translate that to the Mini, I need Thomas.
It's talent. These guys are like musicians, and it's the reason why certain things have a personality. It's talent.
Me, Thomas, and Dylan, you know, it's like a different Black Eyed Peas, where it's about the curation of three different POVs. Expertise, taste, and technical, you know. LG trusts our POV.
It's why Marshall is Marshall, Sennheiser is Sennheiser, why B&O is B&O, why Beats were Beats, and why xboom is xboom.
This year marks 10 years since Buttons came out, and it's even longer since you worked with Beats. Has your perception of how you make these things changed in that time?
In my head, it's been three years. It still feels like it was yesterday.
I can tell you what's changed: Shenzhen.
This place is… Wow. This place is amazing, bro. Like, the things that can be made now. I get goosebumps just thinking about it, how awesome Shenzhen is.
I wish London had a Shenzhen. I wish France had a Shenzhen. Every country's creative hub needs a Shenzhen. Something that can get this stuff made, get it out there.
It's an injustice to the creative community in the UK that London doesn't have a Shenzhen; that LA doesn't have a Shenzhen; that New York doesn't have a Shenzhen – that place where you can just make something. Everywhere you go there, you feel like the ability to find this new hardware and develop it gives you more opportunities as a creator.
Before, in 2016, going to Shenzhen wasn't as easy as it is now. Shenzhen wasn't the Shenzhen that you know now. It's tech city, bro.
That's what's changed, for the whole consumer electronics world, and we could learn from it. We could all be inspired, every mayor, every freaking city developer. Like consumer electronic studios – in an age where you could 3D print, and 3D printers are looking dope; in the age where I could vibe code, I can't vibe create.
Rapid development – that's what I'm talking about. That's why I'm so excited to be creative right now, because back then, not a lot of folks who come from my world were making and dabbling in consumer electronics. Still, right now, people aren't – like, there's a lot of people who pick fashion.
They're all doing alcohol brands.
Yeah, like, ‘Ooh, we got a new alcohol!' Consumer electronics is the world. And you mean to tell me there's not a Shenzhen in every dope city? The world needs that.
3D printing does seem like the kind of thing you'd be into, because you like to come up with these, these ideas for fashion, for technology designs.
Well, I've got a 3D lab.
Of course you have!
OK, here's what happened: July of last year I went to Shenzhen. And we're working in this one spot on some stuff, and I looked around like, ‘Eddie, look at their freaking rapid machines over there, figure out what they are and order them, get them to LA'.
And so we got like a couple of them in a room making rapid prototypes. And then I'm like, ‘Yo, Eddie, we need more machines. More machines, bro! Here's why: We're in an agentic world now. We're in a vibe code world now.'
We used to hang out in studios to make music. The studio turned into a laptop. Now phones.
The new studio’s coming for hardware, around agents, and we need to have rapid prototyping where people can materialize ideas quickly. Where they just get awesome stuff locally… like, the Kinko's for consumer electronics doesn't exist, my dude.
Especially if we get to the point where describing something to an AI is able to make a good 3D model of it. Then you don't need elaborate design skills, you just need ideas.
And you need the Kinko's for consumer electronics.
Right, and that.
That's all plausible stuff, but in order to do that, we need to have a lab.
But moving back to xboom, what's also changed between 2016 and 2026 is rocking with LG.
When we had Beats, we were supercharged by the power of Interscope. Jimmy Iovine ran Interscope. He took my wild idea, like, ‘Hey Jimmy, let's make our own hardware,' and out came Beats.
From there, I started I.am+ and Buttons. Learned a lot, didn't have the power of Interscope. Rocking with LG is like the best consumer electronics company. They provided the freaking monitors for Apple to do a lot of their computers. They're the monitors in Mercedes. They are like the ingredients for a lot of companies to function, technically. So to rock with LG is like, what an honor.
You're big on generative AI, especially with RAiDiO.FYI. As a musician, what do you think about AI-generated music? It really feels like we're at the point where people are gonna have to think about whether they want that in their music streaming services, and how they feel about it being served to them.
I don't look at it like that. Here's why.
AI-generated music is awesome. Really awesome. As far as like, ‘Wait what? You did that? How long did you sit and render, render, render, render till you got what you got? Oh, you've been sitting there for an hour? Let me hear the first one you did. Well, why don't you like that one? OK, it was not what was in your head. Got it, but still it was dope.'
Somebody's still generating it. So I'm not worried about that. When the machine's just fucking doing it on its own? Talk to me when we get there.
As long as someone is having an idea that they're expressing through music, whether it's through AI or not… I make music for therapy. So, if an AI was able to stretch, I don't really care because I need to stretch. If an AI was a yoga instructor, that's cool. If an AI was, like, a great sneezer or cougher, I don't care, cause I cough. I need to cough to clear my throat, so I need to make music to clear my soul.
I think it's a liberator for folks that always wanted to make music that didn't know a skill, couldn't sing, couldn't play an instrument – but now their ideas can materialize. That's awesome for them, you know, because the true performers need to go out there and perform.
But then at the same time, has TikTok ruined music, as far as our attention span goes, and the quality of music chasing its algorithms… that's the one.
AI music – that's people still expressing themselves.
TechRadar will be extensively covering this year's CES, and will bring you all of the big announcements as they happen. Head over to our CES 2026 news page for the latest stories and our hands-on verdicts on everything from wireless TVs and foldable displays to new phones, laptops, smart home gadgets, and the latest in AI.
And don’t forget to follow us on TikTok and WhatsApp for the latest from the CES show floor!