Artificial Intelligence - Yanko Design
https://www.yankodesign.com (Modern Industrial Design News)

https://www.yankodesign.com/2025/06/17/angry-ai-study-lamp-keeps-you-honest-and-off-your-phone/ (Tue, 17 Jun 2025)

Angry AI Study Lamp Keeps You Honest and Off Your Phone


Picking up our phones in the middle of a work session or while studying has become a reflex that is almost impossible to resist. The smartest among us do not even try to rely on sheer willpower, instead turning to apps and services that promise to block out distractions. Sadly, these digital helpers do not always deliver, and sometimes they ask for more time, money, or personal data than most of us are willing to give.

That is where this AI-powered DIY lamp comes in, bringing back the classic method of shaping behavior: immediate, memorable consequences. In this case, it takes the form of a desk lamp that does not just shine a light, but gets hilariously angry when it catches you slacking off. There is something delightfully old-school about a gadget that reacts with a scolding glare and a grumpy voice, all in the name of keeping you on track.

Designer: Arpan Mondal (Makestreme)

It all began as a bit of a joke, the kind of idea you toss out with no real intention of making it happen. But as the pieces started coming together, it became clear that this could actually be a clever way to help people build better habits. Now, the lamp is a real, working device that glows a fierce red and shouts at you whenever it spots a phone on your desk; no excuses, no mercy, just instant feedback.

Building the lamp is surprisingly approachable for anyone with a bit of DIY experience. The main ingredients are a Raspberry Pi 4, a Raspberry Pi HQ Camera, a ring of programmable LEDs, and a small speaker. Toss in a 3D-printed or laser-cut shell, and you have everything you need. The camera keeps watch over your workspace, while the Raspberry Pi runs a custom-trained AI model that can spot a phone in an instant.
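
If you want a feel for how little code such a build needs, here is a minimal sketch of the control loop, assuming the Picamera2 camera stack and an Adafruit NeoPixel ring on GPIO 18. The detect_phone() function is a placeholder for Mondal’s custom-trained model, which this sketch does not reproduce.

```python
# Hypothetical control loop for a DIY "angry lamp"; detect_phone() is a
# stand-in for the author's custom-trained detector, not the real model.
import time

import board                      # Adafruit Blinka GPIO pin definitions
import neopixel                   # Adafruit CircuitPython NeoPixel driver
from picamera2 import Picamera2   # Raspberry Pi camera stack

WHITE = (255, 255, 255)           # calming "focused" glow
RED = (255, 0, 0)                 # full fury

pixels = neopixel.NeoPixel(board.D18, 16, brightness=0.5)

picam2 = Picamera2()
picam2.configure(picam2.create_preview_configuration())
picam2.start()

def detect_phone(frame) -> bool:
    """Placeholder: a real build would run a phone detector here
    (e.g., a small TFLite or YOLO model) and return True on a hit."""
    return False

while True:
    frame = picam2.capture_array()    # grab the latest camera frame
    if detect_phone(frame):
        pixels.fill(RED)              # scolding glare
        # ...and trigger the grumpy voice clip on the speaker here
    else:
        pixels.fill(WHITE)
    time.sleep(0.5)                   # ~2 checks per second is plenty
```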

What sets this lamp apart is how simple and effective it is. All the image processing happens right on the device, so you do not have to worry about your camera sending anything to the cloud. The lamp does not keep logs or track your habits; it just reacts in real time, rewarding you with a calming white glow when you stay focused and unleashing its full fury when you slip up. The immediate feedback makes it much harder to ignore your own bad habits.

That does not mean it is perfect. You will need a bit of tech know-how to put everything together and train the AI if you want to tweak what it looks for. The lamp also needs to be placed just right, with good lighting, to do its job well. But for anyone willing to dive into a fun project and who needs a little extra push to stay off their phone, this lamp is hard to beat.

The post Angry AI Study Lamp Keeps You Honest and Off Your Phone first appeared on Yanko Design.

https://www.yankodesign.com/2025/06/12/apples-liquid-glass-hands-on-why-every-interface-element-now-behaves-like-physical-material/ (Thu, 12 Jun 2025)

Apple’s Liquid Glass Hands-On: Why Every Interface Element Now Behaves Like Physical Material


Liquid Glass represents more than an aesthetic update or surface-level polish. It functions as a complex behavioral system, precisely engineered to dictate how interface layers react to user input. In practical terms, this means Apple devices now interact with interface surfaces not as static, interchangeable panes, but as dynamic, adaptive materials that fluidly flex and respond to every interaction. Interface elements now behave like physical materials with depth and transparency, creating subtle visual distortions in content beneath them, like looking through textured glass.

Designer: Apple

This comprehensive redesign permeates every pixel across the entire Apple ecosystem, encompassing iOS, iPadOS, macOS, and watchOS, creating a consistent experience regardless of platform. Born out of close collaboration between Apple’s design and engineering teams, Liquid Glass uses real-time rendering and dynamically reacts to movement with specular highlights. The system extends from the smallest interface elements (buttons, switches, sliders, text controls, media controls) to larger components including tab bars and sidebars. What began as experimental explorations within visionOS has evolved into a foundational cornerstone across all of Apple’s platforms.

Yanko Design (Vincent Nguyen): What was that initial simple idea that sparked Liquid Glass? And second, how would you describe the concept of “material” in this context to everyday users who don’t understand design?

Alan Dye (VP of Human Interface Design, Apple): “Well, two things. I think what got us mostly excited was the idea of whether we could create a digital material that could morph and adapt and change in place, and still have this beautiful transparency so it could show through to the content. Because I think, initially, our goal is always to celebrate the user’s content, whether that’s media or the app.”


This technical challenge reveals the core problem Apple set out to solve: creating a digital material that maintains form-changing capabilities while preserving transparency. Traditional UI elements either block content or disappear entirely, but Apple developed a material that can exist in multiple states without compromising visibility of underlying content. Dye’s emphasis on “celebrating user content” exposes Apple’s hierarchy philosophy, where the interface serves content instead of competing with it. When you tap to magnify text, the interface doesn’t resize but stretches and flows like liquid responding to pressure, ensuring your photos, videos, and web content remain the focus while navigation elements adapt around them.

“And then in terms of what we would call the data layer, we liked the idea that every application has its content. So Photos has all the imagery of your photos. We want that to be the star of the show. Safari, we want the webpage to be the focal point. So when you scroll, we’re able to get those controls out of the way, shrink the URL field in that case.”

Apple has established a clear priority system where Photos imagery, Safari web pages, and media content take precedence over navigational elements, instead of treating interface chrome and user content as equal elements competing for attention. This represents a shift from interface-centric design to content-centric design. The practical implementation becomes apparent when scrolling through Safari, where the URL field shrinks dynamically, or in Photos, where the imagery dominates the visual hierarchy while controls fade into the background. Controls fade and sharpen based on what you’re doing, creating interfaces that feel more natural and responsive, where every interaction provides clear visual feedback about what’s happening and where you are in the system.

“For everyday users, we think there’s this layer that’s the top level. Menu systems, back buttons, and controls. And then there’s the app content beneath. That’s how we determine what’s the glass layer versus the application layer.”

Dye’s explanation of the “glass layer versus application layer” architecture provides insight into how Apple technically implements this philosophy. The company has created a distinct separation between functional controls (the glass layer) and user content (the application layer), allowing each to behave according to different rules while maintaining visual cohesion. This architectural decision enables the morphing behavior Dye described, where controls can adapt and change while content remains stable and prominent.

The Physical Reality Behind Digital Glass

During one of Apple’s demo setups, my attention was drawn to a physical glass layer arranged over printed graphics. This display served as a tangible simulation of the refractive effect that Liquid Glass achieves in the digital realm. As I stood above the installation, I could discern how the curves and layering of the glass distorted light, reshaping the visual hierarchy of the underlying graphics. This physical representation was more than a decorative flourish; it served as a bridge, translating the complex theoretical underpinnings of Apple’s design approach into something tactile and comprehensible.

That moment of parallax and distortion functioned as a compelling real-world metaphor, illustrating how interface controls now transition between foreground and background elements. What I observed in that physical demonstration directly translated to my hands-on experience with the software: the same principles of light refraction, depth perception, and material behavior that govern real glass now influence how digital interfaces respond to interaction.

Hands-On: How Liquid Glass Changes Daily Interactions

My hands-on experience with the newly refreshed iOS 26, iPadOS 26, macOS Tahoe, and watchOS 26 immediately illuminated the essence of Liquid Glass. What Apple describes as “glass” now transcends static texture and behaves as a dynamic, responsive environment. Consider the tab bars in Music or the sidebar in the Notes app: as I scrolled through content, subtle distortions became apparent beneath these interface elements, accompanied by live refraction effects that gently bent the underlying content. The instant I ceased scrolling, this distortion smoothly resolved, allowing the content to settle into clarity.

My focus this year remained on the flat-screen experience, as I did not demo Vision Pro or CarPlay. iOS, iPadOS, and macOS serve as demonstrations of how Liquid Glass adapts to various input models, with a mouse hover eliciting distinct behaviors compared to a direct tap or swipe. The material seems to understand when to amplify content for prominence and when to recede into the background. Even during media playback, dynamic layers expand and contract, responding directly to how and when you engage with the screen.

The lock screen clock exemplifies Liquid Glass principles perfectly. The time display dynamically scales and adapts to the available space behind it, creating a sense that the interface is responding to the content instead of imposing rigid structure upon it. This adaptive behavior extends beyond scaling to include weight adjustments and spacing modifications that ensure optimal legibility regardless of wallpaper complexity.

On macOS, hovering with a mouse cursor creates subtle preview states in interface elements. Buttons and controls show depth and transparency changes that indicate their interactive nature without overwhelming the content beneath. Touch interactions on iOS and iPadOS create more pronounced responses, with elements providing haptic-like visual feedback that corresponds to the pressure and duration of contact. The larger screen real estate of iPadOS allows for more complex layering effects, where sidebars and toolbars create deeper visual hierarchies with multiple levels of transparency and refraction.

The difference from current iOS becomes apparent in specific scenarios. In the current Music app, scrolling through your library feels like moving through flat, static layers. With Liquid Glass, scrolling creates a sense of depth. You can see your album artwork subtly shifting beneath the translucent controls, creating spatial awareness of where interface elements sit in relation to your content. The tab bar doesn’t just scroll with you; it creates gentle optical distortions that make the underlying content feel physically present beneath the glass surface.

However, the clear aesthetic comes with notable trade-offs. While the transparency creates visual depth, readability can suffer in certain lighting conditions or with complex wallpapers. Apple has engineered an adaptive system that provides light backgrounds for dark content and dark backgrounds for light content, but the system faces challenges when backgrounds contain mixed lighting conditions. Testing the clear home screen option, where widgets and icons adopt full transparency, I found the aesthetic impact striking, but it raises practical concerns. The interface achieves a modern, visionOS-inspired look that feels fresh and contemporary, yet this approach can compromise text legibility, with busy wallpapers or varying lighting conditions creating readability issues that become apparent during extended use.

The challenge becomes most apparent with notification text and menu items, where contrast can diminish to the point where information becomes difficult to parse quickly. Apple provides the clear transparency as an optional setting, acknowledging that maximum transparency isn’t suitable for all users or use cases. This represents one of the few areas where the visual appeal of Liquid Glass conflicts with practical usability, requiring users to make conscious choices about form versus function.

Even keyboard magnification, when activated by tapping to edit text, behaved not as resizing but as fluid digital glass reacting organically to touch pressure. This response felt natural, almost organic in its execution. The system rewards motion with clarity and precision, creating transitions that establish clear cause and effect while guiding your understanding of your current location within the interface and your intended destination. Across all platforms, this interaction dynamically ranges between 1.2x and 1.5x magnification, with the value determined by the specific gesture, the contextual environment, and the interface density at that moment, instead of being rigidly fixed.
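
Apple has not published how that value is derived, so treat the following as a purely illustrative toy model (the inputs and weighting below are invented, not Apple’s). The behavior described reads like a clamped interpolation across the band between 1.2x and 1.5x:

```python
# Toy model only: a context-dependent magnification clamped to the
# 1.2x-1.5x range described above. Inputs and weighting are invented.
def magnification(gesture_pressure: float, interface_density: float) -> float:
    """Both inputs normalized to [0, 1]; denser interfaces magnify less."""
    base, span = 1.2, 0.3                      # spans 1.2x up to 1.5x
    t = gesture_pressure * (1.0 - 0.5 * interface_density)
    t = max(0.0, min(1.0, t))                  # clamp to [0, 1]
    return base + span * t

print(round(magnification(0.8, 0.3), 2))       # -> 1.4
```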

This logic extends to watchOS, where pressing an icon or notification amplifies the element, creating magnification that feels less like conventional zoom and more like digital glass stretching forward. On the small watch screen, this creates a sense of interface elements having physical presence and weight. Touch targets feel more substantial with reflective surfaces and enhanced depth cues, making interactions feel more tactile despite the flat display surface.

While this interaction feels natural, the underlying mechanics are precisely controlled and deeply integrated. Apple has engineered a system that responds intelligently to context, gesture, and content type. Apple’s intention with Liquid Glass extends beyond replicating physical glass and instead represents recognition of the inherent qualities of physical materials: how light interacts with them, how they create distortion, and how they facilitate layering. These characteristics are then applied to digital environments, liberating them from the restrictive constraints of real-world physics.

Why This Matters for Daily Use

The result is a system that is elastic, contextually aware, and designed to recede when its presence is not required. Most individuals will not pause to dissect the underlying reasons why a particular interaction feels improved. Instead, they will perceive enhanced grounding when navigating iPadOS or watchOS, with sidebar elements conveying heightened solidity and magnification effects appearing intentional. Apple does not overtly publicize these changes; it engineers them to resonate with the user’s sense of interaction.

This translates to practical benefits: reduced cognitive load when navigating between apps, clearer visual hierarchy that helps you focus on content, and interface feedback that feels more natural and predictable. When you’re editing photos, the tools recede to let your images dominate. When you’re reading articles in Safari, the browser chrome adapts to keep text prominent. When you’re scrolling through messages, the conversation content remains clear while navigation elements provide subtle depth cues.

Liquid Glass represents a fundamental recalibration of how digital interfaces convey motion, spatial relationships, and control. The outcome is an experience that defies easy verbal articulation, yet one that you will find yourself unwilling to relinquish.

The post Apple’s Liquid Glass Hands-On: Why Every Interface Element Now Behaves Like Physical Material first appeared on Yanko Design.

https://www.yankodesign.com/2025/05/30/tailor-is-a-playful-tabletop-robot-that-brings-ai-voices-to-life/ (Fri, 30 May 2025)

Tailor Is A Playful Tabletop Robot That Brings AI Voices to Life


Imagine a future where your favorite AI assistant isn’t hiding in your phone or smart speaker but is sitting right beside you, nodding along and making eye contact. That’s the dream behind Tailor, the tabletop robot concept, which puts a friendly face and a little personality on the invisible voices we’ve all grown used to. No more talking into the void; Tailor makes digital conversation feel delightfully grounded.

Tailor isn’t some clunky robot with blinking lights and awkward limbs. Instead, its charm lies in a gentle, tiltable head that acts as both a screen and a face, propped up by a hinge that works like a neck. When you speak, Tailor listens, perks up, or even tilts with curiosity, bringing a sense of presence to those everyday chats with your AI. It’s a bit like having a pet that reacts to your mood, only this one’s powered by the Tail AI system.

Designer: Sseo Kimm


There’s something oddly comforting about watching Tailor’s head move and its screen-face animate in response to your words. It takes the coldness out of technology, making every interaction feel a little warmer, a little more genuine. Instead of invisible algorithms, you get a companion who is right there on your desk, ready to nod, tilt, or glance around as if it’s sharing the moment with you.

The magic is in the details: the way Tailor’s head gently pivots when it’s thinking, or how it rests in a relaxed pose when waiting for your next command. Its body is sleek, with soft edges and a neutral color scheme that helps it blend into any room, but it’s the expressive movement that catches your attention. The hinge lets Tailor look attentive or bashful, depending on the mood of the exchange, and its digital face keeps things simple and inviting.

The Tail AI system that powers Tailor is designed to roam across your digital life, but this robot concept gives it a real seat at the table, literally. It’s easy to imagine Tailor quietly keeping you company, responding with subtle gestures whether you’re asking for the weather or searching for a lost file. The physicality of the robot bridges the gap between abstract AI and the tangible world you can actually touch.

There’s a playful side to Tailor, too. Watching it react to your voice with a tilt or a nod feels like a secret handshake between you and your gadget. It turns routine interactions into moments of connection, making even the simplest tasks feel kind of special. The robot’s approachable design keeps things light, never veering into uncanny territory, and always seeming ready to listen.

Even though Tailor only lives as a concept for now, it hints at a future where artificial intelligence isn’t just a voice in the air. It becomes something you can look at, talk to, and maybe even feel a little attached to. For anyone who’s ever wished their AI helper could be a bit more like a friend, this design is a peek into a friendlier, more tactile tomorrow.

The post Tailor Is A Playful Tabletop Robot That Brings AI Voices to Life first appeared on Yanko Design.

https://www.yankodesign.com/2025/05/29/piek-is-an-ai-powered-guitar-tutor-concept-that-makes-learning-fun-for-beginners/ (Thu, 29 May 2025)

PIEK Is An AI-Powered Guitar Tutor Concept That Makes Learning Fun for Beginners


Learning guitar has always had a certain mystique, the kind that draws in people eager to strum their favorite tunes or just jam along with friends. There’s an undeniable thrill in picking up a guitar for the first time, imagining all the songs you’ll soon be able to play. Yet, as any beginner quickly discovers, getting started isn’t always as simple as it seems, and the obstacles can pile up before you even learn your first chord.

It’s easy to think that buying a guitar is the only real hurdle, but the reality is a little more complicated. The cost of amplifiers, cables, and other must-have accessories can catch new players off guard. And even once you’ve assembled all your gear, the price of private lessons or music school can feel out of reach, especially if you’re just hoping for a casual hobby. Many turn to YouTube tutorials as a budget-friendly option, but these videos only go so far.

Designer: Haneul Kang

Forming good habits early on is crucial when learning guitar, but beginners practicing on their own often pick up bad techniques without realizing it. Things like awkward wrist angles, clumsy picking, or crooked finger placement creep in and become hard to reverse over time. The problem with online videos is that they can’t watch you play or tell you when you’re making a mistake. You’re left guessing whether your posture is correct or your rhythm is steady.

That’s where PIEK, a clever AI-powered guitar tutor concept, comes into play. Shaped like a familiar guitar pick, PIEK clips onto your guitar headstock and uses a camera and smart sensors to analyze your hand movements. It’s not just watching; it’s learning with you, offering instant feedback so you can catch and fix bad habits before they stick. Practicing feels less like guesswork and more like having a patient guide by your side.

PIEK SOLO is the compact version of the device, designed for quick, mobile use. It attaches right to the headstock, tracking your picking hand with a camera and sensors that notice everything from your picking strength to your finger patterns. The connected app beams feedback straight to your phone, so you know whether your technique is on point or needs a little nudge. Its lightweight build and simple design mean you can clip it on and start learning right away, wherever inspiration strikes.

The PIEK DUO takes things a step further for those who want even deeper insights. Instead of clipping onto the guitar, DUO is a freestanding device that sits in front of the player, using a wide-angle camera to track both hands. Its LED strip pulses like a metronome that adjusts to your tempo, giving you a visual cue to help lock in your groove. With its AI-driven rhythm detection, DUO knows exactly when you’re falling behind or rushing ahead, and will let you know.
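
PIEK is only a concept, so no real implementation exists, but the rhythm detection it describes maps naturally onto comparing detected note onsets against a tempo grid. Here is an illustrative sketch; the function name, tolerance, and the assumption of one note per beat are all invented for the example.

```python
# Illustrative rhythm check: flags notes that land early (rushing) or
# late (dragging) relative to a fixed tempo grid, one note per beat.
def rhythm_feedback(onsets, bpm, tolerance=0.05):
    beat = 60.0 / bpm                          # seconds per beat
    feedback = []
    for i, onset in enumerate(onsets):
        drift = onset - i * beat               # deviation from the grid
        if drift > tolerance:
            feedback.append((i, "dragging", round(drift, 3)))
        elif drift < -tolerance:
            feedback.append((i, "rushing", round(drift, 3)))
    return feedback

# 100 BPM -> 0.6 s per beat; the third note lands 80 ms late
print(rhythm_feedback([0.00, 0.61, 1.28, 1.80], bpm=100))
# [(2, 'dragging', 0.08)]
```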

For the most thorough experience, you can use SOLO and DUO together, watching both hands in real time. This dual setup means you get feedback on your fretting and strumming, making it easier than ever to smooth out those early rough patches. For anyone who’s struggled to learn guitar on their own, or who found traditional lessons out of reach, PIEK offers a fun, affordable, and smart new way to practice.

The post PIEK Is An AI-Powered Guitar Tutor Concept That Makes Learning Fun for Beginners first appeared on Yanko Design.

https://www.yankodesign.com/2025/05/27/everything-we-know-about-jony-ives-6-5-billion-dollar-secret-ai-gadget/ (Wed, 28 May 2025)

Everything We Know About Jony Ive’s $6.5 Billion Dollar ‘Secret’ AI Gadget


Let’s be honest, the tech world hasn’t felt this electric since Steve Jobs pulled the original iPhone from his pocket. Sure, we felt a few sparks fly in 2024 when Rabbit and Humane announced their AI devices, but that died down pretty quickly post-launch. However, when news broke that OpenAI had acquired Jony Ive’s mysterious startup “io” for a staggering $6.5 billion, the speculation machine kicked into overdrive. What exactly are the legendary Apple designer and ChatGPT’s creators cooking up together? The official announcement speaks vaguely of “a new family of products” and moving beyond traditional interfaces, but the details remain frustratingly sparse.

What we do know with certainty is limited. OpenAI and Ive’s company, io, are building something that’s reportedly “screen-free,” pocket-sized, and designed to bring AI into the physical world in a way that feels natural and ambient. The founding team includes Apple veterans Scott Cannon, Evans Hankey, and Tang Tan, essentially the hardware dream team that shaped the devices in your pocket and on your wrist. Beyond these confirmed facts lies a vast expanse of rumors, educated guesses, and wishful thinking. So let’s dive into what this device might be, with the appropriate grains of salt at the ready.

The Design: Ive’s Aesthetic Philosophy Reimagined

AI Representation

If there’s one thing we can reasonably predict, it’s that whatever emerges from Ive’s studio will be obsessively considered down to the micron. His design language at Apple prioritized simplicity, honest materials, and what he often called “inevitable” solutions, designs that feel so right they couldn’t possibly be any other way. A screen-free AI device presents a fascinating challenge: how do you create something tactile and intuitive without the crutch of a display?

I suspect we’ll see a device that feels substantial yet effortless in the hand, perhaps with a unibody construction milled from a single piece of material. Aluminum seems likely given Ive’s history, though ceramic would offer an interesting premium alternative with its warm, almost organic feel. The absence of a screen suggests the device might rely on subtle surface textures, perhaps with areas that respond to touch or pressure. Ive’s obsession with reducing visual complexity, eliminating unnecessary seams, screws, and buttons, will likely reach its logical conclusion here, resulting in something that looks deceptively simple but contains remarkable complexity.

Color choices will probably be restrained and sophisticated, think the elegant neutrals of Apple’s “Pro” lineup rather than the playful hues of consumer devices. I’d wager on a palette of silver, space gray, and possibly a deep blue, with surface finishes that resist fingerprints and wear gracefully over time. The environmental considerations that have increasingly influenced Ive’s work will likely play a role too, with recycled materials and sustainable manufacturing processes featured prominently in the eventual marketing narrative.

Technical Possibilities: AI in Your Pocket

AI Representation

The technical challenge of creating a screen-free AI device is immense. Without a display, every interaction becomes an exercise in invisible design: the device must understand context, anticipate needs, and communicate through means other than visual interfaces. This suggests an array of sophisticated sensors and input methods working in concert.

Voice recognition seems an obvious inclusion, likely using multiple microphones for spatial awareness and noise cancellation. Haptic feedback, perhaps using Apple-like Taptic Engine technology or something even more advanced, could provide subtle physical responses to commands or notifications. The device might incorporate motion sensors to detect when it’s being handled or carried, automatically waking from low-power states. Some reports hint at environmental awareness capabilities, suggesting cameras or LiDAR might be included.

The processing requirements for a standalone AI device are substantial. Running large language models locally requires significant computational power and memory, all while maintaining reasonable battery life. This points to custom silicon, possibly developed with TSMC or another major foundry, optimized specifically for AI workloads. Whether OpenAI has the hardware expertise to develop such chips in-house remains an open question, though their Microsoft partnership might provide access to specialized hardware expertise. Battery technology will be crucial; a device that needs charging multiple times daily would severely limit its utility as an always-available AI companion.

The User Experience: Beyond Screens and Apps

AI Representation

The most intriguing aspect of this rumored device is how we’ll actually use it. Without a screen, traditional app paradigms become irrelevant. Instead, we might see a return to conversational computing, speaking naturally to an assistant that understands context and remembers previous interactions. The “ambient computing” vision that’s been promised for years might finally materialize.

I imagine a device that feels less like a gadget and more like a presence, something that fades into the background until needed, then responds with uncanny intelligence. Perhaps it will use subtle audio cues or haptic patterns to indicate different states or notifications. The lack of a visual interface could actually enhance privacy; without a screen displaying potentially sensitive information, the device becomes more discreet in public settings. Of course, this also raises questions about accessibility: how will deaf users interact with a primarily audio-based device?

Integration with existing ecosystems will be crucial for adoption. Will it work seamlessly with your iPhone, Android device, or Windows PC? Can it control your smart home devices or integrate with your calendar and messaging apps? The answers remain unknown, but OpenAI’s increasingly broad partnerships suggest they understand the importance of playing nicely with others. The real magic might come from its predictive capabilities, anticipating your needs based on time, location, and past behavior, then proactively offering assistance without explicit commands.

Market Positioning and Price Speculation

AI Representation

How much would you pay for an AI companion designed by the man behind the iPhone? The pricing question looms large over this project. Premium design and cutting-edge AI technology don’t come cheap, suggesting this will be positioned as a high-end device. Looking at adjacent markets provides some clues: Humane’s AI Pin launched at $699, while Rabbit’s R1 came in at $199, though both offer significantly less sophisticated experiences than what we might expect from OpenAI and Ive.

My educated guess places the device somewhere between $499 and $799, depending on capabilities and materials. A lower entry point might be possible if OpenAI adopts a subscription model for premium AI features, subsidizing hardware costs through recurring revenue. The target market initially appears to be tech enthusiasts and professionals, people willing to pay a premium for cutting-edge technology and design, before potentially expanding to broader consumer segments as costs decrease and capabilities improve.

As for timing, the supply chain whispers and regulatory tea leaves suggest we’re looking at late 2025 at the earliest, with full availability more likely in 2026. Hardware development cycles are notoriously unpredictable, especially for first-generation products from newly formed teams. The $6.5 billion acquisition price suggests OpenAI sees enormous potential in this collaboration, but also creates substantial pressure to deliver something truly revolutionary.

The Competitive Landscape: A New Category Emerges

AI Representation

The AI hardware space is still in its infancy. Early entrants like Humane have struggled with fundamental questions about utility and user experience. What makes a dedicated AI device compelling when smartphones already offer capable assistants? The answer likely lies in specialized capabilities that phones can’t match, perhaps always-on contextual awareness without battery drain, or privacy guarantees impossible on multipurpose devices.

OpenAI and Ive are betting they can define a new product category, much as Apple did with the iPhone and iPad. Success will require not just technical excellence but a compelling narrative about why this device deserves space in your life. The competition won’t stand still either: Apple’s rumored AI initiatives, Google’s hardware ambitions, and countless startups will ensure a crowded marketplace by the time this device launches.

The most fascinating aspect might be how this hardware play fits into OpenAI’s broader strategy. Does physical embodiment make AI more trustworthy, useful, or personable? Will dedicated devices provide capabilities impossible through software alone? These philosophical questions underpin the entire project, suggesting that Ive and Altman share a vision that extends beyond quarterly profits to how humans and AI will coexist in the coming decades.

What This Could Mean for the Future of Computing

AI Representation

If successful, this collaboration could fundamentally reshape our relationship with technology. The screen addiction that defines contemporary digital life might give way to something more ambient and less demanding of our visual attention. AI could become a constant companion rather than an app we occasionally summon, always listening, learning, and assisting without requiring explicit commands for every action.

The privacy implications are both promising and concerning. A device designed from the ground up for AI interaction could incorporate sophisticated on-device processing, keeping sensitive data local rather than sending everything to the cloud. Conversely, an always-listening companion raises obvious surveillance concerns, requiring thoughtful design and transparent policies to earn user trust.

For Jony Ive, this represents a chance to define the post-smartphone era, potentially creating his third revolutionary product category after the iPod and iPhone. For OpenAI, hardware provides a direct channel to users, bypassing platform gatekeepers like Apple and Google. The stakes couldn’t be higher for both parties, and for us, the potential users of whatever emerges from this collaboration.

Waiting for the Next Big Thing

AI Representation

The partnership between OpenAI and Jony Ive represents the most intriguing collision of AI and design talent we’ve seen yet. While concrete details remain scarce, the ambition is clear: to create a new kind of computing device that brings artificial intelligence into our physical world in a way that feels natural, beautiful, and essential.

Will they succeed? History suggests caution; creating new product categories is extraordinarily difficult, and first-generation devices often disappoint (raise your hands if you own a bricked Humane AI Pin or Rabbit R1). Yet the combination of OpenAI’s technical prowess and Ive’s design sensibility offers reason for optimism. Whatever emerges will undoubtedly be thoughtfully designed and technically impressive. Whether it finds a permanent place in our lives depends on whether it solves real problems in ways our existing devices cannot.

For now, we wait, analyzing every patent filing, supply chain rumor, and cryptic statement for clues about what’s coming. The anticipation itself speaks volumes about the state of consumer technology: in an era of incremental smartphone updates and me-too products, we’re hungry for something genuinely new. Jony Ive and Sam Altman just might deliver it.

The post Everything We Know About Jony Ive’s $6.5 Billion Dollar ‘Secret’ AI Gadget first appeared on Yanko Design.

https://www.yankodesign.com/2025/05/21/infinix-note-50-pro-5g-flagship-features-packed-in-a-budget-phone/ (Wed, 21 May 2025)

Infinix Note 50 Pro+ 5G: Flagship features packed in a budget phone


PROS:


  • Impressive charging capabilities

  • Generous package including charger and MagSafe compatible case

  • Seamless AI integration through “One-Tap Infinix AI”

  • Versatile camera setup


CONS:


  • Limited software update support

  • Not available in the US


EDITOR'S QUOTE:

The Infinix Note 50 Pro+ 5G impresses with its robust performance, premium design, and an array of thoughtful features, all wrapped in an affordable price tag of $370.

Smartphone shoppers often face a frustrating dilemma: spend a fortune on a premium device with all the bells and whistles, or settle for a budget phone that cuts too many corners to hit its price point. This compromise typically means sacrificing camera quality, display performance, or processing power – the very features that enhance our daily digital experiences. The mid-range market attempts to bridge this gap, but rarely delivers a truly satisfying balance of high-end specifications and reasonable cost without significant compromises in build quality or user experience.

The Infinix Note 50 Pro+ 5G boldly challenges this status quo by bringing genuine flagship-level features to the budget-conscious consumer. What makes this offering particularly intriguing is how Infinix has prioritized features that genuinely impact user experience rather than simply checking specification boxes for marketing purposes. Let’s see if it manages to meet those goals or if it cut too many corners to achieve its mouth-watering price point.

Designer: Infinix

Aesthetics

The Infinix Note 50 Pro+ 5G stands out in terms of design, drawing inspiration from automotive engineering. The frame is crafted from Armor Alloy, a robust blend of Damascus steel and aerospace-grade aluminum alloy, paired with a durable glass back panel. This combination enhances both strength and premium appeal.

The Note 50 Pro+ 5G is available in three color variants: Titanium Grey, Enchanted Purple, and the Racing Edition. We had the chance to review the Racing Edition, which draws influence from BMW’s Physital design philosophy, blending physical and digital aesthetics. The Racing Edition features a matte silver back panel with textured vertical lines, complemented by the iconic tri-color racing stripes, symbolizing dynamism and speed.

The device is also defined by its glossy octagonal camera island, located at the upper left corner. The camera island houses a triple camera setup, paired with the Bio-Active Halo AI Lighting System and an LED flashlight. The lighting system reacts to your phone’s activity, changing colors in response to charging, notifications, incoming calls, and gaming, adding a touch of flair to everyday interactions. It also doubles as a sensor for measuring heart rate and blood oxygen levels. While the concept is intriguing, the term “AI” seems a bit of a stretch, as the feature feels more gimmicky than groundbreaking.

Ergonomics

With dimensions of 163.36 x 74.35 x 7.99 mm and a weight of 209 grams, the device feels solid and premium in hand. However, this solid build comes with a slight downside. That is, the phone is a bit top-heavy, creating an unbalanced feel when holding it. Another notable ergonomic issue occurs when the phone is placed face up on a flat surface. Due to the pronounced camera island, the device has a tendency to wobble, making it less stable when resting on a desk or table.

While the phone’s design is generally comfortable for regular use, gaming is where the camera island becomes an ergonomic hurdle. When holding the device horizontally for gaming, the raised camera module interferes with your grip, which can be distracting and uncomfortable during longer sessions. The fingerprint scanner, located near the bottom of the display, also presents a bit of a challenge. The placement makes the transition from unlocking the device to navigating through the interface somewhat awkward.

Overall, the Infinix Note 50 Pro+ 5G is solidly built, but the top-heavy design, wobbling issue, and less-than-ideal fingerprint scanner placement can make for an occasionally frustrating user experience. While it’s not uncomfortable to hold, these small design decisions can impact long-term usability.

Performance

The Infinix Note 50 Pro+ 5G boasts a 6.79-inch AMOLED display with a resolution of 1080 x 2436, supporting a 144Hz refresh rate, HDR10+, and up to 1300 nits peak brightness (550 nits typical). The display is vibrant and fluid, offering rich colors and smooth animations whether you’re browsing, watching videos, or gaming. Even in direct sunlight, the screen stays bright and readable. Additionally, it remains responsive when used with wet fingers or in wet conditions, ensuring precision without any issues. The bezels are impressively thin and nearly symmetrical, enhancing the immersive viewing experience.

Complementing the display is a dual speaker setup tuned by JBL, which promises a more premium audio experience on paper. However, in practice, the speakers felt inconsistent. Volume remains relatively quiet up to around 80%, then suddenly spikes when pushed beyond that. The overall sound lacks balance. Bass is weak, and the mids and highs don’t carry much depth. For casual use, it’s adequate, but audio enthusiasts may find it underwhelming.

Under the hood, the Note 50 Pro+ 5G is powered by the MediaTek Dimensity 8350 Ultimate chipset, coupled with 12GB of RAM (virtually expandable to 24GB) and 256GB of storage. Running Android 15 with Infinix’s XOS 15 skin, the phone delivers a smooth, responsive experience across the board. Performance holds up impressively well even during graphically intense games like Call of Duty: Mobile and Genshin Impact, with no noticeable lag or stutter.

For the first time in an Infinix smartphone, the Note 50 Pro+ 5G introduces a comprehensive suite of AI features. What stands out, however, is how these tools are seamlessly integrated through “One-Tap Infinix AI”. By simply long-pressing the power button, regardless of the app you’re using, Folax, Infinix’s AI assistant, is instantly accessible. From summarizing or translating on-screen content to describing images or even editing photos, the AI offers a wide range of functions.

The addition of Google’s Circle to Search further enhances the experience. This integration feels incredibly well-thought-out, as it consolidates multiple AI tools under a single gesture, eliminating the need to switch between apps. It’s a convenient, user-friendly feature that simplifies multitasking without compromising functionality.

Another interesting addition is the ability to measure heart rate and blood oxygen levels by placing your finger on the Bio-Halo AI lighting sensor. While it may not replace dedicated health devices, it’s a novel feature to have built into a smartphone, particularly at this price point. Its usefulness will vary depending on user habits, but it adds an unexpected layer of utility.

The Infinix Note 50 Pro+ 5G boasts a triple-camera system that is relatively rare in this price range. The system consists of a 50 MP main sensor, a 50 MP 3x telephoto lens, and an 8 MP ultra-wide camera, offering great versatility for mobile photography. While this combination is a standout feature at this price point, the real question is how well it performs in everyday use. Let’s dive in and see how it stacks up.

The main camera uses a 1/1.56-inch Sony IMX896 sensor with an f/1.8 aperture and optical image stabilization (OIS). In well-lit conditions, it produces sharp, detailed images with vibrant colors, though the contrast can sometimes be a bit strong. Night mode performance is solid, capturing clear and well-exposed shots with minimal noise, although light sources can occasionally appear overexposed.

The telephoto camera delivers lossless clarity up to 6x zoom and extends digitally up to 100x. Between 3x and 6x, photos are rich in detail, with a good dynamic range that performs well across different lighting conditions. Beyond 6x, image quality starts to degrade, which is expected at higher magnifications. The ultra-wide camera also performs admirably. While it’s not as sharp as the main or zoom cameras, it still captures vibrant and clear images. Selfies from the 32 MP front-facing camera are generally good, though they can sometimes appear a bit faded.

For video, the Note 50 Pro+ 5G can record up to 4K at 60 FPS with the main and telephoto cameras, while the ultra-wide is limited to 2K at 30 FPS. The front-facing camera is capped at 4K at 30 FPS. Video footage from the main and telephoto cameras is smooth, though there are some minor hiccups. Unfortunately, you cannot switch between cameras while recording. Additionally, panning can cause stuttering in the viewfinder, and rapid movement results in judder in the video. Fortunately, you can turn on ultra-stabilization at 4K 60 FPS, and it works quite well.

With its sizable 5,200 mAh silicon-carbon battery, the Infinix Note 50 Pro+ 5G ensures you can go about your day without worrying about battery life. It easily lasts a full day of regular use. But the impressive battery specs don’t end there. The device supports 100W wired charging and 50W wireless charging, both of which are flagship-level capabilities. Additionally, the phone offers reverse charging, providing 10W through wired connections and 7.5W wirelessly, adding even more versatility to its power management.

Sustainability

The Infinix Note 50 Pro+ 5G is designed with durability in mind. The phone’s side frame is made from Armor Alloy, a robust blend of Damascus steel and aerospace-grade aluminum alloy, ensuring the phone is built to last. Paired with a durable glass back panel, this combination enhances the phone’s overall sturdiness, making it a reliable option for everyday use.

Additionally, the phone comes with an IP64 rating, offering protection against dust and water splashes. While this level of protection is not the highest available, it provides sufficient durability for typical day-to-day scenarios, giving users confidence that their device can handle the occasional exposure to water or rough environments.

However, when it comes to software longevity, the phone’s sustainability potential falls short. Infinix promises only two years of Android updates and three years of security updates, which is relatively limited when compared to other devices in the same price range. Many competing smartphones offer three or more years of operating system updates and security patches for up to four or five years, which means that the Note 50 Pro+ 5G may require a replacement sooner than some users might expect in order to stay up-to-date with the latest features and security improvements.

Value

At a price of $370, the Infinix Note 50 Pro+ 5G delivers exceptional value for money. With a feature set that includes a 6.79-inch AMOLED display, powerful performance, and a versatile triple-camera setup, it competes well in the mid-range smartphone market. Infinix has certainly packed a lot of premium features into an affordable device.

What truly sets the Note 50 Pro+ 5G apart is how complete the package is. Along with the phone, Infinix includes a 100W charger brick, a USB-C to USB-C cable, earphones, a MagSafe-like phone case, and a glass screen protector, offering a generous bundle that enhances the overall value of the device. That said, it’s worth noting that the Note 50 Pro+ 5G isn’t available in the US.

Verdict

The Infinix Note 50 Pro+ 5G impresses with its robust performance, premium design, and an array of thoughtful features, all wrapped in an affordable price tag of $370. It stands out in the mid-range segment by offering a large, vibrant AMOLED display, a capable triple-camera setup, and strong performance driven by the MediaTek Dimensity 8350 chipset. Additionally, the generous package that includes a 100W charger, USB-C cable, earphones, a MagSafe-like case, and a glass screen protector further enhances its value proposition, making it a complete package for those who want more out of their device.

While the device has a few ergonomic quirks, such as a top-heavy design and camera island wobbling, the overall user experience remains solid. The AI integration through “One-Tap Infinix AI” is a standout feature, providing quick and effortless access to a wide range of AI tools. Despite some limitations in software support, the Infinix Note 50 Pro+ 5G remains a well-rounded, feature-packed option for tech enthusiasts, gamers, and photographers who don’t want to break the bank. However, its absence in the US market is a downside for those hoping to purchase locally.

The post Infinix Note 50 Pro+ 5G: Flagship features packed in a budget phone first appeared on Yanko Design.

https://www.yankodesign.com/2025/05/20/tecno-unveils-comprehensive-ai-ecosystem-at-computex-2025/ (Tue, 20 May 2025)

TECNO Unveils Comprehensive AI Ecosystem at COMPUTEX 2025


TECNO brings a full suite of AI-powered devices to COMPUTEX 2025, showcasing how its self-developed edge-side AI model transforms everyday technology interactions. The company returns to Taipei with products spanning laptops, smart glasses, and wearables under its “Mega Leap with AI” theme.

Designer: TECNO

The centerpiece of their showcase is the new MEGABOOK S16 AI PC, complemented by the world’s lightest 14-inch OLED laptop weighing just 899 grams. These devices represent TECNO’s vision for AI that works seamlessly both online and offline, addressing the growing need for intelligent computing that adapts to users rather than forcing adaptation.

MEGABOOK S16: Flagship AI Performance in a Premium Package

The MEGABOOK S16 integrates TECNO’s self-developed edge-side AI model, enabling AI functionality even without internet connectivity. Powered by Intel’s Core i9-13900HK processor with 14 cores and 20 threads reaching 5.4 GHz turbo speeds, the system delivers substantial computational power for demanding AI applications.

Despite its performance capabilities, the S16 maintains a surprisingly portable profile at just 1.3kg and 14.9mm thick. The all-metal chassis houses TECNO’s first 16-inch display in the flagship laptop line, responding to user demand for larger screens without sacrificing mobility.

The system particularly excels in multitasking scenarios where AI assistance proves most valuable. Users can seamlessly switch between creative work, productivity tasks, and entertainment without the performance degradation typically associated with running multiple demanding applications.

MEGABOOK S14: Redefining Ultralight Computing

Perhaps more impressive from an engineering standpoint is the MEGABOOK S14, which achieves a remarkable 899g weight while incorporating a 2.8K OLED display. TECNO offers the system in two variants: one with Qualcomm’s Snapdragon X Elite compute platform and another with Intel’s Core Ultra 9 processor.

The magnesium-alloy chassis contributes to the ultralight design without compromising structural integrity. For users requiring additional graphics performance, TECNO provides an external graphics dock with NVIDIA GPU options that transforms the ultraportable into a creative workstation or gaming system.

The S14 represents TECNO’s first OLED implementation in a laptop, delivering 2.8K resolution with a 120Hz refresh rate and 91% screen-to-body ratio. The display carries TÜV Rheinland eye comfort certification for extended viewing sessions.

K-Series: Accessible AI Computing

TECNO also showcases its K15S and K14S models, representing new size options in its entry-level lineup. The K15S features an all-metal design with a 15.6-inch display, Intel Core i5-13420H processor, and expandable memory up to 32GB.

Despite its more accessible positioning, the K15S incorporates a substantial 70Wh battery with 65W GaN fast charging technology, addressing a common pain point in the category. The system includes a full-sized keyboard with numeric keypad and four-level backlighting for productivity in various lighting conditions.

AI Capabilities Across the Lineup

All MEGABOOK models now feature TECNO’s upgraded AI model with DeepSeek-V3, enhancing offline capabilities while enabling comprehensive online AI searches through a Personal GPT function. The system offers six core AI functionalities designed to streamline common workflows.

The AI Gallery, which TECNO claims is a world-first on Windows, connects wirelessly with TECNO smartphones for photo backup, smart album creation, and image searches. The Ella AI Assistant manages tasks and schedules, while AI PPT localizes and completes presentations using TECNO’s AI sources.

For professionals, the AI Meeting Assistant provides real-time transcription with speaker identification and key point extraction. The system also includes AI Drawing tools for creative applications.

AI Glasses: Smartphone-Grade Photography in Eyewear

Moving beyond computing, TECNO introduces its first AI Glasses series with two models: AI Glasses and AI Glasses Pro. Both incorporate a 50MP camera system using the same OV50D sensor, ISP, and imaging algorithms found in TECNO’s flagship CAMON 40 Premier smartphone.

The glasses feature a “SmartSnap” function that recognizes scenes and automatically generates captions for social sharing. The AI Info function compiles notifications from multiple apps into concise reports, while real-time translation supports over 100 languages.

The Pro model adds WaveGuide AR display technology co-developed with Meta-Bounds, featuring a MicroLED screen with 30° field of view and 1500 nits brightness. This enables navigation overlays, meeting translations, and other augmented reality applications.

Both models offer approximately 8 hours of mixed use on a 30-minute charge of their 250mAh batteries. The standard model features an aviator design, while the Pro adopts a browline style.

Ecosystem Integration and Market Positioning

TECNO emphasizes the interconnectivity of its AI ecosystem through OneLeap technology, which enables multi-screen sharing, file transfer, and cross-device collaboration between MEGABOOK laptops, smartphones, and tablets.

This approach addresses a common friction point for users working across multiple devices, allowing content and context to follow the user rather than remaining siloed on individual devices.

TECNO positions its AI ecosystem as democratizing advanced technology for emerging markets, with a presence in over 70 markets across five continents. Their “Stop At Nothing” brand philosophy guides product development toward accessible innovation.

The comprehensive lineup demonstrates TECNO’s commitment to AI as a transformative technology rather than a marketing checkbox. By developing its own edge-side AI model, the company maintains control over the user experience while ensuring functionality even in regions with inconsistent connectivity.

For users seeking to experience TECNO’s vision of AI-enhanced computing, the company’s booth at COMPUTEX 2025 (N1302) showcases all products through May 23rd.

The post TECNO Unveils Comprehensive AI Ecosystem at COMPUTEX 2025 first appeared on Yanko Design.

]]>
Copilot Dock Concept Channels Retro Calculators for Smarter AI Access https://www.yankodesign.com/2025/05/16/copilot-dock-concept-channels-retro-calculators-for-smarter-ai-access/?utm_source=rss&utm_medium=rss&utm_campaign=copilot-dock-concept-channels-retro-calculators-for-smarter-ai-access Fri, 16 May 2025 17:00:21 +0000 https://www.yankodesign.com/?p=552722

Copilot Dock Concept Channels Retro Calculators for Smarter AI Access

AI services like Microsoft Copilot are quickly moving from digital assistants buried in our devices to visible, tangible features built right into our workflow. With...
]]>

AI services like Microsoft Copilot are quickly moving from digital assistants buried in our devices to visible, tangible features built right into our workflow. With the addition of the Copilot key on the latest Windows laptops, launching Microsoft’s AI is now as easy as a single tap. But for those who lean on AI for everyday tasks, getting real work done often means repeating the same prompts and clicking through menus again and again.

That’s where the Retro Calculator-inspired Copilot Dock Concept comes in: a clever gadget aimed squarely at heavy Copilot users and anyone craving a bit more efficiency with a dash of nostalgia. Rather than typing out similar requests over and over, this stylish device puts your favorite prompts just one button away, making AI interaction smoother, faster, and a lot more fun.

Designer: Braz de Pina

The concept takes cues from today’s shortcut decks and macro keypads, which are already a favorite among streamers and power users. These devices let you map buttons to specific actions or sequences, freeing you from memorizing complicated keyboard combos. The Copilot Dock brings this same idea to AI, but with a twist: it’s built from the ground up with Copilot in mind, streamlining your most common commands into physical buttons.

Nine numbered keys are perched on the face of the dock, designed to be mapped to the prompts and actions you use most often. Whether you’re generating summaries, coding snippets, or drafting emails, you can assign each to a specific button for instant access. There’s even a display that shows which prompt you’ve selected, so you always know what’s coming next.
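
As a thought experiment, the software side of such a dock could be as simple as a key-to-prompt lookup table. The Python sketch below is purely hypothetical, since the concept has no published firmware; every prompt and function name here is invented for illustration:

PROMPTS = {
    "1": "Summarize the selected text in three bullet points.",
    "2": "Draft a polite follow-up email about the topic below.",
    "3": "Explain this code snippet and suggest improvements.",
    # ...keys 4 through 9 bound to other frequently used prompts
}

def send_to_copilot(prompt):
    pass  # placeholder: a real dock would relay this to the host PC

def on_key_press(key):
    """Look up the prompt bound to a key and hand it to the assistant."""
    prompt = PROMPTS.get(key)
    if prompt is None:
        print(f"Key {key} is unbound.")
        return
    print(f"Selected prompt: {prompt}")  # mirrors the dock's small display
    send_to_copilot(prompt)

on_key_press("1")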

What really sets this concept apart, though, is its retro aesthetic. Inspired by vintage calculators and classic Olivetti typewriters, the Copilot Dock sports a slanted display and chunky keys that evoke another era, one where tech was tactile, satisfying, and just a little bit playful. The Copilot dial in the corner adds another layer of interaction, letting you start or end your AI session with a simple twist.

For anyone using AI as a regular copilot throughout the workday, the Copilot Dock concept is an enticing blend of old-school charm and futuristic convenience. It’s proof that sometimes, the best way to get more from new technology is to revisit what made the tools of the past so delightful to use. Whether you’re an AI enthusiast or just someone who loves a good gadget, this retro-inspired dock is a fresh take on making digital helpers feel a bit more hands-on.

The post Copilot Dock Concept Channels Retro Calculators for Smarter AI Access first appeared on Yanko Design.

]]>
LegoGPT lets you build the set of your dreams https://www.yankodesign.com/2025/05/14/legogpt-lets-you-build-the-set-of-your-dreams/?utm_source=rss&utm_medium=rss&utm_campaign=legogpt-lets-you-build-the-set-of-your-dreams Wed, 14 May 2025 08:45:35 +0000 https://www.yankodesign.com/?p=552107

LegoGPT lets you build the set of your dreams

Legos have always been more of a creative endeavor than just a mere toy for kids. Actually, there may be more adults who own Lego...
]]>

Legos have always been more of a creative endeavor than a mere toy for kids. In fact, there may be more adults who own Lego sets and make Lego builds than actual kids (although most parents start them young). It’s always interesting to see what new (and expensive) sets Lego comes up with, but sometimes the builds people want to see are ones they would have to design themselves. That’s when your creativity (and a little math and engineering) really comes in. But shouldn’t there be an easier way to arrive at these dream builds?

Say hello to LegoGPT, an innovative AI model that promises to revolutionize how we imagine and construct with those iconic little blocks. Developed by researchers at Carnegie Mellon University, this is not just another digital building simulator. Instead, it acts as a co-creator, capable of generating custom Lego set ideas based on simple prompts. Imagine typing in “a cozy cafe in the countryside” or “a futuristic house in space,” and having LegoGPT conjure up unique designs, complete with suggested brick compositions and even potential building instructions.

Designers: Carnegie Mellon University (Pun, Ava and Deng, Kangle and Liu, Ruixuan and Ramanan, Deva and Liu, Changliu and Zhu, Jun-Yan)

This groundbreaking technology taps into the vast library of existing Lego elements and design principles, combined with the power of advanced natural language processing. By understanding the nuances of your requests, LegoGPT can translate abstract ideas into tangible, brick-based concepts. This means that even if you’re not a seasoned Master Builder, you can still bring your imaginative visions to life with the help of AI.

The implications of LegoGPT are far-reaching. For casual builders, it offers a fantastic starting point for new projects, breaking through creative blocks and suggesting designs they might never have conceived on their own. For serious Lego enthusiasts, it could serve as a powerful brainstorming tool, pushing the boundaries of what’s possible with the existing brick system.
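
To picture what translating abstract ideas into brick-based concepts might look like in code, here is a hedged Python sketch; the Brick type and generate_build function are invented for illustration and do not reflect the researchers’ actual interface:

from dataclasses import dataclass

@dataclass
class Brick:
    part: str        # e.g. "2x4" for a standard 2x4 brick
    position: tuple  # (x, y, z) in stud coordinates; orientation omitted

def generate_build(prompt):
    """Stand-in for the model call: map a text prompt to placed bricks."""
    # A real system would run a language model trained on brick layouts
    # and check each placement for physical stability before accepting it.
    return [
        Brick("2x4", (0, 0, 0)),
        Brick("2x4", (2, 0, 0)),
        Brick("2x2", (1, 0, 1)),  # one plate up, tying the two below together
    ]

for brick in generate_build("a cozy cafe in the countryside"):
    print(brick)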

Beyond personal use, LegoGPT holds exciting possibilities for education. Imagine students learning about different architectural styles by prompting the AI to design buildings from various historical periods. Or, think about the potential for interactive storytelling, where children can describe a scene and then build the Lego set that the AI generates. While the full capabilities and public availability of LegoGPT are still unfolding, the concept itself is a thrilling glimpse into the future of creative play. It blends the timeless appeal of Lego with the cutting-edge potential of artificial intelligence, suggesting a world where our imaginations, empowered by technology, can build bigger and bolder than ever before.

The post LegoGPT lets you build the set of your dreams first appeared on Yanko Design.

]]>
This Tiny Robot Dog With AI Vision Might Be Smarter Than Your Coworkers https://www.yankodesign.com/2025/05/06/this-tiny-robot-dog-with-ai-vision-might-be-smarter-than-your-coworkers/?utm_source=rss&utm_medium=rss&utm_campaign=this-tiny-robot-dog-with-ai-vision-might-be-smarter-than-your-coworkers Wed, 07 May 2025 00:30:24 +0000 https://www.yankodesign.com/?p=549733

This Tiny Robot Dog With AI Vision Might Be Smarter Than Your Coworkers

No puppy should be this smart. Especially one with stainless steel tendons, a brain powered by a Raspberry Pi, and the gait of a...
]]>

No puppy should be this smart. Especially one with stainless steel tendons, a brain powered by a Raspberry Pi, and the gait of a predator trained in motion physics. The PuppyPi isn’t pretending to be man’s best friend—it’s angling to be your next research assistant, AI sandbox, and mechanical muse. And it wears that ambition in its aluminum alloy frame like armor.

Barely larger than a hardcover book and weighing just 720 grams, the PuppyPi quadruped robot looks like it belongs on a desk—but acts like it’s ready for a DARPA challenge. This isn’t the Petoi Bittle’s plastic cousin that trots across classrooms with toy-like charm. This one trades PLA for a CNC-machined aluminum exoskeleton, swaps plug-and-play servos for eight stainless steel coreless motors pushing 8 kg-cm of torque, and throws in onboard AI vision backed by ROS and Raspberry Pi. It’s got brains, brawn, and build quality that’s surprisingly serious for something small enough to tiptoe across your keyboard.

Designer: Hiwonder

Each leg is driven by coreless servos rated at 8 kg-cm of torque, which is plenty for a robot that weighs less than a bag of coffee beans. But it’s not just torque for torque’s sake. This bot uses a linkage mechanism that translates motor spin into fluid motion, giving each step a lifelike arc. The leg system is tuned for walk, trot, and amble—terms familiar to biomechanical engineers and animators alike—each customizable by lift height, touchdown timing, and stride phase offset. It’s the difference between watching a robot move and watching one walk.
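
To make those parameters concrete, here is a minimal Python sketch (illustrative only, not Hiwonder’s actual gait code) that shapes one foot’s height from a lift height, a stance duty factor, and a per-leg phase offset:

import math

def foot_height(t, period, phase_offset, lift_height, duty=0.6):
    """Height of one foot at time t; duty is the fraction of the
    cycle spent on the ground, the rest is a half-sine swing."""
    phase = ((t / period) + phase_offset) % 1.0
    if phase < duty:
        return 0.0                                   # stance: foot planted
    swing = (phase - duty) / (1.0 - duty)
    return lift_height * math.sin(math.pi * swing)   # swing: lift and lower

# A trot pairs diagonal legs: offsets of 0.0 and 0.5 keep them half a cycle apart.
LEG_PHASES = {"front_left": 0.0, "rear_right": 0.0,
              "front_right": 0.5, "rear_left": 0.5}

for leg, offset in LEG_PHASES.items():
    h = foot_height(t=0.3, period=1.0, phase_offset=offset, lift_height=30.0)
    print(f"{leg}: {h:.1f} mm")

Changing the duty factor and the per-leg offsets is essentially what separates a walk from a trot from an amble.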

That walking, by the way, happens with real-time posture correction thanks to an IMU sensor keeping tabs on its orientation. It’s not flawless, but it’s convincingly stable, and you can fine-tune everything from its height to its pitch angle via a PC app or mobile device. Android and iOS are both supported, with FPV streamed directly from the wide-angle HD camera mounted up front. Think real-time dog vision, straight to your screen.
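
In spirit, that correction can be as simple as a proportional controller that leans the body against whatever tilt the IMU reports. The sketch below is a hedged illustration under that assumption; the gain and the interface are invented, and the real firmware is certainly more elaborate:

KP = 0.8  # proportional gain: fraction of the measured error corrected per update

def correct_posture(imu_pitch, imu_roll, target_pitch=0.0, target_roll=0.0):
    """Return body pitch/roll commands that counter the measured tilt (degrees)."""
    pitch_cmd = KP * (target_pitch - imu_pitch)
    roll_cmd = KP * (target_roll - imu_roll)
    return pitch_cmd, roll_cmd

# Climbing a 5-degree slope pitches the body up, so the command leans it forward.
print(correct_posture(imu_pitch=5.0, imu_roll=0.0))  # -> (-4.0, 0.0)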

A 130° wide-angle 480p AI-powered camera stares ahead like a pair of digital eyes, feeding video to your smartphone via Android or iOS apps. You get FPV vision and remote control over Wi-Fi—handy for navigating complex environments or just geeking out as your tiny robot dog wanders under your couch. The onboard Python scripts come with detailed annotations, meaning high schoolers can pick it up fast, but the depth is there for deeper tinkering. Object tracking, line following, even gesture recognition—it’s all fair game.

For the code-savvy, the Python libraries are open source and annotated. You’re not stuck in a walled garden. Dive in, clone the repo, and go wild. Want it to dance when it sees your face? Go for it. Want it to find red balls and bring them to a marked zone? It’s already half-written for you. There’s even Gazebo simulation support, so you can test your algorithms in a digital world before committing real silicon and servos.
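
The red-ball task mentioned above typically reduces to HSV color thresholding, a standard OpenCV pattern. Here is a hedged sketch of that approach; the threshold values are illustrative and would need tuning for real-world lighting:

import cv2

def find_red_ball(frame_bgr):
    """Return (x, y, radius) of the largest red blob in a BGR frame, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two ranges.
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    (x, y), radius = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
    return int(x), int(y), int(radius)

cap = cv2.VideoCapture(0)  # on the robot this would be its onboard camera feed
ok, frame = cap.read()
if ok:
    print(find_red_ball(frame))
cap.release()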

The Pro model adds SLAM to the mix via a 12-meter TOF LiDAR system clocking in at 4500Hz. That’s not just mapping—that’s mapping fast. It supports single- and multi-point navigation, obstacle detection, and rerouting in real-time. Gmapping, Hector SLAM, and Karto are all supported, making this pint-sized pup capable of navigating a living room like it’s a warehouse floor. And yes, you can give it waypoints, watch it build a map, and track its position with precision you’d normally expect from robots ten times the price.
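
For intuition about what that mapping involves, the back-of-the-envelope sketch below drops LiDAR returns into a 2D occupancy grid, assuming the robot’s pose is already known; real SLAM packages like Gmapping estimate that pose simultaneously, so this covers only the map-update half of the problem:

import math

GRID_SIZE = 200      # 200 x 200 cells
RESOLUTION = 0.05    # metres per cell, so the grid spans 10 m x 10 m
grid = [[0] * GRID_SIZE for _ in range(GRID_SIZE)]

def mark_scan(robot_x, robot_y, robot_yaw, ranges, angle_step):
    """Convert (range, bearing) LiDAR returns into occupied grid cells."""
    for i, r in enumerate(ranges):
        if r <= 0 or r > 12.0:                 # sensor is rated to 12 metres
            continue
        angle = robot_yaw + i * angle_step
        col = int((robot_x + r * math.cos(angle)) / RESOLUTION) + GRID_SIZE // 2
        row = int((robot_y + r * math.sin(angle)) / RESOLUTION) + GRID_SIZE // 2
        if 0 <= row < GRID_SIZE and 0 <= col < GRID_SIZE:
            grid[row][col] = 1                 # mark the cell as occupied

mark_scan(0.0, 0.0, 0.0, ranges=[2.0, 2.1, 2.2], angle_step=math.radians(1))
print(sum(map(sum, grid)), "cells marked")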

And if you’re thinking beyond quadruped locomotion, PuppyPi Pro even supports a robotic arm. Vision-guided, object-grabbing, programmable. Suddenly, this isn’t a pet or a toy—it’s a multi-modal mobile manipulator. Pick up a piece of trash, flip a switch, draw a circle in the sand. The use cases aren’t locked behind hardware limitations.

PuppyPi’s backbone is its Raspberry Pi ecosystem—choose between Pi 4B or Pi 5, with corresponding expansion boards loaded with GPIO, PWM, I2C, RGB LEDs, buzzers, and signal indicators. The Pi 5 board even brings a 32-bit ARM controller and CRC support. No flat cables. No clunky adapters. Just clean integration and room for secondary development.
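
As a taste of that GPIO access, here is a minimal sketch using the standard RPi.GPIO library; the buzzer pin number is an assumption, since the expansion board’s exact pinout isn’t given here:

import time
import RPi.GPIO as GPIO

BUZZER_PIN = 18  # hypothetical BCM pin; check the expansion board's pinout

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUZZER_PIN, GPIO.OUT)

try:
    for _ in range(3):                        # three short beeps
        GPIO.output(BUZZER_PIN, GPIO.HIGH)
        time.sleep(0.1)
        GPIO.output(BUZZER_PIN, GPIO.LOW)
        time.sleep(0.1)
finally:
    GPIO.cleanup()                            # release the pins on exit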

A 2200mAh LiPo battery gives you a solid 60 minutes of runtime. You get real-world testing without needing a wall outlet every 15 minutes. A voltage monitor keeps you in check so you don’t end your SLAM session with a dead bot in a blind corner.

This is what happens when a robot dog is built not to imitate biology, but to accelerate learning. It’s unapologetically geeky, technically rich, and begging to be modded. Whether you’re a student writing your first gait sequence or an engineer testing a SLAM pipeline in your living room, PuppyPi doesn’t condescend. It delivers.

The post This Tiny Robot Dog With AI Vision Might Be Smarter Than Your Coworkers first appeared on Yanko Design.

]]>