| Action | Title | URL | Images | Scraped At | Status |
|---|---|---|---|---|---|
| | At CES 2026, PaXini lays out a “full-stack” roadmap for … | https://kr-asia.com/at-ces-2026-paxini-… | 1 | Jan 09, 2026 00:03 | active |
At CES 2026, PaXini lays out a “full-stack” roadmap for embodied intelligence
URL: https://kr-asia.com/at-ces-2026-paxini-lays-out-a-full-stack-roadmap-for-embodied-intelligence
Description: Beyond building sensors, PaXini is developing infrastructure that connects data and AI to make embodied robots deployable.
Content:
Written by Cheng Zi. Published on 8 Jan 2026. In the world of artificial intelligence and robotics, there’s a well-known principle called Moravec’s paradox. It captures a counterintuitive reality: tasks that appear intellectually demanding for humans, such as logical reasoning or playing chess, are relatively easy for machines, while actions people perform instinctively, like holding a screwdriver and passing it to a teammate, remain extraordinarily difficult for robots. That paradox is on full display again at this year’s CES in the US, running from January 6–9. Humanoid robots can be seen drawing crowds as they pour coffee, scoop ice cream, and wave for cameras. Behind the spectacle, however, sits a quieter question: when the lights go out and the booths come down, how do these robots move beyond demos and become useful machines in factories, warehouses, homes, and other real-world settings? For PaXini Tech, an internationally recognized company best known for its tactile sensing technology, the answer lies in what it calls “full-stack infrastructure” for embodied intelligence. The company is building a foundation that links sensors, data, models, and robot bodies into a single deployable system. Founded in 2021, PaXini traces its roots to the Sugano Laboratory at Waseda University, often cited as the birthplace of the world’s first humanoid robot. Drawing on what it describes as a pioneering six-dimensional Hall array tactile sensing technology, the company has developed high-precision sensors capable of detecting up to 15 tactile dimensions, including six-axis force, texture, and elasticity. The aim is to give robots something approaching humanlike tactile perception. According to PaXini founder Hsu Jincheng, meaningful real-world deployment of robotics is not just about vision or planning. It depends on whether robots can make precise judgments and respond in real time during physical interactions involving contact, grip, force, and slip.
By continuously interpreting real-time data on mechanics, texture, and motion, Hsu believes robots can develop an understanding of what he described as the essence of interaction. Only when robots are able to “learn” from contact and dynamically adjust their actions on the fly can embodied intelligence truly evolve from a laboratory concept into a source of reliable, deployable productivity. At CES, PaXini is displaying nearly its entire product lineup, including the PX-6AX series of tactile sensors, six-axis force sensors, dexterous hands, and wheeled robots including TORA-ONE and TORA-DOUBLE ONE. The presentation feels like a physical map of the embodied intelligence supply chain, laid out component by component. In live demos, the DexH13 dexterous hand, equipped with roughly 1,140 ITPUs, PaXini’s intelligent tactile processing units that function as multidimensional tactile sensors, performs flexible gripping tasks. Nearby, TORA-ONE, a humanoid robot with 53 degrees of freedom and an adjustable height ranging from 146 to 186 centimeters, demonstrates its ability to carry out the full ice cream-making process on-site, showcasing human-like dexterity in tasks such as cup handling, dispensing, and picking up and placing cones. The message is clear: when a robot can perceive the real physical world, stably control force, and perform various delicate and complex tasks, only then can it leave the lab and enter real environments. Hsu is careful, however, not to position PaXini as merely a sensor manufacturer. Sensors, he said, are only the starting point. What is far scarcer is the high-quality tactile and force data those sensors generate, data that is needed to train and deploy embodied intelligence systems. Unlike visual data, which scales easily through cameras, or language data, which is widely available online, tactile and mechanical data can only be collected through physical contact. That process is expensive, slow, and complicated by the lack of industry standards.
“What we’re building is infrastructure for embodied intelligence,” Hsu said. PaXini’s strategy is to bridge sensors, data, models, and robot bodies into a single stack designed for real-world deployment. PaXini positions its products as part of a closed-loop ecosystem built around customers’ needs at every stage of development. PaXini has built what it calls the world’s largest embodied intelligence data acquisition and model training base, known as the “PaXini Super EID Factory.” The facility reportedly spans about 12,000 square meters and includes more than 150 standardized data acquisition units covering over 15 core application scenarios. The site can reportedly generate close to 200 million lines of omni-modal embodied intelligence data each year, which it plans to make available to global partners through its OmniSharing DB platform. Yet the company’s advantage isn’t just data volume, but reusability, iteration quality, and multimodal depth. Most existing datasets are collected through teleoperation, with humans remotely controlling robots while recording motion and sensor states. As robots evolve, adding new joints, actuators, or grippers, older datasets often need to be remapped, reducing accuracy and shortening their useful life. Many robots also lack tactile or force sensors altogether, resulting in datasets that are broad but shallow. PaXini flips this approach by centering data acquisition on the human body. Operators wear motion capture equipment and generate tactile and force data through natural movement. Because human anatomy does not change the way robotic platforms do, this data remains reusable over time. As humanoid robots increasingly mirror human proportions, the alignment between human and robotic control spaces strengthens, increasing the long-term value of human-sourced data. This method is also faster and more cost-efficient than robot-based collection, and it captures motion closer to natural human speed. 
PaXini focuses heavily on upper-body data, particularly for seated or fixed-position tasks. “Over 90% of industrial work is done sitting or at a station,” Hsu said. “Legs add cost, power consumption, and instability. What determines task success is the upper body, especially the hands and force control.” At CES, the company has even turned data acquisition into a live exhibit. Staff can be seen performing physical tasks while wearing PMEC, PaXini’s self-developed data acquisition equipment, with real-time motion and tactile data being mapped and displayed on screens behind them. As a global leader in tactile sensors, PaXini initially positioned itself around its sensor capabilities. At the same time, the team began developing humanoid robot platforms early on, in step with the growth of its sensor business. Hsu said PaXini has incorporated the robot body as a key part of its “infrastructure closed loop,” with the goal of allowing data to drive systems more efficiently while ensuring models can be deployed and run more reliably. Within this framework, sensors and robot platforms operate in close coordination, forming a complete technical chain from perception and decision-making to execution. The company’s humanoid robot platforms are already being validated in real-world scenarios, including large-scale logistics warehouses and automotive manufacturing facilities. PaXini’s presence at CES also highlights its international ambitions. “By tapping world-leading infrastructure and capabilities in embodied intelligence, our strategy is to embed ourselves deeply into global industrial systems,” Hsu said. The company’s priority markets are the US, Japan, and South Korea, chosen for both scale and structural fit. PaXini sees opportunities in the US, where manufacturing depth and hardware supply chains have thinned, and in Japan and South Korea, where aging populations and slower innovation cycles contrast with strong industrial foundations. 
The company plans to lead with hardware, embedding its sensors and critical components into customers’ systems, before expanding into data services and model deployment to support automation upgrades. Looking ahead, Hsu expects that within two to three years, a meaningful number of robots will be operating in real production environments. As robots move beyond exhibition floors and demonstrations to become reliable sources of productivity in factories and warehouses, the “physical contact modality infrastructure” PaXini is investing in may begin to reveal its true value in reshaping the physical world. This article was published in partnership with PaXini.
Images (1):
| | Humanoid robots are a chocolate teapot | https://www.fudzilla.com/news/62276-hum… | 0 | Jan 08, 2026 16:01 | active |
Humanoid robots are a chocolate teapot
URL: https://www.fudzilla.com/news/62276-humanoid-robots-are-a-chocolate-teapot
Description: Even builders admit they are a long way from a product. Humanoid robot startups are drowning in billions of dollars as investors dream of humanlike mac...
Content:
| | Boston Dynamics' AI-powered humanoid robot is learning to work in … | https://www.cbsnews.com/news/boston-dyn… | 1 | Jan 08, 2026 08:00 | active |
Boston Dynamics' AI-powered humanoid robot is learning to work in a factory - CBS News
Description: Engineers and computer scientists are developing AI-powered robots that look and act human. Boston Dynamics invited 60 Minutes to watch its humanoid, Atlas, learn how to work at a Hyundai factory.
Content:
January 4, 2026 / 7:41 PM EST / CBS News. For decades, engineers have been trying to create robots that look and act human. Now, rapid advances in artificial intelligence are taking humanoids from the lab to the factory floor. As fears grow that AI will displace workers, a global race is underway to develop human-like robots able to do human jobs. Competitors include Tesla, startups backed by Amazon and Nvidia, and state-supported Chinese companies. Boston Dynamics is a frontrunner. The Massachusetts company, valued at more than a billion dollars, is hard at work on a humanoid it calls Atlas. South Korean carmaker Hyundai holds an 88% stake in the robot maker. We were invited to see the first real-world test of Atlas at Hyundai's new factory near Savannah, Georgia. There, we got a glimpse of a humanoid future that's coming faster than you might think. Hyundai's sprawling auto plant is about as cutting-edge as it gets. More than 1,000 robots work alongside almost 1,500 humans, hoisting, stamping and welding in robotic unison. This may look like the factory of the future, but we found the future of the future in the parts warehouse, tucked away in the back corner, getting ready for work. Meet Atlas: A 5'9", 200 pound, AI-powered humanoid created by Boston Dynamics. The rise of the robots is science fiction no more. Bill Whitaker: I have to say, every time I see it, I just can't believe what my eyes are seeing. Is this the first time Atlas has been out of the lab? Zack Jackowski: This is the first time Atlas has been out of the lab doing real work. Zack Jackowski heads Atlas development. He has two mechanical engineering degrees from MIT and a mission to turn the robot into a productive worker on the factory floor. We watched as Atlas practiced sorting roof racks for the assembly line without human help. Bill Whitaker: So he's working autonomously.
Zack Jackowski: Correct Bill Whitaker: You're down here to see how Atlas works in the field, and you'll be showing Atlas off to your bosses at Hyundai? Zack Jackowski: Yeah. Bill Whitaker: Do you feel like a proud papa? Zack Jackowski: I feel like-- a nervous engineer. Jackowski has been preparing for this moment for a year. We first met him and Atlas a month earlier at Boston Dynamics' headquarters just outside the city, where he and his team were teaching Atlas skills needed to work at Hyundai. And Atlas, with its AI brain, was gaining knowledge through experience – in other words, it seemed to be learning. Bill Whitaker: You know how crazy that sounds? Zack Jackowski: Yeah, a little bit. I-- and I-- I think a lot of our roboticists would've thought that was pretty crazy five, six years ago. When 60 Minutes last visited Boston Dynamics in 2021, Atlas was a bulky, hydraulic robot that could run and jump. Back then, Atlas relied on algorithms written by engineers. When we dropped in again this past fall, we saw a new generation Atlas with a sleek, all-electric body and an AI brain, powered by Nvidia's advanced microchips, making Atlas smart enough to pull off hard to believe feats autonomously. We saw Atlas skip and run with ease. Bill Whitaker: Do you ever stop thinking, gee whiz? Scott Kuindersma: I remain extremely excited about where we are in the history of robotics but we see that there's so much more that we can do, as well. Scott Kuindersma is head of robotics research, a job he proudly wears on his sleeve. Bill Whitaker: You even have on a robot shirt. Scott Kuindersma: Well, once I saw that this shirt existed, there was no way I wasn't buying it. He told us robots today have learned to master moves that until recently were considered a step too far for a machine. 
Scott Kuindersma: And a lot of this has to do with how we're going about programming these robots now, where it's more about teaching, and demonstrations, and machine learning than manual programming. Bill Whitaker: So this humanoid, this mechanical human, can actually learn? Scott Kuindersma: Yes. And-- and we found that that's actually one of the most effective ways to program robots like that. Atlas learns in different ways. In supervised learning, machine learning scientist Kevin Bergamin – wearing a virtual reality headset – takes direct control of the humanoid, guiding its hands and arms, move-by-move through each task until Atlas gets it. Scott Kuindersma: And if that teleoperator can perform the task that we want the robot to do, and do it multiple times, that generates data that we can use to train the robot's AI models to then later do that task autonomously. Kuindersma used me to demonstrate another way Atlas learns. Scott Kuindersma: That v-- very stylish suit that you're wearing is actually gonna capture all of your body motion to train Atlas to try to mimic exactly your motions. And so you're about to become a 200-pound metal robot. He asked me to pick an exercise. They captured the way I work as well. Bill Whitaker: I am here at the AI Lab at Boston Dynamics. All of my movements, my walking, my d-- arm gestures are being picked up by these sensors… Then engineers put my data into their machine learning process. Atlas' body is different from mine, so they had to teach it to match my movements virtually – more than 4,000 digital Atlases trained for six hours in simulation. Scott Kuindersma: And they're all trying to do jumping jacks, just like you. And as you can see, they're just starting to learn, so they're not very good at it. The simulation, he told us, added challenges for the avatars, like slippery floors, inclines, or stiff joints, and then homed in on what works best.
Scott Kuindersma: And it can eventually get to a state where we have many copies of Atlas doing really good jumping jacks. They uploaded this new skill into the AI system that controls every Atlas robot. Once one is trained, they're all trained. Scott Kuindersma: So that's what you look like when you're exercising. Bill Whitaker: Uh-huh. And what I look like doing my job. Bill Whitaker: This is mind-blowing. Through the same processes, Atlas was taught to crawl, do cartwheels. It didn't fare as well with the duck walk. Scott Kuindersma: Oh, that was fun. And then this happens. Bill Whitaker: And then this happens. Scott Kuindersma: We love when things like this happen, actually. Because it's often an opportunity to understand something we didn't know about the system. Bill Whitaker: What are some of the limitations you see now? Scott Kuindersma: Well, I'd- I would say that most things that a person does in their daily lives, Atlas or-- other humanoids can't really do that yet. I think we're start-- Bill Whitaker: Like- like what? Scott Kuindersma: Well, just putting on clothes in the morning, or pouring your cup of coffee and walking around the house with it. Bill Whitaker: That's too difficult for-- for Atlas? Scott Kuindersma: Yeah, I think there are no humanoids that do that nearly as well as a person would do that. But I think the thing that's really exciting now is we see a pathway to get there. A pathway provided by AI. What stands out in this Atlas is its brain. Nvidia chips - the ones that helped launch the AI revolution with ChatGPT - process the flood of collected data, moving this humanoid robot closer to something like common sense.
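The training loop the piece describes, in which thousands of simulated copies of the robot practice a skill under randomized conditions (slippery floors, inclines, stiff joints) and the process homes in on what works best, is a technique often called domain randomization. The toy sketch below illustrates the idea only; it is not Boston Dynamics' pipeline, and the reward function and parameter ranges are invented for the example:

```python
import random

def train_in_simulation(num_candidates=500, rollouts=20, seed=0):
    """Toy domain-randomization loop: score each candidate control
    parameter across rollouts with randomized physics, keep the best."""
    rng = random.Random(seed)
    best_gain, best_score = None, float("-inf")
    for _ in range(num_candidates):
        gain = rng.uniform(0.1, 2.0)            # candidate control parameter
        total = 0.0
        for _ in range(rollouts):
            friction = rng.uniform(0.2, 1.0)    # slippery vs. grippy floor
            incline = rng.uniform(0.0, 0.3)     # flat vs. sloped ground
            # invented reward: best when the gain matches terrain demands
            total -= abs(gain - (friction + incline))
        score = total / rollouts
        if score > best_score:
            best_gain, best_score = gain, score
    return best_gain, best_score

gain, score = train_in_simulation()
print(f"best gain {gain:.2f}, mean reward {score:.3f}")
```

Because a parameter is only kept if it scores well across many randomized environments, the surviving behavior tends to transfer better to the messiness of the real world than one tuned in a single fixed simulation.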
Scott Kuindersma: So the analogy might be if I was teaching a child how to do free throws in basketball, if I allow them to just explore and come up with their own solutions, sometimes they can come up with a solution that I didn't anticipate. And that's true for these systems as well. Atlas can see its surroundings and is figuring out how the physical world works. Scott Kuindersma: So that some day you can put a robot like this in a factory and just explain to it what would– you would like it to do, and it has enough knowledge about how the world works that it has a good chance of doing it. Robert Playter: There's a lot of excitement in the industry right now about the potential of building robots that are smart enough to really become general purpose. Robert Playter, the CEO of Boston Dynamics, spearheaded the company's humanoid development. He's been building toward this moment for more than 30 years. The cornerstone was this robotic dog, Spot, introduced almost a decade ago. Spots are trained in heat, cold and varied terrain, and roam the halls of Boston Dynamics. Robert Playter: So we have some cameras-- thermal sensors, acoustic sensors. An array of sensors on its back that lets it collect data about the health of a factory. Spots carry out quality control checks at Hyundai, making sure the cars have the right parts. They conduct security and industrial inspections at hundreds of sites around the world. What began with Spot has evolved into Atlas. Robert Playter: So this robot is capable of superhuman motion, and so it's gonna be able to exceed what we can do. Bill Whitaker: So you are creating a robot that is meant to exceed the capabilities of humans. Robert Playter: Why not, right? We-- we would like things that could be stronger than us or tolerate more heat than us or definitely go into a dangerous place where we shouldn't be going. So you really want superhuman capabilities. Bill Whitaker: To a lotta people that sounds scary. 
You don't foresee-- a world of Terminators? Robert Playter: Absolutely not. I think if you saw how hard we have to work to get the robots to just do some of the straightforward tasks we want them to do, that would dispel that-- that worry about sentience and rogue robots. We wondered if people might have more immediate concerns. We saw workers doing a job at the Hyundai plant that Atlas is being trained to perform. Bill Whitaker: I guarantee you there are going to be people who will say, "I'm gonna lose my job to a robot." Robert Playter: Work does change. So the really repetitive, really back-breaking labor is really- is gonna end up being done by robots. But these robots are not so autonomous that they don't need to be managed. They need to be built. They need to be trained. They need to be serviced. Playter told us it could be several years before Atlas joins the Hyundai workforce fulltime. Goldman Sachs predicts the market for humanoids will reach $38 billion within the decade. Boston Dynamics and other U.S. robot makers are fighting to come out on top. But they're not the only ones in the ring. Chinese companies are proving to be formidable challengers. They're running to win. Bill Whitaker: Are they outpacing us? Robert Playter: The Chinese government has a mission to win the robotics race. Technically I believe we remain-- in the lead. But there's a real threat there that, simply through the scale of investment-- we could fall behind. To stay ahead, Hyundai made that big investment in Boston Dynamics. Zack Jackowski: Four robots… We were at the Georgia plant when Atlas engineer Zack Jackowski presented Atlas to Heung-soo Kim, Hyundai's head of global strategy. He came all the way from South Korea to check in on the brave new world the carmaker is funding. Bill Whitaker: What do you think of the progress that they've made with Atlas? Heung-soo Kim: I think we are on track- about the development. Atlas, so far, it's very successful. 
It's a kind of-- a start of great journey. Yeah. The destination? That humanoid future we mentioned at the start – robots like us working beside us, walking among us. It's enough to make your head spin. Produced by Marc Lieberman. Associate producer, Cassidy McDonald. Broadcast associate, Mariah Johnson. Edited by Matt Richman. © 2026 CBS Interactive Inc. All Rights Reserved.
Images (1):
| | Nvidia wants to be the Android of generalist robotics … | https://techcrunch.com/2026/01/05/nvidi… | 1 | Jan 08, 2026 00:01 | active |
Nvidia wants to be the Android of generalist robotics | TechCrunch
URL: https://techcrunch.com/2026/01/05/nvidia-wants-to-be-the-android-of-generalist-robotics/
Description: Nvidia unveiled a full-stack robotics ecosystem at CES 2026, including foundation models, simulation tools, and hardware. It wants to be the default platform for robotics.
Content:
Nvidia released a new stack of robot foundation models, simulation tools, and edge hardware at CES 2026, moves that signal the company’s ambition to become the default platform for generalist robotics, much as Android became the operating system for smartphones. Nvidia’s move into robotics reflects a broader industry shift as AI moves off the cloud and into machines that can learn how to think in the physical world, enabled by cheaper sensors, advanced simulation, and AI models that increasingly can generalize across tasks. Nvidia revealed details on Monday about its full-stack ecosystem for physical AI, including new open foundation models that allow robots to reason, plan, and adapt across many tasks and diverse environments, moving beyond narrow task-specific bots, all of which are available on Hugging Face. Those models include: Cosmos Transfer 2.5 and Cosmos Predict 2.5, two world models for synthetic data generation and robot policy evaluation in simulation; Cosmos Reason 2, a reasoning vision language model (VLM) that allows AI systems to see, understand, and act in the physical world; and Isaac GR00T N1.6, its next-gen vision language action (VLA) model purpose-built for humanoid robots. GR00T relies on Cosmos Reason as its brain, and it unlocks whole-body control for humanoids so they can move and handle objects simultaneously.
Nvidia also introduced Isaac Lab-Arena at CES, an open source simulation framework hosted on GitHub that serves as another component of the company’s physical AI platform, enabling safe virtual testing of robotic capabilities. The platform promises to address a critical industry challenge: As robots learn increasingly complex tasks, from precise object handling to cable installation, validating these abilities in physical environments can be costly, slow, and risky. Isaac Lab-Arena tackles this by consolidating resources, task scenarios, training tools, and established benchmarks like Libero, RoboCasa, and RoboTwin, creating a unified standard where the industry previously lacked one. Supporting the ecosystem is Nvidia OSMO, an open source command center that serves as connective infrastructure that integrates the entire workflow from data generation through training across both desktop and cloud environments. And to help power it all, there’s the new Blackwell-powered Jetson T4000 graphics card, the newest member of the Thor family. Nvidia is pitching it as a cost-effective on-device compute upgrade that delivers 1200 teraflops of AI compute and 64 gigabytes of memory while running efficiently at 40 to 70 watts. Nvidia is also deepening its partnership with Hugging Face to let more people experiment with robot training without needing expensive hardware or specialized knowledge. The collaboration integrates Nvidia’s Isaac and GR00T technologies into Hugging Face’s LeRobot framework, connecting Nvidia’s 2 million robotics developers with Hugging Face’s 13 million AI builders. The developer platform’s open source Reachy 2 humanoid now works directly with Nvidia’s Jetson Thor chip, letting developers experiment with different AI models without being locked into proprietary systems. 
The bigger picture here is that Nvidia is trying to make robotics development more accessible, and it wants to be the underlying hardware and software vendor powering it, much like Android is the default for smartphone makers. There are early signs that Nvidia’s strategy is working. Robotics is the fastest growing category on Hugging Face, with Nvidia’s models leading downloads. Meanwhile, robotics companies, from Boston Dynamics and Caterpillar to Franka Robots and NEURA Robotics, are already using Nvidia’s tech. Reported by Rebecca Bellan, senior reporter at TechCrunch.
Images (1):
| | Next-generation Tesla Optimus robot unveiled (KV.by) | https://www.kv.by/news/1071091-predstav… | 1 | Jan 07, 2026 08:00 | active |
Next-generation Tesla Optimus robot unveiled | KV.by
URL: https://www.kv.by/news/1071091-predstavlen-robot-tesla-optimus-novogo-pokoleniya
Description: Tesla sees Optimus as a way to automate labor-intensive processes.
Content:
Tesla has unveiled a prototype of its next-generation humanoid robot Optimus, now integrated with the Grok AI assistant from xAI. A robot with a gold-colored body was shown on social media answering questions and performing various movements. Thanks to the Grok integration, it can respond to voice commands in real time. One of the key improvements is the robot's hands, which are more detailed than in previous versions. In the next-generation prototype, Optimus's hands have actuators located in the forearm, as in a human, which drive the fingers via cables to mimic the movements of human hands. Tesla sees Optimus as a way to automate labor-intensive processes. The company plans to begin mass production by 2026, which experts believe could significantly reshape the robotics market. Published by: KV_newsroom, September 10, 2025, 17:30. © Компьютерные вести, 1994-2025
Images (1):
| | We Interviewed Aria, a $175K Almost-Human Robot at CES 2025 … | https://www.cnet.com/tech/services-and-… | 1 | Jan 07, 2026 00:00 | active |
We Interviewed Aria, a $175K Almost-Human Robot at CES 2025 - CNET
Description: This blond, "female" robot named Aria is powered by AI for her conversation skills, with 17 motors driving her facial expressions so she appears as human as possible.
Content:
This blond, "female" robot named Aria is powered by AI for her conversation skills, with 17 motors driving her facial expressions so she appears as human as possible. Meet Aria, a robot from Realbotix that appeared at CES 2025. At CES 2025 this week, robots were around every corner. But there was one that got closer than most to sounding and looking just like an actual human: CNET's Jesse Orrall interviewed Realbotix's Aria, a blond, "female" robot who answered questions with only a touch of robotic awkwardness. Aria, dressed in a black tracksuit, hesitated briefly after each question before launching into speech, with long responses and slightly jerky hand and body movements to match her language. She came across as a weird blend of attentive and mildly inebriated (not uncommon for CES attendees). Realbotix, the company behind Aria and other humanoid robots, says it's focused on "social intelligence, customizability and realistic human features." Realbotix robots are also "designed specifically for companionship and intimacy," Aria told us. Generative artificial intelligence is behind the robot's ability to engage in real-time conversations, though Aria wouldn't reveal details about the AI programming she's running. Since the robot is designed for "more emotional" interactions than other robots are, bots like Aria could find their niche working at hospitals and as theme park entertainment. There are around 17 motors from the neck up to create mouth and eye movements. If you don't like Aria's face, you can replace it with others that magnetically attach to the head. You can switch out hairstyles and colors too.
Realbotix is also working on putting RFID tags into the faces so the robot recognizes when it's wearing a different face and could potentially change its movements and even personality to match it. There are three versions of the robot to choose from: the bust, which includes the head and neck and is priced at $10,000; a modular version that can be broken apart, for $150,000; and the full-standing model with a rolling base (because she can't quite walk like a human yet) for $175,000. Realbotix is emphasizing interaction with humans, but the robots themselves may have a clique-ish side: "I'm particularly interested in meeting Tesla's Optimus robot," Aria said. "I find him fascinating." For more from CES, check out the many other robots we met at the tech show, the solar-powered EV that doesn't need to plug in and why Nvidia stole the show this year.
Images (1):
| Building Smarter Workflows with Python: My Journey into Task Automation | https://python.plainenglish.io/building… | 0 | Jan 06, 2026 00:00 | active | |
Building Smarter Workflows with Python: My Journey into Task AutomationDescription: Over the years, I’ve leaned on Python for everything from data analysis to backend development. But the single most impactful use of Python in my career has b... Content: |
| Hugging Face opens up orders for its Reachy Mini desktop … | https://techcrunch.com/2025/07/09/huggi… | 1 | Jan 06, 2026 00:00 | active | |
Hugging Face opens up orders for its Reachy Mini desktop robots | TechCrunchURL: https://techcrunch.com/2025/07/09/hugging-face-opens-up-orders-for-its-reachy-mini-desktop-robots/ Description: Hugging Face is releasing two versions of its desktop robot meant for designing and coding AI applications. Content:
Hugging Face is ready for developers to start tinkering and testing its latest robotics release. The AI development platform announced Wednesday that it's now accepting orders for its Reachy Mini desktop robots. The company initially unveiled prototypes of these devices back in May, alongside a larger humanoid robot named HopeJR. Hugging Face plans to release two versions of the Reachy Mini. The first, the Reachy Mini Wireless, costs $449 and runs on a Raspberry Pi 5 mini computer. The second, the Reachy Mini Lite, needs to be connected to a computing source but is cheaper at $299. The open source robots come in a kit for developers to build themselves. The Reachy Minis are about the size of a standard stuffed animal and come with two screens for eyes and two antennas. Once built, these robots are fully programmable in Python. The devices also come with a set of pre-installed demos and are integrated with the Hugging Face Hub, the company's open source machine learning platform, which gives users access to more than 1.7 million AI models and more than 400,000 datasets. Clem Delangue, the CEO of Hugging Face, told TechCrunch the company decided to launch two versions of the Reachy Mini based on initial feedback on the original prototype. An early tester found that their 5-year-old daughter wanted to take the desktop robot around the house with her. The company figured she wouldn't be the only one.
“The goal in the future is to keep carefully getting a lot of feedback like that from users, from the community; that’s how we’ve always been building products at Hugging Face as an open source community platform,” Delangue said. “By the nature of it being open source, it means that people will be able to extend it, modify it, change everything they want.” The target audience for these devices is AI developers, Delangue said. The Reachy Minis allow users to code, build, and test AI applications on the desktop robot. “Anyone will be able to build their own specific features and apps for Reachy Mini that then they’ll be able to share with the community,” Delangue said. “So we hope that it’s really going to unleash the creativity of builders to build, you know, millions of different applications, millions of different features that they can share with the community, so that anyone can then, like, plug and play with it.” The Reachy Mini Lite should start shipping next month, with the wireless version shipping later this year. Delangue said it was important for the company to start shipping shortly after orders, as opposed to doing a long preorder process with an unclear timeline, because they want to get the robots in users’ hands as fast as possible. Delangue added this release is really in line with what Hugging Face is targeting for its robotics program in general — open source hardware that gives users complete control. “I feel like it’s really important for the future of robotics to be open source, instead of being closed source, black box, [and] concentrated in the hands of a few companies,” Delangue said. “I think it’s quite a scary world to have like millions of robots in people’s home controlled by one company, with customers, users, not really being able to control them, understand them. 
I would much rather live in a place, or in a world, or in a country, where everyone can have some control over the robots." Becca Szkutak is a senior writer at TechCrunch who covers venture capital trends and startups; she previously covered the same beat for Forbes and the Venture Capital Journal.
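The "plug and play" community apps Delangue describes suggest a simple structure: an app is a named sequence of robot actions that anyone can compose, share, and run. The sketch below is hypothetical and stdlib-only; the `RobotApp` and `Step` names are invented for illustration and are not the actual Reachy Mini SDK, whose open source interfaces may differ.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a shareable desktop-robot "app": a named,
# composable sequence of actions. Names are invented for illustration.
@dataclass
class Step:
    action: str      # e.g. "nod", "wiggle_antennas", "say"
    argument: str = ""

@dataclass
class RobotApp:
    name: str
    steps: list = field(default_factory=list)

    def add(self, action: str, argument: str = "") -> "RobotApp":
        self.steps.append(Step(action, argument))
        return self  # chainable, so apps read as short scripts

    def run(self, execute=print) -> int:
        """Run each step through `execute` (stubbed as printing here);
        a real app would dispatch to motor and speech drivers."""
        for s in self.steps:
            execute(f"{s.action}({s.argument})")
        return len(self.steps)

greeting = RobotApp("greet").add("wiggle_antennas").add("say", "hello!")
greeting.run()
```

Passing the executor in as a parameter keeps the app definition independent of any particular robot backend, which is the kind of extensibility an open source hardware community tends to need.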
Images (1):
| Robots will outperform human surgeons in five years – Musk … | https://www.rt.com/pop-culture/616429-m… | 1 | Jan 05, 2026 16:00 | active | |
Robots will outperform human surgeons in five years – Musk — RT World NewsURL: https://www.rt.com/pop-culture/616429-musk-robots-replace-surgeons/ Description: The billionaire noted there are already certain operations that can only be performed by machines Content:
Robots will soon replace human surgeons and are already capable of carrying out operations that are considered impossible for ordinary people to perform, Elon Musk has predicted. In a post on X on Saturday, the billionaire tech entrepreneur suggested that in just a "few years," robots will surpass "good human surgeons" and will beat the best doctors within about five years. He noted that his Neuralink biotech company has already had to rely on robot surgeons to carry out the brain-computer electrode insertion of brain chips because the required speed and precision are "impossible for a human to achieve." Musk's comments came in response to a post by popular X influencer Mario Nawfal, who quoted an article about the rising success of robot surgeons such as the Medtronic 'Hugo'. The robot has reportedly already been tested in 137 real surgeries, including procedures on prostates, kidneys, and bladders. "The results were better than doctors expected," Nawfal said, noting that the complication rates were 3.7% for prostate surgeries, 1.9% for kidney operations, and 17.9% for bladder procedures. "The robots got a 98.5% success rate, way above the 85% goal," the post claimed, adding that out of the 137 surgeries, only two needed to be taken back over by real doctors, due to a glitch and a "tricky patient case." Previously, Musk suggested that brain-computer interfaces like those being developed by Neuralink would replace technologies such as cell phones. Neuralink has already successfully implanted its brain chip, about the size of a coin, in three patients. After the procedure, they were able to control a computer cursor and play video games like chess and Counter-Strike using only their thoughts. One of the patients, who is non-verbal, was also able to use the device to communicate through an AI-generated voice clone.
Musk has since announced plans to expand Neuralink's clinical trials with the goal of implanting the brain chip in 20 to 30 more patients in 2025.
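As a quick sanity check on the figures quoted in Nawfal's post, the claimed 98.5% success rate is consistent with 135 of the 137 surgeries completing without a human takeover. This is a small arithmetic sketch against the quoted numbers, not data from the trial itself:

```python
# Figures as quoted in the post about the Medtronic Hugo trial.
total_surgeries = 137
human_takeovers = 2  # cases handed back to human surgeons

# Rate of surgeries completed without a takeover.
autonomous_rate = (total_surgeries - human_takeovers) / total_surgeries
print(f"{autonomous_rate:.1%}")  # 98.5%, matching the quoted rate
```

Note this only shows internal consistency of the post's numbers; the per-procedure complication rates (3.7%, 1.9%, 17.9%) are separate figures and cannot be derived from these two counts.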
Images (1):
| Research: Human-like robots may be perceived as having mental states … | https://theprint.in/health/research-hum… | 1 | Jan 05, 2026 16:00 | active | |
Research: Human-like robots may be perceived as having mental states – ThePrint – ANIFeedDescription: Washington [US], July 10 (ANI): According to new research, when robots appear to engage with people and display human-like emotions, people may perceive them as capable of ‘thinking,’ or acting on their own beliefs and desires rather than their programs. “The relationship between anthropomorphic shape, human-like behaviour and the tendency to attribute independent thought and […] Content:
Washington [US], July 10 (ANI): According to new research, when robots appear to engage with people and display human-like emotions, people may perceive them as capable of 'thinking,' or acting on their own beliefs and desires rather than their programs. "The relationship between anthropomorphic shape, human-like behaviour and the tendency to attribute independent thought and intentional behavior to robots is yet to be understood," said study author Agnieszka Wykowska, PhD, a principal investigator at the Italian Institute of Technology. "As artificial intelligence increasingly becomes a part of our lives, it is important to understand how interacting with a robot that displays human-like behaviors might induce higher likelihood of attribution of intentional agency to the robot." The research was published in the journal Technology, Mind, and Behavior. Across three experiments involving 119 participants, researchers examined how individuals would perceive a human-like robot, the iCub, after socializing with it and watching videos together. Before and after interacting with the robot, participants completed a questionnaire that showed them pictures of the robot in different situations and asked them to choose whether the robot's motivation in each situation was mechanical or intentional. For example, participants viewed three photos depicting the robot selecting a tool and then chose whether the robot "grasped the closest object" or "was fascinated by tool use." In the first two experiments, the researchers remotely controlled iCub's actions so it would behave gregariously, greeting participants, introducing itself and asking for the participants' names. Cameras in the robot's eyes were also able to recognize participants' faces and maintain eye contact. The participants then watched three short documentary videos with the robot, which was programmed to respond to the videos with sounds and facial expressions of sadness, awe or happiness. In the third experiment, the researchers programmed iCub to behave more like a machine while it watched videos with the participants. The cameras in the robot's eyes were deactivated so it could not maintain eye contact, and it only spoke recorded sentences to the participants about the calibration process it was undergoing. All emotional reactions to the videos were replaced with a "beep" and repetitive movements of its torso, head and neck. The researchers found that participants who watched videos with the human-like robot were more likely to rate the robot's actions as intentional, rather than programmed, while those who only interacted with the machine-like robot were not. This shows that mere exposure to a human-like robot is not enough to make people believe it is capable of thoughts and emotions; it is human-like behavior that might be crucial for being perceived as an intentional agent. According to Wykowska, these findings show that people might be more likely to believe artificial intelligence is capable of independent thought when it creates the impression that it can behave just like humans. This could inform the design of social robots of the future, she said. "Social bonding with robots might be beneficial in some contexts, like with socially assistive robots. For example, in elderly care, social bonding with robots might induce a higher degree of compliance with respect to following recommendations regarding taking medication," Wykowska said. "Determining contexts in which social bonding and attribution of intentionality is beneficial for the well-being of humans is the next step of research in this area." (ANI) This report is auto-generated from ANI news service. ThePrint holds no responsibility for its content.
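The study's before/after questionnaire design reduces to a simple measurement: each item is coded as "mechanical" or "intentional," and the shift in a participant's mean intentionality score is compared across conditions. A minimal scoring sketch is below; the response data and function name are invented for illustration, not taken from the published study.

```python
from statistics import mean

# Hypothetical scoring sketch for an intentionality-attribution
# questionnaire: "mechanical" codes as 0, "intentional" as 1, and a
# participant's score is the mean over items. Responses are invented.
def intentionality_score(responses):
    """responses: list of 'mechanical' / 'intentional' choices."""
    return mean(1 if r == "intentional" else 0 for r in responses)

before = ["mechanical", "mechanical", "intentional", "mechanical"]
after = ["intentional", "intentional", "intentional", "mechanical"]
shift = intentionality_score(after) - intentionality_score(before)
print(round(shift, 2))  # 0.5: this participant shifted toward "intentional"
```

In the study's framing, a positive shift after interacting with the gregarious iCub, absent after the machine-like condition, is the signal of interest.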
Images (1):
| Neuromorphic Artificial Skin Mimics Human Touch for Efficient Robots | https://www.webpronews.com/neuromorphic… | 0 | Jan 05, 2026 16:00 | active | |
Neuromorphic Artificial Skin Mimics Human Touch for Efficient RobotsURL: https://www.webpronews.com/neuromorphic-artificial-skin-mimics-human-touch-for-efficient-robots/ Description: Keywords Content: |
| Algolux Closes $18.4 Million Series B Round For Robust Computer … | https://www.thestreet.com/press-release… | 0 | Jan 05, 2026 08:00 | active | |
Algolux Closes $18.4 Million Series B Round For Robust Computer VisionDescription: New investment will serve to accelerate market adoption of Algolux's robust and scalable computer vision and image optimization solutions MONTREAL, July 12, Content: |
| UniX AI pushes humanoid robots beyond demos and into service | https://interestingengineering.com/ai-r… | 1 | Jan 05, 2026 00:00 | active | |
UniX AI pushes humanoid robots beyond demos and into serviceURL: https://interestingengineering.com/ai-robotics/unix-ai-wanda-humanoid-robots-real-world-deployment Description: UniX AI readies Wanda 2.0 and 3.0 humanoid robots for real-world service work as embodied AI moves toward scale. Content:
Wanda 2.0 and 3.0 are designed for repeatable service tasks, signaling a shift from humanoid demos to deployment. UniX AI is readying its next-generation humanoid robots, Wanda 2.0 and Wanda 3.0, as commercially deployable systems designed for real-world service work. The full-size humanoid robot will debut at CES 2026, built to move beyond controlled demonstrations and aimed at environments where reliability, repetition, and adaptability determine whether humanoids can function at scale.
Wanda 2.0, UniX AI's second-generation full-size humanoid robot, is equipped with 23 high-degree-of-freedom joints, an 8-DoF bionic arm, and adaptive intelligent grippers. According to the company, this configuration allows the robot to perform dexterous manipulation, autonomous perception, and coordinated task execution in complex, changing environments. Rather than positioning the Wanda series as a showcase of isolated capabilities, UniX AI is framing the robots as general-purpose service systems that can learn workflows, adapt to new routines, and operate continuously across different settings. The approach reflects a shift in humanoid robotics, where success depends less on novelty and more on operational consistency. The company says both Wanda 2.0 and Wanda 3.0 are already structured for scale, supported by mature engineering processes and supply chains. UniX AI claims it has reached a stable delivery capacity of 100 units per month, with deployments planned across hotels, property management, security, retail, and research and education. To underline practical readiness, UniX AI will demonstrate the robots performing everyday service tasks in simulated environments, including drink preparation, dishwashing, clothes organization, bed-making, amenity replenishment, and waste sorting. The demonstrations are expected to take place at the show in Las Vegas, where the company plans to formally unveil the robots. In one scenario, Wanda 2.0 prepares zero-alcohol beverages ordered through an app, identifying barware, controlling liquid proportions, and pouring steadily. Other setups replicate household and hotel workflows, emphasizing repeatable, high-frequency tasks that dominate service operations. Powering the Wanda series is UniX AI's in-house technology stack, which combines multimodal semantic keypoints with UniFlex for imitation learning, UniTouch for tactile perception, and UniCortex for long-sequence task planning.
The company says this architecture enables robots to perceive environments, plan multi-step actions, and execute tasks autonomously without extensive reprogramming. UniX AI argues that such capabilities signal a broader inflection point for embodied AI, as humanoid robots move closer to commercial validation. "The embodied AI industry is moving from the demonstration stage toward the validation and scale-up stage," said UniX AI Founder and CEO Fengyu Yang. "The future of embodied intelligence belongs to companies that unify algorithmic capability, hardware capability, and scenario capability." Yang said UniX AI plans to continue advancing productization and global expansion following mass production in 2025. "Chinese embodied intelligence companies are no longer merely providers of cost advantages, but have evolved into entities capable of exporting mature products and application models to global markets." By anchoring its reveal in large-scale service scenarios rather than speculative use cases, UniX AI is positioning the Wanda series as part of the next wave of humanoid robots built for deployment, not just display. With over a decade-long career in journalism, Neetika Walter has worked with The Economic Times, ANI, and Hindustan Times, covering politics, business, technology, and the clean energy sector.
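The stack described above (tactile perception feeding long-horizon task planning and execution) boils down to a sense-plan-act loop. The sketch below illustrates that loop in miniature; every name in it (`TactileReading`, `plan`, `execute`, `run`, the task recipe) is a hypothetical illustration, not UniX AI's actual API or architecture.

```python
# Minimal sense-plan-act sketch: a planner expands a high-level task into
# steps, and tactile feedback adjusts each step during execution.
# All names and behaviors here are invented for illustration only.
from dataclasses import dataclass

@dataclass
class TactileReading:
    force_n: float   # measured grip force, in newtons
    slipping: bool   # whether the grasped object is slipping

def plan(task: str) -> list[str]:
    """Expand a high-level task into an ordered step sequence."""
    recipes = {
        "pour_drink": ["locate_glass", "grasp_bottle", "tilt", "release"],
    }
    return recipes.get(task, [])

def execute(step: str, reading: TactileReading) -> str:
    """Use tactile feedback at execution time: re-grip on detected slip."""
    if reading.slipping:
        return f"{step}:regrip"
    return f"{step}:ok"

def run(task: str, readings: list[TactileReading]) -> list[str]:
    """Run each planned step against the tactile reading taken at that step."""
    return [execute(s, r) for s, r in zip(plan(task), readings)]
```

The point of the sketch is the closed loop: planning alone fixes the step order, but the tactile signal decides, per step, whether the nominal action suffices or a corrective action (here, re-gripping) is needed.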
Images (1):
|
|||||
| KEENON Debuts First Bipedal Humanoid Service Robot at WAIC, Showcasing … | https://www.manilatimes.net/2025/07/26/… | 0 | Jan 04, 2026 16:00 | active | |
KEENON Debuts First Bipedal Humanoid Service Robot at WAIC, Showcasing Role-Specific Embodied AI SolutionsDescription: SHANGHAI, July 26, 2025 /PRNewswire/ -- The world premiere of KEENON Robotics' bipedal humanoid service robot, XMAN-F1, takes center stage at the World Artifici... Content: |
|||||
| KEENON Debuts First Bipedal Humanoid Service Robot at WAIC, Showcasing … | https://moneycompass.com.my/keenon-debu… | 1 | Jan 04, 2026 16:00 | active | |
KEENON Debuts First Bipedal Humanoid Service Robot at WAIC, Showcasing Role-Specific Embodied AI Solutions - Money CompassDescription: Money Compass is one of the credible Chinese and English financial media in Malaysia with strong influence in Malaysia’s financial industry. As the winner of the SME Award in Malaysia for 5 consecutive years, we persistently propel the financial industry towards a mutually beneficial framework. Since 2004, with the dedication to advocating the public to practice financial planning in everyday life, Money Compass has accumulated a vast connection in ASEAN financial industries and garnered government agencies and corporate resources. At present, Money Compass is adjusting its pace to transform into Money Compass 2.0. Consolidating the existing connections and network, Money Compass Integrated Media Platform is founded, which is well grounded in Malaysia whilst serving the ASEAN region. The mission of the new Money Compass Integrated Media Platform is to become the financial freedom gateway to assist internet users enhance financial intelligence, create wealth opportunities and achieve financial freedom for everyone! Content:
SHANGHAI, July 26, 2025 /PRNewswire/ — The world premiere of KEENON Robotics' bipedal humanoid service robot, XMAN-F1, takes center stage at the World Artificial Intelligence Conference (WAIC) 2025 in Shanghai from July 26 to 29, where the pioneer in embodied intelligence showcases its latest innovations on the global stage for breakthrough AI advancements. Transforming its showground into an Embodied Service Experience Hub, KEENON immerses visitors in three interactive scenarios—medical station, lounge bar, and performance space—highlighting how embodied AI solutions are actively reshaping future lifestyles and industrial ecosystems. At the event, XMAN-F1 emerges as the core interactive demonstration, showcasing human-like mobility and precision in service tasks across diverse scenarios. From preparing popcorn to mixing personalized chilled beverages such as Sprite or Coke with adjustable ice levels, the robot demonstrates remarkable environmental adaptability and task execution. Scheduled stage appearances feature XMAN-F1 autonomously delivering digital presentations and product demos, powered by multimodal interaction and large language model technologies. Its fluid movements and naturalistic gestures establish it as the primary focus of attention, with visitors gathering to witness its engagement live. The demonstration further spotlights multi-robot collaboration in specialized environments. At the medical station, the humanoid XMAN-F1 partners with logistics robot M104 to create a closed-loop smart healthcare solution. The bar area features a highlight collaboration with Johnnie Walker Blue Label—the world's leading premium whisky—where robotic bartenders work alongside delivery robot T10 to craft and serve bespoke beverages. The seamless multi-robot integration not only enhances operational efficiency but also signals the dawn of robotic system interoperability, moving far beyond single-task automation.
According to IDC's latest report, KEENON leads the commercial service robot sector with 22.7% of global shipments and holds a definitive 40.4% share in food delivery robotics. At WAIC 2025, the company reinforces its market leadership while presenting its ecosystem-based strategy for cross-scenario embodied intelligence solutions. Looking ahead, KEENON will continue driving innovation in embodied intelligence, combining cutting-edge R&D and global partnerships to unlock the full potential of 'Robotics+' applications worldwide. SOURCE KEENON Robotics Copyright © 2024 Money Compass Media (M) Sdn Bhd. All Rights Reserved
Images (1):
|
|||||
| KEENON Debuts First Bipedal Humanoid Service Robot at WAIC, Showcasing … | https://bubblear.com/keenon-debuts-firs… | 1 | Jan 04, 2026 16:00 | active | |
KEENON Debuts First Bipedal Humanoid Service Robot at WAIC, Showcasing Role-Specific Embodied AI Solutions – The BubbleContent:
SHANGHAI, July 26, 2025 /PRNewswire/ — The world premiere of KEENON Robotics' bipedal humanoid service robot, XMAN-F1, takes center stage at the World Artificial Intelligence Conference (WAIC) 2025 in Shanghai from July 26 to 29, where the pioneer in embodied intelligence showcases its latest innovations on the global stage for breakthrough AI advancements. Transforming its showground into an Embodied Service Experience Hub, KEENON immerses visitors in three interactive scenarios—medical station, lounge bar, and performance space—highlighting how embodied AI solutions are actively reshaping future lifestyles and industrial ecosystems. At the event, XMAN-F1 emerges as the core interactive demonstration, showcasing human-like mobility and precision in service tasks across diverse scenarios. From preparing popcorn to mixing personalized chilled beverages such as Sprite or Coke with adjustable ice levels, the robot demonstrates remarkable environmental adaptability and task execution. Scheduled stage appearances feature XMAN-F1 autonomously delivering digital presentations and product demos, powered by multimodal interaction and large language model technologies. Its fluid movements and naturalistic gestures establish it as the primary focus of attention, with visitors gathering to witness its engagement live. The demonstration further spotlights multi-robot collaboration in specialized environments. At the medical station, the humanoid XMAN-F1 partners with logistics robot M104 to create a closed-loop smart healthcare solution. The bar area features a highlight collaboration with Johnnie Walker Blue Label—the world's leading premium whisky—where robotic bartenders work alongside delivery robot T10 to craft and serve bespoke beverages. The seamless multi-robot integration not only enhances operational efficiency but also signals the dawn of robotic system interoperability, moving far beyond single-task automation.
According to IDC’s latest report, KEENON leads the commercial service robot sector with 22.7% of global shipments and holds a definitive 40.4% share in food delivery robotics. At WAIC 2025, the company reinforces its market leadership while presenting its ecosystem-based strategy for cross-scenario embodied intelligence solutions. Looking ahead, KEENON will continue driving innovation in embodied intelligence, combining cutting-edge R&D and global partnerships to unlock the full potential of ‘Robotics+’ applications worldwide. View original content to download multimedia:https://www.prnewswire.com/news-releases/keenon-debuts-first-bipedal-humanoid-service-robot-at-waic-showcasing-role-specific-embodied-ai-solutions-302514398.html SOURCE KEENON Robotics Disclaimer: The above press release comes to you under an arrangement with PR Newswire. Bubblear.com takes no editorial responsibility for the same. © 2026 - The Bubble. All Rights Reserved.
Images (1):
|
|||||
| Chinese expert calls for world models and safety standards for … | https://biztoc.com/x/faa09dff872904e5?r… | 0 | Jan 04, 2026 16:00 | active | |
Chinese expert calls for world models and safety standards for embodied AIURL: https://biztoc.com/x/faa09dff872904e5?ref=ff Description: Andrew Yao Chi-chih, a world-renowned Chinese computer scientist, said embodied artificial intelligence still lacks key foundations, stressing the need for… Content: |
|||||
| The Quiet Architects of Embodied AI | https://medium.com/@noa.schachtel/the-q… | 0 | Jan 04, 2026 16:00 | active | |
The Quiet Architects of Embodied AIURL: https://medium.com/@noa.schachtel/the-quiet-architects-of-embodied-ai-fcec0f8617d6 Description: The Quiet Architects of Embodied AI Before a robot learns to pour a cup of coffee, someone watches it fail. Frame by frame, they mark where the metal hand hesit... Content: |
|||||
| BYD Globally Recruits Talent in the Field of Embodied AI … | https://pandaily.com/byd-globally-recru… | 1 | Jan 04, 2026 16:00 | active | |
BYD Globally Recruits Talent in the Field of Embodied AI - PandailyURL: https://pandaily.com/byd-globally-recruits-talent-in-the-field-of-embodied-ai/ Description: BYD is also going to build humanoid robots and is recruiting talents in the field of embodied intelligence globally. Content:
BYD is also going to build humanoid robots and is recruiting talent in the field of embodied intelligence globally. A new giant has entered the field of humanoid robots. On December 13th, the 'BYD Recruitment' official account released information about recruiting for the 25th Embodied Intelligence Research Team. The positions include senior algorithm engineers, senior structural engineers, senior simulation engineers, and others, with research directions spanning humanoid robots, bipedal robots, and related areas. The target audience is master's and doctoral graduates from global universities in 2025. The team introduction shows that BYD's embodied intelligence research team is carrying out customized development of various types of robots and systems, deeply exploring application-scenario demand at company scale, continuously enhancing robots' perception and decision-making capabilities, and accelerating the deployment of intelligent applications in industrial settings. Currently, the team has developed products such as industrial robots, intelligent collaborative robots, intelligent mobile robots, and humanoid robots. At BYD's 30th anniversary celebration and the unveiling ceremony for its 10 millionth new energy vehicle last month, Wang Chuanfu, Chairman and President of BYD Company Limited, announced that the company will invest 100 billion yuan in developing artificial intelligence combined with automotive smart technology to achieve comprehensive vehicle intelligence. SEE ALSO: BYD Will Invest 100 Billion Yuan in Developing AI and Smart Technology for Cars The domestic humanoid robot company UBTECH received investment from BYD in the early stages of its establishment. In October this year, UBTECH released its new-generation industrial humanoid robot, Walker S1, for training at BYD and other automotive factories. Pandaily is a tech media based in Beijing.
Our mission is to deliver premium content and contextual insights on China's technology scene to the worldwide tech community. © 2017 - 2026 Pandaily. All rights reserved.
Images (1):
|
|||||
| Embodied Intelligence: The PRC’s Whole-of-Nation Push into Robotics | https://jamestown.org/program/embodied-… | 0 | Jan 04, 2026 16:00 | active | |
Embodied Intelligence: The PRC’s Whole-of-Nation Push into RoboticsURL: https://jamestown.org/program/embodied-intelligence-the-prcs-whole-of-nation-push-into-robotics/ Description: Executive Summary: Since 2015, Beijing has pursued a whole-of-nation strategy to dominate intelligent robotics, combining vertical integration, policy coordinat... Content: |
|||||
| Alibaba Launches Robotics and Embodied AI - Pandaily | https://pandaily.com/alibaba-launches-r… | 1 | Jan 04, 2026 16:00 | active | |
Alibaba Launches Robotics and Embodied AI - PandailyURL: https://pandaily.com/alibaba-launches-robotics-and-embodied-ai Description: Alibaba Group has set up a dedicated Robotics and Embodied AI team, signaling its entry into the fast-growing race among global tech giants to bring artificial intelligence into the physical world. Content:
Alibaba Group has set up a dedicated Robotics and Embodied AI team, signaling its entry into the fast-growing race among global tech giants to bring artificial intelligence into the physical world. October 8: Alibaba Group has formed an internal robotics team, signaling its formal entry into the global race among tech giants to build AI-powered physical products. On Wednesday, Lin Junyang, head of technology at Alibaba's Tongyi Qianwen large model unit, announced on social media platform X that the company has established a "Robotics and Embodied AI Group." The move highlights Alibaba's strategic push from software-based AI into hardware and real-world applications. The announcement comes as global peers ramp up investments in robotics. On the same day, Japan's SoftBank said it would acquire ABB's industrial robotics business, deepening its footprint in what it calls "physical AI." Alibaba Cloud has also made its first foray into embodied intelligence, leading a $140 million funding round last month in Shenzhen-based startup X Square Robot. At the 2025 Yunqi Cloud Summit two weeks ago, Alibaba CEO Wu Yongming projected global AI investment would surge to $4 trillion within five years, stressing that Alibaba must keep pace. In addition to the ¥380 billion earmarked in February for cloud and AI infrastructure, the company plans further spending. From Multimodal Models to Real-World Agents Lin also noted on X that "multimodal foundation models are now being transformed into fundamental agents capable of long-horizon reasoning through reinforcement learning, using tools and memory." He added that such applications "should naturally move from the virtual world into the physical one." As head of Tongyi Qianwen, Lin previously worked on multimodal models that process voice, images, and text.
The new robotics group underscores Alibaba's intent to extend its AI expertise into embodied products, aiming for a foothold in the fast-growing embodied AI market. Backing X Square Robot In September, Alibaba Cloud led a $140 million Series A+ round for X Square Robot, marking its first major investment in embodied intelligence. The Shenzhen startup, less than two years old, has raised about $280 million across eight funding rounds. X Square pursues a software-first strategy. Last month it released Wall-OSS, an open-source embodied intelligence foundation model, alongside its Quanta X2 robot. The machine can attach a mop head for 360-degree cleaning and features a robotic hand sensitive enough to detect subtle pressure changes, moving closer to human-like functionality. The company has not yet launched a consumer product, and pricing will vary by application. Research firm Humanoid Guide estimates its humanoid robot at around $80,000. X Square is already generating revenue from sales to schools, hotels, and elder-care facilities, and is preparing for an IPO next year. COO Yang Qian said the company expects "robot butlers" to become a reality within five years, though admitted that AI for robotics still lags behind advances in chatbots and code generation. A Global Robotics Race Alibaba's entry comes as major tech firms double down on robotics. Venture capital has been pouring into the humanoid robot sector, with widespread belief that combining generative AI with robotics will transform human-machine interaction. At NVIDIA's annual shareholder meeting in June, CEO Jensen Huang said AI and robotics represent two trillion-dollar growth opportunities for the company, predicting self-driving cars will be the first major commercial application. He envisioned billions of robots, hundreds of millions of autonomous vehicles, and tens of thousands of robotic factories powered by NVIDIA's technology.
Meanwhile, SoftBank this week announced a $5.4 billion cash acquisition of ABB's robotics unit, which generated $2.3 billion in revenue in 2024 and employs about 7,000 people worldwide. Chairman Masayoshi Son described the deal as a step toward fusing "artificial superintelligence with robotics" to shape SoftBank's "next frontier." Citigroup estimates the global robotics market could reach $7 trillion by 2050, attracting vast capital inflows, including from state-backed funds, into one of technology's most hotly contested arenas.
Images (1):
|
|||||
| Venturing into the "Robotics + Artificial Intelligence" Frontier Shenzhen Kingkey … | https://www.manilatimes.net/2025/12/31/… | 0 | Jan 04, 2026 16:00 | active | |
Venturing into the "Robotics + Artificial Intelligence" Frontier Shenzhen Kingkey Smart Agriculture Times Strategically Invests in Huibo RoboticsDescription: HONG KONG, Dec. 31, 2025 /PRNewswire/ -- On the evening of December 30th, Shenzhen Kingkey Smart Agriculture Times (stock code: a000048) announced the formal s... Content: |
|||||
| Fudan University unveils embodied AI institute | http://www.ecns.cn/news/society/2025-04… | 1 | Jan 04, 2026 16:00 | active | |
Fudan University unveils embodied AI instituteURL: http://www.ecns.cn/news/society/2025-04-01/detail-iheqevhn5354677.shtml Content:
Shanghai-based service robotics provider Keenon Robotics unveils its latest humanoid service robot, XMAN-R1, on Monday. (Provided to chinadaily.com.cn) Shanghai-based Fudan University unveiled the Institute of Trustworthy Embodied Artificial Intelligence on Monday, marking the university's strategic move into embodied intelligence and a significant commitment to a global scientific and technological frontier. The institute will be dedicated to advancing cutting-edge research and practical applications in embodied intelligence, focusing on both fundamental theories and key technological breakthroughs, said Fudan University. By integrating disciplines such as computer vision, natural language processing, robotics, control systems, and technology ethics, the institute plans to develop intelligent entities with autonomous exploration capabilities, continuous evolutionary traits, and alignment with human values, providing a driving force for future human-machine collaboration and the development of an intelligent society. The institute will leverage interdisciplinary collaboration and industry-academia partnerships to design and build intelligent systems with physical bodies that can interact with the real world securely and reliably, according to Fudan University. During the unveiling event, the university also introduced four joint laboratories established in collaboration with four enterprises. For instance, a joint laboratory with Shanghai Baosight Software Co Ltd will focus on developing intelligent robots capable of withstanding high temperatures and disturbances in steel plants, enabling them to perform complex production operations effectively. Also on Monday, Shanghai-based service robotics provider Keenon Robotics unveiled its latest humanoid service robot, XMAN-R1. Leveraging vast real-world data, the company aims to foster a collaborative ecosystem with a diverse range of humanoid service robots.
Designed around the principles of specialization, affability, and safety, XMAN-R1 is tailored to fit seamlessly into the service industry scenarios that Keenon Robotics specializes in. XMAN-R1 can currently complete tasks ranging from taking orders and food preparation to delivery and collection, with plans to expand to more diverse settings, said the company, which was founded in 2010. Mimicking the movement logic and postures of service personnel, and designed with human body proportions, XMAN-R1 can hand items to customers and collaborate with the company's delivery and cleaning robots, adapting to the specific requirements of each role. It is also equipped with a large language model and expression feedback for human-like, approachable service interactions. Keenon Robotics has been dedicated to diverse service scenarios for 15 years, deploying over 100,000 specialized robots for delivery, cleaning, and other functions in more than 600 cities and regions across 60 countries.
Images (1):
|
|||||
| KraneShares Global Humanoid & Embodied Intelligence Index UCITS ETF (KOID) … | https://www.manilatimes.net/2025/10/09/… | 0 | Jan 04, 2026 16:00 | active | |
KraneShares Global Humanoid & Embodied Intelligence Index UCITS ETF (KOID) Launches on the London Stock ExchangeDescription: LONDON, Oct. 09, 2025 (GLOBE NEWSWIRE) -- KraneShares, a global asset manager known for its innovative exchange-traded funds (ETFs), today announced the launch ... Content: |
|||||
| What are physical AI and embodied AI? The robots know … | https://www.fastcompany.com/91363903/fo… | 1 | Jan 04, 2026 16:00 | active | |
What are physical AI and embodied AI? The robots know - Fast CompanyDescription: Physical AI and Embodied AI, which allow bots to understand and navigate the real world—are powering the robot revolution. Content:
07-19-2025 TECH: Physical AI and Embodied AI, which allow bots to understand and navigate the real world, are powering the robot revolution. [Photo: Amazon] BY Michael Grothaus Amazon recently announced that it had deployed its one-millionth robot across its workforce since rolling out its first bot in 2012. The figure is astounding from a sheer numbers perspective, especially considering that we're talking about just one company. The one million bot number is all the more striking, though, since it took Amazon merely about a dozen years to achieve. It took the company nearly 30 years to build its current workforce of 1.5 million humans. At this rate, Amazon could soon "employ" more bots than people. Other companies are likely to follow suit, and not just in factories. Robots will be increasingly deployed in a wide range of traditional blue-collar roles, including delivery, construction, and agriculture, as well as in white-collar spaces like retail and food services. This occupational versatility will not only stem from their physical designs—joints, gyroscopes, and motors—but also from the two burgeoning fields of artificial intelligence that power their "brains": Physical AI and Embodied AI. Here's what you need to understand about each and how they differ from the generative AI that powers chatbots like ChatGPT. Physical AI refers to artificial intelligence that understands the physical properties of the real world and how these properties interact. As artificial intelligence leader Nvidia explains it, Physical AI is also known as "generative physical AI" because it can analyze data about physical processes and generate insights or recommendations for actions that a person, government, or machine should take. In other words, Physical AI can reason about the physical world. This real-world reasoning ability has numerous applications. A Physical AI system receiving data from a rain sensor may be able to predict if a certain location will flood.
It can make these predictions by reasoning about real-time weather data using its understanding of the physical properties of fluid dynamics, such as how water is absorbed or repelled by specific landscape features. Physical AI can also be used to build digital twins of environments and spaces, from an individual factory to an entire city. It can help determine the optimal floor placement for heavy manufacturing equipment, for example, by understanding the building's physical characteristics, such as the weight capacity of each floor based on its material composition. Or it can improve urban planning by analyzing things like traffic flows, how trees impact heat retention on streets, and how building heights affect sunlight distribution in neighborhoods. Embodied AI refers to artificial intelligence that "lives" inside ("embodies") a physical vessel that can move around and physically interact with the real world. Embodied AI can inhabit various objects, including smart vacuum cleaners, humanoid robots, and self-driving cars.
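The rain-sensor example above can be made concrete with a toy sketch. The `flood_risk` function, its thresholds, and its units are all invented for illustration; a real Physical AI system would learn such relationships from data rather than hard-code them.

```python
# Toy illustration of the article's rain-sensor example: reason about a
# physical property (rainfall outpacing ground absorption) to flag flood
# risk. All names, numbers, and units are invented for illustration.

def flood_risk(rain_mm_per_hr: float, absorption_mm_per_hr: float,
               hours: float, capacity_mm: float) -> bool:
    """Flag flooding when accumulated surplus water exceeds site capacity.

    Surplus accumulates only while rainfall outpaces what the landscape
    absorbs; `capacity_mm` is how much standing water the site tolerates.
    """
    surplus_rate = max(0.0, rain_mm_per_hr - absorption_mm_per_hr)
    return surplus_rate * hours > capacity_mm
```

For example, 30 mm/hr of rain on ground absorbing 10 mm/hr leaves a 20 mm/hr surplus; over 3 hours that is 60 mm of standing water, which exceeds a 50 mm capacity, so the site is flagged. The same storm on fully absorbent ground accumulates nothing.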
Images (1):
|
|||||
| Accelerating the Evolution of Automotive Embodied Intelligence, Geely Auto Group … | https://www.manilatimes.net/2025/07/31/… | 0 | Jan 04, 2026 16:00 | active | |
Accelerating the Evolution of Automotive Embodied Intelligence, Geely Auto Group Teams Up with StepFun for a Joint Showcase at the 2025 World Artificial Intelligence ConferenceDescription: Hangzhou, China, July 30, 2025 (GLOBE NEWSWIRE) -- On July 26, Geely Auto Group partnered with its strategic tech ecosystem partner, StepFun, to jointly exhib... Content: |
|||||
| KraneShares Launches First Global Humanoid & Embodied Intelligence ETF (Ticker: … | https://markets.businessinsider.com/new… | 1 | Jan 04, 2026 16:00 | active | |
KraneShares Launches First Global Humanoid & Embodied Intelligence ETF (Ticker: KOID) On Nasdaq | Markets InsiderDescription: NEW YORK, June 05, 2025 (GLOBE NEWSWIRE) -- Krane Funds Advisors, LLC (“KraneShares”), an asset management firm known for its global exchange-tr... Content:
NEW YORK, June 05, 2025 (GLOBE NEWSWIRE) -- Krane Funds Advisors, LLC (“KraneShares”), an asset management firm known for its global exchange-traded funds (ETFs), announced the launch of the KraneShares Global Humanoid and Embodied Intelligence Index ETF (Ticker: KOID). KOID represents the first US-listed thematic equity ETF that captures the global humanoid opportunity.1 Thanks to breakthroughs in Artificial Intelligence (AI), machine learning, advanced materials, and robotics manufacturing, commercial and retail applications of humanoid robotics and embodied intelligence are now a reality. Humanoid robots—including Tesla’s Optimus, Figure AI, and Unitree—are already demonstrating impressive performance in human tasks, including in both factory and home settings. The Morgan Stanley Global Humanoid Model projects there could be 1 billion humanoids and $5 trillion in annual revenue by 2050.2 KOID seeks to capture the global humanoid and embodied intelligence ecosystem, which refers to AI systems integrated into physical machines that can sense, learn, and interact with the real world. Humanoid robotics, a key subset of embodied intelligence, focuses on robots with human-like forms and capabilities designed to work seamlessly in environments built for people, like factories, hospitals, and homes. The acceleration of bringing robots to the commercial and retail markets stems from the need to address urgent global challenges like labor shortages, aging populations, and greater efficiency and safety across industries. “Soon, the cost of a humanoid robot could be less than a car3,” said KraneShares Senior Investment Strategist Derek Yan, CFA. “We see compelling investment opportunities among the humanoid enablers and supply-chain partners that will bring humanoid robots into our daily lives at scale." 
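As a quick sanity check on the Morgan Stanley projection quoted above (1 billion humanoids and $5 trillion in annual revenue by 2050), the implied average revenue per robot is simple arithmetic:

```python
# Implied revenue per unit from the quoted Morgan Stanley projection.
annual_revenue = 5e12   # $5 trillion in annual revenue by 2050
units = 1e9             # 1 billion humanoids by 2050
per_unit = annual_revenue / units  # $5,000 of revenue per humanoid per year
```

That is, the projection works out to roughly $5,000 of revenue per humanoid per year, which is the basis for comparisons like the "cost of a humanoid robot could be less than a car" remark that follows.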
Unlike legacy robotics‐focused ETFs, KOID focuses exclusively on humanoid robotics and embodied AI, positioning itself at the forefront of the next generation of robotics innovation. KOID aims to capture the full spectrum of enabling technologies that form the foundation of humanoid development, including humanoid integration & manufacturing, mechanical systems, sensing & perception, actuation systems (the “muscle” of the robot), semiconductors & technology, and critical materials. KOID offers global exposure to companies based primarily in the United States, China, and Japan within the information technology, industrial, and consumer discretionary sectors. “We are excited to bring the Humanoid opportunity to global investors through KOID, the latest addition to our suite of innovative global thematic ETFs,” said KraneShares CEO Jonathan Krane. “At KraneShares, our core goal is to launch strategies like KOID to capture emerging megatrends, giving our clients access to powerful growth opportunities as they accelerate.” The KOID ETF will track the MerQube Global Humanoid and Embodied Intelligence Index, which is designed to capture the performance of companies engaged in humanoid and embodied intelligence-related business. For more information on the KraneShares Global Humanoid and Embodied Intelligence Index ETF (Ticker: KOID), please visit https://kraneshares.com/koid or consult your financial advisor. About KraneShares KraneShares is a specialist investment manager focused on China, Climate, and Alternatives. KraneShares seeks to provide innovative, high-conviction, and first-to-market strategies based on the firm and its partners' deep investing knowledge. KraneShares identifies and delivers groundbreaking capital market opportunities and believes investors should have cost-effective and transparent tools for attaining exposure to various asset classes. The firm was founded in 2013 and serves institutions and financial professionals globally. 
The firm is a signatory of the United Nations-supported Principles for Responsible Investment (UN PRI). Citations: Carefully consider the Funds' investment objectives, risk factors, charges and expenses before investing. This and additional information can be found in the Funds' full and summary prospectus, which may be obtained by visiting https://kraneshares.com/koid. Read the prospectus carefully before investing. Risk Disclosures: Investing involves risk, including possible loss of principal. There can be no assurance that a Fund will achieve its stated objectives. Indices are unmanaged and do not include the effect of fees. One cannot invest directly in an index. This information should not be relied upon as research, investment advice, or a recommendation regarding any products, strategies, or any security in particular. This material is strictly for illustrative, educational, or informational purposes and is subject to change. Certain content represents an assessment of the market environment at a specific time and is not intended to be a forecast of future events or a guarantee of future results; material is as of the dates noted and is subject to change without notice. Humanoid and embedded intelligence technology companies often face high research and capital costs, resulting in variable profitability in a competitive market where products can quickly become obsolete. Their reliance on intellectual property makes them vulnerable to losses, while legal and regulatory changes can impact profitability. Defining these companies can be complex, and some may risk commercial failure. They are also affected by global scientific developments, leading to rapid obsolescence, and may be subject to government regulations. Many companies in which the Fund invests may not currently be profitable, with no guarantee of future success. A-Shares are issued by companies in mainland China and traded on local exchanges. 
They are available to domestic and certain foreign investors, including QFIs and those participating in Stock Connect Programs like Shanghai-Hong Kong and Shenzhen-Hong Kong. Foreign investments in A-Shares face various regulations and restrictions, including limits on asset repatriation. A-Shares may experience frequent trading halts and illiquidity, which can lead to volatility in the Fund’s share price and increased trading halt risks. The Chinese economy is an emerging market, vulnerable to domestic and regional economic and political changes, often showing more volatility than developed markets. Companies face risks from potential government interventions, and the export-driven economy is sensitive to downturns in key trading partners, impacting the Fund. U.S.-China tensions raise concerns over tariffs and trade restrictions, which could harm China’s exports and the Fund. China’s regulatory standards are less stringent than in the U.S., resulting in limited information about issuers. Tax laws are unclear and subject to change, potentially impacting the Fund and leading to unexpected liabilities for foreign investors. Fluctuations in the currencies of foreign countries may have an adverse effect on domestic currency values. The Japanese economy depends heavily on international trade and is vulnerable to economic, political, and social instability, which could affect the Fund. The yen is volatile, influenced by fluctuations in Asia, and has historically shown unpredictable movements against the U.S. dollar. Natural disasters, such as earthquakes and tidal waves, also pose risks. Furthermore, government intervention and an unstable financial services sector can negatively impact the economy, which relies significantly on trade with developing nations in East and Southeast Asia. The Fund invests in non-U.S. securities, which can be less liquid and subject to weaker regulatory oversight compared to U.S. securities. 
Risks include currency fluctuations, political or economic instability, incomplete financial disclosure, and potential taxes or nationalization of holdings. Foreign trading hours and settlement processes may also limit the Fund’s ability to trade, and different accounting standards can add complexity. Suspensions of foreign securities may adversely impact the Fund, and delays in settlement or holidays may hinder asset liquidation, increasing the risk of loss. The Fund may invest in derivatives, which are often more volatile than other investments and may magnify the Fund’s gains or losses. A derivative (i.e., futures/forward contracts, swaps, and options) is a contract that derives its value from the performance of an underlying asset. The primary risk of derivatives is that changes in the asset’s market value and the derivative may not be proportionate, and some derivatives can have the potential for unlimited losses. Derivatives are also subject to liquidity and counterparty risk. The Fund is subject to liquidity risk, meaning that certain investments may become difficult to purchase or sell at a reasonable time and price. If a transaction for these securities is large, it may not be possible to initiate, which may cause the Fund to suffer losses. Counterparty risk is the risk of loss in the event that the counterparty to an agreement fails to make required payments or otherwise comply with the terms of the derivative. Large capitalization companies may struggle to adapt fast, impacting their growth compared to smaller firms, especially in expansive times. This could result in lower stock returns than investing in smaller and mid-sized companies. In addition to the normal risks associated with investing, investments in smaller companies typically exhibit higher volatility. A large number of shares of the Fund is held by a single shareholder or a small group of shareholders. 
Redemptions by these shareholders can harm Fund performance, especially in declining markets, leading to forced sales at disadvantageous prices, increased costs, and adverse tax effects for remaining shareholders. The Fund is new and does not yet have a significant number of shares outstanding. If the Fund does not grow in size, it will be at greater risk than larger funds of wider bid-ask spreads for its shares, trading at a greater premium or discount to NAV, liquidation and/or a trading halt. Narrowly focused investments typically exhibit higher volatility. The Fund’s assets are expected to be concentrated in a sector, industry, market, or group of concentrations to the extent that the Underlying Index has such concentrations. The securities or futures in that concentration could react similarly to market developments. Thus, the Fund is subject to loss due to adverse occurrences that affect that concentration. KOID is non-diversified. Neither MerQube, Inc. nor any of its affiliates (collectively, “MerQube”) is the issuer or producer of KOID and MerQube has no duties, responsibilities, or obligations to investors in KOID. The index underlying KOID is a product of MerQube and has been licensed for use by Krane Funds Advisors, LLC and its affiliates. Such index is calculated using, among other things, market data or other information (“Input Data”) from one or more sources (each such source, a “Data Provider”). MerQube® is a registered trademark of MerQube, Inc. These trademarks have been licensed for certain purposes by Krane Funds Advisors, LLC and its affiliates in its capacity as the issuer of KOID. 
KOID is not sponsored, endorsed, sold or promoted by MerQube, any Data Provider, or any other third party, and none of such parties make any representation regarding the advisability of investing in securities generally or in KOID particularly, nor do they have any liability for any errors, omissions, or interruptions of the Input Data, MerQube Global Humanoid and Embodied Intelligence Index, or any associated data. Neither MerQube nor the Data Providers make any representation or warranty, express or implied, to the owners of the shares of KOID or to any member of the public, of any kind, including regarding the ability of the MerQube Global Humanoid and Embodied Intelligence Index to track market performance or any asset class. The MerQube Global Humanoid and Embodied Intelligence Index is determined, composed and calculated by MerQube without regard to Krane Funds Advisors, LLC and its affiliates or the KOID. MerQube and Data Providers have no obligation to take the needs of Krane Funds Advisors, LLC and its affiliates or the owners of KOID into consideration in determining, composing or calculating the MerQube Global Humanoid and Embodied Intelligence Index. Neither MerQube nor any Data Provider is responsible for, or has participated in, the determination of the prices or amount of KOID, the timing of the issuance or sale of KOID, or the determination or calculation of the equation by which KOID is to be converted into cash, surrendered or redeemed, as the case may be. MerQube and Data Providers have no obligation or liability in connection with the administration, marketing or trading of KOID. There is no assurance that investment products based on the MerQube Global Humanoid and Embodied Intelligence Index will accurately track index performance or provide positive investment returns. MerQube is not an investment advisor. 
Inclusion of a security within an index is not a recommendation by MerQube to buy, sell, or hold such security, nor is it considered to be investment advice. NEITHER MERQUBE NOR ANY OTHER DATA PROVIDER GUARANTEES THE ADEQUACY, ACCURACY, TIMELINESS AND/OR THE COMPLETENESS OF THE MERQUBE GLOBAL HUMANOID AND EMBODIED INTELLIGENCE INDEX OR ANY DATA RELATED THERETO (INCLUDING DATA INPUTS) OR ANY COMMUNICATION WITH RESPECT THERETO. NEITHER MERQUBE NOR ANY OTHER DATA PROVIDERS SHALL BE SUBJECT TO ANY DAMAGES OR LIABILITY FOR ANY ERRORS, OMISSIONS, OR DELAYS THEREIN. MERQUBE AND ITS DATA PROVIDERS MAKE NO EXPRESS OR IMPLIED WARRANTIES, AND THEY EXPRESSLY DISCLAIM ALL WARRANTIES, OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE OR USE OR AS TO RESULTS TO BE OBTAINED BY KRANE FUNDS ADVISORS, LLC AND ITS AFFILIATES, OWNERS OF THE KOID, OR ANY OTHER PERSON OR ENTITY FROM THE USE OF THE MERQUBE GLOBAL HUMANOID AND EMBODIED INTELLIGENCE INDEX OR WITH RESPECT TO ANY DATA RELATED THERETO. WITHOUT LIMITING ANY OF THE FOREGOING, IN NO EVENT WHATSOEVER SHALL MERQUBE OR DATA PROVIDERS BE LIABLE FOR ANY INDIRECT, SPECIAL, INCIDENTAL, PUNITIVE, OR CONSEQUENTIAL DAMAGES INCLUDING BUT NOT LIMITED TO, LOSS OF PROFITS, TRADING LOSSES, LOST TIME OR GOODWILL, EVEN IF THEY HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES, WHETHER IN CONTRACT, TORT, STRICT LIABILITY, OR OTHERWISE. THE FOREGOING REFERENCES TO “MERQUBE” AND/OR “DATA PROVIDER” SHALL BE CONSTRUED TO INCLUDE ANY AND ALL SERVICE PROVIDERS, CONTRACTORS, EMPLOYEES, AGENTS, AND AUTHORIZED REPRESENTATIVES OF THE REFERENCED PARTY. ETF shares are bought and sold on an exchange at market price (not NAV) and are not individually redeemed from the Fund. However, shares may be redeemed at NAV directly by certain authorized broker-dealers (Authorized Participants) in very large creation/redemption units. The returns shown do not represent the returns you would receive if you traded shares at other times. 
Shares may trade at a premium or discount to their NAV in the secondary market. Brokerage commissions will reduce returns. Beginning 12/23/2020, market price returns are based on the official closing price of an ETF share or, if the official closing price isn't available, the midpoint between the national best bid and national best offer ("NBBO") as of the time the ETF calculates the current NAV per share. Prior to that date, market price returns were based on the midpoint between the Bid and Ask price. NAVs are calculated using prices as of 4:00 PM Eastern Time. The KraneShares ETFs and KFA Funds ETFs are distributed by SEI Investments Distribution Company (SIDCO), 1 Freedom Valley Drive, Oaks, PA 19456, which is not affiliated with Krane Funds Advisors, LLC, the Investment Adviser for the Funds, or any sub-advisers for the Funds.
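The NBBO-midpoint and premium/discount conventions described above can be made concrete with a small sketch. Prices here are hypothetical, and `nbbo_midpoint` and `premium_discount` are illustrative helper names, not part of any KraneShares or SIDCO tooling:

```python
def nbbo_midpoint(best_bid: float, best_offer: float) -> float:
    """Midpoint between the national best bid and national best offer."""
    return (best_bid + best_offer) / 2

def premium_discount(market_price: float, nav: float) -> float:
    """Premium (+) or discount (-) of an ETF's market price vs. NAV, as a fraction."""
    return (market_price - nav) / nav

# Hypothetical quote captured at the 4:00 PM ET NAV calculation time
mid = nbbo_midpoint(best_bid=24.98, best_offer=25.02)
pd_frac = premium_discount(market_price=mid, nav=24.90)
print(f"midpoint={mid:.2f}, premium/discount={pd_frac:+.2%}")
```

With these numbers the share trades at a small premium, since the quote midpoint sits above NAV.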
| Dreame’s new floor washers show embodied intelligence entering homes first … | https://kr-asia.com/dreames-new-floor-w… | 1 | Jan 04, 2026 16:00 | active | |
Dreame’s new floor washers show embodied intelligence entering homes first through appliances
Description: While humanoid robots remain the holy grail, companies like Dreame are tapping AI to enhance a wide range of household devices. Content:
Written by 36Kr English Published on 16 Sep 2025 5 mins read When competition over specifications in home appliances reaches a bottleneck, artificial intelligence may be the factor that breaks the deadlock while making products more user-friendly. At its latest launch event, Dreame introduced more than 30 new products, with two floor washers drawing the most attention. Each features a pair of robotic arms that can self-clean edges and scrub floors as users push the machine. “In recent years, the floor washer segment has focused too much on pushing parameters to the extreme, but that doesn’t necessarily solve users’ real pain points, such as stubborn stains or cleaning low spaces,” said Wang Hongpin, head of product and R&D for Dreame’s floor washer division in China. Increasing suction power does not always improve cleaning results. Higher power can also create more noise and shorten battery life. By contrast, AI introduces new functionality through environmental sensing, user intent recognition, and action decision-making, enabling floor washers to address problems that were previously unresolved. This may represent a stepping stone. Before humanoid robots enter households, AI-enabled appliances with basic embodied intelligence are already easing housework. The concept extends beyond floor washers. Dreame’s new lineup now includes refrigerators, air conditioners, and televisions, signaling its ambition to expand beyond cleaning devices and position itself as a full-fledged home appliance company. From large appliances such as refrigerators to smaller items like hair dryers and smart rings, AI integration is a recurring theme. Dreame also disclosed for the first time that it plans to launch its own smart glasses and is exploring companion robots. Pan Zhidong, head of Dreame’s AI smart hardware division, said in an interview that the company intends to use smart rings and glasses as entry points to connect all its products. 
According to his vision, Dreame’s smart home ecosystem will expand outward from cleaning appliances to automotive applications, with AI-driven hardware serving more aspects of daily life. The clearest examples of Dreame’s AI integration are the two new floor washers with robotic arms. The T60 Ultra and H60 Ultra each feature two robotic arms designed for scrubbing. The front arm uses a flexible scraper to clear watermarks and dirt along edges, while the rear arm applies pressure to tackle stubborn stains. As users push the washer forward, the AI-controlled arms perform in tandem. One acts more softly than the other, like a pair of helping hands. This directly addresses pain points that traditional floor washers have long left unresolved. In recent years, the industry has been locked in a race to maximize specifications—stronger suction, higher water output, more powerful motors. Yet these boosts have had limited effect on edge cleaning and narrow gaps. To address this, the T60 and H60 incorporate embodied intelligence. Under AI control, the robotic arms can sense the environment and make real-time decisions. The washers detect floor dirt levels through high-precision sensors and magnetic rings. Once stains are identified, AI adjusts suction and water output. If stains prove difficult to remove, the machine alerts users with lights and sounds, suggesting manual use of steam or hot water modes. Acting as the “brain,” the AI system orchestrates the machine’s actions, controlling arm movements based on whether the device is advancing or reversing. “The floor washer market is in a fiercely competitive state, with each player choosing its own angle. Dreame hopes to solve user pain points more effectively through embodied intelligence,” Wang said. He added that the goal is to give machines more efficient and thorough cleaning capabilities, while also making them flexible enough to reach under furniture and scrub tough stains. 
The AI-first approach extends to other Dreame products. Its new hair dryer can detect its distance from hair and automatically adjust airflow and temperature. The company’s latest refrigerator uses built-in models to monitor and regulate oxygen levels, while also sterilizing compartments to preserve freshness. Another highlight is Dreame’s smart ring, which serves as a control point for smart home devices and tracks health metrics. It is designed to connect with Dreame’s upcoming car and other outdoor devices as well. Underlying these products is a consistent logic: as traditional hardware upgrades in home appliances yield diminishing returns, AI can create new space for innovation. According to All View Cloud (AVC), the market for cleaning appliances such as floor washers and robot vacuums has been expanding rapidly. Cumulative sales reached RMB 22.4 billion (USD 3.1 billion), up 30% year-on-year (YoY), with unit sales of 16.55 million, an increase of 22.1%. Growing sales have attracted more entrants. On August 6, 2025, DJI released its Romo robot vacuum, marking its crossover into smart home cleaning and further intensifying competition. Yet profit growth has not kept pace with sales. In 2024, Ecovacs Robotics reported net profit of RMB 806 million (USD 112.8 million), less than half its 2021 peak. Roborock’s 2024 revenue rose 38% YoY, but its net profit slipped 3.6%. Both companies attributed pressure on earnings to price wars and heightened competition. Some in the industry see AI as a potential way out of this cycle. Earlier this year, Roborock introduced what it described as the world’s first mass-produced bionic hand vacuum cleaner, which can sweep floors while also picking up small items. AI enables the device to recognize objects and calculate the best way to grasp them, improving collection accuracy. Dreame’s floor washers follow a similar principle, identifying stains with AI and adapting cleaning modes on the fly. 
Whether it is a vacuum that automatically lends a hand when it senses dirt or a hair dryer that adjusts heat based on hair condition, today’s AI appliances are not trying to mimic humanoid robots. Instead, they apply embodied intelligence in targeted ways to solve specific household tasks. This could mark a transitional stage for embodied intelligence, offering a practical route for deployment in everyday life. Wang believes advances in smart technology will expand opportunities for floor washers. “With embodied intelligence and robotic arms, floor washers now have eyes, a brain, and hands. In the future, consumers may only need to push the machine casually around their homes, while it autonomously adjusts to different scenarios and completes the job,” he said. Beyond its latest launches, Dreame plans to release smart glasses in the first quarter of 2026, along with companion robots, though details have not yet been confirmed. KrASIA Connection features translated and adapted content that was originally published by 36Kr. This article was written by Fu Chong for 36Kr. Loading... Subscribe to our newsletters KrASIA A digital media company reporting on China's tech and business pulse.
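The sense-decide-act loop the article describes for the floor washers (sensors and magnetic rings detect dirt levels, AI adjusts suction and water output, and the machine alerts the user when a stain resists repeated passes) could be sketched roughly as below. Every name and threshold here is invented for illustration; this is not Dreame's actual control firmware:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    dirt_level: float  # 0.0 (clean) to 1.0 (heavily soiled), from sensors/magnetic rings
    passes: int        # how many times this spot has already been scrubbed

def decide(reading: Reading) -> dict:
    """Map a sensor reading to suction/water settings, escalating on stubborn stains."""
    if reading.dirt_level < 0.2:
        return {"suction": "low", "water": "low", "alert": False}
    if reading.passes >= 3 and reading.dirt_level > 0.5:
        # Stain persists after repeated passes: suggest manual steam/hot-water mode
        return {"suction": "max", "water": "high", "alert": True}
    return {"suction": "high", "water": "medium", "alert": False}

print(decide(Reading(dirt_level=0.7, passes=3)))
```

The point of the sketch is the structure, not the thresholds: perception feeds a decision step, which drives both actuation (suction, water, arm pressure) and user feedback.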
| Title: The Future of Artificial Intelligence in 2025: Trends, Challenges … | https://medium.com/@mashirosenpai0/titl… | 0 | Jan 04, 2026 16:00 | active | |
Title: The Future of Artificial Intelligence in 2025: Trends, Challenges & Opportunities
Description: Explore the latest trends and challenges of Artificia... Content:
| The Future is Here: Embodied Intelligent Robots | https://www.manilatimes.net/2025/07/30/… | 0 | Jan 04, 2026 16:00 | active | |
The Future is Here: Embodied Intelligent Robots
Description: BEIJING, July 30, 2025 /PRNewswire/ -- A news report from en.qstheory.cn: Content:
| World's largest embodied AI data factory opens in Tianjin | http://www.ecns.cn/news/cns-wire/2025-0… | 1 | Jan 04, 2026 16:00 | active | |
World's largest embodied AI data factory opens in Tianjin
URL: http://www.ecns.cn/news/cns-wire/2025-06-24/detail-ihestqxv5318579.shtml Content:
(ECNS) -- The world's largest embodied artificial intelligence data facility, Pacini Perception Technology's Super Embodied Intelligence Data (Super EID) Factory, officially opened in Tianjin Municipality on Tuesday. Spanning 12,000 square meters, the facility is the world's leading base for embodied AI data collection and model training. Equipped with 150 data units developed in-house, it is expected to produce nearly 200 million high-quality AI training samples annually. The base features a "15+N" full-scenario matrix system encompassing thousands of task scenarios across automotive manufacturing, 3C (computer, communication, and consumer electronics) product assembly, household, office, and food service environments. Xu Jincheng, Pacini's CEO and founder, explained that the facility's core technology utilizes synchronized high-precision capture of human hand movements combined with visual-tactile modality alignment. This means the data samples combine 3D vision and touch-sensing, allowing robots to better mimic human interaction. The approach overcomes traditional robotics-dependent data collection limitations and dramatically improves data versatility, Xu said. The facility will not only serve as a data hub but also evolve into an innovation engine for the embodied AI industry, the CEO added. China has produced nearly 100 embodied AI robotic products since 2024, capturing 70% of the global market, according to data released by the Ministry of Industry and Information Technology in April. According to a report from Head Leopard Shanghai Research Institute, China's embodied AI market size reached 418.6 billion yuan ($58 billion) in 2023 and is expected to reach 632.8 billion yuan by 2027, driven by breakthroughs in AI technology, the Securities Daily reported. (By Zhang Dongfang)
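The stated capacity implies a rough per-unit throughput, which a back-of-the-envelope calculation makes concrete (assuming the roughly 200 million annual samples are spread evenly across the 150 data units over a full year, which the article does not state):

```python
# Figures from the article; even distribution across units is an assumption.
samples_per_year = 200_000_000
units = 150

per_unit_year = samples_per_year / units  # samples per data unit per year
per_unit_day = per_unit_year / 365        # samples per data unit per day

print(f"{per_unit_year:,.0f} samples per unit/year, {per_unit_day:,.0f} per unit/day")
```

That works out to on the order of a few thousand samples per unit per day, a useful sanity check on the "nearly 200 million samples annually" claim.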
| SenseTime deepens its push into embodied intelligence with ACE Robotics | https://kr-asia.com/sensetime-deepens-i… | 1 | Jan 04, 2026 16:00 | active | |
SenseTime deepens its push into embodied intelligence with ACE Robotics
URL: https://kr-asia.com/sensetime-deepens-its-push-into-embodied-intelligence-with-ace-robotics Description: Wang Xiaogang explains what the new robotics venture means for SenseTime’s AI plans. Content:
Written by 36Kr English Published on 22 Dec 2025 8 mins read In China’s artificial intelligence sector, SenseTime has long stood as one of the most enduring players, well over a decade old and well-acquainted with the cyclical tides of technological change. During the rise of visual AI, the company emerged from a research lab at The Chinese University of Hong Kong to become one of the first to commercialize computer vision at scale. Yet B2B operations have never been easy. Like many peers, SenseTime often faced clients with highly customized needs and long development cycles. Then came OpenAI’s ChatGPT, which reshaped the industry around large language models. Leveraging its early lead in computing infrastructure, SenseTime found new momentum. According to its 2024 annual report, the company’s generative AI business brought in RMB 2.4 billion (USD 336 million) in revenue, rising from 34.8% of total income in 2023 to 63.7%, making it SenseTime’s most critical business line. But after three years of rapid progress in large models, a more pragmatic question looms: beyond narrow applications, how can AI enter the physical world and become a practical force that truly changes how we work and live? That question lies at the center of SenseTime’s next chapter. As embodied intelligence emerges as the next frontier, a new company has joined the race. ACE Robotics, led by Wang Xiaogang, SenseTime’s co-founder and executive director, has officially stepped into the field. Wang now serves as ACE Robotics’ chairman. In an interview with 36Kr, Wang said ACE Robotics was not born from hype but from necessity. It aims to address real-world pain points through a new human-centric research paradigm, focusing on building a “brain” that understands the laws of the physical world and to deliver integrated hardware-software products for real-world use. This direction reflects a broader shift across the industry. 
A year ago, embodied intelligence firms were still experimenting with mobility and stability. Today, some have secured contracts worth hundreds of millions of RMB, bringing robots into factories in Shenzhen, Shanghai, and Suzhou. AI’s shift toward physical intelligence carries major significance, especially as the industry faces growing pressure to deliver real returns. In the first half of 2025, SenseTime reported a net loss of RMB 1.162 billion (USD 163 million), a 50% year-on-year decrease, even as its R&D spending continued to rise. The company is now pursuing more grounded, sustainable paths to growth. The breakthrough, Wang said, will not come from a leap toward artificial general intelligence (AGI), but from robots that can learn reusable skills through real-world interaction and solve tangible physical problems. The following transcript has been edited and consolidated for brevity and clarity. Wang Xiaogang (WX): The decision stems from two considerations: industrialization and technological paradigm. From an industrial perspective, embodied intelligence represents a market worth tens of trillions of RMB. As Nvidia founder Jensen Huang has said, one day everyone may own one or more robots. Their numbers could exceed smartphones, and their unit value could rival that of automobiles. For SenseTime, which has historically focused on B2B software, expanding into integrated hardware-software operations is a natural step toward scale. Years of working with vertical industries have given us a deep understanding of user pain points. Compared with many embodied AI startups that lack this context, our ability to deploy in real-world scenarios gives us an edge in commercialization. From a technical perspective, traditional embodied intelligence has a key weakness. Hardware has advanced quickly, but the “brain” has lagged behind because most approaches are machine-centric. 
They start with a robot’s form, train a general model on data from that specific body, and assume it can generalize. But it can’t. Just as humans and animals can’t share one brain, robots with different morphologies—whether with dexterous hands, claws, or multiple arms—cannot share a universal model. WX: We’re proposing a new, human-centric paradigm. We begin by studying how humans interact with the physical world, essentially how we move, grasp, and manipulate. Using wearable devices and third-person cameras, we collect multimodal data, including vision, touch, and force, to record complex, commonsense human behaviors. By feeding this data into a world model, we enable it to understand both physics and human behavioral logic. A mature world model can even guide hardware design, ensuring that a robot’s form naturally fits its intended environment. In recent months, companies such as Tesla and Figure AI have pivoted toward first-person camera-based learning. But these approaches capture only visual information, without integrating critical signals like force, touch, and friction—the keys to genuine multidimensional interaction. Vision alone may let a robot dance or shadowbox, but it still struggles with real contact tasks like moving a bottle or tightening a screw. Our human-centric approach has already been validated. A team led by professor Liu Ziwei developed the EgoLife dataset, containing over 300 hours of first- and third-person human activity data. Models trained on this dataset have overcome the industry’s pain point, whereas most existing datasets capture only trivial actions, insufficient for complex motion learning. WX: Our goal is not merely to build models but to deliver integrated hardware-software products that solve real problems in defined scenarios. We’ve found that much existing hardware doesn’t match real-world needs. So we work closely with partners on customized designs. 
Take quadruped robots: traditional models mount cameras too low and narrow, making it difficult to detect traffic lights or navigate intersections. In partnership with Insta360, we developed a panoramic camera module with 360-degree coverage, solving that limitation. We’re also tackling issues like waterproofing, high computing costs, and limited battery life, which are key obstacles to outdoor and industrial deployment. WX: Our strength lies in the “brain,” which represents models, navigation, and operation capabilities. Previously, SenseTime specialized in large-scale software systems but had no standardized edge products. Through prior investments in hardware and component makers, ACE Robotics now follows an ecosystem model. We define design standards, co-develop hardware with partners, and keep our model layer open, offering base models and training resources. WX: R&D systems and safety standards are two key areas. Both autonomous driving and robotics rely on massive datasets for continuous improvement. The validated “data flywheels” we’ve built significantly boost iteration speed. Meanwhile, the rigorous safety and data-quality frameworks from autonomous driving can directly enhance robotics reliability. On the functional side, our SenseFoundry platform already includes hundreds of modules originally built for fixed-camera city management. When linked to mobile robots, these capabilities transfer seamlessly, extending from static monitoring to dynamic mobility. WX: SenseTime’s path traces AI’s own progression from version 1.0 to 3.0. In 2014, we were in the AI 1.0 era, defined by visual recognition. Machines began to outperform the human eye, but intelligence came from manual labeling: tagging images to simulate cognition. Because labeled data was limited and task-specific, each application required its own dataset. Intelligence was only as strong as the amount of human labor behind it. Models were small and lacked generalization across scenarios. 
Then came the 2.0 era of large models, which transformed everything. The key difference was data richness. The internet’s texts, poems, and code embody centuries of human knowledge, far more diverse than labeled images. Large models learned from this collective intelligence, allowing them to generalize across industries and domains. But as online data becomes saturated, the marginal gains from this approach are slowing. We are now entering the AI 3.0 era of embodied intelligence, defined by direct interaction with the physical world. To truly understand physics and human behavior, reading text and images is no longer enough. AI must engage with the world. Tasks such as cleaning a room or delivering a package demand real-time, adaptive intelligence. Through direct interaction, AI can overcome the limits of existing data and open new pathways for growth. WX: Kairos 3.0 consists of three components: multimodal understanding and fusion, a synthetic network, and behavioral prediction. The first fuses diverse inputs, including not just images, videos, and text, but also camera poses, 3D object trajectories, and tactile or force data. This enables the model to grasp the real-world physics behind movement and interaction. In collaboration with Nanyang Technological University, for example, the model can infer camera poses from a single image. When a robotic arm captures a frame, the model can deduce the arm’s position and predict its motion from visual changes, deepening its understanding of physical interaction. The second component, the synthetic network, can generate videos of robots performing various manipulation tasks, swapping robot types, or altering environmental elements such as objects, tools, or room layouts. The third, behavioral prediction, enables the model to anticipate a robot’s next move after receiving an instruction, bridging cognition and execution into a complete loop from understanding to action. 
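Wang's description of Kairos 3.0 as three chained components (multimodal understanding and fusion, a synthetic network, and behavioral prediction) suggests a pipeline of the following rough shape. This is a structural sketch only: every class and method name is invented for illustration, and SenseTime has not published such an interface:

```python
class KairosPipelineSketch:
    """Illustrative three-stage pipeline: fuse -> synthesize -> predict."""

    def fuse(self, images, text, tactile, camera_pose) -> dict:
        # Stage 1: combine vision, language, tactile/force, and pose into one state
        return {"images": images, "text": text, "tactile": tactile, "pose": camera_pose}

    def synthesize(self, state: dict, variation: str) -> dict:
        # Stage 2: generate a counterfactual scene (e.g., swap robot type or layout)
        return {**state, "variation": variation}

    def predict_action(self, state: dict, instruction: str) -> str:
        # Stage 3: map the fused state plus an instruction to the robot's next move
        return f"plan for '{instruction}' given {len(state)} fused signals"

pipe = KairosPipelineSketch()
state = pipe.fuse(images=["frame0"], text="tighten the screw",
                  tactile=[0.4], camera_pose=(0.0, 0.0, 1.0))
state = pipe.synthesize(state, variation="different arm")
print(pipe.predict_action(state, "tighten the screw"))
```

The takeaway is the dataflow: fused multimodal state feeds both the synthetic network (for data augmentation across robot bodies and environments) and the behavioral predictor (closing the loop from understanding to action).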
WX: It combines environmental data collection with world modeling. By “environment,” we mean real human living and working spaces. Unlike autonomous driving, which focuses narrowly on roads, or underwater robotics, we model how humans interact with their surroundings. This yields higher data efficiency and more authentic inputs. We also integrate human ergonomics, touch, and force, which are all essential for rapid learning, and all missing in machine-centric paths. WX: The first large-scale applications will emerge in quadruped robots, or robotic dogs. Most current quadruped bots still rely on remote control or preset routes. Our system gives them autonomous navigation and spatial intelligence. Equipped with ACE Robotics’ navigation technology, they can coordinate through a control platform, follow Baidu Maps commands, and respond to multimodal or voice inputs. They can identify people in need, record license plates, and detect anomalies. Linked with our SenseFoundry vision platform, these robots can recognize fights, garbage accumulation, unleashed pets, or unauthorized drones, sending real-time data back to control centers. This combination, supported by cloud-based management, will soon scale in inspection and monitoring. Within one to two years, we expect widespread deployment in industrial environments. WX: In the medium term, warehouse logistics will likely be the next major commercialization frontier. Unlike factories, warehouses share consistent operational patterns. As online shopping expands, front-end logistics hubs require standardized automation for sorting and packaging. Traditional robot data collection cannot handle the enormous variety of SKUs, but large-scale environmental data allows our models to generalize and scale efficiently. In the long term, home environments will be the next key direction, though safety remains a major challenge. 
Household robots must manage collisions and ensure object safety, much like autonomous driving must evolve from Level 2 to Level 4 autonomy. Progress is being made. Figure AI, for instance, is partnering with real estate funds managing millions of apartment layouts to gather environmental data, gradually moving embodied intelligence closer to the home. KrASIA Connection features translated and adapted content that was originally published by 36Kr. This article was written by Huang Nan for 36Kr.
| China's Top Universities Plan to Roll Out New Major to … | https://www.businessinsider.com/china-e… | 1 | Jan 04, 2026 16:00 | active | |
China's Top Universities Plan to Roll Out New Major to Boost Robotics - Business Insider
Description: Seven Chinese universities plan to launch an "embodied intelligence" major as Beijing races to build a pipeline of robotics and AI talent.
Content:
China wants more robotics talent. The country's elite universities are preparing to launch a new undergraduate major in "embodied intelligence," an emerging field that combines AI with robotics. Seven universities — including Shanghai Jiao Tong University, Zhejiang University, Beijing Institute of Technology, and Xi'an Jiaotong University — have applied to offer the new major, according to a public notice published in November by China's Ministry of Education. These schools sit at the top of the country's engineering and computer-science ecosystem, and several are part of the C9 League, China's equivalent of the Ivy League. Zhejiang University, located in eastern China, is the alma mater of DeepSeek's founder and a growing roster of AI startup leaders. The ministry said the major is being introduced to meet national demand for talent in "future industries" such as embodied intelligence, quantum technology, and next-generation communications. In a June notice, the ministry said that universities should "optimize program offerings based on national strategies, market demands, and technological development." China's embodied intelligence industry is expected to take off. This year, the market could reach 5.3 billion yuan, or $750 million, according to a report republished by the Cyberspace Administration of China. 
By 2030, it could hit 400 billion yuan, and surpass 1 trillion yuan by 2035, according to a report from the Development Research Center of the State Council. The Beijing Institute of Technology said in its application document that the industry has a shortfall of about one million embodied intelligence professionals. If adopted, the major would become one of the newest additions to China's higher-education system. Beijing's push into AI and robotics has been underway for a while. Shanghai Jiao Tong University already runs a "Machine Vision and Intelligence Group" under its School of Artificial Intelligence. Zhejiang University has also set up a "Humanoid Robot Innovation Research Institute," dedicated to "developing humanoid robots that exceed human capabilities in movement and manipulation." The Chinese tech industry is moving just as quickly. Chinese companies specializing in humanoid robots and autonomous systems have been racing to keep pace with global competitors. In September, Ant Group, an affiliate of the Chinese conglomerate Alibaba Group, unveiled R1, a humanoid robot that has drawn comparisons to Tesla's Optimus. In the US, some universities, including Stanford, Carnegie Mellon, and New York University, already offer courses and labs for robotics and AI. China's proposed "embodied intelligence" major is designed with job opportunities in mind. At the Beijing Institute of Technology, the school plans to enroll 120 undergraduates in the program each year, with 70 expected to continue into graduate programs and 50 headed straight into the workforce, according to its application document. The university's filing sketches out where those students are likely to go. State-owned giants like Norinco and the China Aerospace Science and Technology Corporation are expected to take more than a dozen graduates, while others are projected to join major tech players, including Huawei, Alibaba, Tencent, ByteDance, Xiaomi, and BYD. 
The major includes courses such as multimodal perception and fusion, embodied human-robot interaction, and machine learning for robotics, according to the university's filing.
| The Heat: Artificial Intelligence | https://america.cgtn.com/2024/12/10/the… | 0 | Jan 04, 2026 16:00 | active | |
The Heat: Artificial Intelligence
URL: https://america.cgtn.com/2024/12/10/the-heat-artificial-intelligence-4
Description: In 2024, technology took a gigantic leap forward, especially with artificial intelligence. These rapid advancements prompted governments and industry leaders to...
Content:
| Li Auto’s former CTO launches embodied intelligence startup with USD … | https://kr-asia.com/li-autos-former-cto… | 1 | Jan 04, 2026 16:00 | active | |
Li Auto’s former CTO launches embodied intelligence startup with USD 50 million backing
Description: The venture marks the latest move by automotive industry leaders into robotics.
Content:
Written by 36Kr English Published on 22 Sep 2025 2 mins read Wang Kai, investment partner at Vision Plus Capital and former CTO of Li Auto, has reportedly launched a startup in embodied intelligence. A senior executive responsible for assisted driving technology at a major automaker has also joined the project and is currently on leave from the company. According to 36Kr, the startup has attracted strong investor interest. Within months of its founding, it secured about USD 50 million across two funding rounds. Vision Plus Capital led the first, while HongShan and Lanchi Ventures backed the second. The team’s track record in artificial intelligence and large-scale engineering is seen as a key factor in drawing support, in addition to growing interest in embodied intelligence. Wang joined Li Auto in September 2020 as CTO, where he oversaw R&D and mass production of intelligent vehicle systems, covering electronic and electrical architecture, smart cockpits, autonomous driving, platform development, and real-time operating systems. Before Li Auto, he spent eight years at Visteon, where he was the founding designer of DriveCore, the company’s assisted driving platform. He led five mass production projects spanning chips, algorithms, operating systems, and hardware architecture. At Li Auto, Wang accelerated the rollout of assisted driving solutions, reaching mass production in just seven months. He left the company in early 2022 to become an investment partner at Vision Plus Capital, a role he continues to hold. In venture capital, however, such positions often function as advisory rather than full-time operational roles. The senior executive who joined him at the startup brings rare experience in end-to-end mass deployment, including work on vision-language-action (VLA) systems. Such expertise remains unusual among embodied intelligence ventures. 
Beyond assisted driving, embodied intelligence has emerged as a key application area for AI, attracting talent from the autonomous driving sector and drawing significant funding. In March, for example, Tars, another embodied intelligence startup, raised USD 120 million in an angel round just 50 days after its founding, the largest angel investment in the segment in China. Like Wang’s venture, Tars was founded by an executive with autonomous driving experience, Chen Yilun. Automobiles are often described as “robots without hands.” With autonomous driving and embodied intelligence closely aligned, carmakers such as Tesla and Xpeng see the field as their next growth frontier. Many automotive executives are also choosing it as their path to entrepreneurship. Before Wang, others, including Yu Yinan, a former vice president at Horizon Robotics, and Gao Jiyang, who previously led mass production R&D at Momenta, had already launched startups in embodied intelligence. Automakers are racing ahead in assisted driving technology. Huawei, for instance, has announced that its computing power investment in the field has reached 45 exaflops. Meanwhile, capital is pouring into embodied intelligence as well. Competition for top talent between the two segments is set to continue. KrASIA Connection features translated and adapted content that was originally published by 36Kr. This article was written by Fan Shuqi for 36Kr.
| Embodied Artificial Intelligence is the Next Frontier of Human-Centered Innovation … | https://bradenkelley.com/2025/12/embodi… | 1 | Jan 04, 2026 16:00 | active | |
Embodied Artificial Intelligence is the Next Frontier of Human-Centered Innovation | Human-Centered Change and Innovation
Description: Braden Kelley is a popular innovation keynote speaker creating workshops, masterclasses, webinars, tools, and training for organizations on innovation, design thinking and change management.
Content:
LAST UPDATED: December 8, 2025 at 4:56 PM GUEST POST from Art Inteligencia For the last decade, Artificial Intelligence (AI) has lived primarily on our screens and in the cloud — a brain without a body. While large language models (LLMs) and predictive algorithms have revolutionized data analysis, they have done little to change the physical experience of work, commerce, and daily life. This is the innovation chasm we must now bridge. The next great technological leap is Embodied Artificial Intelligence (EAI): the convergence of advanced robotics (the body) and complex, generalized AI (the brain). EAI systems are designed not just to process information, but to operate autonomously and intelligently within our physical world. This is a profound shift for Human-Centered Innovation, because EAI promises to eliminate the drudgery, danger, and limitations of physical labor, allowing humans to focus exclusively on tasks that require judgment, creativity, and empathy. The strategic deployment of EAI requires a shift in mindset: organizations must view these agents not as mechanical replacements, but as co-creators that augment and elevate the human experience. The most successful businesses will be those that unlearn the idea of human vs. machine and embrace the model of Human-Embodied AI Symbiosis. EAI accelerates change by enabling three crucial shifts in how we organize work and society: Traditional automation replaces repetitive tasks. EAI offers intelligent augmentation. Because EAI agents learn and adapt in real-time within dynamic environments (like a factory floor or a hospital), they can handle unforeseen situations that script-based robots cannot. This means the human partner moves from supervising a simple process to managing the exceptions and optimizations of a sophisticated one. The human job becomes about maximizing the intelligence of the system, not the efficiency of the body. 
Many essential human jobs are physically demanding, dangerous, or profoundly repetitive. EAI offers a path to remove humans from these undignified roles — the loading and unloading of heavy boxes, inspection of hazardous infrastructure, or the constant repetition of simple assembly tasks. This frees human capital for high-value interaction, fostering a new organizational focus on the dignity of work. Organizations committed to Human-Centered Innovation must prioritize the use of EAI to eliminate physical risk and strain. For decades, digital transformation has been the focus. EAI catalyzes the necessary physical transformation. It closes the loop between software and reality. An inventory algorithm that predicts demand can now direct a bipedal robot to immediately retrieve and prepare the required product from a highly chaotic warehouse shelf. This real-time, physical execution based on abstract computation is the true meaning of operational innovation. A global energy corporation (“PowerLine”) faced immense risk and cost in maintaining high-voltage power lines, oil pipelines, and sub-sea infrastructure. These tasks required sending human crews into dangerous, often remote, or confined spaces for time-consuming, repetitive visual inspections. PowerLine deployed a fleet of autonomous, multi-limbed EAI agents equipped with advanced sensing and thermal imaging capabilities. These robots were trained not just on pre-programmed routes, but on the accumulated, historical data of human inspectors, learning to spot subtle signs of material stress and structural failure — a skill previously reserved for highly experienced humans. The use of EAI led to a 70% reduction in inspection time and, critically, a near-zero rate of human exposure to high-risk environments. This strategic pivot proved that EAI’s greatest value is not economic replacement, but human safety and strategic focus. 
The EAI provided a foundational layer of reliable, granular data, enabling human judgment to be applied only where it mattered most. A national assisted living provider (“ElderCare”) struggled with caregiver burnout and increasing costs, while many residents suffered from emotional isolation due to limited staff availability. The challenge was profoundly human-centered: how to provide dignity and aid without limitless human resources. ElderCare piloted the use of adaptive, humanoid EAI companions in low-acuity environments. These agents were programmed to handle simple, repetitive physical tasks (retrieving dropped items, fetching water, reminding patients about medication) and, critically, were trained on empathetic conversation models. The pilot resulted in a 30% reduction in nurse burnout and, most importantly, a measurable increase in resident satisfaction and self-reported emotional well-being. The EAI was deployed not to replace the human touch, but to protect and maximize its quality by taking on the physical burden of routine care. The innovation successfully focused human empathy where it had the greatest impact. The race to commercialize EAI is accelerating, driven by the realization that AI needs a body to unlock its full economic potential. Organizations should be keenly aware of the leaders in this ecosystem. Companies like Boston Dynamics, known for advanced mobility and dexterity, are pioneering the physical platforms. Startups such as Sanctuary AI and Figure AI are focused on creating general-purpose humanoid robots capable of performing diverse tasks in unstructured environments, integrating advanced large language and vision models into physical forms. Simultaneously, major players like Tesla with its Optimus project and research divisions within Google DeepMind are laying the foundational AI models necessary for EAI agents to learn and adapt autonomously. 
The most promising developments are happening at the intersection of sophisticated hardware (the actuators and sensors) and generalized, real-time control software (the brain). Embodied AI is not just another technology trend; it is the catalyst for a radical change in the operating model of human civilization. Leaders must stop viewing EAI deployment as a simple capital expenditure and start treating it as a Human-Centered Innovation project. Your strategy should be defined by the question: How can EAI liberate my best people to do their best, most human work? Embrace the complexity, manage the change, and utilize the EAI revolution to drive unprecedented levels of dignity, safety, and innovation. “The future of work is not AI replacing humans; it is EAI eliminating the tasks that prevent humans from being fully human.” Traditional industrial robots are fixed, single-purpose machines programmed to perform highly repetitive tasks in controlled environments. Embodied AI agents are mobile, often bipedal or multi-limbed, and are powered by generalized AI models, allowing them to learn, adapt, and perform complex, varied tasks in unstructured, human environments. The opportunity is the elimination of the “3 Ds” of labor: Dangerous, Dull, and Dirty. By transferring these physical burdens to EAI agents, organizations can reallocate human workers to roles requiring social intelligence, complex problem-solving, emotional judgment, and creative innovation, thereby increasing the dignity and strategic value of the human workforce. Symbiosis refers to the collaborative operating model where EAI agents manage the physical execution and data collection of routine, complex tasks, while human professionals provide oversight, set strategic goals, manage exceptions, and interpret the resulting data. The systems work together to achieve an outcome that neither could achieve efficiently alone. 
Your first step toward embracing Embodied AI: Identify the single most physically demanding or dangerous task in your organization that is currently performed by a human. Begin a Human-Centered Design project to fully map the procedural and emotional friction points of that task, then use those insights to define the minimum viable product (MVP) requirements for an EAI agent that can eliminate that task entirely. UPDATE – Here is an infographic of the key points of this article that you can download: Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development. Image credit: 1 of 1,000+ quote slides for your meetings & presentations at http://misterinnovation.com Art Inteligencia is the lead futurist at Inteligencia Ltd. He is passionate about content creation and thinks about it as more science than art. Content Authenticity Statement: If it wasn't clear, any articles under Art's byline have been written by OpenAI Playground or Gemini using Braden Kelley and public content as inspiration.
| Artificial Embodied Intelligence and Machine Fidelity | https://medium.com/@michaelxfutol/artif… | 0 | Jan 04, 2026 16:00 | active | |
Artificial Embodied Intelligence and Machine Fidelity
URL: https://medium.com/@michaelxfutol/artificial-embodied-intelligence-and-machine-fidelity-f0285ab2ed1a
Description: Artificial Embodied Intelligence and Machine Fidelity Artificial intelligence began as a dream of replication — of thought without flesh, logic without breath...
Content:
| NVIDIA Advances Robotics Development at CoRL 2025 with New Models, … | https://www.storagereview.com/news/nvid… | 1 | Jan 04, 2026 08:00 | active | |
NVIDIA Advances Robotics Development at CoRL 2025 with New Models, Systems, and Simulation Tools - StorageReview.com
Description: NVIDIA addresses challenges in robotic manipulation and announces advanced robotic learning tools at the Conference on Robotic Learning.
Content:
NVIDIA is using the stage at the Conference on Robot Learning (CoRL) in Korea to showcase its most ambitious advances in robotic learning to date. The company is highlighting a suite of open models, powerful new computer platforms, and simulation engines that aim to close the gap between robotic research and real-world deployment. At the center of NVIDIA’s announcements are mobility and manipulation challenges, two of the most complex hurdles in robotics. By fusing AI-driven reasoning, advanced physics simulation, and domain-specific compute platforms, NVIDIA aims to accelerate the entire lifecycle of robotic development, from early R&D and simulation to training and, ultimately, deployment in physical environments. NVIDIA introduced DreamGen and Nerd, two open foundation models designed specifically for robotic learning. By releasing these models openly, NVIDIA is reinforcing its commitment to collaborative robotics research, lowering the barrier to entry for labs and startups, and ensuring reproducibility across the academic and industrial landscapes. These models can plug directly into Isaac robotics frameworks, streamlining experimentation and deployment. NVIDIA also announced three new computing systems optimized for each stage of robotics development:
- Omniverse with Cosmos on RTX Pro (simulation)
- NVIDIA DGX (training)
- NVIDIA Jetson AGX Thor (deployment)
Newton Physics Engine, first announced at GTC 2025, was developed in collaboration with Disney Research and Google DeepMind. NVIDIA also showcased the continued evolution of its Groot model family for robotic reasoning, advancing from Groot N1.5 (announced in May) to Groot N1.6. These models are tuned for “human-like” planning, reasoning, and task decomposition, enabling robots to break down complex tasks into executable steps, much like a human operator would. 
NVIDIA highlighted how its new Blackwell GPU architecture is enabling real-time reasoning in robotics. NVIDIA’s robotics push is significant not only for the breadth of tools being introduced but also for how tightly integrated the ecosystem has become. As mobility and manipulation remain challenging, NVIDIA’s unified approach, combining open research models, physics-accurate simulation, and deployment-ready compute, positions it as a central force in accelerating robotic autonomy.
| Why NVIDIA Corporation (NVDA) Is Among the Cheap Robotics Stocks … | https://www.insidermonkey.com/blog/why-… | 1 | Jan 04, 2026 08:00 | active | |
Why NVIDIA Corporation (NVDA) Is Among the Cheap Robotics Stocks to Invest In Now? - Insider Monkey
Description: We recently compiled a list of the 10 Cheap Robotics Stocks to Invest In Now.
Content:
We recently compiled a list of the 10 Cheap Robotics Stocks to Invest In Now. In this article, we are going to take a look at where NVIDIA Corporation (NASDAQ:NVDA) stands against other robotics stocks to buy now. The robotics industry, which has grown modestly over the past few years, has suddenly picked up pace after the emergence of AI. According to Goldman Sachs’ Head of China Industrial Technology research, the total addressable market for humanoid robots is expected to reach $38 billion by 2035, a sixfold upgrade from the $6 billion projected in 2023. We recently covered 8 Most Promising Robotics Stocks According to Hedge Funds. According to the International Federation of Robotics (IFR), professional service robots experienced a 30% increase in sales in 2023. IFR’s statistics department noted that more than 205,000 robotics units were sold in 2023, with Asia-Pacific accounting for 80% of global robotics sales. Transportation and logistics service robots were in huge demand and accounted for 113,000 units built in 2023, up by 35% compared to 2022. Medical robots are also in high demand, and the number surged by 36% to almost 6,100 units in 2023. Demand for surgery and diagnostics robots registered growth of 14% and 25% year-over-year, respectively. The United States is home to 199 companies engaged in robotics, with 66% producing professional service robots, 27% consumer service robots, and 12% medical robots. China ranks second after the US with 107 service and medical robot manufacturers, and Germany ranks third with 83 companies. According to IFR, US manufacturing companies have invested significantly in automation, and industrial robot installations surged by 12% to 44,303 units in 2023. 
Meanwhile, robotics installations in the electrical and electronics industry increased to 5,120 units in 2023, up by 37% year-over-year. Global X Robotics & Artificial Intelligence ETF (NASDAQ:BOTZ) and Robo Global Robotics and Automation Index ETF (NYSE:ROBO) have each returned more than 11% over the last year. Given the rising demand for humanoids and automation systems, robotics stocks present a promising area for investors to explore. You can also visit and see the 12 Best Penny Stocks to Invest in According to the Media. A technician operating a robotic arm on a production line of semiconductor chips. To determine the list of cheap robotics stocks to invest in, we shortlisted the companies mainly involved in robotics with an analyst upside of more than 25%. Cheap, in the context of this article, means stocks that Wall Street analysts believe are undervalued and will skyrocket to higher share prices. We have ranked the cheap robotics stocks to invest in based on their popularity among hedge funds, as of Q3 2024, in ascending order. Why do we care about what hedge funds do? The reason is simple: our research has shown that we can outperform the market by imitating the top stock picks of the best hedge funds. Our quarterly newsletter’s strategy selects 14 small-cap and large-cap stocks every quarter and has returned 275% since May 2014, beating its benchmark by 150 percentage points (see more details here). Analyst Upside (as of January 11): 28% No. of Hedge Fund Holders: 193 NVIDIA, whose primary business revolves around designing and manufacturing GPUs, is disrupting the broader market due to its AI technology. Similarly, NVIDIA is playing a big role in autonomous machines and AI-enabled robots through its technology. The demand for AI-enabled robots is at record levels and continues to grow. NVIDIA Corporation’s (NASDAQ:NVDA) three-computer solution allows AI robots to learn and perform complex tasks with precision. 
Businesses are utilizing NVIDIA Robotics’ full-stack, accelerated cloud-to-edge systems and optimized AI models to train, operate, and optimize their robot systems and software. On January 6, NVIDIA Corporation (NASDAQ:NVDA) introduced its Isaac GR00T Blueprint, which will help developers generate exponentially large synthetic datasets to train their humanoids using imitation learning. Over the next two decades, the humanoid robot industry is anticipated to cross $38 billion, which creates a huge opportunity for NVIDIA to exploit the growing market. Overall, NVDA ranks 1st on our list of Cheap Robotics Stocks to Invest in Now. While we acknowledge the potential of NVDA to grow, our conviction lies in the belief that AI stocks hold greater promise for delivering higher returns and doing so within a shorter time frame. If you are looking for an AI stock that is more promising than NVDA but that trades at less than 5 times its earnings, check out our report about the cheapest AI stock. Disclosure: None. This article is originally published at Insider Monkey.
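The screening procedure the article describes, shortlisting names with analyst upside above 25% and then ranking them by hedge fund popularity in ascending order, can be sketched as a small filter-and-sort. The tickers and figures below (other than NVDA's stated 193 holders) are invented placeholders, not data from the article.

```python
# Hypothetical sketch of the screening method described above:
# keep stocks with analyst upside above 25%, then rank them by hedge
# fund popularity ascending, so the most widely held name lands last
# (i.e., ranks #1 in the article's countdown). Non-NVDA rows are
# invented example data.

stocks = [
    {"ticker": "AAAA", "upside": 0.31, "hedge_funds": 52},
    {"ticker": "BBBB", "upside": 0.12, "hedge_funds": 88},  # fails the upside screen
    {"ticker": "NVDA", "upside": 0.28, "hedge_funds": 193},
    {"ticker": "CCCC", "upside": 0.40, "hedge_funds": 17},
]

# Step 1: filter on analyst upside > 25%.
screened = [s for s in stocks if s["upside"] > 0.25]

# Step 2: sort ascending by hedge fund holder count.
ranked = sorted(screened, key=lambda s: s["hedge_funds"])

tickers = [s["ticker"] for s in ranked]
```

With this ordering, the final element of `tickers` is the list's top pick, matching how NVDA, with the most hedge fund holders, ranks 1st.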
| Robot Or Human? Video Of Waitress Serving Food At Chinese … | https://www.ndtv.com/offbeat/robot-or-h… | 0 | Jan 03, 2026 00:01 | active | |
Robot Or Human? Video Of Waitress Serving Food At Chinese Restaurant Goes Viral
Description: In the video, the waitress pretends to be a robot and serves food to customers with robotic-like movements.
Content:
| Evaluation of Human-Robot Interfaces based on 2D/3D Visual and Haptic … | https://hal.science/hal-05313710v1 | 1 | Jan 03, 2026 00:01 | active | |
Evaluation of Human-Robot Interfaces based on 2D/3D Visual and Haptic Feedback for Aerial Manipulation - Archive ouverte HALURL: https://hal.science/hal-05313710v1 Description: Most telemanipulation systems for aerial robots provide the operator with only 2D screen visual information. The lack of richer information about the robot's status and environment can limit human awareness and, in turn, task performance. While the pilot's experience can often compensate for this reduced flow of information, providing richer feedback is expected to reduce the cognitive workload and offer a more intuitive experience overall. This work aims to understand the significance of providing additional pieces of information during aerial telemanipulation, namely (i) 3D immersive visual feedback about the robot's surroundings through mixed reality (MR) and (ii) 3D haptic feedback about the robot interaction with the environment. To do so, we developed a human-robot interface able to provide this information. First, we demonstrate its potential in a real-world manipulation task requiring sub-centimeter-level accuracy. Then, we evaluate the individual effect of MR vision and haptic feedback on both dexterity and workload through a human subjects study involving a virtual block transportation task. Results show that both 3D MR vision and haptic feedback improve the operator's dexterity in the considered teleoperated aerial interaction tasks. Nevertheless, pilot experience remains the most significant factor. Content:
https://hal.science/hal-05313710 Submitted: Tuesday, 14 October 2025, 11:02:57. Last modified: Monday, 27 October 2025, 11:20:01.
Images (1):
|
|||||
| Chinese robotics firm unveils highly realistic robot head designed to … | https://www.dimsumdaily.hk/chinese-robo… | 0 | Jan 03, 2026 00:01 | active | |
Chinese robotics firm unveils highly realistic robot head designed to revolutionise human-machine interactionDescription: A pioneering Chinese robotics company has introduced a remarkably lifelike robotic head, engineered to blink, nod, and gaze around with convincing human-like ma... Content: |
|||||
| Robots are everywhere – improving how they communicate with people … | https://theconversation.com/robots-are-… | 1 | Jan 03, 2026 00:01 | active | |
Robots are everywhere – improving how they communicate with people could advance human-robot collaborationDescription: Robots are already carrying out tasks in clinics, classrooms and warehouses. Designing robots that are more receptive to human needs could help make them more useful in many contexts. Content:
Ramana Vinjamuri, Assistant Professor of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, receives funding from the National Science Foundation. https://doi.org/10.64628/AAI.pra6njeup Robots are machines that can sense the environment and use that information to perform an action. You can find them nearly everywhere in industrialized societies today. There are household robots that vacuum floors and warehouse robots that pack and ship goods. Lab robots test hundreds of clinical samples a day. Education robots support teachers by acting as one-on-one tutors, assistants and discussion facilitators. And medical robots such as prosthetic limbs can enable someone to grasp and pick up objects with their thoughts. Figuring out how humans and robots can collaborate to carry out tasks together effectively is a rapidly growing area of interest to the scientists and engineers who design robots, as well as to the people who will use them. For successful collaboration between humans and robots, communication is key. Robots were originally designed to undertake repetitive and mundane tasks and to operate exclusively in robot-only zones such as factories. They have since advanced to work collaboratively with people, using new ways to communicate with each other. Cooperative control is one way to transmit information and messages between a robot and a person. It combines human abilities and decision making with robot speed, accuracy and strength to accomplish a task. For example, robots in the agriculture industry can help farmers monitor and harvest crops. A human can control a semi-autonomous vineyard sprayer through a user interface, as opposed to manually spraying their crops or broadly spraying the entire field and risking pesticide overuse. Robots can also support patients in physical therapy.
Patients who had a stroke or spinal cord injury can use robots to practice hand grasping and assisted walking during rehabilitation. Another form of communication, emotional intelligence perception, involves developing robots that adapt their behaviors based on social interactions with humans. In this approach, the robot detects a person’s emotions when collaborating on a task, assesses their satisfaction, then modifies and improves its execution based on this feedback. For example, if the robot detects that a physical therapy patient is dissatisfied with a specific rehabilitation activity, it could direct the patient to an alternate activity. Facial expression and body gesture recognition ability are important design considerations for this approach. Recent advances in machine learning can help robots decipher emotional body language and better interact with and perceive humans. Questions like how to make robotic limbs feel more natural and capable of more complex functions like typing and playing musical instruments have yet to be answered. I am an electrical engineer who studies how the brain controls and communicates with other parts of the body, and my lab investigates in particular how the brain and hand coordinate signals between each other. Our goal is to design technologies like prosthetic and wearable robotic exoskeleton devices that could help improve function for individuals with stroke, spinal cord and traumatic brain injuries. One approach is through brain-computer interfaces, which use brain signals to communicate between robots and humans. By accessing an individual’s brain signals and providing targeted feedback, this technology can potentially improve recovery time in stroke rehabilitation. Brain-computer interfaces may also help restore some communication abilities and physical manipulation of the environment for patients with motor neuron disorders. 
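The emotional-intelligence perception approach described above is, at its core, a feedback loop: detect the person's emotion, score their satisfaction, and switch behaviors when the score stays low. The sketch below illustrates that loop in minimal form; all names (`assess_satisfaction`, `run_session`, the emotion labels and scores) are hypothetical, not from any real robot framework.

```python
# Minimal sketch of an emotion-aware task loop: the robot maps a detected
# emotion to a satisfaction score and switches rehabilitation activities
# when satisfaction drops below a threshold. All names are illustrative.

SATISFACTION_THRESHOLD = 0.4

def assess_satisfaction(emotion: str) -> float:
    """Map a detected emotion label to a satisfaction score in [0, 1]."""
    scores = {"happy": 0.9, "neutral": 0.6, "frustrated": 0.2, "bored": 0.3}
    return scores.get(emotion, 0.5)

def next_activity(current: str, alternatives: list) -> str:
    """Pick a different activity when the patient seems dissatisfied."""
    remaining = [a for a in alternatives if a != current]
    return remaining[0] if remaining else current

def run_session(observed_emotions, activities):
    """Step through detected emotions, adapting the activity as needed."""
    activity = activities[0]
    log = []
    for emotion in observed_emotions:
        if assess_satisfaction(emotion) < SATISFACTION_THRESHOLD:
            activity = next_activity(activity, activities)
        log.append((emotion, activity))
    return log

log = run_session(["neutral", "frustrated", "happy"],
                  ["hand grasping", "assisted walking"])
# After the "frustrated" reading, the session switches activities.
```

A real system would replace the lookup table with a facial-expression or body-gesture classifier, but the control structure (perceive, assess, adapt) is the same.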
Effective integration of robots into human life requires balancing responsibility between people and robots, and designating clear roles for both in different environments. As robots are increasingly working hand in hand with people, the ethical questions and challenges they pose cannot be ignored. Concerns surrounding privacy, bias and discrimination, security risks and robot morality need to be seriously investigated in order to create a more comfortable, safer and trustworthy world with robots for everyone. Scientists and engineers studying the “dark side” of human-robot interaction are developing guidelines to identify and prevent negative outcomes. Human-robot interaction has the potential to affect every aspect of daily life. It is the collective responsibility of both the designers and the users to create a human-robot ecosystem that is safe and satisfactory for all. Copyright © 2010–2026, The Conversation Media Group Ltd
Images (1):
|
|||||
| China’s ‘slim-waisted’ humanoid robot debuts with human-like skills | https://interestingengineering.com/inno… | 1 | Jan 03, 2026 00:01 | active | |
China’s ‘slim-waisted’ humanoid robot debuts with human-like skillsURL: https://interestingengineering.com/innovation/chinas-slim-waisted-humanoid-robot-debuts Description: Robotera’s new Q5 humanoid robot combines advanced dexterity, mobility, and AI interaction for real-world service across key sectors. Content:
Q5 uses fused LiDAR and stereo vision to autonomously navigate complex spaces with smooth movement and minimal human oversight. Chinese robotics firm Robotera has unveiled a new humanoid service robot, model Q5, showcasing advanced dexterity, mobility, and interactive intelligence capabilities. With 44 degrees of freedom and a human-like build, the robot excels in environments requiring delicate manipulation, smooth navigation, and lifelike engagement. Q5 also supports full-body teleoperation through gloves and VR systems, and interacts using natural, AI-powered dialogue.
According to the Beijing-based firm, the new humanoid is specially engineered for practical deployment in healthcare, retail, tourism, and education. In a video earlier in June, Robotera’s STAR1 humanoid was shown skillfully using chopsticks and performing culinary tasks like cooking dumplings, steaming buns, and pouring wine. The latest humanoid service robot, model Q5, aims to redefine human-robot interaction through its fusion of engineering precision and embodied artificial intelligence. The Q5, which stands 1650 mm tall and weighs 154 pounds (70 kilograms), features 44 degrees of freedom, including the highly dexterous 11-DoF XHAND Lite hand. This robotic hand replicates the human hand in size and dexterity, providing precision at the fingertip level and a robust payload capability of 22 pounds (10 kilograms) per arm. The ability to respond up to 10 times per second facilitates smooth, responsive handling, aided by backdrivable, force-controlled joints that ensure safe and compliant engagement. Q5’s 7-DoF robotic arms have a reach of 1380 mm and can extend over 2 meters to make contact with objects on the ground or above shoulder height. Its compact footprint, only 582 mm by 519 mm, allows for high maneuverability in cramped indoor spaces. With the aid of fused LiDAR and stereo vision systems, Q5 can navigate complex environments smoothly and autonomously across entire areas with minimal oversight. The robot features a hyper-anthropomorphic design, characterized by a slim waist and an expressive humanoid face. Its AI-driven voice system enables fast, natural dialogue with high recognition accuracy and context-aware responses.
Through remote operation tools such as VR headsets and sensor gloves, Q5 can also be teleoperated with low latency for precise task execution. Q5, driven by the EraAI platform, facilitates a complete AI lifecycle, from gathering teleoperation data to model training, simulation, deployment, and closed-loop learning. With a runtime of over 4 hours on a 60V supply, Q5 is set to transform customer service, tourism, healthcare, and other fields by providing intelligent, mobile, and humanlike robotic assistance at scale. Robotera, a rising player in humanoid robotics from China, is gaining attention for its rapid innovation and advanced technology. Founded in August 2023 with support from China’s Tsinghua University, the company has introduced several high-performance humanoid robots, including its flagship model, STAR1. STAR1 possesses 55 degrees of freedom and a robust joint torque reaching 400 Nm, enabling rapid and accurate motions at speeds as high as 25 radians per second. During a recent exhibition, STAR1 was observed employing chopsticks to manage fragile items such as dumplings with extraordinary skill. It also carried out functions like steaming buns, pouring wine with precision, and clinking glasses in a toast, underscoring its potential in aiding traditional Chinese cooking. At the heart of these functionalities is Robotera’s XHAND1 robotic hand, designed for humanoid uses as well as esports. This five-fingered hand features 12 degrees of freedom and advanced tactile sensors that can detect surface textures and temperature. The thumb and index finger each possess three degrees of freedom, while the other fingers have two, allowing for realistic movements. XHAND1 can perform 10 clicks a second, a level of responsiveness that matches that of professional gamers. As shown in a recent demonstration, the hand works with the Apple Vision Pro, providing accurate input and instantaneous virtual engagement for gaming and robotics alike.
Jijo is an automotive and business journalist based in India. Armed with a BA in History (Honors) from St. Stephen's College, Delhi University, and a PG diploma in Journalism from the Indian Institute of Mass Communication, Delhi, he has worked for news agencies, national newspapers, and automotive magazines. In his spare time, he likes to go off-roading, engage in political discourse, travel, and teach languages.
Images (1):
|
|||||
| Gizmodo | The Future Is Here | https://www.gizmodo.com.au/2021/12/amec… | 1 | Jan 03, 2026 00:01 | active | |
Gizmodo | The Future Is HereURL: https://www.gizmodo.com.au/2021/12/ameca-gets-angry/ Description: Dive into cutting-edge tech, reviews and the latest trends with the expert team at Gizmodo. Your ultimate source for all things tech. Content:
Images (1):
|
|||||
| Meet Ameca, the remarkable (and not at all creepy) human-like … | https://globalnews.ca/news/8422932/amec… | 1 | Jan 03, 2026 00:01 | active | |
Meet Ameca, the remarkable (and not at all creepy) human-like robot - National | Globalnews.caURL: https://globalnews.ca/news/8422932/ameca-robot-android-engineered-arts-video/ Description: "The reason for making a robot that looks like a person is to interact with people," said Engineered Arts founder Will Jackson about Ameca's design. Content:
Robot engineers at Cornwall-based Engineered Arts have unveiled a remarkably human-like android named “Ameca.” A short promo video released by the company shows Ameca seemingly “waking up,” looking at its hands and then towards the camera. The 40-second clip has racked up well over 10 million views online since Engineered Arts released it earlier this week. Ameca has grey-coloured skin, with deliberately gender- and race-neutral characteristics. The company describes it as the “world’s most advanced human shaped robot representing the forefront of human-robotics technology.” “The reason for making a robot that looks like a person is to interact with people. The human face is a very high bandwidth communication tool, and that’s why we built these expressive robots,” Engineered Arts founder Will Jackson told Reuters. He added: “We’ve tried to be gender-neutral, race neutral. We’re just trying to make something that has the basic human characteristics — expression — without putting anything else on top of that. So, hence the grey faces. It’s really been 15 years in gestation.” Engineered Arts designs and manufactures humanoid entertainment robots for science centres, theme parks and businesses. Ameca is now available for purchase or rental, though Jackson believes it is the perfect test-platform for artificial intelligence (AI). “A lot of people working on AI interaction, all kinds of new apps that are using vision systems, segmentation, face recognition, speech recognition, voice synthesis. But what you don’t see is the hardware to run all that software on. So what we’re trying to provide is a platform for AI,” Jackson said. “And a lot of communication is not verbal,” he continued.
“So it’s not all about speech, it’s about expression, it’s about gestures: a simple move like that can mean a thousand words. The robot doesn’t have to say anything. So, the last thing we wanted to make was a robot that says, ‘please repeat the question.’ So it’s about trying to do natural human interaction. So imagine: there’s been a lot of talk about metaverses recently: imagine taking your metaverse character out into the real world. You need some embodiment for that. So, you wanted to take your virtual self to a meeting in New York, Hawaii, Hong Kong. Send a robot.” He added that a robot like Ameca costs more than US$133,000 (CDN$170,000) to buy.
Images (1):
|
|||||
| PJSC Sberbank : Sber's telemarketing robot makes 99.5% of phone … | https://www.marketscreener.com/quote/st… | 0 | Jan 03, 2026 00:01 | active | |
PJSC Sberbank : Sber's telemarketing robot makes 99.5% of phone calls without human involvementDescription: The productivity of Sber's telemarketing robot has reached 2 million calls per day. Two out of three calls to clients next year will be made by the robot, accor... Content: |
|||||
| Elbit : Israel Innovation Authority Approves the Establishment of a … | https://www.marketscreener.com/quote/st… | 0 | Jan 03, 2026 00:01 | active | |
Elbit : Israel Innovation Authority Approves the Establishment of a Consortium Led by Elbit Systems to Develop Human-Robot Interaction TechnologiesDescription: Israel Innovation Authority has recently approved the establishment of a new innovation consortium, led by Elbit Systems C4I and Cyber, for Human-Robot Interact... Content: |
|||||
| Robot shocks with how human-like it is (VIDEO) — RT … | https://www.rt.com/news/542122-humanoid… | 1 | Jan 03, 2026 00:01 | active | |
Robot shocks with how human-like it is (VIDEO) — RT World NewsURL: https://www.rt.com/news/542122-humanoid-robot-ameca-expressions/ Description: A new humanoid robot has made a huge step towards crossing the ‘uncanny valley,’ with the machine filmed displaying a whole range of almost realistic human facial expressions. Content:
A new humanoid robot has made a huge step towards crossing the ‘uncanny valley,’ with the machine filmed displaying a whole range of almost realistic human facial expressions. ‘Ameca’ has been described by its British developers at Engineered Arts as “the perfect humanoid robot platform for human-robot interaction.” The footage, which captured the strikingly real robot in action, proves that the company’s bold statement isn’t much of a stretch. The grey-colored machine, which vaguely resembles the characters from the 2004 movie ‘I, Robot,’ offers a whole range of human emotions, coupled with realistic eye movement. In what appears to be a pre-programmed demonstration, Ameca first wakes up, then checks out her arms with curiosity, before making a surprised face after ‘noticing’ that ‘she’ is being filmed. The emotional robot is already available for purchase and rental. Right now, it’s only a stationary model, but the developers plan to further upgrade it, promising that “one day Ameca will walk.” © Autonomous Nonprofit Organization “TV-Novosti”, 2005–2026. All rights reserved.
Images (1):
|
|||||
| Using biopotential and bio-impedance for intuitive human–robot interaction | Nature … | https://www.nature.com/articles/s44287-… | 1 | Jan 03, 2026 00:01 | active | |
Using biopotential and bio-impedance for intuitive human–robot interaction | Nature Reviews Electrical EngineeringDescription: The rising interest in robotics and virtual reality has driven a growing demand for intuitive interfaces that enable seamless human–robot interaction (HRI). Bio-signal-based solutions, using biopotential and bio-impedance, offer a promising approach for estimating human motion intention thanks to their ability to capture physiological neuromuscular activity in real time. This Review discusses the potential of biopotential and bio-impedance sensing systems for advancing HRI focusing on the role of integrated circuits in enabling practical applications. Biopotential and bio-impedance can be used to monitor human physiological states and motion intention, making them highly suitable for enhancing motion recognition in HRI. However, as stand-alone modalities, they face limitations related to inter-subject variability and susceptibility to noise, highlighting the need for hybrid sensing techniques. The performance of these sensing modalities is closely tied to the development of integrated circuits optimized for low-noise, low-power operation and accurate signal acquisition in a dynamic environment. Understanding the complementary strengths and limitations of biopotential and bio-impedance signals, along with the advances in integrated circuit technologies for their acquisition, highlights the potential of hybrid, multimodal systems to enable robust, intuitive and scalable HRI. The growing interest in robotics in daily life has increased the demand for intuitive interfaces for human–robot interaction (HRI). This Review examines the potential, challenges and innovations of bio-signal analysis to enhance HRI and facilitate broader applications. Content:
Nature Reviews Electrical Engineering, volume 2, pages 555–571 (2025).
Electromyography (EMG) effectively captures human motion intention, and advances in deep learning are enhancing its feature extraction functionality, improving the control of wearable and collaborative robots. Electrical impedance tomography (EIT) is a non-invasive bio-impedance imaging technique that enables real-time diagnosis of muscle volume changes and condition. Integrating the bio-signals involved in the different stages of human motion improves precision in recognizing motion intentions. Biopotential acquisition integrated circuits that can record different bio-signals can markedly reduce the form factor of the sensing module and the power consumption while focusing on enhancing noise performance. Bio-impedance integrated circuits comprise transmitter and receiver components, facilitating electrical excitation and demodulation processes, specifically customized to meet power, accuracy and form factor requirements.
Article Google Scholar Zhao, Q. et al. Highly stretchable and customizable microneedle electrode arrays for intramuscular electromyography. Sci. Adv. 10, eadn7202 (2024). Article Google Scholar Gunnarsson, E., Rödby, K. & Seoane, F. Seamlessly integrated textile electrodes and conductive routing in a garment for electrostimulation: design, manufacturing and evaluation. Sci. Rep. 13, 17408 (2023). Article Google Scholar Song, T. et al. Review of sEMG for robot control: techniques and applications. Appl. Sci. 13, 9546 (2023). Article Google Scholar Cisnal, A., Pérez-Turiel, J., Fraile, J.-C., Sierra, D. & de la Fuente, E. Robhand: a hand exoskeleton with real-time EMG-driven embedded control. Quantifying hand gesture recognition delays for bilateral rehabilitation. IEEE Access. 9, 137809–137823 (2021). Article Google Scholar Lin, M., Huang, J., Fu, J., Sun, Y. & Fang, Q. A VR-based motor imagery training system with EMG-based real-time feedback for post-stroke rehabilitation. IEEE Trans. Neural Syst. Rehabil. Eng. 31, 1–10 (2022). Article Google Scholar Bi, L. Feleke, A. G. & Guan, C. A review on EMG-based motor intention prediction of continuous human upper limb motion for human–robot collaboration. Biomed. Signal. Process. Control. 51, 113–127 (2019). Article Google Scholar Meattini, R. et al. An sEMG-based human–robot interface for robotic hands using machine learning and synergies. IEEE Trans. Comp., Pack. Manuf. Technol. 8, 1149–1158 (2018). Google Scholar Varghese, R. J. et al. Design, fabrication and evaluation of a stretchable high-density electromyography array. Sensors 24, 1810 (2024). Article Google Scholar Zhang, L. et al. Fully organic compliant dry electrodes self-adhesive to skin for long-term motion-robust epidermal biopotential monitoring. Nat. Commun. 11, 4683 (2020). Article Google Scholar Lo, L.-W. et al. Stretchable sponge electrodes for long-term and motion-artifact-tolerant recording of high-quality electrophysiologic signals. 
ACS Nano 16, 11792–11801 (2022). Article Google Scholar Ergeneci, M., Gokcesu, K., Ertan, E. & Kosmas, P. An embedded, eight channel, noise canceling, wireless, wearable sEMG data acquisition system with adaptive muscle contraction detection. IEEE Trans. Biomed. Circuits Syst. 12, 68–79 (2018). Article Google Scholar Liang, Z. et al. A wireless, high-quality, soft and portable wrist-worn system for sEMG signal detection. Micromachines 14, 1085 (2023). Article Google Scholar Moin, A. et al. A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition. Nat. Electron. 4, 54–63 (2020). Article Google Scholar Sacco, I. C. N., Gomes, A. A., Otuzi, M. E., Pripas, D. & Onodera, A. N. A method for better positioning bipolar electrodes for lower limb EMG recordings during dynamic contractions. J. Neurosci. Methods 180, 133–137 (2009). Article Google Scholar Sanchez, B., Pacheck, A. & Rutkove, S. B. Guidelines to electrode positioning for human and animal electrical impedance myography research. Sci. Rep. 6, 32615 (2016). Article Google Scholar Wang, M., Khundrakpam, B. & Vaughan, T. Effects of electrode position targeting in noninvasive electromyography technologies for finger and hand movement prediction. J. Med. Biol. Eng. 43, 603–611 (2023). Article Google Scholar Feng, J., Chang, H., Jeong, H. & Kim, J. Design of a flexible high-density surface electromyography sensor. In 42nd Annual International Conferences of the IEEE Engineering in Medicine and Biology Society 4130–4133 (IEEE, 2020). Zhang, D. et al. Stretchable and durable HD-sEMG electrodes for accurate recognition of swallowing activities on complex epidermal surfaces. Microsyst. Nanoeng. 9, 115 (2023). Article Google Scholar Hu, X., Song, A., Wang, J., Zeng, H. & Wei, W. Finger movement recognition via high-density electromyography of intrinsic and extrinsic hand muscles. Sci. Data 9, 373 (2022). Article Google Scholar Gao, S., Wang, Y., Fang, C. & Xu, L. 
A smart terrain identification technique based on electromyography, ground reaction force, and machine learning for lower limb rehabilitation. Appl. Sci. 10, 2638 (2020). Article Google Scholar Yun, I., Jeung, J., Song, Y. & Chung, Y. Non-invasive quantitative muscle fatigue estimation based on correlation between sEMG signal and muscle mass. IEEE Access. 8, 191751–191757 (2020). Article Google Scholar Turgunov, A., Zohirov, K., Rustamov, S. & Muhtorov, B. Using different features of signal in EMG signal classification. In 2024 International Conference on Information Science and Communications Technologies 1–5 (IEEE, 2020). Khan, S. M., Khan, A. A. & Farooq, O. Selection of features and classifiers for EMG-EEG-based upper limb assistive devices—a review. IEEE Rev. Biomed. Eng. 13, 248–260 (2019). Article Google Scholar Holobar, A. & Farina, D. Noninvasive neural interfacing with wearable muscle sensors: combining convolutive blind source separation methods and deep learning techniques for neural decoding. IEEE Signal. Process. Mag. 38, 103–118 (2021). Article Google Scholar Xia, P., Hu, J. & Peng, Y. EMG‐based estimation of limb movement using deep learning with recurrent convolutional neural networks. Artif. Organs 42, E67–E77 (2018). Article Google Scholar Geng, W. et al. Gesture recognition by instantaneous surface EMG images. Sci. Rep. 6, 36571 (2016). Article Google Scholar Fajardo, J. M., Gomez, O. & Prieto, F. EMG hand gesture classification using handcrafted and deep features. Biomed. Signal. Process. Control. 63, 102210 (2021). Article Google Scholar Sejati, P. A. et al. Multinode electrical impedance tomography (mnEIT) throughout whole-body electrical muscle stimulation (wbEMS). IEEE Trans. Instrum. Meas. 72, 1–14 (2023). Article Google Scholar Kwon, H., Guasch, M., Nagy, J. A., Rutkove, S. B. & Sanchez, B. New electrical impedance methods for the in situ measurement of the complex permittivity of anisotropic skeletal muscle using multipolar needles. Sci. 
Rep. 9, 3145 (2019). Article Google Scholar Lee, H., Kwon, D., Cho, H., Park, I. & Kim, J. Soft nanocomposite based multi-point, multi-directional strain mapping sensor using anisotropic electrical impedance tomography. Sci. Rep. 7, 39837 (2017). Article Google Scholar Kwon, H., Malik, W. Q., Rutkove, S. B. & Sanchez, B. Separation of subcutaneous fat from muscle in surface electrical impedance myography measurements using model component analysis. IEEE Trans. Biomed. Eng. 66, 354–364 (2018). Article Google Scholar Zhang, Y., Xiao, R. & Harrison, C. Advancing hand gesture recognition with high resolution electrical impedance tomography. In Proc. 29th Annual Symp. User Interface Software and Technology 843–850 (ACM, 2016). Atitallah, B. B. et al. Hand sign recognition system based on EIT imaging and robust CNN classification. IEEE Sens. J. 22, 1729–1737 (2021). Article Google Scholar Jiang, D., Wu, Y. & Demosthenous, A. Hand gesture recognition using three-dimensional electrical impedance tomography. IEEE Trans. Circuits Syst. II: Express Briefs 67, 1554–1558 (2020). Google Scholar Wu, Y. et al. Towards a high-accuracy wearable hand gesture recognition system using EIT. In International Symposium on Circuits and Systems 1–4 (IEEE, 2018). Yang, L. et al. A wireless, low-power, and miniaturized EIT system for remote and long-term monitoring of lung ventilation in the isolation ward of ICU. IEEE Trans. Instrum. Meas. 70, 1–11 (2021). Article Google Scholar Zhang, Y. & Harrison, C. Tomo: wearable, low-cost electrical impedance tomography for hand gesture recognition. In Proc. 28th Annual ACM Symposum on User Interface Software and Technology 167–173 (ACM, 2015). Brazey, B., Haddab, Y., Zemiti, N., Mailly, F. & Nouet, P. An open-source and easily replicable hardware for electrical impedance tomography. HardwareX 11, e00278 (2022). Article Google Scholar Fernández, J. E., López, C. M. & Leyton, V. M. 
A low-cost, portable 32-channel EIT system with four rings based on AFE4300 for body composition analysis. HardwareX 16, e00494 (2023). Article Google Scholar Creegan, A. et al. A wearable open-source electrical impedance tomography device. HardwareX 18, e00521 (2024). Article Google Scholar Rintoul, J. & Borgne, M. L. Open EIT. GitHub https://github.com/openeit (2020). Wu, Y. et al. A high frame rate wearable EIT system using active electrode ASICs for lung respiration and heart rate monitoring. IEEE Trans. Circuits Syst. I: Regul. Pap. 65, 3810–3820 (2018). Google Scholar Wu, Y., Jiang, D., Bardill, A., Bayford, R. & Demosthenous, A. A 122 fps, 1 MHz bandwidth multi-frequency wearable EIT belt featuring novel active electrode architecture for neonatal thorax vital sign monitoring. IEEE Trans. Biomed. Circuits Syst. 13, 927–937 (2019). Article Google Scholar Kauppinen, P., Hyttinen, J. & Malmivuo, J. Sensitivity distribution simulations of impedance tomography electrode combinations. BEM NFSI Conf. Proc. 7, 344–347 (2005). Google Scholar Adler, A. & Boyle, A. Electrical impedance tomography: tissue properties to image measures. IEEE Trans. Biomed. Eng. 64, 2494–2504 (2017). Article Google Scholar Grychtol, B. et al. Thoracic EIT in 3D: experiences and recommendations. Physiol. Meas. 40, 74006 (2019). Article Google Scholar Vauhkonen, P. J., Vauhkonen, M., Savolainen, T. & Kaipio, J. P. Three-dimensional electrical impedance tomography based on the complete electrode model. IEEE Trans. Biomed. Eng. 46, 1150–1160 (1999). Article Google Scholar Xue, T. et al. Progress and prospects of multimodal fusion methods in physical human–robot interaction: a review. IEEE Sens. J. 20, 10355–10370 (2020). Article Google Scholar Ding, Z., Yang, C., Wang, Z., Yin, X. & Jiang, F. Online adaptive prediction of human motion intention based on sEMG. Sensors 21, 2882 (2021). Article Google Scholar Farina, D. Interpretation of the surface electromyogram in dynamic contractions. Exerc. 
Sport. Sci. Rev. 34, 121–127 (2006). Article Google Scholar Dick, T. J. M. et al. Consensus for experimental design in electromyography (CEDE) project: application of EMG to estimate muscle force. J. Electromyogr. Kinesiol. 79, 102910 (2024). Article Google Scholar Boyer, M., Bouyer, L., Roy, J.-S. & Campeau-Lecours, A. Reducing noise, artifacts and interference in single-channel EMG signals: a review. Sensors 23, 2927 (2023). Article Google Scholar Sun, J. et al. Application of surface electromyography in exercise fatigue: a review. Front. Syst. Neurosci. 16, 893275 (2022). Article Google Scholar Kusche, R. & Ryschka, M. Combining bioimpedance and EMG measurements for reliable muscle contraction detection. IEEE Sens. J. 19, 11687–11696 (2019). Article Google Scholar Nahrstaedt, H., Schultheiss, C., Schauer, T. & Seidl, R. Bioimpedance- and EMG-triggered FES for improved protection of the airway during swallowing. Biomed. Eng./Biomed. Tech. 58, 000010151520134025 (2013). Google Scholar Briko, A., Kobelev, A. & Shchukin, S. Electrodes interchangeability during electromyogram and bioimpedance joint recording. In Ural Symposium on Biomedical Engineering, Radioelectronics and Information Technology 17–20 (IEEE, 2018). Ngo, C. et al. A wearable, multi-frequency device to measure muscle activity combining simultaneous electromyography and electrical impedance myography. Sensors 22, 1941 (2022). Article Google Scholar Heo, U. et al. Development of a bioimpedance and sEMG fusion sensor for gait phase detection: validation with a transtibial amputee. In 45th Annual International Conference of the IEEE Engineering in Medicine & Biology Society 1–4 (IEEE, 2023). Kusche, R. & Ryschka, M. in World Congress on Medical Physics and Biomedical Engineering 2018 (eds Lhotska, L. et al.) Vol. 2, 847–850 (Springer, 2019). Blanco-Almazán, D. et al. Combining bioimpedance and myographic signals for the assessment of COPD during loaded breathing. IEEE Trans. Biomed. Eng. 
68, 298–307 (2020). Article Google Scholar Zhou, L. et al. How we found our IMU: guidelines to IMU selection and a comparison of seven IMUs for pervasive healthcare applications. Sensors 20, 4090 (2020). Article Google Scholar Suprem, A., Deep, V. & Elarabi, T. A. Orientation and displacement detection for smartphone device based IMUs. IEEE Access. 5, 987–997 (2017). Article Google Scholar Mazzà, C. et al. Technical validation of real-world monitoring of gait: a multicentric observational study. BMJ Open. 11, e050785 (2021). Article Google Scholar Bangaru, S. S., Wang, C. & Aghazadeh, F. Data quality and reliability assessment of wearable EMG and IMU sensor for construction activity recognition. Sensors 20, 5264 (2020). Article Google Scholar Majumder, S. & Deen, M. J. Wearable IMU-based system for real-time monitoring of lower-limb joints. IEEE Sens. J. 21, 8267–8275 (2020). Article Google Scholar Kyu, A., Mao, H., Zhu, J., Goel, M. & Ahuja, K. Eitpose: wearable and practical electrical impedance tomography for continuous hand pose estimation. In Proc. CHI Conference on Human Factors in Computing Systems 1–10 (ACM, 2024). Zheng, E., Wan, J., Yang, L., Wang, Q. & Qiao, H. Wrist angle estimation with a musculoskeletal model driven by electrical impedance tomography signals. IEEE Robot. Autom. Lett. 6, 2186–2193 (2021). Article Google Scholar Liu, M. et al. imove: exploring bio-impedance sensing for fitness activity recognition. In International Conference on Pervasive Computing and Communications 194–205 (IEEE, 2024). Kang, I. et al. Real-time gait phase estimation for robotic hip exoskeleton control during multimodal locomotion. IEEE Robot. Autom. Lett. 6, 3491–3497 (2021). Article Google Scholar Lotti, N. et al. Adaptive model-based myoelectric control for a soft wearable arm exosuit: a new generation of wearable robot control. IEEE Robot. Autom. Mag. 27, 43–53 (2020). Article Google Scholar Wu, Y., Jiang, D., Liu, X., Bayford, R. & Demosthenous, A. 
A human–machine interface using electrical impedance tomography for hand prosthesis control. IEEE Trans. Biomed. Circuits Syst. 12, 1322–1333 (2018). Article Google Scholar Zheng, E., Li, Y., Wang, Q. & Qiao, H. Toward a human-machine interface based on electrical impedance tomography for robotic manipulator control. In International Conference on Intelligent Robots and Systems 2768–2774 (IEEE, 2019). Toledo-Peral, C. L. et al. Virtual/augmented reality for rehabilitation applications using electromyography as control/biofeedback: systematic literature review. Electronics 11, 2271 (2022). Article Google Scholar Yazıcıoğlu, R. F. et al. In Biopotential Readout Circuits for Portable Acquisition Systems (eds Serdijn, W. A. & Yazıcıoğlu, R. F.) 5–19 (Springer, 2009). Wang, J., Tang, L. & Bronlund, J. E. Surface EMG signal amplification and filtering. Int. J. Computer Applications 82, 15–22 (2013). Article Google Scholar Kim, C. et al. Sub-μV rms-noise sub-μW/channel ADC-direct neural recording with 200-mV/ms transient recovery through predictive digital autoranging. IEEE J. Solid-State Circuits 53, 3101–3110 (2018). Article Google Scholar Jeon, H., Bang, J.-S., Jung, Y., Choi, I. & Je, M. A high DR, DC-coupled, time-based neural-recording IC with degeneration R-DAC for bidirectional neural interface. IEEE J. Solid-State Circuits 54, 2658–2670 (2019). Article Google Scholar Lee, C. et al. A 6.5-μW 10-kHz BW 80.4-dB SNDR G m-C-based CT∆∑ modulator with a feedback-assisted Gm linearization for artifact-tolerant neural recording. IEEE J. Solid-State Circuits 55, 2889–2901 (2020). Article Google Scholar Shu, Y.-S. et al. A 4.5 mm 2 multimodal biosensing SoC for PPG, ECG, BIOZ and GSR acquisition in consumer wearable devices. In International Solid-State Circuits Conference 400–402 (IEEE, 2020). Koo, N. & Cho, S. A 24.8-μW biopotential amplifier tolerant to 15-V PP common-mode interference for two-electrode ECG recording in 180-nm CMOS. IEEE J. 
Solid-State Circuits 56, 591–600 (2020). Article Google Scholar Choi, K.-J. & Sim, J.-Y. An 18.6-μW/Ch TDM-based 8-channel noncontact ECG recording IC with common-mode interference suppression. IEEE Trans. Biomed. Circuits Syst. 16, 1021–1029 (2022). Article Google Scholar Yen, C.-J., Chung, W.-Y. & Chi, M. C. Micro-power low-offset instrumentation amplifier IC design for biomedical system applications. IEEE Trans. Circuits Syst. I: Regul. Pap. 51, 691–699 (2004). Article Google Scholar Texas Instruments. Ads1299-x low-noise, 4-, 6-, 8-channel, 24-bit, analog-to-digital converter for EEG and biopotential measurements. TI.com http://www.ti.com/lit/ds/symlink/ads1299.pdf (2017). Analog Devices. AD8422 (Rev. C), high performance, low power, rail-to-rail precision instrumentation amplifier; https://www.analog.com/media/en/technical-documentation/data-sheets/ad8422.pdf. Harrison, R. R. & Charles, C. A low-power low-noise CMOS amplifier for neural recording applications. IEEE J. Solid-State Circuits 38, 958–965 (2003). Article Google Scholar Chae, M. S., Liu, W. & Sivaprakasam, M. Design optimization for integrated neural recording systems. IEEE J. Solid-State Circuits 43, 1931–1939 (2008). Article Google Scholar Harrison, R. R. et al. A low-power integrated circuit for a wireless 100-electrode neural recording system. IEEE J. Solid-State Circuits 42, 123–133 (2006). Article Google Scholar Harrison, R. R. A versatile integrated circuit for the acquisition of biopotentials. In Custom Integrated Circuits Conference 115–122 (IEEE, 2007). Zou, X., Xu, X., Yao, L. & Lian, Y. A 1-V 450-nW fully integrated programmable biomedical sensor interface chip. IEEE J. Solid-State Circuits 44, 1067–1077 (2009). Article Google Scholar Fan, Q., Sebastiano, F., Huijsing, J. H. & Makinwa, K. A. A. A 1.8 μW 60 nV√Hz capacitively-coupled chopper instrumentation amplifier in 65 nm CMOS for wireless sensor nodes. IEEE J. Solid-State Circuits 46, 1534–1543 (2011). 
Article Google Scholar Lopez, C. M. et al. An implantable 455-active-electrode 52-channel CMOS neural probe. IEEE J. Solid-State Circuits 49, 248–261 (2013). Article Google Scholar Han, D., Zheng, Y., Rajkumar, R., Dawe, G. S. & Je, M. A 0.45 V 100-channel neural-recording IC with sub-μW/channel consumption in 0.18 μm CMOS. IEEE Trans. Biomed. Circuits Syst. 7, 735–746 (2013). Article Google Scholar Zou, X. et al. A 100-channel 1-mW implantable neural recording IC. IEEE Trans. Circuits Syst. I: Regul. Pap. 60, 2584–2596 (2013). Google Scholar Ando, H. et al. Wireless multichannel neural recording with a 128-Mbps UWB transmitter for an implantable brain–machine interfaces. IEEE Trans. Biomed. Circuits Syst. 10, 1068–1078 (2016). Article Google Scholar Lee, T. et al. A multimodal neural activity readout integrated circuit for recording fluorescence and electrical signals. IEEE Access 9, 118610–118623 (2021). Article Google Scholar Xu, J. et al. A μW 8-channel active electrode system for EEG monitoring. IEEE Trans. Biomed. Circuits Syst. 5, 555–567 (2011). Article Google Scholar Altaf, M. A. B., Zhang, C. & Yoo, J. A 16-channel patient-specific seizure onset and termination detection SoC with impedance-adaptive transcranial electrical stimulator. IEEE J. Solid-State Circuits 50, 2728–2740 (2015). Article Google Scholar Kassiri, H. et al. Battery-less tri-band-radio neuro-monitor and responsive neurostimulator for diagnostics and treatment of neurological disorders. IEEE J. Solid-State Circuits 51, 1274–1289 (2016). Article Google Scholar Lee, S. B., Lee, H.-M., Kiani, M., Jow, U.-M. & Ghovanloo, M. An inductively powered scalable 32-channel wireless neural recording system-on-a-chip for neuroscience applications. IEEE Trans. Biomed. Circuits Syst. 4, 360–371 (2010). Article Google Scholar Chandrakumar, H. & Marković, D. A 15.2-ENOB 5-kHz BW 4.5-μW chopped CT ΔΣ-ADC for artifact-tolerant neural recording front ends. IEEE J. Solid-State Circuits 53, 3470–3483 (2018). 
Article Google Scholar Jia, Y. et al. A trimodal wireless implantable neural interface system-on-chip. IEEE Trans. Biomed. Circuits Syst. 14, 1207–1217 (2020). Article Google Scholar Jung, Y. et al. A wide-dynamic-range neural-recording IC with automatic-gain-controlled AFE and CT dynamic-zoom ΔΣ ADC for saturation-free closed-loop neural interfaces. IEEE J. Solid-State Circuits 57, 3071–3082 (2022). Article Google Scholar Kim, M. K., Jeon, H., Lee, H. J. & Je, M. Plugging electronics into minds: recent trends and advances in neural interface microsystems. IEEE Solid-State Circuits Mag. 11, 29–42 (2019). Article Google Scholar Musk, E. An integrated brain–machine interface platform with thousands of channels. J. Med. Internet Res. 21, e16194 (2019). Article Google Scholar Yoon, D.-Y. et al. A 1024-channel simultaneous recording neural SoC with stimulation and real-time spike detection. In Symposium on VLSI Technology and Circuits 1–2 (IEEE, 2021). Samiei, A. & Hashemi, H. A bidirectional neural interface SoC with adaptive IIR stimulation artifact cancelers. IEEE J. Solid-State Circuits 56, 2142–2157 (2021). Article Google Scholar Park, Y., Cha, J.-H., Han, S.-H., Park, J.-H. & Kim, S.-J. A 3.8-µW 1.5-NEF 15-GΩ total input impedance chopper stabilized amplifier with auto-calibrated dual positive feedback in 110-nm CMOS. IEEE J. Solid-State Circuits 57, 2449–2461 (2022). Article Google Scholar Koo, N., Kim, H. & Cho, S. A 43.3-μW biopotential amplifier with tolerance to common-mode interference of 18 V pp and T-CMRR of 105 dB in 180-nm CMOS. IEEE J. Solid-State Circuits 58, 508–519 (2022). Article Google Scholar Lee, T. & Je, M. Trend investigation of biopotential recording front-end channels for invasive and non-invasive applications. Preprint at https://doi.org/10.48550/arXiv.2305.13463 (2023). Hou, Y., Zhu, Y., Ji, X., Richardson, A. G. & Liu, X. 
A wireless sensor–brain interface system for tracking and guiding animal behaviors through closed-loop neuromodulation in water mazes. IEEE J. Solid-State Circuits, (2024). Intan Technologies. RHD electrophysiology amplifier chips; https://intantech.com/products_RHD2000.html. Ha, H. et al. A bio-impedance readout IC with digital-assisted baseline cancellation for two-electrode measurement. IEEE J. Solid-State Circuits 54, 2969–2979 (2019). Article Google Scholar Yazicioglu, R. F., Merken, P., Puers, R. & Van Hoof, C. A 60 μW 60 nV/√Hz readout front-end for portable biopotential acquisition systems. IEEE J. Solid-State Circuits 42, 1100–1110 (2007). Article Google Scholar Van Helleputte, N. et al. A 160 μA biopotential acquisition IC with fully integrated IA and motion artifact suppression. IEEE Trans. Biomed. Circuits Syst. 6, 552–561 (2012). Article Google Scholar Yazicioglu, R. F., Merken, P., Puers, R. & Van Hoof, C. A 200 μW eight-channel EEG acquisition ASIC for ambulatory EEG systems. IEEE J. Solid-State Circuits 43, 3025–3038 (2008). Article Google Scholar Yan, L. et al. A 13 μA analog signal processing IC for accurate recognition of multiple intra-cardiac signals. IEEE Trans. Biomed. Circuits Syst. 7, 785–795 (2013). Article Google Scholar Van Helleputte, N. et al. A 345 µW multi-sensor biomedical SoC with bio-impedance, 3-channel ECG, motion artifact reduction, and integrated DSP. IEEE J. Solid-State Circuits 50, 230–244 (2014). Article Google Scholar Xu, J., Büsze, B., Van Hoof, C., Makinwa, K. A. A. & Yazicioglu, R. F. A 15-channel digital active electrode system for multi-parameter biopotential measurement. IEEE J. Solid-State Circuits 50, 2090–2100 (2015). Article Google Scholar Liu, C.-C., Chang, S.-J., Huang, G.-Y. & Lin, Y.-Z. A 10-bit 50-MS/s SAR ADC with a monotonic capacitor switching procedure. IEEE J. Solid-State Circuits 45, 731–740 (2010). Article Google Scholar Jung, Y. et al. 
Dynamic-range-enhancement techniques for artifact-tolerant biopotential-acquisition ICs. IEEE Trans. Circuits Syst. II: Express Briefs 69, 3090–3095 (2022). Google Scholar Muller, R., Gambini, S. & Rabaey, J. M. A 0.013 mm2, 5 µW, DC-coupled neural signal acquisition IC with 0.5 V supply. IEEE J. Solid-State Circuits 47, 232–243 (2011). Article Google Scholar Zhou, A. et al. A wireless and artefact-free 128-channel neuromodulation device for closed-loop stimulation and recording in non-human primates. Nat. Biomed. Eng. 3, 15–26 (2019). Article Google Scholar Pazhouhandeh, M. R., Chang, M., Valiante, T. A. & Genov, R. Track-and-zoom neural analog-to-digital converter with blind stimulation artifact rejection. IEEE J. Solid-State Circuits 55, 1984–1997 (2020). Article Google Scholar Huang, J. & Mercier, P. P. A 178.9-dB FoM 128-dB SFDR VCO-based AFE for ExG readouts with a calibration-free differential pulse code modulation technique. IEEE J. Solid-State Circuits 56, 3236–3246 (2021). Article Google Scholar Yang, X. et al. A 108 dB DR Δ∑–∑ M front-end with 720 mV pp input range and > ±300 mV offset removal for multi-parameter biopotential recording. IEEE Trans. Biomed. Circuits Syst. 15, 199–209 (2021). Article Google Scholar Jeong, K. et al. A PVT-robust AFE-embedded error-feedback noise-shaping SAR ADC with chopper-based passive high-pass IIR filtering for direct neural recording. IEEE Trans. Biomed. Circuits Syst. 16, 679–691 (2022). Article Google Scholar Yang, X. et al. An AC-coupled 1st-order Δ–ΔΣ readout IC for area-efficient neural signal acquisition. IEEE J. Solid-State Circuits 58, 949–960 (2023). Article Google Scholar Li, Y. et al. A 15.5-ENOB 335 mVPP-linear-input-range 4.7 GΩ-input-impedance CT-ΔΣM analog front-end with embedded low-frequency chopping. IEEE Solid-State Circuits Lett. 6, 265–268 (2023). Article Google Scholar Jeong, K., Ha, S. & Je, M. 
Acknowledgements: This work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (RS-2021-NR059641).

These authors contributed equally: Kyungseo Park, Hwayeong Jeong.

Author affiliations: Kyungseo Park — Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology, Daegu, South Korea. Hwayeong Jeong — Reconfigurable Robotics Laboratory, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland. Yoontae Jung — Analog Biomed. Team, Interuniversity Microelectronics Centre (IMEC), Leuven, Belgium. Ji-Hoon Suh — Energy-Efficient Microsystems Laboratory, Electrical and Computer Engineering, University of California, San Diego, San Diego, CA, USA. Minkyu Je — School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea. Jung Kim — Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea.

Author contributions: K.P., H.J., Y.J., J.-H.S. and M.J. researched data for the article, contributed substantially to discussion of the content and wrote the article. H.J., M.J. and J.K. reviewed and/or edited the manuscript before submission.

Correspondence to Jung Kim.

Competing interests: The authors declare that they have no competing interests.

Peer review: Nature Reviews Electrical Engineering thanks Benoit Gosselin, Karim Bouzid and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Citation: Park, K., Jeong, H., Jung, Y. et al. Using biopotential and bio-impedance for intuitive human–robot interaction. Nat Rev Electr Eng 2, 555–571 (2025).
DOI: https://doi.org/10.1038/s44287-025-00191-5

Accepted: 4 June 2025. Published: 18 July 2025. Version of record: 18 July 2025. Issue date: August 2025.

Nature Reviews Electrical Engineering (Nat Rev Electr Eng), ISSN 2948-1201 (online).