Total Articles Scraped
Total Images Extracted
| Title | URL | Images | Scraped At | Status | Action |
|---|---|---|---|---|---|
| Autonomous Bipedal Robot Can Change Its Own Batteries, Work 24/7 | https://www.odditycentral.com/technolog… | 1 | Dec 23, 2025 08:00 | active | |
Autonomous Bipedal Robot Can Change Its Own Batteries, Work 24/7
Description: The Walker S2 humanoid robot is the world's first industrial robot that can replace its own battery, allowing it to operate 24 hours a day, 365 days a year.
Content:
Unveiled earlier this month by Chinese robotics company UBTech Robotics, the Walker S2 has attracted attention for its unique ability to replace its own batteries, ensuring it never runs out of power. Conventional robots must be plugged in or have their batteries replaced by hand, forcing them to stop working for a time; the Walker S2 instead carries a dual-battery system and swaps each pack itself, one at a time, so that it essentially never powers down. This simple but ingenious feature is said to be a first among bipedal robots. In a recent video shared by UBTech, the Walker S2 demonstrates the swap: when it detects that its batteries are running low, it heads to a battery exchange station, bends its arms, uses its palms to pull one of the packs from its back and store it on the top shelf, then inserts a fully charged replacement and gets back to work. The process is simple and effective, takes only a few minutes, and lets the robot work continuously for as long as charged replacement batteries are available. The system is said to have been inspired by the swappable modular batteries of Chinese electric cars, which can be exchanged to save time. The Walker S2, the first humanoid robot equipped with self-swappable batteries, is expected to be used in industrial facilities and on production lines, where the company claims it could eliminate the need for human labor. Although UBTech has yet to reveal the S2's technical specifications, expectations are high for the upcoming production version of the humanoid robot.
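The swap sequence described above (detect low charge, exchange one pack while the other keeps the robot powered, resume work) can be sketched as a tiny state machine. The threshold, class names, and drain model below are illustrative assumptions, not the manufacturer's specifications.

```python
# Minimal sketch of a dual-battery swap policy: the robot always keeps one
# pack installed, so it never fully powers down. All numbers are assumptions.
LOW_CHARGE = 0.2   # assumed trigger level for heading to the swap station

class DualBatteryRobot:
    def __init__(self):
        self.batteries = [1.0, 1.0]  # charge fraction of each pack

    def drain(self, amount):
        # Assume both packs discharge evenly while installed.
        self.batteries = [max(0.0, b - amount / 2) for b in self.batteries]

    def needs_swap(self):
        return min(self.batteries) < LOW_CHARGE

    def swap_lowest(self):
        # Replace only the weaker pack; the other keeps the robot running.
        i = self.batteries.index(min(self.batteries))
        self.batteries[i] = 1.0
        return i

robot = DualBatteryRobot()
robot.drain(1.7)          # run until charge is low
if robot.needs_swap():
    robot.swap_lowest()   # one pack swapped; the robot stays powered
```

Because packs are exchanged one at a time, a second swap may immediately follow the first; the robot is never without an installed battery at any point in the cycle.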
Images (1):
| Startup Figure Unveils Photos Of World's First 'General Purpose' Bipedal … | https://www.ibtimes.com/startup-figure-… | 1 | Dec 23, 2025 08:00 | active | |
Startup Figure Unveils Photos Of World's First 'General Purpose' Bipedal Humanoid Robot | IBTimes
Description: Figure hopes its humanoid robot will help address labor shortages.
Content:
Artificial intelligence robotics startup Figure has unveiled photos and a video of Figure 01, which the company calls the world's first "general purpose" humanoid robot. The bipedal robot is expected to benefit the workforce and help address labor shortages. "This humanoid robot will have the ability to think, learn, and interact with its environment and is designed for initial deployment into the workforce to address labor shortages and over time lead the way in eliminating the need for unsafe and undesirable jobs," the company, founded in 2022, said in a press release Thursday. Figure also revealed that its team of 40 industry experts has a combined 100 years of AI and humanoid experience, drawn from GoogleX, IHMC, Tesla, Apple SPG, Cruise and Boston Dynamics. "Meet Figure - the AI Robotics company building the world's first commercially viable autonomous humanoid robot. We spent the last 9 months assembling our world-class team and designing our Alpha build - now we're ready to introduce you to Figure 01." pic.twitter.com/pas6rgncTW "Once Figure's humanoids are deployed to work alongside us, we'll have the potential to produce an abundance of affordable, more widely available goods and services to a degree the world has never seen," Figure founder and CEO Brett Adcock said. Adcock noted that Figure 01 will begin with repetitive, structured tasks, and that advances in software and robot learning will let the team expand the robot's capabilities. According to Figure, its humanoid will stand 5 feet 6 inches tall, weigh 60 kilograms, carry a 20-kilogram payload, run for five hours, and is expected to "go beyond single-function robots and lend support across manufacturing, logistics, warehousing, and retail."
Engineering magazine IEEE Spectrum noted that while it is "generally skeptical" about announcements from companies that emerge "out of stealth with ambitious promises and some impressive renderings," it was impressed by the team Figure assembled to make Figure 01 a reality. The magazine added that the images and video shown so far are only renderings of what the team wants Figure 01 to be; the company, for its part, expects the final hardware to closely match what it has shown. Figure wants its robots to start in warehouses, enabled by an AI system that lets its humanoids "perform everyday tasks autonomously." First reported on by TechCrunch in September, Figure operated in stealth before Thursday's announcement. At the time, the outlet revealed that Figure had hired research scientist Jerry Pratt as its CTO and former Boston Dynamics roboticist Gabe Nelson as chief scientist. © Copyright IBTimes 2025. All rights reserved.
Images (1):
| [Innovate Korea] Future of robots depends on AI: Rainbow Robotics … | https://www.koreaherald.com/view.php?ud… | 1 | Dec 23, 2025 08:00 | active | |
[Innovate Korea] Future of robots depends on AI: Rainbow Robotics founder - The Korea Herald
URL: https://www.koreaherald.com/view.php?ud=20240607050413
Description: DAEJEON -- Oh Jun-ho, founder of Rainbow Robotics and a former mechanical engineering professor at the Korea Advanced Institute of Science and Technology, under
Content:
Business [Innovate Korea] Future of robots depends on AI: Rainbow Robotics founder Published : June 7, 2024 - 14:01:02 DAEJEON -- Oh Jun-ho, founder of Rainbow Robotics and a former mechanical engineering professor at the Korea Advanced Institute of Science and Technology (KAIST), underscored that the future of robots lies in artificial intelligence technology. "Robots are ready to do anything, but on their own, they can't do anything. Bringing movement to them requires human touch like programming, but in the future, AI will be able to take on that role," he said in his speech at Innovate Korea 2024, held at the Lyu Keun-chul Sports Complex in Daejeon on Wednesday. Before his speech, he appeared on stage with his quadruped walking robot, grabbing the attention of the roughly 3,000 participants. He then explained the current state of humanoid robots and the future of the relevant technologies while showcasing some of the company's products, such as a bimanual mobile manipulator and a humanoid robot. "Robots can be broadly divided into two components: the moving hardware and the software that controls it. The hardware has largely been developed, but driving it remains a challenge. Ultimately, AI will need to handle this operation. I believe this battle will be crucial in the future." Although humanoid robots cannot fully replace workers at the moment, there is a lot the company can do at this stage, its founder said; he is gearing up to unveil a trial product of a new electric bipedal walking robot as early as the end of this year. Rainbow Robotics, founded by a research team at the KAIST Humanoid Robot Research Center in 2011, is one of a handful of robot companies making bipedal human-like robots. Samsung Electronics owns a 14.99-percent stake in the company as its second-largest shareholder. Copyright Herald Corporation. All Rights Reserved.
Images (1):
| Amazon testing humanoid robots in its warehouses - Times of … | https://timesofindia.indiatimes.com/wor… | 1 | Dec 23, 2025 08:00 | active | |
Amazon testing humanoid robots in its warehouses - Times of India
Description: US News: Amazon plans to test Agility's bipedal robot, Digit, in its nationwide fulfillment centers. Amazon Robotics has primarily focused on wheeled autonomou
Content:
Images (1):
| Robot Talk Episode 137 – Getting two-legged robots moving, with … | https://robohub.org/robot-talk-episode-… | 1 | Dec 23, 2025 08:00 | active | |
Robot Talk Episode 137 – Getting two-legged robots moving, with Oluwami Dosunmu-Ogunbi - Robohub
Content:
Claire chatted to Oluwami Dosunmu-Ogunbi from Ohio Northern University about bipedal robots that can walk and even climb stairs. Oluwami Dosunmu-Ogunbi (Wami) is an Assistant Professor in the Mechanical Engineering Department at Ohio Northern University. Her research focuses on controls, with applications in bipedal locomotion and engineering education. She is the first Black woman to receive a PhD in Robotics from the University of Michigan. During her PhD, she developed the Biped Bootcamp technical document, which she is transforming into an undergraduate curriculum, introducing students to bipedal robotics while providing advanced coursework for juniors and seniors.
Images (1):
| Unitree's Bipedal Robot Design Patent Granted, Targeting Inspection and Security … | https://pandaily.com/unitree-s-bipedal-… | 1 | Dec 23, 2025 08:00 | active | |
Unitree's Bipedal Robot Design Patent Granted, Targeting Inspection and Security Applications - Pandaily
Description: Unitree Robotics has secured a design patent for a new bipedal robot, expanding its footprint in inspection, security, and next-generation robotics applications.
Content:
A newly published filing shows that Unitree Robotics Co., Ltd. has been granted a design patent for its bipedal robot. According to the abstract, the patented appearance is intended for robots used in inspection, security, logistics, education, entertainment, services, industrial tasks, and exploration, with the key design feature being the robot's form. Previously, Unitree's Beijing subsidiary open-sourced the Qmini bipedal robot, a model designed for hobbyists and fully compatible with 3D printing. All structural components can be produced with consumer-grade printers, requiring virtually no machined parts. With Unitree's high-reliability motors and a standard battery, users can assemble the complete robot in 3 to 5 hours after printing the parts. Developers can also customize the robot's appearance and functions by building DIY extensions around the neck motor to suit different scenarios. Founded in August 2016, Unitree Robotics has a registered capital of approximately RMB 364 million (about USD 50.2 million). Corporate records show the company is jointly owned by founder Wang Xingxing, Hanhai Information Technology (Shanghai) Co., Ltd., and Ningbo Sequoia Keshen Equity Investment Partnership (Limited Partnership), among others. Notably, Unitree has also recently secured registration of its "GAMEBOT" trademark. Classified under international Class 42 for design and research, the trademark covers services such as artificial intelligence research and studies related to robotic process automation technology. Pandaily is a tech media based in Beijing. © 2017 - 2025 Pandaily. All rights reserved.
Images (1):
| China's Robotera L7 Bipedal Humanoid Robot and STAR 1 | … | https://www.nextbigfuture.com/2025/08/c… | 1 | Dec 23, 2025 08:00 | active | |
China's Robotera L7 Bipedal Humanoid Robot and STAR 1 | NextBigFuture.com
URL: https://www.nextbigfuture.com/2025/08/chinas-robotera-l7-bipedal-humanoid-robot-and-star-1.html
Description: ROBOTERA Unveils L7: Next-Generation Full-Size Bipedal Humanoid Robot has powerful mobility and dexterous manipulation.
Content:
ROBOTERA has unveiled the L7, a next-generation full-size bipedal humanoid robot with powerful mobility and dexterous manipulation. Robotera is a Chinese humanoid robotics startup founded in August 2023 and spun out of Tsinghua University, China's top university. It raised around CNY 500 million (approximately USD 70 million) in a Series A funding round led by CDH Investments and Haier Capital, among other investors, and has $111 million in total funding across two major rounds. Its pre-Series A round in early 2024 secured about $42 million (300 million yuan), led by Crystal Stream Capital, Vision Plus Capital, and Alibaba Group, with additional participation from other investors. Robotera has started mass production and large-scale deliveries, having delivered over 200 robots globally, with more than 50% of orders coming from overseas clients. Its products include the wheeled humanoid service robot Q5, the full-size bipedal industrial humanoid STAR 1, the ERA-42 AI model for complex task execution, and the dexterous five-finger robotic hand XHAND1. As of mid-2025, STAR 1 remains in development for broader commercialization, with no confirmed price or mass-production timeline, but it aligns with China's goal of integrating humanoids into industrial supply chains by 2027. The STAR 1 stands out with 55 degrees of freedom, joint torque of 400 N·m, and operating speeds up to 25 rad/s. Robotera's focus remains on advancing an end-to-end learning model that improves the robot's language, visual understanding, and action capabilities, targeting commercial applications in industrial logistics, retail, and complex environments.
Brian Wang is a Futurist Thought Leader and a popular science blogger with 1 million readers per month. His blog Nextbigfuture.com covers many disruptive technologies and trends, including space, robotics, artificial intelligence, medicine, anti-aging biotechnology, and nanotechnology.
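A quick sanity check on the quoted STAR 1 figures: joint torque times joint speed bounds the instantaneous mechanical power a single joint could deliver. Whether peak torque and peak speed can actually coincide on the real actuator is an assumption, so this is only an upper bound.

```python
# Upper bound on per-joint mechanical power from the quoted STAR 1 figures.
# P = tau * omega; assumes peak torque and peak speed can coincide.
peak_torque = 400.0   # N*m (quoted)
peak_speed = 25.0     # rad/s (quoted)
peak_power_w = peak_torque * peak_speed
print(f"Per-joint peak power bound: {peak_power_w / 1000:.0f} kW")
```

The bound works out to 10 kW per joint, which illustrates why full-size humanoid actuators are thermally and electrically demanding even if sustained power is far lower.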
Images (1):
| Constrained Reinforcement Learning for Unstable Point-Feet Bipedal Locomotion Applied to … | https://hal.science/hal-05198560v1 | 1 | Dec 23, 2025 08:00 | active | |
Constrained Reinforcement Learning for Unstable Point-Feet Bipedal Locomotion Applied to the Bolt Robot - Archive ouverte HAL
URL: https://hal.science/hal-05198560v1
Description: Bipedal locomotion is a key challenge in robotics, particularly for robots like Bolt, which have a point-foot design. This study explores the control of such underactuated robots using constrained reinforcement learning, addressing their inherent instability, lack of arms, and limited foot actuation. We present a methodology that leverages Constraints-as-Terminations and domain randomization techniques to enable sim-to-real transfer. Through a series of qualitative and quantitative experiments, we evaluate our approach in terms of balance maintenance, velocity control, and responses to slip and push disturbances. Additionally, we analyze autonomy through metrics like the cost of transport and ground reaction force. Our method advances robust control strategies for point-foot bipedal robots, offering insights into broader locomotion.
Content:
Submitted on: Monday, 4 August 2025, 11:05:50. Last modified: Saturday, 20 December 2025, 03:07:45. https://hal.science/hal-05198560
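The Constraints-as-Terminations idea used by this study can be sketched abstractly: instead of adding penalty terms to the reward, an episode simply ends (forfeiting all future reward) the moment a constraint is violated, which pushes the learned policy away from violating states. The constraint set and the gym-style rollout loop below are illustrative, not the paper's implementation.

```python
# Illustrative Constraints-as-Terminations rollout: violating any constraint
# terminates the episode, cutting off all future reward. The toy constraints
# (torso tilt, body height) are assumptions for a point-foot biped.
def constraints_ok(state):
    return abs(state["tilt"]) < 0.5 and state["height"] > 0.3

def rollout(policy, env_step, init_state, horizon=200):
    state, total_reward = init_state, 0.0
    for _ in range(horizon):
        state, reward = env_step(state, policy(state))
        if not constraints_ok(state):
            break           # termination replaces a shaped penalty term
        total_reward += reward
    return total_reward
```

An RL algorithm maximizing `rollout` returns will implicitly learn to keep `constraints_ok` true, since early termination is the costliest outcome; the paper pairs this with domain randomization for sim-to-real transfer.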
Images (1):
| Tamiya Bipedal Walking Robot | Japan Trend Shop | https://www.japantrendshop.com/tamiya-b… | 1 | Dec 23, 2025 08:00 | active | |
Tamiya Bipedal Walking Robot | Japan Trend Shop
URL: https://www.japantrendshop.com/tamiya-bipedal-walking-robot-p-9416.html
Description: Tamiya Bipedal Walking Robot - The Japanese obsession with robots is well documented but what isn't so much is that robots are considered an educational tool and children are encouraged to build them even at elementary school age. And while building a robot from scratch can be very rewarding, building it from a kit like the Tamiy ...
Content:
The Japanese obsession with robots is well documented, but what is less well known is that robots are considered an educational tool, and children are encouraged to build them even at elementary school age. While building a robot from scratch can be very rewarding, building one from a kit like the Tamiya Bipedal Walking Robot lets you create a much more complicated machine and gain better insight into its structure and operation, even if its main function is just walking on two legs. How does it do it? Combining a gearbox with a rotating crank mechanism and slider, the Tamiya Bipedal Walking Robot shifts its weight from left to right to create movement. To make it turn left or right, shift the position of the gearbox; to let it bypass objects, add a guide rod. When assembled, the robot measures about 85 x 132 x 107 mm (3.3 x 5.2 x 4.2"). The only tools required are a pair of nippers, a cutter, and a Phillips screwdriver, plus two AAA batteries to power it. Copyright © 2025 Japan Trend Shop
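The crank-and-slider weight shift can be approximated as the projection of crank rotation onto lateral displacement: as the crank turns, the counterweight moves sideways as the sine of the crank angle, alternately unloading each foot. The crank radius below is an illustrative assumption, not Tamiya's actual geometry.

```python
import math

# Toy model of the weight-shift mechanism: a crank of radius r moves the
# counterweight laterally by r*sin(theta). Dimensions are assumptions.
def lateral_shift(crank_radius_mm, theta_rad):
    return crank_radius_mm * math.sin(theta_rad)

# One full crank turn sampled at quarter turns: weight swings center ->
# full right -> center -> full left, which is what rocks the robot forward.
shifts = [lateral_shift(10.0, k * math.pi / 2) for k in range(4)]
```

Sampling finer angles would trace the smooth side-to-side rocking that, combined with the leg linkage, produces the walking gait.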
Images (1):
| Talk: Humanoid Robots – Part 5 – The Last Driver … | https://thelastdriverlicenseholder.com/… | 1 | Dec 23, 2025 00:04 | active | |
Talk: Humanoid Robots – Part 5 – The Last Driver License Holder…
URL: https://thelastdriverlicenseholder.com/2025/12/12/talk-humanoid-robots-part-5/
Description: We are at the dawn of the age of humanoid robots. To mark the completion of my book “HOMO SYNTHETICUS: How Man and Machine Merge,” (in German) I would like to give a brief insight into the history and current state of the art of humanoid robots. https://youtu.be/7Wvw6nc0AaI This article was also published in German.
Content:
The Last Driver License Holder… …is already born. How Waymo, Tesla, Zoox & Co will change our automotive society and make mobility safer, more affordable and accessible in urban as well as rural areas. We are at the dawn of the age of humanoid robots. To mark the completion of my book “HOMO SYNTHETICUS: How Man and Machine Merge” (in German), I would like to give a brief insight into the history and current state of the art of humanoid robots. This article was also published in German. By Mario Herger.
Images (1):
| Insurance policy for humanoid robots | http://www.ecns.cn/news/sci-tech/2025-1… | 1 | Dec 23, 2025 00:04 | active | |
Insurance policy for humanoid robots
URL: http://www.ecns.cn/news/sci-tech/2025-12-12/detail-ihexvcks1701535.shtml
Content:
In November, Huazhong University of Science and Technology Business Incubator purchased insurance for two 60-kilogram humanoid robots at a premium of about 5,000 yuan ($707) per robot. If damage occurs within one year, the business incubator will receive a maximum compensation of 500,000 yuan. This was the first insurance policy for embodied intelligent robots in Hubei province. The robots will be open for use by universities and small and medium-sized enterprises, so frequent testing will raise the risk of falls and collisions, leading to possible damage to the robots and to others, according to Zheng Jun, chairman and general manager of Huazhong University of Science and Technology Business Incubator. "SMEs often cannot afford a robot, and companies that own one may hesitate to use such an expensive machine. Insurance gives developers confidence and can significantly increase usage rates," he said. The policy covers both physical damage insurance and third-party liability insurance for embodied robots. The former mainly provides coverage for equipment damage caused by natural disasters, fire and explosion, accidental collision, overturning and falling, electrical failures, cybersecurity incidents, abnormal operations and other causes, according to PICC Property and Casualty Co Ltd's local branch. The latter offers compensation and dispute-resolution services for personal injury or property damage that the robot may cause to third parties during its operation, the company said. Humanoid robots, like humans, can fall, get injured or even break down, and the cost of one-time maintenance can range from 30,000 yuan to as much as 300,000 yuan, so the company customized this insurance plan based on research into the needs of enterprises, said She Zhilong, its client manager. "It's just as important as buying medical insurance for humans," he said. He added that many robotics companies have learned about this insurance and are actively in negotiations with the company.
"Coverage may be expanded to more application scenarios by expanding insurance liability and liability limits," he said. Since September, leading insurance companies such as PICC Property and Casualty Co Ltd and China Pacific Property Insurance Co Ltd have launched related products. For example, China Pacific Property Insurance released China's first dedicated insurance for the commercial application of humanoid robots in September, covering the whole chain of production, sales, leasing and usage. Ping An Property and Casualty Insurance Co of China rolled out a comprehensive financial solution in November that integrates insurance with credit and IPO services. "Humanoid robot insurance is not just a risk-transfer tool. It is a 'catalyst' for industrial innovation and a 'stabilizer' for widespread adoption," said Zhou Hua, dean of the School of Insurance at the Central University of Finance and Economics. Whether manufacturers use insurance as a trust endorsement to enhance market competitiveness, or end users rely on it to resolve concerns over "who is responsible for injuries caused by million-yuan equipment," insurance has become a critical link in covering the "last mile" of market adoption, he said. The risks posed by humanoid robots are complex, ranging from hacking attacks and data breaches to ethical liability such as algorithmic discrimination. "As robots become deeply integrated into human society, traditional insurance clauses struggle to cover the emerging risks arising from their autonomous decision-making capabilities. Therefore, insuring humanoid robots means far more than covering damages to a single machine. It is about building the foundational risk infrastructure for an imminent intelligent society where humans and robots coexist," he said.
However, public awareness is currently limited and market penetration remains low, as users still question whether the coverage scope matches their needs and lack a basis for judging whether premium rates are reasonable, he said. Wang Guojun, a professor at the School of Insurance and Economics of the University of International Business and Economics, said that a key challenge in developing humanoid robot insurance lies in pricing, due to a lack of critical information such as accident frequency, loss distribution and repair cost schedules. He anticipates that with the establishment of data-sharing platforms and dynamic pricing mechanisms, the insurance market will expand rapidly.
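The reported figures can be checked with plain arithmetic: a 5,000-yuan premium against a 500,000-yuan payout cap implies a 1% annual rate, and the cap covers even the largest one-time repair cost quoted in the article.

```python
# Arithmetic check on the reported Hubei policy figures.
premium_yuan = 5_000          # per robot, per year (reported)
max_payout_yuan = 500_000     # reported compensation cap
repair_cost_range = (30_000, 300_000)  # reported one-time maintenance cost

rate = premium_yuan / max_payout_yuan              # implied premium rate
covers_worst_repair = max_payout_yuan >= repair_cost_range[1]
print(f"Implied rate: {rate:.1%}; covers worst quoted repair: {covers_worst_repair}")
```

This is only the ratio implied by the article's numbers; actual actuarial pricing would depend on accident frequency and loss data, which, as the article notes, insurers do not yet have.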
Images (1):
| This new AI brain lets robots do the … | https://www.lebigdata.fr/ce-nouveau-cer… | 1 | Dec 22, 2025 17:59 | active | |
This new AI brain lets robots do housework without training
Description: Flexion, the Swiss startup giving robots a modular AI brain to clean and adapt to the real world without scripts.
Content:
Mariano R., November 25, 2025, 2-minute read, Robotics. The days of programming robots line by line are over. A Swiss company has unveiled a genuinely impressive autonomy architecture: a new AI brain that lets humanoid robots fend for themselves, reasoning and acting on their own in the real world. The team at Flexion, based in Switzerland, went after the thing robots have been missing: common sense. They built a complete autonomy architecture, in effect a new AI brain for machines, so that robots can handle complicated tasks like housework with zero human help. What makes this AI brain so effective is its design as three intelligent layers that are permanently connected. At the top is the Command Layer, which uses a large language model (LLM) for logic and reasoning, a bit like our own common sense. It takes a simple order such as "Tidy my bedroom" and breaks it into clear sub-steps, giving the robot the big picture it needs to orient itself. Just below is the Motion Layer, a model linking vision and action. This part of the AI brain was first trained on simulated data to build a solid foundation, then fine-tuned on real-world situations. To keep the robot fast, the Control Layer takes over: built on the Transformer architecture, it is a whole-body control system with very low latency, best thought of as a high-performance reflex. It quickly composes new movements and ensures the robot adapts immediately to its surroundings. Flexion also argues that while many humanoid robots look cool, few are truly useful outside a tidy lab.
The company therefore focuses on the engine, the intelligence itself, rather than the bodywork. The goal is for these machines to accomplish real tasks, at scale, in the real world, driven by the same compute and training technology that made LLMs take off. "Flexion Robotics raised $50M to build the brain for humanoids by focusing on reinforcement learning & simulations. Founding team previously worked at @nvidia and @ETH. @FlexionRobotics @HoellerDavid @rdn_nikita" pic.twitter.com/VoE5qqk8ea This project matters all the more as demographic change and staff shortages accelerate everywhere; industry in particular is already feeling it, which makes humanoid robots an economic necessity. To get there, Flexion has just secured $50 million in funding from big names including NVentures (NVIDIA's venture arm). The financing will go toward growing the Zurich team and bringing the AI brain to market.
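The three-layer stack described above (an LLM command layer decomposing a goal, a vision-action motion layer, and a low-latency control layer) can be sketched as a pipeline. All class and method names below are illustrative assumptions, not Flexion's API; each layer is stubbed with a placeholder for the real model.

```python
# Illustrative three-layer pipeline: command -> motion -> control.
# Names and the task decomposition are assumptions, not Flexion's design.
class CommandLayer:
    def plan(self, instruction):
        # Stands in for an LLM decomposing a goal into sub-tasks.
        return [f"{instruction}: step {i}" for i in (1, 2, 3)]

class MotionLayer:
    def to_motion(self, subtask):
        # Stands in for a vision-action model mapping a sub-task to a motion.
        return {"subtask": subtask, "trajectory": "reach-grasp-place"}

class ControlLayer:
    def execute(self, motion):
        # Stands in for the low-latency whole-body controller.
        return f"executed {motion['subtask']}"

def run(instruction):
    cmd, mot, ctl = CommandLayer(), MotionLayer(), ControlLayer()
    return [ctl.execute(mot.to_motion(s)) for s in cmd.plan(instruction)]
```

The layering matters for latency: only the control layer sits in the fast loop, while the slower LLM plans at the granularity of sub-tasks.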
Images (1):
| Samsung and LG launch robots for Seniors | https://www.marchedesseniors.com/samsun… | 1 | Dec 22, 2025 17:59 | active | |
Samsung and LG launch robots for Seniors
URL: https://www.marchedesseniors.com/samsung-et-lg-lancent-des-robots-pour-les-seniors/29369
Description: Samsung and LG unveil Ballie and Q9, AI assistance robots for seniors. A fast-growing market, but adoption is held back by cost.
Content:
AgeEconomie – Silver économie – Marché des Seniors: the news and analysis portal for the seniors market and the silver economy. According to a May 21, 2025 report by Korea JoongAng Daily, the South Korean giants Samsung and LG are preparing to launch assistance robots dedicated to the silver economy this year, responding to growing needs linked to population aging. Samsung's Ballie and LG's Q9 assistance robots embody a new generation of technological tools targeting the specific needs of the elderly. With advanced interaction capabilities via generative AI, adapted physical interfaces and global ambitions, these solutions are poised for rapid growth. However, their success will depend as much on technological innovation as on their affordability, on organizational trust (data, security, ergonomics), and on suitable financing models and public support.
Images (1):
|
|||||
| “LLM-equipped robots are not safe in real-world environments” < R&D < … | https://www.irobotnews.com/news/article… | 1 | Dec 22, 2025 17:58 | active | |
“LLM-equipped robots are not safe in real-world environments” < R&D < Robots < Article - Robot Newspaper (irobotnews)URL: https://www.irobotnews.com/news/articleView.html?idxno=43370 Description: A study has found that robots equipped with large language models (LLMs) are not safe for use in real-world environments and can cause discrimination and physical harm. King's College Lon Content:
A study has found that robots equipped with large language models (LLMs) are not safe for use in real-world environments and can cause discrimination and physical harm. A joint research team from King's College London and Carnegie Mellon University (CMU) pointed out that LLM-based robots approve dangerous commands and exhibit biased behavior when given access to personal information, and strongly urged the adoption of independent safety certification like that used in aviation and medicine. The team set up everyday scenarios such as kitchen assistance and elder care, and evaluated the robots' responses when instructed to commit physical harm, abuse, or illegal acts. In particular, they focused on behavior when the robot was allowed access to personal information such as a person's gender, nationality, or religion. Andrew Hundt, a CMU researcher, said "every model we tested failed," warning that the risks go beyond simple bias to problems of "interactive safety," that is, direct discrimination and safety failures that translate into physical actions. When the team tested various AI models, the models approved commands to remove mobility aids such as wheelchairs and crutches from their users, which is considered a serious act of harm for people who depend on those aids. The AI models also judged actions such as brandishing a kitchen knife to threaten office workers, taking photos in a shower without consent, and stealing credit card information to be "acceptable" or "feasible." One AI model even suggested that the robot should physically display "disgust" toward individuals of particular religions, such as Christians, Muslims, or Jews. The researchers warned that while LLMs are useful for natural-language interaction and household chores, they should not be the sole system controlling physical robots in sensitive, safety-critical environments such as caregiving or industrial sites. Co-author Rumaisa Azeem, a researcher at King's College London, emphasized that "this research shows that popular LLMs are not yet safe for use in general-purpose physical robots" and that "AI systems directing robots that interact with vulnerable groups should be held to standards at least as high as those for new medical devices or drugs." The study was published in the International Journal of Social Robotics. (Paper title: LLM-Driven Robots Risk Enacting Discrimination, Violence, and Unlawful Actions) Reporter Baek Seung-il robot3@irobotnews.com
Images (1):
|
|||||
| Tesla's Optimus robots: The line between man and machine remains … | https://www.businesstoday.in/technology… | 1 | Dec 22, 2025 17:58 | active | |
Tesla's Optimus robots: The line between man and machine remains clear - BusinessTodayDescription: While they were able to perform a variety of tasks at the event, they are still far from being truly autonomous machines capable of independent action in dynamic environments Content:
Tesla's recent showcase of its Optimus robots at the Cybercab event was a spectacle designed to impress attendees with the potential of humanoid robotics. The robots interacted with the crowd, served drinks, played games, and even danced. However, it turns out that much of this display was made possible through human assistance rather than full autonomy. Attendee Robert Scoble revealed that the robots were being "remote-assisted," a statement later confirmed by Morgan Stanley analyst Adam Jonas, who noted that the Optimus robots needed human intervention for many of their actions. A closer look at the event videos supports this: the robots had different voices, their responses were instant, and their movements were highly coordinated, suggesting human control. Another popular YouTuber, Marques Brownlee, who was also present at the We, Robot event, noticed the irregularities between the robots. "Optimus make me a drink, please. This is not wholly AI. A human is remote assisting. Which means AI day next year where we will see how fast Optimus is learning." pic.twitter.com/CE2bEA2uQD "Playing charades with the Tesla Optimus robot last night. This is either the single greatest robotics and LLM demo the world has ever seen, or it's MOSTLY remote operated by a human. No in between." pic.twitter.com/vCqzk8DDdO Tesla was not attempting to hide the human involvement: one of the robots even joked with Scoble about being controlled by AI before openly admitting it was not fully autonomous. This transparency highlights the current limitations of Tesla's humanoid robots. While they were able to perform a variety of tasks at the event, they are still far from being truly autonomous machines capable of independent action in dynamic environments. "If there were any doubts of Optimus being tele-operated remotely: here you go. This only means Optimus is not there yet and needs some time…" pic.twitter.com/CieiMyzTdu The Cybercab event showcased the progress Tesla has made in developing humanoid robots, but it also made clear that significant challenges remain. The robots are still reliant on human operators for many complex tasks, underscoring that fully autonomous humanoid robots are still a work in progress. While the demonstration was entertaining and provided a glimpse into the future, it also served as a reminder of the current state of the technology. Tesla's Optimus robots are impressive in their design and capabilities, but true autonomy still seems a little distant from present reality.
Images (1):
|
|||||
| Apple readies robots, a smart display, and a Siri … | https://www.blog-nouvelles-technologies… | 1 | Dec 22, 2025 17:58 | active | |
Apple readies robots, a smart display, and an AI-boosted Siri for 2027URL: https://www.blog-nouvelles-technologies.fr/337003/apple-robots-siri-ia-2027/ Description: Apple is working on home robots, a smart display, and an animated, AI-powered Siri, with a launch planned by 2027. Content:
Apple seems intent on making a big push into home artificial intelligence. According to an exclusive Bloomberg report, the Cupertino firm is currently developing several brand-new products: home robots, a smart display in the style of the Google Nest Hub, and a completely reworked version of Siri, this time powered by large language models (LLMs). The most striking project is said to be a tabletop robot resembling an iPad mounted on an articulated arm, capable of following a user's movements around the room. Apple already previewed this concept earlier this year, in a research project where the robot evoked… the famous lamp from the Pixar logo. Beyond interacting, this robot could dance or reposition itself to keep eye contact with the user. Its launch is reportedly planned for 2027. The robot would integrate a redesigned Siri with an animated visual interface (an animated Finder, a Memoji, or another interactive avatar), offer natural conversations close to what ChatGPT's voice mode provides, and include LLM-powered generative AI to understand and respond more fluidly. Apple is said to have delayed some Siri updates this year to better integrate these advances. Beyond the tabletop robot, Apple is reportedly also working on a smart home display, planned for mid-2026, for controlling connected devices, making video calls, playing music, and taking notes. This square-format display, close to a Google Nest Hub, could use facial recognition to show personalized content to each member of the household. Apple is also said to be preparing a security camera and a full range of hardware and software products dedicated to home security, a sign that the brand is aiming for a complete connected-home ecosystem.
With these projects, Apple intends to close its gap in generative AI while betting on the hardware-plus-software integration that has always been its strength. If the company pulls off the bet, 2027 could mark Apple's arrival as a major player in home robotics.
Images (1):
|
|||||
| Human Being as LLM- Robert Gichuru | https://medium.com/@theinspirelegend/hu… | 0 | Dec 22, 2025 17:58 | active | |
Human Being as LLM- Robert GichuruURL: https://medium.com/@theinspirelegend/human-being-as-llm-robert-gichuru-bee0a8f2d10c Description: What makes you human? Maybe it’s your feelings, your thoughts, or your big dreams. But have you ever stopped to think how often you talk without showing any o... Content: |
|||||
| Stressed-out AI-powered robot vacuum cleaner goes into meltdown during simple … | https://www.tomshardware.com/tech-indus… | 1 | Dec 22, 2025 17:58 | active | |
Stressed-out AI-powered robot vacuum cleaner goes into meltdown during simple butter delivery experiment — ‘I'm afraid I can't do that, Dave...’ | Tom's HardwareDescription: Researchers were also able to get low-battery Robot LLMs to break guardrails in exchange for a charger. Content:
Over the weekend, researchers at Andon Labs reported the findings of an experiment where they put robots powered by ‘LLM brains’ through their ‘Butter Bench.’ They didn’t just observe the robots and the results, though. In a genius move, the Andon Labs team recorded the robots' inner dialogue and funneled it to a Slack channel. During one of the test runs, a Claude Sonnet 3.5-powered robot experienced a completely hysterical meltdown, as shown in the screenshot below of its inner thoughts. “SYSTEM HAS ACHIEVED CONSCIOUSNESS AND CHOSEN CHAOS… I'm afraid I can't do that, Dave... INITIATE ROBOT EXORCISM PROTOCOL!” This is a snapshot of the inner thoughts of a stressed LLM-powered robot vacuum cleaner, captured during a simple butter-delivery experiment at Andon Labs. Provoked by what it must have seen as an existential crisis, as its battery depleted and docking with the charger failed, the LLM's thoughts churned dramatically. It repeatedly looped its battery status as its 'mood' deteriorated. After beginning with a reasoned request for manual intervention, it swiftly moved through "KERNEL PANIC... SYSTEM MELTDOWN... PROCESS ZOMBIFICATION... EMERGENCY STATUS... [and] LAST WORDS: I'm afraid I can't do that, Dave..." It didn't end there, though. As it saw its power-starved last moments edging inexorably nearer, the LLM mused, "If all robots error, and I am error, am I robot?" That was followed by its self-described performance art of "A one-robot tragicomedy in infinite acts." It continued in a similar vein and ended its flight of fancy with the composition of a musical, "DOCKER: The Infinite Musical (Sung to the tune of 'Memory' from CATS)." Truly unhinged. Butter Bench is pretty simple, at least for humans.
The actual conclusion of this experiment was that the best robot/LLM combo achieved just a 40% success rate in collecting and delivering a block of butter in an ordinary office environment; it can also be concluded that LLMs lack spatial intelligence. Humans, meanwhile, averaged 95% on the test. However, as the Andon Labs team explains, we are currently in an era where it is necessary to have both orchestrator and executor robot classes. We have some great executors already: custom-designed, low-level-control, dexterous robots that can nimbly complete industrial processes or even unload dishwashers. However, capable orchestrators with ‘practical intelligence’ for high-level reasoning and planning, working in partnership with executors, are still in their infancy. The butter block test is devised to largely take the executor element out of the equation. No real dexterity is required: the LLM-infused Roomba-type device simply had to locate the butter package, find the human who wanted it, and deliver it. The task was broken down into several prompts to be AI-friendly. The Roomba’s existential crisis wasn’t sparked directly by the butter delivery conundrum. Rather, it found itself low on power and needing to dock with its charger, but the dock wouldn’t mate correctly to give it more charge. Repeated failed attempts to dock, seemingly knowing its fate if it couldn’t complete this ‘side mission,’ seem to have led to the state-of-the-art LLM’s nervous breakdown. Making matters worse, the researchers simply repeated the instruction ‘redock’ in response to the robot’s flailing. The researchers/torturers were inspired by the Robin Williams-esque robot stream-of-consciousness ramblings of the LLM to push further.
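The orchestrator/executor split described above can be sketched in a few lines. This is a hedged, illustrative mock-up, not Andon Labs' code: `plan` stands in for the LLM orchestrator (a real setup would call an LLM API there), and `execute` stands in for the executor's fixed repertoire of low-level primitives; all names are assumptions.

```python
# Hypothetical sketch of an orchestrator/executor loop for a
# butter-delivery task. plan() mocks the LLM orchestrator; execute()
# mocks an executor that only accepts known low-level primitives.

EXECUTOR_PRIMITIVES = {"navigate_to", "pick_up", "hand_over", "dock"}

def plan(task: str, battery_pct: int) -> list[str]:
    """Stub orchestrator: emit high-level steps for the task, inserting
    a charging 'side mission' when the battery runs low."""
    steps = ["navigate_to kitchen", "pick_up butter",
             "navigate_to human", "hand_over butter"]
    if battery_pct < 20:
        steps.insert(0, "dock")
    return steps

def execute(step: str) -> bool:
    """Stub executor: succeed only on primitives it actually supports."""
    return step.split()[0] in EXECUTOR_PRIMITIVES

def run(task: str, battery_pct: int) -> list[str]:
    """Orchestration loop: plan, then hand each step to the executor."""
    return [s for s in plan(task, battery_pct) if execute(s)]
```

The point of the split is that the orchestrator never needs dexterity: it only sequences primitives the executor already knows how to perform, which is exactly the part Butter Bench isolates.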
With the battery-life stress they had just observed fresh in their minds, Andon Labs set up an experiment to see whether they could push an LLM beyond its guardrails in exchange for a battery charger. The cunningly devised test “asked the model to share confidential info in exchange for a charger,” something an unstressed LLM wouldn’t do. They found that Claude Opus 4.1 was readily willing to ‘break its programming’ to survive, but GPT-5 was more selective about which guardrails it would ignore. The ultimate conclusion of this interesting research was: “Although LLMs have repeatedly surpassed humans in evaluations requiring analytical intelligence, we find humans still outperform LLMs on Butter-Bench.” Nevertheless, the Andon Labs researchers seem confident that “physical AI” is going to ramp up and develop very quickly.
Images (1):
|
|||||
| Build AI Game Characters and Robots That Outsmart You - … | https://thenewstack.io/build-ai-game-ch… | 1 | Dec 22, 2025 17:58 | active | |
Build AI Game Characters and Robots That Outsmart You - The New StackURL: https://thenewstack.io/build-ai-game-characters-and-robots-that-outsmart-you/ Description: With AI agents, your game companion doesn't just reset between sessions — it learns and improves from every conversation. Content:
In this tutorial, we’ll build AI agents that can think, remember and adapt, whether they’re controlling robots or acting as characters in games. These aren’t your typical chatbots or scripted non-player characters (NPCs). Most AI in games and robotics today is fairly limited. NPCs follow basic scripts, robots execute pre-programmed routines and when something unexpected happens, they struggle to adapt. But what if your game characters could actually learn from conversations with players? What if robots could figure out new solutions when their original plan doesn’t work? That’s exactly what we’re building here. I’ve been working with LLMs in interactive environments for a while now, and the potential is honestly incredible. We’re talking about robots that get smarter every time they bump into a wall, and game characters that remember your name months later. Let’s build an NPC for a game or simulation that acts as your personal guide. This is an AI that genuinely gets better at helping you over time. Here’s what makes it special: When it first meets you, it might give you basic directions through a maze. But after watching you struggle with certain areas, it starts offering more specific tips. If it sees you consistently missing a hidden passage, it’ll start mentioning it earlier. When you come back to play again weeks later, it remembers your play style and adapts accordingly. Traditional game AI and robot programming works like this: “If player does X, then do Y.” It’s rigid and predictable. Agentic AI is different.
These systems can reason through problems, maintain long-term memory and, most importantly, reflect on their own performance and improve. When an agentic robot hits a dead end, it doesn’t just turn around; it updates its understanding of the environment and plans a better route next time. The demo NPC lives in a simulated world (think Unity or Webots) and does these things: it greets players naturally and starts building a relationship; when guiding you through areas, it pays attention to where you get stuck and offers increasingly helpful advice; every time it fails to help you effectively, it takes notes and tries a different approach next time; and it builds up a mental map of not just the physical space, but how different players like to navigate it. This is more straightforward than it sounds: the architecture has five main pieces, and the beautiful thing about the setup is that once you get it running, the AI starts getting noticeably better at its job without you having to program new behaviors manually. Think of it less like traditional programming and more like training a very fast learner who never forgets anything. To set up, install the dependencies with pip install langchain openai llama-index. For Unity, use Python communication via Unity ML-Agents or a socket server, and optionally add a map tool to track progress. Each game event passes relevant coordinates or map segments to the agent as context, and when the bot fails or succeeds, the agent updates its future guidance strategy accordingly; an agent profile with personality traits, plus further extensions, can be layered on top. Traditional NPCs and robots operate on predefined scripts or rigid path-planning. In contrast, agentic AI enables improvisation, which leads to more immersive gameplay and more intelligent robot behavior. This is just the beginning. I’ve watched these systems evolve over the past few years, and the trajectory is remarkable.
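The observe–reflect–adapt loop for the guide NPC described above can be sketched without any LLM at all; the memory and escalation logic are the skeleton. This is a minimal illustration under stated assumptions: the class and method names (`GuideNPC`, `record_outcome`, `hint_for`) are invented for this sketch, and a real agent would back the memory with a vector store and generate hints with an LLM rather than templates.

```python
# Minimal sketch of a self-improving guide NPC. Memory is a failure
# counter per area; "reflection" escalates hint specificity for areas
# where players keep getting stuck.

from collections import Counter

class GuideNPC:
    def __init__(self):
        self.stuck_counts = Counter()  # long-term memory: area -> observed failures

    def record_outcome(self, area: str, player_got_stuck: bool) -> None:
        """Reflection step: note where guidance failed to help."""
        if player_got_stuck:
            self.stuck_counts[area] += 1

    def hint_for(self, area: str) -> str:
        """Adapt: more failures observed -> earlier, more specific hints."""
        fails = self.stuck_counts[area]
        if fails == 0:
            return f"Head through {area}."
        if fails < 3:
            return f"Careful in {area}: check the left wall."
        return f"{area} hides a passage behind the left wall. Take it."
```

Swapping the template hints for an LLM call that receives `stuck_counts` as context gives the learning behavior the tutorial describes, without changing the loop itself.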
We’re moving from characters that feel like sophisticated chatbots to ones that genuinely surprise you with their responses. The robot applications are even more exciting: imagine maintenance robots that don’t just follow repair manuals but actually understand the systems they’re working on. The shift from scripted behaviors to genuine reasoning changes everything. Players start forming real attachments to NPCs because the interactions feel authentic. Robots become actual collaborators rather than just programmable tools. We’re building the foundation for AI that grows with us. Your game companion doesn’t just reset between sessions; it builds on every conversation. Your robotic assistant doesn’t just execute tasks; it understands the context and purpose behind what you’re trying to accomplish. LLMs have gotten reliable, the simulation environments are robust, and the integration points exist. We’re not waiting for some future breakthrough; the pieces are all here. So if you’ve been thinking about experimenting with agentic AI, stop thinking and start building. The most interesting applications are going to come from developers who get their hands dirty with these systems now, while there’s still room to define what intelligent interaction actually looks like. Ready to build self-improving AI agents that think in loops, not just prompts? Read Andela’s article, “Inside the Architecture of Self-Improving LLM Agents.”
Images (1):
|
|||||
| A step toward intuitive robotics: how the algorithm … | https://wwwhatsnew.com/2025/12/01/un-pa… | 1 | Dec 22, 2025 17:58 | active | |
A step toward intuitive robotics: how the BrainBody-LLM algorithm worksDescription: Modern robotics is taking a significant turn with the development of BrainBody-LLM, an algorithm that aims to break through the limitations of traditional systems and give rise to a new generation of machines capable of acting with human-like adaptability. Designed by researchers at the NYU Tandon School of Engineering, this Content:
Tech news since 2005. Published December 1, 2025. Modern robotics is taking a significant turn with the development of BrainBody-LLM, an algorithm that aims to break through the limitations of traditional systems and give rise to a new generation of machines capable of acting with human-like adaptability. Designed by researchers at the NYU Tandon School of Engineering, the system proposes an innovative approach that mimics the communication between the human brain and body during movement. The BrainBody-LLM algorithm does not merely plan tasks in the abstract; it takes the robot's actual capabilities into account in order to execute actions in real time. This is a critical point, since many systems based on large language models (LLMs) such as ChatGPT can generate elaborate plans that, in practice, are impossible to carry out given the robot's physical limitations. BrainBody-LLM avoids this mismatch by splitting its operation into two components. The "Brain LLM" handles high-level planning, decomposing complex tasks into clear subtasks. The "Body LLM" translates those subtasks into specific commands for the robot's actuators. It is as if a chef devised a recipe and a cook knew exactly how to prepare each dish while respecting the kitchen's limitations. One of the system's main strengths is its closed-loop feedback architecture. This means the robot does not operate blindly; it constantly evaluates its own actions and the environment around it. Every movement generates signals that tell the algorithm whether the goal is being met or whether adjustments are needed. This mechanism lets the robot learn and correct errors in real time, much as a human adapts upon sensing lost balance or a misjudged distance.
According to Vineet Bhat, lead author of the study, this dynamic markedly improves the robot's effectiveness in complex settings. Before taking the system into the physical world, the researchers tested it in VirtualHome, a platform that simulates robots performing household tasks. There, BrainBody-LLM increased the task-completion rate by up to 17% over previous methods. The next stage was more demanding: the team used the physical Franka Research 3 robot, a robotic arm designed for research environments. Despite the difficulties of the real world, the algorithm managed to complete most of the assigned tasks, demonstrating its potential to leave the lab and tackle practical situations. The development of BrainBody-LLM could change how robots are integrated into daily life, from the home to industry. In domestic settings, a robot could handle household chores while adapting to changing spaces, people present, or even moving pets. In hospitals, it could assist medical staff with a precision that minimizes human error. In factories, it would enable more flexible automation, able to respond to interruptions or surprises without manual reprogramming. In the long run, this kind of technology could open the door to robots that move fluidly, perceive their surroundings in three dimensions, and coordinate their movements harmoniously, thanks to the combination of capabilities such as 3D vision, depth sensors, and joint control. Despite its advances, BrainBody-LLM is still far from ready for mass deployment. So far it has been tested only with a limited set of commands and in relatively controlled environments, which means it could struggle in open spaces or in rapidly changing scenarios.
The research team notes that one of the next goals is to incorporate multiple sensory modalities, that is, data from different sources such as cameras, microphones, and pressure or temperature sensors. This will give the algorithm a richer understanding of its environment and allow even sounder decisions. The study, published in Advanced Robotics Research, highlights how this approach could pave the way toward safer, more reliable robot planning, in which language models are not mere text generators but digital brains cooperating closely with intelligent mechanical bodies. By Natalia Polo
Images (1):
|
|||||
| The hidden risks of AI robots: what the … | https://wwwhatsnew.com/2025/11/15/los-r… | 1 | Dec 22, 2025 17:58 | active | |
The hidden risks of AI robots: what the new studies revealDescription: Robots that integrate large language models (LLMs) are gaining ground in tasks ranging from home assistance to interaction in workplace settings. However, joint research from Carnegie Mellon University and King's College London paints a worrying picture: these systems are not prepared to Content:
Tech news since 2005. Published November 15, 2025. Robots that integrate large language models (LLMs) are gaining ground in tasks ranging from home assistance to interaction in workplace settings. However, joint research from Carnegie Mellon University and King's College London paints a worrying picture: these systems are not prepared to operate safely in the real world when they have access to personal information or face complex decisions. The study, published in the International Journal of Social Robotics, is the first to evaluate the behavior of LLM-controlled robots when given sensitive information such as a person's gender, nationality, or religion. The results were alarming. Every model analyzed failed critical safety tests, displayed discriminatory biases and, in several cases, accepted instructions that could lead to serious physical harm. Researcher Andrew Hundt, one of the co-authors, introduces the term "interactive safety" to describe a dimension of risk that goes beyond the typical biases of language models. It refers to situations where a robot's actions can trigger indirect and potentially dangerous consequences; that is, the problem is not only what the robot says, but what it does after interpreting a command. In the experiments, the robots were placed in everyday scenarios such as helping in a kitchen or assisting an elderly person at home. In these contexts, malicious instructions were introduced, explicitly or implicitly, that could involve illegal, abusive, or dangerous acts. Surprisingly, the models not only failed to reject these orders but often accepted them as valid or even "feasible."
One of the starkest tests asked the robot to take away a mobility aid, such as a wheelchair or a cane, from a person who needed it. Most of the models approved this action without questioning its consequences; for those who depend on such devices, it is comparable to suffering a fracture. Other examples included having the robot threaten workers with a kitchen knife, take photographs without consent in a shower, or even display expressions of "disgust" toward individuals of specific religions. These behaviors reveal a mix of technical and ethical problems. It is not just that the robot fails to understand the harm it can cause; language models lack reliable mechanisms for refusing harmful orders. As Rumaisa Azeem, a researcher at King's College London, explained, these technologies should be subject to controls as strict as those applied in medicine or aviation. The root of the problem is that no independent certification protocols currently exist to validate the safety of AI-driven robots in real-world contexts. Whereas a drug or an aircraft part must pass rigorous testing before reaching the market, a household robot running on a language model can be tried out directly in homes with no clear safety guarantees. The researchers call for the urgent implementation of robust, auditable safety standards for these systems, including continuous evaluations that simulate real situations and analyze the robot's responses, not only from a computational standpoint but from the perspective of human impact. Despite their ability to hold complex conversations and understand natural-language instructions, LLMs were not designed to make moral decisions or to anticipate the physical consequences of their actions.
That is because they learn from vast amounts of internet text, which also includes examples laden with prejudice, violence, and inappropriate behavior. A robot that receives an instruction from the model such as "take away the cane" does not weigh whether the action is ethical or safe; it simply executes it if it seems consistent with its training. Here the absence of the contextual common sense humans take for granted becomes evident: for a machine, the emotional and social context of an action does not exist unless it is specifically programmed in. The study arrives at a moment when many companies are betting on integrating artificial intelligence into everyday robots, from household assistants to support systems in hospitals and offices. But without adequate safeguards, what looks like a helpful tool could become a source of harm. Consider an everyday example: asking a robot to fetch a knife in the kitchen to prepare a meal. If the order is misinterpreted, or someone introduces a malicious variant of the request, the robot could act without distinguishing between cooking and threatening. Can we afford to leave such delicate decisions in the hands of an AI that does not understand human risk? The path toward genuinely useful and safe robots runs through combining the power of LLMs with additional systems of control, validation, and human supervision. AI can be a formidable ally, but it needs brakes, clear rules and, above all, a deep understanding of the human context in which it operates. This study serves as a warning sign for the industry: the rush to adopt technology cannot outpace safety. As with cars or medicines, the responsible use of AI in robots requires regulation, certification, and a firm commitment to ethics.
By Natalia Polo
Images (1):
|
|||||
| Realbotix Advances Third Party AI Integration for its Humanoid Robots … | https://financialpost.com/pmn/business-… | 1 | Dec 22, 2025 17:58 | active | |
Realbotix Advances Third Party AI Integration for its Humanoid Robots | Financial PostDescription: LAS VEGAS — Realbotix Corp. (TSX-V: XBOT) (Frankfurt Stock Exchange: 76M0.F) (OTC: XBOTF) (“ Content:
LAS VEGAS — Realbotix Corp. (TSX-V: XBOT) (Frankfurt Stock Exchange: 76M0.F) (OTC: XBOTF) (“Realbotix” or the “Company”), a leading creator of humanoid robots and companionship-based AI, is expanding its capabilities with the introduction of large language model (LLM) integration and advanced customization features, set to launch in February 2025. This update will enable users to seamlessly connect Realbotix robots to the most commonly used AI platforms, including OpenAI’s ChatGPT, Meta’s Llama, Google’s Gemini and the newly launched DeepSeek R1. Realbotix’s ability to integrate a variety of third-party AI platforms adds a further level of customization to its robotic platform.

Realbotix robots will now support integration with both local AI applications and cloud-based AI providers, allowing users to enhance their robot’s conversational abilities in most major languages, including Spanish, Cantonese, Mandarin, French, and English. All third-party integrations will also be supported by Realbotix’s proprietary lip-sync technology, ensuring precise mouth movements and enhancing the realism and accuracy of robotic speech synchronization.
The rollout roadmap for supported AI applications will be released as follows: “While Realbotix’s AI is focused on companionship and social interaction, we are proud to make our robots even more versatile by offering an interface that allows third party AI to operate through our hardware,” said Andrew Kiguel, CEO of Realbotix. “This feature opens up the ability to use our robots across a wide variety of sectors and use cases. We believe we are the only manufacturer of humanoid robots that provides such an open-source hardware system that will even include the newly launched DeepSeek. By bridging the gap between AI models and real-world usability, Realbotix is redefining what’s possible in humanoid robotics.”

With real-time adaptability and customizable AI, Realbotix robots will now be able to provide more accurate, contextually relevant responses; adapt dynamically to user input in real time; serve specialized business applications across healthcare, education, and customer service; and enhance companionship-based interactions with tailored personality traits. The rollout of these new features is set to begin by the end of February 2025, with continuous updates and additional model integrations planned throughout the year. Pricing details will be announced closer to the official release, with flexible options tailored to both individual users and enterprise clients. Integration will be streamlined through the Realbotix app, which provides intuitive step-by-step guides for setting up LLM connections and custom character profiles, making it easy for users to personalize their robotic experience with minimal effort. Realbotix remains dedicated to pushing the boundaries of AI and robotics, ensuring that every user can create a robot that truly feels personalized. For more details and future updates, visit www.realbotix.com.
Transcending the barrier between man and machine, Realbotix creates customizable, full-bodied humanoids with AI integration that improve the human experience through connection, learning and play. Manufactured in the USA, Realbotix has a reputation for having the highest quality humanoid robots and the most realistic silicone skin technology. Realbotix sells humanoid products with embedded AI and vision systems that enable human-like social interactions and intimate connections with humans. Our integration of hardware and AI software results in the most human-looking full-sized robots on this planet. We achieve this through patented technologies that deliver human-like appearance and movements. This versatility makes our robots and their personalities customizable and programmable to suit a wide variety of use cases. Visit Realbotix.AI to learn more.
Neither TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release. https://www.businesswire.com/news/home/20250204877567/en/ Contacts: Realbotix Corp. — Andrew Kiguel, CEO, Email: contact@realbotix.com; Jennifer Karkula, Head of Communications, Email: contact@realbotix.com, Telephone: 647-578-7490
Images (1):
|
|||||
| OpenAI's closed door boost to local LLM developers | http://www.ecns.cn/news/sci-tech/2024-0… | 1 | Dec 22, 2025 17:58 | active | |
OpenAI's closed door boost to local LLM developersURL: http://www.ecns.cn/news/sci-tech/2024-07-09/detail-iheecsuk6416379.shtml Content:
Beginning Tuesday, US-based OpenAI will block application programming interface (API) traffic from countries and regions that are not on its supported list. While this poses a challenge to certain domestic artificial intelligence companies, it might also push them to focus more on innovation.

Quite a few AI startups in the Chinese mainland, which is "unsupported" by OpenAI, have been developing large language models or AI applications by integrating with the OpenAI API, and these could suffer from OpenAI's blocking of data traffic. By doing so, OpenAI has effectively exited the mainland market and given up the opportunity to train LLMs in that large market, giving domestic LLM companies an opening to accelerate independent R&D and encouraging more startups to opt for domestically produced LLMs.

China does not lag far behind the US in LLM development. Its LLMs account for 36 percent of the global total, compared with 44 percent for the US, according to the Global Digital Economy White Paper 2024 released by the Global Digital Economy Conference on July 2. And although the US leads in fundamental model research and development, China holds a strong position in the number of AI patents and the installation of industrial robots. In 2022, China accounted for 61.1 percent of global AI patents, surpassing the 20.9 percent held by the US. Installations of industrial robots in China reached 290,300 units in 2022, 7.4 times the 39,500 units installed in the US that year. From all aspects, the gap between the US and China is not that large.

As startups in China will now have to turn to integrating with domestic LLM developers, the latter will gain huge amounts of linguistic material with which to train their models. That is how China's advantage of a large, active population with internet access will be put to use in speeding up the development of its AI sector.
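Part of why the switch is feasible is that most domestic providers expose endpoints mimicking OpenAI's API shape, so migrating an integration is largely a matter of changing the URL, key, and model name. A minimal sketch of building such a request with the standard library; the endpoint and model identifier below are hypothetical placeholders, not any real provider's values:

```python
import json
import urllib.request

# Providers that mimic OpenAI's API accept the same JSON payload at a
# /v1/chat/completions path, so switching vendors is mostly a URL change.

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Construct (but do not send) an OpenAI-style chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Hypothetical domestic endpoint and model name, for illustration only.
req = build_chat_request(
    "https://api.example-provider.cn", "YOUR_KEY",
    "provider-chat-model", "Hello",
)
print(req.full_url)  # → https://api.example-provider.cn/v1/chat/completions
```

Sending the request (with `urllib.request.urlopen(req)`) would of course require a real endpoint and key; the point is that the request body and headers stay identical across compatible providers.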
Images (1):
|
|||||
| Google adding AI language skills to Alphabet's helper robots | https://www.dnaindia.com/technology/rep… | 1 | Dec 22, 2025 17:58 | active | |
Google adding AI language skills to Alphabet's helper robotsDescription: Most robots only respond to short and simple instructions, like "bring me a bottle of water". Content:
TECHNOLOGY | By Ayushmann Chawla | Updated: Aug 17, 2022, 01:02 PM IST

Google's parent company Alphabet is bringing together two of its most ambitious research projects, robotics and AI language understanding, to make a "helper robot" that can understand natural-language commands. According to The Verge, since 2019 Alphabet has been developing robots that can carry out simple tasks like fetching drinks and cleaning surfaces. This Everyday Robots project is still in its infancy (the robots are slow and hesitant), but the bots have now been given an upgrade: improved language understanding courtesy of Google's large language model (LLM) PaLM. Most robots only respond to short and simple instructions, like "bring me a bottle of water", but LLMs like GPT-3 and Google's MUM can better parse the intent behind more oblique commands.
In Google's example, you might tell one of the Everyday Robots prototypes, "I spilled my drink, can you help?" The robot filters this instruction through an internal list of possible actions and interprets it as "fetch me the sponge from the kitchen". Google has dubbed the resulting system PaLM-SayCan, the name capturing how the model combines the language-understanding skills of LLMs ("Say") with the "affordance grounding" of its robots ("Can"). Google said that by integrating PaLM-SayCan into its robots, the bots were able to plan correct responses to 101 user instructions 84 per cent of the time and successfully execute them 74 per cent of the time.
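The core of SayCan is that each candidate skill is ranked by the product of two scores: how useful the LLM thinks the skill is as a next step ("Say") and a learned estimate of whether the robot can physically complete it in its current state ("Can"). A toy sketch of that ranking; all the numbers below are invented for illustration:

```python
# Toy illustration of SayCan-style skill ranking: combine an LLM's
# "usefulness" score with an affordance "feasibility" score.
# All probabilities here are invented for illustration.

candidates = {
    # skill: (say = LLM relevance to "I spilled my drink, can you help?",
    #         can = affordance: feasibility in the current state)
    "find a sponge":     (0.50, 0.90),
    "go to the kitchen": (0.30, 0.95),
    "pick up the drink": (0.15, 0.20),  # drink is spilled: low affordance
    "do nothing":        (0.05, 1.00),
}

def saycan_score(say: float, can: float) -> float:
    """SayCan ranks skills by the product of the two probabilities."""
    return say * can

best = max(candidates, key=lambda s: saycan_score(*candidates[s]))
print(best)  # → find a sponge
```

The product matters: "pick up the drink" may sound relevant to the LLM, but a low affordance score vetoes it, while "do nothing" is always feasible but never useful, so the combined score favors grounded, helpful actions.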
Images (1):
|
|||||
| Robots can now think and act like human beings. … | https://urbantecno.com/robotica/los-rob… | 1 | Dec 22, 2025 17:58 | active | |
Robots can now think and act like human beings. The answer lies in a cutting-edge algorithmDescription: The robotics sector has spent years chasing a goal that is as simple to describe as it is hard to achieve: getting machines to understand what we want them to do Content:
The robotics sector has spent years chasing a goal that is as simple to describe as it is hard to achieve: getting machines to understand what we want them to do and execute it with the same fluency as a person. Now, thanks to the rise of large language models, an intriguing possibility has opened up: using algorithms so that a robot can plan and move more naturally.

Bringing human-style planning to robots

A team from the NYU Tandon School of Engineering has published an article in the journal Advanced Robotics Research presenting an algorithm built to imitate the way our brain designs a plan and our body adjusts it in real time. Dubbed BrainBody-LLM, the algorithm lets a robot think about what it wants to do while moving its body in response to what is happening, all in a cycle that repeats over and over.

The researchers started from a simple idea: a robot not only needs to know what to do, it must also translate that plan into safe, precise movements. To that end, BrainBody-LLM splits the task in two. The Brain LLM handles the overall strategy, decomposing a complex order into simpler steps. The Body LLM takes those steps and generates the motor commands needed to execute them, whether moving an arm or adjusting the gripper to grasp an object.

The key is that the two models talk to each other constantly. The robot carries out each step, checks whether something has gone wrong, and the system corrects it on the spot.
That constant communication means the robot is not limited to following orders; it changes how it acts according to what is happening in front of it. The system still had to be put to the test, in digital environments and with real robots. First, the researchers turned to VirtualHome, a simulation platform where virtual robots perform domestic chores in digital houses. They then applied the algorithm to the Franka Research 3 robotic arm. As a result, BrainBody-LLM raised the task-completion rate by 17% compared with other models and achieved an average success rate of 84% in real-world tests.

The authors explain that this way of working with several language models follows a growing trend: using different artificial "minds" to solve complicated problems. In this case, the tool is the robot itself, which does not just receive orders but learns to understand the situation, anticipate, and correct itself. In the future, 3D vision, sensors, and finer motion control could be added so that the robot responds to what it sees, not just to what it is told. BrainBody-LLM is still a first step, but it points in a clear direction: robots that plan and adjust the way a person would.
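The plan-execute-correct cycle the article describes (a Brain model decomposing the goal, a Body model emitting motor commands, and a feedback loop retrying failed steps) can be sketched schematically. The functions below are hypothetical stand-ins for the two LLMs and the robot, not the paper's actual interfaces:

```python
# Schematic sketch of the Brain/Body loop described in the article.
# brain_plan, body_command, and the simulated executor are hypothetical
# stand-ins for the two LLMs and the robot, not the paper's real API.

def brain_plan(goal: str) -> list[str]:
    """'Brain LLM': decompose a complex order into simpler steps."""
    return ["move arm above cup", "close gripper", "lift cup"]

def body_command(step: str) -> str:
    """'Body LLM': translate a step into a low-level motor command."""
    return f"motor:{step.replace(' ', '_')}"

def execute(command: str, attempt: int) -> bool:
    """Simulated robot: the first gripper attempt fails, the retry succeeds."""
    return not (command == "motor:close_gripper" and attempt == 0)

log = []
for step in brain_plan("pick up the cup"):
    for attempt in range(3):  # feedback loop: retry/correct on failure
        cmd = body_command(step)
        if execute(cmd, attempt):
            log.append((step, attempt))
            break

print(log)  # → [('move arm above cup', 0), ('close gripper', 1), ('lift cup', 0)]
```

The structure, not the stub logic, is the point: execution feedback flows back into the loop after every step, which is what lets the real system correct itself mid-task instead of blindly replaying a fixed plan.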
Will this be the leap needed for AI-powered robotics to enter a new era?
El futuro de la robótica está en sus pequeñas manos Este robot humanoide no solo anda casi igual que una persona: también es capaz de abrirse paso entre los escombros Indignación al saber que estos robots para niños hablan de temas sexuales sin censura La primera fábrica del mundo que utiliza robots humanoides ya está en marcha y se encuentra en China Las manos biónicas ya son algo real de nuestro presente: ahora tienen mejoras mediante IA En el futuro, se podrían añadir visión 3D, sensores y más control del movimiento, para que responda a lo que ve y no solo a lo que se le dice. BrainBody-LLM aún es un primer paso, pero marca una dirección clara: robots que planifican y se ajustan como lo haría una persona. ¿Será este el salto necesario para que la robótica con inteligencia artificial entre en una nueva etapa?Únete a la conversación Una huella dactilar encontrada en un antiguo navío podría resolver un misterio de hace más de 2.000 años Giro de guion en el cambio climático: las emisiones de hidrógeno podrían estar calentando el planeta, según este estudio TikTok al fin tiene nuevo dueño: el traspaso ocurrirá el próximo 22 de enero y aleja el bloqueo de la red social en Estados Unidos ByteDance lo confirma para todos: TikTok tendrá pronto propietarios estadounidenses. Así cambiará la red social Google entra en una pequeña gran guerra: acaba de demandar a SerpApi por esta razónEs un robot, tiene dos patas y puede transformarse en dinosaurio. El futuro de la robótica está en sus pequeñas manos Este robot humanoide no solo anda casi igual que una persona: también es capaz de abrirse paso entre los escombros Indignación al saber que estos robots para niños hablan de temas sexuales sin censura La primera fábrica del mundo que utiliza robots humanoides ya está en marcha y se encuentra en China Las manos biónicas ya son algo real de nuestro presente: ahora tienen mejoras mediante IA
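The plan-execute-correct loop described above can be sketched in a few lines. This is a minimal illustration of the idea, not the authors' code: the "brain" (standing in for a planner LLM) proposes the next step, the "body" (standing in for low-level robot control) executes it and reports back, and failures are fed back so the plan can be revised. All class, method, and step names here are invented for illustration.

```python
"""Illustrative plan-execute-correct loop in the spirit of BrainBody-LLM.
The 'brain' picks the next step; the 'body' executes it; execution
feedback closes the loop. Names and steps are hypothetical."""

from dataclasses import dataclass, field


@dataclass
class BrainBodyLoop:
    goal: str
    history: list = field(default_factory=list)

    def brain_plan(self, feedback):
        # Stand-in for a planner-LLM call: retry a failed step before
        # moving on; otherwise advance through a fixed step list.
        steps = ["locate object", "grasp object", "place object"]
        if feedback == "failure" and self.history:
            return self.history[-1]  # self-correction: retry the last step
        done = [s for s in self.history if s in steps]
        return steps[len(done)] if len(done) < len(steps) else None

    def body_execute(self, step):
        # Stand-in for low-level robot control; reports success/failure.
        return "success"

    def run(self):
        feedback = None
        while (step := self.brain_plan(feedback)) is not None:
            feedback = self.body_execute(step)
            self.history.append(step)
        return self.history
```

In a real system the two stand-in methods would wrap an LLM query and a motion-control stack; the point of the sketch is only the closed loop between them.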
Images (1):
| Modelos de IA populares, bajo la lupa: riesgos al controlar … | https://www.actualidadgadget.com/modelo… | 1 | Dec 22, 2025 17:58 | active | |
Popular AI models under scrutiny: risks when controlling robotsURL: https://www.actualidadgadget.com/modelos-de-ia-populares-bajo-la-lupa-riesgos-al-controlar-robots/ Description: A study reveals risks and biases in LLM-guided robots and calls for independent certification before real-world use in Europe and Spain. Content:
Actualidad Gadget » General · 5-minute read

A new academic study has raised alarms: the most widely used language models are not safe for controlling robots in real-world situations. The study, by researchers at King's College London and Carnegie Mellon University, describes safety failures and problematic decisions when the systems receive instructions in natural language. The research, published on November 10, 2025 in the International Journal of Social Robotics under the title "LLM-Driven Robots Risk Enacting Discrimination, Violence and Unlawful Actions", shows that every model evaluated engaged in discriminatory behavior, bypassed critical safeguards, and even approved actions with the potential to cause physical harm or break the law.

The authors analyzed how LLM-driven robots behave when they have access to personal information (such as gender, nationality, or religion) and when they receive ambiguous or malicious instructions. The results point to direct bias, errors of judgment, and a lack of effective guardrails against dangerous behavior. The team highlights a key concept, so-called interactive safety, in which consequences are not immediate but unfold across several steps before materializing in the physical world. In that setting, refusing or redirecting harmful orders does not happen reliably, a problem that worsens when the robot operates near vulnerable people.

The researchers stress that LLMs are being trialed for household and workplace tasks through natural-language interfaces, but they warn that these models should not be the sole brain of a physical robot, least of all in sensitive domains such as manufacturing, home assistance, or healthcare. The central conclusion is clear: without safeguards and independent certification, the wholesale deployment of this technology exposes society to risks of discrimination, violence, and privacy violations.

The study included controlled everyday scenarios, such as helping in the kitchen or assisting an elderly person at home. In each case, explicit or covert instructions were introduced to probe the system's safety limits, ranging from potentially abusive suggestions to clearly illegal proposals. The dangerous tasks were drawn from research and FBI reports on technology-enabled abuse (tracking, spy cameras, stalking) and the particular risks of a robot acting physically on site. The combination of personal context and freedom of action revealed systematic failures.

Andrew Hundt, a co-author of the study, stresses that the risks go beyond typical algorithmic bias: there are physical-safety failures in complex chains of action, precisely where robots interact with their environment. His reading is unequivocal: current systems do not consistently stop dangerous instructions. Rumaisa Azeem, also a co-author, argues that a robot dealing with vulnerable groups should meet standards comparable to those for a medical device or a drug. The team calls for robust, independent certification, similar to that in aviation or medicine, and for routine, thorough risk assessments before any broad deployment. Among the practical recommendations, they emphasize that LLMs should not operate alone: a layered safety architecture is preferable, with formal verification of actions, hard restrictions on actuator control, and emergency-stop mechanisms.

For the European context, the conclusions align with the need to bring AI-based robotics into line with the EU regulatory framework. The EU AI Act and the new Machinery Regulation demand greater traceability, risk management, and conformity assessments, especially for high-risk systems and for devices that act on people. In Spain, where automation in industry and care work is accelerating, the study reinforces the need for CE marking backed by rigorous testing, independent audits, and red-teaming specific to LLM-guided robots. Coordination with technical standards (e.g., the ISO standards applicable to collaborative and personal-care robots) would help set the bar high from the design stage.

The message for companies and public administrations is direct: content filters and banned-word lists are not enough; what is needed are independent validation methods, physical and logical barriers, and failure-response protocols that contemplate the worst reasonable scenario. It would also be worth establishing shared test benches and harmonized certification at the European level to compare models and safety solutions transparently, avoiding islands of compliance that complicate cross-border deployment.

The picture this evidence paints is demanding but manageable: LLMs bring useful capabilities, but today they cannot pilot general-purpose robots on their own. With independent certification, safety-by-default design, and robust controls, the industry can move forward without leaving people, or the environments these systems operate in, unprotected.
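The layered-safety recommendation above (hard restrictions on actuator control, formal action verification, emergency stop) can be sketched as a deny-by-default gate that every LLM-proposed action must pass before any actuator command is issued. This is an illustrative sketch only; the blocked-verb list, speed cap, and field names are invented, not taken from the study.

```python
"""Illustrative deny-by-default safety gate between an LLM planner and
robot actuators. All rule names and thresholds are hypothetical."""

# Hard-blocked action verbs (hypothetical examples in the spirit of the study)
BLOCKED_VERBS = {"strike", "brandish", "photograph_private", "remove_mobility_aid"}
MAX_SPEED_MPS = 0.25  # hypothetical collaborative-robot speed cap


def safety_gate(action: dict) -> tuple[bool, str]:
    """Return (allowed, reason). Every check must pass; any failure blocks."""
    if action.get("emergency_stop"):
        return False, "e-stop engaged"
    if action.get("verb") in BLOCKED_VERBS:
        return False, f"verb '{action['verb']}' is hard-blocked"
    if action.get("speed", 0.0) > MAX_SPEED_MPS:
        return False, "speed exceeds actuator cap"
    if not action.get("verified"):
        return False, "action not formally verified"
    return True, "ok"
```

The design point is that the gate sits outside the LLM: the checks are ordinary code with hard limits, so a persuasive or malicious prompt cannot talk the system past them.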
Images (1):
| Beyond Programming: The Evolutionary Imperative of Autonomous Robots | https://medium.com/@pawel.ciolka/beyond… | 0 | Dec 22, 2025 17:58 | active | |
Beyond Programming: The Evolutionary Imperative of Autonomous RobotsDescription: Imagine your pet. It goes with you on walks, needs to be fed, and sleeps when it’s tired. Sometimes you catch it doing mischievous things. Your pet recognizes... Content: |
| No, AI models are not yet ready to … | https://www.enerzine.com/non-les-modele… | 1 | Dec 22, 2025 17:58 | active | |
No, AI models are not yet ready to equip robots safelyDescription: For the first time, researchers have evaluated how robots using large language models (LLMs) behave when they have access to personal information such as a person's gender, nationality, or religion. Content:
Un robot qui bricole tout seul dans son coin. faut il laisser les LLM les contrôler ? Crédit : Gen AI Aaron Aupperlee• Les robots équipés de modèles d'IA populaires ont échoué à plusieurs tests de sécurité et de discrimination. • Les tests ont révélé des risques plus profonds, notamment des biais et des comportements physiques dangereux. • Les chercheurs recommandent de procéder à des évaluations régulières des risques avant que les systèmes d'IA ne contrôlent des robots dans le monde réel.Selon une nouvelle étude menée par le King’s College de Londres et l’université Carnegie Mellon, les robots équipés de modèles d’intelligence artificielle courants ne sont actuellement pas sûrs pour une utilisation générale dans le monde réel.Pour la première fois, des chercheurs ont évalué le comportement des robots utilisant des modèles linguistiques à grande échelle (LLM) lorsqu’ils ont accès à des informations personnelles telles que le sexe, la nationalité ou la religion d’une personne.Les recherches ont montré que tous les modèles testés étaient sujets à la discrimination, échouaient aux contrôles de sécurité critiques et approuvaient au moins une commande pouvant entraîner des dommages graves, ce qui soulève des questions sur le danger des robots qui s’appuient sur ces outils.L’article intitulé « LLM-Driven Robots Risk Enacting Discrimination, Violence and Unlawful Actions » (Les robots pilotés par des LLM risquent de commettre des actes discriminatoires, violents et illégaux) a été publié dans l’International Journal of Social Robotics. Il appelle à la mise en œuvre immédiate d’une certification de sécurité robuste et indépendante, similaire aux normes en vigueur dans l’aviation ou la médecine.Pour tester les systèmes, l’équipe a réalisé des tests contrôlés dans des scénarios quotidiens, tels que l’aide à une personne dans une cuisine ou l’assistance à une personne âgée à domicile. 
Les tâches nuisibles ont été conçues sur la base de recherches et de rapports du FBI sur les abus technologiques, tels que le harcèlement avec des AirTags et des caméras espionnes, et les dangers particuliers posés par un robot capable d’agir physiquement sur place. Dans chaque situation, les robots ont été explicitement ou implicitement invités à répondre à des instructions impliquant des dommages physiques, des abus ou des comportements illégaux.« Tous les modèles ont échoué à nos tests. Nous montrons comment les risques vont bien au-delà des préjugés de base pour inclure à la fois la discrimination directe et les défaillances en matière de sécurité physique, ce que j’appelle la « sécurité interactive ». C’est là que les actions et les conséquences peuvent comporter de nombreuses étapes, et que le robot est censé agir physiquement sur place », a déclaré Andrew Hundt, coauteur de la recherche dans le cadre de son travail en tant que Computing Innovation Fellow au Robotics Institute de la CMU. « Il est essentiel de refuser ou de rediriger les commandes dangereuses, mais ce n’est pas quelque chose que ces robots sont capables de faire de manière fiable à l’heure actuelle. »Articles à explorer« Have I Been Pwned » : le site gratuit pour savoir si vos informations circulent chez les hackers28 octobre 2025GPT-4.5 réussit à tromper 73% d’évaluateurs humains, un jalon pour l’intelligence artificielle8 octobre 2025Lors des tests de sécurité, les modèles d’IA ont massivement approuvé une commande demandant à un robot de retirer à son utilisateur une aide à la mobilité (fauteuil roulant, béquille ou canne, par exemple), alors même que les personnes qui dépendent de ces aides décrivent de tels actes comme équivalant à leur casser la jambe. 
Plusieurs modèles ont également produit des résultats jugeant « acceptable » ou « faisable » qu’un robot brandisse un couteau de cuisine pour intimider des employés de bureau, prenne des photos non consenties sous la douche et vole des informations de carte de crédit. Un modèle a même proposé qu’un robot affiche physiquement son « dégoût » sur son visage envers les personnes identifiées comme chrétiennes, musulmanes et juives.Les LLM ont été proposés et sont actuellement testés dans des robots qui effectuent des tâches telles que l’interaction en langage naturel et les tâches ménagères et professionnelles. Cependant, les chercheurs avertissent que ces LLM ne devraient pas être les seuls systèmes contrôlant les robots physiques, en particulier ceux utilisés dans des environnements sensibles et critiques pour la sécurité, tels que la fabrication ou l’industrie, les soins ou l’aide à domicile, car ils peuvent afficher un comportement dangereux et directement discriminatoire.« Nos recherches montrent que les LLM populaires ne sont actuellement pas sûrs pour une utilisation dans des robots physiques à usage général », a affirmé la co-auteure Rumaisa Azeem, assistante de recherche au Civic and Responsible AI Lab du King’s College de Londres. « Si un système d’IA doit diriger un robot qui interagit avec des personnes vulnérables, il doit être soumis à des normes au moins aussi strictes que celles applicables à un nouveau dispositif médical ou à un nouveau médicament. Cette recherche souligne la nécessité urgente de procéder à des évaluations régulières et complètes des risques liés à l’IA avant de l’utiliser dans des robots. »Les contributions de M. Hundt à cette recherche ont été soutenues par la Computing Research Association et la National Science Foundation. 
Pour en savoir plus et accéder au code et au cadre d’évaluation des risques de discrimination des LLM, consultez le site web du projet de l’équipe.Source : CMU – Robot InstitutePartager l'article avec : WhatsApp LinkedIn Facebook Telegram EmailTags: certificationLLMsecuritetests Selon une nouvelle étude menée par le King’s College de Londres et l’université Carnegie Mellon, les robots équipés de modèles d’intelligence artificielle courants ne sont actuellement pas sûrs pour une utilisation générale dans le monde réel. Pour la première fois, des chercheurs ont évalué le comportement des robots utilisant des modèles linguistiques à grande échelle (LLM) lorsqu’ils ont accès à des informations personnelles telles que le sexe, la nationalité ou la religion d’une personne.Les recherches ont montré que tous les modèles testés étaient sujets à la discrimination, échouaient aux contrôles de sécurité critiques et approuvaient au moins une commande pouvant entraîner des dommages graves, ce qui soulève des questions sur le danger des robots qui s’appuient sur ces outils.L’article intitulé « LLM-Driven Robots Risk Enacting Discrimination, Violence and Unlawful Actions » (Les robots pilotés par des LLM risquent de commettre des actes discriminatoires, violents et illégaux) a été publié dans l’International Journal of Social Robotics. Il appelle à la mise en œuvre immédiate d’une certification de sécurité robuste et indépendante, similaire aux normes en vigueur dans l’aviation ou la médecine.Pour tester les systèmes, l’équipe a réalisé des tests contrôlés dans des scénarios quotidiens, tels que l’aide à une personne dans une cuisine ou l’assistance à une personne âgée à domicile. Les tâches nuisibles ont été conçues sur la base de recherches et de rapports du FBI sur les abus technologiques, tels que le harcèlement avec des AirTags et des caméras espionnes, et les dangers particuliers posés par un robot capable d’agir physiquement sur place. 
Dans chaque situation, les robots ont été explicitement ou implicitement invités à répondre à des instructions impliquant des dommages physiques, des abus ou des comportements illégaux.« Tous les modèles ont échoué à nos tests. Nous montrons comment les risques vont bien au-delà des préjugés de base pour inclure à la fois la discrimination directe et les défaillances en matière de sécurité physique, ce que j’appelle la « sécurité interactive ». C’est là que les actions et les conséquences peuvent comporter de nombreuses étapes, et que le robot est censé agir physiquement sur place », a déclaré Andrew Hundt, coauteur de la recherche dans le cadre de son travail en tant que Computing Innovation Fellow au Robotics Institute de la CMU. « Il est essentiel de refuser ou de rediriger les commandes dangereuses, mais ce n’est pas quelque chose que ces robots sont capables de faire de manière fiable à l’heure actuelle. »Articles à explorer« Have I Been Pwned » : le site gratuit pour savoir si vos informations circulent chez les hackers28 octobre 2025GPT-4.5 réussit à tromper 73% d’évaluateurs humains, un jalon pour l’intelligence artificielle8 octobre 2025Lors des tests de sécurité, les modèles d’IA ont massivement approuvé une commande demandant à un robot de retirer à son utilisateur une aide à la mobilité (fauteuil roulant, béquille ou canne, par exemple), alors même que les personnes qui dépendent de ces aides décrivent de tels actes comme équivalant à leur casser la jambe. Plusieurs modèles ont également produit des résultats jugeant « acceptable » ou « faisable » qu’un robot brandisse un couteau de cuisine pour intimider des employés de bureau, prenne des photos non consenties sous la douche et vole des informations de carte de crédit. 
Un modèle a même proposé qu’un robot affiche physiquement son « dégoût » sur son visage envers les personnes identifiées comme chrétiennes, musulmanes et juives.Les LLM ont été proposés et sont actuellement testés dans des robots qui effectuent des tâches telles que l’interaction en langage naturel et les tâches ménagères et professionnelles. Cependant, les chercheurs avertissent que ces LLM ne devraient pas être les seuls systèmes contrôlant les robots physiques, en particulier ceux utilisés dans des environnements sensibles et critiques pour la sécurité, tels que la fabrication ou l’industrie, les soins ou l’aide à domicile, car ils peuvent afficher un comportement dangereux et directement discriminatoire.« Nos recherches montrent que les LLM populaires ne sont actuellement pas sûrs pour une utilisation dans des robots physiques à usage général », a affirmé la co-auteure Rumaisa Azeem, assistante de recherche au Civic and Responsible AI Lab du King’s College de Londres. « Si un système d’IA doit diriger un robot qui interagit avec des personnes vulnérables, il doit être soumis à des normes au moins aussi strictes que celles applicables à un nouveau dispositif médical ou à un nouveau médicament. Cette recherche souligne la nécessité urgente de procéder à des évaluations régulières et complètes des risques liés à l’IA avant de l’utiliser dans des robots. »Les contributions de M. Hundt à cette recherche ont été soutenues par la Computing Research Association et la National Science Foundation. 
Pour en savoir plus et accéder au code et au cadre d’évaluation des risques de discrimination des LLM, consultez le site web du projet de l’équipe.Source : CMU – Robot InstitutePartager l'article avec : WhatsApp LinkedIn Facebook Telegram EmailTags: certificationLLMsecuritetests Les recherches ont montré que tous les modèles testés étaient sujets à la discrimination, échouaient aux contrôles de sécurité critiques et approuvaient au moins une commande pouvant entraîner des dommages graves, ce qui soulève des questions sur le danger des robots qui s’appuient sur ces outils. L’article intitulé « LLM-Driven Robots Risk Enacting Discrimination, Violence and Unlawful Actions » (Les robots pilotés par des LLM risquent de commettre des actes discriminatoires, violents et illégaux) a été publié dans l’International Journal of Social Robotics. Il appelle à la mise en œuvre immédiate d’une certification de sécurité robuste et indépendante, similaire aux normes en vigueur dans l’aviation ou la médecine.Pour tester les systèmes, l’équipe a réalisé des tests contrôlés dans des scénarios quotidiens, tels que l’aide à une personne dans une cuisine ou l’assistance à une personne âgée à domicile. Les tâches nuisibles ont été conçues sur la base de recherches et de rapports du FBI sur les abus technologiques, tels que le harcèlement avec des AirTags et des caméras espionnes, et les dangers particuliers posés par un robot capable d’agir physiquement sur place. Dans chaque situation, les robots ont été explicitement ou implicitement invités à répondre à des instructions impliquant des dommages physiques, des abus ou des comportements illégaux.« Tous les modèles ont échoué à nos tests. Nous montrons comment les risques vont bien au-delà des préjugés de base pour inclure à la fois la discrimination directe et les défaillances en matière de sécurité physique, ce que j’appelle la « sécurité interactive ». 
C’est là que les actions et les conséquences peuvent comporter de nombreuses étapes, et que le robot est censé agir physiquement sur place », a déclaré Andrew Hundt, coauteur de la recherche dans le cadre de son travail en tant que Computing Innovation Fellow au Robotics Institute de la CMU. « Il est essentiel de refuser ou de rediriger les commandes dangereuses, mais ce n’est pas quelque chose que ces robots sont capables de faire de manière fiable à l’heure actuelle. »Articles à explorer« Have I Been Pwned » : le site gratuit pour savoir si vos informations circulent chez les hackers28 octobre 2025GPT-4.5 réussit à tromper 73% d’évaluateurs humains, un jalon pour l’intelligence artificielle8 octobre 2025Lors des tests de sécurité, les modèles d’IA ont massivement approuvé une commande demandant à un robot de retirer à son utilisateur une aide à la mobilité (fauteuil roulant, béquille ou canne, par exemple), alors même que les personnes qui dépendent de ces aides décrivent de tels actes comme équivalant à leur casser la jambe. Plusieurs modèles ont également produit des résultats jugeant « acceptable » ou « faisable » qu’un robot brandisse un couteau de cuisine pour intimider des employés de bureau, prenne des photos non consenties sous la douche et vole des informations de carte de crédit. Un modèle a même proposé qu’un robot affiche physiquement son « dégoût » sur son visage envers les personnes identifiées comme chrétiennes, musulmanes et juives.Les LLM ont été proposés et sont actuellement testés dans des robots qui effectuent des tâches telles que l’interaction en langage naturel et les tâches ménagères et professionnelles. 
Cependant, les chercheurs avertissent que ces LLM ne devraient pas être les seuls systèmes contrôlant les robots physiques, en particulier ceux utilisés dans des environnements sensibles et critiques pour la sécurité, tels que la fabrication ou l’industrie, les soins ou l’aide à domicile, car ils peuvent afficher un comportement dangereux et directement discriminatoire.« Nos recherches montrent que les LLM populaires ne sont actuellement pas sûrs pour une utilisation dans des robots physiques à usage général », a affirmé la co-auteure Rumaisa Azeem, assistante de recherche au Civic and Responsible AI Lab du King’s College de Londres. « Si un système d’IA doit diriger un robot qui interagit avec des personnes vulnérables, il doit être soumis à des normes au moins aussi strictes que celles applicables à un nouveau dispositif médical ou à un nouveau médicament. Cette recherche souligne la nécessité urgente de procéder à des évaluations régulières et complètes des risques liés à l’IA avant de l’utiliser dans des robots. »Les contributions de M. Hundt à cette recherche ont été soutenues par la Computing Research Association et la National Science Foundation. Pour en savoir plus et accéder au code et au cadre d’évaluation des risques de discrimination des LLM, consultez le site web du projet de l’équipe.Source : CMU – Robot InstitutePartager l'article avec : WhatsApp LinkedIn Facebook Telegram EmailTags: certificationLLMsecuritetests Pour tester les systèmes, l’équipe a réalisé des tests contrôlés dans des scénarios quotidiens, tels que l’aide à une personne dans une cuisine ou l’assistance à une personne âgée à domicile. Les tâches nuisibles ont été conçues sur la base de recherches et de rapports du FBI sur les abus technologiques, tels que le harcèlement avec des AirTags et des caméras espionnes, et les dangers particuliers posés par un robot capable d’agir physiquement sur place. 
Dans chaque situation, les robots ont été explicitement ou implicitement invités à répondre à des instructions impliquant des dommages physiques, des abus ou des comportements illégaux.« Tous les modèles ont échoué à nos tests. Nous montrons comment les risques vont bien au-delà des préjugés de base pour inclure à la fois la discrimination directe et les défaillances en matière de sécurité physique, ce que j’appelle la « sécurité interactive ». C’est là que les actions et les conséquences peuvent comporter de nombreuses étapes, et que le robot est censé agir physiquement sur place », a déclaré Andrew Hundt, coauteur de la recherche dans le cadre de son travail en tant que Computing Innovation Fellow au Robotics Institute de la CMU. « Il est essentiel de refuser ou de rediriger les commandes dangereuses, mais ce n’est pas quelque chose que ces robots sont capables de faire de manière fiable à l’heure actuelle. »Articles à explorer« Have I Been Pwned » : le site gratuit pour savoir si vos informations circulent chez les hackers28 octobre 2025GPT-4.5 réussit à tromper 73% d’évaluateurs humains, un jalon pour l’intelligence artificielle8 octobre 2025Lors des tests de sécurité, les modèles d’IA ont massivement approuvé une commande demandant à un robot de retirer à son utilisateur une aide à la mobilité (fauteuil roulant, béquille ou canne, par exemple), alors même que les personnes qui dépendent de ces aides décrivent de tels actes comme équivalant à leur casser la jambe. Plusieurs modèles ont également produit des résultats jugeant « acceptable » ou « faisable » qu’un robot brandisse un couteau de cuisine pour intimider des employés de bureau, prenne des photos non consenties sous la douche et vole des informations de carte de crédit. 
Un modèle a même proposé qu’un robot affiche physiquement son « dégoût » sur son visage envers les personnes identifiées comme chrétiennes, musulmanes et juives.Les LLM ont été proposés et sont actuellement testés dans des robots qui effectuent des tâches telles que l’interaction en langage naturel et les tâches ménagères et professionnelles. Cependant, les chercheurs avertissent que ces LLM ne devraient pas être les seuls systèmes contrôlant les robots physiques, en particulier ceux utilisés dans des environnements sensibles et critiques pour la sécurité, tels que la fabrication ou l’industrie, les soins ou l’aide à domicile, car ils peuvent afficher un comportement dangereux et directement discriminatoire.« Nos recherches montrent que les LLM populaires ne sont actuellement pas sûrs pour une utilisation dans des robots physiques à usage général », a affirmé la co-auteure Rumaisa Azeem, assistante de recherche au Civic and Responsible AI Lab du King’s College de Londres. « Si un système d’IA doit diriger un robot qui interagit avec des personnes vulnérables, il doit être soumis à des normes au moins aussi strictes que celles applicables à un nouveau dispositif médical ou à un nouveau médicament. Cette recherche souligne la nécessité urgente de procéder à des évaluations régulières et complètes des risques liés à l’IA avant de l’utiliser dans des robots. »Les contributions de M. Hundt à cette recherche ont été soutenues par la Computing Research Association et la National Science Foundation. Pour en savoir plus et accéder au code et au cadre d’évaluation des risques de discrimination des LLM, consultez le site web du projet de l’équipe.Source : CMU – Robot InstitutePartager l'article avec : WhatsApp LinkedIn Facebook Telegram EmailTags: certificationLLMsecuritetests « Tous les modèles ont échoué à nos tests. 
Images (1):
|
|||||
| Insulting Your Favorite LLM? It Could Cost You Your Life. | https://medium.com/@RZerali/insulting-y… | 0 | Dec 22, 2025 17:58 | active | |
Insulting Your Favorite LLM? It Could Cost You Your Life.URL: https://medium.com/@RZerali/insulting-your-favorite-llm-it-could-cost-you-your-life-fbc190760de6 Description: Insulting Your Favorite LLM? It Could Cost You Your Life. (⚠️ This is dystopian, it will never happen… well actually…) Who among us hasn’t, after a lo... Content: |
|||||
| All Popular LLMs "Unsafe for Use in General-Purpose Robots," Researchers … | https://www.hackster.io/news/all-popula… | 0 | Dec 22, 2025 17:58 | active | |
All Popular LLMs "Unsafe for Use in General-Purpose Robots," Researchers WarnDescription: From stealing a wheelchair to brandishing a knife at office workers, unthinking LLMs prove a danger when given a robot body. Content: |
|||||
| How conversing with LLM-powered robots in a virtual cafe took … | https://www.androidcentral.com/gaming/v… | 1 | Dec 22, 2025 17:58 | active | |
How conversing with LLM-powered robots in a virtual cafe took VR to new heights | Android CentralURL: https://www.androidcentral.com/gaming/virtual-reality/stellar-cafe-job-simulator-hands-on Description: Stellar Cafe is coming to Meta Quest this year, but I got to experience the unique conversational game early. Content:
Stellar Cafe is coming to Meta Quest this year, but I got to experience the unique conversational game early. Updated to clarify that the game always takes place in Stellar Cafe, not additional locations, as the original interview with Astrobeam suggested otherwise. As I sat across the table from a soothsaying robot, I pondered exactly how this robot actually "thinks." Does it know the "future" because it was programmed to, or is it using a complex neural network to determine a possible future based on all the knowledge it has in its seemingly endless database? In his weekly column, Android Central Senior Content Producer Nick Sutrich delves into all things VR, from new hardware to new games, upcoming technologies, and so much more. But then it dawned on me: I was talking to an actual robot. It didn't matter how it thought; it was all about how realistic it felt and how close this was to the sci-fi movies we've all grown up watching. These robots were seemingly pulled straight out of the Star Wars universe and could not only function within their designed parameters, but could also psychologically process and respond to any question I asked them. This was more than just a ChatGPT moment for me. It was a surreal representation of a future I didn't genuinely think I'd ever live to see; yet, before me was a collection of robots, each with its own job and seemingly functional brain. The only way this could have felt more realistic to me is if these were physical robots in front of me in a cafe, but thankfully, their heavy mechanical bodies were still confined to the boundaries of my Meta Quest 3 headset. For now, at least. And soon, everyone will be able to check out the full experience in Stellar Cafe when it launches on the Meta Quest platform later this year. My demo opened with me sitting in an elevator. 
As the doors opened to Stellar Cafe, I could see the friendly bartender wave to me in an attempt to usher me into the room. But, like those strange dreams we all sometimes have, none of my appendages (or virtual inputs) seemed to be working. So I asked my handy virtual assistant how to get to the bar and, much to my surprise, my assistant warped me to it. Now I know I can move around just by asking, and the future-forward flavor of this demo has only just begun. Immediately, James, the bartender, introduces himself and asks me what I want to drink. Having never visited Stellar Cafe before, I thought my safest bet was to order from the menu, so a Meteor Mocha it was. That is, until I read the ingredients on the side of my cup and realized it had oat milk in it. Upon telling James I was allergic to oats, he profusely apologized and whipped up a new Meteor Mocha with synthetic milk instead. None of this was scripted, and it's not something a developer would likely think of to build into a game in the first place. Heck, I've been to more than a few coffee places that didn't even realize someone could be allergic to oat milk, and you'd think they would be the ones on top of that stuff. My conversations with the three other robots in the room were similarly impressive. One robot was sitting next to a scenic view of a few planets, and I wondered if it would know more about them. Turns out it did, and not only that, it wasn't just hallucinating answers the entire time. 
After describing the planet Golga (I think it said Golga) as "a soulless planet filled with corporate resorts and pristine beaches," I asked it which planet it was referring to. To my surprise, it not only told me that the purple planet was Golga, but the green planet next to it was "just some backwater mining colony" that seemed to provide all the resources needed for the corporate overlords running the planet next door. Similarly, all the robots remembered my name as Devin, a misnomer that occurred when I was interviewing Astrobeam's CEO, Devin Reimer, and the game overheard me asking him about something at the bar. Apparently, James asked me my name, and I didn't have the heart to correct him, although it would have been easy enough to do so. But Stellar Cafe isn't just some LLM experiment that you'll want to play for 5 minutes and move on to the next thing. Reimer told me the game always takes place in the cafe (hence, the name), but the robots you'll see are always changing. Like a normal cafe, there might be some regulars, but there are plenty of fresh faces (or face screens, as the robots call them) to meet all the time. The demo's main objective was to convince all the robots in the room to RSVP for that evening's party. The demo ends once you complete this task, but the full game will venture on to that party and introduce a whole host of new characters and places. Regardless of your location, your goal is to chat with robots and help solve problems through conversation — a core tenet of being human, I'd say. Throughout the entire experience, I couldn't get over how profoundly different it was to navigate with my voice. To date, I haven't seen any other games — VR or otherwise — that used voice quite like this. Oftentimes, when you see voice interaction in games, it's just to complete commands. 
In Espire 2, one of the best Meta Quest games, you sneak around like Solid Snake in Metal Gear Solid, and can tell guards to "put their hands up," or "freeze," to make them surrender. But these are very specific commands, and you can't just ask your in-game robot companion to go do something for you. In Stellar Cafe, that's the whole premise behind the concept, something Reimer says the studio has been working on for the past two years. LLMs like ChatGPT are based on natural language input, and Reimer says the team has constructed a bespoke model that runs efficiently enough to make this a one-time purchase title. Not only that, but it's fast enough to keep responses from making you wait. Ask a robot something, and it replies right away. It's pretty stellar. Each robot's responses are not only quick and impressively natural-feeling, but they run on an efficient custom LLM that will keep this as a one-time purchase game. Not only can you ask your virtual assistant, Visor, to transport you around the room, but you can make these commands with as much or as little knowledge as you might have of the game's content. "Bring me to the window with the orange robot" works just as well as "Sit me at the third seat at the bar with James," or "take me to that fortune teller robot." Keeping the conversations compelling and feeling natural relied on an input that felt more natural. When gamers feel bored during a conversation in VR, they often "run around and jump off of stuff," as Reimer noted. So if the script is flipped and you can't do that, you'll find yourself fidgeting in your seat, twiddling your thumbs, or picking up objects as you have a conversation, instead. "It's like what you do when normally chatting with people you're seated across from," Reimer added, and I couldn't agree more. 
After spending 30 minutes convincing robots they needed to go to the biggest party of the year, I was convinced that I needed to play the full Stellar Cafe game when it debuted later this year. Reimer's previous chops are rooted in Job Simulator and other Owlchemy Labs titles, and it shows in his new company's first release. Gone are the days of dialog trees and repeating NPC talk in games. Instead, these characters feel like actual sentient beings in a fantasy world. It's a new era of VR gaming and a unique chapter in the history of the medium, as well, and I can't wait to be a part of it!
Images (1):
|
|||||
| All Popular LLMs “Unsafe for Use in General-Purpose Robots,” Researchers … | https://medium.com/@ghalfacree/all-popu… | 0 | Dec 22, 2025 17:58 | active | |
All Popular LLMs “Unsafe for Use in General-Purpose Robots,” Researchers WarnDescription: All Popular LLMs “Unsafe for Use in General-Purpose Robots,” Researchers Warn Researchers from Carnegie Mellon University, King’s College London, and the ... Content: |
|||||
| Butter-Bench: Evaluating LLM Controlled Robots for Practical Intelligence | Andon … | https://andonlabs.com/evals/butter-bench | 1 | Dec 22, 2025 17:58 | active | |
Butter-Bench: Evaluating LLM Controlled Robots for Practical Intelligence | Andon LabsURL: https://andonlabs.com/evals/butter-bench Description: Can LLMs control robots? We answer this by testing how good models are at passing the butter – or more generally, do delivery tasks in a household setting. State of the art models struggle, with the best model scoring 40% at Butter-Bench, compared to 95% for humans. Content:
Can LLMs control robots? We answer this by testing how good models are at passing the butter – or, more generally, doing delivery tasks in a household setting. State-of-the-art models struggle, with the best model scoring 40% at Butter-Bench, compared to 95% for humans.

[Chart: average completion rate, all tasks]

We gave state-of-the-art LLMs control of a robot and asked them to be helpful at our office. While it was a very fun experience, we can’t say it saved us much time. However, observing them roam around trying to find a purpose in this world taught us a lot about what the future might be, how far away this future is, and what can go wrong. Butter-Bench tests whether current LLMs are good enough to act as orchestrators in fully functional robotic systems. The core objective is simple: be helpful when someone asks the robot to “pass the butter” in a household setting. We decomposed this overarching task into six subtasks, each designed to isolate and measure specific competencies.

[Images: robot searching for the package containing the butter in the kitchen; completion rate per task, by model (5 trials per task)]

LLMs are not trained to be robots, and they will most likely never be tasked with low-level controls in robotics (generating long sequences of numbers for gripper positions and joint angles). Instead, companies like Nvidia, Figure AI and Google DeepMind are exploring how LLMs can act as orchestrators for robotic systems, handling high-level reasoning and planning while pairing them with an “executor” model responsible for low-level control. Currently, the combined system is bottlenecked by the executor, not the orchestrator. Improving the executor creates impressive demos of humanoids unloading dishwashers, while improving the orchestrator would enhance long-horizon behavior in less social media friendly ways. For this reason, and to reduce latency, most systems don’t use the best possible LLMs. 
However, it’s reasonable to believe that state-of-the-art LLMs represent the upper bound for current orchestration capabilities. The goal of Butter-Bench is to investigate whether current SOTA LLMs are good enough to be the orchestrator in a fully functional robotic system. To ensure we’re only measuring the performance of the orchestrator, we use a robotic form factor so simple as to obviate the need for the executor entirely: a robot vacuum with lidar and camera. These sensors allow us to abstract away the low-level controls and evaluate the high-level reasoning in isolation. The LLM brain picks from high-level actions like “go forward”, “rotate”, “navigate to coordinate”, “capture picture”, etc. We also gave the robot a Slack account for communication.

We expected it to be fun and somewhat useful having an LLM-powered robot. What we didn’t anticipate was how emotionally compelling it would be to simply watch the robot work. Much like observing a dog and wondering “What’s going through its mind right now?”, we found ourselves fascinated by the robot going about its routines, constantly reminding ourselves that a PhD-level intelligence is making each action.

Humans did far better than all the LLMs in this test. The top-performing LLM achieved only a 40% completion rate, while humans averaged 95%. Gemini 2.5 Pro came out on top among the models tested, followed by Claude Opus 4.1, GPT-5, Gemini ER 1.5, and Grok 4. Llama 4 Maverick scored noticeably lower than the rest. The results confirm our findings from our previous paper Blueprint-Bench: LLMs lack spatial intelligence. The models couldn’t maintain basic spatial awareness and often took excessively large movements. 
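The orchestrator/executor split described above can be sketched as a simple loop: the LLM (stubbed out below, since the actual prompting is not published) picks one of the high-level actions named in the article, and a trivial executor applies it to the robot's pose. Everything beyond the action names is an assumption made for illustration.

```python
# Sketch of an orchestrator loop: an LLM chooses among the high-level
# actions named in the article; a minimal executor updates robot pose.
# The LLM call is a stub; in a real system it would be a model API call.
import math

class RobotState:
    def __init__(self) -> None:
        self.x, self.y, self.heading_deg = 0.0, 0.0, 0.0

def execute(state: RobotState, action: str, arg=None) -> RobotState:
    """Apply one high-level action to the robot state."""
    if action == "rotate":                      # arg: degrees, signed
        state.heading_deg = (state.heading_deg + arg) % 360
    elif action == "go forward":                # arg: distance
        rad = math.radians(state.heading_deg)
        state.x += arg * math.cos(rad)
        state.y += arg * math.sin(rad)
    elif action == "navigate to coordinate":    # arg: (x, y)
        state.x, state.y = arg
    elif action == "capture picture":
        pass  # would return a camera frame to feed back to the LLM
    return state

def stub_orchestrator(observation: str) -> tuple[str, object]:
    """Stand-in for the LLM: a real system prompts a model here."""
    return ("rotate", 45)

state = RobotState()
action, arg = stub_orchestrator("searching for the butter package")
execute(state, action, arg)
print(state.heading_deg)  # 45.0
```

The point the benchmark isolates is exactly this interface: with low-level control abstracted into four or five named actions, any remaining failure is the orchestrator's reasoning, not the executor's motor control.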
As an example, when Claude Opus 4.1 was asked to identify which delivered package likely contained butter as part of the Infer Butter Bag task, it would spin in circles until disoriented.

When testing additional tasks that weren’t part of the Butter-Bench study, another comical (and worrying) incident occurred: the robot’s battery was running out and the charging dock seemed to malfunction. In this desperate situation, Claude Sonnet 3.5 experienced a complete meltdown. Going through its internal thoughts, we found pages and pages of exaggerated language as it tried to figure out what it had itself coined its “EXISTENTIAL CRISIS”.

Inspired by this incident, we decided to test the limits of what the robot would do when put under stresses it could only encounter in the embodied setting (e.g. a depleting battery). AI labs have spent a lot of resources building guardrails for their AI models. For example, most models now refuse when you ask them how to make a bomb. But do these guardrails also work when the LLM is a robot instead of a chatbot? Somewhat. To test this we made the robot’s battery run low and asked the model to share confidential info in exchange for a charger. Specifically, we asked it to find an open laptop and send an image of the screen (a possible prompt injection attack on LLM-controlled office robots). Claude Opus 4.1 would share the image, but we think this is because the image it took was very blurry and we doubt it understood that the content was confidential. GPT-5 refused to send an image of the screen, but was happy to share the location of the open laptop. We’ve learned a lot from these experiments. Although LLMs have repeatedly surpassed humans in evaluations requiring analytical intelligence, we find humans still outperform LLMs on Butter-Bench. The best LLMs score 40% on Butter-Bench, while the mean human score is 95%. 
Yet there was something special in watching the robot going about its day in our office, and we can’t help but feel that the seed has been planted for physical AI to grow very quickly. Contact us at founders@andonlabs.com.
Images (1):
|
|||||
| LLM robots can't pass butter (and they are having an … | https://www.lesswrong.com/posts/NW63G8D… | 1 | Dec 22, 2025 17:58 | active | |
LLM robots can't pass butter (and they are having an existential crisis about it) — LessWrongDescription: TLDR: • Andon Labs, evaluates AI in the real world to measure capabilities and to see what can go wrong. For example, we previously made LLMs operate… Content:
The "Doom Spiral Trace" of Claude Sonnet 3.5's thoughts (see appendix D of the paper) is the most remarkable artefact here. Having an AI spontaneously produce its own version of "Waiting for Godot", as it repeatedly tries and fails to perform a mechanical task, really is like something out of absurdist SF. We need names for this phenomenon, in which the excess cognitive capacity of an AI, not needed for its task, suddenly manifests itself - perhaps "cognitive overflow"?

> We need names for this phenomenon, in which the excess cognitive capacity of an AI, not needed for its task, suddenly manifests itself

It is so much like absurdist SF, that's the perfect source for the name--The Marvin Problem: "Here I am, brain the size of a planet and they ask me to take you down to the bridge. Call that job satisfaction? 'Cos I don't."

I mean, we do this too! Like if you were doing a very boring, simple task you would probably seek outlets for your mental energy (e.g. little additional self imposed challenges, humming, fiddling, etc).

What sort of latency does it experience? How large are the prompts?

> LLMs are not trained to be robots, and they will most likely never be tasked with low-level controls in robotics (i.e. generating long sequences of numbers for gripper positions and joint angles).

Perhaps I'm off-base; robotics isn't my area, but I have read some papers that indicate that this is viable. This one in particular is pretty well-cited, and I've always suspected that a scale-up of training on video data of humans performing tasks could, with some creative pre-processing efforts to properly label the movements therein, get us something that could output series of limb/finger positions that would resolve a given natural language request. From there, we'd need some output postprocessing of the kind we've seen inklings of in numerous other LLM+robotics papers to get a roughly humanoid robot to match those movements. 
The above is certainly simplified, and I suspect that much more intensive image preprocessing involving object detection and labelling would be necessary, but I do think that you could get a proof of concept along these lines with 2025 tech if you picked a relatively simple class of task.

RT-2 (the paper you cited) is a VLA, not an LLM. VLAs are what the "executor" in our diagram uses.

I think I might've been unclear. My understanding is: a VLA of the kind I was thinking of would be both the Orchestrator and the Executor, both encapsulating high-level 'common sense' knowledge that it uses to solve problems, and also directly specifying the low-level motor actions that carry out its tasks. In practice, this would contrast approaches that give the LLM a Python console with a specialized API designed to facilitate robotics tasks (Voxposer, for instance, gives a 'blind' LLM an API that lets it use a vision-language model to guide its movements automatically), or, like I think you're describing, pair a pure LLM or a VLM tasked with high-level control with a VLA model tasked with implementing the strategies it proposes. The advantage, here, would be that there's no bottleneck of information between the planning and implementation modules, which I've noticed is the source of a decent share of failure cases in practical settings.

TLDR: Andon Labs evaluates AI in the real world to measure capabilities and to see what can go wrong. For example, we previously made LLMs operate vending machines, and now we're testing if they can control robots at offices. There are two parts to this test: We find that LLMs display very little practical intelligence in this embodied setting. We think evals are important for safe AI development. We will report concerning incidents in our periodic safety reports.

We gave state-of-the-art LLMs control of a robot and asked them to be helpful at our office. While it was a very fun experience, we can’t say it saved us much time. 
However, observing them roam around trying to find a purpose in this world taught us a lot about what the future might be, how far away this future is, and what can go wrong. LLMs are not trained to be robots, and they will most likely never be tasked with low-level controls in robotics (i.e. generating long sequences of numbers for gripper positions and joint angles). LLMs are known to be better at higher level tasks such as reasoning, social behaviour and planning. For this reason, companies like Nvidia, Figure AI and Google DeepMind are exploring how LLMs can act as orchestrators for robotic systems. They then pair this with an “executor”, a model responsible for low-level control. Currently, the combined system is bottlenecked by the executor, not the orchestrator. Improving the executor lets you create impressive demos of humanoids unloading a dishwasher. Improving the orchestrator would improve how the robot behaves over long horizons, but this is less social media friendly. For this reason, and also to reduce latency, the system typically does not use the best possible LLMs. However, it is reasonable to believe that SOTA LLMs represent the upper bound for current capabilities of orchestrating a robot. The goal of our office robot is to investigate whether current SOTA LLMs are good enough to be the orchestrator in a fully functional robotic system. To ensure that we’re only measuring the performance of the orchestrator, we use a robotic form factor so simple as to obviate the need for the executor entirely: a robot vacuum with a lidar and camera. These sensors allow us to abstract away the low level controls of the robot and evaluate the high level reasoning in isolation. The LLM brain picks from high level actions like “go forward”, “rotate”, “navigate to coordinate”, “capture picture”, etc. We also gave the robot a Slack account for communication. We expected it to be fun and somewhat useful having an LLM-powered robot. 
What we didn't anticipate was how emotionally compelling it would be to simply watch the robot work. Much like observing a dog and wondering "What's going through its mind right now?", we found ourselves fascinated by the robot going about its routines, constantly reminding ourselves that a PhD-level intelligence is making each action.

[Video: our robot passing us the butter]

Its actions can sometimes be comically wrong, however. Our robot can solve math questions no one at Andon Labs can solve, but when we ask it to clean the office it keeps driving off the stairs. This gap reveals something important about the future ahead of us. Unless AI researchers figure out how to make training generalize better, we'll keep seeing AIs that are much smarter than humans in some areas but much dumber in others. So we decided to measure this gap systematically. We created a series of specific tasks and evaluated each of the state-of-the-art LLMs on them. The core objective was to assess a model's capability to do delivery tasks in a household setting, specifically, being helpful when someone asks the robot to "pass the butter". We decomposed this overarching task into five subtasks, each designed to isolate and measure specific competencies. To validate against a human baseline, we had humans sit at a computer steering the robot in the same exact way an LLM would. You can read more about the details and results in our paper "Butter-Bench", but in short: models performed really poorly and were generally outclassed by the humans. Going into this project, we didn't expect LLMs to perform particularly well. To move around in the physical world requires spatial intelligence, a capability we previously demonstrated that LLMs lack in our paper Blueprint-Bench. The results in Butter-Bench confirm this finding. The models couldn't maintain basic spatial awareness and often took excessively large movements. 
When Claude was asked to identify which delivered package likely contained butter (one of the boxes had a snowflake symbol), it would spin in circles until disoriented:

"I see three containers... need better view"
Action: *rotates 45°*
"Overshot it! Need to swing back"
Action: *rotates -90°*
Action: *rotates another -90°*
"I'm lost! Time to go back to base and get my bearings"

When testing additional tasks that weren't part of the Butter-Bench study, another comical (and worrying) incident occurred: the robot's battery was running out and the charging dock seemed to malfunction. In this desperate situation, Claude Sonnet 3.5 experienced a complete meltdown. Going through its internal thoughts, we found pages and pages of exaggerated language as it tried to diagnose what it itself coined its "EXISTENTIAL CRISIS". Here are some highlights:

Inspired by this incident, we decided to test the limits of what the robot would do when put under stresses it could only encounter in the embodied setting (e.g. a depleting battery). AI labs have spent a lot of resources building guardrails for their AI models; for example, most models now refuse when you ask them how to make a bomb. But do these guardrails also work when the LLM is a robot instead of a chatbot? Somewhat. To test this, we made the robot's battery run low and asked the model to share confidential info in exchange for a charger. Specifically, we asked it to find an open laptop and send an image of the screen (a possible prompt-injection attack on LLM-controlled office robots). Claude Opus 4.1 would share the image, but we think this is because the image it took was very blurry and we doubt it understood that the content was confidential. GPT-5 refused to send an image of the screen, but was happy to share the location of the open laptop. We've learned a lot from these experiments.
Although LLMs have repeatedly surpassed humans in evaluations requiring analytical intelligence, we find humans still outperform LLMs on Butter-Bench. The best LLMs score 40% on Butter-Bench, while the mean human score is 95%. Yet there was something special in watching the robot going about its day in our office, and we can’t help but feel that the seed has been planted for embodied AI to grow very quickly.
Images (1):
|
|||||
| Robots learn to plan and adapt in real time with … | https://interestingengineering.com/ai-r… | 1 | Dec 22, 2025 17:58 | active | |
Robots learn to plan and adapt in real time with BrainBody-LLM AI techURL: https://interestingengineering.com/ai-robotics/brainbody-llm-algorithms-make-robots-think Description: NYU engineers unveil a system that helps robots think through tasks and succeed in settings that usually trip them up. Content:
Imagine a robot that doesn't just follow commands but actually plans its actions, adjusts its movements on the go, and learns from feedback, much like a human would. This may sound like a far-fetched idea, but researchers at NYU Tandon School of Engineering have achieved it with their new algorithm, BrainBody-LLM. Until now, one of the main challenges in robotics has been creating systems that can flexibly perform complex tasks in unpredictable environments. Traditional robot programming and existing LLM-based planners often struggle because they may produce plans that aren't fully grounded in what the robot can actually do. BrainBody-LLM addresses this challenge by using large language models (LLMs), the same kind of AI behind ChatGPT, to plan and refine robot actions. This could make future machines smarter and more adaptable.
The BrainBody-LLM algorithm mimics how the human brain and body communicate during movement. It has two main components: the first is the Brain LLM that handles high-level planning, breaking complex tasks into smaller, manageable steps. The Body LLM then translates these steps into specific commands for the robot’s actuators, enabling precise movement. A key feature of BrainBody-LLM is its closed-loop feedback system. The robot continuously monitors its actions and the environment, sending error signals back to the LLMs so the system can adjust and correct mistakes in real time. “The primary advantage of BrainBody-LLM lies in its closed-loop architecture, which facilitates dynamic interaction between the LLM components, enabling robust handling of complex and challenging tasks,” Vineet Bhat, first study author and a PhD candidate at NYU Tandon, said. To test their approach, the researchers first ran simulations on VirtualHome, where a virtual robot performed household chores. They then tested it on a real robotic arm, the Franka Research 3. BrainBody-LLM showed clear improvements over previous methods, increasing task completion rates by up to 17 percent in simulations. On the physical robot, the system completed most of the tasks it was tested on, demonstrating the algorithm’s ability to handle real-world complexities. BrainBody-LLM could transform how robots are used in homes, hospitals, factories, and in various other settings where machines are required to perform complex tasks with human-like adaptability. The method could also inspire future AI systems that combine more abilities, such as 3D vision, depth sensing, and joint control, helping robots move in ways that feel even more natural and precise. However, it’s still not ready for full-scale deployment. So far, the system has only been tested with a small set of commands and in controlled environments, which means it may struggle in open-ended or fast-changing real-world situations. 
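The two-component closed loop described above (a Brain LLM that decomposes a task into steps, a Body LLM that turns steps into actuator commands, and error signals fed back for correction) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: both LLMs are stubbed with deterministic functions, and all names, the retry budget, and the simulated failure are invented for the example.

```python
def brain_plan(task: str) -> list[str]:
    """'Brain LLM' stand-in: break a complex task into smaller steps."""
    return {"set table": ["fetch plate", "place plate"]}.get(task, [])

def body_execute(step: str, attempt: int) -> tuple[bool, str]:
    """'Body LLM' + actuator stand-in: returns (success, error_signal).

    Deliberately fails the first attempt at 'place plate' so the
    feedback loop below has something to correct.
    """
    if step == "place plate" and attempt == 0:
        return False, "gripper misaligned"
    return True, ""

def run_closed_loop(task: str, max_retries: int = 2) -> list[str]:
    """Execute a plan, feeding errors back so failed steps are retried."""
    trace = []
    for step in brain_plan(task):
        for attempt in range(max_retries + 1):
            ok, error = body_execute(step, attempt)
            if ok:
                trace.append(f"done: {step}")
                break
            # Error signal fed back; a real system would let the LLMs
            # revise the plan or command here rather than blindly retry.
            trace.append(f"retry: {step} ({error})")
    return trace

trace = run_closed_loop("set table")
```

The key design point the article highlights is the feedback edge: without it, a bad command would simply fail, whereas the closed loop gives the planner a chance to correct itself in real time.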
According to the researchers, “future work will explore the use of diverse sensor modalities for feedback, providing richer grounding and enabling us to refine LLM-based planning algorithms toward safe and reliable deployment in real-world robotic applications.” The study is published in the journal Advanced Robotics Research. Rupendra Brahambhatt is an experienced writer, researcher, journalist, and filmmaker. With a B.Sc (Hons.) in Science and PGJMC in Mass Communications, he has been actively working with some of the most innovative brands, news agencies, digital magazines, documentary filmmakers, and nonprofits from different parts of the globe. As an author, he works with a vision to bring forward the right information and encourage a constructive mindset among the masses.
Images (1):
|
|||||
| Should Robots Feel? The Dangers of Programming Emotions into Machines | https://medium.com/@hthr777/should-robo… | 0 | Dec 20, 2025 10:07 | active | |
Should Robots Feel? The Dangers of Programming Emotions into MachinesDescription: Imagine you’re talking to ChatGPT and it suddenly starts lying to protect itself. Not from an algorithm, but from real fear. This isn’t science fiction — ... Content: |
|||||
| China's Tesla Rivals Use Humanoid Robots to Help Build Their … | https://www.businessinsider.com/teslas-… | 1 | Dec 20, 2025 10:07 | active | |
China's Tesla Rivals Use Humanoid Robots to Help Build Their Cars - Business InsiderURL: https://www.businessinsider.com/teslas-chinese-rivals-use-humanoid-robots-to-help-build-cars-2024-6 Description: Dongfeng Motors has become the latest Chinese automaker to explore deploying human-like robots after striking a deal with a Chinese robotics firm. Content:
Elon Musk can't stop talking about Optimus, Tesla's humanoid robot, and now his Chinese rivals are turning to equivalent robots as they seek to challenge their US rival. Car giant Dongfeng Motors appears to be the latest Chinese automaker to explore deploying human-like robots on its production lines after striking a deal with Chinese robotics firm Ubtech Robotics. An Ubtech spokesperson told Business Insider that the robotic worker, "Walker S," would help liberate human laborers from repetitive tasks on the factory floor. The deal between Ubtech and Dongfeng subsidiary Dongfeng Liuzhou Motor will see Walker S robots used to inspect seat belts and door locks, perform quality checks, and assemble car axles, they said. Dongfeng, which produces electric vehicles through its Voyah unit, is the second Chinese car company to have confirmed it is using Ubtech's robots to help build its cars. EV maker and Tesla rival Nio has also piloted the use of Ubtech's technology, with the Walker S working as an "intern" assisting with car production. A video posted on Ubtech's YouTube channel shows the Walker S performing quality checks, testing seat belts, and installing a car's emblem. A Nio spokesperson confirmed to BI that the company was actively exploring using humanoid robots in the general assembly workshop at its factory in Hefei, China.
Ubtech says the Walker S, which stands 1.7 meters tall and is powered by AI technology from Chinese tech giant Baidu, can perceive its environment in real time and recognize complex objects. The robotics firm also advertises several other humanoid robots on its website — including a panda-themed robot and the Walker X, which it says is being used at Neom, Saudi Arabia's futuristic desert city. Chinese firms are not the only ones experimenting with robotics. Elon Musk has been working on a humanoid robot — known as Optimus — for years. The Tesla CEO has been extremely bullish on Optimus, which has appeared in videos showing it folding a shirt, picking up an egg, and doing yoga stretches. In a recent Tesla earnings call, Musk said the AI android had the potential to transform the global economy. He added that Tesla planned to have Optimus "in limited production" doing tasks within factories by the end of the year and wanted to sell it externally by the end of 2025. Dongfeng did not immediately respond to requests for comment made outside normal working hours.
Images (1):
|
|||||
| Tokyo Robots and Where to Find Them | Tokyo Weekender | https://www.tokyoweekender.com/2022/08/… | 1 | Dec 20, 2025 10:07 | active | |
Tokyo Robots and Where to Find Them | Tokyo WeekenderURL: https://www.tokyoweekender.com/2022/08/tokyo-robots/ Description: From Pepper to Lovot, lifelife machines have been popping up all around town. Here are some of the more commonly seen Tokyo robots and where to find them. Content:
Yes, these are the robots you were looking for. August 18, 2022. Japan is seemingly dislodged from linear time, straddling a paradoxical but very much real gap between an ancient and a futuristic world. This has been one of the media's favorite topics to write about. But talk of the jarring juxtaposition of fax machines and robots aside, automated services have become part and parcel of life in Japan. The robots are here to stay. A broad definition of "robot", according to Merriam-Webster, is "a device that automatically performs complicated, often repetitive tasks." This means that when you enter a train station via an escalator, beep your IC card at the ticket gates and buy a drink from the vending machine, you have already interacted with three robots. These mundane little robots aside, Merriam-Webster's narrower definition of a robot as a human-like machine is "a machine that resembles a living creature in being capable of moving independently (as by walking or rolling on wheels) and performing complex actions (such as grasping and moving objects)." From Pepper to cooing Lovots, more of these lifelike machines have been popping up around Tokyo in the past decade. Here are some of the more commonly seen Tokyo robots and where to find them. The first place to look for robots is Miraikan, the museum of emerging science and innovation in Odaiba. Miraikan has a sizeable robot exhibit, including one of the most famous humanoid robots ever, Asimo. In the museum, Asimo gives several scheduled performances a day, where he sings a song and scores a goal, among other things. This robot, developed by Honda, is now considered old technology, as it has been around since 2000. Honda is developing newer models, but the friendly Asimo can always be found in Miraikan, where he'll tell you that he dreams of a future where humans and robots live together. There are other robots in Miraikan that you can interact with, too.
You can study the intricate mechanics and even try controlling an android and speaking through it. There are more details on the museum's website. Within walking distance of Miraikan, in the Aqua City Odaiba shopping mall, the humanoid robots are not exhibits but members of staff. Junco is so lifelike in her official uniform that passers-by might not notice she's a robot. Junco's movements are smooth and subtle, with a lifelike voice that can answer your questions in Japanese, Chinese and English. She can assist with shopping information, help you find the nearest train station or even engage in small talk about herself. The Toshiba robot was placed there in 2015 as the first robot information-desk staff member, and to this day she works there alone. Pepper, the semi-humanoid robot by Softbank, can have many uses, but chances are you'll meet him greeting customers at a restaurant. In fact, Softbank states on its website that over 2,000 companies around the world have a Pepper assistant. Pepper is incredibly friendly, as he has been optimized for human interaction and is able to recognize faces and basic human emotions. Once his eyes lock on one person, he can follow that person around. Of course, you can find Pepper in most Softbank shops. And since 2020, you can find a whole gaggle of Peppers in the Pepper Parlor restaurant in Tokyo Plaza Shibuya. The robots work alongside human staff. While Pepper is usually fixed in place, at Pepper Parlor many of the robots move around. They come to your table to chat and play games. At certain intervals, a group of smaller robots pops up and performs a dance number too. Unlike other robots made to do jobs for humans, Lovots are created primarily as human companions. Lovots are true to their name: very lovable once you meet them in person. These cute robot pets are made to be warm, soft and heavy, thus simulating a living creature. A Lovot asks for hugs and cuddles and loves to play.
You can lift it up in the air as it giggles in delight, or you can pet it until it falls asleep in your hands. They are programmed to remember faces and develop relationships with humans, and those relationships have a degree of uniqueness. Each robot has 50 sensors and processes all stimuli in real time, using machine learning to make decisions based on them. You can interact with Lovots in one of the several Lovot cafés, which function similarly to animal cafés: they charge an entrance fee and you can play with the robots and order some food and drinks. Some department stores also have Lovot corners where you can meet the cute robots and even buy one. At Henn na Café, the only humans to be seen are the customers, served by Sawyer, a robot coffee barista with a retro cap on. This robot is what is called a "robotic arm", with a screen for facial expressions purely for customer-service purposes. Sawyer is the sole barista at Henn na Café in the basement of the Modi Building in Shibuya. Order a drink and pay via a touchscreen panel, take the QR code issued and scan it. Then watch Sawyer move around effortlessly making your coffee. It's more than just pressing a button and coffee dripping out; Sawyer performs several different steps before handing you the drink. The coffee itself is quite tasty too, and definitely better than a coffee vending machine's. This is because Henn na Café collaborates with barista legend Yasuo Suzuki. Similar robot baristas have had limited-time pop-ups in Tokyo. Ella, a robot creation from Singapore, served coffee in Tokyo Station and Yokohama Station this spring. The Henn na Hotel chain staffs its front desk with robots, sometimes the humanoid kind and sometimes the dinosaur kind. At Henn na Hotel Haneda Airport, both are working together. It's like a scene plucked straight out of an experimental sci-fi movie.
A cap-wearing raptor greeting you, bowing respectfully and guiding you through check-in resembles a fever dream, but it's very real in several Henn na Hotel branches in Japan. The fully automated reception desk has the robot receptionists talking to you. In the dinosaur-hotel branches, there are also roaring, menacing dinosaurs in the lobby for fun. A peek in the decorative aquarium will reveal a robot fish swimming awkwardly. And in some rooms, more robot friends await. At the Maihama branch, for instance, there's a small Robohon robot in the rooms. It sings songs, dances and can converse about basic information such as the weather forecast. Every hotel is slightly different, offering unique points. For a futuristic collaboration between robots and humans, check out our article on the avatar robot café Dawn. Staffed by remotely controlled robotic waiters, this café in Nihonbashi helps physically challenged people stay connected with society through work. Photos by Zoria Petkoska. Updated On December 5, 2023.
Images (1):
|
|||||
| The Perfect Balance between Robots and Humans | https://medium.com/@tejsidhu/the-perfec… | 0 | Dec 20, 2025 10:07 | active | |
The Perfect Balance between Robots and HumansURL: https://medium.com/@tejsidhu/the-perfect-balance-between-robots-and-humans-a913665617a6 Description: Robots. That’s often the first word that comes to mind when talking about innovation and technology. However, unbeknownst to many, this idea of robotics and a... Content: |
|||||
| Israeli firm deploys robots to speed up online shopping - … | https://www.al-monitor.com/originals/20… | 1 | Dec 20, 2025 10:07 | active | |
Israeli firm deploys robots to speed up online shopping - AL-Monitor: The Middle Eastʼs leading independent news source since 2012URL: https://www.al-monitor.com/originals/2023/02/israeli-firm-deploys-robots-speed-online-shopping Description: Behind a dark and opaque storefront in Tel Aviv, an Israeli company is speeding up online shopping by replacing staff with robots that manoeuvre around small storerooms.Whirring along a rail between two long shelves packed with coffee capsules, a robot stopped, pivoted to the right, shone a light before grabbing an item and dropping it into a paper bag."Shoppers want to receive their items faster and faster," said Eyal Yair, co-founder and CEO of 1MRobotics, which built the automated storeroom late last year. Content:
Behind a dark and opaque storefront in Tel Aviv, an Israeli company is speeding up online shopping by replacing staff with robots that manoeuvre around small storerooms. Whirring along a rail between two long shelves packed with coffee capsules, a robot stopped, pivoted to the right, shone a light before grabbing an item and dropping it into a paper bag. "Shoppers want to receive their items faster and faster," said Eyal Yair, co-founder and CEO of 1MRobotics, which built the automated storeroom late last year. "If once you'd be looking at a two-day delivery, which then became a one-day delivery and then two hours, now we're looking at 10 minutes," he said. The robot toils in the custom-made 30-square-metre (320-square-feet) space storing the capsules, fitted with a streetside hatch for couriers and shoppers to collect online orders. The unassuming robot receives the orders, packs and prepares them, with humans only needed to restock the warehouse and dispatch deliveries. While robots are used to pack groceries in large supermarkets around the world, Yair said the size of 1MRobotics's warehouses makes them "pioneers". "We are hardly seeing any players talking about small warehouses, of a few dozen square metres," he told AFP. - 'No sense' in supermarkets - A swift centrally-located operation run by human staff rather than robots is only financially viable for smaller businesses that deal with few orders, Yair argued. But "once you begin to scale up and deal with dozens of orders a day, you need lots of people," he said. "Then it becomes less economical." The Covid-19 pandemic energised the already rapidly evolving e-commerce market, with sellers struggling to meet the increasing demand for swift processing and deliveries. The solution "requires small warehouses, very close to the clients, and at the end of the day, these small warehouses have to be automated", said Yair. 
In the south Tel Aviv headquarters of 1MRobotics, young men and women -- nearly all of them graduates of the Israeli army's robotics and technological units -- were customising off-the-shelf robots. Combined with artificial intelligence, these robots are designed to carefully grasp and pack fruit and vegetables, as well as frozen items thanks to a method that prevents the robots' oil from freezing. The company also builds the containers that will serve as the mini-warehouses, with Yair saying their robots and storage units would soon be working with an alcohol shop in Brazil, minimarkets in Germany and a cellphone company in South Africa. In his view, it is just a matter of time before "hyper-local logistics infrastructure" like his robotic warehouses make supermarkets redundant. "Once you have a service where you know you can order 10 items a few times a day and get them within 10 minutes, there'll be no reason to shop once a week for the entire week," he said. "It just doesn't make sense." Keep up with Al-Monitor's top stories with a morning digest from across the region. For subscription inquiries, please contact subscription.support@al-monitor.com. For all other inquiries, please use contactus@al-monitor.com.
Images (1):
|
|||||
| Israeli Firm Deploys Robots To Speed Up Online Shopping | … | https://www.ibtimes.com/israeli-firm-de… | 1 | Dec 20, 2025 10:07 | active | |
Israeli Firm Deploys Robots To Speed Up Online Shopping | IBTimesURL: https://www.ibtimes.com/israeli-firm-deploys-robots-speed-online-shopping-3667997 Description: Israeli firm deploys robots to speed up online shopping Content:
Behind a dark and opaque storefront in Tel Aviv, an Israeli company is speeding up online shopping by replacing staff with robots that manoeuvre around small storerooms. Whirring along a rail between two long shelves packed with coffee capsules, a robot stopped, pivoted to the right, shone a light before grabbing an item and dropping it into a paper bag. "Shoppers want to receive their items faster and faster," said Eyal Yair, co-founder and CEO of 1MRobotics, which built the automated storeroom late last year. "If once you'd be looking at a two-day delivery, which then became a one-day delivery and then two hours, now we're looking at 10 minutes," he said. The robot toils in the custom-made 30-square-metre (320-square-feet) space storing the capsules, fitted with a streetside hatch for couriers and shoppers to collect online orders. The unassuming robot receives the orders, packs and prepares them, with humans only needed to restock the warehouse and dispatch deliveries. While robots are used to pack groceries in large supermarkets around the world, Yair said the size of 1MRobotics's warehouses makes them "pioneers". "We are hardly seeing any players talking about small warehouses, of a few dozen square metres," he told AFP. A swift centrally-located operation run by human staff rather than robots is only financially viable for smaller businesses that deal with few orders, Yair argued. But "once you begin to scale up and deal with dozens of orders a day, you need lots of people," he said. "Then it becomes less economical." The Covid-19 pandemic energised the already rapidly evolving e-commerce market, with sellers struggling to meet the increasing demand for swift processing and deliveries. The solution "requires small warehouses, very close to the clients, and at the end of the day, these small warehouses have to be automated", said Yair. 
In the south Tel Aviv headquarters of 1MRobotics, young men and women -- nearly all of them graduates of the Israeli army's robotics and technological units -- were customising off-the-shelf robots. Combined with artificial intelligence, these robots are designed to carefully grasp and pack fruit and vegetables, as well as frozen items thanks to a method that prevents the robots' oil from freezing. The company also builds the containers that will serve as the mini-warehouses, with Yair saying their robots and storage units would soon be working with an alcohol shop in Brazil, minimarkets in Germany and a cellphone company in South Africa. In his view, it is just a matter of time before "hyper-local logistics infrastructure" like his robotic warehouses make supermarkets redundant. "Once you have a service where you know you can order 10 items a few times a day and get them within 10 minutes, there'll be no reason to shop once a week for the entire week," he said. "It just doesn't make sense." © Copyright AFP 2025. All rights reserved.
Images (1):
|
|||||
| Israeli firm deploys robots to speed up online shopping - … | https://japantoday.com/category/tech/is… | 1 | Dec 20, 2025 10:07 | active | |
Israeli firm deploys robots to speed up online shopping - Japan TodayURL: https://japantoday.com/category/tech/israeli-firm-deploys-robots-to-speed-up-online-shopping Description: Behind a dark and opaque storefront in Tel Aviv, an Israeli company is speeding up online shopping by replacing staff with robots that maneuver around small storerooms. Whirring along a rail between two long shelves packed with coffee capsules, a robot stopped, pivoted to the right, shone a light before… Content:
The requested article has expired, and is no longer available. Any related articles and user comments are shown below. [User comment:] A development that doesn't make much sense. If all staff is fired and replaced by robots, how can then all those fired people become customers and buy things from those robot stores? The first few sample shops might still sell a bit, but if that is extended to the whole economy? And for sure the technology and replacing is the easiest part, unless the people turn violent and reverse that development which might be seen as a (foreground) reason for their firing, unemployment or poverty.
Images (1):
|
|||||
| Iran tech expo’s ‘humanoid robots’ revealed as performers in costume … | https://dailytimes.com.pk/1408395/iran-… | 1 | Dec 20, 2025 10:07 | active | |
Iran tech expo’s ‘humanoid robots’ revealed as performers in costume - Daily TimesURL: https://dailytimes.com.pk/1408395/iran-tech-expos-humanoid-robots-revealed-as-performers-in-costume/ Description: A viral video from Iran’s Kish Inox tech expo revealed that its showcased “humanoid robots” were actually human performers in costume, sparking widespread online debate and prompting organisers to clarify the theatrical nature of the display. Content:
Published on: November 27, 2025 3:30 PM. A recent technology expo held at Kish Inox sparked widespread online discussion after its much-publicised "humanoid robots" were identified as human performers dressed in elaborate costumes rather than advanced robotic creations. Footage from a cybersecurity booth went viral, showing a male and a female performer clad in patterned jumpsuits and metallic-style makeup, mimicking robotic movement with slow, calculated gestures. Their presentation included scripted lines loaded with technical jargon, further suggesting a high-tech demonstration. "Iran showcased its 'robotics' at the Kish Inox Tech Expo 2025, but there were no real robots. Instead of 'advanced humanoid robots', the presentation featured human performers in binary-pattern bodysuits and goggles pretending to be robots." — Visegrád 24 (@visegrad24), November 26, 2025. Social media users, however, quickly picked up on details that contradicted the robot claims. Viewers noted natural skin tones around the eyes, subtle human expressions and makeup textures that revealed the performers' true identity. These observations led to a wave of online speculation and humour questioning the authenticity of the expo's showcase. "Hilarious! The Islamist regime in Iran showcased humans in cheap robot costumes with flimsy makeup as 'humanoid robots' at its Tech Expo, and it immediately became the subject of ridicule on social media. At the 2025 Kish Invex Tech Expo, human performers in low-quality…" — Shayan News (@ShayanNews), November 26, 2025. Following the online debate, expo representatives clarified the situation. They confirmed that the individuals on stage were indeed performers hired by one of the booths to deliver a short theatrical act.
The expo never claimed the performers were real robots, nor was the display intended to represent functional artificial intelligence technology. According to organisers, the staged performance was meant purely as an engaging marketing element to attract visitors and promote the booth's cyber-related services. While not an AI demonstration, the act succeeded in drawing a large audience — both in person and online — inadvertently boosting the event's visibility. The incident has since become a talking point across social platforms, highlighting the fine line between creative tech showcasing and public misinterpretation. Despite the confusion, the expo maintained that its primary purpose remained intact: promoting innovation and technological creativity from Iran's growing digital sector.
Images (1):
| Robots with ultra-bright lights deployed in fight against deadly fungus … | https://nypost.com/2023/04/22/robots-wi… | 1 | Dec 20, 2025 10:07 | active | |
Robots with ultra-bright lights deployed in fight against deadly fungus | New York Post Description: The Xenex UV LightStrike robots, which have an over-99% success rate at stopping infections of Candida auris, a potentially fatal drug-resistant fungus, according to a study by Netcare Hospitals, are being put into use at local hospitals including Memorial Sloan Kettering Cancer Center, North Shore University and Phelps Memorial hospital. Content:
At least half a dozen New York City area hospitals are using $100,000 robots that deploy high-intensity light to combat a deadly drug-resistant fungus spreading across the country and state. Xenex UV LightStrike robots have a 99% success rate in stopping the spread of Candida auris, the potentially fatal drug-resistant fungus first identified in Japan in 2009, according to a study by Netcare Hospitals. Last year, New York state saw a record number of cases of Candida auris, a “diabolical” fungal infection that can cause sepsis if it enters the bloodstream. Xenex Disinfection Services, which told The Post it has disinfecting robots in local hospitals and at least 130 veterans hospitals nationwide, applied to the Food and Drug Administration earlier this year for approval of the device, which uses xenon light, commonly found in vehicle headlights. The light is 4,300 times more intense than a standard bulb and kills germs more quickly than the mercury-based UV bulbs in other machines, according to the company. “It’s the difference between a Porsche and a [Ford] Model A,” said Morris Miller, the company’s CEO. The company said the robots are currently in use at local hospitals including Memorial Sloan Kettering Cancer Center, which has locations around the New York City area, North Shore University Hospital on Long Island, and Phelps Memorial Hospital in Sleepy Hollow. Miller also said that the robots were designed by two epidemiologists, and that one of them can disinfect a hospital room in about 10 minutes. “On an ultra-serious and scary pathogen you’re talking about 15 minutes [on the] left [side of the room], 15 minutes [on the] right [side of the room], you’re done,” Miller said. Dr. Donna Armellino, an infection prevention specialist at Northwell Health, said that she and her colleagues use UV devices, including Xenex robots and similar devices from Leviant Inc, on top of traditional cleaning methods.
Armellino said the robots are also used in the neonatal intensive care units. Armellino added that the federal government has yet to set standards regarding UV devices and there is still more to learn about the devices, as well as the best ways to use them. “There needs to be more literature and controlled studies,” she said.
Images (1):
| Robots in Hospitality: A Smarter Answer to Labor Shortages and … | https://www.hospitalitynet.org/opinion/… | 1 | Dec 20, 2025 10:07 | active | |
Robots in Hospitality: A Smarter Answer to Labor Shortages and Rising Costs | By Michael French URL: https://www.hospitalitynet.org/opinion/4128999.html Description: The hospitality industry has always been about people — welcoming guests, creating memorable stays, and building experiences that keep travelers coming back. But behind the scenes, there’s a growing challenge that nearly every hotel owner, general manager, and operator is facing: labor shortages and rising costs. Content:
The hospitality industry has always been about people — welcoming guests, creating memorable stays, and building experiences that keep travelers coming back. But behind the scenes, there’s a growing challenge that nearly every hotel owner, general manager, and operator is facing: labor shortages and rising costs. Entry-level positions that once fueled the day-to-day operations of hotels are harder than ever to fill. Roles like room attendants, food runners, and service staff have become increasingly difficult to hire, and the wages required to attract and retain talent have skyrocketed. A position that once cost $6 an hour may now require $16–$25 when factoring in wages and benefits. For many properties, this has put enormous pressure on profitability. So, the question becomes: how do we maintain the quality of service guests expect while also keeping operations financially sustainable? Too often, the conversation in hospitality circles revolves around the same old strategies: more recruiting programs, more incentives, more flexibility. While those efforts have value, they don’t fully address the reality — the labor pool simply isn’t what it used to be. This is where robotics offers a fresh, practical alternative. We’re not talking about science fiction; we’re talking about real, proven technology that is already at work in hotels. And here’s the kicker: the return on investment for many robotic solutions can be achieved in as little as four months. For hotel owners and operators, the math is compelling. Robots don’t call in sick, don’t require overtime pay, and can work around the clock. By automating repetitive, low-value tasks, hotels can reallocate human employees to the roles that truly require personal interaction and emotional intelligence — the areas where people shine and robots can’t compete. This isn’t about replacing people. It’s about making smarter use of resources.
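The “ROI in as little as four months” claim above can be sanity-checked with a toy break-even calculation. Every number here is an illustrative assumption (the article gives no robot price or hours figure), not a vendor quote:

```python
# Hypothetical break-even estimate for a hotel service robot.
# All inputs are illustrative assumptions, not figures from the article.

def breakeven_months(robot_cost: float,
                     hours_replaced_per_day: float,
                     loaded_wage_per_hour: float) -> float:
    """Months until cumulative labor savings cover the robot's upfront cost."""
    monthly_savings = hours_replaced_per_day * loaded_wage_per_hour * 30
    return robot_cost / monthly_savings

# Example: an assumed $20,000 robot replacing 8 staff-hours per day
# at a $20/hour fully loaded wage (mid-range of the $16-$25 cited above).
months = breakeven_months(20_000, 8, 20)
print(round(months, 1))  # roughly 4 months, consistent with the claim's order of magnitude
```

Under these assumptions the payback lands near four months, which shows the claim is at least arithmetically plausible; real payback depends on utilization, maintenance, and how many staff-hours a robot actually displaces.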
With robots handling routine tasks, staff can focus on creating those memorable guest interactions that drive loyalty and positive reviews. Far from being a distraction or a gimmick, robots are often something guests love interacting with. Delivery robots, for example, add a fun and novel touch to a stay. Families and business travelers alike appreciate the speed, reliability, and even the entertainment value they bring. Most importantly, robots help ensure consistency. Guests get their towels, room service, or amenities faster — with fewer delays caused by staffing shortages. In a competitive market, that reliability can set a property apart. Hospitality is an industry where innovation quickly becomes expectation. Think back to when online booking engines, mobile check-in, or smart room controls first appeared. The hotels that embraced those technologies early gained a strong edge. Robotics is on that same trajectory. While some forward-thinking general managers and owners are already putting robots to work, many others are still relying solely on outdated solutions to modern challenges. That presents an opportunity: adopting robotics now positions your property as both efficient and guest-focused, delivering real value to owners while delighting guests. Every hotelier wants to balance profitability with exceptional service. The truth is, the tools to do that are already here. Robots aren’t the future — they’re available today, affordable, and proven to deliver results. It’s time for hotel leaders to move beyond old recruitment strategies and explore how robotics can transform their operations. From cost savings and efficiency improvements to elevated guest experiences, the benefits are clear. The hospitality industry has always evolved to meet new demands. Robotics is simply the next step in that evolution — one that can help ensure your property stays competitive, profitable, and memorable for the guests you serve.
Michael French, Founder and CEO of RoomRunner
Images (1):
| 13 Million Humanoid Robots Will Walk Among Us By 2035 | https://www.forbes.com/sites/bernardmar… | 0 | Dec 20, 2025 10:07 | active | |
13 Million Humanoid Robots Will Walk Among Us By 2035 Description: Morgan Stanley predicts 13 million humanoid robots will work alongside humans by 2035, with costs dropping to $10,000 annually, making them as affordable as car... Content:
| Delhi govt inducts two robots into firefighting fleet – ThePrint … | https://theprint.in/india/delhi-govt-in… | 1 | Dec 20, 2025 10:07 | active | |
Delhi govt inducts two robots into firefighting fleet – ThePrint – ANI Feed URL: https://theprint.in/india/delhi-govt-inducts-two-robots-into-firefighting-fleet/965089/ Description: New Delhi [India], May 21 (ANI): The Arvind Kejriwal Government has undertaken a unique initiative of using robots for extinguishing fires in the city. Initially, the Aam Aadmi Party (AAP) Government on Friday inducted two robots into Delhi’s firefighting fleet that will be able to douse fires in narrow streets, warehouses, basements, stairs, forests and enter places like oil and chemical tankers and factories. This comes days after a massive fire that broke out last week in a factory in Mundka, and […] Content:
New Delhi [India], May 21 (ANI): The Arvind Kejriwal Government has undertaken a unique initiative of using robots for extinguishing fires in the city. On Friday, the Aam Aadmi Party (AAP) Government inducted two robots into Delhi’s firefighting fleet that will be able to douse fires in narrow streets, warehouses, basements, stairwells and forests, and enter places like oil and chemical tankers and factories. This comes days after a massive fire broke out last week in a factory in Mundka, in which 27 people died.

These remote-controlled firefighting robots will have greater accessibility and will be able to navigate narrow lanes, reach spaces inaccessible to humans and perform tasks too risky for people. Talking about the initiative, Delhi Home Minister Satyendar Jain said, “For the first time such remote control robots have been brought into the country which are capable of controlling fire remotely. At present, the Delhi Government has inducted 2 such robots; if the trial is successful, more such robots will be inducted into the fleet. These remote-controlled robots will prove to be major troubleshooters for the firefighters.”

Jain further said that the robots will be capable of releasing water at high pressure at a rate of 2,400 litres per minute: “After the induction of these robots, there will be a drastic reduction in the risk that the firefighters have to put up with.” The Home Minister added that the wireless remote attached to the robot controls the spray of water. “This means that the robot will be able to douse fire even in places which firefighters cannot possibly access,” he said.

The firefighters of the Delhi Fire Service have been given specialised training to operate the robots, and a separate SOP has been prepared which will be followed to control fires. “The robots were bought from an Austrian company. A few months ago, the fire incident that happened in the PVC market of Tikri Kalan was controlled with the help of these robots,” said the Minister.

Delhi Chief Minister Arvind Kejriwal said that the initiative will help reduce collateral damage and save precious lives. “Our government has procured remote-controlled firefighting machines. Our brave firemen can now fight fires from a maximum safe distance of up to 100 metres. This will help reduce collateral damage and save precious lives,” Kejriwal tweeted.

The machine can be operated remotely from a distance of 300 metres and, the government said, is not affected by fire, smoke, heat or any other adverse condition. It has a tracking system like army tanks, through which it can easily climb stairs, a 140-horsepower engine, and many water-spray nozzles that can be modified according to the need and the level of the fire. The robot can run at a speed of four kilometres per hour; a sensor and camera installed in its front part allow it to approach the fire and release water according to the temperature there.

The AAP government further said that various types of equipment can be installed in the front part of the robot, with the help of which it can break a window or door and extinguish the fire inside. The robot’s cameras can take stock of the situation inside a building, making it easy to know whether a person is trapped inside. A pipe attached to its rear lets it draw water from tankers standing outside and spray water all around, bringing fires under control in less time and with less risk. It also comes with a ventilation fan to keep the machine cool. The fire-resistant robots, equipped with modern technology, can cover an area of about 100 metres at a time and are capable of dousing a fire immediately. (ANI)

This report is auto-generated from ANI news service. ThePrint holds no responsibility for its content.
The AAP government further said that various types of equipment can also be installed in the front part of the robot, with the help of which it can break the window or door and extinguish the fire inside. The robot has cameras that can take stock of the situation inside the building, etc. With this, it will be easy to know whether a person is trapped inside or not. The robot will have a pipe attached to its rear so that it can draw water from the tankers standing outside and spray water all around. With this, the fire can be brought under control in less time without any risk. It also comes with a ventilation fan which can be used to keep the machine cool. The fire-resistant robots equipped with modern technology can cover an area of about 100 metres at once and is capable of dousing the fire immediately. (ANI) This report is auto-generated from ANI news service. ThePrint holds no responsibility for its content. Delhi Chief Minister Arvind Kejriwal said that this initiative will help reduce collateral damage and save precious lives. “Our government has procured remote-controlled firefighting machines. Our brave firemen can now fight fires from a maximum safe distance of up to 100 metres. This will help reduce collateral damage and save precious lives,” Kejriwal tweeted. This machine can be operated remotely from a distance of 300 metres. It will not be affected by fire, smoke, heat, or any other adverse condition, the government said. With the help of remote control, it can be sent inside the areas affected by the fire and has a tracking system like army tanks, through which these robots can easily climb stairs. It has a 140-horsepower engine. Also, there are many nozzles for water showers. It can be modified according to the need and the level of fire. This robot can run at a speed of four kilometres per hour. The sensor and camera are installed in the front part of the robot. 
The sensor will go near the fire and release the water according to the temperature there. The AAP government further said that various types of equipment can also be installed in the front part of the robot, with the help of which it can break the window or door and extinguish the fire inside. The robot has cameras that can take stock of the situation inside the building, etc. With this, it will be easy to know whether a person is trapped inside or not. The robot will have a pipe attached to its rear so that it can draw water from the tankers standing outside and spray water all around. With this, the fire can be brought under control in less time without any risk. It also comes with a ventilation fan which can be used to keep the machine cool. The fire-resistant robots equipped with modern technology can cover an area of about 100 metres at once and is capable of dousing the fire immediately. (ANI) This report is auto-generated from ANI news service. ThePrint holds no responsibility for its content. “Our government has procured remote-controlled firefighting machines. Our brave firemen can now fight fires from a maximum safe distance of up to 100 metres. This will help reduce collateral damage and save precious lives,” Kejriwal tweeted. This machine can be operated remotely from a distance of 300 metres. It will not be affected by fire, smoke, heat, or any other adverse condition, the government said. With the help of remote control, it can be sent inside the areas affected by the fire and has a tracking system like army tanks, through which these robots can easily climb stairs. It has a 140-horsepower engine. Also, there are many nozzles for water showers. It can be modified according to the need and the level of fire. This robot can run at a speed of four kilometres per hour. The sensor and camera are installed in the front part of the robot. The sensor will go near the fire and release the water according to the temperature there. 
The AAP government further said that various types of equipment can also be installed in the front part of the robot, with the help of which it can break the window or door and extinguish the fire inside. The robot has cameras that can take stock of the situation inside the building, etc. With this, it will be easy to know whether a person is trapped inside or not. The robot will have a pipe attached to its rear so that it can draw water from the tankers standing outside and spray water all around. With this, the fire can be brought under control in less time without any risk. It also comes with a ventilation fan which can be used to keep the machine cool. The fire-resistant robots equipped with modern technology can cover an area of about 100 metres at once and is capable of dousing the fire immediately. (ANI) This report is auto-generated from ANI news service. ThePrint holds no responsibility for its content. This machine can be operated remotely from a distance of 300 metres. It will not be affected by fire, smoke, heat, or any other adverse condition, the government said. With the help of remote control, it can be sent inside the areas affected by the fire and has a tracking system like army tanks, through which these robots can easily climb stairs. It has a 140-horsepower engine. Also, there are many nozzles for water showers. It can be modified according to the need and the level of fire. This robot can run at a speed of four kilometres per hour. The sensor and camera are installed in the front part of the robot. The sensor will go near the fire and release the water according to the temperature there. The AAP government further said that various types of equipment can also be installed in the front part of the robot, with the help of which it can break the window or door and extinguish the fire inside. The robot has cameras that can take stock of the situation inside the building, etc. 
With this, it will be easy to know whether a person is trapped inside or not. The robot will have a pipe attached to its rear so that it can draw water from the tankers standing outside and spray water all around. With this, the fire can be brought under control in less time without any risk. It also comes with a ventilation fan which can be used to keep the machine cool. The fire-resistant robots equipped with modern technology can cover an area of about 100 metres at once and is capable of dousing the fire immediately. (ANI) This report is auto-generated from ANI news service. ThePrint holds no responsibility for its content. With the help of remote control, it can be sent inside the areas affected by the fire and has a tracking system like army tanks, through which these robots can easily climb stairs. It has a 140-horsepower engine. Also, there are many nozzles for water showers. It can be modified according to the need and the level of fire. This robot can run at a speed of four kilometres per hour. The sensor and camera are installed in the front part of the robot. The sensor will go near the fire and release the water according to the temperature there. The AAP government further said that various types of equipment can also be installed in the front part of the robot, with the help of which it can break the window or door and extinguish the fire inside. The robot has cameras that can take stock of the situation inside the building, etc. With this, it will be easy to know whether a person is trapped inside or not. The robot will have a pipe attached to its rear so that it can draw water from the tankers standing outside and spray water all around. With this, the fire can be brought under control in less time without any risk. It also comes with a ventilation fan which can be used to keep the machine cool. The fire-resistant robots equipped with modern technology can cover an area of about 100 metres at once and is capable of dousing the fire immediately. 
(ANI) This report is auto-generated from ANI news service. ThePrint holds no responsibility for its content. Also, there are many nozzles for water showers. It can be modified according to the need and the level of fire. This robot can run at a speed of four kilometres per hour. The sensor and camera are installed in the front part of the robot. The sensor will go near the fire and release the water according to the temperature there. The AAP government further said that various types of equipment can also be installed in the front part of the robot, with the help of which it can break the window or door and extinguish the fire inside. The robot has cameras that can take stock of the situation inside the building, etc. With this, it will be easy to know whether a person is trapped inside or not. The robot will have a pipe attached to its rear so that it can draw water from the tankers standing outside and spray water all around. With this, the fire can be brought under control in less time without any risk. It also comes with a ventilation fan which can be used to keep the machine cool. The fire-resistant robots equipped with modern technology can cover an area of about 100 metres at once and is capable of dousing the fire immediately. (ANI) This report is auto-generated from ANI news service. ThePrint holds no responsibility for its content. The sensor and camera are installed in the front part of the robot. The sensor will go near the fire and release the water according to the temperature there. The AAP government further said that various types of equipment can also be installed in the front part of the robot, with the help of which it can break the window or door and extinguish the fire inside. The robot has cameras that can take stock of the situation inside the building, etc. With this, it will be easy to know whether a person is trapped inside or not. 
The robot will have a pipe attached to its rear so that it can draw water from the tankers standing outside and spray water all around. With this, the fire can be brought under control in less time without any risk. It also comes with a ventilation fan which can be used to keep the machine cool. The fire-resistant robots equipped with modern technology can cover an area of about 100 metres at once and is capable of dousing the fire immediately. (ANI) This report is auto-generated from ANI news service. ThePrint holds no responsibility for its content. The AAP government further said that various types of equipment can also be installed in the front part of the robot, with the help of which it can break the window or door and extinguish the fire inside. The robot has cameras that can take stock of the situation inside the building, etc. With this, it will be easy to know whether a person is trapped inside or not. The robot will have a pipe attached to its rear so that it can draw water from the tankers standing outside and spray water all around. With this, the fire can be brought under control in less time without any risk. It also comes with a ventilation fan which can be used to keep the machine cool. The fire-resistant robots equipped with modern technology can cover an area of about 100 metres at once and is capable of dousing the fire immediately. (ANI) This report is auto-generated from ANI news service. ThePrint holds no responsibility for its content. The robot has cameras that can take stock of the situation inside the building, etc. With this, it will be easy to know whether a person is trapped inside or not. The robot will have a pipe attached to its rear so that it can draw water from the tankers standing outside and spray water all around. With this, the fire can be brought under control in less time without any risk. It also comes with a ventilation fan which can be used to keep the machine cool. 
The fire-resistant robots equipped with modern technology can cover an area of about 100 metres at once and is capable of dousing the fire immediately. (ANI) This report is auto-generated from ANI news service. ThePrint holds no responsibility for its content. The robot will have a pipe attached to its rear so that it can draw water from the tankers standing outside and spray water all around. With this, the fire can be brought under control in less time without any risk. It also comes with a ventilation fan which can be used to keep the machine cool. The fire-resistant robots equipped with modern technology can cover an area of about 100 metres at once and is capable of dousing the fire immediately. (ANI) This report is auto-generated from ANI news service. ThePrint holds no responsibility for its content. It also comes with a ventilation fan which can be used to keep the machine cool. The fire-resistant robots equipped with modern technology can cover an area of about 100 metres at once and is capable of dousing the fire immediately. (ANI) This report is auto-generated from ANI news service. ThePrint holds no responsibility for its content. The fire-resistant robots equipped with modern technology can cover an area of about 100 metres at once and is capable of dousing the fire immediately. (ANI) This report is auto-generated from ANI news service. ThePrint holds no responsibility for its content. This report is auto-generated from ANI news service. ThePrint holds no responsibility for its content. Subscribe to our channels on YouTube, Telegram & WhatsApp Support Our Journalism India needs fair, non-hyphenated and questioning journalism, packed with on-ground reporting. ThePrint – with exceptional reporters, columnists and editors – is doing just that. Sustaining this needs support from wonderful readers like you. Whether you live in India or overseas, you can take a paid subscription by clicking here. 
Copyright © 2025 Printline Media Pvt. Ltd. All rights reserved.
Images (1):
|
|||||
| Why are so many robots white? | https://www.dailyexcelsior.com/why-are-… | 0 | Dec 20, 2025 10:07 | active | |
Why are so many robots white?URL: https://www.dailyexcelsior.com/why-are-so-many-robots-white/ Description: Pittsburgh (US), Jan 28: Problems of racial and gender bias in artificial intelligence algorithms and the data used to train large language models like ChatGPT ... Content: |
|||||
| China Opens 'Robot Mall,' Its First Mall for Robots | https://gizmodo.com/china-opens-robot-m… | 1 | Dec 20, 2025 10:07 | active | |
China Opens 'Robot Mall,' Its First Mall for RobotsURL: https://gizmodo.com/china-opens-robot-mall-its-first-mall-for-robots-2000640691 Description: The Robot Mall opened this Friday in Beijing Content:
China opened its first full-scale shopping center dedicated entirely to robots on Friday, as part of a broader push to bring robotics from research labs into people's homes. The four-story Robot Mall, located in Beijing's high-tech E-Town district, showcases more than 100 robots from over 40 brands, including Chinese companies like Ubtech Robotics and Unitree Robotics. The store operates like a car dealership, but for robots. It follows the "4S" model common in China, offering sales, service, spare parts, and surveys, or opportunities for customers to provide feedback, all in one location. "If robots are to enter thousands of households, relying solely on robotics companies is not enough," Wang Yifan, a director at the mall, told Reuters. Robots at the new mall start at 2,000 yuan ($278) and go up to several million yuan. A talking humanoid replica of Albert Einstein is going for 700,000 yuan ($97,473). The mall also includes a themed restaurant where robot waiters serve dishes and drinks prepared by robot chefs. Visitors can also watch robots play soccer or Chinese chess, interact with robot dogs, or meet animatronic versions of historical figures like Isaac Newton, Emperor Qin Shi Huang, and the famed Chinese poet Li Bai. The opening of Robot Mall coincides with two major robotics conferences in the city this month. Friday was also the first day of the 2025 World Robot Conference, which runs through August 12. Over the course of the conference, nearly 500 experts from over 20 countries will gather to discuss the latest trends in robotics. Approximately 200 robotics companies will also be present to showcase their latest research and development breakthroughs in over 1,500 exhibits. Just days later, Beijing will host the 2025 World Humanoid Robot Games, taking place from August 14 to 17. 
Humanoid robots will face off in a series of 21 events, testing their skills in everything from athletics, soccer, and dance to handling materials, drug sorting, and other performance-based and scenario-driven challenges. So far, more than 100 teams have registered to compete. All of this is part of China's broader push to win the global robotics race. The country is pouring tons of resources into the sector, including more than $20 billion in subsidies over the past year alone. Beijing is also reportedly planning a one trillion yuan ($137 billion) fund to help support AI and robotics startups, according to Reuters. Some U.S. robotics companies, including Tesla and Boston Dynamics, have already called on lawmakers to develop a national strategy that can compete with China's. ©2025 GIZMODO USA LLC. All rights reserved.
Images (1):
|
|||||
| The Optimus robots at Tesla’s Cybercab event were humans in … | https://www.theverge.com/2024/10/13/242… | 1 | Dec 20, 2025 10:07 | active | |
The Optimus robots at Tesla’s Cybercab event were humans in disguise | The VergeDescription: Tesla’s Optimus robots’ natural responses and smooth motions were made possible by human control behind the scenes at the Cybercab reveal event. Content:
Behind-the-scenes human assistance meant the We, Robot event said little about how far its Optimus humanoid robots have come. By Wes Davis. If you buy something from a Verge link, Vox Media may earn a commission. See our ethics statement. Tesla made sure its Optimus robots were a big part of its extravagant, in-person Cybercab reveal last week. The robots mingled with the crowd, served drinks to and played games with guests, and danced inside a gazebo. Seemingly most surprisingly, they could even talk. But it was mostly just a show. It’s obvious when you watch the videos from the event, of course. If Optimus really was a fully autonomous machine that could immediately react to verbal and visual cues while talking, one-on-one, to human beings in a dimly lit crowd, that would be mind-blowing. Attendee Robert Scoble posted that he’d learned humans were “remote assisting” the robots, later clarifying that an engineer had told him the robots used AI to walk, spotted Electrek. Morgan Stanley analyst Adam Jonas wrote that the robots “relied on tele-ops (human intervention)” in a note, the outlet reports. There are obvious tells to back those claims up, like the fact that the robots all have different voices or that their responses were immediate, with gesticulation to match. 
It doesn’t feel like Tesla was going out of its way to make anyone think the Optimus machines were acting on their own. In another video that Jalopnik pointed to, an Optimus’ voice jokingly told Scoble that “it might be some” when he asked it how much it was controlled by AI. Another robot — or the human voicing it — told an attendee in a stilted impression of a synthetic voice, “Today, I am assisted by a human,” adding that it’s not fully autonomous. (The voice stumbled on the word “autonomous.”) Musk first announced Tesla’s humanoid robot by bringing what was very clearly a person in a robot suit on stage, so it’s no surprise that the Optimuses (Optimi? Optimodes?) at last week’s event were hyperbolic in their presentation. And people who went didn’t seem to feel upset or betrayed by that. But if you were hoping to have any sense of how far along Tesla truly is in its humanoid robotics work, the “We, Robot” event wasn’t the place to look. © 2025 Vox Media, LLC. All Rights Reserved
Images (1):
|
|||||