| Title | URL | Images | Scraped At | Status |
|---|---|---|---|---|
| ByteDance Releases GR-2 Robot AI Large Model - Pandaily | https://pandaily.com/bytedance-releases… | 1 | Jan 01, 2026 00:02 | active |
ByteDance Releases GR-2 Robot AI Large Model - Pandaily
URL: https://pandaily.com/bytedance-releases-gr-2-robot-ai-large-model/
Description: ByteDance releases GR-2 robot AI large model, with an average task completion rate of 97.7%, simulating human learning to handle complex tasks.
Content:
ByteDance's research team has launched its second-generation large-scale robot model, GR-2 (Generative Robot 2.0). Its highlight is an innovatively constructed "robot infancy" learning stage that imitates human growth to learn complex tasks, giving the model outstanding generalization ability and multitask versatility.

Like many other AI models, GR-2 goes through two stages: pre-training and fine-tuning. During pre-training, GR-2 'watched' up to 38 million internet videos, amounting to 500 billion tokens, drawn from various public datasets and covering a wide range of daily scenes such as home, outdoor, and office environments. This gives GR-2 generalization capabilities across a wide range of robot tasks and environments in subsequent training. In the fine-tuning stage, the team fine-tuned the model on robot trajectory data for video generation and action prediction. The model demonstrates outstanding multitasking capability, achieving an average success rate of 97.7% across more than 100 tasks. GR-2 also generalizes well to novel, previously unseen scenarios, including new backgrounds, environments, objects, and tasks.
| Robot Drummer Achieves 90% Precision, Mimics Human Techniques | https://www.techjuice.pk/robot-drummer-… | 1 | Jan 01, 2026 00:02 | active |
Robot Drummer Achieves 90% Precision, Mimics Human Techniques
URL: https://www.techjuice.pk/robot-drummer-achieves-90-precision-mimics-human-techniques/
Description: A humanoid robot developed by Swiss and Italian researchers uses AI to drum with 90% accuracy and human-like technique.
Content:
A humanoid robot has stunned the tech world by nailing complex drum performances with over 90 percent rhythmic accuracy, even executing humanlike techniques such as stick switching and cross-arm hits. Developed by researchers at SUPSI, IDSIA, and Politecnico di Milano, the "Robot Drummer" system relies on reinforcement learning rather than preprogrammed sequences, marking a milestone in machine creativity and physical coordination.

Instead of feeding the robot exact instructions, its creators transformed drum scores into "Rhythmic Contact Chains", precise sequences of timed contact events, and trained the model on over 30 rock, metal, and jazz tracks. In simulation, the robot mastered rhythmic timing while developing expressive behaviors such as planning strike patterns and adapting stick use spontaneously.

The idea originated from a casual chat between researchers who recognized that humanoid robots rarely venture into the expressive realm of music. Drumming became a natural test case for combining rhythm, coordination, and creative expression. The team believes the robot could eventually perform live alongside human musicians. Its next ambition is transferring these simulated drumming skills to actual hardware, empowering the robot to improvise live based on musical cues, essentially allowing it to respond to its environment just like a human drummer.

Figure's humanoid robots now walk naturally using reinforcement learning trained in simulation, and Boston Dynamics is enhancing agility through similar AI-driven methods. This reflects a growing movement away from manual programming toward autonomous learning and adaptability in robotics.

Drumming demands split-second precision, full-body coordination, and dynamic timing, traits essential to musical artistry. That a robot can emulate these human qualities opens new doors for both entertainment and educational applications. Whether guiding music students or performing on global stages, robotic musicians may no longer be sci-fi.
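The article does not spell out how a "Rhythmic Contact Chain" is encoded, but its description (a precise sequence of timed contact events) suggests a simple data structure. The sketch below is purely illustrative; the event fields and the example beat are assumptions, not the researchers' actual format.

```python
from dataclasses import dataclass

@dataclass
class ContactEvent:
    time_s: float  # when the strike should land, in seconds
    drum: str      # target surface, e.g. "snare", "hi-hat", "kick"
    limb: str      # which stick or limb is assigned

# A hypothetical contact chain for one bar of a basic rock beat at 120 BPM:
# hi-hat eighth notes, kick on beats 1 and 3, snare on beats 2 and 4.
beat = 60.0 / 120.0  # seconds per quarter note
chain = (
    [ContactEvent(i * beat / 2, "hi-hat", "right") for i in range(8)]
    + [ContactEvent(0 * beat, "kick", "foot"), ContactEvent(2 * beat, "kick", "foot")]
    + [ContactEvent(1 * beat, "snare", "left"), ContactEvent(3 * beat, "snare", "left")]
)
chain.sort(key=lambda e: e.time_s)  # an RL reward could then score timing error per event
```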
| New quadruped robot climbs vertically 50 times faster than rivals | https://interestingengineering.com/inno… | 1 | Jan 01, 2026 00:02 | active |
New quadruped robot climbs vertically 50 times faster than rivals
URL: https://interestingengineering.com/innovation/kleiyn-chimney-climbing-robot-dog
Description: Meet KLEIYN, the innovative robot using chimney climbing to scale vertical walls and navigate uneven terrain effortlessly.
Content:
The robot uses a combination of machine learning and a flexible "back" to chimney climb with ease. A team of researchers from Japan's Jouhou System Kougaku Laboratory (JSK) at the University of Tokyo has developed a new robot that can both walk on uneven terrain and climb vertical walls. Called KLEIYN, the robot can scale vertical surfaces using a technique called chimney climbing (pressing its legs against two opposing walls for support). To support their paper on the subject, posted to the arXiv preprint server, the team has also released footage of KLEIYN in action.

The robot itself is a four-legged (quadruped) robot with an active waist joint (it can bend in the middle). According to its developers, it uses quasi-direct-drive motors for precise and powerful movement. All in all, the robot weighs approximately 40 pounds (18 kg), has 13 joints, and measures around 2.5 feet (76 cm) in length.

KLEIYN learns to move and climb through Reinforcement Learning (RL) in simulation. The team also integrated a special training method called Contact-Guided Curriculum Learning (CGCL), which gradually teaches it to transition from flat ground to vertical surfaces. Other technical innovations include Asymmetric Actor-Critic RL, an efficient training setup that lets the robot learn from rich simulation data while relying on only basic sensors in real-world deployment.

What makes KLEIYN different from the many other climbing robots out there is the design of its "feet." Most climbing robots use grippers (like claws) to grab onto things, which limits their walking ability. KLEIYN instead relies on chimney climbing, pressing its feet against two walls and eliminating the need for grippers, while the waist joint lets it adapt to different wall widths, especially narrow ones.

This design appears to have paid off remarkably well. During test climbs of walls spaced 31.5 inches (80 cm) to 39.4 inches (1 m) apart, the robot climbed at a rate of 6 inches (15 cm) to 6.7 inches (17 cm) per second. That is roughly 50x faster than the previous best (SiLVIA). The robot also walked on rough terrain and climbed steps outdoors successfully, and it can learn to recover from slipping, making it more robust.

Not everything is rosy for KLEIYN, though. According to the team, it struggles with wall gaps wider than 41.3 inches (1.05 m) due to torque limits. It also drifted sideways unintentionally during climbs, suggesting it needs better environmental sensing (e.g., LiDAR input). The robot's motors also tend to overheat during long climbs, pointing to a need for better load balancing.

That said, this research pushes the boundaries of robot mobility, enabling one robot to handle both flat ground and vertical obstacles. That could make it valuable for tasks such as search and rescue in collapsed buildings, exploration of complex environments (like caves or disaster zones), and transport across uneven terrain. The study is available on arXiv, and the design team's GitHub page has more information on the project.
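The article only sketches how Contact-Guided Curriculum Learning works (a gradual transition from flat ground to vertical walls), so the snippet below is a toy illustration of that idea, not the JSK team's actual algorithm; the success thresholds, step size, and environment call are all assumptions.

```python
def curriculum_difficulty(success_rate: float, level: float, step: float = 0.05) -> float:
    """Toy contact-guided curriculum: ramp wall inclination from flat (0.0)
    to fully vertical (1.0) only while the policy keeps succeeding."""
    if success_rate > 0.8:      # policy is comfortable: make terrain steeper
        level = min(1.0, level + step)
    elif success_rate < 0.3:    # policy is struggling: ease off
        level = max(0.0, level - step)
    return level

# During training, something like:
#   level = curriculum_difficulty(recent_success_rate, level)
#   env.set_wall_inclination(level * 90.0)  # degrees; hypothetical environment API
```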
| No title found | https://www.scoop.co.nz/stories/BU2111/… | 0 | Jan 01, 2026 00:02 | active |
| He Smiles, Serves Coffee, and Learns Your Habits: Meet Neo, … | https://medium.com/@social_57513/he-smi… | 0 | Jan 01, 2026 00:02 | active |
He Smiles, Serves Coffee, and Learns Your Habits: Meet Neo, the Robot That's Learning to Live With Humans
Description: If you still think humanoid robots belong in science...
| Meta-Learning and Few-Shot Learning | https://www.dailyexcelsior.com/meta-lea… | 0 | Jan 01, 2026 00:02 | active |
Meta-Learning and Few-Shot Learning
URL: https://www.dailyexcelsior.com/meta-learning-and-few-shot-learning/
| Rural Karnataka Teacher Built Robot to Make Learning Fun for … | https://www.thebetterindia.com/318179/k… | 1 | Jan 01, 2026 00:01 | active |
Rural Karnataka Teacher Built Robot to Make Learning Fun for Students
Description: Akshay Mashelkar wanted to make education more interactive for children, and so he built Shiksha, a humanoid robot that imparts education to children.
Content:
Karnataka teacher Akshay Mashelkar wanted to make education more interactive for children in his village, and so he built Shiksha, a humanoid robot that teaches children up to Class 4.

Dressed in a blue shirt and tunic, with neatly parted hair styled into two plaits, the humanoid robot named 'Shiksha' bears a striking resemblance to the rest of the students of Sirsi village. As she begins delivering the day's lessons, from rhymes to the days of the week, names of different shapes, and more, there is a sense of wonder in the eyes of each student as they take in this remarkable teaching experience. Shiksha is the brainchild of 30-year-old Akshay Mashelkar and aims to make learning fun and interactive.

"Growing up in a village I was very aware of the limitations of schools in rural areas. We still use printed charts and blocks as a means of learning. There are no scientific methods available. I want to change that," Akshay tells The Better India.

Born and raised in the village of Sirsi in Karnataka's Uttara Kannada district, Akshay grew up in a teaching household. "My mother was a teacher and from a very young age, I knew I wanted to become an educator too. While studying, I realised that I wanted to work towards improving the education system," he says. Following in his mother's footsteps, Akshay became a professor at a college in Sirsi after completing his degree in Physics. "While I enjoyed my job as a professor, I had many ideas to implement in the education system. With the work, there was no time for me to start working on it though," he says.

When the COVID pandemic hit and the education sector moved online, Akshay found himself relatively free. "I found the perfect opportunity to work on my ideas. One of the most important things that I have seen in the education sector, especially in Tier-2 and -3 cities and rural areas, is the lack of modern and scientific methods of teaching. On one of my several visits to schools in the village, I saw that teachers were still using charts and blocks to teach," he says. "Those techniques were used when I was in school. It is sad that the world has advanced so much with smart boards and whatnot, but schools in rural areas are still stuck with handmade charts. This pushed me further to give all my attention to bringing an easier and cheaper solution," he adds.

It took Akshay a good one and a half years of research. In 2022, 'Shiksha', a humanoid robot capable of teaching in regional languages up to Class 4, was ready.

In India, the education sector has been incorporating technology for teaching purposes for several years. Nevertheless, its adoption has mostly been limited to urban regions and expensive schools, while rural schools continue to rely on conventional tools like charts and drawings. Moreover, teachers in government schools are overburdened with students. A recent Quint report states, "The number of teachers in government schools in Karnataka has dropped from 2.08 lakh to 1.99 lakh, forcing 6,529 schools in the state to have only one teacher. The student-teacher ratio is now 23:1 when compared to 21:1 in 2020-21." The inclusion of such a device could help address this problem.

The robot cost nearly Rs 2 lakh to build, which Akshay paid out of his own savings. "A lot of money was involved in the research and development. On average, making only a robotic arm costs nearly Rs 50,000. 'Shiksha' is an entire robot with several features. The reason why I was able to cut costs was I used jugaad. For instance, I did not use a mould for the body of the robot; instead, for the arms I used plastic cricket stumps that you find in toy shops," he says.

Shiksha can teach various subjects, including rhymes in Kannada and English, the days of the week, names of shapes, the English alphabet, and maths topics such as addition, multiplication and tables. Explaining how the robot works, Akshay says, "The robot has two main cards — the master card that unlocks it, and the normal card to start the desired programme. The teacher has to put the master card on Shiksha's hand to start it and then they can use the programme cards to start different programmes. She moves her arms to take the card and returns it once scanned. She asks questions, recites poems, and even has trivia options."

The robot has visited over 25 schools in the Uttara Kannada district, including KHB School and Urdu School in Sirsi. So far, Shiksha can teach up to Class 4 and accommodates syllabi across boards. Sunaina Hegde, who teaches Science and Maths at Model Higher Primary School in Sirsi, says, "Akshay came with Shiksha to our school in April. The children were so happy to see her and they took a greater interest in the class. For them, Shiksha was not a robot, but more like a friend as it was dressed like them too." "While it is great for students to learn, it is also a great tool to be included in schools for teachers. It reduces our burden, as there are fewer teachers in government schools. Something so interactive helps children to gain more interest in science and technology," she adds.

Akshay notes, "The importance of involving village children in technology is because they are also the future of the country. An average child living in an urban setting, from a very young age, knows how to operate laptops and computers. Sadly, this is not true for kids in rural areas. When the kids saw Shiksha for the first time, I could see the sparkle in their eyes. They were intrigued, amazed and excited." "My motive behind making Shiksha was not only to introduce technology in the classroom but also to encourage children to make their own robots," he says.

Taking the thought forward, Akshay also opened a research centre where young robotics enthusiasts can come and learn for free. "In order to keep the cost of the operations of the centre low, we keep our centre mobile. Whenever we get low-cost places to rent in Sirsi, we move to that place. Over 200 children have visited the centre and many are regulars now. They have the space to learn from me and use the tools available in the research centre," he says.

Although the first Shiksha cost him lakhs, Akshay says that he can reduce the cost even further. "There were a lot of errors and a lot of investment in R&D initially, but now there won't be. With the help of grants and support from the government and NGOs, I can possibly reduce the cost to Rs 35,000. This way it will be more affordable for rural schools. My only wish is to take Shiksha to every rural school in Karnataka and make learning fun," he adds.
If you wish to know more about his research centre and be a part of his initiative, you can reach him at 74832 76508. (Edited by Divya Sethu)
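Akshay's description above (a master card to unlock the robot, programme cards to start lessons) amounts to a small card-driven state machine. The sketch below is purely illustrative pseudologic of that workflow, not Shiksha's actual software; the card IDs and programme names are invented.

```python
# Illustrative sketch of the two-card workflow described above -- NOT
# Shiksha's real code, just the control flow as the article explains it.
PROGRAMS = {"rhymes_kn": "Kannada rhymes", "weekdays": "Days of the week",
            "shapes": "Names of shapes", "tables": "Multiplication tables"}

def run_robot(scan_card):
    unlocked = False
    while True:
        card = scan_card()          # returns a card ID read from the hand scanner
        if card is None:
            break                   # no more cards: shut down
        if card == "MASTER":
            unlocked = True         # the master card unlocks the robot
            print("Shiksha unlocked. Present a programme card.")
        elif unlocked and card in PROGRAMS:
            print(f"Starting programme: {PROGRAMS[card]}")
        else:
            print("Unknown card, or robot still locked.")
```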
| 'Tesla Optimus learning Kung Fu': Elon Musk's humanoid robot stuns … | https://timesofindia.indiatimes.com/tec… | 1 | Jan 01, 2026 00:01 | active |
'Tesla Optimus learning Kung Fu': Elon Musk's humanoid robot stuns with human-like moves and balance | Watch - The Times of India
Description: Tesla's Optimus robot showcased remarkable AI by flawlessly performing Kung Fu alongside a human trainer, as revealed in a video shared by Elon Musk.
| Diffusion Policy: How Diffusion Models Are Transforming Robot Learning from … | https://kargarisaac.medium.com/diffusio… | 0 | Jan 01, 2026 00:01 | active |
Diffusion Policy: How Diffusion Models Are Transforming Robot Learning from Demonstration
Description: If you've followed the rapid progress of AI in robotics, you've likely seen the surge of interest in diffusion models for image and text generation. But wha...
| Watch: Elon Musk's Tesla Robot "Stumbles Like A Human" While … | https://www.ndtv.com/world-news/watch-e… | 0 | Jan 01, 2026 00:01 | active |
Watch: Elon Musk's Tesla Robot "Stumbles Like A Human" While Learning To Walk On Slopes
Description: The robot's unsteady movements, hilariously reminiscent of a drunken person, have sparked a flurry of reactions.
| Physical AI Startup CarbonSix Unveils Industry-First Standardized Robot Imitation Learning … | https://www.manilatimes.net/2025/09/19/… | 0 | Jan 01, 2026 00:01 | active |
Physical AI Startup CarbonSix Unveils Industry-First Standardized Robot Imitation Learning Toolkit for Manufacturing
| The best LEGO robot kits for hands-on STEM learning – … | https://www.denverpost.com/2025/12/16/t… | 1 | Jan 01, 2026 00:01 | active |
The best LEGO robot kits for hands-on STEM learning – The Denver Post
URL: https://www.denverpost.com/2025/12/16/the-best-lego-robot-kits-for-hands-on-stem-learning/
Description: LEGO robot kits are great gifts that combine fun with learning to build, code and program. They come in variations designed for every age and skill level.
Content:
LEGO has been making childhood toys for more than 50 years, and the company now even offers kits that let your child build a robot. These sets encourage young kids to explore and get excited about STEM. LEGO robot kits are great gifts that combine fun with learning to build, code and program, and they come in variations designed for every age and skill level. If you or your child wants to build a fully functional intelligent robot that walks, talks, plays games and completes many different tasks, try the LEGO Mindstorms EV3 31313 Robot Kit.

Start by choosing one of two main types of robot kits: model robots or programmable robots. The greater the detail involved, the more time and effort required to build the robot; programmable robot kits will always be more detailed than model robot kits. Look for two key indicators: the number of pieces included and the suggested age range. More detailed and complicated kits will have at least 500 separate pieces. Robot kits with high levels of detail will require tools to assemble and will also be more expensive. Small programmable robot kits have one motor, while larger and more detailed kits will have two or even three motors. High-tech add-ons like transmitters and sensors have electrical components that need more than one motor to operate properly. You will find simple model kits for $15-$25 and more detailed ones for $100 or more. Programmable kits start at around $150 and increase in cost as you add pieces, motors, sensors and possibilities.

Q. Do LEGO robot kits teach kids to code? A. Yes, on two different levels. LEGO Boost kits teach younger kids the basic rules of programming through the use of drag-and-drop icons. Those who choose LEGO Mindstorms kits learn how to code by writing their own programs using more complex processes.

Q. Are LEGO robot kits only for kids? A. Anyone can have fun with a LEGO robot kit. If you enjoy technology, try a more complex model. If you're new to coding, a simpler kit is a good start.

LEGO Mindstorms EV3 31313 Robot Kit. What you need to know: This fully functional intelligent robot kit allows you to create five different robots that walk, talk and play games. What you'll love: Builders of all ages will enjoy creating and commanding their own 16- by 15- by 14-inch robot with this 601-piece kit. It includes the intelligent EV3 Brick, three motors and color, touch and infrared sensors. What you should consider: This is a pricey robot that requires some technical and programming skills.

LEGO Star Wars VIII BB-8 75187 Building Kit. What you need to know: This Star Wars droid has more than 1,100 parts and is designed for kids 10 to 16. What you'll love: The detailing is authentic. Turn one wheel at the side to rotate the head and another to open the access hatch and extend the welding torch. It also comes with a display stand, decorative fact plaque and mini-figure. What you should consider: This kit is expensive for a model that can't be programmed.

Prices listed reflect time and date of publication and are subject to change.
| The best LEGO robot kits for hands-on STEM learning – … | https://www.ocregister.com/2025/12/16/t… | 1 | Jan 01, 2026 00:01 | active |
The best LEGO robot kits for hands-on STEM learning – Orange County Register
URL: https://www.ocregister.com/2025/12/16/the-best-lego-robot-kits-for-hands-on-stem-learning/
Description: LEGO robot kits are great gifts that combine fun with learning to build, code and program. They come in variations designed for every age and skill level.
Content:
Syndicated BestReviews buying guide; content identical to the Denver Post entry above.
| Learning Robots lets everyone understand AI with an educational robot | https://www.maddyness.com/2024/04/19/le… | 1 | Jan 01, 2026 00:01 | active |
Learning Robots lets everyone understand AI with an educational robot
Description: The startup Learning Robots wants to revolutionize how the inner workings of AI are taught, with AlphAI, an educational robot.
Content:
Fascinated by the complex workings of the animal brain, Thomas Deneux, a researcher at CNRS, decided to develop an AI for visualizing neural networks. He founded Learning Robots, a startup offering software that can train a robot on simple activities and visualize the workings of its neurons, and partnered with Axel Haentjens. The goal? To raise public awareness of AI by demystifying it, through a clear view of its mechanisms and how it learns.

"With the spread of chatbots and generative AI like ChatGPT, AI is everywhere, yet few people understand how it works!" notes Axel Haentjens, managing director of Learning Robots. "We want to change the public's view of AI, demystify it and make it accessible to everyone, not just an elite. And it is by manipulating and experimenting that we learn best."

The software makes it possible to visualize in real time the decisions made by a small robot, AlphAI, and to explain the underlying algorithmic processes that allow the machine to learn. This hands-on approach makes AI tangible and understandable, without the abstract concepts that are often hard to grasp. Any audience can understand how AI works, how its neurons operate, and even observe how the robot "sees" and makes its decisions. Watching the robot evolve as it learns also highlights how much data quality matters to its progress. "The AI makes a lot of mistakes at first. But the more it learns, the better it performs. This shows children that it is normal to make mistakes in order to understand and learn better," the managing director explains. "By training this robot, anyone can understand what an algorithm is. It can be used from sixth grade (classe de sixième) onward to introduce AI; in tenth grade (seconde), students can dig into the details of the algorithms' calculations; and in higher education, they can start coding these algorithms."

The young company sells robotics kits and software licenses for monitoring and training the robots to educational institutions. Several hundred institutions, including Université Paris-Saclay, already use the technology. And while Learning Robots fits naturally into the education system, it has also found a place in the professional world: companies, including some in the CAC 40, already use the technology to train their executives, helping them understand AI and easing their apprehensions.

After winning the Prix International Next Innov by Banque Populaire Val de France, the startup now aims to expand internationally and to develop new features, notably connecting its software to generative language models to ease human-machine interaction. Talks are under way to establish partnerships in Hong Kong, Switzerland, Brazil, Germany and the United Kingdom. "When we heard about Next Innov, the name clicked with me immediately: Next is about preparing the next move. And innovating is in our DNA," concludes Axel Haentjens. "Beyond the financial aspects, this prize will bring us a degree of visibility, as Banque Populaire Val de France is very well known, along with real media exposure, support and a network: these are encounters that lead to more encounters, and that is exciting."
| Tesla Optimus Robot Masters Kung Fu with AI Imitation Learning | https://www.webpronews.com/tesla-optimu… | 1 | Jan 01, 2026 00:01 | active |
Tesla Optimus Robot Masters Kung Fu with AI Imitation Learning
URL: https://www.webpronews.com/tesla-optimus-robot-masters-kung-fu-with-ai-imitation-learning/
Content:
In a striking demonstration of robotic agility, Tesla's Optimus humanoid robot has been captured on video executing a series of kung fu moves with remarkable fluidity, mirroring the motions of a human trainer. Shared by Elon Musk on the social platform X, the 36-second clip shows Optimus blocking strikes, dodging attacks, and performing precise hand gestures, all while maintaining balance on a mat. This isn't just a parlor trick; it highlights significant advancements in Tesla's AI-driven robotics, where the machine learns complex physical tasks through imitation and self-correction, without direct human intervention during the performance.

The video, which has garnered millions of views, underscores Optimus's evolution from earlier prototypes that struggled with basic locomotion to a system capable of nuanced, dynamic interactions. Musk emphasized that the robot's actions are powered entirely by artificial intelligence, with no teleoperation or pre-programmed sequences involved. This leap forward comes amid Tesla's broader push into humanoid robotics, aiming to deploy these machines for factory work, household chores, and beyond.

Advancements in AI Training and Hardware

Engineers at Tesla have reportedly trained Optimus using end-to-end neural networks, similar to those in the company's autonomous driving systems, allowing the robot to process visual data and translate it into physical responses in real time. According to a report from Digital Trends, the display is impressive for an adult-sized robot, showcasing speed, balance, and accuracy that rival human capabilities. The publication notes that while earlier videos featured Optimus dancing or folding laundry, this kung fu routine demonstrates improved joint flexibility and coordination, potentially paving the way for applications in security or entertainment.

Comparisons to competitors like Boston Dynamics' Atlas or Figure's humanoid bots reveal Tesla's unique edge: scalability. Musk has projected low-volume production for internal use by next year, with high-volume output targeted for 2026. Insights from Interesting Engineering highlight how Optimus's kung fu training refines its motor skills, drawing from simulation-based learning where virtual models endure millions of iterations before real-world deployment.

Implications for Industry and Ethical Considerations

Industry insiders see this as a milestone in humanoid robotics, potentially disrupting labor markets in manufacturing and service sectors. Tesla's approach leverages its vast data from electric vehicles to accelerate robot development, with Optimus priced under $30,000 per unit for mass appeal. A piece in The Times of India describes the robot flawlessly replicating kung fu alongside a trainer, emphasizing AI's role in achieving human-like balance without engineer oversight.

However, such capabilities raise questions about safety and job displacement. If robots can master martial arts, their potential in hazardous environments, like disaster response, becomes evident, but so do concerns over misuse. Musk has addressed this by stressing ethical AI frameworks, though skeptics on platforms like X argue for more transparency in training data.

Future Trajectories and Competitive Pressures

Looking ahead, Tesla plans to integrate Optimus with its Grok AI for conversational abilities, expanding beyond physical feats. Posts on X from robotics enthusiasts, including those analyzing the video's frame-by-frame mechanics, suggest this kung fu demo is a precursor to more autonomous behaviors, like adaptive learning in unstructured settings. Rivals are not idle; companies like Unitree have showcased similarly agile robots, but Tesla's ecosystem integration gives it an advantage. As reported by TechEBlog, Optimus V3's moves start with a fist touch, evolving into sparring that blurs the line between machine and human.

For industry leaders, this signals a new era where robots don't just assist but emulate human prowess, promising efficiencies while challenging us to redefine work in an automated world. Tesla's kung fu-capable Optimus isn't merely entertainment; it's a harbinger of robotics' mainstream integration, driven by relentless innovation and AI prowess.
| Robot movement planning for obstacle avoidance using reinforcement learning | … | https://www.nature.com/articles/s41598-… | 1 | Jan 01, 2026 00:01 | active |
Robot movement planning for obstacle avoidance using reinforcement learning | Scientific Reports
Description: In modern industrial and laboratory environments, robotic arms often operate in complex, cluttered spaces. Ensuring reliable obstacle avoidance and efficient motion planning is therefore essential for safe performance. Motivated by the shortcomings of traditional path planning methods and the growing demand for intelligent automation, we propose a novel reinforcement learning framework that combines a modified artificial potential field (APF) method with the Deep Deterministic Policy Gradient algorithm. Our model is formulated in a continuous environment, which more accurately reflects real-world conditions compared to discrete models. This approach directly addresses the common local optimum issues of conventional APF, enabling the robot arm to navigate complex three-dimensional spaces, optimize its end-effector trajectory, and ensure full-body collision avoidance. Our main contributions include the integration of reinforcement learning factors into the APF framework and the design of a tailored reward mechanism with a compensation term to correct for suboptimal motion directions. This design not only mitigates the inherent limitations of APF in environments with closely spaced obstacles, but also improves performance in both simple and complex scenarios. Extensive experiments show that our method achieves safe and efficient obstacle avoidance with fewer steps and lower energy consumption compared to baseline models, including a TD3-based variant. These results clearly demonstrate the significant potential of our approach to advance robot motion planning in practical applications.
Content:
Scientific Reports 15, Article 32506 (2025).

Multidegree-of-freedom robotic arms are an integral part of today's industrial automation, manufacturing, and service applications. As these systems are deployed in increasingly complex and dynamic environments, achieving robust motion planning and reliable obstacle avoidance has become a critical challenge. In many practical scenarios, the robot must navigate through cluttered spaces and avoid collisions while optimizing its trajectory for efficiency and safety. Efficient motion planning is essential for safely performing tasks in environments that are not only cluttered but also subject to dynamic changes. Conventional planning methods often struggle to adapt to unpredictable obstacles or high-dimensional configuration spaces.

Traditional methods can be broadly categorized into searching-based, bionic and evolutionary, sampling-based, and gradient-based techniques. Searching-based algorithms, such as A* [1], D* [2], Dijkstra [3], and simulated annealing [4], are efficient in static, small-scale environments. However, they become impractical in large-scale, dynamic scenarios. Bionic methods like Genetic Algorithms (GA) [5], Particle Swarm Optimization (PSO) [6], Ant Colony Optimization (ACO) [7], and fuzzy logic [8] can handle more complex environments but often suffer from slow convergence and local optima.
Sampling-based approaches, such as Rapidly-Exploring Random Trees (RRT) [9] and Probabilistic Roadmaps (PRM), can navigate intricate spaces but at the cost of computational complexity. Gradient-based methods, especially the artificial potential field (APF) method [10], offer intuitive and computationally efficient solutions by modeling the workspace as a potential field. APF methods attract the end-effector toward the goal and repel it from obstacles, yet they are inherently vulnerable to local minima, where the robot becomes trapped because the attractive and repulsive forces balance out [11,12].

To address these limitations, hybrid strategies have been proposed, including RRT with APF [9], ACO with APF [13], and the Dynamic Window Approach (DWA) with GA [14]. Researchers have also enhanced APF directly by modifying repulsive field functions or introducing auxiliary constructs. Cao [15] proposed a velocity potential field (VPF) using robot velocity instead of distance, Flacco et al. [16] leveraged repulsive vectors for collision avoidance, and Zhang et al. [17] adjusted repulsive force functions to mitigate local minima.

In parallel, reinforcement learning (RL) methods, particularly Deep Deterministic Policy Gradient (DDPG), have emerged as robust tools for continuous control problems [18]. Early RL algorithms, such as Deep Q-Networks (DQNs) [19], DDPG [18], and SARSA [20], enabled agents to learn optimal policies by interacting with their environment. Recent advancements have further expanded these techniques. Chen [21] utilized DDPG, TD3, and SAC effectively in robotic tasks, while Imtiaz [22] demonstrated the versatility of PPO in fruit-picking applications. Improvements such as the multi-critic mechanism by Sze [23] and refined reward functions by Li et al. [24] have enhanced policy evaluation and APF optimization, respectively. Hybrid approaches combining RL with traditional methods, like RRT with TD3 [25], DWA with Q-learning [26], and PRM with RL [27], further illustrate the complementary potential of these strategies.

Despite these advancements, critical challenges persist, notably the local minima issue in APF methods and the sensitivity of RL algorithms to reward function design. The APF method's vulnerability to local minima arises when nearby obstacles counterbalance the attractive goal force, trapping the robot prematurely. Conversely, DDPG and similar RL methods can learn complex continuous action policies but often struggle with limited or misleading rewards in complex environments.

Our proposed approach integrates a modified APF mechanism within the DDPG framework, combining APF's structural guidance to alleviate local minima with DDPG's adaptive learning capabilities. Specifically, we enhance the reward signal with APF-derived attractive and repulsive components. Our state space includes both joint angles and end-effector positions, and actions comprise fine-grained joint rotations. This hybrid strategy enables efficient obstacle avoidance and optimized motion planning in a fully continuous state and action space, in contrast to the discrete formulations used in some prior work. While some prior approaches combine APF with RL methods such as Q-learning or PPO [28,29], they typically rely on discretized state spaces, task-specific constraints, or lack the fine-grained control needed for articulated robotic arms. Others focus primarily on mobile robots, where the configuration space is of lower dimensionality and local minima are easier to mitigate.
In contrast, our approach is explicitly designed for continuous 6-DOF robotic arms operating in 3D space, where both configuration complexity and the risk of local minima are substantially higher. Moreover, our APF-informed reward function is designed to remain informative throughout training, facilitating faster convergence and more stable behavior across environments of varying complexity.

The contributions of our work are threefold. First, we present a novel APF-DDPG hybrid framework that combines the advantages of traditional and learning-based methods in the continuous setting. Second, we propose an improved reward mechanism explicitly designed to mitigate local minima. Finally, we validate our framework through extensive experiments across diverse environments, demonstrating significant improvements in convergence, solution quality, and collision avoidance compared to baseline models. Our findings provide robust solutions for real-world robotic arm motion planning.

In this work, we employ a standard 6-degree-of-freedom industrial manipulator [30] as our experimental platform. The kinematic model of the robot is derived using the standard Denavit-Hartenberg (DH) convention. According to this convention, each joint is assigned a coordinate frame, enabling the definition of spatial transformations between consecutive joints. Using these DH parameters, we calculate the forward kinematics to determine the robot configuration. Each transformation matrix \({}^{i-1}\mathbf{T}_{i}\) from joint \(i-1\) to joint \(i\) is defined as

$$ {}^{i-1}\mathbf{T}_{i} = \begin{pmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & d_i \\ 0 & 0 & 0 & 1 \end{pmatrix}. $$

The complete forward kinematic transformation from the base frame to the end-effector frame is obtained by multiplying these matrices sequentially:

$$ {}^{0}\mathbf{T}_{6} = {}^{0}\mathbf{T}_{1}\,{}^{1}\mathbf{T}_{2}\,{}^{2}\mathbf{T}_{3}\,{}^{3}\mathbf{T}_{4}\,{}^{4}\mathbf{T}_{5}\,{}^{5}\mathbf{T}_{6}. $$

From \({}^{0}\mathbf{T}_{6}\), the end-effector position \((x, y, z)\) is extracted and used for collision detection and distance computations in the obstacle avoidance algorithm. By incorporating forward kinematics directly into our methodology, we simplify computational requirements while ensuring accurate spatial data for our reinforcement learning framework.

In this work, we propose an obstacle avoidance approach that integrates APF with DDPG reinforcement learning [31]. This integration addresses critical limitations of each method individually. APF methods often encounter local minima, particularly when attractive and repulsive forces oppose each other [32]. Pure DDPG methods, in contrast, typically require extensive exploration and lack direct spatial information, potentially slowing convergence and complicating navigation. The combined APF-DDPG framework thus leverages the structured spatial information of APF and the robust exploratory learning ability of DDPG, facilitating efficient and reliable obstacle avoidance in continuous state-action scenarios.

We define the obstacle avoidance problem as a Markov Decision Process (MDP), described by a state space \(\mathscr{S}\), an action space \(\mathscr{A}\), and a reward function \(R(s, a)\). The continuous state vector \(s \in \mathbb{R}^{9}\) comprises the robot's six joint angles \((\theta_1, \dots, \theta_6)\) and the end-effector's Cartesian coordinates \((x, y, z)\) computed from forward kinematics. The action vector \(a \in \mathbb{R}^{6}\) specifies incremental joint angle adjustments within the interval \([-0.5^\circ, 0.5^\circ]\) per timestep, allowing smooth and controlled movements. The APF-DDPG framework consists of actor and critic neural networks, their respective target networks, and an experience replay buffer for stable training, as illustrated in Fig. 1.
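As a concrete illustration of the forward-kinematics step described above, here is a minimal numpy sketch of a standard DH chain. The DH parameter values are placeholders (the paper's actual table is not reproduced here), so treat this as a sketch rather than the authors' implementation.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg transform from frame i-1 to frame i."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_params):
    """Chain the per-joint transforms; return the end-effector (x, y, z)."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return T[:3, 3]

# Placeholder DH table (d, a, alpha) for a 6-DOF arm -- illustrative values only.
DH_PARAMS = [(0.3, 0.0, np.pi / 2), (0.0, 0.7, 0.0), (0.0, 0.1, np.pi / 2),
             (0.6, 0.0, -np.pi / 2), (0.0, 0.0, np.pi / 2), (0.1, 0.0, 0.0)]
ee_xyz = forward_kinematics(np.zeros(6), DH_PARAMS)
```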
At each timestep, the actor network outputs continuous incremental actions given the current state, which are then executed by the robot. The critic network evaluates state-action pairs by estimating the expected cumulative rewards (Q-values).

[Figure 1: Integrated APF-DDPG framework for continuous obstacle avoidance. The main networks (actor and critic) use spatial information from APF directions to guide policy updates; target networks stabilize training; experiences \((s, a, r, s')\) are sampled from the replay buffer to decorrelate training updates.]

The networks are trained using mini-batches sampled from the replay buffer, where each stored experience tuple \((s, a, r, s')\) captures an interaction with the environment. To ensure stable training and smooth convergence, we follow [32] and employ soft target network updates, defined by

$$ \theta' \leftarrow \tau\,\theta + (1 - \tau)\,\theta', $$

where \(\theta'\) are the target network parameters, \(\theta\) the main network parameters, and \(\tau\) the smoothing coefficient.

We integrate APF into our DDPG framework to explicitly inform the actor network of desirable spatial directions. To do this, we first recall the standard artificial potential field definition [10]:

$$ U_{\mathrm{att}}(q) = \tfrac{1}{2}\,k_{\mathrm{att}}\,\lVert q - q_{\mathrm{goal}}\rVert^{2}, \qquad U_{\mathrm{rep}}(q) = \begin{cases} \tfrac{1}{2}\,k_{\mathrm{rep}}\left(\dfrac{1}{\rho(q)} - \dfrac{1}{\rho_0}\right)^{2}, & \rho(q) \le \rho_0, \\ 0, & \rho(q) > \rho_0, \end{cases} $$

where \(q\) is the end-effector position, \(q_{\mathrm{goal}}\) the goal position, \(\rho(q)\) the distance to the nearest obstacle, and \(\rho_0\) its influence radius. Forces arise as the negative gradients of these potentials:

$$ F_a = -\nabla U_{\mathrm{att}}(q), \qquad F_r = -\nabla U_{\mathrm{rep}}(q). $$

The net direction of the APF is then \(F_{\mathrm{net}} = F_a + F_r\). However, when \(F_a\) and \(F_r\) directly oppose each other, the robot can become trapped in a local minimum (\(\lVert F_{\mathrm{net}}\rVert \approx 0\)). To systematically escape such traps, drawing inspiration from the classical BUG principle [33], we add a small orthogonal perturbation. Specifically, we define

$$ F_\perp = \varepsilon\,\frac{F_a \times F_r}{\lVert F_a \times F_r \rVert}, \qquad F_{\mathrm{adjusted}} = F_{\mathrm{net}} + F_\perp. $$

Because \(F_\perp\) is normal to the plane spanned by \(F_a\) and \(F_r\), this small perturbation guarantees movement off any planar saddle without significantly diverting the primary APF direction. The adjusted force \(F_{\mathrm{adjusted}}\) is then provided to the actor network at each time step, biasing policy updates toward collision-free, goal-directed motion. The compensation mechanism is schematically illustrated in Fig. 2.

[Figure 2: Illustration of perpendicular compensation of the APF direction. When the repulsive and attractive forces nearly oppose each other, a perpendicular correction is introduced, steering the robot away from local minima.]

The reward function is carefully designed to balance obstacle avoidance and goal-oriented behavior by combining two components: proximity to the goal and directional alignment with the APF guidance. Formally, we write

$$ R = -\lambda\,\mathrm{distanceGE} + \mu\,(\cos\sigma - 1), $$

where \(\mathrm{distanceGE}\) is the Euclidean distance between the end-effector and the goal in Cartesian space, explicitly encouraging movements toward the goal. The term \(\cos(\sigma) \in [-1, 1]\) measures the alignment between the executed displacement vector \(\overrightarrow{\mathrm{EE}}\) and the APF-suggested vector \(\overrightarrow{\mathrm{APF}}\), computed as

$$ \cos\sigma = \frac{\overrightarrow{\mathrm{EE}} \cdot \overrightarrow{\mathrm{APF}}}{\lVert \overrightarrow{\mathrm{EE}} \rVert\,\lVert \overrightarrow{\mathrm{APF}} \rVert}. $$

We choose \(\mu > 1\) so that \(\mu(\cos\sigma - 1) \le 0\) with equality only when \(\cos\sigma = 1\), ensuring every deviation from perfect alignment incurs a negative reward and thereby shaping the agent to follow efficient, APF-consistent motions.
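To make the pieces above concrete, here is a minimal numpy sketch of the APF forces, the perpendicular compensation, the reward, and the soft target update, written against the reconstructed equations. The gain values, the single-obstacle simplification, and the use of one \(\lambda\) (the collision-dependent \(\lambda_c/\lambda_{nc}\) split appears in the next paragraph) are assumptions for brevity, not the authors' code.

```python
import numpy as np

def apf_forces(q, q_goal, q_obs, k_att=1.0, k_rep=1.0, rho0=1.0):
    """Attractive/repulsive forces as negative gradients of the standard potentials."""
    f_att = -k_att * (q - q_goal)              # pulls toward the goal
    diff = q - q_obs
    rho = np.linalg.norm(diff)                 # distance to the (single) obstacle
    if rho <= rho0:                            # obstacle inside influence radius
        f_rep = k_rep * (1.0 / rho - 1.0 / rho0) * diff / rho**3
    else:
        f_rep = np.zeros(3)
    return f_att, f_rep

def adjusted_direction(f_att, f_rep, eps=0.1):
    """Add the small perpendicular term when the forces nearly cancel."""
    f_net = f_att + f_rep
    cross = np.cross(f_att, f_rep)             # normal to the plane of both forces
    n = np.linalg.norm(cross)
    if np.linalg.norm(f_net) < 1e-6 and n > 1e-9:
        f_net = f_net + eps * cross / n        # escape the planar saddle
    return f_net

def reward(ee_step, apf_dir, dist_to_goal, lam=1.0, mu=2.0):
    """R = -lambda * distanceGE + mu * (cos(sigma) - 1); lambda would be
    collision-dependent (lambda_c vs lambda_nc) in the full formulation."""
    cos_sigma = ee_step @ apf_dir / (
        np.linalg.norm(ee_step) * np.linalg.norm(apf_dir) + 1e-9)
    return -lam * dist_to_goal + mu * (cos_sigma - 1.0)

def soft_update(target_params, main_params, tau=0.005):
    """theta' <- tau * theta + (1 - tau) * theta' for each parameter array."""
    return [tau * m + (1.0 - tau) * t for t, m in zip(target_params, main_params)]
```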
To incorporate safety constraints, we distinguish collision scenarios by using separate distance-weighting parameters, defining

$$ \lambda = \begin{cases} \lambda_{c}, & \text{if the arm is in collision}, \\ \lambda_{nc}, & \text{otherwise}, \end{cases} $$

where \(\lambda_{c} > \lambda_{nc}\) is chosen based on the ratio between the workspace size and the robot arm length, so that the distance penalty scales appropriately with the environment's spatial dimensions relative to the manipulator. We allow collisions during training rather than terminating episodes, to promote extensive exploration of the state-action space and to enable the agent to learn both avoidance and recovery strategies.

Unlike pure APF methods that stall in local minima, our formulation maintains a negative distance term \(R_{\mathrm{dist}} = -\lambda\,\mathrm{distanceGE}\) until the goal is reached. We apply a per-step penalty to prevent hesitation or oscillation within shallow basins. We allow non-terminating collisions with a larger penalty \(\lambda_{c}\), encouraging risky detours and subsequent recovery to uncover escape pathways. We use the alignment term \(R_{\mathrm{align}} = \mu(\cos\sigma - 1) \le 0\) to penalize moves that conflict with the APF guidance. As a result, the agent converges on efficient, collision-free trajectories.

Due to the absence of specialized datasets for robotic arm path planning and obstacle avoidance, we developed a series of custom environments to simulate diverse real-world scenarios. Each obstacle within these environments is assigned a region characterized by a specific repulsion scaling factor, resulting in localized repulsive fields that directly affect the agent's trajectory planning. All environments share a common workspace of \(10 \times 10 \times 10\) units. Spherical obstacles are placed strategically and vary in radius from 0.2 to 0.5 units, while the robotic arm itself, when fully extended, approximates a cylinder of length 10 units with a radius of 0.1 units.

We designed two distinct categories of task environments to rigorously evaluate the proposed approach:

- Simple task environments: fewer than ten spherical obstacles strategically arranged to create deliberate local-optima traps, as depicted in Fig. 3a. These scenarios provide baseline conditions for assessing fundamental obstacle avoidance capabilities.
- Complex task environments: a higher number and greater strategic complexity of obstacles, significantly increasing the likelihood of collisions and entrapment in local optima. Obstacles are placed near the initial position to hinder early movements and along intended trajectories to raise the difficulty of the planning problem. Figure 3b illustrates typical complex environments.

[Figure 3: Left: a simple task environment containing fewer than ten spherical obstacles positioned to create local-optima traps. Right: a complex task environment, illustrating increased obstacle density and strategic placement. Images made with Unity 2022.3.18f1, https://unity.com/de.]

An additional challenging scenario features cuboid wall obstacles (0.5 units thick, 5 units wide, and 10 units high), as shown in Fig. 4. The initial placement (Fig. 4a), along with the front (Fig. 4b) and side (Fig. 4c) views of the final state, show how the wall's dimensions substantially restrict the agent's movement, covering approximately 30-40% of the workspace width and 50% of its height, providing a strict test of obstacle avoidance capabilities.
The experimental setup comprises a total of 30 simple and 30 complex environments. Additionally, multi-goal trajectory planning tasks were designed, where the final state of one iteration serves as the initial state of the next. Task completion is achieved when the end-effector reaches the goal within a 0.1-unit tolerance, balancing computational efficiency and practical accuracy. For intuitive visualization and precise control of the robotic agent, Unity, a real-time 3D development platform, is employed. Control scripts are implemented in C# to interact directly with the Unity simulation environment. Experiments were conducted on a computing system with an 11th Gen Intel(R) Core(TM) i5-11320H processor and an NVIDIA GeForce MX450 GPU. The reinforcement learning framework was implemented using PyTorch 2.0.1. Detailed hardware and software configurations are summarized in Table 1.

The choice of baseline methods was motivated by their contrasting characteristics, strengths, and limitations relevant to robotic arm path planning. To fully evaluate the effectiveness of the proposed DDPG-APF approach, we compare with the following baselines.

A* Search (Non-RL). A classical best-first search algorithm operating in a discretized joint-action space (rotations of −1, 0, +1 per joint). A* is run without a step limit, relying on an admissible and consistent heuristic to guarantee optimality in discrete settings [34]. This method serves as a non-learning baseline, highlighting the challenges of discretization and potential local-optima loops in our custom environments.

Pure-DDPG. A standard Deep Deterministic Policy Gradient model without Artificial Potential Field guidance, using only collision penalties and distance-to-goal in its reward function. This baseline isolates the benefit of APF integration and demonstrates the intrinsic performance limitations and convergence challenges faced by reinforcement learning without heuristic guidance.

TD3-APF. Twin Delayed DDPG with the same APF-based reward as our DDPG-APF agent. TD3 employs dual critics and delayed actor updates to address DDPG's overestimation bias [35], enabling a direct comparison under identical reward and network settings to assess whether these algorithmic improvements provide tangible benefits in complex, continuous task spaces.

All reinforcement learning methods share identical network architectures and training regimens, ensuring a fair evaluation of feasibility, optimality, computational efficiency, and convergence behavior. Together, these baselines provide a comprehensive performance context, ranging from classical planning paradigms to modern deep reinforcement learning techniques.
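To illustrate the discretization the A* baseline operates on, here is a minimal sketch of best-first search over quantized joint configurations; the six-joint action vector, unit step cost, goal tolerance, and max-gap heuristic are assumptions for illustration, not the paper's exact settings.

```python
import heapq
import itertools

# Discrete per-joint rotations as described for the A* baseline; applying
# them to a six-joint arm (e.g., a UR5) is an assumption.
ACTIONS = list(itertools.product((-1, 0, 1), repeat=6))  # 3^6 = 729 moves

def a_star(start, goal, collides, heuristic, tol=2):
    """Best-first search over quantized joint configurations (a sketch).

    start, goal : integer joint vectors.
    collides    : callable flagging configurations in contact with obstacles.
    heuristic   : admissible estimate of remaining cost to the goal.
    """
    start, goal = tuple(start), tuple(goal)
    open_set = [(heuristic(start, goal), 0, start)]
    g_cost = {start: 0}
    while open_set:
        f, g, node = heapq.heappop(open_set)
        if max(abs(a - b) for a, b in zip(node, goal)) <= tol:
            return g  # cost of a feasible discrete path
        for move in ACTIONS:
            nxt = tuple(a + d for a, d in zip(node, move))
            if collides(nxt):
                continue
            ng = g + 1
            if ng < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = ng
                heapq.heappush(open_set, (ng + heuristic(nxt, goal), ng, nxt))
    return None  # no feasible path found
```

With unit step costs, `heuristic = lambda a, b: max(abs(x - y) for x, y in zip(a, b))` is admissible and consistent, since each step reduces the largest joint gap by at most one; this matches the optimality condition the baseline description relies on, and also makes the exponential growth of the discrete action set (3 per joint) plain.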
We evaluated the performance of four distinct algorithms: DDPG-APF, Pure-DDPG, TD3-APF, and the classical A* search algorithm. This evaluation was conducted across tasks of varying complexity, focusing specifically on path optimality, solution feasibility, computational efficiency, and learning dynamics. All RL algorithms were trained under identical hyperparameters, detailed in Table 2. We first analyzed training dynamics and convergence characteristics. The progression of best solutions, quantified by the number of steps required to reach the goal, revealed clear distinctions among the evaluated methods.

At the start of training, all algorithms maintained a step count at the maximum threshold of 200 steps, indicating the absence of early feasible solutions. Upon discovering feasible solutions, the DDPG-APF algorithm exhibited notably faster convergence, consistently achieving shorter paths at an earlier stage, as demonstrated in Fig. 5.

Figure 5: Feasible solution evaluation illustrating episodes required for finding the first feasible path. DDPG-APF identifies feasible paths sooner and more consistently than other methods, reflecting superior stability and efficiency.

While the episode count to first feasible solution quantifies when each agent begins to find successful paths, analyzing the evolution of the average reward provides deeper insight into how effectively each policy learns over time. Specifically, tracking rewards captures both the stability of learning dynamics and the efficiency with which algorithms balance exploration, collision avoidance, and path-length minimization (see Fig. 6). Figure 6a illustrates the overall average reward curves across all task environments. Here, DDPG-APF consistently achieves a higher average reward than Pure-DDPG and TD3-APF, signaling more efficient policy improvements and a superior exploration-exploitation trade-off. In contrast, Pure-DDPG demonstrates a smoother but slower progression, reflecting a stable yet less efficient learning pattern, whereas TD3-APF displays noticeable fluctuations, likely resulting from the increased complexity of its dual-critic network structure.

Figure 6: Comparison of the presented reward functions: (a) overall model performance across all tasks; (b) sensitivity to reward-function design.

We further assessed the sensitivity of the RL algorithms to variations in reward-function design. Two distinct reward functions were tested: the original APF method without modifications, and our proposed compensated APF reward function. Reward curves averaged across all tasks, displayed in Fig. 6b, demonstrate that DDPG-APF maintained stable performance regardless of reward-function variations. Conversely, TD3-APF exhibited greater sensitivity, likely due to the additional complexity of its dual-network architecture aimed at mitigating value-function overestimation.

In comparing RL methods to classical non-learning approaches, we evaluated the A* search algorithm, which operates using discrete action spaces and heuristic search. A comprehensive summary of key performance metrics across all 60 scenarios is provided in Table 3. Despite having unlimited search steps per episode, A* exhibited significantly lower success rates than the RL-based methods. Specifically, DDPG-APF demonstrated superior performance in average steps, success rate, and episodes required to achieve first success. Moreover, a dispersion analysis shows that DDPG-APF's variability in average steps is roughly half that of Pure-DDPG and one-third that of TD3-APF, and its variability in episodes to first success is about 45% lower than Pure-DDPG's and over 65% lower than TD3-APF's, clear evidence of its consistency across diverse scenarios. The results highlight the advantages of the continuous action adjustments offered by DDPG-APF, particularly in scenarios sensitive to local optima. Further evaluation against other RL metrics, such as time per episode and stability of solution improvement, reinforces the robustness of the DDPG-APF method.
DDPG-APF consistently outperformed Pure-DDPG and TD3-APF across all metrics, particularly in computational efficiency (Time/Episode), indicating faster convergence and more effective exploration strategies. Notably, DDPG-APF reduces time-per-episode variability by approximately 25% relative to TD3-APF, underscoring its reliable runtime behavior. TD3-APF, while theoretically robust, showed notably higher variability and required more episodes to achieve first success, highlighting challenges associated with its dual-critic architecture.

Computational efficiency was further analyzed by measuring average runtime per episode across tasks of different complexity levels. Table 4 summarizes these results, showing that DDPG-APF consistently offers computational advantages over TD3-APF and Pure-DDPG. These runtime trends align well with our task-complexity classification: average episode runtime increases steadily from simple tasks, through complex tasks without walls, to complex tasks with walls. The additional runtime in scenarios containing walls reflects the greater number of collision checks and potential-field updates required to navigate planar obstacles. This monotonic increase in computational load confirms that our categorization of simple and hard tasks is reasonable, and that wall environments rightly belong in the hard category.

To identify the precise contributors to these runtimes, we divide each episode into agent runtime and environment-preparation runtime. Agent runtime comprises action selection at every step and network updating every fifty steps, and is dominated by reinforcement learning computations. Environment preparation updates the environment state around the agent and is driven primarily by artificial potential field calculations. For the DDPG-APF agent, action selection requires around 0.5 ms, network updating approximately 240 ms, and environment preparation about 2 ms; amortized over the fifty-step update interval, this corresponds to roughly 0.5 + 240/50 + 2 ≈ 7.3 ms per step. For the TD3-APF agent, action selection requires around 0.7 ms, network updating approximately 280 ms, and environment preparation about 2 ms (roughly 8.3 ms per step amortized). Across all analyzed metrics, DDPG-APF consistently outperforms both RL baselines and the classical A* method, validating the benefit of APF-guided reward shaping in continuous control settings.

The APF-DDPG framework illustrates how combining structural guidance with adaptive learning can yield both stability and efficiency in robotic motion planning. Our experiments highlight clear benefits over classical approaches and pure reinforcement learning, but they also expose new compromises that arise from this hybrid design. In the following, we reflect on these strengths and weaknesses, emphasizing the trade-offs between guidance and exploration, efficiency and flexibility, as well as simulation and real-world applicability.

Using a continuous state-action formulation provides clear benefits over discretized or inverse kinematics-based methods. Continuous incremental actions allowed the agent to adapt flexibly to dynamic obstacle configurations, enabling responsive trajectory adjustments. At the same time, the use of forward kinematics reduced computational complexity by avoiding costly inverse kinematics calculations, resulting in shorter episode runtimes compared to baseline methods. A drawback is that this formulation inherently enlarges the search domain, requiring careful reward shaping and parameter tuning to maintain stability; without strong guidance, training in such a continuous space may become less robust.
The framework demonstrated improved learning efficiency, with faster convergence and more stable reward trajectories. The dense reward function provided continuous and informative feedback, facilitating effective optimization and reducing the number of collision trials. On the other hand, this reliance on a dense reward structure represents a limitation: if such shaping is not available, or must be inferred from noisy real-world signals, efficiency and stability may degrade substantially.

Integrating a modified APF within the DDPG framework shaped the reward structure into a smoother potential field that continuously guided the agent toward the goal while repelling it from obstacles. This reduced fluctuations, stabilized learning, and biased exploration toward safer paths, improving sample efficiency and runtime compared to Pure-DDPG. At the same time, this integration introduced important trade-offs. The strong directional bias of APF may suppress the discovery of unconventional but potentially more efficient trajectories, and the perpendicular compensation term, while effective in our experiments, could in principle induce oscillatory motion in narrow passages. Moreover, the continuous computation of potential-field forces and collision checks adds a runtime overhead of approximately 2 ms per step. Our experiments indicate that this cost is outweighed by fewer detours, reduced collisions, and faster convergence, yet its impact under noisy sensing or dynamic obstacles remains an open question. Together, these findings highlight both the stabilizing benefits and the potential risks of APF guidance.

Our experiments revealed two concrete shortcomings of the current implementation. First, the framework lacks adaptability in adjusting step limits to different task complexities, which led to degraded performance in harder scenarios. Second, the method struggled with smooth transitions in multi-goal trajectory planning, limiting its effectiveness in sequential manipulation tasks. These weaknesses point to the need for adaptive step management and hierarchical or goal-decomposition strategies in future work.

The dataset and evaluation design impose further limitations. All environments were manually created to challenge the robot arm, which restricts generalizability and prevents a definitive claim that APF-DDPG consistently outperforms other agents. The absence of standardized benchmarks for manipulator path planning further complicates comparisons across studies. Moreover, although oscillatory behavior was not observed across our (fewer than 100) task instances, it cannot be excluded on larger or more uniform benchmarks. Parameter tuning across algorithms is another source of uncertainty: TD3, for example, did not achieve the expected improvements over DDPG, possibly because its dual-critic structure was less advantageous in our task setup. These findings suggest that notions such as overestimation in reinforcement learning should be reconsidered in context, as optimistic Q-value estimates in DDPG may sometimes aid exploration rather than hinder it.

The APF-DDPG framework is particularly suited to industrial scenarios where manipulators must operate safely in highly cluttered but largely static workcells. Examples include dynamic pick-and-place in warehouses or laboratory automation tasks, where dense static obstacles constrain free exploration and safety demands are high.
In such settings, the spatial guidance of APF can provide reliable safety margins, while reinforcement learning enables adaptability to variations in object placement. In contrast, long-range navigation or human-robot collaboration with highly dynamic obstacles may be less suited, since the strong APF bias could hinder the discovery of unconventional solutions. Short- to mid-range manipulation in dense, safety-critical environments therefore appears to be the most promising niche for our approach.

Finally, the transferability of our findings from simulation to real-world deployment remains a challenge. The modeling assumptions made in our setup are fragile under realistic conditions. For instance, the UR5 model neglects actuator dynamics, backlash, and latency, which influence fine-grained incremental control. Likewise, APF forces were computed under idealized, noise-free conditions, whereas real-world sensing inevitably introduces uncertainty in obstacle localization. The obstacles themselves were represented by simple geometric primitives, in contrast to the irregular and textured shapes encountered in practice. These simplifications suggest that APF-DDPG may be particularly sensitive to sensor noise and geometric mismatches. Future work should therefore investigate robustness under imperfect perception and more realistic robot models to narrow the sim-to-real gap.

In this paper, we presented a reinforcement learning framework integrating a modified APF method with DDPG for robotic arm motion planning in continuous 3D environments. Our approach addresses the local optimum issues inherent in traditional APF methods through a tailored reward mechanism. Experimental results demonstrated that our method can find feasible solutions in fewer episodes and typically achieves more optimal paths than baseline models. Although APF introduces extra computation per step, in practice this overhead is outweighed by more efficient trajectories, leading to faster overall execution. The findings suggest that APF-based guidance not only accelerates convergence but also improves computational efficiency, making it suitable for complex obstacle avoidance tasks. However, the method's current limitations, particularly the inability to automatically adjust step limits for varying complexities and difficulties in smoothly handling multi-goal trajectory planning, highlight important areas for further improvement. Future research should explore adaptive parameter tuning methods to enhance performance consistency across different task complexities. Additionally, developing more efficient algorithms for trajectory planning involving multiple sequential goals, and extending the framework to support multi-agent cooperation and dynamic obstacle avoidance, are promising directions. Ultimately, addressing these challenges will significantly narrow the gap between theoretical advances and practical robotic applications.

The data and code that support the findings of this study are available at the following GitHub repository: https://github.com/JunyanPeng330/RobotPlanning. For further information or access to additional data, please contact the corresponding author.

References
1. Fu, B. et al. An improved A* algorithm for the industrial robot path planning with high success rate and short length. Robot. Auton. Syst. 106, 26–37. https://doi.org/10.1016/j.robot.2018.04.007 (2018).
2. Chen, G. et al. Path planning method with obstacle avoidance for manipulators in dynamic environment. Int. J. Adv. Robot. Syst. 15(6), 1729881418820223 (2018).
3. Fusic, S. J., Ramkumar, P. & Hariharan, K. Path planning of robot using modified Dijkstra algorithm. In National Power Engineering Conference (NPEC), 1–5 (IEEE, 2018).
4. Miao, H. & Tian, Y.-C. Dynamic robot path planning using an enhanced simulated annealing approach. Appl. Math. Comput. 222, 420–437 (2013).
5. Xiuli, Y., Dong, M. & Yin, W. Time-optimal trajectory planning of manipulator with simultaneously searching the optimal path. Comput. Commun. 181, 446–453 (2022).
6. Zhu, A. et al. Trajectory planning of rotor welding manipulator based on an improved particle swarm optimization algorithm. Adv. Comput. Signals Syst. 8(6), 122–129 (2024).
7. Miao, C. et al. Path planning optimization of indoor mobile robot based on adaptive ant colony algorithm. Comput. Ind. Eng. 156, 107230 (2021).
8. Hentout, A., Maoudj, A. & Aouache, M. A review of the literature on fuzzy-logic approaches for collision-free path planning of manipulator robots. Artif. Intell. Rev. 56(4), 3369–3444 (2023).
9. Liu, W. et al. Path planning for mobile robots based on the improved DAPF-QRRT* strategy. Electronics 13(21), 4233 (2024).
10. Khatib, O. Real-time obstacle avoidance for manipulators and mobile robots. In Proceedings of the 1985 IEEE International Conference on Robotics and Automation, Vol. 2, 500–505 (1985). https://doi.org/10.1109/ROBOT.1985.
11. Chen, Z., Ma, L. & Shao, Z. Path planning for obstacle avoidance of manipulators based on improved artificial potential field. In 2019 Chinese Automation Congress (CAC), 2991–2996 (2019). https://doi.org/10.1109/CAC48633.2019.8996467.
12. Zhao, H. B. & Ren, Y. Improved robotic path planning based on artificial potential field method. Int. J. Adv. Robot. Syst. 37(02), 360–364 (2020).
13. Wang, H., Wang, S. & Yu, T. Path planning of inspection robot based on improved ant colony algorithm. Appl. Sci. 14(20), 9511 (2024).
14. Li, Y. et al. A robot path planning method based on improved genetic algorithm and improved dynamic window approach. Sustainability 15(5), 4656 (2023).
15. Gao, S., Xu, Q. & Cao, J. Computing streamfunction and velocity potential in a limited domain of arbitrary shape. Adv. Atmos. Sci. 28(6), 1433–1444 (2011).
16. Flacco, F. et al. A depth space approach to human–robot collision avoidance. In 2012 IEEE International Conference on Robotics and Automation, 338–345 (2012). https://doi.org/10.1109/ICRA.2012.6225245.
17. Zhang, Y., Gong, P. & Hu, W. Mobile robots path planning based on improved artificial potential field. In 2022 6th International Conference on Wireless Communications and Applications (ICWCAPP), 41–45 (IEEE, 2022).
18. Lillicrap, T. P. et al. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971 (2015).
19. Mnih, V. et al. Playing Atari with deep reinforcement learning. arXiv:1312.5602 (2013). https://api.semanticscholar.org/CorpusID:15238391.
20. Rummery, G. & Niranjan, M. On-line Q-learning using connectionist systems. Technical Report CUED/F-INFENG/TR 166 (1994).
21. Chen, Y. et al. Integrated intelligent control of redundant degrees-of-freedom manipulators via the fusion of deep reinforcement learning and forward kinematics models. Machines 12(10), 667 (2024).
22. Imtiaz, M. et al. Trajectory planning and control of serially linked robotic arm for fruit picking using reinforcement learning. In 2023 International Conference on IT and Industrial Technologies (ICIT), 1–6 (IEEE, 2023).
23. Sze, T. & Chhabra, R. Obstacle-free trajectory planning of an uncertain space manipulator: Learning from a fixed-based manipulator. In American Control Conference (ACC), 3618–3624 (IEEE, 2024).
24. Li, H., Gong, D. & Yu, J. An obstacles avoidance method for serial manipulator based on reinforcement learning and artificial potential field. Int. J. Intell. Robot. Appl. 5, 186–202 (2021).
25. Andarge, E. W., Ordys, A. & Abebe, Y. M. Mobile robot navigation system using reinforcement learning with path planning algorithm. Acta Phys. Pol. A 30, 452–452 (2024).
26. Chang, L. et al. Reinforcement based mobile robot path planning with improved dynamic window approach in unknown environment. Auton. Robots 45, 51–76 (2021).
27. Faust, A. et al. PRM-RL: Long-range robotic navigation tasks by combining reinforcement learning and sampling-based planning. In IEEE International Conference on Robotics and Automation (ICRA), 5113–5120 (IEEE, 2018).
28. Li, P. et al. UGV navigation in complex environment: An approach integrating security detection and obstacle avoidance control. IEEE Trans. Intell. Veh. https://doi.org/10.1109/TIV.2024.3492539 (2024).
29. Xu, X., Zhang, C. & Zhang, W. APF guided reinforcement learning for ship collision avoidance. In IET Conference Proceedings CP886, Vol. 2024(12), 823–828 (IET, 2024).
30. Parham, M. et al. Kinematic and dynamic modelling of UR5 manipulator. In IEEE International Conference on Systems, Man, and Cybernetics (SMC), 004229–004234 (IEEE, 2016).
31. Lillicrap, T. P. et al. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971 (2015).
32. Chen, L. et al. Deep reinforcement learning based trajectory planning under uncertain constraints. Front. Neurorobot. 16 (2022). https://doi.org/10.3389/fnbot.2022.883562.
33. Liu, K. A comprehensive review of bug algorithms in path planning. Appl. Comput. Eng. 33, 259–265 (2024).
34. Kusuma, M. et al. Humanoid robot path planning and rerouting using A-Star search algorithm. In 2019 IEEE International Conference on Signals and Systems (ICSigSys), 110–115 (IEEE, 2019).
35. Fujimoto, S., van Hoof, H. & Meger, D. Addressing function approximation error in actor-critic methods. In International Conference on Machine Learning, 1587–1596 (PMLR, 2018).

Open Access funding enabled and organized by Projekt DEAL. Linda-Sophie Schneider and Junyan Peng contributed equally to this work. Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, Germany: Linda-Sophie Schneider, Junyan Peng & Andreas Maier. All authors conceived the experiments; L.S. and J.P. conducted the experiments and analysed the results; J.P. designed the dataset. All authors reviewed the manuscript. Correspondence to Linda-Sophie Schneider. The authors declare no competing interests. Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Reprints and permissions

Schneider, L.-S., Peng, J. & Maier, A. Robot movement planning for obstacle avoidance using reinforcement learning. Sci Rep 15, 32506 (2025). https://doi.org/10.1038/s41598-025-17740-5 Received: 12 May 2025; Accepted: 26 August 2025; Published: 12 September 2025. Scientific Reports (Sci Rep) ISSN 2045-2322 (online) © 2026 Springer Nature Limited
Images (1):
|
|||||
| London's Neuracore raises €2.5 million to replace ‘Frankenstein’ robotics stacks … | https://www.eu-startups.com/2025/11/lon… | 1 | Jan 01, 2026 00:01 | active | |
London's Neuracore raises €2.5 million to replace ‘Frankenstein’ robotics stacks with unified robot-learning infrastructure | EU-StartupsDescription: Neuracore, a British robot learning platform to scale and deploy faster, has closed a €2.5 million ($3 million) pre-Seed funding round to fuel team Content:
Neuracore, a British robot learning platform to scale and deploy faster, has closed a €2.5 million ($3 million) pre-Seed funding round to fuel team expansion, accelerate product development, and support Neuracore’s growth. The round was led by Earlybird Venture Capital, with participation from Clem Delangue (Co-founder & CEO at Hugging Face) and advisors spanning academia, hardware, and AI. “After years in academic and industrial robotics, I saw every team – from research labs to warehouse automation startups – rebuilding the same infrastructure from scratch,” says Stephen James, founder and CEO. “Our mission is to eliminate that duplication and democratise access to high-performance robot learning tools. With this funding and our free academic programme, we’re enabling both researchers and companies to focus on advancing robotics itself, not on building the pipelines to support it.” Neuracore’s pre-Seed round sits within a steadily active 2025 European robotics and physical-AI landscape, where several startups have secured funding across adjacent segments. In Sweden, Stockholm-based Rerun raised €15.6 million to advance multimodal data infrastructure for robotics and autonomous systems. Switzerland saw two sizeable rounds: Zürich’s Flexion secured €43 million to develop reinforcement-learning “brains” for humanoid robots, while mimic closed €13.8 million to expand its dexterous robotic-hand technology. Germany’s Energy Robotics raised €11.5 million for autonomous inspection software for industrial sites, while Italy’s Adaptronics attracted €3.15 million to scale its adaptive gripper technology. Together, these 2025 rounds represent approximately €86 million in disclosed capital across companies building data infrastructure, robotic intelligence layers, and advanced manipulation hardware. Within this context, Neuracore’s focus on robot-learning infrastructure aligns with the increasing investor interest in foundational robotics software rather than solely application-layer systems. Although no UK-based robotics infrastructure company appears alongside these examples, the cluster of European funding rounds indicates a wider continental shift toward platforms that standardise data handling, training, and deployment for robotics teams. Laura Waldenstrom, Principal at Earlybird Venture Capital, adds: “The robotics industry is at an inflection point, moving from the ROS 1.0 era to a data-first paradigm powered by deep learning. Teams are still wasting months building and maintaining their own infrastructure instead of focusing on deployment. Neuracore provides what AWS did for web applications: a reliable, scalable platform that just works. We’re thrilled to support Stephen’s vision to become the infrastructure layer for the coming wave of intelligent robotics.” Founded in 2024 by Stephen James, Assistant Professor of Robot Learning at Imperial College London, Neuracore is building the core infrastructure powering the next generation of intelligent robots. Its platform enables robotics teams to go from data collection to deployed machine learning models in “days not months“, eliminating the bottlenecks that currently consume up to 80% of engineering time. Neuracore explains that their software stack replaces fragmented ‘Frankenstein’ robotics setups with a unified, cloud-based system that handles asynchronous data collection, visualisation, training, and deployment. 
By integrating the full robot learning pipeline into one platform, Neuracore helps teams focus on innovation rather than infrastructure. The platform is being used by over 50 organisations across commercial and academic robotics, including partnerships with hardware manufacturers. Alongside the funding, Neuracore is launching a free academic programme. This programme gives universities and research institutions worldwide unrestricted access to the full enterprise platform, or in other words, the same infrastructure used by its commercial customers. “Academic researchers are building the foundation for tomorrow’s robots,” James adds. “They shouldn’t waste months setting up data pipelines – they should be innovating. We want Neuracore to be the backbone that lets them do that.” As per data provided by the company – by 2030, the AI-powered robotics sector is projected to exceed €43 billion ($50 billion), driven by advances in deep learning and real-world robot deployment. Yet, Neuracore says most robotics teams remain limited by infrastructure constraints, often spending months integrating disparate systems for data logging, training, and deployment. Neuracore’s platform aims to solve this by providing purpose-built infrastructure optimised for robotics workflows – from asynchronous data streaming and ML-native storage to one-click deployment and continuous improvement loops. With its new academic programme, Neuracore aims to close the accessibility gap between research and industry. Universities and robotics labs will now have free, unlimited access to the full platform, enabling faster experimentation, collaboration, and reproducibility across institutions. EU-Startups.com is the leading online magazine about startups in Europe. Learn more about us and our advertising options. © Menlo Media S.L. - All rights reserved.
Images (1):
|
|||||
| Learning Geometric Reasoning for Robot Task and Motion Planning - … | https://theses.hal.science/tel-05330516… | 1 | Jan 01, 2026 00:01 | active | |
Learning Geometric Reasoning for Robot Task and Motion Planning - TEL - Thèses en ligneURL: https://theses.hal.science/tel-05330516v1 Description: This thesis introduces a learning-based framework to improve the efficiency and scalability of Task and Motion Planning (TAMP) for robotic manipulation tasks in cluttered and complex environments. Traditional TAMP methods face significant challenges due to the high computational cost of repeated geometric feasibility checks and the combinatorial explosion of the search space, especially in multi-robot settings. To address these issues, two deep learning-based models are proposed: AGFPNet, which predicts the feasibility of manipulation actions to reduce reliance on geometric planners, and GRN, which leverages graph neural networks and scene graphs to provide interpretable and generalizable feasibility predictions, including the causes of action failure. These models are further extended to support collaborative multi-robot scenarios and complex, mesh-shaped objects. Building on these models, the thesis proposes a multi-robot, feasibility-informed TAMP algorithm that integrates learned geometric reasoning into a search-based planner. This integration enables faster exploration of the search space by prioritizing promising actions and avoiding infeasible ones. Additionally, a novel informed backtracking mechanism uses predicted infeasibility causes to refine planning constraints and heuristics. Extensive experiments demonstrate that the proposed approach significantly accelerates planning, improves scalability, and maintains generalizability, even in highly cluttered or large-scale environments. Content:
This thesis introduces a learning-based framework to improve the efficiency and scalability of Task and Motion Planning (TAMP) for robotic manipulation tasks in cluttered and complex environments. Traditional TAMP methods face significant challenges due to the high computational cost of repeated geometric feasibility checks and the combinatorial explosion of the search space, especially in multi-robot settings. To address these issues, two deep learning-based models are proposed: AGFPNet, which predicts the feasibility of manipulation actions to reduce reliance on geometric planners, and GRN, which leverages graph neural networks and scene graphs to provide interpretable and generalizable feasibility predictions, including the causes of action failure. These models are further extended to support collaborative multi-robot scenarios and complex, mesh-shaped objects. Building on these models, the thesis proposes a multi-robot, feasibility-informed TAMP algorithm that integrates learned geometric reasoning into a search-based planner. This integration enables faster exploration of the search space by prioritizing promising actions and avoiding infeasible ones. Additionally, a novel informed backtracking mechanism uses predicted infeasibility causes to refine planning constraints and heuristics. Extensive experiments demonstrate that the proposed approach significantly accelerates planning, improves scalability, and maintains generalizability, even in highly cluttered or large-scale environments.
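As a rough, self-contained illustration of the GRN idea described in the abstract (scene graph in, feasibility and failure-cause predictions out), a toy PyTorch module might look like the following; the node features, message-passing scheme, and output heads are assumptions for illustration, not the thesis architecture.

```python
import torch
import torch.nn as nn

class FeasibilityGRN(nn.Module):
    """Toy scene-graph network predicting action feasibility (illustrative).

    Nodes represent objects/robots with geometric features; the adjacency
    matrix encodes spatial relations. Two heads mirror the interpretable
    outputs the thesis attributes to GRN: a feasibility probability and a
    logit vector over hypothetical failure causes.
    """
    def __init__(self, node_dim=16, hidden=64, num_causes=4, rounds=3):
        super().__init__()
        self.encode = nn.Linear(node_dim, hidden)
        self.message = nn.Linear(2 * hidden, hidden)
        self.rounds = rounds
        self.feasible_head = nn.Linear(hidden, 1)
        self.cause_head = nn.Linear(hidden, num_causes)

    def forward(self, x, adj):
        # x: (N, node_dim) node features; adj: (N, N) 0/1 adjacency matrix.
        h = torch.relu(self.encode(x))
        for _ in range(self.rounds):
            # Mean-aggregate neighbor states, then update each node.
            neigh = adj @ h / adj.sum(dim=1, keepdim=True).clamp(min=1)
            h = torch.relu(self.message(torch.cat([h, neigh], dim=-1)))
        g = h.mean(dim=0)  # graph-level readout
        return torch.sigmoid(self.feasible_head(g)), self.cause_head(g)
```

In a planner of the kind the thesis describes, such predictions would be used to rank candidate actions and prune likely-infeasible ones before invoking an expensive geometric motion planner.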
https://laas.hal.science/tel-05330516 Submitted on: Monday, 4 August 2025, 14:38. Last modified: Monday, 24 November 2025, 14:47.
Images (1):
|
|||||
| TUM - Human-in-the-Loop Robot Learning in Unstructured Environments | https://portal.mytum.de/schwarzesbrett/… | 1 | Jan 01, 2026 00:01 | active | |
TUM - Human-in-the-Loop Robot Learning in Unstructured EnvironmentsURL: https://portal.mytum.de/schwarzesbrett/hiwi_stellen/NewsArticle_20250831_215852 Description: Studierenden- und Mitarbeiterportal der Technische Universität München Content:
Images (1):
|
|||||
| Elon Musk says Optimus robot is now learning ways of … | https://www.indiatoday.in/technology/ne… | 1 | Jan 01, 2026 00:01 | active | |
Elon Musk says Optimus robot is now learning ways of humans and the world - India TodayDescription: A video shared by Tesla shows the Optimus robot learning new tasks by watching humans do them, just like we do when watching tutorial videos. Content:
Elon Musk has shared an exciting update about Tesla’s humanoid robot, Optimus. A video shared by Tesla shows the Optimus robot learning new tasks by watching humans do them, just like we do when watching tutorial videos. For those wondering why this matters, it’s a big change from the old way of programming robots. It brings Optimus closer to acting like a human trainee. By learning from everyday actions, the robot could become far more useful and learn new things much faster. Musk says Tesla wants Optimus to study real-world videos, like those on YouTube, and then use what it sees to perform similar tasks on its own. Engineering lead at Tesla, Milan Kovac, says this breakthrough has already helped the robot pick up jobs like vacuuming, sorting items, stirring food, or taking out the rubbish. This new learning method means Optimus no longer needs detailed programming for every single task. Instead, it can copy from human examples and be told what to do with simple voice or text commands. Musk believes this could give Optimus “task extensibility,” meaning it could learn almost anything very quickly if shown the right videos. So far, Tesla has shown Optimus doing tasks in short clips — from cleaning and putting things together to even dancing. However, earlier demos faced criticism, as some people believed many of these actions were actually being controlled by humans rather than done by the robot itself.

Despite the doubts, Musk believes this way of learning from videos is a key turning point. He has called Optimus “the biggest product of all time” and even thinks it could become more valuable than Tesla’s car business in the future. He expects the robot to be made on a large scale, with possible use in Tesla factories as soon as late 2025, or maybe earlier if progress continues quickly.

What could this mean going forward? Instead of coding every single move, developers might simply have Optimus watch a set of instructions — whether it’s a cooking video or a guide to folding clothes — and then simply copy it. In theory, the robot could learn just like a person: by watching, imitating, and improving.

Still, many experts point out that Tesla’s current robots are less skilled with fine hand movements and walking compared to leaders like Boston Dynamics. While the idea is promising, critics say there’s still a long way to go before we see a truly general-purpose robot that works well in homes or workplaces.

Published By: Aman Rashid. Published On: Aug 6, 2025.
Images (1):
|
|||||
| TurboPi: A RPi-Based ROS 2 Robot Kit for AI Vision … | https://www.hackster.io/HiwonderRobot/t… | 1 | Jan 01, 2026 00:01 | active | |
TurboPi: A RPi-Based ROS 2 Robot Kit for AI Vision Learning - Hackster.ioDescription: Skip the setup, start building. TurboPi is the ready-to-run Raspberry Pi 5 & ROS 2 robot kit that gets you straight to AI vision & voice pro. Find this and other hardware projects on Hackster.io. Content:
Skip the setup, start building. TurboPi is the ready-to-run Raspberry Pi 5 & ROS 2 robot kit that gets you straight to AI vision & voice pro. Getting started with robotics and AI can feel daunting. Between complex setup, fragmented software, and the fear of hardware becoming a money pit, many beginners hit a wall before they even write their first line of code. The TurboPi kit is designed to break down these barriers, offering a pre-configured, all-in-one platform that gets you from unboxing to a running AI robot in minutes. Built around the Raspberry Pi 5 and the ROS 2 (Robot Operating System 2) framework, it provides a solid foundation for learning modern robotics development. Here’s how it structures the learning journey.

The biggest hurdle is often the initial setup. TurboPi tackles this by providing a complete, pre-configured system image. All necessary drivers, dependencies, and demo programs are installed. Simply flash the image to an SD card, boot up, and within minutes you can have the robot performing face detection, gesture control, or object tracking. This immediate, tangible success provides the crucial first feedback loop to keep motivation high. 💡 Get all codes, videos, and diagrams from TurboPi Robot Tutorials.

Instead of being a shallow jack-of-all-trades, TurboPi concentrates on AI vision and voice interaction—areas that deliver visually impressive and intuitive results quickly. Its vision stack is ready to use out of the box, integrating a set of common vision tools. With these, you can immediately build projects like an object-following robot, a simple autonomous driving simulator, or an FPV (First-Person View) exploration rover. Beyond vision, TurboPi includes voice interaction capabilities. Using its onboard audio hardware and integrated multimodal models, it can understand natural language commands like “find a red ball,” moving beyond simple remote control to more natural interaction.

A frustrating hardware experience can kill a project. TurboPi is designed for robustness and ease of use; this design allows the platform to execute complex maneuvers reliably, letting you focus on software and AI rather than hardware debugging.

Self-directed learning needs a map. TurboPi is supported by extensive documentation, step-by-step TurboPi tutorials, and open-source code examples. The platform leverages the mainstream Raspberry Pi and ROS 2 ecosystems, ensuring the skills you learn are transferable. You’re not just buying a robot; you’re gaining a guided entry point into a vast world of open-source robotics development.

If you are a student, hobbyist, or developer looking for a gentle yet powerful on-ramp to embedded AI, computer vision, and ROS 2, TurboPi is a compelling choice. It removes the initial friction of system integration, letting you spend your time where it matters: building, coding, and experimenting with intelligent robotic behaviors. Hackster.io, an Avnet Community © 2025
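As a taste of what the ROS 2 side of such a kit involves, a first exercise is typically a minimal rclpy node like the sketch below; the `/cmd_vel` topic and `Twist` message are general ROS conventions, not necessarily TurboPi's actual interface, so check the kit's documentation before running it.

```python
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class ForwardDriveDemo(Node):
    """Minimal ROS 2 node publishing velocity commands (hypothetical topic)."""
    def __init__(self):
        super().__init__('turbopi_demo')
        # '/cmd_vel' is the conventional ROS velocity topic; the real robot
        # interface may differ.
        self.pub = self.create_publisher(Twist, '/cmd_vel', 10)
        self.timer = self.create_timer(0.1, self.tick)  # 10 Hz command loop

    def tick(self):
        msg = Twist()
        msg.linear.x = 0.1  # creep forward at 0.1 m/s
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(ForwardDriveDemo())

if __name__ == '__main__':
    main()
```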
Images (1):
|
|||||
| Machine learning-based framework for wall-perching prediction of flying robot | … | https://www.nature.com/articles/s41467-… | 1 | Jan 01, 2026 00:01 | active | |
Machine learning-based framework for wall-perching prediction of flying robot | Nature CommunicationsDescription: In contrast to the diverse landing solutions that the animals have in nature, human-made aircraft struggle when it comes to perching on vertical walls. However, traditional dynamic simulations and experiments lack the high efficiency required to analyze the perching and design the robot. This paper develops an efficient machine learning framework to predict vertical-wall perching success for flying robots with spines, overcoming traditional methods’ inefficiency. A validated knowledge-based model computes the robot’s transient dynamics during high-speed perching, identifying key success factors. By training the mixed sample data, a data-driven model has been proposed to predict the success or failure of an arbitrary perching event. Here, we show that this high-precision prediction optimizes robot control and structural parameters, ensuring stable perching while drastically reducing the time and cost of conventional design approaches, advancing the flying robot capabilities. The authors report an efficient machine learning framework designed to accurately predict and optimize the vertical-wall perching of flying robots, addressing the limitations of traditional design methods. Content:
Nature Communications volume 16, Article number: 11038 (2025)

In contrast to the diverse landing solutions that animals have in nature, human-made aircraft struggle when it comes to perching on vertical walls. However, traditional dynamic simulations and experiments lack the high efficiency required to analyze perching and design the robot. This paper develops an efficient machine learning framework to predict vertical-wall perching success for flying robots with spines, overcoming traditional methods’ inefficiency. A validated knowledge-based model computes the robot’s transient dynamics during high-speed perching, identifying key success factors. By training on mixed sample data, a data-driven model is proposed to predict the success or failure of an arbitrary perching event. Here, we show that this high-precision prediction optimizes robot control and structural parameters, ensuring stable perching while drastically reducing the time and cost of conventional design approaches, advancing flying robot capabilities.

Perching is an innate ability of animals, which means they can land and stay on a branch or another suitable surface. For example, flies can land upside down on a ceiling [1], gliding geckos crash into tree trunks with their tails to stabilize landing [2], and bats land on the tops of caves to rest [3]. Meanwhile, creatures have evolved a variety of methods for landing, perching, locomotion, and take-off to adapt to various irregular natural and artificial surfaces [4]. In contrast to animals, perching on such surfaces is much tougher for robots (especially aircraft): a flat ground is usually necessary. However, perching abilities similar to those of animals can help expand the use of robots in extreme environments and improve their mobility, robustness, and working time [5,6]. For instance, in search-and-rescue operations, perching mechanisms would allow drones to land on walls or power lines, enabling persistent environmental monitoring from vantage points while conserving power [7]. Consequently, achieving reliable perching capability on non-horizontal surfaces has emerged as a significant research frontier in aerial robotics.

The optimization challenges pertaining to pitch angle, velocity, and lift force during avian perching [8] are equally encountered in developing perching technologies for aerial robots. Zufferey and Tormo-Barbero [9] presented a process to autonomously land an ornithopter on a branch. This method explains the combined operation of a pitch-yaw-altitude flapping flight controller, an optical close-range correction system, and a bistable claw appendage design that can grasp a branch within 25 milliseconds and reopen. Guo, Tang et al. [10] developed an actuated modular landing gear system achieving stable perching and resting of UAVs on diverse structures. Roderick, Cutkosky, and Lentink [11] designed a biomimetic robot achieving dynamic perching on complex surfaces with concurrent grasping of irregular objects, demonstrating that closed-loop balance control critically expands viable perching parameter ranges.
Hsiao, Bai et al. [12] presented a flying robot achieving perching on diverse substrates – including wet/dry overhangs and walls – via an ultralight attachment module constituting merely 3.3% of total mass. Liu, Tian et al. [13] proposed a contact-adaptable robot for perching featuring all-in-one electroadhesive pads, enabling ceiling-surface attachment and significant onboard energy conservation. Liu, Dong et al. [14] developed a multifunctional aerial manipulation system using a composite cup structure (soft inner cup + rigid outer shell), enabling execution of perching or lateral aerial grasping tasks while reducing reliance on precise control through the soft cup’s adaptability to multicopter-induced angular errors.

While the aforementioned studies have made substantial progress in developing and characterizing novel perching mechanisms, no research to date has established predictive frameworks for perching. Nevertheless, perching, which involves rapid maneuvers and is subject to strict velocity constraints, remains one of the most critical challenges for flying robots, largely because an unpredicted failed landing on a vertical surface is highly likely to result in severe damage to the robot [15]. Establishing a predictive framework for perching would therefore greatly enhance the success rate of wall-perching maneuvers and reduce the losses associated with landing failures.

To predict the wall-perching outcome and design the structural configuration of a flying robot with spines, researchers currently rely on traditional dynamic simulation [16] and experimental methods [14]. However, traditional dynamic simulations lack the efficiency required to handle the substantial computational workload, and the limited simulation results are difficult to translate into practical control strategies for perching. Similarly, the limited amount of observational data from perching experiments is insufficient for designing flying robots. Fortunately, machine learning, as a part of artificial intelligence (AI) technology, offers a promising way to provide robots with perching experience and improve their capabilities. This potential is being realized through advanced ML implementations, as demonstrated by recent breakthroughs in flight control. The integration of machine learning (ML) into flight systems to enhance autonomous capabilities has reached a sophisticated level of maturity: Kaufmann, Bauersfeld et al. [17] presented a champion-level autonomous racing system employing deep reinforcement learning that fuses simulated training with physical vehicle telemetry, and Ouahouah et al. [18] proposed an obstacle avoidance algorithm for UAVs based on probability and Deep Reinforcement Learning (DRL).

Recently, ML-driven perching control studies for horizontal and non-horizontal surfaces have constituted a rapidly evolving frontier. Waldock, Greatwood et al. [19] implemented a DRL framework integrating a nonlinear constraint optimizer (IPOPT) with a DQN agent to compute perching trajectories for horizontal surfaces. This approach derives optimal pre-contact pitch/velocity parameters that guarantee structurally stable perching on horizontal surfaces. de Croon, De Wagter et al. [20] proposed a machine learning method enabling flying robots to acquire optical-flow-based strategies for complex tasks such as landing, resulting in faster, smoother landings.
Ladosz et al. [21] employed visual sensing and DRL, incorporating a deep regularized Q-learning algorithm and a custom-designed reward scheme to enable autonomous landing of UAV systems on moving platforms. Habas and Cheng [15] replicated fly landing behaviors in micro-quadcopters using a generalizable control policy for arbitrary ceiling-approach conditions. Their framework employs reinforcement learning in simulation to optimize discrete sensorimotor pairs across diverse approach velocities and directions, with the first-stage “Flip-Trigger” control implementing a one-class support vector machine (SVM). Despite these advances, no existing ML approach addresses the challenge of predicting wall-perching outcomes while co-designing robot structures, a critical capability for designing flying robots for complex environments.

In this work, to bridge this gap, a machine learning-based framework for wall-perching prediction and design of a flying robot with spines is developed. The framework synthesizes machine learning, computational impact dynamics, and experiment. First, an accurate knowledge-based model, validated through experimentation, is developed to calculate the transient responses of the flying robot during perching; the key factors that influence the success rate of perching at relatively low landing speeds are identified. Then, a flying robot prototype is constructed to conduct perching experiments. In addition, mixed sample datasets are obtained, comprising experimental data from the perching experiments and simulation data computed using the knowledge-based model. By training on the mixed sample datasets, a data-driven model is established to predict the success or failure of an arbitrary perching event. The results of this study can aid in designing flying robots and preventing perching failure.

To achieve the landing and climbing functions, a spiny flying and wall-climbing robot is designed based on a multimodal robotic design philosophy [5]. The whole robot is composed of three subsystems, namely the flight system, landing system, and attachment system (Fig. 1a). Here, the flight system mainly consists of a micro quadcopter, which provides the robot with flying capability. The landing system consists of two flexible tail structures connected by a transverse beam with two wheels (Fig. 1a). During landing, the tail makes initial contact with the vertical wall; the huge impact force is reduced thanks to the large structural compliance of the tail we designed. The attachment system is built from two carbon-fiber rods with spines (Supplementary Figs. 1, 2). The spine morphology exhibits structural similarity to beetle tarsal claws [15]. Due to the very tiny tip of the spine, the robot can crawl stably on the wall.

Figure 1: a The structure composition of the robot including attachment, flight and landing systems. b Robot flies approaching the wall, where v0 and θ0 are the initial horizontal incidence velocity and initial pitch of the flying robot, respectively. c The robot contacts the wall with its tail, where αB is the robot’s body acceleration. d The robot executes an upward flip action, where vB is the velocity of the mass center of the robot. e The robot grasps and perches on the wall.

The robot achieves perching through four designed stages (Fig. 1b–e). In the first stage, the robot approaches the vertical wall using its flight system. The second stage is a contact-impact phase, where the robot uses its flexible tail rods to impact the wall.
Subsequently, it enters the third stage (the rotation stage): the robot's wheels contact the wall, and the entire body rotates upward around the wheels. In the fourth stage, the robot uses the spines of its attachment system to climb on the wall, completing the perching process. Although the attachment system (Fig. 1a) enables the robot to climb on the wall, this paper focuses on its landing and perching problems; for detailed descriptions of the spine structure and wall-climbing mechanism, refer to Supplementary Note 1. To analyze the landing and perching behavior of the robot, we naturally resort to a contact-impact dynamic model. This is a typical knowledge-driven model based on the impact dynamics of a flexible multibody system. In this study, the model is discretized using the finite element method (FEM) (Fig. 2a), and calculations are performed in the LS-DYNA software. We use this model to evaluate the success or failure of perching for different maneuvering states before landing. The numerical results guide the experiments and are used to build the subsequent data-driven model, discussed in the next section. a Discrete FEM model used to solve the contact-impact problem during the landing event. b Displacement of the spine under three sets of initial landing conditions with lift strengthening (I: θ0 = −5° and v0 = 0.4 m/s; II: θ0 = −5° and v0 = 0.8 m/s; III: θ0 = −10° and v0 = 0.8 m/s). The solid and short-dashed lines are the displacements in the x and z directions, respectively; stars mark collisions between the spine and the wall. c Lift curves during the landing event for lift-receding cases. d Lift curves during the landing event for lift-strengthening cases. e Resultant velocity of the spine during the landing process for the 4 typical landing behaviors, together with the success and failure envelope regions (green and red areas) summarized from the landing events under 114 sets of initial landing conditions. The three values in the legend are the pitch of the flying robot θ0, the initial horizontal incidence velocity v0, and the lift FL (LR means lift receding; LS means lift strengthening). f The robot undergoes a fixed-axis rotation about the tail's contact point during the "perfect landing" process. g The robot body flips upward about the wheels during the "perfect landing" process. h The wheel separates from the wall and then contacts it again during the "normal landing" process. i The maximum von Mises stress appears at the root of the landing rod when the robot contacts the wall. j The maximum von Mises stress appears in the connecting piece; if high velocity causes excessive stress, the structure will be damaged. k Configuration of the robot in the successful perching state on the wall. Numerical simulations were performed for 114 cases of pre-landing maneuver states. The variable parameters of the maneuver states are the initial pitch of the flying robot θ0, the initial horizontal incidence velocity v0, and the lift FL. Figure 2b illustrates the displacement of the spine after the tail contacts the wall, calculated with the knowledge-driven model; the peaks of the dashed curves represent the rapid release of deformation energy from the tail.
In Fig. 2b, θ0 is defined as positive when the positive direction of the X-axis rotates counterclockwise to align with the robot's axis, and negative for the opposite rotation. All other attitude angles and velocity components are zero. Note that the lift FL is an active force that is controlled during the contact-impact process; it is set as a linearly increasing or decreasing function of time t (Fig. 2c, d) to analyze its effect on landing success. Gravity is constant throughout the landing process. Our computations for the 114 cases based on the knowledge-driven model reveal 4 types of typical landing behavior (Fig. 2e). Regardless of the initial conditions, the resultant velocity of the spine always undergoes multiple oscillations, because the tail vibrates markedly under contact-impact. Figure 2e also shows the envelope regions summarized from the 114 landing events, corresponding to the 114 data points in section "Data-driven model for perching". The green area indicates successful landings, while the red and purple areas indicate failed landings (LR means lift receding; LS means lift strengthening). Representative robot postures and stress states during landing are also shown. From Fig. 2e, the 4 types of typical landing behavior can be summarized as follows: Perfect landing (success). The characteristic feature of this behavior is that the robot's mass-center trajectory approximates a semicircular path, with negligible body sway after attachment; the corresponding mass-center velocity is indicated by the black curve in Fig. 2e. To achieve it, the robot must be in the most favorable landing state before contacting the wall: flying almost horizontally (−8° < θ0 < −5°) with a moderate mass-center velocity (0.6 m/s < v0 < 0.8 m/s). The tail then contacts the surface (Fig. 2f) and undergoes bending deformation, after which the robot performs a fixed-axis rotation about the tail's contact point. The tail does not leave the wall until the wheels make contact. The robot body then flips upward (Fig. 2g) about the wheels, and finally the spines grasp the surface and attach successfully. Normal landing (success). The characteristic feature of this behavior is that the mass-center trajectory approximates a horseshoe-shaped path, with significant body sway after attachment; the corresponding mass-center velocity is indicated by the blue curve in Fig. 2e. This landing requires initial states of −5° < θ0 < −2° and 0.8 m/s < v0 < 1.0 m/s before wall contact. The robot then performs a planar motion to flip upward, but the wheel transitions between contact and separation from the wall (Fig. 2h) during the upward motion and contacts the wall again at the end of the landing; in other words, the wheel makes contact twice during the landing process. Low-kinetic-energy landing (failure). This behavior and landing process are shown by the red curve in Fig. 2e; the initial conditions are v0 < 0.4 m/s and θ0 > −4°.
Since the initial kinetic energy is low, the robot cannot perform an upward flip, even though the quadrotor's lift is sufficient and the tail does not rebound from the wall; instead, the robot executes a straight upward motion parallel to the wall surface. Rebound landing (failure). This behavior takes two forms. In the first, the robot achieves an upward flip, but the spine rebounds on contacting the wall because of the large contact force at the spine; this behavior and landing process are shown by the purple curve in Fig. 2e, with initial conditions generally v0 > 1.0 m/s and θ0 < −10°. The failure is caused by the spine's inability to grasp the wall tightly. In the second form, the robot rebounds directly as the tail contacts the wall, preventing the upward flip; the initial conditions are generally v0 > 1.0 m/s and θ0 > −2°. This form occurs very rarely, because θ0 is generally below −2° during actual piloted flight. In addition, Fig. 2i, j show the stress concentration in the landing system structure; the critical areas where structural failure can occur are primarily located in the root zone of the tail. Overall, conditions such as the robot's initial velocity, initial pitch, and lift have very complicated effects on landing behavior: they sometimes work together to achieve successful landings (Fig. 2k), but at other times conflict and lead to failures. Intuitively, if the velocity is lower and the absolute pitch angle smaller, lift assistance is required; conversely, if the velocity is higher and the absolute pitch angle larger, increasing lift may cause the robot to rebound from the wall. These relationships, however, are not absolute. Predicting success or failure is therefore one of the greatest challenges in landing on a vertical wall. Our calculations show that each computation with the knowledge-driven model is expensive: simulating one landing process generally requires around 48 h on a computer equipped with an Intel 6700K CPU and 16 GB of DDR4 RAM, and exhaustively exploring all working conditions and control parameter combinations this way is impractical. There is thus an urgent need for an efficient and accurate model enabling numerical evaluation of all landing conditions; fortunately, the emergence of machine learning methods has made this achievable, as discussed in the section "Data-driven model for perching". In the wall-landing experiments, the robot first takes off from a horizontal surface and is then piloted to land on the wall with varying initial pitch θ0, initial horizontal incidence velocity v0, and lift FL. Videos of the experiments were recorded to obtain experimental data (Supplementary Figs. 3, 4); for details of the experimental procedure, refer to Supplementary Note 2. The parameters θ0, v0, and FL were intentionally varied before wall contact to provide a diverse range of experimental results for the machine learning in subsequent sections.
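To make the structure of these records concrete, here is a minimal sketch of how each trial, whether simulated or experimental, might be logged as one row of the sample data; the field names and CSV layout are assumptions for illustration, not the authors' published schema.

```python
import csv
from dataclasses import dataclass, asdict

@dataclass
class PerchingTrial:
    """One wall-landing trial, mirroring the quantities varied in the tests."""
    theta0_deg: float  # initial pitch of the flying robot (degrees)
    v0_ms: float       # initial horizontal incidence velocity (m/s)
    lift_mode: str     # "LR" (lift receding) or "LS" (lift strengthening)
    source: str        # "experiment" or "simulation" (knowledge-driven model)
    success: bool      # landing outcome, read off the video or solver output

def append_trial(path: str, trial: PerchingTrial) -> None:
    """Append one trial to a CSV file, writing a header if the file is new."""
    row = asdict(trial)
    try:
        is_new = open(path).read(1) == ""
    except FileNotFoundError:
        is_new = True
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=row.keys())
        if is_new:
            writer.writeheader()
        writer.writerow(row)

append_trial("landing_trials.csv", PerchingTrial(-6.0, 0.7, "LS", "experiment", True))
```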
This experimental data serves two main purposes: first, to validate the accuracy of the computational results obtained from the knowledge-driven model; second, to identify general patterns in landing behavior by observing the robot's actual landing events. Figure 3 shows experimental images of the 4 typical landing behaviors; continuous footage of the flying robot is provided in Supplementary Movie 1. The movement trajectories for the perfect landing (success), normal landing (success), low-kinetic-energy landing (failure), and rebound landing (failure) are depicted in Fig. 3a–g. The detailed characteristics captured in the experiments are consistent with those summarized by the numerical evaluation. Figure 3h–k shows the mass-center velocities and pitches of the robot during the perfect landing and rebound landing processes: during the perfect landing, the pitch changes linearly toward a constant value, whereas in the rebound landing it changes nonlinearly from negative to positive. a Motion track for the perfect landing (success). b The tail contacts the surface during the perfect landing (success). c The wheels contact the wall during the perfect landing (success). d The spines grasp the surface and attach during the perfect landing (success). e Motion track for the normal landing (success). f Motion track for the low-kinetic-energy landing (failure). g Motion track for the rebound landing (failure). h Mass-center velocities during the perfect landing (success); the shaded area indicates the transition from tail collision to spine collision. i The robot's pitch during the perfect landing. j Mass-center velocities during the rebound landing (failure). k The robot's pitch during the rebound landing (failure). The control parameters strongly influence the landing responses. Because experiments are costly and time-consuming, only a limited number of landing trials can be conducted, recording data under a limited set of control parameters; experiments covering all working conditions are impractical. Nevertheless, the limited experimental results are still valuable and serve as sample data for establishing the data-driven model in the next section. Generally, "learning for simulation" is a traditional idea in AI; recently, some scientists have hypothesized that "simulation for learning" may be a trend in computer graphics22. Here, based on mixed samples comprising computational data from a knowledge-driven model and experimental data, we establish a data-driven model to predict the robot's landing. We propose a technique that combines simulation based on knowledge-driven models with machine learning based on data-driven models, to leverage the best aspects of both. The entire training and prediction process is shown in Fig. 4 and comprises the following 5 steps: a Simulation of robot landing based on the knowledge-driven model. b Actual robot landing experiment. c Extreme state in which the robot is out of control. d Data points obtained from knowledge-driven simulations and actual experiments under different lift forces. e The data-driven model of landing behavior established after training with machine learning.
f If the data-driven model predicts a failed landing, the robot aborts the landing, flies away from the wall, and prepares for another attempt. g The robot performs a new landing; when it approaches the landing surface again, the data-driven model makes a new prediction. h The robot lands on the wall if the data-driven model predicts a successful landing. Step 1: Preliminary structural design of the robot. The preliminary design of the robot and its landing strategy are based on Newtonian mechanics. Step 2: Acquisition of datasets. An accurate three-dimensional finite element model (the knowledge-driven model) is established to simulate the robot's landing (Fig. 4a), with the contact-impact dynamic responses computed in LS-DYNA; landing results are obtained for 114 cases of maneuvering states. Actual landing experiments with the robot prototype have also been conducted (Fig. 4b). To define the boundaries of the dataset (Fig. 4c), data points corresponding to invalid states, including out-of-control events, are identified. The dataset comprises the robot's pitch, velocity, and lift data together with success/failure outcomes, extracted from both computational and experimental results (Fig. 4d and Supplementary Data 1). Step 3: Establishment of the data-driven model. To ensure that the machine learning framework achieves effective landing predictions for the different lift modes, the dataset is divided into two parts: lift receding and lift strengthening. Each part contains two features, the initial horizontal velocity and the initial angle. The dataset is thus distributed within a three-dimensional feature space composed of initial horizontal velocity, initial angle, and lift mode (Fig. 4e), representing landing outcomes under various initial conditions. The dataset contains no noise, so noise reduction is not required. Step 4: Training. After training the machine learning models, a decision boundary is generated in the landing feature space for the Multi-layer Perceptron (MLP) and Random Forest (RF) models (a separating hyperplane, in the case of the Support Vector Machine) under the different lift conditions (Fig. 4e). The decision boundary separates the data points representing successful landings from those representing failures. The farther a data point lies from the boundary on the success side, the higher the model's predicted probability of success and the greater the robot's confidence in a successful landing. By adjusting the model parameters that shape the decision boundary, we obtain the optimal model with the highest prediction confidence. Step 5: Prediction. According to the training results, when the robot is in a landing situation, the data-driven model predicts the success or failure of the landing, and the robot's subsequent behavior is determined accordingly. In future practical perching applications, if the prediction computed by the onboard computer indicates failure, the two motors near the wall increase their rotational speed to decelerate the robot, which then adjusts its flight state in real time to fly away from the wall (Fig. 4f) and seek a new landing opportunity (Fig. 4g).
Conversely, if the prediction indicates success, the two motors located away from the wall increase their rotational speed to make the robot flip upward, and the robot lands directly on the wall (Fig. 4h). The overall computational process is as follows. After a primary selection of the data collected from the robot's sensors, influential factors affecting landing success or failure can be determined manually and employed as dimensions of the dataset; these factors may be interconnected, and there is no limit on the number of dimensions. Both simulated and experimental data samples are used to construct the datasets. Generally, a larger dataset yields more accurate predictions; however, if the sample data span all relevant scenarios and are distributed fairly uniformly in the state space, even a smaller dataset can achieve desirable outcomes. The proportion of simulated to experimental data can be adjusted after weighing time and cost considerations, simulated data being cheaper and more efficient to obtain. When the robot encounters a new scenario, the outcome can be predicted instantly, allowing the robot to respond accordingly; ultimately, this enables the robot to handle various new perching challenges intelligently. The machine learning programs in this paper were developed in Python. In this study, the datasets were obtained under surface conditions favorable enough for the robot to land easily. For soft surfaces with low rugosity, the favorable condition is a surface Brinell hardness below 2, which many tree bark surfaces satisfy. For hard wall surfaces, a high level of rugosity is required: our tests indicate that the static friction coefficient μ should exceed roughly 0.2, with at least a 20% difference between the static and dynamic friction coefficients. After preparing the dataset, our goal is to ensure that the machine learning method achieves high prediction accuracy, so the data-driven model must be optimized. During training, we use the same dataset and continuously adjust the parameters of the generated model to identify the parameter configuration (for SVM and RF) that yields the highest prediction accuracy, thereby finding the optimal model. Specifically, the landing success and failure regions are distinguished by the decision boundary, whose accuracy is the ratio of correctly classified data to the total dataset. Unlike SVM and RF, the MLP does not require a search for an optimal parameter configuration. Figure 5 presents wall-perching results calculated using data-driven models based on the MLP, SVM, and RF methods, respectively; each model predicts outcomes across three dimensions: lift force, pitch angle, and initial velocity. The lift curves of Fig. 2c, d are employed here as well, and the hybrid dataset used for training (Fig. 5a, b) combines simulation results from the knowledge-driven model with experimental data. Figure 5c–h displays the prediction results from the MLP, SVM, and RF models, respectively.
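As a concrete illustration of Steps 2-5, the sketch below trains a classifier on such a mixed dataset and uses it to decide whether to commit to a landing. The CSV columns, the scikit-learn MLPClassifier stand-in, and the 0.5 decision threshold are assumptions made for this sketch, not the authors' released implementation.

```python
import pandas as pd
from sklearn.neural_network import MLPClassifier

# Mixed dataset: simulation + experiment rows with the features used in the
# paper (initial pitch, initial horizontal velocity, lift mode) and the
# success/failure label. Column names are assumptions for this sketch.
data = pd.read_csv("landing_trials.csv")
data["lift_ls"] = (data["lift_mode"] == "LS").astype(int)  # encode lift mode 0/1
X = data[["theta0_deg", "v0_ms", "lift_ls"]].to_numpy()
y = data["success"].astype(int).to_numpy()

# The paper splits the dataset by lift receding / lift strengthening; here a
# single model simply sees the lift mode as a third feature.
model = MLPClassifier(hidden_layer_sizes=(64, 128, 64, 32), activation="relu",
                      max_iter=500, random_state=0).fit(X, y)

def decide(theta0_deg: float, v0_ms: float, lift_strengthening: bool) -> str:
    """Predict the outcome of the current approach state and choose an action."""
    p = model.predict_proba([[theta0_deg, v0_ms, int(lift_strengthening)]])[0, 1]
    # Farther from the decision boundary -> higher confidence (cf. Step 4).
    return "commit to landing" if p > 0.5 else "abort, fly away, and retry"

print(decide(-6.0, 0.7, True))
```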
These results demonstrate that, regardless of the machine learning method, the decision boundary envelope encompasses a larger successful-perching region when lift is present, indicating that providing lift enhances landing success rates for flying robots. Comparative analysis reveals considerable differences among the three methods' predictions, with the MLP performing relatively better than SVM and RF. Detailed descriptions of the data-driven models and the corresponding machine learning methods are provided in Supplementary Note 3. a Mixed dataset under lift receding, obtained using the knowledge-driven model and experiments. b Mixed dataset under lift strengthening, obtained using the knowledge-driven model and experiments. c Decision boundary obtained by the MLP model under lift receding. d Decision boundary obtained by the MLP model under lift strengthening. e Decision boundary obtained by the SVM model under lift receding. f Decision boundary obtained by the SVM model under lift strengthening. g Decision boundary obtained by the RF model under lift receding. h Decision boundary obtained by the RF model under lift strengthening. The machine learning-based framework can not only predict the landing but also guide the design of the flying robot. In the initial design phase, several basic premises must be considered to achieve successful landing and perching on the wall: 1) the robot's mass should be as light as possible; 2) the structure that collides with the wall needs a certain buffering capacity to reduce the impact load; and 3) the landing structure should help the robot flip from the horizontal to the vertical state after the collision, so as to achieve successful grasping. In view of these three requirements, the overall structure of the robot was preliminarily designed according to existing experience and knowledge. Among the components, the landing rod is particularly critical for successful landing. It was preliminarily designed as a flexible carbon fiber rod: compared with other buffer structures, its main advantages are the lowest weight for a given size and the simplest structural form, which greatly eases assembly. Beyond the initial design, several parameters remain to be determined, including the landing rod length la and the angle α between the landing rod and the robot's horizontal structure; these directly determine landing success or failure. The length la governs the bending flexibility and cushioning performance of the structure, while the included angle α determines whether the flying robot can flip up at a reasonable speed. The detailed closed-loop optimization of these key parameters is therefore carried out below; the process is shown in Supplementary Fig. 5. During this phase we focused on optimizing the landing rod structure while keeping the other components unchanged and, building on the initial design, developed multiple landing rod configurations with varied parameter combinations, as enumerated next.
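The next paragraph gives the actual sweep ranges. As a sketch of how such a closed-loop search could be scored, the snippet below ranks candidate (la, α) designs by the area of their predicted success region in the (v0, θ0) feature space; train_predictor_for_design is a hypothetical stand-in for one full iteration of simulation, experiment, and training, and its toy surrogate is arbitrary, existing only so the sketch runs.

```python
import numpy as np

def train_predictor_for_design(la_cm: float, alpha_deg: float):
    """Hypothetical stand-in for one closed-loop iteration (FEM simulation,
    landing experiments, classifier training) at one landing-rod design."""
    def predict(v0: float, theta0: float) -> float:
        # Arbitrary toy surrogate, for illustration only.
        width = 0.6 - 0.01 * abs(alpha_deg - 40.0) - 0.05 * abs(la_cm - 3.0)
        return 1.0 if (0.4 <= v0 <= 0.4 + width and -10.0 <= theta0 <= -2.0) else 0.0
    return predict

def success_region_area(predict, v0s, theta0s) -> float:
    """Fraction of the (v0, theta0) grid predicted as successful landings."""
    return float(np.mean([[predict(v, t) > 0.5 for t in theta0s] for v in v0s]))

v0s = np.linspace(0.0, 1.5, 31)        # m/s
theta0s = np.linspace(-15.0, 5.0, 41)  # degrees
candidates = [(la, a) for la in (2.0, 3.0, 4.0)      # la: 2, 3, 4 cm
              for a in np.arange(20.0, 65.0, 5.0)]   # alpha: 20..60 deg, 5 deg steps
best_la, best_alpha = max(
    candidates,
    key=lambda d: success_region_area(train_predictor_for_design(*d), v0s, theta0s))
print(f"best design: la = {best_la} cm, alpha = {best_alpha} deg")
```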
la was increased incrementally from 2 cm to 4 cm in 1 cm steps, while α was varied from 20° to 60° in 5° increments, generating numerous parameter combinations. To identify the optimal combination, we employed an iterative approach, comparing the machine-learning-predicted successful landing zones across parameter sets until the resulting hyperplane region achieved maximum coverage of the two-dimensional feature space defined by the robot's initial horizontal velocity and initial angle, thus accomplishing closed-loop structural optimization. Each iteration integrated FEM simulation analysis, physical landing experiments, and machine learning to obtain the hyperplane zone for its parameter combination. Through this process, we determined the robot's optimal structural configuration and geometric dimensions (Supplementary Fig. 6 and Supplementary Table 1). This study also optimized the lift conditions during the landing phase: for the robot designed here, the preceding section determined the optimal set of initial horizontal velocities and initial angles by comparing successful landing regions under the two control strategies, lift receding and lift strengthening. Subsequently, based on the data-driven model's predictions of landing success or failure during actual landings, different control strategies can be implemented, with optimized control achieved by adjusting the robot's flight parameters. Although the prediction framework proposed here has been validated only on the flying robot of this design, the closed-loop design analysis is applicable to wall-perching robots of other structures: through experiments or simulations, one can obtain a dataset relating influencing factors such as speed, posture, and distance to perching success for an arbitrary robot, use the machine learning methods employed here to predict the perching success rate under different initial conditions, and further optimize the robot's design based on the predictions. We first discuss the efficiency and accuracy of the three machine learning approaches. A comparative analysis showed that the MLP training time was only about 5 s, whereas the SVM, owing to its search for optimal parameters, had the longest training time of approximately 180 s, 36 times that of the MLP. Examining the decision boundaries predicted by the three methods, the SVM and Random Forest exhibited larger errors than the MLP: their success regions contained several characteristic points that corresponded to failure outcomes in both the numerical calculations and the experimental tests, whereas such contradictions were almost absent from the MLP results. For the problem addressed in this paper, the MLP method therefore offers relatively higher prediction accuracy. It is widely acknowledged that traditional methods such as the finite element method or the material point method often require a large amount of computational time to simulate a single landing event. This is primarily due to the nature of the landing event, a typical contact-impact problem: accurate modeling demands a very small time step for integrating the high-dimensional dynamic differential equations.
Additionally, the iterative computations required to resolve the contact nonlinearity make the simulations even more time-consuming. Simulating all possible scenarios, with arbitrary initial angles and velocities, using traditional methods is clearly impractical. Conducting experiments for every case is likewise unfeasible: the experimental process is not only time-consuming but also expensive, involving extensive preliminary preparation and equipment debugging, with the added risk of damaging the robot. In short, whether through experiments or traditional numerical simulation, covering the entire state space is challenging. The machine learning-based framework employed here allows the complete space of possible cases to be predicted from a reduced number of data samples, thereby enhancing the efficiency of landing research. Second, the primary goal of AI in this context is generally "learning for simulation": training models on data so that they can perform simulations efficiently, without complex manual modeling and algorithm design. In this paper we likewise employ AI to build a data-driven model with which we can predict the success or failure of a robot's landing and determine its subsequent behavior. This "learning for simulation" methodology is widely applicable in domains such as computer graphics, robotics, and physics modeling. This paper, however, also embraces the converse concept of "simulation for learning": unlike previous work, the data here are not derived solely from experiments but also include simulation data computed by a knowledge-driven model. Combining the simulation data with the experimental data yields mixed data, and obtaining such mixed data is the main objective of "simulation for learning". The mixed data are then used to establish the data-driven model, which ultimately enables efficient and accurate simulation of all landing events. Our research validates the effectiveness of this "simulation for learning" approach. Finally, we discuss potential issues in practical applications. The current study demonstrates successful perching of the robotic platform under low-speed flight conditions. Extending this capability to high-speed operation presents several challenges: sensor delays in gyroscopes and accelerometers may cause attitude-control lag during landing, while limited motor response could destabilize rapid flight-to-perch transitions. Accelerometer noise during impact may also interfere with contact detection, though filtering algorithms could mitigate this effect. Additionally, the control strategy must address motor overheating during prolonged stall conditions through optimized power-thermal management. These challenges highlight important directions for future research in high-speed robotic perching. This paper focuses on proposing a machine learning-based prediction framework for wall-perching, in which the landing experiments serve to collect samples of successful and failed outcomes.
Consequently, our experiments examine only the process from flight state to perched landing, without yet using the prediction results to assess landing success or failure before wall contact, to guide the robot's decision on whether to initiate landing, or to direct it to steer away from the wall and reorient both pitch angle and approach velocity for a subsequent attempt. Implementing this comprehensive closed-loop process, with our predictive framework as the decision-making core, remains future work; it will establish complete control strategies for aerial robotic wall-perching and lay the groundwork for feedback regulation algorithms. It must also be pointed out that the experimental data used for prediction were measured under relatively ideal laboratory conditions and do not fully transfer to conditions with high interference or smoother walls. Under high-interference conditions (e.g., strong wind), the robot's small size makes its flight stability highly susceptible to disturbance, which in turn affects the landing prediction. Moreover, the grasping structure used here cannot attach successfully to smooth walls, so the proposed prediction model does not apply to smooth-wall conditions. The UG software was employed to create a detailed 3D model of the robot (Fig. 1). The robot comprises three key systems: the flight system, the landing system, and the attachment system. The Crazyflie 2.1 quadrotor flight platform by Bitcraze was ultimately selected. The landing system is composed of two carbon fiber rods, two rubber balls, and two wheels; the thickness and angle of the carbon fiber rods were carefully tuned to optimize energy absorption and conversion during landing. The core components of the attachment system are two spines consisting of micro-scale curved metal structures, connected to curved carbon fiber rods, flexible ropes, a horizontal bar, and a T-shaped bar via revolute joints and holes; the connector parts are 3D printed in resin. In the flight system, in/out servos and extend/retract servos control the movement of the two spines. The geometric entities are created within the LS-DYNA software (Fig. 2a) and consist of two parts: a vertical wall and the robot, the latter divided into six components: quadcopter, servos, carbon fiber frame, connector, spines, and ropes. First, to capture the transient landing deformation, all parts of the robot are modeled as deformable bodies. The quadcopter, ropes, and servos are simulated with the 003 PLASTIC isotropic material model to represent plastic behavior; the carbon fiber parts use the 054 ENHANCED orthotropic material model to capture carbon fiber properties23; the connectors and spine parts are modeled with the 003 PLASTIC isotropic material model to represent the resin material; and the wall is modeled with the 020 RIGID material model as a rigid body. To facilitate the development of the robot's dynamics equations, the finite element technique is used to discretize the structure's deformation field and mesh the flexible parts; the resulting discrete model in LS-DYNA is depicted in Fig. 2a.
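The explicit integration described next is stability-limited by the smallest element. As context, a textbook Courant-type estimate is Δt ≤ L_min/c with wave speed c = sqrt(E/ρ); the sketch below uses placeholder material values, not the authors' model data, and is not LS-DYNA's internal step computation.

```python
import math

def stable_time_step(l_min_m: float, youngs_pa: float, density_kgm3: float,
                     safety: float = 0.9) -> float:
    """Courant-type bound for explicit central-difference integration: the
    step must not exceed the time a stress wave needs to cross the smallest
    element, dt <= L_min / c, with c = sqrt(E / rho)."""
    c = math.sqrt(youngs_pa / density_kgm3)  # 1D dilatational wave speed
    return safety * l_min_m / c

# Placeholder inputs, illustrative only (not the values behind the paper's dt).
dt = stable_time_step(l_min_m=5e-4, youngs_pa=70e9, density_kgm3=1600.0)
print(f"estimated stable step: {dt:.2e} s")
```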
Prior to meshing, the parts are divided into simpler shapes, and techniques such as sweeping and mapping are used to generate high-quality meshes. Based on the knowledge-driven modeling, a combination of regular and hexahedral meshes is employed to ensure accurate results (Fig. 2a); the model contains 58,232 elements in total. Second, regarding constraints, all degrees of freedom of the wall are fully constrained, and an automatic surface-to-surface contact algorithm with a penalty function is employed between the robot and the wall; for computational convenience, neighboring parts are rigidly connected. Third, regarding loads, the robot's lift is shown in Fig. 2c, d. The lift is applied in two situations: in the first, the lift always equals the robot's weight; in the second, the lift equals the weight before surface contact, simulating normal flight, and then increases to twice the weight after contact with the wall. The robot's initial velocity is directed toward the wall, ranging from 0 m/s to 1.5 m/s, while its initial pitch ranges from 5° to −15°. The prestress of the flexible rope has little effect on the collision process and is therefore ignored in the simulations. Finally, regarding numerical integration, the LS-DYNA central difference method is employed for the nonlinear explicit dynamics; the time step is determined from the mesh size and set to Δt = 7.27 × 10⁻⁵ s. The LS-DYNA solver then yields the system's dynamic response, including the collision contact force, dynamic stress contours of the structure, stress histories at each point, the motion process, and other kinematic parameters. In summary, the dimensions and material properties of the finite element model closely match those of the actual robot, and the constraints between components and the loading scenarios align with the experimental configurations. The meshing strategy and mesh density account for grid sensitivity in the computational results. Furthermore, for the impact dynamics of the landing/perching process, the simulation incorporates contact constraints between the robot and the wall while employing explicit dynamics algorithms, ensuring computational accuracy alongside enhanced efficiency. Together, these measures produce strong agreement between the numerical simulations and the experimental results; to further confirm this consistency, the landing outcomes of simulation and experiment are compared for selected data points in Supplementary Table 2. For the establishment of the data-driven model, three distinct machine learning approaches were employed: the Multilayer Perceptron (MLP), the Support Vector Machine (SVM), and the Random Forest (RF). The MLP is a feedforward artificial neural network consisting of an input layer, at least one hidden layer, and an output layer. Each layer comprises multiple units, and each unit applies a differentiable linear map followed by an activation function, mapping the layer's input vector to a scalar output. This paper addresses a binary classification problem with two features: the initial velocity and the initial angle.
The combinations of initial velocity and angle yield a complex nonlinear decision boundary (Supplementary Fig. 7). Since MLPs can learn complex interactions between features and thereby fit decision boundaries of intricate shape, they are well suited to the problem studied here. Consequently, an MLP composed of 5 fully connected layers was constructed. The input layer contains 2 neurons, corresponding to the two features, initial velocity and initial angle. The 4 hidden layers contain 64, 128, 64, and 32 neurons, respectively, with the ReLU activation function providing nonlinear fitting capability. The output layer employs the sigmoid activation function to map the final output to a value representing the probability of a successful landing. After training for 150 epochs, the model achieved satisfactory predictive performance in determining whether the flying robot successfully lands on the wall for given combinations of initial velocity and angle. The SVM is a supervised learning algorithm used mainly for classification and regression. Its core idea is to find an optimal hyperplane in a high-dimensional feature space that separates data points of different classes while maximizing the minimum distance between the hyperplane and the samples of both classes (i.e., maximizing the margin). For linearly inseparable data, the SVM maps the original features to a higher-dimensional space via the kernel trick to achieve linear separability. In the binary classification problem studied here, the combination of initial velocity and initial angle forms a complex nonlinear decision boundary (Supplementary Fig. 8), so the SVM employs the RBF kernel, which uses Gaussian functions to project the data into an implicit high-dimensional space; this effectively captures complex feature interactions and flexibly fits intricate nonlinear decision boundaries. The implementation proceeds as follows: grid search combined with leave-one-out cross-validation is used to find the optimal parameters c (the regularization parameter) and γ (the scaling coefficient of the RBF kernel) (Supplementary Table 3). Here, c controls the trade-off between classification errors and margin width: a smaller c enhances generalization, while a larger c may cause overfitting. γ determines the influence range of a single sample: larger values yield more complex decision boundaries. The optimal parameters are determined by maximizing the cross-validation accuracy, after which the final model is trained. The RF is an ensemble learning method based on decision trees, comprising multiple independent trees. For classification in this paper, an input sample flows to each decision tree, the trees independently predict the sample's outcome, and the majority vote of their classifications is taken as the final result. This work uses the Gini index as the feature selection criterion: a smaller Gini index indicates higher sample purity, so node splitting proceeds in the direction of decreasing Gini index. To achieve better classification performance, hyperparameters such as the maximum tree depth, the maximum number of features considered when splitting a node, and the minimum sample counts at leaf and internal nodes are configured.
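A minimal sketch of how the three classifiers described above could be configured with scikit-learn follows; the toy data, the numeric search grids, and the RF hyperparameter values are placeholders (the values actually used are in Supplementary Table 3), and MLPClassifier serves as a stand-in for the authors' network.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, LeaveOneOut
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

# X: (n_samples, 2) array of [initial velocity, initial angle]; y: 0/1 outcome.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(0.0, 1.5, 80), rng.uniform(-15.0, 5.0, 80)])
y = ((X[:, 0] > 0.4) & (X[:, 0] < 1.0) & (X[:, 1] < -2.0)).astype(int)  # toy labels

# MLP: 4 hidden layers of 64, 128, 64, 32 neurons with ReLU; the sigmoid
# output is implicit in the binary log-loss. 150 epochs per the paper.
mlp = MLPClassifier(hidden_layer_sizes=(64, 128, 64, 32), activation="relu",
                    max_iter=150, random_state=0).fit(X, y)

loo = LeaveOneOut()

# SVM with RBF kernel: grid search over c and gamma with leave-one-out CV.
svm = GridSearchCV(SVC(kernel="rbf"),
                   {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1, 10]},
                   cv=loo).fit(X, y)

# RF: Gini criterion; the tree count is likewise chosen by leave-one-out CV.
rf = GridSearchCV(RandomForestClassifier(criterion="gini", max_depth=5,
                                         min_samples_leaf=2, random_state=0),
                  {"n_estimators": [50, 100, 200]},
                  cv=loo).fit(X, y)

print(svm.best_params_, rf.best_params_)
```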
As the number of decision trees critically impacts random forest training results, Leave-One-Out Cross-Validation is employed to determine the optimal number of trees (Supplementary Table 3). After setting these hyperparameters, the original data is input into the random forest for training, yielding the model employed in this paper (Supplementary Fig. 9). The dataset of robot landing results used to train the machine learning prediction framework has been deposited in the figshare database at https://doi.org/10.6084/m9.figshare.29636996 (ref. 24). Source data are provided with this paper. The source code is publicly available at https://doi.org/10.5281/zenodo.17473887 (ref. 25). References: 1. Liu, P., Sane, S. P., Mongeau, J.-M. J., Zhao, J. & Cheng, B. Flies land upside down on a ceiling using rapid visually mediated rotational maneuvers. Sci. Adv. 5, eaax1877 (2019). 2. Siddall, R., Byrnes, G., Full, R. J. & Jusufi, A. Tails stabilize landing of gliding geckos crashing head-first into tree trunks. Commun. Biol. 4, 1020 (2021). 3. Riskin, D. K. et al. Bats go head-under-heels: the biomechanics of landing on a ceiling. J. Exp. Biol. 212, 945–953 (2009). 4. Roderick, W. R. T., Cutkosky, M. R. & Lentink, D. Touchdown to take-off: at the interface of flight and surface locomotion. Interface Focus 7, 20160094 (2017). 5. Pope, M. T. et al. A multimodal robot for perching and climbing on vertical outdoor surfaces. IEEE Trans. Robot. 33, 38–48 (2017). 6. Kovač, M., Germann, J., Hürzeler, C., Siegwart, R. Y. & Floreano, D. A perching mechanism for micro aerial vehicles. J. Micro-Nano Mech. 5, 77–91 (2009). 7. Floreano, D. & Wood, R. J. Science, technology and the future of small autonomous drones. Nature 521, 460–466 (2015). 8. KleinHeerenbrink, M., France, L. A., Brighton, C. H. & Taylor, G. K. Optimization of avian perching manoeuvres. Nature 607, 91–96 (2022). 9. Zufferey, R. et al. How ornithopters can perch autonomously on a branch. Nat. Commun. 13, 7713 (2022). 10. Hang, K. et al. Perching and resting—a paradigm for UAV maneuvering with modularized landing gears. Sci. Robot. 4, eaau6637 (2019). 11. Roderick, W., Cutkosky, M. & Lentink, D. Bird-inspired dynamic grasping and perching in arboreal environments. Sci. Robot. 6, eabj7562 (2021). 12. Hsiao, Y. H. et al. Energy efficient perching and takeoff of a miniature rotorcraft. Commun. Eng. 2, 38 (2023). 13. Liu, H. et al. Electrically active smart adhesive for a perching-and-takeoff robot. Sci. Adv. 9, eadj3133 (2023). 14. Liu, S., Dong, W., Ma, Z. & Sheng, X. Adaptive aerial grasping and perching with dual elasticity combined suction cup. IEEE Robot. Autom. Lett. 5, 4766–4773 (2020). 15. Habas, B. & Cheng, B. From flies to robots: inverted landing in small quadcopters with dynamic perching. IEEE Trans. Robot. 41, 1773–1790 (2025). 16. Dunlop, D. J. & Minor, M. A. Modeling and simulation of perching with a quadrotor aerial robot with passive bio-inspired legs and feet. Lett. Dyn. Syst. Control 1, 021005 (2021). 17. Kaufmann, E. et al. Champion-level drone racing using deep reinforcement learning. Nature 620, 982–987 (2023).
18. Ouahouah, S., Bagaa, M., Prados-Garzon, J. & Taleb, T. Deep reinforcement-learning-based collision avoidance in UAV environment. IEEE Internet Things J. 9, 4015–4030 (2022). 19. Waldock, A., Greatwood, C., Salama, F. & Richardson, T. Learning to perform a perched landing on the ground using deep reinforcement learning. J. Intell. Robot. Syst. 92, 685–704 (2018). 20. de Croon, G. C. H. E., De Wagter, C. & Seidl, T. Enhancing optical-flow-based control by learning visual appearance cues for flying robots. Nat. Mach. Intell. 3, 33–41 (2021). 21. Ladosz, P., Mammadov, M., Shin, H., Shin, W. & Oh, H. Autonomous landing on a moving platform using vision-based deep reinforcement learning. IEEE Robot. Autom. Lett. 9, 4575–4582 (2024). 22. Deist, T. M. et al. Simulation assisted machine learning. Bioinformatics 35, 4072–4080 (2019). 23. Reuter, C., Sauerland, K.-H. & Troester, T. Experimental and numerical crushing analysis of circular CFRP tubes under axial impact loading. Compos. Struct. 174, 33–44 (2017). 24. Shen, Y. et al. Dataset for landing prediction. figshare https://doi.org/10.6084/m9.figshare.29636996 (2025). 25. Shen, Y. et al. Machine learning-based framework for wall-perching prediction of flying robot. Zenodo https://doi.org/10.5281/zenodo.17473887 (2025). The authors gratefully acknowledge the support from the Robotics and Intelligent Machine Lab founded by Yunian Shen, NJUST. This study was sponsored by the Natural Science Foundation of Jiangsu Province grant (BK20221484 to Y.S.) and the Aeronautical Science Foundation of China (2024M058059002 to Y.S.). These authors contributed equally: Yunian Shen, Chenxi Mao. Department of Mechanics and Engineering Science, School of Physics, Nanjing University of Science and Technology, Nanjing, P. R. China: Yunian Shen, Chenxi Mao, Zeyu Qi, Kunpeng Liu, Weixu Zhang & An Cao. Author contributions: Y.S. and C.M. contributed to the conceptualization and methodology of the study. Y.S., C.M., Z.Q., and K.L. conducted the investigation. Y.S. provided supervision throughout the research process. Y.S. and C.M. were responsible for writing the original draft of the paper. Y.S., C.M., W.Z., and A.C. contributed to the review and editing of the manuscript. All authors contributed to the writing of the paper. Correspondence to Yunian Shen. The authors declare no competing interests. Nature Communications thanks Zhongjin Ju and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. A peer review file is available. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Shen, Y., Mao, C., Qi, Z. et al. Machine learning-based framework for wall-perching prediction of flying robot. Nat. Commun. 16, 11038 (2025). https://doi.org/10.1038/s41467-025-67386-0. Received: 14 January 2025; Accepted: 28 November 2025; Published: 11 December 2025.
Images (1):
|
|||||
| Robot Talk Episode 130 – Robots learning from humans, with … | https://robohub.org/robot-talk-episode-… | 0 | Jan 01, 2026 00:01 | active | |
Robot Talk Episode 130 – Robots learning from humans, with Chad Jenkins URL: https://robohub.org/robot-talk-episode-130-robots-learning-from-humans-with-chad-jenkins/ Content: |
|||||
| Wuffy Robot Dog Unveiled: The Advanced Wuffy Robot Puppy Transforming … | https://www.manilatimes.net/2025/11/16/… | 0 | Jan 01, 2026 00:01 | active | |
Wuffy Robot Dog Unveiled: The Advanced Wuffy Robot Puppy Transforming Learning, Play and Emotional Support Description: Discover whether the Wuffy Robot Dog is worth buying in 2025. See features, benefits, price, safety, and real customer insights before you order. Content: |
|||||
| STRAP: Robot Sub-Trajectory Retrieval for Augmented Policy Learning — Enhancing … | https://beckmoulton.medium.com/strap-ro… | 0 | Jan 01, 2026 00:01 | active | |
STRAP: Robot Sub-Trajectory Retrieval for Augmented Policy Learning — Enhancing Robot Skills with Smarter Data Description: STRAP: Robot Sub-Trajectory Retrieval for Augmented Policy Learning — Enhancing Robot Skills with Smarter Data In the rapidly evolving landscape of robotics, ... Content: |
|||||
| Agility Receives NVIDIA Investment; Ability Enterprise Pursues Robot Vision Module Partnership | Industry Focus | Industry | Economic Daily News | https://money.udn.com/money/story/5612/… | 1 | Dec 31, 2025 16:00 | active | |
Agility Receives NVIDIA Investment; Ability Enterprise Pursues Robot Vision Module Partnership | Industry Focus | Industry | Economic Daily News URL: https://money.udn.com/money/story/5612/8998264 Description: US humanoid robot company Agility Robotics announced that it received investment from NVIDIA in its Series C round; the 能率 Group's 能率亞洲, … Content:
US humanoid robot company Agility Robotics announced that it has received investment from NVIDIA in its Series C round. The 能率 Group's subsidiaries 能率亞洲 and Ability Enterprise (佳能) had earlier jointly invested NT$300 million in Agility Robotics. Agility's founder will visit Taiwan on the 24th, with Ability Enterprise as the first stop, and the two sides are expected to cooperate on developing and mass-producing robot vision modules. Agility Robotics stated in a press release that its relationship with NVIDIA goes beyond technical collaboration: it also received investment from NVentures (NVIDIA's venture capital arm) in the Series C round, and it will continue working with NVIDIA on next-generation AI-accelerated robotics. NVIDIA's investment in Agility Robotics is rumored to be roughly US$20-30 million, though this has not been confirmed. Agility Robotics is a bipedal Mobile Manipulation Robot (MMR) company that created Digit, the first commercially deployed humanoid robot product worldwide, with customers including logistics giant GXO Logistics and e-commerce leader Amazon. The 能率 Group has been actively expanding into robotics: its subsidiary 能率亞洲 previously announced that, together with Ability Enterprise and other affiliates, it signed an agreement to invest about US$10 million (roughly NT$300 million) in Agility Robotics, and beyond the investment, Ability Enterprise is actively pursuing joint development and manufacturing of Agility Robotics' robot vision modules. In addition, Agility Robotics founder Jonathan Hurst will visit Taiwan on the 24th, with Ability Enterprise as his first stop; the two parties are expected to discuss robot vision modules. With 2025 regarded as the first year of humanoid robots, the 能率 Group previously announced that subsidiaries including 能率創新 and Ability Enterprise jointly invested in US robotics company Mantis Robotics; Ability Enterprise has also received sensor orders from Mantis Robotics, initially about 300 sets, expected to ship in Q4, and 能率創新 may assemble robot arms on a contract basis. Ability Enterprise further noted that its robot order visibility extends into the second half of 2026, that a humanoid robot carries at least eight camera lenses, and that vision modules command a higher average selling price (ASP); it estimates that revenue from new products such as robot vision and drones will reach 5-10% of total revenue in 2026 and challenge 20% in 2027, a strong growth engine for future operations. Agility Robotics emphasized that its goal is to build robots that can move, work, and adapt alongside humans in dynamic environments. Over the years, NVIDIA has helped Agility Robotics achieve breakthroughs in AI and autonomy for bipedal humanoid robots; through NVIDIA AI infrastructure and development frameworks, it can train powerful models and process sensor data in real time. Agility Robotics noted that as robotics enters its next phase, with large-scale deployment in warehouses, factories, homes, hospitals, hotels, offices, and retail stores, NVIDIA's technology platform provides the computing performance and flexibility required.
Images (1):
|
|||||
| Amazon Tests ‘Down-to-Earth’ Robotics Amid $1.6 Billion Boom | https://www.pymnts.com/technology/2024/… | 1 | Dec 31, 2025 16:00 | active | |
Amazon Tests ‘Down-to-Earth’ Robotics Amid $1.6 Billion Boom URL: https://www.pymnts.com/technology/2024/amazon-tests-down-to-earth-robotics-amid-1-6-billion-boom/ Content:
Amazon said last week that its Innovation Fund is ramping up its investment in robotics. And as a Monday (March 4) report by Bloomberg News notes, the company already has robots working for it, with a humanoid device called Digit moving bins in one of its warehouses. The Bloomberg report focuses on Digit's maker, a company called Agility Robotics, saying it has "down-to-earth" ambitions at a time when other robotics startups paint their technology as achieving science-fiction-worthy feats. Agility, on the other hand, plans to build 10,000 robots a year for warehouses and storerooms worldwide, with the company taking part in an investment boom that has drawn $1.6 billion in venture capital in the last five years, the report said. "The humanoid robot is as close to a given as I could imagine from a technology," Adrian Stoch, chief automation officer at GXO Logistics, told Bloomberg. His company has tested Digit at one of its warehouses, and Stoch praised the robot's potential flexibility, envisioning bots unloading trucks during the night shift so boxes are ready for human employees on the day shift, while the machines take on other duties. The report came the same day that Agility announced the hiring of a new CEO, Microsoft and Qualcomm veteran Peggy Johnson. "In a field cluttered with 'demo-ware' and hype, Agility stands apart for having resolutely, steadily, and remarkably made a human-centric robot that actually works — and in demanding customer environments," Johnson said in a news release covering her appointment. Meanwhile, PYMNTS wrote last week about the way advances in artificial intelligence (AI) are leading to robots with better features and more effective human interaction capabilities. For example, Figure AI has reportedly raised $675 million to develop AI-powered humanoid robots, a project backed by investors like Jeff Bezos' Explore Investments, as well as technology firms such as Microsoft, Amazon, Nvidia, OpenAI and Intel. "AI can better enable robots to better understand their environments, allowing them to better detect objects and people they come across," Sarah Sebo, an assistant professor of computer science at the University of Chicago, where she directs the Human-Robot Interaction (HRI) Lab, told PYMNTS in an interview. "AI can allow robots to generate language to more fluidly communicate with people and respond more intelligently to human speech. Finally, AI can help robots to adapt and learn to perform tasks better over time by receiving feedback, making it more likely that the robot behaves in ways that receive positive feedback and less likely to behave in ways that receive negative feedback."
Images (1):
|
|||||
| Robotics Stocks: 3 OTC Picks That Can Be Good Long-Term … | https://www.investing.com/analysis/robo… | 1 | Dec 31, 2025 16:00 | active | |
Robotics Stocks: 3 OTC Picks That Can Be Good Long-Term Holds | Investing.com Description: Market analysis covering ABB Ltd, Amazon.com Inc, and Advanced Micro Devices Inc, on Investing.com Content:
Within the last year, a major milestone was crossed in robotics agility. From Unitree Robotics to Boston Dynamics, humanoid robots have now outpaced many sci-fi movies featuring fictional CGI robotics. Combined with AI agents that give life to agile robotic platforms, fiction is materializing in real time. Citi Global Insights analysts forecast the humanoid robotics industry to scale up to $7 trillion by 2050. Elon Musk is even more optimistic, having noted in mid-2024 that Optimus could eventually generate $10 trillion in revenue. As with EVs, it is likely that China will take the bulk of this growth; after all, China is already ahead of the game, having outpaced both Japan and Germany in industrial robot density, according to the International Federation of Robotics (IFR) report. But outside of Tesla (NASDAQ:TSLA), which firms should investors consider for the AI and robotics boom? In 2021, Hyundai Motor Co (OTC:HYMTF) acquired Boston Dynamics from Japan's SoftBank (TYO:9984) for $880 million. To this day, Boston Dynamics remains the envy of the world when it comes to cutting-edge humanoid robots. In late March, Hyundai announced a $21 billion investment in the US between 2025 and 2028, boosting domestic supply chains and car manufacturing. Last week, the Trump administration amended the President's previous executive order (EO) on 25% vehicle tariffs, enabling up to 15% credit if the cars are US-assembled. In short, Hyundai is positioning itself to be a major player in robotics and EVs domestically, from supplying vehicles to Waymo to scaling up Boston Dynamics' envelope-pushing work. Year-to-date, Hyundai stock is down 1.2%, currently priced at $51 per share. According to WSJ, analyst ratings for Hyundai shares are overwhelmingly in the "buy" zone with 20 analysts, while zero analysts recommend selling. After merging with Vitesco in October 2024, Germany's Schaeffler AG (ETR:SHA0) has become the leading provider of components in the advanced field of motion technology. Investors may have already seen this application in the humanoid robot Digit, built by privately held Agility Robotics. In early March, Agility Robotics and Schaeffler Group announced a strategic partnership, giving investors indirect exposure to Agility Robotics. So far, Agility Robotics appears to be at the forefront of deploying Mobile Manipulation Robots (MMRs) at scale. GXO Logistics, with over 1,000 warehouse sites, is the first to deploy Digit as part of the emerging Robots-as-a-Service (RaaS) business model. According to MarketsandMarkets, the RaaS sector is heading for a CAGR of 17.4% between 2023 and 2028. Operating alongside human workers, Digits will aid in repetitive tasks, with more expansion plans ahead. In addition to Digit, GXO Logistics is also looking into Apptronik's Apollo humanoid robot, but that company is also privately held. In April's pre-close call for Q1 2025 earnings, scheduled for May 7th, Schaeffler AG reported €5.9 million in sales in Q4 '24, slightly lower than €6.1 million in Q1 '24. The company expects strong sales growth in e-mobility (electric drives, controls, mechatronics) and vehicle lifetime solutions (repair and maintenance). Even robots are made by robots, more specifically by industrial robots.
This is where the Swiss-Swedish ABB (ST:ABB) Group comes in, holding a 13.5% share of the industrial robot market. The company is constantly expanding its list of partnerships, even sponsoring MassRobotics, the US-centric robotics innovation hub, which is annually supported by AMD (NASDAQ:AMD), Amazon (NASDAQ:AMZN), and other large companies. Most recently, in late April, the company announced its plan to spin off its ABB Robotics division as of Q2 2026, listing it as a separate company. During 2024, ABB Robotics contributed 7% of ABB Group's revenue, delivering $2.3 billion. If the plan goes ahead with shareholder approval, shareholders will receive "ABB Robotics" shares as a dividend in kind, proportional to their existing stock stake.

In the meantime, ABB's robots continue to be deployed across a wide range of businesses. The fast-food sector seems a particularly good match, as operators look to offset minimum-wage increases. BurgerBots is the most recent such restaurant, opened in Los Gatos, California. If BurgerBots takes off, it is likely that global fast-food chains, with their much deeper pockets, will follow, and ABB Group will be in the middle of it. Year-to-date, ABB stock is up 2.8% to a current price of $55 per share. Per WSJ, the average ABB price target is $58.79, while the top is $78.78 per share. If the ABB Robotics split goes through, shareholders could see a substantial split premium not accounted for in these forecasts.

*** Neither the author, Tim Fries, nor this website, The Tokenist, provides financial advice. Please consult our website policy prior to making financial decisions. This article was originally published on The Tokenist.
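A quick sanity check on the growth figure cited above: a 17.4% CAGR over the five years from 2023 to 2028 compounds to roughly a 2.2x expansion. A minimal sketch of the arithmetic; the 17.4% rate and the date range come from the article, while the example base market size is a placeholder, not a figure from the article:

```python
# Compound annual growth rate (CAGR) sanity check for the RaaS figure above.
# The 17.4% rate and the 2023-2028 window come from the article; any base
# market size used with it is a placeholder for illustration only.

def project(base: float, cagr: float, years: int) -> float:
    """Compound `base` forward at `cagr` for `years` years."""
    return base * (1 + cagr) ** years

growth_factor = project(1.0, 0.174, 2028 - 2023)
print(f"Total growth over 5 years at 17.4% CAGR: {growth_factor:.2f}x")
# -> roughly 2.23x: a hypothetical $1.5B RaaS market in 2023 would imply ~$3.3B by 2028.
```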
Images (1):
|
|||||
| Agility Wins NVIDIA Investment, Abico Group Stands to Benefit | Industry Hot Topics | Industry | Economic Daily News | https://money.udn.com/money/story/5612/… | 1 | Dec 31, 2025 16:00 | active | |
Agility Wins NVIDIA Investment, Abico Group Stands to Benefit | Industry Hot Topics | Industry | Economic Daily NewsURL: https://money.udn.com/money/story/5612/8999292 Description: US AI robotics firm Agility Robotics announced on the 11th that it won NVIDIA investment in its Series C round, and Taiwan's Abico Group (5… Content:
US AI robotics firm Agility Robotics announced on the 11th that it received investment from NVIDIA in its Series C round. Taiwan's Abico Group (5392) subsidiaries Ability Enterprise (2374) and Abico Asia had earlier invested about US$10 million (roughly NT$300 million) in Agility Robotics, so the group may yet ally with NVIDIA and Agility Robotics in a push into the AI robotics market.

Agility Robotics was founded in 2015 and is headquartered in Albany, Oregon. The company grew out of an Oregon State University laboratory known for its bipedal robot research, won backing from Amazon's venture arm in 2022, and this year took on both Abico Group and NVIDIA as shareholders.

Agility Robotics' main products are bipedal walking robots, currently the Cassie and Digit models; its chief competitor in the field is Tesla's Optimus robot program. Industry watchers note that, unlike other robotics startups, Agility Robotics has built products closest to the AI humanoid robots Jensen Huang has described, and with AI heavyweights behind it the company is expected to further expand its share of the AI humanoid market.

Agility Robotics confirmed in a press release that its relationship with NVIDIA goes beyond technical cooperation: it also received investment from NVentures (NVIDIA's venture capital arm) in the Series C round, and it will continue working with NVIDIA on next-generation AI-accelerated robotics.

As for the size of NVIDIA's investment, Agility Robotics did not disclose a figure in the press release; outside estimates put it at US$20-30 million, though neither side has confirmed this. Agility Robotics' current customers include logistics giant GXO Logistics and e-commerce leader Amazon.

Agility Robotics stressed that its goal is to build AI humanoid robots that can move and work alongside humans in dynamic environments. NVIDIA has supported the company's bipedal humanoid development for years, and with NVIDIA's AI infrastructure and development frameworks the technology is set to enter its next stage, with large-scale deployment of AI humanoids expected in factories, homes, hospitals, hotels, and offices.

On the Abico side, after the group announced its US$10 million investment in Agility Robotics this year, subsidiary Ability Enterprise has been actively pursuing joint development and manufacturing of robot vision modules with Agility Robotics. Agility Robotics' founder is scheduled to visit Taiwan on the 24th, with Ability Enterprise as his first corporate stop, and further progress on the partnership is expected to be announced.

Abico Group has moved repeatedly in the AI robotics market this year. Besides Agility Robotics, it earlier invested in another US robotics company, Mantis Robotics, from which Ability Enterprise won a sensor order of roughly 300 units initially, with shipments expected to begin in Q4; Abico is also tipped to contract-manufacture robot arms for Mantis Robotics.

Ability Enterprise says its robot-related order visibility now extends into the second half of 2026. A humanoid robot currently requires at least eight camera lenses, and vision modules carry higher unit prices, contributing significantly to revenue and profit; the company expects robots and drones to go from zero to 10% of revenue next year and to grow to 20% in 2027.
Images (1):
|
|||||
| Agility Robotics Co-founder: We Originally Avoided Building Humanoid Robots | Global Views Monthly | https://www.gvm.com.tw/article/125528 | 1 | Dec 31, 2025 16:00 | active | |
Agility Robotics Co-founder: We Originally Avoided Building Humanoid Robots | Global Views MonthlyURL: https://www.gvm.com.tw/article/125528 Description: US humanoid robot unicorn Agility Robotics, a close NVIDIA collaborator backed by Taiwan's Abico Group, recently visited Taiwan, adding fuel to the AI robotics boom. What does co-founder Jonathan Hurst, who has spent decades in robotics research, make of the state of intelligent machines? And what advice does he have for humanoid newcomers? Content:
By Fu Wan-chi (傅莞淇), 2025-11-03

US humanoid robot unicorn Agility Robotics, a close NVIDIA collaborator backed by Taiwan's Abico Group, recently visited Taiwan, adding fuel to the AI robotics boom. What does co-founder Jonathan Hurst, who has spent decades in robotics research, make of the state of intelligent machines? And what advice does he have for humanoid newcomers?

In the "physical AI" wave that NVIDIA has led and fueled, humanoid robots are a major focus. Jonathan Hurst, co-founder of Agility Robotics, the first humanoid supplier to sign a long-term commercial contract, recently visited Taiwan. The trip drew close attention from Taiwanese manufacturers and lifted the share prices of investor Abico and its Emerging Stock Board-listed affiliate Abico Asia.

With so many companies looking for an entry point into the robot supply chain, what does Hurst, after decades in the field, make of the boom of recent years? And as a leading player in the humanoid market, why does he repeatedly stress that he originally "didn't want to build a humanoid"?

"A lot of people think that if you make a robot human-shaped and combine it with AI, it can do everything a human can do," Hurst said. "But it can't."

After large-language-model chatbots made frontier AI a household topic, expectations extended into other domains. Agents that perform digital tasks no longer satisfy users; giving such a clever AI model a body, so it can create value in the physical world, seems the natural next step.

But AI robotics, which demands tight integration of software and hardware, is far harder than pure software. Hurst points out that robot motion data is not as easy to obtain as internet text and images. Collecting data through human teleoperation suits only limited use cases, such as low-force tasks like folding napkins; for running, whole-body control, and the like, teleoperation alone cannot reach good results.

Perhaps most obviously, however capable an AI model is, it needs a suitable body to act in the physical world, and a hardware supply chain purpose-built for frontier AI models barely exists yet; requirements and specifications are still taking shape. For all these reasons, the general-purpose robot the public imagines will not be practical in the short term.

Progress in AI robotics may not be as easy or as fast as outsiders imagine, but real breakthroughs have arrived. "We are watching robots complete tasks that used to be impossible," Hurst said.

Robot backflips, for example, were unimaginable showpieces a few years ago. Traditional robot motion was programmed by engineers. A move as complex as a backflip requires many joints to output precise forces at the right angles within a very short time; if any one parameter is off, the robot crashes to the ground. Nor was there a clear economic incentive for engineers to study such motion control.

But with techniques such as reinforcement learning (RL) in simulation, a robot can be guided through an enormous volume of trial and error to discover how to control its body through a backflip. This learning-based approach to training has opened new possibilities for robot control.

Add AI models with multimodal understanding of text, images, and speech, which give robots finer semantic and environmental comprehension and more natural human-robot interaction, and the "physical intelligence" that many teams are now trying to crack becomes the key remaining piece for the next level of robot capability.

Hence Hurst's view that physical AI is "not just hype." "For hundreds of years, people have longed for machines that can do all the work. Suddenly, people are realizing that maybe it isn't the distant future, that it might not even take fifty years, that it may be close. So I think the excitement is justified," he said.

The boom has also brought more money, talent, and attention into robotics, a shot in the arm for overall progress, and Hurst expects many bottlenecks to break sooner. The Conference on Robot Learning (CoRL) held in Seoul at the end of September had to close registration early because it was oversubscribed. "That has never happened before," Hurst noted.

[Photo: Hurst believes physical AI is "not just hype." Photo by Huang Ching-hui]

Cassie, the robot that first made Agility Robotics famous, was a pioneer of this learning-based training strategy. The neural network controlling Cassie was trained on roughly one year of simulated experience compressed into about one week of real time.

Cassie, which has only two legs and no upper body, set the Guinness World Record for a bipedal robot running 100 meters in 2022. Starting from a standing posture and crossing the finish line without falling, it completed the run in 24.73 seconds, still Hurst's favorite motion record. "I think it looks very, very natural," he said proudly.

[Video: Cassie starts from a standing posture and crosses the finish line without falling.]

Agility Robotics spun out of Oregon State University's Dynamic Robotics Laboratory in 2015; its core technology is the dynamic balance and locomotion of bipedal robots. Hurst, who holds a robotics PhD from Carnegie Mellon University, specializes in biomechanics, and his research credo is to first understand the principles of animal locomotion and then reproduce them with motors and other mechanical hardware. That explains why the gaits of Cassie and Digit look so "natural" to human eyes.

In developing robots, Hurst stresses, the team's approach is "bio-inspired" rather than "biomimetic." "We want to understand the principles of human or animal movement and then apply them to machines," he explained, "not copy the animal directly."

Agility Robotics has long wanted to build robots that can go anywhere humans can go, coexisting and collaborating with people in the environments humans built for themselves. Bipedal walking is the foundation of that vision, and a human-like appearance has social value too.

Digit's head, for example, not only houses cameras and lidar but also helps with human interaction. Signals such as blinking let people nearby easily read the robot's intentions, enabling natural human-robot collaboration without specialized training.

Yet the evolution from the legs-only Cassie to Digit, with its torso and arms, was in fact driven by engineering calculation rather than a simple, intuitive preference for the humanoid form.

[Photo: Digit's head can house cameras and lidar and also aids human interaction. Photo courtesy of Agility Robotics]

Hurst explains that one of Cassie's big early problems was weak yaw control, which made the robot prone to rotating horizontally and falling as it stepped. Each time the robot swings one leg forward, a reaction torque is produced; Cassie's feet were too small, and friction was insufficient to resist that torque, so the robot would spin in place and topple sideways.

To solve this, the team considered installing a reaction wheel or gyroscope to cancel the torque, but such a device does nothing except spin, while adding weight and power consumption.

A better approach seemed to be adding a movable mass to apply a counter-torque, say, a tail, whose internal space could also house other components. And for controlling yaw, the better place for a tail is not straight behind but on either side of the axis of symmetry, where it can alternately cancel the reaction torque of each leg.

Tails protruding beside the legs, though, would hit surrounding objects as the robot moved. By shifting the two tails upward, the robot also gains a torso that can help initiate a step, leaning forward to shift the center of mass and let gravity assist locomotion. In the end, two tails mounted on either side of a torso are effectively a pair of arms.

Giving the robot arms also improves its ability to manipulate objects and to catch itself when it falls. "It's like three completely independent lines of reasoning all leading you to the same place: arms attached to a torso," Hurst said.

"We tried mounting the arms in other positions, but that [the shoulder] is the right answer," Hurst said. "So in the end, it looks like a humanoid."

Hurst explains that he deliberately avoided building a humanoid at first to keep himself out of the trap of assuming in advance that the human form is the ideal solution, which would have cost him the chance to explore other possibilities.

"If we build a robot that looks like a human, we want that to be a coincidence," he said. "We want sufficient engineering reasons behind the design. If the final solution happens to be humanoid, we'll take it, but it's not the default goal."

Interestingly, the least human-like part of Digit is about to be reworked to look much more human in the 2026 revision, and here too the reasoning is careful engineering.

Digit has long used a bird-leg design: when the body lowers, what looks like a backward-bending knee is actually an ankle. This helps with walking and running, but it prevents Digit from squatting very low, because the ankle strikes the ground. To lift heavy objects off the floor, Digit also needs to pitch its torso forward and use the hip and knee motors to share the load. The team found that the human leg layout is better suited to loaded manipulation.

Hurst previewed that beyond updates such as fast charging and swappable hands, the fifth-generation Digit will look quite different, though he hopes it will be just as likable.

On his first visit to Taiwan, Hurst offered concrete observations about the hardware supply chain. Taiwan, he noted, has a cluster of small and medium-sized component makers that know manufacturing processes deeply and have customization experience, a capability that excites R&D teams developing next-generation robots.

From a roboticist's perspective, Hurst added, making components and modules that are high quality and perform reliably may be the hardest part of the job; by contrast, assembling a Digit takes less than a week and accounts for under 5% of total cost. If Taiwan sustains a thriving component ecosystem, he believes it could also produce whole-robot suppliers.
Hurst's advice to robotics newcomers follows from this: build a product, not just a humanoid robot. Unitree, he noted by way of example, can cheaply produce robots that dance and box, but they are not suited to moving heavy loads in a warehouse for hours on end. Even though both look human-like, the H1 and Digit do not compete in the same market.

The better approach, he stressed, is to build solutions that deliver business value to customers, rather than making humanoid robots for their own sake.

Yu Wan-ju (余宛如), current general manager of ITIC (Industrial Technology Investment Corporation, 創新工業技術移轉股份有限公司), which invested in Agility Robotics as early as 2020, was a key driver behind Hurst's visit. Pitching Taiwanese manufacturers as strategic hardware partners at the very moment Agility hopes to scale up production was a big part of what persuaded Hurst to come in person.

Yu observes that beyond competitive pricing and good English communication, Taiwanese manufacturers' soft strength in production management can be a distinguishing asset. Facing the new wave of robot business, companies are eager to layer fresh market differentiation on top of their existing advantages.

[Photo: ITIC general manager Yu Wan-ju (left) and Agility Robotics co-founder Jonathan Hurst (right). Photo by Huang Ching-hui]

The cooperation between Agility Robotics and Abico Group can be seen as a precedent. Amid the trend toward supply-chain diversification, how Taiwanese manufacturers pick the right problems and deliver reliable solutions will decide who stands out in the next generation of the AI robot hardware race.
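One concrete number buried in the interview above: Cassie's control network absorbed roughly a year of simulated experience in about a week of wall-clock time. The implied simulator throughput is easy to work out; a minimal sketch, where the one-year and one-week figures come from the interview and the parallel-environment numbers are purely illustrative:

```python
# Implied simulation throughput for Cassie's training, per the interview above:
# ~1 year of simulated experience gathered in ~1 week of wall-clock time.

SIM_SECONDS = 365 * 24 * 3600   # one simulated year
WALL_SECONDS = 7 * 24 * 3600    # one real week

speedup = SIM_SECONDS / WALL_SECONDS
print(f"Effective real-time factor: ~{speedup:.0f}x")  # ~52x

# In practice such a factor usually comes from running many simulator
# instances in parallel rather than one simulator 52x faster; e.g. 64
# environments each at 0.8x real time give comparable throughput.
print(f"64 parallel envs at 0.8x real time: ~{64 * 0.8:.0f}x")  # ~51x
```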
Images (1):
|
|||||
| Science and Technology - Amazon to Begin Testing Digit, a Humanoid … | https://www.prisonplanet.pl/nauka_i_tec… | 1 | Dec 31, 2025 16:00 | active | |
Science and Technology - Amazon to begin testing Digit, a humanoid robot that could replace human workers in warehouses. - Prison PlanetURL: https://www.prisonplanet.pl/nauka_i_technologia/amazon_rozpocznie_testy,p1191527409 Content:
Amazon is testing humanoid robots over six feet tall in its warehouses, heightening concerns about worker layoffs. The experiment will be conducted at Amazon's BFI1 experimental facility in Sumner, Washington. The robot, known as Digit, is a bipedal robot capable of grasping and lifting items in Amazon warehouses. Digit has two arms, two legs, a blue chest, and two square lights in place of eyes. It can move forward and backward, turn, and crouch. It can reach for, grasp, and lift Amazon's yellow totes. The company says Digit will help warehouse staff with tote recycling, picking up and moving empty totes once all the items inside have been removed. To create Digit, Amazon's robotics subsidiary, Amazon Robotics, partnered with the technology startup Agility Robotics. The Oregon-based startup focuses on building robots for logistics and warehouse companies. Through its Amazon Industrial Innovation Fund, Amazon provided Agility Robotics with $150 million to help create Digit. At the end of the trial period, Amazon plans to deploy the robot "in novel ways in warehouse spaces and corners." "We believe there is a big opportunity to scale a mobile manipulation solution such as Digit, which can work collaboratively with employees," the company added. Link to the original article: LINK
Images (1):
|
|||||
| GXO tests walking Digit robot to automate repetitive warehouse tasks | https://www.dcvelocity.com/articles/593… | 0 | Dec 31, 2025 16:00 | active | |
GXO tests walking Digit robot to automate repetitive warehouse tasksDescription: Logistics provider runs pilot test of 5'9 Content: |
|||||
| MercadoLibre (MELI) to Use Digit Robots in Texas Warehouse | https://finance.yahoo.com/news/mercadol… | 1 | Dec 31, 2025 16:00 | active | |
MercadoLibre (MELI) to Use Digit Robots in Texas WarehouseURL: https://finance.yahoo.com/news/mercadolibre-meli-digit-robots-texas-041351263.html Description: MercadoLibre, Inc. (NASDAQ:MELI) is one of the 14 Most Promising Fintech Stocks to Invest In. On December 10, MercadoLibre, Inc. (NASDAQ:MELI) and Agility Robotics announced a commercial agreement. Agility Robotics is the creator of the leading humanoid robot called Digit. This agreement will bring Agility Robotics’ Digit humanoid robot into MercadoLibre, Inc.’s (NASDAQ:MELI) facility in […] Content:
MercadoLibre, Inc. (NASDAQ:MELI) is one of the 14 Most Promising Fintech Stocks to Invest In. On December 10, MercadoLibre and Agility Robotics announced a commercial agreement. Agility Robotics is the creator of the leading humanoid robot, Digit. The agreement will bring Digit into MercadoLibre's facility in San Antonio, Texas. Initially, Digit will help with tasks that support commerce fulfillment.

MercadoLibre and Agility Robotics also plan to find more use cases where AI-powered humanoids can support and add value to logistics operations in MercadoLibre's warehouses across Latin America. The main goal is to automate jobs that are hard to recruit for, especially those that are extremely repetitive and physically demanding. This could improve safety for the company's workers and reduce labor gaps. By automating these tasks with robots, MercadoLibre hopes to increase productivity and let employees focus on more value-added work.

In other news, on December 5, TipRanks reported that Citi analyst Joao Soares reiterated a Buy rating on MercadoLibre with a price target of $2,500. On December 3, TipRanks reported that BTIG analyst Marvin Fong reaffirmed a Buy rating with a price target of $2,750. MercadoLibre is the leading e-commerce and financial technology company in Latin America, with a presence in 18 countries. Disclosure: None. This article was originally published at Insider Monkey.
Images (1):
|
|||||
| Agility Robotics CEO: Robots only coming for jobs we don't … | https://www.fastcompany.com/91137691/ag… | 1 | Dec 31, 2025 16:00 | active | |
Agility Robotics CEO: Robots only coming for jobs we don't want - Fast CompanyDescription: The Agility Robotics CEO talks humanoids, AR, and startup culture on the latest episode of the ‘Rapid Response’ podcast. Content:
06-07-2024 | RAPID RESPONSE The Agility Robotics CEO talks humanoids, AR, and startup culture on the latest episode of the 'Rapid Response' podcast. [Photo: Agility Robotics] BY Robert Safian

In the era of AI, are Hollywood's threatening sci-fi robots poised to come to life? Peggy Johnson, CEO of Agility Robotics, separates hype from reality, explaining how Agility's humanoid robot, Digit, is entering the industrial workforce today. The former CEO of augmented reality startup Magic Leap, Johnson shares what makes robot tech more tangible than AR and explores the sensitive relationship between robotics and human-held jobs. This is an abridged transcript of an interview from Rapid Response, hosted by the former editor-in-chief of Fast Company Bob Safian. From the team behind the Masters of Scale podcast, Rapid Response features candid conversations with today's top business leaders navigating real-time challenges.

You became CEO of Magic Leap in the heat of the pandemic. Now you've taken on a new CEO role again at Agility. Can you catch us up on how you got here?

At Magic Leap, we successfully turned things around. We repointed the company towards enterprise. I felt like I'd done what I came there to do. I think augmented reality is still very early days. It will happen. Even looking at the Apple Vision Pro, it's going to take some time. And while I love the game-changing aspect of augmented reality, the trajectory is on a slower slope than robotics. And when I saw the opportunity at Agility, it appeared they had product-market fit and a huge demand for what they could do—which is putting robots into many open jobs that people don't want. I felt like that's the spot for me. So I made the jump and I'm super excited about the space. Humanoids are kind of having a moment right now. There's this huge tailwind of open jobs that are kind of repetitive, dull, and that people don't want. These jobs have doubled over the past five years, from about 600,000 open jobs to over a million. When you can't find humans to do these jobs, humanoids can play that role.

There are all kinds of robots that are used in business — a lot of them look like motorized carts or things like that. But Agility has this humanlike robot that you call Digit. So why is there this sudden burst of attention on humanlike robots?

In warehouses, automation has been around for over a decade. But there are still pockets with humans moving a box from a conveyor belt over to an automated put wall. You can think of that as the Achilles' heel of all of that fantastic automation. Humanoids can work for long hours, they don't get hurt, they don't get emotional, they don't have anything that you have to worry about. When humanoids are deployed, the job for a human is to manage the fleet and the interaction between the robots and the other automated facilities. And that is something that looks more like a career.

I've seen videos of Digit. Even the videos are a little spooky. What kind of relationships do the human workers have with the robots?

We thought a lot about that. For instance, we have a head on Digit. And you can do different things with the eyes, like sparkles and hearts. There is a positive reaction with humans. There is a connection there because of that head and the fact that there are eyes. But mostly, humans and humanoids work separately.
The idea of what's called "collaborative robots" is a little bit further down the line. There's very high safety standards.

Is Digit a "he" . . . A "she"?

I refer to it as an "it" oftentimes. But people do want to put characterization on the humanoid robot. We let them do what they'd like.

You're leaning into this industrial use. Is there a vision for Digit to be in retail environments, interacting with customers? Or is that sort of a different track?

We're only starting with these industrial applications because the need is so high. But this is a multipurpose humanoid. So it's meant to go off into those areas. In fact, we're already talking to customers in retail and transportation. You've seen a few of the consumer robots come and go. Households are pretty messy places. Things inside of a warehouse are much more structured. So the need is greater there. And that's where we're going to focus. And that's where the revenue is.

How are you thinking about using AI to change Digit?

We've been using AI in the form of reinforcement learning. So it's been part of how we've developed Digit. The big opportunity that we see going forward is this idea of Digit's semantic intelligence. So, giving Digit unstructured commands. The other day, we said, "go pick up all this trash" and Digit looked around, picked up the trash, and put it in the right bins—recycle, paper, or landfill. But going forward, AI is going to help us teach Digit new skills much more quickly. You don't want to use AI fully to control the robot right now. 'Cause as we know, it's not always perfect. And when you have a robot that weighs 160 pounds or so, and has all the torque to lift up heavy things, you want to be controlling that with a known platform. And so, until we're much further along with AI, we aren't going to make that swap.

You were a first time CEO at Magic Leap. What did you learn from that experience that you're applying now? Like, what are you doing differently?

When I first stepped in at Magic Leap, they were chasing a lot of markets and we narrowed that down. That's the same here—we get a lot of attention from companies who have innovation departments, for instance. I respect the innovation departments, but if they don't have an intention to deploy and a problem to solve, we don't focus there. We pick up and move on. We don't want to be an interesting demo for someone's boardroom, frankly. This is not demoware. We're not doing backflips and making coffee.

When you left Microsoft, it was sort of just starting to come out of hibernation in a certain way. And now thanks to co-pilot and the arrangement with OpenAI, things are just cooking over there. Do you ever think about the decision to leave and wonder, "what if I'd stayed?"

Actually, never. I loved my time there. I loved working for Satya Nadella. I learned so much from him. He's clearly an iconic leader, but I remember very clearly the day I made up my mind. It was actually only a few weeks into COVID lockdown at the time. My job in particular slowed down. It was running biz dev, very outwardly facing. I got on planes all the time—that was not available anymore. So I really sat and thought about where I was in my career, and I had always wanted to be a CEO. And I think like a lot of people, you keep saying, "well, if I just had a little more experience here, a little more experience there, then I'll be ready." But I started looking. Now that I've had the experience at Magic Leap and Agility, I could never see myself going back. I like the startup environment.
I like the pace. I like the freedom and the flexibility to make decisions. The bigger the company is, the harder it is to get to a yes or no. And so I love the fast-paced environment at startups. I'm at the point in my career where I can do anything I want. And this is what I want to do. I love it.

ABOUT THE AUTHOR Robert Safian is the editor and managing director of The Flux Group. From 2007 through 2017, Safian oversaw Fast Company's print, digital and live-events content, as well as its brand management and business operations.
Images (1):
|
|||||
| Humanoid Global Announces Commitment to a Strategic Investment in Agility … | https://www.manilatimes.net/2025/09/16/… | 0 | Dec 31, 2025 16:00 | active | |
Humanoid Global Announces Commitment to a Strategic Investment in Agility Robotics, Developers of One of the World’s First Commercially Deployed Humanoid RobotDescription: - NOT FOR DISSEMINATION IN THE UNITED STATES OR THROUGH U.S. NEWSWIRE SERVICES - Content: |
|||||
| Agility Robotics Wins NVIDIA Investment; Ability Enterprise Pursues Robot Vision Module Cooperation | https://tw.stock.yahoo.com/news/agility… | 1 | Dec 31, 2025 16:00 | active | |
Agility Robotics Wins NVIDIA Investment; Ability Enterprise Pursues Robot Vision Module CooperationDescription: (CNA reporter Chiang Ming-yen, Taipei, Sept. 11, 2025) US humanoid robot company Agility Robotics announced that it received investment from NVIDIA in its Series C round. Abico Group subsidiaries Abico Asia (7777) and Ability Enterprise (2374) had earlier jointly invested NT$300 million in Agility Robotics. Agility's founder will visit Taiwan on the 24th, with his first stop at Ability Enterprise, and the two sides may cooperate on developing and mass-producing robot vision modules. Agility Robotics announced in a press release that its relationship with NVIDIA is more than a technical partnership: it also received investment from NVentures (NVIDIA's venture capital arm) in the Series C round, and will continue working with NVIDIA on next-generation AI-accelerated robotics. NVIDIA's investment is rumored to be about US$20-30 million, but this has not been confirmed. Agility Robotics is a bipedal Mobile Manipulation Robot (MMR) company that created Digit, the first commercially deployed humanoid robot; customers include logistics giant GXO Logistics and e-commerce leader Amazon. Abico Group is actively expanding in robotics; subsidiary Abico Asia previously announced that, together with Ability Enterprise and other affiliates, it signed… Content:
(CNA reporter Chiang Ming-yen, Taipei, Sept. 11, 2025) US humanoid robot company Agility Robotics announced that it has received investment from NVIDIA in its Series C round. Abico Group subsidiaries Abico Asia (7777) and Ability Enterprise (2374) had earlier jointly invested NT$300 million in Agility Robotics. Agility's founder will visit Taiwan on the 24th, with his first stop arranged at Ability Enterprise, and the two sides may cooperate on developing and mass-producing robot vision modules.

Agility Robotics announced in a press release that its relationship with NVIDIA is more than a technical partnership: it also received investment from NVentures (NVIDIA's venture capital arm) in the Series C round, and it will continue working with NVIDIA to develop next-generation AI-accelerated robotics.

NVIDIA's investment is rumored to be about US$20-30 million, but this has not been confirmed.

Agility Robotics is a bipedal Mobile Manipulation Robot (MMR) company that created Digit, the world's first commercially deployed humanoid robot; its customers include logistics giant GXO Logistics and e-commerce leader Amazon.

Abico Group has been building out its robotics portfolio. Subsidiary Abico Asia previously announced that, together with Ability Enterprise and other affiliates, it had signed an agreement to invest about US$10 million (roughly NT$300 million) in Agility Robotics; beyond the investment, Ability Enterprise is actively pursuing joint development and manufacturing of Agility Robotics' robot vision modules.

In addition, Agility Robotics founder Jonathan Hurst will visit Taiwan on the 24th, with his first stop at Ability Enterprise; the two sides are expected to discuss robot vision modules.

2025 is widely seen as year one for humanoid robots. Abico Group previously announced that subsidiaries including Abico Innovation (5392) and Ability Enterprise jointly invested in US robotics company Mantis Robotics. Ability Enterprise has also won sensor orders from Mantis Robotics, initially about 300 units with shipment expected in Q4, and Abico Innovation may assemble Mantis Robotics' robot arms on contract.

Ability Enterprise further noted that its robot order visibility extends into the second half of 2026, and that a humanoid robot carries at least eight camera lenses, with vision modules commanding a higher average selling price (ASP). It estimates that revenue from new products such as robot vision and drones will reach 5-10% of sales in 2026 and challenge 20% in 2027, adding a strong growth engine for future operations.

Agility Robotics emphasized that its goal is to build robots that can move, work, and adapt alongside humans in dynamic environments. For years NVIDIA has helped Agility Robotics achieve breakthroughs in AI and autonomy for bipedal humanoids; NVIDIA's AI infrastructure and development frameworks allow it to train powerful models and process sensor data in real time.

Agility Robotics noted that as robotics enters its next phase, with large-scale deployment in warehouses, factories, homes, hospitals, hotels, offices, and retail stores, NVIDIA's technology platforms provide the necessary compute performance and flexibility. (Editor: Chang Liang-chih)
Images (1):
|
|||||
| Agility Robotics Visits Ability Enterprise, Upbeat on Taiwan's Humanoid Robot Industry - Liberty Times Net Finance | https://ec.ltn.com.tw/article/breakingn… | 1 | Dec 31, 2025 16:00 | active | |
Agility Robotics Visits Ability Enterprise, Upbeat on Taiwan's Humanoid Robot Industry - Liberty Times Net FinanceURL: https://ec.ltn.com.tw/article/breakingnews/5189440 Description: [Reporter Ou Yu-hsiang, Taipei] Jonathan Hurst, co-founder of US humanoid robot company Agility Robotics, an investee of the Abico Group (5392), today visited Abico's optics subsidiary Ability Enterprise (2374). In an interview, Hurst… Content:
[Photo: Jonathan Hurst (center), co-founder of US humanoid robot company Agility Robotics, visiting Ability Enterprise. Photo by reporter Ou Yu-hsiang]

[Reporter Ou Yu-hsiang, Taipei] Jonathan Hurst, co-founder of US humanoid robot company Agility Robotics, an investee of the Abico Group (5392), today visited Abico's optics subsidiary Ability Enterprise (2374). In an interview, Hurst said humanoid robots will grow to a scale of tens of thousands of units over the next decade, and that Taiwan's industries, including camera modules, sensors, processors, and motors, form a vital link in the supply chain. Taiwanese makers could even design their own humanoid robots for the market, he said, so the industry's prospects are positive.

Beyond Abico Group's investment, Agility Robotics recently also won investment from NVIDIA. Hurst said that because humanoid robots need more advanced AI, compute, batteries, and sensors than existing products, Ability Enterprise, with its strong imaging optics and design capability, is an important partner for Agility Robotics; with NVIDIA now invested, the company will keep expanding into global markets.

Ability Enterprise noted that over the past five to six years it has taken on many AI imaging projects, building up databases that strengthen its multi-optical-axis calibration and improve AI image assistance and recognition for humanoid robots. It holds particular advantages in calibrating wide-angle and fisheye lenses and in optical image recognition, which helps its humanoid robot and drone customers.

On Taiwan's supply chain, Hurst is likewise upbeat: Taiwan's ICT industry and supply chain are mature and occupy an important position globally, and the production capacity for the camera modules, sensors, processors, motors, and drive systems that humanoid robots need is all in Taiwan. Taiwan will therefore be a key link in the humanoid supply chain, and it also has the opportunity to design and launch humanoid robots of its own.

With the global labor force shrinking, Hurst said humanoid robots will expand from logistics and warehousing into manufacturing and retail, and could enter households in ten years. As use cases broaden, the outlook for Agility Robotics' humanoid Digit is positive: dozens of Digits are in commercial use today, the factory's annual capacity is 10,000 units, and Digit V5 will launch at the end of 2026, when it is expected to formally enter mass production.

On the industry's prospects, Hurst said the product still needs a great deal of testing plus scenario-based training and calibration to reach the required level of safety. Agility Robotics' products have so far been studied in depth in only three to five application scenarios; the company hopes to enter 10 to 20 scenarios to gather usage data, calibrate, and improve safety, so that humanoid robots can be deployed more effectively.

Hurst noted that humanoid robots require newly developed, dedicated actuator modules, camera modules, and all kinds of electronic components, so companies need partners and supply-chain support, forming industry alliances; Taiwan is a key link in that global supply chain. Agility Robotics is also continuing to work with NVIDIA on chips for mobile robots, AI for highly dynamic system control, and the safety and compute modules future humanoids will need, while expanding internationally.
Images (1):
|
|||||
| Abico Asia Joins Ability Enterprise in Taking a Stake in US Humanoid Robot Company Agility Robotics | Anue cnyes - Taiwan Stock News | https://news.cnyes.com/news/id/6014127 | 1 | Dec 31, 2025 16:00 | active | |
Abico Asia Joins Ability Enterprise in Taking a Stake in US Humanoid Robot Company Agility Robotics | Anue cnyes - Taiwan Stock NewsURL: https://news.cnyes.com/news/id/6014127 Description: Abico Group subsidiary Abico Asia has signed an agreement to invest in US robotics company Agility Robotics; together with follow-on investments from Ability Enterprise and other affiliates, the total is estimated at US$10 million. Ability Enterprise plans to pursue joint development and manufacturing of Agility Robotics' robot vision modules. Content:
cnyes reporter Chang Chin-fa, Taipei, 2025-06-09 19:27

Abico Group subsidiary Abico Asia (7777-TW) recently signed an agreement to invest in US robotics company Agility Robotics; together with follow-on investments from Ability Enterprise (2374-TW) and other affiliates, the total investment is estimated at US$10 million. Ability Enterprise also plans to pursue joint development and manufacturing of Agility Robotics' robot vision modules.

[Photo: Ability Enterprise president Chang Hsiao-chi. Photo by cnyes reporter Chang Chin-fa]

Abico Group said Agility Robotics is an unprecedented bipedal Mobile Manipulation Robot (MMR) company that created Digit, the world's first commercially deployed humanoid robot. Beyond the investment itself, Ability Enterprise plans to pursue joint development and manufacturing of Agility Robotics' robot vision modules, and it participated in this round as well.

With global labor costs rising and developed countries facing workforce shortages, Abico Group remains bullish on the huge potential of humanoid robots. Agility Robotics has the industry's first and only humanoid robot already commercially deployed in warehouses and manufacturing facilities, delivering immediate benefits to users; it has announced multiple commercial deployments with customers such as Amazon, GXO, and Schaeffler, leading the market.

Abico Group noted that, on the strength of its industry-leading technology, Agility Robotics became an initial partner in NVIDIA's humanoid robot initiative, Project GR00T, announced in 2024, accelerating the commercialization of humanoid robots. Agility Robotics' Digit is already being tested in Amazon warehouses and GXO Logistics facilities, and in 2026 Digit is expected to be upgraded into the world's first humanoid robot able to collaborate safely with humans without safety fencing. This technical breakthrough would make Agility Robotics the first company able to deploy robots at scale in commercial settings such as factories and warehouses, toward its goal of global commercial sales in 2027.

Agility Robotics' existing investors include Silicon Valley venture firms DCVC and Playground Global, global retailer Amazon, and Germany's motion technology company Schaeffler Group. Abico Asia's investment should accelerate the group's entry into humanoid robotics. Going forward, Abico Group will also use its production bases in Taiwan and Japan to help Agility Robotics drive Digit sales and expand its footprint in Asia, bringing greater safety and efficiency to human workplaces.

Agility Robotics was founded in 2015 and is headquartered in Salem, Oregon, with offices in Pittsburgh, Pennsylvania, and San Jose, California. Its mission is to build humanoid robots that assist human work, ultimately freeing people to focus on what makes them human.

Agility's pioneering bipedal Mobile Manipulation Robot (MMR), Digit, is the world's first multi-purpose, human-centric robot truly designed for work. In 2024 the company's robot factory, RoboFab, formally began operations as the world's first facility dedicated to mass production of humanoid robots, eventually expected to produce more than 10,000 Digit robots per year.
Images (1):
|
|||||
| Ability Enterprise Positions Itself in Agility Robotics, Which Wins NVIDIA Investment | https://tw.news.yahoo.com/%E4%BD%B3%E8%… | 1 | Dec 31, 2025 16:00 | active | |
Ability Enterprise Positions Itself in Agility Robotics, Which Wins NVIDIA InvestmentDescription: [NOWnews] Ability Enterprise is actively positioning for humanoid robot opportunities; together with Abico Group subsidiary Abico Asia, it previously invested about US$10 million (roughly NT$300 million) in US startup Agility Robotics, targeting joint development and manufacturing of vision modules. With… Content:
[NOWnews] Ability Enterprise is actively positioning for humanoid robot opportunities. Together with Abico Group subsidiary Abico Asia, it previously invested about US$10 million (roughly NT$300 million) in US startup Agility Robotics, targeting joint development and manufacturing of vision modules. With Agility Robotics announcing that it has received Series C investment from NVIDIA's NVentures, Ability Enterprise is also viewed as a potential key partner.

Agility Robotics founder Jonathan Hurst will visit Taiwan on the 24th, with Ability Enterprise as his first stop; the two sides plan in-depth discussions on humanoid robot vision modules. Ability Enterprise sees strong potential in this market, noting that a humanoid robot needs at least eight camera lenses and that vision modules carry high average selling prices, which should improve its revenue mix. The company estimates that new products, including humanoid robots and drones, will account for 5-10% of revenue in 2026 and challenge 20% in 2027, becoming its main growth driver.

Agility Robotics specializes in bipedal mobile manipulation (MMR) technology and launched Digit, the world's first commercialized humanoid robot, with customers including logistics major GXO Logistics and Amazon. With NVIDIA investing and providing AI compute platform support, Agility Robotics plans to accelerate mass production and real-world deployment, driving demand for supply-chain cooperation.
Images (1):
|
|||||
| Agility Robotics CEO on How Robots Are Getting Paid to … | https://www.businessinsider.com/agility… | 1 | Dec 31, 2025 16:00 | active | |
Agility Robotics CEO on How Robots Are Getting Paid to Fill Labor Gaps - Business InsiderDescription: By mid-2025, the next-generation version of Agility Robotics' Digit robots will be able to safely operate around humans, its CEO said. Content:
Robots are coming for our jobs — at least the repetitive, back-breaking jobs humans increasingly don't want to do. Peggy Johnson, the Silicon Valley veteran who became the chief executive of Agility Robotics earlier this year, told Business Insider that it'd soon be "very normal" for humanoid robots to become coworkers with humans across a variety of workplaces.

Many factories in the US are struggling to recruit workers amid a labor shortage that Deloitte predicted could cost the economy as much as $1 trillion by 2030. In January, there were 622,000 manufacturing jobs that hadn't been filled, according to data from the Bureau of Labor Statistics. Enter the robots. "First in the business-enterprise space because that's where the need is highest. And then, as Digit learns new skills, it'll start to be able to go beyond logistics and manufacturing facilities and eventually, somewhere way down the line, is consumer robots," Johnson said in an interview at the Web Summit tech event in Lisbon earlier this month.

Digit is Agility Robotics' mobile manipulation humanoid robot. It stands at 5-foot-9 and has hands designed to grip and carry objects. Its backward-folding legs allow it to maneuver around a workspace. Digit also has animated LED eyes that act as indicators to its human coworkers to let them know which function it's about to perform next.

This year, Digit became the first humanoid robot to be "paid" for performing a job. Agility Robotics signed a multiyear deal with GXO Logistics for Digit to be deployed in its Spanx womenswear factories, moving boxes known as totes and placing them onto conveyor belts. Agility Robotics charges a monthly fee, similar to a software-as-a-service model, which includes the Digit robot, its work cell, and the robot's operating software. While Agility Robotics hasn't disclosed the exact amount its Digit robots are paid, the company has previously said that GXO is estimated to see a return on its investment within two years, based on the equivalent of a human working at an hourly rate of $30.

Johnson said that any company that requires material handling — be it pharmaceutical or grocery — could make use of a workforce of Digits. "Mobile phones started first in the enterprise space because there was an ROI for a salesperson not to stop and find a phone," Johnson said. "That will happen with robots." Amazon began testing Digit in its warehouse operations last year. Ford is looking at how it can deploy Digit with its autonomous-vehicle technology to create a "last-mile" delivery service. Most recently, Agility Robotics struck a deal with the German automotive and industrial supplier Schaeffler, which also made a minority investment in the company. Agility Robotics has raised about $178 million in investment to date, a spokesperson said.
It competes with the likes of Apptronik, which is working with NASA on humanoid robots, and Boston Dynamics, which has created humanoid robots called Atlas that it says can run and jump over obstacles, as well as perform factory-worker tasks. Agility Robotics' humanoid robots are permitted to work only inside a specific, cordoned-off space separate from human workers. But Johnson said that by mid-2025, the next-generation version of Digit would be able to safely operate around humans. The company is aiming for the new model to be commercially available within 18 to 24 months.

A 2023 Gallup poll found that about one-fifth of US workers surveyed were worried that their jobs would become obsolete because of technology, up from 15% of workers polled in 2021. Johnson said Agility Robotics hadn't had pushback from the likes of workers' unions despite advancements in the number of humanlike tasks Digit can perform. Widespread deployment of humanoid robots is still some way off, however. "I think they also recognize that these are jobs that they haven't been able to fill," Johnson said. "We tend to think of it as augmenting humans and not replacing humans — it's just taking some of the tasks off their plate."

While Digit robots are starting to be tested in some workplaces, Johnson said getting them to perform tasks around the home, like folding laundry, would take a while longer. "A household is a very chaotic environment: At any given moment, a child's ball runs across the room, and dogs run by. There's things that are in the way," Johnson said. "Warehouses are much more disciplined." Johnson said the data gathered by robots working in warehouses would eventually be used to train consumer robots. But she added that she wanted Agility Robotics to focus on demonstrating what its technology can perform today — rather than the concept videos used by some of its competitors. Robotics videos and demos at trade shows and events are often highly choreographed, she said. For instance, Tesla's humanoid Optimus bots at last month's robotaxi event were remotely operated by humans behind the scenes. "The hype, in general, is not great for the industry because people think it's somehow not here and now," Johnson said. "My job is to say, no, it is here and now. Humanoids are deployed right now and are getting paid to do work."

Agility Robotics takes a similarly cautious approach to its application of artificial intelligence, which is deep in the hype stage. Johnson described the company as "AI-agnostic," as it uses various models in reinforcement learning to help fine-tune Digit's leg movements and help it recognize and carry out various tasks. "Many companies in the robotics space think, well, now that AI is here, I can just build a complete AI stack. We think that is very dangerous right now," Johnson said. "The problem is, just asking ChatGPT a question — it doesn't always answer exactly right. Can you imagine if what it's telling it to do is move an arm around and these things are human forms, 5-foot-9, 160 pounds? They have a lot of force."
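The payback claim above (a roughly two-year return for GXO, benchmarked against a $30-per-hour human equivalent) can be made concrete with back-of-envelope arithmetic. In the sketch below, only the $30 rate comes from the article; the duty cycle, monthly fee, and setup cost are hypothetical placeholders chosen to show how the break-even works out:

```python
# Back-of-envelope payback for a robots-as-a-service (RaaS) deployment.
# The $30/hour human-equivalent rate comes from the article; the 16-hour
# duty cycle, monthly fee, and setup cost below are hypothetical assumptions.

HOURLY_RATE = 30.0       # $/hour, human-equivalent (from the article)
HOURS_PER_DAY = 16       # assumed robot duty cycle across two shifts
DAYS_PER_MONTH = 30
MONTHLY_FEE = 10_000.0   # assumed RaaS subscription fee, $/month

labor_value = HOURLY_RATE * HOURS_PER_DAY * DAYS_PER_MONTH  # $14,400/month
net_saving = labor_value - MONTHLY_FEE                      # $4,400/month

print(f"Displaced labor value: ${labor_value:,.0f}/month")
print(f"Net saving at assumed fee: ${net_saving:,.0f}/month")
# Any one-off integration cost divided by the net monthly saving gives the
# payback period; e.g. $100,000 of setup cost recovers in ~23 months,
# in the same ballpark as the ~two-year ROI the article describes.
print(f"Payback on $100k setup: {100_000 / net_saving:.0f} months")
```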
Images (1):
|
|||||
| Agility Robotics’ humanoid robot has its first real job—at a … | https://fortune.com/2024/07/15/agility-… | 1 | Dec 31, 2025 16:00 | active | |
Agility Robotics’ humanoid robot has its first real job—at a Spanx factory | FortuneDescription: CEO Peggy Johnson said the robot can lift up to 50 pounds and is ideally suited for the kind of repetitive, backbreaking work that leads to injuries when done by humans. Content:
Digit, the flagship robot at Oregon-based Agility Robotics, raised its hand to wave at the audience at Fortune Brainstorm Tech in Park City, Utah, as CEO Peggy Johnson explained to the crowd why the robot's knees were, well, backward—like bird legs. "Knees get in the way of picking things up," she explained to Fortune tech reporter Jason Del Rey, pointing out that Digit was designed to work in big warehouses, lifting things up and putting things down.

Now, Digit is putting its backward-knees design, ten years in the making, to good use: The droid recently got hired at its first real job—picking up totes at a Spanx facility in Georgia and putting them onto conveyors. The work is part of a multi-year deal with logistics provider GXO Logistics, and Johnson said the company is already getting monthly revenue from the robot-as-a-service project.

Johnson, who has only been in the CEO role for four months, said that there are about 1.1 million unfilled warehouse jobs in the US that require the repetitive, mundane tasks Digit is doing. "Nobody wants these jobs," she said, adding that repetitively lifting heavy weights leads to workers getting hurt, and ultimately quitting their warehouse jobs. "That's where the injuries come in. That's where the turnover comes in," she said. Warehouse employees who used to do physical work are now becoming the managers of the robots, she added: "They need to be upskilled."

Agility Robotics, which spun out of research at Oregon State University, raised $150 million in a Series B round in April 2022 as the company prepared to deploy Digit in logistics and warehouse environments. Now that Digit has been released into the wild of the warehouse, Johnson said the company is working on a rollout of its next generation of Digit, to come in the fall. Thanks to a factory the company recently built in Salem, Oregon, the company will roll out hundreds of Digit robots, with thousands planned for the following year and an eye towards a goal of 10,000 to meet growing demand.

But it's not just about getting the robots to walk—backward knees and all, Johnson emphasized. Instead, it's about the robot being able to step into an operation's existing workflow. "We need to enter a corporate IT infrastructure and make it work for them," she said.

In April 2024, Agility Robotics confirmed that it laid off a "small number" of employees because of "ongoing efforts to structure the company for success" while ramping up production of Digit. Johnson said that the company is currently raising capital for another round of funding down the line. But at the moment, she said, she is trying to figure out the best workplace for the handful of Digit robots currently available. "We have a lot of interest from automotive, retail grocers," she said. "I'm trying to figure out which direction to go in."

Correction: Agility Robotics' $150 million funding was a Series B in 2022, not a Series C in 2024. The GXO facility where Digit is working is in Georgia, not Connecticut.

Sharon Goldman is an AI reporter at Fortune and co-authors Eye on AI, Fortune's flagship AI newsletter. She has written about digital and enterprise tech for over a decade.
Images (1):
|
|||||
| Agility Robotics is opening a humanoid robot factory | https://www.cnbc.com/2023/09/18/agility… | 1 | Dec 31, 2025 16:00 | active | |
Agility Robotics is opening a humanoid robot factoryURL: https://www.cnbc.com/2023/09/18/agility-robotics-is-opening-a-humanoid-robot-factory-.html Description: A first of its kind factory that will build humanoid robots is set to open in Salem, Oregon Content:
Agility Robotics is wrapping up construction of a factory in Salem, Oregon, where it plans to mass produce its first line of humanoid robots, called Digit. Each robot has two legs and two arms and is engineered to maneuver freely and work alongside humans in warehouses and factories.

The 70,000-square-foot facility, which the company is calling the "RoboFab," is the first of its kind, according to Damion Shelton, co-founder and CEO of Agility Robotics. COO Aindrea Campbell, who was formerly Apple's senior director of iPad operations and an engineering manager at Ford, told CNBC that the facility will have a 10,000-unit annual max capacity when it's fully built out and will employ more than 500 people. For now, though, Agility Robotics is focused on the installation and testing of its first production lines. "It's a really big endeavor, not something where you flick a switch and suddenly turn it on," Campbell said. "There's kind of a ramp-up process. The inflection point today is that we're opening the factory, installing the production lines and starting to grow capacity and scale with something that's never been done before."

Funded by DCVC and Playground Global among venture investors, Agility Robotics beat would-be competitors to the punch, including Tesla with its Optimus initiative, by completing development of production prototype humanoid robots and standing up a factory where it can mass produce them. Shelton told CNBC that his team developed Digit with a human form factor so that the robots can lift, sort and maneuver while staying balanced, and so they could operate in environments where steps or other structures could otherwise limit the use of robotics. The robots are powered with rechargeable lithium ion batteries.

One thing Digit lacks is a five-fingered hand — instead, the robot's hands look more like a claw or mitten. "Human style hands are very complex," Shelton said. "When I see robots that have five fingers, I think, 'Oh, great. Someone built a robot, then they built two more robots onto that robot.' You should have a 'hand' that is no more complex than you need for the job."

Digit can traverse stairs, crouch into tight spaces, unload containers and move materials onto or off of a pallet or a conveyor, then help to sort and divide material onto other pallets, according to Agility. The company plans to put the robots to use transporting materials around its own factory, Campbell said. Agility's preferred partners will be first to receive the robots next year, and the company is only selling — not renting or leasing — the systems in the near term.

Asked if the company is concerned that its technology could "steal jobs" from people, Shelton said he envisions Digit allowing manufacturing and logistics businesses to meet rising demand as recruiting remains a challenge and as many workers retire or opt to leave the industry. Matt Ocko, managing partner at DCVC and an investor in Agility, told CNBC that Digit should "fill millions of unmet roles that human beings don't want." At the same time, he emphasized, Agility Robotics has designed its humanoid robots to work safely and autonomously as a "robotic co-worker."
Images (1):
|
|||||
| Agility Robotics Humanoids to Manage E-commerce Firm Mercado Libre’s Texas … | https://analyticsindiamag.com/ai-news-u… | 1 | Dec 31, 2025 16:00 | active | |
Agility Robotics Humanoids to Manage E-commerce Firm Mercado Libre’s Texas WarehouseDescription: The agreement aims to address labour gaps, automate repetitive work, and improve workplace ergonomics. Content:
Mercado Libre, Latin America’s largest e-commerce company and fintech acquirer, has signed an agreement with Agility Robotics to deploy Digit, the latter’s humanoid robot, in its fulfilment centre in San Antonio, Texas. The company also plans to deploy Agility Robotics’ robots across more of Mercado Libre’s warehouses in Latin America. The companies will begin with commerce-support tasks and will test how humanoid systems can fit into the e-commerce company’s existing workflows. The agreement aims to address labour gaps, automate repetitive work, and improve workplace ergonomics. Both companies will evaluate how Digit can support logistics operations and help staff shift to more value-added roles. With this, Mercado Libre has joined other companies deploying Agility’s robots, including GXO, Schaeffler, and Amazon. “At Mercado Libre, we are constantly exploring how emerging technologies can elevate our operations and improve the experience for our employees and millions of users,” said Agustin Costa, senior VP of shipping at Mercado Libre, in a statement. Digit will move materials in the Texas facility by walking, lifting, and carrying totes (plastic bins) through standard warehouse aisles. Agility said the robot can integrate into existing layouts without significant changes. The robot has already handled more than 100,000 totes in live operations, the company noted. Costa said the deployment is a step towards building a “safer, more efficient, and adaptable logistics network.” Daniel Diez, chief business officer at Agility Robotics, said the company is “excited to partner with them [Mercado Libre] to integrate our autonomous humanoid robots capable of performing meaningful work and delivering real value into their facilities.” Agility asserted that Digit can take on high-turnover roles and support continuous workflows. It uses AI to learn new tasks and adapt to different processes. The robot utilises Agility ARC, the company’s cloud platform for managing robot fleets within enterprise systems, for task allocation.
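To make the task-allocation idea concrete, here is a minimal sketch of how a cloud-side allocator might queue tote moves and hand them to idle robots. Agility has not published ARC's interfaces, so every class, field, and robot ID below is a hypothetical illustration, not ARC's actual API.

```python
# Illustrative-only sketch of fleet task allocation for tote-moving robots.
# All names here are hypothetical; Agility ARC's real API is not public.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class ToteTask:
    priority: int                      # lower value = dispatched sooner
    tote_id: str = field(compare=False)
    pick: str = field(compare=False)   # e.g. "AMR dock 3"
    place: str = field(compare=False)  # e.g. "conveyor B"

class FleetAllocator:
    """Pairs queued tote moves with whichever robots are idle."""
    def __init__(self, robot_ids):
        self.idle = list(robot_ids)
        self.queue: list[ToteTask] = []

    def submit(self, task: ToteTask):
        heapq.heappush(self.queue, task)

    def dispatch(self):
        # Match highest-priority tasks to idle robots, one per robot.
        assignments = []
        while self.idle and self.queue:
            task = heapq.heappop(self.queue)
            robot = self.idle.pop(0)
            assignments.append((robot, task))
        return assignments

alloc = FleetAllocator(["digit-01", "digit-02"])
alloc.submit(ToteTask(2, "T-1002", "AMR dock 1", "conveyor A"))
alloc.submit(ToteTask(1, "T-1001", "AMR dock 3", "conveyor B"))
for robot, task in alloc.dispatch():
    print(robot, "->", task.tote_id, ":", task.pick, "->", task.place)
```

A real fleet manager would also track battery state, robot location, and task progress; the priority queue above only shows the core matching step.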
Images (1):
|
|||||
| Mercado Libre Partners with Agility Robotics to Deploy Digit Robots at Its Texas Facility to Boost Efficiency | https://www.techritual.com/2025/12/11/4… | 0 | Dec 31, 2025 16:00 | active | |
Mercado Libre Partners with Agility Robotics to Deploy Digit Robots at Its Texas Facility to Boost Efficiency URL: https://www.techritual.com/2025/12/11/469153/ Description: | Content: |
|||||
| Human-Robot Collaboration Is No Dream: Agility Robotics' Digit Has Been on the Job for a Year, with Homes as the Next Goal | TechNews | https://finance.technews.tw/2025/11/02/… | 1 | Dec 31, 2025 16:00 | active | |
Human-Robot Collaboration Is No Dream: Agility Robotics' Digit Has Been on the Job for a Year, with Homes as the Next Goal | TechNews URL: https://finance.technews.tw/2025/11/02/agility-robotics-digit/ Description: In US logistics giant GXO's vast warehouse, slender blue-green figures bustle about. On bird-like legs, their gripper hands deftly reach into automated guided vehicles, moving plastic logistics totes onto a conveyor one by one. These robot workers, "Digit", embedded in the workflow and sharing workspace with humans, have faithfully performed their tasks for more than a year. In June, US robotics unicorn Agility Robotics celebrated its Digit... Content:
In US logistics giant GXO's vast warehouse, slender blue-green figures bustle about. On bird-like legs, their gripper hands deftly reach into automated guided vehicles, moving plastic logistics totes onto a conveyor one by one. These robot workers, "Digit", embedded in the workflow and sharing workspace with humans, have been faithfully performing their tasks for more than a year.

In June, US robotics unicorn Agility Robotics celebrated Digit's first anniversary on the job: more than 2,000 working hours and over 300,000 stock-keeping units moved. That is only the first milestone for the world's first commercially deployed humanoid robot. Earlier this month, Digit passed the one-year mark working full time for @GXOLogistics 🎉🤖 pic.twitter.com/wom0Qz4LKq — Agility Robotics (@agilityrobotics) June 26, 2025

In the eyes of Agility Robotics co-founder and Chief Robot Officer Jonathan Hurst, the Digit he built still has plenty of room to grow. In the future, Digit will step out of the warehouse into factories, retail, and caregiving, and ultimately into ordinary consumers' homes. "There is no need to rush to the finish line," Hurst explained to Global Views Monthly. Home robots face the highest safety bar, and he reckons the home will be the last market robots reach. But every market unlocked on the way there is worth more than ten billion US dollars. "Serving these markets, you can build a company worth tens of billions or even a trillion dollars. Then, at the end, you bring robots to the home market."

Beyond the milestone long-term "Robots-as-a-Service" contract with GXO, Agility Robotics has already entered the warehouses of highly automated retail giant Amazon. Since the start of the year, Digit has also been loading and unloading washing-machine housings at Schaeffler's plant in South Carolina; the machinery and automotive parts maker is itself an Agility investor and is preparing to deploy humanoid robots across its global production sites.

After several years of collaboration, NVIDIA announced in September that its venture arm NVentures had joined Agility Robotics' Series C round. Agility had already been using NVIDIA Omniverse's robot simulation and learning frameworks, Isaac Sim and Isaac Lab, to train the robot's whole-body control in simulation. Digit is also an early adopter of the new robotics computer Jetson Thor.

▲ Agility Robotics co-founder and Chief Robot Officer Jonathan Hurst.

Founded in 2015, the long-established robotics developer Agility Robotics is accelerating on the back of the "physical AI" boom. As the industry's first humanoid-robot supplier to land a long-term commercial contract, what is its most valuable lesson? The answer: safety is the biggest barrier to scaled commercial deployment. Hurst says bluntly that this is the most important lesson learned: "Without a complete functional-safety strategy, you cannot deploy robots."

In the past, factory automation equipment was usually fixed in place, or performed tasks only inside a machine work cell, to keep human workers from being injured. But confining humanoid robots to dedicated work zones drastically limits the kinds of tasks they can take on, which defeats the purpose of developing humanoids at all. As Hurst puts it, "The value of a humanoid robot is that it can walk into existing spaces built for humans and do, in a human's place, the work a human would otherwise do." He says Agility Robotics only truly understood what safety strategy robots need after proof-of-concept work and collaboration with customers such as Amazon: "That requires designing the entire robot holistically (to meet safety principles)." For example, a robot sharing space with humans could fall onto someone after running low on power or stepping on an obstacle. Digit currently weighs over 60 kg, and future versions will exceed 100 kg, so that would be a very high-risk accident.

▲ Only after proof-of-concept work and collaboration with customers such as Amazon did Agility Robotics truly understand what safety strategy robots need.

To carry goods, Digit's arms can bear more than 15 kg. If the robot struck someone's face while lifting or swinging its arms, that too would be quite dangerous. Having been first into the commercial market, Agility Robotics wants to lead on safety solutions as well. The fifth-generation Digit, slated for 2026, is expected to be the first humanoid robot safe enough to work closely with humans. That includes fitting an array of sensors to reliably detect people. As a person approaches, the robot stops, sits down, and enters a low-power mode so that it cannot fall onto them. It may also need an independent monitoring system that can step in if the robot is about to breach its safety rules. Hurst says safe deployment requires a complete functional-safety strategy: not just the safety of individual components and modules, but the physical design too, a holistic upgrade from the inside out.

Agility Robotics' plan is for Digit to be fully cooperative with humans by the end of 2026, meaning humans and robots can work in the same space but not handle the same object at the same time. From that foundation it will advance to fully collaborative human-robot work. Agility Robotics is also working with Boston Dynamics and other robotics companies to define ISO 25785-1, a safety standard for dynamically balancing industrial mobile robots. Only with an international safety standard can robots be certified and insured, laying the groundwork for commercial deployment. "That will be the first functionally safe, dynamically balancing humanoid robot that can be used outside a work cell," Hurst said. "That will also be the moment humanoid robots scale, when they can enter a space and run a workflow without extra supporting infrastructure. We will announce this result by the end of 2026."

Warehouse logistics and manufacturing plants are the initial settings where humanoid robots move from well-defined, structured environments toward the flexible, open world. They are also the crucial first step for spinning up the "data flywheel" that optimizes the product and justifies further investment with real revenue. Agility Robotics spent years reviewing hundreds of use cases before settling on warehouse logistics as its ideal beachhead market. Hurst gives an example: the company first considered last-mile delivery, with the robot carrying packages from a self-driving vehicle to the customer's door. But self-driving cars are not yet widespread, every household is different, and safety is a major challenge, so it was not an ideal use case. Agility also evaluated trailer loading and unloading, but cardboard boxes vary in size, making grasping and sorting complex; stacking boxes up high means climbing a step stool, and arms easily hit the wall when stacking against it, so that was not a suitable initial market either. In the end, they found that moving standard warehouse totes is a task very well suited to Digit. The plastic totes are standardized enough that Digit can grip them by the handles, and carrying heavy loads through narrow aisles is exactly where Digit's long-honed two-legged dynamic balance pays off.

▲ Moving standard warehouse totes is a task very well suited to Digit.

Hurst estimates this market alone could absorb shipments of thousands of units. Entering the market early also lets the company accumulate experience in field deployment and managing large robot fleets, and refine product aspects such as safety features. Perhaps more importantly, it lets Agility Robotics grow together with its customers toward a more general-purpose humanoid. As Hurst stresses, a robot arm is enough if you only want to automate a single task; but as Agility gradually widens Digit's range of tasks, the humanoids in customers' hands become ever more versatile. The next step is the tasks upstream and downstream of the tote workflow; once Digit masters more dexterous manipulation, the company will circle back to conquer cardboard-box handling. "We don't want to keep raising money and spend ten or twenty years marching on the final market," Hurst said. "We want to start making money and gradually build a flywheel of continuous improvement." He expects humanoid robots to find their first applications in industrial environments, then move into less structured industries with high safety requirements, such as grocery retail, construction, and caregiving, before finally entering ordinary homes, a process that will take more than a decade. By 2050, Hurst expects, robots will be able to work and live alongside humans in shared spaces. To reach that future, both Digit and humans still have plenty of work to do. (This article is republished with permission from Global Views Monthly; image source: Agility Robotics)
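The escalation the article describes (stop, then sit and enter low power as a person closes in, with an independent monitor able to intervene) amounts to a small safety state machine. Here is a minimal sketch of that logic; the states, distance thresholds, and de-escalation rule are illustrative assumptions, not Agility's published design.

```python
# Illustrative sketch of proximity-triggered safety escalation.
# Thresholds and states are assumptions for the example only.
from enum import Enum, auto

class SafetyState(Enum):
    WORKING = auto()
    STOPPED = auto()
    SEATED_LOW_POWER = auto()

STOP_RANGE = 2.0  # assumed: stop moving inside this distance (meters)
SIT_RANGE = 1.0   # assumed: sit and power down inside this distance

def next_state(state: SafetyState, human_distance_m: float) -> SafetyState:
    """Escalate as a person gets closer; a seated robot cannot fall on them."""
    if human_distance_m <= SIT_RANGE:
        return SafetyState.SEATED_LOW_POWER
    if human_distance_m <= STOP_RANGE:
        # Stay seated at mid range; only de-escalate once the person backs off.
        return state if state is SafetyState.SEATED_LOW_POWER else SafetyState.STOPPED
    return SafetyState.WORKING

# The independent monitor the article mentions would run checks like this on
# separate hardware and cut motion authority if the main controller disagrees.
state = SafetyState.WORKING
for dist in [5.0, 1.8, 0.7, 1.5, 3.0]:
    state = next_state(state, dist)
    print(f"{dist:>4} m -> {state.name}")
```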
Images (1):
|
|||||
| GXO Signs Industry-First Multi-Year Agreement with Agility | https://www.globenewswire.com/news-rele… | 1 | Dec 31, 2025 16:00 | active | |
GXO Signs Industry-First Multi-Year Agreement with Agility Description: Robots-as-a-Service (RaaS) agreement integrates Agility’s Humanoid robots with other cobots at SPANX facility GREENWICH, Conn., June 27, 2024 ... Content:
June 27, 2024 16:30 ET | Source: GXO Logistics Robots-as-a-Service (RaaS) agreement integrates Agility’s Humanoid robots with other cobots at SPANX facility GREENWICH, Conn., June 27, 2024 (GLOBE NEWSWIRE) -- GXO Logistics, Inc. (NYSE: GXO), the world’s largest pure-play contract logistics provider, and Agility Robotics, creator of the leading bipedal Mobile Manipulation Robot (MMR) Digit®, announced today that they have signed a multi-year agreement to begin deploying Digit in GXO’s logistics operations. This agreement, which follows a proof-of-concept pilot in late 2023, is both the industry’s first formal commercial deployment of humanoid robots and first Robots-as-a-Service (RaaS) deployment of humanoid robots. “We’re building on the success of last year’s groundbreaking pilot with Agility by deploying fully operational Digit humanoids into a live warehouse environment,” said Adrian Stoch, Chief Automation Officer, GXO. “Our R&D approach is to partner with developers all over the world to help them build and validate practical use cases that improve the working environment for our employees while optimizing operations for our customers. Agility shares this philosophy, and Digit is the perfect addition to work alongside our people in our fulfillment center. We’re delighted to progress our partnership through this critical milestone.” As part of the RaaS agreement, GXO is deploying Digit robots and Agility Arc™, Agility’s cloud automation platform for deploying and managing Digit fleets. Digit is a multi-purpose, human-centric robot made for logistics work, and designed to work safely in human spaces and help with a variety of repetitive tasks. Agility Arc is designed to simplify the deployment lifecycle, from facility mapping and workflow definition to operational management and troubleshooting. In the SPANX facility, Agility’s solutions integrate with existing automation, including Autonomous Mobile Robots (AMRs). Expanding on last year’s pilot, Digit robots are assisting with repetitive tasks such as moving totes from cobots and placing them onto conveyors, all orchestrated through Agility Arc. Under the RaaS agreement, the companies will continue to explore additional use cases and scale Digit usage to meet demand throughout the deployment. To see Digit in action at GXO, watch this video. “There will be many firsts in the humanoid robot market in the years to come, but I’m extremely proud of the fact that Agility is the first with actual humanoid robots deployed at a customer site, generating revenue, and solving real-world business problems,” said Peggy Johnson, CEO, Agility Robotics. “Agility has always been focused on the only metric that matters - delivering value to our customers by putting Digit to work - and this milestone deployment raises the bar for the entire industry.” Supply & Demand Chain Executive magazine (SDCE) recently recognized GXO as the overall winner of the 2024 Top Supply Chain Projects awards for its groundbreaking pilot with Digit. About GXO Logistics GXO Logistics, Inc. (NYSE: GXO) is the world’s largest pure-play contract logistics provider and is benefiting from the rapid growth of ecommerce, automation and outsourcing. GXO is committed to providing a diverse, world-class workplace for more than 130,000 team members across more than 970 facilities totaling approximately 200 million square feet.
The company partners with the world’s leading blue-chip companies to solve complex logistics challenges with technologically advanced supply chain and ecommerce solutions, at scale and with speed. GXO corporate headquarters is in Greenwich, Connecticut, USA. Visit GXO.com for more information and connect with GXO on LinkedIn, X, Facebook, Instagram and YouTube. About Agility Robotics Headquartered in Corvallis, Oregon, with offices in Pittsburgh, Pennsylvania and Palo Alto, California, Agility Robotics’ mission is to build robot partners that augment the human workforce, ultimately enabling humans to be more human. Agility’s groundbreaking bipedal Mobile Manipulation Robot (MMR) Digit is the first multi-purpose, human-centric robot that is made for work™.
Images (1):
|
|||||
| Agility Robotics Debuts Compact Foundation Model for Digit Robot Control | https://www.webpronews.com/agility-robo… | 1 | Dec 31, 2025 16:00 | active | |
Agility Robotics Debuts Compact Foundation Model for Digit Robot Control URL: https://www.webpronews.com/agility-robotics-debuts-compact-foundation-model-for-digit-robot-control/ Description: Content:
In the rapidly evolving field of humanoid robotics, Agility Robotics is pushing boundaries with its latest innovation: a whole-body control foundation model designed to act as the “motor cortex” for its Digit robots. This neural network, boasting fewer than one million parameters, promises to revolutionize how humanoid robots interact with dynamic environments, handling tasks from heavy lifting to disturbance recovery with unprecedented stability and efficiency. Drawing from advanced simulation techniques, the model is trained entirely in NVIDIA’s Isaac Sim, leveraging reinforcement learning to master omnidirectional locomotion and manipulation. As detailed in a recent post on Agility Robotics’ blog, the system decouples high-level planning from low-level control, allowing for intuitive interfaces that simplify teleoperation and behavior cloning. Sim-to-Real Transfer Breakthroughs One of the model’s standout features is its zero-shot sim-to-real transfer capability, enabling seamless deployment from virtual training to physical hardware without additional fine-tuning. This efficiency stems from a carefully curated dataset of 2,000 hours of simulated motion, encompassing diverse scenarios like uneven terrain navigation and object manipulation under perturbations. Industry observers note that this approach addresses longstanding challenges in robotics, such as underactuation and complex dynamics. A survey published on arXiv highlights how behavior foundation models like this one facilitate rapid adaptation to new tasks, potentially transforming humanoid applications in logistics and manufacturing. Integration with Broader Ecosystems Agility’s foundation model integrates smoothly with higher-level AI systems, including large language models for task planning. Posts on X from robotics experts, such as those shared by Chris Paxton, Agility’s director of robotics, emphasize its robustness in handling heavy objects and disturbances, positioning it as a platform for learning new skills. Recent news from The Robot Report underscores August 2025 as a pivotal month for such advancements, with Agility’s work featured alongside investments and new product releases. The model’s small size and low computational demands make it deployable on edge devices, a boon for real-world operations. Real-World Deployments and Future Implications In practical terms, this technology is already influencing deployments. Agility’s cloud platform, Arc, allows for workflow integration in warehouses, as noted in their official site updates. A Business Insider interview with Agility’s CEO reveals plans for safety-certified humanoids by late 2025, capable of operating alongside humans. Bloomberg reports project the humanoid market reaching $38 billion by 2035, with Agility at the forefront. X discussions, including those from The Humanoid Hub, praise the model’s simulation-based training for enabling safe, reactive control in diverse tasks. Challenges and Ethical Considerations Despite these strides, challenges remain in scaling such models for general-purpose use. NVIDIA’s Jetson Thor integration, as announced on Agility’s site, aims to meet growing compute needs, supporting more complex perceptions and decisions. Insiders at events like RoboBusiness 2025, covered by The Robot Report, share insights on initial deployments, revealing lessons in human-robot collaboration. 
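For a sense of scale, the article's headline numbers (a whole-body controller under one million parameters, small enough for edge devices) fit comfortably in a plain MLP. The sketch below shows such a parameter budget; the observation and action dimensions and layer sizes are assumptions for illustration, since Agility has not published the network's architecture.

```python
# A minimal sketch of a sub-million-parameter whole-body control policy.
# All dimensions are illustrative assumptions: Digit's actual observation
# and action spaces and layer sizes have not been published.
import torch
import torch.nn as nn

OBS_DIM = 64   # assumed: joint positions/velocities, IMU, command inputs
ACT_DIM = 20   # assumed: roughly one target per actuated joint

class WholeBodyPolicy(nn.Module):
    def __init__(self, obs_dim: int = OBS_DIM, act_dim: int = ACT_DIM):
        super().__init__()
        # A small MLP: keeping the model compact is what makes
        # low-latency inference on embedded hardware feasible.
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 512), nn.ELU(),
            nn.Linear(512, 256), nn.ELU(),
            nn.Linear(256, 128), nn.ELU(),
            nn.Linear(128, act_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

policy = WholeBodyPolicy()
n_params = sum(p.numel() for p in policy.parameters())
print(f"parameters: {n_params:,}")  # ~200k here, well under a 1M budget
```

In an RL pipeline like the one the article describes, a network of this size would be the policy trained in simulation and then deployed unchanged on the robot's onboard computer.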
As Agility refines this foundation model, it could set new standards for robotic autonomy, blending machine learning with physical intelligence in ways that enhance human productivity without replacing it. Looking Ahead in Robotic Innovation The convergence of foundation models with humanoid hardware signals a shift toward more intuitive robotic systems. Wikipedia’s entry on Agility Robotics traces its origins to Oregon State’s Dynamic Robotics Lab, underscoring the academic foundations fueling these commercial breakthroughs. Ultimately, this whole-body control model exemplifies how targeted AI can bridge simulation and reality, paving the way for robots that not only perform tasks but adapt intelligently to the unpredictable nature of human environments. As 2025 progresses, Agility’s innovations may well define the next era of practical robotics.
Images (1):
|
|||||
| Will Digit Dominate Warehouses? Agility Robotics Raises $400 Million … | https://mamstartup.pl/czy-digit-zdominu… | 1 | Dec 31, 2025 16:00 | active | |
Will Digit Dominate Warehouses? Agility Robotics Raises $400 Million for Growth Description: Agility Robotics, the American startup behind the humanoid robot Digit, is gearing up for a giant funding round. The company is set to raise 400 million dollars at a valuation Content:
Published: 04.04.2025 The lead investor is to be WP Global Partners, with SoftBank Group also among the round's participants. The goal of the raise is to scale production of humanoid robots meant to support work in warehouses and logistics centers. Agility Robotics has spent several years developing the humanoid robot Digit for work in warehouses and logistics centers. The machine, roughly 175 cm tall, is designed to work alongside people. It can carry loads of up to about 16 kg and navigate human environments thanks to advanced sensors, cameras, and LiDAR technology. Digit is already used by companies such as Amazon, Spanx, and GXO Logistics for real-world tasks, including transporting warehouse totes. With the new funds, Agility Robotics plans to accelerate the large-scale rollout of Digit, which could significantly reshape modern logistics. As artificial intelligence develops and hardware components improve, humanoid robots are ceasing to be a futuristic vision and are becoming a real answer to companies' operational problems around the world. Experts predict that the race for dominance in humanoid robotics will pick up pace, and Agility Robotics, thanks to Digit, may gain an edge over the competition and redefine how people work with machines in the age of automation.
Images (1):
|
|||||
| Humanoid Robot Digit from Agility Robotics Now Understands Even Convoluted Requests … | https://techno.nv.ua/innovations/robot-… | 0 | Dec 31, 2025 16:00 | active | |
Humanoid Robot Digit from Agility Robotics Now Understands Even Convoluted Requests / NV URL: https://techno.nv.ua/innovations/robot-gumanoid-digit-ot-agility-robotics-s-ii-50376752.html Description: The humanoid robot Digit from Agility Robotics can now better understand people's requests, even when they... Content: |
|||||
| Tesla Is Having Trouble with Its Optimus Robots - ShiftDelete.Net | https://shiftdelete.net/tesla-optimus-r… | 1 | Dec 31, 2025 08:01 | active | |
Tesla Is Having Trouble with Its Optimus Robots - ShiftDelete.Net URL: https://shiftdelete.net/tesla-optimus-robotlarinda-sorun-yasiyor Description: Tesla has started experiencing setbacks with its Optimus technology, causing serious concern within company management. Content:
Tesla has temporarily suspended mass production of its humanoid robot project, Optimus, due to serious technical problems with the hand and arm design. Because of these setbacks, the company has had to cut its production target for the end of 2025 from 5,000 units to 2,000. According to the latest reports, Tesla is struggling with the hand and arm design. Sources close to the project say engineers are finding it extremely difficult to develop a flexible, agile mechanism resembling the human hand, which led to the temporary halt in mass production. Large numbers of robot bodies lacking hands and forearms are reportedly piling up in the company's warehouses, with no clear timetable for when they will be completed. Elon Musk confirmed the design problems during a podcast appearance but gave no date for production to restart. Tesla first noticed these technical difficulties last summer and decided at the time to lower its production target. As new problems surfaced, the company redirected its resources toward mechanical improvements and design revisions. Having initially aimed to build at least 5,000 Optimus robots by the end of 2025, Tesla cut that figure to 2,000 after its engineers said the target was unrealistic. The revised plan shows that the robot's development has become more complicated than expected. Musk has said before that human-like hand movements are the most challenging part of the design process. Even so, he remains confident in the Optimus project.
Images (1):
|
|||||