Total Articles Scraped: 8
Total Images Extracted: 4
| Title | URL | Images | Scraped At | Status |
|---|---|---|---|---|
| What will define AI? AReaL head Yi Wu points to … | https://kr-asia.com/what-will-define-ai… | 1 | Dec 24, 2025 16:00 | active |
What will define AI? AReaL head Yi Wu points to reinforcement learning
URL: https://kr-asia.com/what-will-define-ai-areal-head-yi-wu-points-to-reinforcement-learning
Description: His work on reinforcement learning and embodied agents is part research, part startup, and all about learning by doing.
Content:
Written by 36Kr English. Published on 3 Dec 2025. 11 min read.

Whether in academic research or collaborations with companies such as Ant Group, Yi Wu encourages his team to keep an entrepreneurial mindset: move quickly, iterate often, and avoid fearing failure. An assistant professor at Tsinghua University's Institute for Interdisciplinary Information Sciences and head of the AReaL project, Wu studies reinforcement learning algorithms and applications of artificial intelligence. In May, his team and Ant Research jointly open-sourced AReaL-lite, described by the researchers as the first asynchronous reinforcement learning training framework designed to improve training efficiency and reduce GPU waste. The claim has not been independently verified.

As a young tech leader, Wu emphasizes learning through trial and error. He resists the idea that a lack of resources is an acceptable reason for stalled progress, saying that building something new often requires creating the resources along the way. That philosophy surfaced at the Inclusion Conference on the Bund in September, where Wu argued that teams should release products as soon as they work at a basic level so they can learn from market feedback. The goal, he said, is not to wait for a perfect launch but to identify problems early and refine the product.

His approach is rooted in earlier entrepreneurial experience. In 2023, his team founded Prosocial Intelligence, an agentic AI company that later evolved into AReaL. Wu is informally grouped with Jianyu Chen, Yang Gao, and Huazhe Xu as part of the "Berkeley Four," a nickname referencing their shared academic background in AI research. All four studied in the US. Wu was the first to return to China, and he encouraged the others to follow.

At Tsinghua University, Wu often reminds students that innovation requires venturing into unfamiliar territory. He argues that AI breakthroughs benefit more from long-term focus than from trying to chase every potential direction. He also holds a specific view of AI's future, in which intelligent agents interpret loosely defined human intentions, complete long-horizon tasks, and eventually move from digital spaces into the physical world. At this year's World Artificial Intelligence Conference (WAIC), he described a scenario in which a person could verbally ask a robot to tidy a room, and the robot would spend hours finishing the task.

Reinforcement learning, his area of research, is central to that goal. He notes that the technique enables AI systems to learn through interaction and exploration, in contrast to supervised learning, which depends on continuous human instruction and struggles with long, open-ended tasks. Despite his rigorous academic work, Wu's social media presence is lighter. On Xiaohongshu, he posts research updates, responds to questions about careers in AI, and occasionally ranks his favorite bubble tea flavors.

Wu spoke with 36Kr about his take on AI's future, entrepreneurship, and building efficient teams. The following transcript has been edited and consolidated for brevity and clarity.

Wu Yi (WY): I think enabling AI to complete long-horizon tasks is an irreversible trend. Meanwhile, the commands humans give AI will become increasingly simple and vague. It's hard to predict the exact product form, but one thing is certain: we're moving from users actively driving AI to AI proactively anticipating what users want and completing it. This pattern already appeared during the mobile internet era. With search engines, users had to look for information themselves. Then came platforms like Zhihu, and later ByteDance's products, whose algorithms pushed desired content directly to users. So I think, eventually, people will forget what a "search box" is as AI increasingly caters to human laziness. Ultimately, a whole new kind of product will emerge, and it will mark a generational opportunity.

WY: A smart embodied agent can infer user intentions from fuzzy instructions, complete tasks accurately, and even anticipate unspoken needs. For example, if you tell your home robot that you can't find your power bank, it may reason and act on its own, searching based on your habits and its memory of where you last used it.

WY: They can cooperate to handle more complex tasks. Take robots playing soccer, for instance. Just like human players, when robots encounter familiar situations, a quick scan of the environment signals what formation to take. If you have several intelligent agents, the next step is defining how they communicate. In the digital world, one approach is a master agent that coordinates many smaller ones. You can use different models, or even a single model structured like a planner directing many executors. That's the idea behind a multi-agent system. I often cite Claude Code and Gemini as an example: Claude Code excels at programming but has short context and high cost, while Gemini handles large amounts of content but lacks reasoning power. Let Gemini first read an entire codebase and extract the key parts, then let Claude Code write the actual code. It's like pairing a smart but frail thinker with a strong but dull worker. The combination makes a highly efficient multi-agent system. In embodied scenarios, such as cleaning a space, robots "communicate" to plan who sweeps and who mops, working together to finish the job.

WY: Transitioning from the digital to the physical world requires multimodal data, moving training environments from computers into reality. In the digital world, tools are mostly bits, and they execute reliably. In the physical world, using tools, like grabbing a bag or opening a door, still involves high error rates. Embodied intelligence therefore develops more slowly and with greater complexity. That said, if we look far enough ahead, once the physical world has been sufficiently digitized, the core technical challenges for all types of agents will converge. If one day a machine can reliably operate almost any physical tool, building an embodied agent that can function autonomously for an entire day will be technically no different from building a digital agent.

WY: At Prosocial, we made plenty of mistakes in early hiring. Many employees treated it like a regular job, not a startup, and didn't grasp what entrepreneurship really means. Objectively, the team wasn't fully ready to adopt a mindset suited to running a startup in the AI era. Still, everyone was learning, and stumbling was inevitable. Now I really dislike hearing that something can't be done because we don't have the resources. Startups rarely have abundant resources; people create them while pursuing their goals. Entrepreneurial teams need that spark of innovation and the right level of conviction. Innovation isn't about placing bets. Startups must believe deeply in what they are doing. We don't have enough resources to hedge across multiple tracks hoping one will win; that only breeds mediocrity. Entrepreneurial spirit means believing something is right even if you fail to achieve it yourself. Someday, someone will.

WY: In August 2018, I finished my internship at ByteDance in Beijing. Though I earned my PhD at UC Berkeley, my experience at ByteDance had a big influence on me. Since 2016, I had interned intermittently on various ByteDance teams and was among the first members of its AI lab, witnessing the end of China's mobile internet boom. By August 2018, I knew I wanted to return to China. Partly, I saw enormous opportunity in China's development. Partly, I felt a clear ceiling for Chinese professionals in the US. Unless you become fully American, you face that question: if you want to make a real impact, do you want to be Chinese or American? I realized I didn't want to compromise by becoming American. Many people say that they aren't ready yet, and that they will wait until they are. Some Chinese scholars in the US say they will develop there for a few more years, then return. But my view is: if you're sure you'll do something someday, the best time was yesterday. The second best is today. So I decided to come back. A month later, I turned down ByteDance's return offer, and in October 2018, I joined Tsinghua University as a faculty member. Then I shared my thoughts with my fellow Berkeley classmates, telling them to seize the opportunity, and indeed, some were persuaded. Looking back, the timing really was ideal. We early returnees enjoyed the dividends of that wave.

WY: Exactly. I've been "learning by reinforcement" all along, hitting every pitfall as quickly as possible. Honestly, learning through trial and error teaches you more deeply and generalizes better than supervised fine-tuning. Building products works the same way. I often say: once you make something, release it immediately. In the AI era, even great products need exposure. Get them out there, gather feedback, and iterate fast. Even negative feedback shows you where the pitfalls are. Of course, with high-quality supervised fine-tuning data, reinforcement learning becomes more efficient. Negative rewards are costly, so I share my experiences to help others learn faster.

WY: I have a method to help me make decisions quickly: I flip a coin. Before it even lands, I usually already know my answer. I'm always the one who flips first.

WY: Yes. I've thought about it: if I built a startup from zero to one and later, as it scaled from one to 100, I was no longer the most visible leader, would I be fine with that? The answer I arrived at is yes. At that inflection point, I'd likely bring in a professional manager and move on to the next project. Managing hundreds of people isn't what I enjoy most. That said, I'm reflecting on whether such idealism might limit me. Maybe when that moment comes, I'll choose differently. But if you ask me now, I'd still choose to keep creating zero-to-one projects.

WY: Because it lets AI learn from real interaction. Supervised learning or fine-tuning means humans constantly tell AI what to do, but the possibilities are infinite, and humans can't give instructions for ten hours straight. Human instructions also differ from how AI "thinks." When AIs simply memorize, they don't truly understand, and thus generalize poorly. Reinforcement learning encourages active interaction with environments, even teaching AI to ask questions when uncertain. It cultivates self-driven exploration, something only the technique can achieve.
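The self-driven exploration Wu describes is visible even in the smallest RL setups. As an editor-supplied illustration (not from the interview), an epsilon-greedy bandit agent discovers which action pays off by trying actions itself rather than being told the answer; all values here are toy assumptions:

```python
import random

# Minimal epsilon-greedy bandit: the agent learns good actions by trial and
# error instead of being given labels, as in supervised learning.
true_payouts = [0.2, 0.5, 0.8]   # hidden from the agent
estimates = [0.0, 0.0, 0.0]      # the agent's running value estimates
counts = [0, 0, 0]
epsilon = 0.1                    # fraction of steps spent exploring

for step in range(10_000):
    if random.random() < epsilon:
        a = random.randrange(3)                            # explore
    else:
        a = max(range(3), key=lambda i: estimates[i])      # exploit
    reward = 1.0 if random.random() < true_payouts[a] else 0.0
    counts[a] += 1
    estimates[a] += (reward - estimates[a]) / counts[a]    # incremental mean

print(estimates)  # converges near the true payout rates
```

With epsilon set to zero the agent can lock onto a mediocre arm forever; the small exploration budget is what lets it keep testing its own beliefs.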
WY: I now think the most important factor is prompting, specifically creating large numbers of high-quality prompts. Here's an analogy: a teacher tutoring a student in math. Prompts are the teacher's problems, search and exploration are the student's problem-solving process, and the reward model is the teacher's feedback. Choosing the right difficulty is crucial. Make it too advanced and the student gives up; make it too easy and they learn nothing. The same applies to reinforcement learning: data volume alone doesn't help. Appropriateness does.
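One concrete way to operationalize "appropriateness," sketched here by the editor as a hypothetical example (the `rollout` sampler is a stand-in for whatever an actual RL training loop provides), is to estimate each prompt's pass rate under the current policy and keep only prompts of intermediate difficulty:

```python
import random

def rollout(policy, prompt, n=8):
    """Stand-in for sampling n attempts at a prompt; returns pass/fail flags."""
    p = policy.get(prompt, 0.5)          # hypothetical per-prompt success rate
    return [random.random() < p for _ in range(n)]

def filter_by_difficulty(policy, prompts, lo=0.2, hi=0.8):
    """Keep prompts the model sometimes, but not always, solves."""
    kept = []
    for prompt in prompts:
        passes = rollout(policy, prompt)
        rate = sum(passes) / len(passes)
        if lo <= rate <= hi:             # too hard and too easy both dropped
            kept.append(prompt)
    return kept

policy = {"easy": 0.95, "medium": 0.5, "hard": 0.05}
print(filter_by_difficulty(policy, ["easy", "medium", "hard"]))  # typically ['medium']
```

Prompts the model always solves carry no gradient signal, and prompts it never solves yield only negative rewards, which matches the teacher analogy above.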
WY: The relationship goes two ways. One is locomotion, where reinforcement learning is already mature and doesn't require pre-training. The other involves long-horizon reasoning and planning, usually combined with large pre-trained models; that area only became popular after ChatGPT. These two aspects form a spectrum, from high-frequency control for short tasks to abstract reasoning for complex ones. Traditional reinforcement learning for control doesn't need pre-training; think of quadruped robots that can run and jump. Tiny neural networks trained in simulation can directly control real robots. In such tasks, reinforcement learning trains the network to output control signals for each joint, enabling motion over durations as short as seconds, not hours. By contrast, models like ChatGPT and DeepSeek's R1 use reinforcement learning after pre-training to enhance reasoning. Large models that employ reinforcement learning can think for minutes or hours, use common sense, break complex problems into subtasks, and call tools. But so far, this success remains in the digital realm, not the physical. In between lies the vision-language-action (VLA) model, which is often discussed in embodied intelligence research.

WY: VLA applies pre-training to the physical world. Researchers gather massive data to pre-train models that not only complete short tasks like running or jumping but also generalize to minute-long activities like folding towels and pouring water. To reach longer-range tasks like cooking or cleaning, robots must perform for hours, combining fine control with abstract reasoning and human interaction, just like digital agents, except in the physical world. So I see embodied agents as systems that merge locomotion or VLA, the "small brain" controlling motion, with language models enhanced through reinforcement learning, the "big brain." Unlike digital agents, physical agents still get less attention. Most people focus on hardware aspects such as gripping accuracy, object sorting, and so on. But altering the physical world is always harder. Given my focus, I'm working on stabilizing long-duration reasoning before combining it with physical control.

WY: Our current plan is a layered structure. As I said at WAIC, the higher you go, the more human knowledge you need; the lower you go, the less. Lower layers handle instinctive reactions, like grabbing a cup based on tactile feedback or simple physics. Upper layers need prior knowledge. So there's a natural division between digital and physical agents. I don't think VLA will be the final paradigm, because it isn't large enough in scale to become a fully capable agent. We'll perfect the digital agent format first while others explore the embodied side, then merge them when the timing is right.

WY: In the internet era, building a product required four or five people, typically including a frontend developer, a backend developer, and a product manager. In the AI era, one person might be sufficient to handle all of that. Previously, small companies outsourced many tasks. Now, AI can streamline not just internal work but also the outsourcing itself. If a team can run heavily on AI, its capabilities will naturally scale outward, because if AI can serve us, it can serve others, too. That's a new product opportunity. Our AReaL team has only six members, with some external support. Counting everyone, we could still make it leaner. I want the team to stay minimalist, and that's why it has always been small.

WY: First, a modern agent-focused team must itself use many agents every day. Second, I combine the algorithm and infrastructure teams into a single full-stack unit. Traditional structures separate algorithms from systems and add data collection teams, creating a dynamic in which algorithm teams are the "clients" while engineers become the contractors doing the "dirty work." That division kills innovation. Once you're used to being the former, you avoid the grunt work; if you're the latter, you lose creative space. OpenAI didn't magically invent new algorithms; it perfected the details. So to excel in infrastructure and data, you need to dig deep. With that groundwork, algorithms can shine. That's why algorithms and infrastructure must be co-designed and co-evolved. A small, highly capable team can collectively fulfill this. Large organizations, say with 200 people, can't avoid silos. Limited communication bandwidth leads to rigid roles and inefficiency. So a compact, full-stack setup and high innovation go hand in hand. Forget the 200-person org chart. In the AI era, it's all about going from zero to one, so take bold, radical approaches and build anew.

KrASIA Connection features translated and adapted content that was originally published by 36Kr. This article was written by Fu Chong for 36Kr.
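The Claude Code plus Gemini pairing Wu describes earlier in the interview is a planner-executor pattern in miniature: a cheap long-context "reader" distills, a strong short-context "coder" acts. An editor-supplied hypothetical sketch; both `call_*` functions are stand-ins, not real Gemini or Claude APIs:

```python
# Hypothetical planner-executor pairing: neither function is a real API.

def call_long_context_model(prompt: str) -> str:
    """Stand-in for a long-context 'reader' model (a Gemini-class model)."""
    return f"[digest of: {prompt[:40]}...]"

def call_reasoning_model(prompt: str) -> str:
    """Stand-in for a strong but short-context 'coder' model."""
    return f"[patch based on: {prompt[:40]}...]"

def fix_bug(codebase: str, bug_report: str) -> str:
    # Step 1: the reader compresses the whole codebase to what matters.
    digest = call_long_context_model(
        f"Extract only the files and functions relevant to this bug:\n"
        f"{bug_report}\n---\n{codebase}")
    # Step 2: the coder works from the small digest, staying within context.
    return call_reasoning_model(
        f"Bug: {bug_report}\nRelevant code:\n{digest}\nWrite the fix.")

print(fix_bug("def add(a, b): return a - b", "add() returns the wrong sum"))
```

The design point is the division of labor, not the specific models: the expensive reasoner never sees the full codebase, only the digest.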
| A critical assessment of reinforcement learning methods for microswimmer navigation … | https://hal.science/hal-05343166v1 | 1 | Dec 24, 2025 16:00 | active |
A critical assessment of reinforcement learning methods for microswimmer navigation in complex flows - Archive ouverte HAL
URL: https://hal.science/hal-05343166v1
Description: Navigating in a fluid flow while being carried by it, using only information accessible from on-board sensors, is a problem commonly faced by small planktonic organisms. It is also directly relevant to autonomous robots deployed in the oceans. In the last ten years, the fluid mechanics community has widely adopted reinforcement learning, often in the form of its simplest implementations, to address this challenge. But it is unclear how good the strategies learned by these algorithms are. In this paper, we perform a quantitative assessment of reinforcement learning methods applied to navigation in partially observable flows. We first introduce a well-posed problem of directional navigation for which a quasi-optimal policy is known analytically. We then report on the poor performance and robustness of commonly used algorithms (Q-Learning, Advantage Actor Critic) in flows regularly encountered in the literature: Taylor-Green vortices, Arnold-Beltrami-Childress flow, and two-dimensional turbulence. We show that they are vastly surpassed by PPO (Proximal Policy Optimization), a more advanced algorithm that has established dominance across a wide range of benchmarks in the reinforcement learning community. In particular, our custom implementation of PPO matches the theoretical quasi-optimal performance in turbulent flow and does so in a robust manner. Reaching this result required the use of several additional techniques, such as vectorized environments and generalized advantage estimation, as well as hyperparameter optimization. This study demonstrates the importance of algorithm selection, implementation details, and fine-tuning for discovering truly smart autonomous navigation strategies in complex flows.
Content:
https://hal.science/hal-05343166. Submitted on: Monday, 3 November 2025, 10:15:31. Last modified on: Friday, 7 November 2025, 12:43:28.
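The abstract credits generalized advantage estimation (GAE) and PPO's clipped objective for the robust results. As a hedged illustration of the standard formulations (not the paper's code), GAE runs a discounted backward recursion over TD residuals, and the PPO surrogate clips the policy ratio:

```python
import numpy as np

def gae(rewards, values, dones, gamma=0.99, lam=0.95):
    """Generalized advantage estimation over one rollout.

    rewards and dones have length T; values has length T+1
    (the bootstrap value of the final state is appended).
    """
    T = len(rewards)
    adv = np.zeros(T)
    last = 0.0
    for t in reversed(range(T)):
        nonterminal = 1.0 - dones[t]
        # TD residual: r_t + gamma * V(s_{t+1}) - V(s_t)
        delta = rewards[t] + gamma * values[t + 1] * nonterminal - values[t]
        last = delta + gamma * lam * nonterminal * last
        adv[t] = last
    return adv, adv + np.asarray(values[:-1])   # advantages, value targets

def ppo_clip_loss(ratio, adv, eps=0.2):
    """PPO clipped surrogate: ratio = pi_new(a|s) / pi_old(a|s)."""
    return -np.mean(np.minimum(ratio * adv,
                               np.clip(ratio, 1 - eps, 1 + eps) * adv))
```

The lambda parameter trades bias against variance in the advantage estimates; the paper's point is that such implementation details, together with vectorized environments and tuned hyperparameters, separate a working PPO from a failing Q-learner.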
| What Is Reinforcement Learning? Practical Steps Included - Hackster.io | https://www.hackster.io/HiwonderRobot/w… | 1 | Dec 24, 2025 16:00 | active |
What Is Reinforcement Learning? Practical Steps Included - Hackster.io
URL: https://www.hackster.io/HiwonderRobot/what-is-reinforcement-learning-practical-steps-included-0954ef
Description: Build real-world reinforcement learning skills with an open-source robot arm—fully hackable, community-driven, and ready to experiment. Find this and other hardware projects on Hackster.io.
Content:
Reinforcement learning (RL) is one of the most fascinating areas of artificial intelligence: an agent learns to make decisions by interacting with an environment, receiving feedback through rewards or penalties, and optimizing its behavior over time. From game-playing AIs like AlphaGo to robots learning to walk, RL bridges the gap between perception and action in AI. But moving from theory to real-world application is often challenging: high hardware costs, complex system integration, and difficulties in reproducing experiments can stall progress. This is where accessible, open-source hardware platforms become essential.

Enter the Hiwonder SO-ARM101, a fully open-source robotic arm platform born from the Hugging Face LeRobot project. It offers a hands-on, reproducible way to explore embodied AI, imitation learning, and, yes, reinforcement learning in the physical world. The SO-ARM101 isn't just another robotic arm. It's built on LeRobot, an open-source robotics project from Hugging Face, and follows a fully open philosophy, from hardware designs and firmware to software and example algorithms. This approach lowers the barrier to entry, allowing researchers, students, and makers to focus on experimenting with AI rather than struggling with hardware integration.

The kit includes two robotic arms in a leader-follower setup, making it particularly suited for imitation learning workflows. You can physically guide the leader arm to demonstrate a task, such as picking up an object or stacking blocks, while the follower arm records joint trajectories and camera data. After multiple demonstrations, the system learns a policy that allows the follower to perform the task autonomously. It's an intuitive and effective way to get started with learning from demonstration (LfD), which often serves as a foundation for more advanced RL methods. To support stable and repeatable real-world experiments, the SO-ARM101 incorporates several key upgrades over the baseline LeRobot design. While the leader-follower setup naturally supports imitation learning, the SO-ARM101 is also a capable platform for exploring reinforcement learning experiments.

The platform comes with step-by-step guides and reproducible examples, regularly updated to align with the latest LeRobot releases. Even without a background in robotics or RL, you can follow along to set up the system, collect demonstration data, train models, and deploy learned behaviors. It's not only a research tool; it's also an educational platform that makes embodied AI tangible and approachable.

Reinforcement learning is more than equations and algorithms; it's about agents that act, learn, and adapt in the real world. Open-source platforms like the SO-ARM101 help turn theoretical concepts into running experiments. By lowering cost and complexity, they enable more people to participate in embodied AI research, iterate on ideas, and contribute back to the community. If you've been curious about reinforcement learning beyond simulations, or if you're looking for a reliable hardware platform to test AI policies in physical environments, this community-driven, fully open robotic arm could be the right place to start.
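To make the leader-follower workflow concrete, here is an editor-supplied behavior-cloning sketch, not LeRobot's actual API: demonstrations recorded from the leader arm become (observation, action) pairs, and a small network regresses actions from observations. The tensor shapes and random placeholder data are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Placeholder demonstration data standing in for recordings from the leader
# arm: observations (e.g. joint angles plus a camera embedding) paired with
# the leader's joint targets at each timestep. Shapes are illustrative.
obs = torch.randn(5000, 32)      # 32-dim observation features (assumed)
actions = torch.randn(5000, 6)   # 6 joint targets for a 6-DoF arm (assumed)

# A small MLP policy that maps observations to joint commands.
policy = nn.Sequential(nn.Linear(32, 128), nn.ReLU(),
                       nn.Linear(128, 128), nn.ReLU(),
                       nn.Linear(128, 6))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(10):
    for i in range(0, len(obs), 256):
        o, a = obs[i:i + 256], actions[i:i + 256]
        loss = nn.functional.mse_loss(policy(o), a)  # imitate the demonstrator
        opt.zero_grad()
        loss.backward()
        opt.step()
```

A policy trained this way only reproduces demonstrated behavior; RL methods layered on top can then improve it beyond the demonstrations, which is why the article frames LfD as a foundation rather than an endpoint.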
| Q-Learning in Reinforcement Learning — Teaching Machines to Learn by … | https://medium.com/@krimatrivedi1/q-lea… | 0 | Dec 24, 2025 16:00 | active |
Q-Learning in Reinforcement Learning — Teaching Machines to Learn by Doing
Description: Imagine this: You're training your dog to fetch. Every time it brings the ball ba...
| Meta Reinforcement Learning: AI’s Next Big Leap | https://medium.com/@theBotGroup/meta-re… | 0 | Dec 24, 2025 16:00 | active |
Meta Reinforcement Learning: AI’s Next Big Leap
URL: https://medium.com/@theBotGroup/meta-reinforcement-learning-ais-next-big-leap-e4c49d552182
Description: You adapt instantly. When you drive a new car, you don't need a million miles of practice. You draw on years...
| Deep Q-Learning in Reinforcement Learning: How Machines Learn, Fail, Adapt, … | https://medium.com/@rizvaanpatel/deep-q… | 0 | Dec 24, 2025 16:00 | active |
Deep Q-Learning in Reinforcement Learning: How Machines Learn, Fail, Adapt, and Become Smarter Than Us
Description: When people hear the term Artificial Intelligence, they o...
| Reinforcement Learning: Teaching AI to Play and Win | https://kawaldeepsingh.medium.com/reinf… | 0 | Dec 24, 2025 16:00 | active |
Reinforcement Learning: Teaching AI to Play and Win
URL: https://kawaldeepsingh.medium.com/reinforcement-learning-teaching-ai-to-play-and-win-e831de1687a3
Description: You've learned supervised learning (prediction and classification) and unsupervised learning (finding patt...
| AgiBot breaks new ground with first real-world deployment of reinforcement … | https://www.naturalnews.com/2025-11-06-… | 1 | Dec 24, 2025 16:00 | active |
AgiBot breaks new ground with first real-world deployment of reinforcement learning in industrial robotics - NaturalNews.com
Description: AgiBot successfully deployed Real-World Reinforcement Learning (RW-RL) in an active manufacturing line with Longcheer Technology. This marks the first industrial application of reinforcement learning in robotics, bridging AI research with real-world production. Traditional robots rely on rigid programming, requiring costly reconfiguration and custom fixtures. AgiBot's RW-RL system enables robots to learn and adapt on the factory floor, acquiring new skills in minutes instead […]
Content:
AgiBot successfully deployed Real-World Reinforcement Learning (RW-RL) in an active manufacturing line with Longcheer Technology. This marks the first industrial application of reinforcement learning in robotics, bridging AI research with real-world production. Traditional robots rely on rigid programming, requiring costly reconfiguration and custom fixtures. AgiBot's RW-RL system enables robots to learn and adapt on the factory floor, acquiring new skills in minutes instead of weeks while maintaining industrial-grade stability. Unlike lab-based RL, AgiBot's system was tested in near-production conditions, proving its industrial readiness. Robots demonstrated resilience against disruptions (temperature shifts, vibrations, misalignment) while maintaining precision assembly. When production models changed, robots retrained in minutes without manual reprogramming. AgiBot plans to expand RW-RL into consumer electronics and automotive manufacturing, focusing on plug-and-play robotic solutions. Their LinkCraft platform (converting human motion into robot actions) and G2 robot (powered by NVIDIA's Jetson Thor T5000) enable real-time AI processing. If scalable, this could usher in the adaptive factory era, where robots continuously learn and optimize without halting operations.
AgiBot, a robotics firm specializing in embodied intelligence, has achieved a major milestone by successfully deploying its Real-World Reinforcement Learning (RW-RL) system in an active manufacturing line with Longcheer Technology. This marks the first industrial-scale application of reinforcement learning in robotics, bridging advanced AI research with real-world productionâa breakthrough that could redefine flexible manufacturing. According to BrightU.AI's Enoch, RL is a type of machine learning where an agent learns to behave in an environment by performing actions and receiving rewards or penalties. The agent's goal is to maximize the cumulative reward over time, learning from its environment through trial and error. This learning process is akin to how humans and animals learn from their surroundings, making RL a powerful tool for solving complex problems in various fields, including robotics, gaming, resource management and more. Traditional industrial robots rely on rigid programming, requiring extensive tuning, costly reconfiguration and custom fixtures for each task. Even advanced "vision + force-control" systems struggle with parameter sensitivity and maintenance complexity. AgiBot's RW-RL system tackles these limitations by allowing robots to learn and adapt directly on the factory floorâacquiring new skills in minutes rather than weeks while maintaining industrial-grade stability. Dr. Jianlan Luo, AgiBot's Chief Scientist, stated that their "system achieves stable, repeatable learning on real machines" closing the gap between academic research and industrial deployment. Key advantages of RW-RL AgiBot highlights three core benefits of its reinforcement learning system: Rapid Deployment â Training time slashed from weeks to minutes. High Adaptability â Robots autonomously compensate for variations like part misalignment while maintaining 100 percent task completion. Flexible Reconfiguration â Production line changes require minimal hardware adjustments, eliminating costly downtime. Unlike lab-based demonstrations, AgiBot's system was validated under near-production conditions, proving its readiness for industrial use. Reinforcement learningâwhere robots optimize performance through trial and errorâhas long been confined to research papers and controlled experiments. AgiBot's breakthrough integrates perception, decision-making and motion control into a unified loop, enabling robots to self-correct in real-time. The Longcheer pilot demonstrated RW-RL's resilience against environmental disruptionsâincluding vibration, temperature shifts and part misalignmentâwhile maintaining precision assembly. When production models changed, the robot retrained in minutes without manual reprogramming, showcasing unprecedented flexibility. The future of adaptive factories AgiBot and Longcheer plan to expand RW-RL into consumer electronics and automotive manufacturing, focusing on modular, plug-and-play robotic solutions that integrate seamlessly with existing systems. The company's LinkCraft platformâwhich converts human motion videos into robot actionsâcomplements this advancement, reducing programming barriers. Meanwhile, AgiBot's G2 robot, powered by NVIDIA's Jetson Thor T5000, suggests that real-time AI processing is enabling this leap forward. While Google's Intrinsic and NVIDIA's Isaac Lab have pioneered reinforcement learning frameworks, AgiBot appears to be the first to deploy RL in live production. 
If scalable, this could herald the adaptive factory era, where robots continuously learn, optimize and evolveâwithout halting operations. As factories face increasing demands for customization and rapid model changes, AgiBot's breakthrough may finally make self-learning robotics a commercial reality. Watch the video below about Chinese startup AgiBot beginning mass production of general-purpose humanoid robots. This video is from the SecureLife channel on Brighteon.com. Sources include: TheRobotReport.com BrightU.ai PRNewswire.com Ubergizmo.com Brighteon.com This marks the first industrial-scale application of reinforcement learning in robotics, bridging advanced AI research with real-world productionâa breakthrough that could redefine flexible manufacturing. According to BrightU.AI's Enoch, RL is a type of machine learning where an agent learns to behave in an environment by performing actions and receiving rewards or penalties. The agent's goal is to maximize the cumulative reward over time, learning from its environment through trial and error. This learning process is akin to how humans and animals learn from their surroundings, making RL a powerful tool for solving complex problems in various fields, including robotics, gaming, resource management and more. Traditional industrial robots rely on rigid programming, requiring extensive tuning, costly reconfiguration and custom fixtures for each task. Even advanced "vision + force-control" systems struggle with parameter sensitivity and maintenance complexity. AgiBot's RW-RL system tackles these limitations by allowing robots to learn and adapt directly on the factory floorâacquiring new skills in minutes rather than weeks while maintaining industrial-grade stability. Dr. Jianlan Luo, AgiBot's Chief Scientist, stated that their "system achieves stable, repeatable learning on real machines" closing the gap between academic research and industrial deployment. Key advantages of RW-RL AgiBot highlights three core benefits of its reinforcement learning system: Rapid Deployment â Training time slashed from weeks to minutes. High Adaptability â Robots autonomously compensate for variations like part misalignment while maintaining 100 percent task completion. Flexible Reconfiguration â Production line changes require minimal hardware adjustments, eliminating costly downtime. Unlike lab-based demonstrations, AgiBot's system was validated under near-production conditions, proving its readiness for industrial use. Reinforcement learningâwhere robots optimize performance through trial and errorâhas long been confined to research papers and controlled experiments. AgiBot's breakthrough integrates perception, decision-making and motion control into a unified loop, enabling robots to self-correct in real-time. The Longcheer pilot demonstrated RW-RL's resilience against environmental disruptionsâincluding vibration, temperature shifts and part misalignmentâwhile maintaining precision assembly. When production models changed, the robot retrained in minutes without manual reprogramming, showcasing unprecedented flexibility. The future of adaptive factories AgiBot and Longcheer plan to expand RW-RL into consumer electronics and automotive manufacturing, focusing on modular, plug-and-play robotic solutions that integrate seamlessly with existing systems. The company's LinkCraft platformâwhich converts human motion videos into robot actionsâcomplements this advancement, reducing programming barriers. 
Meanwhile, AgiBot's G2 robot, powered by NVIDIA's Jetson Thor T5000, suggests that real-time AI processing is enabling this leap forward. While Google's Intrinsic and NVIDIA's Isaac Lab have pioneered reinforcement learning frameworks, AgiBot appears to be the first to deploy RL in live production. If scalable, this could herald the adaptive factory era, where robots continuously learn, optimize and evolveâwithout halting operations. As factories face increasing demands for customization and rapid model changes, AgiBot's breakthrough may finally make self-learning robotics a commercial reality. Watch the video below about Chinese startup AgiBot beginning mass production of general-purpose humanoid robots. This video is from the SecureLife channel on Brighteon.com. Sources include: TheRobotReport.com BrightU.ai PRNewswire.com Ubergizmo.com Brighteon.com This marks the first industrial-scale application of reinforcement learning in robotics, bridging advanced AI research with real-world productionâa breakthrough that could redefine flexible manufacturing. According to BrightU.AI's Enoch, RL is a type of machine learning where an agent learns to behave in an environment by performing actions and receiving rewards or penalties. The agent's goal is to maximize the cumulative reward over time, learning from its environment through trial and error. This learning process is akin to how humans and animals learn from their surroundings, making RL a powerful tool for solving complex problems in various fields, including robotics, gaming, resource management and more. Traditional industrial robots rely on rigid programming, requiring extensive tuning, costly reconfiguration and custom fixtures for each task. Even advanced "vision + force-control" systems struggle with parameter sensitivity and maintenance complexity. AgiBot's RW-RL system tackles these limitations by allowing robots to learn and adapt directly on the factory floorâacquiring new skills in minutes rather than weeks while maintaining industrial-grade stability. Dr. Jianlan Luo, AgiBot's Chief Scientist, stated that their "system achieves stable, repeatable learning on real machines" closing the gap between academic research and industrial deployment. Key advantages of RW-RL AgiBot highlights three core benefits of its reinforcement learning system: Rapid Deployment â Training time slashed from weeks to minutes. High Adaptability â Robots autonomously compensate for variations like part misalignment while maintaining 100 percent task completion. Flexible Reconfiguration â Production line changes require minimal hardware adjustments, eliminating costly downtime. Unlike lab-based demonstrations, AgiBot's system was validated under near-production conditions, proving its readiness for industrial use. Reinforcement learningâwhere robots optimize performance through trial and errorâhas long been confined to research papers and controlled experiments. AgiBot's breakthrough integrates perception, decision-making and motion control into a unified loop, enabling robots to self-correct in real-time. The Longcheer pilot demonstrated RW-RL's resilience against environmental disruptionsâincluding vibration, temperature shifts and part misalignmentâwhile maintaining precision assembly. When production models changed, the robot retrained in minutes without manual reprogramming, showcasing unprecedented flexibility. 
The future of adaptive factories AgiBot and Longcheer plan to expand RW-RL into consumer electronics and automotive manufacturing, focusing on modular, plug-and-play robotic solutions that integrate seamlessly with existing systems. The company's LinkCraft platformâwhich converts human motion videos into robot actionsâcomplements this advancement, reducing programming barriers. Meanwhile, AgiBot's G2 robot, powered by NVIDIA's Jetson Thor T5000, suggests that real-time AI processing is enabling this leap forward. While Google's Intrinsic and NVIDIA's Isaac Lab have pioneered reinforcement learning frameworks, AgiBot appears to be the first to deploy RL in live production. If scalable, this could herald the adaptive factory era, where robots continuously learn, optimize and evolveâwithout halting operations. As factories face increasing demands for customization and rapid model changes, AgiBot's breakthrough may finally make self-learning robotics a commercial reality. Watch the video below about Chinese startup AgiBot beginning mass production of general-purpose humanoid robots. This video is from the SecureLife channel on Brighteon.com. Sources include: TheRobotReport.com BrightU.ai PRNewswire.com Ubergizmo.com Brighteon.com According to BrightU.AI's Enoch, RL is a type of machine learning where an agent learns to behave in an environment by performing actions and receiving rewards or penalties. The agent's goal is to maximize the cumulative reward over time, learning from its environment through trial and error. This learning process is akin to how humans and animals learn from their surroundings, making RL a powerful tool for solving complex problems in various fields, including robotics, gaming, resource management and more. Traditional industrial robots rely on rigid programming, requiring extensive tuning, costly reconfiguration and custom fixtures for each task. Even advanced "vision + force-control" systems struggle with parameter sensitivity and maintenance complexity. AgiBot's RW-RL system tackles these limitations by allowing robots to learn and adapt directly on the factory floorâacquiring new skills in minutes rather than weeks while maintaining industrial-grade stability. Dr. Jianlan Luo, AgiBot's Chief Scientist, stated that their "system achieves stable, repeatable learning on real machines" closing the gap between academic research and industrial deployment. Key advantages of RW-RL AgiBot highlights three core benefits of its reinforcement learning system: Rapid Deployment â Training time slashed from weeks to minutes. High Adaptability â Robots autonomously compensate for variations like part misalignment while maintaining 100 percent task completion. Flexible Reconfiguration â Production line changes require minimal hardware adjustments, eliminating costly downtime. Unlike lab-based demonstrations, AgiBot's system was validated under near-production conditions, proving its readiness for industrial use. Reinforcement learningâwhere robots optimize performance through trial and errorâhas long been confined to research papers and controlled experiments. AgiBot's breakthrough integrates perception, decision-making and motion control into a unified loop, enabling robots to self-correct in real-time. The Longcheer pilot demonstrated RW-RL's resilience against environmental disruptionsâincluding vibration, temperature shifts and part misalignmentâwhile maintaining precision assembly. 
When production models changed, the robot retrained in minutes without manual reprogramming, showcasing unprecedented flexibility. The future of adaptive factories AgiBot and Longcheer plan to expand RW-RL into consumer electronics and automotive manufacturing, focusing on modular, plug-and-play robotic solutions that integrate seamlessly with existing systems. The company's LinkCraft platformâwhich converts human motion videos into robot actionsâcomplements this advancement, reducing programming barriers. Meanwhile, AgiBot's G2 robot, powered by NVIDIA's Jetson Thor T5000, suggests that real-time AI processing is enabling this leap forward. While Google's Intrinsic and NVIDIA's Isaac Lab have pioneered reinforcement learning frameworks, AgiBot appears to be the first to deploy RL in live production. If scalable, this could herald the adaptive factory era, where robots continuously learn, optimize and evolveâwithout halting operations. As factories face increasing demands for customization and rapid model changes, AgiBot's breakthrough may finally make self-learning robotics a commercial reality. Watch the video below about Chinese startup AgiBot beginning mass production of general-purpose humanoid robots. This video is from the SecureLife channel on Brighteon.com. Sources include: TheRobotReport.com BrightU.ai PRNewswire.com Ubergizmo.com Brighteon.com According to BrightU.AI's Enoch, RL is a type of machine learning where an agent learns to behave in an environment by performing actions and receiving rewards or penalties. The agent's goal is to maximize the cumulative reward over time, learning from its environment through trial and error. This learning process is akin to how humans and animals learn from their surroundings, making RL a powerful tool for solving complex problems in various fields, including robotics, gaming, resource management and more. Traditional industrial robots rely on rigid programming, requiring extensive tuning, costly reconfiguration and custom fixtures for each task. Even advanced "vision + force-control" systems struggle with parameter sensitivity and maintenance complexity. AgiBot's RW-RL system tackles these limitations by allowing robots to learn and adapt directly on the factory floorâacquiring new skills in minutes rather than weeks while maintaining industrial-grade stability. Dr. Jianlan Luo, AgiBot's Chief Scientist, stated that their "system achieves stable, repeatable learning on real machines" closing the gap between academic research and industrial deployment. Key advantages of RW-RL AgiBot highlights three core benefits of its reinforcement learning system: Rapid Deployment â Training time slashed from weeks to minutes. High Adaptability â Robots autonomously compensate for variations like part misalignment while maintaining 100 percent task completion. Flexible Reconfiguration â Production line changes require minimal hardware adjustments, eliminating costly downtime. Unlike lab-based demonstrations, AgiBot's system was validated under near-production conditions, proving its readiness for industrial use. Reinforcement learningâwhere robots optimize performance through trial and errorâhas long been confined to research papers and controlled experiments. AgiBot's breakthrough integrates perception, decision-making and motion control into a unified loop, enabling robots to self-correct in real-time. 
The Longcheer pilot demonstrated RW-RL's resilience against environmental disruptionsâincluding vibration, temperature shifts and part misalignmentâwhile maintaining precision assembly. When production models changed, the robot retrained in minutes without manual reprogramming, showcasing unprecedented flexibility. The future of adaptive factories AgiBot and Longcheer plan to expand RW-RL into consumer electronics and automotive manufacturing, focusing on modular, plug-and-play robotic solutions that integrate seamlessly with existing systems. The company's LinkCraft platformâwhich converts human motion videos into robot actionsâcomplements this advancement, reducing programming barriers. Meanwhile, AgiBot's G2 robot, powered by NVIDIA's Jetson Thor T5000, suggests that real-time AI processing is enabling this leap forward. While Google's Intrinsic and NVIDIA's Isaac Lab have pioneered reinforcement learning frameworks, AgiBot appears to be the first to deploy RL in live production. If scalable, this could herald the adaptive factory era, where robots continuously learn, optimize and evolveâwithout halting operations. As factories face increasing demands for customization and rapid model changes, AgiBot's breakthrough may finally make self-learning robotics a commercial reality. Watch the video below about Chinese startup AgiBot beginning mass production of general-purpose humanoid robots. This video is from the SecureLife channel on Brighteon.com. Sources include: TheRobotReport.com BrightU.ai PRNewswire.com Ubergizmo.com Brighteon.com Traditional industrial robots rely on rigid programming, requiring extensive tuning, costly reconfiguration and custom fixtures for each task. Even advanced "vision + force-control" systems struggle with parameter sensitivity and maintenance complexity. AgiBot's RW-RL system tackles these limitations by allowing robots to learn and adapt directly on the factory floorâacquiring new skills in minutes rather than weeks while maintaining industrial-grade stability. Dr. Jianlan Luo, AgiBot's Chief Scientist, stated that their "system achieves stable, repeatable learning on real machines" closing the gap between academic research and industrial deployment. Key advantages of RW-RL AgiBot highlights three core benefits of its reinforcement learning system: Rapid Deployment â Training time slashed from weeks to minutes. High Adaptability â Robots autonomously compensate for variations like part misalignment while maintaining 100 percent task completion. Flexible Reconfiguration â Production line changes require minimal hardware adjustments, eliminating costly downtime. Unlike lab-based demonstrations, AgiBot's system was validated under near-production conditions, proving its readiness for industrial use. Reinforcement learningâwhere robots optimize performance through trial and errorâhas long been confined to research papers and controlled experiments. AgiBot's breakthrough integrates perception, decision-making and motion control into a unified loop, enabling robots to self-correct in real-time. The Longcheer pilot demonstrated RW-RL's resilience against environmental disruptionsâincluding vibration, temperature shifts and part misalignmentâwhile maintaining precision assembly. When production models changed, the robot retrained in minutes without manual reprogramming, showcasing unprecedented flexibility. 
The future of adaptive factories AgiBot and Longcheer plan to expand RW-RL into consumer electronics and automotive manufacturing, focusing on modular, plug-and-play robotic solutions that integrate seamlessly with existing systems. The company's LinkCraft platformâwhich converts human motion videos into robot actionsâcomplements this advancement, reducing programming barriers. Meanwhile, AgiBot's G2 robot, powered by NVIDIA's Jetson Thor T5000, suggests that real-time AI processing is enabling this leap forward. While Google's Intrinsic and NVIDIA's Isaac Lab have pioneered reinforcement learning frameworks, AgiBot appears to be the first to deploy RL in live production. If scalable, this could herald the adaptive factory era, where robots continuously learn, optimize and evolveâwithout halting operations. As factories face increasing demands for customization and rapid model changes, AgiBot's breakthrough may finally make self-learning robotics a commercial reality. Watch the video below about Chinese startup AgiBot beginning mass production of general-purpose humanoid robots. This video is from the SecureLife channel on Brighteon.com. Sources include: TheRobotReport.com BrightU.ai PRNewswire.com Ubergizmo.com Brighteon.com Traditional industrial robots rely on rigid programming, requiring extensive tuning, costly reconfiguration and custom fixtures for each task. Even advanced "vision + force-control" systems struggle with parameter sensitivity and maintenance complexity. AgiBot's RW-RL system tackles these limitations by allowing robots to learn and adapt directly on the factory floorâacquiring new skills in minutes rather than weeks while maintaining industrial-grade stability. Dr. Jianlan Luo, AgiBot's Chief Scientist, stated that their "system achieves stable, repeatable learning on real machines" closing the gap between academic research and industrial deployment. Key advantages of RW-RL AgiBot highlights three core benefits of its reinforcement learning system: Rapid Deployment â Training time slashed from weeks to minutes. High Adaptability â Robots autonomously compensate for variations like part misalignment while maintaining 100 percent task completion. Flexible Reconfiguration â Production line changes require minimal hardware adjustments, eliminating costly downtime. Unlike lab-based demonstrations, AgiBot's system was validated under near-production conditions, proving its readiness for industrial use. Reinforcement learningâwhere robots optimize performance through trial and errorâhas long been confined to research papers and controlled experiments. AgiBot's breakthrough integrates perception, decision-making and motion control into a unified loop, enabling robots to self-correct in real-time. The Longcheer pilot demonstrated RW-RL's resilience against environmental disruptionsâincluding vibration, temperature shifts and part misalignmentâwhile maintaining precision assembly. When production models changed, the robot retrained in minutes without manual reprogramming, showcasing unprecedented flexibility. The future of adaptive factories AgiBot and Longcheer plan to expand RW-RL into consumer electronics and automotive manufacturing, focusing on modular, plug-and-play robotic solutions that integrate seamlessly with existing systems. The company's LinkCraft platformâwhich converts human motion videos into robot actionsâcomplements this advancement, reducing programming barriers. 
Images (1):
|
|||||
| 🌟 Deep Reinforcement Learning: From Pixels to Superhuman Intelligence | https://medium.com/@bilalqadeer/deep-re… | 0 | Dec 24, 2025 16:00 | active | |
🌟 Deep Reinforcement Learning: From Pixels to Superhuman IntelligenceDescription: Imagine an AI that learns to master chess without human instruction, navigate a maze it’s never seen before, or control a robotic arm with the grace of a huma... Content: |
|||||
| AgiBot's AI Leap: Self-Learning Robots Storm China's Factories | https://www.webpronews.com/agibots-ai-l… | 1 | Dec 24, 2025 16:00 | active | |
AgiBot's AI Leap: Self-Learning Robots Storm China's FactoriesURL: https://www.webpronews.com/agibots-ai-leap-self-learning-robots-storm-chinas-factories/ Description: Keywords Content:
SHANGHAI—In a bustling factory on the outskirts of Shanghai, a new breed of robot is quietly revolutionizing manufacturing. AgiBot, a Chinese startup founded by former Huawei engineer Peng Zhihui, is deploying AI-powered robots that learn tasks through reinforcement learning, marking a significant shift in industrial automation. This technology allows machines to adapt and improve on the fly, potentially transforming labor-intensive industries. According to a recent article in WIRED, AgiBot employs a unique training method combining AI algorithms with human teleoperation. Workers remotely control robots to perform tasks, generating data that trains AI models. This approach has enabled AgiBot to achieve what many consider a breakthrough: the first real-world deployment of reinforcement learning in industrial settings. From Huawei to Robotics Pioneer Peng Zhihui, a graduate of the University of Electronic Science and Technology of China, gained fame for inventions like a self-driving bicycle and an Iron Man-inspired robotic arm, as detailed on Wikipedia. He joined Huawei in 2020, earning a high salary, but left in December 2022 to launch AgiBot in February 2023. Backed by investors including HongShan, Hillhouse Investment, and BYD, the company quickly established a manufacturing facility in Shanghai by January 2024. By August 2024, AgiBot’s factory began deliveries, shipping 200 bipedal and 100 wheeled robots by year’s end, according to reports from The Paper. This rapid scaling underscores China’s aggressive push in AI-robotics integration, with AgiBot emerging as a key player. Mass Production Milestones Recent posts on X highlight AgiBot’s mass production of general-purpose humanoid robots for factories, labs, and homes. One post from user tphuang notes the production of wheeled robots with adjustable bodies, emphasizing data collection through showrooms. Another from The Humanoid Hub describes the startup’s progress since 2023, including mass production of 1,000 units earlier this year and plans for 5,000 more. Disclose.tv on X reported the beginning of mass production for humanoid robots, while RT mentioned launches for warehouse and store applications. These developments align with AgiBot’s official site, which lists products like AgiBot A2, X/D1, Genie, and C5, designed for diverse tasks. Breakthrough in Reinforcement Learning AgiBot achieved a historic milestone with the first real-world deployment of reinforcement learning in industrial robotics, as announced in a press release covered by RoboticsTomorrow. Jianlan Luo, AgiBot’s Chief Scientist, explained that the system integrates advanced algorithms with hardware, enabling stable learning on physical robots. This was demonstrated on a pilot production line with Longcheer Technology. The collaboration plans to expand to precision manufacturing in consumer electronics and automotive components, focusing on modular, deployable solutions. According to The Robot Report, robots can learn new skills in minutes on the factory floor, bridging academic research and industrial application. Unleashing Massive Datasets In December 2024, AgiBot released the largest humanoid manipulation dataset, AgiBot World, with over 1 million trajectories from 100 robots, as reported by PR Newswire and The Robot Report. This dataset enables large-scale learning, paving the way for general-purpose robots in everyday life. The diversity and complexity of the data support training for tasks like precision nailing, as shown in WIRED’s coverage. 
Schneider from WIRED notes that other companies are exploring similar reinforcement learning for manufacturing, but AgiBot’s approach stands out in China’s booming AI-robotics scene. Deployments and Partnerships AgiBot secured a deal to deploy 100 robots at car parts factories, with A2-W models meeting monthly production targets in a single shift, according to the South China Morning Post. Additionally, CyberRobo on X mentioned deployments in commercial scenarios like shopping malls and auto dealerships, with mass production scaling to 5,000 units. Recent news from Ubergizmo and Gizmochina emphasizes AgiBot’s reinforcement learning deployment, allowing robots to learn tasks in minutes rather than weeks. This self-learning capability could redefine manufacturing, as Eyisha Zyer’s X post suggests. Global Implications and US Comparisons WIRED highlights that AgiBot’s technology may be crucial for US companies aiming to reshore manufacturing. US startups like Physical Intelligence and Skild are developing similar robo-learning algorithms, spun out from UC Berkeley and Carnegie Mellon research. However, AgiBot’s integration of AI models for humanoids and fixed robot arms positions it ahead in practical deployments. A post from EyeingAI on X calls this the moment embodied AI powers the real world, while Alif Hossain notes robots learning tasks in minutes without human intervention after initial training. Innovative Platforms and Future Visions AgiBot unveiled LinkCraft, a robot content creation platform allowing users to upload human movement videos for humanoid mimicry, as reported by The Information. This tool democratizes robot programming, requiring no expertise. At IROS 2025, AgiBot debuted and concluded the AgiBot World Challenge, showcasing advancements, per PR Newswire. WIRED Science on X notes how smarter machines could transform physical labor in China, with AgiBot leading the charge. Challenges and Industry Sentiment Despite successes, scaling reinforcement learning poses challenges, including data quality and hardware integration. Industrial Automation Magazine on X reported on the deployment, emphasizing its groundbreaking nature. Current sentiment on X, from users like tphuang, questions if the market will be dominated by one player like DJI in drones. AgiBot’s factory aims for robots cheaper than family cars—under $20,000—at scale, potentially accelerating adoption. Pushing Boundaries in AI Robotics AgiBot’s blend of reinforcement learning and human-assisted training addresses longstanding robotics hurdles, enabling adaptability in dynamic environments. As WIRED describes, a ‘small army of workers’ aids in data generation, combining human intuition with AI efficiency. Looking ahead, expansions with Longcheer signal broader applications, from electronics to automotive. This positions AgiBot at the forefront of a manufacturing renaissance, where AI not only automates but evolves with the tasks at hand.
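The teleoperation-to-training-data pipeline described above (workers remotely drive robots, and the logged trajectories train AI models) amounts to behavior cloning. The sketch below is a hypothetical illustration of that idea, not AgiBot's actual pipeline; the data generator and the linear policy are both invented for the example.

```python
# Hypothetical behavior cloning on teleoperation logs. All names and shapes
# here are invented; AgiBot's real pipeline is not public at this level.
import numpy as np

def collect_teleop_logs(n=1000, rng=np.random.default_rng(0)):
    """Fake 'human teleoperation' data: observations plus the operator's
    commands (here, a noisy linear corrective policy stands in for a human)."""
    obs = rng.uniform(-1, 1, size=(n, 4))          # e.g. pose-error features
    true_gains = np.array([0.9, -0.3, 0.5, 0.1])   # what the operator 'knows'
    actions = obs @ true_gains + 0.05 * rng.standard_normal(n)
    return obs, actions

def behavior_clone(obs, actions):
    """Fit a linear policy that imitates the operator (least squares)."""
    weights, *_ = np.linalg.lstsq(obs, actions, rcond=None)
    return weights

obs, actions = collect_teleop_logs()
policy = behavior_clone(obs, actions)
print(policy)   # recovers roughly [0.9, -0.3, 0.5, 0.1]
```

In practice the linear fit would be a deep network and the logs would be camera frames and joint states, with reinforcement learning then refining the cloned policy on the robot itself.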
Images (1):
|
|||||
| “5 Ways Reinforcement Learning Is Quietly Powering the Robots Around … | https://blog.stackademic.com/5-ways-rei… | 0 | Dec 24, 2025 16:00 | active | |
“5 Ways Reinforcement Learning Is Quietly Powering the Robots Around You”Description: Reinforcement learning (RL) is turning robots into active learners. Instead of writing code for every move, we let robots learn by trial and error — a bit lik... Content: |
|||||
| How Robotic Drummers Use Reinforcement Learning to Achieve Human-Like Rhythm | https://medium.com/@meisshaily/how-robo… | 0 | Dec 24, 2025 16:00 | active | |
How Robotic Drummers Use Reinforcement Learning to Achieve Human-Like RhythmDescription: How Robotic Drummers Use Reinforcement Learning to Achieve Human-Like Rhythm Exploring AI Innovations, Real-World Applications, and Ethical Challenges in Music ... Content: |
|||||
| Reinforcement Learning in 2025 | https://medium.com/@meisshaily/reinforc… | 0 | Dec 24, 2025 16:00 | active | |
Reinforcement Learning in 2025URL: https://medium.com/@meisshaily/reinforcement-learning-in-2025-b25e6b16b7d6 Description: Discover how reinforcement learning in 2025 is transforming AI across industries like robotics, healthcare, and finance with cutting-edge algorithms and tech. Content: |
|||||
| How Reinforcement Learning Trains Quadruped Robots Like Spot - Geeky … | https://www.geeky-gadgets.com/quadruped… | 1 | Dec 24, 2025 16:00 | active | |
How Reinforcement Learning Trains Quadruped Robots Like Spot - Geeky GadgetsURL: https://www.geeky-gadgets.com/quadruped-robot-smarter-and-more-agile/ Description: Discover how reinforcement learning is transforming quadruped robots like Spot into agile, adaptable tools for real-world applications. Content:
10:14 am August 29, 2025 By Julian Horsey What if robots could learn to adapt to their surroundings as effortlessly as humans do? The rise of quadruped robots, like Boston Dynamics’ Spot, is turning this vision into reality. By integrating reinforcement learning (RL), a machine learning technique in which behavior is improved through trial and error, these robots are not only mastering practical tasks like industrial inspections but also pushing the boundaries of agility and resilience. Imagine a robot navigating a hazardous construction site, climbing uneven stairs, or recovering gracefully after a slip, all without human intervention. This isn’t science fiction; it’s the result of advanced programming and iterative development that’s redefining what robotics can achieve in real-world applications. Boston Dynamics provides more insight into the role of RL in enhancing the behavior and performance of quadruped robots. From the meticulous programming that balances practicality with high-performance maneuvers to the use of simulation environments for perfecting their adaptability, we’ll uncover how Spot is evolving into a versatile tool for industries and beyond. You’ll discover how hardware optimization, robustness testing, and iterative debugging contribute to Spot’s reliability in unpredictable conditions. By the end, you’ll see how these advancements are not just improving robotics but reshaping the future of automation itself, one step, leap, or backflip at a time. TL;DR Key Takeaways: Spot’s programming is carefully designed to address two primary objectives: executing practical tasks for real-world applications and performing extreme maneuvers that push its operational boundaries. This dual focus ensures that Spot is both a reliable tool for everyday tasks and a robust performer in extreme conditions. The development of Spot relies heavily on simulation modeling and hardware optimization, which together create a smarter and more efficient robot. By combining advanced simulation techniques with precise hardware refinement, Spot achieves seamless integration between its software intelligence and physical capabilities. Watch this video on YouTube. Spot’s reliability is the result of a rigorous development process that emphasizes iterative debugging and comprehensive robustness testing. This meticulous process guarantees that Spot is prepared to handle the diverse demands of real-world applications, making it a dependable asset in various industries. Spot’s versatility and adaptability make it a valuable resource across a wide range of industrial sectors, where precision and reliability are paramount. Spot’s ability to adapt to diverse scenarios underscores its potential to transform workflows, improve safety, and enhance productivity across multiple domains. The ongoing advancements in reinforcement learning and hardware engineering are paving the way for a new era in quadruped robotics. As these technologies continue to evolve, robots like Spot are expected to become even more capable, addressing challenges that were once considered insurmountable. From conducting industrial inspections to participating in creative and innovative projects, the potential applications for quadruped robots are vast and varied. By combining innovative machine learning techniques with robust engineering, quadruped robots are poised to play a significant role across industries. 
Their ability to perform complex tasks with precision, adapt to changing conditions, and operate in challenging environments positions them as key contributors to the future of automation and innovation. As these machines become more advanced, they will redefine what is possible in both practical and creative domains, ushering in a new era of technological progress. Media Credit: Boston Dynamics
Images (1):
|
|||||
| What is Reinforcement Learning? | https://leonidasgorgo.medium.com/what-i… | 0 | Dec 24, 2025 16:00 | active | |
What is Reinforcement Learning?URL: https://leonidasgorgo.medium.com/what-is-reinforcement-learning-1345c77aff6f Description: Reinforcement Learning (RL) is a type of machine learning where an agent learns to make decisions by interacting with an environment. The goal of the agent is t... Content: |
|||||
| Robots Learn to Sculpt Sand Using Reinforcement Learning - 3D … | https://3dprintingindustry.com/news/rob… | 1 | Dec 24, 2025 16:00 | active | |
Robots Learn to Sculpt Sand Using Reinforcement Learning - 3D Printing IndustryURL: https://3dprintingindustry.com/news/robots-learn-to-sculpt-sand-using-reinforcement-learning-244376/ Description: A study published on arXiv details how researchers at the University of Bonn have developed a reinforcement learning framework that enables robots to manipulate granular media such as sand into target shapes. The system trains a robotic arm with a cubic end-effector and a stereo camera to reshape loose material into forms including rectangles, L-shapes, […] Content:
A study published on arXiv details how researchers at the University of Bonn have developed a reinforcement learning framework that enables robots to manipulate granular media such as sand into target shapes. The system trains a robotic arm with a cubic end-effector and a stereo camera to reshape loose material into forms including rectangles, L-shapes, polygons, and negatives of archaeological fresco fragments. Experiments showed millimeter-level accuracy, with the trained agent outperforming two baseline approaches and transferring successfully from simulation to a physical robot without additional training. Granular materials pose difficulties for robotics because of their high-dimensional configuration space and unstable dynamics. Rule-based approaches often fail, while particle simulations are computationally expensive. Researchers addressed these challenges by designing compact observation spaces and reward functions that guided learning. Visual policies were trained using Truncated Quantile Critics (TQC), an off-policy reinforcement learning algorithm. Depth images from a ZED 2i stereo camera were converted into height maps, allowing the robot to compare current and goal structures in a form suitable for efficient training. The system was evaluated against a random policy and a Boustrophedon Coverage Path Planning baseline. Across 400 goal shapes, the learned agent consistently outperformed both methods. Using the delta reward (DELTA) formulation, the robot achieved a mean height difference of 3.4 millimeters compared with 4.8 millimeters for the planning method and 7.2 millimeters for random motion. Execution time was shorter as well, averaging 23.5 steps versus 44 for the path planning baseline. The agent also modified 97 percent of relevant cells in the goal area, compared with 54 percent for random motion. Execution steps were defined as the number of actions until the end-effector left the granular medium for three consecutive steps. Statistical testing confirmed that the DELTA policy significantly outperformed all alternatives. The project involved the Humanoid Robots Lab, the Autonomous Intelligent Systems Lab, and the Center for Robotics at the University of Bonn, working with the Lamarr Institute for Machine Learning and Artificial Intelligence. Funding came from the European Commission’s RePAIR program under Horizon 2020 and from Germany’s Federal Ministry of Education and Research through the Robotics Institute Germany initiative. Further experiments examined design choices. When the goal-area movement reward was removed, agents avoided manipulation behaviors entirely, performing no better than random baselines. Feature extractor ablations showed that the proposed gating-based encoder achieved the best performance, with an average error of 3.4 millimeters compared with 4.6 millimeters when relying directly on depth images. Algorithm comparisons confirmed that TQC achieved stable convergence, whereas Soft Actor-Critic lagged and Twin Delayed Deep Deterministic Policy Gradient failed to converge. A supplementary site linked in the paper provides additional details, videos, and code. Deployment on a UR5e robotic arm validated the approach outside simulation. Despite sensor noise and an uneven starting surface, the robot reproduced target shapes such as rectangles with results similar to those seen in simulation. The ability to transfer directly from synthetic training environments to real-world execution demonstrated the robustness of the framework. 
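The paper's observation and reward design (depth frames reduced to coarse height maps, scored against a goal map) can be sketched as follows. This is a guess at the general shape rather than the authors' released code, which is linked from their supplementary site; the grid size, camera height, and pooling scheme are invented stand-ins.

```python
# Illustrative height-map pipeline in the spirit of the Bonn setup. The
# constants (camera height, 64x64 grid) are invented for the example.
import numpy as np

def depth_to_height_map(depth, camera_height=0.6, grid=(64, 64)):
    """Convert a depth image (meters from camera) into a top-down height map."""
    height = camera_height - depth               # taller sand -> smaller depth
    h, w = height.shape
    gh, gw = grid
    # Block-average into a coarse grid: the compact observation the paper favors.
    return height[: h - h % gh, : w - w % gw] \
        .reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))

def mismatch(current, goal):
    """Mean absolute height difference (the paper reports this in millimeters)."""
    return np.abs(current - goal).mean()

def delta_reward(prev_map, new_map, goal_map):
    """The DELTA idea: positive when the last action reduced goal mismatch."""
    return mismatch(prev_map, goal_map) - mismatch(new_map, goal_map)
```

Rewarding the change in mismatch rather than its absolute value gives the agent a dense signal at every step, which is consistent with the paper's finding that reward formulation strongly affects whether manipulation behavior emerges at all.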
Research into granular media manipulation spans excavation, grading, and extraterrestrial soil handling. Many approaches depend on computationally demanding finite or discrete element simulations or on imitation learning pipelines tailored to specific tasks. By combining efficient height map representations with carefully designed reward formulations, the Bonn team demonstrated that reinforcement learning can adaptively shape granular media without handcrafted rules. The authors conclude that their method consistently outperforms traditional baselines and establishes a viable route for adaptive robotic manipulation of deformable materials. Featured image shows the training process employed to enable agents to manipulate granular media using sensory inputs. Image via University of Bonn.
Images (1):
|
|||||
| Reinforcement Learning for Controlling Cable Driven Parallel Robots - Université … | https://hal.univ-lorraine.fr/tel-050725… | 1 | Dec 24, 2025 16:00 | active | |
Reinforcement Learning for Controlling Cable Driven Parallel Robots - Université de LorraineURL: https://hal.univ-lorraine.fr/tel-05072595v1 Description: In this thesis, the use of reinforcement learning for controlling cable-driven parallel robots is investigated. This class of robots is known for its complex dynamics and the nonlinearity of the system, which offers an interesting environment for the implementation of reinforcement learning algorithms. These algorithms require a lot of data to learn the optimal policy, which is not always available in real-world applications. To overcome this issue, we propose a sim-to-real approach. First, the Newton-Euler equation of the robot is used to derive its dynamic model, and by setting the parameters to the real values of the robot, we validated the model by comparing simulation results with real data. To ensure high precision of the simulation data and reduced execution time, the model was implemented in Matlab/Simulink and then converted to a C++ library for easier integration with the gym environment in Python. Additionally, to learn the optimal policy using reinforcement learning, the objective of the controller must be specified. As most use cases of cable-driven parallel robots can be summarized as a trajectory-tracking problem, a reward function aligned with this objective was designed, along with a process for generating target trajectories. In addition, a limitation on the action space was introduced to keep cable tensions within their limits during training. These key components, together with the best-known reinforcement learning algorithms for continuous spaces (DDPG, PPO, and SAC), make up a full-fledged training platform for generating reinforcement-learning-based controllers for cable-driven parallel robots. A comparison of the three algorithms during training and of the performance of the trained controllers was conducted. A side-by-side evaluation of the reinforcement learning controller against a PID-based controller developed for insect-tracking purposes was also performed, comparing aspects such as tracking error, energy consumption, and controller robustness. One of the main challenges of this work is the transition to a different configuration of the robot: because the trained policy is specific to one configuration, a new training process is required for each new configuration. To overcome this issue, a new method for learning an actuator-level policy was developed and compared with the conventional policy. Finally, the trained controller was tested on the robot to verify the transferability of the policy from simulation to the real world. Content:
In this thesis, the use of reinforcement learning for controlling cable-driven parallel robots is investigated. This class of robots is known for its complex dynamics and the nonlinearity of the system, which offers an interesting environment for the implementation of reinforcement learning algorithms. These algorithms require a lot of data to learn the optimal policy, which is not always available in real-world applications. To overcome this issue, we propose a sim-to-real approach. First, the Newton-Euler equation of the robot is used to derive its dynamic model, and by setting the parameters to the real values of the robot, we validated the model by comparing simulation results with real data. To ensure high precision of the simulation data and reduced execution time, the model was implemented in Matlab/Simulink and then converted to a C++ library for easier integration with the gym environment in Python. Additionally, to learn the optimal policy using reinforcement learning, the objective of the controller must be specified. As most use cases of cable-driven parallel robots can be summarized as a trajectory-tracking problem, a reward function aligned with this objective was designed, along with a process for generating target trajectories. In addition, a limitation on the action space was introduced to keep cable tensions within their limits during training. These key components, together with the best-known reinforcement learning algorithms for continuous spaces (DDPG, PPO, and SAC), make up a full-fledged training platform for generating reinforcement-learning-based controllers for cable-driven parallel robots. A comparison of the three algorithms during training and of the performance of the trained controllers was conducted. A side-by-side evaluation of the reinforcement learning controller against a PID-based controller developed for insect-tracking purposes was also performed, comparing aspects such as tracking error, energy consumption, and controller robustness. One of the main challenges of this work is the transition to a different configuration of the robot: because the trained policy is specific to one configuration, a new training process is required for each new configuration. To overcome this issue, a new method for learning an actuator-level policy was developed and compared with the conventional policy. Finally, the trained controller was tested on the robot to verify the transferability of the policy from simulation to the real world.
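The training platform the abstract describes (a gym-style environment wrapping the robot model, an action space bounded so cable tensions stay within limits, and a trajectory-tracking reward) might look roughly like the sketch below. The dynamics here are a trivial placeholder, not the Newton-Euler model the thesis compiles from Matlab/Simulink, and the tension bounds and trajectory are invented.

```python
# Sketch of a tension-bounded, trajectory-tracking gym environment. The
# dynamics and constants are placeholders, not the thesis's validated model.
import numpy as np
import gymnasium as gym
from gymnasium import spaces

T_MIN, T_MAX = 5.0, 120.0   # invented cable-tension bounds, in newtons

class CableRobotEnv(gym.Env):
    def __init__(self, n_cables=4):
        # Bounded action space enforces the tension limits during training.
        self.action_space = spaces.Box(T_MIN, T_MAX, shape=(n_cables,))
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(4,))
        self.t = 0

    def _target(self, t):
        """Generated reference trajectory for the end-effector (a circle)."""
        return np.array([np.cos(0.05 * t), np.sin(0.05 * t)])

    def _obs(self):
        return np.concatenate([self.pos, self._target(self.t)]).astype(np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t, self.pos = 0, np.zeros(2)
        return self._obs(), {}

    def step(self, tensions):
        # Placeholder dynamics: net planar force from two opposing cable pairs.
        force = np.array([tensions[0] - tensions[1], tensions[2] - tensions[3]])
        self.pos = self.pos + 1e-3 * force
        self.t += 1
        err = np.linalg.norm(self.pos - self._target(self.t))
        reward = -err   # trajectory-tracking reward, as the abstract describes
        return self._obs(), reward, False, self.t >= 500, {}
```

Any of the continuous-control algorithms the thesis compares (DDPG, PPO, or SAC, for instance via Stable-Baselines3) could then be trained against an environment of this shape.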
Images (1):
|
|||||
| AI Robots Master Complex Tasks via Reinforcement Learning Breakthrough | https://www.webpronews.com/ai-robots-ma… | 1 | Dec 24, 2025 16:00 | active | |
AI Robots Master Complex Tasks via Reinforcement Learning BreakthroughURL: https://www.webpronews.com/ai-robots-master-complex-tasks-via-reinforcement-learning-breakthrough/ Description: Keywords Content:
In the rapidly evolving field of robotics, a groundbreaking advancement is reshaping how machines learn to interact with the world. Researchers have unveiled a new system that integrates reinforcement learning with advanced robotic vision, enabling robots to master complex manipulation tasks with far less reliance on human-provided demonstration data. The system not only accelerates learning but also allows robots to go beyond their initial training, discovering more efficient movement patterns that humans might not have anticipated. At the core of this breakthrough is a sophisticated algorithm that rewards robots for successful actions while penalizing failures, all processed through visual inputs. By simulating trial and error in real time, the system scales up vision-action skills, turning raw pixel data into precise motor commands. This approach marks a significant leap from traditional methods that demand extensive pre-programmed examples, potentially revolutionizing industries from manufacturing to healthcare. Unlocking Autonomous Discovery in Robotics Recent reports highlight how this technology empowers robots to explore uncharted territories of motion. For instance, in tasks like grasping irregular objects or navigating cluttered environments, the system doesn’t just mimic; it evolves. According to an article from Quantum Zeitgeist, the integration of reinforcement learning with vision allows for “learning complex manipulation tasks with less human demonstration data and even discovers new, more efficient movement patterns beyond those it was initially taught.” This self-improvement capability is akin to how animals adapt in the wild, but engineered for mechanical precision. Industry insiders note that such advancements address long-standing bottlenecks in robotics, where data scarcity has hindered scalability. By minimizing the need for human oversight, this method could democratize robot deployment in small-scale operations, from warehouse automation to personalized assistive devices. Insights from Recent Academic and Industry Developments Building on this, a study published in the International Journal of Robotics Research, as detailed in a 2021 review by Tengteng Zhang and Hongwei Mo in SAGE Journals, underscores the potential of reinforcement learning to endow robots with “humanoid perception and decision-making wisdom.” Fast-forward to today, and we’re seeing practical applications emerge. A recent piece from TechXplore describes work at UC Berkeley where AI-driven robots learn tasks faster with human feedback, stacking Jenga blocks with a single limb—demonstrating how vision-guided reinforcement learning handles delicate, real-world interactions. Moreover, posts on X (formerly Twitter) from robotics experts like Russell Mendonca reveal ongoing excitement, with one noting that reinforcement learning enables robots to “learn skills via real-world practice, without any demonstrations or simulation engineering,” using language and vision models for rewards. This sentiment echoes broader innovations, such as Google DeepMind’s framework for coordinating multiple robot arms without collisions, as reported in Science Robotics, where up to 40 tasks run simultaneously in crowded spaces. Challenges and Future Implications for Scalability Yet, scaling these vision-action skills isn’t without hurdles. Training in dynamic environments requires immense computational power, and ensuring safety in unpredictable settings remains a priority. 
As outlined in a 2018 paper from Proceedings of Machine Learning Research on scalable deep reinforcement learning for vision-based manipulation, the key lies in balancing exploration with exploitation to avoid catastrophic failures during learning. For industry leaders, this breakthrough signals a shift toward more adaptive systems. Imagine assembly lines where robots self-optimize workflows, reducing downtime and costs. A Neuroscience News article from two weeks ago highlights robots integrating sight and touch for human-like object handling, further amplified by reinforcement learning’s trial-and-error ethos. Bridging Theory to Real-World Deployment Experts predict this will accelerate adoption in sectors like logistics and elder care. A post on X by AK discusses “RoboGen,” a generative simulation approach for learning diverse skills at scale, pointing to infinite data generation as a game-changer. Similarly, a DVIDS news release from six days ago reports the U.S. Naval Research Laboratory’s successful reinforcement learning control of a free-flyer in space, extending these principles beyond Earth. As these technologies mature, ethical considerations loom—ensuring equitable access and mitigating job displacement. Still, the fusion of vision and action through reinforcement learning promises a future where robots aren’t just tools, but intelligent partners, continually evolving to meet human needs. With ongoing research from institutions like Carnegie Mellon University, as referenced in their 2013 publication, the trajectory is clear: robotics is entering an era of unprecedented autonomy.
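As a rough illustration of the reward-driven, trial-and-error loop described above, the sketch below trains a tabular Q-learning agent on a toy "visual" gridworld whose observation is a pixel array. This is a didactic stand-in, not the published system: the grid, reward values, and hyperparameters are all invented for the example.

```python
# Didactic sketch of trial-and-error learning from pixel observations:
# tabular Q-learning on a toy gridworld whose state is rendered as pixels.
# All environment details and hyperparameters here are illustrative only.
import numpy as np

SIZE, GOAL = 5, (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def render(pos):
    """Render the agent position as a SIZE x SIZE pixel image (the 'vision' input)."""
    img = np.zeros((SIZE, SIZE), dtype=np.float32)
    img[pos] = 1.0
    return img

def obs_key(img):
    """Collapse the pixel observation into a hashable key for the Q-table."""
    return img.tobytes()

rng = np.random.default_rng(0)
Q = {}
alpha, gamma, eps = 0.5, 0.95, 0.1

for episode in range(500):
    pos = (0, 0)
    for step in range(50):
        s = obs_key(render(pos))
        q = Q.setdefault(s, np.zeros(len(ACTIONS)))
        # Epsilon-greedy: balance exploration against exploitation.
        a = int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(q.argmax())
        nxt = (min(max(pos[0] + ACTIONS[a][0], 0), SIZE - 1),
               min(max(pos[1] + ACTIONS[a][1], 0), SIZE - 1))
        # Reward successful actions (reaching the goal), penalize wasted steps.
        r = 1.0 if nxt == GOAL else -0.01
        q2 = Q.setdefault(obs_key(render(nxt)), np.zeros(len(ACTIONS)))
        q[a] += alpha * (r + gamma * q2.max() - q[a])  # temporal-difference update
        pos = nxt
        if pos == GOAL:
            break
```

Real vision-action systems replace the Q-table with a convolutional policy network and the gridworld with physical or simulated manipulation, but the reward-and-penalty feedback loop is the same.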
Images (1):
|
|||||
| Advantech Announces Next-Gen Robotics Development Kit with NVIDIA Jetson Thor … | https://moneycompass.com.my/advantech-a… | 1 | Dec 24, 2025 08:00 | active | |
Advantech Announces Next-Gen Robotics Development Kit with NVIDIA Jetson Thor with Holoscan Built In - Money CompassDescription: Money Compass is one of the credible Chinese and English financial media in Malaysia with strong influence in Malaysia’s financial industry. As the winner of the SME Award in Malaysia for 5 consecutive years, we persistently propel the financial industry towards a mutually beneficial framework. Since 2004, with the dedication to advocating the public to practice financial planning in everyday life, Money Compass has accumulated a vast connection in ASEAN financial industries and garnered government agencies and corporate resources. At present, Money Compass is adjusting its pace to transform into Money Compass 2.0. Consolidating the existing connections and network, Money Compass Integrated Media Platform is founded, which is well grounded in Malaysia whilst serving the ASEAN region. The mission of the new Money Compass Integrated Media Platform is to become the financial freedom gateway to assist internet users enhance financial intelligence, create wealth opportunities and achieve financial freedom for everyone! Content:
TAIPEI, Aug. 25, 2025 /PRNewswire/ — Advantech, a global leader in industrial edge computing, is proud to announce the launch of the Advantech MIC-742-AT Robotics Development Kit accelerated by NVIDIA Jetson Thor modules with NVIDIA Holoscan platform support. Advantech MIC-742-AT Robotics Development Kit Designed for next-generation robotics and physical AI applications, the platform combines NVIDIA Jetson Thor’s ultra-high AI compute power with built-in support for the NVIDIA Holoscan platform—enabling robots to sense, understand, and act with real-time reasoning and ultra-low-latency sensor processing at the edge. Next-Level AI Compute for Robotics Accelerated by the NVIDIA Jetson Thor module, the Advantech MIC-742-AT delivers up to 2,070 FP4 TFLOPS AI performance with 128GB high-bandwidth LPDDR5X memory. Designed for industrial-grade reliability, it offers a total system power consumption of just 150W, wide operating temperature support from -10°C to 60°C, and a compact form factor for space-constrained deployments. The platform also features 8-channel GMSL 2.0 support, enabling high-speed, low-latency sensor connectivity for advanced perception systems. These capabilities make it ideal for humanoid robots, autonomous mobile robots (AMRs), surgical robotics, and other high-performance edge AI applications. Future-Ready Sensor Fusion and Integration Aligning with the industry trend toward sensor standardization and Ethernet packet-based transmission, Advantech is adopting the NVIDIA Holoscan platform with NVIDIA GPUDirect RDMA, allowing sensor data to stream directly into GPU memory—bypassing CPU bottlenecks and reducing latency—to accelerate AI inference and real-time decision-making. For developers requiring real-time multimodal sensor synchronization, Advantech offers the optional MIC-FG-HSBA1, powered by Holoscan Sensor Bridge, and EKI-2712X-SPE switch accessories. These enable low-latency, high-bandwidth, sub-millisecond alignment of data from multiple sensors—such as stereo cameras, Lidar, radar, ultrasound, and IMUs—ensuring AI models operate on precisely timed, multimodal data for more reliable perception and decision-making in robotics. Availability The Advantech MIC-742-AT Jetson Thor Robotics Development Kit will officially launch in September 2025. For full specifications, please visit the product page on the Advantech website. Advantech Holoscan Sensor Bridge (MIC-FG-HSBA1) and Holoscan Switch (EKI-2712X-SPE) accessories will be offered as add-on modules for advanced robotics applications. About Advantech: Advantech’s corporate vision is to enable an intelligent planet. The company is a global leader in the fields of IoT intelligent systems and embedded platforms. To embrace the trends of IoT, big data, and artificial intelligence, Advantech promotes IoT hardware and software solutions with the Edge Intelligence WISE-PaaS core to assist business partners and clients in connecting their industrial chains. Advantech is also working with business partners to co-create business ecosystems that accelerate the goal of industrial intelligence. (www.advantech.com)
Images (1):
|
|||||
| NVIDIA and US Manufacturing and Robotics Leaders Drive America’s Reindustrialization … | https://www.manilatimes.net/2025/10/29/… | 0 | Dec 24, 2025 08:00 | active | |
NVIDIA and US Manufacturing and Robotics Leaders Drive America’s Reindustrialization With Physical AIDescription: NVIDIA and US Manufacturing and Robotics Leaders Drive America’s Reindustrialization With Physical AI Content: |
|||||
| Arbe Robotics shares soar on NVIDIA collaboration, from … | https://ru.investing.com/news/stock-mar… | 0 | Dec 24, 2025 08:00 | active | |
Arbe Robotics shares soar on NVIDIA collaboration, from Investing.comURL: https://ru.investing.com/news/stock-market-news/article-93CH-2613932 Description: Arbe Robotics shares soar on NVIDIA collaboration Content: |
|||||
| NVIDIA Corp (NVDA) Unveils New Robotics Innovations at COMPUTEX 2025 … | https://www.gurufocus.com/news/2872412/… | 0 | Dec 24, 2025 08:00 | active | |
NVIDIA Corp (NVDA) Unveils New Robotics Innovations at COMPUTEX 2025 | NVDA stock newsDescription: Summary NVIDIA Corp (NVDA) announced significant advancements in humanoid robotics at COMPUTEX 2025, held in Taipei, Taiwan, on May 19, 2025. The company introd Content: |
|||||
| Nvidia Jetson AGX Thor Dev Kit Raises The Robotics Bar … | https://www.forbes.com/sites/davealtavi… | 0 | Dec 24, 2025 08:00 | active | |
Nvidia Jetson AGX Thor Dev Kit Raises The Robotics Bar With BlackwellDescription: The Jetson AGX Thor Developer Kit signals that Nvidia now intends to also be the default platform for physical AI at the edge. Content: |
|||||
| Abu Dhabi’s TII, NVIDIA launch Middle East’s first AI, robotics … | https://gulfbusiness.com/tii-nvidia-lau… | 1 | Dec 24, 2025 08:00 | active | |
Abu Dhabi’s TII, NVIDIA launch Middle East’s first AI, robotics joint labURL: https://gulfbusiness.com/tii-nvidia-launch-me-s-1st-ai-robotics-joint-lab/ Description: The lab will also support TII’s open innovation strategy, including joint research, open-source initiatives Content:
The Technology Innovation Institute (TII), the applied research arm of Abu Dhabi’s Advanced Technology Research Council (ATRC), and NVIDIA, the global leader in accelerated computing and artificial intelligence (AI), have launched the Middle East’s first joint laboratory dedicated to AI and robotics. The TII-NVAITC (NVIDIA AI Technology Centre) Joint Lab for AI and Robotics aims to develop next-generation AI models, robotics platforms, and humanoid technologies, with the goal of accelerating innovation across multiple industries. The agreement was signed at TII’s headquarters in Abu Dhabi. “This collaboration with NVIDIA marks a major step toward building AI-enhanced robotic systems capable of reasoning, adapting, and acting in complex environments,” said Dr Najwa Aaraj, CEO of TII. She added that combining TII’s robotic platforms with AI models and accelerated computing would accelerate the convergence of perception, control, and language, laying the foundation for intelligent machines. The lab will integrate NVIDIA’s accelerated computing platforms with TII’s multidisciplinary research in AI, robotics, autonomous systems, and high-performance computing. It will be the first NVIDIA AI Technology Centre lab in the Middle East, with research spanning robotic learning and control at scale, large language models including TII’s Falcon AI models, and hardware for real-time robotic systems. Carlo Ruiz, VP – Enterprise Solutions & Operations EMEA at NVIDIA, said the lab expands the scope of NVIDIA’s global AI Technology Centre network into robotics for the Middle East, helping researchers accelerate breakthroughs in intelligent systems. The initiative aligns with Abu Dhabi’s long-term strategy to advance technological sovereignty and the UAE’s wider ambition to establish itself as a global AI and robotics hub. The lab will also support TII’s open innovation strategy, including joint research, open-source initiatives, and cross-network learning through the global NVAITC community. TII’s existing modular robotic platforms, including robotic arms and delivery robots, will provide a foundation for research focused on technical excellence and practical readiness.
Images (1):
|
|||||
| Serve Robotics surges after NVIDIA discloses a significant … | https://ru.investing.com/news/stock-mar… | 0 | Dec 24, 2025 08:00 | active | |
Serve Robotics surges after NVIDIA discloses a significant stake, from Investing.comURL: https://ru.investing.com/news/stock-market-news/article-432SI-2459502 Description: Serve Robotics surges after NVIDIA discloses a significant ... Content: |
|||||
| Teradyne Inc (TER) Unveils AI-Driven Robotics Solutions at NVIDIA GTC … | https://www.gurufocus.com/news/2747081/… | 0 | Dec 24, 2025 08:00 | active | |
Teradyne Inc (TER) Unveils AI-Driven Robotics Solutions at NVIDIA GTC 2025Description: Summary Teradyne Inc (TER) has announced the unveiling of its latest AI-driven robotics solutions at the NVIDIA GTC 2025, taking place from March 17-21. This ma Content: |
|||||
| Arbe Robotics shares soar on NVIDIA collaboration By Investing.com | https://www.investing.com/news/stock-ma… | 0 | Dec 24, 2025 08:00 | active | |
Arbe Robotics shares soar on NVIDIA collaboration By Investing.comDescription: Arbe Robotics shares soar on NVIDIA collaboration Content: |
|||||
| NVIDIA Blackwell-Powered Jetson Thor Now Available, Accelerating the Age of … | https://markets.businessinsider.com/new… | 1 | Dec 24, 2025 08:00 | active | |
NVIDIA Blackwell-Powered Jetson Thor Now Available, Accelerating the Age of General Robotics | Markets InsiderDescription: News Summary: NVIDIA Jetson AGX Thor developer kit and production modules, robotics computers designed for physical AI and robotics, are now gener... Content:
News Summary: SANTA CLARA, Calif., Aug. 25, 2025 (GLOBE NEWSWIRE) -- NVIDIA today announced the general availability of the NVIDIA Jetson AGX Thor™ developer kit and production modules, powerful new robotics computers designed to power millions of robots across industries including manufacturing, logistics, transportation, healthcare, agriculture and retail. Early adopters include industry leaders Agility Robotics, Amazon Robotics, Boston Dynamics, Caterpillar, Figure, Hexagon, Medtronic and Meta, while 1X, John Deere, OpenAI and Physical Intelligence are evaluating Jetson Thor to advance their physical AI capabilities. “We’ve built Jetson Thor for the millions of developers working on robotic systems that interact with and increasingly shape the physical world,” said Jensen Huang, founder and CEO of NVIDIA. “With unmatched performance and energy efficiency, and the ability to run multiple generative AI models at the edge, Jetson Thor is the ultimate supercomputer to drive the age of physical AI and general robotics.” The Ultimate Platform for Next-Generation Robotics Powered by an NVIDIA Blackwell GPU and featuring 128GB of memory, Jetson Thor delivers up to 2,070 FP4 teraflops of AI compute to effortlessly run the latest AI models — all within a 130-watt power envelope. Compared with its predecessor, the NVIDIA Jetson Orin™, Jetson Thor delivers up to 7.5x higher AI compute and 3.5x greater energy efficiency to run any generative AI model — from vision language action models like NVIDIA Isaac™ GR00T N1.5 to popular large language and vision language models. The new system-on-module solves one of the most significant challenges in robotics: running multi-AI workflows to enable robots to have real-time, intelligent interactions with people and the physical world. Jetson Thor unlocks real-time inference, critical for highly performant physical AI applications spanning humanoid robotics, agriculture and surgical assistance. Global Robotics Leaders Build on Jetson Thor Jetson Thor is powered by the full-stack NVIDIA Jetson™ software platform, built for physical AI and humanoid robotics, which supports any popular AI framework and generative AI model. It is also fully compatible with NVIDIA’s software stack from cloud to edge, including NVIDIA Isaac for robotics simulation and development, Isaac GR00T humanoid robot foundation models, NVIDIA Metropolis for vision AI and NVIDIA Holoscan for real-time sensor processing. Since its inception in 2014, the NVIDIA Jetson platform and NVIDIA’s robotics stack have attracted over 2 million developers and a growing ecosystem of 150+ hardware system, software and sensor partners, with Jetson Orin enabling over 7,000 customers to use edge AI across industries. Jetson Thor pushes the frontier further for visual AI agents and complex robotic systems such as humanoids and surgical robots. World technology leaders in robotics are adopting Jetson Thor to power their next-generation robots. “The development of capable humanoid robots hinges on our ability to run powerful AI models directly on the robot, enabling real-time learning and interaction,” said Brett Adcock, founder and CEO of Figure. 
“NVIDIA Jetson Thor’s server-class performance, delivered within a compact and power-efficient design, allows us to deploy the large-scale generative AI models necessary for our humanoids to perceive, reason and act in complex, unstructured environments.” “The future of robotics in logistics depends on the ability to deploy increasingly intelligent and autonomous systems,” said Tye Brady, chief technologist at Amazon Robotics. “NVIDIA Jetson Thor offers the computational horsepower and energy efficiency necessary to develop and scale the next generation of AI-powered robots that can operate safely and effectively in dynamic, real-world environments, transforming how we move and manage goods globally.” “As autonomous machines tackle more complex tasks in our customers’ operations, edge computing is critical for real-time decision making,” said Joe Creed, CEO of Caterpillar. “NVIDIA Jetson Thor offers the AI performance we need to develop and deploy the construction and mining equipment of the future, enhancing precision, reducing waste and improving safety for our customers around the globe.” Availability The NVIDIA Jetson AGX Thor developer kit is available now starting at $3,499. Jetson T5000 production modules are available from worldwide distribution partners. Production systems and carrier boards can be purchased from embedded partners. About NVIDIA NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing. For further information, contact: Paris Fox, 408-242-0035, pfox@nvidia.com. Certain statements in this press release including, but not limited to, statements as to: with unmatched performance and energy efficiency, and the ability to run multiple generative AI models at the edge, Jetson Thor being the ultimate supercomputer to drive the age of physical AI and general robotics; the benefits, impact, performance, and availability of NVIDIA’s products, services, and technologies; expectations with respect to NVIDIA’s third party arrangements, including with its collaborators and partners; expectations with respect to technology developments; and other statements that are not historical facts are forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended, which are subject to the “safe harbor” created by those sections based on management’s beliefs and assumptions and on information currently available to management and are subject to risks and uncertainties that could cause results to be materially different than expectations. 
Important factors that could cause actual results to differ materially include: global economic and political conditions; NVIDIA’s reliance on third parties to manufacture, assemble, package and test NVIDIA’s products; the impact of technological development and competition; development of new products and technologies or enhancements to NVIDIA’s existing product and technologies; market acceptance of NVIDIA’s products or NVIDIA’s partners’ products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of NVIDIA’s products or technologies when integrated into systems; and changes in applicable laws and regulations, as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company’s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances. © 2025 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, Jetson AGX Orin, Jetson AGX Thor, Jetson Orin, Jetson Thor, NVIDIA Isaac and NVIDIA Jetson are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice. A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/275e485f-4697-4281-a916-e77c818eac3d
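As a quick sanity check on the headline numbers in this release, the snippet below derives the compute-per-watt figure implied by the stated 2,070 FP4 teraflops and 130-watt envelope, plus the Orin-generation figures implied by the quoted 7.5x compute and 3.5x efficiency multiples. Only the inputs come from the announcement; the derived values are our own arithmetic.

```python
# Back-of-the-envelope check on the Jetson Thor figures quoted above.
# Inputs are from the press release; the derived values are added arithmetic.
thor_tflops = 2070.0   # stated FP4 teraflops
thor_watts = 130.0     # stated power envelope

thor_eff = thor_tflops / thor_watts
print(f"Thor: {thor_eff:.1f} TFLOPS/W")        # ~15.9 TFLOPS/W

# The release claims 7.5x the AI compute and 3.5x the energy efficiency
# of Jetson Orin, which implies roughly:
orin_tflops = thor_tflops / 7.5                # ~276 TFLOPS equivalent
orin_eff = thor_eff / 3.5                      # ~4.5 TFLOPS/W
orin_watts = orin_tflops / orin_eff            # ~61 W implied envelope
print(f"Implied Orin: {orin_tflops:.0f} TFLOPS at ~{orin_watts:.0f} W")
```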
Images (1):
|
|||||
| Nvidia CEO says robotics is chipmaker's biggest opportunity after AI | https://www.cnbc.com/2025/06/25/nvidia-… | 1 | Dec 24, 2025 08:00 | active | |
Nvidia CEO says robotics is chipmaker's biggest opportunity after AIURL: https://www.cnbc.com/2025/06/25/nvidia-shareholder-meeting-2025-robots.html Description: Nvidia CEO Jensen Huang said at the company's annual shareholders meeting on Wednesday that robotics is a rapidly growing business. Content:
Nvidia CEO Jensen Huang said other than artificial intelligence, robotics represents the chipmaker's biggest market for potential growth, and that self-driving cars would be the first major commercial application for the technology. "We have many growth opportunities across our company, with AI and robotics the two largest, representing a multitrillion-dollar growth opportunity," Huang said Wednesday at Nvidia's annual shareholders meeting, in response to a question from an attendee. A little more than a year ago, Nvidia changed the way it reported its business units to group both its automotive and robotics divisions into the same line item. In May, Nvidia said the business unit had $567 million in quarterly sales, or about 1% of the company's total revenue. Automotive and robotics was up 72% on an annual basis. Nvidia's sales have been surging over the past three years due to unyielding demand for the company's data center graphics processing units, or GPUs, which are used to build and operate sophisticated AI applications such as OpenAI's ChatGPT. Total sales have soared from about $27 billion in its fiscal 2023 to $130.5 billion last year, and analysts are expecting nearly $200 billion in sales this year, according to LSEG. The stock climbed to a record on Wednesday, lifting Nvidia's market cap to about $3.75 trillion, putting it just ahead of Microsoft as the most valuable company in the world. While robotics remains relatively small for Nvidia at the moment, Huang said applications will require the company's data center AI chips to train the software as well as other chips installed in self-driving cars and robots. Huang highlighted Nvidia's Drive platform of chips, and software for self-driving cars, which Mercedes-Benz is using. He also said the company recently released AI models for humanoid robots called Cosmos. "We're working towards a day where there will be billions of robots, hundreds of millions of autonomous vehicles, and hundreds of thousands of robotic factories that can be powered by Nvidia technology," Huang said. Nvidia has increasingly been offering more complementary technology alongside its AI chips, including software, a cloud service and networking chips to tie AI accelerators together. Huang said Nvidia's brand is evolving, and that it's better described as an "AI infrastructure" or "computing platform" provider. "We stopped thinking of ourselves as a chip company long ago," Huang said. At the annual meeting, shareholders approved the company's executive compensation plan and reelected all 13 board members. Outside shareholder proposals to produce a more detailed diversity report and change shareholder meeting procedure did not pass.
Images (1):
|
|||||
| Teradyne Robotics to Debut AI Accelerator-Powered Solutions at NVIDIA GTC … | https://www.gurufocus.com/news/2742044/… | 0 | Dec 24, 2025 08:00 | active | |
Teradyne Robotics to Debut AI Accelerator-Powered Solutions at NVIDIA GTC 2025, Marking a First in AI-Driven Collaborative RoboticsDescription: [url="]Teradyne Robotics[/url] and its partners are set to unveil a suite of advanced, AI-driven robotics solutions at [url="]NVIDIA GTC 2025[/url] March 17-21 Content: |
|||||
| Nvidia says ‘the age of generalist robotics is here’ | … | https://www.theverge.com/news/631743/nv… | 1 | Dec 24, 2025 08:00 | active | |
Nvidia says ‘the age of generalist robotics is here’ | The VergeURL: https://www.theverge.com/news/631743/nvidia-issac-groot-n1-robotics-foundation-model-available Description: Nvidia demonstrated GR00T N1 foundation model powering 1X’s NEO Gamma humanoid robot at GTC 2025. Content:
At GTC 2025, Nvidia demonstrated 1X’s NEO Gamma humanoid robot running its GR00T N1 foundation model. By Andrew Liszewski. Nvidia has announced that Isaac GR00T N1 — the company’s open-source, pretrained but customizable foundation model that’s designed to expedite the development and capabilities of humanoid robots — is now available. “The age of generalist robotics is here,” says Nvidia founder and CEO, Jensen Huang. “With Nvidia Isaac GR00T N1 and new data-generation and robot-learning frameworks, robotics developers everywhere will open the next frontier in the age of AI.” During his GTC 2025 keynote today, Huang demonstrated 1X’s NEO Gamma humanoid robot performing autonomous tidying jobs using a post-trained policy built on the GR00T N1 model. “The future of humanoids is about adaptability and learning,” says 1X Technologies CEO Bernt Børnich. “While we develop our own models, NVIDIA’s GR00T N1 provides a significant boost to robot reasoning and skills. With minimal post-training data, we fully deployed on NEO Gamma — advancing our mission of creating robots that are not just tools, but companions capable of assisting humans in meaningful, immeasurable ways.” You might recall seeing this freakishly lifelike bot a few weeks ago in Nothing’s teaser for its latest phone. We didn’t post it because it looked like another human in a robot suit — thanks, Elon. Other companies developing humanoid robots who have had early access to the GR00T N1 model include Boston Dynamics, the creators of Atlas; Agility Robotics; Mentee Robotics; and Neura Robotics. Originally announced as Project GR00T a year ago, the GR00T N1 foundation model utilizes a dual-system architecture inspired by human cognition. System 1, as Nvidia calls it, is described as a “fast-thinking action model” that behaves similarly to human reflexes and intuition. It was trained on data collected through human demonstrations and synthetic data generated by Nvidia’s Omniverse platform. System 2, which is powered by a vision language model, is a “slow-thinking model” that “reasons about its environment and the instructions it has received to plan actions.” Those plans are passed along to System 1, which translates them into “precise, continuous robot movements” that include grasping, moving objects with one or two arms, as well as more complex multistep tasks that involve combinations of basic skills. While the GR00T N1 foundation model is pretrained with generalized humanoid reasoning and skills, developers can customize its behavior and capabilities for specific needs by post-training it with data gathered from human demonstrations or simulations. Nvidia has made GR00T N1 training data and task evaluation scenarios available for download through Hugging Face and GitHub. Update, March 19th: Updated comment from 1X Technologies CEO Bernt Børnich.
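The dual-system design described above maps naturally onto a simple control-loop skeleton: a slow vision-language planner proposes the next subgoal at a low rate, while a fast action model converts the current subgoal and observation into motor commands at every tick. The sketch below is our schematic reading of that architecture; the class interfaces, rates, and internals are invented for illustration and are not NVIDIA's API.

```python
# Schematic sketch of a GR00T-style dual-system control loop.
# All class interfaces, rates, and internals here are invented for
# illustration; they do not correspond to NVIDIA's actual API.
import numpy as np

class SlowPlanner:
    """System 2 stand-in: 'reasons' about the instruction and scene to pick a subgoal."""
    def plan(self, instruction: str, observation: np.ndarray) -> np.ndarray:
        # A real implementation would query a vision-language model here.
        return np.zeros(3)  # e.g., a target end-effector position

class FastActor:
    """System 1 stand-in: reflex-like policy mapping (obs, subgoal) to an action."""
    def act(self, observation: np.ndarray, subgoal: np.ndarray) -> np.ndarray:
        # A real implementation would run a learned action model.
        return np.clip(subgoal - observation[:3], -1.0, 1.0)  # crude servoing

def control_loop(steps=1000, replan_every=50):
    planner, actor = SlowPlanner(), FastActor()
    observation, subgoal = np.zeros(8), np.zeros(3)
    for t in range(steps):
        if t % replan_every == 0:                 # slow path: infrequent deliberation
            subgoal = planner.plan("tidy the table", observation)
        action = actor.act(observation, subgoal)  # fast path: every tick
        # ... apply `action` to the robot or simulator and refresh `observation`
    return subgoal

control_loop()
```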
Images (1):
|
|||||
| Nvidia Robotics to appear in Alibaba's cloud services | Ukraine news … | https://tech.liga.net/technology/novost… | 1 | Dec 24, 2025 08:00 | active | |
Nvidia Robotics to appear in Alibaba's cloud services | Ukraine news | LIGA.netURL: https://tech.liga.net/technology/novosti/alibaba-integrirovala-nvidia-robotics-v-svoyu-ii-platformu Description: Alibaba Group Holdings Ltd. also announced the integration of Nvidia Robotics software into its cloud platform for artificial intelligence development. This will let customers build physical AI solutions, from humanoid robots to self-driving cars. Content:
Alibaba Group Holdings Ltd. announced the integration of Nvidia Robotics software into its cloud platform for artificial intelligence development. This will let customers build solutions for physical AI, from humanoid robots to self-driving cars, Bloomberg reports. The update was presented at the annual Apsara conference in Hangzhou. The company's Alibaba Cloud division added the full Nvidia Robotics stack to its Platform for AI, giving developers access to it. The integration is part of the company's large-scale strategy to develop artificial intelligence infrastructure. CEO Eddie Wu said Alibaba would increase investment beyond the previously announced 380 billion yuan ($53 billion). As a reminder, Alibaba has introduced the Qwen 2.5-Max AI model, and AI products have helped Alibaba grow revenue.
Images (1):
|
|||||
| Nvidia Eyes Trillions in Robotics as Next AI Gold Rush … | https://www.gurufocus.com/news/2947304/… | 0 | Dec 24, 2025 08:00 | active | |
Nvidia Eyes Trillions in Robotics as Next AI Gold Rush BeginsURL: https://www.gurufocus.com/news/2947304/nvidia-eyes-trillions-in-robotics-as-next-ai-gold-rush-begins Description: June 26 - Nvidia (NVDA) CEO Jensen Huang said robotics is the company's second-largest growth opportunity, following artificial intelligence, during his remarks Content: |
|||||
| After Earnings, Nvidia Powers Ahead On Robotics And Automation | https://www.forbes.com/sites/johnwerner… | 0 | Dec 24, 2025 08:00 | active | |
After Earnings, Nvidia Powers Ahead On Robotics And AutomationDescription: Nvidia posts strong Q3 revenue, expands beyond chips into robotics, self-driving, and AI safety despite China trade uncertainties. Content: |
|||||
| Nvidia and Samsung to Invest in Robotics Startup Skild AI | https://nextbigwhat.com/nvidia-and-sams… | 1 | Dec 24, 2025 08:00 | active | |
Nvidia and Samsung to Invest in Robotics Startup Skild AIURL: https://nextbigwhat.com/nvidia-and-samsung-to-invest-in-robotics-startup-skild-ai/ Description: Nvidia and Samsung to Invest in Robotics Startup Skild AI Content:
Samsung Electronics and Nvidia are set to acquire minority stakes in Skild AI, aiming to enhance their presence in the growing consumer robotics sector. The investments signal a strategic move by both tech giants to tap into the potential of robotics technology. Skild AI’s partnership with Nvidia and Samsung could lead to significant advancements in the field of consumer robotics.
Images (1):
|
|||||
| Nvidia launches Jetson AGX Thor dev kit for physical AI … | https://gamesbeat.com/nvidia-launches-j… | 0 | Dec 24, 2025 08:00 | active | |
Nvidia launches Jetson AGX Thor dev kit for physical AI and roboticsURL: https://gamesbeat.com/nvidia-launches-jetson-agx-thor-dev-kit-for-physical-ai-and-robotics/ Description: Nvidia today announced the general availability of the Nvidia Jetson AGX Thor developer kit and production modules. Content: |
|||||
| Galbot to Accelerate Robotics with NVIDIA Jetson Thor - Money … | https://moneycompass.com.my/galbot-to-a… | 1 | Dec 24, 2025 08:00 | active | |
Galbot to Accelerate Robotics with NVIDIA Jetson Thor - Money CompassURL: https://moneycompass.com.my/galbot-to-accelerate-robotics-with-nvidia-jetson-thor/ Description: Money Compass is one of the credible Chinese and English financial media in Malaysia with strong influence in Malaysia’s financial industry. As the winner of the SME Award in Malaysia for 5 consecutive years, we persistently propel the financial industry towards a mutually beneficial framework. Since 2004, with the dedication to advocating the public to practice financial planning in everyday life, Money Compass has accumulated a vast connection in ASEAN financial industries and garnered government agencies and corporate resources. At present, Money Compass is adjusting its pace to transform into Money Compass 2.0. Consolidating the existing connections and network, Money Compass Integrated Media Platform is founded, which is well grounded in Malaysia whilst serving the ASEAN region. The mission of the new Money Compass Integrated Media Platform is to become the financial freedom gateway to assist internet users enhance financial intelligence, create wealth opportunities and achieve financial freedom for everyone! Content:
BEIJING, Aug. 26, 2025 /PRNewswire/ — Galbot, a global leader in embodied intelligence and robotics, is integrating NVIDIA Jetson AGX Thor into its G1 Premium robot, recently shown at the World Robotics Conference (WRC). As one of the early adopters in the industry, Galbot is leveraging Jetson Thor to elevate performance—unlocking swifter, smoother, and more intelligent autonomy—while setting new benchmarks for deployment across retail, healthcare, and logistics sectors. Galbot G1 Premium Powered by NVIDIA Jetson Thor at WRC By integrating NVIDIA Jetson Thor, Galbot has significantly raised the bar for general-purpose robotics, accelerating its roadmap and demonstrating how cutting-edge compute drives stronger autonomy and broader commercial deployment. Built on Jetson Thor, the G1 Premium has already achieved major gains in speed, fluidity, and real-time reasoning. With 7.5x the AI compute of the previous generation NVIDIA Jetson Orin and 3.5x greater energy efficiency, Galbot’s robots can now execute complex planning and motion with a level of precision that redefines what’s possible in embodied intelligence. “Our G1 Premium, now running on NVIDIA Jetson Thor, delivers significant gains in speed and real-time reasoning, enabling our proprietary VLA models to achieve enhanced real-world performance,” said Professor Wang He, Founder & CTO of Galbot. The robot’s performance was showcased at the WRC, where the G1 Premium was recognized as the “swiftest humanoid worker.” Building on this foundation, Galbot has partnered with Tsinghua University and Shanghai Qi Zhi Institute to co-develop OpenWBT_Isaac—a specialized simulation platform for whole-body teleoperation of humanoid robots. Leveraging the OpenWBT system’s full-body control capabilities and powered by NVIDIA L20/RTX 5880 Ada GPUs with virtual-real fusion, OpenWBT_Isaac enables efficient and advanced development of humanoid robotics. At the core of Galbot’s achievements is its unique Sim2Real methodology, which utilizes massive pre-training on large-scale, high-quality synthetic datasets and is refined with minimal real-world data. This approach drastically reduces dependence on costly real-world data collection while significantly improving generalizability across complex environments. Galbot further demonstrated its technical capabilities by winning the championship at the World Humanoid Robot Games, competing fully autonomously without teleoperation and achieving first place in every round from preliminaries to finals. Driven by Galbot’s proprietary Sim2Real methodology and large VLA models, Galbot’s G1 robots are already operating fully autonomously in over 10 pharmacies in Beijing, with plans to expand to over 100 locations nationwide by the year-end. Galbot has also partnered with leading manufacturers such as Bosch Group to deploy its AI-powered platform in smart production, replacing traditional robotic arms with more intelligent, adaptable solutions.
Images (1):
|
|||||
| Arbe Robotics jumps on Nvidia collaboration - Globes | https://en.globes.co.il/en/article-arbe… | 1 | Dec 24, 2025 08:00 | active | |
Arbe Robotics jumps on Nvidia collaboration - GlobesURL: https://en.globes.co.il/en/article-arbe-robotics-jumps-on-nvidia-collaboration-1001498727 Description: The Israeli auto-tech chip company will work with Nvidia to enhance free space mapping and AI-driven capabilities to further advance the automotive industry. Content:
Israeli auto-tech chip company Arbe Robotics (Nasdaq: ARBE; TASE: ARBE) is up 64% on Nasdaq, giving a market cap of $362 million after announcing a collaboration with chip giant Nvidia. Arbe, founded by CEO Kobi Marenko, announced that in collaboration with Nvidia it is enhancing free space mapping and AI-driven capabilities to further advance the automotive industry. Arbe's high-resolution radar will integrate with the Nvidia DRIVE AGX in-vehicle computing platform for hands-free driving and real-time safety applications. The solution will be presented at this month's CES 2025 exhibition in Las Vegas. At the heart of Arbe's innovation is its AI-powered processing of an exceptionally dense, high-resolution point cloud, providing long-range detection capabilities in all weather and lighting conditions, with minimal false alarms in both urban environments and on highways. Arbe Robotics is developing a chip for automotive imaging radars, intended for safety applications and autonomous vehicles. Last year, the company raised NIS 103 million in a convertible bond on the Tel Aviv Stock Exchange, but according to the terms of the offering, the amount will be transferred to it on the condition that it wins a tender or contract to supply its products to one of the leading car manufacturers, and that the closing price of its shares on the Nasdaq stays above $3.1 for 30 consecutive trading days. Arbe's share price, which closed at the end of last week at $2.63, has now jumped above this threshold to $4.12 (although, as mentioned, the company must meet this condition for 30 days). Arbe was merged into a SPAC company three years ago at a valuation of about $500 million. Marenko said, "Arbe is working to redefine the field of automotive safety with next-generation radar solutions that provide accurate and detailed information about the vehicle's environment during the journey and enable seamless integration with cameras and other sensors in the vehicle. In the live demonstrations at CES, we will demonstrate how our technology advances the vision of safe driving without road accidents and, ultimately, the autonomous vehicle." Published by Globes, Israel business news - en.globes.co.il - on January 6, 2025. 
© Copyright of Globes Publisher Itonut (1983) Ltd., 2025.
Images (1):
|
|||||
| Ai2 launches robotics initiative led by UW prof, former head … | https://commstrader.com/technology/ai2-… | 1 | Dec 24, 2025 08:00 | active | |
Ai2 launches robotics initiative led by UW prof, former head of NVIDIA robotics | CommsTraderDescription: Dieter Fox, Senior Research Director at Allen AI (Ai2) and University of Washington (UW) professor in the Paul G. Allen School of Computer Science & Content:
Dieter Fox, a University of Washington (UW) professor in the Paul G. Allen School of Computer Science & Engineering and former head of NVIDIA's robotics research lab, has joined the nonprofit Allen Institute for AI (Ai2) as Senior Research Director to lead a new robotics initiative. The effort centers on foundation models for robotics that build on advances in language, vision, and reasoning. The team will focus on simulation, behavior generation, and large-scale training, with a particular emphasis on producing data that mimics human activities at scale, and will tackle challenges such as object manipulation, motion, and generative AI for robotics. Fox heads the UW Robotics and State Estimation Lab (RSE-Lab), which he joined in 2000, and led NVIDIA's Seattle robotics lab beginning in 2017, helping grow it from a small experimental group into a significant robotics and AI research operation. Yash Narang will lead the initiative's simulation and behavior generation team. Fox has said the goal is to build a world-class robotics research team that draws on the combined strengths of the UW and Ai2, producing cutting-edge results in robotics while advancing fundamental AI models.
Images (1):
|
|||||
| Serve Robotics surges as NVIDIA discloses large stake By Investing.com | https://www.investing.com/news/stock-ma… | 0 | Dec 24, 2025 08:00 | active | |
Serve Robotics surges as NVIDIA discloses large stake By Investing.comDescription: Serve Robotics surges as NVIDIA discloses large stake Content: |
|||||
| Client Challenge | https://www.ft.com/content/7c3dafa8-ffb… | 1 | Dec 24, 2025 08:00 | active | |
Client ChallengeURL: https://www.ft.com/content/7c3dafa8-ffb9-4ca8-b677-ab3cc2afbdcb Content:
Images (1):
|
|||||
| Could Serve Robotics Become the Next Nvidia? | https://finance.yahoo.com/news/could-se… | 1 | Dec 24, 2025 08:00 | active | |
Could Serve Robotics Become the Next Nvidia?URL: https://finance.yahoo.com/news/could-serve-robotics-become-next-093000017.html Description: This little delivery robot maker could still have plenty of room to grow. Content:
Nvidia's (NASDAQ: NVDA) stock soared 2,630% over the past five years, boosting its market cap to roughly $3.5 trillion and making it the most valuable company in the world. Most of that rally was driven by its brisk sales of AI-oriented GPUs for data centers. From fiscal 2019 to fiscal 2024 (which ended this January), Nvidia's revenue grew at a compound annual growth rate (CAGR) of 39%. But from fiscal 2024 to fiscal 2027, analysts expect its revenue to rise at an even faster CAGR of 53% as the AI market continues to expand. That secular trend makes Nvidia a great long-term investment, but it could struggle to replicate its millionaire-making gains from the past several years. So if you're looking for the "next Nvidia," you might want to check out the smaller AI companies the chipmaker is investing in. One of those companies that stands out is Serve Robotics (NASDAQ: SERV), a producer of AI-powered sidewalk delivery robots. Let's see if this little $384 million company could eventually become a trillion-dollar tech giant like Nvidia. Serve Robotics was founded in 2017 within Postmates, the food delivery service acquired by Uber Technologies (NYSE: UBER) and integrated into Uber Eats in 2020. Uber subsequently spun off Serve Robotics as an independent company in 2021, but it continued using its delivery robots to fulfill orders in select areas across Los Angeles. Its newest Gen 3 robots can travel 48 miles on a single charge, carry up to 15 gallons of cargo, and have a max speed of 11 mph. They're also resistant to extreme temperatures and heavy rain. Serve Robotics executed a reverse merger with the blank-check company Patricia Acquisition in 2023, which paved the way to its Nasdaq listing at $4 a share on April 18, 2024. But it ended the first day at just $3.11 and sank below $3 by the end of its first month. Today, Serve's stock trades at nearly $9. Most of that rally occurred this July after Nvidia revealed that it had taken a 10% stake in the company. That vote of confidence brought back a lot of bulls, even though the company still barely generates any revenue. Serve owns a fleet of 100 robots, but it only operated 59 active robots in the L.A. area for Uber Eats in the third quarter of 2024. It generated just $1.6 million in revenue in the first nine months of 2024 as it racked up a net loss of $26.1 million. For the full year, analysts expect it to generate $1.9 million in revenue with a net loss of $34.3 million. With an enterprise value of $384 million, it might seem ridiculously overvalued at more than 200 times this year's sales. But in 2025, Serve plans to deploy up to 2,000 robots for Uber Eats across the L.A. and Dallas-Fort Worth metro areas. Assuming it achieves that ambitious expansion, analysts expect its revenue to jump to $13.3 million in 2025 and $59.5 million in 2026. Therefore, we could argue that Serve isn't terribly expensive at about 6.5 times 2026 sales. If Serve successfully scales up its autonomous delivery robot fleet for Uber Eats, it could attract a lot more attention from other delivery-oriented companies. Those new customers would reduce its dependence on Uber and drive its long-term growth. According to Precedence Research, the global delivery robot market could expand at a CAGR of 32% from 2024 to 2034.
That growth could be driven by labor shortages, rising e-commerce sales, and the development of more efficient autonomous robots. These little robots could also be considered a safer, cheaper, and more reliable alternative to human drivers for last-mile deliveries. So if the company can break out of its niche, it might deliver massive long-term gains. Serve might have a bright future, but it's too early to tell if it can ramp up its production, attract more customers, and diversify its business with other types of autonomous robots. So while we can't seriously call it the "next Nvidia" yet, it's easy to see why Nvidia bought a slice of this fledgling AI company. Investors who are looking for a high-risk, high-reward play in the booming AI market can consider following Nvidia's lead. Leo Sun has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Nvidia, Serve Robotics, and Uber Technologies. The Motley Fool recommends Nasdaq. The Motley Fool has a disclosure policy. Could Serve Robotics Become the Next Nvidia? was originally published by The Motley Fool.
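The article's valuation math is easy to sanity-check. The sketch below is a back-of-the-envelope reproduction of the multiples it cites, using only the figures quoted above; those inputs are the article's analyst estimates, not independently verified data.

```python
# Sanity-check of the valuation figures cited in the article above.
# All inputs are the article's own numbers (analyst estimates).

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# Nvidia: a 39% CAGR over five fiscal years implies revenue grew ~5.2x.
print(f"Implied 5-year revenue multiple at 39% CAGR: {1.39 ** 5:.1f}x")

# Serve Robotics: enterprise value vs. estimated sales.
ev = 384e6                 # enterprise value, USD
sales_2024 = 1.9e6         # analyst estimate, full-year 2024 revenue
sales_2026 = 59.5e6        # analyst estimate, 2026 revenue

print(f"EV / 2024 sales: {ev / sales_2024:.0f}x")   # ~200x, as stated
print(f"EV / 2026 sales: {ev / sales_2026:.1f}x")   # ~6.5x, as stated
print(f"Implied Serve revenue CAGR 2024->2026: {cagr(sales_2024, sales_2026, 2):.0%}")

# Delivery robot market: a 32% CAGR from 2024 to 2034 implies ~16x growth.
print(f"Implied 10-year market growth at 32% CAGR: {1.32 ** 10:.0f}x")
```

The numbers line up with the article's claims: the ~200x and ~6.5x price-to-sales figures both follow directly from the $384 million enterprise value and the two revenue estimates.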
Images (1):
|
|||||
| Solomon to Build Next Wave of Advanced Robotics Solutions Using … | https://www.liberoquotidiano.it/news/ad… | 0 | Dec 24, 2025 08:00 | active | |
Solomon to Build Next Wave of Advanced Robotics Solutions Using NVIDIA Isaac Robotics PlatformDescription: TAIPEI, July 1, 2024 /PRNewswire/ -- Solomon, a leader in advanced vision and robotics solutions, is excited to announce a collaboration with NVIDIA a... Content: |
|||||
| ABB Robotics: divestment agreement between ABB and SoftBank Group | https://automazione-plus.it/abb-cedera-… | 1 | Dec 24, 2025 00:02 | active | |
ABB Robotics: divestment agreement between ABB and SoftBank GroupURL: https://automazione-plus.it/abb-cedera-la-divisione-robotics-a-softbank-group_169058/ Description: Customers will benefit from the combination of ABB Robotics' technology and industrial experience with SoftBank's expertise in AI, robotics, and computing Content:
The divestment, at an enterprise value of $5.375 billion, reflects the long-term strength of the robotics business and creates immediate value for ABB shareholders. ABB will use the proceeds of the sale in line with its capital allocation principles. ABB announced that it has signed an agreement to sell its Robotics division to SoftBank Group Corp. for an enterprise value of $5.375 billion, deciding not to proceed with its earlier intention to spin off the division and list it as a separate company. The transaction is subject to regulatory approvals and other customary closing conditions and is expected to close in mid- to late 2026. Peter Voser, chairman of ABB, said: "SoftBank's offer was carefully evaluated by the Board of Directors and the Executive Committee and compared with our original intention to proceed with a spin-off. It reflects the division's long-term strengths, and the sale will create immediate value for ABB shareholders." "ABB will use the proceeds of the transaction in line with its established capital allocation principles. Our ambitions for ABB remain unchanged: we will continue to focus on our long-term strategy, built on our leadership positions in electrification and automation." Morten Wierod, CEO of ABB, added: "SoftBank will be an excellent new home for the business and its employees. ABB and SoftBank share the same vision: the world is entering a new era of robotics driven by artificial intelligence, and we believe the division and SoftBank's robotics offering can together best shape this new era." "ABB Robotics will benefit from combining its leading technology and deep industrial experience with SoftBank's advanced capabilities in AI, robotics, and next-generation computing. This will allow the business to strengthen and expand its position as a technology leader in its sector." Masayoshi Son, chairman and CEO of SoftBank Group Corp., said: "SoftBank's next frontier is Physical AI. Together with ABB Robotics, we will unite world-class technologies and talent under a shared vision: to fuse Artificial Super Intelligence with robotics, driving a revolutionary evolution that will propel humanity forward." Following the signing of the agreement, ABB will adjust its reporting structure, moving to three business areas. From the fourth quarter of 2025, the Robotics division will be reported as discontinued operations. The Machine Automation division, which together with Robotics currently forms the Robotics & Discrete Automation business area, will become part of the Process Automation business area. At closing, the sale will generate a non-operational pre-tax book gain of about $2.4 billion, with expected cash proceeds, net of transaction costs, of about $5.3 billion. Estimated separation costs are about $200 million, roughly half of which is already included in the 2025 guidance. ABB currently estimates the tax outflows related to the local separation of the business at between $400 million and $500 million. ABB Robotics is a leader in its sector, at the center of secular and future automation trends.
As previously communicated, there are limited synergies between the robotics business and the rest of ABB's activities, which face different demand and market dynamics. The ABB Robotics division employs about 7,000 people. With 2024 revenue of $2.3 billion, it accounted for about 7% of ABB Group's total revenue, with an operational EBITA margin of 12.1%. Ad hoc announcement pursuant to Art. 53 of the listing rules of the SIX Swiss Exchange.
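For readers tallying the deal economics, here is a small sketch that cross-checks the figures reported above. All inputs come from the announcement as quoted; the implied ABB group revenue is an inference from the stated 7% share, not a number from the announcement itself.

```python
# Consistency check on the ABB-SoftBank deal figures reported above.
enterprise_value = 5.375e9   # USD, agreed sale price to SoftBank
cash_proceeds    = 5.3e9     # USD, expected proceeds net of transaction costs
separation_cost  = 200e6     # USD, estimated separation costs
tax_outflow      = (400e6, 500e6)  # USD, estimated range for carve-out taxes

# Net cash after separation costs and the midpoint of the tax range.
net = cash_proceeds - separation_cost - sum(tax_outflow) / 2
print(f"Approx. net cash to ABB: ${net / 1e9:.2f}B")   # ~ $4.65B

# The division's $2.3B of 2024 revenue is said to be ~7% of group revenue,
# implying ABB group revenue of roughly $33B.
division_revenue = 2.3e9
print(f"Implied ABB group revenue: ${division_revenue / 0.07 / 1e9:.0f}B")

# Operational EBITA at the stated 12.1% margin.
print(f"Implied division EBITA: ${division_revenue * 0.121 / 1e9:.2f}B")
```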
Images (1):
|
|||||
| SoftBank Invests $2.8 billion In Norwegian Robotics Firm AutoStore | https://philip9876.com/2021/05/10/softb… | 0 | Dec 24, 2025 00:02 | active | |
SoftBank Invests $2.8 billion In Norwegian Robotics Firm AutoStoreURL: https://philip9876.com/2021/05/10/softbank-invests-2-8-billion-in-norwegian-robotics-firm-autostore/ Description: Japanese tech conglomerate SoftBank has acquired 40% of Norwegian warehouse automation firm AutoStore for $2.8 billion. The news was first reported by The Wall ... Content: |
|||||
| SoftBank Expands AI Footprint With Multibillion-Dollar Robotics Deal | https://biztoc.com/x/1ff58f6a4caecb17?r… | 0 | Dec 24, 2025 00:02 | active | |
SoftBank Expands AI Footprint With Multibillion-Dollar Robotics DealURL: https://biztoc.com/x/1ff58f6a4caecb17?ref=ff Description: Tech conglomerate SoftBank Group has agreed to a $5.4 billion deal for the industrial-robotics-focused business of ABB, a bid to combine the potential of… Content: |
|||||
| ABB Robotics: surprise purchase by Softbank and future direction | https://www.maschinenmarkt.vogel.de/abb… | 1 | Dec 24, 2025 00:02 | active | |
ABB Robotics: surprise purchase by Softbank and future directionDescription: ABB planned to spin off its robotics division and take it public. Surprisingly, however, it was bought by Softbank. What does that mean for the future of ABB Robotics? Content:
On the one hand, it had been known since April that ABB wanted to spin off its robotics division; that the business would instead be bought by the Japanese Softbank group was nevertheless a surprising twist. I am curious what will become of the industrial robot supplier ABB Robotics under the umbrella of Softbank's Physical AI strategy. In mid-April, ABB published the press release "ABB plans spin-off of its Robotics division as a separately listed company," announcing that it intended to carve the robotics business out of the ABB group and take it public. (The unusual German word "kotieren" used in the release, incidentally, derives from the French "cote," a stock price list.) In 2023, the ABB Robotics division ranked third worldwide by revenue at €3.3 billion, behind Mitsubishi Electric (€10.5 billion) and Kuka Robotics (€4 billion; source: Statista). Since 2019, when the divisions began reporting separately under the decentralized ABB Way operating model, the division has achieved double-digit margins in most quarters. After an unusually volatile market phase, in which ordering behavior normalized following a period of purchases pulled forward during strained supply chains, the market has evidently stabilized, supporting the division's order growth. The ABB Robotics division employs around 7,000 people. In 2024 it generated revenue of $2.3 billion, contributing about 7 percent of ABB's group revenue. The operational EBITA margin in 2024 was 12.1 percent. To enable the carve-out of the robotics business, the Machine Automation division, which together with ABB Robotics currently forms the Robotics & Discrete Automation business area, will become part of the Process Automation business area. According to ABB, this offers synergies in software and control technologies that should let the divisions deliver better customer value. The Machine Automation division holds a leading position in the high-end segment of solutions based on PLCs, industrial PCs, servo drives, industrial transport systems, vision, and software. But then the plan changed: on October 8, ABB announced that the originally intended spin-off of the business as an independent listed company would not be pursued. Instead, ABB signed an agreement to sell its Robotics division to SoftBank Group Corp. for an enterprise value of $5.375 billion. Sami Atiya, head of the Robotics & Discrete Automation business area and a member of the executive committee, will leave the company by the end of 2026. He will step down from the executive committee at the end of 2025 and support the Robotics division and the carve-out process in 2026 as a strategic adviser. Looking at the buyer, Softbank Group, one might at first rub one's eyes. According to Wikipedia, Softbank is "a Japanese telecommunications and media conglomerate with business units in broadband television, fixed-line telecommunications, e-commerce, internet, robotics, technology, services, finance, media, and marketing." Softbank also manages the world's largest technology investment fund, however.
Within the sprawling conglomerate, there are some interesting companies pointing in the right direction: as early as 2012, the company acquired large stakes in the French robot maker Aldebaran, of which the Japanese investor now owns 95 percent. Aldebaran is known, among other things, for its small humanoid robots called Nao, which have been used in the RoboCup robot soccer league since 2007. At least visually familiar is the service robot Pepper, whose production ended in 2020. Better known for two- and four-legged robots is Boston Dynamics, which Softbank bought in 2017 from Google, or rather its parent Alphabet. In 2020, Softbank exited Boston Dynamics again, selling 80 percent of the robotics pioneer to Hyundai for about $880 million. Also in 2017, Softbank began building the Vision Fund, which has invested a total of more than $100 billion in AI companies. And not least, Softbank this year founded a joint venture with OpenAI, the company behind ChatGPT. The joint venture, called SP OpenAI Japan, is intended to bring the advanced enterprise AI system Cristal Intelligence into practical use.
Cristal Intelligence is to be rolled out across SoftBank companies, including Arm Holdings, whose chip designs underpin most smartphone chips as well as Apple's desktop computers since 2020. SoftBank plans to invest three billion dollars a year to deploy OpenAI's technology across its business units. Another joint venture with Oracle and OpenAI, called Stargate, is to build AI infrastructure in the US. Softbank founder Masayoshi Son sees the future in intelligent machines that can act not only digitally but also physically. The renowned industrial robot maker ABB Robotics fits naturally into this master plan: unlike the robotics firms mentioned above, it is an established company whose products are widely used in industry. The SoftBank share duly rose more than eleven percent after the takeover was announced. Against this backdrop, the deal makes a great deal of sense. AI and robotics are like two pieces of a puzzle, with AI revolutionizing the control of robots. AI-powered vision systems and sensors enable a robot arm to grasp, move, and place objects intelligently and precisely. These technologies are entering practical use right now, and with Softbank's AI know-how, which extends far beyond the companies mentioned, the new robotics company is excellently positioned for it. A vision for 300 years: genius or hubris? Softbank founder Masayoshi Son thinks on a grand scale: at the 2010 shareholders' meeting he announced "SoftBank's Next 30-Year Vision," which not only describes the next 30 years but also postulates that the company will exist for 300 years.
Softbank is to become "the corporate group needed most by people around the world," and to do so "by accelerating human progress and increasing happiness for everyone through the information revolution." Another (translated) quote from that vision: "Realizing a superintelligence with truly high-dimensional feelings such as 'kindness' and 'love' is the right evolutionary path for brain computers. That is my conviction. We at SoftBank Group want to realize a society in which superintelligent computers coexist with humans to contribute to our happiness, just as one person makes another happy or as machines enable us to live happier lives." I am uneasy when people with almost unlimited means postulate such visions, because today, with the power of their companies, their influence in business and politics, and the possibilities of the digital age, they really are in a position to implement them. Just recall Elon Musk, who not only takes the landing platforms of his SpaceX rockets from the science fiction novels of Iain Banks but is actually working to realize science fiction technologies and scenarios. Unfortunately, these keep turning out to be rather dystopian. As a result, it is no longer "the market," we consumers, and governments who decide how the technologies shaping our future develop, but individual, sometimes quite unhinged "visionaries" and billionaires. I do not find that particularly reassuring. A commentary by Ralf Steck
Images (1):
|
|||||
| SoftBank To Acquire ABB's Robotics Division For $5.4 Billion - … | https://dataconomy.com/2025/10/08/softb… | 1 | Dec 24, 2025 00:02 | active | |
SoftBank To Acquire ABB's Robotics Division For $5.4 Billion - DataconomyURL: https://dataconomy.com/2025/10/08/softbank-to-acquire-abbs-robotics-division-for-5-4-billion/ Description: SoftBank Group announced on Monday that it has agreed to buy the robotics division of Swiss engineering firm ABB for $5.4 billion. The move is designed to Content:
SoftBank Group announced on Monday that it has agreed to buy the robotics division of Swiss engineering firm ABB for $5.4 billion. The move is designed to strengthen the Japanese company's position in artificial intelligence. The deal, which is subject to regulatory approval, means ABB will no longer pursue its previous plan to spin off the robotics business into a separately listed company. The acquisition is a key part of SoftBank founder Masayoshi Son's vision for "Physical AI," which aims to combine advanced artificial intelligence with robotics. Son said: "SoftBank's next frontier is Physical AI. Together with ABB Robotics, we will unite world-class technology and talent under our shared vision to fuse Artificial Super Intelligence and robotics — driving a groundbreaking evolution that will propel humanity forward." Son defines Artificial Super Intelligence (ASI) as AI that is 10,000 times smarter than humans. He has worked to position SoftBank at the center of the AI industry through investments and acquisitions. The company owns chip designer Arm, holds a major stake in OpenAI, and has previous robot-related investments in companies like AutoStore Holdings and Agile Robots. This is not the company's first venture into robotics. In 2012, SoftBank acquired a majority stake in a French company called Aldebaran, which led to the launch of the humanoid robot Pepper. Although that project did not succeed commercially, robotics has re-emerged as a key focus for the company. For ABB, the sale is a strategic shift from its previous plan to spin off the robotics unit. The company stated that the deal "will create immediate value to ABB shareholders" and that it will use the proceeds from the transaction according to its established capital allocation principles. ABB expects to receive approximately $5.3 billion in cash proceeds from the sale. The expected separation cost is around $200 million.
Images (1):
|
|||||
| SoftBank buys ABB robotics unit for $5.4 billion, bets on … | https://medium.com/@fintegra.news/softb… | 0 | Dec 24, 2025 00:02 | active | |
SoftBank buys ABB robotics unit for $5.4 billion, bets on “physical AI”Description: SoftBank buys ABB robotics unit for $5.4 billion, bets on “physical AI” SoftBank announced the acquisition of ABB Group’s robotics business for $5.375 bil... Content: |
|||||
| Wall Street Pit - Finance, Stock Market, Technology, Science | https://wallstreetpit.com/127641-softba… | 1 | Dec 24, 2025 00:02 | active | |
Wall Street Pit - Finance, Stock Market, Technology, ScienceURL: https://wallstreetpit.com/127641-softbank-eyes-trillion-dollar-ai-robotics-empire/ Description: Breaking news and analysis from Wall Street Pit. Finance, stock market, economy, technology, science. Content:
Images (1):
|
|||||