| Title | URL | Images | Scraped At | Status | Action |
|---|---|---|---|---|---|
| This ‘Machine Eye’ Could Give Robots Superhuman Reflexes | https://singularityhub.com/2026/02/19/t… | 1 | Mar 14, 2026 16:00 | active | |
This ‘Machine Eye’ Could Give Robots Superhuman Reflexes
URL: https://singularityhub.com/2026/02/19/this-machine-eye-could-give-robots-superhuman-reflexes/
Description: Running on a brain-like chip, the 'eye' could help robots and self-driving cars make split-second decisions. Content:
Image Credit: Amanda Dalbjörn on Unsplash

You’re driving in a winter storm at midnight. Icy rain smashes your windshield, immediately turning it into a sheet of frost. Your eyes dart across the highway, seeking any movement that could be wildlife, struggling vehicles, or highway responders trying to pass. Whether you find safe passage or meet catastrophe hinges on how fast you see and react.

Even experienced drivers struggle with bad weather. For self-driving cars, drones, and other robots, a snowstorm could cause mayhem. The best computer-vision algorithms can handle some scenarios, but even running on advanced computer chips, their reaction times are roughly four times greater than a human’s.

“Such delays are unacceptable…where a one-second delay at highway speeds can reduce the safety margin by up to 27m [88.6 feet], significantly increasing safety risks,” Shuo Gao at Beihang University and colleagues wrote in a recent paper describing a new superfast computer vision system.
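The quoted figure is straightforward to sanity-check: at a highway speed of about 27 m/s (roughly 97 km/h, an assumption chosen to match the paper's numbers), every second of processing delay costs 27 meters of safety margin.

```python
# Distance of safety margin lost during a processing delay.
# The speed value is an assumption consistent with the quoted 27 m figure.
def margin_lost_m(speed_m_per_s: float, delay_s: float) -> float:
    return speed_m_per_s * delay_s

highway_speed = 27.0  # m/s, roughly 97 km/h / 60 mph (assumed)
print(margin_lost_m(highway_speed, 1.0))   # 27.0 m per second of delay
print(margin_lost_m(highway_speed, 0.25))  # 6.75 m at a human-like 0.25 s delay
```

The gap between those two numbers is the practical stake of a fourfold difference in reaction time.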
Instead of working on the software, the team turned to hardware. Inspired by the way human eyes process movement, they developed an electronic replica that rapidly detects and isolates motion.

The machine eye’s artificial synapses connect transistors into networks that detect changes in the brightness of an image. Like biological neural circuits, these connections store a brief memory of the past before processing new inputs. Comparing the two allows them to track motion.

Combined with a popular vision algorithm, the system quickly separates moving objects, like walking pedestrians, from static objects, like buildings. By limiting its attention to motion, the machine eye needs far less time and energy to assess and respond to complex environments.
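In software terms, the "brief memory plus comparison" the synapses implement is analogous to differencing consecutive frames. A minimal sketch (illustrative only; the chip does this in analog hardware, and all names here are ours):

```python
import numpy as np

def motion_mask(prev_frame: np.ndarray, frame: np.ndarray,
                threshold: float = 0.1) -> np.ndarray:
    """Flag pixels whose brightness changed between the remembered
    and current frame -- a software stand-in for the synapses' comparison."""
    return np.abs(frame.astype(float) - prev_frame.astype(float)) > threshold

prev = np.zeros((4, 4))   # remembered frame: all dark
curr = prev.copy()
curr[1, 2] = 1.0          # one pixel brightens between frames
print(int(motion_mask(prev, curr).sum()))  # 1 -> a single pixel "moved"
```

Everything static cancels out; only change survives, which is what lets the rest of the pipeline ignore most of the scene.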
When tested on autonomous vehicles, drones, and robotic arms, the system sped up processing times by roughly 400 percent and, in most cases, surpassed the speed of human perception without sacrificing accuracy.

“These advancements empower robots with ultrafast and accurate perceptual capabilities, enabling them to handle complex and dynamic tasks more efficiently than ever before,” wrote the team.

A mere flicker in the corner of an eye captures our attention. We’ve evolved to be especially sensitive to movement. This perceptual superpower begins in the retina. The thin layer of light-sensitive tissue at the back of the eye is packed with cells fine-tuned to detect motion.

Retinal cells are a curious bunch. They store memories of previous scenes and spark with activity when something in our visual field shifts. The process is a bit like an old-school film reel: Rapid transitions between still frames lead to the perception of movement.
Every cell is tuned to detect visual changes in a particular direction—for example, left to right or top to bottom—but is otherwise dormant. These activity patterns form a two-dimensional neural map that the brain interprets as speed and direction within a fraction of a second.

“Biological vision excels at processing large volumes of visual information” by focusing only on motion, wrote the team. When driving across an intersection, our eyes intuitively zero in on pedestrians, cyclists, and other moving objects.

Computer vision takes a more mathematical approach. A popular type called optical flow analyzes differences between pixels across visual frames. The algorithm segments pixels into objects and infers movement based on changes in brightness. This approach assumes that objects maintain brightness as they move. A white dot, for example, remains a white dot as it drifts to the right, at least in simulations. Pixels near each other should also move in tandem as a marker for motion.
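The brightness-constancy assumption can be made concrete in a few lines: a bright dot that keeps its intensity while shifting can be located again by matching brightness between frames. This toy search is our own illustration, not the paper's optical-flow algorithm, but it recovers the displacement the same way in miniature:

```python
import numpy as np

frame1 = np.zeros(16); frame1[5] = 1.0   # a white dot at position 5
frame2 = np.zeros(16); frame2[7] = 1.0   # the same dot after drifting right by 2

# Try candidate shifts and keep the one that best preserves brightness.
shifts = list(range(-3, 4))
errors = [np.sum((np.roll(frame1, s) - frame2) ** 2) for s in shifts]
print(shifts[int(np.argmin(errors))])    # 2 -> the dot moved 2 pixels right
```

Real optical flow does this densely, per pixel region, which is exactly why it is expensive.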
Although inspired by biological vision, optical flow struggles in real-world scenarios. It’s an energy hog and can be laggy. Add in unexpected noise—like a snowstorm—and robots running optical flow algorithms will have trouble adapting to our messy world.

To get around these problems, Gao and colleagues built a neuron-inspired chip that dynamically detects regions of motion and then focuses an optical flow algorithm on only those areas.

Their initial design immediately hit a roadblock. Traditional computer chips can’t adjust their wiring. So the team fabricated a neuromorphic chip that, true to its name, computes and stores information at the same spot, much like a neuron processes data and retains memory.

Because neuromorphic chips don’t shuttle data from memory to processors, they’re far faster and more energy-efficient than classical chips. They outshine standard chips in a variety of tasks, such as sensing touch, detecting auditory patterns, and processing vision.
“The on-device adaptation capability of synaptic devices makes human-like ultrafast visual processing possible,” wrote the team.

The new chip is built from materials and designs commonly used in other neuromorphic chips. Similar to the retina, the array’s artificial synapses encode differences in brightness and remember these changes by adjusting their responses to subsequent electrical signals.

When processing an image, the chip converts the data into voltage changes, which only activate a handful of synaptic transistors; the others stay quiet. This means the chip can filter out irrelevant visual data and focus optical flow algorithms on regions with motion only.

In tests, the two-step setup boosted processing speed. When analyzing a movie of a pedestrian about to dash across a road, the chip detected their subtle body position and predicted what direction they’d run in roughly 100 microseconds—faster than a human. Compared to conventional computer vision, the machine eye roughly doubled the ability of self-driving cars to detect hazards in a simulation. It also improved the accuracy of robotic arms by over 740 percent thanks to better and faster tracking.
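The payoff of gating the expensive algorithm on motion regions is easy to quantify: the downstream work scales with the amount of motion rather than the frame size. A hypothetical sketch (our numbers and function names, not the paper's):

```python
import numpy as np

def active_fraction(prev: np.ndarray, curr: np.ndarray,
                    threshold: float = 0.1) -> float:
    """Fraction of pixels the downstream algorithm still has to process."""
    return float((np.abs(curr - prev) > threshold).mean())

prev = np.zeros((100, 100))
curr = prev.copy()
curr[40:45, 40:45] = 1.0   # a small 5x5 moving object in a static scene
print(active_fraction(prev, curr))   # 0.0025 -> only 0.25% of the frame
```

When only a sliver of the scene is moving, a motion-gated pipeline touches a tiny fraction of the pixels a frame-wide pass would.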
The system is compatible with computer vision algorithms beyond optical flow, such as the YOLO neural network that detects objects in a scene, making it adjustable for different uses.

“We do not completely overthrow the existing camera system; instead, by using hardware plug-ins, we enable existing computer vision algorithms to run four times faster than before, which holds greater practical value for engineering applications,” Gao told the South China Morning Post.

Dr. Shelly Xuelai Fan is a neuroscientist-turned-science-writer. She's fascinated with research about the brain, AI, longevity, biotech, and especially their intersection. As a digital nomad, she enjoys exploring new cultures, local foods, and the great outdoors.
Images (1):
| ABB Robotics announces partnership with NVIDIA in physical AI … | https://overbr.com.br/parcerias/abb-rob… | 1 | Mar 14, 2026 08:00 | active | |
ABB Robotics announces partnership with NVIDIA in physical AI - OverBR
URL: https://overbr.com.br/parcerias/abb-robotics-anuncia-parceria-com-a-nvidia-em-ia-fisica
Description: In addition, ABB Robotics is also evaluating integrating the NVIDIA Jetson edge computing platform into its OmniCore controller... Content:
Integration between ABB Robotics' RobotStudio® and NVIDIA Omniverse technologies will enable industrial simulations with up to 99% accuracy; the technology can cut production costs by up to 40% and speed up new product launches by up to 50%.

ABB Robotics today announces a partnership with NVIDIA to accelerate the adoption of physical AI in industry, integrating its RobotStudio® simulation platform with NVIDIA Omniverse technologies. More details about the collaboration will be presented at a virtual press conference this Monday, March 9, at 12 p.m. (Brasília time). Journalists can follow the event and access images and supporting materials, which will be available after the press conference.

The initiative will let manufacturers simulate robots in digital twins and use synthetic data to train artificial intelligence models before deployment in real environments, reducing the gap between simulation and practical application, known as the "sim-to-real gap."

"Today, using NVIDIA's accelerated computing and simulation technologies, we are eliminating the last barriers to making industrial and physical AI a reality at global scale," says Marc Segura, president of ABB Robotics. "For more than 50 years, ABB has led the evolution of industrial automation, and this collaboration marks a new step in bringing physical AI to industry," the executive adds.

The integration will make it possible to create physically accurate simulations and generate synthetic data to train robots across different industrial scenarios. With this, companies will be able to test and optimize production processes virtually before implementing them in factories. The resulting solution, called RobotStudio HyperReality, will first be made available to selected customers ahead of a global launch planned for the second half of 2026.
According to ABB, the technology could cut production-line setup time by up to 80 percent, lower costs by up to 40 percent by eliminating physical prototypes, and accelerate the launch of complex products, such as consumer electronics, by up to 50 percent.

"Integrating NVIDIA Omniverse technologies into RobotStudio brings advanced simulation and accelerated computing to ABB's unique virtual controller," says Deepu Talla, vice president of robotics and edge AI at NVIDIA. "This accelerates how manufacturers of all sizes bring complex products to market," he explains.

Among the first use cases of the technology is a pilot project with Foxconn, the world's largest contract electronics manufacturer. The company is using the solution to virtually train robots that assemble components in electronic devices, with synthetic data to refine production processes before implementation on the production line. According to Dr. Zhe Shi, Chief Digital Officer at Foxconn, the technology will enable greater precision and speed in consumer electronics production. "Precision is essential in electronics manufacturing and, until now, this level of fidelity simply wasn't possible in simulations and digital twins," he says.

Another company applying the technology is WORKR, a California-based robotic workforce company. It will use the platform to bring AI-based automation to small and midsize manufacturers in the United States. During NVIDIA GTC 2026, held March 16-19 in San Jose, California, WORKR will demonstrate robotic systems based on ABB technology, trained on synthetic data and capable of taking on new tasks within minutes. According to Ken Macken, WORKR's CEO and founder, the collaboration shows that advanced automation can be applied at companies of any size.
"Together with ABB and NVIDIA, we are showing that industrial AI can be deployed today and help manufacturers tackle challenges such as labor shortages," Macken adds.

ABB Robotics is also evaluating integrating the NVIDIA Jetson edge computing platform into its OmniCore controller, enabling real-time AI inference directly on the robots. ABB notes that it is the only robot maker with a virtual controller that runs the same firmware as the hardware, ensuring high fidelity between simulation and real-world performance. Combined with its Absolute Accuracy technology, which reduces positioning errors to about 0.5 millimeters, the solution aims to deliver high precision in industrial applications.

To present the partnership and the details of RobotStudio HyperReality, ABB Robotics held a virtual press conference on Monday, March 9, at 12 p.m. (Brasília time).

ABB Robotics is a leader in its sector, positioned at the center of structural and future automation trends. As previously communicated, there are limited business and technology synergies between ABB Robotics and the other divisions of the ABB Group, which have different demand and market characteristics. The ABB Robotics division has approximately 7,000 employees. With revenues of $2.3 billion in 2024, it accounted for about 7 percent of ABB Group revenues, with an Operational EBITA margin of 12.1 percent.

ABB is a global technology leader in electrification and automation, enabling a more sustainable and resource-efficient future. By combining its engineering and digitalization expertise, ABB helps industries run at high performance, becoming more efficient, productive, and sustainable. At ABB, we call this "Engineered to Outrun." The company has a history of more than 140 years and around 110,000 employees worldwide.
ABB's shares are listed on the SIX Swiss Exchange (ABBN) and Nasdaq Stockholm (ABB).
Images (1):
| Qrypt Releases Post-Quantum VPN for NVIDIA Jetson Robotics - Quantum … | https://quantumcomputingreport.com/qryp… | 1 | Mar 14, 2026 08:00 | active | |
Qrypt Releases Post-Quantum VPN for NVIDIA Jetson Robotics - Quantum Computing Report
URL: https://quantumcomputingreport.com/qrypt-releases-post-quantum-vpn-for-nvidia-jetson-robotics/
Description: Qrypt has launched a post-quantum secure VPN solution specifically designed for the NVIDIA Jetson Orin (and upcoming Jetson Thor) platforms to protect robotics data from "harvest now, decrypt later" Content:
Qrypt has launched a post-quantum secure VPN solution specifically designed for the NVIDIA Jetson Orin (and upcoming Jetson Thor) platforms to protect robotics data from “harvest now, decrypt later” (HNDL) attacks. Robotics systems, such as autonomous mobile robots (AMRs) and drones, often remain in the field for over a decade, making their sensor streams, navigation maps, and telemetry vulnerable to future quantum-enabled decryption. The solution utilizes a Hybrid PQC IPsec framework, allowing engineers to implement quantum-resilient key exchanges without sacrificing real-time performance or overhauling existing stacks.

The architecture integrates strongSwan 6.0 with liboqs for ML-KEM (Kyber) and the Qrypt BLAST plugin for quantum-secure key generation. To support this on embedded hardware, Qrypt engineered a custom Yocto Project distribution that upgrades the NVIDIA kernel from the stock 5.15 to the 6.6 LTS version, which is required for PQC Child SA rekeying support. This technical stack enables a high-throughput tunnel (benchmarked at 926 Mbps) with less than 1% overhead compared to classical encryption, ensuring that latency-sensitive robotics applications—like remote teleoperation or operator video—maintain high performance while achieving “intelligence-grade” security.

Beyond standard PQC algorithms, the integration of Qrypt BLAST provides a second layer of defense by replacing traditional key-distribution architectures. While standard Post-Quantum Cryptography still transmits keys over the network, BLAST allows endpoints to independently generate matching keys from quantum entropy sourced from NIST ESV-certified random number generators. This “turn the dial to maximum” security option eliminates the risk of key interception during transit and supports asynchronous key buffering, which helps the VPN maintain stability and low latency even under network jitter or intermittent cellular backhaul.
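As a rough illustration of what a hybrid key exchange looks like at the configuration level, a strongSwan 6.0 swanctl proposal can pair a classical group with an additional ML-KEM exchange (in the RFC 9370 multiple-key-exchange style). The fragment below is our sketch, not Qrypt's published configuration; the connection names and algorithm choices are placeholders, so consult the strongSwan and Qrypt documentation for the exact syntax.

```
# /etc/swanctl/swanctl.conf (illustrative fragment, hypothetical names)
connections {
    robot-tunnel {
        # Classical X25519 plus an additional ML-KEM-768 key exchange:
        # the session stays secure if either component remains unbroken.
        proposals = aes256gcm16-sha256-x25519-ke1_mlkem768
        children {
            telemetry {
                esp_proposals = aes256gcm16-x25519-ke1_mlkem768
            }
        }
    }
}
```

The hybrid construction is what makes the scheme safe against HNDL adversaries today while hedging against weaknesses in either primitive.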
Qrypt provides pre-built images and reproducible build instructions, allowing robotics engineers to deploy a post-quantum secure stack in approximately 40 minutes. Support for the next-generation NVIDIA Jetson Thor AGX is currently in development and targeted for a February 2026 release, which will further integrate with the platform’s improved TrustZone and hardware security modules. This initiative represents a critical step in future-proofing industrial infrastructure and ensuring that sensitive proprietary navigation models and facility data remain protected for their entire operational lifetime. For full technical configuration, kernel options, and the Jetson Yocto build repository, consult the official Qrypt blog post here and the technical BLAST integration documentation here.

March 13, 2026
Images (1):
| ABB Robotics aims to make physical AI usable in industry with Nvidia … | https://www.elektrotechnik.vogel.de/abb… | 1 | Mar 14, 2026 08:00 | active | |
ABB Robotics aims to make physical AI usable in industry with Nvidia
Description: By integrating the Nvidia Omniverse libraries into ABB's Robot Studio, the scaling of industrial production is expected to accelerate significantly. Content:
ABB Robotics has announced the integration of the Nvidia Omniverse libraries into its simulation and programming platform Robot Studio, paving the way for broad industrial use of physical AI. According to a company statement, the combination of ABB robotics software and physically accurate simulation should nearly close the existing "sim-to-real" gap between virtual development and real production.

By linking digital twins with realistic simulations, robots achieve, according to the company, up to 99 percent agreement between model and application. Developers can design robot processes entirely virtually, generate synthetic training data, and train AI models before commissioning. ABB relies on its Virtual Controller, which exactly replicates the real control software, and on Absolute Accuracy technology with positioning accuracy down to 0.5 millimeters.

A central element of the cooperation is Robot Studio Hyper Reality, due to be available from the second half of 2026. The platform is expected to cut setup times by up to 80 percent, reduce costs by up to 40 percent, and accelerate the market launch of new products by up to 50 percent. First pilot applications are already running, including at Foxconn in consumer electronics assembly, where robot processes are initially optimized entirely virtually. In parallel, ABB and Nvidia are evaluating the integration of edge AI via the Jetson platform into ABB controllers, to run AI functions directly on the robot.
At Nvidia GTC, robotics company WORKR is also showing applications that use ABB robots trained on synthetic data to give small and midsize manufacturers in particular access to automation without programming effort and to counter the skilled-labor shortage. (ID:50783089)
Images (1):
| Naver to develop Arabic-based LLM, expand AI cooperation with Saudi … | https://www.koreaherald.com/view.php?ud… | 1 | Mar 13, 2026 16:00 | active | |
Naver to develop Arabic-based LLM, expand AI cooperation with Saudi Arabia - The Korea Herald
URL: https://www.koreaherald.com/view.php?ud=20240913050188
Description: Naver Corp., the operator of South Korea's largest internet platform, has signed an initial agreement with Saudi Arabia's artificial intellige Content:
Published: Sept. 13, 2024 - 10:20

Naver Corp., the operator of South Korea's largest internet platform, has signed an initial agreement with Saudi Arabia's artificial intelligence (AI) agency to jointly develop an Arabic language-based large language model (LLM), company officials said Friday.

During the Global AI Summit hosted by the Saudi Data & AI Authority (SDAIA) in Saudi Arabia's capital of Riyadh earlier this week, Naver and the SDAIA signed the memorandum of understanding (MOU) to cooperate in various sectors, including AI, cloud computing, data centers, and robots, according to the officials.

Under the MOU, the two sides plan to jointly develop an Arabic LLM, along with technology solutions and services in those fields. SDAIA has been leading the Middle Eastern nation's ambitious plan of creating a technology-driven economy by 2030. Last year, Naver also struck a deal with the Saudi Arabian government to create a digital twin platform for Riyadh and four other Saudi cities. (Yonhap)
Images (1):
|
|||||
| 3 Robots walk into a room ... - RubyFlow | https://rubyflow.com/p/i57q9f-3-robots-… | 1 | Mar 13, 2026 16:00 | active | |
3 Robots walk into a room ... - RubyFlowURL: https://rubyflow.com/p/i57q9f-3-robots-walk-into-a-room- Description: I call my LLM-based service objects robots instead of agents. Why? Because robots do what you tell them. Agents have agency. That means they can make choices and do whatever they want to do. Like travel agents, real-estate agents, FBI agents. Robots are machines that follow instructions. That's what I want. My objects should follow my instructions, but what happens when you add a little bit of agency to 3 robots and put them into a room with a set of tools that allow them to communicate with each other through shared memory, broadcast message channels, and direct message channels? Then you tell all 3 robots to do the same thing? They become a self-organizing group. SOGs have agency. Content:
I call my LLM-based service objects robots instead of agents. Why? Because robots do what you tell them. Agents have agency. That means they can make choices and do whatever they want to do. Like travel agents, real-estate agents, FBI agents. Robots are machines that follow instructions. That's what I want. My objects should follow my instructions… but what happens when you add a little bit of agency to 3 robots, put them into a room with a set of tools that allow them to communicate with each other through shared memory, broadcast message channels, and direct message channels, and then tell all 3 robots to do the same thing? They become a self-organizing group. SOGs have agency. https://madbomber.github.io/blog/engineering/robotlab-and-the-writers-room/
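The post describes its setup only in prose. A minimal sketch of the pattern it names (three identically instructed robots coordinating through shared memory and message channels) might look like the following; all class and method names here are illustrative, not taken from the author's library:

```python
import queue

class Room:
    """Shared workspace: common memory plus broadcast and direct channels."""
    def __init__(self, names):
        self.memory = {}                                 # shared key-value memory
        self.broadcast = []                              # broadcast message log
        self.direct = {n: queue.Queue() for n in names}  # one inbox per robot

class Robot:
    def __init__(self, name, room):
        self.name, self.room = name, room

    def announce(self, text):
        self.room.broadcast.append((self.name, text))

    def tell(self, other, text):
        self.room.direct[other].put((self.name, text))

    def claim_one(self, tasks):
        # Self-organization via shared memory: the first robot to write a
        # task's key owns that task, so identical instructions diverge.
        for task in tasks:
            if task not in self.room.memory:
                self.room.memory[task] = self.name
                self.announce(f"taking {task}")
                return task
        return None

names = ["r1", "r2", "r3"]
room = Room(names)
robots = [Robot(n, room) for n in names]

# Every robot receives the same instruction; the group divides the work.
tasks = ["outline", "draft", "review"]
assignments = {r.name: r.claim_one(tasks) for r in robots}
print(assignments)  # each robot ends up with a different task
```

Sequential claiming stands in for whatever concurrency control real agents would need; with truly parallel robots, the shared-memory write would have to be atomic.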
Images (1):
|
|||||
| XGSynBot Debuts Z1 Wheeled Robot, Targeting the "Last Mile" of … | https://moneycompass.com.my/xgsynbot-de… | 1 | Mar 12, 2026 16:01 | active | |
XGSynBot Debuts Z1 Wheeled Robot, Targeting the "Last Mile" of Industrial Embodied AI - Money CompassDescription: Money Compass is one of the credible Chinese and English financial media in Malaysia with strong influence in Malaysia’s financial industry. As the winner of the SME Award in Malaysia for 5 consecutive years, we persistently propel the financial industry towards a mutually beneficial framework. Since 2004, with the dedication to advocating the public to practice financial planning in everyday life, Money Compass has accumulated a vast connection in ASEAN financial industries and garnered government agencies and corporate resources. At present, Money Compass is adjusting its pace to transform into Money Compass 2.0. Consolidating the existing connections and network, Money Compass Integrated Media Platform is founded, which is well grounded in Malaysia whilst serving the ASEAN region. The mission of the new Money Compass Integrated Media Platform is to become the financial freedom gateway to assist internet users enhance financial intelligence, create wealth opportunities and achieve financial freedom for everyone! Content:
BEIJING and SAN FRANCISCO, March 10, 2026 /PRNewswire/ — On March 5, 2026, XGSynBot, a pioneer in embodied AI, hosted its 2026 product dual-city launch event themed “More Than One Answer” in both Silicon Valley and Beijing. The company officially debuted the Z1 wheeled humanoid robot with the world’s first Modular End-Effector Quick-Change System and self-developed XG High-Performance Joint Modules. Beyond the product, XGSynBot announced “STARFIRE”, a global ecosystem cooperation strategy designed to accelerate the transition of embodied AI into real, unpredictable, heavy-duty industrial production environments. The event captured the attention of numerous strategic partners and investment institutions, generating potential interest and orders worth tens of millions. The Automation Paradox in Manufacturing The global manufacturing sector currently faces a double bind: high-cost automation that remains frustratingly rigid. While the industry has seen a surge in agile humanoid prototypes, few can withstand the 24/7 rigors, oil-splattered environments, and micron-level precision required in actual factories. “We’ve built the world’s most flexible robots over the past three years, yet they remain trapped in the world’s most rigid processes,” said the CEO of XGSynBot. “The Z1 isn’t a ‘mascot’ built for the lab; it’s a ‘blue-collar worker’ designed for the real world from day one.” A Robot Built for Factory Production At the core of Z1 is a set of hardware and software architecture decisions intended to prioritize reliability and adaptability in production environments. Modular Quick-Change System: Breaking the limitation of single-purpose robotics, Z1 can switch between different end-effectors—such as grippers, welders, or suction cups—in under 6 seconds, enabling one robot to cover multiple specialized workstations.
XG High-Performance Joint Modules: By integrating motors, reducers, and sensors into a single unit, the design significantly improves joint precision, stability, and structural rigidity while eliminating the signal interference and latency common in distributed architectures. In practical terms, this means the system is more stable, faster, and built to withstand demanding industrial use. The “Dual-System” Central Brain: Inspired by human cognition, the Z1 features a “Slow System” for high-level task planning and natural language understanding (reasoning), and a “Fast System” operating at 100Hz for real-time motor control and tactile feedback (reflex). This allows the robot to understand complex human commands while maintaining millisecond-level stability on the assembly line. STARFIRE: Building an Embodied AI Cooperation Ecosystem Alongside the launch of Z1, XGSynBot announced Project STARFIRE, an initiative aimed at building an open cooperation ecosystem around embodied AI. The program will focus on three areas: Scenario Co-Innovation: deploying large-scale solutions across 3C electronics, automotive, and renewable energy sectors with global industry partners. Product Synergy: opening hardware interfaces to third-party tool and component manufacturers to create a “plug-and-play” industrial ecosystem. Open-Sourcing: incrementally open-sourcing proprietary datasets, scenario models, and SDKs, collaborating with academic and industry developers to optimize embodied AI together. The Bigger Picture for Embodied AI The launch comes at a moment when embodied AI is attracting significant global attention and investment, with startups and large tech companies alike racing to bring intelligent robots into physical workplaces. Yet despite rapid progress in AI models, commercial deployment remains the industry’s biggest hurdle.
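The press release describes the dual-system design only at a high level. As a rough illustration of the dual-rate pattern it names (a slow planner that updates goals infrequently, a fast loop that tracks them at a much higher tick rate), here is a hedged Python sketch; the rates, gain, and command vocabulary are invented for the example and are not XGSynBot's implementation:

```python
class SlowSystem:
    """High-level planner: runs infrequently, turns a command into a setpoint."""
    def plan(self, command):
        # Stand-in for language understanding / task planning.
        return {"move to bin": 1.0, "home": 0.0}.get(command, 0.0)

class FastSystem:
    """Reflex loop: runs every tick, tracks the setpoint with a P controller."""
    def __init__(self, gain=0.2):
        self.gain = gain
        self.position = 0.0

    def step(self, setpoint):
        error = setpoint - self.position
        self.position += self.gain * error  # proportional correction each tick
        return error

slow, fast = SlowSystem(), FastSystem()
setpoint = 0.0
FAST_HZ, SLOW_EVERY = 100, 50   # fast loop at 100 ticks/s; replan every 0.5 s

for tick in range(200):         # 2 simulated seconds
    if tick % SLOW_EVERY == 0:
        setpoint = slow.plan("move to bin")  # slow system updates the goal
    fast.step(setpoint)                      # fast system runs every tick

print(round(fast.position, 3))
```

The point of the split is that the fast loop never blocks on planning: even if `plan` took many milliseconds, the 100Hz controller would keep tracking the last known setpoint.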
By focusing on durability, modularity, and ecosystem development, XGSynBot is betting that the next wave of robotics innovation will be defined less by flashy prototypes and more by machines that can quietly survive the realities of manufacturing production. About XGSynBot XGSynBot is an innovative technology company working at the cutting edge of AI and robotics. Guided by the core philosophy “Evolve from Unified Architecture. Grow beyond All Limits”, it aims to empower industries worldwide, redefine human-robot collaboration, and pioneer a new era of productivity. By developing embodied AI robots, XGSynBot is actively advancing robots from single-task to cross-scenario applications.
Images (1):
|
|||||
| World Internet Conference hosts forum on embodied AI in Spain … | https://bubblear.com/world-internet-con… | 1 | Mar 12, 2026 16:01 | active | |
World Internet Conference hosts forum on embodied AI in Spain – The BubbleURL: https://bubblear.com/world-internet-conference-hosts-forum-on-embodied-ai-in-spain/28298/ Content:
BEIJING, March 7, 2026 /PRNewswire/ — A report by CRI Online: The World Internet Conference Specialized Committee on Artificial Intelligence (the WIC SC on AI) hosted a special forum under the theme “Embodied AI: Leading a New Paradigm of AI Development” on March 3 in Barcelona, during the Mobile World Congress. Notable figures addressing the event included Francis Gurry, vice-chair of the WIC and former director-general of the World Intellectual Property Organization; Mohamed Ben Amor, director general of the Arab Information and Communication Technologies Organization; and Lara Dewar, chief marketing officer of the GSMA. Others who gave speeches were Schahram Dustdar, co-chair of the WIC SC on AI, a member of Academia Europaea and president of the International Artificial Intelligence Industry Alliance; Zhang Dong, executive vice president of China Mobile; Nakul Duggal, executive vice president and group general manager at Qualcomm Technologies, Inc.; and Wang Xiang, senior vice president at ZTE. The forum also featured a roundtable moderated by John Higgins, co-lead of the Standards Program of the WIC SC on AI and chairman of the International AI Governance Association. Participants included Liu Dong, director of China Future Internet Engineering Center and Internet Hall of Fame inductee; Jayne Stancavage, vice president of policy and regulatory affairs at Intel Corporation; Emanuela Girardi, president of ADRA; and Qu Zhenbin, chief solutions architect for AI+ at Alibaba Cloud. They discussed the opportunities and challenges emerging in the embodied AI industry. The participants agreed that embodied AI is rapidly transitioning from research and development to large-scale implementation and is now a key focus in AI development. They stressed the importance of deepening international cooperation to foster a healthy, diverse and inclusive global industry ecosystem and build a secure collaborative governance framework.
These efforts are crucial to developing safe and reliable embodied AI and ensuring technological progress benefits humanity, they noted. They expressed the hope that the WIC SC on AI would continue to play a vital role in building global consensus and creating a shared, intelligent future. The forum was attended by over 100 representatives, including WIC SC on AI members, WIC members, and representatives from international organizations, government agencies, and institutions in the embodied AI field. View original content: https://www.prnewswire.com/news-releases/world-internet-conference-hosts-forum-on-embodied-ai-in-spain-302707539.html SOURCE CRI Online
Images (1):
|
|||||
| World Internet Conference hosts forum on embodied AI in Spain … | https://moneycompass.com.my/world-inter… | 1 | Mar 12, 2026 16:01 | active | |
World Internet Conference hosts forum on embodied AI in Spain - Money CompassURL: https://moneycompass.com.my/world-internet-conference-hosts-forum-on-embodied-ai-in-spain/ Content:
BEIJING, March 8, 2026 /PRNewswire/ — A report by CRI Online: The World Internet Conference Specialized Committee on Artificial Intelligence (the WIC SC on AI) hosted a special forum under the theme “Embodied AI: Leading a New Paradigm of AI Development” on March 3 in Barcelona, during the Mobile World Congress. Notable figures addressing the event included Francis Gurry, vice-chair of the WIC and former director-general of the World Intellectual Property Organization; Mohamed Ben Amor, director general of the Arab Information and Communication Technologies Organization; and Lara Dewar, chief marketing officer of the GSMA. Others who gave speeches were Schahram Dustdar, co-chair of the WIC SC on AI, a member of Academia Europaea and president of the International Artificial Intelligence Industry Alliance; Zhang Dong, executive vice president of China Mobile; Nakul Duggal, executive vice president and group general manager at Qualcomm Technologies, Inc.; and Wang Xiang, senior vice president at ZTE. The forum also featured a roundtable moderated by John Higgins, co-lead of the Standards Program of the WIC SC on AI and chairman of the International AI Governance Association. Participants included Liu Dong, director of China Future Internet Engineering Center and Internet Hall of Fame inductee; Jayne Stancavage, vice president of policy and regulatory affairs at Intel Corporation; Emanuela Girardi, president of ADRA; and Qu Zhenbin, chief solutions architect for AI+ at Alibaba Cloud. They discussed the opportunities and challenges emerging in the embodied AI industry. The participants agreed that embodied AI is rapidly transitioning from research and development to large-scale implementation and is now a key focus in AI development. They stressed the importance of deepening international cooperation to foster a healthy, diverse and inclusive global industry ecosystem and build a secure collaborative governance framework.
These efforts are crucial to developing safe and reliable embodied AI and ensuring technological progress benefits humanity, they noted. They expressed the hope that the WIC SC on AI would continue to play a vital role in building global consensus and create a shared, intelligent future. The forum was attended by over 100 representatives, including WIC SC on AI members, WIC members and representatives from international organizations, government agencies and institutions in the embodied AI field.
Images (1):
|
|||||
| Embodied AI Is Leaving the Screen and Entering the Real … | https://www.cmswire.com/digital-experie… | 1 | Mar 12, 2026 16:01 | active | |
Embodied AI Is Leaving the Screen and Entering the Real WorldDescription: As AI moves into robots and physical systems, leaders must rethink operations, workforce models and how intelligence shapes real-world work. Content:
For the past few years, many of our conversations about artificial intelligence (AI) have happened on screens. We've talked to AI, prompted it, queried it and watched it generate text, images and code at remarkable speed. But something important is changing. AI is leaving the chat and entering the physical world. Intelligence is increasingly stepping out of two-dimensional interfaces and into environments where it can move, sense, lift, navigate and assist. This shift toward embodied AI may be one of the clearest signals yet of where the next frontier of AI value could emerge. Embodied AI refers to AI paired with a physical form. That includes robots, autonomous machines and smart systems that can perceive their surroundings, reason about what they're sensing, and take autonomous action in the real world. Embodied AI is already showing up across warehouses, hospitals, factories, logistics networks and agricultural operations. These environments carry real constraints, safety considerations and economic consequences, making them a meaningful proving ground for what comes next. Once AI can see, move and act, the conversation around value, risk and advantage begins to shift. Productivity is no longer limited to faster analysis or better recommendations and instead becomes about physical outcomes such as throughput, safety, uptime and resilience. One way to understand this moment is to recognize that AI is increasingly becoming an ingredient rather than a standalone technology.
Many of the more meaningful breakthroughs today are not "AI-only" solutions but "AI and…" – new innovations where AI merges with other technologies. Embodied AI is a key proof point for how machine intelligence today is being woven into the fabric of how work happens, not just how decisions are made. This shift can also reframe AI strategy. Instead of asking where AI can be deployed, leaders may find more impact by asking where intelligence should live inside their operations. In many cases, the answers point beyond software teams and into the physical core of the business. Robotics is one of the clearest signals of embodied AI's momentum. Robots themselves are not new, but the intelligence inside them — as well as their physical abilities to replicate finer human motor skills — is advancing rapidly. Progress in perception, multimodal models, reinforcement learning and edge computing is enabling machines to operate in less structured environments and adapt to variability. That evolution is moving robotics away from rigid automation toward systems that can respond to change and work more fluidly alongside people. What stands out is the level of sustained investment behind these capabilities. According to the International Federation of Robotics, 542,000 industrial robots were installed worldwide in 2024 – more than double the number a decade ago – suggesting that organizations are moving beyond experimentation and into execution, even if adoption varies by sector and use case. Healthcare offers a particularly clear example of this shift. From surgical robotics and rehabilitation systems to autonomous logistics and patient-support technologies, embodied intelligence is increasingly present in clinical and operational settings. These environments demand high levels of trust, precision, and reliability, which can slow adoption, but also sharpen the value proposition when systems perform as intended. 
Recent industry showcases have highlighted the breadth of innovation underway, and while many solutions are still maturing, the range of use cases suggests embodied AI could play a meaningful role in addressing workforce shortages, operational strain and customer experience challenges over time. Unlike software-based AI, embodied AI often scales more slowly. Hardware constraints, integration complexity, safety requirements and regulatory considerations introduce friction that can temper deployment speed. Yet slower scale does not necessarily mean lower impact. Embodied AI influences parts of the business that software AI rarely touches, including capital investment, labor models, and facility design. Success is not measured solely by efficiency gains. It may appear in the form of fewer workplace injuries, better asset utilization, or more resilient supply chains. When embodied AI is treated as a future concern or delegated entirely to technical teams, leaders may underestimate what is already beginning to shift. The risk is not falling behind on experimentation but overlooking how physical intelligence could reshape the operating model itself. Leaders who wait may miss: Embodied AI is moving beyond software interfaces into physical systems that sense, move and act. The following examples illustrate how machine intelligence is beginning to reshape operations across multiple sectors. Embodied AI signals that AI's impact extends well beyond digital productivity gains. As intelligence moves into physical systems, it begins to influence operations, supply chains, safety protocols, labor dynamics and long-term capital planning. That raises new questions around governance and responsibility — not just about what AI decides, but about what it does in the world. For many leaders, this moment is less about predicting exactly how embodied AI will unfold and more about recognizing the signal. AI is no longer confined to models and interfaces. 
It is becoming part of the physical fabric of work. Organizations that take the time to design for that reality, thoughtfully and deliberately, can be better positioned to navigate both the opportunities and the risks that follow. Understanding how agentic customer experience and agentic marketing are evolving alongside physical AI systems will be essential for leaders seeking to integrate intelligence across both digital and physical touchpoints. As the US and global chief AI engineering officer, Scott is in charge of PwC’s cutting-edge technology development in areas that are essential for future innovation development. With 30 years of emerging technology and AI experience, he has helped clients transform their customer experience and enhance digital operations across all aspects of their business. For over two decades CMSWire, produced by Simpler Media Group, has been the world's leading community of customer experience professionals. Today the CMSWire community consists of over 5 million influential customer experience, customer service and digital experience leaders, the majority of whom are based in North America and employed by medium to large organizations. Our sister community, Reworked, covers employee experience news and digital workplace news and gathers the world's leading employee experience and digital workplace professionals. And our newest community, VKTR, covers enterprise AI news and is home for AI-focused professionals building agentic AI, prompt engineering and enterprise AI skills, and tracking the top AI chip companies.
Images (1):
|
|||||
| ZTE CSO Wang Xiang highlights embodied AI at MWC 2026 … | https://www.theregister.com/2026/03/11/… | 1 | Mar 12, 2026 16:01 | active | |
ZTE CSO Wang Xiang highlights embodied AI at MWC 2026 • The RegisterURL: https://www.theregister.com/2026/03/11/zte-cso-highlights-embodied-ai-pathways-mwc-2026/ Description: Partner Content: Wang Xiang details ZTE’s Point-Line-Plane approach to accelerate embodied AI from demonstration to productivity Content:
Partner Content ZTE Corporation (0763.HK / 000063.SZ), a leading global provider of integrated information and communication technology solutions, announced that Wang Xiang, Senior Vice President and Chief Strategy Officer, delivered a keynote at the World Internet Conference (WIC) during MWC Barcelona 2026, outlining ZTE's vision for embodied AI and its industrialization pathway. In his keynote speech titled "Human–Machine Symbiosis, Intelligence Igniting a New Journey", Wang Xiang presented ZTE's systematic approach to embodied AI, built on the synergy of Sensing, Communication, Computing, AI, and Control and the "Point-Line-Plane" deployment path pioneered at the company's Binjiang manufacturing base. He underscored how cost and efficiency optimization are driving the industry toward scaled commercialization. Wang Xiang pointed out that AI is undergoing a critical paradigm shift from generative AI to physical AI, evolving from "disembodied" intelligence in the digital world to "embodied" intelligence deeply integrated with the physical world. Embodied AI, as the optimal vehicle for this transformation, is advancing beyond technical validation into scaled commercialization, emerging as the core driver of the physical AI era. It is not only the bridge connecting digital intelligence with the physical environment, but also a source of enormous value across industrial manufacturing, logistics, services, and other sectors, with the market projected to exceed USD 40 billion by 2030. Yet the path to scaled commercialization is not without obstacles. Wang Xiang noted that the complexity and precision of industrial scenarios present three major hurdles: the difficulty of agile single-point deployment in dynamic workstations, the challenge of coordinating heterogeneous agents into efficient production lines, and the lack of seamless integration between intelligent production lines and enterprise business operations.
In practice, this means that enabling a single robot to operate quickly and flexibly in dynamic environments remains prohibitively costly; coordinating heterogeneous clusters of robots efficiently faces real-time challenges in communication and scheduling; and allowing intelligent production line data to truly drive business decision-making is hindered by significant delays and breakpoints. Only by overcoming these barriers can embodied AI evolve from "toy" to "tool", from "demonstration" to "productivity", and achieve the leap from "point" to "chain". In response to these systemic challenges, Wang Xiang explained that ZTE has taken its Binjiang manufacturing base—China's first "5G Fully Connected Factory"—as a practical blueprint to explore the "Point-Line-Plane" pathway for advancing Embodied AI applications. Through functional optimization and flexible combination, ZTE extends deployment from a single industry to replication across many sectors, thereby accelerating the commercialization and maturity of Embodied AI. At the "point" level, ZTE synergizes Sensing, Communication, Computing, AI, and Control to create embodied single-unit applications. With full-stack solutions integrating in-house R&D and ecosystem collaboration, these five core elements are standardized, modularized, and made plug-and-play, enabling rapid assembly of versatile embodied agents for key scenarios, addressing specific tasks such as material handling, equipment inspection, and quality testing. At the same time, by combining the proprietary automotive-grade SoC Lanyue A1 with the industrial-grade OS OpenNewStar, ZTE achieves SoC & OS synergy, delivering 4x faster picking speed and advanced motion control capabilities to ensure deterministic and efficient perception-decision-execution loops. In addition, the full-stack simulation platform helps overcome data scarcity, enabling algorithms to migrate rapidly from simulation to real-world deployment. 
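ZTE's claim that the five core elements are "standardized, modularized, and made plug-and-play" is abstract. In software terms, that kind of modularity usually means components implementing fixed interfaces so they can be swapped without touching the agent's perception-decision-execution loop. A hypothetical Python sketch of the idea (none of these interfaces or names are ZTE's actual APIs):

```python
from typing import Protocol

class Sensor(Protocol):
    def read(self) -> float: ...

class Controller(Protocol):
    def act(self, observation: float) -> str: ...

class Camera:
    """Stand-in sensing module returning a detection confidence."""
    def read(self) -> float:
        return 0.9

class PickController:
    """Stand-in control module mapping observations to actions."""
    def act(self, observation: float) -> str:
        return "pick" if observation > 0.5 else "wait"

class InspectController:
    def act(self, observation: float) -> str:
        return "flag" if observation > 0.8 else "pass"

class Agent:
    """An embodied agent assembled from interchangeable modules."""
    def __init__(self, sensor: Sensor, controller: Controller):
        self.sensor, self.controller = sensor, controller

    def tick(self) -> str:
        # One deterministic perception -> decision -> execution cycle.
        return self.controller.act(self.sensor.read())

# The same chassis covers material handling or quality testing by swapping modules.
picker = Agent(Camera(), PickController())
inspector = Agent(Camera(), InspectController())
print(picker.tick(), inspector.tick())
```

Because both agents share the `Agent` loop, only the swapped module differs between the handling and inspection tasks, which is the economic point of a quick-change, plug-and-play architecture.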
At the "line" level, the company leverages its Co-Sight Super AI Agent as a powerful orchestration tool to seamlessly connect the intelligent nodes with diverse functions. Drag-and-drop orchestration enables minute-level application building, solving collaboration challenges among heterogeneous agents and forming production lines tailored to specific products. At the "plane" level, ZTE relies on its Digital Nebula 4.0 platform, which provides rich large-model capabilities, one-stop development tools, and a transactional architecture. By aggregating more than 100 applications, it enables integrated factory business flow design that links production, logistics, quality and other end-to-end data flows. This allows intelligent production lines to be deeply embedded into enterprise workflows, driving real-time decision-making. This model has already delivered measurable impact. At the Binjiang base, per-capita output value rose 81%, production line adjustment cycles shortened by 20%, and delivery cycles fell 30%. The approach has since been replicated across more than 18 industries—including metallurgy, mining, transportation, power, and ports—working with over 1,000 partners to create 24+ product categories and 100+ scenario applications. In doing so, ZTE has embedded embodied intelligence across thousands of sectors, accelerating its transformation from demonstration to productivity. Wang Xiang highlighted that to support this pathway, ZTE has consistently built core competitiveness around five dimensions: Sensing, Communication, Computing, AI, and Control. The company adheres to a balanced approach of proprietary R&D and ecosystem partnership, combining ultimate synergy with open innovation. In sensing and communication, ZTE leverages an internet platform + MEC to widely integrate ecosystem sensors, while advanced 5G/5G-A, MEC, and OTN technologies deliver end-to-end deterministic connectivity, building the "highways" and "neural networks" for agents. 
In computing and AI, ZTE provides an open computing hardware foundation that supports diverse, on-demand compute, achieving five-dimensional synergy across chip, standard, architecture, algorithm, and delivery. The "1+N+X" large-model portfolio spans foundational models, domain models, and applications, lowering adoption barriers across industries. In control and orchestration, hardware-software synergy strengthens motion control, while the Co-Sight Super AI Agent and Digital Nebula platform enable rapid orchestration and seamless integration of embodied intelligence into enterprise operations.

Wang Xiang stressed that the advancement of embodied AI is not the endeavor of a single enterprise but a collective effort across the industry. Positioned as both a core technology innovator and an ecosystem enabler, ZTE will continue to drive independent innovation in key technologies while promoting standardized hardware, platform software, modular solutions, and an open ecosystem. Working hand-in-hand with global partners, ZTE is committed to strengthening foundational technologies, fostering open collaboration, and ushering in a new journey of human-machine symbiosis to co-create a brighter digital future.

Contributed by ZTE. The Register © 1998–2026. All rights reserved.
Images (1):
|
|||||
| Deep Reinforcement Learning — DQN (Part 2) | https://medium.com/@ojescobar14/deep-re… | 0 | Mar 11, 2026 16:00 | active | |
Deep Reinforcement Learning — DQN (Part 2)URL: https://medium.com/@ojescobar14/deep-reinforcement-learning-dqn-part-2-f34e016d40b7 Description: Deep Reinforcement Learning — DQN (Part 2) Preface: As I started to write the next article in this series, one thing became apparent: giving background and ex... Content: |
|||||
| How can robots acquire skills through interactions with the physical … | https://robohub.org/how-can-robots-acqu… | 1 | Mar 11, 2026 16:00 | active | |
How can robots acquire skills through interactions with the physical world? An interview with Jiaheng Hu - RobohubContent:
One of the key challenges in building robots for household or industrial settings is the need to master the control of high-degree-of-freedom systems such as mobile manipulators. Reinforcement learning has been a promising avenue for acquiring robot control policies; however, scaling to complex systems has proved tricky. In their work SLAC: Simulation-Pretrained Latent Action Space for Whole-Body Real-World RL, Jiaheng Hu, Peter Stone and Roberto Martín-Martín introduce a method that renders real-world reinforcement learning feasible for complex embodiments. We caught up with Jiaheng to find out more.

This paper is about how robots (in particular, household robots like mobile manipulators) can autonomously acquire skills by interacting with the physical world, i.e. real-world reinforcement learning. Reinforcement learning (RL) is a general learning framework for learning from trial-and-error interaction with an environment, and it has huge potential to allow robots to learn tasks without humans hand-engineering the solution. RL for robotics is a very exciting field, as it can open possibilities for robots to self-improve in a scalable way, toward the creation of general-purpose household robots that can assist people in our everyday lives.

Previously, most of the successful applications of RL to robotics were done by training entirely in simulation and then deploying the policy in the real world directly (i.e. zero-shot sim2real). However, such a method has big limitations. On one hand, it is not very scalable, as you need to create task-specific, high-fidelity simulation environments that closely match the real-world environment you want to deploy the robot in, and this can often take days or months for each and every task. On the other hand, some tasks are actually very hard to simulate, as they involve deformable objects and contact-rich interactions (for example, pouring water, folding clothes, or wiping a whiteboard).
For these tasks, the simulation is often quite different from the real world. This is where real-world RL comes into play: if we can allow a robot to learn by directly interacting with the physical world, we don't need a simulator anymore. However, while several attempts have been made toward realizing real-world RL, it is actually a very hard problem, for two reasons:

1. Sample inefficiency: RL requires a lot of samples (i.e. interactions with the environment) to learn good behavior, which are often impossible to collect in large quantities in the real world.
2. Safety issues: RL requires exploration, and random exploration in the real world is often very dangerous. The robot can break itself and will never be able to recover from that.

So, creating high-fidelity simulations is very hard, and directly learning in the real world is also really hard. What should we do? The key idea of SLAC is that we can use a low-fidelity simulation environment to assist subsequent real-world RL. Specifically, SLAC implements this idea in a two-step process. In the first step, SLAC learns a latent action space in simulation via unsupervised reinforcement learning. Unsupervised RL is a technique that allows the robot to explore a given environment and learn task-agnostic behaviors; in SLAC, we design a special unsupervised RL objective that encourages these behaviors to be safe and structured. In the second step, we treat these learned behaviors as the new action space of the robot, and the robot does real-world RL for downstream tasks, such as wiping whiteboards, by making decisions in this new action space. Importantly, this method allows us to circumvent the two biggest problems of real-world RL: we don't have to worry about safety issues, since the new action space is pretrained to always be safe, and we can learn in a sample-efficient way, because our new action space is trained to be very structured. The robot carrying out the task of wiping a whiteboard.
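The two-step structure described above can be caricatured in a few lines of Python. This is a toy sketch, not the paper's method: the "pretrained" decoder is just a fixed random linear map squashed by tanh, and the downstream "RL" is a random search, with all dimensions, rewards, and names invented for illustration. The point is only the interface: the downstream learner makes decisions in a small latent space, never in raw joint space.

```python
import numpy as np

rng = np.random.default_rng(0)

class LatentActionDecoder:
    """Stage-1 stand-in: maps a small latent vector to whole-body joint commands.

    In SLAC this mapping is pretrained in simulation with unsupervised RL so the
    decoded behaviors are safe and structured; here it is a fixed random linear
    map, purely for illustration.
    """
    def __init__(self, latent_dim=4, joint_dim=19):
        self.W = rng.normal(scale=0.1, size=(joint_dim, latent_dim))

    def decode(self, z):
        # tanh bounds every joint command: a crude proxy for "safe by construction"
        return np.tanh(self.W @ z)

def downstream_rl(decoder, episodes=200, latent_dim=4):
    """Stage-2 stand-in: random search over the latent space.

    A real system would run an off-policy RL algorithm whose action space is z,
    so the agent never explores raw joint space directly.
    """
    best_z, best_r = None, -np.inf
    for _ in range(episodes):
        z = rng.normal(size=latent_dim)
        q = decoder.decode(z)               # 4 numbers decide all 19 joints
        r = -float(np.sum((q - 0.3) ** 2))  # toy reward: reach a target posture
        if r > best_r:
            best_z, best_r = z, r
    return best_z, best_r

decoder = LatentActionDecoder()
z_star, r_star = downstream_rl(decoder)
print(z_star.shape)
```

Note how the sample-efficiency argument shows up even in the sketch: the search happens over 4 latent dimensions rather than 19 joint dimensions, and every decoded command stays bounded.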
We test our method on a real Tiago robot – a high-degree-of-freedom, bi-manual mobile manipulator – on a series of very challenging real-world tasks, including wiping a large whiteboard, cleaning a table, and sweeping trash into a bag. These tasks are challenging in three respects:

1. They are visuo-motor tasks that require processing high-dimensional image information.
2. They require whole-body motion of the robot (i.e. controlling many degrees of freedom at the same time).
3. They are contact-rich, which makes them hard to simulate accurately.

On all of these tasks, our method allows us to learn high-performance policies (>80% success rate) within an hour of real-world interaction. By comparison, previous methods simply cannot solve these tasks, and often risk breaking the robot. So to summarize: previously it was simply not possible to solve these tasks via real-world RL, and our method has made it possible.

I think there is still a lot more to do at the intersection of RL and robotics. My eventual goal is to create truly self-improving robots that can learn entirely by themselves without any human involvement. More recently, I've been interested in how we can leverage foundation models such as vision-language models (VLMs) and vision-language-action models (VLAs) to further automate the self-improvement loop.

Jiaheng Hu is a 4th-year PhD student at UT-Austin, co-advised by Prof. Peter Stone and Prof. Roberto Martín-Martín. His research interest is in robot learning and reinforcement learning, with the long-term goal of developing self-improving robots that can learn and adapt autonomously in unstructured environments. Jiaheng's work has been published at top-tier robotics and ML venues, including CoRL, NeurIPS, RSS, and ICRA, and has earned multiple best-paper nominations and awards. During his PhD, he interned at Google DeepMind and Ai2, and is a recipient of the Two Sigma PhD Fellowship.
SLAC: Simulation-Pretrained Latent Action Space for Whole-Body Real-World RL, Jiaheng Hu, Peter Stone, Roberto Martín-Martín.
Images (1):
|
|||||
| Using Adversarial Reinforcement Learning to Improve the Resilience of Human-Robot … | https://ifip.hal.science/hal-05540657v1 | 1 | Mar 11, 2026 16:00 | active | |
Using Adversarial Reinforcement Learning to Improve the Resilience of Human-Robot Collaboration in Industrial Assembly - IFIP Open Digital LibraryURL: https://ifip.hal.science/hal-05540657v1 Description: The paper proposes a novel approach to enhance the resilience of mutual collaborative activity between humans and robots in industrial assembly tasks. The approach exploits Adversarial Reinforcement Learning (ARL) to enable a robot to learn an assembly policy that is robust against human mistakes. The adversary can represent various sources of uncertainty or disturbance in the environment. By learning from adversarial feedback, the agent can improve its performance and adaptability in challenging scenarios. The paper applies ARL to the execution of the assembly task sequence. The robot acts as one agent and learns how to assist the human partner during the assembly. The agent simulating the human partner acts as the adversary and deliberately introduces mistakes during the assembly process. The robot also learns how to cope with different levels of human competence and cooperation by adjusting its own behaviour accordingly. The paper evaluates the proposed approach through experiments reproducing complex assembly sequences and compares it with baseline methods that use conventional optimization algorithms. The results show that ARL does not outperform conventional optimization algorithms in terms of task completion time but guarantees robustness against human mistakes. The paper also discusses the implications for human-robot collaboration and suggests future directions for research. Content:
Submitted on: Friday, March 6, 2026, 4:42:09 PM. Last modified on: Friday, March 6, 2026, 4:50:40 PM. https://ifip.hal.science/hal-05540657
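The trade-off the abstract reports, where the robust policy is not the fastest but survives human mistakes, can be sketched in a toy Python example. This is a caricature under strong assumptions: the "adversary" here is a fixed mistake-injection process rather than a learning agent, each assembly step is treated as an independent bandit, and every reward number and probability is invented for illustration, not taken from the paper.

```python
import random

random.seed(0)

N_STEPS, N_ACTIONS = 5, 3   # 5 assembly steps, 3 candidate robot actions per step
P_MISTAKE = 0.3             # how often the adversarial "human" errs (invented)

# Per-step action-value estimates and visit counts
Q = [[0.0] * N_ACTIONS for _ in range(N_STEPS)]
n = [[0] * N_ACTIONS for _ in range(N_STEPS)]

for episode in range(3000):
    for step in range(N_STEPS):
        a = random.randrange(N_ACTIONS)           # uniform exploration
        human_errs = random.random() < P_MISTAKE  # adversary injects a mistake
        if human_errs:
            # Only the cautious "assist" action (a == 0) recovers from a mistake
            r = 1.0 if a == 0 else -1.0
        else:
            # The other actions finish faster when the human is reliable
            r = 0.5 if a == 0 else 1.0
        # Incremental Monte Carlo estimate of each action's value
        n[step][a] += 1
        Q[step][a] += (r - Q[step][a]) / n[step][a]

policy = [max(range(N_ACTIONS), key=lambda i: Q[s][i]) for s in range(N_STEPS)]
print(policy)  # the cautious "assist" action should dominate at every step
```

With these numbers the cautious action has expected value 0.65 versus 0.4 for the faster actions, so the learned policy trades raw completion speed for robustness, mirroring the qualitative result the abstract describes.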
Images (1):
|
|||||
| Xiaomi Integrates Humanoid Robots Into Manufacturing Lines | Ubergizmo | https://www.ubergizmo.com/2026/03/xiaom… | 1 | Mar 10, 2026 08:00 | active | |
Xiaomi Integrates Humanoid Robots Into Manufacturing Lines | UbergizmoURL: https://www.ubergizmo.com/2026/03/xiaomi-robots-manufacturing/ Description: Xiaomi has officially transitioned its humanoid robot project from laboratory prototypes to real-world industrial applications. According to CEO Lei... Content:
Xiaomi has officially transitioned its humanoid robot project from laboratory prototypes to real-world industrial applications. According to CEO Lei Jun, these robots are currently undergoing operational testing within the company's automotive assembly plants, marking a significant step toward full-scale industrial automation. At present, the humanoid models are deployed in specific assembly stations where they perform repetitive yet high-precision tasks, including transporting material boxes and loading self-tapping nuts. Unlike traditional stationary industrial arms, these robots utilize a "Vision-Language-Action" (VLA) AI model called Xiaomi-Robotics-0. This system integrates multimodal perception—combining visual data and sensor feedback—with reinforcement learning, allowing the machines to interpret complex instructions and adapt to environmental variables rather than merely following fixed, pre-programmed scripts.

The company's strategic roadmap involves a massive rollout of these units over the next five years. To ensure reliability, Xiaomi is monitoring key performance indicators (KPIs) such as:

- Mean Time Between Failures (MTBF): measuring the consistency of hardware and software.
- Single-Task Success Rate: evaluating precision in delicate assembly maneuvers.

Lei Jun noted that as the software is optimized through continuous experience, the robots are becoming increasingly compatible with large-scale production demands. This initiative reflects Xiaomi's broader evolution from a smartphone manufacturer into a diversified technology powerhouse spanning electric vehicles, smart home ecosystems, and advanced robotics. By integrating AI-driven humanoids, the company aims to reduce operational costs and gain a competitive edge in supply chain management.
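For reference, both KPIs named above are straightforward to compute from a shift log. The sketch below uses entirely invented numbers, not Xiaomi's data, and simplifies MTBF by assuming repair time is negligible (a real calculation subtracts downtime from operating time).

```python
from datetime import datetime, timedelta

# Invented example: one robot's shift log (not Xiaomi data)
shift_start = datetime(2026, 3, 1, 6, 0)
shift_end = datetime(2026, 3, 1, 18, 0)
n_failures = 3

# MTBF = total operating time / number of failures
# (assumes repairs are instantaneous; real MTBF subtracts downtime)
mtbf = (shift_end - shift_start) / n_failures

# Single-task success rate over the same shift
attempts, successes = 480, 469
success_rate = successes / attempts

print(mtbf, round(success_rate, 3))  # 4:00:00 0.977
```

Tracking both together matters: a robot can have a high per-task success rate while failing hard every few hours, and it is the MTBF that determines how much human supervision the line still needs.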
However, this shift also mirrors a broader economic trend in China, raising significant questions regarding the long-term impact on the human workforce and the potential displacement of factory laborers by autonomous systems. Filed in Robots >Transportation. Read more about Humanoid Robot, Robotics and Xiaomi.
Images (1):
|
|||||
| Ethereum Builders Should Focus On 'Sanctuary Tech' Instead Of Trying … | https://finance.yahoo.com/news/ethereum… | 1 | Mar 10, 2026 00:02 | active | |
Ethereum Builders Should Focus On 'Sanctuary Tech' Instead Of Trying 'To Be Apple Or Google,' Vitalik Buterin SaysURL: https://finance.yahoo.com/news/ethereum-builders-focus-sanctuary-tech-172721602.html Description: Ethereum should serve as a space where people can interact free from corporate and government control, co-founder Vitalik Buterin says. "Ethereum should conceptualize ourselves as being part of an ecosystem building ‘sanctuary technologies:' free open-source technologies that let people live,... Content:
Ethereum should serve as a space where people can interact free from corporate and government control, co-founder Vitalik Buterin says. "Ethereum should conceptualize ourselves as being part of an ecosystem building ‘sanctuary technologies:' free open-source technologies that let people live, work, talk to each other, manage risk and build wealth, and collaborate on shared goals, in a way that optimizes for robustness to outside pressures," Buterin said on X on March 3. He named Starlink, Signal and X community notes as examples of technologies developers should aspire to while discouraging attempts "to be Apple or Google." He added that developers should build full-stack ecosystems from the wallet to AI-powered interfaces and hardware. The goal is "de-totalization," Buterin said, referring to a scenario where humans are able to avoid a life controlled by a single entity. Buterin’s remarks come as he says people are growing increasingly concerned about control and surveillance by governments and corporations and how AI could interact with them. Buterin last month urged developers exploring intersections between Ethereum and AI to focus on use cases that foster human freedoms rather than simply pursuing artificial general intelligence. The developer has become increasingly vocal about Ethereum’s founding ethos in recent months. 
The timing of his rallying cries coincides with increased institutional adoption of the blockchain and excitement around its potential for tokenization.
Image: Shutterstock. This article Ethereum Builders Should Focus On 'Sanctuary Tech' Instead Of Trying 'To Be Apple Or Google,' Vitalik Buterin Says originally appeared on Benzinga.com. © 2026 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.
Images (1):
|
|||||
| NPC Theater AI: Robots Performing Quirky Stories by Georg Haller … | http://www.kicktraq.com/projects/292163… | 10 | Mar 09, 2026 16:01 | active | |
NPC Theater AI: Robots Performing Quirky Stories by Georg Haller :: KicktraqURL: http://www.kicktraq.com/projects/29216330/npc-theater-ai-robots-performing-quirky-stories/ Description: NPC Theater AI — robots stage plays based on your ideas. We want to understand human emotions through art – Join the Mission! Content: Images (10):
|
|||||
| $5900 Unitree R1 robot is surprisingly affordable | Fox News | https://www.foxnews.com/tech/5900-unitr… | 1 | Mar 09, 2026 00:01 | active | |
$5900 Unitree R1 robot is surprisingly affordable | Fox NewsURL: https://www.foxnews.com/tech/5900-unitree-r1-robot-surprisingly-affordable Description: Unitree just dropped its latest creation, the R1 humanoid robot, and people are talking. Content:
This material may not be published, broadcast, rewritten, or redistributed. ©2026 FOX News Network, LLC. All rights reserved. Quotes displayed in real-time or delayed by at least 15 minutes. Market data provided by Factset. Powered and implemented by FactSet Digital Solutions. Legal Statement. Mutual Fund and ETF data provided by LSEG.

Unitree just dropped its latest creation, the R1 humanoid robot, and people are talking. At only $5,900, it's the most affordable bipedal robot we've seen so far. The low price has taken the tech world by surprise and kicked off a wave of excitement. It's a big step toward making humanoid robots more affordable for people. Sign up for my FREE CyberGuy Report: get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you'll get instant access to my Ultimate Scam Survival Guide - free when you join my CYBERGUY.COM/NEWSLETTER.

R1 humanoid robot. (Unitree)

In Unitree's promo videos, the R1 shows off by running, spinning, shadowboxing, doing handstands, and even nailing cartwheels. People are starting to realize just how far these humanoid robots have come in terms of coordination and agility. What's especially wild is that it's not priced exclusively for big research labs; regular consumers might actually be able to get their hands on one.

R1 humanoid robot doing a handstand. (Unitree)

The robot can pull off impressive moves thanks to 26 joint degrees of freedom, giving it flexibility similar to a gymnast's. It uses onboard sensors, like binocular and wide-angle cameras, microphones, and speakers, to understand and navigate its surroundings. An 8-core CPU and GPU power tasks such as voice and image recognition. Its battery lasts about one hour per charge, which is solid for a robot this size.
Speaking of size, the R1 weighs around 55 pounds and stands about 4 feet tall. That makes it compact enough to fit easily into classrooms or labs. The standard model comes with fixed fists, so it can't actually grip objects. However, an advanced EDU version offers movable fingers and lets each arm carry up to 6.6 pounds.

R1 humanoid robot. (Unitree)

Unitree's older models include the G1 at $16,000 and the H1 at over $90,000. In comparison, the R1 feels like a total game changer. Its lower price gives researchers, small developers, and educators a new opportunity to explore humanoid robotics. Of course, some people are a little skeptical. A few have raised questions about whether the promo footage uses CGI or overly scripted setups. And let's be honest, anyone who's seen robots go off-script knows how unpredictable things can get. That's why solid software and strong safety systems are still so important, especially at this price point.

R1 humanoid robot running. (Unitree)

Administrators and researchers around the world are closely watching Unitree's move. China's strength in manufacturing and low-cost hardware gives it a clear advantage, especially as it goes head-to-head with U.S. players like Tesla, Figure AI, and Agility Robotics. Everyone's racing to make humanoids affordable and practical. Some researchers are already working the R1 into academic projects, and researchers expect machine learning systems and training tools from older models to work with the R1 as well. And in the medical world, some trials are exploring how humanoid robots can assist in remote care, though they still need improvements in strength and sensitivity.

Two R1 humanoid robots. (Unitree)

If you've ever dreamed of working with a humanoid robot but thought it was out of reach, the R1 changes that. At $5,900, it's affordable enough for educators, researchers, and developers on a budget.
It can walk, spin, and even cartwheel, giving you a real platform to test AI and robotics projects. The standard version doesn't grip, but the EDU model adds movable fingers and more power. With its compact size and one-hour battery life, the R1 fits easily into classrooms, labs, or maker spaces. It's not perfect, but it's a big step toward making humanoid robotics truly accessible.

The Unitree R1 is catching attention for all the right reasons. It's fast, flexible, and surprisingly affordable: just $5,900 for a bipedal humanoid that can run, cartwheel, and react to its surroundings. That's huge for schools, researchers, and developers who've never had access to this kind of tech at this kind of price. But while it looks impressive on video, some folks are wondering how it performs in real life. Is it a reliable research tool or just a flashy demo machine? One thing's clear: the R1 could mark a turning point in the push to bring humanoid robots into everyday life. Could robots like this really end up in classrooms, clinics, or even homes someday? If humanoid robots become affordable, how comfortable would you be sharing your space with one? Let us know by writing to us at Cyberguy.com/Contact.

Copyright 2025 CyberGuy.com. All rights reserved. Kurt "CyberGuy" Knutsson is an award-winning tech journalist who has a deep love of technology, gear and gadgets that make life better, with his contributions for Fox News & FOX Business beginning mornings on "FOX & Friends." Got a tech question? Get Kurt's free CyberGuy Newsletter, share your voice, a story idea or comment at CyberGuy.com.
©2026 FOX News Network, LLC. All rights reserved.
Images (1):
|
|||||
| Unitree Technology and JD Open First Offline Store in Beijing … | https://pandaily.com/unitree-technology… | 0 | Mar 09, 2026 00:01 | active | |
Unitree Technology and JD Open First Offline Store in Beijing - PandailyURL: https://pandaily.com/unitree-technology-and-jd-open-first-offline-store-in-beijing Description: Unitree Technology takes its robotics experience offline with a first store in Beijing, letting customers try, buy, and interact with its humanoid and quadruped robots. Content: |
|||||
| WATCH humanoid robot perform kung fu — RT Entertainment | https://www.rt.com/pop-culture/613387-h… | 1 | Mar 09, 2026 00:01 | active | |
WATCH humanoid robot perform kung fu — RT EntertainmentURL: https://www.rt.com/pop-culture/613387-humanoid-robot-kung-fu-video/ Description: A Chinese robotics firm has revealed an upgraded android able to “learn and perform virtually any movement” Content:
Chinese robotics company Unitree has shared a video featuring its humanoid robot doing kung fu moves. The bot’s balance capabilities and range of movement have been upgraded, the firm said. Humanoid robots are made to resemble and act like humans, imitating facial expressions, movements, and speech. The video teaser published by the Hangzhou-based company earlier this week shows the human-like robot walking down the street while performing various martial arts strikes and kicking techniques. Unitree stated that the latest algorithm upgrade allows its G1 humanoid robot to “learn and perform virtually any movement.” Kungfu BOT: Unitree G1🥳We have continued to upgrade the Unitree G1's algorithm, enabling it to learn and perform virtually any movement. What other moves would you like to see? Do share with us in the comments. (Please keep a safe distance from the robot.)#Unitree#Kungfu… pic.twitter.com/1vDZHyRjqZ As per the company’s website, the $16,000 G1 humanoid robot, which debuted in August 2023, features powered joints on its arms, legs, and torso that allow 23 degrees of freedom. Earlier this month, Unitree unveiled video footage of its humanoid G1 and H1 androids showing off new moves. G1, a more affordable version of the robot, was shown running, navigating uneven terrain, and walking in a more natural way. The taller H1 model performed a preset routine alongside human dancers at the Spring Festival Gala event marking the Chinese New Year. A number of companies – including Japan’s Honda, Hyundai Motor’s Boston Dynamics, and Agility Robotics – have been betting on humanoid robots to meet potential labor shortages in certain industries by performing repetitive tasks that may be seen as dangerous or tedious. Tesla, Meta, and OpenAI have recently joined the trend. Earlier this month, Bloomberg cited sources as stating that Meta Platforms is planning to invest in futuristic robots that can act like humans and assist with physical tasks. 
The company is reportedly forming a new team within its Reality Labs hardware division to conduct the work. Last December, media reports emerged that OpenAI, the creator of ChatGPT, was seeking to develop its own android. Last year, electric-vehicle producer Tesla announced plans to introduce humanoid robots for internal purposes starting in 2025, with broader production planned for the following year. Valued at $1.8 billion in 2023, the global humanoid robot market is projected to soar to more than $13 billion over the next five years, according to research firm MarketsandMarkets. © Autonomous Nonprofit Organization “TV-Novosti”, 2005–2026. All rights reserved.
Images (1):
|
|||||
| UPSC GS-3 Mains Answer Writing Practice (16 Jan 2026): Robotics | https://www.insightsonindia.com/2026/01… | 1 | Mar 08, 2026 16:00 | active | |
UPSC GS-3 Mains Answer Writing Practice (16 Jan 2026): RoboticsDescription: UPSC GS-3 Mains Answer Writing Practice (16 Jan 2026): Analyse how advancements in robotics are transforming industrial labour processes. Examine their implications for skill formation. Assess policy challenges arising from this shift. Boost your preparation with structured practice. Content:
Topic: Science & Technology. Q6. Analyse how advancements in robotics are transforming industrial labour processes. Examine their implications for skill formation. Assess policy challenges arising from this shift. (15 M) Difficulty Level: Medium. Reference: InsightsIAS. Why the question: The accelerating adoption of industrial robotics is reshaping production systems while raising concerns about employment transitions, skill gaps and regulatory readiness. Key Demand of the question: The question requires analysing how robotics transforms industrial labour processes, examining its implications for skill formation, and assessing the policy challenges arising from this shift in an integrated manner. Structure of the Answer: Introduction: Briefly contextualise robotics within Industry 4.0 and its growing role in redefining industrial labour and production efficiency. Body. Conclusion: Conclude by underscoring the need for anticipatory skilling strategies and adaptive policy frameworks to align robotics-led industrial growth with inclusive development. Copyright © Insights Active Learning
Images (1):
|
|||||
| Tether Backs $81 Million Humanoid Robotics Project in Italy | https://www.pymnts.com/news/investment-… | 1 | Mar 08, 2026 16:00 | active | |
Tether Backs $81 Million Humanoid Robotics Project in ItalyContent:
Italian robotics firm Generative Bionics has raised $81 million in new funding. The new funding round was one of the largest ever in Europe in the “humanoid robotics deep tech space,” Generative Bionics said in a Monday (Dec. 8) announcement. “Our mission is to build a future where intelligent humanoid robots collaborate daily with people, amplifying human cognitive and physical potential,” said Daniele Pucci, the company’s co-founder and CEO. “Our Physical AI enables us to design and manufacture human-inspired robots that create tangible value across multiple applications.” He pointed to analyses projecting that the humanoid robotics market will exceed 200 billion euros ($232 billion) by 2035 and could top $5 trillion by 2050. “This is an epochal transformation: our goal is to position Generative Bionics as a global leader in physical AI for human-centric humanoid ecosystems,” Pucci said. The round was led by CDP Venture Capital, with participation from investors including Tether. The company says it will use the funding to boost product development, train physical artificial intelligence (AI) systems (what it calls the fusion of robotics and AI), and construct its first production plant. 
“The company is also finalizing its first industrial deployment contracts, which will be announced in early 2026, marking the introduction of humanoids into real production environments,” the announcement said. As PYMNTS wrote last month, physical AI has emerged as the next phase of robotics as new developments in sensing, perception and large AI models provide machines with capabilities that had never been supported by traditional automation. “Earlier robots followed fixed commands and worked only in predictable environments, struggling with the unpredictability found in everyday operations such as shifting layouts, varying item shapes, mixed lighting, and human movement,” that report said. “That is beginning to change as research groups show how simulation, digital twins and multimodal learning pipelines enable robots to learn adaptive behaviors and carry those behaviors into real facilities with minimal retraining.” An example of this technology in the day-to-day world is Amazon’s launch of its Vulcan robot earlier this year. Vulcan employs both vision and touch to pick and stow items in fulfillment centers, letting it handle flexible fabric storage pods and unpredictable product shapes. Amazon says the robot’s tactile systems allow it to respond to pressure, contact and motion in real time to carry out tasks that typically call for human dexterity.
Images (1):
|
|||||
| Robots con forma de conejo: la nueva estrategia para controlar … | https://www.elimparcial.com/locurioso/2… | 0 | Mar 08, 2026 08:00 | active | |
Rabbit-shaped robots: the new strategy for controlling pythons in FloridaDescription: Authorities in South Florida have begun using solar-powered robotic rabbits to detect and remove Burmese pythons in the Everglades. Content: |
|||||
| Humanoid robotics making big strides | http://www.ecns.cn/china/2026-03-03/det… | 1 | Mar 08, 2026 00:02 | active | |
Humanoid robotics making big stridesURL: http://www.ecns.cn/china/2026-03-03/detail-ihfaizcc2495140.shtml Content:
China has unveiled a framework to standardize its rapidly expanding humanoid robotics industry, as policymakers, companies and researchers seek to address growing technical fragmentation in the sector. The 2026 edition of the humanoid robots and embodied intelligence standard system was released on Saturday at the annual meeting of the humanoid robots and embodied intelligence standardization technical committee in Beijing's E-Town. Organizers described it as the country's first top-level design covering the entire industry chain and life-cycle of humanoid robotics and embodied intelligence. The framework comes at a pivotal moment. According to figures disclosed by the Ministry of Industry and Information Technology in January, the country has more than 140 humanoid robot manufacturers and over 330 product models. Executives widely refer to 2026 as a transitional year for mass production in the sector. Public interest has also accelerated. E-commerce giant JD reported a surge in robotics-related sales following humanoid robots' recent high-profile Spring Festival performances, underscoring rising consumer visibility, said Zheng Xiaodan, head of embodied intelligent robotics business at JD. Yet scaling remains complex. In roundtable discussions, company executives pointed to manufacturing consistency as a key challenge. "Humanoid robots involve long supply chains stretching from supply networks and components to complete systems, operating systems and algorithms," said Chen Jianyu, founder of Robotera. Gao Jiyang, founder of Galaxea, added that even subtle mechanical differences between units can be amplified when integrated with large foundation models, requiring systematic calibration to align sensors, structures and software within a unified framework. Hardware maturity remains uneven. 
Participants noted that key components such as high-torque joints and dexterous hands have yet to achieve stable economies of scale, keeping costs high and limiting predictable expansion. Beyond hardware, data was repeatedly described as a structural bottleneck. "We are still lacking high-quality embodied data and relevant standards," said Wang Zhongyuan, director of the Beijing Academy of Artificial Intelligence, adding that inconsistent data formats and labeling methods across firms have created isolated silos, forcing developers to duplicate work. The newly issued framework seeks to address these challenges and is structured across six areas: foundational standards; brain-inspired computing and intelligent processing; body structures and components; complete systems; applications; and safety and ethics. It reflects coordinated input from government agencies, research institutes, enterprises and universities. "To enable robots to truly work in real-world scenarios, industry-wide standards are indispensable," said Wang Xingxing, founder of Unitree Robotics and a committee vice-chair. He identified unified task definitions, evaluation systems and safety standards as immediate priorities. Globally, no dominant humanoid robotics standards have yet emerged. Xu Jincheng, founder and CEO of tactile-sensing firm PaXini Tech, said China's achievements in embodied intelligence have drawn global attention, and that continued technological progress may position the country to play a significant role in shaping future international standards.
Images (1):
|
|||||
| Mignons, bien finis, programmables, on a essayé les robots Pixar … | https://www.bfmtv.com/tech/objets-conne… | 1 | Mar 07, 2026 00:03 | active | |
Cute, well finished, programmable: we tried Chinese maker Robosen's Pixar robots. Do they have what it takes to light up your Christmas?Description: For the holiday season, Chinese robotics brand Robosen is launching a collection of connected mini robots based on Pixar characters. Tech&Co shares its verdict. Content:
While children of the '90s and 2000s no doubt remember the Furby, a small robotic animal with many expressions, those of the 2020s may take a keen interest in products from the Chinese company Robosen. For the end of 2025, a Pixar collection awaits the youngest (and fans too), at a price far from as exorbitant as the smartest robots already on the market. Tech&Co got hands-on with these mini robots, which have the particularity of being customizable as needed, though they are not as advanced as far pricier models. Concretely, each character is sold separately and can be placed on a base that powers it. They cannot move around on a desk, unlike some small companion robots such as Emo or Eilik. They do, however, offer a set of movements and come with a series of sounds (in French, with the official voices from the Pixar films) that can be played on demand. To do so, just press the "Play" button on the base holding the figurine. Available characters include Woody, Buzz Lightyear, Lotso, the little alien, plus Rex and Jessie from Toy Story, along with Wall-E and Eve from the eponymous film. In the moment, it's fun. The small movements work well with the various default sounds. There is also real attention to detail. Lotso, for example, has the strawberry scent the character is supposed to give off in the film (and in the plush toys sold in stores). Robosen didn't just put a few cogs and gears in its robots; they are at every level: the arms, the eyebrows, even the hats. There is a clear effort at realism compared with the ancient Furby that delighted the children who are adults today. 
But once you've gone through the voices, what next? That's where the Robosen Studio website comes in. It requires an account and only works on Chromium browsers such as Chrome, Edge, and most current browsers except Safari and Firefox. You simply connect the base (with a character on it) via a USB-C cable, after which you can import an audio file. A song, a custom message, or a sound found online: anything goes. The robot does not, however, let you edit or even trim the file you feed it, so plan on a bit of editing in Audacity, for example. And to preserve the magic for your child, you'll want to use files featuring the voices of the characters you own. Once the file is uploaded, the browser-based software lets you sync the sound's rhythm to a series of preprogrammed movements. But it is also possible to customize the sequences. The interface is far from complicated to understand, yet it isn't exactly easy either. An adult will have to handle this part of the experience: it's hard to imagine a child programming the character alone. No need to be a Python expert, though. A child comfortable with a web interface who can read should quickly manage alone once you've walked them through it. While the interface isn't irritating, it still demands complex manipulations to get a convincing result if you decide to manage the movements from A to Z; that can mean up to thirty or so adjustments across various sliders, from movement durations to timers tied to the rhythm of the chosen sound. One can still imagine a parent doing it with their child, something to entertain the crowd. 
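The slider-by-slider choreography workflow described above can be pictured as a list of timed keyframes synced to an audio clip. The sketch below is purely illustrative; the data model, field names, and validation rule are assumptions, not Robosen Studio's actual format.

```python
# Illustrative data model (NOT Robosen's actual format): a routine is a list
# of keyframes, each pinning a joint pose to a moment in the audio clip.
from dataclasses import dataclass

@dataclass
class Keyframe:
    t: float      # seconds into the audio clip
    joint: str    # e.g. "arms", "eyebrows", "hat"
    pose: str     # target position for that joint

def routine_is_valid(routine: list[Keyframe], audio_len: float) -> bool:
    """Keyframes must be time-ordered and fall inside the clip."""
    times = [k.t for k in routine]
    return times == sorted(times) and all(0.0 <= t <= audio_len for t in times)

routine = [
    Keyframe(0.0, "arms", "raised"),
    Keyframe(1.5, "eyebrows", "up"),
    Keyframe(3.0, "arms", "rest"),
]
print(routine_is_valid(routine, audio_len=10.0))  # True
```

This captures the editing constraint mentioned above: since the imported audio cannot be trimmed inside the studio, every movement cue has to fit within the clip's fixed duration.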
Above all, you can also draw on content shared by the community when inspiration runs dry. Robosen's mini robots should therefore be seen as a first step toward more advanced robotics. It isn't about writing code, but the interface is advanced enough to let you customize movements as you please, with the limitations you would expect given these mini robots' relatively low price. That said, we doubt their medium-term appeal; it would not be surprising if they ultimately became display figurines more than robots you activate regularly. Robosen makes no secret of it: the company is targeting fans of the licenses it sells first and foremost, and given the price of a nice figurine, these mini robots aren't that expensive. Are they good value? Absolutely not. But Pixar-fan parents can always find an excuse and tell themselves they are introducing their children to robotics. That's partly untrue, but we won't judge. The brand already sells far more convincing, but also far more expensive, models. That's the case with its Transformers products, which can transform automatically but can cost up to 1,799 euros. The Buzz Lightyear with 23 smart joints, and thus an inevitably more premium feel, is offered at 599 euros. Beyond an appeal that won't be obvious to everyone, these mini robots also suffer from a lack of interaction with their environment. Since Robosen doesn't (yet?) sell extra bases beyond the one in the starter pack, it won't even be possible to make the characters interact with one another. 
They aren't programmed for it anyway: apart from Wall-E and Eve, which have "synchronized effects," no other character really seems made to converse with its companions, and there is no microphone for them to do so in the future. On that point, the Furby did better, since it offered such an option years ago. Nor will the human owner be able to "chat" with the robot. More collector's items or surprising gifts, Robosen's mini robots have their entry price going for them above all. With prices starting at 90 euros (barely more than a Furby), you can't say the company hasn't made an effort to appeal to the geek or the child inside us. © Copyright 2006-2026 BFMTV.com. All rights reserved. Site published by NextInteractive
Images (1):
|
|||||
| Are We Going Too Far By Allowing Generative AI To … | https://www.forbes.com/sites/lanceeliot… | 0 | Mar 07, 2026 00:03 | active | |
Are We Going Too Far By Allowing Generative AI To Control Robots, Worriedly Asks AI Ethics And AI LawDescription: Generative AI such as ChatGPT is increasingly being used to control robots. This raises concern since the AI might produce faulty instructions and endanger h... Content: |
|||||
| Robots aspirateurs, automatisations et énergie : ce que change la … | https://www.maison-et-domotique.com/168… | 1 | Mar 07, 2026 00:03 | active | |
Robot vacuums, automations, and energy: what the Home Assistant 2026.3 update changes - Maison et DomotiqueDescription: Home Assistant 2026.3 brings room-by-room cleaning for robot vacuums, an Android voice assistant with a local wake word, more reliable automations, and numerous interface and energy improvements. Content:
The Home Assistant 2026.3 update is available, continuing the project's monthly cadence. This release isn't built around one big headline feature but around a multitude of concrete improvements from the community. The team deliberately focused on merging pending open-source contributions to enrich the ecosystem. The result is impressive: new capabilities for robot vacuums, improved automations, an evolved energy dashboard, Android wake word integration, and a long list of additional integrations. Let's look at the most notable new features. One of the most practical additions concerns robot vacuums. Until now, starting a cleanup of a specific room wasn't always easy from Home Assistant, even though manufacturers' apps offered the feature. Version 2026.3 introduces an official system for mapping the rooms on the robot vacuum's map to the areas defined in Home Assistant. Concretely, if your robot has identified the rooms in your home (living room, kitchen, office...), you can now map them directly to Home Assistant areas. Once that mapping is created, new actions appear in automations, and the robot will head straight to the area in question. This also opens the door to very practical scenarios: one automation can automatically clean the kitchen after dinner; another can clean the office only on telework days; yet another can trigger a targeted cleanup when presence detection shows the whole family is out. For now, the feature is supported by several popular integrations such as Roborock, Ecovacs, and Matter, with others expected to follow quickly. 
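As a rough illustration of how a script might trigger such an area-targeted cleanup through Home Assistant's generic REST service-call endpoint, the sketch below builds the request payload. The service name `vacuum/clean_area` and the `area_id` field are assumptions for illustration; the article does not give the exact identifiers.

```python
import json

# Hypothetical sketch: build the REST service call a script might send to
# Home Assistant to start an area-targeted vacuum run. The endpoint path
# follows HA's generic /api/services/<domain>/<service> pattern, but the
# "clean_area" service name and "area_id" field are assumed, not confirmed.
def build_clean_area_call(entity_id: str, area_id: str) -> dict:
    return {
        "path": "/api/services/vacuum/clean_area",  # assumed service name
        "body": {"entity_id": entity_id, "area_id": area_id},
    }

call = build_clean_area_call("vacuum.living_room_robot", "kitchen")
print(json.dumps(call, indent=2))
```

In practice this payload would be POSTed to the Home Assistant instance with a bearer token; the point here is only the shape of an area-targeted call.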
Automations gain a very useful new tool: Continue on error. In Home Assistant, when an action fails inside an automation, execution usually stops immediately, which can be a problem in certain critical scenarios. Take a concrete example: you have a security automation that must carry out several actions in sequence. If turning on a lamp fails for any reason (bulb disconnected, device unavailable), the automation could stop before the notification was sent. With this new option, you can now tell Home Assistant to continue executing even if an action fails. The feature has long existed in YAML, but it now becomes accessible directly in the automations' graphical interface. The result: more reliable scenarios that are easier to build. Another major addition: the Home Assistant Android app introduces a local wake word. Concretely, your smartphone can now listen for a keyword (such as "Hey Jarvis") and trigger the voice assistant without you touching the screen. Detection happens directly on the phone, without going through the cloud, relying on the open-source Micro Wake Word engine developed for the project. Once enabled, the experience is very close to classic voice assistants. You can say, for example, "Hey Jarvis, turn on the living room light"; the phone captures the command, sends it to Home Assistant, and executes the action. The feature has several interesting advantages. It can turn an old Android smartphone or tablet into a wall-mounted voice assistant, perfect for creating a control point in a room without buying extra hardware. It also works even without an internet connection, since wake word detection is local. The only downside for now is battery consumption. 
The system can use around 15 percent of the battery per day, because Android does not yet allow apps to use the dedicated voice-detection chips. Note: this feature is unfortunately not possible on iOS, since Apple does not grant third-party apps this kind of access. The energy dashboard keeps evolving as well, with several improvements making the interface clearer and more practical. The first concerns data organization: the main tab is now named Electricity instead of "Energy," to avoid confusion with other sources such as gas or water. The charts also gain readability: hovering over a bar chart now shows the exact date in the tooltip, making it immediately clear which day it corresponds to. Another interesting addition: the badges at the top of the dashboard now display instantaneous values such as current power or water flow, giving an at-a-glance view of real-time consumption. For those who use Home Assistant to optimize their energy bill (solar, load shedding, consumption tracking...), these small details make analysis much more comfortable. The main dashboard also receives a few improvements: the sidebar can now display additional information directly in the system summary. Another practical detail: open windows can now appear in the security section, a useful reminder before leaving the house. Home Assistant dashboards also introduce an interesting novelty: the customizable footer. In addition to the header, it is now possible to add a footer containing any card. This zone stays visible at the bottom of the screen and can serve as a permanent shortcut to certain important actions. 
A simple improvement, but one that opens many possibilities for organizing your interfaces. Under the hood, Home Assistant moves to Python 3.14. This kind of change may seem trivial, but it has a direct impact on the system's overall performance. New Python versions regularly bring significant optimizations, and since Home Assistant is built entirely on the language, every improvement automatically benefits the whole system. The result: faster startup, more responsive scripts, and smoother use. As usual, the update also brings a long list of new integrations. This release clearly highlights the strength of the ecosystem: the open-source community. Many developers contributed to enriching the platform, which explains the impressive size of the list of improvements. Version 2026.3 doesn't try to impress with a single spectacular feature; instead it improves dozens of small details that make Home Assistant more pleasant day to day. Between room-by-room control of robot vacuums, the Android voice assistant, more robust automations, and the energy dashboard improvements, this release delivers very concrete tools for automating the home. And as often with Home Assistant, this is only a preview of what's coming: several major developments around automations and voice are already in preparation for upcoming updates. We can't wait! For those who want to try the new features, the Home Assistant test environment has been updated as well. Tags: home assistant 
Images (1):
|
|||||
| Programmable Robots Market Size to Reach US$ 10.2 Billion | https://www.globenewswire.com/news-rele… | 1 | Mar 07, 2026 00:03 | active | |
Programmable Robots Market Size to Reach US$ 10.2 BillionDescription: The Programmable Robots Market is experiencing robust growth fueled by increasing adoption in education, industry, and continuous technological... Content:
January 24, 2024 05:39 ET | Source: Persistence Market Research New York, Jan. 24, 2024 (GLOBE NEWSWIRE) -- Programmable robots are witnessing increasing adoption across educational institutions, commercial enterprises, and industrial settings. The global market for programmable robots was estimated at US$ 3.2 billion in 2023 and is projected to grow at a compound annual growth rate (CAGR) of 12.1%, with the industry expected to achieve sales of US$ 10.2 billion by 2033. Research has unveiled the potential of using biodegradable components to create artificial muscles, paving the way for compostable robots in the future. Soft robotics, which focuses on designing robots from flexible and pliable materials like elastomers, hydrogels, and textiles, is a growing field. The increasing adoption of programmable robots across educational institutions, commercial sectors, and industrial settings is driving demand for these robots, and continuous technological advancements and innovations in these domains are propelling market growth. The rise of the global programmable robots market is attributed to the mounting adoption of automation in manufacturing facilities, rising awareness of the benefits of artificial intelligence (AI), and the advent of deep learning systems. Demand for these robots is likely to increase owing to their ability to handle any task they are programmed or instructed to perform via programming languages such as C/C++, Java, and Python. These robots can efficiently manage tasks such as packing and sorting without human intervention or error, helping companies maximize their profits. Such adoption of robotics solutions in the e-commerce industry fuels programmable robots market growth. 
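The release's growth figures can be sanity-checked with a one-line compound-growth calculation, assuming the stated 2023 base value and a ten-year horizon to 2033:

```python
# Sanity check of the stated figures: a US$3.2B base (2023) compounding at
# a 12.1% CAGR over ten years lands close to the projected US$10.2B (2033).
base_usd_bn = 3.2
cagr = 0.121
years = 10

projected = base_usd_bn * (1 + cagr) ** years
print(round(projected, 2))  # ~10.03, in line with the stated US$10.2 billion
```

The small gap between the computed ~10.0 and the stated 10.2 suggests the press release rounded the CAGR, the base value, or both.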
With the increasing adoption of programmable robots in educational institutions, commercial sectors, and industrial environments, the demand for such robots is poised for growth, driven by ongoing technological advancements and innovations. Recent research has showcased the potential of biodegradable components in constructing artificial muscles, paving the way for the development of compostable robots in the future. The emerging field of soft robotics is dedicated to crafting robots from materials like elastomers, hydrogels, and textiles, imparting flexibility and softness to their designs. In response to these challenges, a new generation of materials and manufacturing techniques is being explored, including shape-memory polymers and programmable textiles capable of changing shape. For example, researchers led by Ellen Rumley, a graduate student at CU Boulder, are actively working on biodegradable artificial muscles. These muscles enable robotic arms and legs to mimic lifelike movements and eventually biodegrade into the soil over time. This breakthrough addresses the issues of technological waste and unsustainable disposal practices, marking a significant advancement in the field of robotics. Seeking deeper insights into competitor analysis? Request a sample of the report: https://www.persistencemarketresearch.com/samples/33467 Programmable Robots Market Dynamics: The rapid advancement of technology, driven by developments in robotics, artificial intelligence, and deep learning systems, is a driving force behind the expanding programmable robots market. Continuous research and development efforts will further bolster this market in the coming years. Programmable robots are gaining popularity in research and education, offering a versatile platform for understanding and developing robotic technology. Their adaptability across a wide range of tasks positions them for increasing prominence. 
From artificial intelligence-enabled robots to drones, programmable robots are finding applications in industries like housecleaning, administration, and security, including bomb detection and disposal. These robots offer efficiency and precision, particularly in manufacturing, where they excel at repetitive, error-free tasks. Their adaptability and reconfigurability make them valuable assets in various industrial processes, enhancing flexibility and productivity. Growth is further supported by increasing automation, the growing popularity of robots for educational purposes, and the rise of Internet of Things technology; however, the high cost of robots is expected to hinder market growth. Programmable robots are a powerful tool for making Science, Technology, Engineering, and Mathematics (STEM) education more interactive and fun for kids and students of different age groups. These robots assist children in learning all aspects of electronics, coding, engineering, and computer science, and help them develop the basic and crucial cognitive skills of mathematical and computational thinking at a very early age. These skills build the mental capability to solve problems of various kinds through an orderly series of actions. LEGO Mindstorms, Sphero, Dash and Dot, Ozobot, and VEX Robotics are some of the educational programmable robots that can be controlled via smartphone or tablet and are equipped with sensors and cameras. These robots help students develop skills in analytical, critical, practical, and creative thinking, which drives their adoption in research and education and further fosters programmable robots market growth. The rising awareness of practical learning is one of the major factors anticipated to boost programmable robots market growth exponentially. In addition, the advancement of programmable robots has transformed the way children are educated. 
Children and students of all age groups, from kindergarten to graduate school, benefit from learning with programmable robots. Governments, schools, and universities are making efforts to integrate robotics learning into their education systems to develop cognitive skills in children and students. A large number of educational robotics (ER) initiatives have been implemented to teach and motivate learners of different age groups. For example, Fischertechnik's learning environment provides teaching materials and kits for STEM, technology, and programming; it is used in schools and training sessions for learning and understanding various Industry 4.0 applications. Such applications of programmable robots, and initiatives to promote robotics, fuel the adoption and development of robots, which further drives the programmable robots market. In a nutshell, the Persistence Market Research report is a must-read for start-ups, industry players, investors, researchers, consultants, business strategists, and all those looking to understand this industry. Get a glance at the report at https://www.persistencemarketresearch.com/market-research/programmable-robots-market.asp This growth is fueled by continuous technological advancements and innovations in the field, making programmable robots more versatile and capable than ever before. A noteworthy development in this market is the exploration of biodegradable components for creating artificial muscles, opening up possibilities for compostable robots in the future. Additionally, the emerging field of soft robotics is revolutionizing robot design by using flexible and pliable materials such as elastomers, hydrogels, and textiles. To meet the challenges of creating these advanced robots, researchers are exploring new materials like shape-memory polymers and programmable textiles that can change shape as needed. 
This market is not only pushing the boundaries of technology but also addressing environmental concerns, as biodegradable components pave the way for robots that can naturally decompose, reducing technological waste and contributing to a more sustainable future. The Programmable Robots Market is a testament to the boundless potential of robotics in various industries, from education to healthcare and beyond. Competitive Landscape: The programmable robot market sees numerous technology companies vying for dominance. Success in this competitive arena necessitates offering a diverse product range, effective marketing strategies, and a high level of technical expertise. Collaborative efforts between companies to leverage their strengths often result in the development of innovative products. Research and development are pivotal in the creation of new products and technologies that confer a competitive edge. Companies continually seek ways to enhance existing products or introduce novel ones to maintain their competitive stance. In April 2023, SynSense, a pioneer in neuromorphic chips, announced plans to secure over 200 million RMB ($29 million) in B-round financing. Zurich-based SynSense had secured $10 million in funding in March through Ausvic Capital, followed in April by undisclosed investments from Chinese venture capital firm RunWoo and multimodal biometrics firm Maxvision. In May 2023, Sphero unveiled the RVR+, its latest and most advanced robot, designed for middle and high schools as well as makerspaces. The Sphero RVR+ Multi-Pack, designed for classrooms, equips teachers with six RVR+ robots and an Educator Guide featuring step-by-step activities. 
This multi-pack option enables educators to seamlessly integrate computer science concepts into their middle and high school classrooms and makerspaces, requiring no prior programming experience. Programmable Robots Market Outlook by Category: By Component; By Application; By Region. About Persistence Market Research: Business intelligence is the foundation of every business model employed by Persistence Market Research. Multi-dimensional sources are put to work, including big data, customer experience analytics, and real-time data collection. Working on the "micros" thus helps companies overcome their "macro" business challenges. Persistence Market Research is always ahead of its time; in other words, it tables market solutions by stepping into its clients' shoes well before they themselves have a sneak peek into the market. This proactive approach helps clients obtain techno-commercial insights beforehand, so that their subsequent course of action can be simplified. Contact Us: Persistence Market Research, Teerth Technospace, Unit B-704, Survey Number 103, Baner, Mumbai Bangalore Highway, Pune 411045, India. Email: sales@persistencemarketresearch.com Web: https://www.persistencemarketresearch.com
Images (1):
|
|||||
| Home Assistant 2026.3 brings voice control on Android and … | https://www.redeszone.net/noticias/lanz… | 1 | Mar 07, 2026 00:03 | active | |
Home Assistant 2026.3 Brings Voice Control on Android and a Critical Python Change. Description: Control your vacuum by zone, monitor consumption in real time, and use voice on Android. Discover what's new in the latest version of Home Assistant. Content:
This month's release of Home Assistant is here, and at RedesZone we cover everything new in this version. Unlike previous releases, which brought a large number of changes, the highlight this time is the many new integrations that make the system more versatile. If you have robot vacuums at home, you can now control them in a much more advanced way through this home-automation platform. If you want to discover everything new in this release, along with a critical change made to Python, we explain all the details below. 
Once again this month, we have a new version of the popular Home Assistant home-automation system. As usual, it adds new integrations, improves existing ones (in terms of source-code quality), and also enhances the energy dashboard, robot vacuum capabilities, and more. That said, pay close attention to critical Python changes that could affect applications installed through HACS or any manual scripts you have written in that language. This release does not include too many novelties and focuses instead on improving what was already there, although the robot vacuum improvements deserve a mention: there are new actions available when adding robot vacuums to Home Assistant. Configuration is straightforward: go to "Devices & integrations", open your robot vacuum, and look for the "Mapping" section. There you will see a list of the rooms detected by the robot, which you then assign to the different areas of your home. Right now this area feature is available for robot vacuums that use Matter, and it is also compatible with the Ecovacs and Roborock brands. No other brand supports it yet, although manufacturers such as Dreame or Xiaomi may start updating their devices to be compatible with this new function. As for the energy dashboard improvements, the "Now" tab gains tile cards showing power consumption, gas flow, and water flow in real time, all at a glance. Of course, you need some kind of device or sensor that measures these values and can be integrated into Home Assistant. In addition, water now also gets a chart showing where that water is "spent", as we already had for the electricity panel. 
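Once the robot's detected rooms are mapped to Home Assistant areas, automations can target a single area instead of the whole home. A minimal hedged sketch in Home Assistant YAML; the entity-free `area_id` target, alias, and trigger time are illustrative placeholders, not taken from the article:

```yaml
# Hypothetical automation: start cleaning only the area named "kitchen".
# Assumes the vacuum's rooms have already been assigned to areas via
# the "Mapping" section described above.
automation:
  - alias: "Vacuum the kitchen every afternoon"
    triggers:
      - trigger: time
        at: "15:00:00"
    actions:
      - action: vacuum.start
        target:
          area_id: kitchen
```

The same area target can be used from scripts or, as the article notes, from a voice assistant once area-aware cleaning is supported by the robot's integration.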
Another interesting change: to reduce ambiguity, the second tab of the energy dashboard is no longer called "Energy" but "Electricity", since the dashboard also covers gas and water. Another important improvement to automations in this version is a new "Continue on error" option in the "Actions" (or "Then do") section. This already existed in Home Assistant, but only via YAML; it was not exposed in the graphical interface. The option lives in each action's three-dot menu, under "Continue on error", just below "disable". It lets the remaining actions keep running if one action fails, instead of the whole automation being interrupted. Another interesting novelty is wake-word detection on Android smartphones, although for now it is an experimental feature. Thanks to it, we can use Home Assistant Voice directly from our smartphone, as if we carried a satellite with us at all times. Activation uses the three usual wake words (Okay Nabu, Hey Jarvis, and Hey Mycroft); everything runs locally on the device, no audio is ever sent to the cloud, and no server-side processing is required. This feature has a significant impact on the phone's battery: wake-word detection requires continuous microphone access and increases CPU usage, so battery life will drop considerably. One way to reduce this drain is to create automations that enable it only in certain cases, for example when the phone is charging, or when we arrive home, where a charger is close at hand. 
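The new "Continue on error" toggle in the UI corresponds to the `continue_on_error` flag that Home Assistant automations already accepted in YAML. A minimal sketch; the entity IDs are placeholders:

```yaml
# If turning on the hallway light fails (e.g. the bulb is unavailable),
# the shutter action below still runs instead of the automation aborting.
actions:
  - action: light.turn_on
    target:
      entity_id: light.hallway
    continue_on_error: true
  - action: cover.close_cover
    target:
      entity_id: cover.bedroom_shutter
```

The flag is set per action, so you can let non-critical steps fail silently while still aborting on steps that must succeed.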
This battery drain could be reduced if Google opened up its API for hardware-based wake-word detection, but for now that is not possible. Other changes worth mentioning: the Matter, Z-Wave, Zigbee, and Bluetooth configuration pages have been reorganized for greater clarity. We can now also delete items from the to-do list and choose "Year" in the card editor; the security dashboard now shows window coverings, and footer cards are available in the "Sections" view, just like in the header. As you can see, these are significant improvements that push Home Assistant further toward becoming the reference in home automation. This release also brings very interesting new integrations that continue to broaden the system's compatibility. Remember that integrations are the "brain" of the system, since they let you bring different devices into Home Assistant and then interact with them quickly and easily. New integrations include Ghost, the Hegel Amplifier music system, the Homevolt battery, the Hypontech Cloud monitoring system for solar inverters, and Indevolt. Others cover the brands IntelliClima, Liebherr, MTA New York City Transit, MyNeomitis, Powerfox Local, Redgtech, System Nexa 2, Teltonika, Trane Local, and Zinvolt. If you are looking for a cloud backup system, there is now support for IDrive e2, which is Amazon S3 compatible, to keep your backups safe. There is also support for OneDrive for Business, the business edition of OneDrive, where we can store our backups. 
As you can see, this version brings a large number of new integrations, and several existing ones have been improved as well. Other changes: Alexa devices now support air-quality devices, VeSync humidifiers gain more functions, and if you use KNX you can now configure the number of entities and send the current time directly from the graphical interface. There are also improvements to the Uptime Kuma, Radarr, Proxmox VE, Mealie, and Portainer integrations, in case you want to monitor all these services directly from Home Assistant. We recommend reading the full list of integration changes in the official Home Assistant 2026.3 release notes. The most important change in this version, and one you should review carefully, is the move to Python 3.14, a fairly significant upgrade. The Python update brings performance improvements to the platform: the new version includes a faster interpreter, shorter startup time, and better RAM usage. It is not something you will notice overnight, but it all contributes to a better Home Assistant experience. In principle you should not have problems, but integrations installed via HACS or your own Python scripts might break, so before updating, verify that everything works correctly in a virtual machine or test environment. Some backward-incompatible changes in this release affect lights: "color_temp" no longer accepts "mireds" for setting light color temperature; you must use "color_temp_kelvin" instead. The entity state attributes color_temp, kelvin, min_mireds, and max_mireds have also been removed; use color_temp_kelvin, min_color_temp_kelvin, and max_color_temp_kelvin instead. 
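The mired and kelvin scales are simple reciprocals (K = 1,000,000 / mireds), so scripts that still store mired values can be migrated with a one-line conversion. A quick sketch in Python; the function name is ours, not part of Home Assistant:

```python
def mireds_to_kelvin(mireds: float) -> int:
    """Convert a mired color temperature to kelvin: K = 1,000,000 / M."""
    if mireds <= 0:
        raise ValueError("mireds must be positive")
    return round(1_000_000 / mireds)

# Typical warm-white (370 mireds) and cool-white (153 mireds) values:
print(mireds_to_kelvin(370))  # 2703
print(mireds_to_kelvin(153))  # 6536
```

Because the mapping is monotonic but inverted, a *minimum* mired limit corresponds to a *maximum* kelvin limit, which is worth remembering when replacing min_mireds/max_mireds with the new kelvin attributes.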
If you use the tado° integration, mobile device tracking has been removed, so mobile devices and their associated entities are no longer available; this fixes reauthentication problems and reduces the load on tado°'s cloud API. If you use a "fan" template there are also changes, but only if there are syntax errors. All the backward-incompatible changes you should evaluate are detailed in the official release notes for this version.
Images (1):
|
|||||
| Home Assistant 2026.3 sends robot vacuums to clean room by … | https://www.igen.fr/domotique/2026/03/h… | 1 | Mar 07, 2026 00:03 | active | |
Home Assistant 2026.3 Sends Robot Vacuums to Clean Room by Room - iGeneration. Description: For once, Apple's Home app was ahead of Home Assistant, and that deserved a mention: the monthly update of the open-source home-automation platform released yesterday adds a feature Apple's app already offered. With version 2026.3, the co... Content:
Nicolas Furno, Thursday, March 5, 2026, 08:40 • 14 For once, Apple's Home app was ahead of Home Assistant, and that deserved a mention: the monthly update of the open-source home-automation platform released yesterday adds a feature Apple's app already offered. With version 2026.3, robot vacuums can now be controlled room by room. If you own such a device, you can associate each room detected by the robot with one of the areas configured in Home Assistant, then use automations, the interface or, in the future, a voice assistant to ask it to clean a given spot. This novelty is available for all Matter robots that can handle rooms, which is not the case for every model. True to Home Assistant's spirit, robot vacuums can also support the feature directly without the standard, which at launch is the case for Roborock and Ecovacs products. Other brands may join the list later, whether they use Matter or not. Among the other changes in this rather rich monthly update, note some welcome improvements to the energy dashboard. To simplify its configuration, the settings are now split into three sections covering electricity, water, and gas. In addition, the "Now" view introduced with Home Assistant 2025.12 gains tiles at the top of the screen showing instantaneous electricity, water, and gas consumption if you have the right equipment. Finally, the electricity view has been reorganized to better highlight overall statistics, especially on mobile. 
The update also surfaces a particularly useful option for automations that was previously well hidden, to the point that I discovered its existence while reading the release notes. Each action could already be configured to continue on error, instead of stopping the entire execution of the running automation or script, but until now you had to change that setting in YAML; the new version exposes it in each action's contextual menu. In many situations, for instance when switching many lights on or off, or opening or closing multiple shutters, this is the best choice. The Home Assistant assistant can now be voice-activated on Android devices. The companion app detects the wake word locally, which has the downside of consuming far more energy. Still, this can be useful for Android tablets used to display a dashboard. For smartphones, it will be possible to enable or disable the feature from automations, so you can tie it to charging or geolocation, since it makes no sense away from home. As always, the update enriches the integration catalog, notably adding Liebherr's connected appliances. Support for the Matter standard advances with the addition of carbon monoxide detectors and TVOC air-quality sensors, although Matter 1.5 and its cameras are still missing. Finally, among the less visible aspects, Home Assistant has updated Python under the hood, a change that promises performance gains, notably faster startup. You will find the details of this version's changes in the accompanying blog post. 
Proudly published since 1999 by MacGeneration SARL. Online press service recognized by the CPPAP under number 0929 W 93490. ISSN 1773-200X. All rights reserved.
Images (1):
|
|||||
| Build LLM-Powered Robots in Python with Gemini API by Mammoth … | http://www.kicktraq.com/projects/johnbu… | 8 | Mar 07, 2026 00:03 | active | |
Build LLM-Powered Robots in Python with Gemini API by Mammoth Interactive :: KicktraqURL: http://www.kicktraq.com/projects/johnbura/build-llm-powered-robots-in-python-with-gemini-api/ Description: This book gives you the language to unlock your next mechanical move. Content: Images (8):
|
|||||
| Honor's Humanoid Robot Ambitions Signal a New Front in China's … | https://www.webpronews.com/honors-human… | 0 | Mar 04, 2026 16:00 | active | |
Honor's Humanoid Robot Ambitions Signal a New Front in China's Tech Hardware WarsDescription: Keywords Content: |
|||||
| Honor to Show Off its First Humanoid Robot at MWC … | https://www.androidheadlines.com/2026/0… | 1 | Mar 04, 2026 16:00 | active | |
Honor to Show Off its First Humanoid Robot at MWC 2026URL: https://www.androidheadlines.com/2026/02/honor-humanoid-robot-phone-mwc-2026-launch.html Description: Honor is launching its first humanoid robot at MWC 2026 along with a working prototype of its intriguing "Robot Phone." Content:
Honor's First Humanoid Service Robot to Debut at MWC 2026. At MWC 2026 in Barcelona, Honor will show off its first humanoid service robot and a prototype of its Robot Phone. The company is investing billions to become the leader in consumer robotics, with a focus on shopping assistance and AI-driven tasks. China currently dominates shipments in 2025 (over 13,000 units shipped worldwide) and has prices much lower than those of Western companies like Tesla. It seems the robotics industry is ready to take another big step forward this year. China wants to be a big player in this technology and is investing heavily in it. Honor, best known for its smartphones, will officially unveil its first humanoid service robot this weekend at MWC 2026. This will be the company's first step into a sector currently experiencing explosive growth. Honor's move is the result of a massive multibillion-dollar initiative focused on "embodied AI", that is, intelligence that lives in a physical, moving body. While competitors like Xiaomi and Oppo are also exploring AI agents, Honor is positioning itself as the first major smartphone manufacturer to dive headfirst into the humanoid consumer-service segment. Many humanoid robots today are designed for heavy lifting in factories or logistics. However, Honor is aiming for something more personal. According to reports, this new machine is optimized for consumer services, specifically tasks like shopping assistance. Interestingly, Honor's humanoid robot won't be traveling alone. 
The firm plans to showcase it alongside the first working prototype of its "Robot Phone." This tech concept device features a unique gimbal-mounted pop-up camera that can autonomously track subjects, effectively acting as a tiny, desk-bound robot that follows your movement. Honor is entering the market at a time when China has firmly taken the lead in the robotics industry. According to data from the research firm Omdia, the global market saw a staggering 500% revenue growth in 2025 alone. Last year, out of the 13,000 humanoid robots shipped worldwide, the vast majority originated in China. Local companies like AgiBot and Unitree are currently outperforming Western rivals in terms of shipment volumes and pricing. For instance, while a Tesla Optimus might cost between $20,000 and $30,000, Chinese models are entering the market at much lower price points—some as low as $6,000. Honor's upcoming entry will add even more pressure to the global competition. As we wait for the official keynote on March 1st, the big question remains: how ready is this robot for current homes? Whether we see a fully functional prototype or a conceptual vision, we won't have to wait long to find out.
Images (1):
|
|||||
| Honor Will Unveil its First AI Humanoid Robot and Robot … | https://www.gizmochina.com/2026/02/24/h… | 1 | Mar 04, 2026 16:00 | active | |
Honor Will Unveil its First AI Humanoid Robot and Robot Phone Concept at MWC 2026 - GizmochinaDescription: Honor unveils its first AI humanoid robot and Robot Phone concept at MWC 2026, expanding beyond smartphones into AI-driven devices Content:
Honor Device Co. is set to make a major announcement at MWC Barcelona this weekend with the unveiling of its first humanoid robot. The company says the robot is designed mainly for consumer service tasks such as shopping assistance and everyday support. This move signals Honor's entry into the fast-growing humanoid robotics segment, an area that has recently gained strong global attention due to rapid AI advancements. Honor claims it will be the first smartphone brand among its direct competitors to launch a humanoid robot. The company aims to position itself as a leader in next-generation AI experiences, moving beyond traditional mobile devices and exploring embodied AI technology. The humanoid robot will be showcased under the "Honor Robot Phone" concept, which highlights how artificial intelligence could integrate more deeply into daily life. Honor is also expected to present new agentic AI services designed to work across smartphones and connected devices. This strategy is part of a larger multibillion-dollar expansion plan focused on artificial intelligence, new AI applications, and building a smarter ecosystem. Similar moves are being seen across the industry, with rivals like Xiaomi, Oppo, and Vivo also investing heavily in AI innovation. Chinese startups such as Unitree are currently leading the momentum in AI robotics with advanced demonstrations, showing how competitive the space has become. The event, themed "Believers in AI Future," will also include the launch of several new devices. These include the Honor Magic V6 foldable smartphone, which features improved durability, a larger battery, and a slimmer design. Honor will also introduce the AI-powered MagicPad4 tablet and MagicBook Pro 14 laptop, both designed for productivity and lightweight performance. Since becoming independent from Huawei in 2020, Honor has expanded rapidly with backing from Shenzhen government investment entities and state-owned enterprises. 
The company has also expressed plans for a future IPO, reflecting its long-term ambition to grow beyond smartphones and establish itself as a broader AI technology leader. Read More:
Images (1):
|
|||||
| Xiaomi trials humanoid robots in its EV factory | https://www.cnbc.com/2026/03/04/xiaomi-… | 1 | Mar 04, 2026 16:00 | active | |
Xiaomi trials humanoid robots in its EV factoryURL: https://www.cnbc.com/2026/03/04/xiaomi-humanoid-robots-ev-factory-.html Description: Two humanoid robots can complete 90% of the work in three hours, Xiaomi President Lu Weibing told CNBC. Content:
Xiaomi trialed its humanoid robots in its electric vehicle production plants, the company's president told CNBC, as it looks to boost productivity in its factories. Two humanoid robots can complete 90% of the work in three hours, Lu Weibing told CNBC in an interview at the Mobile World Congress trade show in Barcelona, Spain. They can complete tasks like installing nuts and moving materials, he said. "To integrate robots into our production lines, the biggest challenge is for them to keep up with the pace," Lu added. "In Xiaomi's car factory, every 76 seconds, a new car gets off the assembly line. The two humanoid robots are able to keep up our pace." Lu said that having humanoid robots working in factories and improving productivity was a key focus for Xiaomi. In the future, humanoid robots will be able to "replace humans for certain work" and "accomplish work that humans couldn't do," he added. Xiaomi first debuted its CyberOne humanoid robot in 2022, but it is currently not selling the product. However, Lu said the use of its robots in production plants was still in its early stages. "The robots in our production lines weren't doing an official job, more like the interns," Lu told CNBC. Still, the trial highlights the pace at which Chinese companies are investing in and improving robotic capabilities. There are a plethora of Chinese firms, some of which have recently gone public, developing the technology. Experts expect Chinese firms to ramp up production of robots this year, with the country itself an early adopter of the technology. Analysts at RBC Capital Markets forecast a global total addressable market for humanoids of $9 trillion by 2050, with China accounting for more than 60% of that. Xiaomi built its business around selling a whole host of consumer electronics products, but in recent years launched an electric vehicle business, which is growing fast. 
While Lu said he was "bullish" on robotics, he also said it was "too early to say" how big the market will be. Other companies in China have also expanded into robotics. Chinese EV startup XPeng has developed its own humanoid, while on Sunday, smartphone player Honor debuted its first model. In the U.S., Elon Musk has sought to position Tesla as a robotics and AI firm. In January, Musk said Tesla was ending production of its Model S and X vehicles and would use the factory in Fremont, California, to build Optimus humanoid robots.
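Lu's 76-second figure is a takt time: the interval between finished cars leaving the line. As a quick sanity check (a hypothetical sketch with an invented helper name, not Xiaomi's tooling), the throughput it implies can be computed directly:

```python
def units_per_hour(takt_seconds: float) -> float:
    """Throughput implied by a takt time (seconds between finished units)."""
    return 3600.0 / takt_seconds

# The article's 76-second pace works out to roughly 47 cars per hour,
# so a robot on the line must finish its station's task within 76 s.
print(round(units_per_hour(76)))  # → 47
```

This is why the article frames "keeping up with the pace" as the biggest integration challenge: a robot that is merely accurate but slower than the takt time stalls the whole line.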
Images (1):
|
|||||
| Tesla Rival Xiaomi Deploys Humanoid Robot With 3 Hours Of … | https://www.benzinga.com/markets/tech/2… | 1 | Mar 04, 2026 16:00 | active | |
Tesla Rival Xiaomi Deploys Humanoid Robot With 3 Hours Of Autonomous Operating Time At EV Assembly Plant - Xiaomi (OTC:XIACF) - BenzingaDescription: Xiaomi successfully deployed humanoid robots in EV assembly plant, achieving 3 hours of autonomous operations with 90.2% success rate. Content:
Xiaomi Corp (OTC:XIACY) (OTC:XIACF) has shared that it deployed humanoid robots in its EV assembly plant as the auto industry looks towards incorporating robotics into production activities. The Chinese tech giant shared that the humanoid robots it deployed successfully achieved 3 hours of autonomous operations, placing self-tapping nuts in the die-casting workshop, CnEVPost reported on Monday, citing a post on Chinese social media platform Weibo by Xiaomi. The robots achieved a 90.2% success rate at the task, the report said. Xiaomi is reportedly considering other applications for its robots, which incorporate a Vision-Language-Action (VLA) approach combined with reinforcement learning. The report also cited Xiaomi CEO Lei Jun, who, in a post on the Chinese social media platform WeChat, predicted that the tech company would see more humanoid robots being deployed at its production facilities in the next five years. The news comes as more auto manufacturers are entering the robotics sector, with Xpeng Inc. (NYSE:XPEV) announcing that it was establishing a facility in Guangzhou, China, to target large-scale production of its IRON humanoid robot by the end of 2026. Meanwhile, Xpeng also aims to produce over 1 million units of the IRON robot by the end of the decade and has touted deploying the robot in sectors like tour guiding, retail services, and more. Tesla Inc. (NASDAQ:TSLA) CEO Elon Musk has shared that the company faces its biggest competitive threat in the robotics sector from Chinese companies. Musk has been bullish on Tesla's Optimus, sharing that the robot could become the first-ever von Neumann probe and aid colonization efforts on other planets. Musk has also shared plans to establish an Optimus Academy to train the robot using Tesla's reality generator, the same technology the company uses for Full Self-Driving (FSD) training. Tesla also faces competition from Hyundai Motor-backed Boston Dynamics, which recently shared details regarding its Atlas humanoid robot. Boston Dynamics says Atlas is capable of lifting objects weighing up to 110 lbs and can function smoothly in temperature ranges from -4° to 104° F. It also boasts the capability of replicating a task learned by a single unit to the entire fleet. The robot, according to the company, is scheduled to be deployed in the South Korean automaker's production facility in Georgia.
Images (1):
|
|||||
| Xiaomi humanoid robot achieves 90% success in EV nut installation | https://interestingengineering.com/ai-r… | 1 | Mar 04, 2026 16:00 | active | |
Xiaomi humanoid robot achieves 90% success in EV nut installationURL: https://interestingengineering.com/ai-robotics/china-xiaomi-humanoid-robot-ev-factory Description: Chinese tech giant Xiaomi has joined a growing list of companies that are deploying humanoids on factory floors in manufacturing processes. Content:
The robot performed the installation task within 76 seconds, fulfilling the fastest cycle time requirement. Chinese technology company Xiaomi announced on March 2 that it has deployed humanoid robots in EV assembly operations. Xiaomi CEO Lei Jun revealed this development on his official WeChat account. According to company sources, the humanoid worked at a self-tapping nut installation workstation in Xiaomi EV's die-casting workshop. It achieved a success rate of 90.2% by working for three consecutive hours during the task. The humanoid completed the process in 76 seconds, meeting the production line's fastest cycle time requirement. 
Xiaomi said that the deployment of its humanoids marks a key step towards its vision of "large-scale application in automotive manufacturing scenarios." Speaking to Chinese tech outlet ITHome, the Beijing-based firm revealed the deployments and validation tests are ongoing at other production stations, with further updates to be announced later. In the task, the humanoid robot picked up self-tapping nuts precisely from an automatic feeding device and placed them onto positioning fixtures. In the next step, it coordinated with slide belt conveyors and automatic positioning to complete the automated tightening of floor components after integrated die-casting. According to Xiaomi, the greatest challenge is achieving precise alignment and reliable engagement of the self-tapping nuts. The spline structure inside the nuts, the non-fixed gripping posture, and interference from magnetic forces significantly increased assembly complexity. The Beijing-based tech giant adopted an end-to-end data-driven approach to solve this problem using a joint training framework. It leveraged a 4.7-billion-parameter Vision-Language-Action (VLA) model developed in-house, combined with reinforcement learning. This approach reduces dependence on manual training data and enables the robot to adapt quickly and learn from its environment. With multimodal input from different sensing techniques, such as vision, touch, and joint position awareness, the robot is less likely to misinterpret complex situations. This process improves the stability and overall performance of the robot. To control full-body movement, Xiaomi uses a hybrid system that blends traditional optimization-based control with reinforcement learning. The optimization controller updates in under one millisecond, allowing the robot to respond in real time. Meanwhile, the reinforcement learning system was trained using hundreds of millions of simulated disturbances in a virtual environment. 
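The multimodal idea described above, with vision, touch, and joint-position streams feeding one model, can be sketched in miniature. This is a hypothetical illustration, not Xiaomi's pipeline; `fuse_observations` and the toy numbers are invented for the example, and a real VLA model would use learned encoders rather than simple min-max scaling:

```python
def normalize(xs):
    """Scale a list of readings to [0, 1]; a constant stream maps to zeros."""
    lo, hi = min(xs), max(xs)
    if hi == lo:
        return [0.0 for _ in xs]
    return [(x - lo) / (hi - lo) for x in xs]

def fuse_observations(vision, touch, joints):
    """Concatenate per-modality normalized streams into one policy input,
    so no single sensor's raw scale dominates the downstream model."""
    return normalize(vision) + normalize(touch) + normalize(joints)

# Toy example: a two-value vision embedding, two touch readings, three joint angles
obs = fuse_observations([0.2, 0.9], [1.0, 0.0], [0.1, 0.5, 0.9])
```

The point of fusing modalities is exactly what the article claims: when touch and joint-position readings accompany vision, an ambiguous camera view (a nut that looks seated but isn't) is less likely to be misread.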
The training helps the robot stay balanced even in extreme conditions and allows what it learned in simulation to transfer directly to the real-world robot without additional retraining. In an interview last year, Xiaomi CEO Lei Jun shed light on the company's goal to deploy humanoid robots at a large scale in its factories within the next five years. The company also plans to expand the use of humanoid robots in household settings, which he said "could open a new trillion-yuan market." Last week, the BMW Group announced its intentions to pilot humanoid robots at its Leipzig plant in Germany this summer. Meanwhile, Tesla CEO Elon Musk has already revealed that Optimus Version 3 will be launching later this year, with the current model already performing basic factory tasks. As global automakers and tech firms accelerate real-world trials, Xiaomi's latest deployment underscores how humanoid robots are steadily moving from experimental prototypes toward practical roles on factory floors.
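The hybrid control scheme the article describes, a fast optimization-based controller combined with a learned correction, can be caricatured in a few lines. Everything here is a hypothetical sketch: a simple proportional term stands in for the sub-millisecond optimization controller, and `rl_policy` stands in for the simulation-trained network:

```python
def hybrid_control_step(state, target, rl_policy, alpha=0.2):
    """One control tick: a fast feedback term plus a scaled learned correction.

    The proportional feedback plays the role of the optimization-based
    controller; rl_policy supplies the learned adjustment; alpha limits
    how far the learned term can pull the command away from the baseline.
    """
    feedback = [t - s for s, t in zip(state, target)]   # fast baseline term
    correction = rl_policy(state)                       # learned adjustment
    return [f + alpha * c for f, c in zip(feedback, correction)]

# A trivial stand-in policy that nudges every joint toward zero
command = hybrid_control_step([0.5, -0.2], [0.0, 0.0],
                              rl_policy=lambda s: [-x for x in s])
```

In a real system a loop like this would run at the controller's update rate, with the learned term trained only in simulation, matching the sim-to-real transfer the article describes.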
Images (1):
|
|||||
| Xiaomi debuts humanoid robot for EV plant | https://www.azernews.az/region/255178.h… | 1 | Mar 04, 2026 16:00 | active | |
Xiaomi debuts humanoid robot for EV plantURL: https://www.azernews.az/region/255178.html Description: Xiaomi has announced the deployment of a humanoid robot at its electric vehicle manufacturing plant, AzerNEWS reports. Content:
By Alimat Aliyeva Xiaomi has announced the deployment of a humanoid robot at its electric vehicle manufacturing plant, AzerNEWS reports. The robot operated autonomously for three hours at the self-tapping nut installation station in the foundry. During testing, it achieved a 90.2% operational success rate with an average production cycle of 76 seconds. The task involved precise gripping of fasteners, accurate positioning, and automatic tightening following the integrated casting of components. The system is powered by Xiaomi's proprietary VLA model, Robotics-0, which features 4.7 billion parameters and is enhanced with reinforcement learning. Its control architecture integrates visual perception, tactile feedback, and joint position control, allowing the robot to maintain stability and adapt to minor variations on the assembly line. Xiaomi states that this is the first step toward scaling humanoid robots in automotive manufacturing. Future plans include expanding their use to additional assembly tasks, such as quality inspection and component handling. This innovation not only boosts efficiency but could also address labor shortages and improve workplace safety by taking over repetitive or physically demanding tasks. Xiaomi's move strengthens its position against competitors like Tesla and Xpeng, who are also investing heavily in humanoid robotics for industrial applications. Interestingly, Xiaomi is exploring the possibility of giving the robot limited collaborative capabilities with human workers, opening the door to hybrid assembly lines where humans and humanoids work side by side.
Images (1):
|
|||||
| Who (and How) Are the Humanoid Robots That Build … | https://it.motor1.com/news/719500/robot… | 1 | Mar 03, 2026 08:00 | active | |
Who (and How) Are the Humanoid Robots That Build CarsURL: https://it.motor1.com/news/719500/robot-umanoidi-produzione-auto/ Description: Apollo, Optimus, or Figure 01. They are only the beginning, so it is worth getting to know them better. Here is who they are and where they work Content:
Men and machines. Since antiquity, human beings have built and used machines to carry out certain tasks. In 1927, a silent film directed by Fritz Lang (Metropolis) imagined what the year 2026 would look like. In Fritz Lang's vision, the rich would be far removed from the poor, and the "machine-man" would replace the human being entirely. Metropolis inspired (or was cited by) many other science-fiction films, to name just a few: the Star Wars saga (from 1977), Blade Runner (1982), and Terminator (from 1984). In Italy, in 1980, Alberto Sordi explored the human-robot relationship in the film "Io e Caterina." In 2004, Audi presented its RSQ concept car in "I, Robot," and more recently the film "The Mitchells vs. the Machines" (2021) showed what could happen to humans if smartphones rebelled and turned an army of robots against us. In short, cinema has been accustoming us for years to a reality in which robots live alongside us, and now we have arrived. The auto sector (a trillion-dollar giant) was the first to use robotics on a large scale, and with artificial intelligence we have come to call the robots by name: Apollo, Optimus, or Figure 01. They are only the beginning, so it is worth getting to know them better. It is August 19, 2021: Tesla holds its Artificial Intelligence (AI) Day and presents a prototype humanoid robot called Optimus, in homage to the Transformers film character "Optimus Prime." Three years later, as the layoff of hundreds of people who worked on Tesla Superchargers makes headlines, the company releases a new video showing Optimus working in the factory. According to the company, Optimus will be sold to external customers by the end of 2025. 
It will continue to be optimized year after year, and attentive readers will also remember the stir caused in late 2023 by news that the robot had injured a worker. Bumps in the road. January 2024: robotics startup Figure signs an agreement with BMW to introduce its humanoid robots at the Spartanburg plant in South Carolina (USA). The robots, an official statement explains, are intended to automate production tasks that are "difficult, dangerous, or tedious." "Single-purpose robotics has saturated the commercial market for decades, but the potential of general-purpose robotics is completely untapped," says Brett Adcock, CEO of Figure. "Figure's robots will enable companies to increase productivity, reduce costs, and create a safer and more consistent environment." How many robots are currently being tested is not known, but the experiment is set to involve factories around the world. Figure 01 assembles BMWs. Another announcement dates from 2024: robotics company Apptronik is working with Mercedes to test Apollo, a bipedal robot 1.70 meters tall that can lift up to 25 kg. "This is a new frontier, and we want to understand the potential of both robotics and automotive production to close labor gaps in areas such as low-skilled, repetitive, and physically demanding work, and to free up our highly skilled team members on the production line to build the world's most desirable cars," says Mercedes production chief Jörg Burzer. Here too, we don't know how many robots are in development, but we expect news very soon. April 2024: Sanctuary AI, a company founded in 2018 with the goal of creating "the first human-like intelligence in general-purpose robots," announces a collaboration with Magna (the giant that builds and assembles cars on behalf of various brands, including Mercedes, Jaguar, and BMW). 
"By integrating general-purpose AI robots into our manufacturing facilities for specific tasks, we can enhance our capabilities to deliver high-quality products to our customers," says Todd Deaville, vice president of the Advanced Manufacturing Innovation division at Magna. So welcome, Phoenix: you can see it in action in the video below. Car factories contain robots of every kind. The "humanoid" ones are merely the latest arrivals, joining robots known as articulated, articulated-arm, six-axis collaborative, Cartesian (essentially made up of linear actuators), cylindrical, and so on. This is why humanoids are also referred to as "bipedal robots," and they can be found in factories of all kinds. Digit, for example, has two metal arms and two metal legs and lends a hand in Amazon warehouses packing deliveries. Digit works in Amazon warehouses. Beyond factory work, these human-like robots are also beginning to be employed in services. At the Beijing Auto Show, Omoda and Jaecoo presented Mornine, an intelligent android developed in collaboration with AiMoga (a strategic partner of Chery), shown in the video below describing a car for sale. Will these machines manage to outdo human beings in every respect? In Japan, despite constant technological progress, the Takumi masters (the most experienced craftsmen working at Nissan or Toyota) can still outperform robots in finishing high-quality products. But does craftsmanship still have a future? 
Images (1):
|
|||||
| Samsung to use humanoid robots and agentic AI to reshape … | https://www.notebookcheck.net/Samsung-t… | 1 | Mar 02, 2026 16:01 | active | |
Samsung to use humanoid robots and agentic AI to reshape its global factories by 2030 - NotebookCheck.net NewsDescription: Samsung has revealed its manufacturing roadmap for the coming years. The company will push humanoid robots and agentic AI to transform its factories by 2030. Content:
Samsung Electronics has given a glimpse of its global production vision. The South Korean giant will lean into two of the most disruptive technologies in the industry to overhaul the way it brings products to the market: humanoid robots and agentic AI. Samsung is active in robot development, but has been limited to commercial products such as vacuums. However, it has been investing in companies developing humanoid robots, including Rainbow Robotics. The electronics maker intends to deploy Rainbow Robotics' RB-Y1 on its manufacturing lines. The other main pillar of Samsung's 2030 manufacturing transformation strategy is AI. Samsung states it will enhance quality and productivity at every production step (from material warehousing to shipment) with agentic AI. It will also deploy AI applications to improve workplace safety and environmental health. Samsung will reveal more about its AI strategy at the Mobile World Congress (MWC) in Barcelona in March. The company will host a side-line event, which will also detail its governance strategy for overseeing its AI deployment. Meanwhile, Samsung is not alone in bringing humanoids to the factory. Prominent Apple supplier Foxconn announced in October 2025 that it would begin using Nvidia-powered bipedal robots to assemble AI servers within 6 months. Hyundai has also ordered 30,000 6-foot-2-inch Atlas robots from its subsidiary, Boston Dynamics. The humanoids will work across Hyundai's car factories in the US. Yonhap News Agency
Images (1):
|
|||||
| Toyota Puts Humanoid Robots on the Assembly Line: What Seven … | https://www.webpronews.com/toyota-puts-… | 0 | Mar 02, 2026 16:01 | active | |
Toyota Puts Humanoid Robots on the Assembly Line: What Seven Machines in Ontario Signal for the Future of ManufacturingDescription: Keywords Content: |
|||||
| Watch: Humanoid robots work together using the same AI 'brain' … | https://www.euronews.com/next/2026/02/0… | 1 | Mar 02, 2026 16:01 | active | |
Watch: Humanoid robots work together using the same AI 'brain' | EuronewsURL: https://www.euronews.com/next/2026/02/05/watch-humanoid-robots-work-together-using-the-same-ai-brain Description: Until now, humanoid robots have largely worked on their own. A new AI system is designed to run them together. Content:
"Humanoid robots designed for different tasks can now share a single artificial intelligence 'brain' that coordinates their actions across multiple locations simultaneously." A UK-based company has unveiled an AI system that works as a virtual "shared brain" for fleets of robots built for different purposes across factories, services, and homes simultaneously. Companies like Tesla, Boston Dynamics, and XPeng have showcased humanoid robot prototypes in recent years, but these demonstrations typically feature robots operating on their own. The UK firm's approach is designed to manage multiple humanoid robots together under a single AI. Shared control systems are already common for industrial robots, but applying the same approach to robots that rely on human-like movement and manipulation has been rare. The AI system, called KinetIQ, can assign tasks to entire robot fleets and control individual movements simultaneously in seconds, according to Humanoid, the robotics company behind the new system. Data from individual robots is shared across the system, helping improve performance fleet-wide. In a video released by Humanoid, a woman asks a bipedal humanoid robot to order cocoa powder and olive oil. In the next scene, wheeled robots in a warehouse-like environment use five-fingered hands to grasp a glass bottle and a soft paper bag, then place them in a hard container box before packing them into a paper bag. Once the order reaches the home, the bipedal robot unpacks the bag and places the items as instructed by the woman's voice commands. According to Humanoid, the wheeled robots seen in the video are designed for industrial use, such as back-of-store grocery picking, container handling, and packing, while the bipedal robots are intended for service roles and domestic use. The company describes the bipedal robot as an "intelligent assistant" capable of voice interaction, online ordering, and grocery handling. 
Humanoid has previously managed to have a 179 cm bipedal robot walking just two days after its assembly, a process that typically takes weeks or months in humanoid robotics. The robot is designed to carry loads of up to 15 kilograms, and the company positions it as a response to labour shortages, physically demanding work, and unpaid domestic care. Humanoid said the capabilities shown in the video have already been tested in real-world pilot projects, and that a beta version of the wheeled robots will go on sale early next year. Video editor • Roselyne Min
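The article describes KinetIQ as a single AI that assigns tasks across a heterogeneous fleet (wheeled warehouse units, bipedal home assistants) while pooling each robot's data fleet-wide. Humanoid has not published its API, so the sketch below is purely illustrative: every class and method name (`Robot`, `FleetBrain`, `assign`, `report`) is hypothetical, and it only shows the general shape of a shared-brain dispatcher plus pooled telemetry.

```python
from dataclasses import dataclass, field

# Illustrative toy only — not Humanoid's KinetIQ. A "shared brain" that
# dispatches tasks to idle robots of the right type and pools telemetry
# so data from one robot is visible fleet-wide.

@dataclass
class Robot:
    name: str
    kind: str            # e.g. "wheeled" (industrial) or "bipedal" (domestic)
    busy: bool = False

@dataclass
class FleetBrain:
    robots: list = field(default_factory=list)
    telemetry: dict = field(default_factory=dict)  # pooled fleet-wide data

    def assign(self, task: str, kind: str):
        """Dispatch a task to the first idle robot of the requested type."""
        for r in self.robots:
            if r.kind == kind and not r.busy:
                r.busy = True
                return r.name
        return None  # no idle robot of that kind available

    def report(self, name: str, key: str, value: float):
        """Pool one robot's measurement so the whole fleet can learn from it."""
        self.telemetry.setdefault(key, []).append((name, value))

brain = FleetBrain(robots=[Robot("W1", "wheeled"), Robot("B1", "bipedal")])
picker = brain.assign("pick olive oil", "wheeled")   # dispatched to W1
brain.report("W1", "grasp_success_rate", 0.97)       # shared fleet-wide
```

A real system would of course add scheduling, motion control, and failure handling; the point here is only the split between fleet-level task assignment and shared per-robot data.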
Images (1):
|
|||||
| Segway Robotics teams up with NVIDIA on new AI kit … | https://www.investing.com/news/stock-ma… | 0 | Mar 01, 2026 16:00 | active | |
Segway Robotics teams up with NVIDIA on new AI kit By Investing.comDescription: Segway Robotics teams up with NVIDIA on new AI kit Content: |
|||||
| Toyota Brings the Digit Humanoid Robots into the Factory | https://www.tecnoandroid.it/2026/02/26/… | 1 | Mar 01, 2026 00:01 | active | |
Toyota Brings the Digit Humanoid Robots into the FactoryURL: https://www.tecnoandroid.it/2026/02/26/toyota-porta-in-fabbrica-i-robot-umanoidi-digit-1783640/ Description: Toyota introduces Digit humanoid robots at its Canadian RAV4 plant. The company is betting on automating internal material flows. Content:
In automotive plants, humanoid robots are no longer just a futuristic promise. These machines are starting to move along production lines with the ease of an experienced worker. Toyota has long been watching the evolution of automation, and the company has now decided to take a step forward and deploy Digit robots, built by Agility Robotics. Seven of these humanoids will be employed at the Canadian plant where the RAV4 SUV is built. The robots' job will be to manage internal parts flows, unloading bins of components from automated vehicles: repetitive, tiring work that until now fell on human operators. The real novelty lies in how these humanoids are integrated. The model Toyota has chosen is "Robots as a Service", which delivers the robots ready to use and connected to a cloud platform, Agility Arc. The platform can coordinate fleets, optimize routes, and cut idle time. The whole system rests on artificial intelligence that lets the robots adapt quickly to existing workflows, all without upending the plant's organization. Digit is not a complete newcomer: Agility Robotics has already tested it in demanding logistics settings for companies such as Amazon, Schaeffler, and GXO. It is a mature platform, able to operate with autonomy and precision. Toyota wants to exploit that potential fully, turning Digit into a tool that speeds up production without adding complexity. Meanwhile, the list of manufacturers interested in humanoid robots keeps growing. Today's robots can walk, lift loads, and coordinate work. Humanoids are now being designed to become an integral part of everyday production, and they could radically change how cars (and more) are built. 
Author: Margareth ("Maggie"), Tecnoandroid web writer.
Images (1):
|
|||||
| Agility Humanoid Robots Will Work at a Factory | https://expert.com.ua/211252-humanoidni… | 1 | Mar 01, 2026 00:01 | active | |
Agility Humanoid Robots Will Work at a FactoryURL: https://expert.com.ua/211252-humanoidni-roboty-agility-pracyuvatymut-na-zavodi.html Description: Toyota has signed a contract to use seven humanoid robots at its RAV4 SUV plant under a "robots as a service" deal Content:
After a year-long pilot project, Toyota's Canadian manufacturing subsidiary has signed a contract to use seven humanoid robots at its RAV4 SUV plant under a "robots as a service" agreement. "After evaluating a number of robots, we are excited to deploy Digit to improve the team member experience and further increase operational efficiency in our manufacturing facilities," Toyota Motor Manufacturing Canada (TMMC) president Tim Hollander said in a statement. The robot in question, Digit, is built by Agility Robotics, which spun out of Oregon State University in 2015. Digit is designed to work in industrial settings without people present, often linking two automated production lines. In this case, the robots will unload containers of auto parts from an automated warehouse tug. While seven robots doing manual heavy lifting may seem like a small step next to flashy videos of metal humans doing backflips, deploying humanoid robots in real workplaces is in fact rare and difficult. Demonstrating capabilities in a lab is one thing; integrating them into a company's workflow, including maintenance and charging, is not so simple. "When tech companies spend real time understanding the tasks that need to be done, the real workflows that happen… that's when we'll see significant growth in their adoption," said Ram Devarajulu, a vice president at Cambridge Consultants, at the Humanoids Summit in late 2025. Agility is one of the leaders in moving robots out of the lab, and Digits work in a similar role for logistics providers such as GXO, Schaeffler, and Amazon. The company has its own cloud software suite, called Arc, which lets users manage their robot fleets, and it argues that artificial intelligence will be crucial to lowering deployment costs. 
"Deployment costs… can significantly exceed the price of the robot," Agility CTO Pras Velagapudi said in an interview last year. "AI tools let us reduce deployment costs and cut the time it takes to set a robot up and bring it to the required level of performance." TMMC and Agility will use the partnership as an opportunity to roll out other use cases that free workers from repetitive physical tasks and let them focus on higher-value work. The company is also preparing a new generation of robots that will be safe to work alongside people. Today's humanoid robots, strong enough to lift heavy loads, are still considered too unreliable to operate autonomously around humans. Competitor Figure AI spent 10 months last year testing its Figure 02 robots at a BMW plant, where, according to the company, they unloaded 90,000 parts. Other companies running humanoids in pilot programs include Apptronik, Unitree, Tesla, Boston Dynamics, 1X Technologies, and Reflex Robotics.
Images (1):
|
|||||
| Toyota Deploys Digit Humanoid Robots for RAV4 Assembly | https://hightech.fm/2026/02/20/toyota-d… | 1 | Mar 01, 2026 00:01 | active | |
Toyota Deploys Digit Humanoid Robots for RAV4 AssemblyURL: https://hightech.fm/2026/02/20/toyota-digit Description: Seven Agility Robotics Digit humanoid robots have started work at a Toyota plant in Canada. They help unload components and speed up RAV4 crossover assembly, showing how robots are gradually entering auto manufacturing. Content:
Seven Agility Robotics Digit humanoid robots have started work at a Toyota plant in Canada. They help unload components and speed up assembly of RAV4 crossovers, showing how robots are gradually entering auto manufacturing. At Toyota's Canadian plant, seven Digit robots joined the RAV4 assembly process after a year of testing, TechCrunch reports. The Digit robots will focus on handling loads: their task is to unload containers of components from the automated carts that circulate around the shop floor. Through its proprietary Arc cloud service, Agility Robotics allows the robots to be managed remotely. Agility Robotics already supplies its robots to logistics companies, including e-commerce giant Amazon. Company representatives note that integrating robots into a customer's infrastructure often costs more than the purchase itself, which is why AI is used heavily to manage the process. Toyota is not alone in its interest in humanoid robots: TSMC plans to deploy Agility robots at its chip fabs. Agility is also developing a next generation of humanoid robots that will be safer to work around people; existing models can lift heavy loads but require strict safety precautions. Many automakers, especially Chinese companies, are now testing humanoid robots at their plants — a step toward more automated, flexible production in which robots help people with heavy, routine work. Cover image: Agility Robotics
Images (1):
|
|||||
| Seven Agility Humanoid Robots Will Help Assemble the Toyota RAV4 in … | https://pcnews.ru/news/sem_celovekopodo… | 1 | Mar 01, 2026 00:01 | active | |
Seven Agility Humanoid Robots Will Help Assemble the Toyota RAV4 in Canada - PCNEWS.RUDescription: All computer news on PCNews.ru. All the latest information about computers and information technology. Syndication of news, articles, and press releases from all computer (IT) sites. Content:
Historically, auto assembly plants have mainly used robotic arms and automated carts, but these days humanoid robots are finding a place there too. Seven Agility Robotics Digit humanoid robots have entered service at Toyota Motor's Canadian plant. Image source: Agility Robotics. As TechCrunch notes, Toyota spent about a year testing the robots beforehand, but they are now attached to RAV4 crossover production on a permanent basis. As in many deployments today, the Digit humanoids will focus on handling loads: they will be tasked with unloading containers of components from the automated carts that circulate around Toyota's Canadian plant. Agility Robotics already supplies its robots to several logistics companies, including e-commerce giant Amazon, and its proprietary Arc cloud service lets customers monitor the robots remotely. According to Agility representatives, the cost of integrating robots into a customer's infrastructure often exceeds the cost of buying them, so AI matters in this field too. TSMC also intends to deploy Agility's humanoid robots at its chip plants, though presumably not for the most precise or critical operations. Agility is also developing a next-generation humanoid robot that will be safer to use in environments full of people; existing humanoid robots capable of lifting heavy loads are not so flawless from a safety standpoint when working near humans. Many automakers, especially Chinese ones, are already testing humanoid robots at their plants. © 3DNews
Images (1):
|
|||||
| Meet Digit: Toyota’s newest worker doesn’t need coffee breaks | … | https://globalnews.ca/news/11677383/dig… | 1 | Mar 01, 2026 00:01 | active | |
Meet Digit: Toyota’s newest worker doesn’t need coffee breaks | Globalnews.caURL: https://globalnews.ca/news/11677383/digit-toyotas-humanoid-robot/ Description: Toyota Motor Manufacturing Canada will begin deploying three humanoid ‘Digit’ robots at its Woodstock plant under a new deal with Agility Robotics, following a successful pilot. Content:
It has arms, hands, eyes — of a sort — and can stand for hours doing the same task, over and over, without uttering a word of complaint. But Toyota Canada’s latest employee is unlike any other ever to grace the floor of the company’s Woodstock, Ont., assembly plant. You see, Digit is a humanoid robot. Following a successful pilot, the company has signed a commercial Robots-as-a-Service agreement with Oregon-based Agility Robotics to deploy its general-purpose robot at the facility. The robots will support manufacturing, supply chain and logistics operations. While seven robots are allocated under the agreement, deployment will begin with three units. “After evaluating a number of robots, we are excited to deploy Digit to improve the team member experience and further increase operational efficiency in our manufacturing facilities,” Tim Hollander, president of Toyota Motor Manufacturing Canada, said in a release. Digit is designed to take on repetitive and physically demanding tasks commonly found on automotive production lines. In the release, the companies said that automating “extremely repetitive and physically taxing tasks” could reduce strain and increase safety for employees, freeing them to focus on more value-added work. Agility Robotics CEO Peggy Johnson said partnering with Toyota, one of the world’s largest automakers, marks a significant step for humanoid robots in industrial settings. “Toyota is one of the premier companies in the world; one with a long history of innovation and success, so it’s a privilege to join forces to integrate humanoid robotic solutions like Digit into automotive production,” Johnson said. 
The companies say they will continue exploring additional use cases where robots and artificial intelligence could further augment automotive production. Toyota joins a growing number of Fortune 500 companies deploying Agility’s humanoid robots globally, including GXO, Schaeffler and Amazon. Toyota Motor Manufacturing Canada operates vehicle assembly plants in Cambridge and Woodstock and is Toyota’s largest manufacturing operation outside Japan.
Images (1):
|
|||||
| Toyota Hires Seven Agility Digit Humanoid Robots for RAV4 Assembly | https://3dnews.ru/1137151/sem-cheloveko… | 1 | Mar 01, 2026 00:01 | active | |
Toyota Hires Seven Agility Digit Humanoid Robots for RAV4 AssemblyDescription: Historically, auto assembly plants have mainly used robotic arms and automated carts, but these days humanoid robots are finding a place there too. Seven Agility Robotics Digit humanoid robots have entered service at Toyota Motor's Canadian plant. Content:
Historically, auto assembly plants have mainly used robotic arms and automated carts, but these days humanoid robots are finding a place there too. Seven Agility Robotics Digit humanoid robots have entered service at Toyota Motor's Canadian plant. Image source: Agility Robotics. As TechCrunch notes, Toyota spent about a year testing the robots beforehand, but they are now attached to RAV4 crossover production on a permanent basis. As in many deployments today, the Digit humanoids will focus on handling loads: they will be tasked with unloading containers of components from the automated carts that circulate around Toyota's Canadian plant. Agility Robotics already supplies its robots to several logistics companies, including e-commerce giant Amazon, and its proprietary Arc cloud service lets customers monitor the robots remotely. According to Agility representatives, the cost of integrating robots into a customer's infrastructure often exceeds the cost of buying them, so AI matters in this field too. TSMC also intends to deploy Agility's humanoid robots at its chip plants, though presumably not for the most precise or critical operations. Agility is also developing a next-generation humanoid robot that will be safer to use in environments full of people; existing humanoid robots capable of lifting heavy loads are not so flawless from a safety standpoint when working near humans. Many automakers, especially Chinese ones, are already testing humanoid robots at their plants. 
© 1997—2026 3DNews.
Images (1):
|
|||||
| Musk vs Zuckerberg: The Humanoid Robot Battle Intensifies … | https://www.socialnetlink.org/2025/10/0… | 1 | Feb 28, 2026 16:00 | active | |
Musk vs Zuckerberg: The Humanoid Robot Battle Intensifies | SocialnetlinkDescription: For more than a decade, Elon Musk and Mark Zuckerberg have waged a relentless technological rivalry. On one side, the head of Tesla and SpaceX has built his influence on electric cars, reusable rockets, and, more recently, generative artificial intelligence. Content:
For more than a decade, Elon Musk and Mark Zuckerberg have waged a relentless technological rivalry. On one side, the head of Tesla and SpaceX has built his influence on electric cars, reusable rockets, and, more recently, generative artificial intelligence. On the other, the founder of Meta made Facebook a planet-wide social network before betting on virtual and augmented reality. Today their confrontation is taking on a new dimension: humanoid robots. Musk's strategy is built around mechanical power. With his Optimus robot, he is aiming for mass production and already envisions millions of units by 2030, destined for factories and homes alike. But the ambition poses titanic challenges: designing humanoids that are reliable, affordable, and industrializable at scale means reinventing assembly lines and overcoming immense logistical constraints. Zuckerberg is taking a different path with Metabot. Here the stakes are less about muscle than about the brain. Designed by Meta's teams, this robot bets on an AI capable of learning and reasoning, paired with simple mechanical effectors. The idea is clear: like the smartphone, whose value lies mostly in its apps, it is the software that makes the machine versatile. Zuckerberg's masterstroke rests on a platform open to developers and hardware makers. Rather than a closed, Apple-style approach, Meta is drawing on Android's success: offering a universal software layer able to drive different robots, without building each device itself. To realize this vision, Meta has recruited experts from MIT and industrial robotics while investing heavily through Reality Labs and Superintelligence Labs. The goal: establish a software standard for robotics, as Android did in mobile phones. 
At bottom, this confrontation illustrates two philosophies: Musk wants to build a perfected mechanical body, while Zuckerberg wants to supply an adaptable digital mind. Success will depend on market adoption. If Optimus proves its effectiveness in factories, it could transform global productivity. If Metabot and its software become the standard, Meta could end up equipping far more robots than it will ever build.
Images (1):
|
|||||