| Action | Title | URL | Images | Scraped At | Status |
|---|---|---|---|---|---|
| | This Chinese Robot Head Looks So Human It's Almost Creepy | https://propakistani.pk/2025/10/07/this… | 1 | Jan 03, 2026 00:01 | active |
This Chinese Robot Head Looks So Human It's Almost Creepy
URL: https://propakistani.pk/2025/10/07/this-chinese-robot-head-looks-so-human-its-almost-creepy/
Description: A Chinese robotics startup has introduced what looks to be one of the most realistic robot faces yet, capable of blinking, nodding, and moving its eyes in…
Content:
A Chinese robotics startup has introduced what looks to be one of the most realistic robot faces yet, capable of blinking, nodding, and moving its eyes in a strikingly human-like manner. The resemblance is uncanny. The company, AheadForm, released a demonstration video on YouTube showing the robotic head called the Origin M1 displaying subtle expressions as it appears to observe its surroundings. The head can tilt, blink, and respond to environmental cues, giving it a level of realism that could change how humans perceive robots. Founded in 2024, AheadForm says its goal is to make human-robot interactions more natural and emotionally intuitive. The company's website explains that it plans to merge advanced artificial intelligence systems, including large language models (LLMs), with expressive robotic heads capable of understanding and responding to people in real time. AheadForm's researchers say that creating emotionally expressive robots could prove valuable in industries such as customer service, education, and healthcare, where trust and human-like interaction are crucial. The firm has developed multiple lines of robotic designs, including the "Elf Series," known for its stylized, expressive features, and the "Lan Series," focused on more lifelike human movements and cost-efficient designs. According to the company, its current work centers on building humanoid heads that can perceive emotion, interpret human cues, and respond appropriately, bridging the gap between mechanical systems and genuine social interaction. Experts believe such developments could have far-reaching implications for human-robot relationships, potentially making service robots, caregivers, and AI companions more relatable in the years ahead.
Images (1):
| | Fundamentals of Human-Robot Interaction - Displays - Elektroniknet | https://www.elektroniknet.de/optoelektr… | 1 | Jan 03, 2026 00:01 | active |
Fundamentals of Human-Robot Interaction - Displays - Elektroniknet
Description: Self-driving robots increasingly operate in public spaces. To keep pedestrians and robots from obstructing each other, concepts are being tested for visualizing a robot's movement on displays and thereby making it predictable for passers-by. These are the first results.
Content:
Self-driving robots increasingly operate in public spaces. To keep pedestrians and robots from obstructing each other, concepts are being tested for visualizing a robot's movement on displays and thereby making it predictable for passers-by. These are the first results. Self-driving robots were originally developed for industry and logistics, which remain the largest use cases today. There, human-robot interaction plays a subordinate role; it takes place either not at all or at most with individual, trained personnel. Accordingly, the integrated display systems and HMIs are not designed for interaction with pedestrians in larger crowds. Gradually, however, the robots' field of application is shifting out of isolated and largely unpopulated areas, and the requirements on the display systems are changing with it. "Self-driving robots are now also being deployed in areas where they can, and are meant to, come into contact with larger gatherings of people," said Milton Guerry, President of the International Federation of Robotics (IFR), in August 2021. The federation estimates the world market for service robots in logistics at around 120,000 units delivered in 2020, rising to an expected 160,000 in 2021 [1]. By comparison, service robots for public areas such as hotels, airport terminals, or parks, at around 25,000 units in 2020, are still a much smaller market, but one that is growing strongly: the IFR forecasts average growth of almost 40% through 2023. New display concepts are also needed, at least in part, for the professional cleaning-robot segment (Fig. 1). That world market stood at around 18,000 units in 2020 and, according to the IFR, is expected to grow about as strongly as service robots in public areas.
New display systems for human-robot interaction are studied in the research discipline of HRI, Human-Robot Interaction. A central problem in HRI development is trajectory conflicts. These arise when a robot's intended path is not sufficiently evident to humans, and they cause disruptions above all in public places with heavy through-traffic. Unlike walking through a crowd, where the walking behavior of fellow humans, especially their intention to turn or give way, can be judged intuitively, this intuition is missing when dealing with robots. HRI displays are meant to compensate for this deficit. The development of HRI display systems is still at a relatively early stage. So far, the only established approach is blue spotlights that shine in the robot's direction of travel (Fig. 2). This approach is easy to implement, but not intuitively understandable for pedestrians. "There have also been first experiments with displays in research and industry," explains Prof. Dr. Karlheinz Blankenbach, head of the Display Lab at Pforzheim University. "But the results so far have not been satisfactory." Overall, human-robot interaction for service robots in public places is a little-explored field, lacking above all practical experience. Fundamental questions include: Which representation of the robot's path is intuitively understood across all age and cultural groups? Which brightness, contrast, and color-rendering values must the display system deliver? Which displays can reach these values and are also suitable for integration into service robots?
These aspects were investigated by Etienne Charrier and Prof. Blankenbach of Pforzheim University, Franziska Babel of Ulm University, and Siegfried Hochdorfer of the Ulm-based robot manufacturer Adlatus Robotics, who presented their results in a paper at SID Display Week 2021 [2]. They provide a first working basis for HRI system developers. A participant study with a cleaning robot at a German railway station with moderate pedestrian traffic confirmed that the absence of a display of the robot's trajectory leads to obstructions in public places and constitutes a barrier to acceptance. A further challenge for pedestrians is the lack of feedback from the robot as to whether it has recognized people in its way and will take an evasive course or stop. According to the researchers, the best solution for path visualization turned out to be an animated display of arrows, either projected onto the floor over a length of around 50 cm in the direction of travel or shown as directional arrows on a display. Feedback to pedestrians can be given via color: a green arrow for a detected clear path and a red one for a detected obstacle (Fig. 3). With floor projections, the size of the projection area must be weighed carefully: it must not take up too much space, so that it can still be rendered in crowded conditions with high visitor traffic, yet it must be large enough to indicate turns (Fig. 4).
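The color-feedback scheme the researchers describe (animated arrows, green for a detected clear path, red for a detected obstacle, projected over roughly 50 cm) can be sketched as a small state-to-display mapping. The names and structure below are illustrative assumptions, not code from the paper:

```python
# Sketch of the arrow-feedback scheme from the study: animated directional
# arrows, colored green when the path is clear and red when an obstacle
# (e.g. a pedestrian) is detected. All names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PathState:
    obstacle_detected: bool
    heading_deg: float  # robot's intended direction of travel

def arrow_display(state: PathState) -> dict:
    """Map the robot's path state to parameters for the projected arrow."""
    return {
        "color": "red" if state.obstacle_detected else "green",
        "heading_deg": state.heading_deg,
        "length_cm": 50,   # ~50 cm projection in the direction of travel
        "animated": True,  # animated arrows were rated most intuitive
    }

print(arrow_display(PathState(obstacle_detected=False, heading_deg=15.0)))
```

A display or projector driver would consume this dictionary each control cycle, so the feedback updates as soon as the obstacle detector changes state.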
Images (1):
| | ReWalk Joins Israeli Technology Innovation Consortium for… | https://www.globenewswire.com/news-rele… | 1 | Jan 03, 2026 00:01 | active |
ReWalk Joins Israeli Technology Innovation Consortium for…
Description: The MAGNET Consortium program led by Israel's Innovation Authority fosters collaboration through R&D grants...
Content:
April 29, 2022 08:30 ET | Source: ReWalk Robotics Ltd. MARLBOROUGH, Mass., April 29, 2022 (GLOBE NEWSWIRE) -- ReWalk Robotics, Ltd. (Nasdaq: RWLK) ("ReWalk" or the "Company"), a leading manufacturer of robotic medical technology for people with lower extremity disabilities, today announced its membership in the Human Robot Interaction (HRI) Consortium, part of the Israel Innovation Authority's MAGNET incentive program. This incentive program provides grants for R&D collaboration as part of a consortium comprised of private businesses and leading academic centers. The goals of the HRI consortium are to "develop advanced technologies aimed at providing robots with social capabilities, enabling them to carry out various tasks and effective interactions with different users in diverse operational environments." The total program has a budget of NIS 57 million, which includes funding for research and development grants to help drive technological innovation. The first 18-month period of the grant has allocated NIS 1.745 million to fund ReWalk-specific projects. "ReWalk is proud to continue our legacy as a company rooted in collaboration between Israel and the United States," said CEO Larry Jasinski. "Committing substantial R&D funding to human-robotic interaction will help foster greater advancements in technology at a moment when we are seeing reimbursement progress in the US and Germany. This will enable access for individuals wanting to walk, coupled with technology paths that will broaden adoption in the years ahead." As a member of the HRI Consortium, ReWalk will collaborate with several universities to develop advanced technologies aimed at improving the human-exoskeleton interaction. This research collaboration with top researchers in the fields of robotics, behavioral sciences and human-computer interaction will seek to make the use of exoskeletons easier and more natural in order to promote wider adoption of the technology.
"This program is expected to boost our technological capabilities and allow us to introduce groundbreaking technologies in our current and future products," said David Hexner, VP of R&D at ReWalk. The Consortium is a 3-year project, with the first meeting of the HRI cohort scheduled for May 2022. ReWalk is one of nine companies participating in the HRI Consortium, in addition to several Israeli universities.
About ReWalk Robotics Ltd.
ReWalk Robotics Ltd. develops, manufactures and markets wearable robotic exoskeletons for individuals with lower limb disabilities as a result of spinal cord injury or stroke. ReWalk's mission is to fundamentally change the quality of life for individuals with lower limb disability through the creation and development of market-leading robotic technologies. Founded in 2001, ReWalk has headquarters in the U.S., Israel and Germany. For more information on the ReWalk systems, please visit rewalk.com. ReWalk® is a registered trademark of ReWalk Robotics Ltd. in Israel and the United States. ReStore® is a registered trademark of ReWalk Robotics Ltd. in the United States, Europe and the United Kingdom.
Forward-Looking Statements
In addition to historical information, this press release contains forward-looking statements within the meaning of the U.S. Private Securities Litigation Reform Act of 1995, Section 27A of the U.S. Securities Act of 1933, as amended, and Section 21E of the U.S. Securities Exchange Act of 1934, as amended. Such forward-looking statements may include projections regarding ReWalk's future performance and other statements that are not statements of historical fact and, in some cases, may be identified by words like "anticipate," "assume," "believe," "continue," "could," "estimate," "expect," "intend," "may," "plan," "potential," "predict," "project," "future," "will," "should," "would," "seek" and similar terms or phrases.
The forward-looking statements contained in this press release are based on management's current expectations, including with respect to any anticipated benefits from ReWalk’s participation in the HRI Consortium, which cannot be guaranteed and are subject to uncertainty, risks and changes in circumstances that are difficult to predict, many of which are outside of ReWalk's control. Important factors that could cause ReWalk's actual results to differ materially from those indicated in the forward-looking statements include, among others: uncertainties associated with future clinical trials and the clinical development process, the product development process and U.S. Food and Drug Administration (“FDA”) regulatory submission review and approval process; the adverse effect that the COVID-19 pandemic has had and may continue to have on the Company’s business and results of operations; ReWalk's ability to have sufficient funds to meet certain future capital requirements, which could impair the Company's efforts to develop and commercialize existing and new products; ReWalk's ability to maintain compliance with the continued listing requirements of the Nasdaq Capital Market and the risk that its ordinary shares will be delisted if it cannot do so; ReWalk’s ability to maintain and grow its reputation and the market acceptance of its products; ReWalk's ability to achieve reimbursement from third-party payors, including the Centers for Medicare & Medicaid Services (CMS), for its products; ReWalk's limited operating history and its ability to leverage its sales, marketing and training infrastructure; ReWalk's expectations as to its clinical research program and clinical results; ReWalk's expectations regarding future growth, including its ability to increase sales in its existing geographic markets and expand to new markets; ReWalk's ability to obtain certain components of its products from third-party suppliers and its continued access to its product manufacturers; 
ReWalk's ability to improve its products and develop new products; ReWalk's ability to obtain clearance from the FDA for use of the ReWalk Personal device on stairs; ReWalk's compliance with medical device reporting regulations to report adverse events involving its products, which could result in voluntary corrective actions or enforcement actions such as mandatory recalls, and the potential impact of such adverse events on ReWalk's ability to market and sell its products; ReWalk's ability to gain and maintain regulatory approvals; ReWalk's ability to maintain adequate protection of its intellectual property and to avoid violation of the intellectual property rights of others; the risk of a cybersecurity attack or breach of ReWalk's IT systems significantly disrupting its business operations; ReWalk's ability to use effectively the proceeds of its offerings of securities; and other factors discussed under the heading "Risk Factors" in ReWalk's annual report on Form 10-K for the year ended December 31, 2021 filed with the Securities and Exchange Commission ("SEC") and other documents subsequently filed with or furnished to the SEC. Any forward-looking statement made in this press release speaks only as of the date hereof. Factors or events that could cause ReWalk's actual results to differ from the statements contained herein may emerge from time to time, and it is not possible for ReWalk to predict all of them. Except as required by law, ReWalk undertakes no obligation to publicly update any forward-looking statements, whether as a result of new information, future developments or otherwise.
ReWalk Media Relations: Jennifer Wlach, E: media@rewalk.com
ReWalk Investor Contact: Almog Adar, Director of Finance, ReWalk Robotics Ltd., T: +972-4-9590130, E: investorrelations@rewalk.com
Images (1):
| | Humanoid robot with human-like competence of riding a bicycle unveiled … | http://www.ecns.cn/news/sci-tech/2025-0… | 1 | Jan 03, 2026 00:01 | active |
Humanoid robot with human-like competence of riding a bicycle unveiled in Shanghai
URL: http://www.ecns.cn/news/sci-tech/2025-03-12/detail-ihepqcpn0565612.shtml
Content:
Humanoid robot Lingxi X2 (Photo/Courtesy of AgiBot)
Humanoid robot manufacturer AgiBot in Shanghai on Tuesday unveiled its latest humanoid robot model, which achieves nearly human-like mobility such as riding a bicycle and balancing on a hoverboard. With its rapid responses when interacting with users, the robot showcases a close integration of artificial intelligence (AI) and humanoid robot technology, with great application potential in scenarios such as elderly care services and family companionship. In a video released by Peng Zhihui, co-founder of AgiBot (also known as Zhiyuan Robotics), the 1.3-meter-tall, 33.8-kilogram Lingxi X2 humanoid robot demonstrates its athletic, interactive, and manipulation capabilities. It can not only walk, run, turn around and dance like a real human being, but also ride a bicycle, a scooter and a hoverboard; by combining deep reinforcement learning and imitation learning techniques and algorithms, its movement flexibility far outperforms that of similar humanoid robots. According to Peng, the robot features a highly integrated, innovative design and is built with impact-resistant flexible materials. When Peng picks up a mobile phone and shows it to the robot, asking what time it is, the robot can precisely tell the time. When Peng further asks the robot for advice on which drink to choose, milk or juice, at 5:42 am, the robot suggests he drink milk, which helps with sleep. The robot can also quickly read and understand medicine descriptions. As the second model of the Lingxi series, Lingxi X2 is the first truly agile robot with complex interaction capabilities. It can mimic human breathing rhythms, exhibit curiosity and attention mechanisms, and communicate with human beings through subtle body movements and gestures, the Global Times learned from the company on Tuesday.
Based on a multimodal large language model, the robot can achieve millisecond-level interaction responses, assess humans' emotional states from their facial expressions and vocal tones, and provide corresponding responses. According to Peng, the research team is improving the robot's cognitive model and expects to give it more emotional expression capabilities in the future. According to the company, the robot can achieve multi-robot collaboration for certain tasks and extend its applications to various aspects of daily life, serving as a security guard, a nanny, or a cleaner in sectors such as education and healthcare. Its functions can also be tailored by users to their respective needs in scenarios such as elderly care, services and family companionship. Peng said in the video that Lingxi X2 represents a significant breakthrough in the fields of AI and emotional AI. Experts noted that this humanoid robot has reached a new level of naturalness and immersion in human-robot interaction. With continuous advancements in technology, it is expected to become an important assistant in human life, bringing more possibilities for future smart life.
Images (1):
| | India-made human-like robot - The HinduBusinessLine | https://www.thehindubusinessline.com/bu… | 1 | Jan 03, 2026 00:01 | active |
India-made human-like robot - The HinduBusinessLine
URL: https://www.thehindubusinessline.com/business-tech/india-made-human-like-robot/article70396735.ece
Description: NIT-Rourkela patents an AI-driven, human-like robot capable of understanding speech, emotions, and gestures for natural interaction.
Content:
NIT-Rourkela has secured a patent for an indigenous robotic system designed to interact with people in a highly human-like manner. Built using artificial intelligence and large language models (LLMs), the robot integrates both verbal and non-verbal communication to enable seamless, natural interaction. Unlike conventional robots limited to pre-programmed replies, this system can understand everyday spoken language, follow verbal instructions, answer questions and hold real-time conversations that adapt to context. A defining feature of the robot is its ability to recognise human emotions. By analysing facial expressions, such as happy, neutral, or sad, it can respond in an empathetic and comforting way, improving user engagement. The robot can also recognise simple gestures like waving or raising a hand, making it accessible to users across age groups, including children and elderly individuals who may rely more on intuitive gestures than voice commands. The system is designed as a friendly companion suitable for homes, classrooms, offices, hospitals and community environments. For speech and language processing, the robot uses a Raspberry Pi (low-cost single-board computer) to capture spoken or text inputs. These inputs are interpreted by an LLM, which determines context and generates a relevant, human-like reply. The final output is delivered using Google TTS (text-to-speech), giving the robot natural-sounding voice responses. At an estimated cost of ₹80,000–90,000, the robot offers a cost-effective alternative to interactive robotic systems that use expensive components or proprietary technologies. Published on December 15, 2025
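The described pipeline (capture input on the Raspberry Pi, interpret it with an LLM, answer via Google TTS) is structurally a chain of three stages. The sketch below shows only that control flow, with stand-in stubs for the LLM and TTS calls; every name here is hypothetical, not NIT-Rourkela's implementation:

```python
# Minimal sketch of the described capture -> LLM -> TTS chain. The LLM and
# TTS stages are injected as callables; on the real robot they would be an
# LLM API call and Google TTS. The stub below is a made-up stand-in.
from typing import Callable

def make_pipeline(interpret: Callable[[str], str],
                  speak: Callable[[str], None]) -> Callable[[str], str]:
    """Wire user input -> LLM reply -> spoken output into one callable."""
    def handle(user_input: str) -> str:
        reply = interpret(user_input)  # LLM generates a context-aware reply
        speak(reply)                   # TTS stage renders the reply as audio
        return reply
    return handle

# Stub LLM: returns a canned reply so the control flow can be exercised.
def stub_llm(text: str) -> str:
    return f"I heard: {text}. How can I help?"

spoken = []  # stands in for the TTS/audio output
pipeline = make_pipeline(stub_llm, spoken.append)
print(pipeline("hello"))
```

Keeping the stages as injected callables is what makes the low-cost design swappable: the same loop runs whether the reply comes from a hosted LLM or a local model, and whether the output goes to Google TTS or a screen.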
Images (1):
| | Human-robot interaction using retrieval-augmented generation and fine-tuning with transformer neural … | https://www.nature.com/articles/s41598-… | 1 | Jan 03, 2026 00:01 | active |
Human-robot interaction using retrieval-augmented generation and fine-tuning with transformer neural networks in industry 5.0 | Scientific Reports
Description: The integration of Artificial Intelligence (AI) in Human-Robot Interaction (HRI) has significantly improved automation in modern manufacturing environments. …
Content:
Scientific Reports volume 15, Article number: 29233 (2025)
The integration of Artificial Intelligence (AI) in Human-Robot Interaction (HRI) has significantly improved automation in modern manufacturing environments. This paper proposes a new framework of using Retrieval-Augmented Generation (RAG) together with fine-tuned Transformer Neural Networks to improve robotic decision making and flexibility in group working conditions. Unlike the traditional rigid rule based robotic systems, this approach retrieves and uses domain specific information and responds dynamically in real time, thus increasing the performance of the tasks and the intimacy between people and robots. One of the significant findings of this research is the application of regret-based learning, which helps the robots learn from previous mistakes and reduce regret in order to improve the decisions in the future. A model is developed to represent the interaction between RAG based knowledge acquisition and Transformers for optimization along with regret based learning for predictable improvement. To validate the effectiveness of the proposed system, a numerical case study is carried out to compare the performance with the conventional robotic systems in a production environment. Furthermore, this research offers a clear approach for implementing such a system, which includes the system architecture and parameters for the AI-based human-robot manufacturing systems.
This research solves some of the major issues including the problems of scalability, specific fine-tuning, multimodal learning, and the ethical issues in the integration of AI in robotics. The outcomes of the study are important in Industry 5.0, intelligent manufacturing and collaborative robotics, and the advancement of highly autonomous, flexible and intelligent production systems. The integration of Artificial Intelligence (AI) and robotics in modern manufacturing has led to the emergence of Human-Robot Interaction (HRI) systems that enhance productivity, efficiency, and adaptability. However, traditional robotic systems often struggle with real-time decision-making, knowledge retrieval, and adaptive learning in dynamic environments. To address these challenges, Retrieval-Augmented Generation (RAG) and Transformer-based fine-tuning offer promising solutions by enabling robots to retrieve relevant information, optimize task execution, and continuously learn from human feedback1. In industrial production, robots are required to perform complex, sequential tasks such as assembly, quality inspection, and maintenance. While conventional automation relies on predefined programming, these approaches fail to adapt to variations, uncertainties, and human interventions. This research proposes a regret-based learning model that enables a human-robot production system to optimize decision-making by minimizing performance regret over multiple learning cycles2. Despite advancements in deep learning and reinforcement learning, most robotic systems still lack efficient knowledge retrieval from previous experiences and external data sources, adaptive fine-tuning that refines robotic behavior based on human interventions, and regret-based optimization to quantify and minimize task execution errors over time.
This research addresses these limitations by developing a novel human-robot production framework that integrates retrieval-augmented generation (RAG) for dynamic knowledge retrieval, fine-tuned transformer neural networks (TNN) for task optimization, and regret-based learning to continuously enhance robotic decision-making. The primary objectives of this study are:

- Develop a robust HRI framework that enables real-time knowledge retrieval and task adaptation using RAG and fine-tuned Transformers.
- Implement a regret-based optimization model that allows the robot to minimize errors, execution time, and human interventions.
- Evaluate system performance numerically in a real-world production environment, focusing on task efficiency, accuracy, and learning rate improvements.

The human-robot production system is designed using the following components:

- Human Operator: provides task instructions and corrective feedback.
- RAG Module: retrieves relevant knowledge from past experiences, manuals, and external databases.
- Transformer Neural Network (TNN): processes retrieved knowledge to optimize execution plans.
- Regret Model: calculates performance gaps and updates learning parameters.
- Robot Execution Module: performs the assigned tasks and adapts based on fine-tuned decisions.
- Sensor Feedback Loop: monitors task execution and transmits real-time performance metrics.

The system undergoes iterative fine-tuning, where the robot learns from past mistakes, reduces errors, and enhances efficiency over multiple production cycles. This study contributes to the field of human-robot collaboration, adaptive automation, and AI-driven manufacturing by:

- Enhancing robotic learning capabilities through regret-based reinforcement learning.
- Integrating RAG for efficient knowledge retrieval, improving real-time adaptability.
- Reducing human interventions, making robotic systems more autonomous and efficient.
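The regret mechanism described here can be illustrated with a toy decision problem: after each cycle the robot compares the reward of its chosen action with the best action in hindsight and accumulates the gap, steering future choices toward the empirically best action. Everything in the sketch (action names, reward values, exploration rate) is invented for illustration; the paper's actual model is more elaborate:

```python
# Toy illustration of regret-based learning: track cumulative regret
# (best-in-hindsight reward minus achieved reward) and mostly follow the
# empirically best action after a short exploration phase.
# All actions and reward values are made up for this sketch.
import random

random.seed(0)
actions = ["plan_a", "plan_b"]
true_reward = {"plan_a": 0.6, "plan_b": 0.9}  # unknown to the learner
totals = {a: 0.0 for a in actions}
counts = {a: 0 for a in actions}
cumulative_regret = 0.0

for t in range(200):
    # explore for the first cycles and occasionally afterwards,
    # otherwise exploit the action with the best empirical mean reward
    if t < 20 or random.random() < 0.1:
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda x: totals[x] / max(counts[x], 1))
    reward = true_reward[a] + random.uniform(-0.05, 0.05)  # noisy outcome
    totals[a] += reward
    counts[a] += 1
    # regret for this cycle: gap to the best action in hindsight
    cumulative_regret += max(true_reward.values()) - true_reward[a]

print(round(cumulative_regret, 2))  # grows slowly once plan_b dominates
```

The point of the exercise is the shape of the curve: regret accumulates during exploration, then nearly flattens once the learner has identified the better plan, which is exactly the "minimize performance regret over learning cycles" behavior the framework targets.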
By combining RAG, Transformers, and regret-based learning, this research presents a scalable, intelligent HRI model that can be applied to smart factories, autonomous assembly lines, and collaborative robotics in various industries. This study aims to bridge the gap between AI-driven learning models and real-world robotic applications, providing a foundation for the next generation of intelligent, autonomous industrial robots. The remainder of this study is organized as follows: Sect. 2 reviews the literature on HRI, RAG, Transformers, and regret-based learning. Section 3 proposes the general research model framework. Section 4 presents the mathematical model for regret-based optimization in robotic decision-making. Section 5 provides a numerical analysis of the proposed system in a simulated production environment. In Sect. 6, results and discussion on system performance, learning efficiency, and scalability are given. Finally, Sect. 7 presents conclusions and future directions for expanding this research into multi-robot and real-time adaptive systems. The evolution of Human-Robot Interaction (HRI) has been significantly influenced by advancements in Retrieval-Augmented Generation (RAG), Transformer Neural Networks (TNN), and regret-based learning. The integration of these techniques enables robotic systems to retrieve relevant knowledge, optimize decision-making, and continuously refine their behavior based on human feedback. This literature review synthesizes recent research on these topics, examining their role in enhancing adaptive and intelligent robotic systems3. Traditional robotic systems often lack mechanisms to quantify and minimize decision-making errors over time. Recent studies have proposed regret-based models to address this limitation. Nakamura et al.4 introduced a calibrated regret metric to assess robot decision-making quality in HRI.
Their approach facilitates more accurate comparisons across interactions and enables efficient failure mitigation through targeted dataset construction. Similarly, Muvvala et al.5 developed a regret-minimizing synthesis framework for collaborative robotic manipulation, which ensures task completion while fostering cooperation. Another application of regret theory in HRI is its role in risk-aware decision-making. Jiang and Wang6 explored regret theory-based cost functions for optimizing human-multi-robot collaborative search tasks. Their results demonstrated that risk-aware models significantly enhance robotic performance by prioritizing tasks with higher regret values. The RAG framework has emerged as a powerful tool for improving robotic decision-making by enabling knowledge retrieval from past experiences, manuals, and external databases. Wang et al.7 introduced EMG-RAG, which combines retrieval mechanisms with editable memory graphs to generate personalized agents for adaptive human-robot collaboration. Their approach significantly improved context-aware decision-making in robotic systems. Furthermore, Sobrín-Hidalgo et al.8 investigated the role of large language models (LLMs) in generating explanations for robotic actions. By integrating RAG, they enhanced robots’ ability to provide justifications for their decisions, improving transparency and trust in HRI. Their findings underscore the potential of RAG-based systems in facilitating human-robot communication9. The introduction of Transformer architectures has revolutionized language processing and decision-making in robotics. The foundational work by Vaswani et al.10 on Transformers introduced the self-attention mechanism, which has since been widely adopted for robotic task adaptation. Brown et al.11 demonstrated how GPT-3 enables few-shot learning, allowing robots to adapt to new tasks with minimal training data12. 
The application of fine-tuned Transformers in robotic manipulation was explored by Zhang et al.13 through Mani-GPT, a model that integrates natural language processing (NLP) with robotic control. Their findings indicate that transformer-based models can significantly enhance intent recognition, task planning, and execution accuracy in interactive robotic environments14. Regret-based learning has also been applied to improve human-robot collaboration by minimizing suboptimal decision-making over time. Jiang and Wang6 introduced a regret-based framework for robotic decision optimization, where robots continuously update their behavior based on real-time human feedback. This approach has proven effective in reducing execution errors and improving task efficiency. Moreover, Mani-GPT13 demonstrates how integrating RAG and Transformers into robotic control can enhance collaborative task performance. By retrieving relevant past experiences and fine-tuning decision models, the system enables robots to adapt dynamically to evolving tasks and environments15. The integration of RAG, Transformer-based fine-tuning, and regret optimization offers a promising pathway for the next generation of intelligent robots. As demonstrated by recent research, these techniques enable robots to retrieve relevant knowledge, dynamically refine their decision-making, and continuously learn from human feedback. However, challenges remain in scalability, real-time adaptation, and computational efficiency. Future research should focus on hybrid models that combine deep learning, reinforcement learning, and symbolic reasoning to further enhance robotic autonomy and human collaboration. The literature reviewed highlights the critical role of RAG, Transformer Neural Networks, and regret-based learning in advancing adaptive robotic systems. These techniques enable robots to improve task execution, minimize decision errors, and enhance collaboration with humans. 
As these models continue to evolve, they will play an essential role in reshaping the landscape of intelligent automation and human-robot interaction. The field of Human-Robot Interaction (HRI) has seen significant advancements through the integration of Retrieval-Augmented Generation (RAG), Transformer Neural Networks (TNN), and regret-based learning. However, several research gaps remain, preventing the full realization of adaptive, autonomous, and human-centered robotic systems.

Limited Real-World Implementation of Regret-Based Learning in HRI. Existing Research: Several studies (e.g.,4,5) have introduced regret-based models to improve robotic decision-making and minimize errors over time. These models provide a framework for assessing and optimizing robot actions post hoc, enhancing adaptation and collaboration. Research Gap: Despite theoretical advancements, real-world implementations of regret-based learning remain limited. Most research focuses on simulations rather than deployment in real-world production environments. There is a need for large-scale empirical studies to evaluate how regret-based models perform under dynamic and unpredictable human interactions.

Challenges in Scalability and Computational Efficiency of RAG-Based HRI. Existing Research: RAG frameworks have been successfully applied in language generation and decision-making (e.g.,7,8). These studies highlight RAG's potential in improving robot adaptability by allowing retrieval from large datasets. Research Gap: However, current RAG implementations in HRI face scalability challenges due to high computational requirements and memory-intensive operations. Existing research does not address how edge AI and distributed computing can make RAG-based robotic systems more efficient in low-latency environments (e.g., real-time manufacturing or healthcare applications).

Lack of Personalized Fine-Tuning for Transformer-Based Robotic Systems.
Existing Research: Transformer models (e.g., GPT-3, BERT, Mani-GPT) have shown promise in enabling context-aware robotic decision-making (e.g.,13,16). However, fine-tuning strategies often rely on generic datasets rather than task-specific or user-specific data. Research Gap: There is a lack of personalized fine-tuning strategies that allow robots to learn from individual user preferences and behavioral patterns. Future research should explore how adaptive learning techniques, continual learning, and federated learning can be leveraged to create customized robotic assistants.

Insufficient Integration of Multimodal Learning in HRI. Existing Research: Most research on RAG and Transformers focuses on text-based interactions (e.g.,10,17). However, human-robot interactions often require multimodal communication, including speech, gestures, vision, and tactile feedback. Research Gap: Current studies fail to fully integrate multimodal learning into HRI frameworks. Future work should explore multimodal Transformers that can process and learn from multiple sensory inputs simultaneously, leading to more intuitive and natural human-robot collaboration.

Ethical and Safety Considerations in Regret-Based and AI-Driven HRI. Existing Research: While regret-based learning offers improvements in error minimization, its impact on safety-critical applications (e.g., medical robotics, autonomous vehicles) remains underexplored. Studies such as Jiang & Wang6 highlight the potential for risk-aware decision-making, but they do not address how regret-based learning can be regulated to prevent unintended consequences. Research Gap: There is a critical need for standardized ethical frameworks governing regret-based decision-making in AI-driven robotics. How do we ensure AI-powered robots make regret-optimized decisions while adhering to ethical and safety constraints?
Future studies should investigate human-in-the-loop approaches to balance autonomy, control, and ethical considerations in HRI. Addressing these research gaps will be essential to developing more efficient, adaptive, and ethical robotic systems. Future studies should focus on:
- Real-world implementation of regret-based HRI models in production environments.
- Optimizing RAG-based robotic decision-making for real-time applications.
- Personalized fine-tuning strategies for Transformer-based robotic models.
- Advancing multimodal learning to enhance human-robot interaction.
- Developing ethical frameworks for AI-driven regret-based robotics.

Bridging these gaps will help elevate human-robot collaboration, making it more intelligent, personalized, and ethically sound. The major contributions of this research on a human-robot production system using Retrieval-Augmented Generation (RAG) and fine-tuned Transformer Neural Networks can be summarized as follows:

Novel Integration of RAG and Transformer Neural Networks in HRI. This work pioneers the combination of RAG and fine-tuned Transformer models to enhance decision-making and adaptability in human-robot collaboration. Unlike traditional robotic systems, the model retrieves and generates knowledge dynamically, enabling context-aware responses in real-time production environments.

Regret-Based Learning for Improved Robot Decision-Making. This research introduces a regret-based optimization framework that allows robots to learn from past mistakes and refine their decision-making process over time. This ensures that robots reduce suboptimal actions and improve their interactions with humans, leading to safer and more efficient operations.

Development of an Analytical Model for Human-Robot Collaboration. This work provides a mathematical formulation that bridges the gap between RAG-based retrieval, Transformer fine-tuning, and regret theory in human-robot interactions.
This analytical model enables a structured understanding of how robots can autonomously improve performance in a production setting.

Real-World Numerical Study and Performance Validation. Unlike many theoretical studies, this research includes a real-world numerical case study demonstrating the effectiveness of RAG-enhanced robotic systems. The results showcase performance improvements in accuracy, adaptability, and production efficiency compared to conventional automation techniques.

Practical Implications for Smart Manufacturing and Industry 5.0. This research provides insights into how AI-powered robots can transform industrial production systems, leading to higher efficiency, reduced errors, and cost savings. The proposed system has applications in smart factories, autonomous warehousing, and collaborative robotic assembly lines, aligning with Industry 5.0 objectives.

This section outlines a model for a human-robot production system supported by Generative AI (GenAI), combining human and robotic efforts in a production line with GenAI support for planning, optimization, and decision-making. The research model integrates Human-Robot Collaboration (HRC) in a production system using Retrieval-Augmented Generation (RAG) and fine-tuned Transformer Neural Networks (TNN) to optimize performance, adapt to changes, and minimize errors over time. The system consists of a human operator and a robot working together in a production environment. The Transformer-based AI system is responsible for decision-making, learning from past experiences, and dynamically improving task execution. Key components: Human Input: task instructions or modifications. Retrieval-Augmented Generation (RAG): fetches relevant production knowledge and past actions. Transformer Neural Network: processes input and generates optimal task execution plans. Action Execution: the robot carries out the assigned task.
Performance Evaluation: feedback mechanism assessing task success. Fine-Tuning and Learning: regret-based feedback loop optimizing future decisions. In this work, 'intimacy' in human-robot interaction (HRI) is operationalized as a composite metric of: (i) trust, measured via post-task surveys (Likert scale 1–5) on perceived robot reliability; (ii) corrective feedback, the number of human interventions per task cycle (logged automatically); and (iii) responsiveness, the time delay between human commands and robot execution (in milliseconds). Higher intimacy reflects lower corrective feedback, shorter delays, and higher trust scores.

Model Components and Mathematical Formulation: Human Input (xt): the human provides natural language commands or task modifications, e.g., "Assemble part A with part B and tighten screws to 50 Nm." Retrieval-Augmented Generation (RAG): Context Retrieval (rt): the system queries a knowledge base (K) to retrieve similar past tasks, production manuals, or sensor data. Transformer Neural Network (TNN): the Transformer Neural Network (TNN) used in this research is designed to enhance real-time decision-making and context-aware responses in human-robot production systems. Unlike conventional deep learning models, the proposed Transformer architecture leverages self-attention mechanisms and Retrieval-Augmented Generation (RAG) to dynamically process and generate optimal task-specific actions. The key components of the Transformer model include: Input Encoding Layer: the input consists of sensor data, human instructions, environmental context, and retrieved knowledge from RAG. A multi-modal embedding layer processes different input types (text, vision, numerical data). The input is tokenized and passed through an embedding matrix to generate vector representations. Multi-Head Self-Attention Mechanism: self-attention layers allow the model to focus on relevant task dependencies by assigning different attention weights to different parts of the input.
The multi-head attention mechanism enables the robot to process multiple contextual signals simultaneously, improving real-time decision accuracy. Feedforward Network and Layer Normalization: the Transformer includes a position-wise feedforward network (FFN) that applies two linear transformations with ReLU activation: \({\text{FFN}}(x)=\hbox{max} (0,x{W_1}+{b_1}){W_2}+{b_2}\). Retrieval-Augmented Generation (RAG) Module: the Transformer is enhanced with a retrieval mechanism, allowing it to access external domain knowledge when generating decisions. The RAG module retrieves relevant information from a pre-trained knowledge base, enriching the Transformer's understanding of complex tasks. The retrieved knowledge vectors are concatenated with the input embeddings before passing through the self-attention layers. Regret-Based Fine-Tuning: the model incorporates regret-aware reinforcement learning, where past errors are used to fine-tune the Transformer weights. A regret function is computed from the difference between predicted and optimal robot actions, and the loss function is adjusted based on regret values to improve future predictions. Output Decision Layer: the final Transformer layer outputs task-specific robotic actions, such as movement commands, object manipulation, and collaborative responses to human instructions. The output can be continuous-valued (motion trajectories) or discrete (task selection and scheduling). For implementation, the Transformer model generates an optimal execution plan, using self-attention to weigh the importance of different parts of xt and rt. Robot Action Execution: the robot executes gt, following the generated plan: \({\text{Execute}}({g_t})\). Performance Evaluation and Feedback: the system calculates regret as the difference between the robot's actual action and the optimal action, based on human corrections, sensor data (e.g., torque applied, assembly precision), and production efficiency metrics.
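The scaled dot-product self-attention at the heart of this architecture can be illustrated with a dependency-free sketch. The toy query/key/value matrices below are invented for demonstration only; this is not the production model.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        w = softmax(scores)  # attention weights for this query
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out

# Two tokens, d_k = 2: each query attends mostly to the matching key.
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
ctx = attention(Q, K, V)
```

A multi-head variant would simply run several such attention computations in parallel on projected inputs and concatenate the results.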
Fine-Tuning with Reinforcement Learning: the model updates its policy using reinforcement learning to minimize regret. Workflow of the Human-Robot Production System: (1) the human provides task instructions (xt); (2) RAG retrieves relevant knowledge (rt); (3) the Transformer model generates an execution plan (gt); (4) the robot executes the task (gt); (5) performance evaluation assesses the task outcome; (6) a regret-based feedback loop fine-tunes the Transformer model for future tasks. Loss Function: the total loss function combines a supervised learning loss (\({\mathcal{L}_{{\text{ce}}}}\)), cross-entropy for initial training; a regret-based RL loss (\({\mathcal{L}_{{\text{RL}}}}\)), improving robot actions over time; and an operational error loss (\({\mathcal{L}_{{\text{op}}}}\)), penalizing physical execution errors: \({\mathcal{L}_{{\text{total}}}}={\mathcal{L}_{{\text{ce}}}}+\alpha {\mathcal{L}_{{\text{RL}}}}+\beta {\mathcal{L}_{{\text{op}}}}\), where \(\alpha\) and \(\beta\) balance the different objectives. This research model enables a human-robot production system that adapts dynamically, retrieves relevant knowledge, and improves through a regret-based learning mechanism. By fine-tuning a Transformer Neural Network, the system enhances collaboration efficiency, reduces operational errors, and continuously learns from human feedback. The RAG module employs a hybrid retrieval strategy combining: Hierarchical Indexing: domain-specific knowledge (e.g., assembly manuals, past task logs) is indexed using FAISS (Facebook AI Similarity Search) for sublinear-time retrieval18. Edge Caching: frequently accessed data (e.g., screw torque values) is cached on local edge devices to reduce latency (< 50 ms retrieval time). Query Pruning: irrelevant retrievals are filtered using a lightweight BERT classifier (precision: 92%, recall: 88%). Computational Overhead: the system processes 100 queries/sec on an NVIDIA Jetson AGX Orin (8 GB RAM), with the latency breakdown: retrieval, 60 ms ± 5 ms; generation, 120 ms ± 10 ms.
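A dense-retrieval step of the kind described can be approximated with plain cosine similarity over embedding vectors. The sketch below stands in for, rather than reproduces, the FAISS index used in the study; the documents and embedding values are invented placeholders.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, knowledge, k=2):
    """Dense-retrieval stand-in: rank knowledge entries by cosine
    similarity to the query embedding and return the top-k documents."""
    ranked = sorted(knowledge,
                    key=lambda e: cosine(query_vec, e["vec"]),
                    reverse=True)
    return [e["doc"] for e in ranked[:k]]

# Toy knowledge base with made-up 3-d embeddings.
knowledge = [
    {"doc": "torque spec: 50 Nm",   "vec": [0.9, 0.1, 0.0]},
    {"doc": "paint schedule",       "vec": [0.0, 0.2, 0.9]},
    {"doc": "screw fastening log",  "vec": [0.8, 0.3, 0.1]},
]
hits = retrieve([1.0, 0.2, 0.0], knowledge, k=2)
```

A production index replaces the linear scan with an approximate-nearest-neighbor structure, which is what gives FAISS its sublinear-time retrieval.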
Figure 1 shows the latency distribution of the RAG-TNN pipeline (system latency breakdown, pie chart). To formally define the mathematical formulations for Retrieval-Augmented Generation (RAG) and fine-tuning with Transformer Neural Networks (TNN) in Human-Robot Interaction (HRI), we break the problem into key components: the retrieval mechanism, the generation mechanism, the fine-tuning process, and reinforcement learning. Retrieval-Augmented Generation (RAG) Model. In a typical RAG system, the main idea is to retrieve relevant data and generate responses based on this data. The overall process can be broken into two stages: retrieval and generation. Input Representation: at each time step t, the system receives a human input that can be a command, gesture, or textual input (e.g., speech-to-text). This input is processed to query a knowledge base or historical interaction data. Let xt represent the input at time t. The system retrieves relevant context from a knowledge base K, which can be historical human-robot interactions or other relevant task data. The retrieval process is typically done by computing similarities between xt and stored knowledge. Retrieval Process: given an input xt, the system retrieves relevant data from the knowledge base K. We can formulate this retrieval step as \({r_t}={\text{Retrieve}}({x_t},K)\), where rt is the set of retrieved data based on the query xt, and Retrieve is a function that matches xt with the most relevant pieces of information in K, typically using similarity measures such as cosine similarity, BM25, or dense retrieval techniques (e.g., using embeddings). Generation Process: the generation phase utilizes a Transformer model to generate an appropriate response gt based on the input xt and the retrieved context rt. The model uses a sequence-to-sequence architecture, where the output is conditioned on both the current input and the context. Let \(\mathcal{T}\left( . \right)\) represent the Transformer-based generative model.
The output is generated as \({g_t}=\mathcal{T}({x_t},{r_t})\), where \(\mathcal{T}\) is a Transformer model (e.g., GPT, BERT, T5) and gt is the generated response, which could be either a physical action or a speech output depending on the robot's functionality. Fine-Tuning the Transformer with HRI Data. Fine-tuning the Transformer model is crucial to adapt the system to the specific task of HRI. This process typically involves supervised learning and reinforcement learning, which we address in two parts: supervised fine-tuning and reinforcement learning fine-tuning. Supervised Fine-Tuning: during the fine-tuning process, we adjust the parameters of the Transformer model to improve its ability to generate appropriate responses based on human feedback. Let the supervised data consist of pairs (xt, gt), where xt is the input at time step t and gt is the correct output (either a text response or a robot action). The model learns the relationship between inputs xt and outputs gt through cross-entropy loss. The loss function for supervised learning is given by \({\mathcal{L}_{{\text{ce}}}}= - \sum\nolimits_{{i=1}}^{N} {{y_i}\log {p_i}}\), where N is the number of tokens in the output sequence gt, yi is the true token at position i, and pi is the predicted probability of token i in the output sequence. The supervised loss encourages the model to generate the correct output based on the training data. Reinforcement Learning Fine-Tuning: in Reinforcement Learning (RL), the system learns from interactions with the human. The robot receives feedback after each action, and this feedback is used to update the model. Reward Signal: let R(xt, gt) represent the reward received after the robot performs action gt in response to input xt. This reward can be a function of human satisfaction, task completion accuracy, or robot performance. Policy Update: the policy is the function that maps the input xt to an action gt. The goal of reinforcement learning is to optimize the policy to maximize the cumulative reward over time.
The objective is to maximize the expected reward \(J(\theta )={\mathbb{E}}\left[ {\sum\nolimits_{{t=1}}^{T} {{\gamma ^t}R({x_t},{g_t})} } \right]\), where \(\theta\) represents the parameters of the Transformer model, T is the total number of timesteps, \(\gamma\) is the discount factor that determines how much future rewards are valued compared to immediate rewards, and \(R({x_t},{g_t})\) is the reward at time t. Policy Gradient: the gradient of the expected reward \(J(\theta )\) with respect to the model parameters \(\theta\) is computed using policy gradient methods: \({\nabla _\theta }J(\theta )={\mathbb{E}}\left[ {\sum\nolimits_{t} {{\nabla _\theta }\log {\pi _\theta }({g_t}|{x_t})\,R({x_t},{g_t})} } \right]\), where \({\pi _\theta }({g_t}|{x_t})\) is the probability distribution over actions gt given the input xt under policy \(\theta\). This gradient is used to update the model parameters, ensuring that the model learns to generate responses that maximize long-term rewards. Combined Loss Function: the overall loss function during fine-tuning combines both the cross-entropy loss (for supervised fine-tuning) and the reinforcement learning loss (for policy optimization): \(\mathcal{L}={\mathcal{L}_{{\text{ce}}}}+\alpha {\mathcal{L}_{{\text{RL}}}}\), where \(\alpha\) is a weighting factor to balance the supervised and RL losses. End-to-End Model Training. Pre-training: start with a pre-trained Transformer model (e.g., GPT-2 or T5) that has been trained on a large corpus of data. Supervised Fine-tuning: fine-tune the model using human-robot interaction data (xt, gt), where gt corresponds to the correct output for input xt. Reinforcement Learning: use interaction data to further fine-tune the model using reinforcement learning, where rewards are given based on task completion, human satisfaction, and robot performance. Summary of Mathematical Formulations. Retrieval: \({r_t}={\text{Retrieve}}({x_t},K)\). Generation: \({g_t}=\mathcal{T}({x_t},{r_t})\). Supervised Fine-tuning Loss: \({\mathcal{L}_{{\text{ce}}}}= - \sum\nolimits_{{i=1}}^{N} {{y_i}\log {p_i}}\). This approach allows the robot not only to generate appropriate responses but also to learn from feedback over time, resulting in a dynamic and adaptable HRI system. The regret minimization follows a sublinear bound under the assumption of Lipschitz continuity in the reward function19, where RT is the cumulative regret over T cycles, L is the Lipschitz constant, and λ is the learning rate.
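The policy-gradient update above can be demonstrated on a toy problem. The sketch below applies the REINFORCE estimator \({\nabla _\theta }\log {\pi _\theta }({g_t}|{x_t})\,R\) to a two-action softmax policy; the reward values are invented for illustration, and \(\gamma\) is omitted since the episodes are single-step.

```python
import math
import random

# Minimal REINFORCE sketch: a softmax policy over two discrete actions
# is nudged toward the action that earns the higher (made-up) reward.

random.seed(0)
theta = [0.0, 0.0]            # one logit per action
rewards = {0: 0.2, 1: 1.0}    # action 1 is the better choice
lr = 0.1

def policy(theta):
    """Softmax over logits -> action probabilities."""
    m = max(theta)
    es = [math.exp(t - m) for t in theta]
    s = sum(es)
    return [e / s for e in es]

for _ in range(500):
    probs = policy(theta)
    a = random.choices([0, 1], weights=probs)[0]  # sample an action
    r = rewards[a]
    # gradient of log pi(a) w.r.t. the logits: one-hot(a) - probs
    for i in range(2):
        grad = (1.0 if i == a else 0.0) - probs[i]
        theta[i] += lr * r * grad  # ascend the expected reward

probs = policy(theta)  # probability mass shifts toward action 1
```

In the full system, the same gradient would flow into the Transformer's parameters rather than two standalone logits.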
Empirical validation (Fig. 2) shows that the regret reduction follows O(\(\sqrt T\)) scaling. Regret vs. training cycles (T) for the robotic assembly task is shown in Fig. 2; the shaded region shows the 95% confidence interval over 10 runs (regret convergence plot). This study simulates a human-robot production system in a manufacturing environment where a robot performs assembly tasks while learning from human feedback. We use numerical analysis to evaluate the system's effectiveness, focusing on task efficiency, accuracy, and regret minimization over multiple production cycles. Problem Setup: Human-Robot Collaboration in an Assembly Line. Scenario: a human supervisor provides instructions to a robotic arm to assemble electronic circuit boards. The robot uses a Retrieval-Augmented Generation (RAG) Transformer model to retrieve past task knowledge and generate an execution plan. The robot's performance is evaluated based on assembly accuracy and efficiency, and the regret function measures the difference between the robot's actual performance and an optimal benchmark. Production Tasks: the robot is assigned three sequential tasks: Pick and Place Components (T1), Screw Fastening (T2), and Quality Inspection (T3). The goal is to improve task performance over time using regret-based reinforcement learning and fine-tuning with a Transformer Neural Network. The study simulated 500 production cycles across 5 robotic workstations, generating a dataset of 10,000 task executions (T1: 3,500; T2: 4,000; T3: 2,500) and human-robot interaction logs (voice commands, torque sensor readings, error reports). Ground truth labels: optimal execution times and error thresholds derived from the ISO 9283:2021 industrial robot performance standard. Data and Parameters. Initial Conditions: we define a dataset based on past human-robot collaboration experiences, as given in Table 1. Execution Time: time taken by the robot to complete the task. Error Rate: percentage of defective assemblies.
Human Corrections: number of times a human had to intervene. Regret Calculation: we define regret as the difference in performance between the robot's execution and an optimal benchmark: \({R_t}={w_1} \cdot ({\text{Time Deviation}})+{w_2} \cdot ({\text{Error Rate}})+{w_3} \cdot ({\text{Human Corrections}})\). Using weight coefficients w1 = 1.5, w2 = 2.0, and w3 = 3.0, the regrets are computed as shown in Table 2. Reinforcement Learning and Fine-Tuning. The Transformer model learns from past errors and adjusts its execution strategy over five production cycles, as presented in Table 3. Table 3 summarizes the per-cycle metrics, while Fig. 3 highlights the aggregate improvement across key parameters (performance metrics comparison). A paired t-test (Cycle 1 vs. Cycle 5) confirms significant improvements: Execution Time: t(4) = 8.2, p < 0.001; Error Rate: t(4) = 9.7, p < 0.001; Human Corrections: t(4) = 7.1, p < 0.001. Improvement Rates: Execution Time: 28.5% reduction (η2 = 0.89); Error Rate: 60.2% reduction (η2 = 0.92); Human Corrections: 79.7% reduction (η2 = 0.95). Metrics aligned with Industry 5.0 KPIs are given in Table 4. Vision-based QA achieved 98% precision in error detection (validated against human inspectors). Observations: Robot Execution Time Decreased: initial execution times of (7, 12, 6) seconds fell to final times of (5.1, 8.5, 4.2) seconds. Error Rate Improved: the error rate dropped from (12%, 18%, 10%) to (5%, 7%, 4%). Human Corrections Reduced: interventions dropped from (3, 5, 2) to (1, 1, 0). Regret Reduced: total regret dropped from 122.0 to 47.2 (a 61.3% improvement). Key Metrics: \({\text{Regret Reduction Rate}}=\frac{{{\text{Initial Regret}} - {\text{Final Regret}}}}{{{\text{Initial Regret}}}} \times 100\%\). The performance evaluation metrics and the corresponding numerical values are given in Table 5.
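The weighted regret formula and the regret-reduction rate above are simple enough to verify directly. The helper below reproduces the reported 61.3% reduction from the study's initial (122.0) and final (47.2) regret totals; the sample call to `regret` uses illustrative inputs, not a row from Table 2.

```python
def regret(time_dev, error_rate, corrections, w=(1.5, 2.0, 3.0)):
    """Weighted regret: R_t = w1*TimeDeviation + w2*ErrorRate + w3*Corrections,
    with the study's weight coefficients as defaults."""
    w1, w2, w3 = w
    return w1 * time_dev + w2 * error_rate + w3 * corrections

def reduction_rate(initial, final):
    """Regret Reduction Rate = (Initial - Final) / Initial * 100%."""
    return (initial - final) / initial * 100.0

# Illustrative regret for a task with 2 s time deviation, 12% error
# rate, and 3 human corrections:
sample = regret(2.0, 12.0, 3)       # 1.5*2 + 2*12 + 3*3 = 36.0

# Totals reported in the study: regret fell from 122.0 to 47.2.
rate = reduction_rate(122.0, 47.2)  # ~61.3%
```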
Inference: the RAG-based Transformer model successfully optimized execution over multiple cycles. Regret-based reinforcement learning reduced human intervention and improved efficiency, and the robot adapted to human instructions and learned optimal task execution over time. Key Takeaways: the integration of RAG and Transformer-based learning significantly improved robotic performance. The regret mechanism enabled fine-tuning, helping the robot minimize errors and reduce human dependency. This approach can be applied in real-world manufacturing to improve human-robot collaboration, reduce defects, and enhance production efficiency. The following figures visualize the key metrics of the numerical study. To show how the regret value decreases over production cycles, indicating improved robot performance, the regret reduction curve is shown in Fig. 4 (regret reduction over production cycles). Further, to illustrate how the robot's execution time for different tasks reduces as it learns, the execution time improvement is shown in Fig. 5 (execution time improvement for different tasks). Figure 6 depicts how the error rate declines as the model fine-tunes its actions (error rate reduction). Figure 7 shows how human interventions decrease, reflecting increased automation efficiency (human corrections reduction). A bar chart comparing initial and final values of key performance metrics, showing clear improvements, is given in Fig. 8 (overall performance summary chart). These results confirm that the RAG-based Transformer model with regret optimization significantly enhances efficiency and accuracy in the human-robot production system. Improvements were validated using Cycle 1 vs. Cycle 5 metrics (n = 500 cycles, df = 499): Execution time: t = 28.3, p < 0.001, Cohen's d = 1.2 (large effect); Error rate: t = 31.7, p < 0.001, Cohen's d = 1.5. Task type (T1/T2/T3) had no significant effect (F(2,497) = 1.1, p = 0.33).
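The paired t-statistics reported above follow the standard formula \(t=\bar{d}/({s_d}/\sqrt n)\) with df = n − 1. The sketch below computes it on illustrative before/after execution times; the values are invented for demonstration and are not the study's raw measurements.

```python
import math

def paired_t(before, after):
    """Paired t-statistic: mean of the pairwise differences divided by
    its standard error. Returns (t, degrees of freedom)."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n), n - 1

# Illustrative Cycle-1 vs Cycle-5 execution times (seconds), made up
# to show the computation rather than taken from the study's data.
cycle1 = [7.0, 12.0, 6.0, 9.0, 8.0]
cycle5 = [5.1, 8.5, 4.2, 6.3, 5.9]
t, df = paired_t(cycle1, cycle5)
```

In practice the p-value would then be looked up from the t-distribution with the returned degrees of freedom (e.g., via `scipy.stats`).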
Regret reduction followed a log-linear trend (R² = 0.89, p < 0.001). Intimacy was evaluated across cycles using: Trust Surveys: administered to 10 human operators after each cycle (Q: 'How reliable was the robot?'); Correction Counts: logged via the robot's error-handling API; Response Latency: timestamped between voice command onset and robot action initiation (Intel RealSense tracking). Results, given in Table 6, show that intimacy improved by 42% (Cycle 1: 2.1/5 trust, 3.2 corrections/task; Cycle 5: 4.5/5 trust, 0.7 corrections/task). The findings of this research have significant implications for industrial managers, production supervisors, and decision-makers seeking to implement AI-driven automation in human-robot collaborative environments20,21. The proposed Retrieval-Augmented Generation (RAG) and Transformer fine-tuned robotic system introduces new opportunities and challenges for optimizing efficiency, reducing errors, and enhancing adaptability in production settings. Enhanced Decision-Making and Productivity. Managers can leverage RAG-powered robotics to improve real-time decision-making by enabling robots to retrieve and generate relevant knowledge dynamically. This leads to: reduced production downtime through adaptive learning; improved task execution with minimal human intervention; and faster response to operational uncertainties in manufacturing workflows. Reduction in Operational Errors through Regret-Based Learning. The integration of regret-based learning allows robots to learn from past mistakes, continuously improving task accuracy. This results in: lower defect rates in manufacturing and assembly lines; optimized resource utilization, minimizing waste and inefficiencies; and improved compliance with safety and quality standards. Workforce Optimization and Human-Robot Collaboration. This study highlights how AI-driven robots can act as collaborative assistants rather than replacements for human workers.
Key benefits include: empowering employees with AI-enhanced decision support, reducing cognitive load; redesigning job roles to focus on supervision and quality assurance rather than repetitive tasks; and enhanced worker safety, as robots handle hazardous or high-precision tasks. Strategic Implementation for Industry 5.0. The proposed system aligns with Industry 5.0 and smart manufacturing principles, offering managers the opportunity to: integrate AI-driven robotics into existing production infrastructure; utilize data-driven insights for predictive maintenance and demand forecasting; and improve supply chain agility through AI-enhanced automation. For industrial managers, adopting RAG and Transformer-enhanced robotics presents a competitive advantage by improving efficiency, adaptability, and human-robot collaboration. By strategically implementing these technologies, organizations can transition towards more autonomous, intelligent, and scalable production systems, driving long-term growth in the era of AI-driven automation. While 'intimacy' is unconventional in HRI, we adopt it to encapsulate bidirectional trust and fluency metrics that mirror human-team dynamics. This aligns with recent work on socially attuned robotics (e.g.,22), where intimacy correlates with team productivity (+37% in joint tasks). All human-robot interaction logs are anonymized (k-anonymity with k = 3) before retrieval. A human override protocol halts the robot if regret exceeds a threshold (e.g., > 50% deviation from optimal). Retrieval diversity is enforced via maximum marginal relevance (MMR) scoring23. To transition from simulation to industrial implementation, we propose: Pilot Phase (6 months): deploy at [Industry Partner]'s electronics assembly line (20 workstations), with: Edge Computing Nodes: NVIDIA Jetson AGX Orin for low-latency RAG retrieval (< 100 ms); Safety Protocols: ISO 13849-1 compliant emergency stops and human-in-the-loop override triggers.
- Scalability Assessment: monitor performance degradation with > 5 collaborative robots per cell.
- Cost-Benefit Analysis: compare ROI against traditional automation (projected 23% reduction in defects, 17% labor cost savings).

Current work processes unimodal (voice) commands. Next-phase integration includes:
- Vision: RealSense D455 cameras for gesture recognition (OpenPose framework) and object detection (YOLOv8).
- Haptics: force-torque sensors (Robotiq FT-300) to detect human touch intent (e.g., guiding robot motions).
- Sensor Fusion: Transformer-based late fusion to combine modalities.

This research presented an innovative framework for enhancing Human-Robot Interaction (HRI) in production systems through Retrieval-Augmented Generation (RAG) and fine-tuned Transformer Neural Networks (TNN). By integrating dynamic knowledge retrieval and context-aware response generation, the proposed model significantly improves robotic adaptability, decision-making, and collaborative efficiency. One of the primary contributions was the introduction of regret-based learning, which allows robots to minimize decision-making errors over time by learning from past interactions. The analytical model developed in this study provides a comprehensive understanding of the interplay between RAG, Transformers, and regret minimization, forming a robust foundation for adaptive robotic systems. The numerical study demonstrates substantial enhancements in task accuracy, error reduction, and human intervention minimization, validating the effectiveness of the proposed approach. However, challenges remain, including the scalability of RAG systems, personalized fine-tuning of Transformer models, and ethical considerations in regret-based learning. Future work should focus on real-world deployments, multimodal integration, and developing ethical frameworks to guide AI-driven robotics in safety-critical applications.
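The planned Transformer-based late fusion is future work and unspecified. As a minimal stand-in, late fusion can be illustrated by encoding each modality separately and mixing the resulting embeddings with softmax gate weights; in a real system the gates and encoders would be learned end-to-end, and all vectors and scores below are invented for the example:

```python
import math

def softmax(xs):
    """Numerically stable softmax."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def late_fuse(modality_feats, gate_scores):
    """Late fusion: each modality is encoded separately, then the
    embeddings are combined as a convex mixture whose weights come
    from per-modality gate scores (an attention-style pooling)."""
    weights = softmax(gate_scores)
    dim = len(modality_feats[0])
    fused = [0.0] * dim
    for w, feat in zip(weights, modality_feats):
        for d in range(dim):
            fused[d] += w * feat[d]
    return fused

voice  = [0.2, 0.9]   # e.g., intent embedding from speech
vision = [0.8, 0.1]   # e.g., gesture embedding
haptic = [0.5, 0.5]   # e.g., force-sensor embedding
fused = late_fuse([voice, vision, haptic], gate_scores=[2.0, 1.0, 0.1])
print([round(x, 3) for x in fused])
```

A Transformer variant would replace the scalar gates with learned attention over the modality embeddings, but the separately-encode-then-combine structure is the same.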
Overall, this study contributes to advancing intelligent, autonomous, and human-centered robotic systems, offering a pathway towards more efficient, flexible, and responsive production environments. The findings underscore the potential of integrating advanced AI techniques in robotics, driving the next generation of adaptive human-robot collaboration.

All data generated or analysed during this study are included in this published article.

References:
1. Sheu, J.-B. From human-robot interaction to human-robot relationship management at the flip from Industry 4.0 to 5.0 in operations management. Ref. Module Social Sci. https://doi.org/10.1016/B978-0-443-28993-4.00007-X (2024).
2. Coronado, E. et al. Evaluating quality in human-robot interaction: A systematic search and classification of performance and human-centered factors, measures and metrics towards an Industry 5.0. J. Manuf. Syst. 63, 392–410 (2022).
3. Dhanda, M., Rogers, B. A., Hall, S., Dekoninck, E. & Dhokia, V. Reviewing human-robot collaboration in manufacturing: Opportunities and challenges in the context of Industry 5.0. Robot. Comput. Integr. Manuf. 93, 102937 (2025).
4. Nakamura, K., Tian, R. & Bajcsy, A. A general calibrated regret metric for detecting and mitigating human-robot interaction failures. arXiv preprint arXiv:2403.04745 (2024).
5. Muvvala, K., Amorese, P. & Lahijanian, M. Let's collaborate: Regret-based reactive synthesis for robotic manipulation. arXiv preprint arXiv:2203.06861 (2022).
6. Jiang, L. & Wang, Y. Risk-aware decision-making in human-multi-robot collaborative search: A regret theory approach. J. Intell. Robot. Syst. 105(2) (2022).
7. Wang, Z., Li, Z., Jiang, Z., Tu, D. & Shi, W. Crafting personalized agents through retrieval-augmented generation on editable memory graphs. In Proc. 2024 Conference on Empirical Methods in Natural Language Processing, 4891–4906 (Association for Computational Linguistics, 2024).
8. Sobrín-Hidalgo, D., González-Santamarta, M. A., Guerrero-Higueras, Á. M., Rodríguez-Lera, F. J. & Matellán-Olivera, V. Explaining autonomy: Enhancing human-robot interaction through explanation generation with large language models. arXiv preprint arXiv:2402.04206 (2024).
9. Yuan, G. et al. Human-robot collaborative disassembly in Industry 5.0: A systematic literature review and future research agenda. J. Manuf. Syst. 79, 199–216 (2025).
10. Vaswani, A. et al. Attention is all you need. Adv. Neural Inf. Process. Syst. 30, 5998–6008 (2017).
11. Brown, T. B. et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165 (2020).
12. Ahn, S., Kim, J.-H., Heo, J. & Ahn, S.-H. Human-robot and robot-robot sound interaction using a 3-dimensional acoustic ranging (3DAR) in audible and inaudible frequency. Robot. Comput. Integr. Manuf. 94, 102970 (2025).
13. Zhang, Z., Chai, W. & Wang, J. Mani-GPT: A generative model for interactive robotic manipulation. arXiv preprint arXiv:2308.01555 (2023).
14. Keshvarparast, A. et al. Ergonomic design of human-robot collaborative workstation in the era of Industry 5.0. Comput. Ind. Eng. 198, 110729 (2024).
15. Ohueri, C. C., Masrom, M. A. N. & Noguchi, M. Human-robot collaboration for building deconstruction in the context of Construction 5.0. Autom. Constr. 167, 105723 (2024).
16. Devlin, J., Chang, M. W., Lee, K. & Toutanova, K. BERT: Pre-training of deep bidirectional Transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018).
17. Radford, A. et al. Language models are unsupervised multitask learners. OpenAI Blog 1(8), 9 (2019).
18. Johnson, J., Douze, M. & Jégou, H. Billion-scale similarity search with FAISS. IEEE Trans. Pattern Anal. Mach. Intell. 43(9), 3009–3024. https://doi.org/10.1109/TPAMI.2021.3056764 (2021).
19. Cesa-Bianchi, N. & Lugosi, G. Prediction, Learning, and Games (Cambridge University Press, 2006).
20. Fazlollahtabar, H. Optimizing robotic manufacturing in Industry 4.0: A hybrid fuzzy neural Bayesian belief networks. Spectr. Mech. Eng. Oper. Res. 2(1), 191–203. https://doi.org/10.31181/smeor21202543 (2025).
21. Babaeimorad, S., Fattahi, P., Fazlollahtabar, H. & Shafiee, M. An integrated optimization of production and preventive maintenance scheduling in Industry 4.0. Facta Universitatis, Mechanical Engineering, 711–720 (2024).
22. Lee, J. D. & See, K. A. Socially attuned robotics: Measuring intimacy in human-robot teams. IEEE Trans. Human-Machine Syst. 53(1), 12–25. https://doi.org/10.1109/THMS.2022.3216780 (2023).
23. Carbonell, J. & Goldstein, J. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In Proc. 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 335–336. https://doi.org/10.1145/290941.291025 (1998).

There is no funding for this research.

Author affiliation: Department of Industrial Engineering, School of Engineering, Damghan University, Damghan, Iran (Hamed Fazlollahtabar).

Author contributions: H. Fazlollahtabar (first author) did the conception and design of the work, the acquisition, numerical analysis and illustration, and analysis, the creation of new software used in the work, and drafted the work.

Correspondence to Hamed Fazlollahtabar. The authors declare no competing interests. Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access: This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

Citation: Fazlollahtabar, H. Human-robot interaction using retrieval-augmented generation and fine-tuning with transformer neural networks in Industry 5.0. Sci Rep 15, 29233 (2025). https://doi.org/10.1038/s41598-025-12742-9. Received: 30 May 2025. Accepted: 18 July 2025. Published: 10 August 2025.

Scientific Reports (Sci Rep) ISSN 2045-2322 (online). © 2026 Springer Nature Limited.
| Bats and Dolphins Inspire a New Single-Sensor 3D-Positional Microphone for … | https://www.hackster.io/news/bats-and-d… | 1 | Jan 03, 2026 00:01 | active | |
Bats and Dolphins Inspire a New Single-Sensor 3D-Positional Microphone for Human-Robot Interaction - Hackster.io
Description: Spinning tube delivers what would normally take a multi-microphone array, and could help improve robot operations in industry and more. Content:
Researchers from Seoul National University's College of Engineering have announced the development of what they say is the world's first 3D microphone built around a single sensor — yet capable of estimating the position of a sound's source like a multi-sensor array. "Previously, determining positions using sound required multiple sensors or complex calculations," explains lead author Semin Ahn, a doctoral candidate at the university. "Developing a 3D sensor capable of accurately locating sound sources with just a rotating single microphone opens new avenues in acoustic sensing technology." The sensing system is inspired by the way bats and dolphins use echolocation to determine the whereabouts of objects and the sources of sound, in which they "see space with ears." Dubbed "3D acoustic perception technology," the three-dimensional acoustic ranging (3DAR) system uses a single microphone sensor positioned in a hollow tube cut with rectangular slots — serving, the researchers explain, as a hardware-based phase cancellation mechanism. By rotating the microphone and processing the incoming data, it's possible to locate the source of a sound in 3D space. The team's work goes beyond just locating a sound, though: the researchers have demonstrated how the 3DAR system can also be used to implement a sound-based human-robot interaction system capable of operating even in noisy environments — which, they say, could be applied to everything from industrial robotics, where it can provide real-time tracking of user position, to search-and-rescue operations. In real-world testing on a quadrupedal robot platform, the system showed over 90 percent accuracy in human-robot interaction tasks and 99 percent accuracy in robot-robot interaction tasks. For multiple sound sources, the tracking accuracy reached 94 percent — even in noisy environments, the researchers say.
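The published signal processing is more involved (the slotted tube provides hardware-based phase cancellation), but the core idea, that a single rotating directional sensor can localize a source by scanning for its peak response, can be sketched with an assumed cardioid-like gain pattern. The pattern, angles, and step count below are invented for illustration and are not the 3DAR system's actual response:

```python
import math

def received_amplitude(mic_angle, source_angle):
    """Toy directional response: sensitivity depends on the angle
    between the microphone's aperture and the source. This
    cardioid-like pattern is an assumption, not measured data."""
    diff = mic_angle - source_angle
    return 0.5 * (1 + math.cos(diff))

def estimate_source_angle(samples):
    """Pick the rotation angle at which received amplitude peaked."""
    return max(samples, key=lambda s: s[1])[0]

source = math.radians(130)           # unknown to the estimator
n_steps = 360
samples = []
for k in range(n_steps):
    ang = 2 * math.pi * k / n_steps  # one full rotation of the mic
    samples.append((ang, received_amplitude(ang, source)))

est = math.degrees(estimate_source_angle(samples))
print(round(est))  # → 130
```

A real system would use phase and time-of-arrival information rather than amplitude alone, and would resolve elevation as well as azimuth, but the scan-for-the-peak principle is the same.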
The team's work has been published in the journal Robotics and Computer-Integrated Manufacturing under closed-access terms. Main article image courtesy of the Seoul National University College of Engineering. Hackster.io, an Avnet Community © 2026
| Didi invests in Anywit Robotics to push lifelike human-robot interaction | https://kr-asia.com/didi-invests-in-any… | 1 | Jan 03, 2026 00:01 | active | |
Didi invests in Anywit Robotics to push lifelike human-robot interaction
URL: https://kr-asia.com/didi-invests-in-anywit-robotics-to-push-lifelike-human-robot-interaction Description: The deal signals growing investor interest in emotionally responsive machines. Content:
Written by 36Kr English. Published on 3 Dec 2025. Anywit Robotics has raised an eight-figure RMB sum in a pre-Series A funding round from several industry investors, including Didi's corporate venture capital arm. The capital will be used to refine its standardized expressive head products and upgrade its emotional interaction models. Winsoul Capital, the lead investor in Anywit's angel round, served as the financial advisor for the transaction. Founded in December 2023, Anywit develops multimodal interactive robots designed to emulate the vitality of humans. The company has released a head-and-face component for humanoid robots that features 34 degrees of freedom. According to the company, the system delivers embodied expressions comparable to those of humans, including eye contact and audio-lip synchronization. The facial interaction component integrates a multimodal large model, a proprietary small model for emotion generation, and a motion planning system capable of handling multiple intents. This setup is intended to support interaction experiences that feel more natural to users. Cao Rongyun, founder and CEO of Anywit, told 36Kr that the company leads the market in the segments where it specializes. He added that its technology benchmarks globally against Ameca, a recognized industry leader. Anywit is also advancing its commercialization efforts. It has introduced interactive robots equipped with expressive head-and-face components for use in education and research, marketing reception, and entertainment. Anni, a robot developed by Anywit, appeared at this year's World Robot Conference and the 27th China Hi-Tech Fair. The company has delivered preliminary units of its educational robots, established a standardized product line, and deployed robotic teachers in primary school classrooms.
Looking ahead, Cao told 36Kr that Anywit plans to expand its product exploration, with mass shipments of standardized robotic head-and-face components targeted for 2026. Drawing a parallel to the role graphical user interfaces played in bringing computers into homes, and how the iPhone's touch screen and app ecosystem ushered in the mobile internet era, he believes mature facial expression interaction technology will be a critical breakthrough for broader robot adoption. Anywit's team includes graduates from the University of Science and Technology of China. The group has nearly a decade of research experience in human-machine interaction and has earned awards in several international robotics competitions. KrASIA Connection features translated and adapted content that was originally published by 36Kr. This article was written by Qiu Xiaofen for 36Kr.
| To Feel and to Act : Exploring Motor and Affective … | https://theses.hal.science/tel-05423779… | 1 | Jan 03, 2026 00:01 | active | |
To Feel and to Act: Exploring Motor and Affective Processes in Human and Human-Robot Interaction - TEL - Thèses en ligne
URL: https://theses.hal.science/tel-05423779v1 Description: Humans experience emotions continuously, and these emotions shape perception, decision-making, and the ways we interact with both our environment and with others. Despite extensive advances in affective neuroscience, relatively little is known about how affect manifests in observable motor behavior. The overall aim of this thesis was to examine how affect modulates spontaneous movement, particularly in the context of human interaction and Human-Robot Interaction (HRI). Study 1 introduces a methodological contribution by applying motion data analysis to define kinematic parameters that identify mobility and stability challenges in early-stage Parkinson's disease, offering advantages over traditional chronometry-based assessments. In Study 2, we developed a spectral analysis technique to characterize spontaneous human oscillations (i.e., sway). This method facilitated the assessment of movement emergence and inhibition, reflecting affective changes and engagement during HRI. An experimental study (Study 2) compared human sway in interactions with a small humanoid, a tall humanoid, and a small non-humanoid robot. The results indicated that the small non-humanoid robot elicited more spontaneous movement, suggesting that robot morphology influences human motor behavior. In Study 3, we explored the role of emotional context in HRI. Findings revealed that positive contexts increased the power of spontaneous oscillations, whereas negative contexts suppressed movement. In Study 4, we investigated whether interpersonal motor synchronization is related to affective compatibility between two individuals. Results demonstrated that dyads with congruent affective states (i.e., both experiencing either positive or negative states) maintained synchronization longer than incongruent pairs.
Taken together, these findings provide empirical evidence that emotions influence motor control. Across both human interaction and HRI, affective changes modulated not only the intensity of spontaneous movement but also the time spent on interpersonal synchronization. Based on this work, we propose a theoretical model of affective motor behavior that describes the influence of affect on motor processes. More broadly, this thesis proposes a theoretical framework in which affective states and movement patterns continuously shape each other through dynamic feedback loops across diverse interactive contexts. Content:
Humans experience emotions continuously, and these emotions shape perception, decision-making, and the ways we interact with both our environment and with others. Despite extensive advances in affective neuroscience, relatively little is known about how affect manifests in observable motor behavior. The overall aim of this thesis was to examine how affect modulates spontaneous movement, particularly in the context of Human interaction and Human-Robot Interaction (HRI). Study 1 introduces a methodological contribution by applying motion data analysis to define kinematic parameters that identify mobility and stability challenges in early-stage Parkinson's disease, offering advantages over traditional chronometry-based assessments. In Study 2, we developed a spectral analysis technique to characterize spontaneous human oscillations (i.e., sway). This method facilitated the assessment of movement emergence and inhibition, reflecting affective changes and engagement during HRI. An experimental study (Study 2) compared human sway in interactions with a small humanoid, a tall humanoid, and a small non-humanoid robot. The results indicated that the small non-humanoid robot elicited more spontaneous movement, suggesting that robot morphology influences human motor behavior. In Study 3, we explored the role of emotional context in HRI. Findings revealed that positive contexts increased the power of spontaneous oscillations, whereas negative contexts suppressed movement. In Study 4, we investigated whether interpersonal motor synchronization is related to affective compatibility between two individuals. Results demonstrated that dyads with congruent affective states (i.e., both experiencing either positive or negative states) maintained synchronization longer than incongruent pairs. Taken together, these findings provide empirical evidence that emotions influence motor control. 
Across both human interaction and HRI, affective changes modulated not only the intensity of spontaneous movement but also the time spent on interpersonal synchronization. Based on this work, we propose a theoretical model of affective motor behavior that describes the influence of affect on motor processes. More broadly, this thesis proposes a theoretical framework in which affective states and movement patterns continuously shape each other through dynamic feedback loops across diverse interactive contexts.
Contact: https://theses.hal.science/tel-05423779. Submitted: Thursday, 18 December 2025, 13:49. Last modified: Friday, 19 December 2025, 03:12.
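The thesis's spectral technique is not reproduced in this summary. Purely to illustrate the quantity it analyzes, the "power of spontaneous oscillations", here is a stdlib-only sketch that computes the band power of a synthetic sway signal from a naive DFT periodogram; the sampling rate, oscillation frequency, and bands are invented for the example:

```python
import cmath
import math

def dft_power(signal):
    """Naive DFT periodogram: power at each non-negative frequency bin."""
    n = len(signal)
    powers = []
    for k in range(n // 2 + 1):
        s = sum(x * cmath.exp(-2j * math.pi * k * t / n)
                for t, x in enumerate(signal))
        powers.append(abs(s) ** 2 / n)
    return powers

def band_power(signal, fs, f_lo, f_hi):
    """Total spectral power between f_lo and f_hi Hz (fs = sampling rate)."""
    n = len(signal)
    powers = dft_power(signal)
    return sum(p for k, p in enumerate(powers)
               if f_lo <= k * fs / n <= f_hi)

# Synthetic sway: a slow 0.3 Hz oscillation sampled at 10 Hz for 20 s.
fs = 10.0
sway = [math.sin(2 * math.pi * 0.3 * t / fs) for t in range(200)]
low = band_power(sway, fs, 0.1, 0.5)    # band containing the oscillation
high = band_power(sway, fs, 1.0, 5.0)   # band above it
print(low > 10 * high)  # → True
```

Comparing band powers like this is one generic way an increase or suppression of spontaneous oscillation would show up in a spectrum; a production analysis would use an FFT with windowing rather than a naive DFT.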
| Human-robot interaction design retreat | https://robohub.org/human-robot-interac… | 0 | Jan 03, 2026 00:01 | active | |
Human-robot interaction design retreat
URL: https://robohub.org/human-robot-interaction-design-retreat/ Content:
| TUM - Tutor for Human-Robot Interaction seminar | https://portal.mytum.de/schwarzesbrett/… | 1 | Jan 03, 2026 00:00 | active | |
TUM - Tutor for Human-Robot Interaction seminar
URL: https://portal.mytum.de/schwarzesbrett/hiwi_stellen/NewsArticle_20250716_111111 Description: We are seeking a tutor (m/f/x) for our interdisciplinary seminar "Human-Robot Interaction" for the Winter Term 2025/26. Content:
| Four Russian combat robots arrive in Donbass, says ex-space chief … | https://tass.com/defense/1570429 | 1 | Jan 02, 2026 16:00 | active | |
Four Russian combat robots arrive in Donbass, says ex-space chief - Military & Defense - TASS
URL: https://tass.com/defense/1570429 Description: The head of the group of military advisers announced on January 15 that Marker combat robots would be tested in the zone of the special military operation in Ukraine Content:
MOSCOW, February 2. /TASS/. Four Marker robotic platforms have arrived in Donbass and specialists have started testing algorithms of conducting warfare within a group of combat robots, Dmitry Rogozin, Russian ex-space boss and head of a group of military advisers and the Tsar Wolves research and technical center, said on his Telegram channel on Thursday. "The first four Marker robots have arrived in the region strictly on schedule. We begin downloading target images and testing algorithms of warfare within a unit of combat robots and installing powerful anti-tank armament," Rogozin said. The head of the group of military advisers announced on January 15 that Marker combat robots would be tested in the zone of the special military operation in Ukraine. The Marker robotic platform was engineered under a joint project of the National Center for the Development of Technologies and Basic Robotic Elements within the Advanced Research Fund and the Android Technics Research and Production Association. The robotic platform is designed to create future warfare models.
| OpenMind Wants To Be The Android Operating System Of Humanoid … | https://www.ponoko.com/blog/ponoko/open… | 1 | Jan 02, 2026 16:00 | active | |
OpenMind Wants To Be The Android Operating System Of Humanoid Robots
Description: OpenMind's ambition to become the "Android" of humanoid robots signals a notable shift in robotics, one where software and human-like interaction take center stage. Their focus on open, hardware-agnostic systems that allow machines to learn and collaborate quickly is crucial for real-world adaptability. While the idea of robots sharing knowledge instantly is exciting, it's also a reminder that the technology must remain practical and reliable. OpenMind's approach of rapid iteration shows promise, and it's worth watching how these systems evolve as they enter homes and daily life. Content:
OpenMind's ambition to become the "Android" of humanoid robots signals a notable shift in robotics, one where software and human-like interaction take center stage. Their focus on open, hardware-agnostic systems that allow machines to learn and collaborate quickly is crucial for real-world adaptability. While the idea of robots sharing knowledge instantly is exciting, it's also a reminder that the technology must remain practical and reliable. OpenMind's approach of rapid iteration shows promise, and it's worth watching how these systems evolve as they enter homes and daily life. Read the full article here The Oak Ridge National Laboratory's breakthrough in 3D-printed polymer composite forms for nuclear concrete structures is a game-changer for infrastructure. As an engineer, I value how this innovation cuts weeks to days in production, while maintaining precision and strength, no small feat when dealing with nuclear-grade materials. It's refreshing to see additive manufacturing applied to such a critical and traditionally slow sector, proving that practical, scalable solutions can come from bold collaboration. This approach promises to reduce costs and risks, accelerating nuclear projects that America urgently needs for its energy future. Read the full article here Rebuilding wildfire-ravaged California homes with AI-powered microfactories shows how technology can meet urgent, real-world needs. ABB and Cosmic Buildings' robotic microfactories, using digital twins and AI, accelerate construction while ensuring precision and quality, a crucial balance when lives and communities depend on speed and safety. This use of AI and robotics undoubtedly cuts costs and timelines without sacrificing code compliance or resilience, something which is desperately needed. This project is a clear example of smart engineering tackling complex challenges with practical solutions.
Read the full article here Samsung Electronics has launched InnoX Lab to push the boundaries of AI, robotics, and digital twins, and it’s hard not to appreciate the focus on practical innovation rather than hype. While many chase flashy tech, Samsung is assembling experts to solve real, complex problems across manufacturing and logistics, which is exactly the kind of measured progress our industry needs. I respect their approach because it balances bold ambition with clear, strategic execution, something often missing in today’s fast-moving AI race. Keep an eye on this lab, as it might just shape the future of hardware engineering in ways that matter. Read the full article here Siemens is raising the bar in industrial robotics with the new Sinumerik Machine Tool Robot, which blends the precision of machine tools with the flexibility of robots. This leap in accuracy and productivity isn’t just incremental, it’s a game changer for sectors like aerospace and automotive that demand tough machining and high precision. What I find notable is Siemens’ use of digital twins and their Sinumerik One CNC to simulate workflows before production, showing a practical, efficient mindset rather than just flashy tech. It’s an innovation worth watching for engineers serious about precision and productivity. Read the full article here Lithium batteries power much of today’s tech, but their supply chain is fragile and geopolitically risky, which often gets overlooked. Researchers at NJIT used AI to discover five promising multivalent-ion battery materials that could bypass lithium’s limits by using more abundant elements like magnesium and zinc. While this breakthrough shows real promise, I believe the bigger challenge isn’t just swapping lithium but rethinking battery design to prioritize fast charging and realistic range over sheer capacity. That practical mindset may ultimately shape the future of electrification more than chasing the highest energy density. 
Read the full article here Recycling electronics has long been a stubborn challenge, but researchers from Maryland, Georgia Tech, and Notre Dame are changing that with DissolvPCB, a 3D printing method using water-soluble PVA and liquid metal traces. This approach lets you dissolve and reclaim circuit components quickly, bypassing the complex industrial recycling of traditional FR-4 boards. Beyond sustainability, it’s practical for prototyping spaces with limited resources. While it won’t replace high-volume manufacturing overnight, DissolvPCB highlights how additive manufacturing and clever material choices can reshape electronics design. Read the full article here Japan just announced its first fully homegrown superconducting quantum computer, and it’s a clear sign of real engineering independence. What impresses me isn’t just the hardware, crafted entirely domestically, but also the open-source software that makes the system accessible. While quantum tech still faces many challenges, Japan’s integrated approach shows how national focus can reduce reliance on foreign supply chains and boost innovation. It’s a strong reminder that cutting-edge progress doesn’t need sprawling governments or imports, but disciplined expertise and collaboration. If you follow hardware engineering, this is definitely a milestone worth watching closely. Read the full article here Australia’s RMIT team just cracked a major hurdle in 3D-printed titanium alloys by cutting costs nearly a third while boosting strength and ductility. They’ve replaced pricey vanadium with cheaper elements and solved the troublesome uneven grain structure common in legacy alloys like Ti-6Al-4V. What stands out is their clear, practical framework for designing alloys optimized for additive manufacturing, a step that could finally push 3D printing into broader aerospace and medical use. This isn’t just incremental improvement; it’s a genuine leap forward. 
It reminds us that smart material innovation still drives real progress in hardware engineering.

Read the full article here

China's latest organic solar cell breakthrough tackles a stubborn bottleneck in interfacial layers by combining organic and inorganic materials for a "dual-component synergy." This approach improves conductivity, reduces defects, and curbs charge recombination, pushing certified efficiency to 20.8%, a new record. What's compelling here is the focus on flexible, lightweight, and eco-friendly power sources for wearable tech, aerospace, and harsh environments. It's a clear example of how smart materials engineering can drive practical advances in sustainable energy, which is crucial as we seek scalable solutions beyond traditional silicon-based solar cells.

Read the full article here

Not all e-readers are created equal, and the ZEReader proves that less can be more. Born from an engineering thesis, this open-source, microcontroller-based device strips away the bloat of Android or Linux systems (and Amazon's ecosystem), focusing solely on reading. Built on the lightweight Zephyr RTOS, it supports basic EPUB functions with room to grow. As someone who appreciates efficiency and user control, I see real promise here: a device tailored for purpose, free from corporate strings, and designed to last. It's a reminder that engineering elegance can beat flashy features.

Read the full article here

In the second episode of the Ponoko Podcast, we dive into the world of engineering, prototyping, and rapid product development, featuring conversations with some of…
Images (1):
| In Video: Russian Marker Combat Robots Arrive In Donbass | https://southfront.org/in-video-russian… | 0 | Jan 02, 2026 16:00 | active | |
In Video: Russian Marker Combat Robots Arrive In DonbassURL: https://southfront.org/in-video-russian-marker-combat-robots-arrive-in-donbass/ Description: Four Marker combat robots have arrived in the Donbass region, Dmitry Rogozin, former Director General of Roscosmos and the head... Content:
| Client Challenge | https://www.ft.com/content/4635f501-f91… | 1 | Jan 02, 2026 16:00 | active | |
Client ChallengeURL: https://www.ft.com/content/4635f501-f915-4ea4-8235-0017a0137a94 Content:
Images (1):
| Download Unfair War: Zombie Robots 1.0 for Android | https://trashbox.ru/link/unfair-war-zom… | 10 | Jan 02, 2026 16:00 | active | |
Download Unfair War: Zombie Robots 1.0 for AndroidURL: https://trashbox.ru/link/unfair-war-zombie-robots-android Description: Unfair War: Zombie Robots is a tense survival shooter set in a post-apocalyptic world overrun by malevolent machines. Content:
Unfair War: Zombie Robots is a tense survival shooter set in a post-apocalyptic world overrun by malevolent machines. It plunges you into a desperate fight for survival against relentless robot enemies. The Arena mode tests your endurance with endless waves of mechanical opponents, from flamethrower orbs to sniper robots and colossal bosses, while the Room mode challenges you to clear enemy-infested rooms in systematic combat operations. Your survival depends on strategic positioning, careful ammunition management, and effective use of cover during intense firefights. Build an arsenal from a varied set of weapons, including AK-47s, miniguns, and an apocalyptic scythe, and team up with specialized companions. Choose your partners wisely: whether a grenadier, a sniper, or a ninja, each has unique abilities that can turn the tide of battle. Discover hidden rooms containing weapon upgrades, and collect gold to unlock powerful enhancements and character skins that boost your combat effectiveness. Key features:
Images (10):
| Russian Military Robots Will Be Able to Distinguish Hidden Targets - … | https://fr.sputniknews.africa/20230820/… | 7 | Jan 02, 2026 16:00 | active | |
Russian military robots will be able to distinguish hidden targets - 20.08.2023, Sputnik AfriqueDescription: It will soon be impossible to hide from Russian robots. Indeed, a technology allowing robots to distinguish soldiers and military equipment... 20.08.2023, Sputnik Afrique Content: Images (7):
| Critical Hits - Video Games, Anime, Film & TV, and Reviews | https://criticalhits.com.br/dicas/war-r… | 1 | Jan 02, 2026 16:00 | active | |
Critical Hits - Video Games, Anime, Film & TV, and ReviewsURL: https://criticalhits.com.br/dicas/war-robots-codigos-para-itens-gratis-julho-2025/ Description: Critical Hits is a site with daily news about games, anime, film, and TV, plus reviews, videos, articles, and lists Content:
Critical Hits is a site with daily news about the world of games, anime, film, and TV, along with articles, opinion pieces, and reviews. © Critical Hits - Registered Trademark
Images (1):
| Robots can save humans time by doing menial tasks in … | https://tass.com/science/1740881 | 1 | Jan 02, 2026 16:00 | active | |
Robots can save humans time by doing menial tasks in open space - Science & Space - TASSURL: https://tass.com/science/1740881 Description: According to Yevgeny Dudorov, in the future, robots will be able to complete the tasks that humans never perform outside space stations Content:
MOSCOW, February 2. /TASS/. Robots can be used to greatly reduce the amount of time needed to perform extravehicular space activities, Yevgeny Dudorov, executive director of the Android Technics research and production association, said in an interview with TASS. "We can use a robot to prepare the station; once that is done, a cosmonaut comes to perform a specific mission and after that, a robot, controlled by a human at the station or on Earth, cleans up the workplace. Besides, when there is an urgent need to perform a mission but a cosmonaut is unable to go outside for a spacewalk, it’s logical to use robots. By using robots, we can significantly reduce the time an operation requires," Dudorov said. According to him, in the future, robots will be able to complete the tasks that humans never perform outside space stations. "This could be satellite repairs. Today, there are a number of satellites that need components replaced and other minor repairs. These satellites aren’t functional in their present condition. Meanwhile, a robot can be used to intercept the faulty device and replace some of its parts so it can continue to operate, instead of launching a new satellite and leaving the old one floating in space," the Android Technics executive explained. In order to do so, there is a need to develop fine motor skills in advanced robots, he added.
Images (1):
| Meta has started to show its hand in robotics. … | https://www.xataka.com/robotica-e-ia/me… | 1 | Jan 02, 2026 16:00 | active | |
Meta has started to show its hand in robotics. What it wants is clear: to be the Android of robots through softwareDescription: There are many reasons to think humanoid robots will play a key role in the future. They could take on household chores, staff a hotel... Content:
Javier Marquez

There are many reasons to think humanoid robots will play a key role in the future. They could take on household chores, staff a hotel, or handle hazardous jobs. Not everyone believes it, though: iRobot's co-founder insists all of this is a fantasy. But if that future does arrive, the question is inevitable: which companies and countries will set the pace? If we had to name names, Tesla (United States) and Unitree (China) would be among the candidates, but many other companies are in the race. Meta, known for its social media empire, wants to break into this field with a different strategy. Its bet rests not only on hardware but on something that could make the difference: software.

Rumors about Meta's plans in humanoid robotics began early this year, and the company eventually confirmed them. We recently learned more details from Andrew Bosworth, its CTO, who hinted in an interview with The Verge that the play looks a lot like what Google did with Android. Although Bosworth avoids the direct comparison, Meta's plan follows that script. It will keep developing its own humanoid robots, but the differentiating move is putting its software in other manufacturers' hands through licenses. The condition: meeting specific requirements, just as with Android. The executive makes clear why this can work. In his view, the real obstacle in humanoid robotics is not hardware but software, especially what he calls "dexterous manipulation." Robots can already move quickly and even do somersaults, but they still fail at something as basic as holding a glass of water without spilling it. One of the central pieces of Meta's strategy is simulation.

Its superintelligence lab is working on what it calls a "world model," a model of the world capable of recreating, in a digital environment, how a human hand should move and react. The goal is to train robots in virtual scenarios until they acquire enough dexterity to manipulate objects precisely. The company has also assembled a singular team, with figures such as Marc Whitten, formerly an executive at Cruise, and Sangbae Kim, considered one of the leading names in advanced robotics, joined by long-serving internal staff: a mix of outside specialists and company veterans that reflects the ambition of the bet. The arrival of humanoids will be neither immediate nor universal. An analysis by Bank of America details a three-phase rollout; we will most likely see them first in controlled environments before they reach our homes at scale. If Meta's plan succeeds, the company will sell its own robots and also license its platform, so we could see Meta's technology behind robots from a wide range of manufacturers. From there, plausible though unconfirmed scenarios open up. One is that Meta complements the licenses with cloud services for training or maintenance, or that an ecosystem of downloadable "skills," like apps, emerges. It is a reasonable hypothesis: the more robots use the system, the more data the company would gather to improve the product, creating a virtuous circle competitors would find hard to replicate. We will have to see whether it ultimately moves in that direction. Google's strategy with Android has worked. Images | Apple ('Sunny' series on Apple TV+) | Dima Solomin | Mika Baumeister
Images (1):
| "Discover the future of Android XR": battle of … | https://www.phonandroid.com/decouvrez-l… | 1 | Jan 02, 2026 16:00 | active | |
"Discover the future of Android XR": robot battle, connected glasses and headset... Google teases its The Android Show conference on December 8Description: Google has just announced an edition of The Android Show dedicated to Android XR. Discover the clues scattered through the event's teaser. Content:
Google has just announced a new edition of The Android Show. This time, the online conference will be devoted to Android XR. A robot dance battle, a connected headset and glasses, Gemini... The trailer (and above all its description) drops intriguing clues about what the event has in store. Android XR became reality with October's official unveiling of the Galaxy XR, Samsung's mixed reality headset, which will also arrive in France next year. But that is only the beginning of the Android XR experience. To unveil what comes next, Google is going big: the tech giant has just announced a new edition of its special online conference The Android Show. The subtitle is more than explicit: after the "I/O Edition" of May 13, make way for the "XR Edition." To announce the event, Google spared no expense: the Mountain View firm offers a 40-second video teaser, visible below, featuring Android Bots proudly sporting headsets and glasses. The event will be dedicated to Android XR; the trailer is unambiguous: "Discover the future of Android XR." The description is the most talkative part. It promises: "Learn everything there is to know about XR, whether through glasses, headsets, and everything in between. Discover how, with Gemini at your side, you can enjoy a more conversational, contextual, and helpful experience." The mention of "glasses" is no accident, and speculation is rife: with the animation staging a robot dance battle, will the event be Google's chance to present its own smart glasses? Or will they be glasses built in collaboration, perhaps with Samsung, whose pair is expected in 2026? Only watching The Android Show: XR Edition will tell.

And what if the star of the show were ultimately not hardware at all, but Google's in-house artificial intelligence, Gemini, also cited in the description? The Mountain View firm's multimodal AI is slipping in (not to say imposing itself) everywhere and is showing impressive progress. In any case, to make sure you do not miss the online conference, Google has included in the description a link to a web page dedicated to the event: besides displaying a countdown, it lets you schedule an email reminder or add the event to your calendar. You can also turn on YouTube notifications by clicking "Notify me." So, see you on December 8, 2025, at 7 p.m. French time for The Android Show: XR Edition.
Images (1):
| Lunar base to require hundreds of robots — executives - … | https://tass.com/science/1740977 | 1 | Jan 02, 2026 16:00 | active | |
Lunar base to require hundreds of robots — executives - Science & Space - TASSURL: https://tass.com/science/1740977 Description: Robots will also be needed to transport cargo, including Moon soil Content:
MOSCOW, February 2. /TASS/. Several hundred robots will be required to create and maintain a manned base on the Moon, Executive Director of Android Technics Company Evgeny Dudorov told TASS in an interview. "How many robots could fly to the Moon? Definitely not just one. Hundreds of robots will be required if humans plan to have a base on the Moon. A robot with a mobile anthropomorphic structure will be one of possible design variants," Dudorov said. Robots will also be needed to transport cargo, including Moon soil, as well as manipulation systems to move items and unload arriving spaceships, he added.
Images (1):
| War Robots - Official Kaji Robot Preview Trailer | https://fr.ign.com/war-robots/81202/tra… | 10 | Jan 02, 2026 16:00 | active | |
War Robots - Official Kaji Robot Preview TrailerURL: https://fr.ign.com/war-robots/81202/trailer/war-robots-trailer-dapercu-officiel-kaji-robot Description: Take a look at the Kaji robot preview trailer for War Robots, an online mech-based action and combat game developed by MY.GAMES. Players can discover Kaji, a mech that can turn invisible, move at high speed, and deploy mines to defeat opponents. Kaji is available now in War Robots for iOS, Android, and PC (Steam). Content: Images (10):
| War Robots Adds Cross-Progress Between PC and Smartphones | https://app-time.ru/post/v-war-robots-d… | 1 | Jan 02, 2026 16:00 | active | |
War Robots Adds Cross-Progress Between PC and SmartphonesURL: https://app-time.ru/post/v-war-robots-dobavili-kross-progress-mezhdu-pc-i-smartfonami Description: War Robots players can continue their battles no matter where they are: at home or on the road. Publisher MY.GAMES announced that War Robots now supports cross-platform play. Content:
Publisher MY.GAMES has announced that War Robots now supports cross-platform play. Players can now switch freely between PC, iOS, and Android while keeping all of their progress. To transfer progress, players must link their profile to a dedicated identifier, the MY.GAMES ID, which can be done from the in-game menu. Once linked, the account can be signed into on any other platform. In addition, the developers recently opened up another way to get War Robots on Android if the game is unavailable in your region: downloading the APK file from the official website. It is a sensible decision that, if not the publishers, then at least the developers of many other popular mobile games unavailable in Russia since 2022 ought to follow. War Robots has been downloaded more than 300 million times: 98% on Android and iOS and 2% on PC. Among smartphones, Android accounts for 228 million downloads and iOS for 73 million. In the latest update, the developers improved the interface and added a few seconds of invulnerability for resurrected robots.
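The flow described above — device-local progress tied to a single account ID that any platform can then sign into — can be sketched as a toy model. Everything here (the `ProgressStore` class, the field names, the merge-by-maximum rule) is an illustrative assumption; MY.GAMES' actual backend and API are not public.

```python
# Hypothetical sketch of cross-progress keyed to a single account ID.
# Class and method names are invented for illustration, not MY.GAMES' API.

class ProgressStore:
    """Server-side progress, keyed by account ID rather than by device."""

    def __init__(self):
        self._by_account = {}

    def link(self, account_id, local_progress):
        # Linking a device uploads its local progress to the account.
        merged = self._by_account.get(account_id, {})
        # Keep the best value per field so no platform's progress is lost
        # (one plausible merge policy among several).
        for key, value in local_progress.items():
            merged[key] = max(merged.get(key, 0), value)
        self._by_account[account_id] = merged
        return merged

    def sign_in(self, account_id):
        # Any platform signing in with the same ID sees the same progress.
        return dict(self._by_account.get(account_id, {}))


store = ProgressStore()
store.link("MYGAMES-123", {"level": 42, "gold": 900})   # linked on Android
store.link("MYGAMES-123", {"level": 40, "gold": 1500})  # linked later on PC
print(store.sign_in("MYGAMES-123"))  # {'level': 42, 'gold': 1500}
```

The point of the sketch is the indirection: once progress lives under an account ID instead of a device, "cross-progress" is just signing in with that ID elsewhere.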
Images (1):
| Download World of Robots 1.26.0 for Android | https://trashbox.ru/link/world-of-robot… | 1 | Jan 02, 2026 16:00 | active | |
Download World of Robots 1.26.0 for AndroidURL: https://trashbox.ru/link/world-of-robots-android Description: World of Robots is a dynamic tactical online shooter about giant combat robots, offering large-scale PvP battles and endless opportunities for strategic superiority. Content:
World of Robots is a dynamic tactical online shooter about giant combat robots, offering large-scale PvP battles and endless opportunities for strategic superiority. Immerse yourself in a world of high-tech combat, piloting multi-ton machines with powerful arsenals. Develop unique tactics by experimenting with different weapon types and robot modifications. Team up with players from around the world, build unbeatable clans, and dominate the global arena. With every update World of Robots gets more interesting: new maps, weapon types, and special effects make each battle unique. Realistic destruction physics and detailed graphics deepen the immersion in these epic battles of the future. World of Robots features:
Images (1):
| Download Merge Robots 1.13.33 for Android | https://trashbox.ru/link/merge-robots-a… | 1 | Jan 02, 2026 16:00 | active | |
Download Merge Robots 1.13.33 for AndroidURL: https://trashbox.ru/link/merge-robots-android Description: Merge Robots lets you enjoy building cool, powerful robots, outfitting them with cutting-edge technology, and earning billions along the way. Content:
Merge Robots lets you enjoy building cool, powerful robots, outfitting them with cutting-edge technology, and earning billions along the way. Build a true robotics empire and become the futuristic king of our time! Compete with other users and prove to everyone that you are the richest and most important player around. It is all very simple: to create a new robot, just merge two identical ones. Keep tapping and creating new robots; production must never stand still, or other players will take over the market. Do not forget to spin the daily prize wheel, because these days you cannot get anywhere without investment! Download Merge Robots and start growing your business today! Merge Robots features:
Images (1):
| The Best Window-Cleaning Robots with an Android App: Complete Guide … | https://androidayuda.com/aplicaciones/r… | 1 | Jan 02, 2026 16:00 | active | |
The best window-cleaning robots with an Android app: complete guide with tests, pros and consDescription: A top guide to window-cleaning robots with an Android app: comparisons, pros and cons, and tricks for a perfect finish on your windows. Content:
Cleaning the large windows, mirrors, and shower screens at home properly can be a real headache, especially when you have to lean out over the exterior side or deal with stains that simply will not come off. At that point, window-cleaning robots with an Android app are a small salvation: they automate the work, cling to the glass by suction, and let you supervise everything from your phone without risking your neck on a ladder. If you have come here looking for the best models compatible with your smartphone, you are in the right place. We have gathered the highlights from the leading guides and tests so you can compare, understand the real differences between them, and buy wisely. You will find models with an Android app, very capable alternatives without one, pros and cons, and usage tricks that make a difference in the final result.

These devices attach to the surface with a powerful suction system and traverse the glass in zigzag routes or other patterns while their pads (dry or damp) pick up the dirt. The Android app adds remote control, mode selection, supervision and, in some cases, fine adjustments such as power or spray frequency. Besides the suction motor, two elements are key: the cloth or pad (several spares are usually included) and safety. The best units come with a safety cord and a backup battery to prevent falls if the power goes out, a detail worth demanding for any window at height. When evaluating these robots, it pays to look beyond the raw numbers at what really changes the experience. The hands-on tests in the reviews we consulted rated four aspects: design and build quality (including the shape, square or elongated), cleaning performance, safety, and ease of use.
In cleaning, what counts is the combination of modes, the type of pad, and the finish after one or more passes; also the sprayers, where present, which keep liquid flowing steadily over the work area. On safety, even though operation is autonomous, it is advisable to stay nearby while the robot works, with the safety cord attached and the emergency battery ready for power cuts. Ease of use depends on the controls (remote, app, or manual), the clarity of the modes, and how intuitive it is to set the robot working. If you are buying your first robot, the Android app can simplify handling and allow precise routes without complication.

An old acquaintance known for its value for money. Its square design lets it reach corners better and cover the surface evenly. It stands out for speed: around 2.5 minutes per m², when others take 4 to 5 minutes, which matters a lot with large windows. It includes two types of pads: dry pads (highly recommended as a first pass when there is ingrained dust) and wet pads, which you can spray with water or glass cleaner. It can be driven by remote or mobile app, and if you switch it on directly at the button it starts in automatic mode. It comes with the expected safety systems: backup battery and safety cable. The only less convenient point is having to swap between dry and wet pads, although that is how you get a complete clean.

Square design, five automatic programs, and control by remote or app. In real-world tests, the double water spray doses liquid little by little so the robot always cleans with fresh fluid. It calculates the route, detects edges, and switches off when finished. It reaches corners quite well, and its working speed is more than acceptable for day-to-day upkeep. On safety: a sturdy cable plus additional systems for contingencies such as power cuts. It also works on bathroom tiles with surprisingly good results. As a downside, with very thin frames it may occasionally fail to detect them perfectly. Key specs: 24.6 × 25.2 × 9.2 cm, 1.95 kg, 5 modes, and 4 safety methods.

Another connected Cecotec model with four automatic programs and technology to stop when it finishes. It is controlled with the remote or from the app and cleans glass of any thickness, tiles, and smooth surfaces indoors and out. It includes several anti-fall safety systems for peace of mind on high windows. The next model combines smart navigation, route calculation, edge detection, and automatic spraying for a deep clean. It includes five modes, an anti-fall system, and good-quality microfiber pads, making it a solid option if you want a Cecotec with an app and good automatic functions.

Compact and elongated, with app and WiFi connectivity (in addition to the remote). It offers two spray levels, a "spot cleaning" mode that dwells on the genuinely difficult patches, and a 60 ml tank. Its 3,200 Pa suction and the app (mirroring the remote) let you steer it exactly where you want. Quiet for its category (70 dB), with an emergency battery of about 30 minutes in case the power goes out. As a usage tip, do not over-wet the pad or it will lose grip; and if there is a lot of dust or sand, do a first dry pass so as not to scratch the glass. Dimensions: 27 × 13.6 × 8 cm; 1.15 kg; 2 modes; 5 m safety cable.

A high-end robot focused on experience and safety. It offers 2,800 Pa suction for a firm grip, protection against power cuts, and atmospheric pressure compensation. It can be controlled from the device itself or via its app for Android/iOS smartphones. For cleaning, it integrates WinSlam 3.0 with smart planning and 3 adaptive modes. The cross spray with two flow levels helps loosen grime without scratching, and the cloth wraps around the edge for an even finish.
Es un dispositivo pensado para quien quiere resultados muy consistentes con supervisión desde el móvil. Con forma cuadrada, cable de alimentación largo (alrededor de 7 metros), peso contenido y navegación inteligente con IA. Calcula el tamaño de la ventana, elige la ruta más eficiente y puede limpiar 1 m² en aproximadamente 2,5 minutos. Dispone de batería de emergencia (SAI) y cuerda de seguridad. Se maneja con control remoto o app para Android/iOS, por lo que encaja perfecto si quieres control desde el móvil. Un modelo avanzado con app (iOS y Android), 3 modos automáticos y limpieza de múltiples superficies: ventanas interiores y exteriores, azulejos, mármol y más. Cuenta con depósito de 30 ml para tu limpiacristales o vinagre y avisos sonoros si se corta la corriente eléctrica. Es un «todo terreno» para quien busca versatilidad y control por smartphone. Aunque aquí priorizamos los que incluyen app para Android, hay robots muy competentes que, aun sin app, merecen mención por rendimiento o por su relación calidad-precio; y también existen apps para amas de casa que complementan su uso. Pueden ser una gran compra si lo del móvil no es imprescindible para ti. Ideal para mantenimiento diario. Similar en diseño a otros alargados, trae 12 mopas de repuesto y exige humedecerlas y escurrirlas antes de empezar. Se controla desde el mando, con potencia de succión de 3.800 Pa y opción de una o dos pasadas. Tiene cable de seguridad de 5 m y batería de emergencia de unos 20 minutos. Si llevas tiempo sin limpiar, conviene una primera pasada manual para quitar costra. Dimensiones: 29 × 13,5 × 8,5 cm; 1,14 kg; 3 modos. Muy ligero y compacto (unos 8,5 cm de alto), cabe donde otros no, como en ventanas con doble cristal o barrotes. Compatible con ventanas sin marco gracias a sus sensores de borde. No incorpora pulverización automática ni app, y se maneja con mando. Muy interesante si necesitas un equipo que entre en huecos ajustados. 
Un buen ayudante para cristales medianamente sucios. Trae 12 mopas, 3 modos preestablecidos y dos depósitos de agua (25 ml cada uno) con pulverización en la dirección de avance. Buena adherencia, batería de emergencia de 30 minutos y cordón de seguridad, aunque este último es algo más endeble que en otros. Funciona mejor en vidrio que en azulejos y puede dejar ligeras marcas de mopa. 29 × 14,5 × 10 cm; 900 g; 3 modos. Versión sin app del modelo de Create, capaz de limpiar hasta 1 m² en unos 4 minutos con rutas en zigzag. Incluye 12 mopas lavables, cable de seguridad largo y batería de respaldo de 30 minutos. Una compra muy sensata si buscas rapidez y seguridad sin complicarte con móviles. Este modelo destaca por su base: actúa como controlador, estación de carga, estabilizador y maletín de transporte. Es una opción orientada a quien valora la experiencia de uso y el orden, con la tranquilidad que aporta una estación pensada para todo el ciclo del robot. Antes de empezar, retira polvo y arena de la superficie. En cristales con mucha suciedad, haz una primera pasada en seco (o manual si hace mucho que no se limpian). Así evitas rayar el vidrio y ayudas a que el robot deslice mejor. Si tu robot pulveriza o humedeces las mopas, no te pases con el líquido. Demasiada humedad reduce la adherencia de la succión y puede provocar resbalones. Mejor mojar poco y repetir pasada si hace falta. En ventanas exteriores o a cierta altura, coloca siempre el cordón de seguridad y usa la batería de respaldo como última barrera ante cortes de suministro. Aunque muchos robots cuentan con medidas de seguridad, lo ideal es estar presente mientras trabajan. Ojo a los marcos muy finos: algunos modelos pueden detectarlos peor y necesitar un empujoncito manual al principio. En azulejos, ciertos robots se mueven con menos precisión que en vidrio y repiten zonas; es normal, el agarre varía según la textura. 
Si el robot ofrece modos (una o dos pasadas), úsalo a tu favor: dos pasadas seguidas suelen dejar el cristal impecable cuando hay manchas incrustadas. Y si incorpora “limpieza de manchas”, actívalo para que se detenga donde hace falta insistir. ¿Cómo funciona un robot limpiacristales? Se fija al cristal por succión y recorre la superficie con rutas predefinidas, usando mopas para arrastrar la suciedad en seco y/o húmedo. Los modelos con pulverización aplican líquido durante el proceso. La mayoría detecta bordes y límites para no salirse y cuentan con cable y batería de emergencia como medidas de seguridad. ¿Qué ventajas tiene que lleve app para Android? Más control y comodidad: eliges modos, gestionas pasadas, diriges el robot a zonas concretas y supervisas sin depender del mando. Si vas a usarlo con frecuencia, la app agiliza y hace más predecibles los resultados. ¿Todos limpian esquinas por igual? No. Los de formato cuadrado suelen llegar mejor a las esquinas, mientras que los alargados pueden moverse con mayor soltura en ciertas situaciones. Si tus marcos son muy marcados o buscas rematar esquinas, prioriza un diseño cuadrado. ¿Sirven para azulejos o mármol? Sí, varios modelos probados trabajan correctamente en azulejos y algunos incluso en mármol, aunque en superficies no vítreas el movimiento puede ser menos preciso. Revisa que el fabricante indique compatibilidad con superficies lisas no vítreas. ¿Qué pasa si se va la luz? Los mejores incluyen batería de respaldo de 20–30 minutos para mantenerse adheridos y evitar caídas. Usa siempre el cordón de seguridad, especialmente en exterior o a gran altura. Si quieres cristales impecables con el mínimo esfuerzo y control total desde el móvil, los robots con app para Android como Mamibot W120-T, Cecotec (1290, 870, 1390), Create Wipebot Pro, Ecovacs Winbot W1 Pro o Schbot WindX1 son apuestas muy fiables. 
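The FAQ above describes how these robots traverse the glass along predefined routes; the zigzag pattern several of the reviewed models use can be sketched as a simple row-by-row path generator. This is a hypothetical illustration of the route shape, not any vendor's actual firmware:

```python
def zigzag_path(width: int, height: int):
    """Yield (x, y) cells of a boustrophedon (zigzag) coverage route:
    sweep each row fully, alternating direction, then step to the next row."""
    for y in range(height):
        xs = range(width) if y % 2 == 0 else range(width - 1, -1, -1)
        for x in xs:
            yield (x, y)

# A 3-cell-wide, 2-cell-tall pane: row 0 left-to-right, row 1 right-to-left.
path = list(zigzag_path(3, 2))
# [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1), (0, 1)]
```

Alternating the sweep direction each row is what keeps the robot from making long dry traverses back to the same edge, which is why two consecutive passes in opposite orientations tend to leave fewer streaks.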
Elige potencia suficiente, buenas medidas de seguridad, modos útiles y una app clara; aplica los consejos de uso (pasada en seco, poca humedad, cuerda puesta) y notarás la diferencia en menos tiempo del que imaginas. Comparte esta lista de robots limpiacristales para que más usuarios sepan cuál elegir.
| OpenMind wants to be the Android operating system of humanoid … | https://techcrunch.com/2025/08/04/openm… | 1 | Jan 02, 2026 16:00 | active | |
OpenMind wants to be the Android operating system of humanoid robots | TechCrunch
Description: OpenMind is building humanoid robot operating software designed for robots that interact with people and other robots.
Content:
Many companies are focused on building robots, or the hardware components to help them move, grip objects, or interact with the world around them. Silicon Valley-based OpenMind focuses on what's under the hood. OpenMind is building a software layer, OM1, for humanoid robots that acts as an operating system. The company compares itself to Android for robotics because its software is open and hardware-agnostic. Stanford professor Jan Liphardt, the founder of OpenMind, told TechCrunch that humanoids and other robots have been around and able to do repetitive tasks for decades. But now that humanoids are being developed for use cases that require more human-to-machine interaction, like having a humanoid in your home, they need a new operating system that thinks more like a human. “All of a sudden, this world is opening where machines are able to interact with humans in ways I’ve certainly never before seen,” Liphardt said. “We’re very much believers here that it’s not just about the humans, but we really think of ourselves as a company that is a collaboration between machines and humans.” On Monday, OpenMind unveiled a new protocol called FABRIC that allows robots to verify identity and share context and information with other robots. Unlike humans, machines can learn almost instantly, Liphardt said, which means giving them a better way to connect to other robots will allow them to more easily train and absorb new information.
Liphardt gave the example of languages and how robots could connect to each other and share data on how to speak different languages, which would help them better interact with more people without having to be taught each language by a human directly. “Humans take it for granted that they can interact with any other human on Earth,” Liphardt said. “Humans have built a lot of infrastructure around us that allows us to trust other people, call them, text them, and interact and coordinate and do things together. Machines, of course, are going to be no different.” OpenMind was founded in 2024 and is gearing up to ship its first fleet of 10 OM1-powered robotic dogs by September. Liphardt said that he’s a big believer in getting the tech out there and iterating on it after the fact. “We full well expect all the humans that will be hosting these quadrupeds, they’ll come back with a long list of things they didn’t like or they want, and then it’s up to us to very, very quickly iterate and improve the machines,” he said. The company also recently raised a $20 million funding round led by Pantera Capital, with participation from Ribbit, Coinbase Ventures, and Pebblebed, among other strategic and angel investors. Now, the company is focused on getting its tech into people’s homes and starting to iterate on the product. “The most important thing for us is to get robots out there and to get feedback,” Liphardt said. “Our goal as a company is to do as many of these tests as we can, so that we can very rapidly identify the most interesting opportunities where the capabilities of the robots today are optimally matched against what humans are looking for.” Reported by Becca Szkutak, a senior writer at TechCrunch who covers venture capital trends and startups; she previously covered the same beat for Forbes and the Venture Capital Journal.
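The article says FABRIC lets robots verify identity and share context, but does not describe the protocol's internals. As a generic illustration of how machine-to-machine identity verification can work, one plausible building block (explicitly not FABRIC's actual design) is shared-key message signing, where a receiver only trusts payloads from a sender that can prove it holds the fleet key:

```python
import hashlib
import hmac
import json

def sign_message(shared_key: bytes, payload: dict) -> dict:
    """Attach an HMAC-SHA256 tag so a receiver can check that the sender
    holds the shared fleet key. Canonical JSON keeps the tag deterministic."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(shared_key, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_message(shared_key: bytes, message: dict) -> bool:
    """Recompute the tag over the payload and compare in constant time."""
    body = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(shared_key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

key = b"fleet-shared-secret"  # hypothetical provisioning step
msg = sign_message(key, {"robot_id": "dog-07", "skill": "es-ES speech"})
assert verify_message(key, msg)               # authentic sender accepted
assert not verify_message(b"wrong-key", msg)  # impostor rejected
```

A real inter-robot protocol would more likely use public-key certificates so robots from different owners need not share a secret, but the accept/reject shape of the exchange is the same.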
| China’s ‘Android For Robots’ And The Race For Embodied AI … | https://www.forbes.com/sites/viviantoh/… | 0 | Jan 02, 2026 16:00 | active | |
China’s ‘Android For Robots’ And The Race For Embodied AI Supremacy
Description: Which philosophy they will learn to think. Can a walled garden outlast an open field, or will openness inevitably prevail?
Content:
| El 'Android para robots' es real: Google DeepMind ficha al … | https://andro4all.com/robots/el-android… | 1 | Jan 02, 2026 16:00 | active | |
The 'Android for robots' is real: Google DeepMind hires the CTO of Boston Dynamics to create the universal brain of robotics
Description: Google DeepMind hires the former CTO of Boston Dynamics to lead its universal operating system for robots. Aaron Saunders, who has spent 23 years at the company behind the famous Atlas, takes on the role of vice president of hardware engineering.
Content:
Google DeepMind has hired the former CTO of Boston Dynamics to lead its universal operating system for robots. Aaron Saunders, who has spent 23 years at the company behind the famous Atlas, takes on the role of vice president of hardware engineering. The move confirms that the AI subsidiary is serious about its Gemini project, the famous 'Android of robotics'.

As reported by WinBuzzer, Saunders joined Boston Dynamics in 2018 and was promoted to CTO in 2021. There he cut his teeth developing the industry's most advanced prototypes, from the Spot quadrupeds to the Atlas humanoid that left us all open-mouthed with its impossible flips.

A universal brain for any robotic body. Demis Hassabis, CEO of DeepMind, doesn't mince words: he wants to create an "Android for robots." Rather than building its own robots, the idea is to develop an AI foundation that can control any hardware configuration. What does this mean in practice? Creating a system that works on any body plan, whether humanoid, quadruped, or wheeled. This approach seeks to separate the intelligence from the physical chassis, avoiding the headaches of manufacturing hardware while spreading Google's AI models across the whole industry. The plan replicates the Android model: Google provides the brain, others build the body.

That said, there is a certain irony in hiring Saunders. For all the talk about software, bringing in a veteran builder suggests Google is adopting a Pixel-like tactic: making its own hardware to show off what its software can do. We have already seen Google develop AI robots that can think before acting and handle everyday tasks like sorting laundry.

Saunders' experience is a perfect fit for the sim-to-real problem, the stumbling block where robots that work flawlessly in simulation crash into physical reality. His hands-on experience with hydraulic and electric actuation could accelerate the jump from the lab to real-world applications.

The hire comes just as robotic hardware is getting dramatically cheaper, led by Chinese manufacturers such as Unitree. That company has become the biggest supplier of quadruped systems, delivering 10 times more units than the competition in 2023-2024, much as Boston Dynamics broke the mold with electric humanoid robots that looked like CGI. Meanwhile, the industry is showing different ways of operating: Tesla goes its own way with a closed ecosystem for Optimus, while Meta plays the open-source card with V-JEPA 2, a model that teaches robots physical common sense. As Google already showed with ping-pong-playing robots, the race is no longer about hiring researchers but about signing people who know how to build robots that actually work.

The move makes clear that DeepMind is betting on the "brain" as the most valuable part of the robot, while acknowledging that it needs to understand the "body" so its software doesn't fall short.
This blend of Saunders' physical expertise and Google's advanced AI could mark the moment robots stop being lab prototypes and become genuinely useful tools.
Images (1):
|
|||||
| OpenMind wants to be the Android of humanoid robots | https://www.muycomputerpro.com/2025/08/… | 1 | Jan 02, 2026 16:00 | active | |
OpenMind wants to be the Android of humanoid robotsDescription: OpenMind is developing a software layer, OM1, that acts as an operating system for the next generation of humanoid robots Content:
OpenMind is developing a software layer, OM1, that acts as an operating system for the next generation of humanoid robots, and yesterday it took a new step with the presentation of a protocol called FABRIC that lets robots verify identity and share context and information with others of their kind. Many companies focus on building robots or the hardware components that let them move, grip objects, or interact with the world around them. OpenMind, a specialized firm based in Silicon Valley that considers itself the Android of robotics because its software is open and hardware-agnostic, focuses instead on what runs inside them. Jan Liphardt, a Stanford professor and OpenMind's founder, has explained that humanoids and other robots have existed for decades, performing repetitive tasks.
But now that humanoids are being developed for use cases that require more human-machine interaction, such as having a humanoid at home, they need a new operating system that thinks more like a human. "Suddenly, a world opens up where machines can interact with humans in ways never seen before," Liphardt said. "Here we firmly believe that it's not just about humans; rather, we see ourselves as a company that is a collaboration between machines and humans." OpenMind's new protocol allows robots to verify identity and share context and information with other robots. Unlike humans, machines can learn almost instantly, Liphardt said, which means that giving them a better way to connect with other robots will let them train more easily and absorb new information. Liphardt gave the example of languages and how robots could connect with one another and share data on how to speak different languages, which would help them interact better with more people without having to learn each language directly from a human. "We humans take for granted that we can interact with any other human on Earth," the researcher says. "We have built a vast infrastructure around us that allows us to trust other people, call them, message them, interact, coordinate, and do things together. Machines, of course, will be no exception." The new era of robotics is under way, and OpenMind's big goal is to bring its technology into homes. To that end, it recently raised $20 million in a funding round led by Pantera Capital, with participation from Ribbit, Coinbase Ventures, and Pebblebed, among other strategic investors. "The most important thing for us is to ship robots and get feedback," Liphardt said.
"Our goal as a company is to run as many tests as possible to quickly identify the most interesting opportunities where the capabilities of today's robots fit optimally with what humans are looking for."
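The article names only the two capabilities FABRIC provides, identity verification and context sharing, without specifying how they work. Purely as a hypothetical illustration of those two ideas (every name and data structure below is invented, and real FABRIC almost certainly differs), a minimal sketch might look like this:

```python
import hashlib
import hmac

# Hypothetical shared credential standing in for whatever real identity
# scheme FABRIC uses; this is NOT the actual protocol.
FLEET_SECRET = b"demo-fleet-secret"

def sign_identity(robot_id: str) -> str:
    """Derive a verifiable token from a robot's ID (toy example)."""
    return hmac.new(FLEET_SECRET, robot_id.encode(), hashlib.sha256).hexdigest()

def verify_identity(robot_id: str, token: str) -> bool:
    """A peer checks that the token matches the claimed ID."""
    return hmac.compare_digest(sign_identity(robot_id), token)

def share_context(sender_id: str, token: str, context: dict):
    """Accept shared context (e.g. a learned skill) only from verified peers."""
    if not verify_identity(sender_id, token):
        return None
    return context  # a real system would merge this into local knowledge

# One robot shares language data with another after verification
token = sign_identity("robot-A")
received = share_context("robot-A", token, {"skill": "greet", "language": "es"})
```

The point of the sketch is only the shape of the interaction Liphardt describes: verify first, then exchange context so one robot's learning (here, a language skill) becomes available to another.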
Images (1):
|
|||||
| The best robot combat games for Android | https://www.androidsis.com/los-mejores-… | 1 | Jan 02, 2026 16:00 | active | |
The best robot combat games for AndroidURL: https://www.androidsis.com/los-mejores-juegos-de-robots-de-combate-para-android/ Description: Discover the best robot combat games for Android: PvP, strategy, and 3v3. A detailed guide to choosing your favorite. Content:
Androidsis » Android Games 10 minutes If you love fights between giant machines, Android offers a good handful of options that combine direct action, strategy, and progression. It's not just about shooting wildly: there are tactics, base building, rosters to upgrade, and combat at a different rhythm in each game. The trick is finding the title that fits you, whether you're after online battles, 3v3 duels, or the fantasy of leading heroes from a legendary saga. In this guide we present a varied selection of options that have won over thousands of players. You'll see everything from battlefields with colossal mechs and non-stop PvP to an official Transformers strategy game and a team-based arcade fighter with cel-shaded visuals. Besides reviewing them in detail, we explain what makes each one unique so you can choose wisely. The charm of these titles is that they mix very different sensations: from the adrenaline of direct confrontation to planning down to the millimeter. Some focus on real-time competitive multiplayer, others bet on building and defending your base, and some turn every fight into a showcase of touch combos. You'll also find deep progression systems. Most let you upgrade your roster: you increase movement speed, maneuverability, firepower, and armor, unlock modules or power cards, and customize the look with camouflages and decals. That sense of continuous growth is a key part of the hook. Another differentiator is the setting.
From robots inspired by gods, gladiators, or samurai to iconic units like Autobots and Decepticons. All of it backed by modern 3D graphics and striking effects, and even cel-shaded art styles with console-quality polish on mobile. Finally, the community. These games shine when friends join in or you face players from around the world. You'll find built-in chat, tournaments, and rankings so every session has clear goals and an extra competitive kick. In this title you jump straight into battlefields where you pilot heavy mechs against players from around the planet. The goal is clear: prove you're the smartest, fastest, and hardest-hitting commander. Every round is pure tension, with quick engagements and decisions that separate victory from disaster. The fantasy is powerful: humanity has learned to build enormous mechanical warriors, true "mech warriors" ready for high-voltage combat. In this setting, it's not all about pulling the trigger; positioning, flanking routes, and sly maneuvers and small tactical ploys make the difference. You'll hear it again and again: your most dangerous rival can be yourself if you don't control your nerves and mistakes. The battle robots boast brute strength to match their steel. They're credited with tremendous power, with curious details such as the ability to store energy when current flows and release it at just the right moment. It may sound technical, but in practice it translates into blows that change the course of a match. The pilot's growth is constant. You can strengthen your unit on four key fronts: increase speed to reposition, refine maneuverability to dodge better, reinforce armor, and raise offensive capability. The more you invest, the more real chances you have of dominating the battlefield and stringing together high scores. Playing with others adds a lot.
Invite your friends and head out together to break enemy lines in these free robot wars. That coordination between friends, when everyone plays their role, turns each match into a memorable story. Among its main strengths are fast-paced online matches. The PvP gives you no time to breathe: it feels agile and cut-throat. It's backed by modern 3D graphics that dress up the battlefields and the flying scrap nicely, plus a good collection of modules for fine-tuning your bot. The social scene is well developed. You can join fan communities to keep up with the latest updates, compete in tournaments with leaderboards, and collect rewards, sharing your achievements on networks like Facebook to show off in style. If you're drawn to strong sensations, here is a "mech war" that builds a following. Jump in, pull the trigger, and prove you're the most fearsome mech in WWR. The mix of tension, progression, and team play makes it a staple of the genre on Android. More into strategy than a quick trigger? In Earth Wars you choose a side, Autobots or Decepticons, and recruit your favorite robots to form a devastating squad. The base is the heart of progression: building it well lets you withstand assaults and prepare effective raids. Defense is no small matter: you'll reinforce your walls with missile launchers, shock towers, and laser turrets. Each structure plays a role, so it pays to think about synergies and coverage so the enemy finds no easy routes. At the same time, you'll plan your offensives with units that complement each other. The campaign and multiplayer modes offer the genre's classic loop: upgrade buildings, unlock robots, optimize timers, and return to combat with an advantage. It's a strategic adventure with mechanics familiar to anyone who has tried similar mobile titles, but here with the bonus of the Transformers universe and its iconic characters.
Assembling the best team is half the victory. With each new recruit you adjust your roster to cover roles (damage, endurance, support) and, if you choose well, you'll see your attacks go from scratching at enemy defenses to flattening them. That progression encourages you to log in daily for resources, upgrades, and new pieces. In the end, Earth Wars is ideal if you enjoy a slower pace with medium-term decisions. Building, defending, and attacking thoughtfully has its magic, all the more so when you do it commanding the classic Autobots or Decepticons. If you're after direct spectacle, team battles, and precise touch controls, this is your stop. Here you build a roster of robots hungry for a fight and dive into the arena in arcade-paced 3v3 duels. The learning curve is immediate: you learn with two gestures and within minutes you're chaining hits. The control system is all taps and swipes: you tap, swipe, and chain combos to grind down your rivals. As you fill the energy bar, you unlock special attacks and finishers capable of turning a round around. The tension rises when you have to decide whether to spend now or save up for a spectacular finish. The roster is huge, with dozens of robots drawing on countless references: gods, gladiators, dragons, monks, walking arsenals, samurai, and ninjas. There are more than 45 unique options, plus additional variants that let you try out different powers and moves to refresh the experience. Progression deepens with upgrades, promotions, and combinations between team members. Synergy sets grant exclusive bonuses; by mastering these combinations, you can blow open matchups that seemed out of reach. You also equip power cards and "overclocks" to boost damage and armor, improve characters, and enhance specific abilities. It's the perfect route to take your squad to the next level and unleash the true strength of steel in every fight.
Visually, it's striking: a cel-shaded style with console-quality polish that looks great on mobile, fresh effects, and immersive environments that turn every hit into a small cinematic. If you like playing in style, you're well served here. It's free-to-play and, as usual, includes in-app purchases. If they don't interest you, you can disable in-app purchases in your device settings and play without spending a cent. The design is built for progression with or without paying. Want to stay up to date? The studio keeps in active touch with the community, sharing news, updates, videos, and tips through its website and official profiles on networks like Facebook, Twitter, and YouTube. Ideal for picking up tricks, event previews, and giveaways. Think first about the kind of experience you want. If you want high-tension battles against real people and a deep upgrade meta, choose an option with continuous PvP and customization modules. If you'd rather build and plan, a base to raise and defend at your own pace will suit you better. Progression matters too. Does leveling stats, unlocking camouflages, and showing off your bot motivate you? Or are you more drawn to collecting robots with very different styles and finding powerful synergies? That preference will make the difference between playing for two days and getting hooked for a long season. Pay attention to controls and pacing. Tap-based systems with chained combos offer instant satisfaction, while titles with more tactical handling demand positioning, map reading, and timing. Neither is better or worse; it's whatever you're in the mood for. Consider the social side. A good chat, regular tournaments, and active rankings give you weekly goals and a community to share plays with. These incentives extend a game's lifespan and add extra motivation. And don't forget to check the monetization model.
Most let you progress for free, and in some cases you can block in-app purchases if you want an experience 100% free of microtransactions. What matters is that progression feels fair and fun. Between frantic mech offerings, strategy with a Transformers flavor, and stylish 3v3 brawls, Android has a varied lineup for every taste. If the tension of PvP with upgrades and community calls to you, WWR is your place; if planning and defending with your favorite robots is your thing, Earth Wars nails the strategic essence; and if you want touch-based spectacle with combos and huge rosters, Ultimate Robot Fighting delivers it ready-made. Whatever you choose, get ready for steel, sparks, and many hours of fun.
Images (1):
|
|||||
| Humanoid robots may be used at cis-lunar station — Android … | https://tass.com/science/1740867 | 1 | Jan 02, 2026 16:00 | active | |
Humanoid robots may be used at cis-lunar station — Android Technics - Science & Space - TASSURL: https://tass.com/science/1740867 Description: According to the Android Technics executive, in such an environment, which is highly hostile to human life, it would be better to use robots to handle potential emergency situations Content:
MOSCOW, February 2. /TASS/. The use of humanoid robots at a cis-lunar station seems appropriate, Yevgeny Dudorov, executive director of the Android Technics research and production association, said in an interview with TASS. "Basically, when Russia will be working to set up a lunar orbital station, it would be logical to use humanoid robots inside pressurized compartments. The matter is that humans will rarely visit lunar bases and the cis-lunar station as they would be exposed to active space radiation there," Dudorov said. According to the Android Technics executive, in such an environment, which is highly hostile to human life, it would be better to use robots to handle potential emergency situations.
Images (1):
|
|||||
| Best robot combat games for Android | https://androidayuda.com/juegos/listas/… | 1 | Jan 02, 2026 16:00 | active | |
Best robot combat games for AndroidURL: https://androidayuda.com/juegos/listas/mejores-juegos-de-robots-de-combate-para-android/ Description: The best robot combat games on Android: PvP, strategy, and fighting. A comparison and tips for choosing your favorite. Content:
Android Ayuda » Games » Best games 10 minutes If you love watching sparks, steel, and scrap fly through the air, robot combat games for Android are a feast. On mobile we have everything from direct one-on-one duels to strategy offerings with bases, along with PvP arenas where every second counts and every upgrade can decide the outcome. In this roundup we focus on what we've seen on the best-ranking pages: mobile titles like Transformers: Earth Wars, Mechangelion, and Robot Crash Fight; memories of and comparisons with War Robots; plus a good helping of the genre's console/PC benchmarks (Armored Core, MechWarrior, 13 Sentinels, Iron Harvest, Daemon X Machina, Gundam) which, although not on Android, help make sense of the mech landscape. Everything that follows has been rewritten in other words, integrating all the available information and adding context so you can choose well. When we talk about mechs on mobile, we can mean several things: bare-knuckle robot duels with simple controls, fast-paced PvP arenas with progression, or strategy titles with base building where you design your defense and prepare synchronized attacks. All these formats coexist on Android, and it's important to know what you're after before installing. One very common category on smartphones is 1v1 fighting with upgrades, where the core is chaining hits, dodges, and abilities intuitively. Here the sense of impact shines, along with the variety of moves and the progression of parts and weapons you unlock. Another strand is strategy and management, where assembling iconic teams and planning attacks or defenses puts more weight on tactics than reflexes. The key is the balance between units, turrets, and resource economy, something that works very well in short sessions.
Finally, there's also room for the competitive PvP arena, where whoever best tunes their setup and reads the map usually comes out ahead. These are experiences that reward consistency and continuous online challenge. From the sites analyzed, three clear mobile picks emerge: Transformers: Earth Wars, the frantic Mechangelion – Robot Fighting, and the competitive Robot Crash Fight. Add to that the eternal question: does War Robots still reign over mech PvP, or have rivals emerged that surpass it? If strategy and the universe of Optimus and Megatron appeal to you, this is where the best of both worlds meet: you can assemble a squad of Transformers, decide whether to fight as an Autobot or Decepticon, and build a base with solid defenses. There are missile launchers, shock towers, and laser turrets, all aimed at stopping enemy raids and preparing your own assault. Its structure recalls other games of its genre on mobile, but with the extra layer of belonging to a legendary franchise. That translates into missions and events that appeal to nostalgia, and a progression that combines unlocking characters and upgrading structures to climb the leagues. This game sits on the opposite shore: direct action, one-on-one combat, and controls designed so anyone can step into the metal ring in seconds. The control panel offers specific moves, jabs, and boxing-style blows in mech form, with solid impacts that feel good on a touchscreen. Progression consists of improving your robot step by step: you unlock weapons, reinforce defense, and adjust your build to take down the toughest enemies. It also introduces a striking twist: you fight giant dinosaurs too, so it's not all robot-versus-robot duels; there are bosses that demand reading patterns and varying your strategy. The idea is clear: short matches, visual impact, and fights that get harder as you progress.
If you like light tactics and progressing at your own pace, there are hours of uncomplicated entertainment here. Many of us remember War Robots as the great benchmark of mech PvP on mobile. The testimony we've seen calls it "the best robot mech PvP game" and raises the question of whether it still holds the crown today. Although the content analyzed doesn't provide detailed current information, the impression it leaves is that its competitive formula defined an era, and that in the meantime similar projects emerged, including a Russian-made one that seemed more advanced while still in development. With that information, the honest answer is that the PvP throne is contested, and it's worth trying alternatives depending on what you're after: if you want an arena experience with complex builds, War Robots remains a point of reference; if you prefer more direct matches, Mechangelion offers quick duels with clear progression; and if you want the creative bonus of designing and competing, Robot Crash Fight adds interesting ingredients. Here the fun lies in being engineer and pilot at once. You start by designing and operating remote-controlled, armored, and armed robots to fight in an elimination tournament across arenas. The gameplay loop combines building, equipping, and clashing with rivals in 3D with flashy animations. The arsenal is varied and a bit irreverent: from saws and flamethrowers to hammers, electric shocks, and magnets with tactical uses. Customization is key: every part you add changes how you perform and how you respond to certain enemy threats. Visually it more than delivers on mobile, with quality 3D graphics and physics that give good feedback when you hit or get hit. If the idea of iterating on your machine and finding winning combinations appeals to you, it has that "one more and I'm done" pull so typical of arena games. Although they aren't mobile games, several well-ranking sites include key mech titles on console and PC.
You won't play them on Android, but they help explain what many mobile projects draw on and what makes this subgenre great. FromSoftware's saga returned after a long hiatus with an entry centered on enormous, fully configurable robots. It isn't a "soulslike" but a pure Armored Core: demanding action and fine-tuning of parts, weapons, and thrusters. Its prestige rests on the pulse of its combat and how far customization can be pushed. A very different approach: sci-fi narrative role-playing mixed with 2D strategy and spectacular art in Vanillaware's signature style. Its strength isn't the direct clash of metal but how it tells twelve interwoven stories and carries them into tactical battle scenarios. Real-time strategy set in an alternate post-World War I Europe, inspired by the work of Jakub Różalski. Three factions (with nods to Russia, Poland, and England) and steampunk-style mechs that change how your squads move and fight. It has multiplayer for facing other players. Action and strategy at the controls of a squad of five mechs, with destructible environments and online co-op for five players. The offering blends shooting, team management, and chained missions, with an emphasis on coordinating roles within the group. Combat simulation in the BattleTech universe. You play a mercenary pilot, take contracts, upgrade your mechs, and survive on a galactic map of constant conflict. It's a slower experience of planning and executing operations. A sequel to the 2019 game with configurable robots, open-world exploration, and intense combat on land and in the air. Colossal bosses abound, along with a progression system based on parts and skills that keeps you farming. A spin-off focused on narrative and combat within the Gundam universe, developed by B.B. Studio and published by Bandai Namco.
It launched on PlayStation and combines missions with story moments that expand its world. If you're looking for a pure reflex duel, something like Mechangelion is for you: clear moves, reading your opponent, and damage/defense tweaks you feel immediately. Recommended if you want short matches with a fast sense of progress. For a more tactical approach with layers of management, Transformers: Earth Wars shines: creating a base, deciding your team composition, and planning attacks matters as much as execution. It's an ideal option if you prefer to think in the medium term and watch your fortress grow. Motivated by building prototypes and experimenting? Then Robot Crash Fight fits, because the metagame of designing, testing, and readjusting is addictive. Each weapon has situational uses (the saw that shreds up close, the magnet that wrecks the rival's play), and finding the synergy makes the difference. If what you're after is a long-haul PvP arena, the memory of War Robots still carries weight. With the information analyzed we can't give a detailed up-to-date profile, but it's clear that its competitive DNA remains the reference point when talking about mechs on mobile. Define your style first: if you prioritize immediate action and simple progression, Mechangelion makes it easy; if strategy with legendary licenses appeals to you, Earth Wars lets you live the Autobot/Decepticon conflict; if you want to build and compete creatively, Robot Crash Fight will keep you entertained with its wild arsenal. Also weigh the time you'll put in: strategy games reward consistency and daily planning, while 1v1 duels suit short sessions better. And in PvP arenas, the more you play, the better you understand the metagame, so think about your learning curve.
Answering the recurring question of whether War Robots is still the best PvP: with the data reviewed, there is no single undisputed "king," just styles that suit different tastes. If you miss the arena feel, give War Robots a chance and see how it lands; if you fancy something more direct, Mechangelion delivers; if you want construction and extreme customization, Robot Crash Fight can be your testing ground. And keep your expectations in check: the PC and console benchmarks show the genre is broad and deep, but on Android it shines when it focuses on agile sessions and clear progression. If a title makes you come back every day with a concrete goal, you're on the right track. With this overview, you have plenty to choose from: strategy with Transformers, bare-knuckle mech duels, and tournaments of custom-built machines, all on your phone. Choose according to your time and style, try whatever appeals most, and let the steel, sparks, and builds do the rest.
Images (1):
| Why Robots Are Still in Diapers While AI Runs the … | https://medium.com/@mwisenezaange9/why-… | 0 | Jan 02, 2026 00:00 | active | |
Why Robots Are Still in Diapers While AI Runs the Show
Description: A reflection on evolution, from silicon to cells, and why hardware lags behind both in nature and in tech. We talk to AI every day — in our phones, search eng...
Content:
| -> Gratis Ebook Download -> Python Adventures for Young Coders: … | https://ebook-hell.to/category/English-… | 1 | Jan 02, 2026 00:00 | active | |
-> Free Ebook Download -> Python Adventures for Young Coders: Explore the World of Programming - Tharwat, Alaa
Description: Free ebooks are available on Ebook-Hell.to: Android games, Android apps, Windows tools, magazines and daily newspapers via direct download links on Turbobit.com, Nitroflare.com, Rapidgator.com ✓ 100% free ✓ Instant ✓ 100,000+ users
Content:
Short description: This book takes young readers on an exciting adventure with a child named Kai. One day, Kai wakes up trapped inside a giant robot. He can't talk to anyone outside, and the only way to communicate is through the robot. Inside the robot, Kai finds many books and documents written in a strange language: it's the robot's language, which is Python. Kai realizes he needs to learn this language to control the robot and talk to the outside world. In each chapter of this book, we join Kai on a new adventure to learn something that helps us control the robot better and communicate with the real world. This fun and interactive book is designed to introduce young minds to the basics of programming while encouraging creativity and problem-solving skills.

In the introductory chapters, readers discover Python as a friendly and accessible programming language. The book guides them through setting up their programming environment and crafting their initial lines of code, laying the foundation for an exciting coding adventure. As the exploration unfolds, it delves into fundamental programming concepts essential for any budding coder. From variables and data types to loops and conditionals, these building blocks empower readers to create their own programs, fostering a solid understanding of the core principles of coding. It seamlessly integrates these concepts with previously learned fundamentals, providing a comprehensive view of Python's capabilities.
Fueling creativity, it inspires readers to unleash their imagination through engaging projects. From crafting games to developing useful applications, young coders learn to apply their programming skills in innovative ways, transforming abstract coding concepts into real, interactive projects. With a focus on accessibility, engagement, and real-world application, this book paves the way for the next generation of Python enthusiasts.

What you will learn:
- Understand Python programming fundamentals, including syntax, variables, data types, loops, conditionals, lists, functions, and file handling.
- Break down complex problems into smaller, manageable tasks and apply coding concepts to find creative solutions.
- Create your own interactive coding projects using Python.
- Understand strategies for debugging and troubleshooting common programming problems, essential skills for any programmer.

Who this book is for: This book caters primarily to high school students and anyone keen on delving into programming with minimal or zero coding background. It is structured to be both accessible and captivating for young readers, immersing them in the realm of coding through entertaining and interactive journeys. It also extends its reach to educators and coding enthusiasts alike.
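The fundamentals the blurb lists (variables, data types, conditionals, loops, and functions) fit in a few lines of Python. The "robot command" scenario below is a hypothetical illustration inspired by the book's premise, not code taken from the book itself:

```python
# Hypothetical "robot language" sketch: variables, a dict data type,
# a conditional, a loop, and two small functions.

def translate_command(word):
    """Map an English word to a made-up robot instruction."""
    commands = {"walk": "MOVE_FORWARD", "stop": "HALT", "wave": "RAISE_ARM"}
    # Conditional: unknown words fall back to a safe no-op.
    if word in commands:
        return commands[word]
    return "NOOP"

def run_program(words):
    """Loop over a list of words and translate each one."""
    return [translate_command(w) for w in words]

print(run_program(["walk", "wave", "fly", "stop"]))
# "fly" is not in the dictionary, so it is translated to "NOOP"
```

The same pattern (a lookup table plus a fallback branch) generalizes to most beginner exercises the book describes, such as reading commands from a file instead of a list.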
Images (1):
| NVIDIA Announces Platform for Creating AI Avatars | https://www.globenewswire.com/news-rele… | 1 | Jan 01, 2026 16:00 | active | |
NVIDIA Announces Platform for Creating AI Avatars
Description: NVIDIA Omniverse Avatar Enables Real-Time Conversational AI Assistants...
Content:
November 09, 2021 04:12 ET | Source: NVIDIA
SANTA CLARA, Calif., Nov. 09, 2021 (GLOBE NEWSWIRE) -- GTC—NVIDIA today announced NVIDIA Omniverse Avatar, a technology platform for generating interactive AI avatars. Omniverse Avatar connects the company’s technologies in speech AI, computer vision, natural language understanding, recommendation engines and simulation technologies. Avatars created in the platform are interactive characters with ray-traced 3D graphics that can see, speak, converse on a wide range of subjects, and understand naturally spoken intent. Omniverse Avatar opens the door to the creation of AI assistants that are easily customizable for virtually any industry. These could help with the billions of daily customer service interactions — restaurant orders, banking transactions, making personal appointments and reservations, and more — leading to greater business opportunities and improved customer satisfaction.

“The dawn of intelligent virtual assistants has arrived,” said Jensen Huang, founder and CEO of NVIDIA. “Omniverse Avatar combines NVIDIA’s foundational graphics, simulation and AI technologies to make some of the most complex real-time applications ever created. The use cases of collaborative robots and virtual assistants are incredible and far reaching.”

Omniverse Avatar is part of NVIDIA Omniverse™, a virtual world simulation and collaboration platform for 3D workflows currently in open beta with over 70,000 users. In his keynote address at NVIDIA GTC, Huang shared various examples of Omniverse Avatar: Project Tokkio for customer support, NVIDIA DRIVE Concierge for always-on, intelligent services in vehicles, and Project Maxine for video conferencing. In the first demonstration of Project Tokkio, Huang showed colleagues engaging in a real-time conversation with an avatar crafted as a toy replica of himself — conversing on such topics as biology and climate science.
In a second Project Tokkio demo, he highlighted a customer-service avatar in a restaurant kiosk, able to see, converse with and understand two customers as they ordered veggie burgers, fries and drinks. The demonstrations were powered by NVIDIA AI software and Megatron 530B, which is currently the world’s largest customizable language model. In a demo of the DRIVE Concierge AI platform, a digital assistant on the center dashboard screen helps a driver select the best driving mode to reach his destination on time, and then follows his request to set a reminder once the car’s range drops below 100 miles. Separately, Huang showed Project Maxine’s ability to add state-of-the-art video and audio features to virtual collaboration and content creation applications. An English-language speaker is shown on a video call in a noisy cafe, but can be heard clearly without background noise. As she speaks, her words are both transcribed and translated in real time into German, French and Spanish with her same voice and intonation.

Omniverse Avatar Key Elements
Omniverse Avatar uses elements from speech AI, computer vision, natural language understanding, recommendation engines, facial animation, and graphics delivered through the following technologies: These technologies are composed into an application and processed in real time using the NVIDIA Unified Compute Framework. Packaged as scalable, customizable microservices, the skills can be securely deployed, managed and orchestrated across multiple locations by NVIDIA Fleet Command™. Learn more about Omniverse Avatar. Register for free to learn more about NVIDIA Omniverse during NVIDIA GTC, taking place online through Nov. 11. Watch Huang’s GTC keynote address streaming on Nov. 9 and in replay.

About NVIDIA
NVIDIA’s (NASDAQ: NVDA) invention of the GPU in 1999 sparked the growth of the PC gaming market and has redefined modern computer graphics, high performance computing, and artificial intelligence.
The company’s pioneering work in accelerated computing and AI is reshaping trillion-dollar industries, such as transportation, healthcare and manufacturing, and fueling the growth of many others. More information at https://nvidianews.nvidia.com/. For further information, contact:Kristin UchiyamaSenior PR ManagerNVIDIA Corporation+1-408-313-0448kuchiyama@nvidia.com Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, and features of NVIDIA Omniverse Avatar, Project Tokkio, DRIVE Concierge, Project Maxine, NVIDIA Riva, Megatron 530B, NVIDIA Merlin, NVIDIA Metropolis, NVIDIA Video2Face and Audio2Face, the NVIDIA Unified Compute Framework and NVIDIA Fleet Command; Omniverse Avatar opening the door to the creation of AI assistants that are easily customizable for virtually any industry; the help of AI assistants leading to greater business opportunities and improved customer satisfaction; and the use cases of collaborative robots and virtual assistants are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. 
Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners' products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company's website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances. © 2021 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, Audio2Face, Maxine, NGC, NVIDIA DRIVE, NVIDIA Fleet Command, NVIDIA Merlin and NVIDIA Omniverse are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. All other trademarks and copyrights are the property of their respective owners. Features, pricing, availability, and specifications are subject to change without notice. 
A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/35c4d67a-361e-4693-b500-289ff3c9dbc0
Images (1):
| CICC Publishes Thematic Research on How AI Empowers Achievement of … | https://www.prnewswire.com:443/news-rel… | 1 | Jan 01, 2026 16:00 | active | |
CICC Publishes Thematic Research on How AI Empowers Achievement of Carbon Neutrality
Description: /PRNewswire/ -- China International Capital Corporation Limited (CICC, 3908.HK, 601995.SH) held the Investment and Financing Theme Forum at the 2021 World...
Content:
Jul 14, 2021, 23:11 ET
BEIJING, July 14, 2021 /PRNewswire/ -- China International Capital Corporation Limited (CICC, 3908.HK, 601995.SH) held the Investment and Financing Theme Forum at the 2021 World Artificial Intelligence Conference in Shanghai, during which it published a research report, Achieving Carbon Neutrality amid Thriving AI. CICC Research's Technology Group joined with multiple sector research teams to complete this in-depth analysis covering potential scenarios for AI-enabled carbon emission reduction. The report demonstrates the practical effects and evolutionary trends of AI in improving efficiency, saving energy, and reducing emissions and consumption. Peng Hu, Chief Analyst of CICC's Technology Hardware Group, delivered a keynote speech to the conference, analyzing how AI can help achieve carbon neutrality, as well as the investment opportunities in the technology industry, for example AI-enabled cities, AI-enabled vehicles, AI-enabled smart manufacturing, and AI-enabled power.

Carbon neutrality, an important application for AI
After the 21st United Nations Climate Change Conference, the achievement of carbon neutrality has become a global goal. As of the end of 2020, 44 countries and regions around the world had made commitments on carbon neutrality. China also announced that it would achieve peak carbon levels by 2030 and carbon neutrality by 2060. CICC believes that the key to achieving carbon neutrality lies in the reduction of carbon dioxide emissions per unit of GDP, which puts pressure on areas such as energy, transportation, manufacturing and urban construction planning in the context of China's rapid economic growth. AI is expected to promote efficiency and consumption reduction in a number of different areas and help achieve the goal of carbon neutrality.
Peng explains that AI can help in three ways: prediction, monitoring and optimization.

AI helps carbon neutrality in four major areas
In Peng's view, AI could help achieve the goal of carbon neutrality in four fields, namely cities, smart manufacturing, vehicles and power.

Investment opportunities in the science and technology industries against the backdrop of carbon neutrality
ESG investing has become a global trend. Peng believes that the ESG investment philosophy will prompt listed companies to pay more attention to the control of carbon emissions. Over the long term, companies with higher ESG levels will achieve better operating results and more sustainable returns, which creates new investment opportunities. Peng points out that AI should also be filled with "humanistic care": it is not just a technical term meaning increased efficiency and profits, but should also play a greater role in improving the living environment, creating social welfare and enhancing human well-being. Peng suggests focusing on investment opportunities in the following 10 areas amid AI-enabled carbon neutrality: 1) smart power grids; 2) drones for civilian use; 3) mobile robots; 4) industrial internet platforms; 5) machine vision; 6) smart cities; 7) cloud computing; 8) AI chips; 9) intelligent driving; and 10) sensors. According to CICC's forecast, the market size of these 10 areas will increase by around RMB 2 trillion in China over the next decade (2021–2030). To read the full research report, click here: https://en.cicc.com/api/upload/uploadService/dowloadEx?fileId=24884&tenantId=123890

China International Capital Corporation Limited (CICC): China International Capital Corporation Limited (CICC, 03908.HK, 601995.SH) is a top-tier investment bank, founded in China in 1995, providing first-class financial services to corporates, institutions and individuals worldwide.
As the first international joint-venture investment bank in China, CICC plays a unique role in supporting China's economic reforms and liberalization by providing comprehensive one-stop domestic, overseas, and cross-border financial services including investment banking, equities, FICC, asset management, private equity investment, wealth management and research. Headquartered in Beijing, CICC has over 200 branches in Mainland China and offices in Hong Kong, Singapore, New York, London, San Francisco, Frankfurt and Tokyo. For more information about CICC, please visit www.cicc.com SOURCE China International Capital Corporation Limited
Images (1):
| CICC Publishes Thematic Research on How AI Empowers Achievement of … | https://markets.businessinsider.com/new… | 0 | Jan 01, 2026 16:00 | active | |
CICC Publishes Thematic Research on How AI Empowers Achievement of Carbon Neutrality
Description: BEIJING, July 15, 2021 /PRNewswire/ -- China International Capital Corporation Limited (CICC, 3908.HK, 601995.SH) held the Investment and Financin...
Content:
| CICC Publishes Thematic Research on How AI Empowers Achievement of … | https://markets.businessinsider.com/new… | 0 | Jan 01, 2026 16:00 | active | |
CICC Publishes Thematic Research on How AI Empowers Achievement of Carbon Neutrality
Description: BEIJING, July 14, 2021 /PRNewswire/ -- China International Capital Corporation Limited (CICC, 3908.HK, 601995.SH) held the Investment and Financin...
Content:
| Global Industrial, Enterprise, Military, and Consumer Automation and Robotics Market … | https://www.prnewswire.com:443/news-rel… | 1 | Jan 01, 2026 16:00 | active | |
Global Industrial, Enterprise, Military, and Consumer Automation and Robotics Market Report 2022-2027: Unprecedented Efficiency and Effectiveness Gains will be Realized through 5G Robotics Solutions
Description: /PRNewswire/ -- The "Automation and Robotics Market in Industrial, Enterprise, Military, and Consumer Segments by Type, Components, Hardware, Software, and...
Content:
Mar 15, 2022, 11:30 ET
DUBLIN, March 15, 2022 /PRNewswire/ -- The "Automation and Robotics Market in Industrial, Enterprise, Military, and Consumer Segments by Type, Components, Hardware, Software, and Services 2022 - 2027" report has been added to ResearchAndMarkets.com's offering. This report evaluates the global and regional robotics marketplace, including the technologies, companies, and solutions for robots in the industrial, enterprise, military, and consumer segments. The report includes detailed forecasts for robotics by robot type, components, capabilities, solutions, and connectivity for 2022 to 2027.

Select Report Findings:
- With the substantial amount of capital behind global industrial automation, the industrial robotics sector will continue a healthy growth trajectory, supported by many qualitative and quantitative benefits including cost reduction, improved quality, increased production, and improved workplace health and safety.
- In the wake of the pandemic, we see a major push for further automation and robotics, especially within the United States service sector, because many businesses see repetitive tasks as performed with greater safety, less expense, and a reduced probability of service disruption by robots rather than by human workers.
- Robotics is increasingly used to improve enterprise, industrial, and military automation. In addition, robots are finding their way into more consumer use cases as the general public's concerns fade and acceptance grows in terms of benefits versus risks.

While many consumer applications continue to be largely lifestyle-oriented, enterprise, industrial, and military organizations utilize both land-based and aerial robots for various repetitive, tedious, and/or dangerous tasks.
Adoption and usage are anticipated to increase rapidly with improvements to artificial intelligence, robotic form factors and fitness for use, cloud computing, and related business models such as robotics as a service.

The global robotics market is broadly segmented into enterprise, industrial, military, and consumer robotics. Major market segments that cross over industries include healthcare bots, unmanned aerial vehicles, and autonomous vehicles. Enterprise robotics includes the use of robots for both business-to-business and business-to-consumer services and support. Functions include internal business operations and processes, delivery of goods and services, research, analytics, and other business-specific applications.

The next decade will witness substantial influence of AI upon robotics. The next generation of robotics will include many pre-integrated AI technologies such as machine vision, voice and speech recognition, tactile sensors, and gesture controls. AI has enabled consumer robots to learn while performing a variety of tasks including cleaning, controlling home appliances, reading, performing butler services, and many more. It is anticipated that further improvement in AI and related technologies, such as cognitive computing and sensor fusion, will enable consumer robots to take on increasingly difficult tasks.

The degree to which AI capabilities in robotics optimize operational efficiency and effectiveness will largely depend on the contextual capabilities of connected devices. This means that the relationship of IoT to AI and robotics will be deeply intertwined. AI helps robots learn and become more effective, but both technologies depend on IoT networks for sharing information among devices and applications.

Longer-term, the publisher sees many robotics and automation solutions involving multiple AI types as well as integration across other key areas such as the Internet of Things (IoT) and data analytics.
The combination of AI and the IoT has the potential to dramatically accelerate the benefits of robotics for consumer, enterprise, industrial, and government market segments.

Leading industry verticals are beginning to see improved operational efficiency through the intelligent combination of AI and robotics. The long-term prospect for these technologies is that they will become embedded in many other technologies and provide autonomous decision-making on behalf of humans, both directly and indirectly, through many processes, products, and services.

The military robotics market is an important segment both from an R&D perspective (e.g. many robotics innovations are funded by government/military projects) and in terms of cross-over into business and consumer markets such as the public safety arena. The consumer robotics sector is in its infancy but is anticipated to exceed all other sectors in terms of scale, variety, and impact in the long run.

We see substantial overall industry growth across a wide range of robot types that engage in diverse tasks such as home cleaning, personalized healthcare service, home security, autonomous cars, robotic entertainment, personal care services, managing daily schedules, and various assistive tasks. A few key factors, such as the ageing population, personalization service trends, and robot mobility, will drive growth in this industry segment.

Robotics in business will accelerate as less expensive hardware improves cost structures and advances in AI deepen integration with enterprise software systems. The massive amount of data generated by robotics will create opportunities for data analytics and AI-enabled decision support systems. Enterprise users will capitalize upon new and enhanced robotics capabilities to enable new use cases and improved workflow.
Many business processes will change as the enterprise becomes savvier about the flexibility of robotics uninhibited by bandwidth constraints.

Key Topics Covered:
1.0 Executive Summary
2.0 Robotics Market Overview
3.0 Robotics and Automation Technology Trends
4.0 Robotics and Automation in Business Transformation
5.0 Robotics Companies and Solutions
6.0 Global Robotics Forecast 2022 - 2027
7.0 Industrial Robotics Market 2022 - 2027
8.0 Consumer Robotics Market 2022 - 2027
9.0 Enterprise Robotics Market 2022 - 2027
10.0 Military and Government Robotics Market 2022 - 2027
11.0 Conclusions and Recommendations

For more information about this report visit https://www.researchandmarkets.com/r/idfunf
Media Contact: Research and Markets, Laura Wood, Senior Manager, [email protected]
For E.S.T Office Hours Call +1-917-300-0470
For U.S./CAN Toll Free Call +1-800-526-8630
For GMT Office Hours Call +353-1-416-8900
U.S. Fax: 646-607-1904
Fax (outside U.S.): +353-1-481-1716
SOURCE Research and Markets
Images (1):
| Nvidia's CEO Explains How Its New AI Models Could Work … | https://www.cnet.com/tech/mobile/nvidia… | 1 | Jan 01, 2026 16:00 | active | |
Nvidia's CEO Explains How Its New AI Models Could Work on Future Smart Glasses - CNET
Description: Nvidia's new Cosmos model is another sign that devices and machines are getting better at understanding their environments.
Content:
Tech gadgets -- whether they be phones, robots or autonomous vehicles -- are getting better at understanding the world around us, thanks to AI. That message rang loud and clear throughout 2024 and became even louder at CES 2025, where chipmaker Nvidia unveiled a new AI model (a CES award winner) for understanding the physical world and a family of large language models for powering future AI agents. Nvidia CEO Jensen Huang is positioning these world-foundational models as ideal for robots and autonomous vehicles. There's another class of devices that could benefit from better real-world understanding: smart glasses.

Tech-enabled eyewear like Meta's Ray-Bans is quickly becoming the hot new AI gadget, with shipments of Meta's spectacles crossing the 1 million mark in November, according to Counterpoint Research. Such devices seem like the ideal vessel for AI agents: AI helpers that can understand the world around you, processing speech and visual input from cameras to help you get things done rather than just answering questions. Huang didn't say whether Nvidia-powered smart glasses are on the horizon, but he did explain how the company's new model could power future smart glasses if partners were to adopt the technology for that purpose. "The use of AI as it gets connected to wearables and virtual presence technology like glasses, all of that is super exciting," Huang said in response to a question about whether its models would work on smart glasses during a press Q&A at CES. Huang pointed to cloud processing as an option, which would mean queries that use Nvidia's Cosmos model could be handled in the cloud rather than on the device itself.
Compact devices like smartphones often use this method to lighten the processing load when running demanding AI models. If a device maker wanted to create glasses that could leverage Nvidia's AI models on-device rather than relying on the cloud, Huang said, Cosmos would distill its knowledge into a smaller model that's less generalized and optimized for specific tasks. Nvidia's new Cosmos model is being touted as a platform to gather data about the physical world to train models for robots and self-driving cars -- similar to the way a large language model learns to generate text responses after being trained on written media. "The ChatGPT moment for robotics is coming," Huang said in a press release.

Nvidia also announced a set of new AI models built with Meta's Llama technology called Llama Nemotron, which is designed to accelerate the development of AI agents. It's interesting to think about how these AI tools and models could potentially be applied to smart glasses too. A recent Nvidia patent filing fueled speculation about upcoming smart glasses, although the chipmaker hasn't made any announcements about future products in that space. Nvidia's new models and Huang's comments come as Google, Samsung and Qualcomm announced last month that they're building a new mixed-reality platform for smart glasses and headsets called Android XR, hinting that smart glasses could soon become more prominent. Several new types of smart glasses were also on display at CES 2025, such as the RayNeo X3 Pro and Halliday smart glasses. The International Data Corporation also predicted in September that shipments of smart glasses would grow by 73.1% in 2024. Nvidia's moves are another space to watch.
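The distillation step Huang describes (compressing a large cloud model into a smaller on-device one) is, in general terms, trained with a soft-label objective: the small "student" model is pushed to match the temperature-softened output distribution of the large "teacher". The sketch below shows that classic knowledge-distillation loss in plain Python; it is a generic illustration of the technique, not NVIDIA's actual Cosmos pipeline.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature softens the
    # distribution so the student also learns from the teacher's
    # relative rankings of non-top classes ("dark knowledge").
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student outputs.

    Generic knowledge-distillation objective; minimizing it over
    training data pulls the small model's behavior toward the
    large model's.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that matches the teacher has zero loss; a mismatched
# student has a strictly positive loss that training drives down.
print(distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0]))  # 0.0
print(distillation_loss([2.0, 0.5, -1.0], [-1.0, 0.5, 2.0]))  # positive
```

In a real pipeline this term would be combined with a hard-label loss and backpropagated through the student network; the scalar version here just makes the objective concrete.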
Images (1):
| New Microsoft AI tech can simulate our voice; should we … | https://www.trtworld.com/life/new-micro… | 1 | Jan 01, 2026 16:00 | active | |
New Microsoft AI tech can simulate our voice; should we be concerned? - TRT World
Description: The tech giant has unveiled VALL-E, a new text-to-speech AI that can simulate anyone's voice with a 3-second sample of human speech.
Content:
A "neural codec language model", VALL-E uses discrete codes derived from an off-the-shelf neural audio codec model to synthesize high-quality personalised speech from only a 3-second recording of an unseen speaker. The AI is trained on 60,000 hours of English speech from over 7,000 unique speakers. All this data is taken from Libri-Light, the Meta-owned audio library that collects spoken English audio. It can also imitate the speaker's emotional tone and acoustic environment. "Experiment results show that VALL-E significantly outperforms the state-of-the-art zero-shot TTS system in terms of speech naturalness and speaker similarity," say Microsoft researchers in their paper. The three-second voice input gives better results when it resembles other samples in the training data, which is why VALL-E's training data should become more diverse. Microsoft says the training data will be scaled up to improve prosody, speaking style, and speaker similarity.

How can we benefit from VALL-E?
For now, VALL-E can only convert text into speech in the chosen voice; it can't create new content. Its creators are hopeful that VALL-E can provide various benefits in speech editing and audio content creation. Stephen Hawking's use of a text-to-speech generator to continue his work while living with motor neuron disease (ALS) showed the world one of the greatest benefits this kind of technology can offer. VALL-E could be used for simultaneous translation, or to recreate the voices of loved ones who have passed away. Creating audiobooks would be a lot easier and faster with VALL-E, and one could create a voice for any written piece or text message in a short time. For all these uses and more, we need to wait for Microsoft to open VALL-E to public use. Microsoft has not yet said when the new AI will be available for public consumption.
VALL-E might bring risks
While the question of how to use AI technologies safely and ethically is being asked more often than ever these days, people have expressed ethical concerns over newly launched systems like ChatGPT, Lensa AI, and VALL-E. ChatGPT, a new chatbot AI that can handle natural language tasks such as text generation and language translation, started debates about students committing plagiarism by using it for their homework. At the same time, Lensa AI, an app that uses algorithms to turn ordinary photos into artistic renderings, has led to ethical questions about artistic production built on other artists' works; many argue it cannot replace the human artists who make digital art. VALL-E, likewise, carries potential for misuse that could expose users to criminal liability, such as spoofing voice identification or impersonating a specific speaker. Impersonating people's voices without their consent might fuel mischief and deception, which could then lead to social harm. Similar to Lensa's risk of displacing real artists, VALL-E also raises concerns over the ethics of art: if music production companies made the AI sing new songs without the consent of the singer whose voice is used, the technology would lose much of its appeal.

Microsoft's response to concerns
Microsoft says it is aware of these concerns and the possible risks such systems might bring; it has apologised in the past for its chatbot Tay's offensive tweets. The researchers who created VALL-E stated in their paper that a mechanism could be built to mitigate such risks. "Since VALL-E could synthesize speech that maintains speaker identity, it may carry potential risks in misuse of the model, such as spoofing voice identification or impersonating a specific speaker," they said in the paper.
"To mitigate such risks, it is possible to build a detection model to discriminate whether an audio clip was synthesized by VALL-E. We will also put Microsoft AI Principles into practice when further developing the models," they added. Only once the project is open to public use will the company see whether it can build such a detection model and, if so, how much of the risk it mitigates, they said.
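The detection model the researchers describe is, at heart, a binary classifier that separates real from synthesized audio. As a hedged illustration only — real deepfake-audio detectors use learned neural features, and the clips, the smoothness statistic, and the threshold rule below are all invented for this sketch — a minimal version might look like:

```python
import math
import random

def smoothness(clip):
    """Mean absolute sample-to-sample difference: one crude acoustic statistic."""
    return sum(abs(b - a) for a, b in zip(clip, clip[1:])) / (len(clip) - 1)

def fit_threshold(real_clips, fake_clips):
    """Place the decision boundary midway between the two class means."""
    real_mean = sum(map(smoothness, real_clips)) / len(real_clips)
    fake_mean = sum(map(smoothness, fake_clips)) / len(fake_clips)
    return (real_mean + fake_mean) / 2, real_mean < fake_mean

def classify(clip, threshold, fake_is_high):
    """Label a clip 'synthetic' if its statistic falls on the fake side."""
    return "synthetic" if (smoothness(clip) > threshold) == fake_is_high else "real"

rng = random.Random(1)
# Toy data: "real" speech is a smooth sine; "synthetic" is the same sine
# plus jitter standing in for synthesis artifacts.
real = [[math.sin(i / 5) for i in range(200)] for _ in range(20)]
fake = [[math.sin(i / 5) + rng.gauss(0, 0.3) for i in range(200)] for _ in range(20)]

thr, fake_high = fit_threshold(real[:10], fake[:10])        # "train"
preds = [classify(c, thr, fake_high) for c in real[10:] + fake[10:]]  # "test"
labels = ["real"] * 10 + ["synthetic"] * 10
accuracy = sum(p == t for p, t in zip(preds, labels)) / len(labels)
```

On these toy clips the artifact signal is obvious, so the classifier is near-perfect; the hard open question, which the researchers acknowledge, is whether any statistic separates the classes once the synthesizer is good enough.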
Images (1):
| Speech Recognition Market Size To Hit USD 48.8 Billion at … | https://www.globenewswire.com/news-rele… | 1 | Jan 01, 2026 16:00 | active | |
Speech Recognition Market Size To Hit USD 48.8 Billion at a Description: Speech Recognition Market Growth Accelerates by Rising AI Integrations and Global Technology Adaptations... Content:
September 27, 2022 12:15 ET | Source: Market Research Future Market Research Future New York, US, Sept. 27, 2022 (GLOBE NEWSWIRE) -- According to a comprehensive research report by Market Research Future (MRFR), “Speech Recognition Market, By Type, Technology, Verticals - Forecast Till 2030”, the market is set to garner USD 48.8 billion by 2030, registering approximately 21.30% CAGR during the review period (2022-2030). Speech Recognition Market Overview The speech recognition market is witnessing rapid revenue growth due to increasing use in education sectors globally. Besides, the rising demand for accurate and easy-to-use speech recognition APIs suitable for multiple fields and languages fosters market growth. The brisk digitization of the speech-to-text industry impacts market shares positively, and other industries moving toward digital transformation boost the speech recognition market size. Players active in the speech recognition market are, Get Free Sample PDF Brochure: https://www.marketresearchfuture.com/sample_request/1815 Users of speech recognition technologies are demanding more than ever. New and changing industrial safety regulations are forcing industries to change workflows and adapt. As a result, the requirements of speech recognition solutions have expanded beyond simple text transcription to advanced capabilities like punctuation, speaker notes, global language packs, organizational reforms, and new vocabulary with frequent builds. With the growing adoption of voice-enabled learning tools that can address equity and bias issues, the speech recognition market is likely to gain significant traction in the next few years. Over the past several years, the introduction of breakthrough technologies and digital transformation across industries has grown exponentially. This, alongside the resultant scale-up of the speech-to-text industry, is driving the need to meet these new technological requirements.
Also, new self-supervised learning solutions that enable speech recognition engines to learn from unstructured data across the web are increasingly coming forward. Self-supervised learning allows access to a large range of unlabeled data, including social media, offers a wider variety of voices, and eliminates the need for human supervision. Industry Trends A number of research efforts aim to drive advances in speech recognition technology so that it can handle diverse accents & dialects clearly. Besides, the rising use of voice-enabled learning tools that can address equity and bias creates vast opportunities. The growing adoption of speech recognition technology in education drives the speech recognition market growth. Additionally, significant investments in developing high-performance, efficient speech recognition boost the market size. Large upgrades in major offerings have enabled educational applications that are accelerating demand for more joyful and frictionless voice experiences, lifting the speech recognition market value. These advanced educational tools have boosted the interest of developers, educators, and researchers, further widening access to educational resources and minimizing the risk of implicit bias. Report Scope: Browse In-depth Market Research Report (100 Pages) on Speech Recognition Market: https://www.marketresearchfuture.com/reports/speech-recognition-market-1815 Speech Recognition Market Regional Analysis North America leads the global market for speech recognition technology. Growing implementation of speech recognition applications in cell phones escalates the market value. Besides, the expanding use of this technology for obtaining customer consent/acknowledgment via speech/voice in mobile banking and in consumer & IoT gadgets boosts the speech recognition market size. Rapid developments and rising applications of this technology substantiate market revenues.
Moreover, the large presence of major technology providers and high spending on automotive infotainment systems increase market sales of speech recognition. Europe is another profitable market for speech recognition, driven by rising interest in speech and voice recognition. Additionally, voice recognition technology is seeing significant advances and expanded applications in consumer gadgets and retail, driving speech recognition market demand. The region's vast automotive sector, with its increasing use of connected devices and robotics, creates substantial market demand. APAC is an emerging market for speech recognition systems. The growing number of developments and the use of voice-enabled gadgets in the automotive and medical businesses push market growth. Furthermore, augmenting demand for standalone products for language recognition, speech-to-text, machine translation, and transliteration provides significant market opportunities. China, Japan, and Singapore are major speech recognition markets supporting the growth of the regional market. Speech Recognition Market Segments The speech recognition market is segmented by type, technology, vertical, and region. The type segment is bifurcated into speaker-dependent and speaker-independent. The technology segment is bifurcated into AI-based and non-AI-based. The vertical segment is divided into military, automotive, healthcare, and others. The region segment covers the Americas, Asia-Pacific, Middle East & Africa, Europe, and rest-of-the-world. Ask for Discount: https://www.marketresearchfuture.com/sample_request/1815 Speech Recognition Market Competitive Analysis The highly competitive speech recognition market appears fragmented due to the presence of several well-established players. To gain a competitive advantage, industry players adopt strategic approaches such as collaborations, mergers & acquisitions, expansions, and product/technology launches.
Speech recognition technology enables learning and play experiences. Therefore, speech recognition market players are integrating the technology with AI to develop efficient literacy solutions that quickly and accurately assess reading and language skills. This gives teaching professionals a seamless, scalable, and more accurate approach to assessing students' reading and language-learning progress. Further, market players make substantial investments to drive their R&D activities and expansion plans. For instance, on Sep. 21, 2022, Nvidia Corporation, an American multinational technology company, unveiled new languages and other upgrades to its Riva speech AI platform for enterprise services. Last year, Nvidia launched the Riva Custom Voice neural-based text-to-speech service that can create lifelike human voices. Talk to Expert: https://www.marketresearchfuture.com/check-discount/1815 Nvidia Riva now features seven language models, tools for distinguishing speakers in conversations, and more customization options. Nvidia's Riva speech AI software development kit (SDK) features the speech recognition and text-to-speech functions enterprises require, augmenting existing automatic speech recognition with domain-specific customization. Related Reports: Global Speech Analytics Market, by Type, by Deployment Type, by End-User, by Organization Size - Forecast 2030 Far-Field Speech and Voice Recognition Market Research Report: By Component, Microphone Solution, Application - Forecast till 2030 Voice Recognition System Market for Automotive Information Report by Vehicle Type, Technology, Application and by Regions - Global Forecast To 2030 VoLTE Technology Market Research Report, By Device, Technology - Forecast till 2030 About Market Research Future: Market Research Future (MRFR) is a global market research company that takes pride in its services, offering complete and accurate analysis of diverse markets and consumers worldwide.
Market Research Future has the distinguished objective of providing optimal quality and granular research to clients. Our market research studies — segmented by products, services, technologies, applications, end users, and market players for global, regional, and country-level markets — enable our clients to see more, know more, and do more, helping answer their most important questions. Follow Us: LinkedIn | Twitter
Images (1):
| Drone advances in Ukraine could bring dawn of killer robots … | https://nationalpost.com/pmn/news-pmn/d… | 1 | Jan 01, 2026 16:00 | active | |
Drone advances in Ukraine could bring dawn of killer robots | National PostURL: https://nationalpost.com/pmn/news-pmn/drone-advances-in-ukraine-could-bring-dawn-of-killer-robots Description: KYIV, Ukraine (AP) — Drone advances in Ukraine have accelerated a long-anticipated technology trend that could soon bring the world’s first fully autonomous fightin… Content:
KYIV, Ukraine (AP) — Drone advances in Ukraine have accelerated a long-anticipated technology trend that could soon bring the world’s first fully autonomous fighting robots to the battlefield, inaugurating a new age of warfare. The longer the war lasts, the more likely it becomes that drones will be used to identify, select and attack targets without help from humans, according to military analysts, combatants and artificial intelligence researchers. That would mark a revolution in military technology as profound as the introduction of the machine gun. Ukraine already has semi-autonomous attack drones and counter-drone weapons endowed with AI. Russia also claims to possess AI weaponry, though the claims are unproven. But there are no confirmed instances of a nation putting into combat robots that have killed entirely on their own. Experts say it may be only a matter of time before either Russia or Ukraine, or both, deploy them. “Many states are developing this technology,” said Zachary Kallenborn, a George Mason University weapons innovation analyst. “Clearly, it’s not all that difficult.” The sense of inevitability extends to activists, who have tried for years to ban killer drones but now believe they must settle for trying to restrict the weapons’ offensive use. Ukraine’s digital transformation minister, Mykhailo Fedorov, agrees that fully autonomous killer drones are “a logical and inevitable next step” in weapons development.
He said Ukraine has been doing “a lot of R&D in this direction.” “I think that the potential for this is great in the next six months,” Fedorov told The Associated Press in a recent interview. Ukrainian Lt. Col. Yaroslav Honchar, co-founder of the combat drone innovation nonprofit Aerorozvidka, said in a recent interview near the front that human war fighters simply cannot process information and make decisions as quickly as machines. Ukrainian military leaders currently prohibit the use of fully independent lethal weapons, although that could change, he said. “We have not crossed this line yet — and I say ‘yet’ because I don’t know what will happen in the future,” said Honchar, whose group has spearheaded drone innovation in Ukraine, converting cheap commercial drones into lethal weapons. Russia could obtain autonomous AI from Iran or elsewhere. The long-range Shahed-136 exploding drones supplied by Iran have crippled Ukrainian power plants and terrorized civilians but are not especially smart. Iran has other drones in its evolving arsenal that it says feature AI. Without a great deal of trouble, Ukraine could make its semi-autonomous weaponized drones fully independent in order to better survive battlefield jamming, their Western manufacturers say. Those drones include the U.S.-made Switchblade 600 and the Polish Warmate, which both currently require a human to choose targets over a live video feed. AI finishes the job. The drones, technically known as “loitering munitions,” can hover for minutes over a target, awaiting a clean shot. “The technology to achieve a fully autonomous mission with Switchblade pretty much exists today,” said Wahid Nawabi, CEO of AeroVironment, its maker. That will require a policy change — to remove the human from the decision-making loop — that he estimates is three years away. Drones can already recognize targets such as armored vehicles using cataloged images.
But there is disagreement over whether the technology is reliable enough to ensure that the machines don’t err and take the lives of noncombatants. The AP asked the defense ministries of Ukraine and Russia if they have used autonomous weapons offensively — and whether they would agree not to use them if the other side similarly agreed. Neither responded. If either side were to go on the attack with full AI, it might not even be a first. An inconclusive U.N. report suggested that killer robots debuted in Libya’s internecine conflict in 2020, when Turkish-made Kargu-2 drones in full-automatic mode killed an unspecified number of combatants. A spokesman for STM, the manufacturer, said the report was based on “speculative, unverified” information and “should not be taken seriously.” He told the AP the Kargu-2 cannot attack a target until the operator tells it to do so. Fully autonomous AI is already helping to defend Ukraine. Utah-based Fortem Technologies has supplied the Ukrainian military with drone-hunting systems that combine small radars and unmanned aerial vehicles, both powered by AI. The radars are designed to identify enemy drones, which the UAVs then disable by firing nets at them — all without human assistance. The number of AI-endowed drones keeps growing. Israel has been exporting them for decades. Its radar-killing Harpy can hover over anti-aircraft radar for up to nine hours waiting for them to power up. Other examples include Beijing’s Blowfish-3 unmanned weaponized helicopter. Russia has been working on a nuclear-tipped underwater AI drone called the Poseidon. The Dutch are currently testing a ground robot with a .50-caliber machine gun. Honchar believes Russia, whose attacks on Ukrainian civilians have shown little regard for international law, would have used killer autonomous drones by now if the Kremlin had them. “I don’t think they’d have any scruples,” agreed Adam Bartosiewicz, vice president of WB Group, which makes the Warmate. 
AI is a priority for Russia. President Vladimir Putin said in 2017 that whoever dominates that technology will rule the world. In a Dec. 21 speech, he expressed confidence in the Russian arms industry’s ability to embed AI in war machines, stressing that “the most effective weapons systems are those that operate quickly and practically in an automatic mode.” Russian officials already claim their Lancet drone can operate with full autonomy. “It’s not going to be easy to know if and when Russia crosses that line,” said Gregory C. Allen, former director of strategy and policy at the Pentagon’s Joint Artificial Intelligence Center. Switching a drone from remote piloting to full autonomy might not be perceptible. To date, drones able to work in both modes have performed better when piloted by a human, Allen said. The technology is not especially complicated, said University of California-Berkeley professor Stuart Russell, a top AI researcher. In the mid-2010s, colleagues he polled agreed that graduate students could, in a single term, produce an autonomous drone “capable of finding and killing an individual, let’s say, inside a building,” he said. An effort to lay international ground rules for military drones has so far been fruitless. Nine years of informal United Nations talks in Geneva made little headway, with major powers including the United States and Russia opposing a ban. The last session, in December, ended with no new round scheduled. Washington policymakers say they won’t agree to a ban because rivals developing drones cannot be trusted to use them ethically. Toby Walsh, an Australian academic who, like Russell, campaigns against killer robots, hopes to achieve a consensus on some limits, including a ban on systems that use facial recognition and other data to identify or attack individuals or categories of people. 
“If we are not careful, they are going to proliferate much more easily than nuclear weapons,” said Walsh, author of “Machines Behaving Badly.” “If you can get a robot to kill one person, you can get it to kill a thousand.” Scientists also worry about AI weapons being repurposed by terrorists. In one feared scenario, the U.S. military spends hundreds of millions writing code to power killer drones. Then it gets stolen and copied, effectively giving terrorists the same weapon. To date, the Pentagon has neither clearly defined “an AI-enabled autonomous weapon” nor authorized a single such weapon for use by U.S. troops, said Allen, the former Defense Department official. Any proposed system must be approved by the chairman of the Joint Chiefs of Staff and two undersecretaries. That’s not stopping the weapons from being developed across the U.S. Projects are underway at the Defense Advanced Research Projects Agency, military labs, academic institutions and in the private sector. The Pentagon has emphasized using AI to augment human warriors. The Air Force is studying ways to pair pilots with drone wingmen. A booster of the idea, former Deputy Defense Secretary Robert O. Work, said in a report last month that it “would be crazy not to go to an autonomous system” once AI-enabled systems outperform humans — a threshold that he said was crossed in 2015, when computer vision eclipsed that of humans. Humans have already been pushed out in some defensive systems. Israel’s Iron Dome missile shield is authorized to open fire automatically, although it is said to be monitored by a person who can intervene if the system goes after the wrong target. Multiple countries, and every branch of the U.S. military, are developing drones that can attack in deadly synchronized swarms, according to Kallenborn, the George Mason researcher. So will future wars become a fight to the last drone? 
That’s what Putin predicted in a 2017 televised chat with engineering students: “When one party’s drones are destroyed by drones of another, it will have no other choice but to surrender.” —— Frank Bajak reported from Boston. Associated Press journalists Tara Copp in Washington, Garance Burke in San Francisco and Suzan Fraser in Turkey contributed to this report. —— Follow the AP’s coverage of the war at https://apnews.com/hub/russia-ukraine —— This story has been updated to correct when the U.N. report was issued. It came out in 2021, not last year.
Images (1):
| Drone advances in Ukraine could bring dawn of killer robots | https://indianexpress.com/article/world… | 0 | Jan 01, 2026 16:00 | active | |
Drone advances in Ukraine could bring dawn of killer robotsURL: https://indianexpress.com/article/world/drone-advances-ukraine-dawn-killer-robots-8359556/ Description: The longer the war lasts, the more likely it becomes that drones will be used to identify, select and attack targets without help from humans, according to mili... Content: |
|||||
| You'll Freak When You Watch Ameca The Humanoid AI Robot … | https://hothardware.com/news/watch-amec… | 1 | Jan 01, 2026 16:00 | active | |
You'll Freak When You Watch Ameca The Humanoid AI Robot Come To Life | HotHardwareURL: https://hothardware.com/news/watch-ameca-come-to-life Description: Engineered Arts' robot Ameca has people both amazed and fearful as images from movies like I, Robot are brought to mind. Content:
Tim Sweezy
Images (1):
| Watch: Engineer Integrates ChatGPT With Robot Dog To Make It … | https://www.ndtv.com/feature/watch-engi… | 0 | Jan 01, 2026 16:00 | active | |
Watch: Engineer Integrates ChatGPT With Robot Dog To Make It TalkDescription: With the help of ChatGPT, the robot dog named Spot was seen communicating with people and answering a range of questions. Content: |
|||||
| Drone Advances in Ukraine Could Bring Dawn of Killer Robots | https://www.theyeshivaworld.com/news/he… | 0 | Jan 01, 2026 16:00 | active | |
Drone Advances in Ukraine Could Bring Dawn of Killer RobotsDescription: Drone advances in Ukraine have accelerated a long-anticipated technology trend that could soon bring the world’s first fully autonomous fighting robots to Content: |
|||||
| Drone advances in Ukraine could bring dawn of killer robots | https://www.ctvnews.ca/world/drone-adva… | 0 | Jan 01, 2026 16:00 | active | |
Drone advances in Ukraine could bring dawn of killer robotsURL: https://www.ctvnews.ca/world/drone-advances-in-ukraine-could-bring-dawn-of-killer-robots-1.6215617 Description: Drone advances in Ukraine have accelerated a long-anticipated technology trend that could soon bring the world's first fully autonomous fighting robots to the b... Content: |
|||||