Description: Boston Dynamics has moved its humanoid robot Atlas to a new generation: the hydraulically actuated model has been replaced by a new, fully electric design.
Description: The South Korean government plans to unveil its industry strategy for advanced robots before the end of the first half of the year as the country looks to foster its robotics sector and competitiveness on the global stage.
Content:
Business Minister visits Boston Dynamics, Hyundai Motor’s robotics pioneer Published : April 30, 2023 - 14:59:05 The South Korean government plans to unveil its industry strategy for advanced robots before the end of the first half of the year as the country looks to foster its robotics sector and competitiveness on the global stage, according to Trade Minister Lee Chang-yang. “The Korean government will provide support for strengthening the competitiveness of cutting-edge robot companies, creating a new market and establishing global footholds,” said Lee during his visit to Boston Dynamics, Hyundai Motor Group’s global robotics arm, in Waltham, Massachusetts, on Friday. Lee was part of the South Korean delegation accompanying President Yoon Suk Yeol on his state visit to the United States last week to celebrate the 70th anniversary of the South Korea-US alliance. “I’m looking forward to the synergy created by the strategic partnership between Boston Dynamics, which has world-leading technology, and Hyundai Motor Group, which has excellent manufacturing capabilities,” Lee said. He added that the government’s upcoming industry strategy for advanced robots will include plans to expand the level and scope of robotics technology cooperation between Korea and the US. The minister’s remarks echoed the commitment President Yoon and US President Joe Biden made in the Washington Declaration, which underscored a shared drive to lead innovation in cutting-edge sectors beyond diplomacy and security.
Boston Dynamics’ officials demonstrated for Lee the operation of the company’s signature robots: Spot, a four-legged walking robot capable of carrying out surveillance and exploration missions in extreme conditions; Atlas, a humanoid robot that can move like a human thanks to its whole-body dynamic balancing technology and gripper; and Stretch, a flexible autonomous mobile robot that can move 600 boxes weighing up to 23 kilograms per hour to make warehouse operations more efficient. A Boston Dynamics official said, “The government’s long-term investment and support are extremely important for developing innovative robots and invigorating the industry. We hope for active support from the Korean government in regard to Hyundai Motor Group’s expansion plan of (the) robotics business.” Hyundai Motor Group acquired Boston Dynamics in 2021, as the auto giant designated robotics as one of its core businesses for future innovation. The company operates an independent robotics lab to develop industrial support robots, wearable robots for medical purposes and service robots, and recently showcased its newly developed automatic charging robot for electric vehicles at the 2023 Seoul Mobility Show, held from March 31 to April 9. Hyundai Motor has also identified artificial intelligence as a pillar of its future business, including robotics, setting up the Boston Dynamics AI Institute last year to secure next-generation AI technology that improves the movement and recognition capabilities of future robots. According to the ministry’s projection, the global robot market, currently estimated at $28.2 billion, is expected to jump to $83.1 billion in 2030, a compound annual growth rate of about 13 percent.
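The ministry's growth figures are easy to sanity-check. Assuming a nine-year horizon (roughly 2021 to 2030; the base year is my assumption, the article does not state it), the implied compound annual growth rate lands near the quoted 13 percent:

```python
# Sanity-check of the ministry's projection quoted above:
# $28.2B growing to $83.1B. The nine-year horizon is an assumption,
# not stated in the article.
base, target, years = 28.2, 83.1, 9

# CAGR such that base * (1 + r)^years == target
implied_cagr = (target / base) ** (1 / years) - 1
print(f"implied CAGR: {implied_cagr:.1%}")
```

The implied rate comes out a little under 13 percent, so the article's rounded figure is internally consistent under that horizon assumption.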
The government decided in March to push for regulatory revisions to nurture the robotics industry as a future core growth engine for the country, unveiling a road map to determine which restrictions should be amended in four areas: expanding robots’ mobility, allowing robots in the safety industry, creating more collaborative and supportive robots, and establishing more robot-friendly infrastructure.
Address: Huam-ro 4-gil 10, Yongsan-gu, Seoul, Korea. Tel: +82-2-727-0114. Online newspaper registration No.: Seoul 아03711. Date of registration: 2015.04.28. Publisher/Editor: Choi Jin-Young. Juvenile Protection Manager: Choi He-suk. The Korea Herald by Herald Corporation. Copyright Herald Corporation. All Rights Reserved.
Description: After almost 11 years in development, Boston Dynamics announces the retirement of its HD Atlas humanoid robot in a tribute video on the company's YouTube channel.
Content:
Images (10):
Boston Dynamics shows a robot capable of learning human behaviors
Description: Trained on data from human actions, the robot now learns on its own, adapts its movements and performs tasks once seen as impossible
Content:
Boston Dynamics returned to the spotlight this week with a new video of Atlas, its humanoid robot. Atlas had been out of the picture for a while as engineers worked on improvements, but it has now reappeared, and with news. In partnership with the Toyota Research Institute (TRI), the company developed what it calls a Large Behavior Model (LBM). The technology works as a "digital brain" trained on datasets of human actions, with the aim of letting robots understand complex everyday behaviors, according to Interesting Engineering. This now happens automatically: the robot learns on its own and refines its movements with the information it collects. According to Boston Dynamics, the LBM makes it possible to add new functions to machines like Atlas without writing a single line of code. This represents a leap in the development of humanoid robots: they can now receive behavioral updates almost instantly, much as phone apps do. Boston Dynamics published a video on its YouTube channel showing Atlas performing tasks until now considered challenging for robots. Among other actions, it opens a basket, removes several objects and transfers them to a larger bin; it can then analyze the shape of each item and place them neatly on a shelf. The demonstration also included resilience tests: while Atlas carried out its tasks, a person deliberately interfered, pushing it and creating obstacles. Despite the disruption, the robot managed to complete its missions. For Scott Kuindersma, vice president of robotics research at Boston Dynamics, this is just a glimpse of the future of general-purpose robots.
He notes that training a single neural network to handle many complex manipulation tasks opens the way to more versatile machines, able to act in diverse situations with precision, dexterity and strength. Russ Tedrake, vice president of Behavior Models at TRI, adds that the main value proposition of humanoids lies precisely in their versatility: traditional programming could not scale to the number of tasks required, and with LBMs that barrier is starting to fall. Matheus Labourdette is a writer at Olhar Digital. Layse Ventura is a journalist with nearly 20 years of experience as a reporter, copywriter and SEO specialist.
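The training recipe sketched above, a policy learned from recorded human actions, is at its core behavior cloning: supervised learning from observed states to demonstrated actions. A minimal toy sketch of that idea follows; the linear policy and synthetic "demonstrations" are illustrative assumptions, not the actual LBM architecture.

```python
import numpy as np

# Toy behavior cloning: learn a policy that maps observed states to the
# actions a demonstrator took. Illustrative only; not Boston Dynamics/TRI code.
rng = np.random.default_rng(0)

# Fake demonstration data: the "expert" acts via a hidden linear rule.
true_W = np.array([[0.5, -1.0], [2.0, 0.3]])
states = rng.normal(size=(500, 2))   # observed states
actions = states @ true_W.T          # demonstrated actions

# Behavior cloning = supervised regression from states to actions.
x, *_ = np.linalg.lstsq(states, actions, rcond=None)
W_hat = x.T

# The cloned policy should reproduce the demonstrator on unseen states.
test_states = rng.normal(size=(10, 2))
err = np.abs(test_states @ W_hat.T - test_states @ true_W.T).max()
print(f"max action error: {err:.2e}")
```

Real systems replace the linear map with a large neural network and the synthetic data with teleoperated human demonstrations, but the supervised objective is the same.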
Images (1):
Boston Dynamics unveils ‘creepy’ new fully electric humanoid robot - …
Description: Boston Dynamics' new Atlas robot has earned admiration for its impressive engineering feat while also becoming the subject of mockery online.
Content:
To many, the sight is a combination of mind-blowing and nightmarish: a humanoid robot, lying on the floor, before it flips its legs over its metallic body to stand at full height. In a new video released by Boston Dynamics on Wednesday, the company — famous for its nightmare-fuel videos showcasing robots on obstacle courses and dancing, or robot dogs racing through a forest — unveiled its latest robot, called Atlas. Atlas, which is “designed for real-world applications,” is the latest iteration of the company’s humanoid robots and its first fully electric model. Prior to this model, an earlier version of Atlas was hydraulic-powered. In its social media post, Boston Dynamics seemingly took aim at Tesla’s own humanoid robot announcement from 2021, when Elon Musk unveiled the product using an actor breakdancing in a white morphsuit. “We promise this is not a person in a bodysuit,” Boston Dynamics wrote on X. We promise this is not a person in a bodysuit. https://t.co/S9FgfpqvrW pic.twitter.com/G30sXHQ93C — Boston Dynamics (@BostonDynamics) April 17, 2024 “In the months and years ahead, we’re excited to show what the world’s most dynamic humanoid robot can really do — in the lab, in the factory, and in our lives,” Boston Dynamics wrote in a blog post. “The electric version of Atlas will be stronger, with a broader range of motion than any of our previous generations,” the company boasted. 
Where the hydraulic Atlas could lift and maneuver heavy or irregularly shaped objects, the fully electric Atlas will do the same but with “several new gripper variations to meet a diverse set of expected manipulation needs in customer environments.” The new Atlas will also include the latest AI and machine learning tools, including reinforcement learning and computer vision, which will involve Atlas learning from its own actions and rewards. Can't trip Atlas up! Our humanoid robot gets ready for real work combining strength, perception, and mobility. pic.twitter.com/f7mcnvbw3L — Boston Dynamics (@BostonDynamics) February 5, 2024 Hyundai, which owns Boston Dynamics, invested in the fully electric Atlas model and will test the robot’s applications in their automotive manufacturing plants. Boston Dynamics said the testing is expected to take place “over the next few years.” This latest Atlas iteration is also more agile than its predecessor and “will move in ways that exceed human capabilities.” However, the robot’s inhuman movement is the very thing that has left some people unnerved by Atlas. Perhaps unsurprisingly, Atlas has earned admiration for its impressive engineering feat while also becoming the subject of mockery online. Tech YouTuber Marques Brownlee referenced a meme and wrote Atlas was “giving ‘call an ambulance… but not for me.” Boston Dynamic just unveiled their newest Atlas robot. This is not a render. Oh my god It's giving "call an ambulance… but not for me"https://t.co/mPUKDlW7WY pic.twitter.com/el3DKlTJT5 — Marques Brownlee (@MKBHD) April 17, 2024 The Boston Dynamics "Atlas" robot 🤖 is going to murder us all pic.twitter.com/rfUkHtlKhl — David Leavitt 🎲🎮🧙♂️🌈 (@David_Leavitt) April 17, 2024 the video for the new electric Atlas from Boston Dynamics is … quite something. 
practically, it's showing off how human-shaped robots are not limited by the human body's degrees of freedom, but it feels like a videogame introduction to a robot bossfight pic.twitter.com/8OvLTFsUZ0 — James Vincent (@jjvincent) April 17, 2024 Firstly, I'm sorry, I couldn't resist it. Boston Dynamics Atlas Robot Launch party. pic.twitter.com/lfb3cckrZ3 — Dan Dawson (@Dan_Dawson__) April 18, 2024 “boston dynamics robot with dune music isn’t real, it can’t hurt you” boston dynamics robot with dune music: pic.twitter.com/fvSjtrMYzo — frye (@___frye) April 17, 2024 The hydraulic version of Atlas has been retired. Boston Dynamics is one of the world’s leading robotics companies, with several of its models functioning in real-world applications. The company’s robot dog, called Spot, has been utilized in hazardous inspections, construction site monitoring and alongside police forces.
Images (1):
Boston Dynamics shows the Atlas robot's new skills on video
Description: Boston Dynamics has published a holiday video in which three Spot robot dogs decorate a Christmas tree. Two of the robots form a kind of staircase, the third climbs up them and uses its manipulator arm to place a bow on the top of the tree. A moment later, the whole structure falls over.
Content:
Boston Dynamics has published a holiday video in which three Spot robot dogs decorate a Christmas tree. Two of the robots form a kind of staircase, the third climbs up them and uses its manipulator arm to place a bow on the top of the tree. A moment later, the whole structure falls over. Thanks to numerous videos, the Spot robot dogs have become Boston Dynamics' most popular model. Yet not everyone understands what these machines can actually do. It would be wrong to assume they perform all of these operations autonomously: in every case, controlling Spot requires an operator with a remote. It is the operator who sets the dog's direction of movement, builds the "staircase" and works the manipulator. What Spot does do automatically is adapt the movements of its limbs to the surrounding conditions.
Images (1):
Boston Dynamics' Atlas at Work: Everything We Know - CNET
Description: In the latest video of Boston Dynamics' Atlas, the humanoid robot demonstrates its first fully autonomous work tasks, using new grippers and self-correcting any errors along the way.
Content:
Images (10):
Hyundai Motor Group Completes Acquisition of Boston Dynamics from SoftBank
Description: BOSTON, SEOUL, South Korea and TOKYO, June 21, 2021 /PRNewswire/ -- Hyundai Motor Group (the Group), Boston Dynamics, Inc. and SoftBank Group Corp...
Content:
Boston Dynamics shows Atlas robot bloopers - Korrespondent.net
Description: In its day, Atlas was one of the most advanced humanoid robots ever built.
Content:
Boston Dynamics has released an extensive compilation of its Atlas robot's failures, mishaps that occurred during testing. The video was published on YouTube. In the clip, the developers show that behind any advanced technology lie years of hard work and many unsuccessful attempts. Incidentally, according to media reports, Boston Dynamics is retiring its humanoid robots: the company will focus on commercializing its other technologies and will continue developing new technology for commercial humanoids at its Massachusetts headquarters. Earlier, carmaker BMW put humanoid robots to work, with a humanoid robot demonstrating its work at a car plant.
Images (1):
Boston Dynamics robot dog joins Trump’s security detail - The …
Description: Spot, the four-legged robotics dog developed by Boston Dynamics, a subsidiary under Hyundai Motor Group, has reportedly joined the security detail for US President-elect Donald Trump.
Content:
Business Boston Dynamics robot dog joins Trump’s security detail Published : Nov. 12, 2024 - 14:39:54 Spot, the four-legged robotics dog developed by Boston Dynamics, a subsidiary under Hyundai Motor Group, has reportedly joined the security detail for US President-elect Donald Trump. The $75,000 mechanical canine, bearing a cautionary tag that reads “Do not pet,” was seen patrolling the grounds of Trump’s Mar-a-Lago residence in Palm Beach, Florida. News reports said that the remote-controlled dog is with the US Secret Service. “We do not have specific details on Boston Dynamics’ transactions. But it appears that the Secret Service made the purchase,” said an official from Hyundai’s Seoul headquarters. Equipped with cameras, microphones and sensors, Spot is capable of performing surveillance missions and can be deployed in hazardous environments such as toxic sites and even military operations. Boston Dynamics primarily sells Spot to businesses, research institutions and government entities. The robotic hound can be leased for specific projects, as was the case with the New York Police Department. Named “Digidogs” by the police, these robots responded to home invasions and hostage situations until the lease was terminated after less than a year in 2021 due to public backlash. However, officials later reiterated the importance of such high-tech tools, and the city purchased another two Digidogs last year. In 2020, Hyundai Motor invested $880 million to acquire an 80 percent stake in Boston Dynamics. The robotics arm is gearing up for an initial public offering on the tech-heavy Nasdaq next year.
Images (1):
Hyundai Motor Completes Acquisition Of Boston Dynamics From SoftBank
Description: (RTTNews) - South Korean automaker Hyundai Motor Co., Ltd. (HYMLF.OB, HYMTF.OB) Monday said it has completed the acquisition of a controlling inte...
Content:
Hyundai Motor: Completes Acquisition of Boston Dynamics from SoftBank
Description: · Hyundai Motor Group acquires a controlling interest in Boston Dynamics from SoftBank, following regulatory approvals and other conditions
...
Content:
Boston Dynamics: Spot learns the backflip, for a good reason …
Description: The robot dog Spot has learned to do multiple backflips. Thanks to reinforcement learning, Spot should be better able to withstand extreme situations, says Boston Dynamics. That protects Spot's hardware.
Content:
Boston Dynamics has Spot doing flips. In doing so, the four-legged robot has learned something its inventors never envisioned, and that even the team adapting the software at first considered impossible. What sounds useless at first glance (even in Boston Dynamics' eyes, doing a flip is nothing its customers need) has an interesting side effect: with the control required to land a flip, Spot has also learned to catch itself better in critical situations. If the robot falls, slips or stumbles, it can protect itself from major damage, and the payload, often expensive sensors on its back, is better protected from fall damage too. Boston Dynamics calls this reinforcement learning and shows in a video how well it now works. Even a backflip is possible, and Spot can manage several in a row. Spot can even balance on rollers mounted on its front legs. Getting there was a long road, as Arun Kumar, a robotics engineer at Boston Dynamics, explains in the video. He first simulated the scenarios (successfully) on a computer, but when the software was loaded onto the real robot, something went wrong almost every time, as Kumar recounts. Boston Dynamics also shows the first clumsy attempts in the video, initially on gym mats to minimize damage; later the company took more risks. As an interesting side effect, Spot can now also walk much more naturally, similar to other quadrupeds. In production use, Spot still walks the way you would expect a robot to: it does not look particularly elegant, but it is gentler on the hardware, since the new experiments push Spot and its motors to their technical limit.
The new capabilities also show that Spot's development is far from over, even if Boston Dynamics currently concentrates more on Atlas, at least in its videos. (Source: Boston Dynamics)
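The simulate-first workflow Kumar describes can be caricatured in a few lines: propose a change to the controller, score it against a simulated cost, and keep it only if the cost drops. The sketch below is a toy random-search variant of that loop; the `simulate` dynamics and every constant in it are invented for illustration, not Boston Dynamics' actual training setup.

```python
import random

# Toy flavor of train-in-simulation: improve a controller parameter by
# trial and error against a simulated "fall" cost. Purely illustrative;
# the dynamics model below is an assumption, not real robot physics.
random.seed(42)

def simulate(gain):
    """One simulated episode: a tipping body vs. a proportional controller.
    Returns the accumulated penalty (lower is better)."""
    angle, velocity, cost = 0.3, 0.0, 0.0
    for _ in range(100):
        torque = -gain * angle               # proportional corrective torque
        velocity += 0.1 * (angle + torque)   # diverges unless corrected
        angle += 0.1 * velocity
        cost += angle * angle
    return cost

# Random search: keep parameter perturbations that reduce the simulated cost.
gain, best = 0.0, simulate(0.0)
for _ in range(200):
    candidate = gain + random.gauss(0, 0.3)
    c = simulate(candidate)
    if c < best:
        gain, best = candidate, c

print(f"learned gain {gain:.2f}, cost {best:.3f}")
```

Real reinforcement learning swaps the single gain for a neural-network policy and the toy cost for a physics simulator, and the sim-to-real gap Kumar mentions is exactly what remains after this loop succeeds in simulation.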
Images (1):
Boston Dynamics Atlas: the company abandons its humanoid robot / NV
Description: Boston Dynamics is pressing ahead at full speed with its innovations in robotics. The Atlas robot is set to become your helper with household chores.
Content:
Massachusetts-based robotics giant Boston Dynamics, in collaboration with the Toyota Research Institute (TRI), has carried its humanoid robot Atlas into what is practically a new era. Atlas is now driven by a complex artificial intelligence system called a Large Behavior Model (LBM), trained on huge datasets of human actions. This new brain teaches the robot not only what to do, but how to cope with unexpected situations. The most striking part of the published video is the moment Atlas is put through a patience test. The robot performs the task of placing certain objects into a box, while an engineer, acting like a mischievous co-worker, repeatedly closes the box's lid and pushes the box somewhere else. Earlier robotic systems might have frozen or failed in the face of this kind of interference. Atlas, however, analyzed the situation surprisingly well: it calmly readjusted its position, found the box, opened the lid and carried on with its task where it had left off. The scene made clear that the robot has gained the ability to adapt dynamically to changing conditions, rather than merely executing programmed commands. Boston Dynamics said the biggest revolution of LBMs is that new capabilities can be added to the robot without writing a single line of code, meaning that the AI's capacity to learn will replace programming work that previously took months. "This work provides a window into our vision of general-purpose robots that will transform how we live and work," said Scott Kuindersma, Boston Dynamics' vice president of robotics research. Kuindersma stressed that this approach will make it easier for highly capable robots like Atlas to gather data for tasks requiring strength and precision, and said it will accelerate development.
Rapidly advancing robotics suggests that more agile, intelligent and adaptable humanoid robots, able to take on tasks ranging from folding laundry to factory assembly, may no longer be such a distant prospect.
Description: Boston Dynamics' unloading robot Stretch is now in regular operation at Lidl. By mid-2026, 22 systems are set to automatically unload containers at import warehouses in several European countries, reducing physically strenuous work.
Description: US company Boston Dynamics has announced it is ending development of its Atlas robot, TechCrunch reports.
Content:
Photo: Josh Reynolds / AP. US company Boston Dynamics has announced it is ending development of its Atlas robot, TechCrunch reports. The firm, owned by Korean corporation Hyundai, announced the cancellation of the Atlas project and released a farewell video of the humanoid robot, saying it was time for Atlas "to retire." The company did not give a reason for ending the project, but its representatives said the work would feed into the development of other robots. Journalists note that Boston Dynamics never sold Atlas to commercial customers, so it likely could not find a way to make money from the robot; Atlas may also have become outdated next to newer machines. Atlas was first unveiled almost 11 years ago as a humanoid robot designed specifically to replace people in heavy or dangerous industrial work. According to TechCrunch, Boston Dynamics invested millions of dollars in the humanoid's development. In mid-March, it emerged that German carmaker Mercedes-Benz had signed a contract to use robots: humanoids from Apptronik will work on a trial basis at a Mercedes plant in Hungary.
Images (1):
Boston Dynamics AI Institute opens a site in Zurich - 20 …
Description: The renowned US robotics firm is expanding its presence and establishing its first research hub outside the USA in Zurich.
Content:
The Boston Dynamics AI Institute is coming to Zurich, opening its first research hub outside the USA. Can the city repeat the kind of success it once had with Google? Zurich's director of economic affairs is enthusiastic about the AI Institute of Boston Dynamics expanding its presence in her region. Carmen Walker Späh writes: "One of the world's leading players in robotics will establish a development team in our canton." The project was brought to Switzerland, and the settlement handled, by the location-marketing alliance Greater Zurich Area. The announcement itself came last week at the Swiss Robotics Days in Zurich, where, alongside leading Swiss robotics figures such as ETH professor Roland Siegwart, Al Rizzi also appeared and gave a keynote. Rizzi is the chief technology officer of the Boston Dynamics AI Institute. In a press release, the institute explained "that the team in Zurich will, from the beginning of 2024, work on developing intelligent, agile and dexterous robotic systems intended for deployment in the most demanding environments." The Zurich location will help the company build connections to talented people, universities and research organizations in the vibrant European ecosystem. The institute's culture is geared toward combining the best of academic and private-sector research, as Tippinpoint.ch writes. The Boston Dynamics AI Institute was founded in August 2022 by Marc Raibert. It says it employs 150 full-time staff and 10 visiting professors, who use more than 30,000 square meters of labs and offices at Kendall Square in Boston. The AI Institute is now opening its first research location outside the USA, which Zurich's director of economic affairs, Walker Späh, calls a significant opportunity for Zurich as an innovation hub.
The decision of a firm as renowned as the Boston Dynamics AI Institute in favor of Zurich is, first of all, a considerable gain in prestige. It remains to be seen, however, to what extent the company will further energize the already lively local robotics scene.
Images (1):
How to Buy Boston Dynamics Stock - Best Wallet Hacks
Tutor Intelligence, which makes AI-powered robots for warehouse work, has raised $34 million in new funding. The company will use the new capital to speed commercialization of its robots, scale its consumer packaged goods (CPG) fleet and advance its central robot intelligence platform and research infrastructure, according to a Monday (Dec. 1) news release. “Tutor stands out for its extraordinary speed of execution and its ability to balance cutting-edge product and model development with a clear commercial focus that quickly gets this functionality into customers’ hands,” Rebecca Kaden, managing partner at Union Square Ventures, which led the funding round, said in the release. “They’re not building for an abstract future; they’re transforming how CPG companies operate today.” Founded out of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Tutor Intelligence’s robots work alongside human operators to process goods for a “vast Fortune 50 supply chain network,” the company said in the release. It also works with multiple Fortune 500 packaged food companies, and “global leaders” in the personal care, toys, home goods, beauty and consumer technology spaces. “When we started Tutor Intelligence nearly five years ago as grad students at MIT, we saw that the robotics intelligence bottleneck was the key barrier to robotic worker viability,” said Josh Gruenstein, the company’s co-founder and CEO. “We built a system that leverages on-the-job data to teach robots to navigate and understand the physical world with human-like intuition. 
This new capital enables us to expand our fleet, scale our robot training infrastructure, and empower our robots to tackle increasingly complex tasks, reshaping industrial work as we know it.” The funding comes at a time when, as covered here last month, physical artificial intelligence (AI) is emerging as the next stage of robotics. Earlier robots followed fixed commands and worked only in predictable environments, having trouble with the unpredictability found in everyday operations such as shifting layouts, mixed lighting, and human movement. “That is beginning to change as research groups show how simulation, digital twins and multimodal learning pipelines enable robots to learn adaptive behaviors and carry those behaviors into real facilities with minimal retraining,” PYMNTS wrote. Amazon’s launch of its Vulcan robot is one of the clearest examples of physical AI moving from research to frontline operations. This robot uses both vision and touch to pick and stow items in the company’s fulfillment centers, letting it handle flexible fabric storage pods and unpredictable product shapes.
Images (1):
HIT Shenzhen Team Develops Multimodal Large Model 'JiuTian', Tops OpenCompass …
Description: The first multimodal large-scale model 'JiuTian' has topped the OpenCompass multimodal large-scale model ranking upon its debut evaluation.
In the fast-evolving world of technology, 2026 promises to be a pivotal year where artificial intelligence moves beyond hype into tangible, enterprise-level impact. Industry leaders are shifting from pilot projects to full-scale deployments, driven by advancements in AI infrastructure and multimodal models. According to insights from Deloitte Insights, successful organizations are leveraging these tools to transition from experimentation to measurable outcomes, emphasizing the integration of AI with existing systems for strategic advantages. This shift is not just about adopting new tools but rethinking business models entirely. For instance, AI-powered decision-making is combining with Internet of Things (IoT) and blockchain to enable real-time analytics and secure data sharing. Posts on X highlight how these integrations are expanding AI’s role from operational support to core strategic planning, with examples like multilingual generative AI enhancing global operations. Meanwhile, the push for sustainability is intertwining with tech innovations, as companies invest in green technologies to meet regulatory demands and consumer expectations. Reports indicate that bio-based materials and decentralized renewable energy sources are gaining traction, positioning them as key growth areas in the post-2025 era.

AI’s Expanding Dominion in Enterprise Strategy

The rise of agentic AI, where systems autonomously perform tasks and make decisions, is set to redefine workflows across sectors. Simplilearn outlines how emerging technologies like this are shaping job markets and innovation pipelines, predicting a surge in demand for skills in AI governance and ethical implementation. In parallel, multimodal AI models that process text, images, and audio simultaneously are enabling more sophisticated applications, from advanced healthcare diagnostics to personalized consumer experiences. This convergence is particularly evident in telemedicine platforms and mental health apps, which are leveraging AI for proactive interventions. Challenges remain, however, including the need for robust data privacy measures and addressing biases in AI systems. Industry insiders note that as AI integrates deeper into critical sectors like healthcare and finance, regulatory frameworks will evolve to ensure accountability without stifling progress.

Quantum Leaps and Neuromorphic Computing on the Horizon

Quantum computing is another frontier gaining momentum, with potential to solve complex problems in drug discovery and financial modeling at unprecedented speeds. The World Economic Forum lists it among the top emerging technologies for 2025, extending into 2026, highlighting its role in accelerating scientific breakthroughs. Neuromorphic computing, mimicking the human brain’s efficiency, is emerging as a solution to the energy demands of traditional AI hardware. Juniper Research identifies this as a trend to watch, noting its potential to enable physical AI—robots and devices that learn and adapt in real-world environments. This technology is particularly promising for edge computing, where low-power, high-efficiency processing is crucial. Innovations in this area could revolutionize industries like manufacturing, with AI-driven diagnostics and 3D printing for on-demand production reducing waste and costs.

Sustainability Drives Tech Innovation Waves

The intersection of technology and environmental responsibility is creating new opportunities in renewable energy and circular economies. Decentralized systems powered by blockchain are facilitating peer-to-peer energy trading, as seen in emerging agri-tech solutions that optimize resource use. McKinsey ranks sustainability-focused tech among the top trends, emphasizing how companies are using AI to monitor and reduce carbon footprints in supply chains. This includes predictive analytics for energy consumption in data centers, which are exploding in number due to AI demands. Posts on X underscore the investment potential in AI infrastructure, with cloud giants like Microsoft and Amazon ramping up monetization efforts. These developments are not without hurdles, as the energy requirements of massive data centers raise concerns about grid stability and environmental impact.

Blockchain’s Role in Secure Digital Ecosystems

Beyond cryptocurrencies, blockchain is evolving into a foundational technology for secure, transparent transactions across industries. Its integration with AI and 5G is enabling innovations in supply chain management and digital identities, reducing fraud and enhancing traceability. In healthcare, blockchain-secured data sharing is improving patient outcomes by allowing seamless, privacy-protected access to records. SciTechDaily reports on breakthroughs in biotechnology that leverage these secure frameworks for collaborative research. The automotive sector is also benefiting, with concepts in electric vehicles (EVs) incorporating blockchain for battery lifecycle management and smart contracts for autonomous vehicle interactions. As the industry navigates hybrid and battery-electric transitions, these technologies ensure efficiency and compliance.

The Surge of Physical AI and Robotics

Physical AI, where intelligent systems interact with the physical world, is poised for significant advancements. Robots equipped with neuromorphic chips are becoming more autonomous, capable of learning from environments without constant human oversight. TechTarget discusses trends like this in machine learning, predicting widespread adoption in logistics and healthcare by 2026. For example, AI-driven robotics in surgery could enhance precision and reduce recovery times. Challenges in scalability and ethical deployment are prompting discussions on governance. Industry experts on X emphasize the need for open-source AI to foster competition and innovation, especially in geopolitical contexts like U.S.-China tech rivalries.

Navigating Geopolitical Influences on Tech Progress

Global tensions are influencing tech development, with calls for open-source initiatives to counter proprietary dominance. Elon Musk’s ventures, often highlighted in media, exemplify how individual leaders can shape federal policies on AI and space tech. WIRED reflects on 2025’s key stories, including AI data center expansions and political takeovers, projecting similar dynamics into 2026. This includes debates over post-quantum cryptography to secure communications against future threats. Investment themes on X point to digital banks and AI infrastructure as high-growth areas, with cloud providers leading the charge. These trends suggest a maturing market where profitability drives innovation rather than speculation.

Innovations in XR and Foldable Devices

Extended reality (XR) technologies are blending virtual and augmented realities for immersive experiences in education and training. The mobile tech sector is seeing revolutions with tri-fold devices and ultra-thin designs, as noted in recent analyses. Android Central details how 2025’s foldables and AI integrations are fundamentally changing user interactions, with 2026 expected to build on this by incorporating more seamless AI assistants. In consumer electronics, these innovations are driving competition, with companies like Samsung and Apple pushing boundaries in hardware that supports advanced software ecosystems. The result is a more connected, intuitive user experience that blurs lines between devices.

Healthcare Transformations Through Tech Integration

AI’s role in healthcare is expanding rapidly, from predictive diagnostics to personalized medicine. Multimodal models are analyzing vast datasets to identify patterns in diseases, accelerating drug development. Jagran Josh lists breakthroughs like autonomous agents in medical research, which are set to define 2026’s advancements. Telemedicine platforms enhanced by AI are making healthcare more accessible, especially in remote areas. Ethical considerations, such as data sovereignty and bias mitigation, are critical. Regulatory bodies are stepping in to ensure these technologies benefit society equitably, balancing innovation with public trust.

The Future of Work in an AI-Driven World

As AI automates routine tasks, the workforce is shifting toward roles requiring creativity and strategic thinking. Remote work norms, amplified by digital tools, are becoming standard, as per startup trends discussed on X. Capgemini explores how these changes are driving industry transformation, with a focus on upskilling programs to prepare employees for an AI-first environment. Investments in synthetic biology and longevity research are opening new frontiers, potentially extending human capabilities and creating markets in bio-based innovations. This holistic approach ensures technology enhances human potential rather than replacing it.

Strategic Investments and Market Dynamics

Venture capital is flowing into AI infrastructure, with hyperscalers projecting massive capital expenditures. Posts on X from investors highlight profitable growth in cloud revenues, underscoring the economic viability of these technologies. Firstpost recaps 2025’s defining innovations, noting AI’s dominance alongside breakthroughs in EVs and biotechnology, setting the stage for 2026’s expansions. Companies like Tesla and Amazon exemplify how innovation leads to market leadership, with revenue growth tied to tech adoption. As 2026 unfolds, the focus will be on scalable, impactful solutions that address real-world challenges.

Emerging Sectors and Long-Term Visions

New sectors like advanced waste management and micro-factories are emerging, driven by 3D printing and AI optimization.
These areas promise sustainable growth, reducing environmental footprints while creating jobs. Popular Science celebrates 2025’s greatest innovations, including automotive shifts toward hybrids and EVs, which will influence 2026’s transportation tech. Ultimately, the tech environment in 2026 will be defined by interconnected innovations that prioritize impact over novelty. Industry insiders must navigate these trends with foresight, investing in technologies that align with broader societal goals for enduring success.
Images (1):
NEO, the World’s First Native Multimodal Architecture, Launches: Achieving Deep Vision-Language …
Description: SenseTime and Nanyang Technological University have unveiled NEO, the world’s first scalable, open-source native multimodal architecture that fundamentally fuses vision and language.
Description: Chinese researchers unveil RGMP, a data-efficient AI framework boosting humanoid robots’ grasping skills and generalization.
Content:
RGMP helps humanoids adapt quickly to new environments, enabling them to perform household chores without additional training. Researchers in China have introduced a new AI framework designed to enhance humanoid robot manipulation. According to researchers at Wuhan University, RGMP (recurrent geometric-prior multimodal policy) aims to improve grasping accuracy across a broader range of objects and enable robots to perform more complex manual tasks. Unlike many data-driven methods that rely on large training datasets, RGMP incorporates geometric reasoning to boost generalization in new or unpredictable environments. The framework achieves 87 percent generalization and is 5 times more data-efficient than leading diffusion-based models, combining spatial reasoning with efficient learning. The researchers say the framework could be a step toward more adaptable and capable humanoid systems. 
For humanoid robots to operate independently, they must reliably handle multiple objects across different environments. Current machine learning models often work well only when the robot operates in settings similar to those used during training. These systems rely heavily on large datasets and do not fully use geometric reasoning or spatial awareness, making it difficult for robots to adapt in new situations. Vision-language models can understand instructions but often struggle to link them with the correct actions, especially when object shapes or contexts vary. According to researchers, other approaches, like diffusion or imitation learning, require many demonstrations and still fail to generalize. This raises two key questions: how robots can reason about object geometry and how they can learn effectively with fewer examples. To address limitations in current robot manipulation systems, the team developed RGMP, a new end-to-end framework that combines geometric reasoning with efficient learning. The first part, called the Geometric-prior Skill Selector (GSS), helps the robot choose the correct action based on an object’s shape and task requirements, much as humans decide whether to grasp, pinch, or push. It uses simple geometric rules and works even in new environments. The second part, the Adaptive Recursive Gaussian Network (ARGN), improves learning from small datasets by storing and updating spatial memory. It models the robot’s interactions with objects over time, thereby avoiding vanishing gradients. Together, these components help robots generalize better and handle more complex tasks with fewer training examples. The team tested the RGMP framework to assess its performance and generalization. Experiments were carried out on two types of robots: a humanoid system and a desktop dual-arm robot equipped with cameras and 6-DoF arms. 
A dataset of 120 demonstration trajectories was used, and performance was measured through two metrics: selecting the correct skill and executing it accurately. RGMP was compared with leading models, including ResNet50, Diffusion Policy, Octo, OpenVLA, and others. The results show RGMP performed better across multiple manipulation tasks, including unseen objects and new environments. Researchers claim the GSS module improved skill selection by up to 25 percent, while ARGN and Gaussian modeling improved execution accuracy. The system also required far fewer training samples—achieving high performance with just 40 examples, compared to 200 needed by baseline models—demonstrating strong efficiency and adaptability. The team highlights that by linking skills to object context and breaking 6-DoF motions into Gaussian components, the system improves efficiency and generalization. RGMP achieves 87 percent generalization accuracy and uses 5 times less data than the Diffusion Policy during human-robot interaction tests. The results show that integrating symbolic reasoning with learning improves adaptability across new objects and environments. Future research will focus on enabling robots to infer actions for new objects after learning just one example. The Wuhan University team’s research details are available on the arXiv preprint server. Jijo is an automotive and business journalist based in India.
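The geometric-prior idea behind the skill selector — mapping coarse shape cues to a discrete skill with simple rules, so the choice carries over to unseen objects — can be illustrated with a minimal sketch. All names, thresholds, and rules below are assumptions for illustration; the paper's actual GSS interface and rule set differ.

```python
from dataclasses import dataclass

@dataclass
class ObjectGeometry:
    """Coarse geometric priors for a detected object (illustrative)."""
    width_mm: float
    height_mm: float
    depth_mm: float
    graspable_faces: int  # faces that offer a stable grip

def select_skill(geom: ObjectGeometry, gripper_max_mm: float = 85.0) -> str:
    """Rule-based skill selection in the spirit of a geometric-prior
    skill selector: map simple shape cues to a discrete skill, with no
    learned parameters, so selection needs no retraining on new objects."""
    min_span = min(geom.width_mm, geom.depth_mm)
    if min_span > gripper_max_mm:
        return "push"      # too wide to enclose with the gripper
    if min_span < 10.0:
        return "pinch"     # thin objects call for a precision pinch
    if geom.graspable_faces == 0:
        return "push"      # nothing stable to grip
    return "grasp"         # default power grasp

# Usage: the chosen skill would then condition a learned execution
# policy (in RGMP's case, the recurrent Gaussian network).
print(select_skill(ObjectGeometry(60, 40, 50, graspable_faces=2)))    # grasp
print(select_skill(ObjectGeometry(200, 40, 150, graspable_faces=4)))  # push
```

The point of the split is that the brittle, data-hungry part (continuous 6-DoF execution) is learned, while the part that must generalize to arbitrary new objects (which skill to attempt) falls back on geometry.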
Description: New breakthroughs buttress the view that China is drawing closer to US capabilities.
Content:
Written by Vicky Chang Published on 15 Sep 2025 3 mins read China Recap is a weekly roundup tracking Chinese companies expanding abroad, covering market entries, funding rounds, product launches, and global partnerships. China’s corporate globalization strategy is evolving fast. Industry giants are rewriting the global playbook, while a new generation of companies charts fresh paths overseas. China Recap tracks both—focusing on strategic expansion, brand building, and localized operations—to help readers make sense of shifting trends and understand how Chinese firms are reshaping their global approach. This edition highlights China’s advances in artificial and embodied intelligence, as humanoid robots move toward commercialization and AI infrastructure and applications expand in function and capacity. Meituan has launched Xiaomei, an app powered by its LongCat large model, to strengthen its food delivery and local services. Xiaomei uses AI agents to let users order meals by voice and book restaurants. Shares rose in Hong Kong after the rollout, which comes as Alibaba and JD.com escalate competition in China’s delivery market. —Bloomberg UBTech Robotics has won a RMB 250 million (USD 35 million) contract for its Walker S2 humanoid robots, which it described as the “world’s largest order” of its kind. Deliveries will begin this year, and the deal includes operational support services. The Hong Kong-listed firm did not name the client. —Nikkei Asia Alibaba and Baidu have reportedly begun using self-designed chips to train AI models, partly replacing Nvidia hardware. Alibaba has applied its chips to smaller models, while Baidu is testing its Kunlun P800 for Ernie. The shift reflects US export curbs and Beijing’s push for tech self-sufficiency, though both firms still rely on Nvidia for top-tier models. —The Information Ant Group has unveiled its first humanoid robot, R1, at a Shanghai tech conference, highlighting its shift toward AI-powered robotics. 
Designed to handle tasks from cooking to medical assistance, the R1 runs on Ant’s in-house large model and reflects its strategy to scale AI assistant use across China. —Bloomberg China has reportedly narrowed its gap in AI development with the US to about three months, driven by rapid iteration, open-source model advances, and strategic chip reserves, according to CITIC CLSA. However, it still lags in advanced semiconductor production, which remains a longer-term hurdle that constrains progress in cutting-edge AI capabilities. —SCMP Unitree Robotics is reportedly preparing to file for an IPO in the final quarter of 2025, targeting a valuation of up to RMB 50 billion (USD 7 billion). Already profitable and active in factory deployments, Unitree could become one of the first humanoid robotics firms to go public. —CNBC ByteDance’s Seed team has released Seedream 4.0, a multimodal AI model for text-to-image generation and editing. It supports 4K output, faster inference, visual signal control, and in-context reasoning. Now live via Dreamina, Doubao, and Volcano Engine, the unified system has shown notable performance across creative tasks. Tencent has released its AI CLI tool, CodeBuddy Code, and launched the global open beta of CodeBuddy IDE. The move positions Tencent Cloud as China’s first provider to support AI-driven coding across plugin, IDE, and CLI formats. Developers can use natural language in the terminal to automate tasks like refactoring, testing, and deployment. —IT Zhijia Alibaba Cloud has led a RMB 1 billion (USD 140 million) Series A+ funding round for Shenzhen-based X Square Robot, marking its first investment in embodied intelligence. Other investors that took part include CAS Investment, China Development Bank Capital, HongShan, Meituan, and Legend Capital. The funds will support training of foundation models and hardware development. 
—SCMP That wraps up this edition of China Recap. If your company is expanding internationally, we’d love to hear about your latest milestones. Get in touch to share your story.
Images (1):
From Navigation to Cognition: Building a Multimodal AI Robot - …
Description: MentorPi melds SLAM navigation and multimodal AI to explore and describe its world through natural commands. Find this and other hardware projects on Hackster.io.
In the rapidly evolving world of artificial intelligence and robotics, Meta Platforms Inc. is positioning itself not just as a social media giant, but as a pivotal force in humanoid robot development. Drawing from its vast resources in AI, the company is betting big on becoming the software backbone for these advanced machines, much like how Android powers a multitude of smartphones. This strategic pivot comes amid a surge of interest from tech behemoths, with Meta’s chief technology officer, Andrew Bosworth, recently revealing in an interview that the firm views humanoid robots as its next “AR-sized bet,” referring to the scale of investment akin to its augmented reality efforts. Bosworth’s comments, highlighted in a fresh report from PCMag, underscore Meta’s focus on licensing its open-source Llama AI models to third-party robot manufacturers. Rather than delving deeply into hardware production, Meta aims to provide the intelligent software layer that enables robots to perform complex tasks like dexterous manipulation—grasping objects with human-like precision. This approach could democratize robotics, allowing companies like Figure or Unitree to integrate Meta’s tech seamlessly, accelerating deployment in homes and workplaces.

Meta’s Ambitious Roadmap: From AI Models to Real-World Helpers

Internal memos and partnerships signal Meta’s seriousness. As early as February 2025, Reuters reported that Meta established a dedicated division within its Reality Labs unit to build AI-powered humanoid robots capable of assisting with physical tasks, such as household chores. The company has engaged in discussions with robotics firms, positioning itself as a platform provider rather than a direct competitor to hardware-focused players like Tesla’s Optimus. This isn’t Meta’s first foray into tactile and sensory tech for robots. Posts on X from sources like The Humanoid Hub in late 2024 highlighted Meta’s advancements in touch sensing, including a universal touch encoder and artificial fingertips for multimodal interaction. These innovations, built on projects like Project Aria, are feeding into broader humanoid development, with researchers at Georgia Tech collaborating on algorithms that use human data to train robots faster, as shared in updates from AI at Meta on X.

Industry Momentum and Competitive Pressures Driving Innovation

The broader push for humanoid robots is gaining steam, with investments booming across the sector. A September 2025 article in The Washington Post noted a frenzy of funding from companies like Amazon and Meta, leading to new robots appearing in homes and workplaces. Meanwhile, Bloomberg reported in February 2025 that Apple and Meta are set to battle in this space, with Meta’s efforts potentially clashing with Apple’s hardware prowess. Challenges remain, however. IEEE Spectrum’s analysis from September 2025 points to scaling hurdles, including battery life, safety concerns, and high demand that could strain production. Meta’s software-centric strategy might sidestep some hardware pitfalls, but it must prove its AI can handle real-world variability. Bosworth admitted in the PCMag interview that dexterous manipulation is the “real hurdle,” not basic locomotion, echoing sentiments from Nvidia CEO Jensen Huang at CES 2025, who predicted a $38 billion market and a “ChatGPT moment” for robotics.

Collaborations and Future Visions: Licensing as the Key to Dominance

Meta’s vision extends to creating a “Metabot” platform, as detailed in a WebProNews piece from September 2025, where the company plans to license AI to power humanoids by 2030. This mirrors Android’s model, fostering an ecosystem where Meta innovates on software while partners manage mechanics. Recent X posts from users like VraserX and Global Trends X amplify this buzz, with Bosworth confirming the multi-billion-dollar commitment and a star-studded team of robotics experts. Looking ahead, Meta’s integrations could transform industries. Standard Bots’ blog from September 2025 outlines humanoid types and prices, suggesting costs could drop as AI improves efficiency. Yet, as CNBC explored in mid-September 2025, the industry awaits its breakthrough moment for widespread adoption, with Meta potentially catalyzing it through open-source AI.

Economic Implications and Ethical Considerations in Robotics

Economically, this could reshape labor markets. The New York Times reported just days ago that China leads in factory robots, outpacing the world combined, but Meta’s global licensing could level the playing field for Western firms. Bain & Company’s 2025 technology report urges industries to assess humanoid deployment, from manufacturing to healthcare, where AI-driven bots could handle repetitive tasks. Ethically, questions arise about job displacement and safety. While Meta emphasizes assistance in physical tasks, critics worry about over-reliance on AI. Interesting Engineering’s recent piece on Tesla’s Optimus highlights progress in demos but lags in practical use, a cautionary tale for Meta. Still, with advancements like KAIST’s moonwalking robot from TechXplore in September 2025, the tech is maturing rapidly.

Meta’s Edge: Leveraging Data and Open-Source Power

Meta’s strength lies in its data trove from social platforms, fueling AI training. The company’s Ego-Exo4D project, as mentioned in X posts from AI at Meta, provides datasets for human-robot interaction, potentially giving Metabot an edge in natural movements. Partnerships with firms like Unitree, noted in February 2025 X updates from Andrew Curran, could lead to prototypes soon. As of late September 2025, Meta’s humanoid ambitions are crystallizing.
By focusing on software, the company avoids the pitfalls that have plagued hardware ventures, positioning itself as the indispensable “backbone” Huang referenced. If successful, Meta could redefine not just social connectivity, but physical assistance in daily life, blending its AI heritage with robotic frontiers.
Images (1):
Google DeepMind’s new AI models help robots perform physical tasks, …
Description: Today's robots perform safety checks at industrial plants, conduct quality control in manufacturing, and are even starting to keep hospital patients company. But soon -- perhaps very soon -- these increasingly humanlike machines will handle more sophisticated tasks, freeing up people while raising complex questions about the roles of artificial…
Description: Physical AI in Motion: Building Robots That Perceive and Decide We are entering a new era of automation – one where machines no longer just move but think. As...
Content:
Scary AI-powered swarm robots team up to build cars faster …
Description: UBTech and Zeekr unite with AI robot swarms to make car manufacturing faster and smarter. Tech expert Kurt “CyberGuy” Knutsson explains how the process works.
Description: 1 robot, 3 drive types (Mecanum/Ackermann/Differential) and integrated multimodal AI for real embodied robotics & ROS 2 learning. Find this and other hardware projects on Hackster.io.
Description: A new breakthrough shows how robots can now integrate both sight and touch to handle objects with greater accuracy, similar to humans.
Content:
Summary: A new breakthrough shows how robots can now integrate both sight and touch to handle objects with greater accuracy, similar to humans. Researchers developed TactileAloha, a system that combines visual and tactile inputs, enabling robotic arms to adapt more flexibly to real-world tasks. Unlike vision-only systems, this approach allowed robots to manage challenging objects such as Velcro and zip ties, demonstrating human-like sensory judgment. The findings mark a major step toward developing physical AI that could help robots assist with everyday tasks like cooking, cleaning, and caregiving. Source: Tohoku University. In everyday life, it’s a no-brainer for us to grab a cup of coffee from the table. We seamlessly combine multiple sensory inputs such as sight (seeing how far away the cup is) and touch (feeling when our hand makes contact) in real-time without even thinking about it. However, recreating this in artificial intelligence (AI) is not quite as easy. An international group of researchers created a new approach that integrates visual and tactile information to manipulate robotic arms, while adaptively responding to the environment. Compared to conventional vision-based methods, this approach achieved higher task success rates. These promising results represent a significant advancement in the field of multimodal physical AI. Details of their breakthrough were published in the journal IEEE Robotics and Automation Letters on July 2, 2025. Machine learning can help artificial intelligence (AI) systems learn human movement patterns, enabling robots to autonomously perform daily tasks such as cooking and cleaning. For example, ALOHA (A Low-cost Open-source Hardware System for Bimanual Teleoperation) is a system developed by Stanford University that enables the low-cost and versatile remote operation and learning of dual-arm robots. Both hardware and software are open source, so the research team was able to build upon this base. 
However, these systems mainly rely on visual information only. Therefore, they lack the same tactile judgements a human could make, such as distinguishing the texture of materials or the front and back sides of objects. For example, it can be easier to tell which is the front or back side of Velcro by simply touching it instead of discerning how it looks. Relying solely on vision without other input is an unfortunate weakness. “To overcome these limitations, we developed a system that also enables operational decisions based on the texture of target objects – which are difficult to judge from visual information alone,” explains Mitsuhiro Hayashibe, a professor at Tohoku University’s Graduate School of Engineering. “This achievement represents an important step toward realizing a multimodal physical AI that integrates and processes multiple senses such as vision, hearing, and touch – just like we do.” The new system was dubbed “TactileAloha.” They found that the robot could perform appropriate bimanual operations even in tasks where front-back differences and adhesiveness are crucial, such as with Velcro and zip ties. They found that by applying vision-tactile transformer technology, their Physical AI robot exhibited more flexible and adaptive control. The improved physical AI method was able to accurately manipulate objects, by combining multiple sensory inputs to form adaptive, responsive movements. There are nearly endless possible practical applications of these types of robots to lend a helping hand. Research contributions such as TactileAloha bring us one step closer to these robotic helpers becoming a seamless part of our everyday lives. The research group was comprised of members from Tohoku University’s Graduate School of Engineering and the Centre for Transformative Garment Production, Hong Kong Science Park, and the University of Hong Kong. 
Author: Public Relations. Source: Tohoku University. Contact: Public Relations – Tohoku University. Image: The image is credited to Neuroscience News. Original Research: Open access. “TactileAloha: Learning Bimanual Manipulation with Tactile Sensing” by Mitsuhiro Hayashibe et al., IEEE Robotics and Automation Letters.

Abstract: Tactile texture is vital for robotic manipulation but challenging for camera vision-based observation. To address this, we propose TactileAloha, an integrated tactile-vision robotic system built upon Aloha, with a tactile sensor mounted on the gripper to capture fine-grained texture information and support real-time visualization during teleoperation, facilitating efficient data collection and manipulation. Using data collected from our integrated system, we encode tactile signals with a pre-trained ResNet and fuse them with visual and proprioceptive features. The combined observations are processed by a transformer-based policy with action chunking to predict future actions. We use a weighted loss function during training to emphasize near-future actions, and employ an improved temporal aggregation scheme at deployment to enhance action precision. Experimentally, we introduce two bimanual tasks: zip tie insertion and Velcro fastening, both requiring tactile sensing to perceive the object texture and align the orientations of two objects with two hands. Our proposed method adaptively changes the generated manipulation sequence based on tactile sensing in a systematic manner. Results show that our system, leveraging tactile information, can handle texture-related tasks that camera vision-based methods fail to address. Moreover, our method achieves an average relative improvement of approximately 11.0% over a state-of-the-art method with tactile input, demonstrating its effectiveness.
Meta's Metabot: Licensing AI to Power Humanoid Robots by 2030
In the rapidly evolving world of artificial intelligence and robotics, Meta Platforms Inc. is making a bold pivot toward humanoid robots, positioning them as its next major technological frontier. During a recent interview at Meta’s headquarters, Chief Technology Officer Andrew Bosworth revealed that the company views humanoid robots as an “AR-size bet,” comparable in scale and ambition to its investments in augmented reality. This initiative, internally dubbed “Metabot,” underscores Meta’s strategy to extend its AI prowess beyond social media and virtual worlds into physical embodiments that could revolutionize daily life. Bosworth emphasized that Meta isn’t aiming to manufacture hardware at scale but rather to develop and license advanced software for humanoid robots. By leveraging its open-source Llama AI models, the company envisions creating a platform where third-party robot makers can integrate Meta’s technology, much like how Android powers diverse smartphones. This approach could democratize robotics, allowing Meta to focus on software innovation while partners handle the mechanical complexities. Scaling Ambitions Amid Industry Frenzy Recent reports highlight a surge in investments across the sector, with companies like Amazon and Tesla pouring resources into humanoid development. According to a February 2025 article in Reuters, Meta established a dedicated division within its Reality Labs unit to build AI-powered robots capable of assisting with physical tasks, such as household chores. The memo, viewed by Reuters, indicates early discussions with robotics firms like Figure and Unitree, signaling potential collaborations to accelerate deployment. This move comes amid a broader industry boom, where humanoid robots are transitioning from prototypes to pilot programs in homes and workplaces. 
A September 2025 piece in The Washington Post notes that investments from tech giants have spawned a new generation of these machines, driven by AI advancements like those in ChatGPT. Meta’s entry intensifies competition, particularly with rivals like Apple, which Bloomberg reported in February 2025 is also ramping up humanoid efforts, setting the stage for a high-stakes battle in this emerging field. Technical Challenges and Innovations Despite the enthusiasm, scaling humanoid robots presents formidable hurdles. As detailed in a September 2025 analysis by IEEE Spectrum, issues like battery life, safety protocols, and real-world adaptability remain barriers to widespread adoption. Meta is addressing these through innovative research, including touch-sensing technologies unveiled in late 2024. Posts on X from robotics enthusiasts, such as those highlighting Meta’s artificial fingertip sensors for tactile interaction, suggest the company is prioritizing multimodal AI to make robots more intuitive and human-like. Bosworth’s vision extends to integrating humanoid robots with Meta’s existing ecosystem, potentially linking them to AR glasses or social platforms for seamless user experiences. A recent X post from Techmeme summarized Bosworth’s comments in The Verge, where he described the software licensing model as key to avoiding hardware pitfalls that have plagued other ventures. This strategy aligns with Meta’s history of open-sourcing AI tools, fostering an ecosystem that could lower costs and spur innovation. Societal Implications and Market Projections Industry insiders are buzzing about the transformative potential. A June 2025 story from the World Economic Forum warns that while humanoid robots promise efficiency in sectors like healthcare and manufacturing, society must establish guardrails to mitigate disruptions, such as job displacement. 
Meta’s focus on household assistance could address labor shortages, with projections from a Bain & Company report in September 2025 indicating pilot stages evolving into waves of adoption by 2030, contingent on battery and ecosystem advancements. Competition from China adds urgency; a CNBC article from early September 2025 reported that Unitree Robotics is eyeing a $7 billion IPO valuation, fueled by its humanoid models. Meta’s collaborations, as mentioned in X posts from accounts like The Humanoid Hub, include integrating Llama models with partners’ hardware, potentially positioning the company as a software leader in a market forecasted to reach millions of units by 2035, per Anadolu Ajansı. Future Bets and Strategic Risks Meta’s humanoid push reflects a broader tech trend where AI meets physical robotics. A PR Newswire release via The Manila Times in September 2025, based on DIGITIMES research, predicts 2025 as the “first year of humanoid robots,” with hardware advancements determining rollout speed. Bosworth acknowledged in The Verge interview that hardware isn’t the bottleneck—software intelligence is, echoing Nvidia’s “physical AI” narrative. Yet, risks abound. Historical parallels to Meta’s metaverse investments, which faced skepticism, loom large. An X post from VraserX captured the sentiment, noting Zuckerberg’s billion-dollar bets shifting from VR to robots. For industry watchers, Meta’s success hinges on execution: licensing software could yield Android-like dominance, but failure might echo past overreaches. As one Georgia Tech collaboration shared on X demonstrates, using Meta’s Project Aria glasses to train robots via human data, the company is betting on data-driven learning to bridge the gap. In this high-stakes arena, Meta’s humanoid ambitions could redefine human-robot interaction, blending AI with physical presence in ways that extend far beyond today’s prototypes. 
With investments booming and partnerships forming, the coming years will test whether Metabot becomes a household name or another ambitious footnote in tech history.
AI Meets Nature: The Intelligent Robots Powering Industry 5.0
Description: AI Meets Nature: The Intelligent Robots Powering Industry 5.0 Industrial robotics is entering a pivotal transformation. The integration of artificial intelligen...
Content:
NVIDIA Showcases Jetson Thor Capable of Running Generative AI Robots
Description: Multimodal Learning: The Future of Human-Like AI Artificial intelligence is developing at a breakneck speed, not just in computational power but also in the way...
Content:
Google DeepMind's Gemini AI Transforms Robotics with Multimodal Capabilities
In the rapidly evolving field of artificial intelligence, Google DeepMind’s latest advancements with its Gemini model are pushing boundaries in robotics and web search integration, signaling a potential shift toward more autonomous systems. Announced earlier this year, Gemini Robotics leverages the multimodal capabilities of Gemini 2.0 to enable robots to interpret visual data, process natural language instructions, and execute physical actions with unprecedented fluidity. This isn’t just about programming robots to follow scripts; it’s about creating machines that can adapt in real-time to unpredictable environments, much like a human would improvise on the job. Engineers at DeepMind have demonstrated how Gemini Robotics can handle tasks ranging from sorting objects in cluttered spaces to navigating dynamic obstacles, all while incorporating feedback from web-based data. For instance, a robot equipped with this AI could query online resources mid-task to refine its approach, such as looking up optimal gripping techniques for an unfamiliar item. This fusion of AI reasoning with physical embodiment draws from foundational research in vision-language-action models, allowing for what DeepMind describes as “embodied reasoning” – the ability to think and act in the physical world without exhaustive pre-training. Unlocking New Frontiers in Robotic Autonomy The implications for industries like manufacturing and logistics are profound, where efficiency hinges on adaptability. According to a report from InfoQ, Gemini Robotics integrates seamlessly with existing hardware, reducing the need for specialized datasets and enabling faster deployment. This on-device processing minimizes latency, a critical factor in scenarios requiring split-second decisions, such as automated warehouses or surgical assistance tools. Moreover, DeepMind’s push into web search enhancements via Gemini adds another layer of sophistication. 
The model now supports agentic browsing, where AI can autonomously navigate the internet to gather and synthesize information, feeding it back into robotic operations. This was highlighted in a recent DeepMind blog post, which detailed how on-device versions of Gemini Robotics operate without constant cloud connectivity, enhancing privacy and speed for edge computing applications. Bridging Digital Intelligence with Physical Execution Critics and insiders alike are watching how these developments address longstanding challenges in AI safety and reliability. A piece in The Guardian noted Gemini 2.5’s breakthrough in solving complex programming problems that baffled human experts, suggesting similar prowess could translate to robotics troubleshooting. Yet, questions remain about scalability – can these models handle the variability of real-world chaos without errors that could lead to costly failures? DeepMind’s strategy also involves open-sourcing certain components to foster collaboration, as evidenced by an arXiv paper on Gemini Robotics, which outlines fine-tuning techniques for long-horizon tasks like intricate assembly lines. This approach not only accelerates innovation but also invites scrutiny from the broader tech community, ensuring robustness through collective input. Industry Impacts and Future Trajectories Looking ahead to the latter half of 2025, integrations with sectors like healthcare and transportation could redefine operational norms. For example, robots powered by Gemini might assist in elder care by cross-referencing medical databases in real-time, adapting to patient needs dynamically. Coverage from The Verge emphasizes how these models enable tasks without prior training, a game-changer for rapid prototyping in R&D labs. However, ethical considerations loom large, with calls for regulatory frameworks to govern AI’s physical interventions. 
DeepMind’s commitment to safe AI, as stated on their official site, includes built-in safeguards against misuse, but industry watchers argue for more transparency in algorithmic decision-making. As Gemini evolves, its blend of web search prowess and robotic control could usher in an era of truly intelligent machines, transforming how we interact with technology in everyday life. Challenges Ahead in AI-Robotics Integration Despite the hype, hurdles like energy consumption and hardware compatibility persist. On-device models, while efficient, demand powerful processors that not all robots possess, potentially limiting adoption in budget-constrained fields. A TechCrunch analysis points out that while Gemini Robotics excels in controlled demos, real-world variability – from lighting changes to unexpected human interference – tests its limits. Ultimately, Google DeepMind’s innovations with Gemini are setting a high bar, compelling competitors to accelerate their own efforts in multimodal AI. By weaving web intelligence into physical actions, these advancements promise to make robots not just tools, but intelligent partners in human endeavors, with 2025 poised as a pivotal year for deployment at scale.
Welcome to Skynet: Google Unveils AI Models to Power Physical …
Description: Google DeepMind has introduced two new AI models designed to bring artificial intelligence into the physical world by powering robots. Google is not the only company pursuing this goal at top speed — OpenAI and Tesla are also designing robots controlled entirely by AI.
Description: heavy industrial manufacturing - At the 2025 World Robot Conference, Zoomlion demonstrated significant progress in integrating robotics with heavy industrial ma...
Content:
Integrating Multimodal AI on TurboPi - Hackster.io
Description: Turn your TurboPi into a thinking robot. Add multimodal AI for natural language commands, scene understanding, and smart task planning. Find this and other hardware projects on Hackster.io.