Total Articles Scraped
Total Images Extracted
| Action | Title | URL | Images | Scraped At | Status |
|---|---|---|---|---|---|
| Episode #527 - MCP Servers for Python Devs | Talk … | https://talkpython.fm/episodes/show/527… | 10 | Apr 07, 2026 08:00 | active | |
Episode #527 - MCP Servers for Python Devs | Talk Python To Me PodcastURL: https://talkpython.fm/episodes/show/527/mcp-servers-for-python-devs Description: Today we’re digging into the Model Context Protocol, or MCP. Think LSP for AI: build a small Python service once and your tools and data show up across editors and agents like VS Code, Claude Code, and more. My guest, Den Delimarsky ... Content:
Den Delamarski is a Principal Product Engineer at Microsoft working in the Core AI division, focusing on AI tools for developers. Den is one of the core maintainers of the Model Context Protocol (MCP), having initially joined the project through his expertise in security and authorization. When MCP first launched with an auth specification, Den identified opportunities to improve it for enterprise scale and worked with the Anthropic team to rewrite the authorization framework, which was merged into the June 2024 version of the protocol. Beyond MCP, Den helps drive projects like GitHub SpecKit, which enables spec-driven development with agentic coding tools. His work centers on building developer tools and experiences in the rapidly evolving AI ecosystem, including projects like Copilot and other Microsoft AI initiatives. The Model Context Protocol solves a fundamental problem in AI systems: LLMs are trained on data that gets locked at a specific point in time, but users need to work with fresh, dynamic data. MCP provides a universal interface that allows any LLM or AI client to connect to data sources, applications, and services without custom integrations. Just as the Language Server Protocol (LSP) standardized how editors communicate with language tools, MCP standardizes how AI agents access external capabilities. The protocol is highly opinionated about authentication, message passing, and primitive exposure, eliminating the inconsistency found in traditional REST API integrations. The protocol went from non-existent to widely adopted in less than a year, with major companies across banking, healthcare, and gaming now integrating MCP into their AI strategies. The composability of MCP means you can connect multiple servers to a single client, allowing an LLM to coordinate across different data sources and services seamlessly. The Python SDK makes building MCP servers remarkably simple through the FastMCP framework, which provides a Flask-like developer experience. Creating an MCP tool is as straightforward as writing a Python function and adding a decorator. The SDK handles all the complex JSON-RPC envelope creation, streaming, and protocol compliance automatically. Developers can focus on business logic rather than protocol implementation details. FastMCP is integral to the official Python SDK and simplifies common pain points like authorization. The programming model supports async functions naturally, allowing you to await user input via elicitations without complex callback patterns. The framework also includes built-in support for structured output using Pydantic models, progress reporting, and image handling. MCP servers expose three fundamental primitives that LLMs can interact with. Tools are function calls that perform actions - think of them as API endpoints that do something like sending an email, querying a database, or creating a 3D scene in Blender. Prompts are reusable templates that help LLMs interact with your server effectively, such as "create a recipe with substitutions." Resources allow LLMs to reference and work with entities like databases, files, or API objects. Each primitive serves a distinct purpose in the agent workflow. Tools enable actions and side effects. Prompts guide the LLM on how to best use your server. Resources provide structured access to data and entities. Together, these primitives create a complete interaction model that's both powerful and constrained enough to be reliable. MCP servers can run in two distinct modes depending on your architecture needs. Local MCP servers use stdio (standard input/output) to communicate via native OS constructs between the MCP client and server processes. This is perfect for development machines where you want your editor or AI tool to access local capabilities without network overhead. Remote MCP servers use streamable HTTP and can be hosted anywhere - AWS, Azure, GCP, your home lab, or behind a reverse proxy like Nginx or Caddy. The transport layer is abstracted by the SDK, so the same server code can work in both modes with minimal changes. For local development with remote access, tools like Tailscale provide secure overlay networks without exposing ports or configuring complex VPN setups. This makes it trivial to run an MCP server on your home lab and access it securely from anywhere. The MCP Registry launched in September 2024 as a centralized API that indexes all publicly available MCP servers. Think of it like Docker Hub for MCP servers - you can discover servers, but you're not required to use the registry. The registry supports both public servers (like the GitHub-maintained registry) and private enterprise registries for internal company use. This allows organizations to maintain approved MCP servers behind security gates while still benefiting from the discoverability infrastructure. Discovery is improving rapidly with better integration into clients like VS Code, Cursor, and Claude Desktop. The Awesome MCP Servers list on GitHub has become a valuable community resource with hundreds of servers categorized by function - from biology and medicine to gaming, marketing, and sports analytics. Security and authorization was Den's entry point into MCP development. The June 2024 spec introduced formal OAuth 2.1-based authorization, eliminating the need for developers to implement custom auth flows or check API keys into source control. The brilliant part is that MCP server developers don't need to become OAuth experts - the SDKs handle it. For consumers, authentication is as simple as logging in when you connect a server. The client bootstraps the auth flow, stores tokens securely, and ensures you access only your data. MCP servers can specify whether they use API keys (stored in configuration) or OAuth (handled via standard browser-based login flows). This approach scales from hobby projects to enterprise deployments where data access controls are critical. The standardization means you don't face "17 different dances" to get authentication tokens from different services. GitHub SpecKit represents Microsoft's hypothesis for how spec-driven development works with AI coding tools. The approach starts with defining what and why you're building in a specification document, then breaks down the technical implementation plan, and finally decomposes it into consumable tasks that AI can execute iteratively or in parallel. This isn't the only way to do spec-driven development, but it provides a recipe book and ingredient box for teams wanting to adopt this workflow. The philosophy recognizes that there's no single correct approach to spec-driven development - it depends on your models, team structure, and project complexity. However, starting with a thorough planning phase using high-quality models, then executing with faster models guided by those specs, has proven effective for managing AI agent workflows on complex projects. The MCP ecosystem has exploded with creative and practical implementations. The Blender MCP server lets you describe a medieval scene with a dragon and lighting, and it builds it for you by translating high-level descriptions into Blender's native API calls. Gaming servers exist for Unity 3D, Minecraft, and even analyzing Halo stats. Marketing professionals can connect Facebook Ads, Google Ads, and Amazon Ads MCP servers to ask "how are my ads performing this week" across all platforms without clicking through dashboards. Sports enthusiasts can use Strava MCP for running and biking analytics, or the Formula 1 Multiviewer MCP that controls viewing angles and telemetry during live races. For developers, there are Jira and Atlassian MCP servers to automate bug triage and ticket management. The diversity shows MCP's flexibility - it's not just for data retrieval, but for controlling applications, analyzing information, and automating workflows across domains. Retrieval Augmented Generation (RAG) and MCP serve different purposes in the AI architecture landscape. RAG builds vector databases to augment an LLM's context with additional knowledge, helping it understand what exists in a codebase or documentation set. It's primarily about giving the LLM more relevant context for making decisions. MCP, on the other hand, provides universal access to live data and actionable capabilities. It's not just about knowing what exists - it's about doing something with that information. While RAG helps an LLM understand that an authorization component exists in your codebase, MCP lets it actually invoke authentication services, update records, or chain multiple actions across services. The two technologies can complement each other: RAG for knowledge augmentation and MCP for capability extension. Many real-world AI applications benefit from using both - RAG for understanding context and MCP for taking action. There's ongoing debate about whether specialized local models or general-purpose cloud models work better for specific tasks. Den's perspective is that general-purpose models like Claude and GPT-4 will typically outperform local models for most scenarios due to superior training resources and compute power. However, local models excel for privacy-sensitive workloads - like organizing family photos without sending them to remote servers - or domain-specific tasks where a small, focused model can be as effective as a large general one. MCP enables an interesting hybrid approach: use powerful general-purpose models for orchestration and decision-making, but delegate specific subtasks to specialized local models or services via MCP servers. For example, a general model could coordinate a photo organizing workflow while a local computer vision model handles the actual image analysis. This composability allows building sophisticated systems that balance capability, privacy, cost, and latency. The Python MCP SDK prioritizes developer experience through familiar patterns and minimal boilerplate. The decorator-based approach (@mcp.tool) mirrors Flask and FastAPI, making it immediately intuitive for Python web developers. Async/await support is first-class, allowing natural progress reporting and elicitations without callback hell. The SDK includes 143+ contributors, ships releases every few days, and maintains "good first issue" tags for new contributors. Documentation and samples are comprehensive, with the official Python SDK repo containing multiple example servers. The team actively solicits feedback and iterates quickly on developer pain points. Installation is as simple as uv add mcp or pip install mcp, and you can have a working MCP server in under 10 lines of code. The combination of low barrier to entry and production-ready features makes MCP accessible to Python developers at all skill levels. While MCP provides secure authentication mechanisms, users must still exercise caution when installing third-party MCP servers. Like any software that accesses your data, you should verify the source and understand what an MCP server does before connecting it. An MCP server that reads your iMessages to "sort by importance" could potentially scan for credit card numbers or social security numbers. The responsibility for vetting servers lies with the user, just as it does with browser extensions or system-level applications. Best practices include reviewing source code for open-source MCP servers, starting with servers from trusted organizations, using private registries for enterprise deployments, and being cautious about granting broad permissions. Never check API keys into source control - use environment variables or OAuth flows instead. The MCP community is working on improved discovery with trust signals, but individual diligence remains essential for security. "Think about it like last year at this time, like at the time when we were recording the work item episode, MCP did not exist. Just not a thing. And now everybody's talking about MCP. Like you talk to any big companies, you talk to like the banks, the healthcare, the gaming, everybody, everybody cares about MCP." -- Den Delamarski "The way the folks at Anthropic have been describing it, it is USB-C for AI." -- Den Delamarski "Look at the simplicity of this. You literally have a Python function, you have def add, and there is your arguments, you would pass you a function, like two integers. And then all you need to do to make that a tool that an LM can invoke is just add that @mcp.tool decorator. That's it. You're not going and crafting elaborate JSON RPC envelopes and converters and all these things." -- Den Delamarski on the developer experience "I'll tell you what, the LLMs are getting really good at analyzing the stats. You give them the data, they can make some conclusions." -- Den Delamarski on his Halo stats MCP server "Do you remember the days when you had to work, this episode is not sponsored by Tailscale, for the record. Should be." -- Den Delamarski and Michael Kennedy discussing VPN complexity vs. Tailscale simplicity "The power is composability. It's the fact that you can compose things together and have them work together based on the prompts that you have and scenarios that you have." -- Den Delamarski "There's an MCP server for everything. Like, this list is massive. I'm actually like, every time I discover these things, I was like, oh, I didn't know there was one for multiviewer." -- Den Delamarski exploring the Awesome MCP Servers list "These are the life hacks you learned only from this podcast. Query all the bugs assigned to me, reassign them to somebody else." -- Den Delamarski joking about Jira MCP automation "Exercise caution, just like you would exercise with any other software and APIs and websites where you log in because the responsibility is kind of on you to figure out what's safe, what's not." -- Den Delamarski on MCP server security Model Context Protocol (MCP): An open protocol that provides a standardized way for AI applications to connect to data sources, services, and tools. It acts as a universal translation layer between LLMs and external systems, similar to how LSP standardized language tooling for editors. MCP Server: A service that implements the MCP specification and exposes tools, prompts, and resources that AI clients can use. Servers can run locally via stdio or remotely via HTTP. MCP Client: An application or editor that connects to MCP servers and makes their capabilities available to LLMs. Examples include VS Code, Cursor, Claude Desktop, and custom applications. Tools: Function calls that MCP servers expose to LLMs, allowing them to perform actions like querying databases, sending emails, or controlling applications. Prompts: Reusable templates that MCP servers provide to guide LLMs on how to interact effectively with their capabilities. Resources: References to databases, files, or API entities that MCP servers make available to LLMs for data access and manipulation. Elicitations: A mechanism for MCP servers to request structured input from users during tool execution, enabling confirmation dialogs, dropdown selections, and data validation. FastMCP: The primary framework within the Python SDK that provides a Flask-like decorator-based programming model for building MCP servers quickly. stdio Transport: A local communication method where MCP servers use standard input/output pipes to exchange JSON-RPC messages with clients on the same machine. Streamable HTTP Transport: A remote communication method where MCP servers expose HTTP endpoints for JSON-RPC message exchange, enabling cloud deployment and distributed architectures. JSON-RPC: The underlying message format used by MCP for communication between clients and servers, abstracted away by SDKs for developer convenience. MCP Registry: A centralized index of available MCP servers, similar to Docker Hub, that enables discovery and installation of servers into MCP clients. Supports both public and private registries. OAuth 2.1: The authentication and authorization standard used by MCP for secure access to protected resources, handled automatically by SDKs. RAG (Retrieval Augmented Generation): A technique that builds vector databases to augment LLM context with additional knowledge, complementary to MCP's action-oriented approach. Spec-Driven Development: A development methodology where projects start with detailed specifications that guide AI coding tools through implementation, promoted by GitHub SpecKit. If you want to dive deeper into the topics covered in this episode, these courses from Talk Python Training can help you build the foundational skills and advanced techniques you'll need. LLM Building Blocks for Python: This concise 1.2-hour course teaches you to move beyond basic "text in, text out" with LLMs, covering structured data, chat workflows, async pipelines, and caching - essential skills for building MCP servers that integrate AI capabilities. Modern APIs with FastAPI and Python: Since FastMCP uses FastAPI-like patterns, this course provides deep knowledge of building modern Python APIs with type hints, async/await, and data validation - all of which directly apply to MCP server development. Async Techniques and Examples in Python: MCP servers heavily use async/await for streaming responses and progress reporting. This course covers Python's entire async ecosystem, from basic async/await to parallel processing and thread safety. Rock Solid Python with Python Typing: Type hints are fundamental to MCP servers and structured output with Pydantic. Learn how to use Python's typing system effectively, which powers frameworks like FastAPI and FastMCP. Build An Audio AI App: This course combines AI, FastAPI, and MongoDB to build real applications - a perfect companion for creating MCP servers that work with audio content, transcripts, and multimedia data. The Model Context Protocol represents a fundamental shift in how we build AI-powered applications. Rather than creating custom integrations for every data source and service, MCP provides a universal standard that works across LLMs, editors, and agentic tools. The Python ecosystem has embraced MCP with remarkable speed, delivering a developer experience that feels as natural as Flask or FastAPI while handling the complexity of JSON-RPC, streaming, and authentication behind the scenes. What makes MCP truly powerful is its composability. You can connect multiple servers to a single client, enabling LLMs to coordinate sophisticated workflows across different services. The registry ecosystem is exploding with servers for everything from 3D modeling in Blender to analyzing Formula 1 telemetry to automating Jira tickets. Yet beneath this diversity lies a consistent, well-designed protocol that makes both building and consuming MCP servers straightforward. For Python developers, now is the perfect time to explore MCP. The barriers to entry are low - you can have a working server in minutes. The community is active and welcoming, with good first issues available for contributors. The use cases span every domain imaginable, from enterprise data integration to creative hobby projects. Whether you're building the next generation of AI agents or simply want to give your AI tools access to your custom data, MCP provides the plumbing that just works. As Den put it, "MCP can do anything - it's just a pipe. What you do with that pipe is up to you." 00:00 On this episode, we're digging into the Model Context Protocol, or MCP. 00:04 Think LSP for AI. Build a small Python service once, and your tools and data show up across 00:11 editors and agents like VS Code, Claude Code, and more. My guest, Den Delamarski from Microsoft, 00:17 helps build this space and keeps us honest about what's solid versus what's just shiny. 00:23 We'll keep it practical, transports that actually work, guardrails you can trust, 00:27 and a tiny server you could ship this week. 00:29 By the end, you'll have a clear mental model and a path to plug Python into the internet of agents. 00:36 This is Talk Python To Me, episode 526, recorded September 30th, 2025. 00:43 Talk Python To Me, yeah, we ready to roll. 00:46 Upgrading the code, no fear of getting old. 00:48 Async in the air, new frameworks in sight. 00:51 Geeky rap on deck. 00:52 Quarth crew, it's time to unite. 00:54 We started in Pyramid, cruising old school. 00:57 lanes. Had that stable base. Yes. Welcome to Talk Python To Me, the number one podcast for Python 01:02 developers and data scientists. This is your host, Michael Kennedy. I'm a PSF fellow who's been coding 01:07 for over 25 years. Let's connect on social media. You'll find me and Talk Python on Mastodon, 01:13 Bluesky, and X. The social links are all in the show notes. You can find over 10 years of past 01:19 episodes at talkpython.fm. And if you want to be part of the show, you can join our recording 01:24 live streams. That's right. We live stream the raw uncut version of each episode on YouTube. 01:30 Just visit talkpython.fm/youtube to see the schedule of upcoming events. And be sure to 01:36 subscribe and press the bell so you'll get notified anytime we're recording. This episode is sponsored 01:41 by Posit Connect from the makers of Shiny. Publish, share, and deploy all of your data projects that 01:47 you're creating using Python. Streamlit, Dash, Shiny, Bokeh, FastAPI, Flask, Quarto, Reports, 01:54 dashboards, and APIs. Posit Connect supports all of them. Try Posit Connect for free by going to 02:00 talkpython.fm/posit, P-O-S-I-T. And it's brought to you by Nordstellar. Nordstellar is a 02:07 threat exposure management platform from the Nord security family, the folks behind NordVPN, 02:13 that combines dark web intelligence, session hijacking prevention, brand and domain abuse 02:19 detection, and external attack surface management. Learn more and get started keeping your team safe 02:24 at talkpython.fm/nordstellar. Hey, I want to take just a minute and talk to you guys. I just 02:31 released a really cool new course called Agentic AI Programming for Python Developers and Data 02:36 Scientists. You've heard me mention a couple times on the podcast how I've had some incredible success 02:42 with some of these Agentic AI coding tools. I hear people talking about how they're not really 02:47 working for them. And then I look at the results that I'm getting and think, wow, that's something 02:53 that would have taken two weeks. It's built in two hours and it's well factored and good looking code. 03:00 What gives? Why is this difference here? Well, I decided to create this course to share all the 03:06 things that I'm doing with these agentic coding tools with the idea of making you as successful 03:12 and productive as well. Yes, I know we're all tired about hearing about how AI is going to 03:17 change everything for software developers. 03:19 But there are some tools here that will give you truly difference 03:23 making levels of productivity. 03:25 And that's what this course is about. 03:27 So check it out at talkpython.fm/agenticai. 03:31 The links in your podcast player show notes. 03:33 Let's get to the interview. 03:35 Ben, welcome to Talk Python To Me. 03:36 Great to have you here. 03:37 Hello, hello. 03:38 I'm excited to be here. 03:39 I'm a big fan of Talk Python. 03:41 I'm a big fan of you and I'm a big fan of Python. 03:43 So there we go. 03:45 Wow. 03:45 Thank you. 03:46 I've been on your show, Work Item, which was really fun. 03:49 Thank you for having me. 03:50 And now it's time to dive into your expertise. 03:53 I'm going to talk agentic stuff, and especially we're going to talk model context protocol, MCP. 04:01 I think this is one of the really important layers that is kind of invisible, right? 04:05 A lot of the coding agents and coding AI and chat LLMs and all that, 04:10 that's what people think when they hear all these things. 04:13 But there's got to be plumbing, right? 04:15 We're going to talk to plumbing. 04:16 - There has to be, yeah. 04:17 - Nothing is more amazing than plumbing. 04:18 Like we all get excited about plumbing. 04:20 So no. 04:21 - I know. 04:22 - Technology plumbing is cool. 04:25 - Yeah. 04:25 I mean, it's one of those things too, that look at how fast it grew. 04:28 Think about it like last year at this time, like at the time when we were recording 04:32 the work item episode, MCP did not exist. 04:34 - Yeah. 04:34 - Just not a thing. 04:35 - That's wild. 04:35 - And now everybody's talking about MCP. 04:38 Like you talk to any big companies, you talk to like the banks, the healthcare, the gaming, 04:44 Like everybody, everybody cares about MCP. 04:46 That's great. 04:46 It's very great. 04:48 We're going to dive into it. 04:49 Before we do, let's dive into you. 04:51 Give us a quick background on yourself. 04:53 Absolutely. 04:53 So I am Den Delamarski. 04:54 I am a principal product engineer at Microsoft. 04:57 I work in the core AI division. 05:00 So we're focusing on, as the name suggests, AI stuff, but applied to developers. 05:06 So I'm very, very heavily in the developer ecosystem. 05:09 And I'm one of the core maintainers of the Model Converse Protocol. 05:13 So I say one of because there's many of us. 05:15 It's not just me. 05:16 There's many wonderful, talented people way smarter than me. 05:19 And yeah, that's a short intro. 05:21 Okay. 05:22 So when we talk about MCP, you're one of the people helping build it. 05:26 That's incredible. 05:27 That is correct. 05:29 Yeah. 05:29 I try to contribute as much as I can. 05:31 Well, you know, before we move on, just how'd you get into that position? 05:36 Oh, it all started with one of the things that was actually near and dear to my heart, 05:40 which is security and authorization. 05:41 So when MCP first came out, it had a auth spec. 05:45 So we see on the screen right now, Michael is showing the kind of the model context vertical 05:49 specification page. 05:51 But when MCP first started, it had essentially a spec that outlines how to do authorization 05:57 for MCP servers. 05:58 And that spec was a good start, but it made a lot of assumptions about the infrastructure 06:04 and the tooling and how developers build MCP servers that were, I want to say, a little 06:08 flawed at scale. 06:09 So my thought was like, oh, I'll just get some smart people with me and we'll help rewrite this. 06:15 And we asked the MCP folks at Anthropic and they said yes. 06:18 And so we did. 06:19 And I basically like incorporate all the feedback and iterated on it. 06:23 And then again, it's a massive community effort. 06:26 We pushed it out and got it merged in the June version of the protocol. 06:29 And then the folks at Anthropic just reached out and said, hey, do you want to help shape the protocol? 06:35 And here I am helping shape the protocol. 06:37 You seem to know what you're talking about and you sure are participating a lot. Why don't you just hang around? 06:41 Yeah, basically. 06:42 Okay, that's great. And you work at Microsoft. What do you do there? 06:46 That is correct. At Microsoft, I work on developer tools. So think like if you ever use Copilot, if you ever use any, oh, by the way, GitHub spec kit for folks that have not heard about it, we released it like last month. 06:58 But that's something that I helped drive and help maintain is how do you do spec driven development with agentic tools, agentic coding tools? 07:06 Yeah, that's what I do. 07:07 Okay, cool. 07:08 So something that I've started to do a lot when I'm involving AI, I go in like spurts. 07:14 I'll work for a long time, just sort of writing regular. 07:16 And then I'm like, ah, this is really a lot of drudgery, not critical or central to what I'm doing. 07:21 Let me just uncork some agentic AI on it and let it go. 07:25 But one of the things I've started doing a lot, and it has to do with the spec thing that you've touched on here, 07:30 is I will force, I'll pick a really high level model, like a complex smart model. 07:36 And I'll say, I want to plan this out. 07:39 I've given you some ideas, look at the code and let's create a detailed plan 07:42 of what you're gonna do. 07:43 And I'll have it write a markdown file. 07:45 And even though a lot of my projects, I have just a plans folder 07:47 and it's just full of all these different projects. 07:49 You know, maybe they're sort of equivalent to a PR in the end. 07:52 - Yeah. 07:52 - And I'll plan that out really well. 07:54 Then I'll switch it down to a lower model, to a new context and say, let's just do phase one. 07:58 Let's do phase two and knock it out. 08:00 That sounds like a Michael just made up some stuff equivalent of the spec based programming. 08:06 Is that right? 08:06 Like, how does that compare to what you're talking about here? 08:09 It's close. 08:10 It's very close. 08:10 And this is where when when people talk about spec driven development, I want to emphasize 08:14 the fact that there's no one correct approach. 08:17 Like people think that it's like, oh, I'm just going to wait for whatever company is 08:20 going to come out and come up with the right thing. 08:21 Like it all depends on your experience. 08:23 It depends on your models. 08:25 The spec kit project that we launched is our hypothesis, our experiment on how we believe 08:30 And what it does is basically what do you describe? 08:32 You start with a spec. 08:34 You start outlining what and why I am building. 08:37 Then you focus on the technical implementation plan, which is like, OK, now what technology stack I'm using here. 08:43 And then you break that down into tasks, which are basically just consumable chunks that the AI can go and either iteratively or in parallel execute and build the stuff that you want to build. 08:54 So all of it, again, is still an experiment. 08:56 So I'm not by any stretch claiming that what we have is the end of it all or the right way to do this. 09:02 There's many, many ways to do this. 09:04 Okay. 09:04 And you even over on DevBlogs wrote, diving into spec-driven development with GitHub SpecKit. 09:11 That is correct. 09:11 There's also a GitHub blog that I highly recommend folks check out. 09:14 It's actually on the github.blog. 09:17 So you can go there and look for, there you go. 09:19 It's called Spec-Driven Development with AI. 09:21 Get started with a new open source toolkit. 09:23 And we do have an open source toolkit. 09:25 All right. So how is this different than just what I've done? I know I've seen this before. 09:29 Yeah. Okay. Yeah. It just, all it does is think of it as this is the recipe book, 09:34 right? Like if you decided to like, Oh, I want to cook up a new application and you're like, 09:39 well, what's the recipe? Like this bundles the recipe for instance. And by the way, 09:43 here's the box set of ingredients that you can just use to build this. That's what this is. 09:47 That's SpecKit. Okay. Well, very exciting. Let's maybe start to get into the main topic though. 09:54 So MCP servers. 09:56 I've heard this put out as sort of an analogy to the LSP, which I know is, I first heard of it in VS Code. 10:05 I don't know if it came from VS Code. 10:07 Maybe it did, but it's the thing that allows so many different editors to plug into tooling 10:14 like PyLance or Powerfly or ty or a bunch of cool things are coming out around here, 10:19 different implementations of LSPs. 10:22 And I've heard that MCPs are kind of like that for AI. 10:26 Maybe contrast those a bit for people. 10:30 Yeah. 10:30 I mean, if you look at the MCP specification, if you look through the website 10:34 and just peruse through the documentation, you might have like faint echoes of LSP design decisions, 10:40 faint echoes of kind of the LSP architecture. 10:42 But yes, basically think of it this way. 10:44 The way the folks at Anthropic have been describing it, it is USB-C for AI. 10:50 And when I say that is the problem with a lot of the LLMs, a lot of the modern models is the fact that it takes a some amount of time to train them, which means that inherently they get locked into a specific training date, if you will. 11:06 So the corpus of knowledge that gets embedded in them gets locked at a certain date. 11:11 And when you talk to a lot of enterprise customers, you talk to a lot of customers in the wild, 11:15 it doesn't need to be enterprise, by the way, it could be startups, could be hobbies, developers 11:18 like, well, I want to use AI with this fresh data that I have. 11:23 Maybe I have, I don't know, a Dropbox account and I want to use AI to sort my files. 11:27 Or maybe I want to use some data inside Salesforce to go and help me analyze my sales and find 11:34 out outliers and maybe customers I want to focus on. And I just interviewed the people from Nice Guy, 11:40 Nice GUI, and they build robots that cruise around in architectural areas. Like what maybe I want a, 11:48 I want some way to like ask AI, look at how the robots are doing now and then, or see if they're 11:55 busy, find a free one, right? That might be a thing, huh? Yeah. Yeah, no, for sure. Exactly. 12:00 It's like any kind of live data or managed data, something that is more dynamic than the corpus of knowledge that is embedded in these models by default. 12:09 And for those, if I would ask you like, OK, well, let's imagine a world where MCP does not exist. 12:14 How would you go about plugging this data in like into your LLM? Right. 12:19 And like there's different ways to do this. Like people have done like the rags. 12:25 People have done, you know, dump like CSV files and then be like, oh, analyze the CSV file and all these like hacky solutions. 12:31 But it feels like it's not universal. 12:34 It doesn't really work for all cases. 12:35 And something that you've done in one LLM doesn't work in another. 12:39 And now you're locked into this environment. 12:41 So it becomes very hard to manage. 12:43 So MCP is essentially the answer to that. 12:45 MCP says, look, we don't care what data you're connecting to, what applications, what actions. 12:50 we provide you a universal interface by which every single LLM, every single client that 12:57 understands MCP can invoke those primitives, get the data and embed the data in the context that 13:02 you're operating in. And that's another thing, important thing. People think of MCP as the data 13:06 connector, but it's not only a data connector. It's a, I want to call it like a primitive connector 13:11 because you can use MCP with a lot of wonderful things that folks have probably seen already. 13:15 Like I, my favorite example here is Blender MCP. Like for folks that don't know, Blender is a 3D 13:20 modeling tool. And there's an MCP server by which you can actually guide an LLM saying like, I am 13:25 building this like medieval scene with a dragon and the lighting and so, and it goes and it just, 13:30 it builds it for you, right? Through this MCP and MCP is the connective layer between Blender, 13:36 which has its own native API. And then there's the MCP server that the LLM knows how to talk to, 13:42 right? Because the LLM wouldn't know how to like, okay, how do you talk to Blender? How do you, 13:45 how do you go and set up the plugin and whatever the web sockets, whatever they might be using, 13:50 It's super complex, so it needs expertise, but an MCP server is essentially saying, I have these set of primitives that the LLM can invoke at any time, like create polygon or create scene or create sphere, and then based on that information, go and iterate on it. 14:03 So MCP is that adapter. 14:05 Yeah, I see. 14:06 So the LLM or agentic AI or whatever that you're working with, it says, all right, I'm going to talk to Blender. 14:12 Blender says, I have these core ideas, these core building blocks. 14:15 it sort of turns it more into Lego instead of just I'm going to have a saw or whatever I can 14:21 go. Exactly. Okay, I have spheres, I have cylinders, I have squares, I have shading. 14:28 They've asked me to do this. What can I build composing that sort of? Exactly. Precisely. Right. 14:33 So it's you're operating on a set of primitives, right? And this is where you don't even need to 14:37 expose the entirety of the surface of blender API's. You can just say like, oh, I want to have like, 14:42 there's the 10 primitives that I think are the most valuable. 14:44 I'm going to go ahead and use those. 14:46 And out of those, you compose things. 14:48 And maybe there's an advantage to that too, right? 14:50 Maybe you're like, I want to use Blender to create 2D scenes. 14:53 So I'm only going to expose stuff or rotations or whatever that preserves some sort of 2D view of the thing. 14:59 Like it's, we're doing CAD where it's top down from the side. 15:02 Like those are the ways you're going to look at. 15:03 You can't arbitrarily rotate it. 15:04 Yeah. 15:05 So yeah, so essentially like the MCP servers in this case act as a universal translation layer between whatever's downstream 15:11 of the MC server, which can be an application, an API, a database, like anything. And the client, 15:18 which knows like, I know how to talk to MCP and nothing else. I have no idea what's behind. I 15:22 don't know what the REST API you have, what's the authentication authorization logic, 15:26 just give an MCP server. Okay. It sounds a little bit like an API. And by API, I mean, 15:32 yes, most general sense of the word not, oh, it's a REST API. And it makes sure it uses the verbs 15:38 this way. I mean, like anything that you you could sort of call and either get data or cause an 15:43 action that could be a REST API, but it could just be, you know, an OS level API or some something 15:49 like that. Yeah, right. Yeah, totally. I mean, it's all it is just a connective layer. So yeah, 15:55 and people often ask like, well, couldn't you do this with like REST APIs? Couldn't you do this 16:00 with a GraphQL APIs instead? Right? Because like, it's been invented. Why are we creating new 16:05 things. But the thing about this is, even if you look in the world of REST APIs, like think about 16:10 the last time you worked with a REST API from some vendor and then switched another REST API from 16:15 someone, how much of that knowledge was like one-to-one reused or the infrastructure that 16:19 you built or authentication logic? You have like, you have these like 17 different dances by which 16:23 you get the token, right? And MCP essentially is the opinionated version of saying, no, this is how 16:31 you do auth. This is how you do message passing between entities. This is how you expose primitives. 16:37 It's a highly opinionated stack. This portion of Talk Python and me is brought to you by Sentry's 16:43 AI agent monitoring. Are you building AI capabilities into your Python applications? 16:49 Whether you're using open AI, local LLMs, or something else, visibility into your AI agent's 16:55 behavior, performance, and cost is critical. You will definitely want to give Sentry's brand new 17:01 AI agent monitoring a look. 17:03 AI agent monitoring gives you transparent observability into every step of your AI features 17:09 so you can debug, optimize, and control the cost with confidence. 17:14 You'll get full observability into every step of your AI agent. 17:17 That is model calls, prompts, external tool usage, and custom logic steps. 17:23 AI agent monitoring captures every step of an AI agent's workflow 17:27 from the user's input to the final response. 17:30 And your app will have a dedicated AI agent's dashboard showing traces and timelines for each agent run. 17:37 You'll get alerts on model errors, latency spikes, token usage surges, and API failures protecting both performance and cost. 17:46 It's plug-and-play Python SDK integration. 17:49 Open AI for now for Django, Flask, and FastAPI apps with more AI platforms coming soon. 17:55 In summary, AI agent monitoring turns the often black box behavior of AI in your app 18:01 into transparent, debuggable processes. 18:05 If you're adding AI capabilities to your Python app, give Sentry's AI agent monitoring the look. 18:10 Just visit talkpython.fm/sentry agents to get started and be sure to use our code, TALKPYTHON, one word, all caps. 18:19 The link is in your podcast player's show notes. 18:22 Thank you to Sentry for supporting Talk Python and me. 18:25 And so then once implemented them, we call them the hosts like VS Code or PyCharm or cursor or whatever, cloud code, it knows, all right, here's how I inspect the capabilities of this thing. 18:38 Here's how I stream back the responses if it's going to take it 10 minutes to do what I asked it. 18:43 This is how you do it with streaming HTTP APIs and so on. 18:46 Precisely, right? 18:47 Because you only need to then implement once. 18:50 And especially if you use one of the existing MCP SDKs that we're going to be talking down 18:54 the line, like that's the core value prop is like you do it once and it just works. 18:59 You don't need to worry about like, oh, but this other MCP server decided to implement 19:02 their auth in a completely different way. 19:04 What do I do now? 19:05 Yeah. 19:05 Now, if I want to build one of these things, does it does it have to be implemented in 19:11 an LLM or can I build just a traditional FastAPI API that ultimately does queries against 19:19 a database with no, no prompt? 19:22 Yeah, no, I mean, like MCP servers themselves are just essentially entities that they're 19:29 capable of exchanging JSON RPC messages. 19:31 Like you can absolutely, you can write a client that is completely detached from an LLM and 19:35 just invokes tools, right? 19:36 Okay. 19:37 Right. 19:37 Awesome. 19:38 You can, if you want to, I don't know why you would do that, but you 19:41 Absolutely can. 19:42 I'm sure people have a reason. 19:44 So I see a comment out in the audience from Frankie about RAG. 19:48 And also, you mentioned RAG at the beginning. 19:51 You say, well, maybe RAG is not working for you. 19:53 Let's just sort of contrast that a bit, right? 19:56 Like, maybe not everyone knows what RAG is. 19:58 Retrieval Augmented Generation. 20:00 What is this? 20:01 Yeah, essentially, if you have a way for you to optimize, basically, the context for the LLM. 20:07 I'll put it this way. 20:09 in very layman terms, it's like, I have a code base. 20:12 I have a code base that has a number of entities like classes and functions and everything. 20:19 And in a rag, you're essentially building a vector database 20:23 that says like, okay, here's the list of things that exist. 20:26 And then the LLM, you can go and query this thing and find out what exists in this code base. 20:31 So if you make decisions about like, I want to build a authorization component, 20:35 how do I do this? 20:36 Like, okay, well, it can build out that context for itself. 20:39 This is kind of the very basic idea behind the rag. 20:42 Got it. 20:43 So instead of trying to put all the information just into a prompt, 20:46 it has to read every time you can kind of additionally train it on these things 20:50 and then keep the question shorter because it knows the details. 20:54 Right. 20:54 You essentially have a knowledge base that's outside of the primary training set. 20:59 Like that's the core value prop of this is you're augmenting the LLM 21:03 with additional knowledge that you have in the context that you're operating in. 21:07 Okay. So something I've wanted to build for a while, and I do intend to, but we'll see if I 21:13 ever get there, is something where people could go and have like an AI conversation with this 21:18 episode, for example, right? With something on the podcast, I've got 10 years of transcripts. 21:23 Yeah. 21:23 You know, like over a million words, I'm pretty sure. That doesn't fit in most contexts. And 21:33 thing for Talk Python. Maybe there's an MCP angle that's really interesting. Like, what could I do 21:40 with MCPs in the podcast, do you think? MCPs in a podcast. So one of them, of course, is like querying 21:45 the data, which is I want to make sure that, you know, find me all the episodes where I ever talked 21:51 with Michael about AI, right? It could be one thing. I actually think that because of the richness of 21:59 the MCP capabilities, to me, when it comes to like podcasts, I envision a world where I can use MCP 22:03 piece to edit podcasts. That's my dream of this. And actually, like this is something that I've 22:08 been experimenting with because I haven't fully wrapped around kind of like how exactly that would 22:15 look like. But one of the things that I do, like as I'm sure you do when you edit the podcast, 22:18 you know, you have to go through it, generate the transcript, clean up things, then make sure that 22:24 you add timestamps, select the most interesting parts about the podcast. So can I potentially go 22:30 and say, okay, here's where my MP3 file is. 22:34 Can you go and generate transcript, clean it up, 22:37 and then find me the most interesting parts about this and then produce me a report 22:42 that I can then use to maybe like a HTML-based web app and I can just like a one-click save like publish, right? 22:49 And to me, like the value of maybe the MCP connector here is that maybe I can plug it in behind the scenes 22:54 with like FFmpeg to go and convert the MP3 into a WAV file and then use whisper to go and generate the transcript and then go and extract things for it 23:03 right and for a lot of these pieces of the tasks that you need to do you would imagine that you 23:07 would have a different tool inside my mc server which is tool is one of the primitives that 23:11 basically a lm invokes and it says oh let me generate the transcript and there's a tool that's 23:15 called generate transcript and it's gonna have that to produce a transcript and it's like okay 23:19 there's another tool that says yeah yes you could give just give the lm an episode number 200 or 23:26 something. And it could go to your podcast MCP server and say, transcript for 200, even if it 23:33 doesn't exist, it'll figure it out and generate it, that kind of stuff. Yes. And also the wonderful 23:38 thing about LLMs and MCP servers is that you're not actually using just one MCP server, right? So 23:43 I might have an MCP server for myself that is basically, like I said, the one that generates 23:47 transcripts, you know, creates a landing page in my podcast website. And then based on that content, 23:54 there's also next steps. Now I have an MP3, I want to upload that MP3 to Cloudflare, where I host my 23:59 podcast. So there may be a Cloudflare MCP server that the LM is going to invoke and say, I need to 24:04 now upload this. And then it's going to invoke the other MCP server, right? So you have this 24:08 basically stack of MCP servers that you can start using one with another. And that's where the 24:12 superpower comes from. Like you're not just using one application and saying like, okay, hold on, 24:15 let me, let me finish a task for podcast production. Then I'll do other things. Like it can chain 24:19 things together and then say, oh, and by the way, there's an MCP server maybe for audio conversion 24:23 that produces like 10 variations of the format. 24:26 Let me invoke that. 24:27 And then you're going through this process. 24:28 Yeah, I think that's one of the really big, hints at one of the really big differences 24:32 between just using a chat LLM versus some of the agent tool using types of things, right? 24:39 The ability to say, now I have to accomplish this task. 24:42 And I know I figured out there is some way I'm capable of accomplishing that, right? 24:46 Either that's to list a directory, to look for a file or to communicate with the Cloudflare MCP 24:52 that we talked about and so on. 24:55 Yeah, it's power is composability. 24:57 I'll put it this way. 24:58 It's the fact that you can compose things together and have them work together 25:01 based on the prompts that you have and scenarios that you have. 25:04 Okay, cool. 25:05 So imagining the Cloud Player MCP thing exists, your podcast preparation MCP thing exists. 25:12 How does my AI know? 25:14 Let's keep it real basic. 25:15 Let's say I'm using Claude Code, but we could plug this into others, 25:19 but just even something just terminal-based, no UI or whatever, like, yeah, it just is it going to discover them just out of the blue? 25:26 Probably not all of them. You got to point it at them. And yeah, how does it know which ones it's 25:30 allowed to use in this context? Right? Like, how do I get it so I can actually use one of these? 25:34 And we'll talk about maybe building them. Yeah. So for MCP servers themselves, you add them 25:39 explicitly to your host or your client, whatever that might be. VS Code, cloud code, cloud desktop, 25:44 doesn't matter. So you explicitly say, I want to use my podcast MCP. I want to be using my 25:49 Cloudflare MCP server. I want to use my, I don't know, Descript MCP servers to remove the ums and 25:54 uhs from the podcast, right? So you would essentially go through some means in that client, 25:59 on your client of choice, to go and add those MCP servers. Now, the question is, how do you 26:03 discover those MCP servers? So there's various places where you can go to. We just launched the 26:08 MCP registry that is nothing short other than an API that indexes all of the available MCP servers 26:15 that are out there, right? 26:17 So we're looking right now at a blog post on the MCP blog. 26:20 It's called Introducing the MCP Registry that got published September 8th of this year. 26:24 So not that long ago, but basically- 26:26 22 days or something like that. 26:28 And when you say we, you're talking the official 26:30 model context protocol.io working group. 26:33 Yes, yes. 26:34 The model context protocol folks, and there's a bunch of them 26:36 that were specifically focused on the registry, right? 26:39 And you see them in the authors like David Sariapara, Adam Jones, 26:44 But they essentially were in charge of kind of building this out. 26:47 And the registry is a centralized API, essentially, that aggregates an index of MCP servers that are out there. 26:54 So you can use the registry inside your client, whatever client you might be using, to find MCP servers for what you want. 27:00 Maybe there is a Playwright MCP server. 27:02 Maybe there is a Perplexity MCP server. 27:05 So it's all coming from the registry. 27:06 Okay. 27:07 Sounds a little bit like Docker Hub. 27:09 Kind of. 27:10 Yes. 27:10 And just like Docker Hub, you actually don't need Docker Hub to install an MCP server. 27:14 or in this case, like a Docker container, right? 27:15 Like you can just go to random GitHub repos and find somebody to build an MCV server 27:19 for what you're trying to do, and you can just plug it in. 27:21 - Yeah, interesting. 27:22 Yeah, that's how I use Docker Hub by not using Docker Hub for all the stuff I build, 27:26 but you know, I get the foundations. 27:27 - I know it exists. 27:29 - Exactly, I'm like, ah, but I'm gonna build it here. 27:32 It also has the concept of public and private registries. 27:35 - Yes, yeah. 27:36 So public registry is essentially something that like GitHub, 27:40 by the way, maintains their own registry, right? 27:42 So it's public and you can just go and discover MCP servers through the GitHub registry or the public registry. 27:48 Also, we know that MCP servers are used within different companies. 27:51 You might have, let's say, some data that you're locking in behind seven gates that only certain people can access. 27:59 You can build internal MCP servers. 28:01 And for those things, you ship internal private registries where you can say, no, no, no. 28:06 I want my folks in my company to only access these servers and nothing else. 28:11 Right. 28:11 Sure. 28:12 Yeah, that makes sense. Is there a place that I can go to the model context protocol registry, the MCP registry and like browse it like you can? 28:20 Yeah. Yeah. So right now you can't browse it through a UI, but you can look at other registries that can consume some of the content from here. 28:29 So like I believe GitHub registry is one of the consumers. 28:32 So you can look at I think it's gethub.com slash MCP. 28:35 There we go. 28:36 Yeah. 28:36 Okay. 28:37 And you can see some of the registries and you can see like if you click on one of the install buttons is going to like allow you 28:42 to take it directly into like VS Code and then just bring it in and install it 28:46 in the context of your editor. 28:47 - Okay, yeah, very nice. 28:49 So some of these are like web crawling, Notion. 28:52 Okay, I know Notion just added a big agentic AI thing and I've seen a lot of pushback. 28:58 There's probably a lot of happy users who just use it, but people are like, why is this in my way? 29:02 I just wanna work with this. 29:04 But you know, if you were, it'd be really cool to maybe plug that in instead of going, 29:08 we're gonna try to use the API to download this embedded database with the information. 29:13 - Exactly. 29:14 - Like you just talk to it, right? 29:15 - Exactly. 29:16 That's again, what I like about MCP is that if I want to connect to Notion to get my notebook 29:22 and some notes from my standup meetings, I don't need to worry about how they structure their API 29:26 and how to use auth or something. 29:27 Just install the Notion MCP and then ask the alum, pull the latest notes and summarize them for me. 29:32 And then it's gonna know. 29:33 - It's their LinkedIn one. 29:34 Their API is so bad. 29:35 - Oh. 29:37 - Oh. 29:37 - Oh, it makes me sad. 29:39 For any LinkedIn people watching this, we need to have a LinkedIn MCP server. 29:42 - Yes, I think so. 29:44 It might save me. 29:45 Okay, very interesting here. 29:47 I think people should come here and just kind of poke around. 29:50 You can see there's a lot of, a lot of interesting things that I think might 29:53 spark some ideas. 29:55 - Yeah. 29:55 - As you start to play with it, you know, like Postman. 29:58 So I guess one of the problem, well, not one of the problems, 30:00 one of the things you're gonna want to deal with is, a lot of these I see here, 30:04 LaunchDarkly, Postman, Atlassian, Notion, and so on. 30:08 You got to pass things like I am this person. 30:11 Therefore, I want to see my information, not other people's or only public. 30:16 I got to see private info, but mine. 30:18 There's a whole security side. 30:19 And I think that's kind of how you got pulled into it, right? 30:22 Yeah. 30:22 Oh, yeah. 30:23 Yeah. 30:23 So for these things. 30:25 Yeah. 30:25 We just put like an API key in GitHub and you just check that in and just use that when you're trying to. 30:30 Don't do that. 30:31 Don't put API keys in GitHub and check them in. 30:33 What can be done, so starting with the latest spec of MCP that, again, shipped in June, there is a formal way for services to do authorization. 30:42 So it's based on OAuth, OAuth 2.1. 30:45 I know that there's people listening that's like, oh, no, did you just say OAuth? 30:49 I have to learn OAuth now. 30:50 You don't. 30:51 Again, there's a lot of libraries that do this. 30:53 If you're an MCP server developer, it's solved for you. 30:55 If you're an MCP server consumer, you don't even need to think about it. 30:58 So when you connect an MCP server, as a consumer, you'll essentially have the ability to log in with your credentials. 31:04 So if an MCP server, for example, for like we saw Chromo and we're like MongoDB, that's on the screen here. 31:10 If I use the MongoDB server and I want to connect to a database, usually they provide you a way to either one. 31:16 You go into your MCP server config and you say, I will give you an API key if your server is using an API key. 31:22 Or if it's using OAuth, then you can just essentially snap to using OAuth the standard flow. 31:28 Your client is going to bootstrap the authentication flow. 31:31 You're going to go to the box, enter your credentials, log in. 31:33 The client is going to store the tokens, and then you access the server with your credentials as you getting access to your data, not something else. 31:41 One thing that looks really interesting, and there's an example of it right here with the Nux. 31:46 Never written a Nux app in my life, but here we have. 31:49 I have one that helps you understand your Vite Nux app. 31:53 One of the things that I think could be really interesting and probably MCPs could play a really important role is we have these huge foundation models, OpenAI and Cloud Opus and so on, that are generally knowledgeable about the whole world and are big, expensive to train. 32:09 But I can see a future where we get good enough to have a bunch of small models. 32:13 Like this is the Vue.js model. 32:17 If you need to know Vue.js, it's as good as anything, but it runs on your computer in a gig of RAM because it's just trained so specifically on Vue. 32:26 And I feel like maybe you could MCP your way together like, well, I'm using this tech stack. 32:31 So we're going to click together a bunch of things that don't provide data, but provide information about what your architecture or something like that. 32:39 What do you think? 32:39 Yeah, I mean, I think it can go both ways, right? 32:41 Like there's a specialized model. 32:42 And there's an argument for saying that the more general scenarios would always work best. 32:47 Like there's, I think there's always two camps of those folks that I talk to. 32:50 I personally think that I think for certain things, there is a tremendous amount of value 32:55 for hyper centralized or hyper local models. 32:59 I'll give an example, right? 33:00 Like I want to organize the photos on my machine. 33:05 Like maybe I have a lot of duplicates that, you know, because when you take photos of your modern cell phones, 33:09 like just click, click, click, click. 33:10 and then you have like 10 images of your dog and you're like, they're kind of the same, 33:13 but I want to pick the best one. 33:14 Like from a privacy standpoint, like I don't want to send that off to some server 33:18 remotely somewhere with my photos, which, you know, there's like family photos. 33:22 There's all sorts of like stuff that I do not want to send off to some remote server. 33:25 For those things, I want to use a local model. 33:27 And maybe there's an MCP server that allows me to basically like, 33:30 oh, I can find the photos and then crop them and like add some metadata or remove metadata 33:35 or whatever I want to do, right? 33:36 So for those things, I absolutely see the value in these like local models 33:40 where I can just say, I want it to be very good at this one specific task and that task only. And I 33:45 will never use this photo model for web app creation, but photos is going to be darn good. 33:49 And I think there's a lot of value for that. And if you augment it with MCP, I think it's 33:53 superpowers right there. Yeah, it does seem like it could be. It could be this little step would 33:58 benefit from a local model, but I don't want to constrain the entire problem solving to a local 34:03 model. Right. I think that's kind of the problem. Like I use LM Studio a lot and I've got, for 34:08 For example, I have the open AI 20 billion parameter open weights model that I actually 34:13 program against. 34:14 And it does all sorts of cool stuff for me, but I don't use it for my general work because 34:17 it's either too slow because it's on my Mac mini or I just want something that is better, 34:23 right? 34:23 Yeah. 34:24 And so if you're going to just start a, like I'm using this model to solve this problem, 34:28 that might not be the final outcome where we end up, right? 34:34 This portion of Talk Python To Me is brought to you by Nordstellar. 34:37 Nordstellar is a threat exposure management platform from the Nord security family, 34:41 the folks behind NordVPN that combines dark web intelligence, session hijacking prevention, 34:47 brand abuse detection, and external attack service management. 34:51 Keeping your team and your company secure is a daunting challenge. 34:55 That's why you need Nordstellar on your side. 34:57 It's a comprehensive set of services, monitoring, and alerts to limit your exposure to breaches 35:03 and attacks and act instantly if something does happen. 35:07 Here's how it works. 35:08 Nordstellar detects compromised employee and consumer credentials. 35:12 It detects stolen authentication cookies found in InfoStealer logs and dark web sources 35:18 and flags compromised devices, reducing MFA bypass ATOs without extra code in your app. 35:24 Nordstellar scans the dark web for cyber threats targeting your company. 35:28 It monitors forums, markets, ransomware blogs, and over 25,000 cybercrime telegram channels 35:34 with alerting and searchable context you can route to Slack or your IRR tool. 35:39 Nordstellar adds brand and domain protection. 35:42 It detects cyber squats and lookalikes via visual, content similarity, and search transparency logs, 35:49 plus broader brand abuse takedowns across the web, social, and app stores to cut the phishing risk for your users. 35:56 They don't just alert you about impersonation, they file and manage the removals. 36:00 Finally, Nordstellar is developer-friendly. 36:03 It's available as a platform and an API. 36:06 No agents to install. 36:08 If security is important to you and your organization, check out Nordstellar. 36:11 Visit talkpython.fm/nordstellar. 36:13 The link is in your podcast player's show notes and on the episode page. 36:17 Please use our link, talkpython.fm/nordstellar so that they know that you heard about their service from us. 36:23 And you know what time of year it is. 36:25 It's late fall. 36:26 That means Black Friday is in play as well. 36:29 So the folks at Nordstellar gave us a coupon, BlackFriday20, that's BlackFriday, all one word, all caps, 20, two zero, that grants you 20% off. 36:38 So if you're going to sign up for them soon, go ahead and use BlackFriday20 as a code and you 36:43 might as well save 20%. It's good until December 10th, 2025. Thank you to the whole Nord security 36:50 team for supporting Talk Python To Me. For sure. And especially because for a lot of the generalized 36:55 models, you're like, no matter how you look at this, you're not going to have the computer 36:58 resources anywhere near what like open anthropic has right so like in terms of speed and quality 37:04 what are you going to get you might get some like fine-tuned examples where some scenarios work very 37:09 very well but i think ultimately if we look at the general use case these generalizable models 37:14 are going to be ahead yeah i i definitely agree as well but i hadn't really considered how mcps 37:19 might allow you to use the really high-end models to compose specialized not quite as generally 37:24 smart but specialized versions of different things it could be yeah mcp can do anything mcp again is 37:31 just it's it's a pipe what you do with that pipe is up to you yeah well let's talk about how one 37:37 might build such pipes with uh with python so there's actually a model context protocol github 37:45 organization within there they have the python dash sdk the official python sdk for the mcp servers 37:51 and clients. So that's also interesting, the clients bit. So maybe we could kind of like, 37:56 there's a lot of concepts and things here, and I don't want to dive too much into code, 38:01 but maybe we could work our way through some of the concepts and some of the steps of building 38:06 such a thing. Yeah, totally. Well, I mean, it all starts from just getting the SDK, right? And this 38:11 is for like anybody that's using Python. You can just get it through pip or uv. I'm a big fan of 38:16 the folks at Astral. I think they're doing a fantastic job with uv and uvx. Like I use it for 38:20 or get up spec kit. 38:21 So, you know, uv add MCP, MCP CLI. 38:25 And there you go, you can be on your way. 38:27 It's as simple as that. 38:28 - Yeah, okay, that'll do it. 38:29 And then, yeah, you can specify like the CLI options or whatever kind you want. 38:34 - Yeah, yeah. 38:35 And also it's using fast MCP. 38:37 Are you familiar with fast MCP? 38:39 - No, I know some projects with fast in it, but not MCP. 38:42 - Yeah, so fast MCP is basically, think of it like FastAPI for MCP. 38:46 It's essentially like allowing you to compose MCP servers faster because it has a lot of the primitives baked in. 38:51 So things like authorization, which can be like kind of a pain point, but if you use 38:55 Fast MCP, it makes it a little easier. 38:57 And Fast MCP is a integral part of the Python SDK story for the actual like official Python 39:03 SDK. 39:03 Right. 39:04 The programming model looks like it would feel quite familiar to anyone who knows the 39:10 Flask API or beyond. 39:12 I think it's just, you know, a little sidebar. 39:14 I think it's really interesting how Flask is quite popular, but it's also spawned almost 39:20 every single web app after it has kind of borrowed its programming model. 39:25 So even if you're not exactly using Flask, if you're using Litestar or FastAPI or whatever, 39:30 you're still kind of doing that kind of programming. 39:32 And it's the same here, right? 39:34 You create an MCP as the app, you say @mcp.tool or @mcp.prompt and you put these onto functions 39:42 and they now become webized. 39:44 Yeah. Isn't that like, okay, like I am not like I write Python, but I'm not a Python expert. I'm 39:50 sorry, Brett Cannon, if you're watching this. But like, we'll take that part out, don't we? 39:55 As a stream's life. That's okay. So like these, like the do you call in Python, do you call them 40:02 decorators? Or is it like attributes like in C#, it's attributes. Yeah, in C#, it's attributes. 40:07 You do it with square brackets. In Python, it's decorators and you do it with the @ symbol. 40:11 Okay, so the decorators themselves. Look at the simplicity of this. Look at the screen right now of a sample where we're looking at the actual Python SDK repo. And one of the samples, you literally have a Python function, you have def add, and there is your arguments, you would pass you a function, like two integers. And then all you need to do to make that a tool that an LM can invoke is just add that at mcp.tool decorator. That's it. You're not going and crafting elaborate JSON RPC envelopes and converters and all these things. 40:41 like all the stuff is done for you add a decorator boom you have a tool that's it yeah it's simple 40:45 it's yeah it's really really simple to program and there's actually some fairly complicated 40:50 data exchange stuff going on like streaming partial results as they come in because 40:56 we're all used to two things ai requests taking a real long time but b that you see the little dots 41:03 thinking thinking and periodically like some stuff that's coming by to like oh yeah okay i see where 41:07 it's going. I don't know what it's going to come up with, but at least we could see it's working, 41:11 right? So to sort of keep that flow going, you've got the streaming style, right? 41:15 Exactly. And all of this is like, again, I'm looking at the sample. It's so, 41:20 the way I would describe it as a delightful developer experience. If I'm a developer, 41:24 I focus on writing the core functions. I don't have to worry about like, well, 41:27 how do I make this into a tool? Put a decorator on. That's how you make it a tool. 41:31 Yeah. Excellent. So I have this server and you mentioned that it's fast, 41:37 FastAPI or flask like how do I host it once once I call run or whatever I do on it yeah then what 41:44 I know I probably don't put it straight on the internet maybe I do I don't know so there's two 41:47 types of servers that you can have you can have local mcp servers and local mcp servers are 41:53 essentially just a local application think of it running like a console app or like your regular 41:57 python script and what it does it there might be referred to you might hear like they're called stdio 42:02 for standard input output. 42:03 And it's using basically native OS constructs to talk between processes, right? 42:08 The MCP client and the server. 42:10 So again, it's still JSON RPC, but JSON RPC over SDDIO pipes. 42:14 So the other one is streamable HTTP. 42:17 And streamable HTTP, it's again, MCP server that can be hosted somewhere in the cloud. 42:22 It can be hosted on your own home lab server if you want to, and you give it an IP address. 42:26 You can be hosted in AWS or Azure, GCP, doesn't really matter. 42:31 So for those servers, the JSON-RPC messages are basically done through the HTTP pipe with some set of HTTP conventions. 42:37 That's kind of where it is. 42:39 There's no constraint as to where you have to host it. 42:42 It's whoever supports running Python can host your MCP server. 42:47 Right, okay. 42:48 So I could put it behind Nginx or Caddy or whatever. 42:51 Like toss it into a container and put it somewhere. 42:54 Like it's totally fine. 42:55 Okay. 42:55 You know, you talked about all these sort of different, like, private, but online, but not quite online, you know, with, like, HomeLab and stuff. 43:02 I just want to give a shout out to Tailscale. 43:05 Like, have you? 43:05 Oh, yes. 43:06 Have you Tailscaled lately? 43:07 Oh, it is so good. 43:08 It is wonderful. 43:09 I love Tailscale. 43:10 It's my go-to thing. 43:12 And I'll tell you this. 43:12 Like, do you remember the days when you had to work? 43:14 This episode is not sponsored by Tailscale, for the record. 43:18 Should be. 43:19 Should be. 43:19 They can reach out. 43:20 Yeah. 43:21 Yeah. 43:21 Hey, Tailscale. 43:22 Yeah. 43:23 Yeah. 43:23 Michael Talks is awesome. 43:24 You should sponsor it. 43:25 But anyway. 43:25 So TailScale is great. 43:26 Like, remember the olden days when you had to like set up an open VPN 43:29 and be like, let me generate the keys. 43:31 Let me email myself the key so I can open it on the iPhone 43:34 and then add the key and then go through this process. 43:36 And it's just like, oh, man, such a pain. 43:39 Such a pain. 43:40 TailScale, just like flip the switch and you're in. 43:42 Yeah. 43:43 Magic. 43:43 Or DynDNS where you... 43:46 Oh, yeah. 43:46 Because you have to bind your IP address to their domain 43:50 and then you have to run this agent to constantly update it. 43:53 Oh, yes. 43:54 Yeah, the agent goes down change. Well, then there's also all the NAT firewall and your local machine on your local 43:59 network change. You're like, no, it doesn't work. Oh, it's my machine on my, we had a power outage 44:04 when the router rebooted, I got a new IP. It just, it was so bad. And so why is this sidebar worth 44:10 going into here, folks? Because this is what's called an overlay network. And so you can put it 44:16 up on your iPhone, you can put it on your laptop, you can put it on your desktop, you can put it on 44:19 your Linux server if you want. And it basically exposes all of those things over a network that's 44:25 like a VPN, but the rest of your behavior is just not VPN. 44:29 It's just normal, but it just brings those in in just the most incredible way. 44:33 So for example, I have a high-end Mac mini here that I use for the streaming that I'm talking to you on now. 44:39 It has tons of RAM and it has a pro chip and stuff. 44:41 So I just have my one LLM and my database servers running there. 44:46 And when I'm doing dev work, instead of every, you know, my laptop, 44:50 my other machine always running in replica, it all just goes here to this. 44:54 And even if I'm in a coffee shop or I'm out for work, right? 44:57 As long as TailScale is running, I do a database query or an LLM call through an API and it 45:02 just hits this thing. 45:03 Yep. 45:03 Just as if I was here. 45:04 And it's glorious. 45:06 And all that's for free, right? 45:06 There's paid versions, but you can do a lot. 45:08 Yeah. 45:09 You can do a lot for free. 45:10 In their free tier, it's amazing. 45:12 And it's all WireGuard. 45:13 It's all using the most modern secure standards. 45:17 I'll say to me, if you want to access things like, oh, your security camera is at home, 45:21 you do not trust cloud providers to have access to your security home cameras, put them in your 45:26 local network and use tail scale. And then you can go somewhere, flip the switch in your phone, 45:30 boom, you can see your cameras from remote without exposing them to the broader internet. It's 45:33 amazing. You don't open up any ports on your router, nothing like that. So why am I going on 45:39 such an excited diversion? One, it's just so awesome. And I just recently discovered it this 45:43 year. So it's a thing, but it's relevant. If you've got an MCP server and you want to keep it local, 45:49 even local from your server back to like your company or something potentially, 45:54 you could hide all that stuff behind tail scale. 45:57 It's like transparently available, but also there's, there's no ports. 46:01 There's no open internet. 46:02 The easiest way to secure stuff is to just not let the internet have at it. 46:06 Yeah. 46:06 Yep. 46:07 No, exactly. 46:07 This is what I've been actually doing with one of my friends who was setting up a 46:11 home lab and they were experimenting with some of the MCP servers for like, I believe 46:15 it was like setting up for like a Minecraft server. 46:17 And we just tossed them on the same server. 46:19 And because it's tail scale and then I connect them to the clients with a IP that tail scale gives me, 46:24 it just magically works. 46:25 And I didn't need to expose this to the internet. 46:27 I didn't need to pay for any cloud providers in somebody's home lab. 46:30 It's just there. 46:31 - Yeah, yeah. 46:31 And you don't need to use SSH across it. 46:33 Like you can just, it's just there. 46:35 It's all super, super good. 46:36 Okay, back, back to what I was asking. 46:39 - Back to MCP. 46:40 - Back to MCP, but I was asking, you know, how do you run it? 46:43 And you're like, I could, we could run it on our home lab or on a Raspberry Pi or something, right? 46:47 this tail scale thing is a way to sort of really nicely make that available to 46:51 you, make that available to your, your AI agents or whatever without going, well, 46:57 now how do I host it on like a server for real? Yeah. Okay. 47:01 So let's see. That is the registry. There we go. 47:04 So I want to talk about a couple of things. We talked about tools. Yeah. 47:08 And we talked about there's prompts, there's resources. 47:11 Let's maybe go through each one real quick. 47:13 These are all just decorators you put on functions, but they're all, 47:16 They're slightly different. 47:17 Yeah. 47:17 What is the purpose of a tool and why would I do that? 47:19 Yeah. 47:20 A tool basically is a function call, right? 47:22 It's like your tool equals function. 47:25 That's the way I describe it. 47:26 Like that's basically like, hey, I want the LLM to go do something. 47:29 What does it need to do? 47:30 And this is where like get weather, give me the sum. 47:34 It needs to go and do this. 47:35 This is what a tool is. 47:37 It's a primitive that does something. 47:39 Insert record into database or whatever. 47:40 This looks like you could probably find and replace Fast MCP with FastAPI and tool with 47:47 get. 47:47 Yeah. 47:47 Yeah. 47:48 And you or a post or something. 47:50 And you might be able to pretty much that is kind of the closest match, right? 47:54 Yeah, exactly. 47:55 Yep. 47:55 Yep. 47:55 That's that's basically it. 47:56 I want to invoke some kind of action. 47:59 Go do that action for me. 48:00 Right. 48:00 And at least in the examples, there's no AI in the action. 48:04 It's just. 48:05 No. 48:06 Just an AI. 48:06 The AI knows that it needs to invoke the action. 48:08 Like if I go to the LLM and say, send an email to Michael that says the podcast was awesome. 48:13 And then it's going to go in and say, oh, let me go find the tool that is capable of sending emails. 48:19 Oh, there's a tool from like, I don't know, like MailChimp. 48:21 Okay, let me go do that. 48:23 There's a tool in the MailChimp MCP server that says send email. 48:26 That sounds great. 48:27 I'm going to use that to send the email, right? 48:29 And that tool itself doesn't use AI behind the scenes. 48:31 It's just like, it's just going to do SMTP send email. 48:34 That's all it does. 48:35 Yeah. 48:35 Awesome. 48:36 It also has other examples of data exchange along the way, I guess. 48:41 Absolutely. 48:41 And you can pass in this context, and then the context can start pushing updates and information back. 48:49 Yes. 48:49 To the user, right? 48:50 And report progress back. 48:52 So, for example, if your email takes like seven hops, it's like, okay, let me first connect to the SMTP server. 48:57 Let me then verify the credentials. 48:58 Like, you can encode that basically if you implement that. 49:02 You might not, but you can implement progress reporting so that the client knows like, 49:06 oh, you're like 30% through your task or you're like 40% through your task now 49:10 because it reports on the progress of what you're doing. 49:13 - Yeah, super cool. 49:14 You can also do structured output, which is pretty interesting. 49:17 And there's many ways in which it can be done, but the number one way, 49:22 as in if it was a ordered list, the first thing would be Pydantic models, right? 49:27 Carrying on the FastAPI analogy here, right? 49:30 - Yep, yep. 49:31 For a lot of these things, again, it's very, like if you're a Python developer, 49:34 a lot of these concepts are gonna be very much familiar to you. 49:36 - Yeah, I think one of the challenges people have often is like structured data versus like I got an LLM answer 49:43 and it's a little different every time and they upgrade the model from 5.1 to 5.15 49:50 and now it does something totally different. 49:52 Like how do I code against this, right? 49:54 And so using structured data can be a big bonus, right? 49:58 - Yeah. 49:58 - Okay, super cool. 50:00 Let's see prompts. Now it's starting to sound AI like. 50:03 Yeah. So this is basically like the description says prompts are reusable templates that help 50:08 elements interact with your server effectively. If you have a server that does, I don't know, 50:13 cooking recipes, it might provide prompts for like, what is a like, what are the steps for a recipe 50:19 and with substitutions where needed. So it allows you to basically pre cook prompts that your server 50:24 might be using. Okay, they might be passing these internally to? Yes, yeah. So 50:30 return to the host AI, you know, there's a lot of AIs involved here. 50:34 Right, you know, essentially like, you're exposing prompt templates, like that's what it is, 50:38 like, and saying like, oh, if you're a user, if you're looking for like, creating a recipe, 50:42 this is a template for that prompt for a recipe. 50:44 Okay, cool. There's also a little bit of a UI component, which is interesting, 50:48 you can have a iconography representation of your actions. 50:53 Yeah, this is relatively new. But basically, for some of them, like, 50:56 bake in some of the icons to just make it easier to differentiate between different actions. Because 51:00 especially again, like different servers can have different tools and there are many tools. And how 51:04 do you like just parse the strings? Like just look at iconography. Yeah. Another thing that it has 51:09 built in support for is working with images. So that's pretty wild. I yeah, I noticed that for a 51:15 lot of the stuff, it's also like it's baked into the these are not necessarily like MCP spec constructs. 51:19 This is more like how the Python SDK exposes them and allows you to operate on them, right? 51:23 because like the fundamental constructs, the primitives are we have tools, 51:27 we have prompts, there is resources, which is another one. 51:30 And the resources is, allows the LM to basically think of it 51:34 as how do you refer to databases or files or entities within an API? 51:41 Those are, there's also elicitations as what Michael is showing right now on the screen. 51:45 So we have elicitations is a way for an MCP server to go to the client 51:49 and say, I want the client to provide me structured input on a specific question. 51:54 Like, hey, can you give us your date of birth? 51:58 And I expect a date. 52:00 Can you give me a date back exactly so I don't need to guess from the LLM context, right? 52:04 Or it can say like, you know, what kind of pet do you have? 52:07 And it can give you a list of options that you can actually have to pick from. 52:11 It's like, oh, dog, you know, pet, reptile, like dog, cat, reptile, whatever. 52:15 It allows you to have that structured controlled input that it's not just you're typing into the chat box, 52:21 but you're selecting from a list that the server asks you to. 52:25 So that's another neat thing that recently got added. 52:27 Yeah, that looks quite interesting. 52:29 And it has to do a little bit with the WebSocket type of exchange as well, right? 52:34 Not exactly, but it's going along. 52:37 You've asked it something. 52:38 While it's working on that, it's come back and it's asking you to give it more information to carry on. 52:42 Yes, exactly. 52:43 In that sense, right? 52:45 Yep, yep. 52:45 So could this be, I've worked on your request. 52:49 I've used the database MCP or whatever, and I've learned that there's 20 records. 52:54 Do you want to delete them like you asked or do you not want to delete them? 52:57 Yes, yes, exactly that. 52:58 Or it can say, hey, I found like 10 conflicting records. 53:02 Which ones do I need to delete? 53:03 And then you can help and basically do, yeah, right? 53:05 So it asks for structured input so that you don't have to have it guess from whatever you type in the chat. 53:11 Because if you type in the chat, it's like it's non-deterministic, right? 53:14 It could say, oh, delete all the records with the name John Doe. 53:18 And then it's like, oh, I'll delete everything with dough. 53:20 Because somehow, like, that sort of decides, like, oh, no, no, no. 53:23 Jane, come back. 53:24 Yeah. 53:26 So it adds a little bit more structure. 53:28 Yeah, got it. 53:29 And the programming model is super smooth here. 53:32 They did a great job. 53:33 So, for example, you might be doing this elicitation within a tool call. 53:39 And that's an async function, async web function. 53:41 And the way you do it is just await context that elicit some message and schema. 53:46 And then when the person responds, the async thing resumes and off you go, right? 53:51 There's not some nested callbacks and all that kind of business. 53:54 That's a very smooth developer experience. 53:56 I love it. 53:57 Yeah, it definitely is. 53:58 Okay, I do want to talk about some of the popular ones out there 54:04 through an awesome list because I'm just a sucker for awesome, awesome list. 54:08 But is there anything else that I feel that you feel like we should be covering here 54:12 on the SDK? 54:14 Yeah, there's a lot of great work done Python SDK and the FastMCP folks, 54:18 I would say like go through the repo. 54:21 It's getup.com slash model context protocol slash Python dash SDK. 54:26 Go there. 54:27 There's some great samples to get you started. 54:29 And again, we're always open to feedback. 54:31 So if something's like, oh, this was too confusing. 54:33 I didn't understand. 54:34 The team is very receptive to feedback. 54:36 So please let them know. 54:37 Yeah. 54:38 143 contributors. 54:40 Last release five days ago. 54:42 Bunch of PRs, right? 54:44 It looks like it's pretty open. 54:45 You know, Yeah. Oh, yeah. 54:46 Close PRs pretty open to people working. 54:49 Also, it looks kind of very beginner friendly in the sense that the issues are 54:55 tagged with lots of lots of stuff that you could search for, like needs motivation. 55:00 You know, you could go through and come up with some examples and help, even if 55:03 you're not an expert in the SDK, for example. 55:05 Absolutely. 55:06 And there's also, I believe the Python might be using the good first issue too. 55:10 So if you're, if you're a new contributor, you've never looked at it. 55:12 It's like, I like, don't be intimidated. 55:14 There's plenty of- 55:15 Good first issue. 55:16 Good first issue. 55:17 Like there's plenty of things that you can just drop in and see like, oh, I can help with that. 55:21 Yeah, love it. 55:22 Okay. 55:23 You too can be an AI developer. 55:24 I love it. 55:24 Now let's talk about awesome MCP servers. 55:27 Awesome MCP servers. 55:29 This comes to us from the very well-known PunkPi. 55:33 The person behind Glamour.ai. 55:36 Yeah, awesome. 55:37 And 72,000 GitHub stars, no joke. 55:41 So there may be a fad, but maybe people will stick around. 55:43 So this actually has support for a lot of different languages 55:47 and it's got scopes like is this cloud or local or embedded 55:50 and so on. 55:51 But then you scroll down. 55:53 Look at the list. 55:54 Massive. 55:55 The list is, I mean, look at the scroll bar. 55:58 It is massive. 55:59 Yeah, we keep scrolling and scrolling. 56:01 I don't know. 56:02 If I page down full speed and just pin page down, the pinch down button, it's something along the lines of like 56:09 five seconds just to get through the list. 56:11 And these are one per line. 56:12 Mm-hmm. 56:13 You know, it starts out as one should when they're building awesome lists with categories, right? 56:19 Command line, cloud platforms, biology medicine, and bioinformatics. 56:26 There's one for everything. 56:27 I know. 56:28 You want to just jump around a bit and we can see what's here when we riff on it? 56:31 Gaming. 56:32 MCP server for Unity 3D game engine integration for game to own. 56:36 That's kind of cool. 56:37 Go. 56:38 Unity MCP. 56:38 MCP chess. 56:40 An MCP server playing chess against LLMs. 56:42 Do you ever think of like, can I beat an LLM at chess? 56:46 And you want to like just get an MCP server to do that? 56:48 There is one for that. 56:49 I'm starting to feel like it's better to do the local models 56:51 for the chess playing against the one. 56:54 I don't want the really smart ones. 56:56 There's also chess MCP, which is, this is interesting. 56:59 It's not the same as the other one. 57:01 This is access your chess.com player data and records and other public info. 57:06 Yep. 57:06 Right. 57:07 That's kind of cool. 57:08 So if you wanted to say, hey, I'm building something and I would like access to sort of the Kaggle of chess players type of thing, right? 57:15 Like the list of competitive chess results. 57:18 Yeah. 57:18 That's kind of cool. 57:19 Yeah. 57:19 Yeah. 57:20 Yeah. 57:20 I personally have built one for Halo. 57:23 I'm a big fan of Halo, the video game. 57:24 Oh, yeah. 57:25 It's not on the list, which now I need to go and contribute to that list. 57:28 Let's do a PR. 57:30 Like, that's the thing that I have is basically analyze my Halo stats. 57:34 And I'll tell you what, the LLMs are getting really good at analyzing the stats. 57:38 You give them the data, they can make some conclusions. 57:40 Yeah, I bet. Let's just keep it really crazy. Let's do, I was going to do delivery. We'll do that in a moment. Marketing. 57:46 Marketing. Yeah. 57:48 Yeah. So I guess one of the things that looks, I'm after just a very quick first impression, like you're running ads on someone's platform or you're doing marketing on someone's platform, but you want visibility into how that's going. 58:00 So we've got the Facebook ads and PC server. 58:03 We've got the Google ads, MCP server, Amazon ads and so on. 58:07 Right. 58:08 But what else is, yeah, that sounds about like most of it there, I suppose. 58:11 But think of it this way. 58:12 Like if you connect several of these MCP servers to your client and then you connect them 58:17 to all your ads accounts and then say, how are my ads performing 58:20 and which ones of them are the best this past week? 58:23 Right. 58:23 Like I don't need to click around dashboards and figure out like the filters and everything. 58:26 Just ask the LLM, pull the data, make a conclusion. 58:29 Now, you still need to verify the conclusion that make sure it's not hallucinating things. 58:32 But nonetheless, it's kind of cool. 58:34 Yeah, it's very cool. 58:36 So one thing I know I realized now that we skipped over the Python SDK is we talked all 58:40 about the server. 58:41 What about client things? 58:42 If I wanted to create an MCP server that is effectively the composition of some other 58:48 MCP servers, could I do that? 58:49 You absolutely can. 58:51 Nothing stops you. 58:51 Like an MCP server can also act as an MCP client and then connect to other MCP servers. 58:57 Like there's no restriction to that, right? 58:59 Like it's basically, it's very composable. 59:01 And a client for all intents and purposes is basically an entity that can connect to an MCP server, 59:07 which can also be an MCP server. 59:08 It's kind of circular. 59:09 Yeah, yeah. 59:10 It's turtles all the way down, but MCP this time. 59:12 Yeah, it's AI turtles this time. 59:14 So delivery, we just have the DoorDash delivery MCP server. 59:17 Oh man, like who? 59:19 Claude, why is my food not here? 59:22 Have you ever seen those fail videos or whatever? 59:25 I watch weird YouTube stuff with my daughter sometimes and you'll see like cops delivering DoorDash. 59:31 I'm gonna say, sorry, we had to arrest your DoorDash delivery, 59:33 but we were pretty close. 59:34 So we thought we'd just go and deliver your food anyway. 59:36 I mean, I don't know what the server is gonna say, but it could say anything, you know? 59:40 - The police are on their way. 59:42 - Yeah. 59:42 People are generally really appreciative. 59:44 Like, well, thanks for getting me my dinner anyway. 59:47 Let's see what else is out here. 59:49 Got text to speech, which is interesting. 59:52 - Sports. 59:52 - Sports, hell yeah. 59:54 Oh, look at this, Strava. 59:55 Like if you're running or biking, you can use this also to analyze your data. 59:59 There's a lot of MCB servers for data analysis, which is kind of cool. 01:00:02 Okay, I don't even, this one, this is the one that appeals to me. 01:00:05 So Multivewer, this is actually not a thing that I would want, but I think it's interesting. 01:00:11 So Multivewer is a motorsports desktop client. 01:00:14 And what I think it does, it does for IndyCar, WAC, Formula One, 01:00:18 and even like the feeder classes. 01:00:20 I think what it lets you do is put up both an overlay of telemetry onto watching the live stream, 01:00:27 but also put the multiple people up in live streams at the same time or 01:00:32 something like that. 01:00:32 Right. 01:00:33 That's kind of cool. 01:00:34 That's cool. 01:00:35 So the, the reason I don't really like that is I don't watch any of those sports 01:00:38 live. 01:00:39 I record them. 01:00:39 And so I can then pause it and then skip the commercials. 01:00:42 And so this is like for a live stream sort of deal, but the MCP server, 01:00:46 it controls multi viewer for that. 01:00:49 So maybe you could set up an AI that is watching what's going on and switches the views around in the multi-viewer for you. 01:00:58 That's wild. 01:00:58 Or swaps to the most interesting telemetry at the specific moment. 01:01:02 Yeah, listen to the radio. 01:01:03 They start getting all frantic. 01:01:05 Like, all right, we're switching to that view. 01:01:08 Yeah, there's an NCC server for everything. 01:01:10 Like, this list is massive. 01:01:12 I'm actually like, every time I discover these things, like, we're looking at this right now, I was like, oh, I didn't know there was one for multi-viewer. 01:01:17 Like I didn't know what multiviewer is until we talked right now. 01:01:20 Yeah, but wouldn't that be a cool demo? 01:01:22 Yeah. 01:01:22 You know, at a conference, you're like, I know you've all seen the tic-tac-toe one, 01:01:27 but let me show you the final of F1. 01:01:30 Yeah, yeah, yeah. 01:01:30 Or something, right? 01:01:32 Very astute observation, because again, like there's a lot of these like hello world kind 01:01:35 of things like, oh, look, it's kind of neat. 01:01:36 It responded with a thing like, give me a real thing. 01:01:38 This is that real thing. 01:01:39 Yeah, yeah, that's, that's super neat. 01:01:41 All right. 01:01:42 I guess we've got the support one that Lassie and Jira quick chat. 01:01:46 It's whatever you want, right? 01:01:47 That's the one to reduce your boring work. 01:01:49 The GRI MCP server. 01:01:50 Like, you don't want to triage your bugs. 01:01:52 Just let the LLM do it for you. 01:01:53 Hey, can you go and find the things that are most important for me to work on today? 01:01:56 Give me the bug numbers. 01:01:58 Yeah. 01:01:58 Or if you see somebody assign a bug to me, close it. 01:02:01 Yeah. 01:02:01 Yeah, exactly. 01:02:02 Query all the bugs assigned to me, reassign them to somebody else. 01:02:09 Yeah, crazy. 01:02:10 Exactly. 01:02:11 Not a good fit for this person. 01:02:13 Yeah. 01:02:13 No, exactly. 01:02:14 These are the life hacks you learned only from this podcast. 01:02:17 That's right. 01:02:18 It's like, if it involves MCP servers and cool stuff I can code, give it to me. 01:02:22 Otherwise, send it somewhere else. 01:02:24 Send it somewhere else. 01:02:26 All right, Dan. 01:02:27 I think we're getting pretty close on time here in terms of what we got time to cover. 01:02:31 But this is super fun. 01:02:33 Maybe close things out for folks. 01:02:35 They want to get started with MCP servers, either building them, consuming them, building 01:02:39 and consuming them, plugging them into their tool chain. 01:02:43 What do you tell them? 01:02:43 Yeah. 01:02:44 So for folks that wanna build modelcontextprotocol.io, as simple as it gets, go there. 01:02:48 It has guides, tutorials, SDK starters, everything is there. 01:02:52 If you are a consumer of the MCPs and you wanna, hey, I wanna do this like awesome thing with MC servers. 01:02:58 First of all, the GitHub MCP registry that we showed earlier is one of those things 01:03:03 is github.com/mcp, go explore. 01:03:06 And then of course on GitHub, there's plenty of servers that are tagged with MCP. 01:03:10 You can also take a look there. 01:03:11 And there's other registries that also index MCP servers of all sorts, like Glama AI from 01:03:17 Punk Pie that we talked about before. 01:03:19 There's one such registry that you can also look at and see if there's anything that's 01:03:22 of interest. 01:03:24 I will say that as you are exploring MCP servers, exercise caution, just like you would exercise 01:03:30 with any other software and APIs and websites where you log in because the responsibility 01:03:36 is kind of on you to figure out what's safe, what's not. 01:03:40 If you have an MCP server that's like, oh, it's going to read all my iMessages and sort them by importance. 01:03:45 I'm like, yes. 01:03:47 And do you know who built that and where your messages are going? 01:03:50 So be careful. 01:03:51 Are they also scanning for credit card numbers? 01:03:53 Exactly. 01:03:54 Why not? 01:03:55 You messaged somebody with your social security number the other day. 01:03:57 Nice. 01:03:59 Yeah. 01:03:59 So be careful with those. 01:04:00 But I'd say, like, explore them. 01:04:01 And then we are working on formalizing discovery a bit better. 01:04:06 your clients like VS Code and Cursor and Cloud Desktop are going to become better and better 01:04:11 with more discoverability affordances. Awesome. All right. Thank you so much for coming on the 01:04:14 show. I learned a ton. I'm sure listeners did as well. And it was a lot of fun. Thank you for 01:04:19 having me. Yeah. See you later. Bye. This has been another episode of Talk Python To Me. 01:04:24 Thank you to our sponsors. Be sure to check out what they're offering. It really helps support 01:04:28 the show. This episode is sponsored by Posit Connect from the makers of Shiny. Publish, 01:04:34 share and deploy all of your data projects that you're creating using Python. Streamlit, Dash, 01:04:40 Shiny, Bokeh, FastAPI, Flask, Quarto, Reports, Dashboards, and APIs. Posit Connect supports all 01:04:47 of them. Try Posit Connect for free by going to talkpython.fm/Posit, P-O-S-I-T. 01:04:54 And it's brought to you by Nordstellar. Nordstellar is a threat exposure management platform 01:04:59 from the Nord security family, the folks behind NordVPN that combines dark web intelligence, 01:05:05 session hijacking prevention, brand and domain abuse detection, and external attack surface 01:05:11 management. Learn more and get started keeping your team safe at talkpython.fm/nordstellar. 01:05:18 If you or your team needs to learn Python, we have over 270 hours of beginner and advanced courses 01:05:24 on topics ranging from complete beginners to async code, Flask, Django, HTML, and even LLMs. 01:05:31 best of all there's not a subscription in sight browse the catalog at talkpython.fm 01:05:36 be sure to subscribe to the show open your favorite podcast player app search for python we should be 01:05:41 right at the top if you enjoy the geeky rap theme song you can download the full track the link is 01:05:46 your podcast player show notes this is your host michael kennedy thank you so much for listening i 01:05:51 really appreciate it now get out there and write some python code 01:06:06 I'm out. Copyright © PDX Web Properties, LLC 2015-2026. All Rights Reserved Made with in Portland, OR, USA
Images (10):
|
|||||
| Langage de codage : quel est le plus utile pour … | https://tic-et-net.org/langage-de-codag… | 1 | Apr 07, 2026 08:00 | active | |
Langage de codage : quel est le plus utile pour le SEO? - Tic et NetURL: https://tic-et-net.org/langage-de-codage-quel-est-le-plus-utile-pour-le-seo/ Description: Certains moteurs de recherche ignorent purement et simplement le JavaScript mal optimisé, alors qu’un simple ajustement HTML peut changer le classement d’une page du tout au tout. Pourtant, des frameworks comme React dominent désormais de nombreux sites à fort trafic, malgré des défis persistants en matière d’indexation.Selon les dernières recommandations de Google, la structure du […] Content:
Certains moteurs de recherche ignorent purement et simplement le JavaScript mal optimisé, alors qu’un simple ajustement HTML peut changer le classement d’une page du tout au tout. Pourtant, des frameworks comme React dominent désormais de nombreux sites à fort trafic, malgré des défis persistants en matière d’indexation.Selon les dernières recommandations de Google, la structure du code source prime souvent sur la seule qualité du contenu. La compatibilité entre langages de programmation et robots d’indexation s’invite ainsi au cœur des stratégies SEO, avec des conséquences directes sur la visibilité organique. Le choix du langage de codage pèse lourd dans le dialogue entre vos pages web et les moteurs de recherche. Google, Bing ou Qwant accordent la priorité à la lisibilité du contenu et à la rapidité d’accès à l’information. Sur ce terrain, le HTML s’impose : il structure le site, trace des repères clairs, guide sans complexité les robots qui sillonnent le web. A découvrir également : Largeur idéale pour votre site web : comment la choisir ? JavaScript met l’accent sur l’expérience utilisateur et l’interactivité. Mais dès que trop de fonctionnalités lui sont confiées côté client, les robots d’indexation peuvent perdre leur chemin. Si le contenu apparaît trop tard ou se trouve masqué, la visibilité chute. Grandir sur le web grâce à une portion de Python côté serveur séduit de plus en plus pour générer du contenu pertinent, mais tout passe par une restitution HTML impeccable. À chaque technologie, son domaine de prédilection : A lire en complément : Zectayaznindus, miroir du web moderne : ce que ce mot révèle sur Google Savoir manier ces langages affine la présentation de l’information, favorise l’accessibilité et ouvre la porte à une indexation rapide. Pour viser haut, impossible d’ignorer les standards imposés par les moteurs, sans jamais oublier la fluidité de navigation. Atteindre les meilleures positions sur Google demande une combinaison pointue de technologies. Le socle reste le HTML : bien structuré, pensé pour guider robots et utilisateurs. Sans lui, les contenus même les plus inspirés passent à côté de leur public. JavaScript, quand il orchestre les animations et personnalise les applications web, devient un atout décisif. Mais il impose sa rigueur : si le contenu s’affiche trop tard, faute d’un bon rendu côté serveur, l’indexation s’en retrouve freinée. Miser sur le server-side rendering aide à contourner ces obstacles, offrant au robot tout ce qu’il est venu chercher. Python a pris sa place dans l’arsenal SEO : génération dynamique de pages, analyses en profondeur, gestion automatisée de la data. Il se glisse derrière chaque stratégie de contenu adaptée au marketing digital, tout comme PHP ou Java qui, moins médiatisés, gardent leurs fonctions clés dans les architectures solides. Pour y voir plus clair, voici une synthèse des forces de chaque langage : À chaque type de projet, s’assurer que le langage retenu communique efficacement avec les moteurs de recherche, qu’il hiérarchise clairement l’info et repousse les freins à l’accès instantané au contenu. La structure du code oriente directement la manière dont les moteurs classent et comprennent un site. Savoir dompter les balises HTML reste le socle d’un référencement solide. Balises meta, titres structurés, attributs alt judicieusement choisis : chaque détail compte quand il s’agit d’expliquer à un robot, ou à une personne en situation de handicap, ce que propose chaque page. L’enchaînement des balises, bien pensé, lisible et organisé, n’a rien d’accessoire. Un texte clarifié par son codage gagne en indexabilité. Sur de nombreux sites, cette exigence de clarté fait la différence. Il suffit parfois d’une structure soignée pour franchir une marche dans les résultats de recherche. Voici quelques principes techniques à intégrer pour renforcer la présence sur Google : Certains CMS proposent des automatismes sur la structuration, mais la main de l’humain demeure précieuse pour ajuster la finesse des balises. Un contenu repensé pour tous les utilisateurs, peaufiné côté technique, trace sa route vers de meilleures positions. La marche rapide du référencement naturel s’accélère, portée par l’irruption de l’intelligence artificielle et la prise en compte des comportements réels. En 2025, chaque adaptation à de nouveaux algorithmes peut signifier plusieurs places gagnées. Les robots des moteurs de recherche privilégient les sites qui assurent une expérience mobile irréprochable, une organisation des contenus limpide et des réponses précises aux besoins utilisateurs. Les mutations s’opèrent autour de trois axes principaux : Collecter, analyser, affiner : aujourd’hui, Python ou JavaScript associés à une analyse fine de l’audience donnent le rythme. Savoir anticiper les attentes, garantir une rapidité d’affichage et rester fidèle à l’intention de recherche forment le vrai terrain de la visibilité future. Ceux qui garderont la main sur la technique et l’agilité sur les usages se donneront toujours une longueur d’avance. Recherche Articles en vogue © 2025 | tic-et-net.org Sign in to your account Identifiant ou adresse e-mail Mot de passe Se souvenir de moi
Images (1):
|
|||||
| SunFounder Fusion AI HAT+ Price, Specs & LLM Support for … | https://www.geeky-gadgets.com/sunfounde… | 1 | Apr 07, 2026 08:00 | active | |
SunFounder Fusion AI HAT+ Price, Specs & LLM Support for Pi - Geeky GadgetsURL: https://www.geeky-gadgets.com/sunfounder-motor-servo-hat/ Description: Meet the SunFounder Fusion HAT+ for Raspberry Pi with 4 DC motor drivers and 12 servo channels, so you can build stable robots and voice assistants Content:
Geeky Gadgets The Latest Technology News 11:15 am December 15, 2025 By Julian Horsey What if your Raspberry Pi could do more than you ever imagined, like powering a humanoid robot, automating your home, or running advanced AI models? With the launch of the SunFounder Fusion HAT+, that vision is now within reach. This innovative expansion board is engineered to transform your Raspberry Pi into a powerhouse of innovation, offering seamless compatibility with models ranging from the latest Raspberry Pi 5 to the compact Zero 2W. Whether you’re a curious beginner or a seasoned developer, the Fusion HAT+ promises to unlock new possibilities in robotics, AI, and smart systems, all at an accessible price point. In this overview, we’ll explore how the Fusion HAT+ stands out with its advanced hardware features and support for leading AI platforms like OpenAI and Gemini AI. From precise motor control to built-in audio capabilities, this board is designed to handle complex, real-world challenges. You’ll discover how it enables creators to build everything from autonomous vehicles to voice-controlled assistants, all while making sure reliable power management and ease of use. If you’ve ever dreamed of pushing the boundaries of what your Raspberry Pi can achieve, this might just be the upgrade you’ve been waiting for. TL;DR Key Takeaways : SunFounder has introduced the Fusion HAT+, a highly versatile expansion board designed to elevate Raspberry Pi projects to new heights. This innovative board is compatible with a wide range of Raspberry Pi models, including the latest Raspberry Pi 5, as well as earlier versions like the Raspberry Pi 4, 3B+, and Zero 2W. With its robust features and broad compatibility, the Fusion HAT+ is ideal for applications in robotics, home automation, and artificial intelligence (AI). Whether you are a beginner or an experienced developer, this expansion board provides a powerful platform to bring your creative ideas to life. The Fusion HAT+ is designed to integrate effortlessly with multiple Raspberry Pi models, making sure flexibility for users with different setups. Its compatibility extends to the Raspberry Pi 5, 4, 3B+, and Zero 2W, making it a practical choice for both new and existing Raspberry Pi users. Beyond hardware, the Fusion HAT+ supports leading Large Language Models (LLMs) such as OpenAI, Gemini AI, and DeepSeek AI. This capability allows developers to incorporate advanced AI functionalities into their projects, allowing innovations in voice recognition, machine learning, and intelligent automation. The Fusion HAT+ is equipped with a range of advanced hardware features that make it suitable for diverse applications. These include: These features provide the foundation for building intricate systems, from robotic arms to AI-powered devices, offering users the tools they need to tackle complex challenges. The Fusion HAT+ is tailored to meet the demands of modern robotics and smart systems, making it an excellent choice for developers aiming to create innovative projects. Its capabilities enable the development of: With its compatibility with AI platforms and robust hardware, the Fusion HAT+ enables users to explore innovative solutions in robotics and automation. Power stability is a critical factor in any project, and the Fusion HAT+ addresses this with a well-designed power management system. It includes a rechargeable 7.4V, 14.8Wh battery and USB Type-C charging for convenience. Additional features such as power protection mechanisms, battery level indicators, and a safe shutdown button ensure uninterrupted operation and safeguard your hardware from potential damage. These features make the Fusion HAT+ a reliable choice for long-term and demanding projects. To simplify the integration process, the Fusion HAT+ comes with detailed documentation and step-by-step tutorials. These resources cover both hardware and software aspects, making it easier for users to incorporate the board into their projects. Python libraries are provided to assist seamless interaction with AI platforms, and users can access technical support through active maker community forums and troubleshooting guides. This comprehensive support ensures that users of all skill levels can maximize the potential of the Fusion HAT+. The Fusion HAT+ is designed to cater to a wide audience, from beginners exploring the basics of robotics to seasoned engineers working on advanced AI systems. Its user-friendly design and extensive support make it accessible to individuals aged 10 and above, including educators, hobbyists, and professionals. Priced at just £26.17 GBP, the Fusion HAT+ offers exceptional value for its extensive features and compatibility. This affordability makes it an attractive option for anyone looking to expand their Raspberry Pi’s capabilities without breaking the budget. The SunFounder Fusion HAT+ is a feature-rich expansion board that enables Raspberry Pi enthusiasts to push the boundaries of their projects. With advanced hardware, seamless AI integration, and robust support, it is well-suited for applications in robotics, home automation, and beyond. Whether you are building a smart car, a robotic arm, or an AI-powered assistant, the Fusion HAT+ provides the tools and resources to turn your ideas into reality. Its combination of affordability, versatility, and innovative features makes it an indispensable addition to any Raspberry Pi toolkit. Source: SunFounder Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, Geeky Gadgets may earn an affiliate commission. Learn about our Disclosure Policy.
Images (1):
|
|||||
| AWS Launches Strands Labs for Experimental AI Agent Projects - … | https://www.infoq.com/news/2026/03/aws-… | 1 | Apr 07, 2026 08:00 | active | |
AWS Launches Strands Labs for Experimental AI Agent Projects - InfoQURL: https://www.infoq.com/news/2026/03/aws-strands-agents/ Description: Amazon Web Services has introduced Strands Labs, a new GitHub organization created to host experimental projects related to agent-based AI development. Content:
A monthly overview of things you need to know as an architect or aspiring architect. View an example We protect your privacy. QCon San Francisco (Nov 16-20): What's next in AI? What's next in software? Learn from the teams already doing it. Register Now Facilitating the Spread of Knowledge and Innovation in Professional Software Development Unlock the full InfoQ experience by logging in! Stay updated with your favorite authors and topics, engage with content, and download exclusive resources. Soroosh Khodami discusses why we aren't ready for the next Log4Shell. He shares live demos of dependency confusion and compromised builds, explaining how minor oversights gift hackers total system access. He explains the value of Software Bill of Materials (SBOM), dependency firewalls, and shifting security left to build resilient DevSecOps cultures that protect the modern software supply chain. Andrew Harmel-Law and a panel of expert architects discuss the shifting practice of architecture in 2025. They explain strategies for communicating technical debt to stakeholders, the benefits of decentralized decision-making through ADRs, and the career paths of modern leaders. The panel shares insights on bridging the gap between mobile and backend teams to ensure a holistic system. In this episode, Thomas Betts and Adi Polak talk about the need for context engineering when interacting with LLMs and designing agentic systems. Prompt engineering techniques work with a stateless approach, while context engineering allows AI systems to be stateful. How can you focus in a sea of results from a large regression test suite? This article describes a stochastic approach that relies on some degree of redundancy in your CI regression test set. This approach does not guarantee you will catch every bug every time, but it gives you your best bet of not missing the subtle signatures of all the bugs uncovered by your CI regression test suite runs. Franka Passing discusses the architectural shift of Duolingo’s 500+ backend services to Kubernetes. She explains the move toward GitOps with Argo CD, the transition to IPv6-only pods, and the "cellular architecture" used to isolate environments. She shares "reports from the trenches" on managing developer trust, navigating AWS rate limits, and productionizing early adopter services. Join Luca Mezzalira for this 5-week online cohort. Master socio-technical architecture leadership. Register Now. Learn how leading engineering teams run AI in production—reliably, securely, and at scale. Early Bird ends April 14. Learn what's next in AI and software, from teams already doing it. Early Bird ends April 14. InfoQ Homepage News AWS Launches Strands Labs for Experimental AI Agent Projects Mar 12, 2026 2 min read by Daniel Dominguez Amazon Web Services has introduced Strands Labs, a new GitHub organization created to host experimental projects related to agent-based AI development. The initiative is linked to the Strands Agents SDK, an open-source toolkit that allows developers to build AI agents using Python or TypeScript. Strands Labs includes three projects: Robots, Robots Sim, and AI Functions. Each project explores different aspects of agent development, ranging from robotics integration to code generation workflows. The Strands Robots project focuses on connecting AI agents with physical hardware. It provides a unified interface that allows agents built with the Strands framework to interact with sensors and robotic devices. In demonstration examples, AWS shows an agent controlling an SO-101 robotic arm using the NVIDIA GR00T model. GR00T is a vision-language-action (VLA) model that takes camera images, robot joint positions, and language instructions as input and generates joint actions as output. The Robots project also integrates with LeRobot, an open framework designed to simplify interaction with robotics hardware and datasets. By combining LeRobot abstractions with VLA models, developers can build agents that process visual data, interpret instructions, and perform physical actions. The Strands Robots Sim project provides a simulation environment for robotics experimentation. Instead of using physical hardware, developers can run agents inside physics-based environments that simulate robot behavior. The system supports environments from the Libero robotics benchmark and can integrate VLA policies through an inference service. The simulator collects observations from cameras and robot joints and feeds them to policy models that produce motor commands. The environment can record simulation runs as video and supports iterative control loops for debugging or experimentation. The third project, AI Functions, explores a different approach to writing software with AI agents. Instead of implementing a function directly, developers define the intended behavior using natural language descriptions and validation conditions written in Python. A decorator called @ai_function triggers the Strands agent loop, which generates code to satisfy the specification and validates the result using pre- and post-conditions. If the validation fails, the system retries automatically. The framework can generate implementations that parse files, perform data transformations, or execute other tasks while returning standard Python objects such as Pandas DataFrames. Community reactions to the announcement have focused on the robotics integration and the experimental nature of the projects. Clare Liguori, senior principal engineer at AWS posted on X: I think of Strands Labs as a playground for the next generation of ideas for AI agent development, from how to build agentic robots to how to make our everyday applications more agentic. Others highlighted the AI Functions experiment as an example of a growing interest in specification-driven programming, where developers define behavior and validation rules while agents generate the underlying code. Design engineer John Hanacek shared: Robots animated by agentic frameworks alongside humans, sharing a perception and awareness layer to coordinate actions. AWS stated that Strands Labs will continue to expand with additional experiments contributed by different Amazon teams. The organization is intended to function as a testing ground for ideas related to agent orchestration, robotics integration, and agent-assisted software development before they potentially move into the core Strands SDK. Presented by: Karthik Ranganathan - Co-CEO & Co-Founder at YugabyteDB, and Aditi Gupta - Snr. GenAI/ML Specialist Solutions Architect Save your seat A round-up of last week’s content on InfoQ sent out every Tuesday. Join a community of over 250,000 senior developers. View an example We protect your privacy. A round-up of last week’s content on InfoQ sent out every Tuesday. Join a community of over 250,000 senior developers. View an example We protect your privacy. Reliability rules have changed. At QCon London 2026, unlearn legacy patterns and get the blueprints from senior engineers scaling production AI today. Join senior peers from high-scale orgs as they share how they are: InfoQ.com and all content copyright © 2006-2026 C4Media Inc. Privacy Notice, Terms And Conditions, Cookie Policy
Images (1):
|
|||||
| Chinese humanoid robots could soon beat the fastest human ever: … | https://interestingengineering.com/ai-r… | 1 | Apr 06, 2026 16:00 | active | |
Chinese humanoid robots could soon beat the fastest human ever: ReportURL: https://interestingengineering.com/ai-robotics/chinese-humanoid-robots-could-beat-fastest-human Description: Chinese humanoid robots could soon surpass human sprint speeds, with experts predicting 100m runs despite key technical hurdles. Content:
From daily news and career tips to monthly insights on AI, sustainability, Aerospace, and more—pick what matters and get it in your inbox. Access expert insights, exclusive content, and a deeper dive into engineering and innovation. Engineering-inspired textiles, mugs, hats, and thoughtful gifts We connect top engineering talent with the world's most innovative companies. We empower professionals with advanced engineering and tech education to grow careers. We recognize outstanding achievements in engineering, innovation, and technology. All Rights Reserved, IE Media, Inc. Follow Us On Access expert insights, exclusive content, and a deeper dive into engineering and innovation. Engineering-inspired textiles, mugs, hats, and thoughtful gifts We connect top engineering talent with the world's most innovative companies We empower professionals with advanced engineering and tech education to grow careers. We recognize outstanding achievements in engineering, innovation, and technology. All Rights Reserved, IE Media, Inc. Chinese humanoid robots may soon rival or surpass human sprinting speeds. Chinese humanoid robots are rapidly advancing toward a milestone once reserved for elite human athletes: surpassing world-record sprint speeds. According to Wang Xingxing, founder of robotics firm Unitree Robotics, humanoid machines could soon outpace Olympic champion Usain Bolt in the 100-meter dash—a prospect that signals both technological progress and the growing ambition within embodied AI. Speaking at the Yabuli China Entrepreneurs Forum on Tuesday, Wang noted that while robots still trail humans in sprinting performance today, the gap is narrowing quickly. With improvements in mechanical design, control systems, and AI-driven coordination, researchers are beginning to push humanoid robots into performance territories that were once thought to be uniquely human. Recent developments highlight how close robots are getting to elite athletic benchmarks. In February, Zhejiang University and Shanghai-based JingShi Technology unveiled a full-size humanoid robot named “Bolt,” capable of reaching a peak running speed of 10 meters per second. The team described it as the fastest full-size running humanoid robot built to date. For context, Usain Bolt’s world-record 100-meter sprint of 9.58 seconds translates to an average speed of roughly 10.44 meters per second, with peak speeds slightly higher during the race. “In a few months, by around mid-year, humanoid robots globally — especially in China — may run faster than humans,” Wang said. “Their 100-meter sprint times could drop below 10 seconds,” he continued. While humanoid robots have not yet exceeded this threshold in real-world conditions, the margin is becoming increasingly narrow. If robots were to consistently achieve or surpass these speeds, it would represent more than just a technical achievement, it would mark a symbolic shift in how machines compare to human physical capabilities, particularly in dynamic and high-performance tasks like sprinting. Achieving high-speed locomotion in humanoid robots is far more complex than simply increasing motor power. Engineers are solving challenges related to balance, coordination, energy efficiency, and real-time decision-making. Unlike wheeled or quadruped robots, humanoids need to replicate the inherently unstable process of bipedal running. This involves precise synchronization between sensors, actuators, and control algorithms to maintain stability at high speeds. Even minor errors in timing or force distribution can lead to falls or inefficient movement. Despite these gains, Wang emphasized that the industry is still far from achieving a breakthrough comparable to generative AI systems like ChatGPT. The primary limitation lies in generalization, the ability of robots to perform reliably across diverse, unpredictable environments. While humanoid robots can achieve near-perfect performance in controlled or pre-trained settings, their capabilities often degrade when conditions change. Variations in terrain, obstacles, or external disturbances can significantly impact performance, making real-world deployment challenging. This gap highlights a broader issue in embodied AI: translating controlled, lab-based success into robust, adaptable real-world functionality. As development continues, the race between humans and machines may soon extend beyond symbolic comparisons. Whether robots ultimately surpass human sprinters or not, their rapid progress underscores a larger transformation, one where physical intelligence is becoming as critical as digital intelligence in the evolution of AI. Atharva is a full-time content writer with a post-graduate degree in media & amp; entertainment and a graduate degree in electronics & telecommunications. He has written in the sports and technology domains respectively. In his leisure time, Atharva loves learning about digital marketing and watching soccer matches. His main goal behind joining Interesting Engineering is to learn more about how the recent technological advancements are helping human beings on both societal and individual levels in their daily lives. Exclusive content, expert insights and a deeper dive into engineering and tech. No ads, no limits. Exclusive content, expert insights and a deeper dive into engineering and tech. No ads, no limits. Premium Follow
Images (1):
|
|||||
| NHS Digital Selects Scandit’s Clinical Quality Computer Vision Technology to … | https://multichannelmerchant.com/press-… | 0 | Apr 06, 2026 00:00 | active | |
NHS Digital Selects Scandit’s Clinical Quality Computer Vision Technology to Digitise the Covid Testing ProcessContent: |
|||||
| Scandit raises $150M to automate inventory scanning with computer vision | https://venturebeat.com/2022/02/09/scan… | 0 | Apr 06, 2026 00:00 | active | |
Scandit raises $150M to automate inventory scanning with computer visionDescription: Scandit, a company developing algorithms to help companies manage inventory by scanning labels, has raised $150 million in capital. Content: |
|||||
| Humanoid Robots Steal Spotlight at Silicon Valley Tech Summit | https://www.techjuice.pk/humanoid-robot… | 1 | Apr 06, 2026 00:00 | active | |
Humanoid Robots Steal Spotlight at Silicon Valley Tech SummitURL: https://www.techjuice.pk/humanoid-robots-steal-spotlight-at-silicon-valley-tech-summit/ Description: Humanoid robots took center stage at a Silicon Valley summit, highlighting rapid advances that could reshape work, care and industry. Content:
Humanoid robots emerged as one of the most talked about technologies at a major Silicon Valley summit this week, signaling how quickly machines designed to move and interact like humans are moving from experimental labs into real world applications. At the event, technology companies, robotics startups, and artificial intelligence researchers demonstrated humanoid robots capable of walking, grasping objects, responding to voice commands, and navigating complex environments. These demonstrations underscored how advances in AI models, sensors, and mechanical design are converging to accelerate the development of robots that can operate in spaces built for people. Industry leaders at the summit said humanoid robots represent a critical next step in automation. Unlike traditional industrial robots that work in controlled factory settings, humanoid robots are designed to function in homes, hospitals, warehouses, and offices without requiring major infrastructure changes. This flexibility could make them suitable for tasks ranging from elder care and logistics to manufacturing support and disaster response. Several speakers highlighted how recent progress in large language models and computer vision has dramatically improved robots’ ability to understand instructions and adapt to unfamiliar situations. Instead of following rigid programming, newer humanoid systems can learn from observation, interpret spoken language, and make decisions in real time. Researchers noted that this shift brings robots closer to being general purpose assistants rather than single task machines. However, experts at the summit also acknowledged significant challenges ahead. Power efficiency, safety, affordability, and reliability remain major hurdles before humanoid robots can be deployed at scale. There are also ongoing debates about ethical considerations, workforce displacement, and how societies should regulate machines that closely mimic human behavior. As AI systems become more capable, companies are increasingly looking to give intelligence a physical form. While widespread adoption may still be years away, the momentum on display suggests humanoid robots are no longer a distant concept but an emerging reality that could reshape how humans work and live. Abdul Wasay explores emerging trends across AI, cybersecurity, startups and social media platforms in a way anyone can easily follow. Apple approves a driver that enables Nvidia eGPUs on Arm Macs, marking a shift in GPU support for Apple Silicon devices. A major EU data breach exposed emails, user data, and internal records after hackers accessed cloud systems and leaked files online. Large-scale theft has hit the Sukkur to Multan section of the M5 Motorway, where multiple high-tech surveillance and speed cameras have gone missing across nearly. China has officially moved up the delivery timeline of its J-35 stealth fighters to Pakistan. Initially set for late 2026, the advanced aircraft will now. Premier Pakistan technology news website with special focus on startups, entrepreneurship and consumer products. © 2026 TechJuice.PK – All rights reserved.
Images (1): |
|||||
| Xiaomi’s CyberOne humanoid robot with sweat glands in bionic hands | https://interestingengineering.com/ai-r… | 1 | Apr 05, 2026 16:00 | active | |
Xiaomi’s CyberOne humanoid robot with sweat glands in bionic handsURL: https://interestingengineering.com/ai-robotics/xiaomi-cyberone-humanoid-robotic-hand Description: Full-palm tactile sensing, liquid cooling channels, and high dexterity aims to improve humanoid robot's bionic hands for long operations. Content:
From daily news and career tips to monthly insights on AI, sustainability, Aerospace, and more—pick what matters and get it in your inbox. Access expert insights, exclusive content, and a deeper dive into engineering and innovation. Engineering-inspired textiles, mugs, hats, and thoughtful gifts We connect top engineering talent with the world's most innovative companies. We empower professionals with advanced engineering and tech education to grow careers. We recognize outstanding achievements in engineering, innovation, and technology. All Rights Reserved, IE Media, Inc. Follow Us On Access expert insights, exclusive content, and a deeper dive into engineering and innovation. Engineering-inspired textiles, mugs, hats, and thoughtful gifts We connect top engineering talent with the world's most innovative companies We empower professionals with advanced engineering and tech education to grow careers. We recognize outstanding achievements in engineering, innovation, and technology. All Rights Reserved, IE Media, Inc. The robot uses artificial sweating to cool the powerful motors. Xiaomi has unveiled a major redesign of its CyberOne humanoid robot, introducing a new full-palm tactile bionic hand. It combines high-density sensing, improved dexterity, and an unusual liquid cooling system inspired by human sweating. The update was detailed through Xiaomi Technology’s official WeChat account, where the company outlined how the new hand design moves closer to human-scale manipulation and long-duration industrial operation. The redesigned hand is significantly smaller than the previous version, with Xiaomi reducing the hand’s volume by 60 percent to achieve a 1:1 human scale. The dimensions are based on a 1.73-meter (5.6 feet) human hand model, which the company says helps improve sim-to-real transfer when training robotic manipulation systems in simulation before deploying them in the real world. The new bionic hand also introduces a major increase in dexterity. Xiaomi said the configuration increases active degrees of freedom by 83 percent, bringing the robot’s bionic hand closer to the human hand standard of roughly 22 to 27 degrees of freedom required for complex manipulation tasks. A key part of the redesign is full-palm tactile sensing. The sensing area reportedly covers around 8,200 square millimeters, allowing the robot to detect pressure and contact across the entire palm rather than just the fingertips. This is significant because many robotic hands rely primarily on vision systems and fingertip sensors. Full-palm tactile sensing allows the robot to continue manipulating objects even when cameras are obstructed or when precise force control is required, such as in assembly tasks. Xiaomi also reported durability improvements, with the hand surviving more than 150,000 grasping cycles, which is substantially higher than the roughly 10,000-cycle failure threshold commonly seen in tendon-driven robotic hands. One of the most unusual features of the new CyberOne hand is its liquid cooling system, designed to address overheating in high-density motors used in dexterous robotic hands. According to Xiaomi, the hand’s compact motors can generate significant heat during continuous operation. To manage this, the company integrated 3D-printed metal liquid cooling channels inside the hand that function similarly to sweat glands. Thermal management is a major challenge in humanoid robotics, particularly for robotic hands, which must pack multiple motors, sensors, and transmission systems into a very small space. Overheating can reduce motor performance, shorten component lifespan, and limit continuous operation time. Xiaomi also shared early industrial testing results for the new hand. In automotive assembly tests, CyberOne reportedly achieved a 90.2 percent success rate for nut-fastening tasks within a strict 76-second factory cycle over three hours of operation. To support broader research in robotic manipulation and embodied AI, Xiaomi said it used tactile gloves for direct data collection and has open-sourced the TacRefineNet framework along with 61 hours of raw tactile data. The company suggests that combining full-palm tactile sensing with active liquid cooling could help enable humanoid robots to operate continuously in industrial environments, where dexterity, reliability, and thermal stability are critical for deployment. Atharva is a full-time content writer with a post-graduate degree in media & amp; entertainment and a graduate degree in electronics & telecommunications. He has written in the sports and technology domains respectively. In his leisure time, Atharva loves learning about digital marketing and watching soccer matches. His main goal behind joining Interesting Engineering is to learn more about how the recent technological advancements are helping human beings on both societal and individual levels in their daily lives. Exclusive content, expert insights and a deeper dive into engineering and tech. No ads, no limits. Exclusive content, expert insights and a deeper dive into engineering and tech. No ads, no limits. Premium Follow
Images (1):
|
|||||
| Video: Figure humanoid robot stuns Shawn Ryan in striking demo | https://interestingengineering.com/ai-r… | 1 | Apr 04, 2026 00:00 | active | |
Video: Figure humanoid robot stuns Shawn Ryan in striking demoURL: https://interestingengineering.com/ai-robotics/shawn-ryan-tests-figure-ais-humanoid Description: Shawn Ryan tests Figure AI’s humanoid robot as CEO Brett Adcock reveals how the AI-powered machine walks, balances, and works. Content:
From daily news and career tips to monthly insights on AI, sustainability, Aerospace, and more—pick what matters and get it in your inbox. Access expert insights, exclusive content, and a deeper dive into engineering and innovation. Engineering-inspired textiles, mugs, hats, and thoughtful gifts We connect top engineering talent with the world's most innovative companies. We empower professionals with advanced engineering and tech education to grow careers. We recognize outstanding achievements in engineering, innovation, and technology. All Rights Reserved, IE Media, Inc. Follow Us On Access expert insights, exclusive content, and a deeper dive into engineering and innovation. Engineering-inspired textiles, mugs, hats, and thoughtful gifts We connect top engineering talent with the world's most innovative companies We empower professionals with advanced engineering and tech education to grow careers. We recognize outstanding achievements in engineering, innovation, and technology. All Rights Reserved, IE Media, Inc. Figure AI’s humanoid robot walks beside Shawn Ryan in a real-world demo. In a recent episode of the Shawn Ryan Show, host Shawn Ryan came face-to-face with something that until recently belonged mostly to science fiction. The former U.S. Navy SEAL and CIA contractor walked alongside a fully functioning AI-Powered humanoid robot. The machine, Figure 03, developed by robotics startup Figure AI, is designed to perform many of the same tasks humans do, from folding laundry and washing dishes to working in factories and logistics centers. During the walkthrough demonstration with Figure AI founder and CEO Brett Adcock, Ryan interacted directly with the robot, testing its balance, movement, and responsiveness. The brief tour followed a much longer interview on the show, during which Adcock explained how his company is racing to build general-purpose humanoid robots that could eventually become commonplace in workplaces and possibly homes. The short demonstration video shows the Figure 03 robot walking beside Ryan, guided entirely by AI. According to Adcock, the robot stands about 5 feet 6 inches tall and weighs roughly 130-135 pounds, placing it close to human proportions. Unlike earlier robotics systems that relied heavily on scripted movements, the robot’s locomotion and actions are controlled through a neural network. As Adcock explained during the demo, the walking motion is generated by AI rather than traditional coded instructions. The robot contains around 40 joints, powered by electric motors equipped with sensors that help it maintain balance and perform tasks. Ryan, impressed by the light, foam-like exterior, questioned the robot’s durability and its ability to recover if it fell. Fall recovery, being an essential feature for robots operating in real-world environments, is a critical part of any humanoid evaluation. And while Figure is trained in simulation for dynamic stability, strength, and coordination, Addcock remarked that it totally depends on how the body falls, and that sometimes they even end up breaking necks. Another feature highlighted in the walkthrough is the robot’s hands. Cameras embedded in the palms help the machine visually track objects as it grasps them, while tactile sensors in every fingertip measure pressure during contact. This combination enables the robot to perform dexterous tasks. According to Adcock, Figure’s machines can lift boxes weighing up to 40 pounds and even fold a T-shirt. During the demonstration, Ryan jokingly asked whether the robot could crush his hand when shaking it. Adcock reassured him that the machine’s force control prevents such scenarios. While the demonstration showcased the robot’s movement and interaction, the podcast’s longer conversation focused on Figure AI’s broader ambitions. Founded in 2022, the company aims to develop general-purpose humanoid robots capable of replacing or assisting human labor in industries facing worker shortages. Adcock said early deployments are focused on commercial environments such as manufacturing and logistics. The company already works with several major partners, including BMW, where the robots are being tested in manufacturing settings. Figure is also collaborating with large logistics and real estate organizations to evaluate how humanoid robots could integrate into industrial workflows. Inside the robot’s torso sits most of its computing hardware, including GPUs and battery systems that power the machine. According to Adcock, a fully charged robot can operate for about four to five hours, after which it can recharge in roughly one hour. One unusual design feature is the charging system. Instead of plugging in cables, the robot charges wirelessly through pads embedded in its feet, allowing it to replenish energy simply by standing on a charging mat. Adcock compared the development of humanoid robots to the early years of smartphones, predicting rapid improvements with each generation of hardware. “This will look like the iPhone lineup,” he told Ryan, suggesting each new version will bring major improvements in capability. Figure AI’s ambitions go beyond building a handful of demonstration machines. According to Adcock, the company has already set up a manufacturing facility capable of producing robots on an increasing scale. When the production line is running, the factory can currently assemble one robot roughly every 90 minutes. In the long term, the company hopes to dramatically increase that output. He suggested that humanoid robots could eventually reach production levels comparable to consumer electronics, potentially reaching millions of units per year. The ultimate goal, he added, is a future where robots become as ubiquitous as smartphones, possibly even approaching a “robot for every human.” Humanoids are increasingly appearing outside the lab. Last week, a Figure humanoid robot made an appearance at a White House event focused on artificial intelligence, greeting attendees and demonstrating its capabilities. The widely publicized moment signaled how quickly humanoid robotics is moving from experimental prototypes into the public spotlight. The technology is increasingly entering mainstream discussion. Kaif Shaikh is a journalist and writer passionate about turning complex information into clear, impactful stories. His writing covers technology, sustainability, geopolitics, and occasionally fiction. A graduate in Journalism and Mass Communication, his work has appeared in the Times of India and beyond. After a near-fatal experience, Kaif began seeing both stories and silences differently. Outside work, he juggles far too many projects and passions, but always makes time to read, reflect, and hold onto the thread of wonder. Exclusive content, expert insights and a deeper dive into engineering and tech. No ads, no limits. Exclusive content, expert insights and a deeper dive into engineering and tech. No ads, no limits. Premium Follow
Images (1):
|
|||||
| Robot od Figure AI w programie Shawna Ryana. Humanoid zachwyca … | https://www.chip.pl/2026/04/robot-od-fi… | 1 | Apr 04, 2026 00:00 | active | |
Robot od Figure AI w programie Shawna Ryana. Humanoid zachwyca swoimi możliwościamiDescription: W jednym z najnowszych odcinków „Shawn Ryan Show”, prowadzący – znany z twardego stąpania po ziemi weteran – stanął twarzą w twarz z modelem Figure 03. I co Content:
Startup, który w zawrotnym tempie goni marzenia o robotycznej rewolucji, znów udowodnił, że ich humanoidy są już gotowe, by wyjść z laboratoriów prosto do fabryk, a w przyszłości także do naszych domów. Największym zaskoczeniem podczas demonstracji, którą poprowadził założyciel firmy Brett Adcock, był sposób poruszania się robota. W przeciwieństwie do maszyn starszej generacji, które poruszały się według sztywno zaprogramowanych skryptów, Figure 03 opiera się na „ruchach generowanych przez AI”. Oznacza to, że za każdy krok, uścisk dłoni czy stabilizację sylwetki odpowiada sieć neuronowa, a nie linijki tradycyjnego kodu. Robot o wzroście około 168 cm i wadze blisko 60 kg posiada proporcje zbliżone do ludzkich, co pozwala mu na operowanie w środowiskach zaprojektowanych dla nas. Shawn Ryan, testując responsywność maszyny, zwrócił uwagę na jej delikatne, piankowe wykończenie i zapytał o trwałość. Adcock szczerze przyznał, że choć roboty są trenowane w zaawansowanych symulacjach, upadki w realnym świecie wciąż bywają ryzykowne – czasem kończą się nawet „skręceniem karku”. Niemniej jednak, postęp w koordynacji ruchowej jest kolosalny. Figure 03 posiada aż 40 stawów napędzanych silnikami elektrycznymi, a jego dłonie to majstersztyk inżynierii: Robot jest w stanie podnosić skrzynie o wadze do 18 kg, co czyni go idealnym kandydatem do pracy w centrach logistycznych. Co ciekawe, system ładowania jest całkowicie bezprzewodowy – robot uzupełnia energię (która starcza na 4-5 godzin pracy), po prostu stając na specjalnej macie ładującej. Dobrze już wiemy, że firmy stojące za robotami, nie chcą ograniczać się tylko do prezentacji, nawet tych najbardziej widowiskowych. Adcock porównuje obecny etap rozwoju humanoidów do wczesnych lat smartfonów. Przewiduje, że każda kolejna generacja (podobnie jak kolejne modele iPhone’a) będzie przynosić skokową poprawę możliwości. Firma nie buduje już prototypów w garażu – posiada w pełni funkcjonalną fabrykę, która obecnie jest w stanie złożyć jednego robota w około 90 minut. Czytaj też: Robot, który obiera jabłka. Sharpa uczy maszyny ludzkiej zręczności Docelowo startup chce produkować miliony jednostek rocznie, dążąc do wizji „robota dla każdego człowieka”. Już teraz maszyny od Figure AI przechodzą testy w zakładach BMW, gdzie sprawdzają się w trudnych warunkach produkcyjnych. O tym, jak blisko mainstreamu jest ta technologia, świadczy fakt, że niedawno jeden z robotów Figure pojawił się w Białym Domu, witając gości podczas wydarzenia poświęconego sztucznej inteligencji. Przejście od eksperymentu do oficjalnych państwowych prezentacji zajęło firmie zaledwie cztery lata, a to imponujące. Oczywiście do spełnienia ambitnych celów wciąż daleka droga, ale jeśli do tego dojdzie, to być może za kilka lat roboty przestaną być ciekawostką, a staną się codziennością. Źródło: Shawn Ryan Show Portal technologiczny z ponad 29-letnią historią, piszący o nauce i technice, smartfonach, motoryzacji, fotografii. Technologie mamy we krwi!
Images (1):
|
|||||
| Les robots humanoïdes, une bulle spéculative de plus qui va … | https://www.generation-nt.com/actualite… | 1 | Apr 03, 2026 08:00 | active | |
Les robots humanoïdes, une bulle spéculative de plus qui va faire des déçusDescription: GNT est le portail Hi-Tech français consacré aux nouvelles technologies (internet, logiciel, matériel, mobilité, entreprise) et au jeu vidéo PC et consoles. Content:
Si votre email correspond à un compte, vous recevrez un lien de réinitialisation. Le secteur des robots humanoïdes, soutenu par des milliards d'investissements, ferait face à un risque de bulle selon des figures éminentes comme Rodney Brooks (cofondateur d'iRobot) et Yann LeCun (chef scientifique IA de Meta). Ils pointent l'incapacité des approches actuelles, notamment en matière de dextérité et d'intelligence générale, à justifier les promesses faites par des entreprises comme Tesla et Figure. La course pour développer des robots humanoïdes autonomes et polyvalents est en plein essor. Des sociétés comme Figure, récemment valorisée à un niveau "étonnant" de 39 milliards de dollars après une levée de fonds dépassant le milliard, ou encore Tesla avec son projet Optimus, nourrissent des ambitions démesurées. Le PDG de Figure, Mike Cagney, et Elon Musk, promettent un impact économique significatif d'ici cinq ans. Cependant, deux des esprits les plus respectés du domaine, le roboticien Rodney Brooks et le scientifique en chef de l'IA chez Meta, Yann LeCun, viennent de jeter une ombre sur cet optimisme financier. Ils estiment que nous sommes dans la phase initiale du cycle de battage médiatique (ou cycle de la hype) pour les humanoïdes, juste au moment où l'intelligence artificielle générale commence à descendre de son pic. Cette dichotomie entre l'optimisme financier et les réalités technologiques est au cœur de leur mise en garde. Rodney Brooks, roboticien de renom ayant passé des décennies au MIT, a co-écrit un article expliquant "Pourquoi les humanoïdes d'aujourd'hui n'apprendront pas la dextérité". Son constat est sans appel : les centaines de millions, voire les milliards, de dollars investis par les capitaux-risqueurs et les grandes entreprises technologiques pour leur entraînement sont dépensés pour une approche qui ne peut pas aboutir. Pour lui, croire qu'une dextérité humaine sera atteinte dans les décennies à venir est "de la pure fantaisie". Le cœur du problème réside dans les mains. Les mains humaines disposent d'environ 17 000 récepteurs tactiles spécialisés. Selon Brooks, aucune technologie robotique actuelle n'est proche de cette capacité. Alors que l'apprentissage automatique a transformé la reconnaissance vocale et le traitement d'image grâce à des décennies de données spécifiques, il n'existe pas de "tradition" équivalente pour les données de toucher dont les robots auraient besoin. Les tentatives de certaines entreprises, comme Figure ou Tesla, d'enseigner la dextérité aux robots en leur montrant des vidéos d'humains accomplissant des tâches sont particulièrement visées par le cofondateur d'iRobot. Il souligne que les efforts pour construire des mains de type humain, même s'ils existent depuis des décennies, n'ont pas encore résolu ce goulot d'étranglement fondamental lié à l'acquisition de données sensorielles riches. De son côté, Yann LeCun, lauréat du prix Turing et pionnier du deep learning, pointe du doigt l'intelligence même de ces machines. Le chef scientifique de Meta a averti lors du symposium inaugural de l'Impact de l'IA Générative du MIT que le "grand secret de l'industrie" est qu'aucune de ces entreprises n'a la moindre idée de la manière de rendre ces robots suffisamment intelligents pour être "généralement utiles". Il précise que si des robots peuvent être entraînés pour des tâches spécifiques, comme dans le domaine manufacturier, le robot domestique nécessitera des percées majeures en IA. LeCun estime que les grands modèles de langage (LLM) actuels ne sont pas la solution. Il rappelle qu'un enfant de quatre ans a emmagasiné autant de données visuelles "à haut débit" que le plus grand des LLM sur le texte public, soulignant que "nous n'atteindrons jamais l'intelligence de niveau humain en nous entraînant uniquement sur du texte". Pour sortir de cette impasse, l'avenir réside dans ce qu'on appelle les modèles du monde (world models). Ces systèmes IA apprennent à comprendre le monde physique à partir de données sensorielles (vidéo). L'objectif est de prédire l'état futur du monde après une action imaginée par l'agent. LeCun, qui mène des recherches sur des architectures comme le V-JEPA, est convaincu que ces modèles sont la clé pour que les robots accomplissent des tâches "sans entraînement" (zero shot). Au-delà de l'intelligence et de la dextérité, Rodney Brooks soulève un point souvent négligé : la sécurité. Les robots humanoïdes bipèdes, en raison de l'énergie massive qu'ils doivent déployer pour rester debout et marcher, représentent un danger non négligeable en cas de chute. Cette problématique physique s'ajoute aux défis logiciels, incitant Brooks à prédire que dans une quinzaine d'années, les humanoïdes qui réussiront ressembleront peu aux modèles anthropomorphes actuels. Ils seront probablement dotés de roues, de multiples bras et de capteurs spécialisés, abandonnant la forme humaine pour des raisons d'efficacité. L'alerte lancée par Brooks et LeCun force l'industrie à se poser la question fondamentale : le financement massif d'expériences d'entraînement coûteuses peut-il réellement conduire à une production de masse évolutive sans adresser d'abord les goulots d'étranglement de l'IA fondamentale ? Le débat fait rage, et l'échéance des cinq prochaines années fixée par certains entrepreneurs servira de juge de paix sur la viabilité de la forme humanoïde actuelle. La discussion est réservée aux membres GNT Commencez par créer un compte ou vous identifier Copyright © 2001-2026 GNT Media, tous droits réservés
Images (1):
|
|||||
| Towards LLM-powered Assistive Drone for Blind and Low Vision Users … | https://hal.science/hal-05567674v1 | 1 | Apr 03, 2026 08:00 | active | |
Towards LLM-powered Assistive Drone for Blind and Low Vision Users - Archive ouverte HALURL: https://hal.science/hal-05567674v1 Description: <div><p>Drones have gained traction as a versatile form of assistive robots for Blind and Low Vision (BLV) people. Nonetheless, novel interaction techniques are required to enable BLV people to communicate with drones naturally. In this work, we built an LLM-powered assistive drone for BLV users. We leverage an LLM to translate high-level user goals to step-by-step instructions for the drone and to extract visual information from the images. Through a formative study with BLV users (N=9), we identified envisioned use cases and desired interaction modalities. Then, we took a participatory and iterative approach to build a prototype, incorporating feedback received from 3 BLV users, as well as 5 domain experts. Finally, we conducted a user study with an additional 6 BLV participants to evaluate the iterated prototype, and received positive feedback. This work is contributing to a growing body of research on harnessing the power of LLMs to build a more inclusive world.</p></div> Content:
Drones have gained traction as a versatile form of assistive robots for Blind and Low Vision (BLV) people. Nonetheless, novel interaction techniques are required to enable BLV people to communicate with drones naturally. In this work, we built an LLM-powered assistive drone for BLV users. We leverage an LLM to translate high-level user goals to step-by-step instructions for the drone and to extract visual information from the images. Through a formative study with BLV users (N=9), we identified envisioned use cases and desired interaction modalities. Then, we took a participatory and iterative approach to build a prototype, incorporating feedback received from 3 BLV users, as well as 5 domain experts. Finally, we conducted a user study with an additional 6 BLV participants to evaluate the iterated prototype, and received positive feedback. This work is contributing to a growing body of research on harnessing the power of LLMs to build a more inclusive world. Drones have gained traction as a versatile form of assistive robots for Blind and Low Vision (BLV) people. Nonetheless, novel interaction techniques are required to enable BLV people to communicate with drones naturally. In this work, we built an LLM-powered assistive drone for BLV users. We leverage an LLM to translate high-level user goals to step-by-step instructions for the drone and to extract visual information from the images. Through a formative study with BLV users (N=9), we identified envisioned use cases and desired interaction modalities. Then, we took a participatory and iterative approach to build a prototype, incorporating feedback received from 3 BLV users, as well as 5 domain experts. Finally, we conducted a user study with an additional 6 BLV participants to evaluate the iterated prototype, and received positive feedback. This work is contributing to a growing body of research on harnessing the power of LLMs to build a more inclusive world. Connectez-vous pour contacter le contributeur https://hal.science/hal-05567674 Soumis le : jeudi 26 mars 2026-08:38:42 Dernière modification le : lundi 30 mars 2026-12:48:20 Contact Ressources Informations Questions juridiques Portails CCSD
Images (1):
|
|||||
| Elon Musk announces disappointing Tesla Optimus update | https://www.teslarati.com/elon-musk-ann… | 1 | Apr 02, 2026 08:00 | active | |
Elon Musk announces disappointing Tesla Optimus updateURL: https://www.teslarati.com/elon-musk-announces-disappointing-tesla-optimus-update/ Description: Elon Musk announced a disappointing update to the unveiling of Tesla Optimus and its third-generation iteration, missing a timeline it aimed to hit in the first quarter of the year. Content:
Tesla removes Model S and X custom orders as sunset officially begins SpaceX files confidentially for IPO that will rewrite the record books Elon Musk hints at “official ceremony” with throwback photo to close Tesla Model S, Model X chapter Elon Musk announces disappointing Tesla Optimus update Countdown: America is going back to the Moon and SpaceX holds the key to what comes after Tesla removes Model S and X custom orders as sunset officially begins Elon Musk hints at “official ceremony” with throwback photo to close Tesla Model S, Model X chapter Elon Musk announces disappointing Tesla Optimus update Musk forces Judge’s exit from shareholder battles over viral social media slip-up Tesla FSD mocks BMW human driver: Saves pedestrian from near miss SpaceX files confidentially for IPO that will rewrite the record books Countdown: America is going back to the Moon and SpaceX holds the key to what comes after Elon Musk debunks latest rumors about SpaceX IPO Tesla and SpaceX to merge in 2027, Wall Street analyst predicts TIME honors SpaceX’s Gwynne Shotwell: From employee No. 7 to world’s most valuable company SpaceX files confidentially for IPO that will rewrite the record books Elon Musk hints at “official ceremony” with throwback photo to close Tesla Model S, Model X chapter Elon Musk announces disappointing Tesla Optimus update Countdown: America is going back to the Moon and SpaceX holds the key to what comes after Elon Musk debunks latest rumors about SpaceX IPO In a post on X on March 31, Musk stated that Optimus 3 is mobile but requires some finishing touches before it is ready to be shown to the world. This update comes on the final day of the first quarter, a period when Tesla had previously signaled expectations for a Gen 3 reveal. Published on By Elon Musk announced a disappointing update to the unveiling of Tesla Optimus and its third-generation iteration, missing a timeline it aimed to hit in the first quarter of the year. Musk has confirmed that the highly anticipated Optimus Gen 3 humanoid robot is already walking around and operational, yet the public unveiling will face a short delay as the company applies final refinements. In a post on X on March 31, Musk stated that Optimus 3 is mobile but requires some finishing touches before it is ready to be shown to the world. This update comes on the final day of the first quarter, a period when Tesla had previously signaled expectations for a Gen 3 reveal. Optimus 3 is walking around, but needs some finishing touches before it’s ready to be shown — Elon Musk (@elonmusk) March 31, 2026Advertisement The announcement follows reports of Optimus Gen 3 appearing at the Tesla Diner in Los Angeles, where it was observed serving and moving about until sunset. Images and videos shared by observers captured the robot in action, highlighting its progress in real-world mobility. Tesla had aimed to showcase the production intent version of Optimus Gen 3 during the first quarter of 2026, positioning it as a major step toward factory deployment and eventual commercial availability. Musk has described the robot as featuring advanced capabilities, including highly dexterous hands with significant degrees of freedom, powered by Tesla’s AI systems for complex tasks. This minor postponement aligns with Tesla’s iterative approach to development. Earlier statements from Musk indicated that Gen 3 would represent the most advanced humanoid robot yet, designed primarily for internal factory use before scaling to external customers.Advertisement Elon Musk’s $10 Trillion robot: Inside Tesla’s push to mass produce Optimus Production timelines point toward low-volume output starting in the summer of 2026, with volume ramp-up targeted for 2027. The delay underscores the company’s commitment to quality over speed, ensuring the robot meets rigorous standards for safety and performance in practical environments. Optimus represents a cornerstone of Tesla’s long-term vision beyond electric vehicles. Musk has repeatedly emphasized that successful humanoid robotics could transform industries by addressing labor shortages and enabling new forms of productivity. Competitors in the space continue to advance their own platforms, yet Tesla’s vertical integration, from custom actuators to end-to-end AI training, positions Optimus as a potential leader. Community reactions on social media range from excitement over visible progress to impatience with shifting timelines, a familiar pattern in Tesla’s innovation journey.Advertisement Investors and enthusiasts view Optimus as critical to Tesla’s valuation, potentially surpassing its automotive business in scale. With the robot already demonstrating walking and basic interactions, the finishing touches likely involve software polishing, hardware fine-tuning, and reliability enhancements. Musk’s update suggests the reveal could arrive in the coming weeks or months, maintaining momentum toward broader deployment. As Tesla pushes the boundaries of physical artificial intelligence, this latest development keeps Optimus in the spotlight. The company continues to prioritize rapid iteration while delivering on its promises to shareholders and customers. The robotics revolution at Tesla appears closer than ever, promising profound impacts on manufacturing, services, and daily life in the years ahead. Advertisement Musk has confirmed that the highly anticipated Optimus Gen 3 humanoid robot is already walking around and operational, yet the public unveiling will face a short delay as the company applies final refinements. In a post on X on March 31, Musk stated that Optimus 3 is mobile but requires some finishing touches before it is ready to be shown to the world. This update comes on the final day of the first quarter, a period when Tesla had previously signaled expectations for a Gen 3 reveal. Optimus 3 is walking around, but needs some finishing touches before it’s ready to be shown — Elon Musk (@elonmusk) March 31, 2026Advertisement The announcement follows reports of Optimus Gen 3 appearing at the Tesla Diner in Los Angeles, where it was observed serving and moving about until sunset. Images and videos shared by observers captured the robot in action, highlighting its progress in real-world mobility. Tesla had aimed to showcase the production intent version of Optimus Gen 3 during the first quarter of 2026, positioning it as a major step toward factory deployment and eventual commercial availability. Musk has described the robot as featuring advanced capabilities, including highly dexterous hands with significant degrees of freedom, powered by Tesla’s AI systems for complex tasks. This minor postponement aligns with Tesla’s iterative approach to development. Earlier statements from Musk indicated that Gen 3 would represent the most advanced humanoid robot yet, designed primarily for internal factory use before scaling to external customers.Advertisement Elon Musk’s $10 Trillion robot: Inside Tesla’s push to mass produce Optimus Production timelines point toward low-volume output starting in the summer of 2026, with volume ramp-up targeted for 2027. The delay underscores the company’s commitment to quality over speed, ensuring the robot meets rigorous standards for safety and performance in practical environments. Optimus represents a cornerstone of Tesla’s long-term vision beyond electric vehicles. Musk has repeatedly emphasized that successful humanoid robotics could transform industries by addressing labor shortages and enabling new forms of productivity. Competitors in the space continue to advance their own platforms, yet Tesla’s vertical integration, from custom actuators to end-to-end AI training, positions Optimus as a potential leader. Community reactions on social media range from excitement over visible progress to impatience with shifting timelines, a familiar pattern in Tesla’s innovation journey.Advertisement Investors and enthusiasts view Optimus as critical to Tesla’s valuation, potentially surpassing its automotive business in scale. With the robot already demonstrating walking and basic interactions, the finishing touches likely involve software polishing, hardware fine-tuning, and reliability enhancements. Musk’s update suggests the reveal could arrive in the coming weeks or months, maintaining momentum toward broader deployment. As Tesla pushes the boundaries of physical artificial intelligence, this latest development keeps Optimus in the spotlight. The company continues to prioritize rapid iteration while delivering on its promises to shareholders and customers. The robotics revolution at Tesla appears closer than ever, promising profound impacts on manufacturing, services, and daily life in the years ahead. Advertisement In a post on X on March 31, Musk stated that Optimus 3 is mobile but requires some finishing touches before it is ready to be shown to the world. This update comes on the final day of the first quarter, a period when Tesla had previously signaled expectations for a Gen 3 reveal. Optimus 3 is walking around, but needs some finishing touches before it’s ready to be shown — Elon Musk (@elonmusk) March 31, 2026Advertisement The announcement follows reports of Optimus Gen 3 appearing at the Tesla Diner in Los Angeles, where it was observed serving and moving about until sunset. Images and videos shared by observers captured the robot in action, highlighting its progress in real-world mobility. Tesla had aimed to showcase the production intent version of Optimus Gen 3 during the first quarter of 2026, positioning it as a major step toward factory deployment and eventual commercial availability. Musk has described the robot as featuring advanced capabilities, including highly dexterous hands with significant degrees of freedom, powered by Tesla’s AI systems for complex tasks. This minor postponement aligns with Tesla’s iterative approach to development. Earlier statements from Musk indicated that Gen 3 would represent the most advanced humanoid robot yet, designed primarily for internal factory use before scaling to external customers.Advertisement Elon Musk’s $10 Trillion robot: Inside Tesla’s push to mass produce Optimus Production timelines point toward low-volume output starting in the summer of 2026, with volume ramp-up targeted for 2027. The delay underscores the company’s commitment to quality over speed, ensuring the robot meets rigorous standards for safety and performance in practical environments. Optimus represents a cornerstone of Tesla’s long-term vision beyond electric vehicles. Musk has repeatedly emphasized that successful humanoid robotics could transform industries by addressing labor shortages and enabling new forms of productivity. Competitors in the space continue to advance their own platforms, yet Tesla’s vertical integration, from custom actuators to end-to-end AI training, positions Optimus as a potential leader. Community reactions on social media range from excitement over visible progress to impatience with shifting timelines, a familiar pattern in Tesla’s innovation journey.Advertisement Investors and enthusiasts view Optimus as critical to Tesla’s valuation, potentially surpassing its automotive business in scale. With the robot already demonstrating walking and basic interactions, the finishing touches likely involve software polishing, hardware fine-tuning, and reliability enhancements. Musk’s update suggests the reveal could arrive in the coming weeks or months, maintaining momentum toward broader deployment. As Tesla pushes the boundaries of physical artificial intelligence, this latest development keeps Optimus in the spotlight. The company continues to prioritize rapid iteration while delivering on its promises to shareholders and customers. The robotics revolution at Tesla appears closer than ever, promising profound impacts on manufacturing, services, and daily life in the years ahead. Advertisement Optimus 3 is walking around, but needs some finishing touches before it’s ready to be shown — Elon Musk (@elonmusk) March 31, 2026Advertisement — Elon Musk (@elonmusk) March 31, 2026Advertisement The announcement follows reports of Optimus Gen 3 appearing at the Tesla Diner in Los Angeles, where it was observed serving and moving about until sunset. Images and videos shared by observers captured the robot in action, highlighting its progress in real-world mobility. Tesla had aimed to showcase the production intent version of Optimus Gen 3 during the first quarter of 2026, positioning it as a major step toward factory deployment and eventual commercial availability. Musk has described the robot as featuring advanced capabilities, including highly dexterous hands with significant degrees of freedom, powered by Tesla’s AI systems for complex tasks. This minor postponement aligns with Tesla’s iterative approach to development. Earlier statements from Musk indicated that Gen 3 would represent the most advanced humanoid robot yet, designed primarily for internal factory use before scaling to external customers.Advertisement Elon Musk’s $10 Trillion robot: Inside Tesla’s push to mass produce Optimus Production timelines point toward low-volume output starting in the summer of 2026, with volume ramp-up targeted for 2027. The delay underscores the company’s commitment to quality over speed, ensuring the robot meets rigorous standards for safety and performance in practical environments. Optimus represents a cornerstone of Tesla’s long-term vision beyond electric vehicles. Musk has repeatedly emphasized that successful humanoid robotics could transform industries by addressing labor shortages and enabling new forms of productivity. Competitors in the space continue to advance their own platforms, yet Tesla’s vertical integration, from custom actuators to end-to-end AI training, positions Optimus as a potential leader. Community reactions on social media range from excitement over visible progress to impatience with shifting timelines, a familiar pattern in Tesla’s innovation journey.Advertisement Investors and enthusiasts view Optimus as critical to Tesla’s valuation, potentially surpassing its automotive business in scale. With the robot already demonstrating walking and basic interactions, the finishing touches likely involve software polishing, hardware fine-tuning, and reliability enhancements. Musk’s update suggests the reveal could arrive in the coming weeks or months, maintaining momentum toward broader deployment. As Tesla pushes the boundaries of physical artificial intelligence, this latest development keeps Optimus in the spotlight. The company continues to prioritize rapid iteration while delivering on its promises to shareholders and customers. The robotics revolution at Tesla appears closer than ever, promising profound impacts on manufacturing, services, and daily life in the years ahead. Advertisement The announcement follows reports of Optimus Gen 3 appearing at the Tesla Diner in Los Angeles, where it was observed serving and moving about until sunset. Images and videos shared by observers captured the robot in action, highlighting its progress in real-world mobility. Tesla had aimed to showcase the production intent version of Optimus Gen 3 during the first quarter of 2026, positioning it as a major step toward factory deployment and eventual commercial availability. Musk has described the robot as featuring advanced capabilities, including highly dexterous hands with significant degrees of freedom, powered by Tesla’s AI systems for complex tasks. This minor postponement aligns with Tesla’s iterative approach to development. Earlier statements from Musk indicated that Gen 3 would represent the most advanced humanoid robot yet, designed primarily for internal factory use before scaling to external customers.Advertisement Elon Musk’s $10 Trillion robot: Inside Tesla’s push to mass produce Optimus Production timelines point toward low-volume output starting in the summer of 2026, with volume ramp-up targeted for 2027. The delay underscores the company’s commitment to quality over speed, ensuring the robot meets rigorous standards for safety and performance in practical environments. Optimus represents a cornerstone of Tesla’s long-term vision beyond electric vehicles. Musk has repeatedly emphasized that successful humanoid robotics could transform industries by addressing labor shortages and enabling new forms of productivity. Competitors in the space continue to advance their own platforms, yet Tesla’s vertical integration, from custom actuators to end-to-end AI training, positions Optimus as a potential leader. Community reactions on social media range from excitement over visible progress to impatience with shifting timelines, a familiar pattern in Tesla’s innovation journey.Advertisement Investors and enthusiasts view Optimus as critical to Tesla’s valuation, potentially surpassing its automotive business in scale. With the robot already demonstrating walking and basic interactions, the finishing touches likely involve software polishing, hardware fine-tuning, and reliability enhancements. Musk’s update suggests the reveal could arrive in the coming weeks or months, maintaining momentum toward broader deployment. As Tesla pushes the boundaries of physical artificial intelligence, this latest development keeps Optimus in the spotlight. The company continues to prioritize rapid iteration while delivering on its promises to shareholders and customers. The robotics revolution at Tesla appears closer than ever, promising profound impacts on manufacturing, services, and daily life in the years ahead. Advertisement Tesla had aimed to showcase the production intent version of Optimus Gen 3 during the first quarter of 2026, positioning it as a major step toward factory deployment and eventual commercial availability. Musk has described the robot as featuring advanced capabilities, including highly dexterous hands with significant degrees of freedom, powered by Tesla’s AI systems for complex tasks. This minor postponement aligns with Tesla’s iterative approach to development. Earlier statements from Musk indicated that Gen 3 would represent the most advanced humanoid robot yet, designed primarily for internal factory use before scaling to external customers.Advertisement Elon Musk’s $10 Trillion robot: Inside Tesla’s push to mass produce Optimus Production timelines point toward low-volume output starting in the summer of 2026, with volume ramp-up targeted for 2027. The delay underscores the company’s commitment to quality over speed, ensuring the robot meets rigorous standards for safety and performance in practical environments. Optimus represents a cornerstone of Tesla’s long-term vision beyond electric vehicles. Musk has repeatedly emphasized that successful humanoid robotics could transform industries by addressing labor shortages and enabling new forms of productivity. Competitors in the space continue to advance their own platforms, yet Tesla’s vertical integration, from custom actuators to end-to-end AI training, positions Optimus as a potential leader. Community reactions on social media range from excitement over visible progress to impatience with shifting timelines, a familiar pattern in Tesla’s innovation journey.Advertisement Investors and enthusiasts view Optimus as critical to Tesla’s valuation, potentially surpassing its automotive business in scale. With the robot already demonstrating walking and basic interactions, the finishing touches likely involve software polishing, hardware fine-tuning, and reliability enhancements. Musk’s update suggests the reveal could arrive in the coming weeks or months, maintaining momentum toward broader deployment. As Tesla pushes the boundaries of physical artificial intelligence, this latest development keeps Optimus in the spotlight. The company continues to prioritize rapid iteration while delivering on its promises to shareholders and customers. The robotics revolution at Tesla appears closer than ever, promising profound impacts on manufacturing, services, and daily life in the years ahead. Advertisement This minor postponement aligns with Tesla’s iterative approach to development. Earlier statements from Musk indicated that Gen 3 would represent the most advanced humanoid robot yet, designed primarily for internal factory use before scaling to external customers.Advertisement Elon Musk’s $10 Trillion robot: Inside Tesla’s push to mass produce Optimus Production timelines point toward low-volume output starting in the summer of 2026, with volume ramp-up targeted for 2027. The delay underscores the company’s commitment to quality over speed, ensuring the robot meets rigorous standards for safety and performance in practical environments. Optimus represents a cornerstone of Tesla’s long-term vision beyond electric vehicles. Musk has repeatedly emphasized that successful humanoid robotics could transform industries by addressing labor shortages and enabling new forms of productivity. Competitors in the space continue to advance their own platforms, yet Tesla’s vertical integration, from custom actuators to end-to-end AI training, positions Optimus as a potential leader. Community reactions on social media range from excitement over visible progress to impatience with shifting timelines, a familiar pattern in Tesla’s innovation journey.Advertisement Investors and enthusiasts view Optimus as critical to Tesla’s valuation, potentially surpassing its automotive business in scale. With the robot already demonstrating walking and basic interactions, the finishing touches likely involve software polishing, hardware fine-tuning, and reliability enhancements. Musk’s update suggests the reveal could arrive in the coming weeks or months, maintaining momentum toward broader deployment. As Tesla pushes the boundaries of physical artificial intelligence, this latest development keeps Optimus in the spotlight. The company continues to prioritize rapid iteration while delivering on its promises to shareholders and customers. The robotics revolution at Tesla appears closer than ever, promising profound impacts on manufacturing, services, and daily life in the years ahead. Advertisement Elon Musk’s $10 Trillion robot: Inside Tesla’s push to mass produce Optimus Production timelines point toward low-volume output starting in the summer of 2026, with volume ramp-up targeted for 2027. The delay underscores the company’s commitment to quality over speed, ensuring the robot meets rigorous standards for safety and performance in practical environments. Optimus represents a cornerstone of Tesla’s long-term vision beyond electric vehicles. Musk has repeatedly emphasized that successful humanoid robotics could transform industries by addressing labor shortages and enabling new forms of productivity. Competitors in the space continue to advance their own platforms, yet Tesla’s vertical integration, from custom actuators to end-to-end AI training, positions Optimus as a potential leader. Community reactions on social media range from excitement over visible progress to impatience with shifting timelines, a familiar pattern in Tesla’s innovation journey.Advertisement Investors and enthusiasts view Optimus as critical to Tesla’s valuation, potentially surpassing its automotive business in scale. With the robot already demonstrating walking and basic interactions, the finishing touches likely involve software polishing, hardware fine-tuning, and reliability enhancements. Musk’s update suggests the reveal could arrive in the coming weeks or months, maintaining momentum toward broader deployment. As Tesla pushes the boundaries of physical artificial intelligence, this latest development keeps Optimus in the spotlight. The company continues to prioritize rapid iteration while delivering on its promises to shareholders and customers. The robotics revolution at Tesla appears closer than ever, promising profound impacts on manufacturing, services, and daily life in the years ahead. Advertisement Production timelines point toward low-volume output starting in the summer of 2026, with volume ramp-up targeted for 2027. The delay underscores the company’s commitment to quality over speed, ensuring the robot meets rigorous standards for safety and performance in practical environments. Optimus represents a cornerstone of Tesla’s long-term vision beyond electric vehicles. Musk has repeatedly emphasized that successful humanoid robotics could transform industries by addressing labor shortages and enabling new forms of productivity. Competitors in the space continue to advance their own platforms, yet Tesla’s vertical integration, from custom actuators to end-to-end AI training, positions Optimus as a potential leader. Community reactions on social media range from excitement over visible progress to impatience with shifting timelines, a familiar pattern in Tesla’s innovation journey.Advertisement Investors and enthusiasts view Optimus as critical to Tesla’s valuation, potentially surpassing its automotive business in scale. With the robot already demonstrating walking and basic interactions, the finishing touches likely involve software polishing, hardware fine-tuning, and reliability enhancements. Musk’s update suggests the reveal could arrive in the coming weeks or months, maintaining momentum toward broader deployment. As Tesla pushes the boundaries of physical artificial intelligence, this latest development keeps Optimus in the spotlight. The company continues to prioritize rapid iteration while delivering on its promises to shareholders and customers. The robotics revolution at Tesla appears closer than ever, promising profound impacts on manufacturing, services, and daily life in the years ahead. Advertisement Optimus represents a cornerstone of Tesla’s long-term vision beyond electric vehicles. Musk has repeatedly emphasized that successful humanoid robotics could transform industries by addressing labor shortages and enabling new forms of productivity. Competitors in the space continue to advance their own platforms, yet Tesla’s vertical integration, from custom actuators to end-to-end AI training, positions Optimus as a potential leader. Community reactions on social media range from excitement over visible progress to impatience with shifting timelines, a familiar pattern in Tesla’s innovation journey.Advertisement Investors and enthusiasts view Optimus as critical to Tesla’s valuation, potentially surpassing its automotive business in scale. With the robot already demonstrating walking and basic interactions, the finishing touches likely involve software polishing, hardware fine-tuning, and reliability enhancements. Musk’s update suggests the reveal could arrive in the coming weeks or months, maintaining momentum toward broader deployment. As Tesla pushes the boundaries of physical artificial intelligence, this latest development keeps Optimus in the spotlight. The company continues to prioritize rapid iteration while delivering on its promises to shareholders and customers. The robotics revolution at Tesla appears closer than ever, promising profound impacts on manufacturing, services, and daily life in the years ahead. Advertisement Competitors in the space continue to advance their own platforms, yet Tesla’s vertical integration, from custom actuators to end-to-end AI training, positions Optimus as a potential leader. Community reactions on social media range from excitement over visible progress to impatience with shifting timelines, a familiar pattern in Tesla’s innovation journey.Advertisement Investors and enthusiasts view Optimus as critical to Tesla’s valuation, potentially surpassing its automotive business in scale. With the robot already demonstrating walking and basic interactions, the finishing touches likely involve software polishing, hardware fine-tuning, and reliability enhancements. Musk’s update suggests the reveal could arrive in the coming weeks or months, maintaining momentum toward broader deployment. As Tesla pushes the boundaries of physical artificial intelligence, this latest development keeps Optimus in the spotlight. The company continues to prioritize rapid iteration while delivering on its promises to shareholders and customers. The robotics revolution at Tesla appears closer than ever, promising profound impacts on manufacturing, services, and daily life in the years ahead. Advertisement Investors and enthusiasts view Optimus as critical to Tesla’s valuation, potentially surpassing its automotive business in scale. With the robot already demonstrating walking and basic interactions, the finishing touches likely involve software polishing, hardware fine-tuning, and reliability enhancements. Musk’s update suggests the reveal could arrive in the coming weeks or months, maintaining momentum toward broader deployment. As Tesla pushes the boundaries of physical artificial intelligence, this latest development keeps Optimus in the spotlight. The company continues to prioritize rapid iteration while delivering on its promises to shareholders and customers. The robotics revolution at Tesla appears closer than ever, promising profound impacts on manufacturing, services, and daily life in the years ahead. Advertisement Musk’s update suggests the reveal could arrive in the coming weeks or months, maintaining momentum toward broader deployment. As Tesla pushes the boundaries of physical artificial intelligence, this latest development keeps Optimus in the spotlight. The company continues to prioritize rapid iteration while delivering on its promises to shareholders and customers. The robotics revolution at Tesla appears closer than ever, promising profound impacts on manufacturing, services, and daily life in the years ahead. Advertisement As Tesla pushes the boundaries of physical artificial intelligence, this latest development keeps Optimus in the spotlight. The company continues to prioritize rapid iteration while delivering on its promises to shareholders and customers. The robotics revolution at Tesla appears closer than ever, promising profound impacts on manufacturing, services, and daily life in the years ahead. Advertisement Joey has been a journalist covering electric mobility at TESLARATI since August 2019. In his spare time, Joey is playing golf, watching MMA, or cheering on any of his favorite sports teams, including the Baltimore Ravens and Orioles, Miami Heat, Washington Capitals, and Penn State Nittany Lions. You can get in touch with joey at joey@teslarati.com. He is also on X @KlenderJoey. If you're looking for great Tesla accessories, check out shop.teslarati.com SpaceX files confidentially for a record-breaking IPO targeting a $1.75T valuation and $80B raise, driven by Starlink growth and its xAI merger. Published on By Elon Musk’s rocket and satellite company submitted its draft registration to the U.S. Securities and Exchange Commission today for an initial public offering, targeting June at a $1.75 trillion valuation. This would be the largest in history. SpaceX has filed confidentially with the SEC, first reported by Bloomberg. SpaceX would be valued above every S&P 500 company except Nvidia, Apple, Alphabet, Microsoft, and Amazon. The filing uses a confidential process that allows companies to work through SEC disclosures privately before initiating a public roadshow. With a June target, official details through a formal prospectus is expected to go public in April or early May, after which SpaceX must wait at least 15 days before beginning investor marketing. SpaceX IPO is coming, CEO Elon Musk confirms Advertisement While SpaceX is best known for its Falcon 9 and Starship rockets, the $1.75 trillion valuation is anchored by Starlink, its satellite internet service. Starlink ended 2025 with 9.2 million subscribers and over $10 billion in revenue, which is a figure analysts project could reach a staggering $24 billion by the end of 2026. A February all-stock merger with xAI, Musk’s artificial intelligence venture, further boosted the valuation. SpaceX officially acquires xAI, merging rockets with AI expertise Bank of America, Goldman Sachs, JPMorgan Chase, and Morgan Stanley are lined up as senior underwriters. SpaceX is also considering a dual-class share structure to preserve insider voting control, and plans to allocate up to 30% of shares to retail investors, which is roughly three times the typical norm. Advertisement SpaceX has filed confidentially with the SEC, first reported by Bloomberg. SpaceX would be valued above every S&P 500 company except Nvidia, Apple, Alphabet, Microsoft, and Amazon. The filing uses a confidential process that allows companies to work through SEC disclosures privately before initiating a public roadshow. With a June target, official details through a formal prospectus is expected to go public in April or early May, after which SpaceX must wait at least 15 days before beginning investor marketing. SpaceX IPO is coming, CEO Elon Musk confirms Advertisement While SpaceX is best known for its Falcon 9 and Starship rockets, the $1.75 trillion valuation is anchored by Starlink, its satellite internet service. Starlink ended 2025 with 9.2 million subscribers and over $10 billion in revenue, which is a figure analysts project could reach a staggering $24 billion by the end of 2026. A February all-stock merger with xAI, Musk’s artificial intelligence venture, further boosted the valuation. SpaceX officially acquires xAI, merging rockets with AI expertise Bank of America, Goldman Sachs, JPMorgan Chase, and Morgan Stanley are lined up as senior underwriters. SpaceX is also considering a dual-class share structure to preserve insider voting control, and plans to allocate up to 30% of shares to retail investors, which is roughly three times the typical norm. Advertisement The filing uses a confidential process that allows companies to work through SEC disclosures privately before initiating a public roadshow. With a June target, official details through a formal prospectus is expected to go public in April or early May, after which SpaceX must wait at least 15 days before beginning investor marketing. SpaceX IPO is coming, CEO Elon Musk confirms Advertisement While SpaceX is best known for its Falcon 9 and Starship rockets, the $1.75 trillion valuation is anchored by Starlink, its satellite internet service. Starlink ended 2025 with 9.2 million subscribers and over $10 billion in revenue, which is a figure analysts project could reach a staggering $24 billion by the end of 2026. A February all-stock merger with xAI, Musk’s artificial intelligence venture, further boosted the valuation. SpaceX officially acquires xAI, merging rockets with AI expertise Bank of America, Goldman Sachs, JPMorgan Chase, and Morgan Stanley are lined up as senior underwriters. SpaceX is also considering a dual-class share structure to preserve insider voting control, and plans to allocate up to 30% of shares to retail investors, which is roughly three times the typical norm. Advertisement SpaceX IPO is coming, CEO Elon Musk confirms Advertisement While SpaceX is best known for its Falcon 9 and Starship rockets, the $1.75 trillion valuation is anchored by Starlink, its satellite internet service. Starlink ended 2025 with 9.2 million subscribers and over $10 billion in revenue, which is a figure analysts project could reach a staggering $24 billion by the end of 2026. A February all-stock merger with xAI, Musk’s artificial intelligence venture, further boosted the valuation. SpaceX officially acquires xAI, merging rockets with AI expertise Bank of America, Goldman Sachs, JPMorgan Chase, and Morgan Stanley are lined up as senior underwriters. SpaceX is also considering a dual-class share structure to preserve insider voting control, and plans to allocate up to 30% of shares to retail investors, which is roughly three times the typical norm. Advertisement While SpaceX is best known for its Falcon 9 and Starship rockets, the $1.75 trillion valuation is anchored by Starlink, its satellite internet service. Starlink ended 2025 with 9.2 million subscribers and over $10 billion in revenue, which is a figure analysts project could reach a staggering $24 billion by the end of 2026. A February all-stock merger with xAI, Musk’s artificial intelligence venture, further boosted the valuation. SpaceX officially acquires xAI, merging rockets with AI expertise Bank of America, Goldman Sachs, JPMorgan Chase, and Morgan Stanley are lined up as senior underwriters. SpaceX is also considering a dual-class share structure to preserve insider voting control, and plans to allocate up to 30% of shares to retail investors, which is roughly three times the typical norm. Advertisement SpaceX officially acquires xAI, merging rockets with AI expertise Bank of America, Goldman Sachs, JPMorgan Chase, and Morgan Stanley are lined up as senior underwriters. SpaceX is also considering a dual-class share structure to preserve insider voting control, and plans to allocate up to 30% of shares to retail investors, which is roughly three times the typical norm. Advertisement Bank of America, Goldman Sachs, JPMorgan Chase, and Morgan Stanley are lined up as senior underwriters. SpaceX is also considering a dual-class share structure to preserve insider voting control, and plans to allocate up to 30% of shares to retail investors, which is roughly three times the typical norm. Advertisement Elon Musk promises an official ceremony to mark the end of Tesla Model S and Model X production. Published on By Tesla has officially begun winding down production of the Model S and Model X, sending farewell emails to U.S. customers on March 27 and updating the website to reflect the end of the line. Shoppers visiting Tesla.com now find only a limited set of Model S and Model X inventory units available for purchase, with no option to configure a new factory build. The move formalizes what CEO Elon Musk announced on the company’s Q4 2025 earnings call in January, when he said it was “time to basically bring the Model S and X programs to an end with an honorable discharge.” Musk posted on X a throwback photo of himself speaking at the Model S production launch in 2012, and noting “We will have an official ceremony to mark the ending of an era. I love those cars.” The mention of an official ceremony is notable. Tesla has not held a formal farewell event for a vehicle before, and Musk’s wording suggests this will be something deliberate rather than a quiet line shutdown. Given that Musk’s X post shows a photo of him on stage with a microphone in front of an audience at the Fremont factory, it wouldn’t be too far-fetched to expect a closing ceremony to take place at the same location. Perhaps? Whether it becomes a public event, a private gathering for employees, or a livestreamed moment on X remains to be seen. Custom orders of the Tesla Model S & X have come to an end. All that’s left are some in inventory. We will have an official ceremony to mark the ending of an era. I love those cars.Advertisement This was me at production launch 14 years ago: pic.twitter.com/6kvCf9HTHc — Elon Musk (@elonmusk) April 1, 2026 The Model S first went on sale nearly fifteen years ago and was Tesla’s first fully in-house designed vehicle, proving that an electric car could be fast, desirable, and capable of long distance on a single charge. The Model X followed in 2015, turning heads with its unmistakable and distinctive falcon-wing doors, while becoming one of the first all-electric SUVs on the market. Tesla’s two flagship vehicles would ultimately push legacy automakers to take all-electric transportation seriously and help fund development of the more affordable Model 3 and Model Y.Advertisement By 2025, however, both models had been reduced to a rounding error in Tesla’s sales figures. Musk was direct about what comes next, stating “We are going to convert that production space to an Optimus factory. It’s part of our overall shift to an autonomous future.” Elon Musk’s $10 Trillion robot: Inside Tesla’s push to mass produce Optimus That shift is already underway. Tesla officially started Optimus Gen 3 production at its Fremont factory in January 2026, with the line targeting a run rate of one million units per year. The Gen 3 robot features 22 degrees of freedom per hand, runs on Tesla’s AI5 chip, and shares the same neural network architecture as Full Self-Driving. A dedicated Optimus factory at Gigafactory Texas is also under construction, with a planned annual capacity of 10 million units. The production lines that once built the Model S and Model X are being converted to support that ramp. Tesla confirmed it will continue to support existing owners with service, software updates, and parts for as long as people own the vehicles. For buyers still interested in a new example, remaining U.S. inventory is discounted and the window is closing fast.Advertisement Musk posted on X a throwback photo of himself speaking at the Model S production launch in 2012, and noting “We will have an official ceremony to mark the ending of an era. I love those cars.” The mention of an official ceremony is notable. Tesla has not held a formal farewell event for a vehicle before, and Musk’s wording suggests this will be something deliberate rather than a quiet line shutdown. Given that Musk’s X post shows a photo of him on stage with a microphone in front of an audience at the Fremont factory, it wouldn’t be too far-fetched to expect a closing ceremony to take place at the same location. Perhaps? Whether it becomes a public event, a private gathering for employees, or a livestreamed moment on X remains to be seen. Custom orders of the Tesla Model S & X have come to an end. All that’s left are some in inventory. We will have an official ceremony to mark the ending of an era. I love those cars.Advertisement This was me at production launch 14 years ago: pic.twitter.com/6kvCf9HTHc — Elon Musk (@elonmusk) April 1, 2026 The Model S first went on sale nearly fifteen years ago and was Tesla’s first fully in-house designed vehicle, proving that an electric car could be fast, desirable, and capable of long distance on a single charge. The Model X followed in 2015, turning heads with its unmistakable and distinctive falcon-wing doors, while becoming one of the first all-electric SUVs on the market. Tesla’s two flagship vehicles would ultimately push legacy automakers to take all-electric transportation seriously and help fund development of the more affordable Model 3 and Model Y.Advertisement By 2025, however, both models had been reduced to a rounding error in Tesla’s sales figures. Musk was direct about what comes next, stating “We are going to convert that production space to an Optimus factory. It’s part of our overall shift to an autonomous future.” Elon Musk’s $10 Trillion robot: Inside Tesla’s push to mass produce Optimus That shift is already underway. Tesla officially started Optimus Gen 3 production at its Fremont factory in January 2026, with the line targeting a run rate of one million units per year. The Gen 3 robot features 22 degrees of freedom per hand, runs on Tesla’s AI5 chip, and shares the same neural network architecture as Full Self-Driving. A dedicated Optimus factory at Gigafactory Texas is also under construction, with a planned annual capacity of 10 million units. The production lines that once built the Model S and Model X are being converted to support that ramp. Tesla confirmed it will continue to support existing owners with service, software updates, and parts for as long as people own the vehicles. For buyers still interested in a new example, remaining U.S. inventory is discounted and the window is closing fast.Advertisement The mention of an official ceremony is notable. Tesla has not held a formal farewell event for a vehicle before, and Musk’s wording suggests this will be something deliberate rather than a quiet line shutdown. Given that Musk’s X post shows a photo of him on stage with a microphone in front of an audience at the Fremont factory, it wouldn’t be too far-fetched to expect a closing ceremony to take place at the same location. Perhaps? Whether it becomes a public event, a private gathering for employees, or a livestreamed moment on X remains to be seen. Custom orders of the Tesla Model S & X have come to an end. All that’s left are some in inventory. We will have an official ceremony to mark the ending of an era. I love those cars.Advertisement This was me at production launch 14 years ago: pic.twitter.com/6kvCf9HTHc — Elon Musk (@elonmusk) April 1, 2026 The Model S first went on sale nearly fifteen years ago and was Tesla’s first fully in-house designed vehicle, proving that an electric car could be fast, desirable, and capable of long distance on a single charge. The Model X followed in 2015, turning heads with its unmistakable and distinctive falcon-wing doors, while becoming one of the first all-electric SUVs on the market. Tesla’s two flagship vehicles would ultimately push legacy automakers to take all-electric transportation seriously and help fund development of the more affordable Model 3 and Model Y.Advertisement By 2025, however, both models had been reduced to a rounding error in Tesla’s sales figures. Musk was direct about what comes next, stating “We are going to convert that production space to an Optimus factory. It’s part of our overall shift to an autonomous future.” Elon Musk’s $10 Trillion robot: Inside Tesla’s push to mass produce Optimus That shift is already underway. Tesla officially started Optimus Gen 3 production at its Fremont factory in January 2026, with the line targeting a run rate of one million units per year. The Gen 3 robot features 22 degrees of freedom per hand, runs on Tesla’s AI5 chip, and shares the same neural network architecture as Full Self-Driving. A dedicated Optimus factory at Gigafactory Texas is also under construction, with a planned annual capacity of 10 million units. The production lines that once built the Model S and Model X are being converted to support that ramp. Tesla confirmed it will continue to support existing owners with service, software updates, and parts for as long as people own the vehicles. For buyers still interested in a new example, remaining U.S. inventory is discounted and the window is closing fast.Advertisement Custom orders of the Tesla Model S & X have come to an end. All that’s left are some in inventory. We will have an official ceremony to mark the ending of an era. I love those cars.Advertisement This was me at production launch 14 years ago: pic.twitter.com/6kvCf9HTHc — Elon Musk (@elonmusk) April 1, 2026 We will have an official ceremony to mark the ending of an era. I love those cars.Advertisement This was me at production launch 14 years ago: pic.twitter.com/6kvCf9HTHc — Elon Musk (@elonmusk) April 1, 2026 This was me at production launch 14 years ago: pic.twitter.com/6kvCf9HTHc — Elon Musk (@elonmusk) April 1, 2026 — Elon Musk (@elonmusk) April 1, 2026 The Model S first went on sale nearly fifteen years ago and was Tesla’s first fully in-house designed vehicle, proving that an electric car could be fast, desirable, and capable of long distance on a single charge. The Model X followed in 2015, turning heads with its unmistakable and distinctive falcon-wing doors, while becoming one of the first all-electric SUVs on the market. Tesla’s two flagship vehicles would ultimately push legacy automakers to take all-electric transportation seriously and help fund development of the more affordable Model 3 and Model Y.Advertisement By 2025, however, both models had been reduced to a rounding error in Tesla’s sales figures. Musk was direct about what comes next, stating “We are going to convert that production space to an Optimus factory. It’s part of our overall shift to an autonomous future.” Elon Musk’s $10 Trillion robot: Inside Tesla’s push to mass produce Optimus That shift is already underway. Tesla officially started Optimus Gen 3 production at its Fremont factory in January 2026, with the line targeting a run rate of one million units per year. The Gen 3 robot features 22 degrees of freedom per hand, runs on Tesla’s AI5 chip, and shares the same neural network architecture as Full Self-Driving. A dedicated Optimus factory at Gigafactory Texas is also under construction, with a planned annual capacity of 10 million units. The production lines that once built the Model S and Model X are being converted to support that ramp. Tesla confirmed it will continue to support existing owners with service, software updates, and parts for as long as people own the vehicles. For buyers still interested in a new example, remaining U.S. inventory is discounted and the window is closing fast.Advertisement The Model S first went on sale nearly fifteen years ago and was Tesla’s first fully in-house designed vehicle, proving that an electric car could be fast, desirable, and capable of long distance on a single charge. The Model X followed in 2015, turning heads with its unmistakable and distinctive falcon-wing doors, while becoming one of the first all-electric SUVs on the market. Tesla’s two flagship vehicles would ultimately push legacy automakers to take all-electric transportation seriously and help fund development of the more affordable Model 3 and Model Y.Advertisement By 2025, however, both models had been reduced to a rounding error in Tesla’s sales figures. Musk was direct about what comes next, stating “We are going to convert that production space to an Optimus factory. It’s part of our overall shift to an autonomous future.” Elon Musk’s $10 Trillion robot: Inside Tesla’s push to mass produce Optimus That shift is already underway. Tesla officially started Optimus Gen 3 production at its Fremont factory in January 2026, with the line targeting a run rate of one million units per year. The Gen 3 robot features 22 degrees of freedom per hand, runs on Tesla’s AI5 chip, and shares the same neural network architecture as Full Self-Driving. A dedicated Optimus factory at Gigafactory Texas is also under construction, with a planned annual capacity of 10 million units. The production lines that once built the Model S and Model X are being converted to support that ramp. Tesla confirmed it will continue to support existing owners with service, software updates, and parts for as long as people own the vehicles. For buyers still interested in a new example, remaining U.S. inventory is discounted and the window is closing fast.Advertisement By 2025, however, both models had been reduced to a rounding error in Tesla’s sales figures. Musk was direct about what comes next, stating “We are going to convert that production space to an Optimus factory. It’s part of our overall shift to an autonomous future.” Elon Musk’s $10 Trillion robot: Inside Tesla’s push to mass produce Optimus That shift is already underway. Tesla officially started Optimus Gen 3 production at its Fremont factory in January 2026, with the line targeting a run rate of one million units per year. The Gen 3 robot features 22 degrees of freedom per hand, runs on Tesla’s AI5 chip, and shares the same neural network architecture as Full Self-Driving. A dedicated Optimus factory at Gigafactory Texas is also under construction, with a planned annual capacity of 10 million units. The production lines that once built the Model S and Model X are being converted to support that ramp. Tesla confirmed it will continue to support existing owners with service, software updates, and parts for as long as people own the vehicles. For buyers still interested in a new example, remaining U.S. inventory is discounted and the window is closing fast.Advertisement Elon Musk’s $10 Trillion robot: Inside Tesla’s push to mass produce Optimus That shift is already underway. Tesla officially started Optimus Gen 3 production at its Fremont factory in January 2026, with the line targeting a run rate of one million units per year. The Gen 3 robot features 22 degrees of freedom per hand, runs on Tesla’s AI5 chip, and shares the same neural network architecture as Full Self-Driving. A dedicated Optimus factory at Gigafactory Texas is also under construction, with a planned annual capacity of 10 million units. The production lines that once built the Model S and Model X are being converted to support that ramp. Tesla confirmed it will continue to support existing owners with service, software updates, and parts for as long as people own the vehicles. For buyers still interested in a new example, remaining U.S. inventory is discounted and the window is closing fast.Advertisement That shift is already underway. Tesla officially started Optimus Gen 3 production at its Fremont factory in January 2026, with the line targeting a run rate of one million units per year. The Gen 3 robot features 22 degrees of freedom per hand, runs on Tesla’s AI5 chip, and shares the same neural network architecture as Full Self-Driving. A dedicated Optimus factory at Gigafactory Texas is also under construction, with a planned annual capacity of 10 million units. The production lines that once built the Model S and Model X are being converted to support that ramp. Tesla confirmed it will continue to support existing owners with service, software updates, and parts for as long as people own the vehicles. For buyers still interested in a new example, remaining U.S. inventory is discounted and the window is closing fast.Advertisement Tesla confirmed it will continue to support existing owners with service, software updates, and parts for as long as people own the vehicles. For buyers still interested in a new example, remaining U.S. inventory is discounted and the window is closing fast.Advertisement NASA’s Artemis II launches Wednesday, sending humans near the Moon for the first time since 1972. Published on By For the first time since Apollo 17 touched down on the lunar surface in December 1972, the United States is sending humans back toward the Moon. NASA’s Artemis II mission is set to launch as early as this week from Kennedy Space Center in Florida, carrying four astronauts on a 10-day journey around the Moon and back to Earth. It will not land anyone on the surface this time, but it is the first crewed flight in over half a century to travel beyond low Earth orbit, and it sets the stage for Elon Musk’s SpaceX missions to follow. The mission uses NASA’s Space Launch System rocket and the Orion spacecraft, which will fly around the Moon before splashing down in the Pacific Ocean around April 10. For context, an uncrewed Artemis I flew the same path in 2022, proving the hardware worked. Artemis II now tests it with people aboard. According to NASA’s official countdown blog, launch preparations are on track with an 80 percent chance of favorable weather. “Hey, let’s go to the moon!” Commander Wiseman told reporters upon arriving at Kennedy Space Center. Source: NASA Beyond Artemis II lies the lander question, and that is where SpaceX enters directly. In 2021, NASA awarded SpaceX a $2.89 billion contract to develop the Starship Human Landing System, a modified version of Starship designed to ferry astronauts from lunar orbit to the surface. The original plan called for SpaceX to deliver that lander for Artemis III, which was to be the first crewed lunar landing. Timing for Starship development, however, caused NASA to restructure the mission sequence entirely. Before SpaceX’s Starship Human Landing System (HLS) can put anyone on the Moon, it has to solve a problem no rocket has demonstrated at scale, which is refueling in orbit. Because the Starship HLS requires approximately ten tanker launches worth of propellant loaded into a depot in low Earth orbit before it has enough fuel to reach the lunar surface, SpaceX plans to conduct this refueling process using its upgraded V3 Starship. And until that demonstration flies and succeeds, the Starship moon lander remains a question mark.Advertisement SpaceX’s Starship V3 is almost ready and it will change space travel forever In February 2026, NASA Administrator Jared Isaacman confirmed that Artemis III, now planned for mid-2027, and will instead test lunar landers in low Earth orbit, with the actual landing pushed to Artemis IV that’s targeted for 2028. Musk responded to earlier criticism of SpaceX’s schedule by posting on X that his company is “moving like lightning compared to the rest of the space industry,” and added that “Starship will end up doing the whole Moon mission.” The contract competition was also reopened in October 2025 by then NASA chief Sean Duffy, who cited Starship’s delays and said the agency needed speed given China’s own stated goal of landing astronauts on the Moon by 2030. They won’t. SpaceX is moving like lightning compared to the rest of the space industry. Moreover, Starship will end up doing the whole Moon mission. Mark my words.Advertisement — Elon Musk (@elonmusk) October 20, 2025 Artemis came from the first Trump administration’s 2017 Space Policy Directive 1, which directed NASA to return humans to the Moon. The program picked up pace through the 2020s, with the Orion spacecraft and SLS taking years to develop at enormous costs. SpaceX entered the picture in 2021 as the chosen lander contractor, tying the commercial space sector into what had historically been an all government undertaking. Whether SpaceX’s Starship ultimately carries astronauts to the lunar surface or shares that role with Blue Origin’s competing lander, this week’s Artemis II launch is the necessary first step. Getting four humans to the Moon’s vicinity and back safely is the proof of concept everything else depends on. Advertisement The mission uses NASA’s Space Launch System rocket and the Orion spacecraft, which will fly around the Moon before splashing down in the Pacific Ocean around April 10. For context, an uncrewed Artemis I flew the same path in 2022, proving the hardware worked. Artemis II now tests it with people aboard. According to NASA’s official countdown blog, launch preparations are on track with an 80 percent chance of favorable weather. “Hey, let’s go to the moon!” Commander Wiseman told reporters upon arriving at Kennedy Space Center. Source: NASA Beyond Artemis II lies the lander question, and that is where SpaceX enters directly. In 2021, NASA awarded SpaceX a $2.89 billion contract to develop the Starship Human Landing System, a modified version of Starship designed to ferry astronauts from lunar orbit to the surface. The original plan called for SpaceX to deliver that lander for Artemis III, which was to be the first crewed lunar landing. Timing for Starship development, however, caused NASA to restructure the mission sequence entirely. Before SpaceX’s Starship Human Landing System (HLS) can put anyone on the Moon, it has to solve a problem no rocket has demonstrated at scale, which is refueling in orbit. Because the Starship HLS requires approximately ten tanker launches worth of propellant loaded into a depot in low Earth orbit before it has enough fuel to reach the lunar surface, SpaceX plans to conduct this refueling process using its upgraded V3 Starship. And until that demonstration flies and succeeds, the Starship moon lander remains a question mark.Advertisement SpaceX’s Starship V3 is almost ready and it will change space travel forever In February 2026, NASA Administrator Jared Isaacman confirmed that Artemis III, now planned for mid-2027, and will instead test lunar landers in low Earth orbit, with the actual landing pushed to Artemis IV that’s targeted for 2028. Musk responded to earlier criticism of SpaceX’s schedule by posting on X that his company is “moving like lightning compared to the rest of the space industry,” and added that “Starship will end up doing the whole Moon mission.” The contract competition was also reopened in October 2025 by then NASA chief Sean Duffy, who cited Starship’s delays and said the agency needed speed given China’s own stated goal of landing astronauts on the Moon by 2030. They won’t. SpaceX is moving like lightning compared to the rest of the space industry. Moreover, Starship will end up doing the whole Moon mission. Mark my words.Advertisement — Elon Musk (@elonmusk) October 20, 2025 Artemis came from the first Trump administration’s 2017 Space Policy Directive 1, which directed NASA to return humans to the Moon. The program picked up pace through the 2020s, with the Orion spacecraft and SLS taking years to develop at enormous costs. SpaceX entered the picture in 2021 as the chosen lander contractor, tying the commercial space sector into what had historically been an all government undertaking. Whether SpaceX’s Starship ultimately carries astronauts to the lunar surface or shares that role with Blue Origin’s competing lander, this week’s Artemis II launch is the necessary first step. Getting four humans to the Moon’s vicinity and back safely is the proof of concept everything else depends on. Advertisement According to NASA’s official countdown blog, launch preparations are on track with an 80 percent chance of favorable weather. “Hey, let’s go to the moon!” Commander Wiseman told reporters upon arriving at Kennedy Space Center. Source: NASA Beyond Artemis II lies the lander question, and that is where SpaceX enters directly. In 2021, NASA awarded SpaceX a $2.89 billion contract to develop the Starship Human Landing System, a modified version of Starship designed to ferry astronauts from lunar orbit to the surface. The original plan called for SpaceX to deliver that lander for Artemis III, which was to be the first crewed lunar landing. Timing for Starship development, however, caused NASA to restructure the mission sequence entirely. Before SpaceX’s Starship Human Landing System (HLS) can put anyone on the Moon, it has to solve a problem no rocket has demonstrated at scale, which is refueling in orbit. Because the Starship HLS requires approximately ten tanker launches worth of propellant loaded into a depot in low Earth orbit before it has enough fuel to reach the lunar surface, SpaceX plans to conduct this refueling process using its upgraded V3 Starship. And until that demonstration flies and succeeds, the Starship moon lander remains a question mark.Advertisement SpaceX’s Starship V3 is almost ready and it will change space travel forever In February 2026, NASA Administrator Jared Isaacman confirmed that Artemis III, now planned for mid-2027, and will instead test lunar landers in low Earth orbit, with the actual landing pushed to Artemis IV that’s targeted for 2028. Musk responded to earlier criticism of SpaceX’s schedule by posting on X that his company is “moving like lightning compared to the rest of the space industry,” and added that “Starship will end up doing the whole Moon mission.” The contract competition was also reopened in October 2025 by then NASA chief Sean Duffy, who cited Starship’s delays and said the agency needed speed given China’s own stated goal of landing astronauts on the Moon by 2030. They won’t. SpaceX is moving like lightning compared to the rest of the space industry. Moreover, Starship will end up doing the whole Moon mission. Mark my words.Advertisement — Elon Musk (@elonmusk) October 20, 2025 Artemis came from the first Trump administration’s 2017 Space Policy Directive 1, which directed NASA to return humans to the Moon. The program picked up pace through the 2020s, with the Orion spacecraft and SLS taking years to develop at enormous costs. SpaceX entered the picture in 2021 as the chosen lander contractor, tying the commercial space sector into what had historically been an all government undertaking. Whether SpaceX’s Starship ultimately carries astronauts to the lunar surface or shares that role with Blue Origin’s competing lander, this week’s Artemis II launch is the necessary first step. Getting four humans to the Moon’s vicinity and back safely is the proof of concept everything else depends on. Advertisement Source: NASA Beyond Artemis II lies the lander question, and that is where SpaceX enters directly. In 2021, NASA awarded SpaceX a $2.89 billion contract to develop the Starship Human Landing System, a modified version of Starship designed to ferry astronauts from lunar orbit to the surface. The original plan called for SpaceX to deliver that lander for Artemis III, which was to be the first crewed lunar landing. Timing for Starship development, however, caused NASA to restructure the mission sequence entirely. Before SpaceX’s Starship Human Landing System (HLS) can put anyone on the Moon, it has to solve a problem no rocket has demonstrated at scale, which is refueling in orbit. Because the Starship HLS requires approximately ten tanker launches worth of propellant loaded into a depot in low Earth orbit before it has enough fuel to reach the lunar surface, SpaceX plans to conduct this refueling process using its upgraded V3 Starship. And until that demonstration flies and succeeds, the Starship moon lander remains a question mark.Advertisement SpaceX’s Starship V3 is almost ready and it will change space travel forever In February 2026, NASA Administrator Jared Isaacman confirmed that Artemis III, now planned for mid-2027, and will instead test lunar landers in low Earth orbit, with the actual landing pushed to Artemis IV that’s targeted for 2028. Musk responded to earlier criticism of SpaceX’s schedule by posting on X that his company is “moving like lightning compared to the rest of the space industry,” and added that “Starship will end up doing the whole Moon mission.” The contract competition was also reopened in October 2025 by then NASA chief Sean Duffy, who cited Starship’s delays and said the agency needed speed given China’s own stated goal of landing astronauts on the Moon by 2030. They won’t. SpaceX is moving like lightning compared to the rest of the space industry. Moreover, Starship will end up doing the whole Moon mission. Mark my words.Advertisement — Elon Musk (@elonmusk) October 20, 2025 Artemis came from the first Trump administration’s 2017 Space Policy Directive 1, which directed NASA to return humans to the Moon. The program picked up pace through the 2020s, with the Orion spacecraft and SLS taking years to develop at enormous costs. SpaceX entered the picture in 2021 as the chosen lander contractor, tying the commercial space sector into what had historically been an all government undertaking. Whether SpaceX’s Starship ultimately carries astronauts to the lunar surface or shares that role with Blue Origin’s competing lander, this week’s Artemis II launch is the necessary first step. Getting four humans to the Moon’s vicinity and back safely is the proof of concept everything else depends on. Advertisement Before SpaceX’s Starship Human Landing System (HLS) can put anyone on the Moon, it has to solve a problem no rocket has demonstrated at scale, which is refueling in orbit. Because the Starship HLS requires approximately ten tanker launches worth of propellant loaded into a depot in low Earth orbit before it has enough fuel to reach the lunar surface, SpaceX plans to conduct this refueling process using its upgraded V3 Starship. And until that demonstration flies and succeeds, the Starship moon lander remains a question mark.Advertisement SpaceX’s Starship V3 is almost ready and it will change space travel forever In February 2026, NASA Administrator Jared Isaacman confirmed that Artemis III, now planned for mid-2027, and will instead test lunar landers in low Earth orbit, with the actual landing pushed to Artemis IV that’s targeted for 2028. Musk responded to earlier criticism of SpaceX’s schedule by posting on X that his company is “moving like lightning compared to the rest of the space industry,” and added that “Starship will end up doing the whole Moon mission.” The contract competition was also reopened in October 2025 by then NASA chief Sean Duffy, who cited Starship’s delays and said the agency needed speed given China’s own stated goal of landing astronauts on the Moon by 2030. They won’t. SpaceX is moving like lightning compared to the rest of the space industry. Moreover, Starship will end up doing the whole Moon mission. Mark my words.Advertisement — Elon Musk (@elonmusk) October 20, 2025 Artemis came from the first Trump administration’s 2017 Space Policy Directive 1, which directed NASA to return humans to the Moon. The program picked up pace through the 2020s, with the Orion spacecraft and SLS taking years to develop at enormous costs. SpaceX entered the picture in 2021 as the chosen lander contractor, tying the commercial space sector into what had historically been an all government undertaking. Whether SpaceX’s Starship ultimately carries astronauts to the lunar surface or shares that role with Blue Origin’s competing lander, this week’s Artemis II launch is the necessary first step. Getting four humans to the Moon’s vicinity and back safely is the proof of concept everything else depends on. Advertisement SpaceX’s Starship V3 is almost ready and it will change space travel forever In February 2026, NASA Administrator Jared Isaacman confirmed that Artemis III, now planned for mid-2027, and will instead test lunar landers in low Earth orbit, with the actual landing pushed to Artemis IV that’s targeted for 2028. Musk responded to earlier criticism of SpaceX’s schedule by posting on X that his company is “moving like lightning compared to the rest of the space industry,” and added that “Starship will end up doing the whole Moon mission.” The contract competition was also reopened in October 2025 by then NASA chief Sean Duffy, who cited Starship’s delays and said the agency needed speed given China’s own stated goal of landing astronauts on the Moon by 2030. They won’t. SpaceX is moving like lightning compared to the rest of the space industry. Moreover, Starship will end up doing the whole Moon mission. Mark my words.Advertisement — Elon Musk (@elonmusk) October 20, 2025 Artemis came from the first Trump administration’s 2017 Space Policy Directive 1, which directed NASA to return humans to the Moon. The program picked up pace through the 2020s, with the Orion spacecraft and SLS taking years to develop at enormous costs. SpaceX entered the picture in 2021 as the chosen lander contractor, tying the commercial space sector into what had historically been an all government undertaking. Whether SpaceX’s Starship ultimately carries astronauts to the lunar surface or shares that role with Blue Origin’s competing lander, this week’s Artemis II launch is the necessary first step. Getting four humans to the Moon’s vicinity and back safely is the proof of concept everything else depends on. Advertisement In February 2026, NASA Administrator Jared Isaacman confirmed that Artemis III, now planned for mid-2027, and will instead test lunar landers in low Earth orbit, with the actual landing pushed to Artemis IV that’s targeted for 2028. Musk responded to earlier criticism of SpaceX’s schedule by posting on X that his company is “moving like lightning compared to the rest of the space industry,” and added that “Starship will end up doing the whole Moon mission.” The contract competition was also reopened in October 2025 by then NASA chief Sean Duffy, who cited Starship’s delays and said the agency needed speed given China’s own stated goal of landing astronauts on the Moon by 2030. They won’t. SpaceX is moving like lightning compared to the rest of the space industry. Moreover, Starship will end up doing the whole Moon mission. Mark my words.Advertisement — Elon Musk (@elonmusk) October 20, 2025 Artemis came from the first Trump administration’s 2017 Space Policy Directive 1, which directed NASA to return humans to the Moon. The program picked up pace through the 2020s, with the Orion spacecraft and SLS taking years to develop at enormous costs. SpaceX entered the picture in 2021 as the chosen lander contractor, tying the commercial space sector into what had historically been an all government undertaking. Whether SpaceX’s Starship ultimately carries astronauts to the lunar surface or shares that role with Blue Origin’s competing lander, this week’s Artemis II launch is the necessary first step. Getting four humans to the Moon’s vicinity and back safely is the proof of concept everything else depends on. Advertisement Musk responded to earlier criticism of SpaceX’s schedule by posting on X that his company is “moving like lightning compared to the rest of the space industry,” and added that “Starship will end up doing the whole Moon mission.” The contract competition was also reopened in October 2025 by then NASA chief Sean Duffy, who cited Starship’s delays and said the agency needed speed given China’s own stated goal of landing astronauts on the Moon by 2030. They won’t. SpaceX is moving like lightning compared to the rest of the space industry. Moreover, Starship will end up doing the whole Moon mission. Mark my words.Advertisement — Elon Musk (@elonmusk) October 20, 2025 Artemis came from the first Trump administration’s 2017 Space Policy Directive 1, which directed NASA to return humans to the Moon. The program picked up pace through the 2020s, with the Orion spacecraft and SLS taking years to develop at enormous costs. SpaceX entered the picture in 2021 as the chosen lander contractor, tying the commercial space sector into what had historically been an all government undertaking. Whether SpaceX’s Starship ultimately carries astronauts to the lunar surface or shares that role with Blue Origin’s competing lander, this week’s Artemis II launch is the necessary first step. Getting four humans to the Moon’s vicinity and back safely is the proof of concept everything else depends on. Advertisement They won’t. SpaceX is moving like lightning compared to the rest of the space industry. Moreover, Starship will end up doing the whole Moon mission. Mark my words.Advertisement — Elon Musk (@elonmusk) October 20, 2025 Moreover, Starship will end up doing the whole Moon mission. Mark my words.Advertisement — Elon Musk (@elonmusk) October 20, 2025 — Elon Musk (@elonmusk) October 20, 2025 Artemis came from the first Trump administration’s 2017 Space Policy Directive 1, which directed NASA to return humans to the Moon. The program picked up pace through the 2020s, with the Orion spacecraft and SLS taking years to develop at enormous costs. SpaceX entered the picture in 2021 as the chosen lander contractor, tying the commercial space sector into what had historically been an all government undertaking. Whether SpaceX’s Starship ultimately carries astronauts to the lunar surface or shares that role with Blue Origin’s competing lander, this week’s Artemis II launch is the necessary first step. Getting four humans to the Moon’s vicinity and back safely is the proof of concept everything else depends on. Advertisement Whether SpaceX’s Starship ultimately carries astronauts to the lunar surface or shares that role with Blue Origin’s competing lander, this week’s Artemis II launch is the necessary first step. Getting four humans to the Moon’s vicinity and back safely is the proof of concept everything else depends on. Advertisement How to give your Tesla a Custom Lovk Sound! Easy tutorial!! #tesla #teslatok #teslalocksound Copyright © TESLARATI. All rights reserved.
Images (1):
|
|||||
| CES 2026 Showcases Emotionally Intelligent Robots for All Age Groups | https://www.androidheadlines.com/2026/0… | 1 | Apr 02, 2026 00:03 | active | |
CES 2026 Showcases Emotionally Intelligent Robots for All Age GroupsDescription: Mind With Heart Robotics Co., Ltd. has showcased its portfolio of emotionally intelligent robots at the CES 2026 show. Content:
Sign Up! envelope_alt Get the latest Android News in your inbox every day arrow_right Sign up to receive the latest Android News every weekday: Only send updates once a week Android Headlines / Mobile Events News / CES / CES 2026 Showcases Emotionally Intelligent Robots for All Age Groups Mind With Heart Robotics Co., Ltd. has showcased its new emotionally intelligent robots at the CES 2026. The robots are designed to complement and support both the elders and the children. It uses artificial intelligence and clinically backed algorithms to adapt naturally to the user. The Consumer Electronics Show (CES) 2026 is going on full swing, and now, Mind With Heart Robotics Co., Ltd. has unveiled a broad portfolio of emotionally intelligent robots at the show. These robots are designed to meet the emotional and therapeutic needs of individuals across various age groups. The company showcases their robots’ natural movement, tactile interaction, and affective intelligence that adapts over time. The tech giant says that its emotionally intelligent robots are designed for both older people’s companionship and pediatric therapy. It shows how social robotics is moving toward clinically informed, human-centered design at a global scale for future care ecosystems worldwide. Robots are no longer limited to just mechanical and work-related tasks. The lineup is a result of years of research in affective computing and human-robot interaction led by founder and CEO Zhang Jiaming. The CEO has more than a decade of experience in the field. He has also overseen dozens of robotic systems and filed extensive patents in biomimetic design and emotional AI. With such extensive experience and knowledge, he designed the robots that read touch, voice, and behavior patterns and can also respond with lifelike motion. Further, it also keeps clinical collaboration and data ethics in mind for long-term safety and accuracy. The best part about the robots is that they can adapt to sensitive care settings in homes, hospitals, and schools across different global markets today. The main highlight of the show was the new An’An panda cub robot. It was also honored by the Consumer Technology Association with the CES Innovation Awards in artificial intelligence. It is designed specifically for loneliness and the care of old-aged people. The robot uses full-body tactile sensing and long-term memory to personalize interaction. Alongside An’An, the firm showcased its Duncan Series companion robots. These are meant for pediatric therapy, including support for children with autism and sensory challenges. The lineup is made while keeping skill-spanning communication, social interaction, motor development, play, and emotional well-being in mind. Mind With Heart Robotics says that they’re planning for a commercial release of all the robots in March. The products would be accessible at a worldwide scale across consumer, healthcare, and institutional markets. I am an experienced consumer tech writer dedicated to producing comprehensive guides and news that empower readers. My passion for technology drives me, and you can often find me exploring Tech Twitter. Feel free to reach out to me at: [email protected]. Copyright ©2026 Android Headlines. All Rights Reserved. Main Deals & More Android News Sign Up! envelope_alt Get the latest Android News in your inbox every day arrow_right Sign up to receive the latest Android News every weekday: Only send updates once a week
Images (1):
|
|||||
| McDonald's experimenting with robot employees that look like humans — … | https://nypost.com/2026/03/22/world-new… | 1 | Apr 02, 2026 00:03 | active | |
McDonald's experimenting with robot employees that look like humans — and even dress in uniformDescription: A McDonald's in a Chinese city welcome humanoid robots to serve up meals and entertain customers -- but only for a limited time. Content:
Switch between CA and NY editions here. A McDonald’s in a Chinese city welcome humanoid robots to serve up meals and entertain customers — but only for a limited time. Videos posted on social media captured the myriad of lifelike robots at a McDonald’s in Shanghai performing routine tasks typically completed by human workers, such as greeting customers and delivering food. Diners were seen interacting with the robots dressed in the fast-food joint’s iconic red-and-yellow uniforms behind counters, while children chased more of the moving machinery disguised as cute animals. The robots, supplied by Chinese firm Keenon Robotics, were deployed as part of a trial at the McDonald’s location, Digitaltrends reported. McDonald’s said the robots were only around for five days — from March 14 to the 19 — and were meant to plug the grand opening of the Shanghai Science and Technology Museum restaurant. “Our Humanoid series are leading the squad and hitting the streets,” Keenon Robotics posted on social media alongside a clip of the robots interacting with diners. “It’s a showcase of how service automation is becoming a seamless part of global dining, and how technology brings more smiles to every mealtime,” the company added. Jon Banner, the executive vice president and global chief impact officer of the beloved fast-food giant, tweeted that the robots were there for a “temporary greeting.” “Mission accomplished!” he said. “The robots were not involved in any service or operational functions. And if you didn’t visit prior to today, you missed them.” The footage comes amid concerns over artificial intelligence and robots replacing tasks typically completed by human workers at large corporations. In July, the Wall Street Journal reported that Amazon will soon use more robots in its warehouses than human employees, with more than 1 million machines already deployed across facilities. Many of these robots handle the heavy lifting in warehouse work, picking items from tall shelves and moving goods around facilities. Others are advanced enough to help humans sort and package orders, according to the Wall Street Journal. Three-quarters of Amazon’s global deliveries are now assisted by robots in some way, according to the company.
Images (1):
|
|||||
| Il robot Figure 03 ora pulisce e riordina casa | https://www.tecnoandroid.it/2026/03/13/… | 1 | Mar 31, 2026 08:01 | active | |
Il robot Figure 03 ora pulisce e riordina casaURL: https://www.tecnoandroid.it/2026/03/13/il-robot-figure-03-ora-pulisce-e-riordina-casa-1813224/ Description: Il robot umanoide Figure 03 mostra nuovi progressi nelle faccende domestiche grazie alla piattaforma AI Helix 02. Ecco i dettagli. Content:
Nel settore della robotica domestica, i video dimostrativi sono ormai una sorta di tradizione. Tra le aziende più attive in tal senso c’è Figure AI. Quest’ultima, infatti, ha attirato molta attenzione grazie ai suoi robot umanoidi progettati per lavorare con gli esseri umani. L’ultimo protagonista di tali dimostrazioni è Figure 03. Si tratta di un modello pensato per affrontare attività domestiche. Il nuovo video pubblicato dall’azienda mostra il robot impegnato in una piccola routine casalinga. Si muove tra mobili e oggetti raccogliendo giocattoli lasciati sul pavimento, sistema i cuscini del divano e passa un panno su alcune superfici per pulirle. Scene simili potrebbero sembrare quasi banali, ma proprio la loro normalità è ciò che rende interessante la dimostrazione. L’obiettivo non è stupire con movimenti spettacolari, ma dimostrare che un robot può interagire con un ambiente domestico reale, dove nulla è perfettamente ordinato. Non è la prima volta che l’azienda mostra le capacità dei suoi robot. Già in passato il precedente modello, Figure 02, aveva dato prova di una notevole abilità nella manipolazione degli oggetti. In alcune dimostrazioni lo si vedeva selezionare capi di abbigliamento o organizzare oggetti con movimenti precisi. Con il nuovo robot l’attenzione sembra spostarsi ancora di più sulla gestione di situazioni domestiche meno prevedibili. Alla base di tali capacità c’è il sistema AI sviluppato dall’azienda, chiamato Helix 02. Tale piattaforma integra diversi elementi fondamentali per la robotica moderna. Tra cui la visione artificiale per riconoscere oggetti e ambienti, la comprensione del linguaggio per interpretare istruzioni. A ciò si aggiunge anche una componente di pianificazione che traduce le informazioni raccolte in azioni concrete. Un dettaglio interessante riguarda la velocità del robot. Osservando il video, si nota che i movimenti sono ancora più lenti rispetto a quelli di una persona. Non si tratta di un limite tecnologico, ma una scelta legata alla sicurezza. In un ambiente domestico, dove il robot potrebbe trovarsi vicino a persone o animali, mantenere movimenti controllati e prevedibili riduce i rischi. Nonostante i progressi mostrati nel video, Figure AI non ha ancora annunciato quando robot come Figure 03 potrebbero arrivare sul mercato. Prima di una commercializzazione sarà necessario raccogliere grandi quantità di dati e dimostrare che il sistema può funzionare in modo affidabile. Ciao sono Margareth, per gli amici Maggie, la vostra amichevole web writer di quartiere. Questa piccola citazione dice già tanto di me: amo il cinema, le serie tv, leggere e cantare a squarciagola i musical a teatro. Se a questo aggiungiamo la passione per la fotografia e la tecnologia direi che è facile intuire perché ho deciso di studiare e poi lavorare con la comunicazione. 2012 – 2026 Tecnoandroid.it – Gestito dalla STARGATE SRLS – P.Iva: 15525681001 Testata telematica quotidiana registrata al Tribunale di Roma CON DECRETO N° 225/2015, editore STARGATE SRLS. Tutti i marchi riportati appartengono ai legittimi proprietari. Questo articolo potrebbe includere collegamenti affiliati: eventuali acquisti o ordini realizzati attraverso questi link contribuiranno a fornire una commissione al nostro sito. 🔥 Non perderti nemmeno un'offerta Smartphone, notebook, gadget tech al prezzo più basso. Unisciti a migliaia di lettori di TecnoAndroid! Puoi disiscriverti in qualsiasi momento. Niente spam, solo offerte vere. 🎯 Inserisci qualcosa di speciale: Tienimi connesso fino a quando non esco Password dimenticata? Ti sarà inviata una nuova password via email. Hai ricevuto una nuova password? Accedi qui.
Images (1):
|
|||||
| Figure AI: The robotics company hosted by Melania Trump | https://www.cnbc.com/2026/03/26/figure-… | 1 | Mar 31, 2026 08:01 | active | |
Figure AI: The robotics company hosted by Melania TrumpURL: https://www.cnbc.com/2026/03/26/figure-ai-the-robotics-company-hosted-by-melania-trump.html Description: The White House hosted its first humanoid robot guest, with first lady Melania Trump appearing alongside a robot from startup Figure AI. Content:
In this article The White House hosted its "first humanoid robot guest" on Wednesday, with first lady Melania Trump appearing alongside a robot from robotics upstart Figure AI. The robot, identified as Figure 3, accompanied the first lady during the second day of the Fostering the Future Together Global Coalition Summit, a gathering focused on technology and children's education. The machine greeted attendees in multiple languages and described itself as "a humanoid built in the United States of America," according to widely circulated footage from the event. The display represented one of, if not the, highest-profile showcases of humanoid robotics in the U.S. to date and highlights how the tech is becoming a national priority amid global tech competition. Beijing has also promoted humanoid robots at highly publicized events this year. The first lady used the robot to promote her push for artificial intelligence in children's education, suggesting that the robots could one day act as interactive educators at home. However, Figure AI says its third-generation humanoids are also applicable for more general purposes, including commercial and household tasks. The White House spotlight is likely to boost the brand of Nvidia-backed Figure AI, a lesser-known robot company compared to larger humanoid players like Tesla's Optimus and Boston Dynamics, though some of its team comes from those competitors, as well as tech giants like Apple. Figure AI was founded in 2022 by Brett Adcock, a tech entrepreneur and billionaire who previously co-founded the publicly traded aerospace company Archer Aviation and a digital hiring marketplace Vettery. Powering its robots is the firm's in-house Helix AI system, a vision-language-action model that powers its robots and enables learning through observation and verbal commands. Amid growing investor excitement for physical AI, the firm raised more than $1 billion in its Series C funding round in September led by Parkway Venture Capital with participation from other notable investors such as Nvidia, Intel Capital, Qualcomm Ventures and Salesforce. That gave it a post-money valuation of $39 billion. The fundraising is expected to be put towards the firm's aim to deploy thousands of robots in homes and logistics over the coming years — a goal that has likely been made easier by a major endorsement from the White House. Figure AI has already begun work with its first commercial customer in BMW, deploying its robots for tasks like handling sheet metal parts in manufacturing facilities. It's possible that Melania's endorsement of Figure AI's robots as potential educators will trigger a reexamination of an ongoing lawsuit the company found itself in last year. In November, Figure AI was sued by its former head of product safety, who alleged he was fired after warning executives that the company's robots were powerful enough to fracture a human skull. Robert Gruendel filed the complaint in federal court in California, claiming wrongful termination after raising safety concerns with CEO Brett Adcock and chief engineer Kyle Edelberg in September 2025. The suit stated that Figure AI's next-generation robots moved at superhuman speed and generated force approximately twice the level necessary to fracture an adult human skull. Gruendel also alleged that one robot had carved a gash into a steel refrigerator door during a malfunction. Figure AI contends that Gruendel had been fired for poor performance, and described the allegations as "falsehoods." Figure AI countersued in January, saying Gruendel failed in his role to help the company build a safe robot. The lawsuit drew attention to broader questions about safety standards in humanoid robotics development and remains pending. Interestingly, the White House event on Wednesday wasn't the first time that a company connected to Adcock received some major shine from the Trump administration. Shares of the aerospace company he co-founded, Archer Aviation, surged in June last year after U.S. President Donald Trump signed an Executive Order directing the establishment of a program to promote the safe integration of electric air taxis in U.S. cities. Archer is participating in the initiative and is working on projects involving aircraft demonstrations. Following the June 2025 executive order, Archer raised $850 million in a registered direct stock offering. Adcock co-founded Archer Aviation in 2018 with Adam Goldstein and initially served as co-CEO. However, Adcock stepped down in April 2022, and then resigned from the company's board of directors shortly afterward. He remains a shareholder, according to investment research platform Business Quant, but he has no active executive, board, or advisory position at the company. Correction: This story has been updated to reflect that Archer Aviation is an aerospace company. An earlier version of the story gave an incorrect description of the firm's business. Got a confidential news tip? We want to hear from you. Sign up for free newsletters and get more CNBC delivered to your inbox Get this delivered to your inbox, and more info about our products and services. © 2026 Versant Media, LLC. All Rights Reserved. A Versant Media Company. Data is a real-time snapshot *Data is delayed at least 15 minutes. Global Business and Financial News, Stock Quotes, and Market Data and Analysis. Data also provided by
Images (1):
|
|||||
| Are Humanoid Robots Really That Advanced Now? | HowStuffWorks | https://science.howstuffworks.com/human… | 1 | Mar 30, 2026 16:00 | active | |
Are Humanoid Robots Really That Advanced Now? | HowStuffWorksURL: https://science.howstuffworks.com/humanoid-robots.htm Description: Humanoid robots are machines designed to resemble the human body and replicate some humanlike abilities. Engineers in humanoid robotics build machines with arms, legs, and sensors that allow them to perform tasks in environments built for human beings. Content:
Advertisement Humanoid robots are machines designed to resemble the human body and replicate some humanlike abilities. Engineers in humanoid robotics build machines with arms, legs, and sensors that allow them to perform tasks in environments built for human beings. Unlike many traditional industrial robots used in factories, humanoid robots aim to work alongside humans in real world settings. Their humanlike structure helps them open doors, use tools, and interact with human operators. Advertisement Rapid advances in artificial intelligence, machine learning, and robot hardware are pushing these systems from science fiction into reality. Researchers now test advanced humanoid robot platforms in homes, workplaces and public spaces. Most humanoid robots copy the basic body plan of their human counterparts. Engineers design them with a torso, head, robotic arms, and bipedal robots legs that allow humanlike movements. Complex mechanical components and motors give these machines many degrees of freedom, meaning they can move joints in multiple directions. This flexibility helps robots perform complex tasks that require human dexterity. Advertisement Sensors such as cameras, tactile sensing systems and force/torque sensors allow a robot to detect objects, adjust its grip, and maintain balance in complex environments. Modern humanoid robot designed systems rely heavily on artificial intelligence. AI models help robots understand surroundings, track objects, and plan actions. Developers train AI models using machine learning techniques such as imitation learning and reinforcement learning. These methods allow robots to learn new skills by observing humans or experimenting with actions. Advertisement Data pipelines and control systems process information from sensors so the robot can react in real time. This tracking ability helps humanoid robots navigate unstructured environments and maintain safe human robot interaction. Several companies and research groups are developing humanoid robotics platforms. Boston Dynamics has explored agile robots capable of moving through difficult terrain. Agility Robotics created Digit robots designed for tasks such as carrying packages and moving totes in warehouses. Pal Robotics builds humanoid service robot systems used as development platforms for research. Advertisement Other humanoid robots come from companies such as SoftBank Robotics, Hanson Robotics, and Engineered Arts. These machines often focus on social robot roles, customer service roles, or public demonstrations that showcase facial expressions and communication abilities. Humanoid robots can perform some manual tasks that once required human workers. Robotic arms and motor control allow some humanoid robots to manipulate tools or handle objects. Developers are training robots to help with household tasks such as cleaning or organizing items. In industrial settings, autonomous robots may assist humans with assembling parts, transporting materials, or monitoring equipment. Advertisement Some robots can also be controlled remotely using remote control systems. Human operators guide the machine while the robot provides mobility and strength in dangerous or distant environments. Many experts believe the first wave of humanoid robots will appear in workplaces where labor shortages exist. These robots may help complete repetitive or physically demanding tasks while working alongside humans. Researchers continue improving balance, autonomous navigation, and humanlike motion so robots can operate in various environments. Advances in greater dexterity and machine perception may allow robots to interact more naturally with people. Advertisement While fully autonomous humanoid machines remain in early stages, ongoing research described in publications such as IEEE Spectrum shows steady progress. As artificial intelligence improves, humanoid robots may become capable assistants in homes, hospitals and workplaces across the world. We created this article in conjunction with AI technology, then made sure it was fact-checked and edited by a HowStuffWorks editor. Advertisement Please copy/paste the following text to properly cite this HowStuffWorks.com article: Advertisement Advertisement Advertisement Advertisement Advertisement
Images (1):
|
|||||
| Billionaire Brett Adcock Launches New Startup to Build Personal A.I. … | https://observer.com/2026/03/bret-adcoc… | 1 | Mar 30, 2026 08:00 | active | |
Billionaire Brett Adcock Launches New Startup to Build Personal A.I. | ObserverURL: https://observer.com/2026/03/bret-adcock-hark-personal-ai/ Description: Billionaire founder Brett Adcock is self-funding Hark, a lab that fuses multimodal A.I. with custom hardware to create assistants that think like humans. Content:
Brett Adcock has built and sold companies in robotics, security and air taxis, and now he wants to reinvent how people use A.I. His latest venture, Hark, is a new lab that pairs personalized intelligence with custom-built hardware. Instead of specializing in models or devices alone, Hark aims to own the whole pipeline—foundation models, software systems, hardware and user interfaces—under one roof. The company has recruited top talent from Apple and Meta to build an A.I. product that better bridges the gap between humans and machines. Thank you for signing up! By clicking submit, you agree to our <a href="http://observermedia.com/terms">terms of service</a> and acknowledge we may use your information to send you emails, product samples, and promotions on this website and other properties. You can opt out anytime. “The A.I. systems I use today are far from my vision of what the future should be,” said Adcock in a statement. “We want to create intelligence that lets you offload your mental workload into a system that begins to think like you and sometimes ahead of you.” Hark is the latest in a string of ambitious projects launched by Adcock. He previously funded the hiring marketplace Vettery; Archer, which builds electric vertical takeoff and landing aircraft (eVTOLs); and Cover, an A.I. security company developing weapon-detection systems. Hadcock also remains CEO of Figure, a robotics startup he founded in 2022 that is developing humanoid robots to automate labor. Figure, which is testing A.I. agents on its robots but will remain a separate company from Hark, was most recently valued at $39 billion in 2025. For now, Hark is financed entirely by Adcock’s own money: $100 million in personal capital. The entrepreneur, who has an estimated net worth of $19.1 billion, wants to build multimodal A.I. systems that handle speech, text, vision and context, layered with personalized memory, proactive behavior and real-time speech capabilities. Those systems are meant to work hand in hand with Hark’s own hardware. Leading that effort is Abidur Chowdhury, hired as head of design after seven years as an industrial designer at Apple, where he worked on iPhone and Mac products such as the recent iPhone Air. “We believe that the future is a new interface that will understand you, intelligently anticipate your needs, and love doing tasks that you don’t want to do,” said Chowdhury in a statement. Hark’s broader team includes A.I. researchers and engineers drawn from some of Silicon Valley’s biggest firms. On the hardware side, hires include longtime Apple staffers like David Narajowski and Dave Wilkes, who worked on product development architecture and audio hardware systems. On the A.I. side, the company has brought in senior researchers from Meta’s Superintelligence Lab, including Mingbo Ma, Xubo Liu, Xianfeng Rui, Kainan Peng and Zhihong Lei. Hark’s headcount, which also includes talent from Google, Amazon and Tesla, is about 45 today and is expected to reach 100 in the first half of 2026. To speed up model development, Hark has struck a compute deal with Nvidia that will bring thousands of GPUs online next month for pre-training and post-training its systems. Hark is entering a crowded field of ventures trying to rethink how people interact with A.I. OpenAI has enlisted former Apple design chief Jony Ive for a still-secret device project, while Meta is betting heavily on A.I.-enabled smart glasses. Newer hardware startups like Sandbar have raised millions to develop wearables with personalized A.I. at their core. Adcock says Hark will begin releasing its first A.I. models this summer, followed shortly by hardware devices designed around those systems. “We believe the next computing platform will be personal A.I.—intelligence that understands you and works alongside you every day,” he said. “But that future only becomes possible when the entire stack is built together.” We get it: you like to have control of your own internet experience. But advertising revenue helps support our journalism. To read our full stories, please turn off your ad blocker.We'd really appreciate it. Below are steps you can take in order to whitelist Observer.com on your browser: Click the AdBlock button on your browser and select Don't run on pages on this domain. Click the AdBlock Plus button on your browser and select Enabled on this site. Click the AdBlock Plus button on your browser and select Disable on Observer.com.
Images (1):
|
|||||
| Embodying physical computing into soft robots | Nature Communications | https://www.nature.com/articles/s41467-… | 10 | Mar 30, 2026 08:00 | active | |
Embodying physical computing into soft robots | Nature CommunicationsDescription: Softening and onboarding computers and controllers is one of the final frontiers in soft robotics towards their robustness and intelligence for everyday use. In this regard, embodying soft and physical computing presents exciting potential. Physical computing seeks to encode inputs into a mechanical computing kernel and leverage the internal interactions among this kernel’s constituent elements to compute the output. Moreover, such input-to-output evolution can be re-programmable. This perspective paper proposes a framework for embodying physical computing into soft robots and discusses three unique strategies in the literature: analog oscillators, physical reservoir computing, and physical algorithmic computing. These embodied computers enable the soft robot to perform complex behaviors that would otherwise require CMOS-based electronics — including coordinated locomotion with obstacle avoidance, payload weight and orientation classification, and programmable operation based on logical rules. This paper will detail the working principles of these embodied physical computing methods, survey the current state-of-the-art, and present a perspective for future development. Physical computing in soft robots reveals new principles of mechanical intelligence. The authors show that embodied oscillators, reservoir dynamics and mechanical logic enable robots to sense act and move without conventional electronics. Content:
Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript. Advertisement Nature Communications volume 17, Article number: 2455 (2026) Cite this article 6026 Accesses 7 Altmetric Metrics details Softening and onboarding computers and controllers is one of the final frontiers in soft robotics towards their robustness and intelligence for everyday use. In this regard, embodying soft and physical computing presents exciting potential. Physical computing seeks to encode inputs into a mechanical computing kernel and leverage the internal interactions among this kernel’s constituent elements to compute the output. Moreover, such input-to-output evolution can be re-programmable. This perspective paper proposes a framework for embodying physical computing into soft robots and discusses three unique strategies in the literature: analog oscillators, physical reservoir computing, and physical algorithmic computing. These embodied computers enable the soft robot to perform complex behaviors that would otherwise require CMOS-based electronics — including coordinated locomotion with obstacle avoidance, payload weight and orientation classification, and programmable operation based on logical rules. This paper will detail the working principles of these embodied physical computing methods, survey the current state-of-the-art, and present a perspective for future development. The dream of creating entirely soft, versatile, and capable robots—akin to the octopus—has long inspired scientists and engineers. We have witnessed significant progress in soft actuation1,2, sensing3, and power4, enabling these robots to operate in a wide range of challenging environments, from deep within our own bodies5 to the far bottom of the Mariana trench6. Yet, softening and onboarding computers and controllers remain a major challenge and present one of the final frontiers towards robust and intelligent soft robots suitable for everyday use. In this regard, roboticists have long recognized that the inherent material softness can facilitate and simplify control, and many innovative strategies have been explored. For example, soft and rotating legs can naturally accommodate uneven surfaces and large obstacles, allowing the robot to traverse challenging terrains without complex controls like in the quadrupeds7. Soft curling tentacles can wrap and entangle themselves around objects with widely different shapes, thus manipulating them with a simple global pressure input8. Such softness-facilitated control is sometimes referred to as “intelligence by mechanics”9 or “morphological computation”10,11. They offer exciting potential, but frequently lack the sophistication and (re-) programmability available from the more universal controllers based on digital computation. In parallel with the advancements in soft robotics (and partly inspired by the need for soft robotic computing and control), there is also growing interest in CMOS-free physical computers12,13,14,15,16. This emerging paradigm seeks to encode physical inputs into a mechanical construct (or kernel)—for example, in the form of deformation17,18,19,20,21, fluid flow22,23, thermal heat flux24, or waves25,26,27,28,29—and leverage the internal interactions among the kernel’s constituent elements to process these inputs according to a programmed evolution. The resulting output typically remains in the same physical domain as the input so that it can be easily decoded and interpreted. The paths to physical computing are diverse: one can use acoustic waves to solve differential equations28,30, re-purpose mechanical vibrations like neuron signals to perform machine learning tasks10,31,32, or construct mechanical logic gates and physical circuitry to perform algorithmic operations33. Overall, the idea of performing computation without CMOS electronics could benefit us with higher energy efficiency34, parallelization25, and resiliency against adversarial working conditions. Therefore, there is a tremendous opportunity to introduce physical computing into the field of soft robotics. That is, one could construct a physical computer out of soft materials and integrate it with soft sensors and actuators. Such integration can lead to a new class of entirely soft computation and control methods with flexibility, robustness, and programmability for more sophisticated tasks. As a result, we have witnessed a rapid emergence of soft robots with integrated and embodied physical computers over the past several years. And these physically computing robots have become an important part of recent reviews that offer a birds-eye overview on embodied intelligence (or mechanical intelligence, physical control) in robotics35,36,37,38. On the other hand, we believe a separate, deeper dive into physical computing in soft robots can benefit the research community. Specifically, we aim to define physical computing using a rigorous framework, including encoding, decoding, and a (re-)programmable computing kernel, and build upon this definition to categorize physical computing into two distinct types: analog and algorithmic (more in the following Section “what is physical computing (and what is not)?”). In this way, we can dissect and re-examine soft robotics through a different lens. We will also have a more systematic framework to introduce new physical computing concepts from other disciplines to robotics. Therefore, this perspective paper will first establish a more formal framework for physical computing, then survey the different analog and algorithmic computers implemented in the soft robots, and discuss challenges and future directions in the end. Before surveying soft robots with embodied computing, we should first clarify the definition of a physical computer. In the robotics literature, the scope of “computing” and “intelligence” has been quite broad and occasionally conflicting. We certainly do not intend to propose a new definition that everyone would agree upon. Instead, we would like to highlight a few key ingredients of the physical computer to anchor the scope of this particular paper. To this end, we propose that physical computing should involve two domains: one is the agents using the computer. They could be a human operator, but in this study, they typically refer to the non-computing parts of a soft robot, including sensors, actuators, and power supply. The agents will have an “input” that they wish to use, and expect an “output” from the computing. The second domain is the kernel, where the physical interaction between its constituent components embodies the computing program. Under this formalism, a complete physical computer should (1) have a mechanism to encode inputs from the agents into the computing kernel and decode the outputs correspondingly, and (2) have a mechanism to program (i.e., design and configure) the evolution from the input to the output in the computing kernel (Fig. 1a). a The computing architecture adapted in this perspective includes input encoding, output decoding, and programmable input-output evolution. b Analog computing: the harmonic analyzer is an elegant example of analog computing from the 1800s (photo credit to the Nimitz Library, United States Naval Academy). In the modern electronic computing paradigm, artificial neural networks also operate on analog (continuous) data. We will discuss two types of physical analog computing in soft robots: oscillator and reservoir. For example, an electronics-free legged robot uses an analog oscillator to walk70 (photo credit to David Baillot, Jacobs School of Engineering, UC San Diego), and a modular manipulator uses embodied reservoir computing to classify payloads (image adapted from ref. 145 CC BY 4.0). c Algorithmic computing: the difference machine is one of the first algorithmic computers (photo credits to Science Museum London, CC Attribution-SA 2.0). Modern CMOS-based computers are built exclusively on algorithmic Boolean logic. Here, we survey how mechanical logic is implemented in soft robots. For example, a robotic hand operates with fluidic logic control (image adapted from ref. 112 CC BY 4.0). Therefore, in this paper, computing does not exist without encoding, decoding, and programming39. Under this formalism, some nonconventional and innovative computing paradigms in the robotics field, such as the aforementioned “morphological computation,” are beyond our scope. Morphological computation generally refers to the idea that a robot body’s shape, deformation, and dynamics can perform part of the “computation” needed for control. Under this paradigm, “computation” can be quite diverse—it can be storing and releasing energy periodically to stabilize locomotion (e.g., passive walker40), or conforming to complex objects to assist manipulation (e.g., vacuum jamming gripper41), or pre-processing sensory data to assist perception (e.g., bat ear that mechanically process the incoming sound waves to assist object localization42). Therefore, the physical computing defined in this paper can be an example of morphological computation, but it has a more structured definition. That is, many morphological computation examples will not be considered as physically computing in this paper because they do not have the “encoding-kernel evolution-decoding” architecture, and they are not reprogrammable. On the other hand, a mechanical construct—e.g., architected materials or soft robotic body—that can incorporate encoding, decoding, and programming would meet the necessary condition to function as a physical computer. Moreover, based on these definitions, we will adopt the theory from Jaeger et al., and categorize physical computers into two sets (Fig. 1b)43,44. One is analog, where input and output signals are continuous, and the evolution from input to output is governed by smooth (and frequently physics-based) functions. Albert Michelson’s harmonic analyzer45 and our human neural system are classical examples of analog computers. In soft robots, this can be accomplished by exploiting their bodies’ nonlinear dynamic responses for physical reservoir computing (PRC). The other type of physical computer is the algorithmic, where the input and output take a discrete format, and the evolution from input to output is programmed via a set of abstract logical rules. Charles Babbage’s difference machine46 and our omnipresent CMOS-based computing chips are classical examples of an algorithmic computer. In soft robots, this can be accomplished by, for example, an assembly of mechanical Boolean gates featuring elastic bistability (i.e., mechanical logic gates). Table 1 summaries and compares the different computing approaches from this perspective. It is worth highlighting that many soft robots use responsive materials to interact with their surrounding environment and achieve adaptive behaviors. However, they do not necessarily compute according to the above-mentioned definition. Materials are considered “responsive” or “active” if they can change their shape or constitutive properties in response to external stimuli, such as temperature47, heat flux48, electric field49, magnetic field47,50, light48,51, and humidity52. They were initially introduced to soft robotics as artificial muscles. Shape memory alloys (SMAs) have been widely utilized in soft robotics since their inception53. Dielectric elastomer is another example54, and some liquid format dielectric materials can generate very high output forces to create jellyfish-like soft robots49. A programmable electrothermal actuator using silver nanowires (AgNW) can enable a robot to crawl55. One can also harvest responsive materials from nature, such as the self-drilling seed carrier made from white oak tissue, which can autonomously burrow by exploiting ambient humidity cycles52. (Interested readers can refer to the excellent reviews in refs. 56,57 for a comprehensive survey of responsive materials used for robotic actuation.) As responsive materials continue to evolve, researchers are beginning to explore how they can be strategically embedded in soft robotic bodies to facilitate and simplify control. For example, untethered robots with responsive materials can achieve simple and remote operation58, thereby reducing the associated control and computational complexity. Examples like this include miniature magnetic shape-programmable robots47,50, m-PDMS (magnetic particle-doped polydimethylsiloxane) robots59,60,61, photoresponsive LCE robots51, and piezoelectric polyvinylidene fluoride (PVDF) robots62. By integrating different types of responsive materials in one body, simple computational capabilities48,63 can be achieved (e.g., a soft robot that turns toward light only if heat is also present). However, although these responsive materials can enable complex tasks without sophisticated controllers (which suggests some intelligence in the mechanical domain, or mechano-intelligence64,65), they are not considered computing in this study due to a lack of clear mechanisms for input encoding, output decoding, and programmable input-output evolution. Instead, responsive materials can serve as the building blocks of physical computing, and we hope this will become clear as we survey soft robots with physical computing in the following sections. Rhythmic motions here refer to periodic changes in the shape or configuration of a soft structure over time. They are omnipresent in the animal kingdom, such as breathing, heart beating, and in particular, locomotions like walking, swimming, and wing flapping66. The underpinning mechanisms to generate rhythmic motions are diverse and still active topics of research. Among them, the central pattern generator (CPG) is a unique mechanism that can be considered as a physical computer and thus directly relates to this study. CPG is a self-organized neural circuit that produces rhythmic output from a simple, nonrhythmic input, and the input-output evolution is programmed by the neural circuit’s architecture. The CPG makes it possible to achieve and reconfigure complex locomotion gaits with minimal involvement from the brain or local sensory feedback67,68. The striking simplicity and capability of the CPG have inspired similar implementations in soft and continuous robots, where an analog oscillator—either electric or mechanical—is integrated to generate rhythmic deformation from a simple (and typically constant) input to drive locomotion69. Although many of these oscillators are not as complex as CPG’s neural circuit, their underlying computing principle are similar. An example of analog oscillator applied to soft robots: in the quadruped robot shown in Fig. 2a, rhythmic and coordinated leg swing motions are generated by an entirely pneumatic ring oscillator70. More specifically: Input encoding: in this robot, a small pressure tank supplies a constant pressure (P+) to drive the analog oscillator. Kernel: the oscillator circuit is the kernel. The three soft valves inside this circuit serve as inverters with a built-in delay, and a snap-through membrane alternates between closed and open flow paths, allowing the high-pressure flow to advance around the ring. This mechanism essentially transforms the steady input into a phase-shifted sequence of pressure pulses at the three nodes. Output decoding: the pulsed output pneumatic pressure from the oscillator flows to the corresponding soft legs, which convert the pressure inflation into mechanical swing motions for walking. Re-programming: in addition, a soft bistable valve and tethered mechanical controller are added to swap two connections like a latching switch, so triggering the valve can reverse the output pressure pulse sequence, thus reversing the locomotion direction. a Electronics-free pneumatic control: a soft ring-oscillator circuit generates rhythmic leg actuation from a constant pressure input, enabling a quadruped to produce diagonal-couplet walking gaits. A bistable 4/2 switch selects gait direction, and dual oscillators set the phase between leg pairs (image adapted from ref. 70 with permission). b Controller-free SMA modular robot: a curved mono stable beam and a mechanical slider can convert a single DC power supply into sustained self-oscillation. And a bistable switch can alternate power supply between the front and back modules for out-of-phase deformation and crawling (adapted from75, CC BY 4.0). c Twisted LCE ribbon robot: the ambient heating drives continuous self-rolling of this robotic structure for locomotion. When the robot contacts an obstacle, it will store elastic energy and then snap to reverse its direction, enabling autonomous avoidance and maze escape (adapted from ref. 72, CC BY-NC-ND 4.0). Note that all scale bars are approximate. Besides the fluidic circuitry, analog oscillation can also be achieved using other physical principles and material selections (Fig. 2b, c). For example, one can exploit mechanical instabilities and clever geometric design to generate motion with a constant power input, as seen in twisted LCE ribbons and architected structures that exploit snapping or buckling for autonomous rolling or twisting71,72. Similarly, beetle-like robots use spiral-shaped PVDF materials to generate mechanical resonance and rhythmic motion for insect-scale and high-speed crawling62,73,74. One can also use thermal or mechanical loops—such as SMA-actuated systems with built-in mechanical switching or microfluidic logic circuits—to generate self-sustained rhythmic actuation without digital controllers75,76,77. Challenges and opportunities of analog oscillators: analog oscillators are simple yet robust. They can tightly integrate with the soft robot’s body to generate locomotion without additional electronics. However, analog oscillators can suffer from programmability and scalability constraints: their dynamics are hard-wired into physical design. That is, the oscillator geometry, mechanical architecture, and constitutive material properties fully determine the output frequency and phase pattern. As a result, “programming” the kernel’s input-output evolution might require re-design rather than a straightforward parameter tuning. In addition, as the number of oscillators increases, fabrication tolerances and material variability can introduce mismatches that degrade synchronization. One can address these limitations by integrating the oscillators with other, more easily programmable components (e.g., combining the oscillator with fluidic logic gates as we show later in Section “algorithmic physical computing and mechano-logic”) and using high-precision manufacturing techniques at the smaller physical scale (as we discuss later in Section “perspective for future advancement”). While an analog oscillator provides a promising alternative to micro-controllers for computing and generating rhythmic motions, its information processing capability is largely embedded in its physical architecture. Recent work has shown that architected mechanical and metamaterial-based systems can support multiple motion sequences through controlled switching of actuation frequency78,79,80, rather than through real-time algorithmic control. Nevertheless, the space of attainable behaviors in these systems remains discretely prescribed by design and reconfiguration pathways. In contrast, there is an emerging notion of informational embodiment, where the combinatorial richness of a soft robot’s body deformation encodes spatiotemporal patterns without relying on symbolic or centralized representations81. This concept bridges soft-body dynamics and computation, offering another pathway toward decentralized, analog computing. Aligned with this, PRC offers a rigorous framework to formalize the soft robotic body as a computing kernel. In PRC, the body functions as a nonlinear, high-dimensional, and transiently stable dynamical system that maps input streams into distinguishable physical states. In other words, the soft body serves as a “physical recurrent neural network,” and its rich dynamics can substitute the digital recurrent neural network for temporal information processing. As we discussed in Section “analog oscillators and rhythmic motion”, a physical system truly computes only when it is intentionally used to compute abstract functions through defined input encoding and output decoding11. PRC satisfies this criterion by treating the mechanical body as a fixed, task-agnostic kernel, with only the output readout layer trained—typically via linear regression10. Compared to traditional artificial neural networks, the simplicity of treating the physical body as the computing kernel offers significantly lower computational cost, reduced memory and energy demands, and fast training—enabling its deployment in computation-embodied autonomous systems82. There are two rigorously developed frameworks for using PRC kernels. They have been demonstrated in a damped-mass-spring network10,31, and they can guide the use of PRC in soft robots. The first framework is open-loop, in which the mechanical system acts as a fixed nonlinear kernel, and only a static linear readout is trained to process the temporal data streams10. The second framework is closed-loop, in which the reservoir’s outputs are fed back to the actuators to shape future inputs, thus stabilizing or switching physical computing under simple static feedback31. Building on these two frameworks, one can deploy the open-loop PRC into robots for information perception—i.e., extracting and decoding meaningful information from the high-dimensional body dynamics. On the other hand, the closed-loop PRC can be used for embodied control—i.e., routing the reservoir computing outputs as control commands to the actuators, thus producing and modulating rhythmic body motions. In the following two subsections, we detail the working principles and applications of these two frameworks. An example of open-loop reservoir applied to soft robots: the open-loop framework allows PRC to enhance the perception of soft robots by transforming their bodies into multi-modal computing sensors. A compelling demonstration of this is the modular manipulator equipped with SMA coil actuators and simple strain gauges83 (Fig. 3a). The manipulator’s nonlinear body dynamics serve directly as the source for PRC. More specifically: Input encoding: when the manipulator grasps and lifts different payloads, its SMA actuators generate pulsed forces to “wobble” the body slightly. Kernel: in this robot, the soft body itself is the kernel (or reservoir). As the SMA wobbles the manipulator and its payload, the resulting body vibration, denoted as si(t), is captured by the strain gauges. Such a vibrational response is rich and nonlinear, so its spatiotemporal feature contains information about the weight and orientation of the payloads. Output decoding: by performing a simple and analog weighted linear summation of these strain gauge readings (O(t) = w0 + ∑wisi(t)), the robot can directly estimate the payload weight and orientation, thus classifying them. Re-programming: the readout weights wi in the output layer are trained by regression methods, which can be adjusted according to the particular computing task at hand. This example clearly illustrates how the open-loop reservoir computing framework enables soft robots to conduct spatiotemporal filtering through their intrinsic deformation dynamics, allowing them to extract complex information without requiring dense sensor arrays or extensive digital processing. As a result, numerous studies have emerged utilizing different soft robotic platforms. For instance, a fabric-based soft manipulator can estimate joint bending angle and payload weight simultaneously using only a few distributed pressure sensors84. Contact dynamics in a soft arm allow for tactile sensing and object property estimation without electronic skins85. For environment monitoring, a brush-like flexible sensor encodes surface textures through passive contact86(Fig. 3b, up), while in aerial applications, a flapping-wing robot detects wind direction directly from wing deformation, eliminating the need for airflow sensors87(Fig. 3b, middle). A SMA reservoir88 demonstrates the capability of predicting the future trajectory of its end effector under various driving signals (Fig. 3b, bottom). Collectively, these demonstrations show that soft robots with sparse, low-dimensional sensors can nonetheless achieve high-dimensional perception by exploiting their own body as the physical reservoir—making PRC a minimalist yet powerful strategy for embodied sensing. a Open-loop PRC for information perception: a modular manipulator with embedded strain gauges is driven by SMA actuators. Its high-dimensional body dynamics (measured by the strain si(t)) serve as reservoir states, which can be processed with trained linear readout wi to decode and identify the payload (image adapted from ref. 83 with permission). b Other examples of information perception with open-loop PRC, including terrain classification(image adapted from ref. 86 CC BY-ND 4.0), wind detection on a compliant membrane wing (image adapted from87, CC BY 4.0), and a self-sensing shape memory alloy actuator that could predict its end effector trajectory (image adapted from88 with permission). c Closed-loop PRC for embodied control: a quadruped robot uses its compliant spine as the reservoir. The four outputs of the reservoir kernel are fed back to the leg actuators to generate robust and adaptable locomotion gaits (image adapted from ref. 89 with permission). d Other examples of control embodiments with closed-loop PRC, including manipulation with a multi-segment continuum arm (image adapted from ref. 91 with permission) and a surface-swimming robot (adapted from ref. 94, CC BY 4.0). Note that all scale bars are approximate. An example of closed-loop reservoir applied to soft robots: beyond information perception, PRC enables soft robots to autonomously generate periodic and robust motions by embedding control into their intrinsic body dynamics. That is, instead of connecting robotic actuators to external digital controllers, one can feed the body reservoir’s output back to these actuators for real-time motor behavior control. In this case, the deformation dynamics of the robotic body and its interaction with the environment play a critical role. A representative example is the quadruped robot with its flexible spine serving as the reservoir computing kernel89(Fig. 3c). More specifically, Kernel: in this robot, its flexible spine is the kernel (or reservoir). Its intrinsic body dynamics are rich and nonlinear, capable of projecting input signals into a high-dimensional state vector. Input encoding and output decoding: in this case, the linear readout layer performs weighted linear summations, mapping the internal force and strain of the flexible spine into four control commands, each for a motored leg. The readouts are first trained in an open-loop setup with teacher forcing. Once the training is complete, the loop is closed. As a result, the reservoir computer’s output is also the input, eventually creating a self-sustained locomotion gait. Re-programming: once the loop is closed, and the robot can perform trotting, bounding, or turning—with a strong ability to recover from disturbance—simply by switching the readout weights. This embodied controller demonstrates how PRC converts compliant mechanics into an energy-efficient control system: the body remains unchanged, only the readout is “programmed,” and feedback routes it from perception to action. This closed-loop, reservoir-enabled control principle has been successfully implemented across multiple platforms, including soft silicone arms90,91(Fig. 3d), tensegrity robots92, and origami-inspired machines93, all demonstrating motor primitives and robust dynamic behaviors. A pneumatic soft robotic arm, for instance, learns different end-effector trajectories and autonomously recovers from disturbances by exploiting its intrinsic dynamic richness91(Fig. 3d, up). More recently, soft robots have also been shown to switch behaviors under varying environmental conditions by using reservoir systems that simultaneously encode control and sensory feedback94(Fig. 3d, bottom). Collectively, these demonstrations show that PRC not only simplifies and stabilizes motion generation but also enables behavior switching through embodied computation, offering an energy-efficient alternative to conventional digital control architectures. Challenges and opportunities of analog physical reservoirs: PRC offers an appealing framework to embody computation directly into soft robotic systems. In this framework, one can directly “multipurpose" a soft robotic body into a reservoir without substantial redesign, and quickly switch the computing function by adjusting the readout weights. However, reservoir computing also faces several fundamental challenges. The first is repeatability. Real-world physical systems can not guarantee identical dynamic output across multiple experiments; slight variations in fabrication, boundary conditions, temperature, and material behavior, along with drift and aging, can shift the reservoir’s dynamic responses and degrade computing performance. A second challenge is noise, as real hardware inevitably introduces sensing and actuation noise that can be amplified by the reservoir’s nonlinear dynamics, leading to degraded or even unstable output. The third challenge involves scaling. One can always increase the number of reservoir states to improve performance, but it means more sensors, more wires, and heavier data-processing burdens. Possible improvements across these areas include better operating-point stabilization and calibration procedures, noise-aware training with improved sensing electronics, and more efficient sensing architectures or dimensionality-reduction strategies that capture the essential physical dynamics without overwhelming the hardware. These efforts collectively point toward more reliable, robust, and scalable PRC systems for soft robotics. Unlike analog computing, an algorithmic computer uses abstract logical rules to drive the input-output evolution. Correspondingly, their input and output signals are typically in a discrete format (e.g., binary 0–1, on-off, or true-false bits). Our omnipresent, CMOS-based digital computers are built almost exclusively on an algorithmic architecture, relying on binary data streams passing through nested Boolean logic gates to perform computations44. However, one can also achieve algorithmic computing without electronics14. That is, instead of electrons flowing through binary logic gates, one can construct binary components that operates with elastic deformations, fluid flows, or other physical stimuli. Each physical component can act as an equivalent to logic gates, memory cells, or timing elements to fulfill computation roles. Though fundamentally different in shape and format, the underlying goals of digital and physical algorithmic computers remain the same: to perform computation tasks by following programmed logical rules for information processing. It is worth highlighting the important role of bistable mechanisms in physical algorithmic computing because they can directly emulate the 0–1 binary states of CMOS electronics. Bi-stability—defined as a physical construct’s ability to settle into two distant stable equilibria without additional external aids—arises from material or geometric nonlinearities and can be implemented using curved elastic beams95,96, elastomeric membranes97,98, or origami folds12,99,100. A bistable mechanism naturally exhibits large and rapid deformation when it snaps between its stable equilibria101, thereby amplifying the actuation output and simplifying the control of the soft robot102. For example, snapping elastic caps convert slow inflation into explosive jumping103, bistable curved fins enable fast, high-efficiency swimming104. They can also help program the robotic deformation, like in the soft sheets with an array of snap-through domes105. A bistable mechanism can also perform sensing and, therefore, encode inputs into the physical computer. For instance, a skin-like sensing surface with localized snap-through cells can act as a mechanical signal amplifiers that translate pressure or contact into discrete mechanical states106. Soft mechanosensors based on bistable structures can provide binary contact information without continuous electrical feedback107 Most importantly (and most relevant to this paper), the bistable mechanism can serve as a mechanical analog to transistors, functioning as one-bit memory units by switching between two stable configurations. These configurations can be mapped to binary states “0” and “1” based on the input force, pressure, or displacement, enabling the construction of logic gates and sequential logic circuits using entirely mechanical components14,108,109,110,111. Therefore, mechanical bi-stability provides a fundamental means to encode, store, and process information within a robot’s physical body. Typically, these robotic algorithmic computers directly borrow the design and architecture of CMOS-based systems, but they can be inherently energy efficient, retaining their state without continuous power input. To better illustrate the working principle, we detail a fluidic and algorithmic computer based on a reprogrammable metamaterial processor (Fig. 4a). The processor comprises identical bistable unit cells whose elastomeric chambers snap at defined pressure thresholds, converting vacuum and atmospheric pressure input into binary states112. As a result, a bistable unit cell with a clever tubing design can function like a resistor, enabling the construction of complex logical circuitry. More specifically Kernel: in the example, the kernel has 24 unit cells that are connected into a soft processor, including two SR latches, a 2–4 demultiplexer, and four ring oscillators (each linked to a soft robotic finger). The soft processor and the fingers are all powered by one constant vacuum pressure. The entire system can reversibly switch between four different operation modes, each of which corresponds to the oscillatory bending of one finger. Input encoding: the operator can choose the operating mode via manually pressing the input cells of the SR latches (input encoding). For example, if the first figure is activated initially, one can press the “S2” cell to activate the second figure and then press the “R2” cell to switch back. The outputs of the two SR latches are sent to the demultiplexer, so these 2 data lines are converted into 4. The resulting four outputs of the demultiplexer serve as the power source for the ring oscillators. Output decoding: the robotic fingers transform the output oscillatory pressures into mechanical bending. Notably, the current operation mode persists even after the removal of the pressing force, owing to the ability of the SR latches to retain their logic states until updated by new inputs. Re-programming: finally, one can re-arrange these fluidic unit cells to construct a new soft processor with different input-output mapping. a Reprogrammable metamaterial processor with robotic fingers: fluidic unit-cells with 0–1 binary states are connected to create mechanical logic circuitry to control finger actions (adapted from ref. 112, CC BY 4.0). b Complementary soft pneumatic valves: Piston-based, four-terminal modules are paired to achieve Boolean logic operation, nonvolatile latches, and analog pressure regulation. Then they are integrated with sub-circuits to create ring oscillators and counters to control crawling robots and wearable devices (adapted from ref. 114, CC BY-NC-ND 4.0). c Soft-matter computer: conductive-fluid receptors transduce spatiotemporal fluid patterns into electrical drives, which can realize analog filtering, amplification, and logic gates with simple composition. As a result, such conductive fluidic mechanism enables on-body control for locomotion, reflexive grasping, and behavior switching (image adapted from ref. 121 with permission). Note that all scale bars are approximate. In addition to the example above, there are several other attempts to copy the electronic logic circuitry into the fluidic domain and construct fluid Boolean gates with soft complementary valves, ring oscillators, and modular cells113,114,115,116,117,118,119 (Fig. 4b). Besides the pressurized fluidics, algorithmic computing can also be implemented with other novel materials and multi-stable mechanisms. Here, we list four additional approaches: (1) conductive fluidics: conductive fluidic receptors (CFRs) embedded in soft structures can act as hybrid mechanical-electrical logic units, enabling soft matter computers to perform sensing, logic, and actuation all in one continuous system120,121,122 (Fig. 4c). (2) Magnetic fluidics: magnetic liquid metal droplets can create flexible and reconfigurable logic gates with decoupled input/output channels and multi-modal control using phase-state transitions123. (3) Heat responsive materials: mechanical logic has also been achieved using mechanical and multiplexed switches that integrate bistable beams and thermally responsive materials to perform logic operations and mechanical memory storage112,124,125,126. (4) Multi-stable mechanisms: finally, algorithmic computing is also possible with pure elastic force and deformation. Unique architectures, such as counter-snapping metamaterials, provide logic behavior via geometric nonlinearity, where structural instability enables programmable stiffness transitions and collective switching sequences, making them useful for timing and computation21. Recent advances in modular chiral origami metamaterials further expand this logic repertoire by introducing multi-stable and reprogrammable architectures that can store information through mechanically encoded hysteresis and noncommutative state transitions127. It is worth noting that physical algorithmic computing can also enable locomotion generation and sequencing— locomotion turns out to be the robot task shared by all physical computers reviewed in this study. In this regard, algorithmic computing supports locomotion sequencing through timing control and built-in periodicity. For example, pneumatic ring oscillators and fluidic valve networks, constructed from bistable logic gates, have been used to generate self-sustained actuation cycles for crawling and walking gaits in soft quadrupeds and hexapods114,116,117. Morphologically encoded logic and routing delays, utilizing internal resistance gradients, have also been employed to produce pressure wave propagation and staggered motion, enabling gait generation through a single input channel128. Reconfigurable metamaterials and origami systems offer structural ways to embed sequencing. For instance, modular soft metamaterial robots have been programmed to switch between gaits—turning, serpentine, reciprocating—by physically re-arranging submodules acting as logic units112,117. In another case, origami robots with memory registers and rotating read-heads perform controlled motion paths by storing finite-state instructions mechanically124. Challenges and opportunities of physical algorithmic computing: compared to analog physical computers, algorithmic physical computing is quite versatile in that it can borrow many designs and working principles from well-established CMOS electronics. However, the reviewed examples above lag behind in terms of speed and scaling. Their computing speed is constrained by the relatively slow physical processes, such as pressurizing and venting of fluidic networks, deformation of thick elastomeric chambers, and, in some cases, heat diffusion through responsive materials. Using physical signals instead of electric ones also makes miniaturization more challenging: The finite size of multi-stable unit cells, the need for compliant interconnection devices, and the risk of mechanical crosstalk between unit cells make routing and isolation harder than in CMOS. Therefore, significant research efforts are necessary. For example, using advanced manufacturing technology can help minimize the unit cell size and enable more integrated packaging, thus speeding up physical computers (as we discuss later in Section “perspective for future advancement”). Regardless, physical algorithmic computing is still a desirable choice for small-scale logic and simple on-board control (as we discuss further in the conclusion section). Since this perspective lies at the intersection of physical computing and soft robots, it is intuitive to ask questions about the future direction using the “supply-and-demand” analogy. On the supply side: “are there any newly available capabilities in physical computing that can be deployed for soft robotics?” On demand side: “what additional computing power would future soft robots require?” Here, our unique perspective of dissecting soft robots into the encoding, kernel, and decoding layers can offer a systematic framework to introduce new computing concepts. For example, one can keep the encoding (e.g., sensor and input) and decoding components (e.g., actuator) the same, and “swap” the kernel with different designs that have new kinds of computing capacity. Alternatively, one can “upgrade” the kernel with a more advanced design. By surveying the current physical computing studies, one can discover many unique approaches that could be integrated into soft robots in the future (examples in Fig. 5). One can continue to advance the computing kernel’s density and capacity in soft robots by adopting new strategies to process encoded inputs, either locally or through a centralized kernel. To illustrate some of the promising concepts, we envision an octopus-inspired soft robot with computational capabilities distributed across its tentacles, as well as in its brain, following the mechanical computing framework described in this manuscript. Each component can encode, process, and decode data (with mechanical memory reserved for storage). Starting from the top right tentacle and moving clockwise: bistable soft shells enable rule-changeable logic operations (image adapted from ref. 137 CC BY 4.0); information processing during transmission via nondispersive mechanical solitary waves (image adapted from ref. 139 CC BY 4.0); a mechanical neural network offloads computation and can be attached to the robot’s skin (image adapted from ref. 146 with permission); mechanical analog-to-digital converters can be embedded in the tentacles (image adapted from ref. 33 with permission). In the robot’s brain, miniaturized physical circuits mimic an algorithmic logic unit (ALU) (image adapted from ref. 18 CC BY 4.0). Finally, reprogrammable and nonvolatile mechanical memories can store data either with magnetic (left, image adapted from ref. 134 with permission) or thermal principles (right, image adapted from ref. 132 CC BY 4.0). Note that all scale bars are approximate. Here, we highlight three most promising topics from three different angles: function, scale, and system integration. Regarding the function, the current physical computing has demonstrated an impressive ability to extract information from sensory signals and execute actuation commands. On the other hand, on-board nonvolatile memory, a vital component in modern computing paradigms, has yet to be implemented in soft robots. Regarding scale, the current physical computing in soft robots is relatively large in terms of physical size. Minimizing their scale using high-precision manufacturing techniques might help advance the performance and reliability of physical computers to a new level, addressing some of the challenges in physical computing as we discussed earlier. Finally, regarding systematic integration, the current physical computing setup in soft robots primarily operates in a standalone manner. However, some integration with digital hardware (e.g., for long-distance communication) can enhance the overall capability. Therefore, a meaningful and integrated mechanical-electrical hybrid circuit can be advantageous. In the following section, we present recent studies in these three aspects. An advanced and autonomous robot should also be able to memorize the operator’s instructions or the knowledge from its interaction with the working environment. In this regard, we have seen some promising examples of memory in the mechanical metamaterial domain. That is, by combining responsive materials (memory encoding/decoding) and elastic bistability (storage), mechanical metamaterials can achieve information storage via mechanical bits (m-bits), similar to their digital counterparts. These m-bits can be bistable elastic shell to realize mechano-fluidic memory129, or bistable origami structures130 or tiles of bistable Kirigami units131, forming 2D and 3D storage arrays. Examples include temperature-responsive bistable Kirigami units132,133, which sequentially retrieve stored information in 3D arrays, and magnetic-responsive bistable elastic shell units134, which enable on-demand re-programmability for 2D arrays. These configurations function as nonvolatile mechanical memories and could be used in soft robotics in the future. A key challenge will be designing an integrative approach to encode information from the physical memory into the computing kernel, and subsequently decode and store the computing output into the physical memory device. Just as CMOS-based computers’ never-ending quest to shrink the size of their basic electronic units, soft robots can also benefit from a smaller and more capable physical computer onboard. Early physical algorithmic computers —such as the waterbomb origami with bistable hinges135 — were bulky and limited to AND, OR, and NOT gates. More compact designs like bistable curved beam arrays have introduced NOR and NAND operations136. Bistable soft shells allowed re-programmable mechanologics so that their operation mode can be switched on demand (e.g., from XOR to XNOR)137. Additionally, self-powered origami mechanologics19 and thermal mechanical transistors24 completed the binary logic set with XNOR and XOR gates. More recently, with advancements in sub-millimeter additive manufacturing, small-scale mechanologics based on buckling micro-flexures have emerged18,138. Integrating these mechanologics has led to fully mechanical half-adders19,136,137,138, full-adders19,24,137, and solitary wave-based mechanical computing platforms139. It is not hard to imagine that some of these miniaturized and physical algorithmic units will be integrated into soft robots in the future, enabling fully onboard computation and control. While physical computing aspires to perform computation tasks without complex CMOS-based electronics, it can still benefit from using some simple electronic components. Indeed, the physical reservoir computer reviewed in this study is a mechanical-electric hybrid system, because its readout layer requires electronics to perform weighted linear summation (e.g., using an analog adder circuit with an Op-Amp). This provides the physical reservoir with excellent re-programmability and multi-tasking abilities that are not yet available in analog oscillators or physical algorithmic computers. Therefore, mechanical-electrical hybrid circuits could provide scalable computing capabilities for future soft robots. These hybrid systems convert mechanical input into electrical signals by opening or closing conductive pathways in response to deformation. Applications include mechanical digital sensors140, mechanical analog-to-digital converters (m-ADC)141, and mechanical arithmetic logic units (m-ALU), which embed Boolean logic into soft configurable structures33,142,143. Higher-level computation becomes feasible when memory is integrated into computation, as seen in in-memory mechanical computing platforms, such as mechanical neural networks17 and linear equation solvers144. These hybrid circuits have yet to be integrated into soft robotics, but they present an exciting pathway for systematic integrations. In summary, the convergence of physical computing with soft robotics is a promising strategy for softening and onboarding control. By integrating a physical computing kernel—such as an analog oscillator, a physical reservoir computer, or an algorithmic computer—into a soft robotic system, these robots can achieve sophisticated locomotion and manipulation tasks that would typically require a conventional digital control. For example, this perspective paper demonstrates that an electronics-free legged robot equipped with an analog oscillator can perform coordinated locomotion and reverse direction upon encountering an obstacle. The soft modular manipulator, with inherent PRC capacity, can utilize its body dynamics to estimate the weight and orientation of its payload, enabling it to classify the payload without relying on electronic sensors, such as cameras. Another soft robotic hand integrated with an algorithmic fluidic circuitry that can operate based on abstract Boolean logical rules. However, it is worth noting that all of the physically computing robots in this perspective remain at the proof-of-concept level. Implementing these exciting concepts into practical, real-world use still requires a significant amount of research efforts and system engineering. Despite the rapid advances in this field, it is unlikely that physical computers embedded in soft robots can catch up with digital hardware in terms of computational speed and information density in the foreseeable future. Therefore, it is unrealistic to replace conventional digital hardware entirely with physical computing. Instead, engineers must answer the critical question: “how much should we use physical computing?" “Where to apply them?" and “how can we seamlessly integrate physical computing with conventional digital hardware?" For robots, physical computing is advantageous because of its softness, simplicity, and robustness. Therefore, it makes the most sense to use physical computing in the following three scenarios. (1) The targeted tasks are closely related to the robot’s physical body—this is why we have seen great success in locomotion generation and information extraction via direct physical interaction (using PRC). (2) The robots need to be small and entirely soft—because physical computing could seamlessly integrate with the soft robotic body without the complexity of adding electronic components (e.g., using advanced 3D printing). (3) The working conditions are demanding—for example, fluidics-based computation is desirable for underwater operations, where electronics are vulnerable to damage. On the other hand, conventional digital electronics is more suitable for “over the distance” tasks, such as obstacle avoidance using vision data or long-distance communication with operators. Therefore, the future of physically computing robots hinges upon two pillars: the continual advances in physical computing and its strategic integration with conventional digital hardware. As we discussed in Section “perspective for future advancement”, in the foreseeable future, we are likely to witnessf the creation of more powerful physical computing thanks to miniaturization and integrated memory capacity. With a more advanced physical computing kernel, a soft robot can acquire information from interacting with the surrounding environment, memorize the acquired knowledge, and execute the action plan, all in a highly integrated mechanical domain. On the other hand, new strategies will emerge to tightly integrate physical computing with digital computing via novel mechanical-electrical hybrid circuits, enabling physical computing robots to operate within large-scale automated systems. This vision of soft robots hinges on the ongoing convergence of various engineering disciplines, including mechanical metamaterials, computing theory, advanced manufacturing, and interdisciplinary design. El-Atab, N. et al. Soft actuators for soft robotic applications: a review. Adv. Intell. Syst. 2, 2000128 (2020). Article Google Scholar Rus, D. & Tolley, M. T. Design, fabrication and control of soft robots. Nature 521, 467–475 (2015). Article ADS CAS PubMed Google Scholar Wang, H., Totaro, M. & Beccai, L. Toward perceptive soft robots: progress and challenges. Adv. Sci. 5, 1800541 (2018). Article Google Scholar Aubin, C. A. et al. Towards enduring autonomous robots via embodied energy. Nature 602, 393–402 (2022). Article ADS CAS PubMed Google Scholar Cianchetti, M., Laschi, C., Menciassi, A. & Dario, P. Biomedical applications of soft robotics. Nat. Rev. Mater. 3, 143–153 (2018). Article ADS Google Scholar Li, G. et al. Self-powered soft robot in the Mariana Trench. Nature 591, 66–71 (2021). Article ADS CAS PubMed Google Scholar Saranli, U., Buehler, M. & Koditschek, D. E. Rhex: a simple and highly mobile hexapod robot. Int. J. Robot. Res. 20, 616–631 (2001). Article Google Scholar Becker, K. et al. Active entanglement enables stochastic, topological grasping. Proc. Natl. Acad. Sci. USA 119, e2209819119 (2022). Article CAS PubMed PubMed Central Google Scholar Blickhan, R. et al. Intelligence by mechanics. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 365, 199–220 (2007). Article ADS MathSciNet Google Scholar Hauser, H., Ijspeert, A. J., Füchslin, R. M., Pfeifer, R. & Maass, W. Towards a theoretical foundation for morphological computation with compliant bodies. Biol. Cybern. 105, 355–370 (2011). Article MathSciNet PubMed Google Scholar Müller, V. C. & Hoffmann, M. What is morphological computation? On how the body contributes to cognition and control. Artif. Life 23, 1–24 (2017). Article PubMed Google Scholar Chen, B., Nam, J. & Kim, M. Advances in metamaterials for mechanical computing. APL Electronic Devices 1, 021502 (2025). Alù, A. et al. Roadmap on embodying mechano-intelligence and computing in functional materials and structures. Smart Mater. Struct. 34, 063501 (2025). Article ADS Google Scholar Yasuda, H. et al. Mechanical computing. Nature. 598, 39–48 (2021). Article ADS CAS PubMed Google Scholar Zangeneh-Nejad, F., Sounas, D. L., Alù, A. & Fleury, R. Analogue computing with metamaterials. Nat. Rev. Mater. 6, 207–225 (2021). Article ADS Google Scholar Qian, C., Kaminer, I. & Chen, H. A guidance to intelligent metamaterials and metamaterials intelligence. Nat. Commun. 16, 1154 (2025). Article ADS CAS PubMed PubMed Central Google Scholar Mei, T. & Chen, C. Q. In-memory mechanical computing. Nat. Commun. 14, 5204 (2023). Article ADS CAS PubMed PubMed Central Google Scholar Song, Y. et al. Additively manufacturable micro-mechanical logic gates. Nat. Commun. 10, 882 (2019). Article ADS PubMed PubMed Central Google Scholar Zhang, Q. et al. Meta-mechanotronics for self-powered computation. Mater. Today 65, 78–89 (2023). Article Google Scholar Liu, Z., Fang, H., Xu, J. & Wang, K.-W. Cellular automata inspired multistable origami metamaterials for mechanical learning. Adv. Sci. 10, 2305146 (2023). Article Google Scholar Ducarme, P., Weber, B., van Hecke, M. & Overvelde, J. T. Exotic mechanical properties enabled by countersnapping instabilities. Proc. Natl. Acad. Sci. USA 122, e2423301122 (2025). Article CAS PubMed PubMed Central Google Scholar Rajappan, A. et al. Logic-enabled textiles. Proc. Natl. Acad. Sci. USA 119, e2202118119 (2022). Article CAS PubMed PubMed Central Google Scholar El-Atab, N., Canas, J. C. & Hussain, M. M. Pressure-driven two-input 3d microfluidic logic gates. Adv. Sci. 7, 1903027 (2020). Article CAS Google Scholar Chen, H. et al. Thermal computing with mechanical transistors. Adv. Funct. Mater. 34, 2401244 (2024). Article CAS Google Scholar Mousa, M. & Nouh, M. Parallel mechanical computing: metamaterials that can multitask. Proc. Natl. Acad. Sci. USA 121, e2407431121 (2024). Article CAS PubMed PubMed Central Google Scholar Dorin, P. & Wang, K.-W. Embodiment of parallelizable mechanical logic utilizing multimodal higher-order topological states. Int. J. Mech. Sci. 284, 109697 (2024). Article Google Scholar Bilal, O. R., Foehr, A. & Daraio, C. Bistable metamaterial for switching and cascading elastic vibrations. Proc. Natl. Acad. Sci. USA 114, 4603–4606 (2017). Article ADS CAS PubMed PubMed Central Google Scholar Silva, A. et al. Performing mathematical operations with metamaterials. Science 343, 160–163 (2014). Article ADS MathSciNet CAS PubMed Google Scholar Tzarouchis, D. C., Edwards, B. & Engheta, N. Programmable wave-based analog computing machine: a metastructure that designs metastructures. Nat. Commun. 16, 908 (2025). Article ADS CAS PubMed PubMed Central Google Scholar Zangeneh-Nejad, F. & Fleury, R. Performing mathematical operations using high-index acoustic metamaterials. New J. Phys. 20, 073001 (2018). Article ADS Google Scholar Hauser, H., Ijspeert, A. J., Füchslin, R. M., Pfeifer, R. & Maass, W. The role of feedback in morphological computation with compliant bodies. Biol. Cybern. 106, 595–613 (2012). Article MathSciNet PubMed Google Scholar Louvet, T., Omidvar, P. & Serra-Garcia, M. Reprogrammable, in-materia matrix-vector multiplication with floppy modes. Adv. Intell. Syst. 7, 2500062 (2025). El Helou, C., Grossmann, B., Tabor, C. E., Buskohl, P. R. & Harne, R. L. Mechanical integrated circuit materials. Nature 608, 699–703 (2022). Article ADS CAS PubMed Google Scholar Zolfagharinejad, M. et al. Analogue speech recognition based on physical computing. Nature 645, 886–892 (2025). Chen, H. & Metin, S. Physical intelligence in small‐scale robots and machines. Advanced Materials 38, e10332 (2026). Chen, C. et al. Advancing physical intelligence for autonomous soft robots. Sci. Robot. 10, eads1292 (2025). Article PubMed Google Scholar Mengaldo, G. et al. A concise guide to modelling the physics of embodied intelligence in soft robotics. Nat. Rev. Phys. 4, 595–610 (2022). Article Google Scholar Milana, E., Santina, C. D., Gorissen, B. & Rothemund, P. Physical control: a new avenue to achieve intelligence in soft robotics. Sci. Robot. 10, eadw7660 (2025). Article PubMed Google Scholar Horsman, D., Stepney, S., Wagner, R. C. & Kendon, V. When does a physical system compute? Proc. R. Soc. A Math. Phys. Eng. Sci. 470, 20140182 (2014). ADS Google Scholar McGeer, T. et al. Passive dynamic walking. Int. J. Robot. Res. 9, 62–82 (1990). Article Google Scholar Brown, E. et al. Universal robotic gripper based on the jamming of granular material. Proc. Natl. Acad. Sci. USA 107, 18809–18814 (2010). Article ADS CAS PubMed Central Google Scholar Reijniers, J., Vanderelst, D. & Peremans, H. Morphology-induced information transfer in bat sonar. Phys. Rev. Lett. 105, 148701 (2010). Article ADS PubMed Google Scholar Jaeger, H. Towards a generalized theory comprising digital, neuromorphic and unconventional computing. Neuromorphic Comput. Eng. 1, 012002 (2021). Article Google Scholar Jaeger, H., Noheda, B. & Van Der Wiel, W. G. Toward a formal theory for computing machines made out of whatever physics offers. Nat. Commun. 14, 4911 (2023). Article ADS CAS PubMed PubMed Central Google Scholar Hammack, B., Kranz, S. & Carpenter, B. Albert Michelson’s Harmonic Analyzer: a Visual Tour of a Nineteenth Century Machine that Performs Fourier Analysis (Articulate Noise Books, 2014). Swade, D. & Babbage, C. Difference Engine: Charles Babbage and the Quest to Build the First Computer (Viking Penguin, 2001). Zhang, J., Guo, Y., Hu, W. & Sitti, M. Wirelessly actuated thermo-and magneto-responsive soft bimorph materials with programmable shape-morphing. Adv. Mater. 33, 2100336 (2021). Article CAS PubMed PubMed Central Google Scholar He, Q. et al. A modular strategy for distributed, embodied control of electronics-free soft robots. Sci. Adv. 9, eade9247 (2023). Article PubMed PubMed Central Google Scholar Wang, T. et al. A versatile jellyfish-like robotic platform for effective underwater propulsion and manipulation. Sci. Adv. 9, eadg0292 (2023). Article PubMed PubMed Central Google Scholar Fan, X., Dong, X., Karacakol, A. C., Xie, H. & Sitti, M. Reconfigurable multifunctional ferrofluid droplet robots. Proc. Natl. Acad. Sci. USA 117, 27916–27926 (2020). Article ADS CAS PubMed PubMed Central Google Scholar Wang, Y. et al. 3D-printed photoresponsive liquid crystal elastomer composites for free-form actuation. Adv. Funct. Mater. 33, 2210614 (2023). Article CAS Google Scholar Luo, D. et al. Autonomous self-burying seed carriers for aerial seeding. Nature 614, 463–470 (2023). Article ADS CAS PubMed Google Scholar Rodrigue, H., Wang, W., Han, M.-W., Kim, T. J. & Ahn, S.-H. An overview of shape memory alloy-coupled actuators and robots. Soft Robot. 4, 3–15 (2017). Article PubMed Google Scholar Guo, Y., Liu, L., Liu, Y. & Leng, J. Review of dielectric elastomer actuators and their applications in soft robots. Adv. Intell. Syst. 3, 2000282 (2021). Article Google Scholar Wu, S., Hong, Y., Zhao, Y., Yin, J. & Zhu, Y. Caterpillar-inspired soft crawling robot with distributed programmable thermal actuation. Sci. Adv. 9, eadf8014 (2023). Article CAS PubMed PubMed Central Google Scholar Zhao, Y. et al. Stimuli-responsive polymers for soft robotics. Annu. Rev. Control Robot. Auton. Syst. 5, 515–545 (2022). Article Google Scholar Shen, Z., Chen, F., Zhu, X., Yong, K.-T. & Gu, G. Stimuli-responsive functional materials for soft robotics. J. Mater. Chem. B 8, 8972–8991 (2020). Article CAS Google Scholar Boyvat, M. & Sitti, M. Remote modular electronics for wireless magnetic devices. Adv. Sci. 8, 2101198 (2021). Article CAS Google Scholar Ke, X. et al. Synergistical mechanical design and function integration for insect-scale on-demand configurable multifunctional soft magnetic robots. Soft Robot. 11, 43–56 (2024). Article CAS PubMed Google Scholar Soon, R. H. et al. Pangolin-inspired untethered magnetic robot for on-demand biomedical heating applications. Nat. Commun. 14, 3320 (2023). Article ADS CAS PubMed PubMed Central Google Scholar Wang, T. et al. Adaptive wireless millirobotic locomotion into distal vasculature. Nat. Commun. 13, 4465 (2022). Article ADS CAS PubMed PubMed Central Google Scholar Mu, W. et al. Spiral-shape fast-moving soft robots. Adv. Funct. Mater. 33, 2300516 (2023). Article CAS Google Scholar He, Q. et al. Modular stimuli-responsive valves for pneumatic soft robots. Adv. Intell. Syst. 7, 2400659 (2025). Sitti, M. Physical intelligence as a new paradigm. Extreme Mech. Lett. 46, 101340 (2021). Article PubMed PubMed Central Google Scholar Li, Y., Li, Z., Duan, Y. & Spulber, A.-B. Physical artificial intelligence (pai): the next-generation artificial intelligence. Front. Inf. Technol. Electron. Eng. 24, 1231–1238 (2023). Article ADS Google Scholar Dickinson, M. H. et al. How animals move: an integrative view. Science 288, 100–106 (2000). Article ADS CAS PubMed Google Scholar MacKay-Lyons, M. Central pattern generation of locomotion: a review of the evidence. Phys. Ther. 82, 69–83 (2002). Article PubMed Google Scholar Ijspeert, A. J. Central pattern generators for locomotion control in animals and robots: a review. Neural Netw. 21, 642–653 (2008). Article ADS PubMed Google Scholar Zhou, Q., Xu, J. & Fang, H. A CPG-based versatile control framework for metameric earthworm-like robotic locomotion. Adv. Sci. 10, 2206336 (2023). Article Google Scholar Drotman, D., Jadhav, S., Sharp, D., Chan, C. & Tolley, M. T. Electronics-free pneumatic circuits for controlling soft-legged robots. Sci. Robot. 6, eaay2627 (2021). Article PubMed Google Scholar Goswami, D., Liu, S., Pal, A., Silva, L. G. & Martinez, R. V. 3D-architected soft machines with topologically encoded motion. Adv. Funct. Mater. 29, 1808713 (2019). Article Google Scholar Zhao, Y. et al. Twisting for soft intelligent autonomous robot in unstructured environments. Proc. Natl. Acad. Sci. USA 119, e2200265119 (2022). Article CAS PubMed PubMed Central Google Scholar Yang, X., Chang, L. & Pérez-Arancibia, N. O. An 88-milligram insect-scale autonomous crawling robot driven by a catalytic artificial muscle. Sci. Robot. 5, eaba0015 (2020). Article PubMed Google Scholar Chen, G. et al. A non-electrical pneumatic hybrid oscillator for high-frequency multimodal robotic locomotion. Nat. Commun. 16, 1449 (2025). Article ADS PubMed PubMed Central Google Scholar Zhou, Z. & Li, S. Self-sustained and coordinated rhythmic deformations with SMA for controller-free locomotion. Adv. Intell. Syst. 6, 2300667 (2024). Article Google Scholar Wehner, M. et al. An integrated design and fabrication strategy for entirely soft, autonomous robots. Nature 536, 451–455 (2016). Article ADS CAS PubMed Google Scholar Kotikian, A. et al. Untethered soft robotic matter with passive control of shape morphing and propulsion. Sci. Robot. 4, eaax7044 (2019). Article PubMed Google Scholar Mousa, M., Rezanejad, A., Gorissen, B. & Forte, A. E. Frequency-controlled fluidic oscillators for soft robots. Adv. Sci. 11, 2408879 (2024). Article CAS Google Scholar van Laake, L. C. & Overvelde, J. T. B. Bio-inspired autonomy in soft robots. Commun. Mater. 5, 198 (2024). Article Google Scholar Comoretto, A., Schomaker, H. A. & Overvelde, J. T. Physical synchronization of soft self-oscillating limbs for fast and autonomous locomotion. Science 388, 610–615 (2025). Article ADS CAS PubMed Google Scholar Pitti, A., Austin, M., Nakajima, K. & Kuniyoshi, Y. Informational embodiment: computational role of information structure in codes and robots. Phys. Life Rev. 53, 262–276 (2025). Hauser, H. Physical reservoir computing in robotics. Reservoir Computing: Theory, Physical Implementations, and Applications, 169–190 (Springer, 2021). Wang, J. & Suyi, L. Embodied multi-modal sensing with a soft modular arm powered by physical reservoir computing. In 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 17658–17664 (IEEE, 2025). Wang, J., Qiao, Z., Zhang, W. & Li, S. Proprioceptive and exteroceptive information perception in a fabric soft robotic arm via physical reservoir computing with minimal training data. Adv. Intell. Syst. 7, 2400534 (2025). Article Google Scholar Yoshimura, K. & Hasegawa, T. Research on tactile sensation by physical reservoir computing with a robot arm and a Ag2S reservoir. Jpn. J. Appl. Phys. 63, 03SP17 (2024). Article CAS Google Scholar Tayama, Y., Furukawa, H. & Ogawa, J. Development of a soft robot with locomotion mechanism and physical reservoir computing for mimicking gastropods. J. Robot. Mechatron. 37, 105–113 (2025). Article Google Scholar Tanaka, K. et al. Flapping-wing dynamics as a natural detector of wind direction. Adv. Intell. Syst. 3, 2000174 (2021). Article Google Scholar Shougat, M. R. E. U., Kennedy, S. & Perkins, E. A self-sensing shape memory alloy actuator physical reservoir computer. IEEE Sens. Lett. 7, 1–4 (2023). Article Google Scholar Zhao, Q., Nakajima, K., Sumioka, H., Hauser, H. & Pfeifer, R. Spine dynamics as a computational resource in spine-driven quadruped locomotion. In Proc. 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, 1445–1451 (IEEE, 2013). Nakajima, K., Li, T., Hauser, H. & Pfeifer, R. Exploiting short-term memory in soft body dynamics as a computational resource. J. R. Soc. Interface 11, 20140437 (2014). Article CAS PubMed PubMed Central Google Scholar Eder, M., Hisch, F. & Hauser, H. Morphological computation-based control of a modular, pneumatically driven, soft robotic arm. Adv. Robot. 32, 375–385 (2018). Article Google Scholar Caluwaerts, K., D’Haene, M., Verstraeten, D. & Schrauwen, B. Locomotion without a brain: physical reservoir computing in tensegrity structures. Artif. life. 19, 35–66 (2013). Article CAS PubMed Google Scholar Bhovad, P. & Li, S. Physical reservoir computing with origami and its application to robotic crawling. Sci. Rep. 11, 13002 (2021). Article ADS CAS PubMed PubMed Central Google Scholar Horii, Y. et al. Physical reservoir computing in a soft swimming robot. In Proc. ALIFE 2021: the 2021 Conference on Artificial Life (MIT Press, 2021). Chen, T., Bilal, O. R., Shea, K. & Daraio, C. Harnessing bistability for directional propulsion of soft, untethered robots. Proc. Natl. Acad. Sci. USA 115, 5698–5702 (2018). Article ADS CAS PubMed PubMed Central Google Scholar Raney, J. R. et al. Stable propagation of mechanical signals in soft media using stored elastic energy. Proc. Natl. Acad. Sci. USA 113, 9722–9727 (2016). Article ADS CAS PubMed PubMed Central Google Scholar Rothemund, P. et al. A soft, bistable valve for autonomous control of soft actuators. Sci. Robot. 3, eaar7986 (2018). Article PubMed Google Scholar Patel, D. K. et al. Highly dynamic bistable soft actuator for reconfigurable multimodal soft robots. Adv. Mater. Technol. 8, 2201259 (2023). Article Google Scholar Huang, C. et al. Bistable programmable origami based soft electricity generator with inter-well modulation. Nano Energy 103, 107775 (2022). Article CAS Google Scholar Kaufmann, J., Bhovad, P. & Li, S. Harnessing the multistability of kresling origami for reconfigurable articulation in soft robotic arms. Soft Robot. 9, 212–223 (2022). Article PubMed Google Scholar Pal, A., Restrepo, V., Goswami, D. & Martinez, R. V. Exploiting mechanical instabilities in soft robotics: control, sensing, and actuation. Adv. Mater. 33, 2006939 (2021). Article CAS Google Scholar Tang, Y. et al. Leveraging elastic instabilities for amplified performance: spine-inspired high-speed and high-force soft robots. Sci. Adv. 6, eaaz6912 (2020). Article ADS PubMed PubMed Central Google Scholar Gorissen, B., Melancon, D., Vasios, N., Torbati, M. & Bertoldi, K. Inflatable soft jumper inspired by shell snapping. Sci. Robot. 5, eabb1967 (2020). Article PubMed Google Scholar Chi, Y., Hong, Y., Zhao, Y., Li, Y. & Yin, J. Snapping for high-speed and high-efficient butterfly stroke–like soft swimmer. Sci. Adv. 8, eadd3788 (2022). Article PubMed PubMed Central Google Scholar Faber, J. A., Udani, J. P., Riley, K. S., Studart, A. R. & Arrieta, A. F. Dome-patterned metamaterial sheets. Adv. Sci. 7, 2001955 (2020). Article CAS Google Scholar Le Ferrand, H., Studart, A. R. & Arrieta, A. F. Filtered mechanosensing using snapping composites with embedded mechano-electrical transduction. ACS Nano 13, 4752–4760 (2019). Article PubMed Google Scholar Thuruthel, T. G., Abidi, S. H., Cianchetti, M., Laschi, C. & Falotico, E. A bistable soft gripper with mechanically embedded sensing and actuation for fast grasping. In Proc. 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 1049–1054 (IEEE, 2020). Ramachandran, V., Bartlett, M. D., Wissman, J. & Majidi, C. Elastic instabilities of a ferroelastomer beam for soft reconfigurable electronics. Extreme Mech. Lett. 9, 282–290 (2016). Article Google Scholar Holmes, D. P., Tavakol, B., Froehlicher, G. & Stone, H. A. Control and manipulation of microfluidic flow via elastic deformations. Soft Matter 9, 7049–7053 (2013). Article ADS CAS Google Scholar Yang, B., Wang, B. & Schomburg, W. K. A thermopneumatically actuated bistable microvalve. J. Micromech. Microeng. 20, 095024 (2010). Article ADS Google Scholar Maffli, L., Rosset, S. & Shea, H. R. Mm-size bistable zipping dielectric elastomer actuators for integrated microfluidics. In Proc. Electroactive Polymer Actuators and Devices (EAPAD) 2013, Vol. 8687, 86872M (SPIE, 2013). Jiao, Z. et al. Reprogrammable metamaterial processors for soft machines. Adv. Sci. 11, 2305501 (2024). Article CAS Google Scholar Song, S., Joshi, S. & Paik, J. Cmos-inspired complementary fluidic circuits for soft robots. Adv. Sci. 8, 2100924 (2021). Article Google Scholar Decker, C. J. et al. Programmable soft valves for digital and analog control. Proc. Natl. Acad. Sci. USA 119, e2205922119 (2022). Article CAS PubMed PubMed Central Google Scholar Tracz, J. A. et al. Tube-balloon logic for the exploration of fluidic control elements. IEEE Robot. Autom. Lett. 7, 5483–5488 (2022). Article Google Scholar Conrad, S. et al. 3D-printed digital pneumatic logic for the control of soft robotic actuators. Sci. Robot. 9, eadh4060 (2024). Article CAS PubMed Google Scholar Liu, Z., Fang, H., Xu, J. & Wang, K.-W. Discriminative transition sequences of origami metamaterials for mechanologic. Adv. Intell. Syst. 5, 2200146 (2023). Article Google Scholar Mahon, S. T., Buchoux, A., Sayed, M. E., Teng, L. & Stokes, A. A. Soft robots for extreme environments: removing electronic control. In Proc. 2019 2nd IEEE International Conference on Soft Robotics (RoboSoft), 782–787 (IEEE, 2019). Stanley, A. A., Roby, E. S. & Keller, S. J. High-speed fluidic processing circuits for dynamic control of haptic and robotic systems. Sci. Adv. 10, eadl3014 (2024). Article ADS CAS PubMed PubMed Central Google Scholar Bartlett, N. W. et al. A 3D-printed, functionally graded soft robot powered by combustion. Science 349, 161–165 (2015). Article ADS CAS PubMed Google Scholar Garrad, M., Soter, G., Conn, A., Hauser, H. & Rossiter, J. A soft matter computer for soft robots. Sci. Robot. 4, eaaw6060 (2019). Article PubMed Google Scholar Yue, T. et al. Embodying soft robots with octopus-inspired hierarchical suction intelligence. Sci. Robot. 10, eadr4264 (2025). Article PubMed Google Scholar Xu, Y., Zhu, J., Chen, H., Yong, H. & Wu, Z. A soft reconfigurable circulator enabled by magnetic liquid metal droplet for multifunctional control of soft robots. Adv. Sci. 10, 2300935 (2023). Article CAS Google Scholar Yan, W. et al. Origami-based integration of robots that sense, decide, and respond. Nat. Commun. 14, 1553 (2023). Article CAS PubMed PubMed Central Google Scholar Li, Z., Myung, N. V. & Yin, Y. Light-powered soft steam engines for self-adaptive oscillation and biomimetic swimming. Sci. Robot. 6, eabi4523 (2021). Article PubMed Google Scholar Yang, J., Wang, H., Lou, L. & Meng, Z. A review of chitosan-based electrospun nanofibers for food packaging: from fabrication to function and modeling insights. Nanomaterials 15, 1274 (2025). Article CAS PubMed PubMed Central Google Scholar Zhao, T. et al. Modular chiral origami metamaterials. Nature 640, 931–940 (2025). Article ADS CAS PubMed Google Scholar Matia, Y. et al. Harnessing nonuniform pressure distributions in soft robotic actuators. Adv. Intell. Syst. 5, 2200330 (2023). Article Google Scholar Comoretto, A. et al. Embodying mechano-fluidic memory in soft machines to program behaviors upon interactions. Device 3, 100863 (2025). Yasuda, H., Tachi, T., Lee, M. & Yang, J. Origami-based tunable truss structures for non-volatile mechanical memory operation. Nat. Commun. 8, 962 (2017). Article ADS PubMed PubMed Central Google Scholar Xin, L., Li, Y., Wang, B. & Li, Z. Magnetic poles enabled kirigami meta-structure for high-efficiency mechanical memory storage. Adv. Funct. Mater. 34, 2310969 (2024). Article CAS Google Scholar Meng, Z. et al. Encoding and storage of information in mechanical metamaterials. Adv. Sci. 10, 2301581 (2023). Article Google Scholar Yang, H., Qi, H. & Pasini, D. Role of geometric gradients and size effects in multi-shape memory kirigami metamaterials. Struct. Multidiscip. Optim. 68, 266 (2025). Article Google Scholar Chen, T., Pauly, M. & Reis, P. M. A reprogrammable mechanical metamaterial with stable memory. Nature 589, 386–390 (2021). Article ADS CAS PubMed Google Scholar Meng, Z. et al. Bistability-based foldable origami mechanical logic gates. Extreme Mech. Lett. 43, 101180 (2021). Article Google Scholar Mei, T., Meng, Z., Zhao, K. & Chen, C. Q. A mechanical metamaterial with reprogrammable logical functions. Nat. Commun. 12, 7234 (2021). Article ADS CAS PubMed PubMed Central Google Scholar Yang, N. et al. Bistable soft shells for programmable mechanical logic. Adv. Sci. 12, 2412372 (2025). Article Google Scholar Yue, C. et al. A flexibly function-oriented assembly mechanical metamaterial. Adv. Funct. Mater. 34, 2316181 (2024). Article CAS Google Scholar Byun, J., Pal, A., Ko, J. & Sitti, M. Integrated mechanical computing for autonomous soft machines. Nat. Commun. 15, 2933 (2024). Article ADS CAS PubMed PubMed Central Google Scholar Nick, Z. H., Tabor, C. E. & Harne, R. L. Liquid metal microchannels as digital sensors in mechanical metamaterials. Extreme Mech. Lett. 40, 100871 (2020). Article Google Scholar Hyatt, L. P. & Harne, R. L. Programming metastable transition sequences in digital mechanical materials. Extreme Mech. Lett. 59, 101975 (2023). Article Google Scholar El Helou, C., Buskohl, P. R., Tabor, C. E. & Harne, R. L. Digital logic gates in soft, conductive mechanical metamaterials. Nat. Commun. 12, 1633 (2021). Article ADS CAS PubMed PubMed Central Google Scholar Xi, K. et al. A kinematically bifurcated metamaterial for integrated logic operation and computing. Adv. Sci. 12, e09829 (2025). El Helou, C., Hyatt, L. P., Buskohl, P. R. & Harne, R. L. Intelligent electroactive material systems with self-adaptive mechanical memory and sequential logic. Proc. Natl. Acad. Sci. USA 121, e2317340121 (2024). Article CAS PubMed PubMed Central Google Scholar Wang, J. & Li, S. Re-purposing a modular origami manipulator into an adaptive physical computer for machine learning and robotic perception. Adv. Sci. 12, e09389 (2025). Hopkins, J. B., Lee, R. H. & Sainaghi, P. Using binary-stiffness beams within mechanical neural-network metamaterials to learn. Smart Mater. Struct. 32, 035015 (2023). Article ADS Google Scholar Download references The authors acknowledge support from the National Science Foundation (CMMI-2312422, 2328522, EFRI-2422340) and Virginia Tech (via the Startup Fund and a Graduate Student Assistantship). These authors contributed equally: Jun Wang, Ziyang Zhou, Ardalan Kahak. Department of Mechanical Engineering, Virginia Tech, Blacksburg, VA, USA Jun Wang, Ziyang Zhou, Ardalan Kahak & Suyi Li Search author on:PubMed Google Scholar Search author on:PubMed Google Scholar Search author on:PubMed Google Scholar Search author on:PubMed Google Scholar J.W., Z.Z., and A.K. contributed equally to this work and jointly conceived the core ideas, reviewed the relevant literature. They collaboratively developed the figures and prepared Sections 2–5 of the manuscript. S.L. supervised the research, guided the conceptual framework, wrote the introduction and summary, secured funding, and contributed to manuscript editing and final approval. Correspondence to Jun Wang or Suyi Li. The authors declare no competing interests. Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/. Reprints and permissions Wang, J., Zhou, Z., Kahak, A. et al. Embodying physical computing into soft robots. Nat Commun 17, 2455 (2026). https://doi.org/10.1038/s41467-026-70866-6 Download citation Received: 12 October 2025 Accepted: 06 March 2026 Published: 15 March 2026 Version of record: 16 March 2026 DOI: https://doi.org/10.1038/s41467-026-70866-6 Anyone you share the following link with will be able to read this content: Sorry, a shareable link is not currently available for this article. Provided by the Springer Nature SharedIt content-sharing initiative Advertisement Nature Communications (Nat Commun) ISSN 2041-1723 (online) © 2026 Springer Nature Limited Sign up for the Nature Briefing newsletter — what matters in science, free to your inbox daily.
Images (10):
|
|||||
| Narwal prime day : profitez d’offres exceptionnelles sur les robots … | https://lecafedugeek.fr/narwal-prime-da… | 1 | Mar 29, 2026 08:00 | active | |
Narwal prime day : profitez d’offres exceptionnelles sur les robots | LCDGURL: https://lecafedugeek.fr/narwal-prime-day-profitez-doffres-exceptionnelles-sur-les-robots/ Description: Narwal propose des réductions inédites sur ses robots pendant le Prime Day d’Automne. Content:
Le Prime Day d’Automne réserve cette année de belles surprises à ceux qui souhaitent s’équiper en robots de nettoyage dernier cri. Narwal, leader mondial des robots aspirateurs intelligents, propose des offres irrésistibles. Découvrez comment profiter de ces promotions exclusives et des innovations qui facilitent vraiment le quotidien. Pour le Prime Day, Narwal offre jusqu’à 720 € d’économies sur ses modèles phare. Ces remises sont disponibles du 7 au 12 octobre sur Amazon et sur la boutique officielle de la marque. Les clients peuvent choisir parmi plusieurs robots avec différentes fonctionnalités avancées. En prime, les packs exclusifs sur le site incluent deux ans d’accessoires supplémentaires. Il s’agit d’une occasion unique d’acquérir un robot intelligent à un prix imbattable. La gamme Narwal brille par sa technologie et sa facilité d’utilisation. Parmi les modèles en promotion : Tous les modèles intègrent des technologies pratiques comme le lavage automatique, la stérilisation des serpillières ou l’évitement intelligent des obstacles. Ainsi, ils répondent aux besoins des familles modernes et actives. En plus des offres, Narwal propose un jeu concours exclusif pour le Prime Day. Il permet de gagner des robots Freo Z10, des coffrets cadeaux ou d’autres lots étonnants. Chaque participant repart avec un cadeau, ce qui renforce l’expérience client. Pour jouer, il suffit de se rendre sur le site officiel Narwal et de suivre les instructions pour tenter sa chance. Grâce au Prime Day Narwal, il n’a jamais été aussi facile d’équiper son foyer avec des robots intelligents et de profiter d’une maison toujours propre. Entre réductions importantes, accessoires offerts et jeu concours, Narwal s’impose comme la référence de l’entretien du sol pour un quotidien plus simple et agréable. Saisissez ces offres pour découvrir les avantages des robots aspirateurs nouvelle génération ! Lisez notre dernier article tech : Test – AAWireless Two : l’adaptateur Android Auto sans fil qui libère vos trajets Votre adresse e-mail ne sera pas publiée. Les champs obligatoires sont indiqués avec * Commentaire * Nom * E-mail * Δ
Images (1):
|
|||||
| Universal Robots and Scale AI Launch Imitation Learning System to … | https://www.manilatimes.net/2026/03/19/… | 0 | Mar 28, 2026 16:00 | active | |
Universal Robots and Scale AI Launch Imitation Learning System to Accelerate AI Model Training, Bridging the 'Lab-to-Factory' GapDescription: **media[979578]**SAN JOSÉ, Calif., March 19, 2026 /PRNewswire/ -- Universal Robots (UR) this week unveiled the UR AI Trainer at GTC 2026. Developed in collabo... Content: |
|||||
| Universal Robots and Scale AI Launch Imitation Learning System to … | https://moneycompass.com.my/universal-r… | 1 | Mar 28, 2026 16:00 | active | |
Universal Robots and Scale AI Launch Imitation Learning System to Accelerate AI Model Training, Bridging the 'Lab-to-Factory' Gap - Money CompassDescription: Money Compass is one of the credible Chinese and English financial media in Malaysia with strong influence in Malaysia’s financial industry. As the winner of the SME Award in Malaysia for 5 consecutive years, we persistently propel the financial industry towards a mutually beneficial framework. Since 2004, with the dedication to advocating the public to practice financial planning in everyday life, Money Compass has accumulated a vast connection in ASEAN financial industries and garnered government agencies and corporate resources. At present, Money Compass is adjusting its pace to transform into Money Compass 2.0. Consolidating the existing connections and network, Money Compass Integrated Media Platform is founded, which is well grounded in Malaysia whilst serving the ASEAN region. The mission of the new Money Compass Integrated Media Platform is to become the financial freedom gateway to assist internet users enhance financial intelligence, create wealth opportunities and achieve financial freedom for everyone! Content:
SAN JOSÉ, Calif., March 19, 2026 /PRNewswire/ — Universal Robots (UR) this week unveiled the UR AI Trainer at GTC 2026. Developed in collaboration with Scale AI, the AI Trainer marks a shift as robots move from pre-programmed applications to fully AI-driven tasks. “Our customers, ranging from large enterprises to AI research labs, are no longer just asking for AI features,” said Anders Beck, VP of AI Robotics Products at Universal Robots. “They need a way to collect high-fidelity, synchronized robot and vision data to train AI models on the same robots they intend to deploy. Our AI Trainer is the industry’s first direct lab-to-factory solution for AI model training.” Enabling AI-ready data capture AI robotics training is often hindered by fragmented hardware and low-fidelity data capture. Today’s training data is collected on research robots not suited for production environments, and many systems rely only on visual feedback, making delicate or contact-rich tasks difficult. “The AI Trainer directly addresses these barriers,” said Beck. “By utilizing our unique Direct Torque Control and force feedback features, we give developers direct influence over how the robot physically interacts with the world, training on the same robust hardware used in over 100,000 industrial deployments.” Scale AI partnership enables a flywheel of integrated robotics data The UR AI Trainer lets human operators guide UR robots through tasks in a leader-follower setup, capturing high-quality synchronized multimodal data during real-time demonstrations, creating the structured datasets needed to train Vision-Language-Action (VLA). Running on UR’s AI Accelerator platform, the AI Trainer combines collaborative industrial robots with Scale AI software to enable scalable data capture in production environments, supporting continuous optimization of physical AI systems. “Universal Robots is a leader in industrial robotics, and its global footprint offers the ideal foundation for data capture and AI deployment,” said Ben Levin, General Manager, Physical AI at Scale AI. “Together, we’ve created an integrated robotics data flywheel, allowing customers to train, deploy, and improve their AI models faster than ever before.” UR and Scale AI will release a large-scale industrial dataset collected on UR robots later this year. Experience AI Trainer at GTC Visitors to UR’s GTC booth can guide two UR3e ‘leader’ robots providing haptic input to control two UR7e ‘follower’ robots. The setup enables visitors to perform advanced smartphone packaging with haptic feedback for imitation learning and VLA training, with demonstration data recorded in real time on Scale’s stack and replayable directly on the AI Trainer. The process of capturing robot training data for AI models is complemented by an embodied foundation model demo with Generalist AI and a haptics-based training demo with Haply Robotics. Read more on the UR website. See image collection here. About Universal Robots is a global leader in collaborative robotics (cobots), used across a wide range of industries. With over 100,000 cobots sold worldwide, our user-friendly platform is supported by intuitive PolyScope software, award-winning training, comprehensive services, and the world’s largest cobot ecosystem, delivering innovation and choice to our customers. Universal Robots is part of Teradyne Robotics, a division of Teradyne (NASDAQ: TER), a leading supplier of automatic test equipment and advanced robotics technology. Scale AI‘s mission is to develop reliable AI systems for the world’s most important decisions. We provide high-quality data that powers the world’s AI models, and we help enterprises and governments build, deploy, and oversee AI applications that create real impact. Through our research and Safety, Evaluations, and Alignment Lab (SEAL), we test models with rigorous benchmarks and novel research to help ensure AI is developed in ways people can trust. Founded in 2016, Scale is headquartered in San Francisco. View original content:https://www.prnewswire.com/apac/news-releases/universal-robots-and-scale-ai-launch-imitation-learning-system-to-accelerate-ai-model-training-bridging-the-lab-to-factory-gap-302717348.html SOURCE Universal Robots Your email address will not be published. Required fields are marked * Comment * Name * Email * Website Save my name, email, and website in this browser for the next time I comment. Copyright © 2024 Money Compass Media (M) Sdn Bhd. All Rights Reserved Login to your account below Remember Me Please enter your username or email address to reset your password. Copyright © 2024 Money Compass Media (M) Sdn Bhd. All Rights Reserved
Images (1):
|
|||||
| The multimodal leap: Engineering human-like intelligence into humanoid systems | https://timesofindia.indiatimes.com/blo… | 1 | Mar 28, 2026 16:00 | active | |
The multimodal leap: Engineering human-like intelligence into humanoid systemsDescription: Humanoid robots look convincing on stage or curated social media forwards. They walk, pick up objects, and in some demonstrations, they even smile and converse. This creates the expectation that machines will soon behave like... Content:
We encourage you to review our Terms of Service, and Privacy Policy. By continuing, you agree to the Terms listed here. In case you want to opt out, please click "Do Not Sell or Share My Personal Information" link in the footer of this page. We won't sell or share your personal information to inform the ads you see. You may still see interest-based ads if your information is sold or shared by other companies or was sold or shared previously. Interested in blogging for timesofindia.com? We will be happy to have you on board as a blogger, if you have the knack for writing. Just drop in a mail at toiblogs@timesinternet.in with a brief bio and we will get in touch with you. Somjit Amrit, stumbled upon the world of blogging. Reading is something he enjoys, a sort of second nature to him. The constructive corollary of reading is writing. A decade ago, he started writing reviews of the books, read. Subsequently, he started blogging on a variety of subjects: lessons learned from Mother Nature, first-hand experiences of relevance to help others, and technology for business are some of the areas of his interest. With over 30 years of rich professional experience, having led global business units spanning 4 continents, in the IT Services industry, he is the CEO of IIT Mandi iHUB and HCI Foundation which is one of the 25 Technology Innovation Hubs in the country sponsored by the Government of India. He is an engineer by qualification with a management degree from IIM Lucknow. He enjoys setting the “rake-through-the-hair” questions for quiz competitions. He is attempting to perfect a recently discovered natural talent in musical whistling as a stress buster. Can he pick up the harmonica this year as the grand resolution for the year? He will surely update you. LESS ... MORE Humanoid robots look convincing on stage or curated social media forwards. They walk, pick up objects, and in some demonstrations, they even smile and converse. This creates the expectation that machines will soon behave like humans. In practice, however, most humanoid platforms excel at isolated capabilities but struggle in continuous, unscripted social and physical interaction. They may drop objects, misinterpret gestures, mis-time responses, or pause when faced with noisy sensory input. These limitations reveal a deeper truth: building a humanoid robot is not about perfecting any single component. It is about closing tightly coupled loops between perception, reasoning, and action across multiple modalities. Multimodality is the structural solution to this problem. Human interaction is a tightly coupled stream of audio, visual, tactile and contextual signals that arrive and must be interpreted together in real time. For rigid robots to behave with human-like fluidity, their software stacks cannot treat these channels as separate pipelines that exchange occasional messages. Instead, they must build shared internal representations that are synchronized in time, fused across sensing modalities, and available both to perception modules that infer intent and to control modules that plan and execute motion. When a person points while saying, “Put it there,” the robot should align the gesture, the pointing vector, the spoken phrase, the gaze and the scene geometry in a single moment of understanding, and then generate a motor plan that respects force constraints, spatial and temporal balance, and the social context of the interaction. The Missing Link: Synchronization and Real-Time Fusion While multimodality provides the structural foundation, the real challenge lies in synchronizing and fusing these multiple sensory streams. Humanoid robots cannot achieve human-level fluency by processing visual, auditory, tactile, and contextual information independently. Each modality informs and constrains the others, and seamless integration in real time is essential for coherent decision-making. Key capabilities enabled by multimodal AI include: Synthesize Context: A robot interacting with a human, needs to combine facial expression data, speech audio, and environmental context to determine whether the person is frustrated, joking, or requesting urgent help. Adaptive Interaction: By fusing tactile feedback (object weight, texture) with visual input (object shape, location), a robot can dynamically adjust its grip or trajectory without pre-programming every possible scenario. Predictive Coordination: Multimodal fusion allows anticipatory action. For example, combining gaze tracking with speech patterns can enable the robot to act on intentions before they are explicitly verbalised. Developing these capabilities requires end-to-end multimodal neural networks that mirror human cognitive processes. Latent representations must encode cross-modal dependencies and be updated continuously to allow smooth, safe, and intelligent interaction. Without this real-time integration, humanoid robots would continue to operate with limited agility (or pronounced rigidity) and constrained social responsiveness (or visible gawkiness), regardless of how advanced their individual sensors or algorithms may be. Moving Forward: Towards Agile, Context-Aware Human-like Robots The future of AI is not merely automation; it is augmentation and interaction. To build more agile and context-aware humanoid robots, research efforts should focus on: Robust Data Fusion Techniques: Developing algorithms that fuse asynchronous, multi-sensory data into unified latent representations, rather than merely combining outputs from separate modules. Contextual Understanding Engines: Creating AI that can interpret intent, social nuance, and environmental context, enabling reliable operation in unpredictable, real-world environments. Ethical and Responsible AI: Ensuring that multimodal systems respect privacy, avoid bias, and interact safely, particularly as they begin to operate in sensitive human contexts. The current limitations of humanoid robots are not failures; they are building blocks. By investing in multimodal AI research and the Technology Innovation Hub at IIT Mandi, we are laying the foundation for fluidic humanoid robots (human-like robots) that redefine our relationship with machines. The ultimate goal is a future where the line between physical and digital, human and AI, becomes seamless. Robots will not merely act; they will perceive, reason, and interact in ways that are coherent, context-aware, and profoundly human-like while abiding with the overarching aspect of responsible and ethical AI. Views expressed above are the author's own. High desibels The Vance Dance OMG! Don’t bet on bans The 65 Lakh Question ‘Centrism isn’t nostalgia, it is survival’ So, who’s straying? Beyond the noise Rethinking stray dogs: From crisis to opportunity Swadeshi Diwali Down the drain Interested in blogging for timesofindia.com? We will be happy to have you on board as a blogger, if you have the knack for writing. Just drop in a mail at toiblogs@timesinternet.in with a brief bio and we will get in touch with you. TOI Edit Page,Voices Erratica,TOI Edit Page,Tracking Indian Communities Juggle-Bandhi,TOI Edit Page TOI Edit Page
Images (1):
|
|||||
| Universal Robots and Scale AI launch the UR AI Trainer | https://thenextweb.com/news/universal-r… | 1 | Mar 28, 2026 16:00 | active | |
Universal Robots and Scale AI launch the UR AI TrainerURL: https://thenextweb.com/news/universal-robots-and-scale-ai-launch-the-ur-ai-trainer Description: Universal Robots and Scale AI launch the UR AI Trainer at GTC 2026, a leader-follower system that captures force and visual data to train AI models. Content:
Revealed at GTC 2026, the leader-follower imitation learning platform captures force, motion, and visual data directly on production hardware, closing the gap between AI research labs and factory floors. Universal Robots has launched the UR AI Trainer, a hardware-software system built in collaboration with Scale AI that allows operators to generate high-fidelity robot training data directly on the same cobots they deploy in production. Announced at NVIDIA’s GTC 2026 conference in San Jose on 16 March, the system is designed to close what the robotics industry calls the lab-to-factory gap: the practical difficulty of moving AI models trained in controlled research settings into real-world manufacturing environments. The core mechanism is a leader-follower setup. A human operator physically guides a leader robot through a task, say, packaging a smartphone, while a follower robot mirrors the motion in real time. Throughout each demonstration, the system simultaneously captures motion trajectories, force feedback data, and visual information, producing the structured multimodal datasets needed to train Vision-Language-Action models. The key differentiator is that this happens on the same industrial cobots UR sells into production: training data collected on a UR3e or UR7e in a controlled AI training cell can be used to train models that then run on identical hardware in a factory. TNW City Coworking space - Where your best work happens A workspace designed for growth, collaboration, and endless networking opportunities in the heart of tech. “Our customers, ranging from large enterprises to AI research labs, are no longer just asking for AI features. They need a way to collect high-fidelity, synchronized robot and vision data to train AI models on the same robots they intend to deploy. Our AI Trainer is the industry’s first direct lab-to-factory solution for AI model training.” – Anders Beck, VP of AI Robotics Products, Universal Robots Most robot training data today is collected on research platforms using vision alone. That approach works for tasks where position is sufficient, but fails for anything involving delicate contact, screwing, pressing, inserting, or any manipulation where the robot needs to respond to resistance. Universal Robots argues that its Direct Torque Control and force feedback capabilities give the AI Trainer a physical fidelity advantage: the robot can not only learn what to do visually but also how it should feel to do it correctly. This matters particularly for the category of tasks the robotics research community describes as contact-rich manipulation, assembly operations where parts must fit together with precision and the robot must adjust its grip in response to what it encounters. Those tasks have historically been among the hardest to automate reliably, and they represent a significant share of the manufacturing operations that remain human-dependent. The UR AI Trainer deploys on UR’s AI Accelerator platform and integrates Scale AI’s software stack to capture, structure, and manage the training data generated during demonstrations. The collaboration is explicitly framed as a flywheel: operators collect demonstration data, models are trained on that data, deployed robots improve performance, and the improved performance feeds back into the next round of training. “Universal Robots is a leader in industrial robotics, and its global footprint offers the ideal foundation for data capture and AI deployment. Together, we’ve created an integrated robotics data flywheel, allowing customers to train, deploy, and improve their AI models faster than ever before.” – Ben Levin, General Manager, Physical AI, Scale AI As part of the collaboration, Universal Robots and Scale AI will release a large-scale industrial dataset collected on UR robots later in 2026. The GTC demo captures this pipeline in miniature: visitors at UR’s booth can guide two UR3e leader robots through a smartphone packaging task, with the demonstration data recorded in real time on Scale’s stack and immediately replayable on the AI Trainer. A parallel virtual demo, built in NVIDIA Omniverse using Isaac Sim, shows the same task being trained synthetically using two Haply Inverse3 haptic devices, demonstrating the simulation-to-real pathway alongside the physical data collection. Accompanying the AI Trainer launch is the first public demonstration of Generalist AI’s embodied foundation models. Generalist was founded by Pete Florence, a former Senior Research Scientist at Google DeepMind whose prior work includes co-authorship on RT-2 (Robotic Transformer 2) and PaLM-E, alongside Andy Zeng and Andy Barry, both former colleagues at DeepMind and MIT. The startup, which counts NVIDIA’s venture arm NVentures among its investors, emerged from stealth at GTC 2025 and has since been developing what it describes as embodied foundation models for general-purpose robot dexterity. At GTC 2026, two UR7e robots running Generalist’s model autonomously execute the same smartphone packaging task that the AI Trainer demos use for human-guided data collection. The demonstration is designed to show the end state that the training pipeline is building towards: robots that can complete contact-rich manipulation tasks reliably and without pre-programmed trajectories. “Generalist is building embodied foundation models that deliver industry-leading dexterity and reliability. This demonstration on Universal Robots’ trusted industrial platform shows how physical commonsense can be translated into real-world capability, paving the way for deployment across industries at scale.” – Pete Florence, co-founder and CEO, Generalist AI Universal Robots frames the industrial scale of its installed base, over 100,000 cobots deployed worldwide, as a structural advantage in the race to build physical AI. The argument is that the quality of an AI model depends heavily on the quality and quantity of the training data, and that UR’s fleet of production robots represents the largest potential source of real-world manipulation data in the industry. The AI Trainer is the mechanism for unlocking that data. NVIDIA’s physical AI ecosystem surrounds the launch: the company is also exploring use of the NVIDIA Physical AI Data Factory Blueprint to automate synthetic data generation, complementing the physical demonstration data. “The shift toward Physical AI requires a fundamental move from rigid, pre-programmed automation to generalist robots that can perceive, reason, and learn through human-like interaction. By leveraging the NVIDIA Isaac simulation frameworks, Universal Robots is building a scalable engine for high-fidelity data capture and generation, providing the essential infrastructure to train the next generation of autonomous systems at scale.” – Amit Goel, Head of Robotics and Edge AI Ecosystem, NVIDIA Universal Robots is a subsidiary of Teradyne Robotics, itself a division of Teradyne (NASDAQ: TER). The GTC 2026 announcement comes at a moment when physical AI, the application of AI techniques to real-world robotic manipulation, has attracted significant attention and investment, driven partly by the success of large language models and the argument that similar scaling approaches can work for robot learning given sufficient high-quality data. I am the Editor in Chief for TNW, covering technology not as a parade of launches and valuations, but as a system of influence, persuasion, (show all) I am the Editor in Chief for TNW, covering technology not as a parade of launches and valuations, but as a system of influence, persuasion, and change. I write about startups, venture capital, digital policy, and Europe ecosystem, with an eye on the larger story beneath them: who gets to build the future, who profits from it, and how Europe is learning to speak in a louder voice of its own. Before moving into senior editorial leadership, I've built my career for over +10 years across journalism, storytelling, content strategy, SEO, and digital publishing, with experience in SaaS, hospitality, art, and culture. Get the most important tech news in your inbox each week. The heart of tech A Tekpon Company Copyright © 2006—2026, Cogneve, INC. Made with <3 in Amsterdam.
Images (1):
|
|||||
| Google DeepMind teams up with Agile Robots for AI robotics … | https://seekingalpha.com/news/4568275-g… | 0 | Mar 28, 2026 16:00 | active | |
Google DeepMind teams up with Agile Robots for AI robotics push (GOOG:NASDAQ)Description: Google DeepMind partners with Agile Robots to advance AI-driven robotics using Gemini Robotics models, boosting deployment and training. Content: |
|||||
| Unitree targets IPO after revenue quadruples on robotics demand | https://kr-asia.com/unitree-targets-ipo… | 1 | Mar 28, 2026 00:00 | active | |
Unitree targets IPO after revenue quadruples on robotics demandURL: https://kr-asia.com/unitree-targets-ipo-after-revenue-quadruples-on-robotics-demand Description: The Chinese startup has filed for a listing on Shanghai’s STAR Market. Content:
Written by T. K. Lin Published on 24 Mar 2026 2 mins read Unitree Robotics has filed for an IPO on the Shanghai Stock Exchange, seeking to raise up to RMB 4.2 billion (USD 609 million). According to the company’s prospectus, about half of the proceeds will be allocated to R&D in robotics and embodied intelligence. The remaining funds are set to support product development and manufacturing facilities following the IPO. The review process is expected to begin soon, with Unitree aiming to list on the STAR Market, which typically caters to technology and tech-focused companies. The planned IPO follows strong sales growth. According to the prospectus, Unitree’s sales reached RMB 1.7 billion (USD 246.5 million) in 2025, about 4.3 times higher than the previous year, while net profit tripled to about RMB 287.6 million (USD 41.7 million). Notably, revenue from humanoid robots has risen sharply in recent years, alongside broad-based growth across its product categories, based on data disclosed in the filing: frameborder="0" scrolling="no" sandbox="allow-same-origin allow-forms allow-scripts allow-downloads allow-popups allow-popups-to-escape-sandbox allow-top-navigation-by-user-activation"> Even before reaching these financial milestones, Unitree had long been viewed as one of China’s leaders in robotics since its founding in 2016. That reputation stems not only from the company’s high-profile demonstrations and public showcases, but also from its founder, Wang Xingxing. Wang has been vocal about the need for embodied intelligence to advance through both hardware and software, rather than placing too much emphasis on hardware alone. In previous conversations with 36Kr, he said Unitree has a sizable team focused on artificial intelligence model development, including models that power its robots. According to Wang, the company’s efforts extend beyond motion control, where it is already considered strong, into more advanced capabilities such as cognition and planning. Among Unitree’s latest offerings is the H2, a full-sized humanoid robot, though the G1 is likely more familiar to the public. The G1 has been used in public showcases, including a martial arts segment during this year’s Lunar New Year gala in China, where 24 G1 robots were deployed to “spar” with human performers and execute movements such as table vaults and backflips. Unitree also participated in last year’s broadcast. height="278" frameborder="0" allowfullscreen="allowfullscreen" data-ytbridge="vidSurrogate2"> Beyond humanoid robots, Unitree also develops other robotic form factors, including quadruped machines such as the Go2 robot dog. In terms of commercialization, Unitree is among the frontrunners, projecting shipments of 10,000–20,000 units in 2026. Its domestic competitors include UBTech Robotics, which develops the Walker line of humanoid robots, as well as broader technology players such as Xpeng, which is developing the Iron humanoid robot. Globally, Unitree is often compared with Tesla, which is developing the Optimus robot. Note: RMB figures are converted to USD at rates of RMB 6.90 = USD 1 based on estimates as of March 24, 2026, unless otherwise stated. USD conversions are presented for ease of reference and may not fully match prevailing exchange rates. Loading... Subscribe to our newsletters KrASIA A digital media company reporting on China's tech and business pulse.
Images (1):
|
|||||
| Beyond kung fu: Unitree launching humanoid robot for household chores … | https://www.notebookcheck.net/Beyond-ku… | 1 | Mar 28, 2026 00:00 | active | |
Beyond kung fu: Unitree launching humanoid robot for household chores and elder care - NotebookCheck.net NewsDescription: The humanoid robot company that taught them to do backflips, march en masse, and do kung fu choreography might soon be having them do laundry. Unitree aims for more practical applications with a low-cost humanoid for household purposes. Content:
While Elon Musk is of the opinion that the Tesla Optimus robot will be better than its Chinese competition when it actually launches, companies like Unitree have already shipped thousands of humanoid robots and are moving from the flashy demo stage to developing household helpers. Unitree sold more than 5,500 humanoid robots in 2025, surpassing the combined output of all U.S. competitors, including Tesla, Figure AI, and Agility Robotics, and is aiming to ship up to 20,000 units in 2026. The company is clearly not waiting for the technology to be perfect before scaling. Better known for viral clips of a marching robot army or the majestic martial arts performance of its WuBots stealing the Spring Festival Gala spotlight, it is now planning a more domestically applicable future for its machines. According to Unitree's Shanghai Stock Exchange IPO filing with a proposed fundraising target of more than $600 million, the company will be launching a cheaper "general-purpose humanoid robot embodied foundation model" by 2030. The model is described as covering four core generalization pillars: scene, instruction, action, and task. It is designed to close the loop between cloud-based model training, edge-side inference, and real-world data collection. This is the kind of autonomous system that currently powers self-driving EVs but is repurposed for humanoid robot decision-making and execution. While the near-term focus for humanoid robots like Optimus or Hyundai's Atlas has been industrial and manufacturing environments, where conditions are controlled enough for today's models to operate reliably, Unitree aims to start selling a general-purpose humanoid within the next three years. As generalization, reliability, and safety mature, says Unitree, the application domain will expand from vertical industrial scenarios into household services, elder care, and daily living, like doing laundry. That ambition is not purely theoretical, as Unitree's R1 robot is already capable of voice- and vision-based multimodal interaction for simple household tasks, and its open-sourced UnifoLM-VLA-0 model allows the G1 humanoid to autonomously handle 12 different categories of complex manipulation using a single policy. It can unpack a tennis racket on its own, for instance, not just memorize preset kung fu choreography. The race now is to accumulate real-world interaction data, as industry experts argue that when a certain threshold is reached, the general intelligence will rise significantly, marking the point when humanoid robots will truly be ready to move from the stage to the home at a much lower cost than today's units. Get the Unitree Go2 robot dog quadruped on Amazon IT Home
Images (1):
|
|||||
| Reachy Mini Hits AliExpress, Taking Embodied AI Global - Money … | https://moneycompass.com.my/reachy-mini… | 1 | Mar 25, 2026 08:00 | active | |
Reachy Mini Hits AliExpress, Taking Embodied AI Global - Money CompassURL: https://moneycompass.com.my/reachy-mini-hits-aliexpress-taking-embodied-ai-global/ Description: Money Compass is one of the credible Chinese and English financial media in Malaysia with strong influence in Malaysia’s financial industry. As the winner of the SME Award in Malaysia for 5 consecutive years, we persistently propel the financial industry towards a mutually beneficial framework. Since 2004, with the dedication to advocating the public to practice financial planning in everyday life, Money Compass has accumulated a vast connection in ASEAN financial industries and garnered government agencies and corporate resources. At present, Money Compass is adjusting its pace to transform into Money Compass 2.0. Consolidating the existing connections and network, Money Compass Integrated Media Platform is founded, which is well grounded in Malaysia whilst serving the ASEAN region. The mission of the new Money Compass Integrated Media Platform is to become the financial freedom gateway to assist internet users enhance financial intelligence, create wealth opportunities and achieve financial freedom for everyone! Content:
Developer-favorite desktop robot now available to more consumers worldwide through AliExpress Anniversary Sale LOS ANGELES, March 17, 2026 /PRNewswire/ — Reachy Mini, the open-source desktop robot that has captivated the developer community, is now available to global consumers on AliExpress, coinciding with the platform’s Anniversary Sale promotion. This extension marks an important step in expanding Reachy Mini beyond the open-source developer community to a broader consumer audience across key markets including the United States, Europe, South Korea, Japan and Brazil. Developed within the Hugging Face ecosystem, a leading open-source AI platform, Reachy Mini is a flagship embodied AI project designed for human-AI interaction, creative coding, and hands-on experimentation. Reachy Mini has earned strong recognition among AI researchers and developers worldwide. Now, through AliExpress, the robot is becoming more broadly accessible to consumers interested in exploring the next wave of AI-powered robotics. The addition of Reachy Mini reflects AliExpress’ broader push to attract innovative technology products and brands. Seeed Studio, the hardware manufacturing partner behind Reachy Mini, selected AliExpress as its consumer market launchpad. “AliExpress has made a strong commitment to supporting premium hard tech products, which aligns perfectly with Seeed Studio’s mission,” said Joey Jiang, VP of Global Sales and Marketing at Seeed Studio. “As an open-source hardware provider focused on emerging technologies, we have built a technology ecosystem based on open-source hardware, AI-driven capabilities, and cross-domain co-creation. The Anniversary Sale gives us a rare opportunity to tap into global demand and introduce Reachy Mini to a wider audience.” The timing comes amid growing visibility for Reachy Mini. At CES 2026, NVIDIA CEO Jensen Huang featured Reachy Mini in his keynote address, demonstrating the desktop robot to a global audience and signaling that the era of embodied AI has arrived. The spotlight from one of the world’s most-watched tech stages has helped drive surging interest and pre-order demand for the compact robot. From March 16 through March 25, Reachy Mini will be available in stock – no pre-orders required. Seeed Studio’s official AliExpress store will release limited units daily throughout the promotional period, giving consumers worldwide a chance to own one of the most talked-about robots in the embodied AI space. Reachy Mini Robot Reachy Mini is among a growing roster of premium tech brands joining AliExpress Brand+, the platform’s dedicated channel for high-end global technology brands. In 2025, Unitree Robotics opened its official flagship store on AliExpress, while Rokid AR glasses and other emerging global tech brands have used the platform to reach international consumers directly. As more frontier technology brands seek global scale, AliExpress is emerging as the platform of choice for hard tech brands looking to grow beyond their home markets. “Reachy Mini joining AliExpress during our Anniversary Sale is a strong signal of where consumer technology is headed,” said Chris Carl, Head of Marketing, AliExpress U.S. “We aim to bring the world’s most innovative technology products to consumers everywhere.” Reachy Mini is available HERE through Seeed Studio’s official AliExpress store. About AliExpress Launched in 2010, AliExpress is a global e-commerce platform dedicated to creating a better shopping experience for hundreds of millions of consumers in more than 200 countries and regions. In addition to the English version, the AliExpress platform is available in 15 other languages. AliExpress is part of Alibaba International Digital Commerce Group. Your email address will not be published. Required fields are marked * Comment * Name * Email * Website Save my name, email, and website in this browser for the next time I comment. Copyright © 2024 Money Compass Media (M) Sdn Bhd. All Rights Reserved Login to your account below Remember Me Please enter your username or email address to reset your password. Copyright © 2024 Money Compass Media (M) Sdn Bhd. All Rights Reserved
Images (1):
|
|||||
| Embodied AI turned into growth driver | http://www.ecns.cn/news/sci-tech/2026-0… | 1 | Mar 25, 2026 08:00 | active | |
Embodied AI turned into growth driverURL: http://www.ecns.cn/news/sci-tech/2026-03-21/detail-ihfaunkv7716550.shtml Content:
Xi stressed importance of future industries for boosting new quality productive forces Inside a humming factory in Hefei, Anhui province, a 1.66-meter-tall humanoid robot dubbed Lingshu steadies its grip, pivots and places a wafer-thin semiconductor onto a moving line, an extremely high precision task where an aberration of just a fraction of a millimeter could spell failure. The scene, once confined to research labs, is now moving rapidly onto Chinese factory floors, offering a glimpse into China's efforts to develop future industries using embodied artificial intelligence as a strategic pillar of economic growth. During this year's first group study session of the Political Bureau of the Communist Party of China Central Committee in January, President Xi Jinping emphasized that cultivating future industries is of great significance for developing new quality productive forces, building a modernized industrial system, improving people's quality of life, and promoting people's well-rounded development and all-around social progress. In February, Xi, who is also general secretary of the CPC Central Committee, visited Beijing's E-Town â a hotbed for emerging and future industries, where he took a look at a national information technology innovation park, once again turning his attention to representative scitech innovations. China's national policy signals are reinforcing the momentum. The 2026 Government Work Report pledged to establish mechanisms to boost investment in future industries, including embodied AI, future energy, quantum technology, brain-computer interface and 6G technology. It marked a renewed effort to develop embodied AI after the concept was first elevated as a national priority in 2025. Embodied AI refers to the integration of AI into physical systems, enabling them to interact with the physical world. Humanoid robots represent the most advanced form of embodied AI at the current stage. Yao Qizhi, a Turing Award winner and an academician at the Chinese Academy of Sciences, said, "Over the past five years, China has made rapid progress in embodied AI, especially humanoid robots, reaching the top international tier and even taking a leading position in some areas." Yao added that embodied AI represents a convergence of computing power, algorithms, hardware and real-world data. "For China, it is not a single technological breakthrough, but a systematic project," he said. Zhang Zhaohui, founder and CEO of Youibot, the developer of Lingshu, said that unlike traditional industrial robots working on programmed tasks, humanoid robots allow a single AI system to control different types of robotic bodies, which is more efficient. "Lingshu, for instance, has already been deployed in electronics factories and logistics centers in cities including Hefei, Suzhou in Jiangsu province and Chongqing," he said, adding that one such robot can work as efficiently as eight to 12 human workers per shift while operating 24 hours a day. Zhang said that the company is also testing tea picking using humanoid robots. As tea leaves are extremely fragile and grow in irregular patterns, robots would be required to identify and pick leaves with extremely high precision. Tangible results The Development Research Center of the State Council forecasts that the domestic embodied AI market could reach 400 billion yuan ($55 billion) by 2030 and surpass 1 trillion yuan by 2035, driving productivity gains across logistics, manufacturing and services. China's manufacturing scenarios, in particular, give embodied AI an edge. Early adopters are already reporting tangible results. At carmaker Nio's smart manufacturing plant, embodied AI technology is being used to navigate automated storage systems, retrieve parts and assemble vehicle bodies. Nio said the technology has boosted production efficiency by more than 30 percent, cut labor costs by 25 percent and reduced defect rates by 40 percent. Meanwhile, robot company UBTech's Walker S2 humanoid robots are being deployed in factories across South China. The company said it has secured orders worth over 100 million yuan and plans to deliver more than 1,000 units in 2026. Data from International Data Corporation show that global shipments of embodied AI industrial robots reached 18,000 units in 2025 and were expected to exceed 50,000 in 2026, with China accounting for more than 45 percent of the market. Public attention on embodied AI has also surged. During this year's Spring Festival Gala, humanoid robots performed complex stunts including flips, martial arts and synchronized group movements â a sharp leap from the simpler demonstrations seen just a year earlier. Li Lecheng, minister of Industry and Information Technology, said: "Such performances showcase more than entertainment. They reflect China's advances in translating AI into real-world applications, a vivid display of the country's growing innovation capacity." Hurdles, however, remain. Lin Yonghua, chief engineer at the Beijing Academy of Artificial Intelligence, said, "More efforts are needed to achieve stable, high-quality control of humanoid robots, improve dexterous manipulation capabilities and overcome constraints in power supply and heat management of the robots." Lin added that global competition is also intensifying with the United States, Japan and Germany ramping up investment in embodied AI. "For China, such efforts are also about securing industrial resilience and supply chains," she said. He Xiaopeng, CEO of electric vehicle maker Xpeng, called for greater national-level R&D funds and standardized frameworks â similar to autonomous driving classifications â to accelerate the commercialization of embodied AI. Chinese AI startup tops global embodied intelligence benchmark World's largest embodied AI data factory opens in Tianjin
Images (1):
|
|||||
| realistic humanoid robots by aheadform mimic living people's skin | https://www.designboom.com/technology/r… | 1 | Mar 24, 2026 16:00 | active | |
realistic humanoid robots by aheadform mimic living people's skinDescription: aheadform develops realistic humanoid robots that mimic living people with their lifelike skin and moving faces and mouths. Content:
AheadForm develops realistic humanoid robots that mimic living people with their lifelike skin and moving faces and mouths. Powered by AI and mechanics, the company designs these human-like machines to replicate our emotions and behavior and to have the ability to learn from what they see and where they’re at using algorithms and degrees of freedom in their movement. AheadForm’s realistic humanoid robots come with custom-designed brushless micro motors installed in their facial areas; that’s why they can move their eyebrows, lips, and eyes to match their speeches and emotions. It produces very little noise, so it’s barely audible, and it fits in the machine’s heads. The company’s engineers developed their own control software that synchronizes the motor’s response with the robot’s AI, so each facial movement matches their spoken words or facial expressions. The robot’s head design includes moving eyes, eyelids, and a mouth that syncs with voice output, and the structure under the skin features mechanical parts connected to micro motors that pull or release at different angles to create lifelike expressions. These systems help AheadForm’s realistic humanoid robots understand human gestures, facial expressions, and tone. The AI system integrates language and visual models so that the machines can look at a person, recognize their emotional state from their facial expression, and respond with matching tone and language. It allows real-time learning, meaning the robots improve their replies as they interact more with the living people. all images courtesy of AheadForm AheadForm has two series for its realistic humanoid robots. In the company’s ELF series, the machines use up to 30 degrees of freedom, meaning that each joint or facial feature can move independently, allowing for a wide range of actions and expressions. There’s a control system that handles the movement of each motor, while the AI system helps the devices learn from their surroundings and adjust their behavior over time. They’re aware of their environment, like living people using sensors, processing what they see and hear and then responding in a way that fits the situation. The ELF V1 model in the series already shows this, and the company says that it can talk to people, understand what they’re saying and what they want it to do, and perform tasks. Then there’s the LAN series, a group of AheadForm’s realistic humanoid robots designed for movements like head turns, hand motions, and walking balance. They’re covered in a synthetic skin material that mimics the softness and texture of human skin, and this material can resist moisture and temperature changes even after repeated movements. The company says that it plans to use the Lan Series in roles that need more mobility and handling ability, such as guiding people, performing simple manual tasks, or demonstrating products. The robots are built using a combination of lightweight metal alloys, synthetic polymers, and silicone-based materials that are waterproof and flexible. They can stretch and return to their original shape and are attached over a mechanical skeleton that supports facial and body movements. So far, AheadForm released busts of these moving and talking realistic humanoid robots in 2025, hinting at its future plans to unveil more. even the facial features, such as hair, are lifelike these realistic humanoid robots by AheadForm replicate the living people’s skin texture detailed view of the machines’ skin and features these robots come with custom-designed brushless micro motors installed in their facial areas neck view of the machines so far, AheadForm released busts of these moving and talking realistic humanoid robots project info: name: Elf Series, Lan Series company: AheadForm
Images (1):
|
|||||
| CATL successfully deploys humanoid robots to quality-critical work on EV … | https://www.gizmochina.com/2025/12/18/c… | 1 | Mar 24, 2026 16:00 | active | |
CATL successfully deploys humanoid robots to quality-critical work on EV batteries - GizmochinaDescription: Its main task involves attaching battery connectors, a job that demands high precision, consistency, and careful control of applied force. Content:
Humanoid robots have been a popular talking point in recent months, but most examples so far have involved carefully staged demos rather than real factory work. CATL, the worldâs largest EV battery maker, now claims it has crossed that gap, quietly rolling out humanoid robots on an actual production line. The company says it has completed large-scale deployment of its Moz humanoid robot at a battery pack factory, making it the first power battery production line to use humanoid âembodied intelligenceâ robots at scale. Moz was developed by Spirit AI, a CATL subsidiary focused on robotics and automation. Rather than handling simple pick-and-place jobs, Moz is reportedly positioned at quality-critical stages of the production process. Its main task involves attaching battery connectors, a job that demands high precision, consistency, and careful control of applied force. According to CATL, the robot has reached performance levels comparable to experienced human workers, delivering a reported 99% success rate in connector insertion. That level of reliability is achieved through an end-to-end vision system that allows the robot to adapt in real time. Moz can compensate for slight misalignments in materials or connection points by adjusting its posture and movements on the fly. It also monitors how much force it applies, ensuring wiring harnesses are secured firmly without damaging fragile components. CATL contrasts this with reports of difficulties faced by other humanoid robots during factory trials, including overheating joints and failures in complex mechanical assemblies. While those systems have drawn attention through public demonstrations, many have yet to prove they can operate continuously in demanding industrial environments. The timing is notable, as Chinaâs humanoid robotics sector is expanding rapidly, with some analysts already warning of potential overcapacity similar to what the country experienced in EV manufacturing. CATLâs deployment suggests that, at least in some cases, humanoid robots are moving beyond experiments and into practical, revenue-generating roles on the factory floor. Donât miss a thing! Join our Telegram community for instant updates and grab our free daily newsletter for the best tech stories! For more daily updates, please visit our News Section. (Source: CATL)
Images (1):
|
|||||
| Learning To Play Tennis | https://www.i-programmer.info/news/169-… | 1 | Mar 24, 2026 00:01 | active | |
Learning To Play TennisURL: https://www.i-programmer.info/news/169-robotics/18743-learning-to-play-tennis.html Description: Programming book reviews, programming tutorials,programming news, C#, Ruby, Python,C, C++, PHP, Visual Basic, Computer book reviews, computer history, programming history, joomla, theory, spreadsheets and more. Content:
Training humanoid robots to do things seems to be the way to go compared to the "traditional" engineering approach. We can't tell a robot how to play tennis but we can let it learn by trial and error. As we have commented before, the age of the engineering approach to robotics is probably well and truly over. We no longer need to work out how to program the movements that are needed to get a job done - we can simply use reinforcement learning to get the robot to figure it out for itself. Of course, this isn't as simple as it sounds and it takes a lot of computing power - but we are seeing more and more examples of it in action and working well. The latest is a humanoid robot that can play tennis - well it has learned how to return the ball and keep a rally going. While it has no strategy for playing tennis and isn't trying to score points against its opponent, It is remarkable impressive and slightly spooky. The system is called LATENT Learns Athletic humanoid TEnnis skills from imperfect human motioN daTa - which is about as contrived an acronym as can be invented! However, don't let this put you off as its an interesting and generalizable approach. The training data is imperfect motion capture - motion fragments that capture the primitive skills involved - forehand, backhand and footwork. It seems that it is possible to learn these fragments and put them together into a useful order and, with some additional reinforcement learning, the humanoid robot, an off-the-shelf Unitree G1, can master returning the ball using classic tennis strokes. This is all done in simulation and then, using a clever technique, is transferred to the real world robot. As the research paper says: "Our method achieves surprising results in the real world and can stably sustain multi-shot rallies with human players..." Yes I have to agree and it seems such a short time ago we were laughing at humanoid robots attempting to walk like humans... Latent ANYmal For Badminton Quadrupedal Parkour Unitree Robots Perform Kung Fu Display Meet Figure 03: Dishwasher-Loading Robot Humanoid Alpha Learns To Wrap Xmas Presents A World First For Humanoid Robots To be informed about new articles on I Programmer, sign up for our weekly newsletter, subscribe to the RSS feed and follow us on Facebook or Linkedin. To be informed about new articles on I Programmer, sign up for our weekly newsletter, subscribe to the RSS feed and follow us on Facebook or Linkedin. JetBrains Releases VS Code Java To Kotlin Converter03/03/2026JetBrains, creators and major supporters of Kotlin, has released an extension for Visual Studio Code that can be used to convert Java code to Kotlin. The extension uses LLMs to provide "idiomatic [ ... ] + Full Story XAML.io Adds Nuget Package Browser Support26/02/2026XAML.io has been updated to add support for Nuget packages directly in the browser, along with code sharing via URL. XAML.io is a free, browser-based XAML designer and C# editor built on top of t [ ... ] + Full StoryMore NewsGoogle Is Still On Target To Close AndroidDiscover the 2026 Dates For Google I/O and Microsoft BuildpgEdge MCP Server For PostgresApache NetBeans 29 Improves Git IntegrationBun Adds Parallel Script SupportApache Geode 2 ReleasedQuantum Information Science Pioneers Gain Turing AwardWing Python Improves Syntax HighlightingLearn SQL For Data Engineering - The CourseZvec - Lightweight Vector DatabaseFuture-Proof Your Career with AI Professional CertificateRust 1.94 Adds Array WindowsCommunity Asks Oracle For MySQL Foundation JetBrains, creators and major supporters of Kotlin, has released an extension for Visual Studio Code that can be used to convert Java code to Kotlin. The extension uses LLMs to provide "idiomatic [ ... ] JetBrains, creators and major supporters of Kotlin, has released an extension for Visual Studio Code that can be used to convert Java code to Kotlin. The extension uses LLMs to provide "idiomatic [ ... ] XAML.io has been updated to add support for Nuget packages directly in the browser, along with code sharing via URL. XAML.io is a free, browser-based XAML designer and C# editor built on top of t [ ... ] XAML.io has been updated to add support for Nuget packages directly in the browser, along with code sharing via URL. XAML.io is a free, browser-based XAML designer and C# editor built on top of t [ ... ] Comments Make a Comment or View Existing Comments Using Disqus or email your comment to: comments@i-programmer.info Make a Comment or View Existing Comments Using Disqus or email your comment to: comments@i-programmer.info
Images (1):
|
|||||
| Model-based approaches and reinforcement learning for robust and autonomous locomotion … | https://theses.hal.science/tel-05559751… | 1 | Mar 24, 2026 00:01 | active | |
Model-based approaches and reinforcement learning for robust and autonomous locomotion of humanoid robots in dynamic contexts - TEL - Thèses en ligneURL: https://theses.hal.science/tel-05559751v1 Description: This thesis is set in a period of unprecedented growth in humanoid robotics, driven both by rapid technological advances and by a growing enthusiasm from private actors and the general public. In such a context, the recent progress in humanoid robot locomotion, together with their increasing social acceptance, seems to announce their forthcoming integration into real environments alongside humans. Faced with this ambition, a major challenge remains: ensuring the robustness and autonomy of locomotion in the dynamic contexts in which these robots will have to operate. To address this challenge, this work combines physical modeling and reinforcement learning, leveraging the complementary advantages of these two paradigms – the stability guarantees provided by modeling and the adaptability arising from learning. This manuscript begins with a state-of-the-art review of control and learning methods applied to humanoid locomotion, aimed at identifying the most promising approaches to reconcile robustness, adaptability, and dynamic realism. On these foundations, the PlaCo software, dedicated to motion planning and robot control, is developed. It aims to abstract the complexity of the optimization formulations required for trajectory generation, while maintaining performance compatible with real-time execution. This framework is then used to design and deploy on the humanoid robot Sigmaban a walking controller based on the Linear Inverted Pendulum Model (LIPM). This development highlights the ability of this model to produce coherent trajectories in real time, while revealing the practical limitations encountered on a real platform. To overcome these limitations and enable dynamic adaptation to disturbances, a reinforcement learning agent dedicated to fall recovery is developed. Trained in simulation, this agent is successfully transferred to the real robot, demonstrating a significant gain in autonomy. However, the difficulty of this transfer highlights the central issue of the gap between simulated and real environments. This observation leads to an investigation of how this gap can be reduced by improving simulation fidelity. A detailed study of friction phenomena in servo actuators is therefore carried out, showing how a more accurate consideration of these effects improves both the quality of simulation and the transferability of control policies. Content:
This thesis is set in a period of unprecedented growth in humanoid robotics, driven both by rapid technological advances and by a growing enthusiasm from private actors and the general public. In such a context, the recent progress in humanoid robot locomotion, together with their increasing social acceptance, seems to announce their forthcoming integration into real environments alongside humans. Faced with this ambition, a major challenge remains: ensuring the robustness and autonomy of locomotion in the dynamic contexts in which these robots will have to operate. To address this challenge, this work combines physical modeling and reinforcement learning, leveraging the complementary advantages of these two paradigms – the stability guarantees provided by modeling and the adaptability arising from learning. This manuscript begins with a state-of-the-art review of control and learning methods applied to humanoid locomotion, aimed at identifying the most promising approaches to reconcile robustness, adaptability, and dynamic realism. On these foundations, the PlaCo software, dedicated to motion planning and robot control, is developed. It aims to abstract the complexity of the optimization formulations required for trajectory generation, while maintaining performance compatible with real-time execution. This framework is then used to design and deploy on the humanoid robot Sigmaban a walking controller based on the Linear Inverted Pendulum Model (LIPM). This development highlights the ability of this model to produce coherent trajectories in real time, while revealing the practical limitations encountered on a real platform. To overcome these limitations and enable dynamic adaptation to disturbances, a reinforcement learning agent dedicated to fall recovery is developed. Trained in simulation, this agent is successfully transferred to the real robot, demonstrating a significant gain in autonomy. However, the difficulty of this transfer highlights the central issue of the gap between simulated and real environments. This observation leads to an investigation of how this gap can be reduced by improving simulation fidelity. A detailed study of friction phenomena in servo actuators is therefore carried out, showing how a more accurate consideration of these effects improves both the quality of simulation and the transferability of control policies. Cette thèse s’inscrit dans une période d’essor sans précédent de la robotique humanoïde, portée à la fois par des avancées technologiques rapides et par un engouement croissant des acteurs privés et du grand public. Dans un tel contexte, les progrès récents en matière de motricité des robots humanoïdes ainsi que leur acceptation sociale grandissante semblent annoncer leur prochaine intégration dans des environnements réels aux côtés des humains. Face à cette ambition, un défi majeur demeure : assurer la robustesse et l’autonomie de la locomotion dans les contextes dynamiques où ces robots seront amenés à évoluer. Pour y répondre, ce travail combine modélisation physique et apprentissage par renforcement, en exploitant les avantages complémentaires de ces deux paradigmes – les garanties de stabilité offertes par la modélisation et la capacité d’adaptation issue de l’apprentissage. Ce manuscrit débute par un état de l’art des méthodes de contrôle et d’apprentissage appliquées à la locomotion humanoïde, destiné à identifier les approches les plus prometteuses pour concilier robustesse, adaptabilité et réalisme dynamique. Sur ces fondations, le logiciel PlaCo, dédié à la planification de mouvement et au contrôle de robots, est développé. Il vise à abstraire la complexité des formulations d’optimisation nécessaires à la génération de trajectoires, tout en maintenant des performances compatibles avec une exécution en temps réel. Ce cadre est ensuite exploité pour concevoir et déployer sur le robot humanoïde Sigmaban un contrôleur de marche fondé sur le modèle du pendule inversé linéaire (LIPM). Ce développement met en évidence la capacité de ce modèle à produire des trajectoires cohérentes en temps réel, tout en révélant les limites pratiques rencontrées sur plateforme réelle. Afin de surmonter ces limitations et de permettre une adaptation dynamique aux perturbations, un agent d’apprentissage par renforcement dédié à la récupération en cas de chute est développé. Entraîné en simulation, cet agent est transféré avec succès sur le robot réel, démontrant un gain significatif en autonomie. La difficulté de ce transfert met néanmoins en évidence la problématique centrale de l’écart entre environnements simulé et réel. Cette observation conduit à chercher comment minimiser cet écart en améliorant la fidélité de la simulation. Une étude approfondie des phénomènes de friction dans les servo-actionneurs est menée, montrant comment une prise en compte plus fine de ces phénomènes améliore la qualité de la simulation et des transferts de politiques. Contact https://theses.hal.science/tel-05559751 Soumis le : jeudi 19 mars 2026-16:31:07 Dernière modification le : lundi 23 mars 2026-10:33:29 Contact Ressources Informations Questions juridiques Portails CCSD
Images (1):
|
|||||
| Tesla investor shares video of Nvidia-backed Figureâs Humanoid robot cleaning … | https://timesofindia.indiatimes.com/tec… | 1 | Mar 22, 2026 16:00 | active | |
Tesla investor shares video of Nvidia-backed Figureâs Humanoid robot cleaning living room; gets a query from Elon Musk - The Times of IndiaDescription: Tech News News: Tesla investor and influencer, Sawyer Merritt recently shared a video on social media platform X (formerly known as Twitter) showing Nvidia-backed sta. Content:
The TOI Tech Desk is a dedicated team of journalists committed to delivering the latest and most relevant news from the world of technology to readers of The Times of India. TOI Tech Deskâs news coverage spans a wide spectrum across gadget launches, gadget reviews, trends, in-depth analysis, exclusive reports and breaking stories that impact technology and the digital universe. Be it how-tos or the latest happenings in AI, cybersecurity, personal gadgets, platforms like WhatsApp, Instagram, Facebook and more; TOI Tech Desk brings the news with accuracy and authenticity.Read More â¹6,869 â¹10,999
Images (1):
|
|||||
| Xiaomi usa robot umanoidi per assemblare auto elettriche | https://www.punto-informatico.it/xiaomi… | 1 | Mar 22, 2026 16:00 | active | |
Xiaomi usa robot umanoidi per assemblare auto elettricheURL: https://www.punto-informatico.it/xiaomi-usa-robot-umanoide-assemblare-auto-elettriche/ Description: Due robot umanoidi Xiaomi lavorano sulla linea di produzione delle auto elettriche a Pechino, avvitando dadi ogni 76 secondi. Content:
Due figure bipedi, una per lato della catena di montaggio, avvitano i dadi sulle ruote di un telaio che scorre davanti a loro. Non sono operai. Sono robot umanoidi di Xiaomi, e stanno lavorando nella fabbrica di auto elettriche dell’azienda a Pechino. Non velocissimi, non perfetti, ma funzionanti, e abbastanza rapidi da stare al passo con il resto della linea di produzione. Per un “tirocinante,” come li definisce Xiaomi stessa, non è male. In un’intervista con CNBC, il presidente di Xiaomi Lu Weibing ha spiegato che i due robot hanno completato con successo il 90,2% del lavoro durante un turno di prova di tre ore. Il tempo ciclo è di 76 secondi, lo stesso intervallo con cui un’auto nuova esce dalla linea di produzione della fabbrica. La sfida più grande per integrare i robot nelle nostre linee di produzione è fargli tenere il passo, ha detto Lu. E due robot umanoidi riescono a tenere il nostro ritmo. Un paio di operai umani farebbero lo stesso lavoro più velocemente, senza dubbio. Ma il punto non è la velocità assoluta, è il fatto che dei robot stiano lavorando in un ambiente industriale reale, su un prodotto reale, senza rallentare la produzione. Lu tiene i piedi per terra, per ora sono nella fase sperimentale, ma il risultato è comunque notevole. Un video promozionale dell’azienda mostra i due robot che applicano i dadi con precisione su parti piccole del telaio, un compito che richiede accuratezza millimetrica e che fino a poco tempo fa era considerato fuori dalla portata dei robot umanoidi. Xiaomi non è la prima azienda a mettere robot bipedi in una fabbrica. A febbraio, la britannica Humanoid aveva completato un test simile con un tasso di successo superiore al 90% in un compito di impilamento. La differenza è nei dettagli, i robot Humanoid erano fissati su una base stabile, mentre quelli di Xiaomi sono effettivamente in piedi sulle proprie gambe, come esseri umani. La Cina ha già dispiegato più robot industriali di qualsiasi altro Paese nella storia. Ma i robot industriali tradizionali sono bracci meccanici fissi, progettati per svolgere un compito preciso su una linea di produzione. I robot umanoidi sono un’altra cosa: devono camminare, mantenere l’equilibrio e manipolare oggetti come farebbe una persona. Il passaggio dai bracci robotici ai robot bipedi è quindi un salto tecnologico enorme, anche se per ora si tratta più di una dimostrazione, siamo ancora lontani da una diffusione su larga scala. Tiziana Foglio Pubblicato il 10 mar 2026
Images (1):
|
|||||
| Toyota залучить гуманоїдних роботів Digit до виробництва автомобілів - ProstoMob | https://prostomob.com/264092-toyota-zal… | 1 | Mar 22, 2026 08:00 | active | |
Toyota залучить гуманоїдних роботів Digit до виробництва автомобілів - ProstoMobURL: https://prostomob.com/264092-toyota-zaluchyt-gumanoyidnyh-robotiv-digit-do-vyrobnycztva-avtomobiliv Description: Заводи з виробництва автомобілів продовжують автоматизувати свої робочі процеси. Компанія Toyota Motor Manufacturing Canada готується до впровадження Content:
Головна » Новини Заводи з виробництва автомобілів продовжують автоматизувати свої робочі процеси. Компанія Toyota Motor Manufacturing Canada готується до впровадження гуманоїдного робота Digit, створеного компанією Agility Robotics, на своєму підприємстві у Вудстоку (Онтаріо), де збирають кросовери Toyota RAV4. Про це повідомляє SlashGear. Хоча промислові роботизовані системи, такі як маніпулятори для зварювання чи фарбування, використовуються в автомобілебудуванні ще з 1960-х років, залучення двоногих гуманоїдів є новим етапом модернізації. На початковому етапі Toyota розгорне трьох роботів із семи, передбачених угодою. Їх задіють у логістиці, виробництві та управлінні ланцюгами постачання. Основна мета — передати машинам рутинні та фізично важкі процеси, що дозволить знизити рівень травматизму серед персоналу та звільнити людей для виконання більш відповідальних завдань. Digit має двоногу конструкцію із шарнірами зворотного типу. Його зріст становить близько 175 сантиметрів, а в головній частині розміщені світлодіодні індикатори для базової комунікації. Завдяки набору датчиків Digit здатний самостійно орієнтуватися в просторі, розпізнавати перешкоди та людей без необхідності встановлення додаткової інфраструктури, такої як магнітні стрічки на підлозі. Крім пересування рівними поверхнями, робот може підійматися пандусами та сходами, а в разі падіння здатен самостійно встати на ноги. Використання гуманоїдних систем стає все більш поширеним серед великих корпорацій — раніше подібних роботів уже впровадили Amazon та GXO Logistics. Наразі Toyota та Agility Robotics планують і надалі досліджувати можливості використання штучного інтелекту та передової робототехніки на автомобільних заводах. Читайте також:
Images (1):
|
|||||
| Humanoid robot maker Agility rebrands as factories face labor gaps | https://interestingengineering.com/ai-r… | 1 | Mar 22, 2026 08:00 | active | |
Humanoid robot maker Agility rebrands as factories face labor gapsURL: https://interestingengineering.com/ai-robotics/agility-humanoid-robot-maker-rebrands Description: Humanoid maker Agility rebrands as labor shortages push factories toward automation and companies explore robots for industrial work. Content:
From daily news and career tips to monthly insights on AI, sustainability, Aerospace, and more—pick what matters and get it in your inbox. Access expert insights, exclusive content, and a deeper dive into engineering and innovation. Engineering-inspired textiles, mugs, hats, and thoughtful gifts We connect top engineering talent with the world's most innovative companies. We empower professionals with advanced engineering and tech education to grow careers. We recognize outstanding achievements in engineering, innovation, and technology. All Rights Reserved, IE Media, Inc. Follow Us On Access expert insights, exclusive content, and a deeper dive into engineering and innovation. Engineering-inspired textiles, mugs, hats, and thoughtful gifts We connect top engineering talent with the world's most innovative companies We empower professionals with advanced engineering and tech education to grow careers. We recognize outstanding achievements in engineering, innovation, and technology. All Rights Reserved, IE Media, Inc. The robotics firm is planning to focus on broader initiatives while abiding by its promise of delivering humanoid robots. Oregon-based robotics firm Agility has announced it will drop the word ‘Robotics’ from its brand name and adopt a new brand identity to serve a larger cause. The company announced the news via a video and an official blog post on its social media handles. With the shift in identity, Agility plans to explore new use cases, services, and industries to serve, while simultaneously developing humanoid robots. The robotics firm is on track to deliver its first cooperatively safe humanoid robot in 2026. Alongside the name change announced last week, Agility has introduced a new logo and updated brand language inspired by the hardware and software it develops. The company said the redesigned logo is meant to reflect motion, innovation, progress, reliability, and durability. Agility’s humanoid robot, Digit, has already entered the market, demonstrating its readiness to tackle the labor shortage in factories and warehouses. The 5 ft 9 in tall robot set a record milestone in November last year, moving 100,000 totes at a GXO logistics facility. By the end of 2025, Agility partnered with the fintech firm Mercado to deploy Digit to handle physically demanding tasks in its fulfillment network. Earlier in February, automaker Toyota joined a long list of Fortune 500 companies, including Amazon and Schaeffler, in deploying the Digit humanoid robot in warehouses to handle repetitive tasks. A bird’s-eye view of Agility’s current progress and partnerships shows how the firm plans to expand its horizons by working across different industries to address the labor gap. “With our rebrand to Agility, we’re signaling our readiness to scale beyond our current deployments and our ability to lead the adoption of humanoids across many new industries,” said Daniel Diez, Chief Business Officer at Agility. “As we expand into new partnerships and enable new use cases, it is critical that our brand matches the maturity of our technology and our commercial momentum. Agility represents flexibility, durability, and forward motion – qualities our customers need as they integrate humanoids into real operations,” he added. Diez revealed that manufacturers worldwide are struggling to find laborers for highly repetitive physical tasks. “It’s the same exact issue: Labor gaps in these highly repetitive physical tasks. They simply can’t find the people to do this work,” he said in a conversation with Business Insider. As of December 2025, the Bureau of Labor Statistics has reported over 400,000 job openings. According to a 2024 survey conducted across 200 companies, talent retention remains another major concern for manufacturers, contributing to labor scarcity. A significant portion of the manufacturing workforce is 55 and over, approaching retirement, with the BLS Current Population Survey estimating that figure at just over 25%. Added to that, the Trump Administration’s attempts to bring onshore manufacturing back will further increase the need for labor, Diez revealed. “This re-shoring of manufacturing in the US is going to only occur through a combination of human employment and automation technology, like humans and robotics,” he said. With labor pressures mounting and more manufacturers exploring automation, Agility’s long-term trajectory will depend on how successfully it scales Digit across real-world industrial environments. Atharva is a full-time content writer with a post-graduate degree in media & amp; entertainment and a graduate degree in electronics & telecommunications. He has written in the sports and technology domains respectively. In his leisure time, Atharva loves learning about digital marketing and watching soccer matches. His main goal behind joining Interesting Engineering is to learn more about how the recent technological advancements are helping human beings on both societal and individual levels in their daily lives. Exclusive content, expert insights and a deeper dive into engineering and tech. No ads, no limits. Exclusive content, expert insights and a deeper dive into engineering and tech. No ads, no limits. Premium Follow
Images (1):
|
|||||
| China vs US in Robotics: Nvidia CEO Reveals the Real … | https://www.gizmochina.com/2026/03/21/c… | 1 | Mar 21, 2026 16:00 | active | |
China vs US in Robotics: Nvidia CEO Reveals the Real Power Shift - GizmochinaURL: https://www.gizmochina.com/2026/03/21/china-robotics-dominance-nvidia-physical-ai/ Description: China leads robotics hardware while Nvidia bets on Physical AI to power future machines. Hereâs what it means for the global tech race. Content:
Jensen Huang highlighted that China currently holds a significant edge in robotics, primarily due to its dominance in core components like microelectronics, motors, rare earth materials, and magnets. These elements are foundational to building robots at scale, and Chinaâs ecosystem is described as the âworldâs best.â This deep supply chain strength means even the US robotics industry remains heavily dependent on China for hardware, reinforcing its global influence in the sector. Nvidia is shifting focus beyond traditional GPUs toward what Huang calls Physical AI, the integration of AI into real-world machines like robots and autonomous systems. This marks a major evolution from generative AI to embodied intelligence. Nvidiaâs strategy includes building a full-stack ecosystem, from AI models to robotics infrastructure, positioning itself not just as a chipmaker but as the backbone of future intelligent machines. At its GTC event, Nvidia introduced the Physical AI Data Factory to automate data generation, simulation, and model evaluation. Huang also described a âthree-computerâ model powering robotics: training systems for AI models, simulation platforms like Omniverse, and edge computers embedded in robots. This approach shows robotics is no longer just hardware; it requires a tightly integrated AI ecosystem. While Chinaâs robotics companies, such as Unitree, are scaling rapidly with strong financial growth, Nvidia is navigating a complex market. Its China market share has dropped sharply due to restrictions, but the company is preparing a return with approved H200 AI chips. Despite revenue declines, demand from Chinese firms remains strong. Huang believes robotics adoption is just 3â5 years away from widespread use, driven by exponential growth in AI compute and agentic AI systems. The long-term vision points to a massive economic opportunity, where robots augment human labor across industries. The global balance is becoming clear: China dominates the physical layer, while Nvidia aims to control the intelligence powering it. Read More; (via)
Images (1):
|
|||||
| Robotic Rabbits Help Researchers Catch Invasive Pythons in the Florida … | https://www.discovermagazine.com/roboti… | 0 | Mar 20, 2026 00:01 | active | |
Robotic Rabbits Help Researchers Catch Invasive Pythons in the Florida EvergladesDescription: Learn about the Burmese python problem in South Florida, and about an innovative initiative that relies on robotic rabbits that look and smell like the real thi... Content: |
|||||
| Arduino Ventuno Q: la nueva placa que integra la IA … | https://hipertextual.com/tecnologia/ard… | 1 | Mar 20, 2026 00:01 | active | |
Arduino Ventuno Q: la nueva placa que integra la IA en robotsURL: https://hipertextual.com/tecnologia/arduino-ventuno-q-placa-robots-ia-qualcomm/ Description: La nueva Ventuno Q de Arduino permite crear robots que ejecutan modelos de IA en el hardware gracias al chip de Qualcomm. Content:
Hipertextual Tecnología, ciencia y cultura digital Arduino ha presentado una nueva placa que permite controlar robots con ayuda de la IA. La compañía aprovechó la víspera del Embedded World para anunciar la Ventuno Q, un nuevo dispositivo capaz de ejecutar tareas de inteligencia artificial sin necesidad de depender de la nube. La nueva placa sigue los pasos de la UNO Q e integra una arquitectura híbrida con chip de Qualcomm y unas especificaciones de software robustas. De acuerdo con el comunicado de prensa, la Ventuno Q monta un procesador Dragonwing IQ8 de Qualcomm. Este chip está diseñado para ejecutar modelos de lenguaje y visión de forma nativa, por lo que los robots no tendrán que conectarse a internet para interpretar las instrucciones. Esta capacidad le permite comprender lenguaje hablado, escrito, reconocer objetos o mapear el entorno por el que se desplazarán usando su propio hardware. Arquitectónicamente, la Ventuno Q utiliza un sistema de doble cerebro como el que encontramos en la UNO Q. Por un lado, la unidad de procesamiento principal se apoya en una NPU que ofrece hasta 40 TOPS para tareas de IA. Por otro lado, el microcontrolador STM32H5 se encarga del control crítico para evitar que el procesamiento de algoritmos interfiera con funciones como la seguridad o el movimiento de un robot. Más allá del enfoque, la diferencia principal con la placa de Arduino que se lanzó a finales de 2025 está en las características de hardware. La Ventuno Q llega con 16 GB de RAM y 64 GB de almacenamiento expandible, un salto notable si consideramos que la UNO Q se vende con 2 GB de RAM y hasta 32 GB de almacenamiento. Este ajuste es necesario debido a los requerimientos de un robot y los modelos de IA. Lo que separa a la Ventuno Q de un ordenador de placa única como el que usamos en el prototipado es el diseño orientado a la acción física. Mientras que una Raspberry Pi suele ser de propósito general, la nueva Arduino integra interfaces industriales nativas como CAN-FD, PWM y GPIO. Estas conexiones permiten que la placa pueda interactuar con máquinas, motores y sistemas de control en entornos robóticos. CAN-FD es un bus de comunicación que permite que sensores, actuadores o controladores se comuniquen rápido y sin contratiempos. PWM es una técnica que regula la velocidad de un motor o la intensidad de una señal de manera precisa. Por último, GPIO son pines de entrada o salida que ayudan a que la Ventuno Q pueda leer sensores o activar actuadores con tiempos de respuesta muy cortos. En la parte de software, Arduino ha optado por una estructura híbrida. El procesador principal corre distribuciones de Linux, mientras que el microcontrolador de tiempo real utiliza el núcleo de Arduino sobre Zephyr OS. Con esta combinación, los desarrolladores pueden optar por Python o C++ según la necesidad, y apoyarse del Qualcomm AI Hub para integrar modelos de reconocimiento de gestos, estimación de postura y más. Por último, el Ventuno Q incluye Ethernet de 2,5 GB y conectores de alta velocidad para cámaras MIPI-CSI. La placa funciona con los shields de la familia UNO, sensores Qwiic y mantiene soporte para HATs de Raspberry Pi, lo que ayudará a los usuarios que ya tienen alguno de estos componentes. Arduino confirmó que la VENTUNO Q estará disponible en el segundo trimestre de 2026 a través de la tienda de Arduino, DigiKey o Mouser. Aunque todavía no existe una web con la configuración, se espera que el precio ronde por debajo de los 300 dólares. Link Copy link
Images (1):
|
|||||
| Microsoft's New "Physical AI" Could Make Robots Smarter Than Ever | https://propakistani.pk/2026/01/26/micr… | 1 | Mar 19, 2026 08:00 | active | |
Microsoft's New "Physical AI" Could Make Robots Smarter Than EverURL: https://propakistani.pk/2026/01/26/microsofts-new-physical-ai-could-make-robots-smarter-than-ever/ Description: Microsoft has announced Rho-alpha, a new robotics AI model derived from its Phi vision-language series, aimed at helping robots operate more effectively Content:
Microsoft has announced Rho-alpha, a new robotics AI model derived from its Phi vision-language series, aimed at helping robots operate more effectively outside tightly controlled industrial environments. While robots have long performed reliably on assembly lines with predictable conditions, Microsoft argues they often struggle in less structured, real-world settings. The company believes robots need better ways to see, understand instructions, and adapt to changing conditions rather than relying on rigid scripts. Rho-alpha is Microsoft’s first robotics model built on its Phi vision-language framework and is positioned as a step toward what the company describes as “physical AI.” Microsoft links Rho-alpha to the broader shift toward physical AI, where software models guide machines through environments that are not predefined or highly structured. The system combines language, perception, and action in a single model, reducing reliance on fixed production lines and static instructions. Rho-alpha translates natural language commands into robotic control signals, allowing robots to respond dynamically to tasks. A key focus of the model is bimanual manipulation, which requires precise coordination between two robotic arms and fine-grained motor control. Microsoft says Rho-alpha extends traditional vision-language-action approaches by expanding both perception inputs and learning sources. Rho-alpha incorporates tactile sensing alongside visual input, with additional sensing modalities such as force currently under development. These capabilities are designed to help robots better understand physical interactions, narrowing the gap between simulated intelligence and real-world manipulation. Microsoft Research says these design choices aim to improve how robots handle complex tasks in environments where conditions vary and cannot be fully anticipated in advance. Ashley Llorens, Corporate Vice President and Managing Director at Microsoft Research Accelerator, said vision-language-action models are enabling physical systems to perceive, reason, and act with increasing autonomy in environments that are far less structured. A central part of Microsoft’s approach addresses the limited availability of large-scale robotics data, particularly data involving touch. To overcome this, the company relies heavily on simulation. Synthetic trajectories are generated through reinforcement learning using NVIDIA Isaac Sim, and are combined with physical demonstrations sourced from commercial and open datasets. Deepu Talla, Vice President of Robotics and Edge AI at Nvidia, said training foundation models capable of reasoning and acting requires overcoming the scarcity of diverse real-world data. He added that using NVIDIA Isaac Sim on Azure allows Microsoft Research to accelerate the development of models like Rho-alpha that can handle complex manipulation tasks. Microsoft also emphasizes the role of human corrective input during deployment. Operators can intervene using teleoperation devices and provide feedback, which the system can learn from over time. This creates a training loop that blends simulation data, real-world demonstrations, and human correction. The approach reflects a broader trend in robotics toward using AI tools to compensate for limited embodied datasets. Professor Abhishek Gupta, Assistant Professor at the University of Washington, noted that while teleoperated data collection is common, there are many environments where teleoperation is impractical or impossible. He said researchers are working with Microsoft Research to enrich pre-training datasets using diverse synthetic demonstrations generated through simulation and reinforcement learning. 📢 For the latest Tech & Telecom news, videos and analysis join ProPakistani's WhatsApp Group now! Follow ProPakistani on Google News & scroll through your favourite content faster! Shares We are almost there, TERMINATOR a journey from fiction to reality. I’ll be back. ProPakistani is the premier and most trustworthy resource for all happenings in technology, telecom, business, sports, auto, education, real estate and entertainment news in Pakistan. Whether it's the top trending news, inside scoops or features, interviews, market trends and analysis, product reviews, How to's or tutorials – we cover it all. © 2026 ProPakistani.PK - All rights reserved Join the groups below to get latest news and updates. Session expired Please log in again. The login page will open in a new tab. After logging in you can close it and return to this page.
Images (1):
|
|||||
| Humanoid robot locomotion, footstep planning and fall resilience via reinforcement … | https://theses.hal.science/tel-05549379… | 1 | Mar 19, 2026 08:00 | active | |
Humanoid robot locomotion, footstep planning and fall resilience via reinforcement learning policies - TEL - Thèses en ligneURL: https://theses.hal.science/tel-05549379v1 Description: Humanoid Robot Locomotion, Footstep Planning, and Fall Resilience via Reinforcement Learning Policies explores how to endow humanoid robots with robust locomotion and rapid fall recovery without relying on brittle heuristics or hand-crafted keyframes. The work addresses a central challenge in robotic autonomy: real humanoids must operate reliably in uncertain, contact-rich environments on limited onboard compute. Traditional model-based pipelines often lack adaptability, while deep reinforcement learning (DRL) offers the promise of data-driven, generalizable behaviors. The thesis thus investigates how to design DRL policies that are both computationally efficient and transferable zero-shot from simulation to physical robots, while integrating seamlessly into conventional locomotion stacks. Methodologically, the thesis develops foundational RL algorithms and robotics-oriented architectures, leading to two primary contributions trained in simulation with domain randomization and deployed on small humanoid robots. FootstepNet is an efficient actor-critic footstep planner that generates continuous, task-oriented step placements and, via its critic, predicts the number of steps required to reach multiple local goals—enabling rapid, upstream decision-making. It eliminates reliance on discrete footstep sets and fragile heuristics, supports onboard inference, and matches or surpasses ARA* baseline performance with substantially lower computational cost. FootstepNet was validated both in simulation and on hardware during RoboCup 2023 and 2025. FRASA (Fall Recovery and Stand-up Agent) is a unified, end-to-end policy for fall recovery that directly maps proprioceptive inputs to motor commands, first establishing stabilizing ground contacts before transitioning to a standing pose. Leveraging the Cross-Q algorithm and exploiting robot symmetry, FRASA reduces training time to roughly 30 minutes and transfers zero-shot to real robots, outperforming a keyframe baseline while handling a wide variety of initial postures. Overall, the thesis demonstrates that lightweight, modular DRL policies can achieve practical, safe control for embedded humanoid systems, substantially reducing downtime after disturbances and paving the way for general, learning-based whole-body autonomy in real-world settings. Content:
Humanoid Robot Locomotion, Footstep Planning, and Fall Resilience via Reinforcement Learning Policies explores how to endow humanoid robots with robust locomotion and rapid fall recovery without relying on brittle heuristics or hand-crafted keyframes. The work addresses a central challenge in robotic autonomy: real humanoids must operate reliably in uncertain, contact-rich environments on limited onboard compute. Traditional model-based pipelines often lack adaptability, while deep reinforcement learning (DRL) offers the promise of data-driven, generalizable behaviors. The thesis thus investigates how to design DRL policies that are both computationally efficient and transferable zero-shot from simulation to physical robots, while integrating seamlessly into conventional locomotion stacks. Methodologically, the thesis develops foundational RL algorithms and robotics-oriented architectures, leading to two primary contributions trained in simulation with domain randomization and deployed on small humanoid robots. FootstepNet is an efficient actor-critic footstep planner that generates continuous, task-oriented step placements and, via its critic, predicts the number of steps required to reach multiple local goals—enabling rapid, upstream decision-making. It eliminates reliance on discrete footstep sets and fragile heuristics, supports onboard inference, and matches or surpasses ARA* baseline performance with substantially lower computational cost. FootstepNet was validated both in simulation and on hardware during RoboCup 2023 and 2025. FRASA (Fall Recovery and Stand-up Agent) is a unified, end-to-end policy for fall recovery that directly maps proprioceptive inputs to motor commands, first establishing stabilizing ground contacts before transitioning to a standing pose. Leveraging the Cross-Q algorithm and exploiting robot symmetry, FRASA reduces training time to roughly 30 minutes and transfers zero-shot to real robots, outperforming a keyframe baseline while handling a wide variety of initial postures. Overall, the thesis demonstrates that lightweight, modular DRL policies can achieve practical, safe control for embedded humanoid systems, substantially reducing downtime after disturbances and paving the way for general, learning-based whole-body autonomy in real-world settings. La thèse « Locomotion, planification de pas et résistance aux chutes des robots humanoïdes via des politiques d’apprentissage par renforcement » étudie comment doter les robots humanoïdes d’une marche fiable et d’une capacité de relevage rapide après chute, sans recourir à des heuristiques ni à des trajectoires préprogrammées. Elle s’inscrit dans un enjeu central de l’autonomie robotique : permettre à des robots réels d’agir de façon robuste dans des environnements incertains et riches en contacts, tout en respectant les contraintes du calcul embarqué. Les approches fondées sur des modèles atteignent leurs limites, tandis que l’apprentissage par renforcement profond (DRL) promet des comportements généralisables issus des données. La problématique posée est : comment concevoir des politiques DRL à la fois légères, transférables de la simulation au robot réel et intégrables dans les piles de locomotion existantes ? Méthodologiquement, la thèse présente les fondements de l’apprentissage par renforcement appliqué à la robotique, puis propose deux contributions majeures, entraînées en simulation avec randomisation de domaines et validées sur des humanoïdes de petite taille. FootstepNet est un planificateur de pas acteur-critique efficace, produisant des placements de pas continus tout en anticipant le nombre de pas nécessaires pour atteindre divers objectifs locaux. Il supprime la dépendance aux ensembles discrets et aux heuristiques, fonctionne en inférence embarquée et égale ou dépasse la qualité des planifications d’ARA*, pour un coût de calcul bien moindre, validé en simulation et sur robot réel lors des RoboCup 2023 et 2025. FRASA est un agent unifié de rattrapage et de relevage : une seule politique transforme des observations proprioceptives en commandes moteurs établissant des contacts stabilisateurs avant de se relever. Exploitant l’algorithme Cross-Q et la symétrie du robot, il réduit l’entraînement à environ 30 minutes et se transfère zero-shot sur le robot réel, surpassant une référence à trajectoires préprogrammées et gérant une large variété de postures initiales. En conclusion, ces travaux montrent que des politiques DRL légères, modulaires et sûres peuvent être rendues pratiques pour le contrôle embarqué des humanoïdes, réduisant fortement les temps d’indisponibilité après perturbation et ouvrant la voie à une autonomie plus robuste en conditions réelles. Contact https://theses.hal.science/tel-05549379 Soumis le : jeudi 12 mars 2026-15:03:08 Dernière modification le : vendredi 13 mars 2026-03:19:37 Contact Ressources Informations Questions juridiques Portails CCSD
Images (1):
|
|||||
| NXP unveils robotics solutions with NVIDIA Holoscan for real-time AI | https://www.fonearena.com/blog/477663/n… | 1 | Mar 19, 2026 00:04 | active | |
NXP unveils robotics solutions with NVIDIA Holoscan for real-time AIURL: https://www.fonearena.com/blog/477663/nxp-robotics-solutions-nvidia-holoscan.html Content:
Fone Arena The Mobile Blog NXP Semiconductors has announced a new set of robotics solutions focused on reliable, secure, real-time data processing, transport, and advanced networking. Developed in collaboration with NVIDIA, these ready-to-deploy solutions mark the first in a series of NXP’s foundational robotics platforms. The solutions integrate NVIDIA Holoscan Sensor Bridge with NXP’s highly integrated system-on-chips (SoCs). This reduces discrete components, lowers power consumption and system cost, and simplifies software complexity for robotic sensing and actuation, including humanoid form factors. NXP positions physical AI as a category of systems capable of sensing, interpreting, and interacting with real-world environments with precision, reliability, and safety. Humanoid robots are among the most advanced implementations, requiring low-latency data processing and transport across the robot body to support synchronized motion, dense sensor fusion, and advanced actuation. To address these requirements, NXP’s integrated robot body solutions deliver edge intelligence and low-latency networking for real-time communication. The platform embeds NVIDIA Holoscan Sensor Bridge into NXP’s software environment, enabling real-time processing and establishing a direct transport path between the robot body and pre-specified regions of the robot brain, reducing latency in data flow. The joint architecture combines: This creates a flexible and energy-efficient system architecture for full-body humanoid robotics. NXP’s first Holoscan Sensor Bridge-ready solutions include machine vision and precision motor control systems. These are designed to handle high-bandwidth sensor data, enable synchronized motion, and support real-time decision-making across humanoid robot bodies. Key components include: These software-driven, highly integrated solutions provide a complete and scalable foundation for full-body humanoid robot design. NXP stated that these robotics solutions will be available in the first half of 2026 (1H 2026). Speaking on the robotics solutions, Charles Dachs, Executive Vice President and General Manager, Secure Connected Edge, NXP Semiconductors, said: Physical AI is redefining what machines can achieve in the real world, and humanoid robots represent the most complex expression of that revolution. By combining NXP’s deep expertise in edge processing, secure networking, functional safety, and real-time control with NVIDIA robotics platforms, we are greatly simplifying physical AI development, enabling seamless connectivity between the physical AI edge and the central brain. This is just the beginning of what NXP will deliver to accelerate the ecosystem for physical AI. Commenting on the development, Deepu Talla, Vice President of Robotics and Edge AI at NVIDIA, said: The development of autonomous machines requires a high-performance computing architecture that can synchronize complex motor controls with real-time perception. By integrating NVIDIA Holoscan Sensor Bridge into its edge portfolio, NXP is offering developers a scalable foundation to accelerate the deployment of physical AI.
Images (1):
|
|||||
| NVIDIA and Global Robotics Leaders Take Physical AI to the … | https://www.manilatimes.net/2026/03/17/… | 0 | Mar 19, 2026 00:04 | active | |
NVIDIA and Global Robotics Leaders Take Physical AI to the Real WorldDescription: NVIDIA and Global Robotics Leaders Take Physical AI to the Real World Content: |
|||||
| National AI model project race heats up as consortia expand … | https://www.koreatimes.co.kr/business/t… | 1 | Mar 18, 2026 16:00 | active | |
National AI model project race heats up as consortia expand new AI partners - The Korea TimesDescription: The government-led artificial intelligence (AI) foundation model project is intensifying its race as the four consortia in the second-round evaluat... Content:
Attendees visit Upstage's booth during a presentation for the national artificial intelligence foundation model project at Coex in Seoul, Dec. 30, 2025. Yonhap The government-led artificial intelligence (AI) foundation model project is intensifying its race as the four consortia in the second-round evaluation bulk up with new specialized partners spanning 3D AI, large language model (LLM) inference chips and high‑end training data. The Ministry of Science and ICT recently selected a consortium led by Motif Technologies to advance to the project’s second round, filling a vacant slot after only three teams — LG AI Research, SK Telecom and Upstage — moved on from the first round of evaluations in January. Each consortium is now rushing to bring in partners that boost its technical edge ahead of the next evaluation, evolving a model-building contest into creating a globally competitive sovereign AI stack from models to real-world deployment. LG AI Research announced on Feb. 25 that AI infrastructure and solutions provider Elice has joined its consortium to help commercialize its model K-EXAONE across public and private sectors. Elice will leverage its modular data center infrastructure to expose the AI model through stable application programming interfaces (APIs), offering a managed AI platform that lets public agencies and enterprises spin up dedicated environments without operational burden. The company also plans to deploy the AI model in security‑sensitive sectors such as manufacturing, finance and government, offering cloud‑based services for workflow automation, retrieval-augmented generation (RAG)‑powered search and document-generation tools. A visitor tries LG AI Research's artificial intelligence (AI) model K-EXAONE during a presentation event for the national AI foundation model project at Coex in Seoul, Dec. 30, 2025. Yonhap Meanwhile, Upstage is reinforcing its consortium with AI semiconductor startup HyperAccel and physical AI startup RLWRLD. HyperAccel has been developing an LLM processing unit (LPU) based on its own chip architecture to ease inference bottlenecks, and cut power and operating costs for LLM services. As part of the team, the company plans to further refine LPU design and performance, build inference acceleration optimized for generative AI workloads and roll out a high-availability, full-stack software platform. "The race in ultralarge AI models is not just about how big they are, but how efficiently you can serve them," HyperAccel CEO Kim Joo-young said. "With LPU-based inference acceleration, we aim to help Korea’s AI infrastructure stand on its own technologically and reach global-level cost competitiveness." RLWRLD is joining the team to help bridge Upstage’s multimodal AI model, Solar, into real‑world robotics deployment. The company will define vision‑language model (VLM) requirements for robot control optimization and integration with robotics foundation models and identify commercially viable tasks in hotels, logistics and retail. It will also co‑design detailed validation scenarios and test protocols that translate Solar’s capabilities into robots that can see, understand and act in real‑world settings. Motif Technologies announced that 3D AI startup N.Light and AI training data platform Crowdworks additionally joined its consortium, which aims to build a 300-billion-parameter LLM and scale it into VLM and vision-language‑action models (VLAs). N.Light will develop an AI-based 3D data pipeline that turns text or images directly into manufacturable, high-precision 3D computer-aided design models and automatically converts them into formats that simulators can use. It will also generate large synthetic datasets, collected through simulation, to train VLA models that jointly control vision, language and action, which are essential for physical-AI learning. Crowdworks is taking the role of core data provider for the team’s AI model to deliver high-quality data. It will focus its core capabilities on building datasets specialized for step‑by‑step reasoning to maximize the model’s capacity for intelligent reasoning. It will also deploy its proprietary unstructured data preprocessing solution, Alpy Knowledge Compiler, to convert complex documents such as tables and charts into data that AI systems can understand. Robotics company XYZ also joined the consortium to provide real-world data it has collected through its robots, while also gathering and refining multimodal datasets on human-robot interaction and high-precision manipulation data using its proprietary system. SK Telecom has not announced any additional members to its consortium since the first round.
Images (1):
|
|||||
| Qualcomm's partnership with Neura Robotics is just the beginning | … | https://techcrunch.com/2026/03/09/qualc… | 1 | Mar 16, 2026 08:00 | active | |
Qualcomm's partnership with Neura Robotics is just the beginning | TechCrunchURL: https://techcrunch.com/2026/03/09/qualcomms-partnership-with-neura-robotics-is-just-the-beginning/ Description: Neura Robotics is going to build new robots on top of Qualcomm's new IQ10 processors that were released at CES. Content:
TechCrunch Founder Summit 2026: Last day for ticket savings of up to $300. Register Now. Save up to $680 on your Disrupt 2026 pass. Ends 11:59 p.m. PT tonight. REGISTER NOW. Latest AI Amazon Apps Biotech & Health Climate Cloud Computing Commerce Crypto Enterprise EVs Fintech Fundraising Gadgets Gaming Google Government & Policy Hardware Instagram Layoffs Media & Entertainment Meta Microsoft Privacy Robotics Security Social Space Startups TikTok Transportation Venture Staff Events Startup Battlefield StrictlyVC Newsletters Podcasts Videos Partner Content TechCrunch Brand Studio Crunchboard Contact Us German robotics startup Neura Robotics has inked a partnership with semiconductor giant Qualcomm to build the next generation of robots and physical AI. The deal is the latest coupling in the emerging physical AI industry between robotics startups and larger tech hardware and software companies. While no specific products were mentioned in the Monday announcements, the companies will work together to build the “brain and nervous system” of robots in a quest to advance the deployment of humanoid and general-purpose robots in the real world in both domestic and industrial settings. More specifically, Neura will use Qualcomm’s Dragonwing Robotics IQ10 processors as reference designs in its robots. This IQ10 series was announced at CES earlier this year, and these chips are designed to work with autonomous mobile robots (AMRs) and humanoids. Neura also plans to use its Neuraverse robotic simulation and training platform, which was released in June 2025, to test and fine-tune the robots running on Qualcomm’s IQ10 processors. “This collaboration marks a major step toward making physical AI real: open, scalable, and trusted,” David Reger, CEO and founder of Neura Robotics, said in a press release. “By bringing together our cognitive robotics platforms and the Neuraverse ecosystem with Qualcomm Technologies’ leadership in edge AI and connectivity, we’re aiming to accelerate a future where cognitive robots operate safely alongside humans across industries and throughout everyday life.” This deal makes a lot of sense for both sides. And it’s a formula that will likely become a popular strategy for robotics companies trying to bring their products into the real world. For instance, Boston Dynamics announced a strategic partnership with Google DeepMind in January to speed up the development of the robotic company’s Atlas humanoid robot by using Google’s AI foundational models. While Boston Dynamics and Neura’s respective partnerships deal with different technologies — AI models versus chips — the same conclusion can be drawn. Instead of these two companies just being customers of tech vendors, partnering allows for these robotic companies to better use and embed these technologies. A robotic company that has technical prowess in software will have a much easier — and likely cheaper — path to market and scale through partnering with hardware companies that have already figured out tough technical challenges like building robotics hands with dexterity, for example. In Neura’s case, the company gets to build and test robots designed for the chips they are running on while Qualcomm gets an intimate look at how robotic companies can use its processors. As more AI companies like Nvidia look to physical AI as the next major market for their technology, they are going to want a seat at the table of how their tech is being used. The upshot: expect more partnerships. Topics Senior Reporter, Venture Becca is a senior writer at TechCrunch that covers venture capital trends and startups. She previously covered the same beat for Forbes and the Venture Capital Journal. You can contact or verify outreach from Becca by emailing rebecca.szkutak@techcrunch.com. Actively scaling? Fundraising? Planning your next launch?TechCrunch Founder Summit 2026 delivers tactical playbooks and direct access to 1,000+ founders and investors who are building, backing, and closing.Register by March 13 to save up to $300. ‘Not built right the first time’ — Musk’s xAI is starting over again, again Lovable says it added $100M in revenue last month alone, with just 146 employees DOGE employee stole Social Security data and put it on a thumb drive, report says Meta acquired Moltbook, the AI agent social network that went viral because of fake posts Google rolls out new Gemini capabilities to Docs, Sheets, Slides, and Drive Yann LeCun’s AMI Labs raises $1.03B to build world models Anthropic launches code review tool to check flood of AI-generated code © 2026 TechCrunch Media LLC.
Images (1):
|
|||||
| Qualcomm Launches Dragonwing Robotics Hub, Announces NEURA AI Robotics Collaboration … | https://hothardware.com/news/qualcomm-l… | 1 | Mar 16, 2026 08:00 | active | |
Qualcomm Launches Dragonwing Robotics Hub, Announces NEURA AI Robotics Collaboration | HotHardwareDescription: Qualcomm and NEURA Robotics have announced a collaboration to integrate edge computing with embodied AI, aiming to move cognitive humanoid robots from research labs into large-scale industrial and domestic environments. Content:
Aaron Leong Stay updated with the latest news and updates. Subscribe to our newsletter! Home Reviews News Blogs Full Site Sitemap PC Components Systems Mobile IT Infrastructure Leisure Videos About Advertise News Tips Contact Privacy Policy HotTech Accessibility Shop Twitter Facebook YouTube RSS Or sign in manually:
Images (1):
|
|||||
| Should service robots have rights? | https://www.hospitalitynet.org/news/413… | 0 | Mar 16, 2026 00:03 | active | |
Should service robots have rights?URL: https://www.hospitalitynet.org/news/4130338.html Description: From hotel room deliveries to cooking, the use of service robots — some containing human characteristics in terms of appearance and communication — has grow... Content: |
|||||