Episode #527 - MCP Servers for Python Devs | Talk Python To Me Podcast

URL: https://talkpython.fm/episodes/show/527/mcp-servers-for-python-devs

Description: Today we're digging into the Model Context Protocol, or MCP. Think LSP for AI: build a small Python service once and your tools and data show up across editors and agents like VS Code, Claude Code, and more. My guest, Den Delimarsky ...
Den Delimarsky is a Principal Product Engineer at Microsoft working in the Core AI division, focusing on AI tools for developers. Den is one of the core maintainers of the Model Context Protocol (MCP), having initially joined the project through his expertise in security and authorization. When MCP first launched with an auth specification, Den identified opportunities to improve it for enterprise scale and worked with the Anthropic team to rewrite the authorization framework, which was merged into the June 2025 version of the protocol. Beyond MCP, Den helps drive projects like GitHub SpecKit, which enables spec-driven development with agentic coding tools. His work centers on building developer tools and experiences in the rapidly evolving AI ecosystem, including projects like Copilot and other Microsoft AI initiatives.

The Model Context Protocol solves a fundamental problem in AI systems: LLMs are trained on data that gets locked at a specific point in time, but users need to work with fresh, dynamic data. MCP provides a universal interface that allows any LLM or AI client to connect to data sources, applications, and services without custom integrations. Just as the Language Server Protocol (LSP) standardized how editors communicate with language tools, MCP standardizes how AI agents access external capabilities. The protocol is highly opinionated about authentication, message passing, and primitive exposure, eliminating the inconsistency found in traditional REST API integrations. The protocol went from non-existent to widely adopted in less than a year, with major companies across banking, healthcare, and gaming now integrating MCP into their AI strategies. The composability of MCP means you can connect multiple servers to a single client, allowing an LLM to coordinate across different data sources and services seamlessly.

The Python SDK makes building MCP servers remarkably simple through the FastMCP framework, which provides a Flask-like developer experience. Creating an MCP tool is as straightforward as writing a Python function and adding a decorator. The SDK handles all the complex JSON-RPC envelope creation, streaming, and protocol compliance automatically, so developers can focus on business logic rather than protocol implementation details. FastMCP is integral to the official Python SDK and simplifies common pain points like authorization. The programming model supports async functions naturally, allowing you to await user input via elicitations without complex callback patterns. The framework also includes built-in support for structured output using Pydantic models, progress reporting, and image handling.

MCP servers expose three fundamental primitives that LLMs can interact with. Tools are function calls that perform actions - think of them as API endpoints that do something, like sending an email, querying a database, or creating a 3D scene in Blender. Prompts are reusable templates that help LLMs interact with your server effectively, such as "create a recipe with substitutions." Resources allow LLMs to reference and work with entities like databases, files, or API objects. Each primitive serves a distinct purpose in the agent workflow: tools enable actions and side effects, prompts guide the LLM on how to best use your server, and resources provide structured access to data and entities. Together, these primitives create a complete interaction model that's both powerful and constrained enough to be reliable.
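To make that decorator model concrete, here is a minimal sketch of a tool in the official Python SDK's FastMCP style; the server name "Demo" and the add function are illustrative, not from the episode's code:

```python
from mcp.server.fastmcp import FastMCP

# Create a named server; clients display this name when they connect.
mcp = FastMCP("Demo")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    # The decorator turns the typed signature and docstring into the tool
    # schema; the SDK produces the JSON-RPC envelopes for you.
    return a + b

if __name__ == "__main__":
    mcp.run()  # defaults to the local stdio transport
```

The type hints double as the tool's schema, which is one reason typing and Pydantic models come up repeatedly in this episode.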
MCP servers can run in two distinct modes depending on your architecture needs. Local MCP servers use stdio (standard input/output) to communicate via native OS constructs between the MCP client and server processes. This is perfect for development machines where you want your editor or AI tool to access local capabilities without network overhead. Remote MCP servers use streamable HTTP and can be hosted anywhere - AWS, Azure, GCP, your home lab, or behind a reverse proxy like Nginx or Caddy. The transport layer is abstracted by the SDK, so the same server code can work in both modes with minimal changes (a minimal code sketch appears below). For local development with remote access, tools like Tailscale provide secure overlay networks without exposing ports or configuring complex VPN setups. This makes it trivial to run an MCP server on your home lab and access it securely from anywhere.

The MCP Registry launched in September 2025 as a centralized API that indexes all publicly available MCP servers. Think of it like Docker Hub for MCP servers - you can discover servers, but you're not required to use the registry. The registry supports both public servers (like the GitHub-maintained registry) and private enterprise registries for internal company use. This allows organizations to maintain approved MCP servers behind security gates while still benefiting from the discoverability infrastructure. Discovery is improving rapidly with better integration into clients like VS Code, Cursor, and Claude Desktop. The Awesome MCP Servers list on GitHub has become a valuable community resource with hundreds of servers categorized by function - from biology and medicine to gaming, marketing, and sports analytics.

Security and authorization were Den's entry point into MCP development. The June 2025 spec introduced formal OAuth 2.1-based authorization, eliminating the need for developers to implement custom auth flows or check API keys into source control. The brilliant part is that MCP server developers don't need to become OAuth experts - the SDKs handle it. For consumers, authentication is as simple as logging in when you connect a server. The client bootstraps the auth flow, stores tokens securely, and ensures you access only your data. MCP servers can specify whether they use API keys (stored in configuration) or OAuth (handled via standard browser-based login flows). This approach scales from hobby projects to enterprise deployments where data access controls are critical. The standardization means you don't face "17 different dances" to get authentication tokens from different services.

GitHub SpecKit represents Microsoft's hypothesis for how spec-driven development works with AI coding tools. The approach starts with defining what and why you're building in a specification document, then breaks down the technical implementation plan, and finally decomposes it into consumable tasks that AI can execute iteratively or in parallel. This isn't the only way to do spec-driven development, but it provides a recipe book and ingredient box for teams wanting to adopt this workflow. The philosophy recognizes that there's no single correct approach to spec-driven development - it depends on your models, team structure, and project complexity. However, starting with a thorough planning phase using high-quality models, then executing with faster models guided by those specs, has proven effective for managing AI agent workflows on complex projects.
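Picking up the transport discussion above: in the official Python SDK, the same FastMCP server can be started on either transport by changing a single argument. The server object here is the hypothetical "Demo" one from the earlier sketch:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Demo")

if __name__ == "__main__":
    # Local mode: exchange JSON-RPC messages over stdin/stdout with a
    # client (editor, agent) running on the same machine.
    mcp.run(transport="stdio")

    # Remote mode: serve Streamable HTTP instead, suitable for hosting
    # behind Nginx/Caddy or on a cloud provider. Uncomment to use:
    # mcp.run(transport="streamable-http")
```

Because the transport is chosen at startup rather than baked into the tool code, the same server can be developed locally over stdio and then deployed remotely without changes to its tools, prompts, or resources.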
The MCP ecosystem has exploded with creative and practical implementations. The Blender MCP server lets you describe a medieval scene with a dragon and lighting, and it builds it for you by translating high-level descriptions into Blender's native API calls. Gaming servers exist for Unity 3D, Minecraft, and even analyzing Halo stats. Marketing professionals can connect Facebook Ads, Google Ads, and Amazon Ads MCP servers to ask "how are my ads performing this week" across all platforms without clicking through dashboards. Sports enthusiasts can use the Strava MCP for running and biking analytics, or the Formula 1 Multiviewer MCP that controls viewing angles and telemetry during live races. For developers, there are Jira and Atlassian MCP servers to automate bug triage and ticket management. The diversity shows MCP's flexibility - it's not just for data retrieval, but for controlling applications, analyzing information, and automating workflows across domains.

Retrieval Augmented Generation (RAG) and MCP serve different purposes in the AI architecture landscape. RAG builds vector databases to augment an LLM's context with additional knowledge, helping it understand what exists in a codebase or documentation set. It's primarily about giving the LLM more relevant context for making decisions. MCP, on the other hand, provides universal access to live data and actionable capabilities. It's not just about knowing what exists - it's about doing something with that information. While RAG helps an LLM understand that an authorization component exists in your codebase, MCP lets it actually invoke authentication services, update records, or chain multiple actions across services. The two technologies complement each other: RAG for knowledge augmentation and MCP for capability extension. Many real-world AI applications benefit from using both - RAG for understanding context and MCP for taking action.

There's ongoing debate about whether specialized local models or general-purpose cloud models work better for specific tasks. Den's perspective is that general-purpose models like Claude and GPT-4 will typically outperform local models for most scenarios due to superior training resources and compute power. However, local models excel for privacy-sensitive workloads - like organizing family photos without sending them to remote servers - or domain-specific tasks where a small, focused model can be as effective as a large general one. MCP enables an interesting hybrid approach: use powerful general-purpose models for orchestration and decision-making, but delegate specific subtasks to specialized local models or services via MCP servers. For example, a general model could coordinate a photo-organizing workflow while a local computer vision model handles the actual image analysis. This composability allows building sophisticated systems that balance capability, privacy, cost, and latency.

The Python MCP SDK prioritizes developer experience through familiar patterns and minimal boilerplate. The decorator-based approach (@mcp.tool) mirrors Flask and FastAPI, making it immediately intuitive for Python web developers. Async/await support is first-class, allowing natural progress reporting and elicitations without callback hell. The SDK has 143+ contributors, ships releases every few days, and maintains "good first issue" tags for new contributors. Documentation and samples are comprehensive, with the official Python SDK repo containing multiple example servers. The team actively solicits feedback and iterates quickly on developer pain points.
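As a sketch of that async programming model: the tool name and its steps below are invented for illustration, while Context, report_progress, and info come from the official SDK, which injects the context object based on the type annotation:

```python
from mcp.server.fastmcp import FastMCP, Context

mcp = FastMCP("Podcast Tools")

@mcp.tool()
async def process_episode(episode: int, ctx: Context) -> str:
    """Hypothetical long-running tool that reports progress while it works."""
    steps = ["download audio", "generate transcript", "summarize"]
    for i, step in enumerate(steps):
        await ctx.info(f"Episode {episode}: {step}")   # log message to the client
        await ctx.report_progress(i + 1, len(steps))   # drives client progress bars
    return f"Episode {episode} processed"
```

Because the function is a plain async def, there is no callback plumbing: the client sees streamed progress and log messages while the tool runs.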
Installation is as simple as uv add mcp or pip install mcp, and you can have a working MCP server in under 10 lines of code. The combination of low barrier to entry and production-ready features makes MCP accessible to Python developers at all skill levels.

While MCP provides secure authentication mechanisms, users must still exercise caution when installing third-party MCP servers. Like any software that accesses your data, you should verify the source and understand what an MCP server does before connecting it. An MCP server that reads your iMessages to "sort by importance" could potentially scan for credit card numbers or social security numbers. The responsibility for vetting servers lies with the user, just as it does with browser extensions or system-level applications. Best practices include reviewing source code for open-source MCP servers, starting with servers from trusted organizations, using private registries for enterprise deployments, and being cautious about granting broad permissions. Never check API keys into source control - use environment variables or OAuth flows instead; a short sketch of this pattern follows the quotes below. The MCP community is working on improved discovery with trust signals, but individual diligence remains essential for security.

"Think about it like last year at this time, like at the time when we were recording the work item episode, MCP did not exist. Just not a thing. And now everybody's talking about MCP. Like you talk to any big companies, you talk to like the banks, the healthcare, the gaming, everybody, everybody cares about MCP." -- Den Delimarsky

"The way the folks at Anthropic have been describing it, it is USB-C for AI." -- Den Delimarsky

"Look at the simplicity of this. You literally have a Python function, you have def add, and there are your arguments you would pass to the function, like two integers. And then all you need to do to make that a tool that an LLM can invoke is just add that @mcp.tool decorator. That's it. You're not going and crafting elaborate JSON-RPC envelopes and converters and all these things." -- Den Delimarsky on the developer experience

"I'll tell you what, the LLMs are getting really good at analyzing the stats. You give them the data, they can make some conclusions." -- Den Delimarsky on his Halo stats MCP server

"Do you remember the days when you had to work... this episode is not sponsored by Tailscale, for the record. Should be." -- Den Delimarsky and Michael Kennedy discussing VPN complexity vs. Tailscale simplicity

"The power is composability. It's the fact that you can compose things together and have them work together based on the prompts that you have and scenarios that you have." -- Den Delimarsky

"There's an MCP server for everything. Like, this list is massive. I'm actually like, every time I discover these things, I was like, oh, I didn't know there was one for Multiviewer." -- Den Delimarsky exploring the Awesome MCP Servers list

"These are the life hacks you learned only from this podcast. Query all the bugs assigned to me, reassign them to somebody else." -- Den Delimarsky joking about Jira MCP automation

"Exercise caution, just like you would exercise with any other software and APIs and websites where you log in, because the responsibility is kind of on you to figure out what's safe, what's not." -- Den Delimarsky on MCP server security
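Tying back to the "never check API keys into source control" advice: here is a minimal sketch of the environment-variable pattern. The server, tool, and STATS_API_KEY variable name are all hypothetical:

```python
import os

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Stats")

# Read the secret from the environment at startup; nothing sensitive
# ever lands in the repository this way.
API_KEY = os.environ.get("STATS_API_KEY")
if not API_KEY:
    raise RuntimeError("Set STATS_API_KEY before starting the server")

@mcp.tool()
def fetch_stats(player: str) -> str:
    """Query a hypothetical stats API for a player."""
    # Real code would call the upstream API here (e.g., with httpx),
    # sending API_KEY in a header rather than hardcoding it anywhere.
    return f"stats for {player}"
```

For servers that use OAuth instead, the client bootstraps the browser login flow and stores the tokens, so no secret handling is needed in the server config at all.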
Model Context Protocol (MCP): An open protocol that provides a standardized way for AI applications to connect to data sources, services, and tools. It acts as a universal translation layer between LLMs and external systems, similar to how LSP standardized language tooling for editors.

MCP Server: A service that implements the MCP specification and exposes tools, prompts, and resources that AI clients can use. Servers can run locally via stdio or remotely via HTTP.

MCP Client: An application or editor that connects to MCP servers and makes their capabilities available to LLMs. Examples include VS Code, Cursor, Claude Desktop, and custom applications.

Tools: Function calls that MCP servers expose to LLMs, allowing them to perform actions like querying databases, sending emails, or controlling applications.

Prompts: Reusable templates that MCP servers provide to guide LLMs on how to interact effectively with their capabilities.

Resources: References to databases, files, or API entities that MCP servers make available to LLMs for data access and manipulation.

Elicitations: A mechanism for MCP servers to request structured input from users during tool execution, enabling confirmation dialogs, dropdown selections, and data validation.

FastMCP: The primary framework within the Python SDK that provides a Flask-like, decorator-based programming model for building MCP servers quickly.

stdio Transport: A local communication method where MCP servers use standard input/output pipes to exchange JSON-RPC messages with clients on the same machine.

Streamable HTTP Transport: A remote communication method where MCP servers expose HTTP endpoints for JSON-RPC message exchange, enabling cloud deployment and distributed architectures.

JSON-RPC: The underlying message format used by MCP for communication between clients and servers, abstracted away by SDKs for developer convenience.

MCP Registry: A centralized index of available MCP servers, similar to Docker Hub, that enables discovery and installation of servers into MCP clients. Supports both public and private registries.

OAuth 2.1: The authentication and authorization standard used by MCP for secure access to protected resources, handled automatically by SDKs.

RAG (Retrieval Augmented Generation): A technique that builds vector databases to augment LLM context with additional knowledge, complementary to MCP's action-oriented approach.

Spec-Driven Development: A development methodology where projects start with detailed specifications that guide AI coding tools through implementation, promoted by GitHub SpecKit.

If you want to dive deeper into the topics covered in this episode, these courses from Talk Python Training can help you build the foundational skills and advanced techniques you'll need.

LLM Building Blocks for Python: This concise 1.2-hour course teaches you to move beyond basic "text in, text out" with LLMs, covering structured data, chat workflows, async pipelines, and caching - essential skills for building MCP servers that integrate AI capabilities.

Modern APIs with FastAPI and Python: Since FastMCP uses FastAPI-like patterns, this course provides deep knowledge of building modern Python APIs with type hints, async/await, and data validation - all of which directly apply to MCP server development.

Async Techniques and Examples in Python: MCP servers heavily use async/await for streaming responses and progress reporting. This course covers Python's entire async ecosystem, from basic async/await to parallel processing and thread safety.

Rock Solid Python with Python Typing: Type hints are fundamental to MCP servers and structured output with Pydantic.
Learn how to use Python's typing system effectively, which powers frameworks like FastAPI and FastMCP.

Build An Audio AI App: This course combines AI, FastAPI, and MongoDB to build real applications - a perfect companion for creating MCP servers that work with audio content, transcripts, and multimedia data.

The Model Context Protocol represents a fundamental shift in how we build AI-powered applications. Rather than creating custom integrations for every data source and service, MCP provides a universal standard that works across LLMs, editors, and agentic tools. The Python ecosystem has embraced MCP with remarkable speed, delivering a developer experience that feels as natural as Flask or FastAPI while handling the complexity of JSON-RPC, streaming, and authentication behind the scenes.

What makes MCP truly powerful is its composability. You can connect multiple servers to a single client, enabling LLMs to coordinate sophisticated workflows across different services. The registry ecosystem is exploding with servers for everything from 3D modeling in Blender to analyzing Formula 1 telemetry to automating Jira tickets. Yet beneath this diversity lies a consistent, well-designed protocol that makes both building and consuming MCP servers straightforward.

For Python developers, now is the perfect time to explore MCP. The barriers to entry are low - you can have a working server in minutes. The community is active and welcoming, with good first issues available for contributors. The use cases span every domain imaginable, from enterprise data integration to creative hobby projects. Whether you're building the next generation of AI agents or simply want to give your AI tools access to your custom data, MCP provides the plumbing that just works. As Den put it, "MCP can do anything - it's just a pipe. What you do with that pipe is up to you."

00:00 On this episode, we're digging into the Model Context Protocol, or MCP. 00:04 Think LSP for AI. Build a small Python service once, and your tools and data show up across 00:11 editors and agents like VS Code, Claude Code, and more. My guest, Den Delimarsky from Microsoft, 00:17 helps build this space and keeps us honest about what's solid versus what's just shiny. 00:23 We'll keep it practical, transports that actually work, guardrails you can trust, 00:27 and a tiny server you could ship this week. 00:29 By the end, you'll have a clear mental model and a path to plug Python into the internet of agents. 00:36 This is Talk Python To Me, episode 526, recorded September 30th, 2025. 00:43 Talk Python To Me, yeah, we ready to roll. 00:46 Upgrading the code, no fear of getting old. 00:48 Async in the air, new frameworks in sight. 00:51 Geeky rap on deck. 00:52 Quart crew, it's time to unite. 00:54 We started in Pyramid, cruising old school lanes. 00:57 Had that stable base. Yes. Welcome to Talk Python To Me, the number one podcast for Python 01:02 developers and data scientists. This is your host, Michael Kennedy. I'm a PSF fellow who's been coding 01:07 for over 25 years. Let's connect on social media. You'll find me and Talk Python on Mastodon, 01:13 Bluesky, and X. The social links are all in the show notes. You can find over 10 years of past 01:19 episodes at talkpython.fm. And if you want to be part of the show, you can join our recording 01:24 live streams. That's right. We live stream the raw uncut version of each episode on YouTube. 01:30 Just visit talkpython.fm/youtube to see the schedule of upcoming events.
And be sure to 01:36 subscribe and press the bell so you'll get notified anytime we're recording. This episode is sponsored 01:41 by Posit Connect from the makers of Shiny. Publish, share, and deploy all of your data projects that 01:47 you're creating using Python. Streamlit, Dash, Shiny, Bokeh, FastAPI, Flask, Quarto, Reports, 01:54 dashboards, and APIs. Posit Connect supports all of them. Try Posit Connect for free by going to 02:00 talkpython.fm/posit, P-O-S-I-T. And it's brought to you by Nordstellar. Nordstellar is a 02:07 threat exposure management platform from the Nord security family, the folks behind NordVPN, 02:13 that combines dark web intelligence, session hijacking prevention, brand and domain abuse 02:19 detection, and external attack surface management. Learn more and get started keeping your team safe 02:24 at talkpython.fm/nordstellar. Hey, I want to take just a minute and talk to you guys. I just 02:31 released a really cool new course called Agentic AI Programming for Python Developers and Data 02:36 Scientists. You've heard me mention a couple times on the podcast how I've had some incredible success 02:42 with some of these Agentic AI coding tools. I hear people talking about how they're not really 02:47 working for them. And then I look at the results that I'm getting and think, wow, that's something 02:53 that would have taken two weeks. It's built in two hours and it's well factored and good looking code. 03:00 What gives? Why is this difference here? Well, I decided to create this course to share all the 03:06 things that I'm doing with these agentic coding tools with the idea of making you as successful 03:12 and productive as well. Yes, I know we're all tired about hearing about how AI is going to 03:17 change everything for software developers. 03:19 But there are some tools here that will give you truly difference 03:23 making levels of productivity. 03:25 And that's what this course is about. 03:27 So check it out at talkpython.fm/agenticai. 03:31 The links in your podcast player show notes. 03:33 Let's get to the interview. 03:35 Den, welcome to Talk Python To Me. 03:36 Great to have you here. 03:37 Hello, hello. 03:38 I'm excited to be here. 03:39 I'm a big fan of Talk Python. 03:41 I'm a big fan of you and I'm a big fan of Python. 03:43 So there we go. 03:45 Wow. 03:45 Thank you. 03:46 I've been on your show, Work Item, which was really fun. 03:49 Thank you for having me. 03:50 And now it's time to dive into your expertise. 03:53 We're going to talk agentic stuff, and especially we're going to talk model context protocol, MCP. 04:01 I think this is one of the really important layers that is kind of invisible, right? 04:05 A lot of the coding agents and coding AI and chat LLMs and all that, 04:10 that's what people think when they hear all these things. 04:13 But there's got to be plumbing, right? 04:15 We're going to talk plumbing. 04:16 - There has to be, yeah. 04:17 - Nothing is more amazing than plumbing. 04:18 Like we all get excited about plumbing. 04:20 So no. 04:21 - I know. 04:22 - Technology plumbing is cool. 04:25 - Yeah. 04:25 I mean, it's one of those things too, that look at how fast it grew. 04:28 Think about it like last year at this time, like at the time when we were recording 04:32 the work item episode, MCP did not exist. 04:34 - Yeah. 04:34 - Just not a thing. 04:35 - That's wild. 04:35 - And now everybody's talking about MCP.
04:38 Like you talk to any big companies, you talk to like the banks, the healthcare, the gaming, 04:44 Like everybody, everybody cares about MCP. 04:46 That's great. 04:46 It's very great. 04:48 We're going to dive into it. 04:49 Before we do, let's dive into you. 04:51 Give us a quick background on yourself. 04:53 Absolutely. 04:53 So I am Den Delimarsky. 04:54 I am a principal product engineer at Microsoft. 04:57 I work in the core AI division. 05:00 So we're focusing on, as the name suggests, AI stuff, but applied to developers. 05:06 So I'm very, very heavily in the developer ecosystem. 05:09 And I'm one of the core maintainers of the Model Context Protocol. 05:13 So I say one of because there's many of us. 05:15 It's not just me. 05:16 There's many wonderful, talented people way smarter than me. 05:19 And yeah, that's a short intro. 05:21 Okay. 05:22 So when we talk about MCP, you're one of the people helping build it. 05:26 That's incredible. 05:27 That is correct. 05:29 Yeah. 05:29 I try to contribute as much as I can. 05:31 Well, you know, before we move on, just how'd you get into that position? 05:36 Oh, it all started with one of the things that was actually near and dear to my heart, 05:40 which is security and authorization. 05:41 So when MCP first came out, it had an auth spec. 05:45 So we see on the screen right now, Michael is showing the kind of the Model Context Protocol 05:49 specification page. 05:51 But when MCP first started, it had essentially a spec that outlines how to do authorization 05:57 for MCP servers. 05:58 And that spec was a good start, but it made a lot of assumptions about the infrastructure 06:04 and the tooling and how developers build MCP servers that were, I want to say, a little 06:08 flawed at scale. 06:09 So my thought was like, oh, I'll just get some smart people with me and we'll help rewrite this. 06:15 And we asked the MCP folks at Anthropic and they said yes. 06:18 And so we did. 06:19 And I basically like incorporated all the feedback and iterated on it. 06:23 And then again, it's a massive community effort. 06:26 We pushed it out and got it merged in the June version of the protocol. 06:29 And then the folks at Anthropic just reached out and said, hey, do you want to help shape the protocol? 06:35 And here I am helping shape the protocol. 06:37 You seem to know what you're talking about and you sure are participating a lot. Why don't you just hang around? 06:41 Yeah, basically. 06:42 Okay, that's great. And you work at Microsoft. What do you do there? 06:46 That is correct. At Microsoft, I work on developer tools. So think like if you ever use Copilot, if you ever use any, oh, by the way, GitHub Spec Kit for folks that have not heard about it, we released it like last month. 06:58 But that's something that I helped drive and help maintain is how do you do spec driven development with agentic tools, agentic coding tools? 07:06 Yeah, that's what I do. 07:07 Okay, cool. 07:08 So something that I've started to do a lot when I'm involving AI, I go in like spurts. 07:14 I'll work for a long time, just sort of writing regular. 07:16 And then I'm like, ah, this is really a lot of drudgery, not critical or central to what I'm doing. 07:21 Let me just uncork some agentic AI on it and let it go. 07:25 But one of the things I've started doing a lot, and it has to do with the spec thing that you've touched on here, 07:30 is I will force, I'll pick a really high level model, like a complex smart model. 07:36 And I'll say, I want to plan this out.
07:39 I've given you some ideas, look at the code and let's create a detailed plan 07:42 of what you're gonna do. 07:43 And I'll have it write a markdown file. 07:45 And even though a lot of my projects, I have just a plans folder 07:47 and it's just full of all these different projects. 07:49 You know, maybe they're sort of equivalent to a PR in the end. 07:52 - Yeah. 07:52 - And I'll plan that out really well. 07:54 Then I'll switch it down to a lower model, to a new context and say, let's just do phase one. 07:58 Let's do phase two and knock it out. 08:00 That sounds like a "Michael just made up some stuff" equivalent of the spec-based programming. 08:06 Is that right? 08:06 Like, how does that compare to what you're talking about here? 08:09 It's close. 08:10 It's very close. 08:10 And this is where when people talk about spec driven development, I want to emphasize 08:14 the fact that there's no one correct approach. 08:17 Like people think that it's like, oh, I'm just going to wait for whatever company is 08:20 going to come out and come up with the right thing. 08:21 Like it all depends on your experience. 08:23 It depends on your models. 08:25 The Spec Kit project that we launched is our hypothesis, our experiment on how we believe this should work. 08:30 And what it does is basically what you describe. 08:32 You start with a spec. 08:34 You start outlining what and why I am building. 08:37 Then you focus on the technical implementation plan, which is like, OK, now what technology stack I'm using here. 08:43 And then you break that down into tasks, which are basically just consumable chunks that the AI can go and either iteratively or in parallel execute and build the stuff that you want to build. 08:54 So all of it, again, is still an experiment. 08:56 So I'm not by any stretch claiming that what we have is the end of it all or the right way to do this. 09:02 There's many, many ways to do this. 09:04 Okay. 09:04 And you even over on DevBlogs wrote, diving into spec-driven development with GitHub SpecKit. 09:11 That is correct. 09:11 There's also a GitHub blog that I highly recommend folks check out. 09:14 It's actually on the github.blog. 09:17 So you can go there and look for, there you go. 09:19 It's called Spec-Driven Development with AI. 09:21 Get started with a new open source toolkit. 09:23 And we do have an open source toolkit. 09:25 All right. So how is this different than just what I've done? I know I've seen this before. 09:29 Yeah. Okay. Yeah. It just, all it does is think of it as this is the recipe book, 09:34 right? Like if you decided to like, Oh, I want to cook up a new application and you're like, 09:39 well, what's the recipe? Like this bundles the recipe for instance. And by the way, 09:43 here's the box set of ingredients that you can just use to build this. That's what this is. 09:47 That's SpecKit. Okay. Well, very exciting. Let's maybe start to get into the main topic though. 09:54 So MCP servers. 09:56 I've heard this put out as sort of an analogy to the LSP, which I know is, I first heard of it in VS Code. 10:05 I don't know if it came from VS Code. 10:07 Maybe it did, but it's the thing that allows so many different editors to plug into tooling 10:14 like PyLance or Pyrefly or ty or a bunch of cool things are coming out around here, 10:19 different implementations of LSPs. 10:22 And I've heard that MCPs are kind of like that for AI. 10:26 Maybe contrast those a bit for people. 10:30 Yeah.
10:30 I mean, if you look at the MCP specification, if you look through the website 10:34 and just peruse through the documentation, you might have like faint echoes of LSP design decisions, 10:40 faint echoes of kind of the LSP architecture. 10:42 But yes, basically think of it this way. 10:44 The way the folks at Anthropic have been describing it, it is USB-C for AI. 10:50 And when I say that is the problem with a lot of the LLMs, a lot of the modern models is the fact that it takes some amount of time to train them, which means that inherently they get locked into a specific training date, if you will. 11:06 So the corpus of knowledge that gets embedded in them gets locked at a certain date. 11:11 And when you talk to a lot of enterprise customers, you talk to a lot of customers in the wild, 11:15 it doesn't need to be enterprise, by the way, it could be startups, could be hobbyist developers 11:18 like, well, I want to use AI with this fresh data that I have. 11:23 Maybe I have, I don't know, a Dropbox account and I want to use AI to sort my files. 11:27 Or maybe I want to use some data inside Salesforce to go and help me analyze my sales and find 11:34 out outliers and maybe customers I want to focus on. And I just interviewed the people from NiceGUI, 11:40 and they build robots that cruise around in architectural areas. Like what maybe I want a, 11:48 I want some way to like ask AI, look at how the robots are doing now and then, or see if they're 11:55 busy, find a free one, right? That might be a thing, huh? Yeah. Yeah, no, for sure. Exactly. 12:00 It's like any kind of live data or managed data, something that is more dynamic than the corpus of knowledge that is embedded in these models by default. 12:09 And for those, if I would ask you like, OK, well, let's imagine a world where MCP does not exist. 12:14 How would you go about plugging this data in like into your LLM? Right. 12:19 And like there's different ways to do this. Like people have done like the RAGs. 12:25 People have done, you know, dump like CSV files and then be like, oh, analyze the CSV file and all these like hacky solutions. 12:31 But it feels like it's not universal. 12:34 It doesn't really work for all cases. 12:35 And something that you've done in one LLM doesn't work in another. 12:39 And now you're locked into this environment. 12:41 So it becomes very hard to manage. 12:43 So MCP is essentially the answer to that. 12:45 MCP says, look, we don't care what data you're connecting to, what applications, what actions. 12:50 we provide you a universal interface by which every single LLM, every single client that 12:57 understands MCP can invoke those primitives, get the data and embed the data in the context that 13:02 you're operating in. And that's another thing, important thing. People think of MCP as the data 13:06 connector, but it's not only a data connector. It's a, I want to call it like a primitive connector 13:11 because you can use MCP with a lot of wonderful things that folks have probably seen already. 13:15 Like I, my favorite example here is Blender MCP. Like for folks that don't know, Blender is a 3D 13:20 modeling tool. And there's an MCP server by which you can actually guide an LLM saying like, I am 13:25 building this like medieval scene with a dragon and the lighting and so on, and it goes and it just, 13:30 it builds it for you, right? Through this MCP and MCP is the connective layer between Blender, 13:36 which has its own native API.
And then there's the MCP server that the LLM knows how to talk to, 13:42 right? Because the LLM wouldn't know how to like, okay, how do you talk to Blender? How do you, 13:45 how do you go and set up the plugin and whatever the web sockets, whatever they might be using, 13:50 It's super complex, so it needs expertise, but an MCP server is essentially saying, I have these set of primitives that the LLM can invoke at any time, like create polygon or create scene or create sphere, and then based on that information, go and iterate on it. 14:03 So MCP is that adapter. 14:05 Yeah, I see. 14:06 So the LLM or agentic AI or whatever that you're working with, it says, all right, I'm going to talk to Blender. 14:12 Blender says, I have these core ideas, these core building blocks. 14:15 it sort of turns it more into Lego instead of just I'm going to have a saw or whatever I can 14:21 go. Exactly. Okay, I have spheres, I have cylinders, I have squares, I have shading. 14:28 They've asked me to do this. What can I build composing that sort of? Exactly. Precisely. Right. 14:33 So it's you're operating on a set of primitives, right? And this is where you don't even need to 14:37 expose the entirety of the surface of Blender APIs. You can just say like, oh, I want to have like, 14:42 there's the 10 primitives that I think are the most valuable. 14:44 I'm going to go ahead and use those. 14:46 And out of those, you compose things. 14:48 And maybe there's an advantage to that too, right? 14:50 Maybe you're like, I want to use Blender to create 2D scenes. 14:53 So I'm only going to expose stuff or rotations or whatever that preserves some sort of 2D view of the thing. 14:59 Like it's, we're doing CAD where it's top down from the side. 15:02 Like those are the ways you're going to look at. 15:03 You can't arbitrarily rotate it. 15:04 Yeah. 15:05 So yeah, so essentially like the MCP servers in this case act as a universal translation layer between whatever's downstream 15:11 of the MCP server, which can be an application, an API, a database, like anything. And the client, 15:18 which knows like, I know how to talk to MCP and nothing else. I have no idea what's behind. I 15:22 don't know what the REST API you have, what's the authentication authorization logic, 15:26 just give an MCP server. Okay. It sounds a little bit like an API. And by API, I mean, 15:32 yes, most general sense of the word not, oh, it's a REST API. And it makes sure it uses the verbs 15:38 this way. I mean, like anything that you you could sort of call and either get data or cause an 15:43 action that could be a REST API, but it could just be, you know, an OS level API or some something 15:49 like that. Yeah, right. Yeah, totally. I mean, it's all it is just a connective layer. So yeah, 15:55 and people often ask like, well, couldn't you do this with like REST APIs? Couldn't you do this 16:00 with a GraphQL APIs instead? Right? Because like, it's been invented. Why are we creating new 16:05 things. But the thing about this is, even if you look in the world of REST APIs, like think about 16:10 the last time you worked with a REST API from some vendor and then switched another REST API from 16:15 someone, how much of that knowledge was like one-to-one reused or the infrastructure that 16:19 you built or authentication logic? You have like, you have these like 17 different dances by which 16:23 you get the token, right? And MCP essentially is the opinionated version of saying, no, this is how 16:31 you do auth.
This is how you do message passing between entities. This is how you expose primitives. 16:37 It's a highly opinionated stack. This portion of Talk Python To Me is brought to you by Sentry's 16:43 AI agent monitoring. Are you building AI capabilities into your Python applications? 16:49 Whether you're using OpenAI, local LLMs, or something else, visibility into your AI agent's 16:55 behavior, performance, and cost is critical. You will definitely want to give Sentry's brand new 17:01 AI agent monitoring a look. 17:03 AI agent monitoring gives you transparent observability into every step of your AI features 17:09 so you can debug, optimize, and control the cost with confidence. 17:14 You'll get full observability into every step of your AI agent. 17:17 That is model calls, prompts, external tool usage, and custom logic steps. 17:23 AI agent monitoring captures every step of an AI agent's workflow 17:27 from the user's input to the final response. 17:30 And your app will have a dedicated AI agent's dashboard showing traces and timelines for each agent run. 17:37 You'll get alerts on model errors, latency spikes, token usage surges, and API failures protecting both performance and cost. 17:46 It's plug-and-play Python SDK integration. 17:49 OpenAI for now, for Django, Flask, and FastAPI apps with more AI platforms coming soon. 17:55 In summary, AI agent monitoring turns the often black box behavior of AI in your app 18:01 into transparent, debuggable processes. 18:05 If you're adding AI capabilities to your Python app, give Sentry's AI agent monitoring a look. 18:10 Just visit talkpython.fm/sentry agents to get started and be sure to use our code, TALKPYTHON, one word, all caps. 18:19 The link is in your podcast player's show notes. 18:22 Thank you to Sentry for supporting Talk Python To Me. 18:25 And so then once you've implemented them, we call them the hosts like VS Code or PyCharm or Cursor or whatever, Claude Code, it knows, all right, here's how I inspect the capabilities of this thing. 18:38 Here's how I stream back the responses if it's going to take it 10 minutes to do what I asked it. 18:43 This is how you do it with streaming HTTP APIs and so on. 18:46 Precisely, right? 18:47 Because you only need to then implement once. 18:50 And especially if you use one of the existing MCP SDKs that we're going to be talking down 18:54 the line, like that's the core value prop is like you do it once and it just works. 18:59 You don't need to worry about like, oh, but this other MCP server decided to implement 19:02 their auth in a completely different way. 19:04 What do I do now? 19:05 Yeah. 19:05 Now, if I want to build one of these things, does it does it have to be implemented in 19:11 an LLM or can I build just a traditional FastAPI API that ultimately does queries against 19:19 a database with no, no prompt? 19:22 Yeah, no, I mean, like MCP servers themselves are just essentially entities that they're 19:29 capable of exchanging JSON RPC messages. 19:31 Like you can absolutely, you can write a client that is completely detached from an LLM and 19:35 just invokes tools, right? 19:36 Okay. 19:37 Right. 19:37 Awesome. 19:38 You can, if you want to, I don't know why you would do that, but you 19:41 Absolutely can. 19:42 I'm sure people have a reason. 19:44 So I see a comment out in the audience from Frankie about RAG. 19:48 And also, you mentioned RAG at the beginning. 19:51 You say, well, maybe RAG is not working for you. 19:53 Let's just sort of contrast that a bit, right?
19:56 Like, maybe not everyone knows what RAG is. 19:58 Retrieval Augmented Generation. 20:00 What is this? 20:01 Yeah, essentially, if you have a way for you to optimize, basically, the context for the LLM. 20:07 I'll put it this way. 20:09 in very layman's terms, it's like, I have a code base. 20:12 I have a code base that has a number of entities like classes and functions and everything. 20:19 And in a RAG, you're essentially building a vector database 20:23 that says like, okay, here's the list of things that exist. 20:26 And then the LLM, you can go and query this thing and find out what exists in this code base. 20:31 So if you make decisions about like, I want to build an authorization component, 20:35 how do I do this? 20:36 Like, okay, well, it can build out that context for itself. 20:39 This is kind of the very basic idea behind the RAG. 20:42 Got it. 20:43 So instead of trying to put all the information just into a prompt, 20:46 that it has to read every time, you can kind of additionally train it on these things 20:50 and then keep the question shorter because it knows the details. 20:54 Right. 20:54 You essentially have a knowledge base that's outside of the primary training set. 20:59 Like that's the core value prop of this is you're augmenting the LLM 21:03 with additional knowledge that you have in the context that you're operating in. 21:07 Okay. So something I've wanted to build for a while, and I do intend to, but we'll see if I 21:13 ever get there, is something where people could go and have like an AI conversation with this 21:18 episode, for example, right? With something on the podcast, I've got 10 years of transcripts. 21:23 Yeah. 21:23 You know, like over a million words, I'm pretty sure. That doesn't fit in most contexts. And 21:33 thing for Talk Python. Maybe there's an MCP angle that's really interesting. Like, what could I do 21:40 with MCPs in the podcast, do you think? MCPs in a podcast. So one of them, of course, is like querying 21:45 the data, which is I want to make sure that, you know, find me all the episodes where I ever talked 21:51 with Michael about AI, right? It could be one thing. I actually think that because of the richness of 21:59 the MCP capabilities, to me, when it comes to like podcasts, I envision a world where I can use MCPs 22:03 to edit podcasts. That's my dream of this. And actually, like this is something that I've 22:08 been experimenting with because I haven't fully wrapped my head around kind of like how exactly that would 22:15 look like. But one of the things that I do, like as I'm sure you do when you edit the podcast, 22:18 you know, you have to go through it, generate the transcript, clean up things, then make sure that 22:24 you add timestamps, select the most interesting parts about the podcast. So can I potentially go 22:30 and say, okay, here's where my MP3 file is. 22:34 Can you go and generate transcript, clean it up, 22:37 and then find me the most interesting parts about this and then produce me a report 22:42 that I can then use to maybe like an HTML-based web app and I can just like a one-click save like publish, right?
22:49 And to me, like the value of maybe the MCP connector here is that maybe I can plug it in behind the scenes 22:54 with like FFmpeg to go and convert the MP3 into a WAV file and then use Whisper to go and generate the transcript and then go and extract things for it, 23:03 right? And for a lot of these pieces of the tasks that you need to do, you would imagine that you 23:07 would have a different tool inside my MCP server, which is, tool is one of the primitives that 23:11 basically an LLM invokes. And it says, oh, let me generate the transcript, and there's a tool that's 23:15 called generate transcript, and it's gonna use that to produce a transcript. And it's like, okay, 23:19 there's another tool that says... Yeah, yes, you could give, just give the LLM an episode number, 200 or 23:26 something. And it could go to your podcast MCP server and say, transcript for 200, even if it 23:33 doesn't exist, it'll figure it out and generate it, that kind of stuff. Yes. And also the wonderful 23:38 thing about LLMs and MCP servers is that you're not actually using just one MCP server, right? So 23:43 I might have an MCP server for myself that is basically, like I said, the one that generates 23:47 transcripts, you know, creates a landing page in my podcast website. And then based on that content, 23:54 there's also next steps. Now I have an MP3, I want to upload that MP3 to Cloudflare, where I host my 23:59 podcast. So there may be a Cloudflare MCP server that the LLM is going to invoke and say, I need to 24:04 now upload this. And then it's going to invoke the other MCP server, right? So you have this 24:08 basically stack of MCP servers that you can start using one with another. And that's where the 24:12 superpower comes from. Like you're not just using one application and saying like, okay, hold on, 24:15 let me, let me finish a task for podcast production. Then I'll do other things. Like it can chain 24:19 things together and then say, oh, and by the way, there's an MCP server maybe for audio conversion 24:23 that produces like 10 variations of the format. 24:26 Let me invoke that. 24:27 And then you're going through this process. 24:28 Yeah, I think that's one of the really big, hints at one of the really big differences 24:32 between just using a chat LLM versus some of the agent tool using types of things, right? 24:39 The ability to say, now I have to accomplish this task. 24:42 And I know I figured out there is some way I'm capable of accomplishing that, right? 24:46 Either that's to list a directory, to look for a file or to communicate with the Cloudflare MCP 24:52 that we talked about and so on. 24:55 Yeah, its power is composability. 24:57 I'll put it this way. 24:58 It's the fact that you can compose things together and have them work together 25:01 based on the prompts that you have and scenarios that you have. 25:04 Okay, cool. 25:05 So imagining the Cloudflare MCP thing exists, your podcast preparation MCP thing exists. 25:12 How does my AI know? 25:14 Let's keep it real basic. 25:15 Let's say I'm using Claude Code, but we could plug this into others, 25:19 but just even something just terminal-based, no UI or whatever, like, yeah, it just is it going to discover them just out of the blue? 25:26 Probably not all of them. You got to point it at them. And yeah, how does it know which ones it's 25:30 allowed to use in this context? Right? Like, how do I get it so I can actually use one of these? 25:34 And we'll talk about maybe building them. Yeah.
So for MCP servers themselves, you add them 25:39 explicitly to your host or your client, whatever that might be. VS Code, Claude Code, Claude Desktop, 25:44 doesn't matter. So you explicitly say, I want to use my podcast MCP. I want to be using my 25:49 Cloudflare MCP server. I want to use my, I don't know, Descript MCP servers to remove the ums and 25:54 uhs from the podcast, right? So you would essentially go through some means in that client, 25:59 on your client of choice, to go and add those MCP servers. Now, the question is, how do you 26:03 discover those MCP servers? So there's various places where you can go to. We just launched the 26:08 MCP Registry that is nothing short other than an API that indexes all of the available MCP servers 26:15 that are out there, right? 26:17 So we're looking right now at a blog post on the MCP blog. 26:20 It's called Introducing the MCP Registry that got published September 8th of this year. 26:24 So not that long ago, but basically- 26:26 22 days or something like that. 26:28 And when you say we, you're talking the official 26:30 modelcontextprotocol.io working group. 26:33 Yes, yes. 26:34 The Model Context Protocol folks, and there's a bunch of them 26:36 that were specifically focused on the registry, right? 26:39 And you see them in the authors like David Soria Parra, Adam Jones, 26:44 But they essentially were in charge of kind of building this out. 26:47 And the registry is a centralized API, essentially, that aggregates an index of MCP servers that are out there. 26:54 So you can use the registry inside your client, whatever client you might be using, to find MCP servers for what you want. 27:00 Maybe there is a Playwright MCP server. 27:02 Maybe there is a Perplexity MCP server. 27:05 So it's all coming from the registry. 27:06 Okay. 27:07 Sounds a little bit like Docker Hub. 27:09 Kind of. 27:10 Yes. 27:10 And just like Docker Hub, you actually don't need Docker Hub to install an MCP server. 27:14 or in this case, like a Docker container, right? 27:15 Like you can just go to random GitHub repos and find somebody to build an MCP server 27:19 for what you're trying to do, and you can just plug it in. 27:21 - Yeah, interesting. 27:22 Yeah, that's how I use Docker Hub by not using Docker Hub for all the stuff I build, 27:26 but you know, I get the foundations. 27:27 - I know it exists. 27:29 - Exactly, I'm like, ah, but I'm gonna build it here. 27:32 It also has the concept of public and private registries. 27:35 - Yes, yeah. 27:36 So public registry is essentially something that like GitHub, 27:40 by the way, maintains their own registry, right? 27:42 So it's public and you can just go and discover MCP servers through the GitHub registry or the public registry. 27:48 Also, we know that MCP servers are used within different companies. 27:51 You might have, let's say, some data that you're locking in behind seven gates that only certain people can access. 27:59 You can build internal MCP servers. 28:01 And for those things, you ship internal private registries where you can say, no, no, no. 28:06 I want my folks in my company to only access these servers and nothing else. 28:11 Right. 28:11 Sure. 28:12 Yeah, that makes sense. Is there a place that I can go to the Model Context Protocol registry, the MCP Registry and like browse it like you can? 28:20 Yeah. Yeah. So right now you can't browse it through a UI, but you can look at other registries that can consume some of the content from here.
28:29 So like I believe GitHub registry is one of the consumers. 28:32 So you can look at I think it's github.com slash MCP. 28:35 There we go. 28:36 Yeah. 28:36 Okay. 28:37 And you can see some of the registries and you can see like if you click on one of the install buttons it's going to like allow you 28:42 to take it directly into like VS Code and then just bring it in and install it 28:46 in the context of your editor. 28:47 - Okay, yeah, very nice. 28:49 So some of these are like web crawling, Notion. 28:52 Okay, I know Notion just added a big agentic AI thing and I've seen a lot of pushback. 28:58 There's probably a lot of happy users who just use it, but people are like, why is this in my way? 29:02 I just wanna work with this. 29:04 But you know, if you were, it'd be really cool to maybe plug that in instead of going, 29:08 we're gonna try to use the API to download this embedded database with the information. 29:13 - Exactly. 29:14 - Like you just talk to it, right? 29:15 - Exactly. 29:16 That's again, what I like about MCP is that if I want to connect to Notion to get my notebook 29:22 and some notes from my standup meetings, I don't need to worry about how they structure their API 29:26 and how to use auth or something. 29:27 Just install the Notion MCP and then ask the LLM, pull the latest notes and summarize them for me. 29:32 And then it's gonna know. 29:33 - It's their LinkedIn one. 29:34 Their API is so bad. 29:35 - Oh. 29:37 - Oh. 29:37 - Oh, it makes me sad. 29:39 For any LinkedIn people watching this, we need to have a LinkedIn MCP server. 29:42 - Yes, I think so. 29:44 It might save me. 29:45 Okay, very interesting here. 29:47 I think people should come here and just kind of poke around. 29:50 You can see there's a lot of, a lot of interesting things that I think might 29:53 spark some ideas. 29:55 - Yeah. 29:55 - As you start to play with it, you know, like Postman. 29:58 So I guess one of the problem, well, not one of the problems, 30:00 one of the things you're gonna want to deal with is, a lot of these I see here, 30:04 LaunchDarkly, Postman, Atlassian, Notion, and so on. 30:08 You got to pass things like I am this person. 30:11 Therefore, I want to see my information, not other people's or only public. 30:16 I got to see private info, but mine. 30:18 There's a whole security side. 30:19 And I think that's kind of how you got pulled into it, right? 30:22 Yeah. 30:22 Oh, yeah. 30:23 Yeah. 30:23 So for these things. 30:25 Yeah. 30:25 We just put like an API key in GitHub and you just check that in and just use that when you're trying to. 30:30 Don't do that. 30:31 Don't put API keys in GitHub and check them in. 30:33 What can be done, so starting with the latest spec of MCP that, again, shipped in June, there is a formal way for services to do authorization. 30:42 So it's based on OAuth, OAuth 2.1. 30:45 I know that there's people listening that's like, oh, no, did you just say OAuth? 30:49 I have to learn OAuth now. 30:50 You don't. 30:51 Again, there's a lot of libraries that do this. 30:53 If you're an MCP server developer, it's solved for you. 30:55 If you're an MCP server consumer, you don't even need to think about it. 30:58 So when you connect an MCP server, as a consumer, you'll essentially have the ability to log in with your credentials. 31:04 So if an MCP server, for example, for like we saw Chroma and we're like MongoDB, that's on the screen here.
31:10 If I use the MongoDB server and I want to connect to a database, usually they provide you a way to either one. 31:16 You go into your MCP server config and you say, I will give you an API key if your server is using an API key. 31:22 Or if it's using OAuth, then you can just essentially snap to using OAuth the standard flow. 31:28 Your client is going to bootstrap the authentication flow. 31:31 You're going to go to the box, enter your credentials, log in. 31:33 The client is going to store the tokens, and then you access the server with your credentials, so you're getting access to your data, not something else. 31:41 One thing that looks really interesting, and there's an example of it right here with the Nuxt one. 31:46 Never written a Nuxt app in my life, but here we have. 31:49 I have one that helps you understand your Vite Nuxt app. 31:53 One of the things that I think could be really interesting and probably MCPs could play a really important role is we have these huge foundation models, OpenAI and Claude Opus and so on, that are generally knowledgeable about the whole world and are big, expensive to train. 32:09 But I can see a future where we get good enough to have a bunch of small models. 32:13 Like this is the Vue.js model. 32:17 If you need to know Vue.js, it's as good as anything, but it runs on your computer in a gig of RAM because it's just trained so specifically on Vue. 32:26 And I feel like maybe you could MCP your way together like, well, I'm using this tech stack. 32:31 So we're going to click together a bunch of things that don't provide data, but provide information about what your architecture or something like that. 32:39 What do you think? 32:39 Yeah, I mean, I think it can go both ways, right? 32:41 Like there's a specialized model. 32:42 And there's an argument for saying that the more general scenarios would always work best. 32:47 Like there's, I think there's always two camps of those folks that I talk to. 32:50 I personally think that I think for certain things, there is a tremendous amount of value 32:55 for hyper centralized or hyper local models. 32:59 I'll give an example, right? 33:00 Like I want to organize the photos on my machine. 33:05 Like maybe I have a lot of duplicates that, you know, because when you take photos of your modern cell phones, 33:09 like just click, click, click, click. 33:10 and then you have like 10 images of your dog and you're like, they're kind of the same, 33:13 but I want to pick the best one. 33:14 Like from a privacy standpoint, like I don't want to send that off to some server 33:18 remotely somewhere with my photos, which, you know, there's like family photos. 33:22 There's all sorts of like stuff that I do not want to send off to some remote server. 33:25 For those things, I want to use a local model. 33:27 And maybe there's an MCP server that allows me to basically like, 33:30 oh, I can find the photos and then crop them and like add some metadata or remove metadata 33:35 or whatever I want to do, right? 33:36 So for those things, I absolutely see the value in these like local models 33:40 where I can just say, I want it to be very good at this one specific task and that task only. And I 33:45 will never use this photo model for web app creation, but photos is going to be darn good. 33:49 And I think there's a lot of value for that. And if you augment it with MCP, I think it's 33:53 superpowers right there. Yeah, it does seem like it could be.
It could be, this little step would 33:58 benefit from a local model, but I don't want to constrain the entire problem solving to a local 34:03 model. Right. I think that's kind of the problem. Like, I use LM Studio a lot, and, 34:08 for example, I have the OpenAI 20-billion-parameter open-weights model that I actually 34:13 program against. 34:14 And it does all sorts of cool stuff for me, but I don't use it for my general work, because 34:17 it's either too slow, because it's on my Mac mini, or I just want something that is better, 34:23 right? 34:23 Yeah. 34:24 And so if you're going to just start with, like, I'm using this model to solve this problem, 34:28 that might not be the final outcome where we end up, right? 34:34 This portion of Talk Python To Me is brought to you by Nordstellar. 34:37 Nordstellar is a threat exposure management platform from the Nord Security family, 34:41 the folks behind NordVPN, that combines dark web intelligence, session hijacking prevention, 34:47 brand abuse detection, and external attack surface management. 34:51 Keeping your team and your company secure is a daunting challenge. 34:55 That's why you need Nordstellar on your side. 34:57 It's a comprehensive set of services, monitoring, and alerts to limit your exposure to breaches 35:03 and attacks and act instantly if something does happen. 35:07 Here's how it works. 35:08 Nordstellar detects compromised employee and consumer credentials. 35:12 It detects stolen authentication cookies found in infostealer logs and dark web sources 35:18 and flags compromised devices, reducing MFA-bypass ATOs without extra code in your app. 35:24 Nordstellar scans the dark web for cyber threats targeting your company. 35:28 It monitors forums, markets, ransomware blogs, and over 25,000 cybercrime Telegram channels 35:34 with alerting and searchable context you can route to Slack or your IR tool. 35:39 Nordstellar adds brand and domain protection. 35:42 It detects cybersquats and lookalikes via visual and content similarity and certificate transparency logs, 35:49 plus broader brand abuse takedowns across the web, social, and app stores to cut the phishing risk for your users. 35:56 They don't just alert you about impersonation, they file and manage the removals. 36:00 Finally, Nordstellar is developer-friendly. 36:03 It's available as a platform and an API. 36:06 No agents to install. 36:08 If security is important to you and your organization, check out Nordstellar. 36:11 Visit talkpython.fm/nordstellar. 36:13 The link is in your podcast player's show notes and on the episode page. 36:17 Please use our link, talkpython.fm/nordstellar, so that they know that you heard about their service from us. 36:23 And you know what time of year it is. 36:25 It's late fall. 36:26 That means Black Friday is in play as well. 36:29 So the folks at Nordstellar gave us a coupon, BlackFriday20, that's BlackFriday, all one word, all caps, 20, two zero, that grants you 20% off. 36:38 So if you're going to sign up for them soon, go ahead and use BlackFriday20 as a code and you 36:43 might as well save 20%. It's good until December 10th, 2025. Thank you to the whole Nord Security 36:50 team for supporting Talk Python To Me. For sure.
And especially because, for a lot of the generalized 36:55 models, you're like, no matter how you look at this, you're not going to have the compute 36:58 resources anywhere near what, like, OpenAI or Anthropic has, right? So, like, in terms of speed and quality, 37:04 what are you going to get? You might get some, like, fine-tuned examples where some scenarios work very, 37:09 very well, but I think ultimately, if we look at the general use case, these generalizable models 37:14 are going to be ahead. Yeah, I definitely agree as well, but I hadn't really considered how MCPs 37:19 might allow you to use the really high-end models to compose specialized, not quite as generally 37:24 smart, but specialized versions of different things. It could be, yeah. MCP can do anything. MCP, again, is 37:31 just, it's a pipe. What you do with that pipe is up to you. Yeah. Well, let's talk about how one 37:37 might build such pipes with Python. So there's actually a modelcontextprotocol GitHub 37:45 organization. Within there, they have the python-sdk, the official Python SDK for MCP servers 37:51 and clients. So that's also interesting, the clients bit. So maybe we could kind of, like, 37:56 there's a lot of concepts and things here, and I don't want to dive too much into code, 38:01 but maybe we could work our way through some of the concepts and some of the steps of building 38:06 such a thing. Yeah, totally. Well, I mean, it all starts from just getting the SDK, right? And this 38:11 is for, like, anybody that's using Python. You can just get it through pip or uv. I'm a big fan of 38:16 the folks at Astral. I think they're doing a fantastic job with uv and uvx. Like, I use it for, 38:20 uh, GitHub Spec Kit. 38:21 So, you know, uv add "mcp[cli]". 38:25 And there you go, you can be on your way. 38:27 It's as simple as that. 38:28 - Yeah, okay, that'll do it. 38:29 And then, yeah, you can specify, like, the CLI options or whatever kind you want. 38:34 - Yeah, yeah. 38:35 And also it's using FastMCP. 38:37 Are you familiar with FastMCP? 38:39 - No, I know some projects with Fast in it, but not MCP. 38:42 - Yeah, so FastMCP is basically, think of it like FastAPI for MCP. 38:46 It's essentially allowing you to compose MCP servers faster, because it has a lot of the primitives baked in. 38:51 So things like authorization, which can be kind of a pain point, but if you use 38:55 FastMCP, it makes it a little easier. 38:57 And FastMCP is an integral part of the Python SDK story, for the actual, like, official Python 39:03 SDK. 39:03 Right. 39:04 The programming model looks like it would feel quite familiar to anyone who knows the 39:10 Flask API or beyond. 39:12 I think it's just, you know, a little sidebar. 39:14 I think it's really interesting how Flask is quite popular, but also, almost 39:20 every single web framework after it has kind of borrowed its programming model. 39:25 So even if you're not exactly using Flask, if you're using Litestar or FastAPI or whatever, 39:30 you're still kind of doing that kind of programming. 39:32 And it's the same here, right? 39:34 You create an MCP as the app, you say @mcp.tool or @mcp.prompt and you put these onto functions 39:42 and they now become webized. 39:44 Yeah. Isn't that, like, okay, like, I am not... like, I write Python, but I'm not a Python expert. I'm 39:50 sorry, Brett Cannon, if you're watching this. But, like, we'll take that part out, won't we? 39:55 Ah, such is streaming life. That's okay.
So, like, these... do you call them, in Python, do you call them 40:02 decorators? Or is it like attributes? Like in C#, it's attributes. Yeah, in C#, it's attributes. 40:07 You do it with square brackets. In Python, it's decorators, and you do it with the @ symbol. 40:11 Okay, so the decorators themselves. Look at the simplicity of this. Look at the screen right now, at a sample where we're looking at the actual Python SDK repo. And in one of the samples, you literally have a Python function, you have def add, and there are your arguments you would pass to the function, like two integers. And then all you need to do to make that a tool that an LLM can invoke is just add that @mcp.tool decorator. That's it. You're not going and crafting elaborate JSON-RPC envelopes and converters and all these things. 40:41 Like, all the stuff is done for you. Add a decorator, boom, you have a tool. That's it. Yeah, it's simple. 40:45 It's, yeah, it's really, really simple to program. And there's actually some fairly complicated 40:50 data exchange stuff going on, like streaming partial results as they come in, because 40:56 we're all used to two things: A, AI requests taking a real long time, but B, that you see the little dots, 41:03 thinking, thinking, and periodically, like, some stuff that's coming by, so you're like, oh yeah, okay, I see where 41:07 it's going. I don't know what it's going to come up with, but at least we can see it's working, 41:11 right? So to sort of keep that flow going, you've got the streaming style, right? 41:15 Exactly. And all of this is, like, again, I'm looking at the sample. It's so... 41:20 the way I would describe it is a delightful developer experience. If I'm a developer, 41:24 I focus on writing the core functions. I don't have to worry about, like, well, 41:27 how do I make this into a tool? Put a decorator on. That's how you make it a tool. 41:31 Yeah. Excellent. So I have this server, and you mentioned that it's 41:37 FastAPI- or Flask-like. How do I host it? Once I call run, or whatever I do on it, then what? 41:44 I know I probably don't put it straight on the internet. Maybe I do, I don't know. So there's two 41:47 types of servers that you can have. You can have local MCP servers, and local MCP servers are 41:53 essentially just a local application. Think of it running like a console app or like your regular 41:57 Python script. And what it does, they might be referred to, you might hear they're called stdio, 42:02 for standard input/output. 42:03 And it's using basically native OS constructs to talk between processes, right? 42:08 The MCP client and the server. 42:10 So again, it's still JSON-RPC, but JSON-RPC over stdio pipes. 42:14 So the other one is streamable HTTP. 42:17 And streamable HTTP, it's again an MCP server that can be hosted somewhere in the cloud. 42:22 It can be hosted on your own home lab server if you want to, and you give it an IP address. 42:26 It can be hosted in AWS or Azure, GCP, doesn't really matter. 42:31 So for those servers, the JSON-RPC messages are basically done through the HTTP pipe, with some set of HTTP conventions. 42:37 That's kind of where it is. 42:39 There's no constraint as to where you have to host it. 42:42 Whoever supports running Python can host your MCP server. 42:47 Right, okay. 42:48 So I could put it behind Nginx or Caddy or whatever. 42:51 Like, toss it into a container and put it somewhere. 42:54 Like, it's totally fine. 42:55 Okay.
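To ground that, here is a minimal sketch of the pattern being described, close to the quickstart in the python-sdk repo; the server name and the add tool are illustrative, and the streamable-http transport option assumes a recent SDK release.

```python
# Minimal FastMCP server sketch (modeled on the python-sdk quickstart).
# "Demo" and the add tool are illustrative, not from the episode.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Demo")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers; the decorator exposes this function as an MCP tool."""
    return a + b

if __name__ == "__main__":
    # stdio is the default transport: a local process talking over pipes.
    # Recent SDK versions also accept mcp.run(transport="streamable-http")
    # for a server you host behind Nginx/Caddy or in a container.
    mcp.run()
```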
42:55 You know, you talked about all these sort of different, like, private, but online, but not quite online setups, you know, with, like, home lab and stuff. 43:02 I just want to give a shout out to Tailscale. 43:05 Like, have you? 43:05 Oh, yes. 43:06 Have you Tailscaled lately? 43:07 Oh, it is so good. 43:08 It is wonderful. 43:09 I love Tailscale. 43:10 It's my go-to thing. 43:12 And I'll tell you this. 43:12 Like, do you remember the days when you had to... 43:14 This episode is not sponsored by Tailscale, for the record. 43:18 Should be. 43:19 Should be. 43:19 They can reach out. 43:20 Yeah. 43:21 Yeah. 43:21 Hey, Tailscale. 43:22 Yeah. 43:23 Yeah. 43:23 Talk Python is awesome. 43:24 You should sponsor it. 43:25 But anyway. 43:25 So Tailscale is great. 43:26 Like, remember the olden days when you had to, like, set up an OpenVPN 43:29 and be like, let me generate the keys. 43:31 Let me email myself the key so I can open it on the iPhone, 43:34 and then add the key, and then go through this process. 43:36 And it's just like, oh, man, such a pain. 43:39 Such a pain. 43:40 Tailscale, just, like, flip the switch and you're in. 43:42 Yeah. 43:43 Magic. 43:43 Or DynDNS, where you... 43:46 Oh, yeah. 43:46 Because you have to bind your IP address to their domain, 43:50 and then you have to run this agent to constantly update it. 43:53 Oh, yes. 43:54 Yeah, and the agent goes down, things change. Well, then there's also all the NAT firewall stuff, and your local machine on your local 43:59 network changes. You're like, no, it doesn't work. Oh, it's my machine on my... we had a power outage, 44:04 when the router rebooted, I got a new IP. It just, it was so bad. And so why is this sidebar worth 44:10 going into here, folks? Because this is what's called an overlay network. And so you can put it 44:16 up on your iPhone, you can put it on your laptop, you can put it on your desktop, you can put it on 44:19 your Linux server if you want. And it basically exposes all of those things over a network that's 44:25 like a VPN, but the rest of your behavior is just not VPN. 44:29 It's just normal, but it brings those in in just the most incredible way. 44:33 So for example, I have a high-end Mac mini here that I use for the streaming that I'm talking to you on now. 44:39 It has tons of RAM and it has a Pro chip and stuff. 44:41 So I just have my one LLM and my database servers running there. 44:46 And when I'm doing dev work, instead of, you know, my laptop, 44:50 my other machine, always running a replica, it all just goes here to this. 44:54 And even if I'm in a coffee shop or I'm out for work, right? 44:57 As long as Tailscale is running, I do a database query or an LLM call through an API, and it 45:02 just hits this thing. 45:03 Yep. 45:03 Just as if I was here. 45:04 And it's glorious. 45:06 And all that's for free, right? 45:06 There's paid versions, but you can do a lot. 45:08 Yeah. 45:09 You can do a lot for free. 45:10 In their free tier, it's amazing. 45:12 And it's all WireGuard. 45:13 It's all using the most modern secure standards. 45:17 I'll say, to me, if you want to access things like, oh, your security cameras at home, 45:21 and you do not trust cloud providers to have access to your home security cameras, put them on your 45:26 local network and use Tailscale. And then you can go somewhere, flip the switch on your phone, 45:30 boom, you can see your cameras remotely without exposing them to the broader internet. It's 45:33 amazing. You don't open up any ports on your router, nothing like that.
So why am I going on 45:39 such an excited diversion? One, it's just so awesome, and I just recently discovered it this 45:43 year. So it's a thing. But it's relevant: if you've got an MCP server and you want to keep it local, 45:49 even local from your server back to, like, your company or something, potentially, 45:54 you could hide all that stuff behind Tailscale. 45:57 It's, like, transparently available, but also there's, there's no ports. 46:01 There's no open internet. 46:02 The easiest way to secure stuff is to just not let the internet have at it. 46:06 Yeah. 46:06 Yep. 46:07 No, exactly. 46:07 This is what I've been actually doing with one of my friends who was setting up a 46:11 home lab, and they were experimenting with some of the MCP servers for, I believe 46:15 it was, like, setting up a Minecraft server. 46:17 And we just tossed them on the same server. 46:19 And because it's Tailscale, and I connect them to the clients with an IP that Tailscale gives me, 46:24 it just magically works. 46:25 And I didn't need to expose this to the internet. 46:27 I didn't need to pay for any cloud providers; it's in somebody's home lab. 46:30 It's just there. 46:31 - Yeah, yeah. 46:31 And you don't need to use SSH across it. 46:33 Like, you can just, it's just there. 46:35 It's all super, super good. 46:36 Okay, back, back to what I was asking. 46:39 - Back to MCP. 46:40 - Back to MCP, but I was asking, you know, how do you run it? 46:43 And you're like, I could, we could run it on our home lab or on a Raspberry Pi or something, right? 46:47 This Tailscale thing is a way to sort of really nicely make that available to 46:51 you, make that available to your, your AI agents or whatever, without going, well, 46:57 now how do I host it on, like, a server for real? Yeah. Okay. 47:01 So let's see. That is the registry. There we go. 47:04 So I want to talk about a couple of things. We talked about tools. Yeah. 47:08 And we talked about there's prompts, there's resources. 47:11 Let's maybe go through each one real quick. 47:13 These are all just decorators you put on functions, but they're all, 47:16 they're slightly different. 47:17 Yeah. 47:17 What is the purpose of a tool, and why would I do that? 47:19 Yeah. 47:20 A tool basically is a function call, right? 47:22 It's like your tool equals function. 47:25 That's the way I describe it. 47:26 Like, that's basically like, hey, I want the LLM to go do something. 47:29 What does it need to do? 47:30 And this is where, like, get weather, give me the sum. 47:34 It needs to go and do this. 47:35 This is what a tool is. 47:37 It's a primitive that does something. 47:39 Insert record into database or whatever. 47:40 This looks like you could probably find and replace FastMCP with FastAPI and tool with 47:47 get. 47:47 Yeah. 47:47 Yeah. 47:48 Or a post or something. 47:50 And you might be able to. Pretty much, that is kind of the closest match, right? 47:54 Yeah, exactly. 47:55 Yep. 47:55 Yep. 47:55 That's, that's basically it. 47:56 I want to invoke some kind of action. 47:59 Go do that action for me. 48:00 Right. 48:00 And at least in the examples, there's no AI in the action. 48:04 It's just... 48:05 No. 48:06 Just an API. 48:06 The AI knows that it needs to invoke the action. 48:08 Like, if I go to the LLM and say, send an email to Michael that says the podcast was awesome. 48:13 And then it's going to go in and say, oh, let me go find the tool that is capable of sending emails. 48:19 Oh, there's a tool from, like, I don't know, like Mailchimp.
48:21 Okay, let me go do that. 48:23 There's a tool in the Mailchimp MCP server that says send email. 48:26 That sounds great. 48:27 I'm going to use that to send the email, right? 48:29 And that tool itself doesn't use AI behind the scenes. 48:31 It's just, like, it's just going to do SMTP send email. 48:34 That's all it does. 48:35 Yeah. 48:35 Awesome. 48:36 It also has other examples of data exchange along the way, I guess. 48:41 Absolutely. 48:41 And you can pass in this context, and then the context can start pushing updates and information back. 48:49 Yes. 48:49 To the user, right? 48:50 And report progress back. 48:52 So, for example, if your email takes, like, seven hops, it's like, okay, let me first connect to the SMTP server. 48:57 Let me then verify the credentials. 48:58 Like, you can encode that, basically, if you implement that. 49:02 You might not, but you can implement progress reporting so that the client knows, like, 49:06 oh, you're like 30% through your task, or you're like 40% through your task now, 49:10 because it reports on the progress of what you're doing. 49:13 - Yeah, super cool. 49:14 You can also do structured output, which is pretty interesting. 49:17 And there's many ways in which it can be done, but the number one way, 49:22 as in if it was an ordered list, the first thing would be Pydantic models, right? 49:27 Carrying on the FastAPI analogy here, right? 49:30 - Yep, yep. 49:31 For a lot of these things, again, if you're a Python developer, 49:34 a lot of these concepts are gonna be very much familiar to you. 49:36 - Yeah, I think one of the challenges people often have is, like, structured data versus, like, I got an LLM answer 49:43 and it's a little different every time, and they upgrade the model from 5.1 to 5.15 49:50 and now it does something totally different. 49:52 Like, how do I code against this, right? 49:54 And so using structured data can be a big bonus, right? 49:58 - Yeah. 49:58 - Okay, super cool. 50:00 Let's see, prompts. Now it's starting to sound AI-like. 50:03 Yeah. So this is basically, like the description says, prompts are reusable templates that help 50:08 LLMs interact with your server effectively. If you have a server that does, I don't know, 50:13 cooking recipes, it might provide prompts for, like, what are the steps for a recipe, 50:19 with substitutions where needed. So it allows you to basically pre-cook prompts that your server 50:24 might be using. Okay, and they might be passing these internally to... Yes, yeah. So, 50:30 returned to the host AI, you know, there's a lot of AIs involved here. 50:34 Right, you know, essentially, like, you're exposing prompt templates. That's what it is. 50:38 Like, and saying, oh, if you're a user, if you're looking for, like, creating a recipe, 50:42 this is a template for that prompt for a recipe. 50:44 Okay, cool. There's also a little bit of a UI component, which is interesting: 50:48 you can have an iconography representation of your actions. 50:53 Yeah, this is relatively new. But basically, for some of them, they, like, 50:56 bake in some of the icons to just make it easier to differentiate between different actions. Because, 51:00 especially again, different servers can have different tools, and there are many tools. And how 51:04 do you, like, just parse the strings? Like, just look at iconography. Yeah. Another thing that it has 51:09 built-in support for is working with images. So that's pretty wild.
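Putting the last few points together, here is a hedged sketch of structured output and a prompt with FastMCP; the Recipe model and both functions are invented for illustration, and returning a Pydantic model as structured output assumes a recent SDK version.

```python
# Sketch: structured output via a Pydantic model, plus a reusable prompt.
# The Recipe model and both functions are invented examples.
from pydantic import BaseModel
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Kitchen")

class Recipe(BaseModel):
    title: str
    ingredients: list[str]
    steps: list[str]

@mcp.tool()
def lookup_recipe(dish: str) -> Recipe:
    """Return a recipe as validated, structured output instead of loose text."""
    return Recipe(title=dish, ingredients=["..."], steps=["..."])

@mcp.prompt()
def recipe_prompt(dish: str) -> str:
    """A reusable template that guides the LLM in using this server."""
    return f"Create a recipe for {dish}, listing substitutions where needed."
```

The structured return type is what protects against the "it's a little different every time" problem raised above: the client receives a schema-validated object, not just prose.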
Yeah, I noticed that for a 51:15 lot of the stuff, it's also, like, it's baked in... these are not necessarily, like, MCP spec constructs. 51:19 This is more like how the Python SDK exposes them and allows you to operate on them, right? 51:23 Because, like, the fundamental constructs, the primitives, are: we have tools, 51:27 we have prompts, there is resources, which is another one. 51:30 And resources allow the LLM to basically... think of it 51:34 as how you refer to databases or files or entities within an API. 51:41 There's also elicitations, which is what Michael is showing right now on the screen. 51:45 So elicitations is a way for an MCP server to go to the client 51:49 and say, I want the client to provide me structured input on a specific question. 51:54 Like, hey, can you give us your date of birth? 51:58 And I expect a date. 52:00 Can you give me a date back, exactly, so I don't need to guess from the LLM context, right? 52:04 Or it can say, like, you know, what kind of pet do you have? 52:07 And it can give you a list of options that you actually have to pick from. 52:11 It's like, oh, dog, you know, pet, reptile... like dog, cat, reptile, whatever. 52:15 It allows you to have that structured, controlled input, so it's not just you typing into the chat box, 52:21 but you're selecting from a list that the server asks you to pick from. 52:25 So that's another neat thing that recently got added. 52:27 Yeah, that looks quite interesting. 52:29 And it has to do a little bit with the WebSocket type of exchange as well, right? 52:34 Not exactly, but it's going along. 52:37 You've asked it something. 52:38 While it's working on that, it's come back and it's asking you to give it more information to carry on. 52:42 Yes, exactly. 52:43 In that sense, right? 52:45 Yep, yep. 52:45 So could this be: I've worked on your request. 52:49 I've used the database MCP or whatever, and I've learned that there's 20 records. 52:54 Do you want to delete them like you asked, or do you not want to delete them? 52:57 Yes, yes, exactly that. 52:58 Or it can say, hey, I found, like, 10 conflicting records. 53:02 Which ones do I need to delete? 53:03 And then you can help it and basically do that, right? 53:05 So it asks for structured input so that you don't have to have it guess from whatever you type in the chat. 53:11 Because if you type in the chat, it's, like, non-deterministic, right? 53:14 It could say, oh, delete all the records with the name John Doe. 53:18 And then it's like, oh, I'll delete everything with Doe. 53:20 Because somehow, like, it sort of decides... like, oh, no, no, no. 53:23 Jane, come back. 53:24 Yeah. 53:26 So it adds a little bit more structure. 53:28 Yeah, got it. 53:29 And the programming model is super smooth here. 53:32 They did a great job. 53:33 So, for example, you might be doing this elicitation within a tool call. 53:39 And that's an async function, an async web function. 53:41 And the way you do it is just await context.elicit with some message and schema. 53:46 And then when the person responds, the async thing resumes and off you go, right? 53:51 There's not some nested callbacks and all that kind of business. 53:54 That's a very smooth developer experience. 53:56 I love it. 53:57 Yeah, it definitely is. 53:58 Okay, I do want to talk about some of the popular ones out there 54:04 through an awesome list, because I'm just a sucker for awesome lists.
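Picking up the elicitation point from a moment ago, here is a hedged sketch of what that await looks like, assuming a recent python-sdk where Context exposes an elicit method; the tool and schema are invented for illustration.

```python
# Sketch: elicitation inside a tool call. ConfirmDelete and the tool are
# invented; ctx.elicit assumes a recent SDK with elicitation support.
from pydantic import BaseModel
from mcp.server.fastmcp import Context, FastMCP

mcp = FastMCP("Records")

class ConfirmDelete(BaseModel):
    confirm: bool

@mcp.tool()
async def delete_records(name: str, ctx: Context) -> str:
    """Ask the user for structured confirmation before deleting anything."""
    result = await ctx.elicit(
        message=f"Found 20 records matching {name!r}. Delete them?",
        schema=ConfirmDelete,
    )
    # When the user accepts, result.data is a validated ConfirmDelete instance.
    if result.action == "accept" and result.data and result.data.confirm:
        return "Deleted 20 records."
    return "Cancelled; nothing deleted."
```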
54:08 But is there anything else that you feel like we should be covering here 54:12 on the SDK? 54:14 Yeah, there's a lot of great work done by the Python SDK and the FastMCP folks. 54:18 I would say, like, go through the repo. 54:21 It's github.com slash modelcontextprotocol slash python-sdk. 54:26 Go there. 54:27 There's some great samples to get you started. 54:29 And again, we're always open to feedback. 54:31 So if something's like, oh, this was too confusing, I didn't understand, 54:34 the team is very receptive to feedback. 54:36 So please let them know. 54:37 Yeah. 54:38 143 contributors. 54:40 Last release five days ago. 54:42 Bunch of PRs, right? 54:44 It looks like it's pretty open. 54:45 You know? 54:46 Yeah. Oh, yeah. 54:46 Closes PRs, pretty open to people working on it. 54:49 Also, it looks very beginner friendly, in the sense that the issues are 54:55 tagged with lots and lots of stuff that you could search for, like needs motivation. 55:00 You know, you could go through and come up with some examples and help, even if 55:03 you're not an expert in the SDK, for example. 55:05 Absolutely. 55:06 And there's also, I believe the Python repo might be using the good first issue tag too. 55:10 So if you're a new contributor, you've never looked at it, 55:12 it's like, don't be intimidated. 55:14 There's plenty of- 55:15 Good first issue. 55:16 Good first issue. 55:17 Like, there's plenty of things that you can just drop in and see, like, oh, I can help with that. 55:21 Yeah, love it. 55:22 Okay. 55:23 You too can be an AI developer. 55:24 I love it. 55:24 Now let's talk about awesome MCP servers. 55:27 Awesome MCP servers. 55:29 This comes to us from the very well-known punkpeye, 55:33 the person behind Glama.ai. 55:36 Yeah, awesome. 55:37 And 72,000 GitHub stars, no joke. 55:41 So it may be a fad, but maybe people will stick around. 55:43 So this actually has support for a lot of different languages, 55:47 and it's got scopes, like, is this cloud or local or embedded, 55:50 and so on. 55:51 But then you scroll down. 55:53 Look at the list. 55:54 Massive. 55:55 The list is, I mean, look at the scroll bar. 55:58 It is massive. 55:59 Yeah, we keep scrolling and scrolling. 56:01 I don't know. 56:02 If I page down full speed and just pin the page-down button, it's something along the lines of, like, 56:09 five seconds just to get through the list. 56:11 And these are one per line. 56:12 Mm-hmm. 56:13 You know, it starts out, as one should when they're building awesome lists, with categories, right? 56:19 Command line, cloud platforms, biology, medicine, and bioinformatics. 56:26 There's one for everything. 56:27 I know. 56:28 You want to just jump around a bit, and we can see what's here while we riff on it? 56:31 Gaming. 56:32 MCP server for Unity 3D game engine integration, for game development. 56:36 That's kind of cool. 56:37 Go. 56:38 Unity MCP. 56:38 MCP chess. 56:40 An MCP server for playing chess against LLMs. 56:42 Do you ever think, like, can I beat an LLM at chess? 56:46 And you want to just get an MCP server to do that? 56:48 There is one for that. 56:49 I'm starting to feel like it's better to use the local models 56:51 for the chess playing against them. 56:54 I don't want the really smart ones. 56:56 There's also chess MCP, which is, this is interesting, 56:59 it's not the same as the other one. 57:01 This is: access your chess.com player data and records and other public info. 57:06 Yep. 57:06 Right. 57:07 That's kind of cool.
57:08 So if you wanted to say, hey, I'm building something and I would like access to sort of the Kaggle of chess players type of thing, right? 57:15 Like, the list of competitive chess results. 57:18 Yeah. 57:18 That's kind of cool. 57:19 Yeah. 57:19 Yeah. 57:20 Yeah. 57:20 I personally have built one for Halo. 57:23 I'm a big fan of Halo, the video game. 57:24 Oh, yeah. 57:25 It's not on the list, which means now I need to go and contribute to that list. 57:28 Let's do a PR. 57:30 Like, the thing that I have basically analyzes my Halo stats. 57:34 And I'll tell you what, the LLMs are getting really good at analyzing the stats. 57:38 You give them the data, they can make some conclusions. 57:40 Yeah, I bet. Let's just keep it really crazy. Let's do... I was going to do delivery. We'll do that in a moment. Marketing. 57:46 Marketing. Yeah. 57:48 Yeah. So I guess one of the things, after just a very quick first impression: like, you're running ads on someone's platform, or you're doing marketing on someone's platform, but you want visibility into how that's going. 58:00 So we've got the Facebook Ads MCP server. 58:03 We've got the Google Ads MCP server, Amazon Ads, and so on. 58:07 Right. 58:08 But what else is... yeah, that sounds about like most of it there, I suppose. 58:11 But think of it this way. 58:12 Like, if you connect several of these MCP servers to your client, and then you connect them 58:17 to all your ads accounts, and then say, how are my ads performing, 58:20 and which ones of them are the best this past week? 58:23 Right. 58:23 Like, I don't need to click around dashboards and figure out the filters and everything. 58:26 Just ask the LLM: pull the data, make a conclusion. 58:29 Now, you still need to verify the conclusion to make sure it's not hallucinating things. 58:32 But nonetheless, it's kind of cool. 58:34 Yeah, it's very cool. 58:36 So one thing I realized now that we skipped over in the Python SDK is we talked all 58:40 about the server. 58:41 What about client things? 58:42 If I wanted to create an MCP server that is effectively the composition of some other 58:48 MCP servers, could I do that? 58:49 You absolutely can. 58:51 Nothing stops you. 58:51 Like, an MCP server can also act as an MCP client and then connect to other MCP servers. 58:57 Like, there's no restriction to that, right? 58:59 Like, it's basically, it's very composable. 59:01 And a client, for all intents and purposes, is basically an entity that can connect to an MCP server, 59:07 which can also be an MCP server. 59:08 It's kind of circular. 59:09 Yeah, yeah. 59:10 It's turtles all the way down, but MCP this time. 59:12 Yeah, it's AI turtles this time. 59:14 So delivery, we just have the DoorDash delivery MCP server. 59:17 Oh man, like, who? 59:19 Claude, why is my food not here? 59:22 Have you ever seen those fail videos or whatever? 59:25 I watch weird YouTube stuff with my daughter sometimes, and you'll see, like, cops delivering DoorDash. 59:31 They're gonna say, sorry, we had to arrest your DoorDash delivery driver, 59:33 but we were pretty close. 59:34 So we thought we'd just go and deliver your food anyway. 59:36 I mean, I don't know what the server is gonna say, but it could say anything, you know? 59:40 - The police are on their way. 59:42 - Yeah. 59:42 People are generally really appreciative. 59:44 Like, well, thanks for getting me my dinner anyway. 59:47 Let's see what else is out here. 59:49 Got text to speech, which is interesting. 59:52 - Sports. 59:52 - Sports, hell yeah.
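Circling back to the client question above: the same SDK ships the client side too. A minimal sketch, close to the repo's client quickstart, where "server.py" is a placeholder for any stdio server, such as the add example earlier:

```python
# Sketch: a minimal MCP client over stdio. "server.py" is a placeholder.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

params = StdioServerParameters(command="python", args=["server.py"])

async def main() -> None:
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])  # discover available tools
            result = await session.call_tool("add", {"a": 1, "b": 2})
            print(result.content)

asyncio.run(main())
```

A server that opens sessions like this against other servers is exactly the "MCP server as MCP client" composition discussed above.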
59:54 Oh, look at this, Strava. 59:55 Like, if you're running or biking, you can use this also to analyze your data. 59:59 There's a lot of MCP servers for data analysis, which is kind of cool. 01:00:02 Okay, I don't even... this one, this is the one that appeals to me. 01:00:05 So MultiViewer, this is actually not a thing that I would want, but I think it's interesting. 01:00:11 So MultiViewer is a motorsports desktop client. 01:00:14 And what I think it does: it does IndyCar, WEC, Formula One, 01:00:18 and even, like, the feeder classes. 01:00:20 I think what it lets you do is put up both an overlay of telemetry onto the live stream you're watching, 01:00:27 but also put multiple streams up live at the same time, or 01:00:32 something like that. 01:00:32 Right. 01:00:33 That's kind of cool. 01:00:34 That's cool. 01:00:35 So the reason I don't really like that is I don't watch any of those sports 01:00:38 live. 01:00:39 I record them. 01:00:39 And so I can then pause it and then skip the commercials. 01:00:42 And so this is like for a live-stream sort of deal, but the MCP server, 01:00:46 it controls MultiViewer for that. 01:00:49 So maybe you could set up an AI that is watching what's going on and switches the views around in MultiViewer for you. 01:00:58 That's wild. 01:00:58 Or swaps to the most interesting telemetry at the specific moment. 01:01:02 Yeah, listen to the radio. 01:01:03 They start getting all frantic. 01:01:05 Like, all right, we're switching to that view. 01:01:08 Yeah, there's an MCP server for everything. 01:01:10 Like, this list is massive. 01:01:12 Every time I discover these things, like, we're looking at this right now, and I was like, oh, I didn't know there was one for MultiViewer. 01:01:17 Like, I didn't know what MultiViewer was until we talked right now. 01:01:20 Yeah, but wouldn't that be a cool demo? 01:01:22 Yeah. 01:01:22 You know, at a conference, you're like, I know you've all seen the tic-tac-toe one, 01:01:27 but let me show you the final of F1. 01:01:30 Yeah, yeah, yeah. 01:01:30 Or something, right? 01:01:32 Very astute observation, because again, there's a lot of these hello-world kind 01:01:35 of things, like, oh, look, it's kind of neat, it responded with a thing. Like, give me a real thing. 01:01:38 This is that real thing. 01:01:39 Yeah, yeah, that's, that's super neat. 01:01:41 All right. 01:01:42 I guess we've got the support ones: Atlassian and Jira, Quickchat. 01:01:46 It's whatever you want, right? 01:01:47 That's the one to reduce your boring work. 01:01:49 The Jira MCP server. 01:01:50 Like, you don't want to triage your bugs? 01:01:52 Just let the LLM do it for you. 01:01:53 Hey, can you go and find the things that are most important for me to work on today? 01:01:56 Give me the bug numbers. 01:01:58 Yeah. 01:01:58 Or if you see somebody assign a bug to me, close it. 01:02:01 Yeah. 01:02:01 Yeah, exactly. 01:02:02 Query all the bugs assigned to me, reassign them to somebody else. 01:02:09 Yeah, crazy. 01:02:10 Exactly. 01:02:11 Not a good fit for this person. 01:02:13 Yeah. 01:02:13 No, exactly. 01:02:14 These are the life hacks you learn only from this podcast. 01:02:17 That's right. 01:02:18 It's like, if it involves MCP servers and cool stuff I can code, give it to me. 01:02:22 Otherwise, send it somewhere else. 01:02:24 Send it somewhere else. 01:02:26 All right, Den. 01:02:27 I think we're getting pretty close on time here in terms of what we've got time to cover.
01:02:31 But this is super fun. 01:02:33 Maybe close things out for folks. 01:02:35 They want to get started with MCP servers, either building them, consuming them, building 01:02:39 and consuming them, plugging them into their tool chain. 01:02:43 What do you tell them? 01:02:43 Yeah. 01:02:44 So for folks that wanna build: modelcontextprotocol.io. As simple as it gets. Go there. 01:02:48 It has guides, tutorials, SDK starters. Everything is there. 01:02:52 If you are a consumer of MCPs and you wanna, hey, I wanna do this awesome thing with MCP servers: 01:02:58 first of all, the GitHub MCP registry that we showed earlier is one of those things. 01:03:03 It's github.com/mcp, go explore. 01:03:06 And then of course on GitHub, there's plenty of servers that are tagged with MCP. 01:03:10 You can also take a look there. 01:03:11 And there are other registries that also index MCP servers of all sorts, like Glama AI from 01:03:17 punkpeye that we talked about before. 01:03:19 That's one such registry that you can also look at and see if there's anything that's 01:03:22 of interest. 01:03:24 I will say that as you are exploring MCP servers, exercise caution, just like you would 01:03:30 with any other software and APIs and websites where you log in, because the responsibility 01:03:36 is kind of on you to figure out what's safe and what's not. 01:03:40 If you have an MCP server that's like, oh, it's going to read all my iMessages and sort them by importance, 01:03:45 and I'm like, yes! 01:03:47 Well, do you know who built that, and where your messages are going? 01:03:50 So be careful. 01:03:51 Are they also scanning for credit card numbers? 01:03:53 Exactly. 01:03:54 Why not? 01:03:55 You messaged somebody with your social security number the other day. 01:03:57 Nice. 01:03:59 Yeah. 01:03:59 So be careful with those. 01:04:00 But I'd say, like, explore them. 01:04:01 And then we are working on formalizing discovery a bit better. 01:04:06 Your clients like VS Code and Cursor and Claude Desktop are going to become better and better 01:04:11 with more discoverability affordances. Awesome. All right. Thank you so much for coming on the 01:04:14 show. I learned a ton. I'm sure listeners did as well. And it was a lot of fun. Thank you for 01:04:19 having me. Yeah. See you later. Bye. This has been another episode of Talk Python To Me. 01:04:24 Thank you to our sponsors. Be sure to check out what they're offering. It really helps support 01:04:28 the show. This episode is sponsored by Posit Connect, from the makers of Shiny. Publish, 01:04:34 share, and deploy all of your data projects that you're creating using Python: Streamlit, Dash, 01:04:40 Shiny, Bokeh, FastAPI, Flask, Quarto, reports, dashboards, and APIs. Posit Connect supports all 01:04:47 of them. Try Posit Connect for free by going to talkpython.fm/Posit, P-O-S-I-T. 01:04:54 And it's brought to you by Nordstellar. Nordstellar is a threat exposure management platform 01:04:59 from the Nord Security family, the folks behind NordVPN, that combines dark web intelligence, 01:05:05 session hijacking prevention, brand and domain abuse detection, and external attack surface 01:05:11 management. Learn more and get started keeping your team safe at talkpython.fm/nordstellar. 01:05:18 If you or your team needs to learn Python, we have over 270 hours of beginner and advanced courses 01:05:24 on topics ranging from complete beginners to async code, Flask, Django, HTML, and even LLMs.
01:05:31 Best of all, there's not a subscription in sight. Browse the catalog at talkpython.fm. 01:05:36 Be sure to subscribe to the show: open your favorite podcast player app, search for Python, and we should be 01:05:41 right at the top. If you enjoy the geeky rap theme song, you can download the full track; the link is 01:05:46 in your podcast player's show notes. This is your host, Michael Kennedy. Thank you so much for listening. I 01:05:51 really appreciate it. Now get out there and write some Python code. 01:06:06 I'm out.
| Langage de codage : quel est le plus utile pour … | https://tic-et-net.org/langage-de-codag… | 1 | Apr 07, 2026 08:00 | active | |
Coding language: which is the most useful for SEO? - Tic et NetURL: https://tic-et-net.org/langage-de-codage-quel-est-le-plus-utile-pour-le-seo/ Description: Some search engines simply ignore poorly optimized JavaScript, while a simple HTML adjustment can completely change a page's ranking. Yet frameworks like React now dominate many high-traffic sites, despite persistent indexing challenges. According to Google's latest recommendations, the structure of the […] Content:
Some search engines simply ignore poorly optimized JavaScript, while a simple HTML adjustment can completely change a page's ranking. Yet frameworks like React now dominate many high-traffic sites, despite persistent indexing challenges. According to Google's latest recommendations, the structure of the source code often outweighs content quality alone. Compatibility between programming languages and indexing robots has thus moved to the heart of SEO strategies, with direct consequences for organic visibility. The choice of coding language weighs heavily in the dialogue between your web pages and the search engines. Google, Bing, and Qwant give priority to content readability and fast access to information. On that ground, HTML prevails: it structures the site, lays down clear landmarks, and guides the robots crawling the web without complexity. JavaScript puts the emphasis on user experience and interactivity. But as soon as too many features are entrusted to it client-side, indexing robots can lose their way. If content appears too late or ends up hidden, visibility drops. Growing on the web through a portion of server-side Python is increasingly attractive for generating relevant content, but everything depends on flawless HTML rendering. Each technology has its own area of strength: Knowing how to handle these languages refines the presentation of information, promotes accessibility, and opens the door to fast indexing. To aim high, the standards imposed by the search engines cannot be ignored, and smooth navigation must never be forgotten. Reaching the top positions on Google requires a sharp combination of technologies. The foundation remains HTML: well structured, designed to guide robots and users alike. Without it, even the most inspired content misses its audience. JavaScript, when it orchestrates animations and personalizes web applications, becomes a decisive asset. But it imposes its own discipline: if content renders too late, for lack of good server-side rendering, indexing is held back. Betting on server-side rendering helps work around these obstacles, giving the robot everything it came looking for. Python has taken its place in the SEO arsenal: dynamic page generation, in-depth analysis, automated data management. It sits behind every content strategy adapted to digital marketing, as do PHP and Java, which, though less in the spotlight, keep their key roles in solid architectures. For a clearer picture, here is a summary of each language's strengths: For each type of project, make sure the chosen language communicates effectively with the search engines, clearly prioritizes the information, and removes the barriers to instant access to content. The structure of the code directly shapes how the engines rank and understand a site. Knowing how to tame HTML tags remains the bedrock of solid SEO. Meta tags, structured headings, carefully chosen alt attributes: every detail counts when it comes to explaining to a robot, or to a person with a disability, what each page offers.
A well-thought-out sequence of tags, readable and organized, is no accessory. Text clarified by its markup gains in indexability. On many sites, this demand for clarity makes the difference. Sometimes a careful structure is enough to climb a step in the search results. Here are a few technical principles to adopt to strengthen your presence on Google: Some CMSs offer automation for structuring, but the human hand remains precious for fine-tuning the tags. Content rethought for all users and polished on the technical side makes its way toward better positions. The rapid march of organic search is accelerating, driven by the arrival of artificial intelligence and the attention paid to real user behavior. In 2025, every adaptation to new algorithms can mean several places gained. Search engine robots favor sites that deliver an impeccable mobile experience, a crystal-clear organization of content, and precise answers to users' needs. The changes revolve around three main axes: Collect, analyze, refine: today, Python or JavaScript paired with fine-grained audience analysis sets the pace. Knowing how to anticipate expectations, guarantee fast rendering, and stay faithful to search intent forms the real terrain of future visibility. Those who keep their grip on the technical side and their agility on usage will always give themselves a head start.
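Since the article keeps returning to the same on-page signals (title, meta description, heading hierarchy, alt attributes), here is a minimal sketch of how one might audit them in Python; it assumes the requests and beautifulsoup4 packages, and the URL is a placeholder.

```python
# Sketch: audit the on-page SEO signals discussed above.
# Assumes: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

def audit_page(url: str) -> dict:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    meta = soup.find("meta", attrs={"name": "description"})
    return {
        "title": soup.title.string if soup.title else None,
        "meta_description": meta.get("content") if meta else None,
        "h1_count": len(soup.find_all("h1")),  # ideally exactly one
        "images_missing_alt": sum(
            1 for img in soup.find_all("img") if not img.get("alt")
        ),
    }

print(audit_page("https://example.com"))  # placeholder URL
```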
| SunFounder Fusion AI HAT+ Price, Specs & LLM Support for … | https://www.geeky-gadgets.com/sunfounde… | 1 | Apr 07, 2026 08:00 | active | |
SunFounder Fusion AI HAT+ Price, Specs & LLM Support for Pi - Geeky GadgetsURL: https://www.geeky-gadgets.com/sunfounder-motor-servo-hat/ Description: Meet the SunFounder Fusion HAT+ for Raspberry Pi with 4 DC motor drivers and 12 servo channels, so you can build stable robots and voice assistants Content:
11:15 am December 15, 2025 By Julian Horsey What if your Raspberry Pi could do more than you ever imagined, like powering a humanoid robot, automating your home, or running advanced AI models? With the launch of the SunFounder Fusion HAT+, that vision is now within reach. This innovative expansion board is engineered to transform your Raspberry Pi into a powerhouse of innovation, offering seamless compatibility with models ranging from the latest Raspberry Pi 5 to the compact Zero 2W. Whether you’re a curious beginner or a seasoned developer, the Fusion HAT+ promises to unlock new possibilities in robotics, AI, and smart systems, all at an accessible price point. In this overview, we’ll explore how the Fusion HAT+ stands out with its advanced hardware features and support for leading AI platforms like OpenAI and Gemini AI. From precise motor control to built-in audio capabilities, this board is designed to handle complex, real-world challenges. You’ll discover how it enables creators to build everything from autonomous vehicles to voice-controlled assistants, all while ensuring reliable power management and ease of use. If you’ve ever dreamed of pushing the boundaries of what your Raspberry Pi can achieve, this might just be the upgrade you’ve been waiting for. SunFounder has introduced the Fusion HAT+, a highly versatile expansion board designed to elevate Raspberry Pi projects to new heights. This innovative board is compatible with a wide range of Raspberry Pi models, including the latest Raspberry Pi 5, as well as earlier versions like the Raspberry Pi 4, 3B+, and Zero 2W. With its robust features and broad compatibility, the Fusion HAT+ is ideal for applications in robotics, home automation, and artificial intelligence (AI). Whether you are a beginner or an experienced developer, this expansion board provides a powerful platform to bring your creative ideas to life. The Fusion HAT+ is designed to integrate effortlessly with multiple Raspberry Pi models, ensuring flexibility for users with different setups. Its compatibility extends to the Raspberry Pi 5, 4, 3B+, and Zero 2W, making it a practical choice for both new and existing Raspberry Pi users. Beyond hardware, the Fusion HAT+ supports leading Large Language Models (LLMs) such as OpenAI, Gemini AI, and DeepSeek AI. This capability allows developers to incorporate advanced AI functionality into their projects, enabling innovation in voice recognition, machine learning, and intelligent automation. The Fusion HAT+ is equipped with a range of advanced hardware features that make it suitable for diverse applications. These include: These features provide the foundation for building intricate systems, from robotic arms to AI-powered devices, offering users the tools they need to tackle complex challenges. The Fusion HAT+ is tailored to meet the demands of modern robotics and smart systems, making it an excellent choice for developers aiming to create innovative projects. Its capabilities enable the development of: With its compatibility with AI platforms and robust hardware, the Fusion HAT+ enables users to explore innovative solutions in robotics and automation. Power stability is a critical factor in any project, and the Fusion HAT+ addresses this with a well-designed power management system. It includes a rechargeable 7.4V, 14.8Wh battery and USB Type-C charging for convenience.
Additional features such as power protection mechanisms, battery level indicators, and a safe shutdown button ensure uninterrupted operation and safeguard your hardware from potential damage. These features make the Fusion HAT+ a reliable choice for long-term and demanding projects. To simplify the integration process, the Fusion HAT+ comes with detailed documentation and step-by-step tutorials. These resources cover both hardware and software aspects, making it easier for users to incorporate the board into their projects. Python libraries are provided to facilitate seamless interaction with AI platforms, and users can access technical support through active maker community forums and troubleshooting guides. This comprehensive support ensures that users of all skill levels can maximize the potential of the Fusion HAT+. The Fusion HAT+ is designed to cater to a wide audience, from beginners exploring the basics of robotics to seasoned engineers working on advanced AI systems. Its user-friendly design and extensive support make it accessible to individuals aged 10 and above, including educators, hobbyists, and professionals. Priced at just £26.17, the Fusion HAT+ offers exceptional value for its extensive features and compatibility. This affordability makes it an attractive option for anyone looking to expand their Raspberry Pi’s capabilities without breaking the budget. The SunFounder Fusion HAT+ is a feature-rich expansion board that enables Raspberry Pi enthusiasts to push the boundaries of their projects. With advanced hardware, seamless AI integration, and robust support, it is well-suited for applications in robotics, home automation, and beyond. Whether you are building a smart car, a robotic arm, or an AI-powered assistant, the Fusion HAT+ provides the tools and resources to turn your ideas into reality. Its combination of affordability, versatility, and innovative features makes it an indispensable addition to any Raspberry Pi toolkit. Source: SunFounder Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, Geeky Gadgets may earn an affiliate commission. Learn about our Disclosure Policy.
| AWS Launches Strands Labs for Experimental AI Agent Projects - … | https://www.infoq.com/news/2026/03/aws-… | 1 | Apr 07, 2026 08:00 | active | |
AWS Launches Strands Labs for Experimental AI Agent Projects - InfoQURL: https://www.infoq.com/news/2026/03/aws-strands-agents/ Description: Amazon Web Services has introduced Strands Labs, a new GitHub organization created to host experimental projects related to agent-based AI development. Content:
AWS Launches Strands Labs for Experimental AI Agent Projects Mar 12, 2026 2 min read by Daniel Dominguez Amazon Web Services has introduced Strands Labs, a new GitHub organization created to host experimental projects related to agent-based AI development. The initiative is linked to the Strands Agents SDK, an open-source toolkit that allows developers to build AI agents using Python or TypeScript. Strands Labs includes three projects: Robots, Robots Sim, and AI Functions. Each project explores different aspects of agent development, ranging from robotics integration to code generation workflows. The Strands Robots project focuses on connecting AI agents with physical hardware. It provides a unified interface that allows agents built with the Strands framework to interact with sensors and robotic devices.
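For orientation, the Strands Agents SDK that these Labs projects build on has a deliberately small core API. A minimal sketch, assuming the strands-agents package with a default model configuration; the word_count tool is invented for illustration:

```python
# Sketch: a minimal Strands agent with one custom tool.
# Assumes the strands-agents package and default model credentials.
from strands import Agent, tool

@tool
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

agent = Agent(tools=[word_count])
agent("How many words are in 'agents can call tools'?")
```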
In demonstration examples, AWS shows an agent controlling an SO-101 robotic arm using the NVIDIA GR00T model. GR00T is a vision-language-action (VLA) model that takes camera images, robot joint positions, and language instructions as input and generates joint actions as output. The Robots project also integrates with LeRobot, an open framework designed to simplify interaction with robotics hardware and datasets. By combining LeRobot abstractions with VLA models, developers can build agents that process visual data, interpret instructions, and perform physical actions. The Strands Robots Sim project provides a simulation environment for robotics experimentation. Instead of using physical hardware, developers can run agents inside physics-based environments that simulate robot behavior. The system supports environments from the Libero robotics benchmark and can integrate VLA policies through an inference service. The simulator collects observations from cameras and robot joints and feeds them to policy models that produce motor commands. The environment can record simulation runs as video and supports iterative control loops for debugging or experimentation. The third project, AI Functions, explores a different approach to writing software with AI agents. Instead of implementing a function directly, developers define the intended behavior using natural language descriptions and validation conditions written in Python. A decorator called @ai_function triggers the Strands agent loop, which generates code to satisfy the specification and validates the result using pre- and post-conditions. If the validation fails, the system retries automatically. The framework can generate implementations that parse files, perform data transformations, or execute other tasks while returning standard Python objects such as Pandas DataFrames. Community reactions to the announcement have focused on the robotics integration and the experimental nature of the projects. Clare Liguori, senior principal engineer at AWS posted on X: I think of Strands Labs as a playground for the next generation of ideas for AI agent development, from how to build agentic robots to how to make our everyday applications more agentic. Others highlighted the AI Functions experiment as an example of a growing interest in specification-driven programming, where developers define behavior and validation rules while agents generate the underlying code. Design engineer John Hanacek shared: Robots animated by agentic frameworks alongside humans, sharing a perception and awareness layer to coordinate actions. AWS stated that Strands Labs will continue to expand with additional experiments contributed by different Amazon teams. The organization is intended to function as a testing ground for ideas related to agent orchestration, robotics integration, and agent-assisted software development before they potentially move into the core Strands SDK.
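The article describes the AI Functions workflow but not its exact signature, so the following is a hypothetical sketch of the described pattern (a natural-language spec plus Python validation conditions), not the real Strands Labs API.

```python
# HYPOTHETICAL sketch of the AI Functions pattern described above.
# The import path and decorator arguments are guesses, not the real API.
import pandas as pd
from strands_ai_functions import ai_function  # hypothetical module name

@ai_function(
    description="Parse a CSV of sales rows and return totals per month.",
    # Post-conditions the generated code must satisfy; per the article,
    # the agent loop retries automatically when validation fails.
    post=[lambda df: isinstance(df, pd.DataFrame) and not df.empty],
)
def monthly_totals(csv_path: str) -> pd.DataFrame:
    ...  # body is generated by the agent loop, not written by hand
```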
Images (1):
|
|||||
| Chinese humanoid robots could soon beat the fastest human ever: … | https://interestingengineering.com/ai-r… | 1 | Apr 06, 2026 16:00 | active | |
Chinese humanoid robots could soon beat the fastest human ever: ReportURL: https://interestingengineering.com/ai-robotics/chinese-humanoid-robots-could-beat-fastest-human Description: Chinese humanoid robots could soon surpass human sprint speeds, with experts predicting 100m runs despite key technical hurdles. Content:
Chinese humanoid robots may soon rival or surpass human sprinting speeds. Chinese humanoid robots are rapidly advancing toward a milestone once reserved for elite human athletes: surpassing world-record sprint speeds. According to Wang Xingxing, founder of robotics firm Unitree Robotics, humanoid machines could soon outpace Olympic champion Usain Bolt in the 100-meter dash—a prospect that signals both technological progress and the growing ambition within embodied AI. Speaking at the Yabuli China Entrepreneurs Forum on Tuesday, Wang noted that while robots still trail humans in sprinting performance today, the gap is narrowing quickly. With improvements in mechanical design, control systems, and AI-driven coordination, researchers are beginning to push humanoid robots into performance territories that were once thought to be uniquely human. Recent developments highlight how close robots are getting to elite athletic benchmarks. In February, Zhejiang University and Shanghai-based JingShi Technology unveiled a full-size humanoid robot named “Bolt,” capable of reaching a peak running speed of 10 meters per second. The team described it as the fastest full-size running humanoid robot built to date. For context, Usain Bolt’s world-record 100-meter sprint of 9.58 seconds translates to an average speed of roughly 10.44 meters per second, with peak speeds slightly higher during the race. “In a few months, by around mid-year, humanoid robots globally — especially in China — may run faster than humans,” Wang said. “Their 100-meter sprint times could drop below 10 seconds,” he continued. While humanoid robots have not yet exceeded this threshold in real-world conditions, the margin is becoming increasingly narrow. If robots were to consistently achieve or surpass these speeds, it would represent more than just a technical achievement; it would mark a symbolic shift in how machines compare to human physical capabilities, particularly in dynamic, high-performance tasks like sprinting. Achieving high-speed locomotion in humanoid robots is far more complex than simply increasing motor power. Engineers must solve challenges related to balance, coordination, energy efficiency, and real-time decision-making. Unlike wheeled or quadruped robots, humanoids need to replicate the inherently unstable process of bipedal running. This involves precise synchronization between sensors, actuators, and control algorithms to maintain stability at high speeds.
Even minor errors in timing or force distribution can lead to falls or inefficient movement. Despite these gains, Wang emphasized that the industry is still far from achieving a breakthrough comparable to generative AI systems like ChatGPT. The primary limitation lies in generalization: the ability of robots to perform reliably across diverse, unpredictable environments. While humanoid robots can achieve near-perfect performance in controlled or pre-trained settings, their capabilities often degrade when conditions change. Variations in terrain, obstacles, or external disturbances can significantly impact performance, making real-world deployment challenging. This gap highlights a broader issue in embodied AI: translating controlled, lab-based success into robust, adaptable real-world functionality. As development continues, the race between humans and machines may soon extend beyond symbolic comparisons. Whether robots ultimately surpass human sprinters or not, their rapid progress underscores a larger transformation, one where physical intelligence is becoming as critical as digital intelligence in the evolution of AI.
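For readers who want to check the comparison, the arithmetic behind the quoted speeds is straightforward:

```python
# Back-of-the-envelope check using only the figures quoted in the article.
distance_m = 100.0
bolt_record_s = 9.58                        # Usain Bolt's 100 m world record
bolt_avg_speed = distance_m / bolt_record_s
print(f"Bolt's average speed: {bolt_avg_speed:.2f} m/s")   # ~10.44 m/s

robot_peak_speed = 10.0                     # "Bolt" robot's reported peak, m/s
sub_10s_avg = distance_m / 10.0             # average needed for a sub-10 s run
print(f"Average needed to break 10 s: {sub_10s_avg:.2f} m/s")
# A sub-10 s robot run therefore means averaging slightly more than the
# fastest reported robot's current *peak* speed over the whole distance.
```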
Images (1):
|
|||||
| NHS Digital Selects Scandit’s Clinical Quality Computer Vision Technology to … | https://multichannelmerchant.com/press-… | 0 | Apr 06, 2026 00:00 | active | |
NHS Digital Selects Scandit’s Clinical Quality Computer Vision Technology to Digitise the Covid Testing ProcessContent: |
|||||
| Scandit raises $150M to automate inventory scanning with computer vision | https://venturebeat.com/2022/02/09/scan… | 0 | Apr 06, 2026 00:00 | active | |
Scandit raises $150M to automate inventory scanning with computer visionDescription: Scandit, a company developing algorithms to help companies manage inventory by scanning labels, has raised $150 million in capital. Content: |
|||||
| Humanoid Robots Steal Spotlight at Silicon Valley Tech Summit | https://www.techjuice.pk/humanoid-robot… | 1 | Apr 06, 2026 00:00 | active | |
Humanoid Robots Steal Spotlight at Silicon Valley Tech SummitURL: https://www.techjuice.pk/humanoid-robots-steal-spotlight-at-silicon-valley-tech-summit/ Description: Humanoid robots took center stage at a Silicon Valley summit, highlighting rapid advances that could reshape work, care and industry. Content:
Humanoid robots emerged as one of the most talked-about technologies at a major Silicon Valley summit this week, signaling how quickly machines designed to move and interact like humans are moving from experimental labs into real-world applications. At the event, technology companies, robotics startups, and artificial intelligence researchers demonstrated humanoid robots capable of walking, grasping objects, responding to voice commands, and navigating complex environments. These demonstrations underscored how advances in AI models, sensors, and mechanical design are converging to accelerate the development of robots that can operate in spaces built for people. Industry leaders at the summit said humanoid robots represent a critical next step in automation. Unlike traditional industrial robots that work in controlled factory settings, humanoid robots are designed to function in homes, hospitals, warehouses, and offices without requiring major infrastructure changes. This flexibility could make them suitable for tasks ranging from elder care and logistics to manufacturing support and disaster response. Several speakers highlighted how recent progress in large language models and computer vision has dramatically improved robots’ ability to understand instructions and adapt to unfamiliar situations. Instead of following rigid programming, newer humanoid systems can learn from observation, interpret spoken language, and make decisions in real time. Researchers noted that this shift brings robots closer to being general-purpose assistants rather than single-task machines. However, experts at the summit also acknowledged significant challenges ahead. Power efficiency, safety, affordability, and reliability remain major hurdles before humanoid robots can be deployed at scale. There are also ongoing debates about ethical considerations, workforce displacement, and how societies should regulate machines that closely mimic human behavior. As AI systems become more capable, companies are increasingly looking to give intelligence a physical form. While widespread adoption may still be years away, the momentum on display suggests humanoid robots are no longer a distant concept but an emerging reality that could reshape how humans work and live.
Images (1): |
|||||
| Xiaomi’s CyberOne humanoid robot with sweat glands in bionic hands | https://interestingengineering.com/ai-r… | 1 | Apr 05, 2026 16:00 | active | |
Xiaomi’s CyberOne humanoid robot with sweat glands in bionic handsURL: https://interestingengineering.com/ai-robotics/xiaomi-cyberone-humanoid-robotic-hand Description: Full-palm tactile sensing, liquid cooling channels, and high dexterity aim to improve the humanoid robot's bionic hands for long operations. Content:
The robot uses artificial sweating to cool the powerful motors. Xiaomi has unveiled a major redesign of its CyberOne humanoid robot, introducing a new full-palm tactile bionic hand. It combines high-density sensing, improved dexterity, and an unusual liquid cooling system inspired by human sweating. The update was detailed through Xiaomi Technology’s official WeChat account, where the company outlined how the new hand design moves closer to human-scale manipulation and long-duration industrial operation. The redesigned hand is significantly smaller than the previous version, with Xiaomi reducing the hand’s volume by 60 percent to achieve a 1:1 human scale. The dimensions are modeled on the hand of a 1.73-meter (about 5 ft 8 in) human, which the company says helps improve sim-to-real transfer when training robotic manipulation systems in simulation before deploying them in the real world. The new bionic hand also introduces a major increase in dexterity. Xiaomi said the configuration increases active degrees of freedom by 83 percent, bringing the robot’s bionic hand closer to the human hand standard of roughly 22 to 27 degrees of freedom required for complex manipulation tasks. A key part of the redesign is full-palm tactile sensing. The sensing area reportedly covers around 8,200 square millimeters, allowing the robot to detect pressure and contact across the entire palm rather than just the fingertips. This is significant because many robotic hands rely primarily on vision systems and fingertip sensors. Full-palm tactile sensing allows the robot to continue manipulating objects even when cameras are obstructed or when precise force control is required, such as in assembly tasks. Xiaomi also reported durability improvements, with the hand surviving more than 150,000 grasping cycles, which is substantially higher than the roughly 10,000-cycle failure threshold commonly seen in tendon-driven robotic hands. One of the most unusual features of the new CyberOne hand is its liquid cooling system, designed to address overheating in high-density motors used in dexterous robotic hands. According to Xiaomi, the hand’s compact motors can generate significant heat during continuous operation. To manage this, the company integrated 3D-printed metal liquid cooling channels inside the hand that function similarly to sweat glands.
Thermal management is a major challenge in humanoid robotics, particularly for robotic hands, which must pack multiple motors, sensors, and transmission systems into a very small space. Overheating can reduce motor performance, shorten component lifespan, and limit continuous operation time. Xiaomi also shared early industrial testing results for the new hand. In automotive assembly tests, CyberOne reportedly achieved a 90.2 percent success rate for nut-fastening tasks within a strict 76-second factory cycle over three hours of operation. To support broader research in robotic manipulation and embodied AI, Xiaomi said it used tactile gloves for direct data collection and has open-sourced the TacRefineNet framework along with 61 hours of raw tactile data. The company suggests that combining full-palm tactile sensing with active liquid cooling could help enable humanoid robots to operate continuously in industrial environments, where dexterity, reliability, and thermal stability are critical for deployment.
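Xiaomi has not published its control code; purely as an illustration of why palm-wide pressure data matters when cameras are obstructed, a force-feedback grasp update might look like the hypothetical sketch below. Every class name and threshold here is invented:

```python
# Purely illustrative sketch of force-feedback grasping driven by palm-wide
# tactile sensing, as described above. Nothing here is from Xiaomi's
# (unpublished) control code; all names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class TactileFrame:
    pressures_kpa: list[float]   # readings across the full-palm sensor array

def grip_force_update(frame: TactileFrame, target_kpa: float, gain: float) -> float:
    """Proportional adjustment toward a target mean contact pressure."""
    mean_p = sum(frame.pressures_kpa) / len(frame.pressures_kpa)
    return gain * (target_kpa - mean_p)   # positive -> squeeze harder

# One control tick: even with cameras blocked, pressure alone tells the
# controller whether to tighten or relax the grasp.
frame = TactileFrame(pressures_kpa=[2.0, 3.5, 1.5, 2.5])
print(grip_force_update(frame, target_kpa=3.0, gain=0.8))  # -> 0.5 (squeeze)
```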
Images (1):
|
|||||
| Video: Figure humanoid robot stuns Shawn Ryan in striking demo | https://interestingengineering.com/ai-r… | 1 | Apr 04, 2026 00:00 | active | |
Video: Figure humanoid robot stuns Shawn Ryan in striking demoURL: https://interestingengineering.com/ai-robotics/shawn-ryan-tests-figure-ais-humanoid Description: Shawn Ryan tests Figure AI’s humanoid robot as CEO Brett Adcock reveals how the AI-powered machine walks, balances, and works. Content:
Figure AI’s humanoid robot walks beside Shawn Ryan in a real-world demo. In a recent episode of the Shawn Ryan Show, host Shawn Ryan came face-to-face with something that until recently belonged mostly to science fiction. The former U.S. Navy SEAL and CIA contractor walked alongside a fully functioning AI-powered humanoid robot. The machine, Figure 03, developed by robotics startup Figure AI, is designed to perform many of the same tasks humans do, from folding laundry and washing dishes to working in factories and logistics centers. During the walkthrough demonstration with Figure AI founder and CEO Brett Adcock, Ryan interacted directly with the robot, testing its balance, movement, and responsiveness. The brief tour followed a much longer interview on the show, during which Adcock explained how his company is racing to build general-purpose humanoid robots that could eventually become commonplace in workplaces and possibly homes. The short demonstration video shows the Figure 03 robot walking beside Ryan, guided entirely by AI. According to Adcock, the robot stands about 5 feet 6 inches tall and weighs roughly 130-135 pounds, placing it close to human proportions. Unlike earlier robotics systems that relied heavily on scripted movements, the robot’s locomotion and actions are controlled through a neural network. As Adcock explained during the demo, the walking motion is generated by AI rather than traditional coded instructions. The robot contains around 40 joints, powered by electric motors equipped with sensors that help it maintain balance and perform tasks. Ryan, impressed by the light, foam-like exterior, questioned the robot’s durability and its ability to recover if it fell. Fall recovery is an essential feature for robots operating in real-world environments and a critical part of any humanoid evaluation. While Figure is trained in simulation for dynamic stability, strength, and coordination, Adcock remarked that the outcome depends entirely on how the body falls, and that the robots sometimes even end up with broken necks. Another feature highlighted in the walkthrough is the robot’s hands. Cameras embedded in the palms help the machine visually track objects as it grasps them, while tactile sensors in every fingertip measure pressure during contact. This combination enables the robot to perform dexterous tasks. According to Adcock, Figure’s machines can lift boxes weighing up to 40 pounds and even fold a T-shirt.
During the demonstration, Ryan jokingly asked whether the robot could crush his hand when shaking it. Adcock reassured him that the machine’s force control prevents such scenarios. While the demonstration showcased the robot’s movement and interaction, the podcast’s longer conversation focused on Figure AI’s broader ambitions. Founded in 2022, the company aims to develop general-purpose humanoid robots capable of replacing or assisting human labor in industries facing worker shortages. Adcock said early deployments are focused on commercial environments such as manufacturing and logistics. The company already works with several major partners, including BMW, where the robots are being tested in manufacturing settings. Figure is also collaborating with large logistics and real estate organizations to evaluate how humanoid robots could integrate into industrial workflows. Inside the robot’s torso sits most of its computing hardware, including GPUs and battery systems that power the machine. According to Adcock, a fully charged robot can operate for about four to five hours, after which it can recharge in roughly one hour. One unusual design feature is the charging system. Instead of plugging in cables, the robot charges wirelessly through pads embedded in its feet, allowing it to replenish energy simply by standing on a charging mat. Adcock compared the development of humanoid robots to the early years of smartphones, predicting rapid improvements with each generation of hardware. “This will look like the iPhone lineup,” he told Ryan, suggesting each new version will bring major improvements in capability. Figure AI’s ambitions go beyond building a handful of demonstration machines. According to Adcock, the company has already set up a manufacturing facility capable of producing robots on an increasing scale. When the production line is running, the factory can currently assemble one robot roughly every 90 minutes. In the long term, the company hopes to dramatically increase that output. He suggested that humanoid robots could eventually reach production levels comparable to consumer electronics, potentially reaching millions of units per year. The ultimate goal, he added, is a future where robots become as ubiquitous as smartphones, possibly even approaching a “robot for every human.” Humanoids are increasingly appearing outside the lab. Last week, a Figure humanoid robot made an appearance at a White House event focused on artificial intelligence, greeting attendees and demonstrating its capabilities. The widely publicized moment signaled how quickly humanoid robotics is moving from experimental prototypes into the public spotlight. The technology is increasingly entering mainstream discussion.
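The production figures quoted in the interview imply a large scaling gap, which a quick back-of-the-envelope calculation (assuming, generously, one line running around the clock) makes concrete:

```python
# Quick arithmetic on the production figures quoted above, assuming as an
# illustration only that a single line runs 24/7 with no downtime.
minutes_per_robot = 90
robots_per_day = 24 * 60 / minutes_per_robot        # 16 per line per day
robots_per_year = robots_per_day * 365              # ~5,840 per line per year
print(f"{robots_per_year:,.0f} robots/year at one per 90 minutes")

target = 1_000_000                                  # "millions per year"
lines_needed = target / robots_per_year             # ~171 lines per million units
print(f"~{lines_needed:.0f} such lines (or equivalent cycle-time cuts) per million units")
```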
Images (1):
|
|||||
| Figure AI's robot on the Shawn Ryan Show. The humanoid impresses … | https://www.chip.pl/2026/04/robot-od-fi… | 1 | Apr 04, 2026 00:00 | active | |
Figure AI's robot on the Shawn Ryan Show. The humanoid impresses with its capabilitiesDescription: In one of the latest episodes of the "Shawn Ryan Show," the host, a veteran known for keeping both feet firmly on the ground, came face to face with the Figure 03 model. And what Content:
The startup, chasing its dream of a robotics revolution at breakneck pace, has once again proven that its humanoids are ready to leave the lab for the factory floor, and in the future, our homes. The biggest surprise of the demonstration, led by company founder Brett Adcock, was the way the robot moves. Unlike older-generation machines that moved according to rigidly programmed scripts, Figure 03 relies on "AI-generated movement": every step, handshake, and postural correction is handled by a neural network rather than lines of traditional code. Standing about 168 cm tall and weighing close to 60 kg, the robot has near-human proportions, which allows it to operate in environments designed for us. Testing the machine's responsiveness, Shawn Ryan noted its soft, foam-like finish and asked about durability. Adcock candidly admitted that although the robots are trained in advanced simulations, real-world falls remain risky and sometimes even end in a "broken neck." Nevertheless, progress in motor coordination is colossal. Figure 03 has 40 joints driven by electric motors, and its hands are a masterpiece of engineering: the robot can lift crates weighing up to 18 kg, making it an ideal candidate for work in logistics centers. Interestingly, the charging system is fully wireless: the robot replenishes its energy (enough for 4-5 hours of operation) simply by standing on a special charging mat. We already know that the companies behind these robots do not want to limit themselves to demonstrations, however spectacular. Adcock compares the current stage of humanoid development to the early years of smartphones, predicting that each successive generation (much like successive iPhone models) will bring step-change improvements in capability. The company is no longer building prototypes in a garage; it has a fully functional factory currently able to assemble one robot in about 90 minutes. Ultimately the startup wants to produce millions of units per year, pursuing its vision of "a robot for every human." Figure AI's machines are already being tested at BMW plants, where they are proving themselves under demanding production conditions. How close this technology is to the mainstream is clear from the fact that one of Figure's robots recently appeared at the White House, greeting guests at an event devoted to artificial intelligence. Going from experiment to official state showcases took the company just four years, which is impressive. Of course, there is still a long way to go before these ambitious goals are met, but if they are, robots may cease to be a curiosity and become everyday reality within a few years. Source: Shawn Ryan Show
Images (1):
|
|||||
| Humanoid robots, yet another speculative bubble that will … | https://www.generation-nt.com/actualite… | 1 | Apr 03, 2026 08:00 | active | |
Humanoid robots, yet another speculative bubble set to disappointDescription: GNT is the French hi-tech portal devoted to new technologies (internet, software, hardware, mobility, enterprise) and to PC and console video games. Content:
The humanoid robot sector, propped up by billions in investment, may be facing a bubble, according to eminent figures such as Rodney Brooks (co-founder of iRobot) and Yann LeCun (Meta's chief AI scientist). They point to the inability of current approaches, particularly in dexterity and general intelligence, to justify the promises made by companies like Tesla and Figure. The race to develop autonomous, general-purpose humanoid robots is booming. Companies like Figure, recently valued at an "astonishing" $39 billion after a funding round exceeding a billion dollars, and Tesla with its Optimus project, harbor outsized ambitions. Figure's CEO, Brett Adcock, and Elon Musk promise a significant economic impact within five years. However, two of the field's most respected minds have just cast a shadow over this financial optimism. They believe we are in the initial phase of the hype cycle for humanoids, just as general artificial intelligence begins descending from its own peak. This dichotomy between financial optimism and technological reality is at the heart of their warning. Rodney Brooks, a renowned roboticist who spent decades at MIT, co-authored an essay explaining "Why Today's Humanoids Won't Learn Dexterity." His verdict is unequivocal: the hundreds of millions, even billions, of dollars that venture capitalists and big tech companies are pouring into training them are being spent on an approach that cannot succeed. In his view, believing that human-level dexterity will be reached in the coming decades is "pure fantasy." The heart of the problem lies in the hands. Human hands have roughly 17,000 specialized touch receptors, and according to Brooks, no current robotic technology comes close to that capacity. While machine learning transformed speech recognition and image processing thanks to decades of task-specific data, no equivalent "tradition" exists for the touch data robots would need. Attempts by companies such as Figure and Tesla to teach robots dexterity by showing them videos of humans performing tasks draw particular criticism from the iRobot co-founder. He stresses that efforts to build human-like hands, though decades old, have yet to resolve this fundamental bottleneck in acquiring rich sensory data. For his part, Yann LeCun, Turing Award laureate and deep learning pioneer, takes aim at the intelligence of these machines itself. Meta's chief scientist warned at MIT's inaugural Generative AI Impact symposium that the industry's "big secret" is that none of these companies has any idea how to make these robots smart enough to be "generally useful." He clarifies that while robots can be trained for specific tasks, as in manufacturing, the household robot will require major breakthroughs in AI.
LeCun believes today's large language models (LLMs) are not the answer. He recalls that a four-year-old child has taken in as much high-bandwidth visual data as the largest LLM has ingested public text, underscoring that "we will never reach human-level intelligence by training on text alone." To break the deadlock, the future lies in what are called world models: AI systems that learn to understand the physical world from sensory data (video). The goal is to predict the future state of the world after an action the agent imagines taking. LeCun, who leads research on architectures such as V-JEPA, is convinced these models are the key to robots performing tasks zero-shot, without task-specific training. Beyond intelligence and dexterity, Rodney Brooks raises an often-neglected point: safety. Bipedal humanoid robots, given the massive energy they must expend to stay upright and walk, pose a real danger when they fall. This physical problem compounds the software challenges, leading Brooks to predict that in fifteen years or so, the humanoids that succeed will bear little resemblance to today's anthropomorphic models: they will likely have wheels, multiple arms, and specialized sensors, abandoning the human form for the sake of efficiency. The alarm raised by Brooks and LeCun forces the industry to confront a fundamental question: can massive funding of costly training experiments really lead to scalable mass production without first addressing the bottlenecks of fundamental AI? The debate is raging, and the five-year deadline set by some entrepreneurs will serve as the arbiter of the current humanoid form's viability.
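The article gives no implementation detail, but the planning idea behind world models, predicting the outcome of an imagined action and choosing among candidates, can be caricatured in a few lines. The toy one-dimensional "world" below is an invented stand-in, not V-JEPA:

```python
# Toy caricature of planning with a learned world model, as described above:
# imagine candidate actions, predict the next state for each, pick the one
# whose predicted state lies closest to the goal. A 1-D point world stands
# in for a learned latent space; nothing here is V-JEPA itself.
def world_model(state: float, action: float) -> float:
    # A real system learns this mapping from video; here it is hand-written.
    return state + action

def plan_step(state: float, goal: float, candidate_actions: list[float]) -> float:
    return min(candidate_actions,
               key=lambda a: abs(world_model(state, a) - goal))

state, goal = 0.0, 5.0
for _ in range(6):
    a = plan_step(state, goal, candidate_actions=[-1.0, 0.0, 1.0])
    state = world_model(state, a)      # act, then observe the new state
print(state)                           # -> 5.0, reached via imagined rollouts
```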
Images (1):
|
|||||
| Towards LLM-powered Assistive Drone for Blind and Low Vision Users … | https://hal.science/hal-05567674v1 | 1 | Apr 03, 2026 08:00 | active | |
Towards LLM-powered Assistive Drone for Blind and Low Vision Users - Archive ouverte HALURL: https://hal.science/hal-05567674v1 Description: Drones have gained traction as a versatile form of assistive robots for Blind and Low Vision (BLV) people. Nonetheless, novel interaction techniques are required to enable BLV people to communicate with drones naturally. In this work, we built an LLM-powered assistive drone for BLV users. We leverage an LLM to translate high-level user goals to step-by-step instructions for the drone and to extract visual information from the images. Through a formative study with BLV users (N=9), we identified envisioned use cases and desired interaction modalities. Then, we took a participatory and iterative approach to build a prototype, incorporating feedback received from 3 BLV users, as well as 5 domain experts. Finally, we conducted a user study with an additional 6 BLV participants to evaluate the iterated prototype, and received positive feedback. This work is contributing to a growing body of research on harnessing the power of LLMs to build a more inclusive world. Content:
Drones have gained traction as a versatile form of assistive robots for Blind and Low Vision (BLV) people. Nonetheless, novel interaction techniques are required to enable BLV people to communicate with drones naturally. In this work, we built an LLM-powered assistive drone for BLV users. We leverage an LLM to translate high-level user goals to step-by-step instructions for the drone and to extract visual information from the images. Through a formative study with BLV users (N=9), we identified envisioned use cases and desired interaction modalities. Then, we took a participatory and iterative approach to build a prototype, incorporating feedback received from 3 BLV users, as well as 5 domain experts. Finally, we conducted a user study with an additional 6 BLV participants to evaluate the iterated prototype, and received positive feedback. This work is contributing to a growing body of research on harnessing the power of LLMs to build a more inclusive world. https://hal.science/hal-05567674 Submitted on: Thursday, March 26, 2026, 08:38:42. Last modified on: Monday, March 30, 2026, 12:48:20.
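The paper's core pipeline, an LLM expanding a high-level user goal into validated step-by-step drone instructions, might look schematically like the sketch below. The command vocabulary and the llm callable are illustrative assumptions, not the authors' code:

```python
# Schematic sketch of the paper's idea: constrain an LLM to a small drone
# command vocabulary and have it expand a high-level goal into steps. The
# vocabulary and the `llm` callable are invented for illustration.
import json

COMMANDS = {"takeoff", "land", "move", "rotate", "describe_view"}

def plan(goal: str, llm) -> list[dict]:
    prompt = (f"Translate this goal into a JSON list of steps, each "
              f'{{"cmd": one of {sorted(COMMANDS)}, "arg": string}}.\n'
              f"Goal: {goal}")
    steps = json.loads(llm(prompt))
    # Validate before anything reaches the drone: unknown commands are refused.
    assert all(s["cmd"] in COMMANDS for s in steps), "unsafe command"
    return steps

# Stubbed model response so the sketch runs without a real LLM or drone.
fake_llm = lambda _: '[{"cmd": "takeoff", "arg": ""}, {"cmd": "describe_view", "arg": "exits"}]'
print(plan("help me find the exit", fake_llm))
```

Constraining the model to a fixed vocabulary and validating its output before execution is the safety-relevant design choice here; free-form generated commands would be far harder to trust on physical hardware.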
Images (1):
|
|||||
| Elon Musk announces disappointing Tesla Optimus update | https://www.teslarati.com/elon-musk-ann… | 1 | Apr 02, 2026 08:00 | active | |
Elon Musk announces disappointing Tesla Optimus updateURL: https://www.teslarati.com/elon-musk-announces-disappointing-tesla-optimus-update/ Description: Elon Musk announced a disappointing update to the unveiling of Tesla Optimus and its third-generation iteration, missing a timeline it aimed to hit in the first quarter of the year. Content:
Elon Musk announced a disappointing update on the unveiling of the third-generation Tesla Optimus, missing the timeline the company had aimed to hit in the first quarter of the year. Musk confirmed that the highly anticipated Optimus Gen 3 humanoid robot is already walking around and operational, yet the public unveiling will face a short delay as the company applies final refinements. In a post on X on March 31, Musk stated that Optimus 3 is mobile but requires some finishing touches before it is ready to be shown to the world. The update came on the final day of the first quarter, a period when Tesla had previously signaled expectations for a Gen 3 reveal. Optimus 3 is walking around, but needs some finishing touches before it’s ready to be shown — Elon Musk (@elonmusk) March 31, 2026 The announcement follows reports of Optimus Gen 3 appearing at the Tesla Diner in Los Angeles, where it was observed serving and moving about until sunset. Images and videos shared by observers captured the robot in action, highlighting its progress in real-world mobility. Tesla had aimed to showcase the production-intent version of Optimus Gen 3 during the first quarter of 2026, positioning it as a major step toward factory deployment and eventual commercial availability. Musk has described the robot as featuring advanced capabilities, including highly dexterous hands with significant degrees of freedom, powered by Tesla’s AI systems for complex tasks. This minor postponement aligns with Tesla’s iterative approach to development.
Earlier statements from Musk indicated that Gen 3 would represent the most advanced humanoid robot yet, designed primarily for internal factory use before scaling to external customers. Production timelines point toward low-volume output starting in the summer of 2026, with a volume ramp-up targeted for 2027. The delay underscores the company’s commitment to quality over speed, ensuring the robot meets rigorous standards for safety and performance in practical environments. Optimus represents a cornerstone of Tesla’s long-term vision beyond electric vehicles. Musk has repeatedly emphasized that successful humanoid robotics could transform industries by addressing labor shortages and enabling new forms of productivity. Competitors in the space continue to advance their own platforms, yet Tesla’s vertical integration, from custom actuators to end-to-end AI training, positions Optimus as a potential leader. Community reactions on social media range from excitement over visible progress to impatience with shifting timelines, a familiar pattern in Tesla’s innovation journey. Investors and enthusiasts view Optimus as critical to Tesla’s valuation, potentially surpassing its automotive business in scale. With the robot already demonstrating walking and basic interactions, the finishing touches likely involve software polishing, hardware fine-tuning, and reliability enhancements. Musk’s update suggests the reveal could arrive in the coming weeks or months, maintaining momentum toward broader deployment. As Tesla pushes the boundaries of physical artificial intelligence, this latest development keeps Optimus in the spotlight. The company continues to prioritize rapid iteration while delivering on its promises to shareholders and customers. The robotics revolution at Tesla appears closer than ever, promising profound impacts on manufacturing, services, and daily life in the years ahead.
The robotics revolution at Tesla appears closer than ever, promising profound impacts on manufacturing, services, and daily life in the years ahead. Advertisement Competitors in the space continue to advance their own platforms, yet Tesla’s vertical integration, from custom actuators to end-to-end AI training, positions Optimus as a potential leader. Community reactions on social media range from excitement over visible progress to impatience with shifting timelines, a familiar pattern in Tesla’s innovation journey.Advertisement Investors and enthusiasts view Optimus as critical to Tesla’s valuation, potentially surpassing its automotive business in scale. With the robot already demonstrating walking and basic interactions, the finishing touches likely involve software polishing, hardware fine-tuning, and reliability enhancements. Musk’s update suggests the reveal could arrive in the coming weeks or months, maintaining momentum toward broader deployment. As Tesla pushes the boundaries of physical artificial intelligence, this latest development keeps Optimus in the spotlight. The company continues to prioritize rapid iteration while delivering on its promises to shareholders and customers. The robotics revolution at Tesla appears closer than ever, promising profound impacts on manufacturing, services, and daily life in the years ahead. Advertisement Investors and enthusiasts view Optimus as critical to Tesla’s valuation, potentially surpassing its automotive business in scale. With the robot already demonstrating walking and basic interactions, the finishing touches likely involve software polishing, hardware fine-tuning, and reliability enhancements. Musk’s update suggests the reveal could arrive in the coming weeks or months, maintaining momentum toward broader deployment. As Tesla pushes the boundaries of physical artificial intelligence, this latest development keeps Optimus in the spotlight. The company continues to prioritize rapid iteration while delivering on its promises to shareholders and customers. The robotics revolution at Tesla appears closer than ever, promising profound impacts on manufacturing, services, and daily life in the years ahead. Advertisement Musk’s update suggests the reveal could arrive in the coming weeks or months, maintaining momentum toward broader deployment. As Tesla pushes the boundaries of physical artificial intelligence, this latest development keeps Optimus in the spotlight. The company continues to prioritize rapid iteration while delivering on its promises to shareholders and customers. The robotics revolution at Tesla appears closer than ever, promising profound impacts on manufacturing, services, and daily life in the years ahead. Advertisement As Tesla pushes the boundaries of physical artificial intelligence, this latest development keeps Optimus in the spotlight. The company continues to prioritize rapid iteration while delivering on its promises to shareholders and customers. The robotics revolution at Tesla appears closer than ever, promising profound impacts on manufacturing, services, and daily life in the years ahead. Advertisement Joey has been a journalist covering electric mobility at TESLARATI since August 2019. In his spare time, Joey is playing golf, watching MMA, or cheering on any of his favorite sports teams, including the Baltimore Ravens and Orioles, Miami Heat, Washington Capitals, and Penn State Nittany Lions. You can get in touch with joey at joey@teslarati.com. He is also on X @KlenderJoey. 
SpaceX files confidentially for a record-breaking IPO targeting a $1.75T valuation and $80B raise, driven by Starlink growth and its xAI merger.

Elon Musk’s rocket and satellite company submitted its draft registration to the U.S. Securities and Exchange Commission today for an initial public offering, targeting June at a $1.75 trillion valuation. This would be the largest in history.

SpaceX has filed confidentially with the SEC, as first reported by Bloomberg. At that valuation, SpaceX would be worth more than every S&P 500 company except Nvidia, Apple, Alphabet, Microsoft, and Amazon. The filing uses a confidential process that allows companies to work through SEC disclosures privately before initiating a public roadshow. With a June target, a formal prospectus is expected to go public in April or early May, after which SpaceX must wait at least 15 days before beginning investor marketing.

While SpaceX is best known for its Falcon 9 and Starship rockets, the $1.75 trillion valuation is anchored by Starlink, its satellite internet service. Starlink ended 2025 with 9.2 million subscribers and over $10 billion in revenue, a figure analysts project could reach a staggering $24 billion by the end of 2026. A February all-stock merger with xAI, Musk’s artificial intelligence venture, further boosted the valuation.

Bank of America, Goldman Sachs, JPMorgan Chase, and Morgan Stanley are lined up as senior underwriters. SpaceX is also considering a dual-class share structure to preserve insider voting control, and plans to allocate up to 30% of shares to retail investors, roughly three times the typical norm.
Elon Musk promises an official ceremony to mark the end of Tesla Model S and Model X production.

Tesla has officially begun winding down production of the Model S and Model X, sending farewell emails to U.S. customers on March 27 and updating the website to reflect the end of the line. Shoppers visiting Tesla.com now find only a limited set of Model S and Model X inventory units available for purchase, with no option to configure a new factory build. The move formalizes what CEO Elon Musk announced on the company’s Q4 2025 earnings call in January, when he said it was “time to basically bring the Model S and X programs to an end with an honorable discharge.”

Musk posted on X a throwback photo of himself speaking at the Model S production launch in 2012, noting, “We will have an official ceremony to mark the ending of an era. I love those cars.”

The mention of an official ceremony is notable. Tesla has not held a formal farewell event for a vehicle before, and Musk’s wording suggests this will be something deliberate rather than a quiet line shutdown. Given that Musk’s X post shows a photo of him on stage with a microphone in front of an audience at the Fremont factory, it would not be far-fetched to expect a closing ceremony at the same location. Whether it becomes a public event, a private gathering for employees, or a livestreamed moment on X remains to be seen.

“Custom orders of the Tesla Model S & X have come to an end. All that’s left are some in inventory. We will have an official ceremony to mark the ending of an era. I love those cars. This was me at production launch 14 years ago: pic.twitter.com/6kvCf9HTHc” — Elon Musk (@elonmusk) April 1, 2026

The Model S first went on sale nearly fifteen years ago and was Tesla’s first fully in-house designed vehicle, proving that an electric car could be fast, desirable, and capable of long distances on a single charge. The Model X followed in 2015, turning heads with its unmistakable falcon-wing doors while becoming one of the first all-electric SUVs on the market. Tesla’s two flagship vehicles would ultimately push legacy automakers to take all-electric transportation seriously and help fund development of the more affordable Model 3 and Model Y.

By 2025, however, both models had been reduced to a rounding error in Tesla’s sales figures. Musk was direct about what comes next, stating, “We are going to convert that production space to an Optimus factory. It’s part of our overall shift to an autonomous future.”

That shift is already underway. Tesla officially started Optimus Gen 3 production at its Fremont factory in January 2026, with the line targeting a run rate of one million units per year. The Gen 3 robot features 22 degrees of freedom per hand, runs on Tesla’s AI5 chip, and shares the same neural network architecture as Full Self-Driving. A dedicated Optimus factory at Gigafactory Texas is also under construction, with a planned annual capacity of 10 million units. The production lines that once built the Model S and Model X are being converted to support that ramp.

Tesla confirmed it will continue to support existing owners with service, software updates, and parts for as long as people own the vehicles. For buyers still interested in a new example, remaining U.S. inventory is discounted and the window is closing fast.

NASA’s Artemis II launches Wednesday, sending humans near the Moon for the first time since 1972.

For the first time since Apollo 17 touched down on the lunar surface in December 1972, the United States is sending humans back toward the Moon. NASA’s Artemis II mission is set to launch as early as this week from Kennedy Space Center in Florida, carrying four astronauts on a 10-day journey around the Moon and back to Earth. It will not land anyone on the surface this time, but it is the first crewed flight in over half a century to travel beyond low Earth orbit, and it sets the stage for Elon Musk’s SpaceX missions to follow.

The mission uses NASA’s Space Launch System rocket and the Orion spacecraft, which will fly around the Moon before splashing down in the Pacific Ocean around April 10. For context, an uncrewed Artemis I flew the same path in 2022, proving the hardware worked. Artemis II now tests it with people aboard.
According to NASA’s official countdown blog, launch preparations are on track with an 80 percent chance of favorable weather. “Hey, let’s go to the moon!” Commander Wiseman told reporters upon arriving at Kennedy Space Center.

Beyond Artemis II lies the lander question, and that is where SpaceX enters directly. In 2021, NASA awarded SpaceX a $2.89 billion contract to develop the Starship Human Landing System, a modified version of Starship designed to ferry astronauts from lunar orbit to the surface. The original plan called for SpaceX to deliver that lander for Artemis III, which was to be the first crewed lunar landing. Timing for Starship development, however, caused NASA to restructure the mission sequence entirely.

Before SpaceX’s Starship Human Landing System (HLS) can put anyone on the Moon, it has to solve a problem no rocket has demonstrated at scale: refueling in orbit. Because the Starship HLS requires approximately ten tanker launches’ worth of propellant loaded into a depot in low Earth orbit before it has enough fuel to reach the lunar surface, SpaceX plans to conduct this refueling process using its upgraded V3 Starship. Until that demonstration flies and succeeds, the Starship moon lander remains a question mark.

In February 2026, NASA Administrator Jared Isaacman confirmed that Artemis III, now planned for mid-2027, will instead test lunar landers in low Earth orbit, with the actual landing pushed to Artemis IV, targeted for 2028. Musk responded to earlier criticism of SpaceX’s schedule by posting on X that his company is “moving like lightning compared to the rest of the space industry,” adding that “Starship will end up doing the whole Moon mission.” The contract competition was also reopened in October 2025 by then NASA chief Sean Duffy, who cited Starship’s delays and said the agency needed speed given China’s stated goal of landing astronauts on the Moon by 2030.

“They won’t. SpaceX is moving like lightning compared to the rest of the space industry. Moreover, Starship will end up doing the whole Moon mission. Mark my words.” — Elon Musk (@elonmusk) October 20, 2025

Artemis came from the first Trump administration’s 2017 Space Policy Directive 1, which directed NASA to return humans to the Moon. The program picked up pace through the 2020s, with the Orion spacecraft and SLS taking years to develop at enormous cost. SpaceX entered the picture in 2021 as the chosen lander contractor, tying the commercial space sector into what had historically been an all-government undertaking.

Whether SpaceX’s Starship ultimately carries astronauts to the lunar surface or shares that role with Blue Origin’s competing lander, this week’s Artemis II launch is the necessary first step. Getting four humans to the Moon’s vicinity and back safely is the proof of concept everything else depends on.
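For rough intuition on the “approximately ten tanker launches” figure in the article above, here is a back-of-envelope estimate. The round numbers are assumptions for illustration, not figures from the article: suppose a fully fueled HLS needs on the order of 1,200 t of propellant in the depot, and each tanker delivers roughly 120 t per launch. Then

$$N_{\text{tankers}} \approx \frac{m_{\text{depot}}}{m_{\text{per tanker}}} \approx \frac{1200\ \text{t}}{120\ \text{t}} \approx 10.$$

The arithmetic also shows why tanker capacity dominates the architecture: halving the per-launch delivery doubles the number of launches, which is one reason an upgraded, higher-capacity Starship matters for the refueling plan.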
| CES 2026 Showcases Emotionally Intelligent Robots for All Age Groups | https://www.androidheadlines.com/2026/0… | 1 | Apr 02, 2026 00:03 | active | |
CES 2026 Showcases Emotionally Intelligent Robots for All Age Groups. Description: Mind With Heart Robotics Co., Ltd. has showcased its portfolio of emotionally intelligent robots at the CES 2026 show. Content:
Mind With Heart Robotics Co., Ltd. has showcased its new emotionally intelligent robots at CES 2026. The robots are designed to complement and support both older adults and children, using artificial intelligence and clinically backed algorithms to adapt naturally to the user.

The Consumer Electronics Show (CES) 2026 is in full swing, and Mind With Heart Robotics Co., Ltd. has unveiled a broad portfolio of emotionally intelligent robots at the show. These robots are designed to meet the emotional and therapeutic needs of individuals across various age groups, and the company is showcasing their natural movement, tactile interaction, and affective intelligence that adapts over time. The company says its robots are designed for both elder companionship and pediatric therapy, showing how social robotics is moving toward clinically informed, human-centered design for future care ecosystems worldwide. Robots are no longer limited to mechanical and work-related tasks.

The lineup is the result of years of research in affective computing and human-robot interaction led by founder and CEO Zhang Jiaming, who has more than a decade of experience in the field, has overseen dozens of robotic systems, and has filed extensive patents in biomimetic design and emotional AI. He designed the robots to read touch, voice, and behavior patterns and respond with lifelike motion, keeping clinical collaboration and data ethics in mind for long-term safety and accuracy. The robots can also adapt to sensitive care settings in homes, hospitals, and schools across different global markets.

The main highlight of the show was the new An’An panda cub robot, honored by the Consumer Technology Association with a CES Innovation Award in artificial intelligence. It is designed specifically to address loneliness and support care for older adults, using full-body tactile sensing and long-term memory to personalize interaction. Alongside An’An, the firm showcased its Duncan Series companion robots, which are meant for pediatric therapy, including support for children with autism and sensory challenges. The lineup was designed with skills spanning communication, social interaction, motor development, play, and emotional well-being in mind.

Mind With Heart Robotics says it is planning a commercial release of all the robots in March, with products accessible worldwide across consumer, healthcare, and institutional markets.

I am an experienced consumer tech writer dedicated to producing comprehensive guides and news that empower readers. My passion for technology drives me, and you can often find me exploring Tech Twitter. Feel free to reach out to me at: [email protected].
Images (1):
|
|||||
| McDonald's experimenting with robot employees that look like humans — … | https://nypost.com/2026/03/22/world-new… | 1 | Apr 02, 2026 00:03 | active | |
McDonald's experimenting with robot employees that look like humans — and even dress in uniformDescription: A McDonald's in a Chinese city welcomed humanoid robots to serve up meals and entertain customers -- but only for a limited time. Content:
A McDonald's in a Chinese city welcomed humanoid robots to serve up meals and entertain customers, but only for a limited time. Videos posted on social media captured the lifelike robots at a McDonald's in Shanghai performing routine tasks typically completed by human workers, such as greeting customers and delivering food. Diners were seen interacting with the robots, dressed in the fast-food joint's iconic red-and-yellow uniforms behind counters, while children chased more of the moving machinery disguised as cute animals. The robots, supplied by Chinese firm Keenon Robotics, were deployed as part of a trial at the McDonald's location, Digitaltrends reported. McDonald's said the robots were only around for five days, from March 14 to 19, and were meant to plug the grand opening of the Shanghai Science and Technology Museum restaurant. "Our Humanoid series are leading the squad and hitting the streets," Keenon Robotics posted on social media alongside a clip of the robots interacting with diners. "It's a showcase of how service automation is becoming a seamless part of global dining, and how technology brings more smiles to every mealtime," the company added. Jon Banner, the executive vice president and global chief impact officer of the fast-food giant, tweeted that the robots were there for a "temporary greeting." "Mission accomplished!" he said. "The robots were not involved in any service or operational functions. And if you didn't visit prior to today, you missed them." The footage comes amid concerns over artificial intelligence and robots replacing tasks typically completed by human workers at large corporations. In July, the Wall Street Journal reported that Amazon will soon use more robots in its warehouses than human employees, with more than 1 million machines already deployed across facilities. Many of these robots handle the heavy lifting in warehouse work, picking items from tall shelves and moving goods around facilities. Others are advanced enough to help humans sort and package orders, according to the Wall Street Journal. Three-quarters of Amazon's global deliveries are now assisted by robots in some way, according to the company.
Images (1):
|
|||||
| The Figure 03 robot now cleans and tidies the house | https://www.tecnoandroid.it/2026/03/13/… | 1 | Mar 31, 2026 08:01 | active | |
The Figure 03 robot now cleans and tidies the houseURL: https://www.tecnoandroid.it/2026/03/13/il-robot-figure-03-ora-pulisce-e-riordina-casa-1813224/ Description: The Figure 03 humanoid robot shows new progress in household chores thanks to the Helix 02 AI platform. Here are the details. Content:
In home robotics, demo videos have become something of a tradition. Among the most active companies in this respect is Figure AI, which has attracted a great deal of attention with its humanoid robots designed to work alongside humans. The latest star of these demonstrations is Figure 03, a model built to tackle household chores. The company's new video shows the robot working through a small domestic routine: it moves among furniture and objects, picking up toys left on the floor, arranging the sofa cushions, and wiping down a few surfaces with a cloth. Such scenes might seem almost mundane, but their very ordinariness is what makes the demonstration interesting. The goal is not to impress with spectacular movements but to show that a robot can interact with a real home environment, where nothing is perfectly tidy. This is not the first time the company has shown off its robots' capabilities. The previous model, Figure 02, had already demonstrated notable skill at manipulating objects; in some demonstrations it could be seen picking out items of clothing or organizing objects with precise movements. With the new robot, the focus seems to shift even further toward handling less predictable domestic situations. Underpinning these capabilities is the company's AI system, called Helix 02. The platform integrates several elements fundamental to modern robotics: computer vision to recognize objects and environments, language understanding to interpret instructions, and a planning component that translates the gathered information into concrete actions. One interesting detail concerns the robot's speed. In the video, its movements are noticeably slower than a person's. This is not a technological limitation but a deliberate safety choice: in a home, where the robot may be close to people or pets, keeping movements controlled and predictable reduces risk. Despite the progress shown in the video, Figure AI has not yet announced when robots like Figure 03 might reach the market. Before commercialization, the company will need to collect large amounts of data and demonstrate that the system can work reliably.
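Figure AI has not published Helix 02's internals, but the article's description of the pipeline (vision to recognize objects, language understanding to interpret instructions, a planner that turns both into actions) maps onto a standard perceive-plan-act structure. The sketch below is purely illustrative Python under that assumption; every type and function in it (Observation, Action, perceive, plan, act) is hypothetical and is not Figure AI's API.

```python
# Illustrative only: Figure AI has not published Helix 02's internals.
# This mocks the three stages the article describes: visual perception,
# language understanding, and a planner that turns both into actions.
from dataclasses import dataclass


@dataclass
class Observation:
    objects: list[str]  # labels a vision model might emit, e.g. ["toy", "cushion"]


@dataclass
class Action:
    verb: str
    target: str


def perceive(frame: bytes) -> Observation:
    """Stand-in for a vision model; a real system would run inference here."""
    return Observation(objects=["toy", "cushion"])


def plan(instruction: str, obs: Observation) -> list[Action]:
    """Toy planner: map a natural-language chore onto the objects in view."""
    verb = "tidy" if "tidy" in instruction.lower() else "inspect"
    return [Action(verb=verb, target=obj) for obj in obs.objects]


def act(action: Action) -> None:
    """Stand-in for motor control, which the article notes is kept slow for safety."""
    print(f"{action.verb} -> {action.target}")


if __name__ == "__main__":
    obs = perceive(frame=b"")  # hypothetical camera frame
    for step in plan("Please tidy the living room", obs):
        act(step)
```

Run directly, this prints one toy action per detected object; a real stack would replace each stand-in with model inference and motor control.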
Images (1):
|
|||||
| Figure AI: The robotics company hosted by Melania Trump | https://www.cnbc.com/2026/03/26/figure-… | 1 | Mar 31, 2026 08:01 | active | |
Figure AI: The robotics company hosted by Melania TrumpURL: https://www.cnbc.com/2026/03/26/figure-ai-the-robotics-company-hosted-by-melania-trump.html Description: The White House hosted its first humanoid robot guest, with first lady Melania Trump appearing alongside a robot from startup Figure AI. Content:
The White House hosted its "first humanoid robot guest" on Wednesday, with first lady Melania Trump appearing alongside a robot from robotics startup Figure AI. The robot, identified as Figure 3, accompanied the first lady during the second day of the Fostering the Future Together Global Coalition Summit, a gathering focused on technology and children's education. The machine greeted attendees in multiple languages and described itself as "a humanoid built in the United States of America," according to widely circulated footage from the event. The display represented one of, if not the, highest-profile showcases of humanoid robotics in the U.S. to date and highlights how the technology is becoming a national priority amid global tech competition. Beijing has also promoted humanoid robots at highly publicized events this year. The first lady used the robot to promote her push for artificial intelligence in children's education, suggesting that the robots could one day act as interactive educators at home. However, Figure AI says its third-generation humanoids are also applicable for more general purposes, including commercial and household tasks. The White House spotlight is likely to boost the brand of Nvidia-backed Figure AI, a lesser-known robot company compared to larger humanoid players like Tesla's Optimus and Boston Dynamics, though some of its team comes from those competitors, as well as tech giants like Apple. Figure AI was founded in 2022 by Brett Adcock, a tech entrepreneur and billionaire who previously co-founded the publicly traded aerospace company Archer Aviation and the digital hiring marketplace Vettery. Powering its robots is the firm's in-house Helix AI system, a vision-language-action model that enables learning through observation and verbal commands. Amid growing investor excitement for physical AI, the firm raised more than $1 billion in its Series C funding round in September, led by Parkway Venture Capital with participation from other notable investors such as Nvidia, Intel Capital, Qualcomm Ventures and Salesforce. That gave it a post-money valuation of $39 billion. The fundraising is expected to be put toward the firm's aim to deploy thousands of robots in homes and logistics over the coming years, a goal that has likely been made easier by a major endorsement from the White House. Figure AI has already begun work with its first commercial customer, BMW, deploying its robots for tasks like handling sheet metal parts in manufacturing facilities. It's possible that Melania's endorsement of Figure AI's robots as potential educators will trigger a reexamination of an ongoing lawsuit the company found itself in last year. In November, Figure AI was sued by its former head of product safety, who alleged he was fired after warning executives that the company's robots were powerful enough to fracture a human skull. Robert Gruendel filed the complaint in federal court in California, claiming wrongful termination after raising safety concerns with CEO Brett Adcock and chief engineer Kyle Edelberg in September 2025. The suit stated that Figure AI's next-generation robots moved at superhuman speed and generated force approximately twice the level necessary to fracture an adult human skull. Gruendel also alleged that one robot had carved a gash into a steel refrigerator door during a malfunction. Figure AI contends that Gruendel was fired for poor performance, and described the allegations as "falsehoods."
Figure AI countersued in January, saying Gruendel failed in his role to help the company build a safe robot. The lawsuit drew attention to broader questions about safety standards in humanoid robotics development and remains pending. Interestingly, the White House event on Wednesday wasn't the first time that a company connected to Adcock received major shine from the Trump administration. Shares of the aerospace company he co-founded, Archer Aviation, surged in June last year after U.S. President Donald Trump signed an executive order directing the establishment of a program to promote the safe integration of electric air taxis in U.S. cities. Archer is participating in the initiative and is working on projects involving aircraft demonstrations. Following the June 2025 executive order, Archer raised $850 million in a registered direct stock offering. Adcock co-founded Archer Aviation in 2018 with Adam Goldstein and initially served as co-CEO. However, Adcock stepped down in April 2022 and resigned from the company's board of directors shortly afterward. He remains a shareholder, according to investment research platform Business Quant, but has no active executive, board, or advisory position at the company. Correction: This story has been updated to reflect that Archer Aviation is an aerospace company. An earlier version of the story gave an incorrect description of the firm's business.
Images (1):
|
|||||
| Are Humanoid Robots Really That Advanced Now? | HowStuffWorks | https://science.howstuffworks.com/human… | 1 | Mar 30, 2026 16:00 | active | |
Are Humanoid Robots Really That Advanced Now? | HowStuffWorksURL: https://science.howstuffworks.com/humanoid-robots.htm Description: Humanoid robots are machines designed to resemble the human body and replicate some humanlike abilities. Engineers in humanoid robotics build machines with arms, legs, and sensors that allow them to perform tasks in environments built for human beings. Content:
Humanoid robots are machines designed to resemble the human body and replicate some humanlike abilities. Engineers in humanoid robotics build machines with arms, legs, and sensors that allow them to perform tasks in environments built for human beings. Unlike many traditional industrial robots used in factories, humanoid robots aim to work alongside humans in real-world settings. Their humanlike structure helps them open doors, use tools, and interact with human operators. Rapid advances in artificial intelligence, machine learning, and robot hardware are pushing these systems from science fiction into reality. Researchers now test advanced humanoid robot platforms in homes, workplaces, and public spaces. Most humanoid robots copy the basic body plan of their human counterparts. Engineers design them with a torso, a head, robotic arms, and bipedal legs that allow humanlike movement. Complex mechanical components and motors give these machines many degrees of freedom, meaning they can move joints in multiple directions. This flexibility helps robots perform complex tasks that require human dexterity. Sensors such as cameras, tactile sensing systems, and force/torque sensors allow a robot to detect objects, adjust its grip, and maintain balance in complex environments. Modern humanoid robots rely heavily on artificial intelligence. AI models help robots understand their surroundings, track objects, and plan actions. Developers train these models using machine learning techniques such as imitation learning and reinforcement learning, which allow robots to learn new skills by observing humans or experimenting with actions. Data pipelines and control systems process information from sensors so the robot can react in real time. This ability helps humanoid robots navigate unstructured environments and maintain safe human-robot interaction. Several companies and research groups are developing humanoid robotics platforms. Boston Dynamics has explored agile robots capable of moving through difficult terrain. Agility Robotics created Digit, a robot designed for tasks such as carrying packages and moving totes in warehouses. PAL Robotics builds humanoid service robots used as development platforms for research. Other humanoid robots come from companies such as SoftBank Robotics, Hanson Robotics, and Engineered Arts. These machines often focus on social or customer-service roles, or on public demonstrations that showcase facial expressions and communication abilities. Humanoid robots can perform some manual tasks that once required human workers. Robotic arms and motor control allow some humanoid robots to manipulate tools or handle objects. Developers are training robots to help with household tasks such as cleaning or organizing items. In industrial settings, autonomous robots may assist humans with assembling parts, transporting materials, or monitoring equipment. Some robots can also be controlled remotely: human operators guide the machine while the robot provides mobility and strength in dangerous or distant environments. Many experts believe the first wave of humanoid robots will appear in workplaces where labor shortages exist. These robots may help complete repetitive or physically demanding tasks while working alongside humans.
Researchers continue improving balance, autonomous navigation, and humanlike motion so robots can operate in varied environments. Advances in dexterity and machine perception may allow robots to interact more naturally with people. While fully autonomous humanoid machines remain at an early stage, ongoing research described in publications such as IEEE Spectrum shows steady progress. As artificial intelligence improves, humanoid robots may become capable assistants in homes, hospitals, and workplaces across the world. We created this article in conjunction with AI technology, then made sure it was fact-checked and edited by a HowStuffWorks editor.
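The article's point about control systems processing sensor information so the robot can react in real time is the classic sense-plan-act loop. As a minimal sketch, assuming entirely hypothetical sensor and actuator interfaces (read_sensors, compute_command, and send_command are stand-ins, not any vendor's real API), it might look like this in Python:

```python
# A minimal sense-plan-act loop with hypothetical sensor and motor
# interfaces; real humanoid control stacks are far more elaborate.
import time


def read_sensors() -> dict:
    """Stand-in for camera, tactile, and force/torque readings."""
    return {"tilt_deg": 1.5, "grip_force_n": 4.0}


def compute_command(state: dict) -> dict:
    """Toy balance policy: counter any measured tilt."""
    return {"ankle_correction_deg": -state["tilt_deg"]}


def send_command(cmd: dict) -> None:
    """Stand-in for the actuator bus."""
    print(f"apply {cmd}")


def control_loop(hz: float = 100.0, steps: int = 3) -> None:
    """Run sense -> plan -> act at a fixed rate, the pattern described above."""
    period = 1.0 / hz
    for _ in range(steps):
        state = read_sensors()
        send_command(compute_command(state))
        time.sleep(period)


if __name__ == "__main__":
    control_loop()
```

The fixed-rate loop is the key design choice: reading sensors and issuing corrections at a steady frequency (a nominal 100 Hz here) is what lets a robot keep its balance in an unstructured environment.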
Images (1):
|
|||||
| Billionaire Brett Adcock Launches New Startup to Build Personal A.I. … | https://observer.com/2026/03/bret-adcoc… | 1 | Mar 30, 2026 08:00 | active | |
Billionaire Brett Adcock Launches New Startup to Build Personal A.I. | ObserverURL: https://observer.com/2026/03/bret-adcock-hark-personal-ai/ Description: Billionaire founder Brett Adcock is self-funding Hark, a lab that fuses multimodal A.I. with custom hardware to create assistants that think like humans. Content:
Brett Adcock has built and sold companies in robotics, security and air taxis, and now he wants to reinvent how people use A.I. His latest venture, Hark, is a new lab that pairs personalized intelligence with custom-built hardware. Instead of specializing in models or devices alone, Hark aims to own the whole pipeline—foundation models, software systems, hardware and user interfaces—under one roof. The company has recruited top talent from Apple and Meta to build an A.I. product that better bridges the gap between humans and machines. "The A.I. systems I use today are far from my vision of what the future should be," said Adcock in a statement. "We want to create intelligence that lets you offload your mental workload into a system that begins to think like you and sometimes ahead of you." Hark is the latest in a string of ambitious projects launched by Adcock. He previously founded the hiring marketplace Vettery; Archer, which builds electric vertical takeoff and landing aircraft (eVTOLs); and Cover, an A.I. security company developing weapon-detection systems. Adcock also remains CEO of Figure, a robotics startup he founded in 2022 that is developing humanoid robots to automate labor. Figure, which is testing A.I. agents on its robots but will remain a separate company from Hark, was most recently valued at $39 billion in 2025. For now, Hark is financed entirely by Adcock's own money: $100 million in personal capital. The entrepreneur, who has an estimated net worth of $19.1 billion, wants to build multimodal A.I. systems that handle speech, text, vision and context, layered with personalized memory, proactive behavior and real-time speech capabilities. Those systems are meant to work hand in hand with Hark's own hardware. Leading that effort is Abidur Chowdhury, hired as head of design after seven years as an industrial designer at Apple, where he worked on iPhone and Mac products such as the recent iPhone Air. "We believe that the future is a new interface that will understand you, intelligently anticipate your needs, and love doing tasks that you don't want to do," said Chowdhury in a statement. Hark's broader team includes A.I. researchers and engineers drawn from some of Silicon Valley's biggest firms. On the hardware side, hires include longtime Apple staffers like David Narajowski and Dave Wilkes, who worked on product development architecture and audio hardware systems. On the A.I. side, the company has brought in senior researchers from Meta's Superintelligence Lab, including Mingbo Ma, Xubo Liu, Xianfeng Rui, Kainan Peng and Zhihong Lei. Hark's headcount, which also includes talent from Google, Amazon and Tesla, is about 45 today and is expected to reach 100 in the first half of 2026. To speed up model development, Hark has struck a compute deal with Nvidia that will bring thousands of GPUs online next month for pre-training and post-training its systems. Hark is entering a crowded field of ventures trying to rethink how people interact with A.I. OpenAI has enlisted former Apple design chief Jony Ive for a still-secret device project, while Meta is betting heavily on A.I.-enabled smart glasses. Newer hardware startups like Sandbar have raised millions to develop wearables with personalized A.I.
at their core. Adcock says Hark will begin releasing its first A.I. models this summer, followed shortly by hardware devices designed around those systems. "We believe the next computing platform will be personal A.I.—intelligence that understands you and works alongside you every day," he said. "But that future only becomes possible when the entire stack is built together."
Images (1):
|
|||||