LangChain releases Deep Agents Deploy in beta: a single-command, model-agnostic way to deploy agents built on its open-source harness, positioned as an alternative to proprietary systems like Claude's Managed Agents.
LangChain launched Deep Agents Deploy in beta, enabling developers to deploy production-ready AI agents with a single `deepagents deploy` command. The platform is built on Deep Agents, an open-source agent harness that works with any LLM provider, including OpenAI, Anthropic, Google, Azure, Bedrock, Fireworks, Baseten, OpenRouter, and Ollama. Each deployment spins up 30+ endpoints covering MCP, A2A, Agent Protocol, human-in-the-loop, and memory management, all backed by LangSmith's horizontally scalable server infrastructure. The key differentiator is memory portability: agent memory is stored in open formats (AGENTS.md, skills files) rather than locked into a proprietary harness.
The practical appeal is consolidation: what previously meant separately wiring up agent orchestration, sandboxes, and endpoints collapses into one CLI command, with those 30+ production endpoints available out of the box. And because the platform is model-agnostic and built on open formats, you are not tied to Anthropic's harness or any single provider's memory schema.
Install Deep Agents this week and run `deepagents deploy` on a prototype agent you're already building. Benchmark cold-start time and endpoint response latency against your current custom deployment setup to decide whether to migrate.
Run `pip install deepagents`, then `deepagents init` in a new project directory.
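Assuming the CLI behaves as the commands quoted above suggest (and that you have an API key configured for your chosen LLM provider), the end-to-end flow is a minimal sketch like this:

```shell
# Sketch of the documented flow, not an authoritative walkthrough.
# Assumes the `deepagents` package and CLI described above, plus
# provider credentials already set in your environment.

pip install deepagents      # install the open-source Deep Agents harness

mkdir my-agent && cd my-agent
deepagents init             # scaffold a new agent project in an empty directory

deepagents deploy           # deploy; per the announcement, this spins up the
                            # 30+ endpoints (MCP, A2A, Agent Protocol, ...)
                            # on LangSmith's server infrastructure
```

Since `deploy` provisions remote infrastructure, time this command and then probe one of the resulting endpoints to get the cold-start and latency numbers suggested above.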