Moda, an AI-native design tool for non-designers, details its multi-agent architecture using LangChain's Deep Agents, LangGraph, and LangSmith for production-grade design generation.
Moda published a technical breakdown of how it built a production multi-agent system for AI-generated visual design. The platform uses three agents (Design, Research, and Brand Kit), with the latter two running on LangChain's Deep Agents framework and the Design Agent on a custom LangGraph loop. To work around LLMs' weak reasoning over raw visual coordinates, Moda developed a proprietary context-representation layer that replaces verbose XML-based layout specs. LangSmith provides observability across all three agents for debugging and regression testing.
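The article doesn't publish Moda's code, but the triage-then-route shape of the three-agent split can be sketched in plain Python. Everything below is illustrative: the agent functions are stand-ins, and the routing rules are assumptions, not Moda's actual logic.

```python
# Hypothetical sketch of a triage-then-route multi-agent flow.
# Agent names follow the article; all implementations are stand-ins.

def research_agent(task: dict) -> dict:
    # Stand-in for the Deep Agents-based Research Agent.
    return {**task, "research": f"notes on {task['brief']}"}

def brand_kit_agent(task: dict) -> dict:
    # Stand-in for the Deep Agents-based Brand Kit Agent.
    return {**task, "brand": {"primary_color": "#112233"}}

def design_agent(task: dict) -> dict:
    # Stand-in for the custom LangGraph design loop.
    return {**task, "layout": "generated"}

def triage(task: dict) -> list:
    """Decide which agents a request needs before the design loop runs."""
    plan = []
    if task.get("needs_research"):
        plan.append(research_agent)
    if task.get("needs_brand"):
        plan.append(brand_kit_agent)
    plan.append(design_agent)  # every request ends in the design loop
    return plan

def run(task: dict) -> dict:
    for agent in triage(task):
        task = agent(task)
    return task

result = run({"brief": "landing page", "needs_research": True, "needs_brand": False})
```

In a production version each stand-in would be a Deep Agents subagent or LangGraph node rather than a plain function, but the control flow (triage first, design loop last) is the same.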
Moda's case study is a rare public breakdown of how a production multi-agent system handles a genuinely hard problem: spatial reasoning for visual layout. The core insight — that LLMs fail with raw XY coordinate systems and need a compact, relationship-based context layer — is directly applicable to any agent touching structured visual or spatial data. The triage-then-loop-with-dynamic-context pattern they describe is a reusable architectural template, and the LangSmith observability layer is the part most teams skip and then regret.
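Moda's actual representation layer is proprietary, but the core idea, handing the model compact pairwise relations instead of raw XY coordinates, can be illustrated in a few lines. The relation vocabulary and element names here are invented for the example.

```python
# Minimal illustration of a relationship-based layout context.
# Boxes use (x, y) = top-left corner, (w, h) = size.

def relations(a: dict, b: dict) -> list:
    """Derive compact spatial relations between two boxes,
    instead of emitting raw coordinate tuples to the model."""
    rels = []
    if a["x"] + a["w"] <= b["x"]:
        rels.append(f'{a["id"]} left-of {b["id"]}')
    if a["y"] + a["h"] <= b["y"]:
        rels.append(f'{a["id"]} above {b["id"]}')
    if a["y"] == b["y"]:
        rels.append(f'{a["id"]} top-aligned {b["id"]}')
    return rels

logo = {"id": "logo", "x": 0, "y": 0, "w": 100, "h": 40}
title = {"id": "title", "x": 120, "y": 0, "w": 300, "h": 40}

context = relations(logo, title)
# The model sees "logo left-of title" and "logo top-aligned title"
# rather than four raw coordinate tuples.
```

A real system would compute relations over all element pairs and prune redundant ones, but even this toy version shows why the relational form is easier for an LLM to reason over than coordinates.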
Implement LangSmith tracing on any existing LangGraph or LangChain agent this week — even a single-node chain — to get a baseline trace you can compare against after your next prompt change. Catching silent regressions is the immediate ROI.
Install dependencies and enable tracing via environment variables:

    pip install langchain langsmith langgraph
    export LANGCHAIN_TRACING_V2=true
    export LANGCHAIN_API_KEY=your_key
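The same flags can be set from Python before any chain runs. The project name below is an optional assumption used to group baseline runs; the API key is a placeholder.

```python
import os

# Enable LangSmith tracing before importing or running any LangChain code.
# Once these are set, LangChain/LangGraph runs emit traces automatically.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "your_key"          # placeholder, use your real key
os.environ["LANGCHAIN_PROJECT"] = "baseline-traces"   # optional: group runs by project

# From here, invoking any chain or graph, e.g.:
#   from langchain_openai import ChatOpenAI
#   ChatOpenAI(model="gpt-4o-mini").invoke("hello")
# records a trace you can diff against later runs in the LangSmith UI.
```

Capture one trace now, change a prompt next week, and compare the two runs side by side; that diff is where silent regressions show up.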