Meta Superintelligence Labs debuts Muse Spark, a multimodal model rolling out across Meta's apps, with API access in private preview.
Meta Superintelligence Labs launched Muse Spark, its first model since Mark Zuckerberg's multibillion-dollar AI overhaul that followed Llama 4's disappointing reception. The model is live now in the Meta AI app and website in the US, with rollouts to WhatsApp, Instagram, Facebook, Messenger, and Meta's Ray-Ban smart glasses coming in the next few weeks. Muse Spark supports multimodal input (text + images), multi-agent query handling, and two modes: 'Instant' for speed and 'Thinking' for deeper reasoning. API access is available to select partners via private preview, with Muse positioned as the first in a new model series.
Muse Spark's API is in private preview, meaning access is gated and competitive right now. The multimodal + multi-agent architecture is interesting for developers building on Meta's ecosystem, especially for Ray-Ban glasses integrations where vision input is native. The Instant/Thinking toggle suggests a latency-vs-quality tradeoff similar to the fast and extended-reasoning modes Anthropic and OpenAI already offer, so the real question is whether Meta's distribution (3B+ users) makes it worth taking on a second AI provider dependency.
Request early API access via Meta's developer portal this week — if granted, benchmark Muse Spark's Thinking mode against Claude Sonnet on your hardest multi-step reasoning task and compare cost per 1K tokens.
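For the cost comparison above, a small helper makes the per-task arithmetic explicit. This is a minimal sketch: the token counts and per-1K-token rates below are illustrative placeholders, not published pricing for Muse Spark or Claude Sonnet (Muse Spark's rates are not public while the API is in private preview).

```python
def cost_usd(prompt_tokens: int, completion_tokens: int,
             price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Cost of one request given per-1K-token input/output prices."""
    return (prompt_tokens / 1000) * price_in_per_1k \
         + (completion_tokens / 1000) * price_out_per_1k

# Example: the same multi-step reasoning task run through two providers.
# All numbers are assumed for illustration only.
task = {"prompt_tokens": 2_400, "completion_tokens": 1_800}

muse_cost = cost_usd(**task, price_in_per_1k=0.003, price_out_per_1k=0.015)
claude_cost = cost_usd(**task, price_in_per_1k=0.003, price_out_per_1k=0.015)

print(f"Muse Spark Thinking (assumed rates): ${muse_cost:.4f}")
print(f"Claude Sonnet (assumed rates):       ${claude_cost:.4f}")
```

Run the same task through both APIs, plug the actual token counts and list prices into this helper, and you get a like-for-like cost per task rather than comparing headline per-1K rates in isolation.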
To try it yourself, go to meta.ai and open the chat interface.