Cursor launched 'Composer 2' without disclosing it was built on Moonshot AI's open-source Kimi K2.5, only admitting it after public exposure.
Cursor launched Composer 2, marketed as 'frontier-level coding intelligence,' without disclosing it was fine-tuned from Moonshot AI's open-source Kimi K2.5 model. An X user discovered the undisclosed base model via code identifiers. Cursor's VP acknowledged the base model, saying roughly 75% of the compute went into Cursor's own training. Cursor co-founder Aman Sanger admitted it was 'a miss' not to disclose this upfront. Kimi confirmed the usage was part of an authorized commercial partnership via Fireworks AI.
Cursor built Composer 2 on Kimi K2.5 with additional RL fine-tuning; by Cursor's own account, roughly 75% of the total compute went into that post-training. That is a legitimate open-source strategy, not a scam. The bigger technical signal: Kimi K2.5 is now validated as a strong coding base model by a $29B company. If Cursor can squeeze frontier-level performance out of it, so can you.
Pull Kimi K2.5 from Hugging Face or via Fireworks AI's API this week and run it against your current coding benchmark — if it matches GPT-4o on your task at lower cost, you have a new default base model for fine-tuning.
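A minimal sketch of such a side-by-side check, assuming both models sit behind OpenAI-compatible chat endpoints you have already wrapped as simple callables (the client wrappers and model labels here are placeholders, not a real benchmark harness):

```python
import time
from typing import Callable, Dict


def compare_models(clients: Dict[str, Callable[[str], str]], prompt: str) -> Dict[str, dict]:
    """Run one prompt through each model client, recording latency and raw output.

    `clients` maps a model label to a callable that takes a prompt string and
    returns the model's completion text (e.g. a thin wrapper around an
    OpenAI-compatible chat endpoint). Output quality is judged by you, manually.
    """
    results = {}
    for name, call in clients.items():
        start = time.perf_counter()
        output = call(prompt)
        results[name] = {
            "latency_s": round(time.perf_counter() - start, 3),
            "output": output,
        }
    return results


if __name__ == "__main__":
    # Stub clients stand in for real API wrappers around Kimi K2.5 and GPT-4o.
    prompt = "Write a Python function that parses nested JSON with error handling."
    stubs = {
        "kimi-k2.5": lambda p: "def parse(...): ...",
        "gpt-4o": lambda p: "def parse(...): ...",
    }
    for model, result in compare_models(stubs, prompt).items():
        print(model, result["latency_s"])
```

Swap the stubs for real client wrappers, run your own benchmark prompts through both, and compare latency numbers alongside a manual read of output quality.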
Go to fireworks.ai, find the Kimi K2.5 model endpoint, and run this prompt: 'Write a Python function that parses nested JSON with error handling and returns typed dataclasses.' Compare output quality and latency directly against your current model.
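To judge the model outputs, it helps to know what a solid answer to that prompt looks like. Here is one hand-written baseline; the `User`/`Address` dataclass shapes are illustrative, since the prompt leaves the schema open:

```python
import json
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Address:
    city: str
    zip_code: Optional[str] = None


@dataclass
class User:
    name: str
    addresses: List[Address]


def parse_users(raw: str) -> List[User]:
    """Parse a JSON array of users with nested addresses into typed dataclasses."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"invalid JSON: {exc}") from exc
    if not isinstance(data, list):
        raise ValueError("expected a top-level JSON array")
    users = []
    for item in data:
        try:
            addresses = [
                Address(city=a["city"], zip_code=a.get("zip"))
                for a in item.get("addresses", [])
            ]
            users.append(User(name=item["name"], addresses=addresses))
        except (KeyError, TypeError) as exc:
            raise ValueError(f"malformed user record: {item!r}") from exc
    return users
```

A model response that validates its input, converts nested structures into typed fields, and raises informative errors (rather than letting raw `KeyError`s escape) is in the same quality band as this baseline.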