Cursor’s Composer 2 Was Based on Kimi. That’s Fine. Not Saying So? No.
This week, Cursor launched Composer 2 — its new coding model — positioning it as “frontier-grade coding intelligence.” Within hours, a developer on X noticed that its internal model ID referenced Kimi. Cursor’s VP of Developer Education confirmed it: Composer 2 started from Moonshot AI’s open-source Kimi 2.5 base, with approximately 75% of the total compute coming from Cursor’s own additional training.
Co-founder Aman Sanger acknowledged it directly: “It was a mistake not to mention the Kimi base in our blog from the start.”
He’s right.
The Technical Picture Is Reasonable
Building on open-source base models is standard practice — most specialized coding models are built this way. Moonshot AI confirmed that the usage was fully licensed and part of an authorized commercial partnership through Fireworks AI. Kimi’s account on X even publicly congratulated Cursor. No one was harmed here.
The Communication Picture Is Another Story
When you raise $2.3 billion at a $29 billion valuation and market a model as frontier-grade, your users — companies, teams, individual developers on monthly subscriptions — make decisions based on what you tell them about your product. The model’s provenance is product information. Failing to disclose it isn’t a legal problem; it’s a trust problem.
This matters more and more as coding tools compete on model quality. When “our model” becomes a key differentiator, what exactly does “our model” mean? Fine-tuned from an open-source base? Trained from scratch? Distilled from a larger proprietary model? These aren’t trivial distinctions. They affect how you evaluate capability claims, how you reason about model behavior on specific workloads, and — yes — how you think about supply chain risk in enterprise contexts.
The geopolitical dimension will probably remain uncomfortable for U.S.-based AI companies. Building on base models of Chinese origin isn’t inherently problematic, but in the current climate it’s information that users and enterprise buyers have a reasonable interest in knowing upfront.
The Right Practice Is Simple
Disclose the base, describe what you built on top, and let the benchmarks speak. Cursor’s additional training may be genuinely substantial — the numbers they cite suggest a significant investment. That story is worth telling on its own merits.
The community tends to handle “we fine-tuned an open model” better than “we said nothing and got caught.” Cursor learned that this week.
Does the base model’s origin matter to you when choosing a coding tool? Or do only the benchmarks on your real use cases matter?
Source: TechCrunch
