Anthropic just made its latest move in the AI coding wars


The AI coding wars are heating up. One of the main battlegrounds? “Context windows,” or an AI model’s working memory — the amount of text it can take into account when it’s coming up with an answer. On that front, Anthropic just gained some ground. Today, the AI startup announced a 5x increase in its context window as it races to compete with OpenAI, Google, and other major players.

Context windows are measured in tokens, and Anthropic’s new context window for Claude Sonnet 4, one of its most powerful AI models, can handle 1 million tokens. For reference, Anthropic has said in the past that a 500k context window can handle about 100 half-hour sales conversations or 15 financial reports. The new context window is double that, allowing users to analyze dozens of research papers or hundreds of documents in a single API request, per Anthropic.
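For a rough sense of what "a single API request" at that scale looks like, here is a minimal sketch using Anthropic's Python SDK: it packs a folder of documents into one message and asks the API how many tokens the request would consume. The model ID, the token-counting call, and the file paths are assumptions drawn from Anthropic's public SDK documentation, not from this story.

```python
# Minimal sketch, assuming the official `anthropic` Python SDK and the current
# Sonnet 4 model ID; the "reports" folder is a placeholder. The token-counting
# call estimates how much of the 1-million-token window a request would use
# before it is actually sent.
from pathlib import Path
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Pack a folder of documents into a single user message.
docs = [p.read_text() for p in sorted(Path("reports").glob("*.txt"))]
prompt = "\n\n---\n\n".join(docs) + "\n\nSummarize the key findings across all of these reports."

count = client.messages.count_tokens(
    model="claude-sonnet-4-20250514",
    messages=[{"role": "user", "content": prompt}],
)
print(f"Request size: {count.input_tokens:,} of 1,000,000 tokens")
```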

Perhaps most importantly, the expanded window makes Claude far more capable at coding: instead of analyzing roughly 20,000 lines of code at a time under the previous 200k context window, it can now take in entire codebases of 75,000 to 110,000 lines.

“This is really cool because it’s one of the big barriers I’ve seen with customers,” Brad Abrams, product lead for Claude, told The Verge in an interview. “They have to break up their problems into these small chunks with our existing context window, and with a million tokens, the model can handle the entire scope of the context — handle problems at their full scale.”

Abrams said Sonnet 4 can now handle 2,500 pages of text, and that “a full copy of War and Peace easily fits in there.”

But Anthropic isn’t the first AI company to offer such a large context window. It’s playing catch-up: In April, OpenAI’s GPT-4.1 offered the same.

For companies like Anthropic and OpenAI, enterprise clients are willing to spend a lot of money on coding help — and that slice of concrete revenue is especially attractive to AI startups burning through cash at surprising rates. OpenAI and Anthropic, especially, have been going head-to-head in the AI coding race for quite a while, both vying to corner the market by racing to roll out competing features and passing each other on the ladder one rung at a time. Last week, OpenAI launched GPT-5, touting its coding benchmarks compared to competitors. Anthropic’s Claude has been known for its coding prowess, and so it makes sense the company wants to take back some power as it reportedly seeks to close a round that could value it as high as $170 billion.

Anthropic’s clients in sectors like coding, pharmaceuticals, retail, professional services, and legal services were especially interested in the new context window, Abrams said.

When asked whether OpenAI’s release of GPT-5 propelled Anthropic to make the new context window available sooner, Abrams said, “Look, we are moving at a fast clip here and just listening to customer feedback. Just two-and-a-half months ago we launched Opus 4 and Sonnet 4, and … one week ago, we launched Opus 4.1, and now we’re launching this 1 million context. I think it’s just showing how our enterprise customers are really eager to get these improvements, and we’re doing the best we can to get them out.”

The new context window is available today within the Anthropic API for certain customers — like those with Tier 4 and custom rate limits, meaning they’ve spent a considerable amount of time and money on the platform — and broader availability will roll out in the coming weeks, according to Anthropic’s blog post.
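For developers on those tiers, opting in looks roughly like the sketch below, with the long-context beta enabled per request through the SDK's beta client. The beta flag name and model ID are assumptions based on Anthropic's API documentation rather than details from this article, and may change as availability widens.

```python
# Minimal sketch, assuming Anthropic's Python SDK; the beta flag and model ID
# below are assumptions (check Anthropic's API docs for the current values).
from pathlib import Path
import anthropic

client = anthropic.Anthropic()

# Gather an entire (hypothetical) codebase into one prompt.
code = "\n\n".join(
    f"# File: {p}\n{p.read_text()}" for p in sorted(Path("my_project").rglob("*.py"))
)

response = client.beta.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=4096,
    betas=["context-1m-2025-08-07"],  # assumed opt-in flag for the 1M window
    messages=[{"role": "user", "content": code + "\n\nExplain how these modules fit together."}],
)
print(response.content[0].text)
```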

