GPT-5.5 and DeepSeek V4 Launch Within 24 Hours, and the Safety Question Is Who Gets to Answer It
The most consequential 48 hours of the 2026 model race happened this week. On April 23, OpenAI released GPT-5.5, its new flagship, to Plus, Pro, Business, and Enterprise users. On April 24, DeepSeek released a preview of V4, its next-generation open-source model. The back-to-back launches turned an abstract competition into a direct, real-time exchange.
GPT-5.5 arrives with what OpenAI calls its "strongest set of safeguards to date," including targeted red-teaming for cybersecurity and biology capabilities, feedback from nearly 200 early-access partners, and a deliberate decision to withhold API access while the company studies security implications. The model significantly improves long-context performance, maintaining quality past 128K tokens where GPT-5.4 degraded. DeepSeek V4 arrives in two variants: V4-Pro (1.6T total parameters, 49B active) and V4-Flash (284B total, 13B active), both supporting 1M-token context. Crucially, DeepSeek trained V4 partly on Huawei chips, reducing dependence on US export-controlled hardware.
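For readers weighing the two V4 variants, the gap between total and active parameters is the relevant number: in a mixture-of-experts model, only the active parameters contribute to per-token compute. The back-of-envelope ratio below is computed directly from the figures quoted above; it is an illustrative sketch, not a number from DeepSeek's documentation:

```python
# Rough MoE sparsity: fraction of parameters active per token,
# using the parameter counts quoted for each DeepSeek V4 variant.
variants = {
    "V4-Pro":   {"total_b": 1600, "active_b": 49},  # 1.6T total, 49B active
    "V4-Flash": {"total_b": 284,  "active_b": 13},  # 284B total, 13B active
}

for name, p in variants.items():
    ratio = p["active_b"] / p["total_b"]
    print(f"{name}: {ratio:.1%} of parameters active per token")
# V4-Pro: 3.1% of parameters active per token
# V4-Flash: 4.6% of parameters active per token
```

In other words, both variants route each token through only a few percent of their weights, which is how a 1.6T-parameter model can be served at a fraction of the compute its headline size suggests.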
The strategic subtext is clear. OpenAI is tightening control over distribution, releasing to consumers first and restricting API access until it can enforce safety constraints at the infrastructure level. DeepSeek is open-sourcing the weights immediately, letting anyone download and run the model. Transformer News asked the right question this week: "OpenAI shouldn't be deciding if its models are safe." Neither should any single company. But the only alternative on offer today is a patchwork of self-reported evaluations with no independent verification. For builders, both models represent meaningful capability jumps. For governance teams, the gap between model capability and independent safety evaluation just widened again.