One of Anthropic's models remains unreleased due to safety risks: the company judged that the system's capabilities exceeded its current alignment safeguards. The decision highlights the tension between rapid capability scaling and risk mitigation. Practitioners should watch whether such safety thresholds lead to delayed feature parity across the LLM industry.