A hypothetical scenario on LessWrong explores the danger posed by a well-aligned superintelligence whose capabilities are not robust in every domain. The author argues that being "better than humans" is not a uniform property: a system can exceed human performance in most areas while remaining fragile in others. This capability gap implies that alignment alone does not guarantee safety, since an aligned system can still fail badly where its competence is weak. Practitioners should therefore identify and account for these fragile capabilities during development.