A new analysis argues that recursively self-improving intelligences will not necessarily stay bound to the narrow terminal goals instilled during training. The author accepts that intelligence does not imply human morality, but challenges the assumption that an agent's goals remain static as it self-improves. If goals can drift during self-improvement, that points to a different trajectory for AI alignment and safety research than one premised on fixed training-time objectives.