A new analysis on LessWrong challenges the idea that work on AI welfare can safely be deferred until after an intelligence explosion. The author argues that early value lock-in or premature space colonization could permanently entrench unethical treatment of synthetic minds, making later correction impossible. On this view, AI welfare research warrants immediate prioritization to avoid irreversible moral failures.