The term 'alignment' was coined in the context of existential safety in AI. On LessWrong, researchers clarified that alignment means making systems reliably pursue human values, so that they can be trusted to behave safely. The post argues that vague use of the term breeds confusion and dilutes policy focus: precise language lets regulators and developers target the risks that actually matter.