The term 'alignment' was coined to name a hard technical problem: AI existential safety. Writers on LessWrong note that most researchers once dismissed that problem as a fringe concern. As the term has come to be used more broadly, confusion over what it denotes can misguide funding and policy decisions. Practitioners should therefore adopt precise language, so that safety goals map cleanly onto regulatory frameworks and compliance requirements.