A study of 409 systems in the Federal AI Register finds that 86% of the tools support internal efficiency rather than public-facing services. Using the ADMAPS framework, the researchers show how the government obscures accountability through selective reporting. This gap between sovereign-AI rhetoric and actual practice limits auditors' ability to track algorithmic harm.