Researchers at Apple studied modeling choices that optimize learned image codecs for both human perception and runtime. They evaluated several techniques aimed at closing the gap between benchmark quality and practical deployment, focusing on the trade-off between visual fidelity and speed. The result gives practitioners a blueprint for designing more efficient, perceptually aware neural codecs.