Researchers at Apple developed a learned image codec optimized jointly for human visual perception and runtime efficiency. The study evaluates a range of modeling choices and introduces techniques that narrow the gap between theoretical quality and practical deployment, providing a blueprint for developers who must balance perceptual fidelity against real-world hardware constraints.
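As background for how such a codec is typically trained (a general sketch, not Apple's specific method), learned image codecs commonly optimize a rate-distortion objective, L = R + λ·D, where R is the estimated bitrate and D the reconstruction distortion; weighting D with a perceptual term biases the codec toward human-preferred reconstructions. The function and toy values below are illustrative assumptions:

```python
# Hedged sketch of a rate-distortion-perception objective for a learned codec.
# Not Apple's actual loss; all names and toy values here are illustrative.

def rd_loss(bits_per_pixel: float, mse: float, perceptual_dist: float,
            lam: float = 0.01, beta: float = 1.0) -> float:
    """Combine rate and distortion: L = R + lam * (MSE + beta * perceptual).

    bits_per_pixel  -- estimated rate R of the compressed representation
    mse             -- pixel-level distortion (mean squared error)
    perceptual_dist -- a perceptual distance (e.g. an LPIPS-style score)
    lam, beta       -- trade-off weights chosen per target operating point
    """
    distortion = mse + beta * perceptual_dist
    return bits_per_pixel + lam * distortion

# Two toy operating points: low rate / high distortion vs. the reverse.
low_rate = rd_loss(bits_per_pixel=0.25, mse=40.0, perceptual_dist=0.8)
high_rate = rd_loss(bits_per_pixel=0.90, mse=10.0, perceptual_dist=0.2)
print(low_rate, high_rate)  # -> 0.658 1.002
```

Sweeping `lam` traces out the codec's rate-distortion curve: larger values favor fidelity at higher bitrates, smaller values favor compactness.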