Parameter count strictly bounds the world knowledge a Small Language Model (SLM) can absorb during pretraining, a limitation that often surfaces as factual errors in its output. Apple researchers propose closing these gaps by granting SLMs access to external databases or to larger models. For specific factual inaccuracies, practitioners should therefore prioritize retrieval-augmented architectures over parameter expansion.
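The retrieval-augmented pattern can be sketched in a few lines: rather than relying on the SLM's parametric memory, look up relevant facts in an external store and prepend them to the prompt. This is a minimal toy sketch, not Apple's implementation; the fact store, the word-overlap retriever, and the names `retrieve` and `build_prompt` are all illustrative assumptions.

```python
import re

# Toy external fact store standing in for a real database or a larger model.
FACT_STORE = [
    "The Eiffel Tower is 330 metres tall.",
    "Mount Everest is 8,849 metres tall.",
    "The Great Wall of China is over 21,000 km long.",
]

def _tokens(text: str) -> set[str]:
    """Lowercased word set, punctuation stripped."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, store: list[str], k: int = 1) -> list[str]:
    """Rank stored facts by word overlap with the query (toy retriever;
    a production system would use dense embeddings instead)."""
    q = _tokens(query)
    ranked = sorted(store, key=lambda doc: -len(q & _tokens(doc)))
    return ranked[:k]

def build_prompt(query: str, store: list[str]) -> str:
    """Ground the SLM's answer in retrieved context rather than its weights."""
    context = "\n".join(retrieve(query, store))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

prompt = build_prompt("How tall is the Eiffel Tower?", FACT_STORE)
```

The SLM then answers from the supplied context, so correcting a fact means editing one row in the store instead of retraining or enlarging the model.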