The Granite Embedding Multilingual R2 model supports a 32K-token context window and 100+ languages under an Apache 2.0 license. Despite its sub-100M parameter count, it outperforms larger competitors in retrieval quality, and that efficiency cuts the memory overhead of RAG pipelines. Developers can now deploy high-performance multilingual search on smaller, cheaper hardware.
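To make the RAG use case concrete, here is a minimal retrieval sketch. In practice the vectors would come from the embedding model (for example via the sentence-transformers library; the exact model ID is an assumption, so check the official release), but the runnable part below uses small placeholder vectors so the logic is self-contained:

```python
import numpy as np

# In a real pipeline the embeddings would come from the model, e.g.:
#   from sentence_transformers import SentenceTransformer
#   model = SentenceTransformer("ibm-granite/granite-embedding-multilingual-r2")  # assumed ID
#   doc_vecs = model.encode(docs, normalize_embeddings=True)
# Placeholder 3-dimensional vectors stand in for real embeddings here.

def top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 2):
    """Rank documents by cosine similarity to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                      # cosine similarity per document
    order = np.argsort(-scores)[:k]     # indices of the k best matches
    return [(int(i), float(scores[i])) for i in order]

doc_vecs = np.array([
    [0.9, 0.1, 0.0],   # doc 0: close to the query direction
    [0.1, 0.9, 0.1],   # doc 1: mostly orthogonal
    [0.8, 0.2, 0.1],   # doc 2: also close
])
query = np.array([1.0, 0.0, 0.0])
print(top_k(query, doc_vecs))  # docs 0 and 2 rank highest
```

Because the model's embeddings are compact, the document matrix for a large corpus fits in modest RAM, which is where the hardware savings come from.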