The Gemma 4 Vision-Language-Action (VLA) model now runs on the Jetson Orin Nano Super, bringing real-time robotic control and spatial reasoning to low-power edge hardware. Developers can execute complex multimodal tasks fully on-device, without relying on cloud inference, demonstrating that a high-capability VLA can operate within the platform's strict memory budget.
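A quick back-of-the-envelope estimate shows why quantization matters for a deployment like this. The sketch below is illustrative only: it assumes a hypothetical 4B-parameter model (the actual Gemma 4 VLA parameter count is not stated above), counts raw weight memory only, and ignores activations and KV cache. The Jetson Orin Nano Super developer kit provides roughly 8 GB of unified memory, which weights, runtime, and OS must all share.

```python
def weight_footprint_gib(num_params: float, bits_per_param: int) -> float:
    """Estimate raw weight memory in GiB (weights only; no activations/KV cache)."""
    return num_params * bits_per_param / 8 / 2**30

# Hypothetical 4B-parameter model at common precisions
for bits in (16, 8, 4):
    print(f"{bits}-bit: {weight_footprint_gib(4e9, bits):.2f} GiB")
# → 16-bit: 7.45 GiB
# →  8-bit: 3.73 GiB
# →  4-bit: 1.86 GiB
```

At fp16, a 4B-parameter model's weights alone would nearly exhaust an 8 GB device, while 4-bit quantization leaves ample headroom for activations, vision encoder buffers, and the rest of the robotics stack, which is why aggressive quantization is the typical route to fitting a VLA on this class of hardware.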