The Gemma 4 Vision-Language-Action (VLA) model now runs on the Jetson Orin Nano Super, bringing real-time robotic control and spatial reasoning to low-power edge hardware. By putting high-level multimodal reasoning and physical execution on the same device, the deployment lets practitioners run sophisticated multimodal agents for basic robotic tasks without relying on cloud-based inference.
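To make the on-device loop concrete, here is a minimal sketch of what local VLA inference on the Jetson could look like. It assumes the model ships as a standard Hugging Face vision-to-sequence checkpoint; the model id `google/gemma-4-vla`, the camera frame path, and the action-decoding scheme are all placeholders, not confirmed details of the release.

```python
# Minimal sketch: local VLA inference on a Jetson Orin Nano Super.
# Hypothetical assumptions (not from an official release):
#   - the checkpoint id "google/gemma-4-vla"
#   - that the model exposes a standard vision-to-sequence interface
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

MODEL_ID = "google/gemma-4-vla"  # hypothetical checkpoint id

# fp16 loading keeps the memory footprint within the Orin Nano's budget.
processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForVision2Seq.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,
    device_map="cuda",  # the Jetson's integrated GPU
)

# One camera frame plus a natural-language instruction.
frame = Image.open("camera_frame.jpg")
prompt = "Pick up the red block and place it in the bin."

inputs = processor(images=frame, text=prompt, return_tensors="pt").to("cuda")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)

# The decoded text would carry the action in whatever encoding the model
# was trained with, e.g. discretized end-effector deltas.
print(processor.decode(output_ids[0], skip_special_tokens=True))
```

In a real control loop, this inference step would sit between the camera driver and the motor controller, with the decoded action tokens mapped to joint or end-effector commands; that mapping depends entirely on how the model's action head was trained.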