The Gemma 4 Vision-Language-Action (VLA) model now runs on NVIDIA's Jetson Orin Nano Super. The deployment enables real-time robotic control by mapping camera frames and language instructions directly to motor actions, demonstrating that high-performance VLA capabilities can fit on edge hardware. Developers can now deploy complex spatial reasoning to small-scale robots without relying on cloud inference.
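To make the shape of such a deployment concrete, here is a minimal sketch of the on-device perception-to-action loop. Everything in it is illustrative: `VLAPolicy`, `capture_frame`, `send_to_actuators`, and the checkpoint path are hypothetical stand-ins, since this section does not specify the actual Gemma 4 inference API.

```python
# Minimal sketch of an on-device VLA control loop, under the assumption
# that the model is wrapped in a hypothetical VLAPolicy class; the real
# Gemma 4 loading/inference API is not specified in this section.

import time
import numpy as np


class VLAPolicy:
    """Hypothetical stand-in for the deployed Gemma 4 VLA model."""

    def __init__(self, checkpoint_path: str):
        # A real deployment would load a (likely quantized) model onto the
        # Orin Nano's GPU here; this stub only records the path.
        self.checkpoint_path = checkpoint_path

    def predict(self, frame: np.ndarray, instruction: str) -> np.ndarray:
        # Stub: return a zero action vector (e.g., 7-DoF end-effector deltas).
        return np.zeros(7, dtype=np.float32)


def capture_frame() -> np.ndarray:
    # Stub: a real robot would read from a CSI/USB camera
    # (e.g., via cv2.VideoCapture).
    return np.zeros((224, 224, 3), dtype=np.uint8)


def send_to_actuators(action: np.ndarray) -> None:
    # Stub: a real robot would publish this to its motor controller.
    print("action:", action)


def control_loop(policy: VLAPolicy, instruction: str,
                 hz: float = 10.0, steps: int = 100) -> None:
    """Run frame -> action inference at a fixed rate, entirely on-device."""
    period = 1.0 / hz
    for _ in range(steps):
        start = time.monotonic()
        action = policy.predict(capture_frame(), instruction)
        send_to_actuators(action)
        # Sleep off the remainder of the control period; if inference ran
        # long, the loop simply proceeds at a reduced rate.
        time.sleep(max(0.0, period - (time.monotonic() - start)))


if __name__ == "__main__":
    control_loop(VLAPolicy("gemma4-vla.ckpt"), "pick up the red block")
```

The fixed-rate loop is the point of running on-device: with inference local to the robot, each control step is bounded by on-device latency rather than a network round-trip to a cloud endpoint.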