The Gemma 4 Vision-Language-Action (VLA) model now runs on the Jetson Orin Nano Super. This demo showcases real-time robotic control by mapping visual inputs directly to motor actions. It demonstrates that high-performance VLA inference is viable on low-power edge hardware, meaning practitioners can now deploy complex spatial reasoning on small-scale robots.
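
To make the "visual inputs to motor actions" mapping concrete, here is a minimal sketch of what such a perception-to-action control loop could look like in Python. This is illustrative only: `DummyVLA`, `send_motor_commands`, the 7-dimensional action vector, and the ~30 Hz tick rate are placeholder assumptions, not the actual Gemma 4 VLA API or a real robot driver.

```python
# Sketch of a VLA control loop: camera frame + language goal in, motor targets out.
# All model and robot interfaces below are hypothetical stand-ins.
import time

import cv2          # camera capture (CSI/USB camera on the Jetson)
import numpy as np
import torch


class DummyVLA:
    """Placeholder for the real VLA model: maps (frame, instruction) to an
    action vector. Swap in the actual Gemma 4 VLA checkpoint and preprocessing."""

    def __call__(self, frame: np.ndarray, instruction: str) -> np.ndarray:
        return np.zeros(7, dtype=np.float32)  # e.g. 7-DoF arm joint targets (assumed)


def send_motor_commands(action: np.ndarray) -> None:
    """Hypothetical motor interface; a real robot would write these targets
    over CAN, serial, or a ROS topic."""
    print("action:", action)


def main() -> None:
    model = DummyVLA()
    camera = cv2.VideoCapture(0)            # camera index 0 is an assumption
    instruction = "pick up the red block"   # example language goal

    try:
        while True:
            ok, frame = camera.read()
            if not ok:
                break
            # One VLA forward pass per control tick; inference_mode avoids
            # autograd overhead, which matters on low-power edge hardware.
            with torch.inference_mode():
                action = model(frame, instruction)
            send_motor_commands(action)
            time.sleep(1 / 30)              # ~30 Hz control loop (assumed rate)
    finally:
        camera.release()


if __name__ == "__main__":
    main()
```

The key design point the demo relies on is that the whole loop, capture, VLA inference, and actuation, fits within the control period on-device, with no round trip to a remote server.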