The Gemma 4 VLA model now runs on the Jetson Orin Nano Super, enabling real-time robotic control. The deployment demonstrates that vision-language-action (VLA) models can operate on edge hardware without relying on the cloud, so developers can build autonomous agents with low, predictable latency. It is a practical step toward fully local robotic inference.
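The low-latency claim above comes down to keeping the perception-to-action loop inside a fixed control period on-device. A minimal sketch of such a loop is below; `dummy_vla_policy` and `budget_s` are hypothetical stand-ins (the real model's API and achievable period on the Orin Nano Super are not specified in this post), so the code only illustrates the loop structure, not the actual Gemma 4 interface.

```python
import time

def dummy_vla_policy(frame):
    # Hypothetical stand-in for on-device VLA inference: a real policy
    # would map a camera frame (plus a language instruction) to an
    # action vector for the robot's actuators.
    return [0.0, 0.0, 0.1]

def control_loop(policy, get_frame, steps=10, budget_s=0.05):
    """Run a fixed-rate control loop and track worst-case inference latency.

    budget_s is the control period (here 50 ms, i.e. 20 Hz) -- an assumed
    figure, not a measured one for this hardware.
    """
    actions, worst_latency = [], 0.0
    for _ in range(steps):
        start = time.monotonic()
        action = policy(get_frame())       # on-device inference, no cloud round trip
        elapsed = time.monotonic() - start
        worst_latency = max(worst_latency, elapsed)
        actions.append(action)
        # Sleep out the remainder of the period so actions are emitted
        # at a steady rate; skip the sleep if inference overran the budget.
        remaining = budget_s - elapsed
        if remaining > 0:
            time.sleep(remaining)
    return actions, worst_latency

actions, worst = control_loop(dummy_vla_policy, get_frame=lambda: None, steps=3)
```

In a real deployment the interesting number is `worst_latency` relative to `budget_s`: if inference ever overruns the period, the controller must either drop frames or degrade gracefully.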