The Gemma 4 Vision-Language-Action (VLA) model now runs on the Jetson Orin Nano Super. The deployment enables real-time robotic control: the model processes camera frames and outputs motor commands directly on-device. This demonstrates that high-capability multimodal models can operate on edge hardware, letting robotics developers deploy complex reasoning without incurring cloud round-trip latency.
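To make the perception-to-action pattern concrete, here is a minimal sketch of an on-device VLA control loop. Everything in it is a stand-in: `capture_frame`, `vla_policy`, and the 3-element command vector are hypothetical placeholders, not the actual Gemma API or camera/actuator interfaces.

```python
import time

def capture_frame():
    # Stand-in for a camera read: a tiny 4x4 grayscale image as nested lists.
    return [[0.0 for _ in range(4)] for _ in range(4)]

def vla_policy(frame, instruction):
    # Placeholder for on-device VLA inference: maps an image plus a text
    # instruction to a motor command vector (here, an assumed 3-DOF command).
    brightness = sum(sum(row) for row in frame) / 16.0
    return [brightness, 0.0, 0.1]

def control_loop(steps, hz=30):
    # Run the perceive -> infer -> act cycle at a fixed rate; because
    # inference is local, no network round trip sits inside this loop.
    period = 1.0 / hz
    commands = []
    for _ in range(steps):
        start = time.monotonic()
        cmd = vla_policy(capture_frame(), "pick up the red block")
        commands.append(cmd)  # in a real robot, send cmd to the actuators
        # Sleep off the remainder of the control period to hold the rate.
        time.sleep(max(0.0, period - (time.monotonic() - start)))
    return commands

commands = control_loop(steps=3)
```

The key design point this illustrates is that the entire loop body executes locally, so the control rate is bounded by inference time rather than network latency.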