A Jetson Orin Nano Super now runs the Gemma 4 Vision-Language-Action (VLA) model to control robotic arms in real time. The demo shows that high-performance VLA inference is feasible on low-power edge hardware: developers can deploy complex spatial reasoning and motor control without relying on the cloud, cutting latency for autonomous robotic tasks in physical environments.
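To make the deployment pattern concrete, here is a minimal sketch of what an on-device perception-action loop like this might look like. It is illustrative only: the `VLAPolicy` class, its `predict` method, the instruction string, and the 10 Hz control rate are assumptions, not the demo's actual code, and the stub returns random joint deltas where a real system would run the quantized model through an inference runtime on the Jetson.

```python
import time
import numpy as np

# Hypothetical stand-in for an on-device VLA policy. In a real deployment this
# would wrap the quantized model behind an inference runtime on the Jetson;
# here it just returns placeholder actions so the sketch runs anywhere.
class VLAPolicy:
    def __init__(self, num_joints: int = 6):
        self.num_joints = num_joints

    def predict(self, frame: np.ndarray, instruction: str) -> np.ndarray:
        # Placeholder: small random joint deltas instead of real model output.
        return np.random.uniform(-0.01, 0.01, size=self.num_joints)

def control_loop(policy: VLAPolicy, hz: float = 10.0, steps: int = 100) -> None:
    """Fixed-rate perception-action loop: grab a frame, infer, command the arm."""
    period = 1.0 / hz
    joint_positions = np.zeros(policy.num_joints)
    for _ in range(steps):
        start = time.monotonic()
        # Stand-in for a camera capture; a real loop would read from a sensor.
        frame = np.zeros((224, 224, 3), dtype=np.uint8)
        action = policy.predict(frame, "pick up the red block")
        # In practice these positions would be sent to the arm's motor controller.
        joint_positions += action
        # Sleep off the remainder of the period to hold the control rate;
        # on-device inference must finish within this budget to stay real-time.
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, period - elapsed))

if __name__ == "__main__":
    control_loop(VLAPolicy())
```

The key design point the loop illustrates is the latency budget: at 10 Hz, perception, inference, and actuation must all complete within 100 ms, which is why running the model locally rather than round-tripping to the cloud matters for this kind of task.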