The Gemma 4 Vision-Language-Action (VLA) model now runs on the Jetson Orin Nano Super. The deployment demonstrates real-time robotic control on a compact edge device, showing that large multimodal models can run inference locally without relying on the cloud. Practitioners can now deploy complex visual reasoning directly on small-scale robotic platforms.
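To make the workflow concrete, here is a minimal local-inference sketch in Python. Everything specific in it is an assumption rather than a detail from the announcement: the Hugging Face checkpoint ID `google/gemma-4-vla` is hypothetical, the camera-frame filename and instruction string are placeholders, and a published VLA checkpoint would document its own loading classes and action-decoding conventions.

```python
# Minimal local-inference sketch for a VLA model on a Jetson-class device.
# The model ID below is hypothetical; substitute the real checkpoint name.
# Assumes a CUDA-enabled PyTorch build for Jetson and the transformers library.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

MODEL_ID = "google/gemma-4-vla"  # hypothetical ID, not confirmed by the source

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForVision2Seq.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # half precision to fit the device's memory
).to("cuda")

frame = Image.open("camera_frame.jpg")  # latest frame from the robot camera
instruction = "Pick up the red block."  # natural-language task command

# Bundle the image and instruction into model inputs, then generate on-device.
inputs = processor(images=frame, text=instruction, return_tensors="pt").to("cuda")
with torch.inference_mode():
    output_ids = model.generate(**inputs, max_new_tokens=32)

# VLA models typically emit discretized action tokens; decode them here and
# hand the result to the robot's low-level controller.
action_text = processor.batch_decode(output_ids, skip_special_tokens=True)[0]
print(action_text)
```

Half-precision weights and `torch.inference_mode()` are the pragmatic choices here: the Orin Nano Super's shared CPU/GPU memory is limited (8 GB on the developer kit), so avoiding full-precision weights and autograd bookkeeping is usually what makes a model of this class fit at all.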