The Gemma 4 Vision-Language-Action (VLA) model now runs on the Jetson Orin Nano Super, translating visual inputs directly into robotic control commands. The integration demonstrates a lean path for deploying multimodal reasoning on edge hardware: practitioners can now test complex spatial tasks entirely on-device, with no cloud dependency.
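To make the deployment path concrete, below is a minimal sketch of a single perception-to-action inference step using the Hugging Face transformers API. The checkpoint ID, prompt format, and action-token decoding are illustrative assumptions, not the published integration details:

```python
# Minimal sketch of on-device VLA inference with Hugging Face transformers.
# The checkpoint ID, prompt, and action post-processing are assumptions for
# illustration -- substitute the actual Gemma 4 VLA release artifacts.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

MODEL_ID = "google/gemma-4-vla"  # hypothetical checkpoint name

# Load in half precision to fit the Orin Nano Super's memory budget.
processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForVision2Seq.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16
).to("cuda")

# One perception-to-action step: camera frame + instruction in, actions out.
image = Image.open("camera_frame.jpg")
prompt = "Pick up the red block and place it in the bin."
inputs = processor(images=image, text=prompt, return_tensors="pt").to("cuda")

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=32)

# VLA models typically emit discretized action tokens; mapping them back to
# joint or end-effector commands depends on the model's action tokenizer
# (assumed here, so the raw decoded text is printed instead).
action_text = processor.batch_decode(output_ids, skip_special_tokens=True)[0]
print(action_text)
```

Half-precision loading is the natural choice on this class of hardware, since the Orin Nano Super's shared CPU/GPU memory leaves limited headroom for full-precision weights alongside the rest of the robotics stack.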