The Gemma 4 Vision-Language-Action (VLA) model now runs on the NVIDIA Jetson Orin Nano Super, bringing real-time robotic control and spatial reasoning to low-power edge hardware. Robots can carry out complex visuomotor tasks without any cloud dependency, demonstrating that a high-capability multimodal model can operate within the tight compute and memory constraints of local robotics.
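
As a minimal sketch of what fully local inference might look like, the snippet below loads a checkpoint through the Hugging Face transformers API in half precision and generates an action string from a camera frame and a language instruction. The model ID `google/gemma-4-vla`, and the assumption that the checkpoint exposes a standard vision-to-text interface, are illustrative, not confirmed details of the release.

```python
# Sketch: on-device VLA inference via Hugging Face transformers.
# The checkpoint name "google/gemma-4-vla" and its vision-to-text
# interface are assumptions for illustration only.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

MODEL_ID = "google/gemma-4-vla"  # hypothetical model ID

# Half precision helps fit the Orin Nano Super's memory budget.
processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForVision2Seq.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16
).to("cuda")

def predict_action(frame: Image.Image, instruction: str) -> str:
    """Run one perception-to-action step entirely on the device."""
    inputs = processor(
        images=frame, text=instruction, return_tensors="pt"
    ).to("cuda")
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_new_tokens=32)
    # Decode generated tokens; a real VLA head may instead emit
    # discretized action tokens mapped to joint commands -- also
    # an assumption here.
    return processor.batch_decode(output_ids, skip_special_tokens=True)[0]

frame = Image.open("camera_frame.jpg")  # placeholder input frame
print(predict_action(frame, "Pick up the red block."))
```

Because everything runs on the module's GPU, the loop above has no network round trip, which is what makes closed-loop control at camera frame rates plausible on this class of hardware.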