A new demo runs the Gemma 4 Vision-Language-Action (VLA) model on the Jetson Orin Nano Super, enabling real-time robotic control and spatial reasoning on low-power edge hardware. It demonstrates that complex multimodal reasoning can run locally, without round trips to the cloud, and that practitioners can deploy VLA capabilities in constrained physical environments.
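The control pattern described above — capture a frame, run the VLA model on-device, and emit an action within a fixed control period — can be sketched as below. This is a minimal illustration, not the demo's actual code: `vla_infer`, the `Action` format, and the loop parameters are all assumptions standing in for the real on-device model and robot interface.

```python
import time
from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical action format: end-effector deltas plus gripper state.
    dx: float
    dy: float
    dz: float
    gripper_open: bool

def vla_infer(image, instruction):
    # Stub standing in for the on-device VLA forward pass; the real demo
    # would run the model here and decode its action tokens.
    return Action(0.0, 0.0, 0.0, True)

def control_loop(get_frame, send_action, instruction, hz=10, steps=3):
    # Real-time loop: capture a frame, infer locally, act, then sleep
    # off the remainder of the control period so the rate stays fixed.
    period = 1.0 / hz
    history = []
    for _ in range(steps):
        t0 = time.monotonic()
        action = vla_infer(get_frame(), instruction)
        send_action(action)
        history.append(action)
        time.sleep(max(0.0, period - (time.monotonic() - t0)))
    return history
```

Because inference happens inside the loop on local hardware, the achievable control rate is bounded by model latency rather than network round-trip time, which is the point of running the VLA model at the edge.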