Smart glasses with front-facing cameras use AI to analyze visual input and provide real-time audio cues for visually impaired runners at the London Marathon. The glasses convert environmental data into speech through built-in speakers, allowing athletes to navigate the course independently. This application demonstrates a practical, narrow use case for multimodal computer vision in wearable hardware.
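To make the capture-analyze-speak pipeline concrete, here is a minimal sketch of the loop such glasses might run. This is not the vendor's actual implementation: the detect_obstacles stub, the cue labels, and the 1.5-second throttle interval are all illustrative assumptions, with OpenCV standing in for the glasses' camera feed and pyttsx3 for the built-in speakers.

```python
# Illustrative sketch of a camera -> vision model -> speech cue loop.
# Not the product's real code: detect_obstacles() is a placeholder for
# whatever on-device multimodal vision model the glasses actually run.

import time
import cv2      # camera capture (stand-in for the front-facing camera)
import pyttsx3  # offline text-to-speech (stand-in for built-in speakers)

def detect_obstacles(frame):
    """Placeholder for the vision model. A real system would run an
    on-device detector and return (label, bearing) pairs, e.g.
    [("curb", "ahead"), ("runner", "on your left")]."""
    return []

def main():
    cap = cv2.VideoCapture(0)   # front-facing camera
    tts = pyttsx3.init()
    last_cue = 0.0
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            detections = detect_obstacles(frame)
            # Throttle cues so speech doesn't overwhelm the runner;
            # the 1.5 s interval is an assumed value, not from the source.
            if detections and time.time() - last_cue > 1.5:
                label, bearing = detections[0]
                tts.say(f"{label} {bearing}")
                tts.runAndWait()
                last_cue = time.time()
    finally:
        cap.release()

if __name__ == "__main__":
    main()
```

The throttling step reflects a general design constraint for audio-first interfaces: cues must arrive fast enough to be actionable but sparse enough not to mask ambient sound, which runners also rely on.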