A developer is porting microgpt, a small-scale LLM implementation, to the Futhark functional programming language. The experiment tests how well Futhark's data-parallel arrays handle transformer workloads, with the aim of optimizing tensor operations for GPU acceleration. It also lets practitioners evaluate whether Futhark's strict typing reduces the runtime errors common in Python-based AI stacks.
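To make the porting challenge concrete, here is a minimal sketch (not from the actual microgpt port) of scaled dot-product attention, the central transformer operation, written in plain Python as maps and reductions; this is the kind of bulk array computation that a data-parallel language like Futhark expresses directly over arrays. All names here (`softmax`, `attention`) are illustrative, not taken from the project.

```python
import math

def softmax(xs):
    # Subtract the max for numerical stability, then normalize.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(q, k, v):
    # q, k, v: [n][d] matrices as nested lists -> output [n][d].
    d = len(q[0])
    scale = 1.0 / math.sqrt(d)
    out = []
    for qi in q:
        # Dot product of one query row against every key row (a map over k).
        scores = [sum(a * b for a, b in zip(qi, kj)) * scale for kj in k]
        weights = softmax(scores)
        # Weighted sum of value rows (a reduction over v).
        row = [sum(w * vj[c] for w, vj in zip(weights, v)) for c in range(d)]
        out.append(row)
    return out

q = [[1.0, 0.0], [0.0, 1.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[1.0, 2.0], [3.0, 4.0]]
print(attention(q, k, v))
```

Each step here is a map, zip, or reduce over whole rows, which is exactly the shape of computation a data-parallel compiler can fuse and run on a GPU.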