A developer is porting MicroGPT, a small-scale language model, to the Futhark functional programming language. The experiment tests how high-performance data parallelism affects LLM inference speed, with a focus on optimizing tensor operations for GPU execution. Practitioners can observe how a statically typed, purely functional parallel language handles the heavy matrix multiplication at the core of transformer architectures.
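The central tensor operation, matrix multiplication, maps naturally onto Futhark's nested data parallelism. Here is a minimal sketch of how it might look; the function name `matmul` and the choice of `f32` are illustrative assumptions, not details taken from the project:

```futhark
-- Minimal sketch: matrix multiplication as nested parallel maps.
-- Size parameters [n][m][k] let the type checker reject shape
-- mismatches at compile time.
def matmul [n][m][k] (a: [n][m]f32) (b: [m][k]f32) : [n][k]f32 =
  map (\a_row ->
         map (\b_col -> f32.sum (map2 (*) a_row b_col))
             (transpose b))
      a
```

Because the language is pure, the compiler is free to fuse and flatten these nested `map`s into efficient GPU kernels, which is exactly the property such a port would be exercising.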