The sectorllm project implements Llama2 inference in under 1,500 bytes of x86 assembly. By stripping away high-level abstractions, this lean implementation runs on bare metal. It is a technical curiosity rather than a practical tool, but it lets developers study the absolute minimum required to execute an LLM on legacy hardware.