Apple researchers showed that adding irrelevant context to math problems can drop LLM accuracy by as much as 65%. Pramana fine‑tunes models on Navya‑Nyāya, a school of Indian logic with roots stretching back roughly 2,500 years, which enforces a six‑phase reasoning procedure. The method replaces generic chain‑of‑thought prompting with structured epistemological steps, letting practitioners train LLMs that justify each claim and thereby reducing hallucinations in high‑stakes domains.
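To make the idea concrete, here is a minimal sketch of what a multi-phase structured-reasoning prompt scaffold might look like. The phase names and instructions below are illustrative placeholders, not the actual Navya‑Nyāya phases used by Pramana (the source does not enumerate them); `build_prompt` is a hypothetical helper.

```python
# Hypothetical sketch: force a model through an ordered sequence of
# labeled reasoning phases instead of free-form chain-of-thought.
# Phase names are placeholders, NOT the actual Pramana phases.

PHASES = [
    ("claim", "State the proposition to be established."),
    ("grounds", "Cite the evidence or rule supporting the claim."),
    ("example", "Give a known case where the rule holds."),
    ("application", "Show the rule applies to the present case."),
    ("counter_check", "Search for facts that would defeat the inference."),
    ("conclusion", "Restate the claim as established or rejected."),
]

def build_prompt(question: str) -> str:
    """Assemble a prompt that walks the model through each phase in order."""
    steps = "\n".join(
        f"{i}. {name.upper()}: {instruction}"
        for i, (name, instruction) in enumerate(PHASES, start=1)
    )
    return (
        f"Question: {question}\n"
        "Answer by working through every phase below, labeling each one:\n"
        f"{steps}"
    )

print(build_prompt("If the kiln smokes, is there fire in the kiln?"))
```

The point of the scaffold is that each phase leaves an auditable justification, so an unsupported claim fails at a named step rather than disappearing into a fluent but ungrounded answer.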