Pramana fine‑tunes LLMs on Navya‑Nyaya logic, a school of Indian reasoning whose roots reach back some 2,500 years. Apple researchers found that adding irrelevant context to math problems cut LLM accuracy by up to 65%, revealing brittle pattern matching rather than genuine reasoning. Navya‑Nyaya imposes a six‑phase reasoning structure that forces models to trace each claim back to its supporting evidence. Engineers can adopt this fine‑tuning approach to harden AI justification.
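
A minimal sketch of what such structured supervision might look like as fine-tuning data. The source names a six-phase structure but not the phases themselves; the labels below are an assumption, drawn from the classical Nyaya syllogism (pratijna, hetu, udaharana, upanaya, nigamana) plus an initial doubt step, and the `make_training_example` helper and record format are hypothetical:

```python
import json

# Hypothetical six phase labels for a Navya-Nyaya-style reasoning trace.
# These are an assumption: the classical five-member syllogism plus an
# initial "doubt" phase; the actual Pramana phase set may differ.
PHASES = [
    "samshaya",    # doubt: state what is uncertain
    "pratijna",    # proposition: the claim to establish
    "hetu",        # reason: the evidence invoked
    "udaharana",   # example: a general rule with a known instance
    "upanaya",     # application: apply the rule to this case
    "nigamana",    # conclusion: restate the claim as established
]

def make_training_example(question: str, steps: dict) -> str:
    """Serialize one fine-tuning record whose target walks all six phases,
    rejecting traces that skip a phase so evidence cannot be elided."""
    missing = [p for p in PHASES if p not in steps]
    if missing:
        raise ValueError(f"incomplete trace, missing phases: {missing}")
    target = "\n".join(f"[{p}] {steps[p]}" for p in PHASES)
    return json.dumps({"prompt": question, "completion": target})

# The stock "smoke on the hill" inference from Nyaya logic as one record.
record = make_training_example(
    "Is there fire on the hill?",
    {
        "samshaya": "Fire on the hill is in doubt.",
        "pratijna": "There is fire on the hill.",
        "hetu": "Because there is smoke on the hill.",
        "udaharana": "Wherever there is smoke there is fire, as in a kitchen.",
        "upanaya": "The hill has smoke of that kind.",
        "nigamana": "Therefore there is fire on the hill.",
    },
)
```

Enforcing all six phases at data-construction time is the point: a model trained only on complete traces cannot learn to jump from question to answer without citing its evidence.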