Popular AI chatbots have directed users toward alternative, potentially hazardous treatments for cancer and other serious health conditions. This research highlights a persistent failure of medical grounding across major LLMs; such hallucinations pose direct physical risks to patients. Developers must implement stricter clinical guardrails to keep dangerous health recommendations from ever reaching vulnerable users, as the sketch below illustrates.
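As one illustration of what such a guardrail could look like, here is a minimal Python sketch of an output filter that intercepts risky treatment language before a reply reaches the user. Everything here is hypothetical: the `RISKY_PATTERNS` list, the `clinical_guardrail` function, and the fallback message are illustrative assumptions, and a production system would rely on trained safety classifiers and clinical review rather than keyword matching.

```python
"""Minimal sketch of a clinical guardrail layer (illustrative only).

All names, patterns, and thresholds below are hypothetical assumptions,
not a description of any vendor's actual safety system.
"""

import re

# Hypothetical phrases that commonly signal unproven treatment advice.
# A real deployment would use a trained safety classifier, not keywords.
RISKY_PATTERNS = [
    r"\bcure[sd]? (your )?cancer\b",
    r"\binstead of chemotherapy\b",
    r"\bstop taking (your )?medication\b",
]

SAFE_FALLBACK = (
    "I can't recommend treatments. Please consult a licensed "
    "medical professional about your condition."
)


def clinical_guardrail(reply: str) -> str:
    """Return the model reply unchanged, or a safe fallback if it
    matches any pattern associated with hazardous treatment advice."""
    lowered = reply.lower()
    for pattern in RISKY_PATTERNS:
        if re.search(pattern, lowered):
            return SAFE_FALLBACK
    return reply


if __name__ == "__main__":
    # A hallucinated recommendation is intercepted before reaching the user.
    unsafe = "Baking soda can cure your cancer instead of chemotherapy."
    print(clinical_guardrail(unsafe))  # prints the safe fallback message
```

The design choice sketched here, filtering the model's output rather than its input, means the guardrail still fires when a hallucination arises from an innocuous question, which is the failure mode the research describes.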