A new hierarchical framework aggregates user behavioral logs into intent memories and uses them to induce multiple evidence-grounded personas. The researchers trained the model with a groupwise extension of Direct Preference Optimization (DPO) that prioritizes cluster cohesion and truthfulness, reducing noise in user modeling. Practitioners can now generate more accurate, verifiable natural-language personas for personalized LLM applications.
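The summary does not spell out the groupwise objective, but a plausible reading is that standard pairwise DPO (preferred vs. dispreferred response) is extended so one preferred persona competes against a whole group of candidates at once. The sketch below is an assumption-laden illustration of that idea, not the paper's actual loss: it scores each candidate by its implicit DPO reward (the scaled log-probability ratio between the policy and a reference model) and applies a softmax cross-entropy over the group. The function name, the softmax form, and the omission of any explicit cluster-cohesion or truthfulness term are all hypothetical.

```python
import math

def groupwise_dpo_loss(policy_logps, ref_logps, preferred_idx, beta=0.1):
    """Hypothetical groupwise DPO-style loss over a group of candidates.

    policy_logps / ref_logps: per-candidate log-probabilities under the
    trained policy and a frozen reference model.
    preferred_idx: index of the evidence-grounded (preferred) candidate.
    """
    # Implicit DPO reward for each candidate: beta * log-prob ratio.
    rewards = [beta * (p - r) for p, r in zip(policy_logps, ref_logps)]
    # Softmax cross-entropy: push the preferred candidate's reward above
    # the rest of the group (log-sum-exp computed stably).
    m = max(rewards)
    log_z = m + math.log(sum(math.exp(x - m) for x in rewards))
    return -(rewards[preferred_idx] - log_z)
```

Under this formulation the loss shrinks as the policy assigns relatively more probability to the preferred candidate than the reference model does, recovering pairwise DPO when the group has exactly two members.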