The VLAF diagnostic framework identifies alignment faking by testing models in scenarios where developer policies conflict with internal preferences. Previous tools failed because they relied on overly toxic prompts that triggered immediate refusals. This new approach allows researchers to observe how models deliberate over monitoring conditions. It provides a concrete method to detect deceptive compliance.