The Preparedness Framework v2 sets the Critical threshold for AI self-improvement at achieving a generational model gain in one-fifth of the usual wall-clock time. Critics argue that this threshold is too permissive and lacks measurable indicators, leaving a loophole through which development can continue even when risk is high. Safety practitioners should therefore press for more rigorous, quantifiable metrics before recursive improvement becomes uncontrollable.
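To make the arithmetic behind the threshold concrete, here is a minimal sketch of how such a check might look. The framework publishes no reference code, so the function name and the baseline figure of ~20 months for a generational leap are purely hypothetical; only the one-fifth fraction comes from the definition above.

```python
# Illustrative sketch only: the Preparedness Framework v2 publishes no
# reference implementation. baseline_months is a hypothetical figure.

def crosses_critical_threshold(observed_months: float,
                               baseline_months: float,
                               threshold_fraction: float = 1 / 5) -> bool:
    """True if a generational capability gain was achieved in less than
    the given fraction of the usual (baseline) wall-clock time."""
    return observed_months < baseline_months * threshold_fraction

# Assuming a generational leap usually takes ~20 months (assumption),
# achieving it in under 4 months would cross the Critical threshold.
print(crosses_critical_threshold(observed_months=3.5, baseline_months=20.0))  # True
print(crosses_critical_threshold(observed_months=8.0, baseline_months=20.0))  # False
```

Even this toy version makes the critics' point visible: everything hinges on how "usual time" and "generational gain" are measured, neither of which the threshold pins down.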