The LACE framework repurposes the model's attention mechanism so that parallel reasoning paths can share insights through cross-thread attention. Researchers used a synthetic data pipeline to teach models to communicate and correct one another's errors during inference, replacing isolated sampling with coordinated exploration. For practitioners, this can reduce redundant failures on complex reasoning tasks, since parallel threads no longer repeat each other's dead ends in isolation.
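The source does not specify how LACE implements cross-thread attention, but the core idea can be sketched with standard scaled dot-product attention applied across thread states rather than across tokens. In this minimal sketch (all names and shapes are illustrative assumptions, not the actual LACE design), each parallel reasoning thread keeps a state vector, and every thread attends over all threads' states so that information from one path can flow into the others:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_thread_attention(states):
    """Blend per-thread states via attention across threads.

    states: (num_threads, d_model) array, one summary vector per
    parallel reasoning path. Each row of the output is that thread's
    state mixed with the other threads' states, weighted by similarity.
    (Hypothetical sketch, not the actual LACE mechanism.)
    """
    d_k = states.shape[-1]
    scores = states @ states.T / np.sqrt(d_k)  # (T, T) thread-to-thread affinities
    weights = softmax(scores, axis=-1)         # each row sums to 1
    return weights @ states                    # coordinated per-thread states

# Three hypothetical reasoning threads with 4-dim state summaries.
rng = np.random.default_rng(0)
threads = rng.normal(size=(3, 4))
mixed = cross_thread_attention(threads)
```

Because the attention weights are similarity-based, a thread whose state agrees with the others is reinforced, while an outlier path is pulled toward the consensus, which is one simple way "coordinated exploration" could look in practice.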