LACE repurposes the model's existing architecture so that parallel reasoning paths can share insights and correct one another's errors during inference. Because no natural training data exists for this kind of cross-thread communication, the researchers built a synthetic data pipeline to teach models the collaborative behavior. For practitioners, the framework reduces redundant failures when sampling multiple responses from an LLM: instead of each sample independently repeating the same mistake, threads can flag and route around errors discovered by their peers.
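
To make the general pattern concrete, here is a minimal sketch, assuming a round-based scheme in which each thread alternates between a private generation step and broadcasting a compressed summary of its latest step to its peers. Everything here is an illustrative assumption: `generate` and `summarize` are hypothetical stand-ins for real model calls, and the prompt-level exchange shown is not LACE's actual mechanism, which operates inside the model's architecture rather than via concatenated text.

```python
# Hypothetical sketch of cross-thread insight sharing during parallel
# sampling -- NOT the paper's implementation. `generate` and `summarize`
# are placeholder stubs for real LLM calls.

from dataclasses import dataclass, field


@dataclass
class Thread:
    """One parallel reasoning path with its own private transcript."""
    tid: int
    transcript: list[str] = field(default_factory=list)


def generate(prompt: str, shared_context: str) -> str:
    """Placeholder for an LLM call that conditions on peers' insights."""
    return f"reasoning step (saw: {shared_context[:40]!r})"


def summarize(step: str) -> str:
    """Placeholder for compressing a step into a broadcastable insight."""
    return step[:60]


def solve(prompt: str, n_threads: int = 4, n_rounds: int = 3) -> list[Thread]:
    threads = [Thread(tid=i) for i in range(n_threads)]
    shared: list[str] = []  # insights visible to every thread

    for _ in range(n_rounds):
        # 1) Private step: each thread extends its own reasoning,
        #    conditioned on whatever insights were shared so far.
        for t in threads:
            context = " | ".join(shared)
            t.transcript.append(generate(prompt, context))

        # 2) Exchange: each thread broadcasts a summary of its latest
        #    step, letting peers reuse partial results or catch errors
        #    instead of failing in the same way independently.
        shared = [summarize(t.transcript[-1]) for t in threads]

    return threads


if __name__ == "__main__":
    for t in solve("What is 17 * 23?"):
        print(t.tid, t.transcript)
```

The key design point the sketch tries to capture is the alternation between independent exploration and a cheap synchronization step: threads stay diverse because generation is private, but the periodic broadcast is what prevents every path from repeating the same error.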