A new proposal for Pivotal suggests training transformers on synthetic languages built from known computational primitives. Because the data-generating process is fully controlled, researchers can isolate how circuits such as induction heads and skip-trigrams interact and interfere. This simplifies the study of compositionality and gives mechanistic interpretability practitioners a controlled setting in which to test circuit hypotheses.
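As an illustration of what "controlling the data-generating process" could look like, here is a minimal sketch of a synthetic data generator targeting one known primitive: the repeated-bigram pattern `[a, b, ..., a] → b` that induction heads are known to implement. The function name and parameters are hypothetical, not part of the proposal; the point is only that the generator makes the intended circuit behavior explicit in the data.

```python
import random

def make_induction_example(vocab_size=50, seq_len=20, seed=None):
    """Generate one token sequence containing a repeated bigram.

    The sequence has the form [a, b, filler..., a]; predicting the
    target `b` after the second `a` is exactly the copy behavior an
    induction head implements, so a dataset of such sequences isolates
    that primitive from other statistics in the data.
    """
    rng = random.Random(seed)
    a, b = rng.sample(range(vocab_size), 2)  # distinct bigram tokens
    filler = [rng.randrange(vocab_size) for _ in range(seq_len - 3)]
    tokens = [a, b] + filler + [a]  # query token `a` at the end
    return tokens, b  # target: the token that followed `a` earlier

tokens, target = make_induction_example(seed=0)
```

Because every sequence is built from a known template, any skip-trigram statistics present in the data are there by construction, which is what makes interference between the two circuit types measurable rather than confounded.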