A researcher is fine-tuning a Chinese open-source LLM to replicate a Jorge Luis Borges short story token by token. The aim is not simple memorization or transcription: the project tests whether a model can independently generate a text that perfectly coincides with an existing work. The experiment probes the limits of model convergence and of training-data leakage.
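One way to quantify how close such a model comes to token-by-token replication is teacher-forced next-token accuracy: at every position in the target text, check whether the model's top prediction matches the story's actual next token. A match rate of 1.0 implies that greedy decoding from the story's opening would reproduce the entire text. The sketch below illustrates this check; the checkpoint name and file path are placeholders, not the researcher's actual setup.

```python
# Minimal sketch: teacher-forced token-by-token match rate against a target text.
# MODEL_NAME and TARGET_PATH are hypothetical stand-ins for the actual setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2-1.5B"   # placeholder Chinese open-source LLM
TARGET_PATH = "borges_story.txt" # placeholder path to the target story

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

text = open(TARGET_PATH, encoding="utf-8").read()
ids = tokenizer(text, return_tensors="pt").input_ids  # shape: (1, seq_len)

with torch.no_grad():
    logits = model(ids).logits  # next-token logits at every position

# At each position, does the model's argmax prediction equal the
# story's actual next token?
preds = logits[:, :-1, :].argmax(dim=-1)
targets = ids[:, 1:]
match_rate = (preds == targets).float().mean().item()
print(f"token-by-token match rate: {match_rate:.4f}")  # 1.0 = perfect replication
```

Teacher forcing is used here rather than free-running generation because a single early divergence would otherwise derail every subsequent comparison; it measures per-position agreement independently, which is the natural convergence metric for this experiment.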