Researchers modeled Binary Spiking Neural Networks as binary causal models in order to analyze their internal behavior. Using SAT and SMT solvers, the team extracted abductive explanations — minimal subsets of input features that are sufficient to guarantee a prediction — from MNIST classifications. They report that this logic-based approach outperforms SHAP at pinpointing the specific pixel-level features driving a decision, and that it provides a formal framework for interpreting non-traditional neural architectures.
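To make the notion of an abductive explanation concrete: it is a minimal set of fixed input values that entails the model's prediction no matter how the remaining inputs are set. The sketch below illustrates this on a single binary threshold unit with made-up weights, using a brute-force entailment check and greedy deletion. This is only an illustrative toy; the actual work encodes full Binary SNNs into SAT/SMT formulas and delegates the entailment checks to a solver.

```python
from itertools import product

# Hypothetical toy stand-in for a Binary SNN classifier: one threshold unit
# over 5 binary inputs. Weights and threshold are illustrative, not from the paper.
WEIGHTS = [2, -1, 3, 1, -2]
THRESHOLD = 2

def predict(x):
    """Binary prediction of the toy model on a full input vector."""
    return int(sum(w * xi for w, xi in zip(WEIGHTS, x)) >= THRESHOLD)

def entails(fixed, label):
    """True if every completion of the non-fixed inputs still yields `label`.
    (Brute force here; a SAT/SMT solver would check this symbolically.)"""
    free = [i for i in range(len(WEIGHTS)) if i not in fixed]
    for bits in product([0, 1], repeat=len(free)):
        x = [0] * len(WEIGHTS)
        for i, v in fixed.items():
            x[i] = v
        for i, b in zip(free, bits):
            x[i] = b
        if predict(x) != label:
            return False
    return True

def abductive_explanation(x):
    """Greedy deletion: drop each feature whose removal keeps the
    prediction entailed, leaving a subset-minimal explanation."""
    label = predict(x)
    fixed = dict(enumerate(x))
    for i in list(fixed):
        trial = {j: v for j, v in fixed.items() if j != i}
        if entails(trial, label):
            fixed = trial
    return fixed, label

if __name__ == "__main__":
    expl, label = abductive_explanation([1, 0, 1, 1, 0])
    print(label, sorted(expl.items()))
```

The greedy-deletion loop yields a subset-minimal (not necessarily smallest) explanation: each surviving feature is individually necessary, which mirrors the formal guarantee that distinguishes abductive explanations from heuristic attributions such as SHAP values.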