Unveiling Hidden Neural Codes: SIMPL – A Scalable and Fast Approach for Optimizing Latent Variables and Tuning Curves in Neural Population Data
Background and Motivation
Traditional approaches commonly treat the observed behavioral variable as the
latent code that neurons represent. However, this assumption can be inaccurate because
neural activity sometimes encodes internal cognitive states differing subtly
from observable behavior (e.g., anticipation, mental simulation). Existing
latent variable models face challenges such as high computational cost, poor
scalability to large datasets, limited expressiveness of tuning models, or
difficulties interpreting complex neural network-based function approximators.
SIMPL Algorithm Overview
SIMPL
iteratively optimizes neural tuning curves and latent trajectories by
alternating between fitting curves to latent estimates and decoding latents
from tuning curves, using behavior as an initial condition to aid convergence
and interpretability. This EM-like approach integrates two well-established
steps familiar to neuroscientists—fitting tuning curves and latent variable
decoding—making it accessible and practical for broad adoption. Unlike neural
network-based methods, SIMPL relies on simpler nonparametric models (e.g.,
kernel density estimators) and can scale efficiently to large neural datasets
(e.g., hundreds of neurons over one hour of recording) without expensive
hardware.
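The alternation described above can be sketched in a few lines. The following is a minimal, self-contained illustration of the idea on a 1D latent discretized over a bin grid; the function names, the grid-based KDE smoothing, and the per-timestep Poisson maximum-likelihood decoder are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_tuning_curves(spikes, latents, bins, sigma=0.1):
    """M-step sketch: KDE-style tuning curves, i.e. Gaussian-smoothed
    spike counts divided by Gaussian-smoothed occupancy on a bin grid."""
    d = latents[:, None] - bins[None, :]          # (T, B) distances to bins
    w = np.exp(-0.5 * (d / sigma) ** 2)           # Gaussian kernel weights
    occupancy = w.sum(axis=0) + 1e-9              # smoothed time per bin
    rates = (spikes.T @ w) / occupancy            # (N, B) mean rate per bin
    return rates + 1e-9                           # avoid log(0) when decoding

def decode_latents(spikes, rates, bins):
    """E-step sketch: per-timestep Poisson maximum-likelihood decoding
    over the grid (the log k! term is constant across bins and dropped)."""
    loglik = spikes @ np.log(rates) - rates.sum(axis=0)[None, :]  # (T, B)
    return bins[np.argmax(loglik, axis=1)]

def simpl_like(spikes, behavior, bins, n_iters=5):
    """EM-like alternation, using the behavioral trajectory as the
    initial latent estimate (as SIMPL does)."""
    latents = behavior.copy()
    for _ in range(n_iters):
        rates = fit_tuning_curves(spikes, latents, bins)  # refit curves
        latents = decode_latents(spikes, rates, bins)     # re-decode latents
    return latents, rates
```

Each iteration reuses the two familiar operations named in the text: tuning-curve estimation from a latent trajectory, then decoding of that trajectory from the estimated curves.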
Validation on Synthetic Datasets
SIMPL
was evaluated on synthetic datasets closely simulating neuroscientific
experiments, including a discrete two-alternative forced choice decision-making
task and a continuous 2D grid cell spatial coding environment. Results showed
that SIMPL rapidly converges to accurate latent trajectories and tuning curves
closely matching ground truth while improving log-likelihood of the spike data
and spatial information content of tuning curves. Initializing with behavior
dramatically reduces identifiability problems and local minima during model
fitting.
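The spatial information content mentioned above is conventionally the Skaggs bits-per-spike measure; a minimal implementation of that standard formula follows (an assumption about which definition the paper uses):

```python
import numpy as np

def skaggs_information(rates, occupancy):
    """Spatial information in bits per spike (Skaggs-style):
    sum_i p_i * (r_i / r_mean) * log2(r_i / r_mean)."""
    p = occupancy / occupancy.sum()      # occupancy probability per bin
    r_mean = (p * rates).sum()           # occupancy-weighted mean rate
    nz = rates > 0                       # skip bins with zero rate
    return ((p[nz] * rates[nz] / r_mean) * np.log2(rates[nz] / r_mean)).sum()
```

A flat tuning curve scores zero bits per spike; sharper, more localized fields score higher, which is why the metric rises as SIMPL refines the curves.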
Application to Hippocampal Place Cell Data
Applied to a real rodent hippocampal dataset (226 neurons recorded over
2 hours), SIMPL improved upon behaviorally-derived tuning curves by refining
place fields to be smaller, more numerous, and more uniformly sized. This
enhanced latent space better explained observed neural spikes and suggested
that the hippocampus encodes spatial information at a higher resolution than
traditional behavioral proxies alone reveal. These findings indicate SIMPL’s
potential in reinterpreting neurophysiological data and revealing subtler
aspects of spatial cognition.
Broader Implications and Future Directions
The paper frames SIMPL as one instance of a broader class of latent-variable
optimization algorithms. While its current components (e.g., kernel density
estimation) may not scale well to very high-dimensional latent spaces, they
could be replaced with parametric models such as neural networks, at some
computational cost. Furthermore, SIMPL could be extended to account for complex
neural phenomena such as replay events or theta sweeps that introduce
asymmetric latent-behavior discrepancies; this may clarify predictive
properties of place cell tuning curves.
Conclusion
SIMPL
offers a conceptually simple, fast, and scalable tool for improving latent
variable estimation in neural data analysis. Its ability to effectively leverage
behavioral measurements for initialization and iteratively refine latent
variables and tuning curves marks a significant advance. It opens avenues for
more accurate interpretations of neural population codes, especially in
navigation and cognition research, and its theoretical connections to
classical expectation-maximization lend confidence in its robustness.
George, T. M., Glaser, P.,
Stachenfeld, K., Barry, C., & Clopath, C. (2024). SIMPL: Scalable and
hassle-free optimization of neural representations from behaviour. bioRxiv. https://doi.org/10.1101/2024.11.11.623030