

Coordinated drift of receptive fields in Hebbian/anti-Hebbian network models during noisy representation learning

Abstract

Recent experiments have revealed that neural population codes in many brain areas continuously change even when animals have fully learned and stably perform their tasks. This representational ‘drift’ naturally leads to questions about its causes, dynamics and functions. Here we explore the hypothesis that neural representations optimize a representational objective with a degenerate solution space, and noisy synaptic updates drive the network to explore this (near-)optimal space causing representational drift. We illustrate this idea and explore its consequences in simple, biologically plausible Hebbian/anti-Hebbian network models of representation learning. We find that the drifting receptive fields of individual neurons can be characterized by a coordinated random walk, with effective diffusion constants depending on various parameters such as learning rate, noise amplitude and input statistics. Despite such drift, the representational similarity of population codes is stable over time. Our model recapitulates experimental observations in the hippocampus and posterior parietal cortex and makes testable predictions that can be probed in future experiments.
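The class of models studied here can be sketched compactly. Below is a minimal Python/NumPy illustration of a noisy online similarity-matching (Hebbian/anti-Hebbian) network: feedforward weights W are updated by a Hebbian rule, lateral weights M by an anti-Hebbian rule, and additive synaptic noise drives drift within the degenerate solution space. The update rules follow the standard similarity-matching form; the parameter values (`eta`, `sigma`) are illustrative assumptions, not the authors' MATLAB implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, eta, sigma, T = 10, 3, 0.02, 0.01, 3000

# Random inputs; with identity covariance the optimal k-dimensional
# projection is degenerate, so noise can rotate the solution freely.
X = rng.standard_normal((n, T))

W = 0.1 * rng.standard_normal((k, n))   # feedforward (Hebbian) weights
M = np.eye(k)                           # lateral (anti-Hebbian) weights

for t in range(T):
    x = X[:, t]
    y = np.linalg.solve(M, W @ x)       # steady state of the recurrent dynamics
    # Hebbian/anti-Hebbian updates plus additive synaptic noise (assumed Gaussian).
    W += eta * (np.outer(y, x) - W) + sigma * np.sqrt(eta) * rng.standard_normal(W.shape)
    M += eta * (np.outer(y, y) - M) + sigma * np.sqrt(eta) * rng.standard_normal(M.shape)
    M = (M + M.T) / 2                   # keep lateral weights symmetric

F = np.linalg.solve(M, W)               # effective filter; drifts within the optimal space
```

Tracking `F` over time shows the coordinated random walk described in the abstract: individual filters change continuously while the subspace they span, and hence the representational similarity, stays (near-)optimal.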


Fig. 1: Learning localized RFs in Hebbian/anti-Hebbian networks.
Fig. 2: Drift dynamics in the PSP task.
Fig. 3: Drift of manifold-tiling localized RFs in nonlinear Hebbian/anti-Hebbian networks.
Fig. 4: Predictions of the nonlinear model.
Fig. 5: Drift of place fields.
Fig. 6: Representational drift in the PPC.


Data availability

No new experimental data were generated in this study.

Experimental data presented in Fig. 5 are originally described in ref. 10. We used the processed data and MATLAB code, available at the Caltech Research Data Repository (https://doi.org/10.22002/d1.1229), to produce these plots.

Experimental data presented in Fig. 6 are extracted from Fig. 2c,d of ref. 9. The data are freely available in ref. 79.

Code availability

Code for the numerical experiments was written in MATLAB (R2020b). Analyses and figures were also made using MATLAB (R2020b), except Fig. 4a,b, which was made with R (version 4.2.0). All code is available in the GitHub repository https://github.com/Pehlevan-Group/representation-drift.

References

  1. Ziv, Y. et al. Long-term dynamics of CA1 hippocampal place codes. Nat. Neurosci. 16, 264 (2013).


  2. Li, M. et al. Long-term two-photon imaging in awake macaque monkey. Neuron 93, 1049–1057 (2017).


  3. Schoonover, C. E. et al. Representational drift in primary olfactory cortex. Nature 594, 541–546 (2021).


  4. Katlowitz, K. A., Picardo, M. A. & Long, M. A. Stable sequential activity underlying the maintenance of a precisely executed skilled behavior. Neuron 98, 1133–1140 (2018).


  5. Ulivi, A. F. et al. Longitudinal two-photon imaging of dorsal hippocampal CA1 in live mice. J. Vis. Exp. 148, e59598 (2019).


  6. Luo, T. Z. et al. An approach for long-term, multi-probe Neuropixels recordings in unrestrained rats. eLife 9, e59716 (2020).


  7. Rule, M. E., O’Leary, T. & Harvey, C. D. Causes and consequences of representational drift. Curr. Opin. Neurobiol. 58, 141–147 (2019).


  8. Mau, W., Hasselmo, M. E. & Cai, D. J. The brain in motion: how ensemble fluidity drives memory-updating and flexibility. eLife 9, e63550 (2020).


  9. Driscoll, L. N. et al. Dynamic reorganization of neuronal activity patterns in parietal cortex. Cell 170, 986–999 (2017).


  10. Gonzalez, W. G. et al. Persistence of neuronal representations through time and damage in the hippocampus. Science 365, 821–825 (2019).


  11. Lee, J. S. et al. The statistical structure of the hippocampal code for space as a function of time, context, and value. Cell 183, 620–635 (2020).


  12. Rokni, U. et al. Motor learning with unstable neural representations. Neuron 54, 653–666 (2007).


  13. Chestek, C. A. et al. Single-neuron stability during repeated reaching in macaque premotor cortex. J. Neurosci. 27, 10742–10750 (2007).


  14. Gallego, J. A. et al. Long-term stability of cortical population dynamics underlying consistent behavior. Nat. Neurosci. 23, 260–270 (2020).


  15. Redman, W. T. et al. Long-term transverse imaging of the hippocampus with glass microperiscopes. eLife 11, e75391 (2022).


  16. Grewe, B. F. et al. Neural ensemble dynamics underlying a long-term associative memory. Nature 543, 670–675 (2017).


  17. Deitch, D., Rubin, A. & Ziv, Y. Representational drift in the mouse visual cortex. Curr. Biol. 31, 4327–4339 (2021).


  18. Marks, T. D. & Goard, M. J. Stimulus-dependent representational drift in primary visual cortex. Nat. Commun. 12, 5169 (2021).


  19. Rumpel, S. & Triesch, J. The dynamic connectome. Neuroforum 22, 48–53 (2016).


  20. Attardo, A., Fitzgerald, J. E. & Schnitzer, M. J. Impermanence of dendritic spines in live adult CA1 hippocampus. Nature 523, 592–596 (2015).


  21. Hazan, L. & Ziv, N. E. Activity dependent and independent determinants of synaptic size diversity. J. Neurosci. 40, 2828–2848 (2020).


  22. Attneave, F. Some informational aspects of visual perception. Psychol. Rev. 61, 183–193 (1954).


  23. Barlow, H. Sensory Communication (MIT Press, 1961).

  24. Atick, J. J. & Redlich, A. N. What does the retina know about natural scenes? Neural Comput. 4, 196–210 (1992).


  25. Srinivasan, M. V., Laughlin, S. B. & Dubs, A. Predictive coding: a fresh view of inhibition in the retina. Proc. R. Soc. Lond. B Biol. Sci. 216, 427–459 (1982).


  26. van Hateren, J. H. A theory of maximizing sensory information. Biol. Cybern. 68, 23–29 (1992).


  27. Rao, R. P. N. & Ballard, D. H. Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nat. Neurosci. 2, 79–87 (1999).


  28. Olshausen, B. A. & Field, D. J. Sparse coding with an overcomplete basis set: a strategy employed by V1? Vis. Res. 37, 3311–3325 (1997).


  29. Pehlevan, C., Hu, T. & Chklovskii, D. B. A Hebbian/anti-Hebbian neural network for linear subspace learning: a derivation from multidimensional scaling of streaming data. Neural Comput. 27, 1461–1495 (2015).


  30. Chalk, M., Marre, O. & Tkacik, G. Toward a unified theory of efficient, predictive, and sparse coding. Proc. Natl Acad. Sci. USA 115, 186–191 (2018).


  31. O’Keefe, J. & Dostrovsky, J. The hippocampus as a spatial map: preliminary evidence from unit activity in the freely-moving rat. Brain Res. 34, 171–175 (1971).


  32. Nieh, E. H. et al. Geometry of abstract learned knowledge in the hippocampus. Nature 595, 80–84 (2021).


  33. Földiak, P. Forming sparse representations by local anti-Hebbian learning. Biol. Cybern. 64, 165–170 (1990).


  34. Pehlevan, C. & Chklovskii, D. B. Neuroscience-inspired online unsupervised learning algorithms: artificial neural networks. IEEE Signal Process Mag. 36, 88–96 (2019).


  35. Kriegeskorte, N., Mur, M. & Bandettini, P. A. Representational similarity analysis-connecting the branches of systems neuroscience. Front. Syst. Neurosci. 2, 4 (2008).


  36. Pehlevan, C., Sengupta, A. M. & Chklovskii, D. B. Why do similarity matching objectives lead to Hebbian/anti-Hebbian networks? Neural Comput. 30, 84–124 (2018).


  37. Sengupta, A. M. et al. Manifold-tiling localized receptive fields are optimal in similarity-preserving neural networks. In: Advances in Neural Information Processing Systems 7080–7090 (2018).

  38. Kämmerer, S., Kob, W. & Schilling, R. Dynamics of the rotational degrees of freedom in a supercooled liquid of diatomic molecules. Phys. Rev. E 56, 5450 (1997).


  39. Mazza, M. G. et al. Relation between rotational and translational dynamic heterogeneities in water. Phys. Rev. Lett. 96, 057803 (2006).


  40. Hubel, D. H. Eye, Brain, and Vision (Scientific American Library) (1995).

  41. Peña, J. L. & Konishi, M. Auditory spatial receptive fields created by multiplication. Science 292, 249–252 (2001).


  42. Solstad, T., Moser, E. I. & Einevoll, G. T. From grid cells to place cells: a mathematical model. Hippocampus 16, 1026–1031 (2006).


  43. Savelli, F. & Knierim, J. J. Hebbian analysis of the transformation of medial entorhinal grid-cell inputs to hippocampal place fields. J. Neurophysiol. 103, 3167–3183 (2010).


  44. Bezaire, M. J. & Soltesz, I. Quantitative assessment of CA1 local circuits: knowledge base for interneuron-pyramidal cell connectivity. Hippocampus 23, 751–785 (2013).


  45. Rolotti, S. V. et al. Local feedback inhibition tightly controls rapid formation of hippocampal place fields. Neuron 110, 783–794 (2022).


  46. Udakis, M. et al. Interneuron-specific plasticity at parvalbumin and somatostatin inhibitory synapses onto CA1 pyramidal neurons shapes hippocampal output. Nat. Commun. 11, 4395 (2020).


  47. Basu, J. & Siegelbaum, S. A. The corticohippocampal circuit, synaptic plasticity, and memory. Cold Spring Harb. Perspect. Biol. 7, a021733 (2015).


  48. Yoon, K. J. et al. Grid cell responses in 1D environments assessed as slices through a 2D lattice. Neuron 89, 1086–1099 (2016).


  49. Harvey, C. D., Coen, P. & Tank, D. W. Choice-specific sequences in parietal cortex during a virtual-navigation decision task. Nature 484, 62–68 (2012).


  50. Kuan, A. T. et al. Synaptic wiring motifs in posterior parietal cortex support decision-making. Preprint at bioRxiv https://doi.org/10.1101/2022.04.13.488176 (2022).

  51. Zuo, Y. et al. Long-term sensory deprivation prevents dendritic spine loss in primary somatosensory cortex. Nature 436, 261–265 (2005).


  52. Aitken, K., Garrett, M., Olsen, S. & Mihalas, S. The geometry of representational drift in natural and artificial neural networks. PLoS Comput. Biol. 18, e1010716 (2022).


  53. Hainmueller, T. & Bartos, M. Parallel emergence of stable and dynamic memory engrams in the hippocampus. Nature 558, 292–296 (2018).


  54. Mankin, E. A. et al. Neuronal code for extended time in the hippocampus. Proc. Natl Acad. Sci. USA 109, 19462–19467 (2012).


  55. Amaral, D. G. & Witter, M. P. The three-dimensional organization of the hippocampal formation: a review of anatomical data. Neuroscience 31, 571–591 (1989).


  56. Rolls, E. T. An attractor network in the hippocampus: theory and neurophysiology. Learn. Mem. 14, 714–731 (2007).


  57. Rajan, K., Harvey, C. D. & Tank, D. W. Recurrent network models of sequence generation and memory. Neuron 90, 128–142 (2016).


  58. Xia, J. et al. Stable representation of a naturalistic movie emerges from episodic activity with gain variability. Nat. Commun. 12, 5170 (2021).


  59. Rubin, A. et al. Revealing neural correlates of behavior without behavioral measurements. Nat. Commun. 10, 4745 (2019).


  60. Kinsky, N. R. et al. Hippocampal place fields maintain a coherent and flexible map across long timescales. Curr. Biol. 28, 3578–3588 (2018).


  61. Chen, T. et al. A simple framework for contrastive learning of visual representations. In Proc. of the 37th International Conference on Machine Learning. PMLR, 1597–1607 (2020).

  62. Zbontar, J. et al. Barlow twins: self-supervised learning via redundancy reduction. In Proc. of the 38th International Conference on Machine Learning. PMLR, 12310–12320 (2021).

  63. Bordelon, B. & Pehlevan, C. Population codes enable learning from few examples by shaping inductive bias. eLife 11, e78606 (2022).


  64. Druckmann, S. & Chklovskii, D. B. Neuronal circuits underlying persistent representations despite time varying activity. Curr. Biol. 22, 2095–2103 (2012).


  65. Kaufman, M. T. et al. Cortical activity in the null space: permitting preparation without movement. Nat. Neurosci. 17, 440–448 (2014).


  66. Rule, M. E. et al. Stable task information from an unstable neural population. eLife 9, e51121 (2020).


  67. Rule, M. E. & O’Leary, T. Self-healing codes: how stable neural populations can track continually reconfiguring neural representations. Proc. Natl Acad. Sci. USA 119, e2106692119 (2022).


  68. Masset, P., Qin, S. & Zavatone-Veth, J. A. Drifting neuronal representations: bug or feature? Biol. Cybern. 116, 253–266 (2022).


  69. Duffy, A. et al. Variation in sequence dynamics improves maintenance of stereotyped behavior in an example from bird song. Proc. Natl Acad. Sci. USA 116, 9592–9597 (2019).


  70. Kappel, D. et al. Network plasticity as Bayesian inference. PLoS Comput. Biol. 11, e1004485 (2015).


  71. Hunter, G. L. et al. Tracking rotational diffusion of colloidal clusters. Opt. Express 19, 17189–17202 (2011).


  72. Pehlevan, C. A spiking neural network with local learning rules derived from nonnegative similarity matching. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 7958–7962 (IEEE, 2019).

  73. Pehlevan, C. & Chklovskii, D. B. A Hebbian/anti-Hebbian network derived from online non-negative matrix factorization can cluster and discover sparse features. In Proc. of 48th Asilomar Conference on Signals, Systems and Computers. IEEE, 769–775 (2014).

  74. Pehlevan, C., Mohan, S. & Chklovskii, D. B. Blind nonnegative source separation using biological neural networks. Neural Comput. 29, 2925–2954 (2017).


  75. Stensola, H. et al. The entorhinal grid map is discretized. Nature 492, 72–78 (2012).


  76. Kropff, E. & Treves, A. The emergence of grid cells: Intelligent design or just adaptation? Hippocampus 18, 1256–1269 (2008).


  77. Lian, Y. & Burkitt, A. N. Learning an efficient hippocampal place map from entorhinal inputs using Non-Negative sparse coding. eNeuro 8, ENEURO.0557-20.2021 (2021).


  78. Samorodnitsky, G. & Taqqu, M. S. Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance: Stochastic Modeling (Routledge, 2017).

  79. Driscoll, L. N. et al. Data From: Dynamic Reorganization of Neuronal Activity Patterns in Parietal Cortex Dataset (Dryad, 2020).

  80. Sanger, T. D. Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural Netw. 2, 459 (1989).



Acknowledgements

This work was supported by the NIH grant 1UF1NS111697-01 (C.P. and S.Q.), the Intel Corporation through the Intel Neuromorphic Research Community (C.P.) and a Google Faculty Research Award (C.P.). We thank W. Gonzalez, H. Zhang, A. Harutyunyan and C. Lois from the California Institute of Technology for sharing the data on place cell recordings. We thank L. Driscoll, N. Pettit, M. Minderer, S. Chettih and C. Harvey for making the T-maze experimental data available. We are grateful to members of the Pehlevan group for helpful discussions, and to W. Gonzalez, L. Driscoll and C. Harvey for comments on the manuscript.

Author information


Contributions

This work resulted from the merging of two independent projects at their initial stages: one by S.Q. and C.P. and the other by S.F., D.L., A.M.S. and D.B.C. For the current manuscript, S.Q. and C.P. conceived and designed the study with input from S.F., D.L., A.M.S. and D.B.C. S.Q. performed the numerical simulations and analytical calculations. S.Q. and C.P. analyzed and interpreted the data with input from S.F., D.L., A.M.S. and D.B.C. S.Q. and C.P. wrote the manuscript with comments from S.F., D.L., A.M.S. and D.B.C.

Corresponding author

Correspondence to Cengiz Pehlevan.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Neuroscience thanks Adrienne Fairhall, Timothy O’Leary and Alessandro Treves for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Performance of the linear Hebbian/anti-Hebbian network in the PSP task.

(a,b) The PSP error, quantified by \(\| \mathbf{F}_t^\top \mathbf{F}_t - \mathbf{U}\mathbf{U}^\top \|_F / \| \mathbf{U}\mathbf{U}^\top \|_F\), where \(\mathbf{U}\) is an n × k matrix whose columns are the top k left singular vectors of \(\mathbf{X} \equiv [\mathbf{x}_1, \cdots, \mathbf{x}_T]\) and \(\mathbf{F}_t \equiv \mathbf{M}_t^{-1}\mathbf{W}_t\), drops very quickly during training (a) and remains low in the presence of synaptic noise (b). (c) The relative change of the similarity matrix at time t compared with time 0, the point at which the network has initially learned the task, defined as \(\| \mathbf{Y}_t^\top \mathbf{Y}_t - \mathbf{Y}_0^\top \mathbf{Y}_0 \|_F / \| \mathbf{Y}_0^\top \mathbf{Y}_0 \|_F\). (d) Estimating the rotational diffusion constant \(D_\varphi\) from the mean squared angular displacement (MSAD). Gray lines are MSADs estimated from individual representation trajectories \(\mathbf{y}(t)\). The dashed line is a linear fit between \(\langle (\Delta\varphi)^2 \rangle \equiv \langle (\varphi(t + \Delta t) - \varphi(t))^2 \rangle\) and \(\Delta t\), used to estimate the rotational diffusion constant. Inset: illustration of \(\Delta\varphi\). Parameters are the same as in Fig. 2 of the main text.
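Both quantities in this caption translate directly into code. The Python sketch below computes the PSP error from its definition above and fits a diffusion constant to an MSAD curve; the 1D random-walk convention \(\langle (\Delta\varphi)^2 \rangle = 2D\Delta t\) is our assumption, and the paper's exact normalization may differ.

```python
import numpy as np

def psp_error(F, X, k):
    """||F^T F - U U^T||_F / ||U U^T||_F, with U holding the top-k left
    singular vectors of the input data matrix X (n x T)."""
    U = np.linalg.svd(X, full_matrices=False)[0][:, :k]
    P = U @ U.T
    return np.linalg.norm(F.T @ F - P) / np.linalg.norm(P)

def diffusion_from_msad(phi, max_lag=20):
    """Estimate D from a scalar angle trajectory phi(t) by a linear fit of
    the MSAD <(phi(t+dt) - phi(t))^2> against dt (assumes MSAD = 2*D*dt)."""
    lags = np.arange(1, max_lag + 1)
    msad = np.array([np.mean((phi[l:] - phi[:-l]) ** 2) for l in lags])
    slope = np.polyfit(lags, msad, 1)[0]   # linear fit, as in panel (d)
    return slope / 2.0
```

For a pure random walk with step standard deviation s, the estimator returns approximately s²/2 per unit time, which is the consistency check one would run before applying it to network trajectories.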

Extended Data Fig. 2 Drift of a single output neuron's RF when stimuli live on a ring.

(a) With stimuli living on a ring, a single RF has the shape of a truncated cosine curve, whose centroid drifts on the ring like a random walk. (b,c) The effective diffusion constant D of the centroid position increases with the learning rate η, both without explicit synaptic noise (σ = 0) (b) and with explicit noise (c). Error bars: mean ± SD, n = 40 simulations. Magenta lines correspond to the theoretical prediction, equation (27) in the main text. (d) An RF with larger amplitude has a smaller diffusion constant; the amplitude of the RF is varied by changing the value of α. Shading: mean ± SD, n = 40 simulations.

Extended Data Fig. 3 Distinct contribution of noise from forward synapses and recurrent synapses to representational drift in the 1D place cell model.

To further verify the role of noise in feedforward synapses, we simulated models of representational drift in 1D place cells and compared the correlation coefficients of population vectors of the principal output neurons in three different noise scenarios: the full model with all synaptic noise (blue); noise only in the forward synapses \(\mathbf{W}\) (\(\sigma_M = 0\), red); and noise only in the recurrent synapses \(\mathbf{M}\) (\(\sigma_W = 0\), gray). These models are further explored in Fig. 5 of the main text and Extended Data Fig. 4. In both the simplified 1D place cell model (a) and the more detailed network model with inhibitory neurons (b), noise in the forward matrix has a much larger influence on representational drift. For the network with inhibitory neurons, the forward-noise-only scenario corresponds to setting the noise in the matrices \(\mathbf{M}\), \(\mathbf{W}^{EI}\) and \(\mathbf{W}^{IE}\) to 0. Shading: mean ± SD, n = 200 output neurons. Parameters used are in Supplementary Table 1 of the SI.
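The population-vector comparison used in this figure can be sketched as follows; the array layout (time × positions × neurons) and averaging over positions are our assumptions about the analysis, not the paper's code.

```python
import numpy as np

def pv_correlation(Y):
    """Mean Pearson correlation between population vectors at time 0 and
    each later time point, averaged over track positions.

    Y: array of shape (time, positions, neurons); entry Y[t, p] is the
    population vector (activity across neurons) at position p, time t.
    """
    T, P, _ = Y.shape
    out = np.empty(T)
    for t in range(T):
        # Correlate each position's population vector with its time-0 version.
        cs = [np.corrcoef(Y[0, p], Y[t, p])[0, 1] for p in range(P)]
        out[t] = np.mean(cs)
    return out
```

Under the three noise scenarios above, running this on simulated activity would yield the blue, red and gray decay curves: faster decay means stronger drift of the population code.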

Extended Data Fig. 4 A Hebbian/anti-Hebbian network model of CA1 with both excitatory and inhibitory neurons exhibits similar representational drift as the network in Fig. 5 of the main text.

(a) A Hebbian/anti-Hebbian network with inhibitory neurons derived from a similarity matching objective. The derivation is given in SI Section 3. (b) Upper: learned place fields tile a 1D linear track when sorted by their centroid positions (left), but continuously change over time (right). Lower: the representational similarity matrix \(\mathbf{Y}^\top \mathbf{Y}\) of position is stable over time. (c) Peak amplitude of an example place field during a simulation. (d) Due to the drift, the average autocorrelation coefficient of population vectors decays over time. Shading: mean ± SD, n = 200 places; population vectors consist of only excitatory neurons. (e) Despite the continuous reconfiguration of place cell ensembles, the fraction of cells with active place fields is stable over time. (f) Neurons whose RFs have a larger average amplitude are more stable, as characterized by a smaller D. (g) Probability distribution of centroid drifts of place cells at three different time intervals. (h) Same as Fig. 5k in the main text: drifts of RFs show distance-dependent correlations, quantified by the average Pearson correlation coefficient. Shading: mean ± SD, n = 20 repeats. Error bars: mean ± SD, n = 13 animals. Parameters used are in Supplementary Table 1 of the SI.
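The distance-dependent drift correlation in panel (h) can be computed, for instance, as below; the pairing and binning conventions (all cell pairs, binned by initial centroid distance) are assumptions for illustration.

```python
import numpy as np

def drift_distance_correlation(c0, c1, bins):
    """Pearson correlation of centroid shifts for cell pairs, binned by the
    distance between their centroids at the first time point.

    c0, c1: centroid positions of each cell at two time points;
    bins: bin edges for the pairwise centroid distance.
    Returns one correlation per distance bin (NaN where a bin is empty).
    """
    shift = c1 - c0
    i, j = np.triu_indices(len(c0), k=1)       # all unordered cell pairs
    dist = np.abs(c0[i] - c0[j])
    which = np.digitize(dist, bins)
    corrs = []
    for b in range(1, len(bins)):
        m = which == b
        if m.sum() > 1:
            corrs.append(np.corrcoef(shift[i[m]], shift[j[m]])[0, 1])
        else:
            corrs.append(np.nan)
    return np.array(corrs)
```

A decreasing profile over the bins is the signature of coordinated drift: cells with nearby place fields move together more than distant ones.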

Extended Data Fig. 5 Drift of 2D place cells in the model.

(a) Representational similarity is preserved despite the continuous drift of place cell RFs. Positions on the plane (discretized as a 32 × 32 lattice) are represented by an index from 1 to 1024. (b) The dynamics of RFs are intermittent: the peak amplitude of an example place field has active and silent bouts. (c) The durations of silent bouts follow an approximately exponential distribution. (d) At the population level, the fraction of active RFs is constant over time. (e) Dependence of the effective diffusion constant on the total number of output neurons. Error bars: mean ± SD, n = 40 simulations. (f,g) Place cells that have stronger place fields tend to be active more often (f) and are also more stable, as indicated by a smaller diffusion constant (g). Parameters used are in Supplementary Table 1 of the SI.

Extended Data Fig. 6 Representational drift in a modified 1D place cell model with alternating learning and forgetting periods.

We introduced a forgetting timescale (1/ηforget) into our learning rules. The model is described in detail in SI Section 4. (a) 100 synaptic updates (shaded region) are sequentially followed by a forgetting period of 500 synaptic updates. Including a slower forgetting timescale significantly enhances the stability of the learned representation, as quantified by the similarity matrix alignment (RSA) defined in equation (41) of the SI (upper). The representational similarity matrices \(\mathbf{Y}^\top \mathbf{Y}\) after the last forgetting period are shown for three different forgetting timescales (lower). (b) Place fields of three exemplar output neurons in the presence of input and synaptic noise. Time starts from when the system has fully learned the representation. (c) Even with a slow forgetting timescale, the representation still drifts during 'experiment' sessions, as shown by the decay of the correlation coefficients of population vectors across learning sessions (shaded regions in (a)). Parameters are listed in Supplementary Table 1 of the SI. Shading: mean ± SD, n = 200 output neurons. In (a) and (b), \(\eta_{\mathrm{forget}} = 10^{-3}\).
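We do not have SI equation (41) here, but one plausible form of a similarity matrix alignment score is the normalized inner product between the two similarity matrices, sketched below. It also makes the paper's central degeneracy argument concrete: an orthogonal rotation of the representation leaves the score at exactly 1.

```python
import numpy as np

def rsa_alignment(Y0, Y1):
    """Alignment between representational similarity matrices Y^T Y at two
    times: normalized inner product in [-1, 1]. This is an assumed stand-in
    for the RSA metric of SI equation (41), not the paper's definition.

    Y0, Y1: (neurons x stimuli) activity matrices at the two time points.
    """
    S0, S1 = Y0.T @ Y0, Y1.T @ Y1
    return np.sum(S0 * S1) / (np.linalg.norm(S0) * np.linalg.norm(S1))
```

Because (QY)^T (QY) = Y^T Y for any orthogonal Q, coordinated drift that rotates the population code within the optimal space leaves this alignment, and hence the representational similarity, unchanged.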

Extended Data Fig. 7 A Hebbian/anti-Hebbian network model of the PPC with both excitatory and inhibitory neurons exhibits similar representational drift as the network in Fig. 6 of main text.

(a) Population activity of excitatory neurons for the left-turn and right-turn tasks before (upper) and after (lower) sorting based on the centroids of their RFs. Only neurons that have active RFs at the given time point are shown. (b) Population activity drifts but representational similarity is stable over time. Upper and middle: activity of excitatory neurons that are active (tuned to either the left turn or the right turn), sorted at a reference time point. Lower: the representational similarity matrix is stable for both the left-turn and right-turn tasks. (c,d) Comparison of drift statistics between model and experiment, corresponding to panels d–f of Fig. 6 in the main text. Error bars: mean ± SD, n = 5, 5, 4 mice for ∆ = 1, 10, 20 days.

Extended Data Fig. 8 Degeneracy of the learning objective function and representational drift.

We compare the long-term behavior of learned representations in three different networks. (a) Upper: the Hebbian/anti-Hebbian network for PSP. Lower: the evolution of the three components of a representation \(\mathbf{y}_t\). (b) Upper: a network that differs from the Hebbian/anti-Hebbian network only in the recurrent matrix M, which breaks the rotational symmetry of the PSP solution; the learning rule is the same. Lower: the learned representation is stabilized and only fluctuates around its equilibrium. (c) A single-layer feedforward network that performs online principal component analysis with Sanger's rule (ref. 80). This network has only the feedforward input matrix W, and the learning rule is nonlocal. Lower: the learned representation is relatively stable in the presence of noise. Parameters are the same as in Fig. 2 of the main text except that η = 0.01.

Supplementary information

Supplementary Information

Supplementary Note and Supplementary Table 1.

Reporting Summary

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Qin, S., Farashahi, S., Lipshutz, D. et al. Coordinated drift of receptive fields in Hebbian/anti-Hebbian network models during noisy representation learning. Nat Neurosci 26, 339–349 (2023). https://doi.org/10.1038/s41593-022-01225-z

