
Sarah Harvey, Ph.D.


Flatiron Research Fellow
Center for Computational Neuroscience, Flatiron Institute

ORCID: 0000-0003-2601-7487
My Google Scholar
My GitHub

Hello! I am Sarah, a postdoctoral research fellow working at the intersection of theoretical physics and neuroscience at the Flatiron Institute, part of the Simons Foundation. My research in the Williams Lab currently centers on ways to measure representational similarity between neural systems and on how these methods relate to one another (a small illustration follows below). More broadly, I am interested in statistical physics, computational neuroscience, and machine learning. In particular, I have always been fascinated by how biological brains solve complex computational problems far more efficiently than our artificial models.
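As a concrete illustration (a minimal NumPy sketch of one standard metric of this kind, not code from any of the papers below), the orthogonal Procrustes shape distance compares two response matrices X and Y, each of shape (stimuli x neurons), after optimally rotating one population's axes onto the other's:

    import numpy as np

    def procrustes_distance(X, Y):
        # X, Y: (stimuli x neurons) response matrices of matching shape.
        # Center each neuron's responses so mean offsets are ignored.
        X = X - X.mean(axis=0)
        Y = Y - Y.mean(axis=0)
        # min_Q ||X - Y Q||_F^2 over orthogonal Q equals
        # ||X||_F^2 + ||Y||_F^2 - 2 * (nuclear norm of X^T Y).
        nuclear = np.linalg.svd(X.T @ Y, compute_uv=False).sum()
        sq_dist = (X**2).sum() + (Y**2).sum() - 2.0 * nuclear
        return np.sqrt(max(sq_dist, 0.0))

The resulting distance is zero for identical populations and is invariant to orthogonal transformations of either population's neural axes, which is what makes it a comparison of representations rather than of individual neurons.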

I am a graduate of the Applied Physics department at Stanford University, where I was a theorist in the Ganguli Lab. In graduate school, I worked on using methods from nonequilibrium statistical physics and large deviation theory to study biological computation, from the microscopic scale of single-receptor computations to macroscopic reinforcement learning.

My paper (with Subhaneil Lahiri) studies thermodynamic limits on sensors modeled as continuous-time Markov chains, using stochastic thermodynamics and large deviation theory. We can place interesting bounds on sensors of this type by first deriving a thermodynamic uncertainty relation for densities in subsets of Markov chain states (a standard form of such a relation is shown below for orientation).
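For orientation, the classic thermodynamic uncertainty relation bounds the relative fluctuations of any time-integrated current J_t by the total entropy production Sigma_t; this textbook form is shown only for context, since the paper derives an analogous bound for state densities rather than currents:

    \frac{\operatorname{Var}(J_t)}{\langle J_t \rangle^{2}} \;\geq\; \frac{2 k_B}{\Sigma_t}

Intuitively, a sensor that wants a lower relative error in its readout must pay for it with greater entropy production, i.e., energy dissipation.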

With Chris Stock and Sam Ocko, I also worked on a project deriving biologically plausible synaptic update rules that provably preserve the task an RNN is trained to do while also increasing the network's robustness to noise. We can think of this synaptic update rule as traversing a manifold in synaptic weight space of networks that all perform exactly the same task, in search of the network that is most robust to noise (a minimal sketch of this idea follows below).
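To illustrate the geometry (a minimal NumPy sketch under simplifying assumptions, not the code accompanying the paper): for a network with a positively homogeneous nonlinearity such as ReLU, rescaling neuron i's incoming weights by exp(a_i) and its outgoing weights by exp(-a_i) leaves the input-output map unchanged, so one can descend a weight-magnitude cost along these rescalings without ever leaving the task-preserving manifold:

    import numpy as np

    def balancing_step(W, lr=0.01):
        # W[i, j] connects neuron j -> neuron i, so row i holds neuron i's
        # incoming weights and column i holds its outgoing weights.
        # W -> D W D^{-1} with D = diag(exp(a)) preserves the computation of
        # a ReLU-type network for any a; we take one gradient step on the
        # squared-weight cost with respect to the log-scales a.
        incoming = (W**2).sum(axis=1)    # squared incoming norm per neuron
        outgoing = (W**2).sum(axis=0)    # squared outgoing norm per neuron
        a = -lr * (incoming - outgoing)  # descend the cost along the manifold
        d = np.exp(a)
        return (W * d[:, None]) / d[None, :]

Iterating this step drives each neuron toward a balance between its incoming and outgoing synaptic costs, which in this setting corresponds to improved noise robustness.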

Links to my papers and preprints are listed below; in most cases the preprint and published version have very similar or identical content.

Publication Links

Sarah E. Harvey, Brett W. Larsen, Alex H. Williams. (2023). Duality of Bures and Shape Distances with Implications for Comparing Neural Representations. UniReps 2023. Preprint: https://arxiv.org/abs/2311.11436.

Dean A. Pospisil, Brett W. Larsen, Sarah E. Harvey, Alex H. Williams. (2023). Estimating Shape Distances on Neural Representations with Limited Samples. ICLR 2024. Preprint: https://arxiv.org/abs/2310.05742.

Sarah E. Harvey, Subhaneil Lahiri, Surya Ganguli. (2022). Universal energy-accuracy tradeoffs in nonequilibrium cellular sensing. Physical Review E (link). Preprint: https://arxiv.org/abs/2002.10567.

Christopher H. Stock, Sarah E. Harvey, Samuel A. Ocko, Surya Ganguli. (2021). Synaptic balancing: a biologically plausible local learning rule that provably increases neural network noise robustness without sacrificing task performance. PLOS Computational Biology (link). Preprint: https://arxiv.org/abs/2107.08530.

Todd Karin, Xiayu Linpeng, M. M. Glazov, M. V. Durnev, E. L. Ivchenko, Sarah Harvey, Ashish K. Rai, Arne Ludwig, Andreas D. Wieck, Kai-Mei C. Fu. (2016). Giant permanent dipole moment of two-dimensional excitons bound to a single stacking fault. Physical Review B 94(4) (link). doi:10.1103/physrevb.94.041201.

Education

University of Washington               B.S., Physics and Astronomy, summa cum laude, 2015

Stanford University                    Ph.D., Applied Physics, 2022

Talks

Teaching

Honors, Awards, Programs

Some other stuff I enjoy

OEIS
Ham radio: KI7UXI
Drawing
Analog photography

US mail: Sarah Harvey
         Center for Computational Neuroscience
         162 Fifth Avenue
         New York, NY 10010
Email:   sharvey@flatironinstitute.org
Twitter: @SarahLizHarvey

Sarah Harvey <sharvey@flatironinstitute.org>
Last modified: 1/24