Hello! I am Sarah, a postdoctoral research fellow in theoretical physics and neuroscience at the Flatiron Institute, part of the Simons Foundation and based in New York City. I am broadly interested in statistical physics, computational neuroscience, and machine learning. In particular, I am fascinated by how biological brains solve complex computational problems far more efficiently than our artificial models.
I perform theoretical and computational research in the Williams Lab, where my work currently centers on developing meaningful ways to measure when neural systems are similar to each other. I don't think we will "understand" biological brains until we understand how they come to be, both developmentally and evolutionarily, and I hope that methods for meaningful comparison between neural systems will be helpful to this end.
I am a graduate of the applied physics department at Stanford University, where I was a theorist in the Ganguli Lab. In graduate school, I used methods from nonequilibrium statistical physics and large deviation theory to study biological computation, from the microscopic scale of single-receptor computations to macroscopic reinforcement learning. My paper from this time (with Subhaneil Lahiri) studies thermodynamic limits on sensors modeled as continuous-time Markov chains, using stochastic thermodynamics and large deviation theory. We can place interesting bounds on sensors of this type by first deriving a thermodynamic uncertainty relation for densities in subsets of Markov chain states.
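For readers unfamiliar with thermodynamic uncertainty relations: the original version (due to Barato and Seifert) bounds the relative fluctuations of any time-integrated current \(J_t\) in a nonequilibrium steady state by the entropy production rate \(\sigma\) (in units of \(k_B\)),

\[ \frac{\mathrm{Var}(J_t)}{\langle J_t\rangle^2} \;\geq\; \frac{2}{\sigma t}. \]

Our relation plays a similar role, but for the empirical densities of time spent in subsets of states rather than for currents.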
Before Stanford, I graduated from the wonderful University of Washington in Seattle with Bachelor's degrees in physics and astronomy. Sadly, I haven't used my astronomy degree very much in my research career (yet).
Links to my papers and preprints are listed below. In most cases the preprint and published versions have essentially identical content.
Jenelle Feather, David Lipshutz, Sarah E. Harvey, Alex H. Williams, Eero P. Simoncelli. (2024). Discriminating image representations with principal distortions. Under review for ICLR 2025. Preprint: https://arxiv.org/abs/2410.15433.
Sarah E. Harvey, Brett W. Larsen, Alex H. Williams. (2024). Duality of Bures and Shape Distances with Implications for Comparing Neural Representations. UniReps 2023. Preprint: https://arxiv.org/abs/2311.11436.
Dean A. Pospisil, Brett W. Larsen, Sarah E. Harvey, Alex H. Williams. (2023). Estimating Shape Distances on Neural Representations with Limited Samples. ICLR 2024. Preprint: https://arxiv.org/abs/2310.05742.
Sarah E. Harvey, Subhaneil Lahiri, Surya Ganguli. (2022). Universal energy-accuracy tradeoffs in nonequilibrium cellular sensing. Physical Review E (link). Preprint: https://arxiv.org/abs/2002.10567.
Christopher H. Stock, Sarah E. Harvey, Samuel A. Ocko, Surya Ganguli. (2021). Synaptic balancing: a biologically plausible local learning rule that provably increases neural network noise robustness without sacrificing task performance. PLOS Computational Biology (link). Preprint: https://arxiv.org/abs/2107.08530.
Todd Karin, Xiayu Linpeng, M. M. Glazov, M. V. Durnev, E. L. Ivchenko, Sarah Harvey, Ashish K. Rai, Arne Ludwig, Andreas D. Wieck, and Kai-Mei C. Fu. (2016). Giant permanent dipole moment of two-dimensional excitons bound to a single stacking fault. Physical Review B, 94(4). doi:10.1103/physrevb.94.041201. (link)
University of Washington B.S., Physics and Astronomy, summa cum laude, 2015
Stanford University Ph.D., Applied Physics, 2022. Thesis: Combating noise and uncertainty in biophysical models
US mail: Sarah Harvey, Center for Computational Neuroscience, 162 Fifth Avenue, New York, NY 10010
Email: sharvey@flatironinstitute.org
Twitter: @SarahLizHarvey
Bluesky: @sarah-harvey.bsky.social