Hello! I am Sarah, a theoretical physics and neuroscience research fellow at the Flatiron Institute, which is part of the Simons Foundation and based in NYC. I am broadly interested in statistical physics, computational neuroscience, and machine learning. In particular, I am fascinated by how biological brains solve complex computational problems far more efficiently than our artificial models.
I perform theoretical and computational research with the Williams Lab, and my work currently centers on understanding and developing meaningful ways to measure when neural systems are similar to each other, a problem sometimes referred to as "representational similarity". In deep networks, this involves answering questions such as "which important aspects of the representations have changed when I fine-tune my network?" or "when should I conclude that two layers in two different deep network architectures play similar functional roles?"
On the neuroscience side, I view understanding representational similarity as a prerequisite for determining when, and whether, we have constructed a "good" model of a biological brain. Furthermore, I don't think we will "understand" biological brains until we understand how they come to be, both developmentally and evolutionarily, which requires understanding how neural systems change. Methods for meaningfully comparing these systems are therefore a crucial step toward this end.
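To make the idea of a representational similarity measure concrete, here is a toy sketch using linear centered kernel alignment (CKA), one standard measure from this literature (not the specific measures studied in my papers; the matrix shapes and random data below are made up for illustration):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices.

    X: (n_stimuli, n_neurons_1), Y: (n_stimuli, n_neurons_2).
    Returns a similarity in [0, 1]; 1 means identical geometry
    up to rotation and isotropic scaling.
    """
    # Center each neuron's response across stimuli.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # CKA = ||X^T Y||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))            # responses of system 1
R, _ = np.linalg.qr(rng.normal(size=(10, 10)))
Y_rot = X @ R                             # system 2: a rotated copy of system 1
print(linear_cka(X, Y_rot))               # ~1: same representational geometry
print(linear_cka(X, rng.normal(size=(500, 10))))   # small for unrelated responses
```

The rotation example highlights the design question at the heart of this work: a measure's invariances (here, to orthogonal transformations) encode a choice about which aspects of a representation count as "the same".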
I am a graduate of the Applied Physics department at Stanford University, where I was a theorist in the Ganguli Lab. In graduate school, I used methods from nonequilibrium statistical physics and large deviation theory to study biological computation, from the microscopic scale of single-receptor computations to macroscopic reinforcement learning. My paper from this time (with Subhaneil Lahiri) studies thermodynamic limits on sensors modeled as continuous-time Markov chains, using stochastic thermodynamics and large deviation theory. We place interesting bounds on sensors of this type by first deriving a thermodynamic uncertainty relation for densities in subsets of Markov chain states.
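To give a flavor of the objects involved: the "density in a subset of states" is the fraction of time a continuous-time Markov chain spends in that subset, and its trajectory-to-trajectory fluctuations are what an uncertainty relation can bound. Below is a minimal Gillespie simulation of a toy 3-state chain with made-up rates (an illustrative sketch only, not the sensor model from the paper):

```python
import numpy as np

# Toy continuous-time Markov chain: Q[i, j] is the rate of jumping from
# state i to state j (rates are made up for illustration).
Q = np.array([
    [0.0, 2.0, 0.5],
    [0.3, 0.0, 1.5],
    [1.0, 0.4, 0.0],
])

def empirical_density(Q, subset, T, seed=0):
    """Gillespie-simulate the chain for total time T and return the
    fraction of time spent in `subset` of states."""
    rng = np.random.default_rng(seed)
    state, t, time_in_subset = 0, 0.0, 0.0
    while t < T:
        rates = Q[state]
        total = rates.sum()
        dwell = rng.exponential(1.0 / total)   # exponential holding time
        dwell = min(dwell, T - t)              # truncate at the horizon
        if state in subset:
            time_in_subset += dwell
        t += dwell
        # Jump to the next state with probability proportional to its rate.
        state = rng.choice(len(rates), p=rates / total)
    return time_in_subset / T

# Fraction of a long trajectory spent in states {1, 2}; for an ergodic
# chain this converges to the stationary probability of the subset.
print(empirical_density(Q, subset={1, 2}, T=10_000.0))
```

For short observation times the empirical density fluctuates around its stationary value, and bounding those fluctuations in terms of dissipation is the kind of statement the uncertainty relation makes precise.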
Before Stanford, I graduated from the wonderful University of Washington in Seattle with Bachelor's degrees in physics and astronomy. Sadly, I haven't used my astronomy degree much in my graduate research career. However, I think many fields of science, and AI research in particular, could benefit from the long-timescale thinking common among astronomers.
Links to my papers and preprints are listed below. The preprints and published versions mostly have very similar or identical content.
Sarah E. Harvey, David Lipshutz, Alex H. Williams. (2024). What Representational Similarity Measures Imply about Decodable Information. UniReps 2024. Preprint: https://arxiv.org/abs/2411.08197. Code: https://github.com/SarahHarvey/decoding-similarity
Jenelle Feather, David Lipshutz, Sarah E. Harvey, Alex H. Williams, Eero P. Simoncelli. (2024). Discriminating image representations with principal distortions. ICLR 2025. Preprint: https://arxiv.org/abs/2410.15433.
Sarah E. Harvey, Brett W. Larsen, Alex H. Williams. (2024). Duality of Bures and Shape Distances with Implications for Comparing Neural Representations. UniReps 2023. Preprint: https://arxiv.org/abs/2311.11436.
Dean A. Pospisil, Brett W. Larsen, Sarah E. Harvey, Alex H. Williams. (2023). Estimating Shape Distances on Neural Representations with Limited Samples. ICLR 2024. Preprint: https://arxiv.org/abs/2310.05742.
Sarah E. Harvey, Subhaneil Lahiri, Surya Ganguli. (2022). Universal energy-accuracy tradeoffs in nonequilibrium cellular sensing. Physical Review E (link). Preprint: https://arxiv.org/abs/2002.10567.
Christopher H. Stock, Sarah E. Harvey, Samuel A. Ocko, Surya Ganguli. (2021). Synaptic balancing: a biologically plausible local learning rule that provably increases neural network noise robustness without sacrificing task performance. PLOS Computational Biology (link). Preprint: https://arxiv.org/abs/2107.08530.
Todd Karin, Xiayu Linpeng, M. M. Glazov, M. V. Durnev, E. L. Ivchenko, Sarah Harvey, Ashish K. Rai, Arne Ludwig, Andreas D. Wieck, and Kai-Mei C. Fu. (2016). Giant permanent dipole moment of two-dimensional excitons bound to a single stacking fault. Physical Review B, 94(4). doi:10.1103/physrevb.94.041201. (link)
University of Washington B.S., Physics and Astronomy, summa cum laude, 2015
Stanford University Ph.D., Applied Physics, 2022. Thesis: Combating noise and uncertainty in biophysical models
US mail:
Sarah Harvey
Center for Computational Neuroscience
162 Fifth Avenue
New York, NY 10010

Email: sharvey@flatironinstitute.org
Bluesky: @sarah-harvey.bsky.social