I'm a postdoctoral researcher in the Curtis lab, working jointly with Wei Ji Ma and Jonathan Winawer. I received my undergraduate degree in Cognitive Science at Rice University in 2010, and my PhD in Computational Neurosciences at the University of California, San Diego with John Serences in 2016.
My research is focused on applying novel analysis techniques to understand how neural systems support representations of information, and how cognitive operations are directed and constrained by these systems.
In January 2019 I'll be starting a lab in the Department of Psychological and Brain Sciences at the beautiful beachside University of California, Santa Barbara. Get in touch with me if you're interested in finding out about available positions for students and postdocs (email address in CV).
Visual cognition: priority maps
How does the visual system select items in the scene for further processing and guidance of behavior? A prominent theoretical framework posits that different scene elements are indexed according to their 'priority' in a series of interacting retinotopic maps. I seek to understand how manipulations of different factors, such as the visual salience and behavioral relevance of scene elements, alter neural activity patterns in visual, parietal, and frontal regions thought to support priority maps. By applying computational neuroimaging methods, we can reconstruct high-fidelity spatial priority maps from neural activity patterns and quantify how those maps change across task demands.
During my PhD with John Serences, I developed analysis techniques to recover large-scale, quantifiable neural representations of simple visual stimulus features, like spatial location on the screen. These techniques, called "inverted encoding models" (IEMs), enable transformation from a participant-specific measurement space (i.e., one measurement for each voxel) to a shared and interpretable 'information space' (i.e., one measurement for each 'location' on the screen).
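The two-stage logic of an IEM can be sketched in a few lines of numpy. This is a minimal illustration on simulated data, not our actual analysis pipeline: the channel basis, noise level, and all parameter choices below are assumptions made for the example.

```python
import numpy as np

# Minimal sketch of an inverted encoding model (IEM) on simulated data.
# All parameters (basis shape, noise level, sizes) are illustrative only.

rng = np.random.default_rng(0)

n_channels = 8    # modeled information channels tiling a circular feature space
n_voxels = 50     # measurement dimensions (e.g., fMRI voxels)
n_trials = 200

# Hypothetical channel basis: raised-cosine-like tuning curves over [0, 2*pi).
# cos(d/2)**6 is 2*pi-periodic, peaks at d = 0, and falls to zero at d = +/-pi.
centers = np.linspace(0, 2 * np.pi, n_channels, endpoint=False)

def channel_responses(feature_vals):
    """Predicted response of each channel to each trial's feature value."""
    d = feature_vals[:, None] - centers[None, :]
    return np.cos(d / 2) ** 6

# Simulate training data: each voxel is a random mixture of channels plus noise.
features_train = rng.uniform(0, 2 * np.pi, n_trials)
C_train = channel_responses(features_train)           # trials x channels
W_true = rng.normal(size=(n_channels, n_voxels))      # channels x voxels
B_train = C_train @ W_true + rng.normal(scale=0.5, size=(n_trials, n_voxels))

# Stage 1 (encoding): estimate channel-to-voxel weights by least squares, B = C @ W.
W_hat = np.linalg.lstsq(C_train, B_train, rcond=None)[0]

# Stage 2 (inversion): map held-out measurements back into channel space.
features_test = np.array([np.pi])                     # one test trial at feature = pi
B_test = channel_responses(features_test) @ W_true    # noiseless test measurement
C_hat = B_test @ np.linalg.pinv(W_hat)                # trials x channels reconstruction

# The reconstructed channel profile should peak near the true feature value.
peak = centers[np.argmax(C_hat[0])]
```

The reconstruction (`C_hat`) is a profile of modeled channel activations whose peak tracks the encoded feature value, which is the sense in which the analysis moves from voxel space into a shared 'information space'.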
At NYU, I am extending these methods both to high-speed neuroimaging datasets (sub-second whole-brain imaging with fMRI) and to probabilistic models of neural activity.
The human visual system contains over a dozen retinotopic maps of visual space. In previous studies, I've found information about the contents of visual spatial working memory in nearly all of these maps. Does each of these regions play a unique role in visual spatial cognition? To answer this question, we disrupt ongoing activity using functionally-guided TMS applied to visual, parietal, and frontal visual field maps while participants perform visual cognitive tasks.
When studying visual working memory representations in our lab, we often apply an analysis technique - called an 'inverted encoding model' (IEM) - that enables us to transform neural activation patterns from measurement space (fMRI activation in each voxel) into a modeled 'information space' (activation for each feature value). When we do this, we make a set of assumptions that precludes inferences about single-neuron response properties. Indeed, it remains impossible to make firm claims about single-neuron response properties from aggregate signals like BOLD fMRI, even when using more conventional analyses, due to the intractable inverse problem. In this commentary, we discuss these issues, as well as a recent report that emphasizes the inability to make single-unit tuning inferences from stimulus reconstructions derived with IEM analyses.
Dissociable neural signatures of visual salience and behavioral relevance across attentional priority maps in human cortex
Priority maps are thought to represent a combination of the image-computable 'visual salience' and task-dependent 'behavioral relevance' of each position on the screen. We tested whether priority maps reconstructed from fMRI activation patterns in visual, parietal, and frontal cortex reflect salience, relevance, or a combination of the two. We found that each region's priority map indexed a distinct combination of salience and relevance, with some regions (visual cortex) encoding both salience and relevance, and others (parietal cortex) encoding only behavioral relevance.
Spatial tuning shifts increase the discriminability and fidelity of population codes in visual cortex
Participants can perform better when cued that one or another stimulus is relevant for behavior. We sought to understand how this ability - called spatial attention - results from changes in neural activity. We measured fMRI responses to visual stimuli presented at dozens of locations on the screen while participants attended either a peripheral target location or the fixation point. The visual stimuli we presented were never relevant for behavior: their purpose was to evoke strong visual fMRI responses that we compared across conditions. We applied a version of voxel receptive field mapping to determine that attention causes neural responses to preferentially represent the attended region of space, as well as a spatial inverted encoding model to reconstruct enhanced region-level representations of the irrelevant stimulus near the attended location. Finally, we evaluated which changes in voxel receptive fields were necessary to produce the observed changes in region-level stimulus reconstructions. These analyses revealed that changes in receptive field position, but not size, were most important for the enhancement in region-level stimulus reconstructions we observed.
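The core of voxel receptive field mapping - fitting a parameterized receptive field model to each voxel's responses across stimulus positions - can be sketched as a simple grid search. This is an illustrative toy version only, assuming a 1D Gaussian receptive field and simulated data (the actual study used 2D screen locations and a more sophisticated fitting procedure).

```python
import numpy as np

# Toy sketch of voxel receptive field (vRF) mapping: fit a Gaussian RF to one
# simulated voxel's responses by grid search. All values here are illustrative.

rng = np.random.default_rng(1)

# Stimulus positions sampled across the screen (1D for simplicity).
stim_x = np.linspace(-10, 10, 40)

def gaussian_rf(x, center, size):
    """Predicted response of a voxel with a Gaussian receptive field."""
    return np.exp(-0.5 * ((x - center) / size) ** 2)

# Simulate one voxel: true RF centered at x = 3.0 with size 2.0, plus noise.
true_center, true_size = 3.0, 2.0
response = gaussian_rf(stim_x, true_center, true_size) \
    + rng.normal(scale=0.02, size=stim_x.size)

# Grid search: choose the (center, size) pair minimizing squared error between
# the voxel's measured responses and the model's predicted responses.
center_grid = np.linspace(-10, 10, 81)
size_grid = np.linspace(0.5, 5.0, 10)
best_params, best_err = None, np.inf
for c in center_grid:
    for s in size_grid:
        err = np.sum((response - gaussian_rf(stim_x, c, s)) ** 2)
        if err < best_err:
            best_params, best_err = (c, s), err

fit_center, fit_size = best_params
```

Repeating this fit per voxel, per attention condition, is what lets one ask whether attention shifts the fitted RF centers toward the attended location, changes their sizes, or both.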
Restoring latent visual working memory representations in human cortex
The more information participants must remember over a brief delay, the worse their performance is when asked about that information. This suggests (and our previous work demonstrated) that neural representations of the remembered information are lower in quality when more information is remembered. But there's a wrinkle: participants can use a cue presented during the delay period, when there is no accessible information on the screen, to improve their performance when asked about the cued item. Where does this information come from? Are participants better at making decisions, or otherwise accessing an intact neural representation? Or does the neural representation itself get stronger after the cue? If so, this implies that previous studies (like our own!) identifying 'degraded' memory representations when more information is remembered may be missing the full picture. Indeed, when participants are cued that one of two items in visual working memory will be queried at the end of a trial, its representation is enhanced relative to a condition with a non-informative cue. Additionally, on trials when participants can better enhance their neural representation, behavioral performance is improved, implying that this process of neural restoration is involved in the task. However, a recent modeling study (Schneegans & Bays, 2017) suggests that our results may be explicable without invoking 'enhancement' of WM representations. Continuing work will further address these questions!