Catriona Scrivener

@catrionascriv

Followers: 247 · Following: 616 · Statuses: 214

Postdoc at the University of Edinburgh in visual attention, imagery, and multimodal neuroimaging

Edinburgh, Scotland
Joined February 2017
@catrionascriv
Catriona Scrivener
1 month
RT @Runhao_Lu: 🚨New paper @brainstimj !⚡Using TMS-EEG & MVPA in a selective attention task, we show parietal alpha rh-TMS (entrainment) specifi…
Replies: 0 · Retweets: 11 · Likes: 0
@catrionascriv
Catriona Scrivener
3 months
@catrionascrivener.bsky.social
Replies: 0 · Retweets: 0 · Likes: 0
@catrionascriv
Catriona Scrivener
4 months
RT @mlueckel: Where to place an ultrasound transducer to optimally target a specific brain region in a given individual? We developed 𝗣𝗹𝗮𝗻…
Replies: 0 · Retweets: 30 · Likes: 0
@catrionascriv
Catriona Scrivener
5 months
RT @ArranReader: Happy to share a new preprint! Me and my excellent RA co-authors provide insight into how humans use a single hand to cumu…
Replies: 0 · Retweets: 7 · Likes: 0
@catrionascriv
Catriona Scrivener
5 months
RT @lauracrucianel1: 🌟Exciting PhD opportunity🌟 If you are interested in touch, interoception, mental health, please apply by the 15th of…
Replies: 0 · Retweets: 51 · Likes: 0
@catrionascriv
Catriona Scrivener
5 months
RT @num_ole: Why is motion correction a thing for fMRI but nobody quantifies it during #TMS? Well, we just put out a metric to easily quan…
Replies: 0 · Retweets: 21 · Likes: 0
@catrionascriv
Catriona Scrivener
6 months
Although perhaps not surprising, we recommend examining the spatial overlap between category-selective and retinotopic ROIs, and taking this overlap into account when drawing inferences about their responses, especially if image properties vary across space. 12/n
Replies: 0 · Retweets: 0 · Likes: 0
@catrionascriv
Catriona Scrivener
6 months
Using a representational similarity approach, we found numerically different model correlations across ROIs. However, these differences could not explain the pattern of responses we observed between ROIs. 7/n
[image]
Replies: 1 · Retweets: 0 · Likes: 0
@catrionascriv
Catriona Scrivener
6 months
How can we explain this structure? We first tested 7 candidate models: 3 scene dimensions (Content, Expanse, Distance), 2 modelling low-level visual features (GIST & LGN), 1 modelling high-level visual features (convolutional neural network), and navigational affordances. 6/n
Replies: 1 · Retweets: 0 · Likes: 0
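As a rough illustration of the representational similarity analysis described in 6/n and 7/n: build a neural RDM from an ROI's scene-wise response patterns and correlate it with candidate model RDMs. This is a minimal sketch with synthetic data and stand-in Content/GIST models, not the authors' actual pipeline.

```python
# Minimal RSA sketch with synthetic data, assuming one response
# pattern per scene (96 scenes x n_voxels) for a single ROI.
# All names, shapes, and models here are illustrative.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_scenes, n_voxels = 96, 200

# Hypothetical neural data: scenes x voxels for one ROI.
roi_patterns = rng.standard_normal((n_scenes, n_voxels))

# Neural RDM: 1 - Pearson correlation between scene patterns,
# vectorised to the upper triangle by pdist.
neural_rdm = pdist(roi_patterns, metric="correlation")

# Hypothetical model RDMs, e.g. a binary Content model and a
# low-level feature model standing in for GIST descriptors.
content = rng.integers(0, 2, n_scenes)           # invented scene labels
content_rdm = pdist(content[:, None], metric="hamming")
gist_features = rng.standard_normal((n_scenes, 512))
gist_rdm = pdist(gist_features, metric="correlation")

# Compare each model RDM with the neural RDM (Spearman is common
# because RDM distances are not assumed to be interval-scaled).
for name, model_rdm in [("Content", content_rdm), ("GIST", gist_rdm)]:
    rho, _ = spearmanr(neural_rdm, model_rdm)
    print(f"{name} model vs neural RDM: rho = {rho:.3f}")
```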
@catrionascriv
Catriona Scrivener
6 months
Next, we assessed the activity within each field map while participants viewed 96 different scenes during multi-echo fMRI. The pattern of responses across OPA divisions grouped into 3 possible clusters: LO1 & LO2, V3A & V3B, and V7. 5/n
[image]
Replies: 1 · Retweets: 0 · Likes: 0
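One way the grouping of ROI response profiles described in 5/n could be computed is hierarchical clustering on correlation distances between profiles. A hedged sketch with invented data follows; the 3-cluster cut mirrors the groupings reported in the tweet.

```python
# Illustrative clustering of ROI response profiles; synthetic data.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
rois = ["LO1", "LO2", "V3A", "V3B", "V7"]
n_scenes = 96

# Hypothetical mean response per ROI to each of the 96 scenes.
profiles = rng.standard_normal((len(rois), n_scenes))

# Correlation distance between ROI response profiles, then
# average-linkage hierarchical clustering.
dist = pdist(profiles, metric="correlation")
tree = linkage(dist, method="average")

# Cut the tree into 3 clusters (the thread reports 3 groupings:
# LO1 & LO2, V3A & V3B, and V7).
labels = fcluster(tree, t=3, criterion="maxclust")
for roi, lab in zip(rois, labels):
    print(f"{roi}: cluster {lab}")
```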
@catrionascriv
Catriona Scrivener
6 months
Next, we computed the visual field coverage for these maps using population receptive field modelling. All ROIs had a contralateral bias: LO1 & LO2 represented the lower visual field, V7 the upper visual field, and V3A & V3B a full hemifield. 4/n
[image]
Replies: 1 · Retweets: 0 · Likes: 0
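For readers unfamiliar with visual field coverage from pRF fits (4/n): a common summary evaluates each voxel's fitted 2D Gaussian pRF on a visual field grid and takes the maximum across voxels. The sketch below uses invented pRF parameters, not values from the paper.

```python
# Illustrative visual field coverage from pRF parameters
# (x, y centre and size sigma per voxel); all values invented.
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 50

# Hypothetical pRF fits for one ROI, in degrees of visual angle.
x0 = rng.uniform(-8, 8, n_voxels)
y0 = rng.uniform(-8, 0, n_voxels)      # e.g. a lower-field bias
sigma = rng.uniform(0.5, 3.0, n_voxels)

# Evaluate each voxel's 2D Gaussian pRF on a grid and take the
# max across voxels, a common way to summarise coverage.
grid = np.linspace(-10, 10, 101)
gx, gy = np.meshgrid(grid, grid)
coverage = np.zeros_like(gx)
for x, y, s in zip(x0, y0, sigma):
    prf = np.exp(-((gx - x) ** 2 + (gy - y) ** 2) / (2 * s ** 2))
    coverage = np.maximum(coverage, prf)

# Simple summary: compare coverage in the upper vs lower hemifield.
upper = coverage[gy > 0].mean()
lower = coverage[gy < 0].mean()
print(f"mean coverage: upper={upper:.2f}, lower={lower:.2f}")
```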
@catrionascriv
Catriona Scrivener
6 months
First, we calculated the overlap between the OPA and its visual field maps. Roughly 48% of the OPA overlapped five separate maps, whereas the other 52% overlapped cortex with no known visual field maps (OPA other). 3/n
[image]
Replies: 1 · Retweets: 0 · Likes: 0
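The overlap computation in 3/n (and the recommendation in 12/n) boils down to asking what fraction of one ROI mask falls inside others. A minimal sketch with random stand-in masks, not real surface data:

```python
# Fraction of an OPA mask overlapping known visual field maps
# vs falling outside all of them ("OPA other"). Masks are random
# stand-ins on a hypothetical cortical surface.
import numpy as np

rng = np.random.default_rng(3)
n_vertices = 10000  # e.g. vertices on one hemisphere's surface

# Hypothetical boolean masks over the same surface.
opa = rng.random(n_vertices) < 0.05
field_maps = {name: rng.random(n_vertices) < 0.03
              for name in ["LO1", "LO2", "V3A", "V3B", "V7"]}

# Union of all retinotopic maps, then the fraction of OPA
# vertices inside any map vs outside all of them.
in_any_map = np.zeros(n_vertices, dtype=bool)
for mask in field_maps.values():
    in_any_map |= mask

overlap_frac = (opa & in_any_map).sum() / opa.sum()
print(f"OPA overlapping field maps: {overlap_frac:.1%}")
print(f"OPA other: {1 - overlap_frac:.1%}")
```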
@catrionascriv
Catriona Scrivener
6 months
Most fMRI research on the occipital place area (OPA) is based on the response across all voxels, averaging over the visual field maps that subdivide it (LO1, LO2, V3A, V3B & V7). Here we investigated the specific role played by the OPA's visual field maps during scene processing. 2/n
Replies: 1 · Retweets: 0 · Likes: 0
@catrionascriv
Catriona Scrivener
6 months
RT @Runhao_Lu: Happy to share our new results on neural basis of domain-general cognition!🧠 We've known the frontoparietal multiple-demand…
Replies: 0 · Retweets: 9 · Likes: 0
@catrionascriv
Catriona Scrivener
6 months
Looking forward to presenting our concurrent TMS-fMRI work on the role of the IPS in prioritising task-relevant visual information @AlexWoolgar @Runhao_Lu @Jade_BJackson
@Brainbox_Init
Brainbox Initiative
6 months
With a talk on the role of the right intraparietal sulcus in driving brain-wide focus, Dr Catriona Scrivener will be joining us from @UniOfEdinburgh this September in London. Join us: #BBI2024 #Neuroscience #NIBS
[image]
Replies: 0 · Retweets: 4 · Likes: 12