Bria Long, PhD

@brialong

Followers: 1,565
Following: 980
Media: 30
Statuses: 524

Interested in how we learn to derive visual meaning. @NIH K99 postdoc, Fall 2024 Asst. Prof. @UCSDPsychology. Mom x2. She/her.

Stanford, CA
Joined July 2009
Pinned Tweet
@brialong
Bria Long, PhD
1 year
I’m thrilled to share that I’ll be joining @UCSDPsychology as an Assistant Professor starting in July 2024! 🎉✨ This job is truly a dream come true. I feel beyond lucky to join such a wonderful department and to have the chance to build my own lab. (1/3)
61
29
492
@brialong
Bria Long, PhD
6 years
Interested in how drawings of object categories change throughout childhood? Come by Friday morning at 11:50am at #CogSci2018 to the early cognition session (Room MNQR)! @judyefan @mcxfrank paper:
Tweet media one
4
79
264
@brialong
Bria Long, PhD
1 year
New paper out today 🎉in Behavioral Research Methods: “The BabyView camera: Designing a new head‑mounted camera to capture children’s early social and visual environments.” Example video clip below! And a short 🧵(1/11)
8
55
222
@brialong
Bria Long, PhD
5 years
How might changes in children’s drawings reflect changes in their visual concepts? For our #CogSci2019 paper (with @judyefan @mcxfrank , ) we collected a large-scale database of children’s drawings (~13K) and analyzed changes in their visual features. (1/6)
Tweet media one
3
58
174
@brialong
Bria Long, PhD
7 months
📣 Excited to share that our paper, “Parallel developmental changes in children’s production and recognition of line drawings of visual concepts” has been published in Nature Communications! 🎉
1
41
168
@brialong
Bria Long, PhD
6 months
📣I'm hiring a lab coordinator! This position is ideal for someone looking for research experience before applying to graduate school. Deadline is April 1st—flexible summer start date. Details and instructions below!
4
86
140
@brialong
Bria Long, PhD
1 year
📣 🎉Excited to announce that I'm recruiting graduate students and a postdoc to launch the Visual Learning Lab @UCSDPsychology in Fall 2024! More information at —feel free to reach out with questions!
1
55
127
@brialong
Bria Long, PhD
3 years
Excited to announce our new preprint, “Parallel developmental changes in children’s drawing and recognition of visual concepts” with Judy Fan ( @judyefan ), Zixian Chai, and Michael C. Frank ( @mcxfrank ): (1/13)
Tweet media one
1
21
127
@brialong
Bria Long, PhD
1 year
So excited to join UCSD at the same time 🎉🙌!
@m_zettersten
Martin Zettersten
1 year
I'm thrilled to announce that next summer, I'll be starting as an Assistant Professor in the Department of Cognitive Science at @UCSanDiego . I feel so fortunate to be joining such a wonderful department and research community. It's a dream I never dared to dream come true. (1/5)
Tweet media one
88
29
572
2
1
103
@brialong
Bria Long, PhD
6 years
Very excited to share that this paper with @talia_konkle , "Mid-level visual features underlie the high-level categorical organization of the ventral stream" is finally out! ($ wall).
5
26
85
@brialong
Bria Long, PhD
3 years
Excited to announce a new paper: “Automated detections reveal the social information in the changing infant view” with Alessandro Sanchez, Allison M. Kraus, @_ketan0 and @mcxfrank at Child Development, ($) (1/15) 🧵
2
25
76
@brialong
Bria Long, PhD
7 months
Finally, if you find this kind of work interesting, I’m currently hiring! I'm recruiting both a postdoc and a lab manager (more at ) to launch the Visual Learning Lab at @UCSDPsychology —please feel free to reach out (or come say hi at CDS)!
0
28
61
@brialong
Bria Long, PhD
5 years
Excited to share that a new article with Mariko Moher, Susan Carey, and @talia_konkle is out at JEP:HPP ($wall, copy at ). We found evidence that preschoolers automatically process the real-world sizes of *depicted* objects (1/3)
Tweet media one
2
12
54
@brialong
Bria Long, PhD
6 months
Absolutely thrilled to announce that Jane Yang @jane_yt_yang will be an inaugural grad student in the Visual Learning Lab at @UCSDPsychology ()! I couldn't be more excited to have her join the lab and to get to do science together! Welcome Jane!!! 🎉🎉🎉
0
1
50
@brialong
Bria Long, PhD
4 years
Excited to speak tomorrow at the @svrhm2020 NeurIPS workshop (Dec 12th, 3pm PST/6pm EST) on my work with @judyefan , Zixian Chai, & @mcxfrank : "Parallel developmental changes in children's drawing and recognition of visual concepts." (1/8)
2
9
43
@brialong
Bria Long, PhD
1 year
For now, if you’re struggling through the academic job market, I see you. This has been a long road, especially as an #academicmama of two. Deeply grateful for the support of my family, friends, and mentors @mcxfrank @talia_konkle @grez72 🙏 I wouldn’t be here without you! (3/3)
2
0
37
@brialong
Bria Long, PhD
4 years
Wonderful summary by @natvelali of my recent work!
@natvelali
Natalia Vélez (natvelali.bsky.social @ 🦋)
4 years
First Zoom sketchnotes! Fabulous talk by @brialong on what social cues are available within the infant’s view. Such a cool model and dataset. Turns out babies spend quite a lot of time looking at hands!
Tweet media one
1
8
74
1
2
29
@brialong
Bria Long, PhD
1 year
Congratulations, Talia! @talia_konkle 🎉🎉🎉 You're an incredible scientist and a wonderful mentor. This award is so well-deserved!
@LanguageMIT
Ted Gibson, Language Lab MIT
1 year
Talia Konkle is the recipient of the Inaugural Lila R. Gleitman Prize (2023). Way to go Talia!!
Tweet media one
3
7
104
1
2
26
@brialong
Bria Long, PhD
5 years
So grateful for our partnership with @PurpleMuseum ! Check out their blog post on our work with @judyefan @mcxfrank on children's drawings. And if you're in the San Jose area, you can visit our drawing station and their wonderful children's museum!
0
5
22
@brialong
Bria Long, PhD
1 year
My lab will examine how children “learn to see” using primarily empirical and computational methods (examples of recent work at ). I’ll be hiring at all levels in the next year to start the lab with me in 2024—more to come! (2/3)
1
1
19
@brialong
Bria Long, PhD
3 years
What a creative use of texforms! So fun to see these stimuli used to ask new and exciting experimental questions
@MCohanpour
Michael Cohanpour
3 years
If you didn't have a chance to see our poster at CNS, here it is (along with a video walk through: ). If you want to talk about (perceptual) curiosity and how it might affect memory, let me know!
Tweet media one
3
7
33
1
2
16
@brialong
Bria Long, PhD
1 year
So excited that my future department is hiring (I'm starting July 2024)!!
@CarenMWalker
Caren Walker
1 year
We are hiring! Come work with me 🙂 UC San Diego Psychology will be hiring two Assistant Professors this year in: 1. Social psychology. 2. Computational approaches to behavior and/or its neural mechanisms. Apply by November 7th
0
2
8
0
0
13
@brialong
Bria Long, PhD
7 years
Excited to post my first fMRI paper up on #biorxiv_neursci ! @talia_konkle
0
3
9
@brialong
Bria Long, PhD
5 years
As children got older, these category classifications became more accurate. And these gains in classification were not explainable by children’s ability to trace simple shapes or low-level effort covariates (amount of time spent drawing, number of strokes, ‘ink’ used). (4/6)
Tweet media one
1
0
10
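The covariate check described in the tweet above (does age still predict whether a drawing can be classified correctly once tracing ability and effort measures are controlled?) can be illustrated with a simple covariate-adjusted regression. This is a minimal sketch only, assuming a hypothetical per-drawing CSV; the file name and column names are illustrative and are not the published analysis.

```python
# Rough sketch: does age still predict whether a drawing was classified
# correctly once tracing ability and effort covariates are included?
# The CSV and its columns (classified_correctly as 0/1, age, tracing_score,
# draw_time, n_strokes, ink_used) are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

drawings = pd.read_csv("drawing_level_data.csv")

model = smf.logit(
    "classified_correctly ~ age + tracing_score + draw_time + n_strokes + ink_used",
    data=drawings,
).fit()
print(model.summary())  # the age coefficient indexes gains beyond the covariates
```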
@brialong
Bria Long, PhD
6 years
Follow @natvelali for lovely summaries of #cogsci2018 talks!
@natvelali
Natalia Vélez (natvelali.bsky.social @ 🦋)
6 years
Starting #CogSci2018 with @celestekidd ’s talk on the role of information and learning in exploration!
Tweet media one
1
11
55
0
4
9
@brialong
Bria Long, PhD
1 year
With an all-star team: Sarah Goodin, @kachergis , @V_Marchman , @SamaFRadwan , @rbzsparks , Violet Xiang, @ChengxuZhuang , Oliver Hsu, Brett Newman, @dyamins , and @mcxfrank . Working with @daylightsf & generously funded by @SchmidtFutures . (2/11)
1
2
8
@brialong
Bria Long, PhD
6 years
Thanks for the great summary and the sketch @natvelali !
@natvelali
Natalia Vélez (natvelali.bsky.social @ 🦋)
6 years
@brialong ’s #CogSci2018 talk on how children’s drawings change through childhood: children’s drawings become more recognizable with age because they better capture relevant visual distinctions. Very cool work!
Tweet media one
0
5
19
0
1
9
@brialong
Bria Long, PhD
1 year
Many more details are in the paper and linked on the website: . We are currently collecting data and plan to share videos through @databrary ! Feel free to reach out with any questions! (11/11)
1
1
9
@brialong
Bria Long, PhD
4 years
Taken together, these results suggest visual production and recognition of object categories improve throughout middle childhood. Preprint coming soon! (7/8)
1
0
7
@brialong
Bria Long, PhD
2 years
Looking forward to presenting and discussing at this seminar on Monday (7/11)—join us (it's free)!
@judyefan
Judy Fan
2 years
On Mon 7/11 at 10AM PT: Our Dev Sci seminar featuring talks by @Logan_Fiorella ("Motivation and metacognition in learning by drawing") & @brialong ("Parallel developmental changes in children's production and recognition of drawings"), w/ broader discussion led by @Moira_Dillon !
Tweet media one
1
0
3
1
3
7
@brialong
Bria Long, PhD
5 years
Children were simply asked to touch the picture that was smaller *on the screen* on an iPad (same task/stim as in Konkle & Oliva, 2012). But children were better at the task when the objects’ relative sizes on the screen were congruent with their relative sizes in the real world. (2/3)
Tweet media one
2
0
7
@brialong
Bria Long, PhD
6 years
Stimuli (Fig 1) for those interested!
Tweet media one
0
4
7
@brialong
Bria Long, PhD
5 years
@mariam_s_aly Walking around (often with headphones on so others aren't confused who I'm talking to :)
2
0
7
@brialong
Bria Long, PhD
6 years
Thought-provoking article by Rufin VanRullen -- Perception Science in the Age of Deep Neural Networks.
0
5
7
@brialong
Bria Long, PhD
6 years
Talia’s an amazing mentor—apply! I’ll be at #ccn2018 as well and am happy to chat!
@talia_konkle
talia konkle
6 years
I’ll be at #CCN18 soon! Get in touch if you’re looking for a post doc on topics in high level visual representation using neuroimaging and modeling approaches.
0
7
17
0
1
7
@brialong
Bria Long, PhD
5 years
Furthermore, this effect didn't vary according to how well children could identify the depicted objects—instead, adults & kids showed similar item effects. While a first step, these data hint that similar mechanisms underlie size processing in adults & preschoolers! (3/3)
1
0
7
@brialong
Bria Long, PhD
4 years
Do we continue to learn about which features best distinguish between categories (e.g. ‘dogs’ vs. ‘rabbits’) across childhood? We asked if children improve at (1) emphasizing these features when drawing and (2) using these features when recognizing drawings. (2/8)
1
0
6
@brialong
Bria Long, PhD
6 years
@melissaekline Thanks for talking about this openly! I also actively think about alternative career paths and encourage others to do the same. I’d love to land a faculty job, but the reality is that there *really* aren’t enough jobs for the number of qualified postdocs!
0
0
6
@brialong
Bria Long, PhD
2 years
@melissaekline @jdyeatman @mcxfrank @AnyaWMa Thanks Melissa! This task is part of the GARDEN project so that is the plan — but we could always get it up and running earlier!
2
0
6
@brialong
Bria Long, PhD
2 years
Come join us!!
@mcxfrank
Michael C. Frank
2 years
My lab, the Stanford Language and Cognition Lab (), is looking for a new Research Coordinator to start this summer. Lots of ways to get involved in research and learn new skills/methods in a fun, supportive environment!
Tweet media one
Tweet media two
2
95
125
0
1
6
@brialong
Bria Long, PhD
4 years
We installed a free-standing station at the Children’s Discovery Museum in San Jose ( @PurpleMuseum ), where we collected over 37,000 drawings from children 2-10 years of age, and also hosted a "guessing game" where children recognized each other's drawings. (3/8)
Tweet media one
1
0
6
@brialong
Bria Long, PhD
4 years
We found that older children were better able to recognize all kinds of drawings. But older children were also differentially sensitive to the presence of diagnostic visual features in these drawings. (6/8)
1
0
5
@brialong
Bria Long, PhD
3 years
Overall, we think these results call for systematic exploration of how different kinds of experience change visual production and recognition. Our project repository is and we’ll be publicly releasing the full dataset soon! Feedback welcome! (13/13)
1
0
5
@brialong
Bria Long, PhD
4 years
@RaBhui I'm in the same boat :)
Tweet media one
1
0
5
@brialong
Bria Long, PhD
3 years
Highly recommend working with these two fabulous mentors and scientists 🙌
@grez72
George Alvarez
3 years
I'm hiring a postdoc in the Vision Sciences Lab at Harvard, run jointly with @talia_konkle . If you're interested in using deep neural network models to understand human visual processing & human intuitive physical reasoning, please apply here
0
46
92
0
0
5
@brialong
Bria Long, PhD
5 years
@mcxfrank @SbllDtrch @DermotLynott @cogsci_soc Fairly sure SRCD didn't offer it this year, but VSS (vision sciences society) is offering it this May for the first time. I'm happy to help coordinate.
1
0
4
@brialong
Bria Long, PhD
3 years
Older children’s recognizable drawings also contained more diagnostic features than did younger children’s (i.e. higher classifier evidence). Thus, older children increasingly emphasized the relevant distinctions between categories in their drawings. (7/13)
Tweet media one
1
1
5
@brialong
Bria Long, PhD
6 years
@ceptional I think it's definitely something our community should focus on and that your talk/poster could be a great way to get conversations going!
1
0
5
@brialong
Bria Long, PhD
3 years
Do children’s visual concepts develop early, or do children continue to learn what features distinguish categories (e.g. dogs vs. rabbits)? We asked if children improve at 1) emphasizing distinctive features when drawing & 2) using these features when recognizing drawings (2/13)
Tweet media one
1
1
5
@brialong
Bria Long, PhD
1 year
Of course, these high-resolution images are still challenging for machine-learning models—but they’re dramatically better than the lower-resolution images from prior cameras. Here are some images with off-the-shelf Mask R-CNN segmentations superimposed. (8/11)
Tweet media one
1
1
3
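For readers curious what running an off-the-shelf Mask R-CNN over such frames involves, here is a minimal sketch using torchvision's pretrained model. The frame filename and the 0.7 confidence threshold are assumptions, not details from the paper or its processing scripts.

```python
# Minimal sketch: run a pretrained Mask R-CNN on a single head-camera frame
# and keep confident detections. The image path and 0.7 score threshold are
# illustrative assumptions.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.detection import (
    maskrcnn_resnet50_fpn,
    MaskRCNN_ResNet50_FPN_Weights,
)

weights = MaskRCNN_ResNet50_FPN_Weights.DEFAULT
model = maskrcnn_resnet50_fpn(weights=weights).eval()

img = Image.open("frame_0001.jpg").convert("RGB")
x = transforms.ToTensor()(img)

with torch.no_grad():
    output = model([x])[0]  # dict with boxes, labels, scores, masks

keep = output["scores"] > 0.7
categories = weights.meta["categories"]
for label, score in zip(output["labels"][keep], output["scores"][keep]):
    print(categories[label.item()], round(score.item(), 2))
# output["masks"][keep] holds per-instance masks that can be overlaid on the frame.
```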
@brialong
Bria Long, PhD
1 year
The BabyView captures a large, portrait-oriented field of view (an effective view angle of 100° vertical by 75° horizontal) that encompasses children’s interactions with both objects and their social partners. (7/11)
Tweet media one
1
0
4
@brialong
Bria Long, PhD
2 years
To do so, we collected drawings of 12 familiar objects from 4–9-year-olds (N=253) in San Jose and Beijing; each child completed 6 picture-cued drawings, 6 verbal-cued drawings, and 2 shape tracings. Then, we asked adults to guess the category depicted in each drawing. (3/9)
Tweet media one
1
1
4
@brialong
Bria Long, PhD
1 year
Thoughtful thread 🧵!
@mcxfrank
Michael C. Frank
1 year
How do we compare the scale of language learning input for large language models vs. humans? I've been trying to come to grips with recent progress in AI. Let me explain these two illustrations I made to help. 🧵
Tweet media one
Tweet media two
58
386
2K
0
0
3
@brialong
Bria Long, PhD
2 years
@mvazirip @UDelaware Yes!! Congratulations Maryam ❤️🎉 they are lucky to have you!
0
0
4
@brialong
Bria Long, PhD
1 year
Head-mounted cameras provide a rich and comprehensive view of what infants and young children see during their everyday experiences. But variation between these devices in both device resolution and field-of-view has limited the field’s ability to compare results. (3/11)
1
0
4
@brialong
Bria Long, PhD
3 years
These results suggest visual production and recognition of object categories improve throughout middle childhood, pointing towards refinements in children's ability to connect internal and external representations of visual concepts. What could be driving these changes? (11/13)
1
0
4
@brialong
Bria Long, PhD
1 year
Further, the video data captured by all cameras to date has been relatively low-resolution. This limits how well machine learning algorithms can operate over these rich video data. (4/11)
1
0
4
@brialong
Bria Long, PhD
4 years
Come to SVRHM tomorrow to learn more! And do check out the exciting lineup of other talks & posters (). (8/8)
1
0
4
@brialong
Bria Long, PhD
5 years
In this paper, we extracted the high-level visual features for each drawing using a CNN trained to recognize objects in photographs (VGG-19), and asked how well these features could be used to estimate the category that children intended to draw. (3/6)
1
0
4
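The pipeline described in the tweet above — extracting high-level features from each drawing with an object-recognition CNN (VGG-19) and then estimating the intended category from those features — can be sketched roughly as follows. This is a minimal illustration only, assuming drawings saved in per-category folders; the paths, layer choice, and classifier settings are assumptions, not the authors' published analysis code.

```python
# Rough sketch: extract VGG-19 penultimate-layer features for drawings and
# ask how well they predict the category a child intended to draw.
import torch
from torchvision import models, transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumes drawings are organized as drawings/<category>/<image>.png
dataset = ImageFolder("drawings", transform=preprocess)
loader = DataLoader(dataset, batch_size=64, shuffle=False)

vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).eval()
# Drop the final classification layer to get 4096-d penultimate features.
vgg.classifier = torch.nn.Sequential(*list(vgg.classifier.children())[:-1])

features, labels = [], []
with torch.no_grad():
    for images, ys in loader:
        features.append(vgg(images))
        labels.append(ys)
X = torch.cat(features).numpy()
y = torch.cat(labels).numpy()

# Cross-validated logistic regression over CNN features.
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())
```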
@brialong
Bria Long, PhD
1 year
Finally, if a new lab hopes to adopt the head-camera method for a particular developmental population, there is no single device and procedure that is widely available and easy to adopt—thus, labs must constantly “reinvent the wheel.” (5/11)
1
0
4
@brialong
Bria Long, PhD
3 years
We were inspired by the work of Caitlin Fausey @UOLearningLab revealing changes in how infants see faces and hands across early development—and by @JohnFranchak ’s work documenting how both infant and caregiver posture jointly shape what’s in view for infants. (3/15)
1
0
4
@brialong
Bria Long, PhD
1 year
Here, we present our design process for the BabyView Camera, which collects high-resolution video, accelerometer, and gyroscope data from children approximately 6–30 months of age via a GoPro camera custom mounted on a soft child-safety helmet. (6/11)
Tweet media one
2
0
4
@brialong
Bria Long, PhD
4 years
We extracted feature representations for all drawings using a deep CNN optimized for object recognition, and then trained a classifier using these features to predict the category children were asked to draw for iterative subsets of the data. (4/8)
1
0
4
@brialong
Bria Long, PhD
6 years
@MGreenePhD @IrisVanRooij @mcxfrank Thanks Michelle! and here is a link to the relevant cogsci paper if you’re interested!
0
0
4
@brialong
Bria Long, PhD
6 years
@timothyfbrady @catherineols @talia_konkle Yep unfortunately I can’t update the preprint but you can find a version on my website at
0
0
4
@brialong
Bria Long, PhD
3 years
We found that drawings became increasingly recognizable across childhood, and neither measures of effort (e.g., time spent drawing) nor estimates of children’s tracing ability explained away this trend. (6/13)
Tweet media one
2
0
4
@brialong
Bria Long, PhD
1 year
@databrary whoops, tagging co-author Violet Xiang @ZiyuX !🙌
0
0
4
@brialong
Bria Long, PhD
7 months
It’s also a testament to our wonderful partnership with the Children’s Discovery Museum of San Jose ( @PurpleMuseum ), where we installed a free-standing drawing station that you can still visit today! Over two years, we collected >37,000 drawings made by 2- to 10-year-old children.
2
0
3
@brialong
Bria Long, PhD
3 years
We installed a free-standing drawing station at the Children’s Discovery Museum in San Jose ( @PurpleMuseum ), where over two years we collected >37,000 drawings from 48 categories, drawn by children 2-10 years of age. (3/13)
Tweet media one
1
0
3
@brialong
Bria Long, PhD
3 years
All of the data and analysis code for the study are available here: . Feel free to reach out if you have questions of any kind! (15/15)
0
0
3
@brialong
Bria Long, PhD
3 years
Children’s unrecognizable drawings still contained rich information—both animacy and object size were decodable in drawings made by children of all ages (consistent with my work with @talia_konkle suggesting animacy/size can be inferred without basic-level identification). (9/13)
1
0
3
@brialong
Bria Long, PhD
6 years
Really enjoyed this #cogsci2018 talk by @joshuacpeterson and @suchow. Paper:
0
0
3
@brialong
Bria Long, PhD
6 years
@Mark_A_Thornton @PNASNews @talia_konkle Thanks Mark! While people can sometimes tell their animacy/size in the real world, texforms are unrecognizable at the basic level.
Tweet media one
0
0
3
@brialong
Bria Long, PhD
1 year
We also present all of our protocols for interacting with families (including detailed instructions and customizable PDFs) as well as for data management and video pre-processing. And we will continue to update these documents and scripts as we make any changes. (9/11)
1
0
3
@brialong
Bria Long, PhD
3 years
Infants’ posture and orientation to their caregiver changed dramatically across this age range—many 8-month-olds spent time sitting with their caregivers supporting their position, while older infants were often standing and exploring the playroom on their own. (8/15)
Tweet media one
1
0
3
@brialong
Bria Long, PhD
1 year
We expect that our exact design should remain usable for years to come. However, our modular design allows for components to be easily updated — and we commit to updating our schematics and plans if certain components change or become unavailable. (10/11)
1
0
3
@brialong
Bria Long, PhD
4 years
Older children's drawings were increasingly recognizable, and children’s tracing performance didn’t explain away this trend. Further, older children’s recognizable drawings contained more diagnostic features than did younger children’s (i.e. higher classifier evidence) (5/8)
1
0
3
@brialong
Bria Long, PhD
5 years
@JhendersonIMB But I think that you need both (1) carefully controlled stimuli and (2) alternative models to start making meaningful claims (2/2)
1
0
3
@brialong
Bria Long, PhD
2 years
@AaronHertzmann @hardmaru Hi! We find that children not only get better at including diagnostic features in their drawings but that children *also* get better at using these same visual features when recognizing each other's drawings
1
0
3
@brialong
Bria Long, PhD
2 years
@ashleyruba @EvelinaLeivada Hi 👋 I don’t think this is black and white. I'm still in my postdoc (still learning and still love the work) but I really think there’s no way I would have had the same job opportunities after my PhD vs postdoc. My computational and data science skill sets have vastly expanded.
1
0
3
@brialong
Bria Long, PhD
6 years
@WilmaBainbridge Been using OpenPose which has worked pretty well for our purposes!
0
0
3
@brialong
Bria Long, PhD
2 years
Come be part of this amazing team!
@jdyeatman
Jason Yeatman
2 years
Job posting draft #Dyslexia #edtech @StanfordEd: be a part of building bridges from academia to public education, working with a dynamic and collaborative team creating the next generation of open-source technology to support learning differences
0
4
11
0
0
3
@brialong
Bria Long, PhD
1 year
Grad applications are due by Dec 6th (see ) and a postdoc ad will be posted soon. I especially encourage individuals from underrepresented backgrounds to apply!
0
0
3
@brialong
Bria Long, PhD
5 years
@JhendersonIMB Well, we've found that texture-like representations do a surprisingly good job at explaining some visual cognitive behaviors and some portion of neural responses to objects. So I don't think it's that crazy that deep CNN layers will sometimes do the same thing... (1/2)
1
0
2
@brialong
Bria Long, PhD
3 years
What a great resource!👏
@martin_hebart
Martin Hebart
3 years
We tried making extracting neural network activations easier for you! Here is a Google Colab that you can run to get activations for your favorite network, based on our recently published THINGSvision library. Pytorch: Tensorflow:
2
20
108
0
0
2
@brialong
Bria Long, PhD
2 years
@RuthRosenholtz Took me a full 3ish weeks before I felt normal tbh. Hope you feel like yourself soon!
0
0
2
@brialong
Bria Long, PhD
5 years
@KohitijKar Would love a reprint -- not at SfN this year but this looks very interesting!
1
0
2
@brialong
Bria Long, PhD
4 years
@mcxfrank @naitisb @StanfordCSLI Seconded! Very thankful for your contributions to our project and the lab!
0
0
2
@brialong
Bria Long, PhD
3 years
To analyze changes in children’s drawings, we first extracted feature representations using a deep CNN optimized for object recognition. We then trained a classifier using these features to predict the category children were asked to draw. (4/13)
Tweet media one
1
0
2
@brialong
Bria Long, PhD
6 years
When reviewing, I often want to ask for a direct replication of the primary findings, especially if there isn't one built in to the paper. But what if these data aren't easy to collect (e.g., developmental / fMRI data)? It's OK to recommend replication if...
Very surprising results: 22
High-prestige journal: 5
Always; editor decides: 20
Never; not appropriate: 6
3
1
2
@brialong
Bria Long, PhD
1 year
@DeonTBenton My guess is current weight rather than birth weight, but I don’t know the age of the child!
0
0
2
@brialong
Bria Long, PhD
3 years
Infants are engaged in learning from others from their earliest days. How often do infants see and use social cues in naturalistic learning environments? (2/15)
Tweet media one
1
0
2
@brialong
Bria Long, PhD
3 years
We then asked children (N=1789) to recognize a subset of these drawings in four different 4AFC touch-screen games. Older children were better at recognizing all kinds of drawings—and were also differentially sensitive to diagnostic visual features during recognition. (10/13)
Tweet media one
1
0
2
@brialong
Bria Long, PhD
2 years
0
0
2
@brialong
Bria Long, PhD
5 years
@vayzenberg90 @bluevincent Would check out great work by @joshuacpeterson that deeply considers what these labels should be -- not sure if he has an open source list though
0
0
2