Charlie S. Burlingham

@csburlingham

Followers: 187
Following: 506
Statuses: 132

Vision Scientist, Meta Reality Labs

Redmond, WA
Joined April 2021
@csburlingham
Charlie S. Burlingham
8 months
New paper alert! @RealityLabs Eye gaze in everyday life contains multi-scale temporal dependencies across objects (1-7 fixations into the past, depending on task), akin to natural language. Key to foundation models for visual understanding in mixed reality.
2
4
26
@csburlingham
Charlie S. Burlingham
7 months
RT @EikoFried: So in 2007, physicists wrote a paper that made the headlines: according to their calculations, human coin flips aren’t 50/50…
0
238
0
@csburlingham
Charlie S. Burlingham
7 months
0
0
1
@csburlingham
Charlie S. Burlingham
7 months
RT @AiriSyoshimoto: I’m very excited to share that my graduate work is now online in @ScienceMagazine today! With generous help from my me…
0
68
0
@csburlingham
Charlie S. Burlingham
8 months
RT @MichaelProulx: All-day AR would benefit from AI models that understand a person's context & eye tracking could be key for task recognit…
0
2
0
@csburlingham
Charlie S. Burlingham
8 months
RT @LeahBanellis: Got Butterflies in your Stomach?😵‍💫I am super excited to share the first major study of my postdoc @visceral_mind! We rep…
0
93
0
@csburlingham
Charlie S. Burlingham
8 months
Once the input state space is well-aligned with human action & vision, and appropriate models that can represent long-term dependencies are used, we believe that multiple problems in contextual AI may be solved convergently by a single (gaze-based) visual foundation model. 6/
0
0
1
@csburlingham
Charlie S. Burlingham
8 months
@Meta @tsmurdison @XiuyunWu5 @MichaelProulx Scanpaths are often assumed to be Markovian. We analyzed scanpaths during nine everyday tasks and quantified long temporal dependencies using mutual information, finding an average timescale of four fixations (2 seconds) into the past, with large variation across tasks. 1/
1
0
2
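As a rough illustration of the lag-based mutual information analysis described in the thread above (a minimal sketch, not the paper's actual pipeline): treating a scanpath as a sequence of discrete object labels, one per fixation, you can estimate the mutual information between the currently fixated object and the object fixated k steps earlier using a simple plug-in estimator. The function names and the toy scanpath below are hypothetical, not from the paper.

```python
# Sketch: plug-in mutual information between the current fixation's
# object label and the label `lag` fixations in the past.
from collections import Counter
import math

def mutual_information(xs, ys):
    """Plug-in MI estimate (in bits) between two paired label sequences."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))  # joint counts
    px = Counter(xs)            # marginal counts
    py = Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

def mi_by_lag(scanpath, max_lag=7):
    """MI between each fixation's object label and the label `lag`
    fixations earlier, for lag = 1..max_lag."""
    return {
        lag: mutual_information(scanpath[lag:], scanpath[:-lag])
        for lag in range(1, max_lag + 1)
    }

# Toy usage: object labels fixated during a hypothetical kitchen task.
scanpath = ["knife", "cutting_board", "carrot", "knife", "cutting_board",
            "bowl", "carrot", "knife", "cutting_board", "bowl"]
print(mi_by_lag(scanpath, max_lag=3))
```

One caveat: plug-in MI estimates are biased upward on short sequences, so a real analysis of where MI decays to zero would need bias correction or shuffle controls before reading off a timescale.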
@csburlingham
Charlie S. Burlingham
8 months
RT @Jingyang_zhou: New article on unifying perceived magnitude and discriminability is out: @EeroSimoncelli @lyndor
0
13
0
@csburlingham
Charlie S. Burlingham
8 months
RT @Yasasi_Abey: PETMEI Workshop at @ETRA_conference 2024 kicked off with the keynote speech by @MichaelProulx from @RealityLabs. Insightfu…
0
5
0
@csburlingham
Charlie S. Burlingham
8 months
RT @Yasasi_Abey: Paper Session 1: Visual Attention @ETRA_conference 2024 just started. @olegkomo from @RealityLabs and @txst is now present…
0
6
0
@csburlingham
Charlie S. Burlingham
8 months
RT @Yasasi_Abey: .@csburlingham from @RealityLabs is now presenting their paper titled, "Real-World Scanpaths Exhibit Long-Term Temporal De…
0
4
0
@csburlingham
Charlie S. Burlingham
8 months
If you're at ETRA 2024 (Glasgow), I will be presenting today in the PETMEI workshop on "Timescales of information in everyday scanpaths - considerations for contextual AI for all-day AR"! Also check out @MichaelProulx's keynote and other interesting talks!
@ChuhanJiao
Chuhan Jiao🇺🇦
8 months
Dear #ETRA2024 attendees, Don't miss the PETMEI workshop tomorrow! Catch @MichaelProulx's keynote on XR eye tracking challenges and join our panel discussion with leading experts to discuss the future of machine learning in eye tracking. See you there! @ETRA_conference
0
1
9
@csburlingham
Charlie S. Burlingham
8 months
RT @ChuhanJiao: Dear #ETRA2024 attendees, Don't miss the PETMEI workshop tomorrow! Catch @MichaelProulx's keynote on XR eye tracking chal…
0
5
0