Yuchen Cui

@YuchenCui1

1,159 Followers · 552 Following · 5 Media · 26 Statuses

Postdoc @Stanford in Interactive Robot Learning 🤖🤖🤖 | previously CS PhD @UTAustin , BS @PurdueECE

Stanford, CA
Joined August 2011
Pinned Tweet
@YuchenCui1
Yuchen Cui
1 month
Thrilled to announce that I am joining @CS_UCLA as an Assistant Professor this Fall! 🐻 Many thanks to my incredible advisors, mentors, family and friends for the encouragement and support. ❤️Looking forward to this exciting new chapter and all the opportunities ahead! 🤖🤖🤖
@YuchenCui1
Yuchen Cui
10 months
We use gestures all the time for specifying targets! How can robots make sense of “gimme that one”? We propose GIRAF, a framework for interpreting human gesture instructions using LLMs. Paper to appear in @corl_conf : Website:
@YuchenCui1
Yuchen Cui
3 years
Thanks Scott for all the support and guidance throughout my PhD!
@scottniekum
Scott Niekum
3 years
Congrats to @YuchenCui1 for successfully defending her dissertation, "Efficient Algorithms for Low-Effort Human Teaching of Robots". Keep an eye out for her upcoming work as a postdoc with @DorsaSadigh at Stanford!
[image]
@YuchenCui1
Yuchen Cui
3 years
How does the design of ML interactions influence the human in the loop and the learning outcomes? Check out our latest survey paper on this!
@tescafitz
Tesca Fitzgerald
3 years
How should we structure an AI's interaction with a teacher to improve learning? Our survey paper on "Understanding the Relationship between Interactions and Outcomes in Human-in-the-Loop Machine Learning" was just accepted to IJCAI'21. Pre-print:
[image]
@YuchenCui1
Yuchen Cui
10 months
Our user study shows that GIRAF achieves higher success rates and is preferred by users over the language-only baseline (Code as Policies) on tasks that are hard to specify.
@YuchenCui1
Yuchen Cui
2 months
It is frustrating to see robots make the same mistakes over and over again. Come to @LihanZha 's poster at ICRA to see how we enable robots to remember language corrections with LLMs!
@LihanZha
Lihan Zha
2 months
Excited to share our work DROC at #ICRA2024 ! 🚀 DROC empowers robots to learn from online language corrections and effortlessly generalize to new tasks. Join my presentation in room AX-F201 from 13:30 to 15:00 on Thu 16 May, and swing by our poster at BT13-AX.6 between 16:30 and 18:00!
[image]
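A minimal sketch of the general idea described above (not the authors' DROC code): store each language correction as text keyed by the task it came from, retrieve the most relevant ones for a new task, and prepend them to the LLM planner's prompt. The helpers `embed` and `llm_complete`, and the cosine-similarity retrieval, are hypothetical placeholders for whatever embedding model and LLM are actually used.

```python
# Hypothetical sketch: remembering language corrections for an LLM planner.
from typing import Callable, List, Tuple

def retrieve_corrections(task: str,
                         memory: List[Tuple[str, str]],
                         embed: Callable[[str], List[float]],
                         k: int = 3) -> List[str]:
    """Return the k stored corrections whose source tasks are most similar to `task`."""
    def cosine(a: List[float], b: List[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb + 1e-8)

    query = embed(task)
    ranked = sorted(memory, key=lambda m: cosine(embed(m[0]), query), reverse=True)
    return [correction for _, correction in ranked[:k]]

def plan_with_memory(task: str,
                     memory: List[Tuple[str, str]],
                     embed: Callable[[str], List[float]],
                     llm_complete: Callable[[str], str]) -> str:
    """Prompt the LLM planner with relevant past corrections prepended."""
    corrections = retrieve_corrections(task, memory, embed)
    prompt = (
        "You are a robot task planner.\n"
        "Relevant corrections from past interactions:\n"
        + "\n".join(f"- {c}" for c in corrections)
        + f"\n\nNew task: {task}\nPlan:"
    )
    return llm_complete(prompt)
```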
@YuchenCui1
Yuchen Cui
10 months
We further construct a GestureInstruct evaluation suite and demonstrate that GIRAF can reason about diverse types of gestures including deictic, iconic, symbolic and semaphoric gestures. Link:
@YuchenCui1
Yuchen Cui
10 months
GIRAF describes the scene and the human's gesture in language to prompt an LLM task planner. This grounds the LLM in both the scene and the human's intent.
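A rough sketch, under my own assumptions, of the prompting pattern described in the tweet above: the detected objects and the human's gesture are verbalized and concatenated into a single prompt for an LLM task planner. The helper `llm_complete` and the object/gesture fields are hypothetical, not the actual GIRAF interface.

```python
# Hypothetical sketch: verbalize scene + gesture, then prompt an LLM task planner.
from typing import Callable, Dict, List

def gesture_grounded_plan(objects: List[Dict],
                          gesture: Dict,
                          instruction: str,
                          llm_complete: Callable[[str], str]) -> str:
    """Compose a language prompt from the scene, the human's gesture, and the
    spoken instruction, and ask the LLM planner for a plan."""
    scene_text = "; ".join(
        f"{o['name']} at ({o['x']:.2f}, {o['y']:.2f})" for o in objects
    )
    gesture_text = (
        f"The human is {gesture['type']} toward ({gesture['x']:.2f}, {gesture['y']:.2f})."
    )
    prompt = (
        "Scene objects: " + scene_text + "\n"
        + gesture_text + "\n"
        + f'Human instruction: "{instruction}"\n'
        + "Resolve which object the human means and write a plan of robot actions."
    )
    return llm_complete(prompt)

# Example usage with a stubbed LLM:
if __name__ == "__main__":
    objs = [{"name": "red mug", "x": 0.30, "y": 0.10},
            {"name": "blue mug", "x": 0.70, "y": 0.20}]
    gest = {"type": "pointing", "x": 0.68, "y": 0.22}
    print(gesture_grounded_plan(objs, gest, "gimme that one", lambda p: "[LLM plan here]"))
```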
@YuchenCui1
Yuchen Cui
4 years
Excited to meet you all at our workshop tomorrow!
@YuchenCui1
Yuchen Cui
7 months
We are presenting this work at #NeurIPS2023 Wednesday morning at poster #423 . Come and check it out if you are also in New Orleans! ⚜️
@suneel_belkhale
Suneel Belkhale
1 year
In imitation learning (IL), we often focus on better algorithms, but what about improving the data? What does it mean for a dataset to be high quality? Our work takes a first step towards formalizing and analyzing data quality. (1/5)
@YuchenCui1
Yuchen Cui
1 month
@LihanZha Thanks Lihan! All the best for your PhD :)
@YuchenCui1
Yuchen Cui
10 months
GIRAF also enables the robot to interpret different gestures, and to interpret the same gesture differently, over long-horizon interactions.
@YuchenCui1
Yuchen Cui
10 months
Work with @lihenglin0213 , Yilun Hao, @xf1280 , and @DorsaSadigh