Wan𝕏in Jin

@jinwanxin

Followers
386
Following
231
Media
8
Statuses
62

Asst. Prof @ASU | Working on robotics | Prev. @GRASPlab | PI of Intelligent Robotics and Interactive Systems (IRIS) Lab

Tempe
Joined April 2014
Pinned Tweet
@jinwanxin
Wan𝕏in Jin
2 months
🚀 Can #MPC rival and even surpass #ReinforcementLearning in solving #DexterousManipulation? Our answer is a resounding YES! PROUD to share: 🔥 Complementarity-Free Multi-Contact Modeling and Optimization, our latest method that shatters benchmarks across various challenging…
4
36
226
@jinwanxin
Wan𝕏in Jin
2 months
🚀 Can a robotic hand master #DexterousManipulation in just 2 mins? YES! 🎉 Excited to share our work “ContactSDF,” a physics-inspired representation using signed distance functions (#SDFs) for dexterous manipulation, from #geometry to #MPC. 🔥 Watch the UNCUT video of Allegro Hand…
1
10
57
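The SDF idea above can be illustrated with a toy example (a hypothetical sketch for intuition, not the ContactSDF model itself): a signed distance function returns negative values inside geometry, so contact and penetration checks reduce to a sign test.

```python
import numpy as np

def sphere_sdf(p, center, radius):
    """Signed distance from point p to a sphere: positive outside,
    zero on the surface, negative inside (penetration)."""
    return np.linalg.norm(p - center) - radius

# Contact detection for a fingertip against a unit sphere at the origin:
fingertip = np.array([0.0, 0.0, 0.95])
dist = sphere_sdf(fingertip, center=np.zeros(3), radius=1.0)
in_contact = dist <= 0.0  # contact (or penetration) when the SDF is non-positive
```

Because the SDF is a smooth scalar field almost everywhere, quantities like contact normals (its gradient) are directly available to an MPC optimizer.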
@jinwanxin
Wan𝕏in Jin
3 years
Our work “Safe Pontryagin Differentiable Programming” () is accepted by #NeurIPS2021. With Prof. George J. Pappas @pappasg69 & Prof. Shaoshuai Mou, we provide a safe differentiable framework to solve a broad class of safety-critical learning & control tasks
1
3
20
@jinwanxin
Wan𝕏in Jin
3 months
Robots can infer rewards from human feedback, but what about safety boundaries? Our work, Safe MPC Alignment (), led by @ZhixianXie_ASU , collab’d with @asuriselab , Yi Ren, @zhaoran_wang , and @pappasg69 , shows it’s possible and can be human-data efficient!
1
4
12
@jinwanxin
Wan𝕏in Jin
2 years
What a great experience to work with Michael @MichaelAPosa for efficient learning & control of dexterous manipulation! Contact-rich manipulation is computationally tough due to its hybrid nature. But our idea is simple: far fewer hybrid modes could be enough to achieve many tasks
@MichaelAPosa
Michael Posa
2 years
I'm excited to share our latest preprint. We use 4 minutes of experiential data to learn a model for robust real-time manipulation of a previously unknown object. Work led by @jinwanxin , supported by @ToyotaResearch . 1/
1
9
72
0
0
11
@jinwanxin
Wan𝕏in Jin
2 years
Can a robot learn its cost function from a human user's DIRECTIONAL correction? Our new T-RO paper (with @todd_murphey , Z. Lu, S. Mou) provides a simple, efficient, but comprehensive solution to this question. Below is the intro video. 1/5
2
2
9
@jinwanxin
Wan𝕏in Jin
2 months
The key is our novel "Complementarity-Free Multi-Contact model", which effectively models contact-rich interactions (particularly handling friction), while being very simple and optimizer-friendly! 🔗 Dive deeper: • Preprint: • Code:
0
0
6
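As a rough illustration of the “complementarity-free” idea (a generic smooth-contact sketch under my own assumptions, not the paper's actual model): classical rigid-body contact enforces a complementarity condition between the gap and the contact force, which is hard for gradient-based optimizers; one alternative replaces it with a smooth, always-differentiable force law.

```python
import numpy as np

def smooth_contact_force(phi, stiffness=1e3, smoothing=1e-2):
    """Smooth approximation of a unilateral contact force.

    Classical contact enforces the complementarity condition
        0 <= phi  ⟂  f >= 0
    (no force at a distance, no penetration). A complementarity-free
    alternative uses a softplus force law: the force stays positive,
    vanishes for large gaps phi, and grows as the gap closes --
    differentiable everywhere, hence optimizer-friendly.
    """
    return stiffness * smoothing * np.log1p(np.exp(-phi / smoothing))
```

The force is negligible at a 1 cm gap but rises sharply near contact, mimicking the hard complementarity behavior without its non-smoothness.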
@jinwanxin
Wan𝕏in Jin
3 months
preprint: youtube: website: code:
0
0
3
@jinwanxin
Wan𝕏in Jin
2 months
1
0
3
@jinwanxin
Wan𝕏in Jin
2 months
The following video summarizes the idea of ContactSDF:
1
0
3
@jinwanxin
Wan𝕏in Jin
3 months
Our method enables robots to learn safety constraints from only a handful of user corrections. It is certifiable: it provides an upper bound on the number of human corrections needed for successful learning, or indicates that the true constraint lies outside the hypothesis space.
1
0
3
@jinwanxin
Wan𝕏in Jin
9 months
@Tesla My Model 3 failed on installing update 2023.44.30... multiple times. I want to know why....
0
0
2
@jinwanxin
Wan𝕏in Jin
2 years
Our algorithm is simple to implement and intuitive: repeatedly “cut” through the space of possible objective parameters in the direction given by the human corrections until the remaining space is small enough. 3/5
1
0
2
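The “cut” step described above can be sketched as follows (a crude sampled stand-in, not the paper's method: the sample-based hypothesis space, the mean as a surrogate “center,” and the synthetic correction direction are all my own simplifications):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothesis space of objective parameters theta, represented by samples
# (a crude stand-in for an exact polytope or analytic-center method).
candidates = rng.uniform(-1.0, 1.0, size=(20000, 2))

def apply_cut(candidates, theta_hat, direction):
    """Keep only parameters on the correction side of the hyperplane
    through the current estimate theta_hat: d · (theta - theta_hat) >= 0."""
    keep = (candidates - theta_hat) @ direction >= 0.0
    return candidates[keep]

theta_true = np.array([0.6, -0.3])  # unknown parameters the human "knows"
for _ in range(8):
    theta_hat = candidates.mean(axis=0)  # crude center of the remaining set
    direction = theta_true - theta_hat   # stand-in for a human correction
    candidates = apply_cut(candidates, theta_hat, direction)
```

Each cut discards roughly half of the remaining hypothesis space while always keeping the true parameters, which is why only a small number of corrections is needed.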
@jinwanxin
Wan𝕏in Jin
3 months
@zhaoran_wang Thank you for sharing our work. Great to work with you!
0
0
2
@jinwanxin
Wan𝕏in Jin
2 months
@ChongZitaZhang Oops, the cat’s out of the bag! 🎉 Haven’t even tweeted it yet, "secrets" slipped out. LOL
0
0
2
@jinwanxin
Wan𝕏in Jin
2 years
The user study and real-world experiments indicate our method is significantly more effective (higher success rate), efficient/effortless (fewer human corrections needed), and potentially more accessible (fewer early wasted trials) than the state of the art. 4/5
1
0
2
@jinwanxin
Wan𝕏in Jin
9 months
@GuanyaShi nice work, congrats!
0
0
2
@jinwanxin
Wan𝕏in Jin
2 years
Our method learns a robot cost function from human corrections using only the directionality of the correction, not its magnitude. We show that learning from directionality has provable convergence, whereas prior works using magnitude do not and can be inefficient. 2/5
1
0
2
@jinwanxin
Wan𝕏in Jin
8 months
0
0
1
@jinwanxin
Wan𝕏in Jin
4 years
@AlexBeatson @zhaoran_wang Great suggestions👍. We may add it to an updated version.
0
0
1
@jinwanxin
Wan𝕏in Jin
2 years
@amitra_90 congrats!
1
0
1
@jinwanxin
Wan𝕏in Jin
2 months
@YouJiacheng lol. Your major comments are well received🤣. Will add the model as a co-author.
0
0
1
@jinwanxin
Wan𝕏in Jin
4 years
@AlexBeatson @zhaoran_wang Hi Alex, (13) and (18) are for the general case where the parameterized objective function J(\theta) in (1) is not zero. For the OC, J(\theta)=0 and (13b-d) and (18) are trivialized. The auxiliary control system for OC is simplified to (21), which is forward. Please see Algorithm 5
0
0
1
@jinwanxin
Wan𝕏in Jin
4 years
@AlexBeatson @zhaoran_wang Hi Alex, one additional comment from me, as I previously thought you were only talking about the OC mode here. For the general PDP framework, the difference from the adjoint method is: PDP also differentiates the adjoint state as well. Please see Equ (13b).
0
0
1
@jinwanxin
Wan𝕏in Jin
4 years
@_sam_sinha_ @zhaoran_wang Thanks for your comments. The learned optimal policy is initial-condition dependent, since the learned policy is a result of a finite-horizon optimal control system. However, given different initial conditions, the proposed PDP can quickly learn the corresponding optimal policy.
1
0
1
@jinwanxin
Wan𝕏in Jin
2 years
@MichaelAPosa @NSF Congratulations!! 🏆🏆🏆🏆🏆🏆
0
0
1
@jinwanxin
Wan𝕏in Jin
4 years
@Bluengine @zhaoran_wang Thanks. Differentiating through the PMP can be readily extended to continuous-time systems: just differentiate through the continuous-time version of the PMP, and very similar results can be achieved. We will update the Arxiv version soon.
0
0
1
@jinwanxin
Wan𝕏in Jin
2 months
@HaozhiQ thank you, Haozhi! Your work is also amazing!
0
0
1