Jingkang Yang @NTU🇸🇬 (@JingkangY)
NTU MMLab PhD Student - Reasoning in the Open World. ECCV’22 Best Backpack Award 🎒
Singapore · Joined March 2021
2K Followers · 1K Following · 300 Statuses
@JingkangY · 16 days ago:
RT @liuziwei7: 🔥Announcing Video-MMMU🔥 Spanning 6 disciplines (Art, Bus., Sci., Med., Humanities, Eng.), *Video-MMMU* challenges LLMs to l…
@JingkangY · 17 days ago:
RT @BoLi68567011: I think the image metaphor of DeepSeek is incorrect. Recently, there has been a lot of discussion about DeepSeek. Many p…
@JingkangY · 1 month ago:
@yizhe_ang @SCMPgraphics @threejs So cool! Just DMed you about further collaboration, please check 👍
@JingkangY · 1 month ago:
RT @BoLi68567011: After nearly a year of development, LMMs-Eval has reached 2K+ stars and 60+ contributors! 🚀 Now with integrated image, v…
@JingkangY · 2 months ago:
RT @BoLi68567011: 🔔We release lmms-eval/0.3.0, focusing on audio evaluations. Code: Doc:
@JingkangY · 3 months ago:
RT @lmmslab: New work from LMMs-Lab! This time we present our latest research on the interpretation and safety of multimodal models
@JingkangY · 3 months ago:
RT @BoLi68567011: TL;DR We present Large Multi-modal Models Can Interpret Features in Large Multi-modal Models We successfully use a 72B l…
@JingkangY · 3 months ago:
RT @BoLi68567011: Curious about what multimodal models have truly learned and how we can interpret them? Large-scale multimodal models are…
@JingkangY · 3 months ago:
RT @liuziwei7: 🔥Exploring o1-like Multimodal Reasoning🔥 🧠Insight-V🧠 scalably produces reasoning data and enhances the multimodal LLM reaso…
@JingkangY · 3 months ago:
RT @dyhTHU: 🚀🚀🚀Introducing Insight-V! An early attempt towards o1-like multi-modal reasoning. We offer a multi-agent system to unleash the…
@JingkangY · 4 months ago:
@AntonObukhov1 That’s funnier than I thought 🤣
@JingkangY · 4 months ago:
@tfc_ai Bingo!!! Binzhu worked with me on his cool undergraduate project FunQA, which resulted in an ECCV paper! He is looking for PhD positions, so I am advertising for him! Come and check out his cool poster on the afternoon of October 1!
@JingkangY · 4 months ago:
@tfc_ai @CSProfKGD Come and check out his poster on the afternoon of October 1!
@JingkangY · 4 months ago:
Carry your cool backpack 🎒 and come visit us in the October 1 and 2 afternoon sessions!
Octopus 🐙 - a vision-language model for robots 🤖 or GTA/Minecraft game play 🎮. We solve it by training the VLM into a good programmer with sight: it calls the correct functions and executes them.
FunQA 💃🪩🕺 - a testbed to see whether video language models can understand stupid 🤪 funny 🔥 viral videos. Let’s see how your VLMs understand these fun videos!

Quoting Ziwei Liu (@liuziwei7) · 4 months ago:
* Multimodal Models:
- MMBench:
- Octopus (Embodied Vision-Language Programmer):
- FunQA (Surprising Video Comprehension):
@JingkangY · 5 months ago:
RT @liuziwei7: Congrats to Fangzhou @hongfz16 , Jingkang @JingkangY , Ziqi @ziqi_huang_ , Tong @wutong_16 and Xian @AlvinLiu27 for being se…