Yuki Profile
Yuki

@y_m_asano

Followers
3,750
Following
698
Media
158
Statuses
593

Assistant Professor for Computer Vision and Machine Learning at the QUVA Lab, @UvA_Amsterdam. Previously @Oxford_VGG, @OxfordAI, interned @facebookai

Joined July 2014
@y_m_asano
Yuki
2 years
Our 2nd workshop on "Self-supervised Learning - What is Next?" is coming to #ECCV22! More updates soon! In the meantime, check out the previous iteration and marvel at how far we've come in SSL since then: Organised with @chrirupp @dlarlus A. Zisserman
2
18
178
@y_m_asano
Yuki
3 years
Just presented and successfully defended my D.Phil. to my examiners @phillip_isola and Philip Torr!🥳🎉 It was an honor and pleasure @Oxford_VGG with Andrea Vedaldi, @chrirupp and many amazing colleagues like @mandelapatrick_ and @afourast ! On to new adventures!
21
3
175
@y_m_asano
Yuki
1 year
Getting excited for my "Self-supervised and Vision-Language Learning" lectures starting tomorrow for the @UvA_IvI 's MSc in AI, Deep Learning 2 course: Sharing a preview in @CSProfKGD style :) Soo much recent progress, learned a lot in preparing it.😊
3
29
163
@y_m_asano
Yuki
6 months
Check out our @iclr_conf [oral] paper on learning state-of-the-art ViTs from a single video from scratch! One of the coolest things is that multi-object tracking emerges from the different heads in the plain ViTs (three heads visualised below in R,G,B).
@shawshank_v
Shashank
6 months
Really happy to share that DoRA is accepted as an Oral to @iclr_conf #ICLR2024 Using just “1 video” from our new egocentric dataset - Walking Tours, we develop a new method that outperforms DINO pretrained on ImageNet on image and video downstream tasks. More details in 🧵👇
2
20
116
2
16
123
@y_m_asano
Yuki
2 months
Today we introduce Bidirectional Instruction Tuning (Bitune), a new way of adapting LLMs for the instruction-answering stage. It lets the model process the instruction/question with bidirectional attention, while answer generation remains causal.
3
16
117
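The attention pattern described in the tweet above (bidirectional over the instruction, causal for the answer) can be sketched as an attention mask. This is a minimal illustration with assumed token layouts, not the paper's implementation:

```python
import numpy as np

def bitune_mask(n_instr, n_ans):
    """Boolean attention mask (True = may attend): instruction tokens attend
    to each other bidirectionally; answer tokens attend only causally."""
    n = n_instr + n_ans
    mask = np.tril(np.ones((n, n), dtype=bool))  # standard causal mask
    mask[:n_instr, :n_instr] = True              # bidirectional over the instruction block
    return mask

m = bitune_mask(3, 2)
# instruction token 0 may attend ahead to token 2; answer tokens remain causal
```

In practice the paper adapts a pretrained causal LLM, so the bidirectional processing is part of the tuning method rather than a simple mask swap; the mask only visualises the attention pattern.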
@y_m_asano
Yuki
4 years
Looking forward to the Self-Supervised Learning Workshop we’ve organized with @chrirupp , A. Vedaldi and A. Joulin at #ECCV2020 . Join us tomorrow for our speakers: @avdnoord , P. Favaro, @CarlDoersch , A. Zisserman, I. Misra, S. Yu, A. Efros, @pathak2206 !
2
33
108
@y_m_asano
Yuki
3 months
Check out our @CVPR paper on making caption-based Vision-Language Models do object-localization without _any_ human-supervised detection data! ⁉️ We develop a new *VLM-specific PEFT method* 🤩which is more powerful than LoRA etc. We test on non-training categories only!
@mdorkenw
Michael Dorkenwald
3 months
How can one easily teach caption-pretrained VLMs to localize objects? We show that a small Positional Insert (PIN) can unlock object localization abilities without annotated data on frozen autoregressive VLMs. #CVPR2024 📝: 🌐:
1
23
98
1
11
101
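The PIN idea quoted above trains only a small positional tensor added to the frozen model's visual features; everything else stays frozen. A toy sketch with illustrative names and shapes, not the actual implementation:

```python
import numpy as np

class PIN:
    """Toy sketch of a Positional Insert: the only trainable parameters are a
    small tensor added to the frozen vision features before they reach the
    (frozen) language model. Names/shapes are illustrative."""
    def __init__(self, n_patches, dim):
        self.insert = np.zeros((n_patches, dim))  # trained with the task loss

    def __call__(self, frozen_vision_features):
        # frozen_vision_features: (n_patches, dim) from the frozen encoder
        return frozen_vision_features + self.insert

pin = PIN(n_patches=4, dim=8)
out = pin(np.ones((4, 8)))
```

Because only `self.insert` would receive gradients, the parameter count stays tiny compared to LoRA-style adapters, which is the point of the comparison in the tweet.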
@y_m_asano
Yuki
1 year
With @iclr_conf done & @NeurIPSConf deadline rapidly approaching, here's something to look forward to 🤩: Our workshop @ICCVConference : "🍔BigMAC: Big Model Adaptation for Computer Vision" with amazing speakers 🌐: 📆: 2nd October 9am-1pm, details soon
1
22
99
@y_m_asano
Yuki
10 months
Reminder from Bill Freeman at the Quo Vadis workshop that it's not the quantity but that _one_ creative/weird paper that matters.
2
12
89
@y_m_asano
Yuki
2 years
This week marks my one year anniversary of being assistant prof at the @UvA_Amsterdam . 🥳🎉 To celebrate this, I want to share a few of my distilled reflections.
1
0
90
@y_m_asano
Yuki
2 years
Visit our ICLR poster "Measuring the Interpretability of Unsupervised Representations via Quantized Reversed Probing" with @irolaina and A. Vedaldi, from @Oxford_VGG. We linear-probe SSL models, but in ǝsɹǝʌǝɹ!🤯 For better interpretability. In 1h:
1
10
89
@y_m_asano
Yuki
2 months
@y0b1byte Off the top of my head: a lot of strands. Synthetic data: phi models, newest Stable Diffusion. MoEs: MegaBlocks, LLaVA-MoE. PEFT: e.g. DoRA, VeRA (ours). Instruction tuning: Alpaca, MoE+IT paper from Google. VLMs: Apple and HF papers. LLM embeddings: e.g. llm2vec,
2
5
79
@y_m_asano
Yuki
2 years
Today, my friend & collaborator @TengdaHan sent me this: I've arrived at 1000 citations! 🥳 Or rather, the works I've co-authored with many brilliant & inspiring individuals have, collectively, reached a nice arbitrary number! Still: 🥳🎉! To celebrate: here's some TL;DRs
6
0
68
@y_m_asano
Yuki
1 year
Here's the (re-)recording of lecture 1 + updated slides: 🎥: 📄: Also, check out the other cool modules of the DL2 course: from @egavves , @saramagliacane , @erikjbekkers , @eric_nalisnick , @wilkeraziz
@y_m_asano
Yuki
1 year
Getting excited for my "Self-supervised and Vision-Language Learning" lectures starting tomorrow for the @UvA_IvI 's MSc in AI, Deep Learning 2 course: Sharing a preview in @CSProfKGD style :) Soo much recent progress, learned a lot in preparing it.😊
3
29
163
0
17
65
@y_m_asano
Yuki
4 months
@Ellis_Amsterdam ELLIS Winter School on Foundation Models kicking off! Happy to have a super diverse group of students with us in Amsterdam. @ELLISforEurope
1
9
60
@y_m_asano
Yuki
1 year
Full house at the practical of our SSL + vision-language module! Want to follow along? Find the Colab notebook made by my fabulous TAs @ivonajdenkoska and @mmderakhshani here 💻: lecture 2 📺: slides 📄:
@y_m_asano
Yuki
1 year
Here's the (re-)recording of lecture 1 + updated slides: 🎥: 📄: Also, check out the other cool modules of the DL2 course: from @egavves , @saramagliacane , @erikjbekkers , @eric_nalisnick , @wilkeraziz
0
17
65
0
11
56
@y_m_asano
Yuki
27 days
Congratulations! Truly an "example in research, teaching/mentoring", as is the prize's intention. Couldn't have had a better PhD advisor.
@Oxford_VGG
Visual Geometry Group (VGG)
27 days
Congratulations to Andrea on receiving the Thomas Huang Memorial Prize @CVPR 🎉
4
11
155
0
3
54
@y_m_asano
Yuki
11 months
Finally it's out! 🎉 Our new work on leveraging the passage of time in videos for learning better image encoders. Big improvements for spatial tasks like unsupervised object segmentation. Check out the great thread below! Paper @ICCVConference
@MrzSalehi
mrz.salehi
11 months
New paper on exploring the power of videos for learning better image encoders 🎥🧠. Introducing "TimeTuning", a self-supervised method that tunes models on the temporal dimension, enhancing their capabilities for spatially dense tasks, such as unsupervised semantic segmentation.
3
20
86
0
6
53
@y_m_asano
Yuki
1 month
So. Loud. 🔊 Even while sitting in the rooms and just breathing, you constantly hear the clinking. Once you start hearing it, you cannot unhear it. 😅
4
1
46
@y_m_asano
Yuki
3 years
I will be giving a public talk about some of my research tomorrow as part of the QUVA Deep Vision lecture. It'll be about self-supervised learning and privacy/ethics in CV (SeLa, PASS, SeLaVi, GDT and our GPT-2 bias paper). Tune in here:
0
8
45
@y_m_asano
Yuki
2 years
Our #ECCV2022 @eccvconf workshop on self-supervised learning and its many new forms. This time with a call for papers with a deadline conveniently after ECCV decisions. Check it out ☑️ and share🔀!
@dlarlus
Diane Larlus
2 years
📢 Call for papers #ECCV2022 workshop 📢 *Self-supervised Learning - What is Next?* Submission deadline: July 6th More info:
1
18
89
0
4
42
@y_m_asano
Yuki
1 year
Research -> meeting friends! 🥳 After speaking at Bristol's Machine Learning and Vision group of @dimadamen and having exciting discussions about research yesterday, I was happy to see old and new colleagues from @Oxford_VGG at my talk at @oxengsci in Oxford today.
0
0
35
@y_m_asano
Yuki
10 months
Final day at @ICCVConference ! We have one oral at around 9:20am: "Self-ordering Point Clouds" where we learn how to select the most informative points with hierarchical contrastive learning (subsets as positive augmentations) and use Sinkhorn-Knopp for differentiable sorting.
1
4
34
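The Sinkhorn-Knopp step mentioned above alternates row and column normalisation to push a score matrix towards a doubly stochastic ("soft permutation") matrix, and every step is differentiable, which is what makes learned sorting possible. A generic sketch of the algorithm, not the paper's code:

```python
import numpy as np

def sinkhorn(scores, n_iters=50):
    """Alternate row/column normalisation of exp(scores) towards a doubly
    stochastic matrix; each row/column softly assigns a point to a rank."""
    P = np.exp(scores)
    for _ in range(n_iters):
        P = P / P.sum(axis=1, keepdims=True)  # rows sum to 1
        P = P / P.sum(axis=0, keepdims=True)  # columns sum to 1
    return P

rng = np.random.default_rng(0)
P = sinkhorn(rng.normal(size=(4, 4)))  # near-doubly-stochastic soft ordering
```

Multiplying the feature matrix by such a `P` yields a differentiable reordering, so the point-selection objective can be trained end to end.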
@y_m_asano
Yuki
1 month
VLM people when a new LLM like llama-3 comes out
@OdedRechavi
Oded Rechavi
1 month
Professors when you need their signature
27
253
3K
0
2
32
@y_m_asano
Yuki
2 years
I'm sure that if we had this tool earlier, efforts like ImageNet-blurred, removing the person-subtree and PASS would've happened earlier. We don't have an excuse anymore to not see the bias and problems in our datasets. (5/5)
3
3
29
@y_m_asano
Yuki
2 years
Our paper on causal representation learning from videos got accepted to ICML. 🍋🎉 While we use toy datasets, this is a great step for the future of representation learning.
@egavves
Efstratios Gavves
2 years
Great way to wake on a Sunday, our work on Causal Representation Learning from Temporal Intervened Sequences, w @phillip_lippe @TacoCohen @y_m_asano @saramagliacane @sindy_loewe accepted in #icml2022 ! 2 very good papers rejected, more for #neurips2022 😀
3
15
76
1
1
28
@y_m_asano
Yuki
28 days
Another nice oral at @CVPR 's vision-language session. And another good demonstration that current VLMs are pretty broken. But the authors propose a nifty way to distill the procedural coding knowledge of LLMs into VLMs, improving them on benchmarks.
@huyushi98
Yushi Hu
3 months
Excited that VPD has been selected as Oral at #CVPR2024 (90 orals in total, 0.8%). Congrats to all coauthors, and see you in Seattle! Let's distill all the powerful specialist models into one VLM! paper: proj:
0
10
57
0
2
25
@y_m_asano
Yuki
4 years
Really happy to share our new work on arXiv! Do check out our interactive cluster visualization here:
@Oxford_VGG
Visual Geometry Group (VGG)
4 years
Check out our new work "Labelling unlabelled videos from scratch with multi-modal self-supervision" by @y_m_asano @mandelapatrick_ @chrirupp and Andrea Vedaldi in colab with @facebookai ! See below for our automatically discovered clusters on VGG-Sound!
1
10
40
3
3
24
@y_m_asano
Yuki
2 years
It was such a nice event! Seeing friends and meeting new ones all the while seeing NeurIPS works -- and it being simply downstairs from my office is 💯 . Especially also super happy to see the mingling between PhD and MSc students and other researchers! 🙌
@Ellis_Amsterdam
ELLIS Amsterdam
2 years
Happening now: the NeurIPS Fest 🎉 at @Lab42UvA over 100 people are here to see 38 poster presentations of the latest AI research in Amsterdam #ai #machinelearning #ELLISamsterdam
0
8
40
1
1
24