Cihang Xie Profile Banner
Cihang Xie Profile
Cihang Xie

@cihangxie

Followers
2,327
Following
797
Media
83
Statuses
609

Assistant Professor, @BaskinEng ; PhD, @JHUCompSci ; @Facebook Fellowship Recipient; 🐱

Santa Cruz, CA
Joined July 2014
Pinned Tweet
@cihangxie
Cihang Xie
4 months
Big thanks to @_akhaliq for the retweet! 🚀 We are very excited about presenting 𝑹𝒆𝒄𝒂𝒑-𝑫𝒂𝒕𝒂𝑪𝒐𝒎𝒑-1𝑩, where we use a 𝐋𝐋𝐚𝐌𝐀-𝟑-powered LLaVA model to recaption the entire 𝟏.𝟑 𝐛𝐢𝐥𝐥𝐢𝐨𝐧 images from DataComp-1B. Compared to the original textual descriptions,
@_akhaliq
AK
4 months
What If We Recaption Billions of Web Images with LLaMA-3? Web-crawled image-text pairs are inherently noisy. Prior studies demonstrate that semantically aligning and enriching textual descriptions of these pairs can significantly enhance model training across various
Tweet media one
3
57
267
2
19
89
@cihangxie
Cihang Xie
3 years
Thanks for tweeting, @ak92501 ! Regarding robustness, we surprisingly find that 1) ViTs are NO MORE robust than CNNs on adversarial examples; training recipes matter 2) ViTs LARGELY outperform CNNs on out-of-distribution samples; self-attention-like architectures matter
Tweet media one
@_akhaliq
AK
3 years
Are Transformers More Robust Than CNNs? abs:
Tweet media one
1
53
242
4
116
562
@cihangxie
Cihang Xie
2 years
Her name is Claire 🥰
Tweet media one
13
0
174
@cihangxie
Cihang Xie
5 months
New work alert 🚨🚨 𝐌𝐚𝐦𝐛𝐚®: 𝐕𝐢𝐬𝐢𝐨𝐧 𝐌𝐚𝐦𝐛𝐚 𝐀𝐋𝐒𝐎 𝐍𝐞𝐞𝐝𝐬 𝐑𝐞𝐠𝐢𝐬𝐭𝐞𝐫𝐬. Similar to ViT, we identify that Vision Mamba's feature maps also contain artifacts, but it's more intense --- even tiny models show extensive activation in background areas. (1/n)
Tweet media one
4
30
170
@cihangxie
Cihang Xie
1 year
📢 Introducing CLIPA-v2, our latest update to the efficient CLIP training framework! 🚀✨ 🏆 Our top-performing model, H/14 @336x336 on DataComp-1B, achieves an impressive zero-shot ImageNet accuracy of 81.8! ⚡️ Plus, its estimated training cost is <$15k!
Tweet media one
@_akhaliq
AK
1 year
An Inverse Scaling Law for CLIP Training abs: paper page:
Tweet media one
3
38
217
1
26
107
@cihangxie
Cihang Xie
4 years
Only today did I realize that I am no longer a Ph.D. student, but a professor 😂
Tweet media one
1
0
99
@cihangxie
Cihang Xie
3 years
If you plan to "attend" @CVPR , check out our Tutorial on AdvML in Computer Vision (). Our amazing speakers will cover both the basic background and their most recent research in AdvML. Save the date (09:55 am - 5:30 pm ET, June 19), and we will see you soon
Tweet media one
0
31
96
@cihangxie
Cihang Xie
2 years
Last year, we showed ViTs are inherently more robust than CNNs on OOD samples. Our latest work challenges this point: we can actually build pure CNNs with stronger robustness than ViTs. The secrets are: 1) patchifying inputs 2) enlarging kernel size 3) reducing norm & act layers
Tweet media one
@cihangxie
Cihang Xie
3 years
Thanks for tweeting, @ak92501 ! Regarding robustness, we surprisingly find that 1) ViTs are NO MORE robust than CNNs on adversarial examples; training recipes matter 2) ViTs LARGELY outperform CNNs on out-of-distribution samples; self-attention-like architectures matter
Tweet media one
4
116
562
2
14
88
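As an aside, the first ingredient of that robust-CNN recipe ("patchifying inputs") can be sketched in a few lines of numpy. This is a hypothetical illustration of a ViT-style patchify stem, not code from the paper; `patchify` and its shapes are my own.

```python
import numpy as np

def patchify(img, p=4):
    """Split an (H, W, C) image into non-overlapping p x p patches,
    flattening each patch into one token vector (a ViT-style stem)."""
    H, W, C = img.shape
    assert H % p == 0 and W % p == 0, "image dims must be divisible by p"
    return (img.reshape(H // p, p, W // p, p, C)
               .transpose(0, 2, 1, 3, 4)   # group each patch's rows/cols together
               .reshape((H // p) * (W // p), p * p * C))

img = np.random.rand(32, 32, 3)
tokens = patchify(img, p=4)
print(tokens.shape)  # (64, 48): an 8x8 grid of patches, each 4*4*3 values
```

The same reshape underlies convolutional "patchify stems": a stride-p, kernel-size-p convolution applied to `img` operates on exactly these flattened patch vectors.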
@cihangxie
Cihang Xie
9 months
📢📢📢 Several talented students from my group, skilled in computer vision, multimodal learning, and GenAI, are seeking summer internships. They're ready to bring innovative solutions to your projects! Don't miss out on this talent pool. 🚀 Check out their profiles: (1/n)
3
11
86
@cihangxie
Cihang Xie
4 years
If you decide to apply for Ph.D. programs this year and are interested in adversarial machine learning, computer vision & deep learning, please consider my group @ucsc @UCSC_BSOE . We will have multiple openings for Fall 2021. No GRE required & DDL is Jan 11, 2021 RT Appreciated
@BaskinEng
Baskin Engineering at UCSC
4 years
👩🏾‍💻 @ucsc ’s Computer Science and Engineering department is accepting applicants to its Ph.D. and M.S. programs for Fall 2021! Applications due Jan. 11, 2021. No GRE required. #BaskinEngineering Learn more & apply:
0
9
23
2
25
71
@cihangxie
Cihang Xie
3 months
Disappointed to see nearly 𝐢𝐝𝐞𝐧𝐭𝐢𝐜𝐚𝐥 reviews for two 𝐯𝐞𝐫𝐲 𝐝𝐢𝐟𝐟𝐞𝐫𝐞𝐧𝐭 submissions to #NeurIPS 😮‍💨😮‍💨
Tweet media one
Tweet media two
Tweet media three
Tweet media four
5
1
61
@cihangxie
Cihang Xie
3 years
📢 #CallForPaper AROW Workshop is back @eccvconf ! This year we have a great lineup of 9 speakers, and set 3 best paper prizes ($10k each, sponsored by @ftxfuturefund ). For more information, please check
Tweet media one
2
16
57
@cihangxie
Cihang Xie
1 year
Our best model, G/14 with 83.0% zero-shot ImageNet top-1 accuracy, is released at both OpenCLIP () and CLIPA ()
Tweet media one
@wightmanr
Ross Wightman
1 year
v2.23.0 of OpenCLIP was pushed out the door! Biggest update in a while, focused on supporting SigLIP and CLIPA-v2 models and weights. Thanks @gabriel_ilharco @gpuccetti92 @rom1504 for help on the release, and @bryant1410 for catching issues. There's a leaderboard csv now!
2
18
147
0
11
56
@cihangxie
Cihang Xie
6 months
🔥 Introducing HQ-Edit, a dataset with high-res images & detailed and aligned image editing instructions. Our fine-tuned InstructPix2Pix delivers superior editing performance. Dataset: Demo: Project:
@_akhaliq
AK
6 months
HQ-Edit A High-Quality Dataset for Instruction-based Image Editing This study introduces HQ-Edit, a high-quality instruction-based image editing dataset with around 200,000 edits. Unlike prior approaches relying on attribute guidance or human feedback on building datasets,
Tweet media one
1
40
144
0
11
53
@cihangxie
Cihang Xie
3 years
We are hosting a workshop on “Neural Architectures: Past, Present and Future” at @ICCV_2021 , with 7 amazing speakers. We invite submissions on any aspect of neural architectures. Both long papers and extended abstracts are welcome. Visit for more details
Tweet media one
1
11
48
@cihangxie
Cihang Xie
4 months
Just finished my poster for the #CVPR AC workshop. See you all next Sunday
Tweet media one
2
10
45
@cihangxie
Cihang Xie
1 year
Interested in CLIP training but constrained by resources? We've got you covered! In our latest work , we introduce an efficient training strategy that's affordable for researchers with limited computational capacities. (1/n)
@_akhaliq
AK
1 year
An Inverse Scaling Law for CLIP Training abs: paper page:
Tweet media one
3
38
217
3
5
46
@cihangxie
Cihang Xie
3 years
Always enjoy reading lecture notes from my student 😃
Tweet media one
Tweet media two
Tweet media three
0
0
40
@cihangxie
Cihang Xie
10 months
We're excited to share two recent works on multimodal learning. The first one introduces a human-knowledge-based algorithm to improve the alignment and efficiency of large-scale image-text data: it preserves model performance while compressing the image-text datasets by up to ~90%.
Tweet media one
Tweet media two
1
6
37
@cihangxie
Cihang Xie
1 year
Our little Claire is already 10 months old! Should we start teaching her Python🤔?
Tweet media one
4
0
38
@cihangxie
Cihang Xie
4 years
Check out our latest work on studying the effects of activation functions in adversarial training. We found that making activation functions be SMOOTH is critical for obtaining much better robustness. Joint work with @tanmingxing , @BoqingGo , @YuilleAlan and @quocleix .
@quocleix
Quoc Le
4 years
A surprising result: We found that smooth activation functions are better than ReLU for adversarial training and can lead to substantial improvements in adversarial robustness.
Tweet media one
20
260
1K
1
10
36
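To make the "smooth activation" idea concrete, here is a tiny numpy sketch (my own illustration, not the paper's code) comparing ReLU with softplus, a standard smooth approximation: the two nearly coincide in value, but softplus has a continuous derivative at 0 where ReLU has a kink.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softplus(x, beta=10.0):
    """Smooth approximation of ReLU; larger beta brings it closer to ReLU."""
    return np.log1p(np.exp(beta * x)) / beta

x = np.linspace(-3, 3, 601)
# Values stay within log(2)/beta ~ 0.069 of ReLU everywhere...
print(np.max(np.abs(softplus(x) - relu(x))))
# ...but the derivative is continuous, whereas ReLU's jumps from 0 to 1 at
# x = 0 -- the kind of non-smoothness the tweet refers to.
```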
@cihangxie
Cihang Xie
4 years
Our Adversarial Machine Learning Workshop #CVPR2020 is tomorrow, with 9 amazing speakers: @YuilleAlan , @aleks_madry , @EarlenceF , @MatthiasBethge , Laurens van der Maaten, @pinyuchenTW , Cho-Jui Hsieh, @BoqingGo and @tdietterich . Come and join us via
Tweet media one
4
12
36
@cihangxie
Cihang Xie
1 year
An incredible experience at #ICML23 --- meeting over 50 Email/Twitter friends in person for the first time was truly amazing. Building real connections with real people (plus beautiful Hawaii) makes ICML unforgettable! 🥳🥳
0
0
32
@cihangxie
Cihang Xie
5 months
𝙄𝙣𝙩𝙚𝙧𝙥𝙧𝙚𝙩𝙖𝙗𝙡𝙚 𝙢𝙤𝙙𝙚𝙡𝙨 𝙘𝙖𝙣 𝙨𝙘𝙖𝙡𝙚🚀, 𝙩𝙤𝙤!!! Excited to present 𝐂𝐑𝐀𝐓𝐄-α, our latest effort on scaling up the white-box transformer CRATE for vision tasks. By slightly tweaking the design of the sparse coding block and the training recipes,
Tweet media one
2
13
34
@cihangxie
Cihang Xie
5 years
I am happy to share that I will be joining @UCSC_BSOE as an Assistant Professor in the Computer Science and Engineering Department in Spring 2021! Thanks everyone for the great support along my Ph.D. journey 😀
2
0
32
@cihangxie
Cihang Xie
1 year
Just arrived at #ICML2023 . Looking forward to catching up with old friends and making new ones
Tweet media one
2
0
31
@cihangxie
Cihang Xie
2 years
The afternoon session of our @CVPR Workshop on The Art of Robustness starts now in Room 203-205!! Our first speaker is Nicholas Carlini
1
2
31
@cihangxie
Cihang Xie
6 months
Happy to share that our D-iGPT is accepted by #ICML2024 . Using only public datasets for training, our best model attains a strong 𝟗𝟎.𝟎% ImageNet accuracy with a vanilla ViT-H. Code & Model: Huge congrats to @RenSucheng 🥳
Tweet media one
@_akhaliq
AK
11 months
Rejuvenating image-GPT as Strong Visual Representation Learners paper page: This paper enhances image-GPT (iGPT), one of the pioneering works that introduce autoregressive pretraining to predict next pixels for visual representation learning. Two simple
Tweet media one
1
9
37
1
10
32
@cihangxie
Cihang Xie
4 years
Interested in benchmarking your NAS algorithm? Check out our @CVPR 1st lightweight NAS challenge () with 3 competition tracks: Supernet Track, Performance Prediction Track, Unseen Data Track. 🏆The prize pool is $2,500 for each competition track
1
9
26
@cihangxie
Cihang Xie
4 months
VideoHallucer: a comprehensive benchmark for detecting hallucinations in large video-language models. By categorizing hallucinations into intrinsic and extrinsic types and adopting adversarial binary VideoQA, we offer nuanced insight into where current models excel and falter
Tweet media one
1
3
26
@cihangxie
Cihang Xie
5 years
Our Adversarial Machine Learning in Computer Vision Workshop (co-organized with @xinyun_chen_ , @SongBai_ , Bo Li, Kaiming He, @drfeifei , Luc Van Gool, Philip Torr( @OxfordTVG ), @dawnsongtweets and Alan Yuille) is now accepting submissions, with a BEST PAPER PRIZE! DDL: Mar 15 AoE
Tweet media one
1
8
25
@cihangxie
Cihang Xie
3 years
I will co-advise 1-2 students with @yuyinzhou_cs . If you are interested, please submit your application via
@yuyinzhou_cs
yuyin zhou
3 years
I am seeking to hire 3-5 Ph.D. students @ucsc @BaskinEng . If you are interested in computer vision, machine learning, or AI for Healthcare, feel free to reach out and to let me know to look for your application! Application deadline: Jan 10 (GRE not required) RTs appreciated
42
423
736
0
3
25
@cihangxie
Cihang Xie
4 years
My fiancée @yuyinzhou_cs and I did our PhDs together with @YuilleAlan from 2016 to 2020. I think both of us did a good job :)
@sangeetha_a_j
Sangeetha Abdu Jyothi
4 years
1/ Yesterday, a student entering grad school asked me if it is possible to get married during PhD and still do well. Of course you can! Please don't be carried away by insane productivity stats. You can work for 40 hours/week and do a great job.
5
13
319
1
0
25
@cihangxie
Cihang Xie
6 months
Check out our TMLR paper on how to scale down CLIP models
@_akhaliq
AK
6 months
Google announces Scaling (Down) CLIP A Comprehensive Analysis of Data, Architecture, and Training Strategies This paper investigates the performance of the Contrastive Language-Image Pre-training (CLIP) when scaled down to limited computation budgets. We explore CLIP
Tweet media one
6
100
512
0
2
25
@cihangxie
Cihang Xie
3 years
Our ICCV Workshop on Neural Architectures: Past, Present and Future will start soon (8:55 am EDT, Oct 11), with 7 amazing speakers. Join us via Zoom from @ICCV_2021 platform! We will also be streaming live on Youtube! For more info, please visit
Tweet media one
0
3
21
@cihangxie
Cihang Xie
4 years
Thank you @trustworthy_ml for featuring me!
@trustworthy_ml
Trustworthy ML Initiative (TrustML)
4 years
1/ 📢 In this week’s TrustML-highlight, we are delighted to feature Cihang Xie @cihangxie 🎊🎉 Dr. Xie is an Assistant Professor of Computer Science and Engineering at the University of California, Santa Cruz.
Tweet media one
1
4
18
0
1
22
@cihangxie
Cihang Xie
6 months
We released CLIPA ~1 year ago, and it now has 𝟏𝟔.𝟖𝐤 𝐌𝐨𝐧𝐭𝐡𝐥𝐲 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝𝐬! Huge congratulations to the leading authors Xianhang and Zeyu 🥳
Tweet media one
@cihangxie
Cihang Xie
1 year
Interested in CLIP training but constrained by resources? We've got you covered! In our latest work , we introduce an efficient training strategy that's affordable for researchers with limited computational capacities. (1/n)
3
5
46
1
0
20
@cihangxie
Cihang Xie
4 months
Just noticed that our 𝑹𝒆𝒄𝒂𝒑-𝑫𝒂𝒕𝒂𝑪𝒐𝒎𝒑-1𝑩 is now trending on Hugging Face Datasets! 🌟 Check out the thread below for an introductory overview of our work. 👇 HF link:
Tweet media one
@cihangxie
Cihang Xie
4 months
Big thanks to @_akhaliq for the retweet! 🚀 We are very excited about presenting 𝑹𝒆𝒄𝒂𝒑-𝑫𝒂𝒕𝒂𝑪𝒐𝒎𝒑-1𝑩, where we use a 𝐋𝐋𝐚𝐌𝐀-𝟑-powered LLaVA model to recaption the entire 𝟏.𝟑 𝐛𝐢𝐥𝐥𝐢𝐨𝐧 images from DataComp-1B. Compared to the original textual descriptions,
2
19
89
0
4
19
@cihangxie
Cihang Xie
4 months
We see another significant boost in the Monthly Downloads of our CLIPA models, from 𝟏𝟔.𝟖𝐤 to 𝟒𝟖.𝟐𝐤 (weights are available at ). We also plan to release stronger CLIP models (especially in Zero-Shot Cross-Modal Retrieval) in the coming weeks. Stay tuned!
Tweet media one
@cihangxie
Cihang Xie
6 months
We released CLIPA ~1 year ago, and it now has 𝟏𝟔.𝟖𝐤 𝐌𝐨𝐧𝐭𝐡𝐥𝐲 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝𝐬! Huge congratulations to the leading authors Xianhang and Zeyu 🥳
Tweet media one
1
0
20
1
5
17
@cihangxie
Cihang Xie
11 months
Excited to share our newest project: developing a comprehensive safety evaluation benchmark for Vision-LLMs. Also, a shout-out to our awesome project leader, @HaoqinT , who will apply for PhD programs this year. He is a strong researcher and always energetic. Go get him!
@HaoqinT
Haoqin Tu
11 months
Vision LLMs like LLaVA and GPT4V, are good at handling regular vision-language tasks, but are they really robust and safe😈 With our new VLLM safety benchmark, we shed light on two types of safety evaluations in existing VLLMs: OOD situation and adversarial attack. 🧵👇
Tweet media one
1
10
30
2
4
18
@cihangxie
Cihang Xie
11 months
Thanks @_akhaliq for highlighting our work. We show autoregressive pretraining can create powerful vision backbones. E.g., our ViT-L attains an impressive 89.5% ImageNet accuracy. Larger models are coming soon. Big shoutout to @RenSucheng for leading this incredible project! 🌟
@_akhaliq
AK
11 months
Rejuvenating image-GPT as Strong Visual Representation Learners paper page: This paper enhances image-GPT (iGPT), one of the pioneering works that introduce autoregressive pretraining to predict next pixels for visual representation learning. Two simple
Tweet media one
1
9
37
0
1
18
@cihangxie
Cihang Xie
4 months
Our unicorn benchmark is accepted by #ECCV2024 . Kudos to our incoming PhD students @HaoqinT and Zijun.
@cihangxie
Cihang Xie
11 months
Excited to share our newest project: developing a comprehensive safety evaluation benchmark for Vision-LLMs. Also, a shout-out to our awesome project leader, @HaoqinT , who will apply for PhD programs this year. He is a strong researcher and always energetic. Go get him!
2
4
18
0
6
18
@cihangxie
Cihang Xie
1 year
Our @ICCVConference Workshop on Adversarial Robustness is now accepting submissions!
@MZhou73277685
M. Zhou
1 year
📢 [Call For Papers] We invite participants to submit their work to the 4th Workshop on Adversarial Robustness In the Real World, ICCV 2023, France! 📷 Workshop Website: #AROW #ICCV2023 #AdversarialRobustness #DeepLearning #ComputerVision #Paris
1
5
12
0
4
18
@cihangxie
Cihang Xie
6 months
Proud to see my students @HaoqinT and @BingchenZhao contribute to this interesting open-source project
@_akhaliq
AK
6 months
Eagle and Finch RWKV with Matrix-Valued States and Dynamic Recurrence We present Eagle (RWKV-5) and Finch (RWKV-6), sequence models improving upon the RWKV (RWKV-4) architecture. Our architectural design advancements include multi-headed matrix-valued states and a
Tweet media one
2
20
73
0
3
17
@cihangxie
Cihang Xie
5 months
D-iGPT is accepted as an Oral presentation 😎 Congratulations again to @RenSucheng on completing this exciting project.
Tweet media one
@cihangxie
Cihang Xie
6 months
Happy to share that our D-iGPT is accepted by #ICML2024 . Using only public datasets for training, our best model attains a strong 𝟗𝟎.𝟎% ImageNet accuracy with a vanilla ViT-H. Code & Model: Huge congrats to @RenSucheng 🥳
Tweet media one
1
10
32
0
2
17
@cihangxie
Cihang Xie
5 months
Lastly, huge kudos to the first author, Feng Wang from @JHUCompSci , for leading this interesting project. arxiv: project page:
1
4
17
@cihangxie
Cihang Xie
4 years
@mcgenergy @quocleix @DigantaMisra1 Besides the functions we listed in the paper, we have actually tried a lot of other smooth activation functions (including Mish). The general conclusion is that as long as your function is a good smooth approximation of ReLU, it will work.
1
1
16
@cihangxie
Cihang Xie
3 years
I will be giving a talk entitled “Not All Networks Are Born Equal for Robustness” at this @NeurIPSConf competition. Check it out!
@kate_saenko_
Kate Saenko
3 years
Which #AI #ComputerVision models generalize and adapt best to new domains with different viewpoints, style, or missing/novel categories? Find out on Tue Dec 7 20:00 GMT at our @NeurIPSConf ViSDA'21 competition! Hear from winners and speakers; more at
Tweet media one
0
8
36
0
0
16
@cihangxie
Cihang Xie
1 year
The “perk” of missing my flight
Tweet media one
3
0
16
@cihangxie
Cihang Xie
4 months
Thanks @arankomatsuzaki for retweeting our work. A detailed introduction of this project can be found at
@arankomatsuzaki
Aran Komatsuzaki
4 months
What If We Recaption Billions of Web Images with LLaMA-3 ? - Finetunes a LLaVA-1.5 and recaptions ~1.3B images from the DataComp-1B dataset - Opensources the resulting dataset data: proj: abs:
Tweet media one
7
66
235
0
3
16
@cihangxie
Cihang Xie
5 months
To mitigate this issue, we follow the ViT Register work to also add register tokens into Mamba. Two tweaks are needed: 1) inserting registers evenly, and 2) reusing these registers for final decision-making. We term the resulting architecture 𝐌𝐚𝐦𝐛𝐚® (2/n)
Tweet media one
1
0
16
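For readers curious how these two tweaks might look in code, here is a minimal numpy sketch of the bookkeeping. All names and shapes are hypothetical (this is not the released Mamba® implementation): tweak 1 interleaves registers evenly among the patch tokens, and tweak 2 gathers them back and concatenates them as the final feature.

```python
import numpy as np

def insert_registers(tokens, registers):
    """Tweak 1: evenly interleave R register tokens into an (N, D) sequence.
    Returns the mixed sequence plus the register positions for later reuse."""
    N, _ = tokens.shape
    R = len(registers)
    chunk = N // R
    parts, reg_pos, count = [], [], 0
    for i in range(R):
        reg_pos.append(count)
        parts.append(registers[i:i + 1]); count += 1            # one register
        parts.append(tokens[i * chunk:(i + 1) * chunk]); count += chunk
    parts.append(tokens[R * chunk:])                            # leftover patches
    return np.concatenate(parts, axis=0), reg_pos

def register_head(mixed, reg_pos):
    """Tweak 2: reuse the registers for the final decision by gathering
    and concatenating them (instead of discarding them)."""
    return np.concatenate([mixed[p] for p in reg_pos])

tokens = np.random.rand(196, 8)    # e.g. a 14x14 grid of patch tokens
registers = np.random.rand(4, 8)   # 4 registers (learnable in a real model)
mixed, reg_pos = insert_registers(tokens, registers)
feat = register_head(mixed, reg_pos)
print(mixed.shape, feat.shape)  # (200, 8) (32,)
```

In a real model the backbone would run between the two steps; here they are back-to-back only to show the token bookkeeping.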
@cihangxie
Cihang Xie
3 years
Waiter Please!!
Tweet media one
1
0
15
@cihangxie
Cihang Xie
4 years
Cannot wait to join
@BaskinEng
Baskin Engineering at UCSC
4 years
Happy first day of classes!!! This year, @ucsc Baskin Engineering welcomes nine new faculty with specialties in statistical computing, computer security, human-robot interaction, wireless communications, serious games, AI, and more. Meet them here:
Tweet media one
1
5
24
0
0
15
@cihangxie
Cihang Xie
3 years
My 🐈‍⬛ wants to read papers
1
0
13
@cihangxie
Cihang Xie
2 months
🚀 Excited to release MedTrinity-25M, a large-scale medical multimodal dataset with over 25 million images across 10 modalities and multigranular annotations for more than 65 diseases. Try it at
@_akhaliq
AK
2 months
MedTrinity-25M A Large-scale Multimodal Dataset with Multigranular Annotations for Medicine discuss: This paper introduces MedTrinity-25M, a comprehensive, large-scale multimodal dataset for medicine, covering over 25 million images across 10
Tweet media one
0
41
166
0
2
14
@cihangxie
Cihang Xie
2 years
Excited to share our @NeurIPSConf paper on defending against score-based query attacks 🥳 The leading author, @MrSizheChen , will apply for Ph.D. programs this year. He is a strong researcher and an awesome collaborator; try to get him 😉
@_Sizhe_Chen_
Sizhe Chen
2 years
Excited that our work with Prof. @cihangxie has been accepted by #NeurIPS2022 after an unforgettable rebuttal (2 4 4 5 -> 7 7 4 7). Looking forward to seeing you!
Tweet media one
Tweet media two
Tweet media three
Tweet media four
3
0
10
1
5
13
@cihangxie
Cihang Xie
4 months
An interesting work from our incoming PhD student @xiaoke_shawn_h
@_akhaliq
AK
4 months
VoCo-LLaMA Towards Vision Compression with Large Language Models Vision-Language Models (VLMs) have achieved remarkable success in various multi-modal tasks, but they are often bottlenecked by the limited context window and high computational cost of processing
Tweet media one
3
52
208
1
1
13
@cihangxie
Cihang Xie
4 years
We offer a best paper prize and several travel grants!! The submission deadline is Feb 26, 2021 AoE.
@xinyun_chen_
Xinyun Chen
4 years
Our Security and Safety in Machine Learning Systems Workshop at @iclr_conf (co-organized with Cihang Xie ( @cihangxie ), Ali Shafahi, Bo Li, Ding Zhao ( @zhao__ding ), Tom Goldstein, and Dawn Song ( @dawnsongtweets )) is now accepting submissions, with a best paper prize! DDL: Feb 26.
Tweet media one
1
6
26
0
6
13
@cihangxie
Cihang Xie
1 year
Our @CVPR adversarial machine learning workshop starts now in the EAST 3 meeting room. The first speaker is @aleks_madry , talking about “Preventing Data-driven Manipulation”
2
2
11
@cihangxie
Cihang Xie
4 years
👇This is exactly how I got my FAIR internship in summer 2018
@polynoamial
Noam Brown
4 years
Pro tip for PhD students looking for a research internship: cold emailing works! Find someone you think would be a good fit, specify topics they've worked on that interest you (point to papers and customize the email!), and make the case for why you're a strong candidate.
13
42
688
0
0
12
@cihangxie
Cihang Xie
3 years
We have extended the deadline to Aug 5, 2021. For more information, please visit
@cihangxie
Cihang Xie
3 years
The submission deadline is August 1, 2021, Anywhere on Earth (AoE). Both long papers (8 pages) and extended abstracts (4 pages) are welcome!! CMT site:
0
1
7
0
3
11
@cihangxie
Cihang Xie
6 months
Proud Advisor Moment - Huge congratulations to my student, Xianhang Li, for winning the esteemed 2024-25 Jack Baskin & Peggy Downes-Baskin Fellowship. Your hard work and dedication truly shine through! 🥳🥳 @BaskinEng
Tweet media one
1
1
12
@cihangxie
Cihang Xie
10 months
I will be at the CLIPA poster tomorrow morning from 10:45 am - 12:45 pm. Also, I will be at #NeurIPS2023 until this Saturday (Dec 16). Happy to chat with old friends and make new connections
Tweet media one
Tweet media two
Tweet media three
0
0
11
@cihangxie
Cihang Xie
2 years
Tomorrow (June 23) afternoon, we will be presenting our @CVPR work 𝐀 𝐒𝐢𝐦𝐩𝐥𝐞 𝐃𝐚𝐭𝐚 𝐌𝐢𝐱𝐢𝐧𝐠 𝐏𝐫𝐢𝐨𝐫 𝐟𝐨𝐫 𝐈𝐦𝐩𝐫𝐨𝐯𝐢𝐧𝐠 𝐒𝐞𝐥𝐟-𝐒𝐮𝐩𝐞𝐫𝐯𝐢𝐬𝐞𝐝 𝐋𝐞𝐚𝐫𝐧𝐢𝐧𝐠 at 126b. Come visit! Both @HuiyuWang3 and Xianhang Li will be there!
Tweet media one
1
1
11
@cihangxie
Cihang Xie
1 year
I will be (in-person) talking about this work at CVPR'23 Adversarial Machine Learning Workshop () tomorrow, from 10:40 am - 11:10am.
@cihangxie
Cihang Xie
1 year
📢 Introducing CLIPA-v2, our latest update to the efficient CLIP training framework! 🚀✨ 🏆 Our top-performing model, H/14 @336x336 on DataComp-1B, achieves an impressive zero-shot ImageNet accuracy of 81.8! ⚡️ Plus, its estimated training cost is <$15k!
Tweet media one
1
26
107
0
2
11
@cihangxie
Cihang Xie
4 years
Want to make your networks get better performance? Try AdvProp #CVPR2020 ! By using adversarial examples, AdvProp achieves 85.5% accuracy on ImageNet. Join me via at 10AM PT TODAY! Joint work with @tanmingxing , @BoqingGo ,JiangWang, @YuilleAlan and @quocleix
Tweet media one
0
2
11
@cihangxie
Cihang Xie
3 years
Our Workshop on Security and Safety in Machine Learning Systems is tomorrow, starting at 8:45 am PT. Join us at
@xinyun_chen_
Xinyun Chen
3 years
Our SecML Workshop at @iclr_conf is tomorrow, with 11 amazing speakers/panelists: @zicokolter , @AdamKortylewski , @aleks_madry , @catherineols , @AlinaMOprea , Aditi Raghunathan, Christopher Ré, @RaquelUrtasun , David Wagner, @YuilleAlan , @ravenben . Link:
Tweet media one
1
12
49
0
3
11
@cihangxie
Cihang Xie
4 months
Just found out my COVID test is positive. If you talked to me at #CVPR , you may want to closely monitor your health over the next few days….. sorry
2
2
11
@cihangxie
Cihang Xie
1 year
Sea turtle at Waikiki Beach
0
0
10
@cihangxie
Cihang Xie
3 years
thanks @Chewy
Tweet media one
Tweet media two
2
0
7
@cihangxie
Cihang Xie
5 months
Quantitatively, our 𝐌𝐚𝐦𝐛𝐚® also attains stronger performance and scales better. For example, our 𝐌𝐚𝐦𝐛𝐚®-B attains 82.9% ImageNet accuracy, outperforming its non-register version by 1.1%. Furthermore, we provide the first successful scaling to the large model size
Tweet media one
1
0
10
@cihangxie
Cihang Xie
9 months
Meet Xianhang Li, a third-year PhD student making strides in video understanding and multimodal learning. 🌟 With 6 top-tier publications, he's a tech prodigy. Explore Xianhang's work at and CV at . (2/n)
1
0
10
@cihangxie
Cihang Xie
2 months
Our MedTrinity-25M dataset is trending on Hugging Face; check it out at
Tweet media one
0
1
9
@cihangxie
Cihang Xie
2 years
Today is the submission DDL of our @eccvconf AROW Workshop!! 𝐴𝑠 𝑎 𝑟𝑒𝑚𝑖𝑛𝑑𝑒𝑟, 𝑤𝑒 𝑤𝑖𝑙𝑙 𝑠𝑒𝑙𝑒𝑐𝑡 3 𝑏𝑒𝑠𝑡 𝑝𝑎𝑝𝑒𝑟𝑠, 𝑎𝑤𝑎𝑟𝑑𝑖𝑛𝑔 𝑒𝑎𝑐ℎ 𝑤𝑖𝑡ℎ 𝑎 $10,000 𝑝𝑟𝑖𝑧𝑒.
@cihangxie
Cihang Xie
3 years
📢 #CallForPaper AROW Workshop is back @eccvconf ! This year we have a great lineup of 9 speakers, and set 3 best paper prizes ($10k each, sponsored by @ftxfuturefund ). For more information, please check
Tweet media one
2
16
57
0
2
9
@cihangxie
Cihang Xie
4 years
Attending #CVPR2020 ? Please check out our work on fooling object detectors in the physical world! Q&A Time: 10:00–12:00 and 22:00–00:00 PT on June 16 Link: Code: Paper: We also have a very cool demo👇
1
2
9
@cihangxie
Cihang Xie
4 years
Missed our #CVPR2020 Adversarial Machine Learning Workshop? No worries, the full recording is available at
0
1
9
@cihangxie
Cihang Xie
11 months
We will be presenting CLIPA at @NeurIPSConf on Tuesday, Dec 12. This work shows that larger models are better at handling inputs with reduced token length, enabling faster training. The code is released at , including G/14 at 83.0% zero-shot accuracy.
@xwang_lk
Xin Eric Wang
11 months
UCSC AI Group will be at #NeurIPS23 and #EMNLP23 , presenting research covering LLMs, T2I Generation/Editing/Evaluation, VLM Efficiency, Embodied Agents, Explainable AI, Machine Unlearning, Fairness ML, etc. Check out the blogpost for detailed schedule: .
Tweet media one
2
5
38
1
0
8
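The token-length reduction behind CLIPA can be sketched as simple random subsampling of the image patch tokens. This is a hypothetical numpy illustration (the actual work studies several reduction strategies and training schedules); `subsample_tokens` and its shapes are my own.

```python
import numpy as np

rng = np.random.default_rng(0)

def subsample_tokens(tokens, keep_ratio=0.25):
    """Randomly keep a fraction of the patch tokens, shortening the
    sequence the image encoder must process during training."""
    n = tokens.shape[0]
    keep = max(1, int(n * keep_ratio))
    idx = np.sort(rng.choice(n, size=keep, replace=False))
    return tokens[idx]

tokens = np.random.rand(256, 16)    # a 16x16 grid of patch tokens
short = subsample_tokens(tokens, 0.25)
print(short.shape)  # (64, 16)
```

Since self-attention cost grows quadratically with sequence length, keeping 1/4 of the tokens cuts that term by roughly 16x, which is where the training-cost savings come from; the observation in the tweet is that larger models tolerate this truncation better.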
@cihangxie
Cihang Xie
3 years
The code and models of our iBOT are released. Please check it out
@_akhaliq
AK
3 years
iBOT: Image BERT Pre-Training with Online Tokenizer abs: achieves an 81.6% linear probing accuracy and an 86.3% fine-tuning accuracy evaluated on ImageNet-1K
Tweet media one
3
13
65
0
3
8
@cihangxie
Cihang Xie
4 years
Anyway, I am very disappointed with #neurips2020 --- rejecting work without peer review AND not allowing a rebuttal. If my work cannot be properly reviewed at a conference, then what is the value of submitting a paper there?
1
2
8
@cihangxie
Cihang Xie
4 years
Please consider submitting your work to this workshop! We will also offer two Titan RTX GPUs for prize winners!
@AdamKortylewski
Adam Kortylewski
4 years
We have a great lineup of speakers at our upcoming workshop on the importance of Adversarial Robustness in the Real World () at #ECCV2020 with @MatthiasBethge @YuilleAlan @honglaklee @pappasg69 @RaquelUrtasun @judyfhoffman . Consider submitting a paper!
Tweet media one
1
16
43
1
0
8
@cihangxie
Cihang Xie
4 years
Happy new year
@year_progress
Year Progress
4 years
░░░░░░░░░░░░░░░ 0%
408
21K
132K
1
0
8
@cihangxie
Cihang Xie
9 months
Meet another star PhD student, Zeyu Wang! 🌠 In his third year, Zeyu has already published 5 papers, covering topics of multimodal learning, self-supervised learning, and robustness. Learn more about Zeyu’s work at and CV at (3/n)
1
0
8
@cihangxie
Cihang Xie
3 years
🐱
0
0
7
@cihangxie
Cihang Xie
4 years
Our "Shape-Texture Debiased Neural Network Training" paper is just accepted by ICLR. Congrats @yingwei_li github:
Tweet media one
@trustworthy_ml
Trustworthy ML Initiative (TrustML)
4 years
4/ 2] "Shape-Texture Debiased Neural Network Training": They developed a simple algorithm for shape-texture debiased learning; the key is to train models with images with conflicting shape and texture information & provide the supervisions from shape and texture simultaneously.
1
1
1
0
0
8
@cihangxie
Cihang Xie
9 months
Meet Sucheng, a remarkable first-year PhD student at @JHUCompSci ! 🌟 Sucheng has an impressive 16 publications, covering multimodal learning, self-supervised learning, and knowledge distillation. Check his work at with CV at (5/n)
1
0
8
@cihangxie
Cihang Xie
1 year
Lesson learned --- international flights must be checked in at least 1 hour before departure 😭
2
0
7
@cihangxie
Cihang Xie
9 months
Say hello to Junfei Xiao from @JHUCompSci , a second-year PhD student with expertise in segmentation, multimodal learning, and large vision-language models, boasting 8 publications. 🚀 Discover his profile at and access his CV at (4/n)
2
0
7
@cihangxie
Cihang Xie
2 years
If you are working on robustness, please consider submitting your work(s) to our @eccvconf AROW workshop. 𝐖𝐞 𝐰𝐢𝐥𝐥 𝐬𝐞𝐥𝐞𝐜𝐭 𝐓𝐇𝐑𝐄𝐄 𝐛𝐞𝐬𝐭 𝐩𝐚𝐩𝐞𝐫 𝐚𝐰𝐚𝐫𝐝𝐬, 𝐞𝐚𝐜𝐡 𝐰𝐢𝐭𝐡 𝐚 $𝟏𝟎,𝟎𝟎𝟎 𝐩𝐫𝐢𝐳𝐞! CMT site: DDL: Aug 1, 2022
@cihangxie
Cihang Xie
3 years
📢 #CallForPaper AROW Workshop is back @eccvconf ! This year we have a great lineup of 9 speakers, and set 3 best paper prizes ($10k each, sponsored by @ftxfuturefund ). For more information, please check
Tweet media one
2
16
57
0
5
7
@cihangxie
Cihang Xie
4 years
A very relaxed way to attend #CVPR2020 😃
Tweet media one
0
0
7
@cihangxie
Cihang Xie
4 years
The devil is in the details
@TianyuPang1
Tianyu Pang
4 years
Implementation details affect the robust accuracy of adversarially trained models more than expected. For example, a slightly different value of weight decay can send TRADES back to #1 in the AutoAttack benchmark. A unified training setting is necessary.
3
6
30
0
0
7
@cihangxie
Cihang Xie
3 years
The submission deadline is August 1, 2021, Anywhere on Earth (AoE). Both long papers (8 pages) and extended abstracts (4 pages) are welcome!! CMT site:
@cihangxie
Cihang Xie
3 years
We are hosting a workshop on “Neural Architectures: Past, Present and Future” at @ICCV_2021 , with 7 amazing speakers. We invite submissions on any aspects of neural architectures. Both long paper and extended abstract are welcomed. Visit for more details
Tweet media one
1
11
48
0
1
7
@cihangxie
Cihang Xie
5 months
Qualitatively, our 𝐌𝐚𝐦𝐛𝐚® can effectively suppress those artifacts, making the feature maps look much cleaner. Additionally, our inserted register tokens can sometimes exhibit strong semantics, representing objects. (3/n)
Tweet media one
Tweet media two
1
0
7
@cihangxie
Cihang Xie
9 months
Lastly, I would say that I am super privileged to collaborate with these incredibly talented students! 🌟 Their expertise and dedication are exceptional. Don't miss the chance to work with them – hire them now😉 (n/n)
0
0
6
@cihangxie
Cihang Xie
3 years
0
0
5
@cihangxie
Cihang Xie
4 months
I will be presenting this paper at the #CVPR Area Chair Workshop this Sunday. Additionally, if you're attending #CVPR and want to dive deeper into this project, feel free to ping me for an in-person discussion. Looking forward to engaging with you all! (5/n)
Tweet media one
1
0
6
@cihangxie
Cihang Xie
3 years
Our SecML workshop starts now! After @xinyun_chen_ 's opening remarks, @AlinaMOprea will talk about "Machine Learning Integrity in Adversarial Environments" at 9 am PT. The link is here 👇
@cihangxie
Cihang Xie
3 years
Our Workshop on Security and Safety in Machine Learning Systems is tomorrow, starting at 8:45 am PT Join us at
0
3
11
0
3
6
@cihangxie
Cihang Xie
4 months
To validate its empirical benefits, we kicked off with CLIP model training as our first experiment. Results show: 1) training with a mixture of the original captions and our recaptions significantly boosts cross-modal retrieval, despite a tiny dip in ImageNet accuracy. 2) Using
Tweet media one
Tweet media two
3
0
5
@cihangxie
Cihang Xie
3 years
@pinyuchenTW @ak92501 Hi Pin-Yu, thanks for sharing this interesting work. We will definitely take a look and discuss it in the next version :)
0
0
5
@cihangxie
Cihang Xie
4 years
Same here, with two very vague comments: (1) not supported with sufficient empirical evidence, or important baselines are missing; (2) not sufficiently positioned with respect to prior work, either in terms of the presentation or the empirical validation.
@sineadwilliamso
Sinead Williamson
4 years
Oh, great. Neurips desk reject. If the results are “known results” (I’m pretty confident they aren’t), perhaps the area chair could have made the effort to leave a note telling what they are? 1/n
14
19
307
1
0
5