![Chen Change Loy Profile](https://pbs.twimg.com/profile_images/1743098876687761409/6M3MFtX-_x96.jpg)
Chen Change Loy
@ccloy
Followers: 3K · Following: 2K · Statuses: 901
President's Chair Professor @NTUsg · Director of @MMLabNTU · Computer vision and deep learning
Singapore · Joined November 2010
We turned our method, rejected by CVPR and ECCV, into the iOS app "Cutcha". EdgeSAM, our fast Segment Anything Model, runs at over 30 FPS on an iPhone 14. Enjoy intuitive one-touch object selection and precise editing—all processed locally on your device. No cloud needed! Download now on the App Store! 📸✨ Kudos to our in-house development team Han Soong and Voon Hueh #Cutcha #EdgeSAM #PhotoEditing #iOSApp
8 · 25 · 212
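For context on what "one-touch object selection" looks like in code, here is a minimal sketch of point-prompted inference with an EdgeSAM-style encoder/decoder exported to ONNX. The file names, tensor names, input resolution, and preprocessing are illustrative assumptions, not the actual EdgeSAM or Cutcha pipeline.

```python
# Hypothetical sketch: point-prompted segmentation with an EdgeSAM-style
# encoder/decoder exported to ONNX. Model paths and tensor names are
# illustrative assumptions, not the official EdgeSAM export.
import numpy as np
import onnxruntime as ort
from PIL import Image

def segment_with_point(image_path: str, point_xy: tuple[int, int]) -> np.ndarray:
    # Load and resize the image to the model's expected input size (assumed 1024x1024).
    image = Image.open(image_path).convert("RGB").resize((1024, 1024))
    pixels = np.asarray(image, dtype=np.float32).transpose(2, 0, 1)[None] / 255.0

    # Run the image encoder once per frame; the embedding could be cached.
    encoder = ort.InferenceSession("edgesam_encoder.onnx")       # assumed file name
    (embedding,) = encoder.run(None, {"image": pixels})          # assumed IO names

    # Decode a mask from a single foreground tap (label 1 = positive click).
    decoder = ort.InferenceSession("edgesam_decoder.onnx")       # assumed file name
    coords = np.array([[point_xy]], dtype=np.float32)            # shape (1, 1, 2)
    labels = np.array([[1]], dtype=np.float32)
    masks, scores = decoder.run(                                 # assumed output order
        None,
        {"image_embeddings": embedding,
         "point_coords": coords,
         "point_labels": labels},
    )
    # Keep the highest-scoring mask and threshold the logits.
    best = masks[0, int(np.argmax(scores[0]))]
    return best > 0.0

if __name__ == "__main__":
    mask = segment_with_point("photo.jpg", (512, 384))
    print("foreground pixels:", int(mask.sum()))
```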
RT @ShangchenZhou: 🔥Introducing #𝐌𝐚𝐭𝐀𝐧𝐲𝐨𝐧𝐞 for human video matting!🔥 🤡 Fast video matting with customizable target 🤡 Stable human tracking…
0 · 95 · 0
RT @KangLiao929: Happy to share that our work "Denoising as Adaptation" has been accepted to #ICLR2025! Huge thanks to @ccloy and all colla…
0 · 14 · 0
RT @BoLi68567011: I think the image metaphor of DeepSeek is incorrect. Recently, there has been a lot of discussion about DeepSeek. Many p…
0 · 19 · 0
RT @liuziwei7: 🔥Unbounded 4D City Generation🔥 #CityDreamer4D is a generative model of unbounded 4D cities that decouples static and dynami…
0 · 61 · 0
RT @zeqi_xiao: Introducing 💡Trajectory Attention for Fine-grained Video Motion Control💡. By augmenting attention along predefined trajector…
0 · 10 · 0
RT @dreamingtulpa: Omegance can control detail levels in diffusion-based synthesis using a single parameter, ω (uwu???) Guess we don't nee…
0 · 16 · 0
RT @caizhongang: 🚀 Announcing GTA-Human II for expressive human pose & shape estimation! Compared to its predecessor, this latest game-pla…
0 · 12 · 0
RT @caizhongang: 🚀 HuMMan-MoGen is here! 🚀 HuMMan v1.0: Motion Generation Subset features 112,112 fine-grained temporal (by stage) and spat…
0 · 17 · 0
RT @caizhongang: HuMMan v1.0: 3D Vision Subset (HuMMan-Point) has just been released! ✅ RGB-D @ 30 FPS ✅ Captured with Kinect & iPhone ✅ 34…
0 · 16 · 0
Like this work!
Split-Aperture 2-in-1 Cameras at #SIGGRAPH2024 for single-shot HDR, hyperspectral, and depth imaging! By splitting the camera aperture and using a dual-pixel sensor, we show that it is possible to capture an optically coded image and a conventional image simultaneously without increasing the camera size! This makes the conditional reconstruction problem so much easier! Project: Super fun project with Zheng Shi, @_ilya_c, Mario Bijelic, Geoffroi Côté, Jiwoon Yeom, Qiang Fu, Hadi Amata, and Wolfgang Heidrich.
0 · 0 · 2
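As a loose illustration of why capturing the conventional image alongside the coded one simplifies things, here is a toy conditional-reconstruction sketch: a network that takes both captures as input. The architecture is a generic convolutional stack invented for illustration and is unrelated to the model in the paper.

```python
# Illustrative sketch only: reconstruct a target modality from an optically
# coded capture, conditioned on the simultaneously captured conventional image.
# Generic conv stack for illustration; not the Split-Aperture paper's model.
import torch
import torch.nn as nn

class ConditionalReconstructor(nn.Module):
    def __init__(self, out_channels=3):
        # out_channels would depend on the target (HDR, hyperspectral bands, depth).
        super().__init__()
        # Input: coded image (3 ch) concatenated with conventional image (3 ch).
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, out_channels, 3, padding=1),
        )

    def forward(self, coded, conventional):
        # Conditioning on the conventional image constrains the inverse problem,
        # which is the intuition behind "conditional reconstruction is easier".
        return self.net(torch.cat([coded, conventional], dim=1))

if __name__ == "__main__":
    coded = torch.randn(1, 3, 128, 128)
    conventional = torch.randn(1, 3, 128, 128)
    print(ConditionalReconstructor()(coded, conventional).shape)  # (1, 3, 128, 128)
```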
Excited to share that our survey paper "Transformer-Based Visual Segmentation: A Survey" has been accepted by TPAMI! 📝 🔍 Highlights: 1️⃣ Unlike previous surveys, we categorize transformer-based methods from a technical perspective. 2️⃣ We cover methods for mainstream tasks under a DETR-like meta-architecture, as well as related directions organized by task. 3️⃣ We re-benchmark representative works on image semantic & panoptic segmentation datasets. Paper: GitHub repo: Hard work from Xiangtai Li @xtl994, Henghui Ding @HenghuiDing, Haobo Yuan @HarborYuan, Wenwei Zhang @wenweiz97, and other co-authors.
0 · 9 · 59
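Since highlight 2️⃣ hinges on the DETR-like meta-architecture, here is a generic sketch of that pattern (in the spirit of query-based mask prediction, not code from the survey): learnable queries are refined by a transformer decoder over image features, and each query then predicts a class plus a mask via a dot product with per-pixel embeddings. Dimensions and module choices are simplified assumptions.

```python
# Generic sketch of a DETR-like segmentation meta-architecture:
# learnable queries -> transformer decoder over image features ->
# per-query class logits and masks. Simplified for illustration.
import torch
import torch.nn as nn

class QueryBasedSegmenter(nn.Module):
    def __init__(self, num_queries=100, num_classes=80, dim=256):
        super().__init__()
        self.queries = nn.Embedding(num_queries, dim)          # learnable object queries
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        self.class_head = nn.Linear(dim, num_classes + 1)      # +1 for "no object"
        self.mask_embed = nn.Linear(dim, dim)                  # projects queries to mask space

    def forward(self, pixel_features):
        # pixel_features: (B, dim, H, W) from any backbone + pixel decoder.
        b, d, h, w = pixel_features.shape
        memory = pixel_features.flatten(2).transpose(1, 2)      # (B, H*W, dim)
        queries = self.queries.weight.unsqueeze(0).expand(b, -1, -1)
        queries = self.decoder(queries, memory)                 # (B, num_queries, dim)

        class_logits = self.class_head(queries)                 # (B, num_queries, num_classes+1)
        mask_embeds = self.mask_embed(queries)                  # (B, num_queries, dim)
        # Dot product between query embeddings and per-pixel features gives masks.
        masks = torch.einsum("bqd,bdhw->bqhw", mask_embeds, pixel_features)
        return class_logits, masks

if __name__ == "__main__":
    model = QueryBasedSegmenter()
    feats = torch.randn(1, 256, 64, 64)
    logits, masks = model(feats)
    print(logits.shape, masks.shape)   # (1, 100, 81) (1, 100, 64, 64)
```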