Oncel Tuzel (@OncelTuzel)
AI researcher @Apple (opinions are my own)
Cupertino · Joined August 2011
1K Followers · 197 Following · 70 Statuses
RT @jramapuram: Small update on SigmoidAttn (arXiv incoming). - 1B and 7B LLM results added and stabilized. - Hybrid Norm [on embed dim,…
0 replies · 36 retweets · 0 likes
RT @HPouransari: We released the code for dataset-decomposition [NeurIPS 2024]: a simple method to speed up LLM pre-training using seq…
0 replies · 6 retweets · 0 likes
Check out FastVLM from @Apple MLR, a new family of vision language models with state-of-the-art size/accuracy/latency tradeoff:
What matters for runtime optimization in Vision Language Models (VLMs)? Vision encoder latency? Image resolution? Number of visual tokens? LLM size? In this thread, we break it all down and introduce FastVLM, a family of fast and accurate VLMs. (1/n)
0 replies · 2 retweets · 15 likes
RT @i_mirzadeh: We have open-sourced GSM-Symbolic templates and generated data! - Github: - Hugging Face: https:…
0 replies · 4 retweets · 0 likes
RT @thoma_gu: Image-to-3D, monocular depth estimation, camera pose estimation, …, can we achieve all of this with just ONE model easily?…
0 replies · 51 retweets · 0 likes
Join our team! Applications are open until December 16. Submit your application through the portal below, and feel free to send me a message afterward. This position is also available in Cupertino!
Our Machine Learning Research (MLR) team at #Apple is seeking a passionate AI resident to conduct research on multi-modal generative models (vision, 3D, language, audio) and to explore effective control mechanisms for these models. Application details:
0 replies · 3 retweets · 13 likes
RT @HPouransari: We just released the MobileCLIP zero-shot image classification iOS app. MobileCLIP is a family of fast on-device CLIP mo…
0 replies · 6 retweets · 0 likes
RT @PavankumarVasu: Presenting our app for real-time zero-shot image classification using MobileCLIP! Fully open-source: code & models av…
0 replies · 11 retweets · 0 likes
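The two retweets above are about zero-shot image classification with MobileCLIP. A minimal sketch of the standard CLIP-style zero-shot recipe follows; the random vectors below stand in for the real MobileCLIP image and text encoders (which are not reproduced here), and all names are illustrative assumptions, not the app's actual code.

```python
import numpy as np

def normalize(x):
    """L2-normalize embeddings along the last axis."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def zero_shot_classify(image_emb, class_text_embs, class_names):
    """Pick the class whose text embedding is most cosine-similar
    to the image embedding (the core CLIP zero-shot step)."""
    sims = normalize(class_text_embs) @ normalize(image_emb)
    return class_names[int(np.argmax(sims))]

# Stand-in embeddings: one class's text embedding is built to align
# with the image embedding so the example has a clear winner.
rng = np.random.default_rng(0)
image_emb = rng.normal(size=512)
text_embs = rng.normal(size=(3, 512))
text_embs[1] = image_emb + 0.1 * rng.normal(size=512)

print(zero_shot_classify(image_emb, text_embs, ["cat", "dog", "car"]))
# prints "dog", the class whose text embedding was built to match the image
```

In the real app, `image_emb` comes from encoding a camera frame and `class_text_embs` from encoding prompts like "a photo of a dog" with the paired text encoder.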
RT @alaa_nouby: Does autoregressive pre-training work for vision? Delighted to share AIMv2, a family of strong, scalable, and open vision…
0 replies · 28 retweets · 0 likes
RT @thoma_gu: Excited to introduce our recent work @ AppleMLR -- DART: Denoising AutoRegressive Transformer for Scalable Text-to-Image Gen…
0 replies · 67 retweets · 0 likes
RT @MFarajtabar: 1/ Can Large Language Models (LLMs) truly reason? Or are they just sophisticated pattern matchers? In our latest preprint,…
0 replies · 1K retweets · 0 likes
RT @itsbautistam: I am looking for strong PhD interns to join Apple MLR late 2024 or early 2025! Topics will be around training large-scale…
0 replies · 70 retweets · 0 likes
RT @PierreAblin: Apple ML research in Paris has multiple open internship positions! We are looking for Ph.D. students interested in gen…
0 replies · 79 retweets · 0 likes
RT @jramapuram: Enjoy attention? Want to make it ~18% faster? Try out Sigmoid Attention. We replace the traditional softmax in attention wi…
0 replies · 166 retweets · 0 likes
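The retweet above describes replacing attention's row-wise softmax with an element-wise sigmoid. A minimal single-head NumPy sketch of that idea follows; the `-log(n)` score bias (to keep initial weight scale comparable to softmax) and all function names are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard scaled dot-product attention: a softmax over keys
    normalizes each query's weights to sum to 1."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def sigmoid_attention(Q, K, V):
    """Sigmoid variant: each query-key score is squashed independently
    by a sigmoid, so no row normalization (and no row-wise reduction)
    is needed. The -log(n) bias is an assumption for illustration."""
    n, d = K.shape
    scores = Q @ K.T / np.sqrt(d) - np.log(n)
    weights = 1.0 / (1.0 + np.exp(-scores))  # element-wise, unnormalized
    return weights @ V

# Tiny usage example with random single-head inputs.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
out = sigmoid_attention(Q, K, V)
```

Dropping the softmax removes the row-wise max/sum reductions, which is one plausible source of the speedup the tweet mentions; the actual gains come from a fused kernel described in the paper.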
RT @ruomingpang: As Apple Intelligence is rolling out to our beta users today, we are proud to present a technical report on our Foundation…
0 replies · 196 retweets · 0 likes
RT @HPouransari: Base LLMs are frequently updated (e.g., LLaMA 1→2→3...), leading to higher performance. However, a more accurate base mode…
0 replies · 16 retweets · 0 likes