![Yoad Tewel Profile](https://pbs.twimg.com/profile_images/1662055738288553984/hKoNs80E_x96.jpg)
Yoad Tewel
@YoadTewel
Followers
197
Following
468
Statuses
60
Research Scientist @NvidiaAI · PhD student @ Tel-Aviv University
Joined May 2017
I'm going to present ConsiStory at #SIGGRAPH2024 this Monday @ 2pm! If you're around this week, DM me if you want to chat! Details below ⬇️🧵
Nvidia presents ConsiStory: Training-Free Consistent Text-to-Image Generation. Paper page: Enables Stable Diffusion XL (SDXL) to generate consistent subjects across a series of images, without additional training.
RT @ItamarZimerman: Thrilled to share that our work has been accepted to #ICLR2025! This research introduces SoTA attribution methods f…
RT @hila_chefer: VideoJAM is our new framework for improved motion generation from @AIatMeta. We show that video generators struggle with m…
Very cool idea: build a personalization encoder so that each query selects the right values encoded from the input image, resulting in a more expressive representation. The results look awesome!
Ever wondered how a SINGLE token represents all subject regions in personalization? Many methods use this token in cross-attention, meaning all semantic parts share the same single attention value. We present Nested Attention, a mechanism that generates localized attention values.
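The contrast in the quoted tweet can be made concrete with a toy example. The NumPy sketch below is purely illustrative (the function names, shapes, and inner-attention formulation are my assumptions, not the paper's actual Nested Attention implementation): a single subject token hands every query the same value vector, whereas an inner attention over image-derived tokens produces a different value per query.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def shared_value(queries, subject_key, subject_value):
    # Standard cross-attention with ONE subject token: every query that
    # attends to the token receives the SAME value vector, only scaled
    # by its scalar attention weight.
    weights = softmax(queries @ subject_key)       # (n,)
    return np.outer(weights, subject_value)        # (n, d): one shared value

def localized_value(queries, image_keys, image_values):
    # Localized-values idea (simplified): an inner attention over tokens
    # derived from the input image computes a DIFFERENT value vector for
    # each query, so each semantic region selects its own features.
    inner = softmax(queries @ image_keys.T, axis=-1)  # (n, m)
    return inner @ image_values                       # (n, d): per-query values

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))                       # 4 spatial queries, dim 8
out = localized_value(q,
                      rng.standard_normal((16, 8)),   # 16 image-derived keys
                      rng.standard_normal((16, 8)))   # 16 image-derived values
print(out.shape)
```

In the shared-value case all output rows are scalar multiples of one vector; in the localized case each query mixes the image values with its own weights, which is the "more expressive representation" the quote refers to.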
The one true benchmark for personalization: results from @pika_labs are crazy good!
RT @yoni_kasten: Excited to present TracksTo4D at NeurIPS 2024! Our method enables efficient reconstruction of dynamic 3D point clouds and…
RT @omri_kaduri: Unveiling new insights into Vision-Language Models (VLMs)! In collaboration with @OneViTaDay & @talidekel, we analyzed…
RT @yoavhacohen: Excited to announce LTX-Video! Our new text-to-video model generates stunning, high-quality videos faster than real-time…
RT @_akhaliq: Nvidia presents Add-it: Training-Free Object Insertion in Images With Pretrained Diffusion Models
RT @RoyiRassin: How diverse are the outputs of text-to-image models and how can we measure that? In our new work, we propose a measure base…
Excited to release the code and demo for ConsiStory, our #SIGGRAPH2024 paper! No fine-tuning needed - just fast, subject-consistent image generation! Check it out here: Code: Demo:
RT @RinonGal: TL;DR - we improve text-to-image output quality by tuning an LLM to predict ComfyUI workflows tailored to each generation pro…
RT @ChenTessler: Excited to share our latest work! Masked Mimic: Unified Physics-Based Character Control Through Masked Motion Inpai…
RT @RinonGal: We've released the code for our #SIGGRAPHAsia2024 TurboEdit paper, where we edit images in 3 steps using SDXL-Turbo https:/…
@ziv_ravid @relnox Israeli research has shown this too: I agree that we're still far from creating real computer games, but it's a first step in an interesting direction that could develop a lot once there's enough scale for new ideas - exactly like the evolution of video models.
RT @segal_eran: Our new arXiv paper: We built a generative AI model of blood glucose levels from 10 million continuous glucose monitoring m…
RT @yuvalalaluf: If you're at #SIGGRAPH2024 this week, come check out our talks on ConceptLab & Cross-Image Attention this Thursday @…
We have many more cool examples on our project page: And a shoutout to my great collaborators: @omri_kaduri @RinonGal @yoni_kasten @liorwolf @GalChechik @AtzmonYuval