![Pauline Luc Profile](https://pbs.twimg.com/profile_images/2109060012/Photo_profile_x96.jpg)
Pauline Luc (@paulineluc_)
Followers: 472 · Following: 28 · Statuses: 25
Research Scientist @GoogleDeepMind - 🦩 | PhD at @MetaAI & @Inria
Joined April 2012
RT @skandakoppula: We're excited to release TAPVid-3D: an evaluation benchmark of 4,000+ real world videos and 2.1 million metric 3D point…
RT @CarlDoersch: We present a new SOTA on point tracking, via self-supervised training on real, unlabeled videos! BootsTAPIR achieves 67.4%…
RT @Mesnard_Thomas: Thrilled to present to you Gemma! A family of lightweight, state-of-the-art and open models by @GoogleDeepMind. We prov…
RT @alexsablay: Our latest release @MistralAI Mixtral 8x7B mixture of experts - performance of a GPT3.5 - inference cost of a 12B mod…
RT @PierreStock: Mixtral 8x7B is here, 11 weeks only after Mistral 7B. Outperforms Llama 2 70B and GPT 3.5 on most benchmarks, at the infer…
RT @demishassabis: Thrilled to share #Lyria, the world's most sophisticated AI music generation system. From just a text prompt Lyria produ…
RT @arankomatsuzaki: Demystifying CLIP Data — Reveals CLIP's data curation approach and makes it open to the community. repo:
RT @emilymbender: Okay, so that AI letter signed by lots of AI researchers calling for a "Pause [on] Giant AI Experiments"? It's just dripp…
RT @anas_awadalla: 🦩 Introducing OpenFlamingo! A framework for training and evaluating Large Multimodal Models (LMMs) capable of processing…
RT @AntoineYang2: Introducing Vid2Seq, a new visual language model for dense video captioning. To appear at #CVPR2023. Work done @Google w…
RT @DeepMind: In case you missed it...Flamingo 🦩 a new SOTA visual language model. Read more below ⬇️ Paper: Blog:…
RT @conormdurkan: Chatting with Flamingo about images is definitely the most organic experience I’ve had with an ML model. The ability to r…
RT @antoine77340: Finally able to share what I have been working on this year! 🦩 Tl;dr: We took our best LM (Chinchilla), froze it and adde…
RT @arthurmensch: 10B extra parameters for adaptation and visual conditioning, new cross-modality data and a lot of love makes Chinchilla a…
RT @Inoryy: A group of flamingos is called a "flamboyance", which could be an apt description for the family of vision-language models I'm thr…
RT @millikatie: Great to finally share our newest addition to the DeepMind large-scale model zoo! 🦩
RT @malcolm_rynlds: Was a total pleasure to be part of the team for Flamingo. Lots of exciting capabilities that we are just beginning to e…
RT @yanahasson: A lot happened in the last year! I defended my PhD and joined @DeepMind where I worked with an incredible team on Flaming…
RT @jalayrac: Delighted to finally be able to share what I have been up to for the last year or so 🦩. I am really proud of what we achieved…
RT @jeffdonahue: Back on Twitter after a quick 13 year hiatus to shamelessly plug 🦩! It's been tremendously satisfying working on this with…