![Zijing Ou Profile](https://pbs.twimg.com/profile_images/1597936642379112448/9YfOENm4_x96.jpg)
Zijing Ou
@JzinOu
Followers: 142 · Following: 679 · Statuses: 52
PhD student @ Imperial College London
Joined December 2017
RT @JiajunHe614: 🎉 Our paper Training Neural Samplers with Reverse Diffusive KL Divergence has been accepted at #AISTATS2025! We propose…
RT @MingtianZhang: 🎉Thrilled to share that our paper on improving diffusion models with a better covariance estimation method has been acce…
RT @YiCheng77783310: 🚀 Accepted by ICLR’25! We introduce Integrative Decoding, a novel decoding algorithm to tackle the hallucination prob…
RT @linzhengisme: 🚀 Meet EvaByte: The best open-source tokenizer-free language model! Our 6.5B byte LM matches modern tokenizer-based LMs w…
@ValentinDeBort1 @JamesTThorn @ArnaudDoucet1 @agalashov @ArthurGretton thanks for the clarification! awesome work 🥳😍
@ArnaudDoucet1 @ValentinDeBort1 @agalashov @ArthurGretton but for diffusion models, which typically don't have causal masks, that means it needs to evaluate the score of q, b_t^q, n_L - n times, instead of in parallel? That seems not very efficient. Am I missing something?
RT @YuyangW95: 1/n 🚨New preprint! Our work “Coordinate In and Value Out: Training Flow Transformers in Ambient Space”
RT @thoma_gu: Life update: Excited to share that I will be joining @CIS_Penn @PennEngineers as an Assistant Professor in Fall 2025!🤯 I’m…
@ArashVahdat @su_lin_liu @wgrathwohl P_AR(x_0) can be treated as a reward function; a similar importance sampling trick can be applied to any reward function with pre-trained discrete diffusion models
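A minimal self-normalized importance sampling sketch of the trick mentioned above. The sampler and reward here (`sample_diffusion`, `reward`) are toy stand-ins, not the actual pre-trained discrete diffusion model or P_AR(x_0); only the reweight-and-resample pattern is the point.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_diffusion(n):
    # Toy proposal standing in for a pre-trained discrete diffusion
    # sampler: random binary sequences of length 8.
    return rng.integers(0, 2, size=(n, 8))

def reward(x):
    # Toy reward standing in for P_AR(x_0): favors sequences with more ones.
    return np.exp(x.sum(axis=1))

# Self-normalized importance sampling: weight proposal samples by the
# reward, normalize, then resample according to those weights.
xs = sample_diffusion(1000)
w = reward(xs)
w = w / w.sum()
idx = rng.choice(len(xs), size=len(xs), p=w)
resampled = xs[idx]

# The resampled set is biased toward high-reward sequences.
print(xs.mean(), resampled.mean())
```

The same resampling step works for any nonnegative reward, which is the point of the tweet: the reward need not be P_AR(x_0) specifically.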
RT @agopal42: Excited to present "Recurrent Complex-Weighted Autoencoders for Unsupervised Object Discovery" at #NeurIPS2024! TL;DR: Our mo…