![Xia “Ben” Hu Profile](https://pbs.twimg.com/profile_images/1111102127622053888/himJCamZ_x96.jpg)
Xia “Ben” Hu
@huxia
Followers: 655
Following: 218
Statuses: 103
Associate Professor of CS@Rice working on AutoML, XAI and Network Analytics. Author of AutoKeras and NCF.
Houston, TX
Joined September 2009
RT @Saboo_Shubham_: AWS just released a new Multi-Agent AI framework. It lets you manage multiple AI agents, dynamically route LLM queries,…
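As a generic illustration of the dynamic-routing idea mentioned in the retweet (this is not the AWS framework's API; the agent names and the keyword rule below are made up for illustration), a coordinator can inspect an incoming query and dispatch it to one of several specialized agents:

```python
# Generic sketch of multi-agent query routing; not tied to any specific framework.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Agent:
    name: str
    handle: Callable[[str], str]  # takes a query, returns a response


def route(query: str, agents: Dict[str, Agent], default: str) -> str:
    """Pick an agent with a trivial keyword rule; a real system might use an LLM classifier."""
    lowered = query.lower()
    if "code" in lowered or "bug" in lowered:
        return agents["coding"].handle(query)
    if "weather" in lowered:
        return agents["weather"].handle(query)
    return agents[default].handle(query)


if __name__ == "__main__":
    agents = {
        "coding": Agent("coding", lambda q: f"[coding agent] {q}"),
        "weather": Agent("weather", lambda q: f"[weather agent] {q}"),
        "general": Agent("general", lambda q: f"[general agent] {q}"),
    }
    print(route("Why does my code raise a KeyError?", agents, default="general"))
```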
If this isn't racism, then what is it?
NeurIPS acknowledges that the cultural generalization made by the keynote speaker today reinforces implicit biases by making generalizations about Chinese scholars. This is not what NeurIPS stands for. NeurIPS is dedicated to being a safe space for all of us. We want to address the comment made during the invited talk this afternoon, as it is something NeurIPS does not condone and it does not align with our code of conduct. We are addressing this issue with the speaker directly. NeurIPS is dedicated to being a diverse and inclusive place where everyone is treated equally.
RT @hanjie_chen: 📢Postdoc Position📢 Dr. Xia Hu @huxia and I are looking for a Chairman's Postdoctoral Fellow in Efficient and Trustworthy…
Without fine-tuning, Self-Extend has significantly improved Gemma-2b-it's performance on the needle-in-the-haystack task, increasing its effective context from less than 8k (the pretraining window) to over 90k!!
Despite the mixed feelings about Google's latest Gemma model, we're big fans! @GoogleAI Why? Coz we found it pairs incredibly well with our SelfExtend 🤣🤣🤣 - like, perfectly! With Self-Extend, no fine-tuning needed, we effortlessly expanded Gemma's window from 8k to 90k+! On the 'Needle in the haystack' task, Gemma-2b-it struggled even at 8k, but with SelfExtend, Gemma-2b-it easily tackles it within the 90k range! #AI #Gemma #SelfExtend #LLMs 🚀 Paper: Github:
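For readers curious how an approach like Self-Extend can stretch a context window with no fine-tuning, here is a minimal, illustrative Python sketch of the general idea (this is not the authors' released code): keep exact relative positions inside a small neighbor window and map far-away tokens onto coarser position groups via floor division, so long inputs never exceed the positional range seen during pretraining. The `neighbor_window` and `group_size` values below are assumed hyperparameters chosen only for illustration.

```python
import numpy as np

# Sketch of the grouped/neighbor position idea behind Self-Extend (illustrative only).
def self_extend_relative_positions(seq_len, neighbor_window=512, group_size=8):
    """Return a (seq_len, seq_len) matrix of modified relative positions for causal attention."""
    q = np.arange(seq_len)[:, None]   # query positions
    k = np.arange(seq_len)[None, :]   # key positions
    rel = q - k                       # standard relative positions

    # Grouped relative positions for distant tokens: floor-divide both sides,
    # then shift so the mapping stays continuous at the window boundary.
    grouped = q // group_size - k // group_size + (neighbor_window - neighbor_window // group_size)

    # Exact positions for nearby tokens, grouped (compressed) positions for far-away ones.
    return np.where(rel < neighbor_window, rel, grouped)


if __name__ == "__main__":
    pos = self_extend_relative_positions(seq_len=16, neighbor_window=4, group_size=2)
    # Distant tokens get compressed positions far below the raw distance of 15.
    print(pos[-1])
```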
RT @JiayiYuan99: Excited to introduce KIVI🥝, the first 2-bit KV cache quantization breakthrough! 🚀 KIVI can be directly integrated into exis…
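As a rough illustration of what 2-bit KV cache quantization involves, here is a small Python sketch of asymmetric 2-bit quantization applied to a toy key cache. This is not the KIVI implementation; the grouping axis, tensor shapes, and function names are assumptions made only for illustration.

```python
import numpy as np

# Illustrative asymmetric 2-bit quantization of a KV-cache tensor (not the KIVI code).
def quantize_2bit(x, axis):
    """Quantize to 4 levels (2 bits) along `axis`; returns codes plus scale and zero-point."""
    xmin = x.min(axis=axis, keepdims=True)
    xmax = x.max(axis=axis, keepdims=True)
    scale = (xmax - xmin) / 3.0                     # 2 bits -> levels 0..3
    scale = np.where(scale == 0, 1.0, scale)        # avoid division by zero on constant slices
    codes = np.clip(np.round((x - xmin) / scale), 0, 3).astype(np.uint8)
    return codes, scale, xmin


def dequantize_2bit(codes, scale, zero):
    return codes.astype(np.float32) * scale + zero


if __name__ == "__main__":
    # Toy key cache: (num_tokens, head_dim); quantize per channel, i.e. over the token axis.
    keys = np.random.randn(128, 64).astype(np.float32)
    codes, scale, zero = quantize_2bit(keys, axis=0)
    approx = dequantize_2bit(codes, scale, zero)
    print("mean abs error:", np.abs(keys - approx).mean())
```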
RT @JingfengY: We welcome more people to try Self-Extend/LongLM on more diverse tasks and models to see when it works and identify its limi…