Bhuwan Dhingra Profile
Bhuwan Dhingra

@bhuwandhingra

Followers
1K
Following
172
Statuses
96

Natural Language Processing / Machine Learning research. Assistant Professor @dukecompsci, @duke_nlp; Research Scientist @Apple

Durham, NC
Joined May 2014
@bhuwandhingra
Bhuwan Dhingra
3 months
@MohitIyyer @umdcs @ClipUmd @UMass_NLP Exciting news! Congrats Mohit!
0
0
1
@bhuwandhingra
Bhuwan Dhingra
4 months
New benchmark for multimodal information extraction from materials science papers!
@ghazalkhn
Ghazal Khalighinejad
4 months
πŸ“’ New preprint on a benchmark for multimodal information extraction! Structured data extraction from long documents consisting of interconnected data in text, tables, and figures remains a challenge. MatViX aims to fill this gap.
1
1
24
@bhuwandhingra
Bhuwan Dhingra
4 months
Paper: Co-authors: @YukunHuang9 @sanxing_chen James Cai Data: Code:
0
1
1
@bhuwandhingra
Bhuwan Dhingra
7 months
1
0
2
@bhuwandhingra
Bhuwan Dhingra
7 months
RT @ghazalkhn: πŸŽ‰ Excited to share that IsoBench has been accepted at @COLM_conf! IsoBench features isomorphic inputs across Math/Graph pro…
0
5
0
@bhuwandhingra
Bhuwan Dhingra
8 months
Exciting new work from Roy on membership inference for LLMs. TL;DR: conditional likelihood is a strong predictor of membership.
@RoyXie_
Roy Xie
8 months
🚨 Breaking: >90% AUC on the WikiMIA dataset for membership inference! Want to know if your data is in LLM's training set?πŸ” Check out our latest work "ReCaLL: Membership Inference via Relative Conditional Log-Likelihoods" ✨ 🧡1/6
1
1
10
@bhuwandhingra
Bhuwan Dhingra
8 months
@florian_tramer @ruoyuxyz @daphneipp @ChunyuanDeng @armancohan @WeijiaShi2 @niloofar_mire @iamgroot42 @jeff_cheng_77 @alexsablay @aterzis @srxzr @kandpal_nikhil @Chris_Choquette Though your work definitely suggests we should try out our idea on more datasets. Stay tuned for that!
0
0
5
@bhuwandhingra
Bhuwan Dhingra
8 months
New paper on prompt extraction from LLMs by @JunlinWang3! TL;DR: GPT-4, Gemini, Llama-2 and Mixtral are all susceptible to prompt extraction -- but simple defenses can go a long way.
@JunlinWang3
Junlin Wang
8 months
🦝Excited to announce our work on robustness & security of LLM systems! π‘πšπœπœπ¨π¨π§: Prompt Extraction Benchmark of LLM-Integrated Applications Prompt extraction from LLM-integrated apps like GPT-s is a critical security concern. ‼️
0
1
4
@bhuwandhingra
Bhuwan Dhingra
9 months
RT @ghazalkhn: πŸ” How well can large language models extract structured data from materials science literature? Accepted at #ACL2024nlp Fin…
0
1
0
@bhuwandhingra
Bhuwan Dhingra
9 months
Welcome Shuyan! Really excited to work with you soon :D
@shuyanzhxyc
Shuyan Zhou
9 months
I am thrilled to announce that I will be joining @DukeU @dukecompsci as an Assistant Professor in summer 2025. Super excited for the next chapter! Stay tuned for the launch of my lab πŸ§ πŸ€–
1
0
11
@bhuwandhingra
Bhuwan Dhingra
9 months
Check out @RichStureborg's NAACL paper on tailoring vaccine messages to common opinions for (hopefully) more effective interventions!
@RichStureborg
Rich Stureborg
9 months
Can we tailor LLM responses to better establish common-ground in high-stakes communication? Happy to announce our recent #NAACL findings paper where we investigate a new perspective on ~LLM personalization~ through a task for vaccine messaging. 1/3...
0
0
5
@bhuwandhingra
Bhuwan Dhingra
11 months
RT @DeqingFu: Do multimodal foundation models treat every modality equally? Hint: Humans have picture superiority. How about machines? In…
0
25
0