Shangbin Feng

@shangbinfeng

Followers: 2K
Following: 9K
Media: 118
Statuses: 840

PhD student @uwcse @uwnlp. Multi-LLM collaboration, social NLP, networks and structures. #水文学家

Joined June 2021
@shangbinfeng
Shangbin Feng
4 months
👀 How to find a better adapted model? ✨ Let the models find it for you! 👉🏻 Introducing Model Swarms, where multiple LLM experts collaboratively search for new adapted models in the weight space and discover their new capabilities. 📄 Paper:
Tweet media one
4
40
192
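The Model Swarms tweet above describes multiple LLM experts collaboratively searching the weight space for better adapted models. As a rough illustration of that kind of search, here is a minimal particle-swarm-style sketch over flattened weight vectors; the `utility` function, update constants, and overall setup are illustrative assumptions, not the paper's actual algorithm.

```python
# Hedged sketch: particle-swarm-style search in weight space.
# Each "expert" is a flattened weight vector; `utility` stands in for
# validation performance of the model those weights would define (hypothetical).
import numpy as np

rng = np.random.default_rng(0)

def utility(weights: np.ndarray) -> float:
    # Placeholder objective; in practice this would evaluate the adapted
    # model (e.g., a LoRA delta) on the target task's validation set.
    return -float(np.sum((weights - 1.0) ** 2))

def swarm_search(experts, steps=50, inertia=0.5, c_personal=1.0, c_global=1.0):
    positions = [e.copy() for e in experts]
    velocities = [np.zeros_like(e) for e in experts]
    personal_best = [p.copy() for p in positions]
    global_best = max(positions, key=utility).copy()

    for _ in range(steps):
        for i, pos in enumerate(positions):
            r1, r2 = rng.random(), rng.random()
            # Move each expert toward its own best and the swarm's best point.
            velocities[i] = (inertia * velocities[i]
                             + c_personal * r1 * (personal_best[i] - pos)
                             + c_global * r2 * (global_best - pos))
            positions[i] = pos + velocities[i]
            if utility(positions[i]) > utility(personal_best[i]):
                personal_best[i] = positions[i].copy()
            if utility(positions[i]) > utility(global_best):
                global_best = positions[i].copy()
    return global_best

# Usage: start from a few "expert" weight vectors and search collaboratively.
experts = [rng.normal(size=8) for _ in range(4)]
print(round(utility(swarm_search(experts)), 4))
```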
@shangbinfeng
Shangbin Feng
2 years
Thank you for the best paper award! #ACL2023NLP
Tweet media one
Tweet media two
Tweet media three
Tweet media four
29
33
357
@shangbinfeng
Shangbin Feng
2 years
Do LLMs have inherent political leanings? How do their political biases impact downstream tasks? We answer these questions in our #ACL2023 paper: "From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models"
Tweet media one
2
42
174
@shangbinfeng
Shangbin Feng
5 months
Tweet media one
8
3
176
@shangbinfeng
Shangbin Feng
2 years
LLMs are adopted in tasks and contexts with implicit graph structures, but are LMs graph reasoners? Can LLMs perform graph-based reasoning in natural language? Introducing NLGraph, a comprehensive testbed of graph-based reasoning designed for LLMs.
2
43
171
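NLGraph, per the tweet above, poses graph-based reasoning tasks to LLMs in natural language. A minimal sketch of that general recipe, assuming a simple connectivity task; the prompt wording and task choice are illustrative, not the benchmark's actual format.

```python
# Hedged sketch: turn a random graph into a natural-language reasoning query
# (here, connectivity), the flavor of task a graph-reasoning testbed might pose.
import random

def make_connectivity_question(n_nodes=6, n_edges=7, seed=0):
    random.seed(seed)
    edges = set()
    while len(edges) < n_edges:
        u, v = random.sample(range(n_nodes), 2)
        edges.add((min(u, v), max(u, v)))
    src, dst = random.sample(range(n_nodes), 2)
    edge_text = ", ".join(f"node {u} and node {v}" for u, v in sorted(edges))
    prompt = (f"In an undirected graph, there are edges between {edge_text}. "
              f"Is there a path from node {src} to node {dst}?")
    # Ground-truth answer via a simple graph traversal, so model outputs
    # can be checked automatically.
    adj = {i: [] for i in range(n_nodes)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    frontier, seen = [src], {src}
    while frontier:
        node = frontier.pop()
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return prompt, dst in seen

prompt, answer = make_connectivity_question()
print(prompt, "->", answer)
```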
@shangbinfeng
Shangbin Feng
2 years
Ever felt hopeless when LLMs make factual mistakes? Always waiting for big companies to release LLMs with improved knowledge abilities? Introducing CooK, a community-driven initiative to empower black-box LLMs with modular and collaborative knowledge.
5
37
161
@shangbinfeng
Shangbin Feng
7 months
Don't hallucinate, abstain! #ACL2024 "Hey ChatGPT, is your answer true?" Sadly LLMs can't reliably self-eval/self-correct :( Introducing teaching LLMs to abstain via multi-LLM collaboration! A thread 🧵
Tweet media one
3
26
144
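The abstain thread above argues that a single LLM cannot reliably self-evaluate and proposes abstention via multi-LLM collaboration instead. Below is a minimal sketch of one plausible reading, where peer models vote on whether a proposed answer is supported and the system abstains otherwise; the `reviewers`, prompt, and voting threshold are hypothetical stand-ins, not the paper's method.

```python
# Hedged sketch: abstain when peer LLMs flag a proposed answer as unsupported.
from typing import Callable, List

def collaborative_abstain(question: str,
                          proposed_answer: str,
                          reviewers: List[Callable[[str], str]],
                          min_approvals: int = 2) -> str:
    """Return the answer only if enough reviewer models endorse it."""
    approvals = 0
    for review in reviewers:
        verdict = review(
            f"Question: {question}\nProposed answer: {proposed_answer}\n"
            "Is this answer factually supported? Reply YES or NO."
        )
        approvals += verdict.strip().upper().startswith("YES")
    return proposed_answer if approvals >= min_approvals else "[abstain]"

# Toy reviewers standing in for real LLM calls.
reviewers = [lambda p: "YES", lambda p: "NO", lambda p: "YES"]
print(collaborative_abstain("Capital of Australia?", "Canberra", reviewers))
```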
@shangbinfeng
Shangbin Feng
9 months
First paper to reach 100 citations, and also my first ever paper, where it all started 🥥
Tweet media one
6
3
125
@shangbinfeng
Shangbin Feng
9 months
ACL Accepts: LLMs +
- abstain
- detect and design social bots
- geometric knowledge reasoning
- simulate reactions to detect misinfo
- robustness of AI text detection
9
10
87
@shangbinfeng
Shangbin Feng
8 months
ICLR: International Conference on seLected summer inteRnship works.
1
1
78
@shangbinfeng
Shangbin Feng
6 months
Thank you for the double awards! #ACL2024 @aclmeeting
Area Chair Award, QA track
Outstanding Paper Award
Huge thanks to collaborators!!! @WeijiaShi2 @YikeeeWang @Wenxuan_Ding_ @vidhisha_b @tsvetshop
@shangbinfeng
Shangbin Feng
7 months
Don't hallucinate, abstain! #ACL2024 "Hey ChatGPT, is your answer true?" Sadly LLMs can't reliably self-eval/self-correct :( Introducing teaching LLMs to abstain via multi-LLM collaboration! A thread 🧵
Tweet media one
4
7
79
@shangbinfeng
Shangbin Feng
2 years
Looking for a summarization factuality metric? Are existing ones hard to use, requiring re-training, or not compatible with HuggingFace? Introducing FactKB, an easy-to-use, shenanigan-free, and state-of-the-art summarization factuality metric!
1
18
73
@shangbinfeng
Shangbin Feng
1 year
What should be the desirable behaviors of LLMs when knowledge conflicts arise? Are LLMs currently exhibiting those desirable behaviors? Introducing Knowledge Conflict, a protocol for resolving knowledge conflicts in LLMs and an evaluation framework.
Tweet media one
3
16
73
@shangbinfeng
Shangbin Feng
3 months
Attending #EMNLP2024 to present:
Pluralism through Modular LLMs, Wed 10:30
Abstain with Multilingual Feedback w/ @Wenxuan_Ding_, Tue 16:00
LLM Graph Reasoning w/ @MatthewZ21157 @HengWang_xjtu, Tue 11:00
Chat!
3
14
72
@shangbinfeng
Shangbin Feng
7 months
How can LLMs help counter (LLM-generated) misinformation? #ACL2024
Introducing DELL, an approach that
1) simulates diverse personas and generates synthetic reactions to news 👩‍🌾👨‍🎓👩‍🔬👨‍🚒
2) goes beyond classification and provides explanations 🔎💬
A thread 🧵
Tweet media one
2
14
56
@shangbinfeng
Shangbin Feng
7 months
🚨 Detecting social media bots has always been an arms race: we design better detectors with advanced ML tools, while more evasive bots emerge adversarially. What do LLMs bring to the arms race between bot detectors and operators? A thread 🧵 #ACL2024
Tweet media one
2
10
55
@shangbinfeng
Shangbin Feng
7 months
What can we do when certain values, cultures, and communities are underrepresented in LLM alignment? Introducing Modular Pluralism, where a general LLM interacts with a pool of specialized community LMs in various modes to advance pluralistic alignment. A thread 🧵
Tweet media one
1
15
54
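The Modular Pluralism tweet above describes a general LLM interacting with a pool of specialized community LMs in various modes. Here is a minimal sketch of one such interaction pattern, where community LMs contribute perspective-specific comments that the general model then synthesizes; the mode, prompts, and helper names are illustrative assumptions rather than the paper's implementation.

```python
# Hedged sketch: a general LLM aggregates comments from community-specific LMs.
from typing import Callable, Dict

def pluralistic_answer(question: str,
                       community_lms: Dict[str, Callable[[str], str]],
                       general_lm: Callable[[str], str]) -> str:
    # 1) Each community LM contributes its perspective.
    comments = {name: lm(question) for name, lm in community_lms.items()}
    # 2) The general LLM synthesizes them into one pluralistic response.
    briefing = "\n".join(f"- {name}: {text}" for name, text in comments.items())
    return general_lm(f"Question: {question}\nCommunity perspectives:\n{briefing}\n"
                      "Write an answer that reflects these perspectives.")

# Toy stand-ins for real model calls.
community = {"community_a": lambda q: "Perspective A.",
             "community_b": lambda q: "Perspective B."}
toy_general = lambda p: "Synthesized: " + " / ".join(
    line[2:] for line in p.splitlines() if line.startswith("- "))
print(pluralistic_answer("Is X acceptable?", community, toy_general))
```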
@shangbinfeng
Shangbin Feng
7 months
LLMs should abstain when they lack knowledge/confidence. However, this abstain capability is far from equitable for diverse language speakers: while working well in English, existing abstain methods drop by up to 20% when employed in low-resource languages. What do we do about it? A 🧵
Tweet media one
2
9
43
@shangbinfeng
Shangbin Feng
6 months
Instruction tuning with synthetic graph data leads to graph LLMs, but: are LLMs learning generalizable graph reasoning skills or merely memorizing patterns in the synthetic training data? 🤔 (for example, patterns like how you describe a graph in natural language). A thread 🧵
Tweet media one
2
9
41
@shangbinfeng
Shangbin Feng
7 months
LLM evaluation research isn't just X models over Y datasets across Z settings.
4
0
40
@shangbinfeng
Shangbin Feng
1 year
Knowledge Card is now accepted at @iclr_conf, oral! (with a new name and title)
@shangbinfeng
Shangbin Feng
2 years
Ever felt hopeless when LLMs make factual mistakes? Always waiting for big companies to release LLMs with improved knowledge abilities? Introducing CooK, a community-driven initiative to empower black-box LLMs with modular and collaborative knowledge.
1
8
37
@shangbinfeng
Shangbin Feng
4 months
Congrats to @HengWang_xjtu on his first 100-citation work! An important piece on LLMs and structured data, opening up space for much LLM & graph research. Check it out if you haven't: (ty to collaborators: @TianxingH @zhaoxuan_t @XiaochuangHan @tsvetshop)
Tweet media one
2
5
35
@shangbinfeng
Shangbin Feng
1 year
Knowledge often exists in networks and structures. Can LLMs conduct geometric reasoning over structured knowledge? Introducing Knowledge Crosswords: given a partially filled entity network, LLMs are tasked with figuring out the missing entities.
Tweet media one
1
10
34
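The Knowledge Crosswords tweet above frames the task as filling in missing entities in a partially specified entity network. A minimal sketch of how such an instance might be represented and automatically checked; the schema and example facts are invented for illustration, not drawn from the benchmark.

```python
# Hedged sketch: a partially filled entity network with blanks to resolve,
# plus a checker that verifies a proposed assignment against known facts.
known_facts = {
    ("Seattle", "located_in", "Washington"),
    ("Washington", "part_of", "United States"),
}

# Constraints mention blanks like "?x"; the model must propose entities for them.
constraints = [
    ("Seattle", "located_in", "?x"),
    ("?x", "part_of", "United States"),
]

def satisfies(assignment: dict) -> bool:
    """Check whether filling the blanks yields only facts we know to be true."""
    filled = {tuple(assignment.get(t, t) for t in triple) for triple in constraints}
    return filled <= known_facts

print(satisfies({"?x": "Washington"}))  # True
print(satisfies({"?x": "Oregon"}))      # False
```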
@shangbinfeng
Shangbin Feng
2 years
I am at #ACL2023NLP! You can discuss with me about/at:
- LM political biases: Mon 11am
- Twitter bot detection: Mon 2pm
- LM and knowledge graphs: Tue 4:15pm (all at Frontenac Ballroom and Queen’s Quay)
- [redacted] 🤔
2
4
36
@shangbinfeng
Shangbin Feng
1 year
When you realize you haven't been reimbursed for the last conference yet and it is already time to book for the next (plus unpaid RA for almost a month) 😅.
2
0
33
@shangbinfeng
Shangbin Feng
2 years
Bunch of papers accepted at #ACL2023 🥳
From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models (w/ @chan_young_park @Lyhhhh2333 & Yulia)
1
4
32
@shangbinfeng
Shangbin Feng
5 months
now accepted at EMNLP 2024.
@shangbinfeng
Shangbin Feng
7 months
LLMs should abstain when they lack knowledge/confidence. However, this abstain capability is far from equitable for diverse language speakers: while working well in English, existing abstain methods drop by up to 20% when employed in low-resource languages. What do we do about it? A 🧵
Tweet media one
2
1
27
@shangbinfeng
Shangbin Feng
5 months
Big day for EMNLP! Wishing all the fantastic people the best of luck. You’ve worked hard—like I always say, it’s all about winning! Let’s see those papers! Tremendous papers. Together, we’re making academia great again—believe me!
3
0
27
@shangbinfeng
Shangbin Feng
11 months
Now accepted at @naaclmeeting!
@YuhanLiu_nlp
Yuhan Liu
1 year
What Constitutes a Faithful Summary? In this paper, we first present a surprising finding that existing approaches alter the political opinions and stances of news articles in more than 50% of summaries.
Tweet media one
0
2
27
@shangbinfeng
Shangbin Feng
5 months
now accepted at EMNLP 2024, findings.
@shangbinfeng
Shangbin Feng
6 months
Instruction tuning with synthetic graph data leads to graph LLMs, but: are LLMs learning generalizable graph reasoning skills or merely memorizing patterns in the synthetic training data? 🤔 (for example, patterns like how you describe a graph in natural language). A thread 🧵
Tweet media one
0
0
26
@shangbinfeng
Shangbin Feng
1 year
Now accepted at @NeurIPSConf, spotlight!
@shangbinfeng
Shangbin Feng
2 years
LLMs are adopted in tasks and contexts with implicit graph structures, but are LMs graph reasoners? Can LLMs perform graph-based reasoning in natural language? Introducing NLGraph, a comprehensive testbed of graph-based reasoning designed for LLMs.
2
2
23
@shangbinfeng
Shangbin Feng
2 years
To address extreme levels of being-scooped-by-arxiv anxiety, we’ve applied the following limits to arxiv paper downloads:
- Verified researchers are limited to downloading 6000 papers/day
- Unverified researchers to 600 papers/day
- Undergrad students to 300 papers/day
1
0
24
@shangbinfeng
Shangbin Feng
2 years
Good news on a Monday night🥳
Tweet media one
2
0
22
@shangbinfeng
Shangbin Feng
1 year
I will attend @NeurIPSConf in person to present our spotlight paper NLGraph w/ @HengWang_xjtu! Happy to chat about LLM and knowledge/factuality, (political) biases and fairness, misinformation and social media, LLM+Graphs, and more :)
Tweet media one
0
2
20
@shangbinfeng
Shangbin Feng
8 months
mood
Tweet media one
1
0
21
@shangbinfeng
Shangbin Feng
2 months
Check out our work on information seeking and interactive QA in the medical domain!
@StellaLisy
Stella Li
2 months
31% of US adults use generative AI for healthcare 🤯 But most AI systems answer questions assertively—even when they don’t have the necessary context. Introducing #MediQ, a framework that enables LLMs to recognize uncertainty 🤔 and ask the right questions ❓ when info is missing: 🧵
Tweet media one
0
5
20
@shangbinfeng
Shangbin Feng
2 years
What is the overall bot percentage on Twitter? How many bots voted for Twitter to reinstate Trump? 🚨 New preprint: "BotPercent: Estimating Twitter Bot Populations from Groups to Crowds", out now: A thread 🧵 [1/n]
1
7
19
@shangbinfeng
Shangbin Feng
2 years
Rooftop bbq with @XiaochuangHan to celebrate his amazing ACL reviews
Tweet media one
0
0
18
@shangbinfeng
Shangbin Feng
5 months
now accepted at EMNLP 2024.
@shangbinfeng
Shangbin Feng
7 months
What can we do when certain values, cultures, and communities are underrepresented in LLM alignment? Introducing Modular Pluralism, where a general LLM interacts with a pool of specialized community LMs in various modes to advance pluralistic alignment. A thread 🧵
Tweet media one
0
2
17
@shangbinfeng
Shangbin Feng
2 years
One paper accepted at @emnlpmeeting, “PAR: Political Actor Representation Learning with Social Context and Expert Knowledge”. More details soon. 🎉🎉🎉.
1
1
17
@shangbinfeng
Shangbin Feng
2 years
Our work “BotMoE: Twitter Bot Detection with Community-Aware Mixtures of Modal-Specific Experts” accepted at @SIGIRConf! 🥳🥳🥳 Led by the awesome @Lyhhhh2333.
2
4
17
@shangbinfeng
Shangbin Feng
1 year
Did you know that summarization changes the political leanings of articles? Check out our work 👀
@YuhanLiu_nlp
Yuhan Liu
1 year
What Constitutes a Faithful Summary? In this paper, we first present a surprising finding that existing approaches alter the political opinions and stances of news articles in more than 50% of summaries.
Tweet media one
2
1
16
@shangbinfeng
Shangbin Feng
1 year
When you review for a conference for the first time and they assigned you to 7 submissions 😅.
2
0
16
@shangbinfeng
Shangbin Feng
6 months
Thank you!!!
@aclmeeting
ACL 2025
6 months
🎉 SAC Awards:
14) Don't Hallucinate, Abstain: Identifying LLM Knowledge Gaps via Multi-LLM Collaboration by Feng et al.
15) VariErr NLI: Separating Annotation Error from Human Label Variation by Weber-Genzel et al.
#NLProc #ACL2024NLP
0
1
16
@shangbinfeng
Shangbin Feng
6 months
Collaborators presenting at #ACL2024, all in convention center A1:
Abstain @YikeeeWang @Wenxuan_Ding_, Wed 10:30
Bot Detection @wanherun, Mon 11:00
Geometric Knowledge Reasoning @Wenxuan_Ding_, Mon 12:45
LLM & Misinformation @wanherun, Mon 12:45
AI Text Detection @YichenZW, Tue 16:00
0
1
16
@shangbinfeng
Shangbin Feng
1 year
KGQuiz, a study of LLM knowledge generalization across tasks and contexts, now accepted at @TheWebConf!
1
4
15
@shangbinfeng
Shangbin Feng
9 months
Knowledge Card at @iclrconf Oral! Due to visa issues I could not attend, but we will have the awesome @WeijiaShi2 give the oral talk!
💬 Session: Oral 7B
🕙 Time: Friday, 10 AM
📍 Place: Halle A 7
Paper link: Code & resources:
@shangbinfeng
Shangbin Feng
2 years
Ever felt hopeless when LLMs make factual mistakes? Always waiting for big companies to release LLMs with improved knowledge abilities? Introducing CooK, a community-driven initiative to empower black-box LLMs with modular and collaborative knowledge.
1
2
16
@shangbinfeng
Shangbin Feng
6 months
Tweet media one
Tweet media two
Tweet media three
Tweet media four
@shangbinfeng
Shangbin Feng
7 months
Don't hallucinate, abstain! #ACL2024 "Hey ChatGPT, is your answer true?" Sadly LLMs can't reliably self-eval/self-correct :( Introducing teaching LLMs to abstain via multi-LLM collaboration! A thread 🧵
Tweet media one
0
0
15
@shangbinfeng
Shangbin Feng
2 years
Twitter bot detection research ends. X bot detection research begins.
3
1
14
@shangbinfeng
Shangbin Feng
7 months
Now accepted at @COLM_conf, congrats @YikeeeWang!
@shangbinfeng
Shangbin Feng
1 year
What should be the desirable behaviors of LLMs when knowledge conflicts arise? Are LLMs currently exhibiting those desirable behaviors? Introducing Knowledge Conflict, a protocol for resolving knowledge conflicts in LLMs and an evaluation framework.
Tweet media one
0
0
14
@shangbinfeng
Shangbin Feng
1 year
Paper on spoiler detection in movie reviews now accepted at @emnlpmeeting. Check out the paper and dataset! (full thread pending @wh2213210554)
Tweet media one
0
3
12
@shangbinfeng
Shangbin Feng
4 months
Attending COLM, let’s talk.
0
0
14
@shangbinfeng
Shangbin Feng
4 months
Check it out!
@stanfordnlp
Stanford NLP Group
4 months
For this week’s NLP Seminar, we are thrilled to host @WeijiaShi2 to talk about "Beyond Monolithic Language Models"!
When: 10/17 Thurs 11am PT
Non-Stanford affiliates registration form (closed at 9am PT on the talk day):
Tweet media one
0
1
12
@shangbinfeng
Shangbin Feng
3 months
Submit your awesome work to our workshop on LLM knowledge!
@ManlingLi_
Manling Li
3 months
📢 Our 2nd Knowledgeable Foundation Model workshop will be at AAAI 25!
Submission Deadline: Dec 1st
Thanks to the wonderful organizer team @ZoeyLi20 @megamor2 @Glaciohound @XiaozhiWangNLP @shangbinfeng @silingao and advising committee @hengjinlp @IAugenstein @mohitban47!
Tweet media one
0
2
13
@shangbinfeng
Shangbin Feng
1 year
I will present our work FactKB on summarization factuality @emnlpmeeting virtually, here's the poster :). Paper:
Tweet media one
0
1
12
@shangbinfeng
Shangbin Feng
2 years
With a pool of community-contributed specialized LMs, we propose bottom-up and top-down, two approaches to integrate black-box LLMs with these modular knowledge repos.
Bottom-up: multi-domain knowledge synthesis
Top-down: the LLM selects and activates specialized LMs when necessary
Tweet media one
1
2
12
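The tweet above outlines CooK's two integration modes. Here is a minimal sketch of the top-down flavor, where the black-box LLM first decides which specialized LM to activate and then conditions on its output; the router prompt and the toy model calls are assumptions, not the paper's implementation.

```python
# Hedged sketch: top-down integration of a black-box LLM with specialized LMs.
from typing import Callable, Dict

def top_down_answer(question: str,
                    llm: Callable[[str], str],
                    specialized_lms: Dict[str, Callable[[str], str]]) -> str:
    # 1) Ask the LLM which knowledge domain (if any) is needed.
    domain = llm(f"Which domain best covers this question: "
                 f"{', '.join(specialized_lms)} or none?\nQ: {question}").strip()
    # 2) If a specialized LM is selected, activate it and prepend its knowledge.
    if domain in specialized_lms:
        knowledge = specialized_lms[domain](question)
        return llm(f"Knowledge: {knowledge}\nQ: {question}\nA:")
    return llm(f"Q: {question}\nA:")

# Toy stand-ins for real model calls.
specialists = {"biomedical": lambda q: "Aspirin inhibits COX enzymes."}
toy_llm = lambda prompt: ("biomedical" if "Which domain" in prompt
                          else "It reduces inflammation.")
print(top_down_answer("How does aspirin work?", toy_llm, specialists))
```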
@shangbinfeng
Shangbin Feng
6 months
Wrapped up posters on day 1: thank you @wanherun @Wenxuan_Ding_. On day 2 we will have @YichenZW presenting about machine-generated text detection, poster at 16:00!
Tweet media one
Tweet media two
Tweet media three
@shangbinfeng
Shangbin Feng
6 months
Collaborators presenting at #ACL2024, all in convention center A1:
Abstain @YikeeeWang @Wenxuan_Ding_, Wed 10:30
Bot Detection @wanherun, Mon 11:00
Geometric Knowledge Reasoning @Wenxuan_Ding_, Mon 12:45
LLM & Misinformation @wanherun, Mon 12:45
AI Text Detection @YichenZW, Tue 16:00
0
1
12
@shangbinfeng
Shangbin Feng
2 years
1) LLMs do have political leanings:
- occupying all four quadrants
- more variation re. social issues than economic ones
- GPT-4 is the most liberal of them all
Tweet media one
3
7
12
@shangbinfeng
Shangbin Feng
9 months
Will do full threads for them w/ code and data later! Congrats to @Wenxuan_Ding_ @YichenZW 🎉 Thank you @WeijiaShi2 @YikeeeWang @Wenxuan_Ding_ @vidhisha_b @wanherun @mrwangyou @zhaoxuan_t @HengWang_xjtu @Lyhhhh2333 @TianxingH.
0
0
11
@shangbinfeng
Shangbin Feng
1 year
Happening today at 5:00PM at slot #403! Come say hi.
@shangbinfeng
Shangbin Feng
1 year
I will attend @NeurIPSConf in person to present our spotlight paper NLGraph w/ @HengWang_xjtu! Happy to chat about LLM and knowledge/factuality, (political) biases and fairness, misinformation and social media, LLM+Graphs, and more :)
Tweet media one
0
1
11
@shangbinfeng
Shangbin Feng
1 year
LLMs, taxonomies, and knowledge: check it out!
@qingkaizeng_cs
QingkaiZeng
1 year
🐲 Check out our new preprint! 🔍 Chain-of-Layer is a novel framework for taxonomy induction via prompting #LLMs. Link: Thanks to all collaborators: @YuyangBai02 @zhaoxuan_t @LiangZhenwen @zhihz0535 and @Meng_CS from @ND_CSE; @shangbinfeng from @uwnlp!
Tweet media one
0
0
10
@shangbinfeng
Shangbin Feng
2 years
We propose to train small, independent, and specialized language models on knowledge corpora from diverse sources as modular knowledge repositories. All stakeholders in LLM research and applications could contribute specialized LMs trained on information of their choosing.
Tweet media one
1
1
11
@shangbinfeng
Shangbin Feng
4 months
Preference alignment with wrong answers only! Check out our work 😃.
@jihan_yao
Jihan Yao
4 months
🚀 Varying Shades of Wrong: when no correct answers exist, can alignment still unlock a better outcome? Introducing wrong-over-wrong alignment, where models learn to prefer "less-wrong" over "more-wrong". Surprisingly, aligning with wrong answers only can lead to correct solutions!
Tweet media one
0
1
10
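The quoted tweet above introduces preference alignment built only from wrong answers, preferring "less wrong" over "more wrong". A minimal sketch of how such preference pairs might be constructed, assuming a numeric wrongness score is available; the scorer and margin are hypothetical placeholders, not the paper's actual pipeline.

```python
# Hedged sketch: build "less-wrong over more-wrong" preference pairs.
from itertools import combinations

def wrongness(answer: str, gold: float) -> float:
    # Hypothetical scorer: distance from the gold value for a numeric task.
    try:
        return abs(float(answer) - gold)
    except ValueError:
        return float("inf")

def wrong_over_wrong_pairs(wrong_answers, gold, margin=0.0):
    """Yield (chosen, rejected) pairs where 'chosen' is the less wrong answer."""
    pairs = []
    for a, b in combinations(wrong_answers, 2):
        wa, wb = wrongness(a, gold), wrongness(b, gold)
        if abs(wa - wb) > margin:
            pairs.append((a, b) if wa < wb else (b, a))
    return pairs

# All candidate answers are wrong (gold is 42), yet they can still be ranked.
print(wrong_over_wrong_pairs(["40", "100", "seven"], gold=42))
```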
@shangbinfeng
Shangbin Feng
9 months
What's the point of going to ICLR if you don't check out Weijia's works?
@WeijiaShi2
Weijia Shi
9 months
Super excited to be attending #ICLR2024 to present our work:
✅ In-Context Pretraining (
⏰: Thursday 10:45 am (Halle B #95)
✅ Detecting Pretraining Data from LLMs (
⏰: Friday 10:45 am (Halle B #95)
Come say hi 🍻
0
0
10
@shangbinfeng
Shangbin Feng
1 year
Chan is an amazing researcher and mentor, pls hire her for your department 😃.
@chan_young_park
Chan Young Park
1 year
Elated to share that I've been named a K&L Gates Presidential Fellow 🔥 And I’m on the academic job market! I tackle problems in the intersection of NLP, AI Ethics, and computational social science. See my website for my CV, research statement, and more.
0
1
8
@shangbinfeng
Shangbin Feng
1 year
new preprint with the awesome @YikeeeWang.
@yikewang_
Yike Wang
1 year
What should be the desirable behaviors of LLMs when knowledge conflicts arise? Are LLMs currently exhibiting those desirable behaviors? Introducing KNOWLEDGE CONFLICT
0
0
10
@shangbinfeng
Shangbin Feng
7 months
faculty jobs.
1
0
10
@shangbinfeng
Shangbin Feng
5 months
For context, this is ChatGPT’s impersonation of Trump discussing EMNLP decisions.
1
0
10
@shangbinfeng
Shangbin Feng
2 years
KALM: Knowledge-Aware Integration of Local, Document, and Global Contexts for Long Document Understanding (w/ @SiuhinT @CLB_BG @FicsherLzy & Yulia)
BIC: Twitter Bot Detection with Text-Graph Interaction and Semantic Consistency (lead: @FicsherLzy)
Now off to the next deadline ✍️
0
1
10
@shangbinfeng
Shangbin Feng
1 year
Now accepted at @emnlpmeeting findings.
@shangbinfeng
Shangbin Feng
2 years
What is the overall bot percentage on Twitter? How many bots voted for Twitter to reinstate Trump? 🚨 New preprint: "BotPercent: Estimating Twitter Bot Populations from Groups to Crowds", out now: A thread 🧵 [1/n]
1
1
10
@shangbinfeng
Shangbin Feng
1 year
paper after paper.
1
0
9
@shangbinfeng
Shangbin Feng
6 months
Simple solution. If you didn’t submit reviews on time, you shouldn’t be able to see your reviews on time. If you didn’t engage with the authors, you shouldn’t be able to see reviewer engagement on your submissions.
0
0
9
@shangbinfeng
Shangbin Feng
6 months
Check out JPEG-LM pleaseeeeeee.
@XiaochuangHan
Xiaochuang Han
6 months
👽 Have you ever accidentally opened a .jpeg file with a text editor (or a hex editor)? Your language model can learn from these seemingly gibberish bytes and generate images with them! Introducing *JPEG-LM* - an image generator that uses exactly the same architecture as LLMs
Tweet media one
Tweet media two
0
2
9
@shangbinfeng
Shangbin Feng
6 months
People are now buying ads for their paper threads on this site?
2
0
9
@shangbinfeng
Shangbin Feng
7 months
llm-powered bots, in the wild
Tweet media one
@shangbinfeng
Shangbin Feng
7 months
🚨 Detecting social media bots has always been an arms race: we design better detectors with advanced ML tools, while more evasive bots emerge adversarially. What do LLMs bring to the arms race between bot detectors and operators? A thread 🧵 #ACL2024
Tweet media one
0
0
9
@shangbinfeng
Shangbin Feng
11 months
rebuttal mood.
1
0
9
@shangbinfeng
Shangbin Feng
3 months
Implementation now available!
@shangbinfeng
Shangbin Feng
4 months
👀 How to find a better adapted model? ✨ Let the models find it for you! 👉🏻 Introducing Model Swarms, where multiple LLM experts collaboratively search for new adapted models in the weight space and discover their new capabilities. 📄 Paper:
Tweet media one
0
0
9
@shangbinfeng
Shangbin Feng
6 months
Check out our survey on LLM abstaining!
@bingbingwen1
Bingbing Wen
6 months
🤔💭 To answer or not to answer? We survey research on when language models should abstain in our new paper, "The Art of Refusal." Thread below! 🧵⬇️ Joint w/ @jihan_yao @shangbinfeng Chenjun Xu @tsvetshop @billghowe @lucyluwang @uw_ischool @uwcse #nlproc
Tweet media one
Tweet media two
0
0
9
@shangbinfeng
Shangbin Feng
2 years
Thank you for your time! Read the paper: NLGraph benchmark: Joint work w/ @wh2213210554 @TianxingH @SiuhinT @XiaochuangHan @tsvetshop.
0
0
9
@shangbinfeng
Shangbin Feng
2 years
Watching the novelty of your current project diminish by the day due to some arxiv link on Twitter 😅.
0
0
9
@shangbinfeng
Shangbin Feng
2 years
FYI, this also poses great challenges to existing Twitter bot detection systems. Our recent work shows that they are not remotely robust to such large-scale feature tampering 🤔
@washingtonpost
The Washington Post
2 years
Accounts pushing Kremlin propaganda are using Twitter’s new paid verification system to appear more prominently on the platform, another sign that Elon Musk’s takeover is accelerating the spread of misinformation, a nonprofit research group has found.
0
0
9
@shangbinfeng
Shangbin Feng
7 months
moving heaven and earth to have 3 reviews per paper.
0
0
9
@shangbinfeng
Shangbin Feng
2 years
Yes, but is it a good thing to publicly bash junior authors on Twitter?
1
0
9
@shangbinfeng
Shangbin Feng
2 years
"KRACL: Contrastive Learning with Graph Context Modeling for Sparse Knowledge Graph Completion" is accepted at @TheWebConf 🥳. Here's a preprint version and I will leave the full tweet for @SiuhinT :.
0
2
9
@shangbinfeng
Shangbin Feng
2 months
faculty.
2
0
9
@shangbinfeng
Shangbin Feng
5 months
Thank you for covering our work!
@uwcse
Allen School
5 months
In the “arms race” between social media bots and those trying to stop them, the best way to detect #LLM-powered bots may be with #LLMs themselves, according to research by @UW #UWAllen @uwnlp's @tsvetshop + @shangbinfeng. #AI #NLProc #ACL2024NLP #UWserves.
0
1
8
@shangbinfeng
Shangbin Feng
2 years
Reached 100 followers on the same day as EMNLP notifications. Thanks, everyone! Hopefully, I will do (or at least retweet) good research in the future.
Tweet media one
0
0
8
@shangbinfeng
Shangbin Feng
2 years
When you try soooo hard to seem confident, but you still get it wrong #GoogleBard. Hint: perhaps related to an upcoming preprint of ours? 😈
Tweet media one
Tweet media two
Tweet media three
0
0
8
@shangbinfeng
Shangbin Feng
3 years
I will be joining @tsvetshop @uwcse for a Ph.D. starting this fall! Looking forward to it! 😆.
1
0
8
@shangbinfeng
Shangbin Feng
2 years
Super grateful for my co-authors @chan_young_park @Lyhhhh2333 @tsvetshop and everyone who gave generous feedback in and outside of the lab. Couldn't have done it without you! 🥳.
1
0
8
@shangbinfeng
Shangbin Feng
2 years
Thank you so much for this honor and recognition! I am committed to helping authors improve their work to the best of my ability 😎.
@LogConference
Learning on Graphs Conference 2024
2 years
Congratulations to our top 20 reviewers for LoG! Thank you for your service to the community! List below 👇
Tweet media one
0
0
8
@shangbinfeng
Shangbin Feng
4 months
📌 Grab the experts, define your goal, and let the swarming commence! 📄 Paper: Joint work with @ZifengWang315 @yikewang_ @SaynaEbrahimi @hmd_palangi Lesly Miculicich Achin Kulshrestha Nathalie Rauschmayr @YejinChoinka @tsvetshop @chl260 @tomaspfister
Tweet media one
0
1
8
@shangbinfeng
Shangbin Feng
11 months
Check out our work on stress-testing machine-generated text detectors!
@YichenZW
Yichen (Zach) Wang
11 months
Can current detectors robustly detect machine-generated texts (MGTs)? 🔎 We find some Stumbling Blocks! 🧗‍♂️ ✨ Excited to share our paper on stress testing MGT detectors under attacks, where we reveal that *most* detectors exhibit different loopholes! 🕳 [1/5]
0
0
7
@shangbinfeng
Shangbin Feng
3 months
LLM-generated reviews.
1
0
8
@shangbinfeng
Shangbin Feng
1 year
Now accepted at @emnlpmeeting.
@shangbinfeng
Shangbin Feng
2 years
Looking for a summarization factuality metric? Are existing ones hard to use, requiring re-training, or not compatible with HuggingFace? Introducing FactKB, an easy-to-use, shenanigan-free, and state-of-the-art summarization factuality metric!
0
0
8
@shangbinfeng
Shangbin Feng
2 years
CooK advances the state of the art on:
- MMLU, testing general-purpose knowledge QA
- Misinformation detection, testing multi-domain knowledge synthesis
Tweet media one
Tweet media two
1
0
7
@shangbinfeng
Shangbin Feng
4 months
Interview Wenda please.
@WendaXu2
Wenda Xu
4 months
I am on the job market for full-time industry positions. My research focuses on text generation evaluation and LLM alignment. If you have relevant positions, I’d love to connect! Here is a list of my publications and a summary of my research:
3
0
7
@shangbinfeng
Shangbin Feng
8 months
If you are at @naaclmeeting, check this out & bookmark at your earliest convenience.
@XiaochuangHan
Xiaochuang Han
8 months
❓ Is your LM not paying enough attention to your input context? Check out our simple decoding technique to mitigate LMs' hallucinations! We are presenting context-aware decoding at #NAACL2024.
📖
⏰ June 17, 14:00-15:30
📌 Poster Session 2; Don Diego
1
0
7
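The quoted tweet above advertises a simple decoding technique that makes the LM attend more to its input context. As a rough sketch in that spirit, the snippet below contrasts next-token logits computed with and without the context and amplifies the difference; the exact weighting is my paraphrase of the general contrastive idea, not necessarily the paper's formulation.

```python
# Hedged sketch: contrastive next-token scoring that amplifies the effect of context.
import numpy as np

def context_aware_logits(logits_with_ctx: np.ndarray,
                         logits_without_ctx: np.ndarray,
                         alpha: float = 0.5) -> np.ndarray:
    """Upweight tokens whose score rises when the context is present."""
    return (1 + alpha) * logits_with_ctx - alpha * logits_without_ctx

# Toy vocabulary of 4 tokens: the context makes token 2 much more likely.
with_ctx = np.array([1.0, 0.5, 3.0, 0.2])
without_ctx = np.array([1.0, 0.5, 0.5, 0.2])
adjusted = context_aware_logits(with_ctx, without_ctx)
probs = np.exp(adjusted) / np.exp(adjusted).sum()
print(int(np.argmax(probs)))  # 2
```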
@shangbinfeng
Shangbin Feng
7 months
Paper: Code: PRs are welcome: come add your method, data, metric, model! Joint work with @WeijiaShi2 @YikeeeWang @Wenxuan_Ding_ @vidhisha_b @tsvetshop. Join our presentation in Thailand @aclmeeting :)
0
1
7
@shangbinfeng
Shangbin Feng
2 years
Honored to be selected as a top reviewer at @NeurIPSConf this year!
Tweet media one
1
0
7