Sarkar Snigdha Sarathi Das Profile
Sarkar Snigdha Sarathi Das

@sarkarssdas

Followers: 40 · Following: 224 · Statuses: 38

PhD Student @PennStateEECS | Prior Research Intern @AmazonScience, @MSFTResearch

Joined October 2023
@sarkarssdas
Sarkar Snigdha Sarathi Das
1 year
Running low on data for refining your model on an Information Extraction task? Consider sparse finetuning! Thrilled to present our #EMNLP2023 paper on sequence labeling in constrained data scenarios. Paper: Code:
[attached image]
7 replies · 3 retweets · 12 likes
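For readers unfamiliar with the idea: sparse finetuning updates only a small, important subset of parameters, which helps when labeled data is scarce. Below is a minimal PyTorch sketch of one common variant that keeps the top-k parameters ranked by squared gradient (an empirical Fisher proxy); function and argument names are illustrative, and this is not the paper's FISH-DIP implementation, which, as described in the tweet thread, targets the constrained-data sequence-labeling setting.

```python
import torch

def sparse_finetune_step(model, loss_fn, batch, optimizer, keep_ratio=0.01):
    """One training step that updates only the top `keep_ratio` fraction of
    parameters, ranked by squared gradient (an empirical Fisher proxy).
    Illustrative sketch only -- not the FISH-DIP algorithm itself."""
    inputs, targets = batch
    loss = loss_fn(model(inputs), targets)
    optimizer.zero_grad()
    loss.backward()

    # Importance score per scalar parameter: squared gradient magnitude.
    scores = torch.cat([p.grad.detach().pow(2).flatten()
                        for p in model.parameters() if p.grad is not None])
    k = max(1, int(keep_ratio * scores.numel()))
    threshold = torch.topk(scores, k).values.min()

    # Zero all gradients below the threshold so the optimizer
    # moves only the selected sparse subset of parameters.
    for p in model.parameters():
        if p.grad is not None:
            p.grad.mul_((p.grad.pow(2) >= threshold).to(p.grad.dtype))

    optimizer.step()
    return loss.item()
```

This sketch recomputes the mask every step, the simplest dynamic variant; the paper's method guides the selection with feedback from the training samples rather than a fixed top-k rule.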
@sarkarssdas
Sarkar Snigdha Sarathi Das
2 months
RT @RyoKamoi: VLMEvalKit now supports our VisOnlyQA dataset 🔥🔥🔥 VisOnlyQA reveals that even recent LVLMs like GPT-…
0 replies · 2 retweets · 0 likes
@sarkarssdas
Sarkar Snigdha Sarathi Das
2 months
RT @RyoKamoi: 📢 New preprint! Do LVLMs have strong visual perception capabilities? Not quite yet... We introduce VisOnlyQA, a new dataset…
0 replies · 19 retweets · 0 likes
@sarkarssdas
Sarkar Snigdha Sarathi Das
2 months
RT @RyoKamoi: Curious about LLM self-correction? Check out our reading list! 📚 We feature papers & blogs in * Key…
0 replies · 28 retweets · 0 likes
@sarkarssdas
Sarkar Snigdha Sarathi Das
3 months
RT @NLP_PennState: This Friday we had an amazing talk by @srush_nlp on State Space Models. Mamba is here! Thanks a lot for coming in-pers…
0 replies · 8 retweets · 0 likes
@sarkarssdas
Sarkar Snigdha Sarathi Das
3 months
RT @vipul_1011: The fan-boy in me was really happy (and a bit nervous) while organizing the talk by @srush_nlp and having discussions with…
0 replies · 4 retweets · 0 likes
@sarkarssdas
Sarkar Snigdha Sarathi Das
3 months
RT @vipul_1011: The code and SMART-Filtered datasets are now open-sourced! 🚀✨ 🔗Code: 🤗SMART-Filtered datasets on…
0 replies · 11 retweets · 0 likes
@sarkarssdas
Sarkar Snigdha Sarathi Das
3 months
RT @RyoKamoi: We will present our survey on self-correction of LLMs (TACL) at #EMNLP2024 in person! Oral: Nov 12 (Tue) 11:00- (Language Mo…
0 replies · 11 retweets · 0 likes
@sarkarssdas
Sarkar Snigdha Sarathi Das
4 months
@bnjmn_marie Did you make sure that you are using left padding rather than right padding during batch processing? I'm not certain, but I recall that the Llama 3 tokenizer defaults to right padding, which is probably not what you want here.
0 replies · 0 retweets · 0 likes
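Context for the padding point above: decoder-only LMs continue from the last token of each sequence, so in batched generation the pads must sit on the left; otherwise pad tokens end up between a prompt and its continuation. A minimal sketch with the Hugging Face transformers API (the checkpoint name is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3-8B"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Decoder-only models generate after the last token, so pads must go on
# the LEFT for batched generation; right padding would leave pad tokens
# between each prompt and its generated continuation.
tokenizer.padding_side = "left"
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token

batch = tokenizer(["Hello, my name is", "The capital of France is"],
                  return_tensors="pt", padding=True)
out = model.generate(**batch, max_new_tokens=20,
                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```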
@sarkarssdas
Sarkar Snigdha Sarathi Das
7 months
RT @RyoKamoi: Our ReaLMistake paper has been accepted at @COLM_conf ! We introduce the ReaLMistake benchmark for evaluating LLMs at detecti…
0 replies · 14 retweets · 0 likes
@sarkarssdas
Sarkar Snigdha Sarathi Das
8 months
@vipul_1011 Congratulations man! Finally!!!!
1 reply · 0 retweets · 1 like
@sarkarssdas
Sarkar Snigdha Sarathi Das
8 months
RT @HamelHusain: This talk from Apple is the best ad for fine tuning that probably exists. They are also using adapters (and hot swapping…
0 replies · 117 retweets · 0 likes
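On the adapters-plus-hot-swapping idea mentioned above: one common way to get the same effect with open tooling is the peft library, which lets several LoRA adapters share one frozen base model and be switched at runtime. A rough sketch; the model ID and adapter paths are placeholders, and this is not Apple's implementation:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Placeholder identifiers -- substitute a real base model and trained adapters.
base = AutoModelForCausalLM.from_pretrained("my-org/base-model")
model = PeftModel.from_pretrained(base, "my-org/adapter-summarize",
                                  adapter_name="summarize")
model.load_adapter("my-org/adapter-translate", adapter_name="translate")

# Hot swap: switch the active task adapter without reloading the base weights.
model.set_adapter("translate")
```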
@sarkarssdas
Sarkar Snigdha Sarathi Das
8 months
@MSheshera Congratulations Sheshera!
1 reply · 0 retweets · 0 likes
@sarkarssdas
Sarkar Snigdha Sarathi Das
8 months
RT @RyoKamoi: 📢 New survey on Self-Correction of LLMs! 😢 LLMs often cannot correct their mistakes by prompting themselves 😢 Many studies co…
0 replies · 62 retweets · 0 likes
@sarkarssdas
Sarkar Snigdha Sarathi Das
8 months
RT @xtimv: Tokenizers are a total dumpster fire. You can't even reliably get the string representation of a token ID.
0 replies · 2 retweets · 0 likes
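To make the complaint above concrete: with byte-level BPE tokenizers, the internal token string is not the decoded text, and a single token ID may not decode to valid text at all. A small illustration with Hugging Face transformers (GPT-2 chosen only as a familiar byte-level BPE tokenizer):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # byte-level BPE; illustrative choice

ids = tok.encode(" hello")
print(tok.convert_ids_to_tokens(ids))  # ['Ġhello'] -- internal piece, 'Ġ' marks a space
print(tok.decode(ids))                 # ' hello'  -- detokenized text

# Byte-level pieces need not be valid UTF-8 on their own: a multi-byte
# character can be split across several token IDs, so "the string for
# one token ID" is not always well-defined.
emoji_ids = tok.encode("🙂")
print(emoji_ids)                             # typically several IDs for one character
print([tok.decode([i]) for i in emoji_ids])  # replacement chars for partial bytes
```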
@sarkarssdas
Sarkar Snigdha Sarathi Das
10 months
RT @RyoKamoi: 📢 New Preprint! Can LLMs detect mistakes in LLM responses? We introduce ReaLMistake, error detection benchmark with errors by…
0 replies · 25 retweets · 0 likes
@sarkarssdas
Sarkar Snigdha Sarathi Das
11 months
RT @karpathy: # automating software engineering In my mind, automating software engineering will look similar to automating driving. E.g.…
0 replies · 2K retweets · 0 likes
@sarkarssdas
Sarkar Snigdha Sarathi Das
1 year
RT @ruizhang_nlp: I cannot make it to #EMNLP2023, but our students are presenting papers on (1) Trustworthy Medical Summarization, (2) Spar…
0 replies · 4 retweets · 0 likes
@sarkarssdas
Sarkar Snigdha Sarathi Das
1 year
Grateful for the opportunity to work jointly with @ranranhrzhang, @ShiPeng16, @Wenpeng_Yin, @ruizhang_nlp.
0 replies · 0 retweets · 0 likes
@sarkarssdas
Sarkar Snigdha Sarathi Das
1 year
Compared against in-context learning with GPT-3.5-turbo (0301), FISH-DIP achieves substantially better performance across different data-availability settings.
[attached image]
0 replies · 0 retweets · 0 likes
@sarkarssdas
Sarkar Snigdha Sarathi Das
1 year
We also compare against popular parameter-efficient finetuning approaches, where FISH-DIP delivers better overall performance, particularly in challenging, extremely low-data scenarios.
[attached image]
0 replies · 0 retweets · 0 likes