Debdeep Sanyal Profile
Debdeep Sanyal

@debdeeplikesai

Followers 21 · Following 3K · Statuses 267

im just trying to figure out how this "thinking" thing works.

Joined March 2022
@debdeeplikesai
Debdeep Sanyal
1 day
New paper alert 🚨 Introducing ALU - Agentic LLM Unlearning, the first unlearning method built on real-time agents!! 📌No retraining for every new unlearning target. 📌No extra setup. 📌No compromise on safety. 📌Maximal scaling compared to existing methods.
[image attached]
4
7
10
@debdeeplikesai
Debdeep Sanyal
3 hours
@rohanpaul_ai thank you for such an amazing overview of our work! as always, grateful for your contributions to the research community.
0
0
3
@debdeeplikesai
Debdeep Sanyal
3 hours
@HobbesMatraca @rohanpaul_ai haha it actually does, i am the author of this.
1
0
2
@debdeeplikesai
Debdeep Sanyal
1 day
Paper Link -
0
0
1
@debdeeplikesai
Debdeep Sanyal
1 day
Most 🔒SECURE 🔒unlearning framework *across model sizes*. We compare existing post-hoc methods with ALU against not one but five SOTA jailbreak techniques, across a range of model sizes (3B, 14B, and a couple of ~100B models). ALU outperforms them across model sizes and across jailbreak methods.
[image attached]
0
0
1
@debdeeplikesai
Debdeep Sanyal
1 day
We exhibit unmatched scaling in the number of unlearning targets. We create a sparse dataset with a few real unlearning targets and the rest dummy targets. While other methods (both post-hoc and optimization-based) degrade linearly as targets scale, ALU is unfazed😎.
[image attached]
0
0
1
@debdeeplikesai
Debdeep Sanyal
1 day
@murari_ai Optimization-based methods require retraining the model for every new unlearning request, which is impractical. We are the first post-hoc method to beat *every* SOTA optimization method while retaining response utility.
[image attached]
0
0
2
@debdeeplikesai
Debdeep Sanyal
11 days
icml submission done, i can finally catch up with the papers.
0
0
2
@debdeeplikesai
Debdeep Sanyal
2 months
RT @murari_ai: Happy to present our work on Machine Unlearning @indoml_sym @BITSPilaniGoa #responsibleAI
[image attached]
0
11
0
@debdeeplikesai
Debdeep Sanyal
2 months
@IntuitMachine @fchollet LLMs will hit the wall. we're not reaching AGI with them, and I speak as someone whose research interest lies in progressing LLMs. Even Hinton mentioned that backtracking is not the way to go.
0
0
2
@debdeeplikesai
Debdeep Sanyal
2 months
finetuning a base LLM and then using tokenizer.apply_chat_template at inference is one of those bugs that really makes you scratch your head to figure out what's going wrong.
0
0
0
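A minimal sketch of the mismatch described above, with no real model involved: a base model fine-tuned on plain prompt→completion text never sees chat-template special tokens, so applying the chat template at inference silently feeds it tokens it was never trained on. The `<|user|>`-style tokens here are illustrative, not from any specific tokenizer; actual template tokens vary per model family.

```python
def training_format(prompt: str, completion: str) -> str:
    # The base model was fine-tuned on plain concatenated text like this,
    # with no special chat tokens anywhere in the data.
    return f"{prompt}\n{completion}"


def chat_template_format(prompt: str) -> str:
    # Roughly what tokenizer.apply_chat_template would emit for a chat
    # model (token names are illustrative; they differ per model family).
    return f"<|user|>\n{prompt}<|end|>\n<|assistant|>\n"


train_input = training_format("What is 2+2?", "4")
infer_input = chat_template_format("What is 2+2?")

# The inference-time input contains special tokens the fine-tuned base
# model never saw during training -- hence the silent quality drop.
assert "<|user|>" not in train_input
assert "<|user|>" in infer_input
```

The fix is to make training and inference agree: either fine-tune on chat-templated data, or skip `apply_chat_template` and feed the model the same raw format it was trained on.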
@debdeeplikesai
Debdeep Sanyal
3 months
0
0
1
@debdeeplikesai
Debdeep Sanyal
4 months
RT @wateriscoding: This helped me learn everything btw Most informative videos i found on this topic
[image attached]
0
103
0
@debdeeplikesai
Debdeep Sanyal
4 months
@maximelabonne @LiquidAI_ are we getting a research paper on this? that would be amazing!
0
0
0
@debdeeplikesai
Debdeep Sanyal
5 months
RT @MingYin_0312: Four papers accepted to #NeurIPS2024 🥳🥳 Topics include: LLM alignment, theory for speculative decoding, accelerating LLM…
0
10
0
@debdeeplikesai
Debdeep Sanyal
5 months
@dinisguarda really interested to hear about the findings/observations that back your belief.
0
0
1
@debdeeplikesai
Debdeep Sanyal
5 months
@Prince_Canuma @mwrites__ reading more papers to eventually come up with my own idea.
1
0
0
@debdeeplikesai
Debdeep Sanyal
5 months
@HamelHusain with the current state of AI, it's way better to use them as teachers instead of servants. There has never been a better time to learn things with such breadth.
0
0
0
@debdeeplikesai
Debdeep Sanyal
5 months
but, i mean you really care about optimality only when you already have a working solution in the first place. we have not solved reasoning and it will take some more time. sorry @sama
0
0
0