![Saad Khan Profile](https://pbs.twimg.com/profile_images/235327310/SaadCity4_x96.png)
Saad Khan
@saadventures
Followers
5K
Following
729
Statuses
4K
playing peek-a-boo. Free Radical @ Uprising.
Joined March 2007
@ocontrerasv @shasta721 Do you two know each other? If not you can thank me later. :) (Oscar -I was in that piano #scifoo session when we got to check out your biometrics :)
0
0
0
Gangster
I touched on the idea of sleeper agent LLMs at the end of my recent video, as a likely major security challenge for LLMs (perhaps more devious than prompt injection). The concern I described is that an attacker might be able to craft special kind of text (e.g. with a trigger phrase), put it up somewhere on the internet, so that when it later gets pick up and trained on, it poisons the base model in specific, narrow settings (e.g. when it sees that trigger phrase) to carry out actions in some controllable manner (e.g. jailbreak, or data exfiltration). Perhaps the attack might not even look like readable text - it could be obfuscated in weird UTF-8 characters, byte64 encodings, or carefully perturbed images, making it very hard to detect by simply inspecting data. One could imagine computer security equivalents of zero-day vulnerability markets, selling these trigger phrases. To my knowledge the above attack hasn't been convincingly demonstrated yet. This paper studies a similar (slightly weaker?) setting, showing that given some (potentially poisoned) model, you can't "make it safe" just by applying the current/standard safety finetuning. The model doesn't learn to become safe across the board and can continue to misbehave in narrow ways that potentially only the attacker knows how to exploit. Here, the attack hides in the model weights instead of hiding in some data, so the more direct attack here looks like someone releasing a (secretly poisoned) open weights model, which others pick up, finetune and deploy, only to become secretly vulnerable. Well-worth studying directions in LLM security and expecting a lot more to follow.
1
0
1
Oh snap! Finally some shoes for my wild children :) (cc @sidraqasim )
Introducing Kids Model 123 – comfortable & durable, made with a redesigned outsole that flexes with every movement. This has become a personal project for us, as we set out to make the best shoes that our kids will love wearing everyday!
0
1
6
RT @pillars_fund: Applications for the 2024 Pillars Artist Fellowship are now open! Don’t miss out on this incredible opportunity if you ar…
0
75
0
Sweet! Let the games begin 🕹
Excited about the release of EndeavorOTC, no-prescription required, non-drug, video game treatment for adults with ADHD! Built on the same technology as Akili’s EndeavorRx, the world’s first FDA-authorized pediatric video game treatment. Available on Apple’s App Store.
0
0
2
Still got @SynBioBeta on the brain. @johncumbers Reflecting on potential tracks for you next year. Question: Doesn’t AI + Biology = ‘I’? Just saying :)
0
0
0
About to get our DNA on this week. Feels like the night before the first day of school. :) cc @SynBioBeta @johncumbers
0
1
7
Dope.
Javed Akhtar's masterclass in Lahore on the problem with the idea of a 'pure language'. @Javedakhtarjadu
0
0
1
Anyone rolling to @SynBioBeta next week? Programming DNA is just better with friends :) (cc @johncumbers @PaulStamets )
4
2
10