Last year, @saligrama_a, @CooperdeNicola, and I discovered serious vulnerabilities in Fizz, an "anonymous" social app built by our classmates.
We responsibly disclosed the issues to the founders… and they responded with legal threats.
@EFF had our backs and we didn't give in.
@EFF
had our backs and we didn't give in.
After Stanford students discovered vulnerabilities in a popular “anonymous” social app, their fellow students behind the app threatened them with legal action. EFF stepped up to help the student researchers fight back.
Can OpenAI Whisper and GPT 3.5 bring Voice Memos to the next level?
Introducing Paxo (), an AI audio notes app purpose-built for meetings, journaling, and in-person conversations.
Built with @rhythmrg for fun. Turns out it’s… actually super useful.
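Under the hood, an app like this is conceptually just transcribe-then-summarize. A minimal sketch using the OpenAI Python SDK (the prompt wording and function names are my own illustrative guesses, not Paxo's actual code):

```python
# Sketch of a transcribe-then-summarize pipeline (illustrative, not Paxo's code).
# Requires the `openai` package (v1+) and an API key in OPENAI_API_KEY.

def build_summary_prompt(transcript: str) -> str:
    """Wrap a raw transcript in a summarization prompt (wording is a guess)."""
    return (
        "Summarize this conversation into key points and action items:\n\n"
        + transcript
    )

def transcribe_and_summarize(audio_path: str, client) -> str:
    """Transcribe an audio file with Whisper, then summarize with GPT-3.5."""
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=f
        ).text
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": build_summary_prompt(transcript)}],
    )
    return response.choices[0].message.content
```

With the v1 SDK you'd pass in `client = openai.OpenAI()`; the rest is standard chat-completion plumbing.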
so @buildyourcorner is a lovely app and I was curious about how people use it — so I pulled ~all their users and lists.
here are some fun visualizations of user network dynamics (just off testflight so still small & dense) and heatmaps of saved places
specific cities below...
@saligrama_a @CooperDenicola @EFF This is a case study in how *not* to respond to good-faith security researchers.
Fix the issues. Notify your users. And then move on.
...don't try to scare your classmates into silence with the threat of 20 years in prison!
was just poking around with the chatgpt web app, and it turns out that they call their moderation endpoint from the client?
so you can inspect element, go to the network tab, and just turn it off...?
this is separate from the safety measures baked into the model, but still 😕
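As a general principle, any check performed on the client can simply be skipped by the client's user. A toy mock of the idea (my own illustration, nothing to do with OpenAI's actual code):

```python
# Toy mock of a client-side moderation check (not OpenAI's actual code).

def moderation_endpoint(text: str) -> bool:
    """Stand-in for the server's moderation API: True means 'flagged'."""
    return "forbidden" in text.lower()

def well_behaved_client(text: str) -> str:
    # The official web app calls the moderation endpoint before showing output.
    return "[blocked]" if moderation_endpoint(text) else text

def tampered_client(text: str) -> str:
    # A user who blocks that request in the browser's network tab
    # simply never runs the check, so the text renders anyway.
    return text
```

Server-side enforcement (or safety baked into the model itself, as noted above) is the only layer that survives this.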
Platforms and disinformation are on a lot of our minds right now. Today, we released new work examining politically-motivated editing on Wikipedia (with Maha Al Fahim, Sean Gallagher, and Nick Rubin). Thread.
Working on a new project!
Location sharing apps like Life360 can help keep your family safe. But most have horrible privacy — and are often instruments of control.
I’m building Latitude with @rhythmrrg, a privacy-and-agency-first location sharing app.
📢 New work out today @stanfordio exploring how to uncover Wikipedia pages worth investigating for inauthentic editing. We’re also releasing an open-source notebook for surfacing potentially-suspicious pages that anyone can use. Thread.
I’m incredibly excited about generative AI — and have been for a long time — but the safety concerns aren’t theoretical.
Non-consensual deepfakes. Spearphishing. It’s hard to keep track of all the “badness”.
Let’s work together to catalog harms at :
@EFF Thanks for having our backs — you all really saved the day. Without your help, there's no way we could have stood up for ourselves like we did. Cc: @saligrama_a
Incredibly grateful to @EFF for having our backs when @MilesMcCain @CooperDenicola and I faced legal threats last year for responsibly disclosing vulns in a purportedly anonymous app.
Legal threats to security researchers are sadly commonplace. They make the internet less safe.
Wow — this screenshot is real, but Sama did *not* actually post this tweet; it's a Google Search indexing/display problem. It's from his interaction with Dean Phillips, who made the original tweet.
Twitter suspended Marjorie Taylor Greene, so her tweets are no longer visible in the app. Which means it’s time for me to shill PolitiTweet.
In case you wanted 7.3k MTG tweets (including 40 deleted ones), you can get them here. All archived.
Introducing Rewind Pendant - a wearable that captures what you say and hear in the real world!
⏪ Rewind powered by truly everything you’ve seen, said, or heard
🤖 Summarize and ask any question using AI
🔒 Private by design
Learn more & preorder:
I think it's easy to get caught up in thinking that celebrities and public figures are the most at risk from AI voice cloning tech.
But is that right? E.g., we're starting to see deepfake-enabled kidnapping scams targeting regular people:
GPT-4 just dropped — it's multimodal, passes the bar exam in the 90th percentile, passes AP Calculus and AP Bio exams with 5's and 4's, gets a near-perfect GRE verbal, and has remarkable SOTA performance across many languages.
But it can't leetcode 🤷‍♂️
Over the past month at @stanfordio, @elegant_wallaby and I took a deep dive into GETTR, a new alt-network that launched in July. Today, we released a comprehensive report on the platform's first month. A brief thread about our (now open source!) tooling...
This is technically quite cool, but… I do not consent to being recorded via people’s pendants. It’s creepy! All party consent matters.
(To their credit, Rewind says they’ll offer features to prevent recording without consent, but TBD what those features are/if they’re opt in.)
A panel on legal representation for hackers who run into legal threats due to research and vuln disclosure + resources for defense. Including Andrew Crocker, @EFF; @HarleyGeiger, Venable; Kurt Opsahl, Security Research Legal Defense Fund; @MilesMcCain, Hacker; + @charley_snyder_, @Google!
Platforms often apply labels to misleading content. But how do those labels appear for people who browse the web in languages other than English?
If we want labels to be effective, we have to make them accessible. New from @sbradshaww and me in Lawfare:
If you're going to #DEFCON, please come to my talk!
It's all about conducting good-faith security research legally and ethically. (And what to do when you get a legal threat.)
There will be fun stories. Come through!
if your threat model is "we don't want unsuspecting users running into really harmful output," this is totally fine
but if your threat model is "we want a second layer of security that prevents medium-sophistication actors from using our model to do X", then this is a problem
@rightscon The @bellingcatgap team's popular webinar introduced the platform ATLOS to the RightsCon community and discussed data models in open source investigations. ATLOS is being used by @bellingcatgap to manage our data and community. For those interested, visit .
Please consider working with us — I'm really excited about the future of Atlos, and we're looking for a teammate who can help make that future a ✨reality✨!
We're hiring a part-time community and support manager to support Atlos's growth. Come join a small, dynamic team and support crucial investigative work. Learn more here:
"Truth Social" appears to mostly be reskinned Mastodon, so they're required to release their source code (per the terms of Mastodon's license).
More investigation into Truth Social coming soon! In the meantime, here's an auto-updating repo of their code:
note: did some googling and i'm not the first person to find this, which is why I think this is appropriate to share publicly (it's also not a *huge* deal)
Seems like ChatGPT may have a security issue where it displays your chat history to other users — and displays other users' chat history to you.
Tried to replicate but now my chat history just doesn't load. Maybe someone at OpenAI pulled the plug.
New paper! “Americans’ perceptions of privacy and surveillance in the COVID-19 pandemic” by @baobaofzhang, @sekreps, @nmcmurry, and me is out now in PLOS ONE! So excited.
Some of the crimes I've seen in San Francisco:
- tan shoes worn with dark suits
- worsted suit jackets worn as sport coats
- oxford shoes worn with casual outfits
- dress sneakers
- "business backpacks"
- Patagonia vests with dress shirt and chinos
“Metaphors aren’t just carelessly tossed around—they shape ideologies. An automatic acceptance of the ecosystem metaphor can lead, at the worst, to an unbounded attitude of extraction and harm in our digital world; a resigned approach to technological inevitability…”
What's in an ecosystem? In this piece for @reboot_hq, I investigate the ecosystem metaphor so prevalent in the digital world and suggest how to use it to demand better of the technologies we use and build:
Update: we got our ads running and... 75% of our conversions are fraudulent (clickfarms, etc.).
I really wonder how much of the ad ecosystem is inflated by fraud.
Spent some time trying to set up Google Ads (as an advertiser) with a friend, and I was SHOCKED by how bad the experience was.
Like, support-reps-hanging-up-on-you-bad. They’ve gotten complacent. (I guess that’s what happens when you’re functionally a monopoly!)
Two examples:
This is a really important thread, especially for those of us with technical backgrounds thinking about how we can best help Ukraine.
Don't just deploy. Be thoughtful. Talk to experts. Understand the situation. Sometimes you have to know when *not* to build. Cc @IgorBarakaiev.
Sometimes the labels are fully translated! Sometimes they fall back to English. And sometimes they just disappear entirely.
Check out Sam’s great thread on our piece here:
When it comes to labelling #misinfo, language diversity is often overlooked. In a new piece for @lawfareblog, @MilesMcCain and I look at how labels appear for users who browse the Internet in languages other than English. 🧵👇
We all agree the status quo is unsustainable.
Here are 1,000 words on how we could get the role of Open Source maintainer to graduate to a real, properly paid profession.
The thing is, companies need it as much as maintainers do.
i don't really understand the logic of milking every red cent out of Twitter's API if you're going to kill the random stupid bots that make the service fun to scroll through
just feels like a bad strategic move imo
What content did platforms label as misleading in the run up to the 2020 election? And were platforms consistent in their labeling decisions?
In our new paper, @shelbygrossman, @sbradshaww, and I leverage a unique dataset to answer these questions:
Have you been the victim of a cryptocurrency scam? (Alternatively, are you a cryptocurrency scammer?)
If so, DM me... would love to ask a few questions.
I wrote about how AI image generators might totally disrupt child safety investigations. SIO and Thorn put out an excellent report on this issue a month ago, but it deserves more attention.
I'm a huge @lawfareblog nerd, so getting to work with them on this piece was super exciting. And a major shoutout to everyone at @StanfordIO who gave us feedback.
Background checks are a crazy phishing vector.
Friend of mine just got an email to complete a background check for a job he’s about to start. Asked for DOB, photo ID, SSN, addresses, past jobs, phone, etc.
Problem? He already completed his background check.
“Unlike social media platforms, [Wikipedia] editors don’t fight for engagement—the incentives that push content to the extremes on other platforms simply don’t exist on Wikipedia” ~ from our students
@MilesMcCain and Sean Gallagher in @TIME. 🔗👇
Latitude gives you safe arrival notifications, low battery alerts, car crash detection, and more.
A novel feature I’m especially excited about: following someone’s live progress to a destination using Live Activities.
Ok, just released a major new version of my design library, a17t! It's now a Tailwind plugin, the elements look better, and the docs are more thorough. Try it!
It was originally going to be v0.10.0, but I borked the first publish so it's actually v0.10.1.
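If you want to try it, installation follows the standard Tailwind plugin convention (sketched from memory; check the a17t docs for the current setup):

```javascript
// tailwind.config.js — load a17t as a Tailwind plugin (standard plugin setup).
module.exports = {
  content: ["./src/**/*.{html,js}"],
  plugins: [require("a17t")],
};
```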
6/ Wikipedia’s objectivity standards could be more consistently applied across pages. Overall, though, we find Wikipedia’s open governance model impressively effective, even if motivated editors can and do slip through its (formidable) safeguards.
tfw gradescope has a vuln that lets you change your grades,* they're aware of it, and... there's no fix?
Aditya's deep dive is a fantastic read.
*editorializing slightly
NEW POST: Gradescope's autograder config has *no security* by default, allowing attacks including grade modification, reverse shell RCE, and hidden test case exfiltration
Today in absurdist SwiftUI bugs: The built-in photo picker fails when I try to load an image from the library (running in the simulator).
I google the bug. It's a KNOWN ISSUE that the RED FLOWER PHOTO fails in the photo picker. All other images work. Totally arbitrary.
"I'm a junior with passion for societal harm... I considered hiring someone else to build me a quirky social media app that stores sensitive user data in a misconfigured database, but that seems to be a saturated market here at Stanford." 😂🤦‍♂️
Interesting behavior from the @github education verification flow. You have to share your location to get verified. I wonder what the threat model is here. (And assuming they look at geographic proximity, how does this affect students who attend college remotely/live off campus?)
We’ve added initial support for ChatGPT plugins — a protocol for developers to build tools for ChatGPT, with safety as a core design principle. Deploying iteratively (starting with a small number of users & developers) to learn from contact with reality: