Our paper has been accepted at #chi2024! I'm glad that our first work examining #LLM #privacy issues from a human-centered perspective has found its home at CHI. Congratulations, team, especially @ZhipingZhang5 for your first CHI paper! Check out our preprint below 👇
🚨 New preprint on sensitive disclosure behaviors in #LLM-based chatbots (e.g., #chatgpt)!
We found users constantly juggling privacy, utility, and convenience, affected by flawed mental models and dark patterns.
#privacy #hci #ai
Paper: 🧵👇
🌟 I'm recruiting PhD students with strong HCI/system-building skills who are passionate about:
- Understanding and addressing privacy issues in Human-LLM interactions
- Using LLMs to empower responsible software development
Apply by Dec 15!
I'm on the job market for academic & industry positions!!
My research is in HCI, privacy, and software engineering. I design, build, and evaluate tools that help developers create responsible apps, especially when it comes to privacy.
Info on relevant openings is appreciated!
📢 I'll be recruiting graduate students for Fall 2024, and I’m actively looking for research interns at different levels! Interested in designing human-centered solutions for today's privacy challenges, considering various roles (users, developers, designers, etc.)? Let's connect!
Our paper “Understanding Challenges for Developers to Create Accurate Privacy Nutrition Labels” is conditionally accepted to CHI 2022! This is a joint work with Kayla Reiman, Yuvraj Agarwal, @lorrietweet, and @jas0nh0ng, and also my first CHI paper :)
Thrilled to share that this work has received a #chi2022 best paper honorable mention award! Again, many thanks to my coauthors Kayla Reiman, Yuvraj Agarwal, @lorrietweet, and @jas0nh0ng!
Check out our paper:
Heading to #soups23 and #usesec23 now! I’m on the lookout for PhD students and interns. Happy to chat with folks there! Additionally, we have two posters accepted at SOUPS this year. Hope to see you at tomorrow's poster session!
My paper analyzing developers’ privacy practices and perceptions on r/androiddev is accepted to #CSCW (w/ minor revision)! It feels so surreal because this is my first ever submission to CSCW. Thanks a lot to my coauthors @dabbish and @jas0nh0ng for their help!
Posting this on my flight to Minneapolis ✈️ Excited to attend my first in-person #CSCW2023! I only have a SIG on Tuesday morning and I’m mostly here just to hang out with friends and meet people. I’m very easy to approach so just come and say hi 👋
Tianshi Li (@tianshi_li) will be on campus next Thursday to present a seminar talk on user privacy! If you're also excited to hear from Tianshi, click the link below to add the talk to your calendar ⏬
Interested in what #LLMs mean for #privacy? Attending #chi2024?
Join us for a SIG on "Human-Centered Privacy Research in the Age of Large Language Models"
Kudos to @tianshi_li for her leadership on this (and our forthcoming paper at #chi2024)!
RT appreciated! We (CMU researchers) are looking for Android developers to participate in a 90-min remote interview to test an Android Studio plugin we built for creating accurate data safety labels. #androiddev
Compensation: $70.
Sign up/more info:
For those who have the courage to wake up for the first session on the last day of #CHI2022: I’ll present my TOCHI paper on smartphone notifications with @jwnichls, @jas0nh0ng, Julia, and Miguel (and it’s the first talk)
Finishing this long and sad day with this thought sticking in my head: It is an anti-Asian hate crime. And some people simply don't dare to call it out.
#cscw2023 is approaching 👋 Curious about how fellow researchers think about the #ethics of using #LLM in research? Come to our SIG next Tuesday and we’ll share what we learned from our survey!
Care to share your thoughts? We’d love to learn from you! 👇
Our #cscw2023 SIG was a blast thanks to all the participants and co-organizers @hongshenus @TobyJLi @Diyi_Yang @RuiyiWang153. Slides and notes will be shared and we hope to build a community from here 👉
p.s. our survey is still open for responses 👇
Privacy manifests, tracking domains, required reason APIs — Apple is building more privacy tools for developers and also asking developers to do more things. As a researcher studying privacy support for developers, here are my thoughts and concerns. #WWDC2023
Here’s an overdue thread of our IMWUT’21 paper “Honeysuckle: Annotation-Guided Code Generation of In-App Privacy Notices”
Built on top of our Coconut IDE plugin (IMWUT’18), Honeysuckle translates developer-provided privacy annotations to user-facing in-app privacy notices.
Have you thought about using #llm in your #hci research? Have you ever wondered whether and how you should use LLMs in your research? We’d love to hear your thoughts at #cscw2023 and/or via this survey!
Together with @tianshi_li @TobyJLi @joon_s_pk @Diyi_Yang, we are organizing a Special Interest Group on 🌈 Shaping the Emerging Norms of Using LLMs in Social Computing 🌈 at #cscw2023 and will share the results from the survey. Please come and join us!
Before officially starting at Northeastern, I'll be spending a year in the Bay Area at @ChecksHq and @Berkeley_EECS (w/ Serge Egelman @v0max)! Looking forward to exciting collaborations ahead!
An update on paper status: Our paper is accepted (w/ minor revision) to the special issue “IoT for Fighting COVID-19” of Elsevier Pervasive and Mobile Computing (PMC) Journal!✨
@lorenterveen For me, I went to CHI partly because the in-person social part is needed for my mental health, and partly because I underestimated how contagious the virus still is (even after being vaxxed and boosted and wearing masks)
Do developers have meaningful discussions about #privacy online? Are there situations where they should’ve paid more attention to privacy? I’ll present our answers at the “Privacy and Security” session (Oct 19, 7:00-8:30 PDT) of #CSCW2020. See U there 🥳
One day I realized that when people think something is good, what they really mean could be “it feels familiar to me”. This has changed how I view the world and how I view myself completely.
This tweet sums up EXACTLY why academia has such a huge diversity problem.
Who decides what’s a “boring research program,” a “poor pub record,” or a “higher-tier” PhD program?
It’s profs like this: a U Chicago prof with an MIT PhD & Yale BA…
Dark patterns are another issue. By default, ChatGPT users allow OpenAI to use their data for model training, exposing them to memorization risks. The opt-out interfaces unnecessarily link privacy with reduced functionality, and the more flexible control is hard to find and use.
Now the Google form to opt out of training while keeping history is closed, and when I emailed to request an opt-out, #openai responded: “It is not possible to opt-out and maintain history of an account. The two preferences are connected.” This is wild. #llm #privacy
Hi! We (@cmuhcii researchers) are looking for Android developers to participate in a 2-hour research study (via Zoom) about trying out tools that help build privacy notices for Android apps. Compensation: 75 USD Amazon gift card. Signup link:
I'll present our paper "Coconut: An IDE Plugin for Developing Privacy-Friendly Apps" at #UbiComp2019 in the first session, "Privacy Attacks," on Friday. Come and hear about how we could better engage developers in protecting user privacy!
My 1st @PET_Symposium but going all-in: presenting 2 papers about privacy support for software developers (6A: & 9C:), and chairing a great session on regulation and compliance (4A); join all if interested in human-centered privacy!
My project is listed here (improving privacy by helping devs). There are also a ton of other interesting projects covering all kinds of topics in HCI. Check them out, consider applying, or help us spread the word!
*Deadline Extended!* Attention undergrads: we heard you needed more time, so we're extending the app deadline for our summer 2020 Human-Computer Interaction Summer Research Program by 2 weeks to Wednesday, January 29 at 5pm EST! APPLY NOW:
I’m also on the job market for industry research positions this coming 2023! I design and build systems that accelerate online sensemaking for developers and facilitate human-AI interactions for end-users.
Please get in touch if you know of any relevant opportunities! 😃
Although people keep saying this, the fact is papers with negative results are just much harder to get published :( What if we had a conference/track dedicated to failed studies? (just some random crazy idea...)
Finally, let's work on sharing our informal knowledge more widely—such as by writing up negative results—and incentivizing people to do so. That would make much of the advice in this thread redundant (like the importance of networking), which would be a good thing!
We interviewed 19 LLM-based conversational agent (CA) users, many of whom were pessimistic about having both utility and privacy. Yet nearly all took privacy-protective measures, e.g., censoring/falsifying sensitive info, sanitizing inputs copied from elsewhere, and seeking general advice.
Virginia's contact-tracing app #COVIDwise verifies COVID-19 positive cases using a six-digit unique PIN to prevent false reporting.
However, if all of VA's 100k confirmed cases have a unique PIN, then there's a 1-in-10 chance of hitting a valid PIN just by inputting random numbers. 🤦‍♀️
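The 1-in-10 figure follows from quick back-of-the-envelope math. A minimal sketch, assuming (as the tweet does) a 6-digit numeric PIN space and ~100k simultaneously valid unique PINs, with uniform, independent guesses:

```python
# Back-of-the-envelope check of the PIN-guessing risk.
# Assumptions (taken from the tweet, not from the app's actual spec):
# 6-digit numeric PINs and ~100k confirmed cases each holding a unique PIN.
TOTAL_PINS = 10 ** 6      # 6-digit codes: 000000-999999
ACTIVE_PINS = 100_000     # one unique PIN per confirmed case

# Probability that a single uniformly random guess hits some valid PIN
p_one_guess = ACTIVE_PINS / TOTAL_PINS
print(p_one_guess)  # 0.1 -> the 1-in-10 chance above

# Probability of at least one hit over k independent random guesses
k = 20
p_at_least_one = 1 - (1 - p_one_guess) ** k
print(round(p_at_least_one, 3))  # ~0.878: a determined false reporter wins fast
```

So even a small number of automated guesses makes false reporting very likely under these assumptions, which is why dense PIN spaces are a verification anti-pattern.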
We discovered flawed mental models that could hinder users' awareness and comprehension of emerging privacy risks in LLM-based CAs, such as memorization risks (having the sensitive input memorized by the model and leaked to other users).
To solve this problem, @jackieyang_ @heycori @kingjen Yuvraj Agarwal @dabbish @jas0nh0ng and I ran the first survey study comparing people’s installation preferences for different contact-tracing designs based on their different trade-offs (U.S.-only sample on MTurk, N=208).
Relatedly, happy to share our CHI’22 LBW paper about a large-scale analysis of privacy labels of 1.4 million iOS apps. This work is led by my intern Yucheng Li. Other authors are Deyuan Chen, Yuvraj Agarwal, @lorrietweet, @jas0nh0ng, and me.
preprint:
On another note, LLMs enhance human abilities, but their use raises privacy concerns. We found users fear being discovered using AI, worrying that others will question their abilities.
Simply love this quote from “Letters to a Young Poet”. It makes me think that maybe writing papers is similar to writing poems in some sense. If you don’t believe in your own work’s potential, then who will?
The "Sparks of AGI" paper shows how #gpt4 can detect PII better than an open-source tool custom-built for this task, but you have to trust OpenAI and share the data with them first to remove the PII. Isn't that missing the whole point of PII detection?
Analyzing real-world ChatGPT conversations from the ShareGPT dataset, we found users disclose PII and personal experiences, not only their own but also others'. The use of interactive prompt strategies led to a gradual increase in disclosure, sometimes prompted by ChatGPT.
I finally read through this article carefully, "What’s the Role of Developer Experience in Programming Languages Research?" by @jeanqasaur -- It resonates with me sooo much as I think of my work in helping developers build privacy
In celebration of #DataPrivacyDay today, we're highlighting a recent CHIMPS Lab paper from #CSCW2020: "How Developers Talk About Personal Data and What It Means for User Privacy: A Case Study of a Developer Forum on Reddit" by @tianshi_li, Elizabeth Louie, @dabbish, and @jas0nh0ng.
Need RT! Have you built and published iOS apps on the App Store? We (CMU researchers) are looking for participants for a 90-minute interview to offer perspectives on the app submission process. #iosdev
Compensation: $70
Sign up/find more info:
A quick list of some of the best material out there on the #UX of #ExposureNotification apps (one of the results of today's GAEN Symposium hosted by LFPH):
@scyrusk @Namzo098 It seems like people haven’t mentioned research on the dev side? @yazz_acar did a lot of research on secure coding, e.g., “You get where you're looking for: The impact of information sources on code security”. And @mTahaei and I also have work on this, more focused on privacy.
I was there to witness the entire journey of this project coming to life. This is groundbreaking work that has the potential to revolutionize the way people make multimodal apps!
🎉 Happy Weekend, everyone! Ever wonder how Large Language Models can help your app do better? 🤔 Introducing ReactGenie, a cutting-edge framework that leverages LLMs to create intuitive and powerful multimodal mobile app interactions! 🚀
These are just some reflections from watching the video. Overall, I think there is a trend that devs will play an increasingly important role in privacy, and it raises more research questions regarding how to support and audit devs.
@heycori @d_chrisbrown2 I have tried many approaches and platforms to recruit professional developers and have developed some know-how about which methods are more efficient and what types of developers I can get from which platform. Not sure if this is helpful to you, but I'm happy to chat more!
Our paper (authors: @tuochao, Fang Qin, @MonicaSLam, @landay) about a practical solution to full-body tracking in VR is conditionally accepted to #CHI2022! Here are some funny pictures we generated (based on COCO) for improving the 2D pose detector in the system 🤣
I really hope #chi2022 in-person attendees fill out the post-conference survey that is coming. I’m going to fill it out myself and ask the PhD students working with me to do so as well. A high response rate this yr seems so impt for future conf planning and for public health.
2) It's surprising that privacy manifests and privacy nutrition labels, despite using the same terminology, are separate features. Devs must cross-check a privacy report generated from the manifests to create the labels, which raises barriers to adoption.
@jeanqasaur Overall, I can definitely see why developer tools for privacy are not the best direction for your company. And that’s where I believe research should fill in, and I’m very passionate about planning my thesis proposal around this topic :)
Apps that provide privacy labels on the iOS App Store reported data collection increasingly before the week of 04/23 but decreasingly afterwards (esp. for tracking). Could any changes around the Spring Loaded event have caused this? #ios #iosdev #privacy
A key challenge in developer-driven solutions is verifying the accuracy of information provided by developers. Past studies found errors in privacy labels created by developers, so how can we ensure that the reasons given for using APIs match how the data is actually used?
@jeanqasaur
Privacy laws/platform policies have created more incentives for developers to talk about privacy (see my recent work at CSCW), but the discussion is still at a superficial level. The knowledge gap is just too big.
Interesting question 🤔 I could think of papers that study s&p concerns/strategies of these ppl but not much about novel systems/tools…the first thing that came to mind is the iOS 16 safety check feature… curious to learn more from ppl more knowledgeable about this field
I'm looking for papers that introduce a novel system/tool that is meant to make some component of cybersecurity and/or digital privacy easier for *specific* under-served, ability-diverse, vulnerable, marginalized, and/or high-risk populations...
Pointers?
@kjw_chiu @mikarv @yvesalexandre Thanks for the question! We stated in our survey that tech-savvy users can infer identities of *nearby* infected users. Although the app may not collect location, tech-savvy users may install the app modded by other ppl and get location-time tuples of infected users.