🚨Out in Science!🚨
Conspiracy beliefs famously resist correction, right?
WRONG: We show brief convos w GPT4 reduce conspiracy beliefs by ~20%!
-Lasts over 2mo
-Works on entrenched beliefs
-Tailored AI response rebuts specific evidence offered by believers
1/
🚨Out now in Nature!🚨
A fundamentally new way of fighting misinfo online:
Surveys+field exp w >5k Twitter users show that gently nudging users to think about accuracy increases quality of news shared - bc most users don't share misinfo on purpose
1/
🚨New WP🚨
Many people - from Trump to
@elonmusk
- have accused Twitter of anti-conservative bias
Is this accusation accurate?
We test for evidence of such a bias empirically - and it turns out it's more complicated than you might think...
1/
🚨Working paper alert!🚨 "Understanding and reducing the spread of misinformation online"
We introduce a behavioral intervention (accuracy salience) & show in surveys+field exp w >5k Twitter users that it increases quality of news sharing
1/
🚨WP🚨
Conspiracy beliefs famously resist correction, right?
WRONG: We show brief convos w GPT4 reduce conspiracy beliefs by ~20pp (d~1)!
🡆Tailored AI evidence rebuts specific arguments offered by believers
🡆Effect lasts 2+mo
🡆Works on entrenched beliefs
🚨New working paper!🚨
"Fighting COVID-19 misinformation on social media:
Experimental evidence for a scalable accuracy nudge intervention"
We test if an intervention we developed for political fake news works for
#COVID19
- seems like YES!
PDF:
1/
🚨Out in PNAS!🚨
Political microtargeting has caused great concern (eg Cambridge Analytica)
But does it actually WORK?
To find out we quantified the persuasive advantage of political microtargeting for US issue advocacy
The answer is complicated...
1/
🚨Out in
@NatureHumBehav
🚨
We examine psychology of misinformation across
16 countries, N=34k
➤Consistent cognitive, social & ideological predictors of misinfo belief
➤Interventions (accuracy prompts, diglit tips, crowdsourcing) all broadly effective
1/
🚨WP: Examining psychology of misinformation around the globe🚨
Across 16 countries N=34k
➤Strong regularities in cognitive, social & ideological predictors of misinfo belief
➤Broad intervention efficacy (accuracy prompts, literacy tips, crowdsourcing)
1/
🚨Accuracy Prompt Meta-Thread🚨
We've proposed that prompting people to think about accuracy reduces misinfo sharing. But is this effect replicable & robust?
@GordPennycook
& I analyzed 20 exps, N=26k
Answer: resounding YES, across many headlines/prompts
1/
KEY TAKEAWAY: So even though Reps were 4.6x more likely to get suspended by Twitter than Dems, this does NOT provide evidence of partisan bias. It could simply be the result of suspending people for sharing misinformation - which has bipartisan support!
Even more than in past work, Republicans shared links from MUCH lower quality news sources than Democrats - with source quality judged either by fact-checkers or a politically balanced set of laypeople (heading off claims of bias in the evaluation of what counts as misinfo)
Ref said they were worried that our "large sample size raises potential concerns regarding overpowered statistical analyses" (N=800 per cell for a 2x2 exp design). My understanding is that more power is better for experiments. Agree or disagree? Any cites to suggest on this point?
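For what it's worth, a quick simulation makes the point that a large N mostly buys precision, not spurious significance. This is my own back-of-the-envelope sketch, not from the paper: the d = 0.1 effect size and the collapse of the 2x2 into a simple two-group comparison are illustrative assumptions.

```python
# Rough power sketch (illustration only): with N=800 per cell in a 2x2
# design (1600 per side when collapsing across the other factor), even a
# small standardized effect of d = 0.1 is detected most of the time.
import math
import random

def welch_t(x, y):
    """Two-sample Welch t statistic (unequal-variance form)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

random.seed(0)
d = 0.1      # hypothetical small effect (assumption for illustration)
n = 1600     # 800 per cell, collapsed across the other factor
sims = 400
hits = sum(
    abs(welch_t([random.gauss(d, 1) for _ in range(n)],
                [random.gauss(0, 1) for _ in range(n)])) > 1.96
    for _ in range(sims)  # |t| > 1.96 ~ two-sided alpha = .05
)
print(f"approx power at d={d}: {hits / sims:.2f}")
```

Under these assumptions power comes out high even for a tiny effect, while with d = 0 the rejection rate stays near the nominal 5% - a big sample shrinks confidence intervals, it doesn't inflate false positives.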
My take on Psych of Misinformation (15min vid)
Repetition, consistency w prior beliefs, source cues = more belief regardless of truth
Lack of reasoning, lack of literacy = more belief in falsehoods specifically
Crowdsourcing & accuracy nudges can help
The root of the challenge when inferring political bias is that Republicans/conservatives are substantially more likely to share misinformation/fake news, as shown eg by
@andyguess
@j_a_tucker
@grinbergnir
@davidlazer
et al
Final tidbit:
In survey, we asked some Americans if a person who is anti-QAnon is anti-conservative - answer was overwhelmingly "No"
But when we asked other people if platforms taking action against QAnon was anti-conservative, many Republicans said "Yes"
Double standard?
Today
@GordPennycook
& I wrote a
@nytimes
op ed
"The Right Way to Fix Fake News"
tl;dr: Platforms must rigorously TEST interventions, b/c intuitions about what will work are often wrong
In this thread I unpack the many studies behind our op ed
1/
We then ask how well different features predict a user's probability of being suspended.
KEY RESULT: Suspension is predicted as well or better by a user's misinformation sharing as by their politics! And not at all predicted by other features
📢POSTDOC OPPORTUNITY📢
Come join me
@GordPennycook
&
@tomstello_
as we launch a research program using dialogues with LLMs as a tool to study human psychology and decision-making! Located at MIT or Cornell, and open to applicants from all kinds of intellectual backgrounds/training
🚨New WP🚨
Field experiments with 33 million FB users & 75k Twitter users: Ads prompting users to think about accuracy reduce misinformation sharing!
Accuracy prompts offer platforms a content-neutral approach that is scalable and preserves user autonomy
Come do NSF-funded postdoc
@MIT
with me &
@AdamBerinsky
to study misinformation & what to do about it!
We are interested in candidates with a wide range of backgrounds - feel free to DM w questions
Plz RT / share widely!
🚨Out in
@ScienceAdvances
🚨
SCALING UP FACT-CHECKING USING THE WISDOM OF CROWDS
How can platforms identify misinfo at scale? We find small groups of laypeople can match professional factcheckers when evaluating URLs flagged for checking by Facebook!
1/
Putting these two observations together shows the problem: in responding to bi-partisan demand for reducing misinformation, platforms may wind up enforcing on conservatives more so than liberals
Something optimistic for once about fake news!
Dems & Reps both put more trust in mainstream news outlets than fake/hyperpartisan ones. So having algorithms downrank content from untrusted sources might reduce misinfo on social media.
w/
@GordPennycook
Is falling for fake news driven by motivated reasoning? No - it's more inattention than willful ignorance!
We find BETTER truth discernment for news that is ideologically aligned & reflective people are more discerning regardless of news' partisanship
🚨WP🚨
We test 9 online samples and find clear tradeoffs between attentiveness and representativeness - which sample is best depends on research q and priorities. For social/political qs I rec Bovitz/Lucid, for complex designs I rec Cloud/Prolific.
Misinfo researchers, what do you see as the major controversies / points of disagreement within the field currently? Plz share widely to see what we come up with - thanks!
🚨Working paper alert!🚨
"Scaling up fact-checking using the wisdom of crowds"
We find that 10 laypeople rating just headlines match performance of professional fact-checkers researching full articles - using set of URLs flagged by internal FB algorithm
We are hiring up to 3 post-docs interested in misinformation or meta-cognition - review starting Feb 1.
Please apply!!
Looking for wide range of backgrounds: eg psych, computational social science, poli sci, comm, econ, comp sci - DM with qs.
Share/RT appreciated!
We (myself &
@DG_Rand
) are looking for up to *three* postdoctoral fellows to begin summer 2020. If you're interested in misinformation/fake news or the metacognition of belief (and related), please consider applying! Review will begin Feb 1st. Ad is here:
and as we show in a large national survey, there is bi-partisan support for platforms taking action to reduce misinformation - this is true both for misinfo in general, and for a specific instance of misinfo (QAnon conspiracy theories)
🚨New anti-misinfo tools🚨
We teamed up w
@Google
@Jigsaw
to design a suite of accuracy prompts to combat
#COVID
misinfo online
🡆We identify many ways to shift attention to accuracy
🡆Effective regardless of party/COVID concern level
Peer-reviewed paper
🚨JOB ALERT🚨
MIT Sloan/College of Computing is hiring assistant prof in computational social science!
We are looking for candidates at intersection of CS and social science - *very* broad search
Feel free to DM w Qs (I'm on committee)
Please RT widely!!
Out in
@nature
w
@GordonKraftTodd
: How can advocates best spread prosocial innovations (eg solar panels)? Adopt the innovations themselves - this communicates real belief in the benefits. Plus, empirical support for CREDs cultural evolution theory!
To evaluate this possibility empirically, we identified a politically balanced set of 9k politically active Twitter users who shared election hashtags in Oct 2020. We followed them for 6mo after election, recording what links they shared & which accounts got suspended by Twitter
Why do people share misinfo? Are they just confused and can't tell what's true?
Probably not!
When asked about accuracy of news, subjects rated true posts much higher than false. But when asked if they'd *share* online, veracity had little impact - instead was mostly about politics
🚨Out in PNAS🚨
Should we be worried about persuasive power of political video (ads, deepfakes, etc)?
Maybe not so much...
In 2 large exps, video (relative to text) caused
→More belief that depicted event occurred
→BUT only slightly more persuasion
1/
Another example of a literacy intervention that reduces belief in both false AND true information, and therefore is on balance likely counterproductive. This is a very common (and problematic) pitfall of literacy interventions!
Our group at MIT is hiring an assistant professor! Come join our wonderful crew
@RaBhui
@deaneckles
Drazen Prelec
@ce_tucker
@reneegosline
@sinanaral
Duncan Simester, John Hauser, JuanJuan Zhang. Applications due Aug 12 - happy to answer any qs!
🚨New in HKS Misinfo Review🚨
"Examining false beliefs about voter fraud"
>65% of Reps believe Trump won election
40% would think Biden was illegitimate even if Trump conceded
Most rejected violence
More political knowledge -> more false beliefs
🚨Out now!🚨 in ManSci
THE IMPLIED TRUTH EFFECT
Attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings
Potentially a BIG PROBLEM for social media given scalability issues for fact-checking
1/
🚨Major paper by
@ArianaMGalian
& Higham in press at JEP:G 🚨
Widely adopted "Bad News" and "Go Viral" games don't actually make people better at spotting misinformation - they just reduce belief in everything😬
PDF:
1/
4yo: My toe hurts
Me: Here, I'll put a bandaid on it
4yo: No, I want to show mommy
Me: Why?
4yo: She's a doctor. You're just a teacher. You don't know about doctor things.
Burn
#notthatkindofdoctor
#AcademicTwitter
How big of a platform is
@Twitter
? Now that
@elonmusk
made view counts public, we (
@_mohsen_m
) can find out
ANSWER: Not nearly as big as it looks! For accounts with lots of followers, only a small fraction (~1%) see median tweet 😬 (and this overcounts, as it includes RT views)
🚨Out in
@NatureComms
🚨
New measure of Twitter users' exposure to misinfo from *ELITES* using
@politifact
ratings of the elites a user follows
➤Predicts users' misinfo sharing
➤More extreme Reps = more exposure
Check out your own exposure w/ web app!
1/
🚨Out in PoPS🚨
Can crowds help identify misinfo at scale? In this review we show seemingly contradictory findings in the lit are simply due to different analytic approaches. In all data, crowd is highly correlated w experts!
Crowd ratings=useful signals
@kylascan
@elonmusk
@Cernovich
If people want more details, here's a thread walking through our study about whether Twitter is biased against conservatives:
More important things happening right now, but if it's useful for researchers using MTurk:
@AaArechar
finds substantial changes in pool during quarantine
tl;dr: participants are less attentive, more Republican, less white
"Turking in the time of COVID"
1/
COVID-19 misinformation paper out today in Psych Science
Hope this is both scientifically interesting and practically useful, and will lead to much more research on this important topic!!
Our paper "Fighting COVID-19 misinformation on social media: Experimental evidence for a scalable accuracy nudge intervention" is now in press at Psych Science!
I’m super proud of this paper - but first, a thread on the results.
Preprint:
"Belief in Fake News is Associated with Delusionality, Dogmatism, Religious Fundamentalism, and Reduced Analytic Thinking" out in JARMAC just in time for the midterms! Title pretty much says it all.
Non-paywall PDF
w Bronstein, Bear,
@GordPennycook
& Cannon
Should you be AFRAID OF DEEP FAKES?
Maybe not!
In new working paper, we compare video vs text & find
* Slight increase in belief that depicted events occurred
* But no increase in persuasion for political content (small effect for non-political)
1/
New in
@ScienceMagazine
What FB news drove COVID vax hesitancy in US? Flagged misinfo?
Not so much: We find unflagged ‘vax-skeptical’ news had *46X larger* impact than flagged misinfo
Why? Flagged misinfo had bigger impact when seen, but ~100x fewer views
We argue the answer is *inattention*: accuracy motives are often overshadowed bc social media focuses attention on other factors, eg desire to attract/please followers
This lines up w past finding that more intuitive Twitter users share lower quality news
Me: If the number of lilypads in a pond doubled every day...
Wife: Oh God. It's one of those CRT questions. I'm going to get it wrong, and that means I believe fake news
I explain why I think being an academic is like playing in a band, and how I try to take a punk rock approach to social science, in this
@MIT
News profile
Plus, I show off my powered-up new beard
@mnvrsngh
This is a really interesting thread! It makes me think of this Science paper about how American participants would rather give themselves electric shocks than do nothing
Interesting piece from Nick Chater and George Loewenstein arguing that behavioral science research has focused too much on behavior change at the individual level, and neglected system-level forces (and need for system level change)
*Extremely* excited for this to see the light of day! I was honestly pessimistic about interventions being able to reduce anti-democratic attitudes. Very happy to be wrong! Lots of insights here that are practically useful + theoretically interesting. Check out the thread!👇
🚨New WP: How can we reduce partisan animosity & anti-democratic attitudes in US?🚨
We share results of the Strengthening Democracy Challenge: N=32k megastudy testing 25 depolarization interventions
Shows effective treatments for many neg outcomes!
🧵👇
🚩Working paper🚩
DIGITAL LITERACY & SUSCEPTIBILITY TO FAKE NEWS
Lots of assumptions - but little data - out there on link b/w digital literacy & fake news
We find 2 diff digital lit measures predict ability to tell true vs false - but NOT sharing intent
1/
🚨New WP🚨
How can more Republicans be convinced of the importance of climate change?
We show tweets from
@elonmusk
- a fav of the right, but still pro-climate - sig increase climate concern/action among Reps. Musk can be a powerful climate messenger!
So why this disconnect between accuracy judgments and sharing intentions? Is it that we are in a "post-truth world" and people no longer *care* much about accuracy?
Probably not!
Participants overwhelmingly say that accuracy is very important when deciding what to share
New WP for your doomscroll:
➤We follow 842 Twitter users with Dem or Rep bot
➤We find large causal effect of shared partisanship on tie formation: Users ~3x more likely to follow-back a co-partisan
Led by
@_mohsen_m
w/
@Cameron_Martel_
@deaneckles
1/
*Very* excited for this paper-led by amazing
@Cameron_Martel_
(on the job market!)-to be out. He validates a scale for trust in fact-checkers & presents experiments w 14k subjects showing that fact-checker warnings reduce misinfo belief + sharing even among those low in trust!
🚨New in
@NatureHumBehav
🚨
Will misinfo warning labels backfire for ppl who distrust fact-checkers? No!
Labels reduce belief in & sharing of false news even for those highly distrusting of fact-checkers - warning labels are a key tool for platforms!
Has anyone else been thinking about how our phones are amazing disease vectors?
*Put phone in pocket
*Wash hands maniacally for 20s
*Take phone right back out
Seems like a major issue that's missing from all the COVID-19 recs going around?
Or am I missing something?
Really interesting-made me realize that having switched in from the natural sciences, something I *love* about the social sciences is that *every* conversation is a scholarly conversation. It's so exciting to be able to use the lens of what we study to understand our social world
Interested in how ranking algorithms work, challenges in understanding their impacts (even with full data access), and policy implications?
Check out extremely clear doc from
@deaneckles
as part of his senate testimony
Here are some highlights:
Very cool to see TikTok (w Irrational Labs) translate work from me &
@GordPennycook
's team into a successful anti-misinformation accuracy nudge intervention!! I'm impressed by TikTok's willingness to test+implement this kind of intervention & hope that Twitter FB etc will follow
Today I taught my first virtual class
Halfway through, an ambulance pulled up across the street
Through my window, I watched paramedics in masks load my elderly neighbor onto a stretcher. His wife stood in the street filling out forms
We are in an awful moment in time
🚨New WP🚨
Are Republicans really more inclined to share fake news? Or just exposed to more of it? And are they resistant to accuracy nudges?
To find out, we presented a national sample from YouGov with a large set of politically balanced headlines
1/
🚨New
@PNASNexus
🚨
We join work on misinformation & harmful language and find:
➡More harmful language in tweets w low-quality news links β=0.1 & in false headlines β=0.19
➡Users who share more misinfo use more harmful language in non-news tweets β=0.13
These studies help us see past the illusion that everyday citizens on the other side must be either stupid or evil- instead, we are often simply distracted from accuracy when online. Another implication of our results is that widely-RTed claims are not necessarily widely BELIEVED
Seeing talk about if accuracy prompts work for conservatives
In big (20 studies, N>26k) meta-analysis we show the answer is a clear YES:
Regardless of ideology, getting people to think about accuracy increases the quality of news they would share
1/
🚨WP alert🚨
Ukraine🇺🇦 has long been target of Russian🇷🇺 disinformation. In 2 large surveys in 🇺🇦 we studied predictors of truth discernment
Key finding: More analytic thinking⇿less susceptibility to pro-Kremlin disinfo, even for those who are pro🇷🇺
1/
🚨Out today in
@NatureHumBehav
🚨
In this comment led by
@BrianMGuay
we argue that misinfo researchers need to look at effects on/relationships with both false *and* true news (not just false) - and have discernment between the two as the key outcome
In one exp, Treatment participants rate accuracy of every news post before indicating how likely they'd be to share it. In Control they just indicate sharing intentions
Treatment reduces sharing of false news by 50%! Most of remaining sharing of false news explained by confusion
New WP led by
@MohsenMosleh
w
@GordPennycook
@AaArechar
"Digital Fingerprints of Cognitive Reflection"
In a hybrid lab-field study we look at relationship between (i) the tendency to go with one's gut vs stop+reflect &
(ii) behavior on Twitter
1/
New meta-analysis of 91 experiments confirms that promoting intuition increases cooperation - and that this effect is not limited to emotion-induction manipulations.
Next step for this field: better paradigms that achieve high compliance & comprehension!
Cognitive Reflection Test (CRT) scores are remarkably stable over time, correlating at r=.75 even when separated by 2+ years!
Good news for work using CRT as an individual difference measure.
New WP w Stagnaro &
@GordPennycook
(N=2,992)
"Human Cooperation & the Crises of Climate Change, COVID-19, and Misinformation"
Annual Rev Psych piece out today w
@PaulvanLange
Points out each of these crises is a social dilemma (self-interest vs greater good)
Reviews work on promoting cooperation
🚨New WP🚨
Intuition favors belief in false claims. But why?
@ROrchinik
argues it's the result of rationally adaptive intuitions adjusting to low base rate of false claims in the US media environment
Check out Reed's talk at SJDM on 11/18 9:10am!
PDF:
🚨Out in JQD🚨
→We have crowds rate 3 large sets of headlines on various dimensions
→Find factors capturing (i) accuracy (ii) evocativeness (iii) familiarity
→Each separately predicts sharing
☠️Evocative headlines are less true but more shareable☠️
Our treatment could be easily implemented by platforms, eg periodically asking users to rate the accuracy of random posts. This primes accuracy (+generates useful crowd ratings to identify misinformation)
Scalable + doesn't make platforms arbiters of truth!
This study is the latest in our research group's efforts to understand why people believe and share misinformation, and what can be done to combat it. For a full list of our papers, with links to PDFs and tweet threads, see
Finally, if you made it this far into the thread and want to know how this work connects to broader psychological and cognitive science theory, check out this recent review "The Psychology of Fake News" that
@GordPennycook
and I published in
@TrendsCognSci
Next, we test our intervention "in the wild" on Twitter. We build up a follower-base of users who retweet Breitbart or Infowars. We then send N=5379 users a DM asking them to judge the accuracy of a nonpolitical headline (w DM date randomly assigned to allow causal inference)