What works best to change diets: nutritional labels or price policies?
It's part of the titanic battle pitting behavioral vs traditional economists: cues or incentives?
In our lab setting, labels win.
Working paper (with Muller, Ruffieux)
🧵:
Does your (EU) country attract or lose top researchers?
AT, CH attract a lot.
FR, DE, BE, ES are open places: as many leave as arrive.
IE is a closed system.
Italy, there is something very wrong: *very* few come, many go.
Plot made with
@ERC_Research
data.
Ok this is big.
Web of Science just removed MDPI's flagship journal IJERPH from its lists. This means IJERPH no longer has an Impact Factor.
Why is this big? What are the implications? 🧵
Academic publishing is in deep trouble.
We all know it: paper mills, thousands of Special Issues, retractions, skyrocketing APCs...
What is going on?
@danbrockington
,
@HansonM90
,
@pagomba
& I have a preprint just out, with tons of data.
A 🧵
Deep down we already knew it, but the data are brutal.
Out of every 100 ERC researchers hosted at Italian universities, 91 are Italian.
Out of every 100 Italian ERC winners, 55 carry out their grant abroad.
Many Italians leave, but above all, nobody arrives.
Something really has to be done.
Yesterday I circulated a plot I made about ERC grantees.
I made a horrible mistake: I mistook Ireland (IE) for Israel (IL). The data concern Israel and **not Ireland**.
Also, data covers 12 largest ERC host countries.
I apologize. Here is the correct plot.
Many scientists are puzzled by
@MDPIOpenAccess
: is it predatory or not?
In a data-driven post I argue it is something different: aggressive rent extraction.
MDPI's strategy is innovative & successful, but unsustainable and ultimately bad for science.
Updated 2021 data on
@MDPI
special issues.
8 journals have more than 1000 open special issues.
35 have more than 1 *per day*.
Sustainability has 8.7 *per day*.
This is predatory publishing.
Stand clear of them.
This is infuriating.
How DARE YOU throw your RAs under the bus with not even a shred of evidence.
First, do the actual research yourself. You are a researcher, not a manager of underpaid RAs.
Second, you sign the paper, you are responsible for it.
Here is Gino's argument: her RAs prepared the data for analysis and conducted the preliminary analyses; she did not run the studies nor handle the data.
Nerd alert: here you'll find a LaTeX template to reply to referees.
Like everybody else I am just winging it, so I patched this together from code sent by
@ploteo
some years ago + googling + stitching in latexdiff code.
Feel free to use & share.
Code:
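For flavour, here is a minimal sketch of what such a referee-reply setup can look like. This is my own illustration of the idea, not the actual template: a numbered environment for each referee point and a visually distinct reply command.

```latex
% Minimal referee-reply skeleton (illustration only, not the linked template)
\documentclass{article}
\usepackage{xcolor}

% one environment per referee point, with a running counter
\newcounter{point}
\newenvironment{point}
  {\refstepcounter{point}\par\noindent\textbf{Point \thepoint.}\itshape}
  {\par}
% our replies, visually set off in blue
\newcommand{\reply}[1]{\par\noindent\textcolor{blue}{\textbf{Reply:} #1}\par}

\begin{document}
\section*{Reply to Referee 1}

\begin{point}
The identification strategy is unclear in Section 3.
\end{point}
\reply{We agree. We rewrote Section 3 and added Appendix B; all changes
are marked in the attached latexdiff file.}

\end{document}
```

From there, latexdiff output can be attached or stitched in as a marked-up version of the manuscript.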
Read this and try telling me that Stata is not a joke.
Changing the *order* of things in a formula impacts results because of (HIDDEN!) assumptions about which collinear terms get dropped, which Stata applies without telling you.
Come on.
Re-analysis shows that the choice of command and fixed effects are crucial for the paper’s results. The results disappear when we drop the year dummies, the month dummies, both these dummies, or even when we change the *order* of the fixed effects in the regression command. 7/11
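To see how ordering can matter at all, here is a toy illustration in Python/NumPy (not Stata, and not the paper's data). With an intercept plus full dummy sets for two factors, the design is collinear; a left-to-right rule for dropping redundant columns, the kind of hidden assumption at issue, then picks different reference categories under different orderings. The fitted values are identical, but the reported coefficients differ.

```python
import numpy as np

def keep_independent(X):
    """Scan columns left to right, keeping each column only if it is
    linearly independent of the columns already kept. This mimics the
    order-dependent way some packages resolve collinearity."""
    kept = []
    for j in range(X.shape[1]):
        if np.linalg.matrix_rank(X[:, kept + [j]]) > len(kept):
            kept.append(j)
    return kept

# balanced toy data: 12 obs, a 3-level factor and a 2-level factor
n = 12
g1 = np.arange(n) % 3
g2 = np.arange(n) % 2
rng = np.random.default_rng(0)
y = rng.normal(size=n)

D1 = np.eye(3)[g1]          # full dummy set, factor 1
D2 = np.eye(2)[g2]          # full dummy set, factor 2
ones = np.ones((n, 1))

# same regressors, two orderings of the collinear design
Xa = np.hstack([ones, D1, D2])
Xb = np.hstack([ones, D1[:, ::-1], D2])   # factor-1 dummies reversed

ka, kb = keep_independent(Xa), keep_independent(Xb)
ba = np.linalg.lstsq(Xa[:, ka], y, rcond=None)[0]
bb = np.linalg.lstsq(Xb[:, kb], y, rcond=None)[0]

print("coefficients, ordering A:", ba)
print("coefficients, ordering B:", bb)
# identical fits, different reported coefficients:
print(np.allclose(Xa[:, ka] @ ba, Xb[:, kb] @ bb))  # True
print(np.allclose(ba, bb))                          # False
```

Here the two answers are reparameterizations of the same model; the danger is reading the coefficients as if the reference category were fixed, when it silently depends on column order.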
I got interviewed by Science about MDPI's IJERPH losing its impact factor.
A small thread to list what we know of this case as of today (29.3) -- if you know more, chip in.
A 🧶
This paper will make it into textbooks. General ones, stuff that will be known everywhere, by everyone. It is such a stunning intellectual achievement, and it is rare to see these real, deep breakthroughs.
It is also humbling to know it took more than a decade to brew.
Kudos!
Giulio Regeni was a Cambridge PhD student. He was murdered 8 years ago in Egypt because of his research.
Next week an academic event is to be held in Egypt. 27 other Italian colleagues,
@giannetti_cate
and I have written a letter to raise awareness.
On a side note, I hate to say that I told you so, but I told you so.
Two years ago I ran an analysis of MDPI that predicted that MDPI's trajectory would worsen over time, bringing the whole house of cards down.
Well, here we are.
You can fool some people sometimes, but you cannot fool all the people all the time.
And when you open 56k special issues in a year, there cannot possibly be that many good (or even decent) papers out there.
At WoS they started noticing, and went for the biggest fish first.
This decision might be reversed, it is being challenged. This is not the end of the story.
But all pyramid schemes unravel eventually, all bubbles burst.
It's the beginning of the end.
If you are interested in scientific publishing, MDPI and all of this, read this piece by
@HansonM90
. Full of insights, data and experience, it summarises well where we're at.
These are *enormous* numbers, making IJERPH one of the largest open-access mega-journals in the world.
So now you know why this is BIG news.
The second largest journal in the world just lost its impact factor.
First, the facts. WoS announced several de-listings with the aim of keeping the publishing sector clean.
IJERPH loses its IF alongside ~50 other journals.
This impacts nearly all publishers, from Elsevier to Hindawi.
How best to elicit beliefs?
Experimental Economists - now mostly heading towards
@EcScienceAssoc
Bologna - have used several interfaces: text, sliders
With
@Thomas__de_Haan
we came up with the Click-and-Drag belief elicitation interface - and it rocks. We have evidence.
A 🧵
#AcademicTwitter
: if you get invited to edit a Special Issue at
@MDPIOpenAccess
journals, before you feel flattered and jump on it, consider this:
MDPI journals run *hundreds* of SIs per year.
They chose to flood the market for profit, and are using you to do it. Say no.
De-listing such a big journal is clearly a sign.
And the sign says this: you *do* get a high IF, dead easy publication to add to your CV. But it might be worth nothing a few years down the line, when people stop believing in magic.
About to publish in a scientific journal? Want to know more about specific publishers?
We released The Strain Explorer, where you can find data on turnaround times, output, shares of special issues, and impact factor inflation.
Here:
Read along for a 🧵
But why does it matter for scientists?
Colleagues constrained by the publish-or-perish model were feeding these mega-journals. The system asks me to publish lots of papers with a high IF, and these journals conveniently provide me just that -- for a few thousand bucks.
The list of de-listed journals is here, thanks to
@Alina_Botezat
…
So: lots of journals lose their IF, among them 2 from MDPI, including its biggest journal, IJERPH.
So while a handful of journals got de-listed, most other journals follow very, very similar patterns. If the flagship sinks, then it might be a sign that the whole business model is shaking.
Might be time to jump ship.
This plot focuses on MDPI special issues, but a similar plot could be made for others. Publishers got away with:
- exponential growth in papers
- lightning-fast peer review times
- low and decreasing rejection rates
All this under the cover of an IF.
That cover is gone.
"For AER & JEEA, but not for EE, papers authored by economists with social ties to the editors receive significantly fewer citations"
"authors at longer distances to editors have to write papers of higher quality in order to get published in AER & JEEA"
Important paper!
Relieved that this joint work with
@JnsC_econ
and
@matthiasgreiff
finally found a home after quite an odyssey. Particularly happy about the fair and constructive editorial and review process at the Journal of Behavioral and Experimental Economics...
Second, you might think that IJERPH is special in some respect to have been de-listed.
But you'd be wrong.
Most mega-journals share a similar model: decent IF, explosive growth, low & decreasing rejection rates, fast & decreasing turnaround.
Most MDPI journals are the same:
So OA mega journals offer you a 50%+ chance of a high IF publication in under a month. Add that to your CV, boom.
If it sounds like it's too good to be true, it's because it is.
Publishers baited us, and we happily went for it -- spending millions in APCs in the process.
So I just learnt that with the {magick} package you can add an animated gif to a ggplot.
Beware, oh attendees of my future presentations, for I will use this in oddly inappropriate ways!
But why does this matter?
First, it matters for MDPI.
Mega journals aren't usually chosen for their editorial rigor (>50% accept in ~1 month with revisions) but because they give you a reputation badge that says "high IF".
That's gone. It usually spells doom for the journal.
How best to elicit beliefs in online surveys?
With our Click-and-Drag belief elicitation interface -- now with a ready-made Qualtrics plugin!
Info, new working paper, access to the Qualtrics plugin, live demos, and more here:
Read the 🧵for details
How good are our laboratory measures of risk attitudes?
I (re)launched a large meta-analysis to find out.
All data and analyses are available on an interactive shiny app:
Want to contribute your data? DMs are open!
@ldv_ldv
It doesn't seem strange to me: in many courses, exams from previous years are available. In both courses I taught in Italy at the Master's level, I was the first to provide them.
A teacher's goal is not to trick students, but to get them to learn.
The best thread I read on the Ariely et al data fabrication.
My generation, especially in social psy and behavioural econ, has been bred and fed on this.
About time we scrap it completely.
I want serious, rigorous, boring *measures* of important things. Not exciting tricks.
The result has been a research program that aims to find surprising, counterintuitive effects -- where some tiny, almost unnoticeable aspect of the situation radically changes people's behavior
Some numbers.
IJERPH published 17085 articles in 2022. This is 13 times as many as in 2016, when it published 1318.
Average turnaround time from submission to acceptance, including revisions, is 41.5 days, down 33% from 62 days in 2016.
Rejection rate is 45%, down from 57% in 2016.
First, the problem.
In the end it's simple: we publish A. LOT. OF. PAPERS.
We added ~900k papers/year since 2016.
This squeezes more and more time out of a limited - and stagnant - pool of scientists. And it costs a LOT of money.
It's not sustainable.
We call this *strain*.
This is the scientific publishing industry for you.
MDPI milks you for money. Nature milks you for money. Elsevier, Wiley, Frontiers... you name it.
They do little and reap large benefits.
The game is rigged, and they always win.
We ought to stop playing.
How? A thread.
So, take stock. Nature Publishing Group creates a new journal. It’s nothing, but they know the name Nature will rock the boat. So they price it at a ridiculous level at the outset, gambling that we scientists will do their bidding. 5/10
Here comes a little story on how *not* to engage in scientific debate, involving
@FrontiersIn
.
It features the strain on scientific publishing paper (), linear exponentials, derogatory accusations, misleading data visualisation and straw men.
Off we go!
@TomValletti
I would love to hear that *actual* applause erupted. But I suppose the applause was just a figure of speech and the scene ended in silence. Still, I want to believe: people applauded the boy and shamed the lady. Did they? They did, right? Right?
🇪🇺scholar willing to work in🇮🇹?
Easy!
You just need to provide, *to be allowed to apply*:
- PhD diploma + official translation
- undergrad diploma + transl
- undergrad exams & marks + transl
- approval from🇮🇹embassy in your country
But please apply, we welcome foreigners!
The journal is (was?) still growing exponentially, with 4k+ articles already published in 2023.
IJERPH also has 3099 (!) open special issues for 2023, more than 8 per day, up from 754 SIs closed in 2023.
(Image from
@unnombrealazar
)
Tip to all researchers: start documenting what you do on a project from day 1.
Set up a repo with all data & scripts. Write an extensive README. Document your choices.
Why do I know?
Well, a student wanted to reproduce a 2015 paper of mine.
It took a while to tame the beast.
One and a half years after the start of the pandemic I am about to run an experimental session with real people in a real room. New lab, new equipment, new pandemic rules on masks, sanitizer, distance, aeration. But the same old thrill of not knowing whether all will run smoothly. On we go!
Personal/Professional news.
A quick look at my birth date told me it was time to pay forward the support I've been enjoying since grad school.
So I stepped up & I was elected Vice President of the French Association for Experimental Economics ().
@KathrinWunsch
I am sure it was a good paper, as there are many in there. The problem lies not with the author but with the publisher. If the publisher inflates a journal to earn more, it does a disservice first and foremost to the authors who did a good job.
See
Spending the nights in the ICU with my daughter (she'll be fine) drives home a major advantage of socialized healthcare: no financial stress when you worry about a loved one. Lots to worry about, but not money. Yes, it's not "free". But it is, for those in need, when they need it.
For full transparency, the code used to generate the plot and scrape the website is available on my github page.
[disclaimer: all data is publicly available, I just collected and plotted it]
[I know it's crappy code, but it works, gimme a break]
You all know the attraction effect.
You want the small popcorn for $3, but end up buying the large for $7 because the medium sells for $6.50.
@agaudeul
and I have a new paper about it, and let us tell you: the effect disappears upon reflection.
paper
thread 👇
📢Conference📢
We will be hosting the 14th ASFEE French Experimental conference in Grenoble, France, June 27th-28th 2024
Keynotes by
@CBicchieri
and Guillaume Fréchette, sessions on replications and LLMs.
Info & submissions:
Elsevier is the one pocketing millions for their supposed role as quality control keepers. They chose that role, they get paid enormous money for it, they'd better do the job.
I just donated 100 USD to
@uri_sohn
and his colleagues at DataColada to help them with the lawsuit they are facing for uncovering scientific fraud with rigor & honesty. Scientific arguments should not be settled in courts.
Where does this *strain* come from?
- traditional publishers (Elsevier, Springer) increased both the number of journals & the number of papers therein.
- some open access publishers (MDPI, Frontiers) increased massively a small number of journals, using Special Issues.
@HendirkB
No, but
1. Gino was not in that league at the time when the papers were written, far from it
2. Have you seen the extent of worrying things in the data? At least *some* checks are in order
3. If RAs do all the work up to data analysis, why aren't they even cited in the acknowledgments?
Eliciting beliefs is cool -- but hard.
Ask for a point estimate -- what about uncertainty?
Ask for a distribution -- hard & unintuitive!
In a new article out in Judgment and Decision Making (OA here: )
@Thomas__de_Haan
and I have a solution!
A 🧵
Today in predatory conferences
I was just invited to present "insights from my pioneering work" to a crowd of "eager attendees that anticipate the opportunity" at a conference...
...in Nursing and Midwifery.
I am an experimental economist.
#EconTwitter
I am considering signing my referee reports. I have tenure. The reason is transparency, accountability & commitment not to be reviewer #2.
Now: is it dumb? Does it facilitate or hinder the editor's job? Is it unfair? Would it be interpreted as me looking for favours in return?
The two models add roughly the same amount of strain.
But they differ in all other respects.
Turnaround times (submission to acceptance including revisions) have gone up for most publishers.
Down for MDPI, Frontiers and Hindawi.
MDPI sits at the lowest end of credible lags.
You explain to subjects the uniform distribution. You quiz them: they pass.
Then you ask them where they believe the draw will land.
And you get this.
Why?
New paper in the Journal of Economic Psychology with
@FilippinAntonio
Peter Katuščák & John Smith:
Thread 👇
Despite the growth of web/field experiments, in Grenoble we still believe in old brick-and-mortar labs. Thanks to a grant by
@IdexFormation
, feedback from
@EcScienceAssoc
colleagues, support from
@GrenobleINP
and hard work by
@aurelevel
we renovated the lab. Isn't it beautiful?
Many AER papers are not robust -- basically because of p-hacking + errors.
But *this* is the key insight: it is caused by the incentives we face. If selective reporting lands you a job, people will do it.
Not easy to change incentives, but change we must.
A concluding takeaway is that science is hard. And the profession should be constructed in such a way – taking replication and reproducibility seriously, and institutionalizing it, rather than punishing it – to reflect that reality.
While strain comes from most publishers, following different business models, MDPI is an outlier in our analysis for all indicators.
Papers up by 1000%, mostly in SIs; turnaround times nearly halved over 7 years; falling rejection rate; the highest impact inflation.
It's impressive.
It is not for us to say what should be done to address this strain.
We have crunched the numbers -- a LOT of numbers -- so that YOU can have this conversation.
You the funders, you the authors, you the publishers.
Science depends on good publishing.
So, what do YOU think?
I was contacted by MDPI's Nutrients to edit a Special Issue.
I was seriously considering it, then
@marcfbellemare
@maritkragt
advised against it.
I am now convinced MDPI is milking the Nutrients cash cow, and will decline the offer.
Why? Look at this then follow on the thread.
🚨Summer School🚨
On June 17th-21st PSE will hold a summer school in Experimental Economics in Paris.
If you are a PhD student or ECR interested in getting up to speed with experimental methods: join us!
Info & subscriptions here:
Very important paper for all experimentalists out. If we care about external validity we shouldn't overly rely on "one neat measure" but take many and produce more reliable composite indexes.
Taking noise seriously rather than mining noise, as some have been doing for too long.
One way to substantially increase the external validity of economic games is by aggregating measures to reduce measurement error. We show this in our new paper (with Xinghua Wang), now published in Games and Economic Behavior.
Open Access:
Scientific publishers: "yes we charge you fees but we add a lot of value!"
Also scientific publishers: "hold my beer while we apply our writing & maths style, mess up your paper in 100s of places & then give you 48h to find and correct these errors"
We actually PAY for this???
Is intellectual property good or bad for innovation?
If it's bad, would we be able to transition from the current IP regime to a new, no-IP one?
In a paper freshly accepted at JEBO
@ismaelbensliman
, Raul Magni-Berton, Simon Varaine & I attack this question in the lab.
🧵👇
@tom_b_elliot
@MDPIOpenAccess
Here I analyze MDPI's business model using scraped data.
TL;DR: they invested in creating reputation till ~2016/17, then started milking this reputation for money.
It has spectacularly paid off (for them).
Their business is increasingly predatory.
So papers get reviewed faster. Good news, right?
Not if this compromises the quality of peer review. And not if rejection rates have also gone down at the same time.
Rejection rates are hard to find, but here is what we know: increasing at some publishers, decreasing at MDPI.
For all those who think that because some frauds in behavioural econ were caught, the whole field is a non-replicable mess: wrong.
Some (most?) results are robust & come out of serious work.
Onwards!
PREPRINT: Mental accounting - treating money differently at small & large amounts - holds 100% across 21 countries (plus 91% of 147 unique effects). Even financial literacy did *not* explain decision-making. We note implications for you, me, & policy.
A "new" paper by Antonio Filippin & me is out.
It contains a conjecture that turns out to be partly true & a meta-message: publication is a long & winding road.
Study: '12. 13 rejections. Accept: '22. Publish: '23.
Do safe options trigger gender differences in risk attitudes?
Mass resignation of editors at an Econ journal. This is getting more common as publishers try to turn even traditionally small/medium journals into larger outlets to secure more revenue.
Well done.
Now that collective action has started, don't stop: migrate the journal away!
Important news about the Journal of Economic Surveys:
All editors resigned in reaction to the policies that the publisher is imposing on the journal. Here is the resignation letter (that I co-signed as former AE).
This says something concerning about trends in publishers' policies.
[1/3]
One year ago, I was proudly tweeting about our large investment in our new brick and mortar lab. Of course despite our best intentions we never got to use it this year.
So here is to 2021: may we finally use the lab for our growing backlog of exciting projects!
This is true not only on average, but also in distribution.
For MDPI and Frontiers, not only are papers reviewed faster, but homogeneously so.
For MDPI, 90% of journals, irrespective of field, have a mean turnaround time between 31 and 43 days.
Different field, same lag.
This can be done in a few ways - the most obvious being self-citations.
If you cite other papers in the same journal your IF goes up. It's a win for the authors & the publishers, at little cost.
Again, two models here: MDPI, Hindawi and Frontiers are increasing self-citation.
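As a sketch of what such a self-citation indicator looks like, here is a minimal Python version. The function and the toy records below are my own illustration, not the data behind the plot: each record is a (citing journal, cited journal) pair, and the share of pairs where the two coincide is the journal-level self-citation rate.

```python
def self_citation_share(citations):
    """citations: list of (citing_journal, cited_journal) pairs.
    Returns the share of citations in which a journal cites itself."""
    if not citations:
        return 0.0
    self_cites = sum(1 for citing, cited in citations if citing == cited)
    return self_cites / len(citations)

# invented toy citation records
records = [("J1", "J1"), ("J1", "J2"), ("J2", "J1"), ("J1", "J1")]
print(self_citation_share(records))  # 0.5
```

Tracking this share over time, per journal, is what reveals the two diverging publisher models.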
Very pleased to run into my former Harvard colleague, and former editor of Econometrica, Andreu Mas-Colell, at the Econometric Society Meetings in Barcelona.
@ecmaEditors
@econometricsoc
#eeaesem2023
@GGrimalda
Totally agree. And the very fact that we need a central, private, for-profit authority, and we all live by its word, is complete madness on our part.
So there is this study that tells you that money can indeed buy happiness. And then, boom, 12 hours after it hits social media, we get a convincing statistical analysis of why the effect is actually tiny.
Something is deeply wrong about peer review & scientific incentives.
PNAS just published a paper titled "Experienced well-being rises with income, even above $75,000 per year". The author did not publish the raw data, but luckily, there's enough info to simulate it. I wrote a blog: and this thread, 1/7
Surely we can count on signals as the Impact Factor to sort through the increasing mass of articles?
Yes, but any metric can be gamed.
We create "impact inflation" as the ratio of IF to the Scimago Journal Rank, a less gameable metric.
IFs have been inflated -- by everyone.
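As a sketch, the metric itself is just a ratio; the journal names and numbers below are invented for illustration, not taken from our data.

```python
def impact_inflation(impact_factor, sjr):
    """Impact Factor divided by the Scimago Journal Rank (SJR).
    The higher the ratio, the more the IF is inflated relative to
    the harder-to-game SJR."""
    if sjr <= 0:
        raise ValueError("SJR must be positive")
    return impact_factor / sjr

# invented example journals: (IF, SJR)
journals = {"Journal A": (4.6, 0.7), "Journal B": (3.9, 1.8)}
for name, (jif, sjr) in journals.items():
    print(f"{name}: impact inflation = {impact_inflation(jif, sjr):.2f}")
```

Because the two metrics weight citations differently, the ratio is only meaningful in comparison across journals and over time, which is how we use it.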
Making replication packages compulsory for publication works: a large majority of papers end up being fully reproducible.
But this is only thanks to the incredibly valuable work of colleagues that devote their time and skills to the task.
Kudos to them!!
📢 New working paper alert! With the help of more than 700 volunteer reviewers, we (
@MilosFisar
,
@chrhuber_
,
@elena_katok
,
@ozkesali
and myself) assessed the reproducibility of ~500 articles published in the journal Management Science. Some results. 📊🔍
#Reproducibility
Fed up with Elsevier asking
1. what the area of origin of my ancestors is
2. what race I identify with
These questions are stupid, US-centric, and reinforce the racist views they were apparently designed to tackle.
There are NO races. Stop this.
I was today years old when I discovered that you can export a
#ggplot
to
#tikz
and then embed this in your latex document; as a result, your fonts in the plot will match your document fonts perfectly, and change with it.
Magic.
Ok people of
#EconTwitter
, I applied for the position of Editor at the European Review of Agricultural Economics.
Wish me luck -- or call me crazy.
I hope to pay forward the support I've received from the community so far.
Here is my vision.
More digging in the
@ERC_research
data.
Here is the evolution of inflows and outflows from the 2007-13 to the 2014-20 period.
Sorry to say 🇮🇹 is not moving in a good direction either.
Never use Excel for serious data work, exhibit no. 1.245.389.
This goes straight to my intro to R course alongside the Reinhart & Rogoff 'oh sorry we did not select the whole column' mess.
In the UK the number of cases rose rapidly.
But the public – and authorities – are only learning this now because these cases were only published now as a backlog.
The reason was apparently that the database is managed in Excel and the number of columns had reached the maximum.