One simple angle is that Ioannidis simply assumes some parameter values to show things could be bad. Later empirical work measuring those parameters found his assumptions off by orders of magnitude.
One example https://arxiv.org/abs/1301.3718
There are plenty of other published papers showing other holes in the claims.
Google Scholar papers citing this: https://scholar.google.com/scholar?cites=1568101778041879927...
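The parameter dispute is easiest to see against the paper's own model. Ioannidis's argument rests on the positive predictive value of a claimed finding given pre-study odds R, power (1 - beta), and significance level alpha. Here is a minimal sketch of that formula; the specific parameter values below are illustrative assumptions, not the paper's (or the rebuttal's) measured estimates.

```python
def ppv(R, power, alpha):
    """Post-study probability a claimed finding is true:
    PPV = (1 - beta) * R / (R - beta * R + alpha)."""
    beta = 1 - power
    return (power * R) / (R - beta * R + alpha)

# Long-shot hypotheses (R = 0.05) with modest power: most "findings" false.
print(ppv(R=0.05, power=0.5, alpha=0.05))  # 0.333...

# Well-powered tests of plausible hypotheses (R = 1): mostly true.
print(ppv(R=1.0, power=0.8, alpha=0.05))   # ~0.941
```

The whole debate is over which values of R and power actually describe real fields, which is why empirically measuring them (as the arXiv paper above does) matters so much.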
It's worth noting, though, that in many research fields, teasing out the correct hypotheses and all the affecting factors is difficult. Sometimes it takes quite a few studies before the right definitions are even found, and those definitions are a prerequisite for a useful hypothesis. Thus, one cannot ignore the usefulness of approximation in scientific experiments, not only toward the truth, but toward the right questions to ask.
I'm not saying that no biases are inherent in the study of the sciences, but a lot of science is still groping around in the dark, and to expect well-defined studies every time is simply unreasonable.
Why most published research findings are false (2005) - https://news.ycombinator.com/item?id=37520930 - Sept 2023 (2 comments)
Why most published research findings are false (2005) - https://news.ycombinator.com/item?id=33265439 - Oct 2022 (80 comments)
Why Most Published Research Findings Are False (2005) - https://news.ycombinator.com/item?id=18106679 - Sept 2018 (40 comments)
Why Most Published Research Findings Are False - https://news.ycombinator.com/item?id=8340405 - Sept 2014 (2 comments)
Why Most Published Research Findings Are False - https://news.ycombinator.com/item?id=1825007 - Oct 2010 (40 comments)
Why Most Published Research Findings Are False (2005) - https://news.ycombinator.com/item?id=833879 - Sept 2009 (2 comments)
When you've spent an entire week working on a test or experiment that you know should work, at least if you give it enough time, but it isn't working for whatever reason, it can be extremely tempting to invent the numbers you think it should produce, especially if your employer is pressuring you for a result. Now, obviously, the reason we run these tests is precisely because we don't actually know what the results will be, but that's sometimes more obvious in hindsight.
Obviously it’s wrong, and I haven’t done it, but I would be lying if I said that the thought hadn’t crossed my mind.
Does the world really want/need such a system? (The answer seems obvious to me, but not above question.) If so, how could it be designed? What incentives would it need? What conflicting interests would need to be disincentivized?
I think it's been pretty evident for a long time that the "peer-reviewed publications system" doesn't produce the results people think it should. I just don't hear anybody really thinking through the systems involved to try to invent one that would.
https://pubpeer.com/publications/14B6D332F814462D2673B6E9EF9...
All published research will turn out to be false.
The problem is ill-posed: can we establish once and for all that something is true? Almost all of history had this ambition, yet every day we find that something we believed to be true wasn't. The data isn't encouraging.
"If your experiment needs statistics, you ought to have done a better experiment" - Rutherford
My favorite example was a huge paper that was almost entirely mathematics-based. It wasn't until you implemented everything that you would realize it just didn't even make any sense. Then, when you read between the lines, you even saw their acknowledgement of that fact in the conclusion. Clever dude.
Anyway, I have very little faith in academic papers, at least when it comes to computer science. Of all the fields out there, it is just code. It isn't hard to write and verify what you purport (it usually takes less than a week to write the code), so I have no idea what the peer reviewers actually do. As a peer in the industry, I would have rejected so many papers by this point.
And don't even get me started on when I send the authors (now professors) questions via email to see if I just implemented it wrong, or whatever, and they just never fucking reply.
I speak to a lot of people in various science fields, and generally they are some of the heaviest drinkers I know, simply because of the system they have been forced into. They want to do good but are railroaded into this nonsense for fear of losing their livelihood.
Like those who are trying to progress our treatment of mental health but have ended up almost exclusively in the biochemical space, because that is where the money is, even though it is not the only path. It is a real shame.
Also other heavy drinkers are the ecologists and climatologists, for good reason. They can see the road ahead and it is bleak. They hope they are wrong.
True vs false seems like a very crude metric, no?
Perhaps this paper’s research claim is also false.
I also saw: a head of a design school insisting that they and their spouse be credited on all student and staff films, the same person insisting that massive amounts of school cash be spent promoting their solo exhibition that no one other than students attended, a chair of research who insisted on being given an authorship role on all published output in the school, labs being instituted and teaching hires brought in to support a senior admin's research interests (despite them not having any published output in this area), research ideas stolen from undergrad students and given to PhD students... I could go on all day.
If anyone is interested in how things got like this, you might start with Margaret Thatcher. She was the first to insist that funding of universities be tied to research. Given the state of British research in those days it was a reasonable decision, but it produced a climate where quantity is valued over quality and true 'impact'.
Why do we expect most published results to be true?
A genius who figured out that academic publishing had gone to shit decades ahead of everyone else.
P.S. We built the future of academic publishing, and it's an order of magnitude better than anything else out there.
It talks about things like power, reproducibility, etc., which is fine. There is a minority of papers with mathematical errors. What it fails to examine is what counts as "false". A study's results may be valid for what it examined, while future studies produce new and different findings. You may also have studies that seem to conflict with each other due to differences in definitions (e.g., what constitutes a "child": 12yo or 24yo?) or the nuance in the perspective applied to the policies they are investigating (e.g., aggregate vs. adjusted gender wage gap).
It's about how you use them - "Research suggests..." or "We recommend further studies of larger size", etc. It's a tautology that if you misapply them they will be false a majority of the time.
Unfortunately the author John Ioannidis turned out to be a Covid conspiracy theorist, which has significantly affected his reputation as an impartial seeker of truth in publication.
Also, this is called research. You don't know the answer beforehand. You have limitations in the tech and tools you use. You might miss something, or lack access to information that could change the outcome. That is why research is a process. Unfortunately, popular science books talk only about discoveries and results that are considered fact, and usually say little about the history of how we got there. I would suggest a great book called "How Experiments End"[1], which goes into detail on how scientific consensus is built for many experiments in different fields (mostly physics).
[1] https://press.uchicago.edu/ucp/books/book/chicago/H/bo596942...