elashri
There is at least one thing wrong with this. This is an essay about a paper that presented simulation-based scenarios in medical research, and it then tries to generalize to "research" at large while glossing over how narrow the support for that claim is. I think the claim holds some truth, and it should make us more cautious when deciding based on single studies. But things are different in other fields.

Also, this is called research for a reason. You don't know the answer beforehand. You have limitations in the technology and tools you use. You might miss something, or lack access to information that could change the outcome. That is why research is a process. Unfortunately, popular science books talk only about discoveries and results that are now considered fact, and usually say little about the history of how we got there. I would like to suggest a great book, "How Experiments End"[1], which goes into detail on how scientific consensus is built across many experiments in different fields (mostly physics).

[1] https://press.uchicago.edu/ucp/books/book/chicago/H/bo596942...

SideQuark
This paper is almost 20 years old, and there is plenty of follow-up work showing that its claims aren’t true.

One simple angle: Ioannidis simply made up some parameters to show how bad things could be. Later empirical work measuring those parameters found he was off by orders of magnitude.

One example: https://arxiv.org/abs/1301.3718

There are plenty of other published papers showing further holes in the claims.

Google Scholar papers citing it:

https://scholar.google.com/scholar?cites=1568101778041879927...

vouaobrasil
> In this framework, a research finding is less likely to be true [...] where there is greater flexibility in designs, definitions, outcomes, and analytical modes

It's worth noting, though, that in many research fields, teasing out the correct hypotheses and all the contributing factors is difficult. Sometimes it takes quite a few studies before the right definitions are even found, and definitions are a prerequisite for a useful hypothesis. Thus, one cannot ignore the usefulness of approximation in scientific experiments: approximation not only of the truth, but of the right questions to ask.

I'm not saying that all these biases are inherent in the study of the sciences, but the cited paper seems not to account for the fact that a lot of science is still groping around in the dark, and expecting well-defined studies every time is simply unreasonable.

dang
Related. Others?

Why most published research findings are false (2005) - https://news.ycombinator.com/item?id=37520930 - Sept 2023 (2 comments)

Why most published research findings are false (2005) - https://news.ycombinator.com/item?id=33265439 - Oct 2022 (80 comments)

Why Most Published Research Findings Are False (2005) - https://news.ycombinator.com/item?id=18106679 - Sept 2018 (40 comments)

Why Most Published Research Findings Are False - https://news.ycombinator.com/item?id=8340405 - Sept 2014 (2 comments)

Why Most Published Research Findings Are False - https://news.ycombinator.com/item?id=1825007 - Oct 2010 (40 comments)

Why Most Published Research Findings Are False (2005) - https://news.ycombinator.com/item?id=833879 - Sept 2009 (2 comments)

tombert
As I’ve transitioned to more exploratory, research-oriented roles in my career, I have started to understand science fraudsters like Jan Hendrik Schön.

When you’ve spent an entire week working on a test or experiment that you know should work, at least if you give it enough time, but it isn’t working for whatever reason, it can be extremely tempting to invent the numbers you think it should produce, especially if your employer is pressuring you for a result. Now, obviously, the reason we run these tests is precisely because we don’t actually know what the results will be, but that’s sometimes more obvious in hindsight.

Obviously it’s wrong, and I haven’t done it, but I would be lying if I said that the thought hadn’t crossed my mind.

md224
Something that continues to puzzle me: how do molecular biologists manage to come up with such mind-bogglingly complex diagrams of metabolic pathways in the midst of a replication crisis? Is our understanding of biology just a giant house of cards, or is there something about the topic that allows for more robust investigation?
smeej
This kind of report always raises the question for me of what the existing system's goals are. I think people assume that "new, reliable knowledge" is among the goals, but I don't see that the incentives align toward that goal, so I don't know that that's actually among them.

Does the world really want/need such a system? (The answer seems obvious to me, but not above question.) If so, how could it be designed? What incentives would it need? What conflicting interests would need to be disincentivized?

I think it's been pretty evident for a long time that the "peer-reviewed publications system" doesn't produce the results people think it should. I just don't hear anybody really thinking through the systems involved to try to invent one that would.

tdba
One project tried to replicate 100 psychology studies, and only 36% of the replications attained statistical significance.

https://osf.io/ezcuj/wiki/home/
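
That 36% is roughly what simple arithmetic predicts if a substantial share of the original findings were false. A back-of-envelope sketch in Python (the 40% true fraction and 80% replication power are illustrative assumptions, not estimates from the project):

    # Expected replication rate when a fraction `true_frac` of original
    # findings reflect real effects, replications detect real effects
    # with probability `power`, and false findings "replicate" only at
    # the false-positive rate `alpha`.
    def expected_replication_rate(true_frac, power, alpha=0.05):
        return true_frac * power + (1 - true_frac) * alpha

    print(expected_replication_rate(true_frac=0.40, power=0.80))  # 0.35

Under those assumed numbers, about 35% of replications would reach significance, close to what was observed.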

youainti
Please note the PubPeer comments discussing how follow-up research suggests about 15% of findings are wrong, not the 5% anticipated.

https://pubpeer.com/publications/14B6D332F814462D2673B6E9EF9...

blackeyeblitzar
It’s a matter of incentives. Everyone who wants a PhD has to publish, and before that they need to produce findings that align with the values of their professors. These bad incentives, combined with rampant statistical errors, lead to bad findings. We need to stop putting “studies” on a pedestal.
motohagiography
I wonder if science could benefit from publishing under pseudonyms the way software has. If the work is any good, people will use it; reputations would be made by the quality of contributions alone; it would make fraud expensive and mostly not worth it; etc.
fedeb95
This published research is false.

All published research will turn out to be false.

The problem is ill-posed: can we establish once and for all that something is true? Almost all of history has had this ambition, yet every day we find that something we believed to be true wasn't. The data isn't encouraging.

Animats
How broad a range is this result supposed to cover? It seems to be mostly applicable to areas where data is too close to the noise threshold. Some phenomena are like that, and some are not.

"If your experiment needs statistics, you ought to have done a better experiment" - Rutherford

withinboredom
I've implemented several things from computer science papers in my career now, mostly related to database stuff. They are mostly terribly wrong, or show the exact OPPOSITE of what they claim in the paper. It's so frustrating. Occasionally they even offer the code used to write the paper, and it is missing entire features they claim are integral for it to function properly, to the point that I wonder how they even came up with the results they published.

My favorite example was a huge paper that was almost entirely mathematics-based. It wasn't until you implemented everything that you realized it just didn't make any sense. Then, reading between the lines, you could even see their acknowledgement of that fact in the conclusion. Clever dude.

Anyway, I have very little faith in academic papers, at least when it comes to computer science. Of all the fields out there, this one is just code. It isn't hard to write and verify what you purport (it usually takes less than a week to write the code), so I have no idea what the peer reviewers actually do. As a peer in the industry, I would reject so many papers by this point.

And don't even get me started on the authors (now professors) who, when I email them questions to check whether I just implemented it wrong, never fucking reply.

DaoVeles
It has been said that "publish or perish" would make a good tombstone epitaph for a lot of modern science.

I speak to a lot of people in various science fields, and they are generally some of the heaviest drinkers I know, simply because of the system they have been forced into. They want to do good but are railroaded into this nonsense for fear of losing their livelihood.

Like those who are trying to advance the treatment of mental health but have ended up almost exclusively in the biochemical space, because that is where the money is, even though it is not the only path. It is a real shame.

The other heavy drinkers are the ecologists and climatologists, for good reason: they can see the road ahead, and it is bleak. They hope they are wrong.

skybrian
(2005). I wonder what's changed?
carabiner
This only applies to the life sciences and social sciences, right? Or are most papers in computer science and mechanical engineering also false?
meling
I only read the abstract: “Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true.”

True vs false seems like a very crude metric, no?

Perhaps this paper’s research claim is also false.
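
For context, that claim rests on the paper's positive predictive value formula, PPV = (1 - β)R / (R - βR + α), where R is the pre-study odds that a probed relationship is true, β the type II error rate, and α the significance threshold. A minimal sketch of how PPV moves with those parameters (the specific values are illustrative choices, not the paper's):

    # Positive predictive value of a "significant" finding (Ioannidis 2005):
    # PPV = (1 - beta) * R / (R - beta * R + alpha)
    def ppv(R, beta, alpha=0.05):
        return (1 - beta) * R / (R - beta * R + alpha)

    print(ppv(R=0.5, beta=0.2))  # well-powered test of a plausible hypothesis: ~0.89
    print(ppv(R=0.1, beta=0.8))  # underpowered long shot: ~0.29

Whether “most” findings fall below 0.5 then hinges on what you assume about typical values of R and β, which is much of what the follow-up literature argues about.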

ninetyninenine
So whenever someone gives me a detailed argument with cited sources, I can show them this and render the truth an unobtainable objective.
PeterZaitsev
If most published research findings are false, doesn’t that mean this article is likely to be false as well? :)
mjparrott
Arguing that this paper is false is ironic: doing so agrees with the paper's point.
Daub
In my experience, my main criticism of research in the field of computer vision is that most of it is 'meh'. At a university focused on security research, I saw mountains of research into detection/recognition, yet most of it offered no more than slightly different ways of doing the same old thing.

I also saw: the head of a design school insisting that they and their spouse be credited on all student and staff films; the same person insisting that massive amounts of school cash be spent promoting their solo exhibition, which no one other than students attended; a chair of research who insisted on an authorship role on all published output in the school; labs being instituted and teaching hires brought in to support a senior admin's research interests (despite them having no published output in this area); research ideas stolen from undergrad students and given to PhD students... I could go on all day.

If anyone is interested in how things got like this, you might start with Margaret Thatcher. She was the first to insist that university funding be tied to research. Given the state of British research in those days it was a reasonable decision, but it produced a climate where quantity is valued over quality and true 'impact'.

DrNosferatu
Have LLMs cross-check papers and point out experiments to be repeated.
iskander
I think it is unpopular to mention here, but John Ioannidis took a really weird turn in his career and published some atrociously non-rigorous Covid research that falls squarely in the crosshairs of "why...research findings are false".
angry_octet
... including the junk pushed by Ioannidis. He completely trashed his credibility during COVID.
hofo
Oh the irony
titanomachy
2022
marcosdumay
Yeah, when you try new things, you often get them wrong.

Why do we expect most published results to be true?

ape4
So is this paper false too? ...infinite recursion...
23B1
Imagine if tech billionaires, instead of building dickships and buying single-family homes, decided to truly invest in humanity by realigning incentives in science.
debacle
Most? Really?
breck
On a livestream the other day, Stephen Wolfram said he stopped publishing through academic journals in the 1980s because he found it far more efficient to just put things online. (And his blog is incredible: https://writings.stephenwolfram.com/all-by-date/)

A genius who figured out that academic publishing had gone to shit decades before everyone else.

P.S. We built the future of academic publishing, and it's an order of magnitude better than anything else out there.

giantg2
This must be a satire piece.

It talks about things like power, reproducibility, etc., which is fine. A minority of papers contain mathematical errors. What it fails to examine is what counts as "false". A study's results may be valid for what it studied; future studies may have new and different findings. Studies may seem to conflict with each other due to differences in definitions (e.g., what constitutes a "child": 12 or 24 years old?) or in the perspective applied to the policies they investigate (e.g., the aggregate vs. the adjusted gender wage gap).

It's about how you use them: "Research suggests..." or "We recommend further studies of larger size", etc. It's a tautology that if you misapply findings, they will be false a majority of the time.

ants_everywhere
This is a classic and important paper in the field of metascience. There are other great papers predating this one, but this one is widely known.

Unfortunately, the author, John Ioannidis, turned out to be a Covid conspiracy theorist, which has significantly affected his reputation as an impartial seeker of truth in publication.