a_bonobo
You could also remove the incentive as to why there's fake research in the first place - judging researchers purely by how many papers they push out - but that would undermine Springer Nature's entire business model, and we can't have that, can we.
ghoshbishakh
I think analysis of gel and blot images is very important for the life science fields. But in general, the "peers" in the peer review process have to be more careful now.
nope1000
I like the image duplication detector but I am sceptical that they solved the issue of detecting AI-generated text
johndoe0815
"AI tools to protect research integrity"? Sounds like a paradox, nice try...
bluenose69
Reasonable worries about fraudulent work have led to some good ideas -- perhaps the ones discussed in the Springer Nature article -- but also to some that I think are silly.

I reviewed something recently for a journal that asked me to do the usual thing, selecting a pulldown menu for the outcome, giving me space to write notes to the author and editor, etc. That's all fine and good.

But they also asked me to state whether I judged the work to be "correct". This journal deals with arcane numerical simulations of physics problems. Between them, the authors had several person-decades of experience at the cutting edge of this field. Without access to their computational resources (and technicians and programmers to support the work), and without a year to try to mimic the work, there's just no way I can get close to saying the work is "correct".

And what's the point? If I thought the work was incorrect, of course I'd point that out.

I think the question is just the journal's response to public scrutiny of science. A way to give the impression of vigilance, without requiring more work from the publisher.

PS. if the journal starts insisting that authors provide enough material for reviewers to check the work in detail, they won't get any reviewers. We are not paid, and get no recognition for this work. We don't have the time or inclination to wade through hundreds of pages of notes about how models were set up, or to watch perhaps a hundred hours of video of the authors discussing alternative methods. And we don't have the funding to buy specialized software, to pay server costs, or to pay technicians to get things working.

PPS. sorry, I'm on a bit of a rant. The rant I wrote back to the editor was shorter and less rambling :-)

sedtacet
“Geppetto and SnappShot playing important role in stopping fake research from being published”