Theory of Intelligent Design, the best explanation of Origins

This is my personal virtual library, where I collect information which, in my view, points to Intelligent Design as the best explanation of the origin of the physical Universe, life, and biodiversity.




Peer review: a flawed process at the heart of science and journals


Admin

http://reasonandscience.heavenforum.org/t1290-peer-review-a-flawed-process-at-the-heart-of-science-and-journals?highlight=peer



Peer review is meant to weed out junk science before it reaches publication. Yet over and over again in our survey, respondents told us this process fails. It was one of the parts of the scientific machinery to elicit the most rage among the researchers we heard from. 
http://www.vox.com/2016/7/14/12016710/science-challeges-research-funding-peer-review-process

Publishing: The peer-review scam
When a handful of authors were caught reviewing their own papers, it exposed weaknesses in modern publishing systems. Editors are trying to plug the holes.
Most journal editors know how much effort it takes to persuade busy researchers to review a paper. That is why Claudiu Supuran, editor of The Journal of Enzyme Inhibition and Medicinal Chemistry, was puzzled by the reviews for manuscripts by one author — Hyung-In Moon, a medicinal-plant researcher then at Dongguk University in Gyeongju, South Korea.
In 2012, he confronted Moon, who readily admitted that the reviews had come in so quickly because he had written many of them himself. The deception had not been hard to set up. Supuran's journal and several others published by Informa Healthcare in London invite authors to suggest potential reviewers for their papers. So Moon provided names, sometimes of real scientists and sometimes pseudonyms, often with bogus e-mail addresses that would go directly to him or his colleagues. His confession led to the retraction of 28 papers by several Informa journals, and the resignation of an editor.
http://www.nature.com/news/publishing-the-peer-review-scam-1.16400

How fake peer review happens: An impersonated reviewer speaks
Earlier this month, BioMed Central and Springer announced that they were retracting nearly 60 papers for a host of related issues, including manipulating the peer-review process.
http://retractionwatch.com/2016/11/15/more-details-on-the-bmcspringer-retraction-ring-an-impersonated-reviewer-speaks/

Why isn't intelligent design found published in peer-reviewed science journals? Darwinists use a similar rule—I call it “Catch-23”—to exclude intelligent design from science: intelligent design is not scientific, so it can’t be published in peer-reviewed scientific journals. How do we know it’s not scientific? Because it isn’t published in peer-reviewed scientific journals. Catch-23!


In desperation to maintain the "No god needed" ideology, the faith is supported and filled in by the "No god needed" crowd with unproven hypotheses and theories in fancy language laced with qualifiers such as "possible", "might", and "maybe", among others, and it is EXPECTED to be ACCEPTED as gospel. Science says this or that, via "peer reviewed sources", or websites that are held as unquestionable authority, like talkorigins, or authorities, like Dawkins, Krauss, Hitchens, et al. Do you actually think that's any different from "The Bible says" or "God says" this or that? Your faith is just as strong, if not stronger, than the faith of the believer, and based on those peer-reviewed sources or websites that propose evolution, you base and express your values and principles. No different from the Bible-believing Christian. That is faith. That is a religion. Just because you either can't see it or are too proud to admit it does not make it less of a fact. Atheists try to prove what they don't believe with the enthusiasm of a believer. How much sense does that make?

Hundreds of open access journals accept fake science paper
http://retractionwatch.com/

Want a favorable peer review? Buy one
What do Henry Kissinger and Martin Scorsese have in common? Fun fact: Both evidently review scientific manuscripts for money.
OK, maybe that’s not quite true. In fact, it’s not at all true. But headshots of both men appear in the bios of two purported reviewers (one of which has a woman’s name, sorry, Martin!) for a company called EditPub that sells various scientific services, including peer reviews.
https://www.statnews.com/2016/04/21/peer-review-process/

ACADEMIC ABSURDITY OF THE WEEK: FAKE PEER REVIEWS
…the actors involved in the publishing process are often driven by incentives which may, and increasingly do, undermine the quality of published work, especially in the presence of unethical conduits.
http://www.powerlineblog.com/archives/2016/09/academic-absurdity-of-the-week-fake-peer-reviews.php

Of the 106 journals that did conduct peer review, 70% accepted the paper. The Public Library of Science's PLOS ONE was the only journal that called attention to the paper's potential ethical problems and consequently rejected it within two weeks.
Meanwhile, 45% of Directory of Open Access Journals (DOAJ) publishers that completed the review process accepted the paper, a statistic that DOAJ founder Lars Bjørnshauge, a library scientist at Lund University in Sweden, finds "hard to believe".
The hoax raises concerns about poor quality control and the 'gold' open access model. It also calls attention to the growing number of low-quality open access publishers, especially in the developing world. In his investigation, Bohannon came across 29 publishers which seemed to have derelict websites and disguised geographical locations.
http://www.theguardian.com/higher-education-network/2013/oct/04/open-access-journals-fake-paper

Peer review might also be useful for detecting errors or fraud. At the BMJ we did several studies where we inserted major errors into papers that we then sent to many reviewers.Nobody ever spotted all of the errors. Some reviewers did not spot any, and most reviewers spotted only about a quarter. Peer review sometimes picks up fraud by chance, but generally it is not a reliable method for detecting fraud because it works on trust. A major question, which I will return to, is whether peer review and journals should cease to work on trust.
The evidence on whether there is bias in peer review against certain sorts of authors is conflicting, but there is strong evidence of bias against women in the process of awarding grants.5 The most famous piece of evidence on bias against authors comes from a study by DP Peters and SJ Ceci.6 They took 12 studies that came from prestigious institutions that had already been published in psychology journals. They retyped the papers, made minor changes to the titles, abstracts, and introductions but changed the authors' names and institutions. They invented institutions with names like the Tri-Valley Center for Human Potential. The papers were then resubmitted to the journals that had first published them. In only three cases did the journals realize that they had already published the paper, and eight of the remaining nine were rejected—not because of lack of originality but because of poor quality. Peters and Ceci concluded that this was evidence of bias against authors from less prestigious institutions.
http://jrs.sagepub.com/content/99/4/178.full

all the academics say we’ve got to have peer review. But I don’t believe in peer review because I think it’s very distorted and as I’ve said, it’s simply a regression to the mean. I think peer review is hindering science. In fact, I think it has become a completely corrupt system. It’s corrupt in many ways, in that scientists and academics have handed over to the editors of these journals the ability to make judgment on science and scientists. There are universities in America, and I’ve heard from many committees, that we won’t consider people’s publications in low impact factor journals. Now I mean, people are trying to do something, but I think it’s not publish or perish, it’s publish in the okay places [or perish]. And this has assembled a most ridiculous group of people. I wrote a column for many years in the nineties, in a journal called Current Biology. In one article, “Hard Cases”, I campaigned against this [culture] because I think it is not only bad, it’s corrupt. In other words it puts the judgment in the hands of people who really have no reason to exercise judgment at all. And that’s all been done in the aid of commerce, because they are now giant organisations making money out of it.
http://kingsreview.co.uk/magazine/blog/2014/02/24/how-academia-and-publishing-are-destroying-scientific-innovation-a-conversation-with-sydney-brenner/

It may not be entirely fair to liken a "peer review and citation ring" to the academic version of an extortion ring, but there's certainly fraud involved in both. Retraction Watch, a blog dedicated to chronicling which academic papers have been withdrawn, is reporting that SAGE Publishing, a group that puts out numerous peer-reviewed journals, is retracting 60 papers from its Journal of Vibration and Control after an internal investigation uncovered extensive evidence of severe peer-review fraud.
Apparently researcher Peter Chen, formerly of National Pingtung University of Education in Taiwan, made multiple submission and reviewer accounts -- possibly along with other researchers at his institution or elsewhere -- so that he could influence the peer review system. When Chen or someone else from the ring submitted a paper, the group could manipulate who reviewed the research, and on at least one occasion Chen served as his own reviewer.
http://www.evolutionnews.org/2014/07/for_critics_of_087681.html


90% of peer-reviewed clinical research is completely false
…Of the 49 articles, 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated. If between a third and a half of the most acclaimed research in medicine was proving untrustworthy, the scope and impact of the problem were undeniable. That article was published in the Journal of the American Medical Association.
http://evillusion.wordpress.com/mountains-of-peer-reveiwed-papers/

In a science such as evolution, the only thing its scientists and scientific writers can do is make information up; use their imagination.  They present this imaginary information as if it isn’t made up; as if it’s real science.  Such is the case with evolution’s peer reviewed papers.  There is no answer as to how life came to be; or living cells; or any biological systems; not even bird nests.  But evolution’s paper writers go on as if they do have the answer. Evolution science writers write paper after paper.  They are peer reviewed and passed.  Many peer reviewers are paper writers themselves.  They pass one imaginary paper in hopes that their imaginary paper will be passed.  One paper piles on top of another, until there is an immense pile of papers. For over one hundred and fifty years evolution’s papers have been stacking up.  The writing, even though it is mostly imagination, is then quoted as if it’s real evidence.

 Peer Review = Appeal to Authority
"Evolution" includes many theories. Some of them have been found to be likely true. Some have been found to be surely false. All such theories have included many important assumptions or presumptions, such as the presumption that radiocarbon dating is very accurate, and that giant gaps in the fossil record can reasonably be filled by speculation.
Theories of Evolution exist. Facts regarding Evolution exist. Most theories of Evolution arise from facts and lots of speculation. Just because something is speculative does NOT mean it is false. "Evolution is a fact" is a rather poorly supported premise. That's because facts and theories are not the same things, and "Evolution" is a theory.
I suspect that many fools, many who have been granted the PhD by other demonstrable fools, would claim boldly that "Evolution is a fact." They'd be wrong.
http://www.iflscience.com/health-and-medicine/dozens-scientific-papers-withdrawn-probably-more-come

Scientific publisher BioMed Central has withdrawn 43 papers, and is investigating many more, over what it calls the “fabrication” of peer reviews. Representatives of journal editors have admitted the papers are the tip of a dangerous iceberg, and the scandal may lead to an overhaul of how peer review is conducted.

Peer review is fundamental to science, a central part of the process of self-correction that sets it apart from faith-based systems. True peer review does not end with publication; plenty of scientific papers are published only to subsequently be shown to have major flaws. However, the initial process whereby editors of scientific publications send work, usually anonymized, to other researchers for checking is meant to filter out the worst mistakes.

Predatory Journals Hit By ‘Star Wars’ Sting
http://blogs.discovermagazine.com/neuroskeptic/2017/07/22/predatory-journals-star-wars-sting/#.WXrrTBXyuUl

A number of so-called scientific journals have accepted a Star Wars-themed spoof paper. The manuscript is an absurd mess of factual errors, plagiarism and movie quotes. I know because I wrote it.

Inspired by previous publishing “stings”, I wanted to test whether ‘predatory‘ journals would publish an obviously absurd paper. So I created a spoof manuscript about “midi-chlorians” – the fictional entities which live inside cells and give Jedi their powers in Star Wars. I filled it with other references to the galaxy far, far away, and submitted it to nine journals under the names of Dr Lucas McGeorge and Dr Annette Kin.



Last edited by Admin on Fri Jul 28, 2017 1:45 am; edited 28 times in total


Admin


Peer review: a flawed process at the heart of science and journals

Peer review is at the heart of the processes of not just medical journals but of all of science. It is the method by which grants are allocated, papers published, academics promoted, and Nobel prizes won. Yet it is hard to define. It has until recently been unstudied. And its defects are easier to identify than its attributes. Yet it shows no sign of going away. Famously, it is compared with democracy: a system full of problems but the least worst we have.

When something is peer reviewed it is in some sense blessed. Even journalists recognize this. When the BMJ published a highly controversial paper that argued that a new `disease', female sexual dysfunction, was in some ways being created by pharmaceutical companies, a friend who is a journalist was very excited—not least because reporting it gave him a chance to get sex onto the front page of a highly respectable but somewhat priggish newspaper (the Financial Times). `But,' the news editor wanted to know, `was this paper peer reviewed?'. The implication was that if it had been it was good enough for the front page and if it had not been it was not. Well, had it been? I had read it much more carefully than I read many papers and had asked the author, who happened to be a journalist, to revise the paper and produce more evidence. But this was not peer review, even though I was a peer of the author and had reviewed the paper. Or was it? (I told my friend that it had not been peer reviewed, but it was too late to pull the story from the front page.)



WHAT IS PEER REVIEW?

My point is that peer review is impossible to define in operational terms (an operational definition is one whereby if 50 of us looked at the same process we could all agree most of the time whether or not it was peer review). Peer review is thus like poetry, love, or justice. But it is something to do with a grant application or a paper being scrutinized by a third party—who is neither the author nor the person making a judgement on whether a grant should be given or a paper published. But who is a peer? Somebody doing exactly the same kind of research (in which case he or she is probably a direct competitor)? Somebody in the same discipline? Somebody who is an expert on methodology? And what is review? Somebody saying `The paper looks all right to me', which is sadly what peer review sometimes seems to be. Or somebody poring over the paper, asking for raw data, repeating analyses, checking all the references, and making detailed suggestions for improvement? Such a review is vanishingly rare.

What is clear is that the forms of peer review are protean. Probably the systems of every journal and every grant giving body are different in at least some detail; and some systems are very different. There may even be some journals using the following classic system. The editor looks at the title of the paper and sends it to two friends whom the editor thinks know something about the subject. If both advise publication the editor sends it to the printers. If both advise against publication the editor rejects the paper. If the reviewers disagree the editor sends it to a third reviewer and does whatever he or she advises. This pastiche—which is not far from systems I have seen used—is little better than tossing a coin, because the level of agreement between reviewers on whether a paper should be published is little better than you'd expect by chance.1

That is why Robbie Fox, the great 20th century editor of the Lancet, who was no admirer of peer review, wondered whether anybody would notice if he were to swap the piles marked `publish' and `reject'. He also joked that the Lancet had a system of throwing a pile of papers down the stairs and publishing those that reached the bottom. When I was editor of the BMJ I was challenged by two of the cleverest researchers in Britain to publish an issue of the journal comprised only of papers that had failed peer review and see if anybody noticed. I wrote back `How do you know I haven't already done it?'


DOES PEER REVIEW `WORK' AND WHAT IS IT FOR?


But does peer review `work' at all? A systematic review of all the available evidence on peer review concluded that `the practice of peer review is based on faith in its effects, rather than on facts' (Jefferson T, Alderson P, Wager E, Davidoff F. Effects of editorial peer review: a systematic review. JAMA 2002;287:2784-6).


But the answer to the question of whether peer review works depends on the question `What is peer review for?'

One answer is that it is a method to select the best grant applications for funding and the best papers to publish in a journal. It is hard to test this aim because there is no agreed definition of what constitutes a good paper or a good research proposal.


Plus what is peer review to be tested against? Chance? Or a much simpler process? Stephen Lock when editor of the BMJ conducted a study in which he alone decided which of a consecutive series of papers submitted to the journal he would publish. He then let the papers go through the usual process. There was little difference between the papers he chose and those selected after the full process of peer review.1 This small study suggests that perhaps you do not need an elaborate process. Maybe a lone editor, thoroughly familiar with what the journal wants and knowledgeable about research methods, would be enough. But it would be a bold journal that stepped aside from the sacred path of peer review.

Another answer to the question of what is peer review for is that it is to improve the quality of papers published or research proposals that are funded. The systematic review found little evidence to support this, but again such studies are hampered by the lack of an agreed definition of a good study or a good research proposal.

Godlee F, Gale CR, Martyn CN. Effect on the quality of peer review of blinding reviewers and asking them to sign their reports: a randomized controlled trial. JAMA1998;280:237 -40

Peer review might also be useful for detecting errors or fraud. At the BMJ we did several studies where we inserted major errors into papers that we then sent to many reviewers.Nobody ever spotted all of the errors. Some reviewers did not spot any, and most reviewers spotted only about a quarter. Peer review sometimes picks up fraud by chance, but generally it is not a reliable method for detecting fraud because it works on trust. A major question, which I will return to, is whether peer review and journals should cease to work on trust.



THE DEFECTS OF PEER REVIEW


So we have little evidence on the effectiveness of peer review, but we have considerable evidence on its defects. In addition to being poor at detecting gross defects and almost useless for detecting fraud it is slow, expensive, profligate of academic time, highly subjective, something of a lottery, prone to bias, and easily abused.

Many journals, even in the age of the internet, take more than a year to review and publish a paper. It is hard to get good data on the cost of peer review, particularly because reviewers are often not paid (the same, come to that, is true of many editors). Yet there is a substantial `opportunity cost', as economists call it, in that the time spent reviewing could be spent doing something more productive—like original research. I estimate that the average cost of peer review per paper for the BMJ (remembering that the journal rejected 60% without external review) was of the order of £100, whereas the cost of a paper that made it right through the system was closer to £1000.

People have a great many fantasies about peer review, and one of the most powerful is that it is a highly objective, reliable, and consistent process. I regularly received letters from authors who were upset that the BMJ rejected their paper and then published what they thought to be a much inferior paper on the same subject. Always they saw something underhand. They found it hard to accept that peer review is a subjective and, therefore, inconsistent process. But it is probably unreasonable to expect it to be objective and consistent. If I ask people to rank painters like Titian, Tintoretto, Bellini, Carpaccio, and Veronese, I would never expect them to come up with the same order. A scientific study submitted to a medical journal may not be as complex a work as a Tintoretto altarpiece, but it is complex. Inevitably people will take different views on its strengths, weaknesses, and importance.

So, the evidence is that if reviewers are asked to give an opinion on whether or not a paper should be published they agree only slightly more than they would be expected to agree by chance. (I am conscious that this evidence conflicts with the study of Stephen Lock showing that he alone and the whole BMJ peer review process tended to reach the same decision on which papers should be published. The explanation may be that being the editor who had designed the BMJ process and appointed the editors and reviewers it was not surprising that they were fashioned in his image and made similar decisions.)

Sometimes the inconsistency can be laughable. Here is an example of two reviewers commenting on the same paper.

Reviewer A: `I found this paper an extremely muddled paper with a large number of deficits.'
Reviewer B: `It is written in a clear style and would be understood by any reader.'

This—perhaps inevitable—inconsistency can make peer review something of a lottery. You submit a study to a journal. It enters a system that is effectively a black box, and then a more or less sensible answer comes out at the other end. The black box is like the roulette wheel, and the prizes and the losses can be big. For an academic, publication in a major journal like Nature or Cell is to win the jackpot.
Bias

The evidence on whether there is bias in peer review against certain sorts of authors is conflicting, but there is strong evidence of bias against women in the process of awarding grants.5 The most famous piece of evidence on bias against authors comes from a study by DP Peters and SJ Ceci.6 They took 12 studies that came from prestigious institutions that had already been published in psychology journals. They retyped the papers, made minor changes to the titles, abstracts, and introductions but changed the authors' names and institutions. They invented institutions with names like the Tri-Valley Center for Human Potential. The papers were then resubmitted to the journals that had first published them. In only three cases did the journals realize that they had already published the paper, and eight of the remaining nine were rejected—not because of lack of originality but because of poor quality. Peters and Ceci concluded that this was evidence of bias against authors from less prestigious institutions.

This is known as the Matthew effect: `To those who have, shall be given; to those who have not shall be taken away even the little that they have'. I remember feeling the effect strongly when as a young editor I had to consider a paper submitted to the BMJ by Karl Popper.7 I was unimpressed and thought we should reject the paper. But we could not. The power of the name was too strong. So we published, and time has shown we were right to do so. The paper argued that we should pay much more attention to error in medicine, about 20 years before many papers appeared arguing the same.

The editorial peer review process has been strongly biased against `negative studies', i.e. studies that find an intervention does not work. It is also clear that authors often do not even bother to write up such studies. This matters because it biases the information base of medicine. It is easy to see why journals would be biased against negative studies. Journalistic values come into play. Who wants to read that a new treatment does not work? That's boring.

We became very conscious of this bias at the BMJ; we always tried to concentrate not on the results of a study we were considering but on the question it was asking. If the question is important and the answer valid, then it must not matter whether the answer is positive or negative. I fear, however, that bias is not so easily abolished and persists.

The Lancet has tried to get round the problem by agreeing to consider the protocols (plans) for studies yet to be done.8 If it thinks the protocol sound and if the protocol is followed, the Lancet will publish the final results regardless of whether they are positive or negative. Such a system also has the advantage of stopping resources being spent on poor studies. The main disadvantage is that it increases the sum of peer reviewing—because most protocols will need to be reviewed in order to get funding to perform the study.
Abuse of peer review

There are several ways to abuse the process of peer review. You can steal ideas and present them as your own, or produce an unjustly harsh review to block or at least slow down the publication of the ideas of a competitor. These have all happened. Drummond Rennie tells the story of a paper he sent, when deputy editor of the New England Journal of Medicine, for review to Vijay Soman.9 Having produced a critical review of the paper, Soman copied some of the paragraphs and submitted it to another journal, the American Journal of Medicine. This journal, by coincidence, sent it for review to the boss of the author of the plagiarized paper. She realized that she had been plagiarized and objected strongly. She threatened to denounce Soman but was advised against it. Eventually, however, Soman was discovered to have invented data and patients, and left the country. Rennie learnt a lesson that he never subsequently forgot but which medical authorities seem reluctant to accept: those who behave dishonestly in one way are likely to do so in other ways as well.

HOW TO IMPROVE PEER REVIEW?

The most important question with peer review is not whether to abandon it, but how to improve it. Many ideas have been advanced to do so, and an increasing number have been tested experimentally. The options include: standardizing procedures; opening up the process; blinding reviewers to the identity of authors; reviewing protocols; training reviewers; being more rigorous in selecting and deselecting reviewers; using electronic review; rewarding reviewers; providing detailed feedback to reviewers; using more checklists; or creating professional review agencies. It might be, however, that the best response would be to adopt a very quick and light form of peer review—and then let the broader world critique the paper or even perhaps rank it in the way that Amazon asks users to rank books and CDs.

I hope that it will not seem too indulgent if I describe the far from finished journey of the BMJ to try and improve peer review. We tried as we went to conduct experiments rather than simply introduce changes.

The most important step on the journey was realizing that peer review could be studied just like anything else. This was the idea of Stephen Lock, my predecessor as editor, together with Drummond Rennie and John Bailar. At the time it was a radical idea, and still seems radical to some—rather like conducting experiments with God or love.
Blinding reviewers to the identity of authors

The next important step was hearing the results of a randomized trial that showed that blinding reviewers to the identity of authors improved the quality of reviews (as measured by a validated instrument).10 This trial, which was conducted by Bob McNutt, A T Evans, and Bob and Suzanne Fletcher, was important not only for its results but because it provided an experimental design for investigating peer review. Studies where you intervene and experiment allow more confident conclusions than studies where you observe without intervening.

This trial was repeated on a larger scale by the BMJ and by a group in the USA who conducted the study in many different journals.11,12 Neither study found that blinding reviewers improved the quality of reviews. These studies also showed that such blinding is difficult to achieve (because many studies include internal clues on authorship), and that reviewers could identify the authors in about a quarter to a third of cases. But even when the results were analysed by looking at only those cases where blinding was successful there was no evidence of improved quality of the review.
Opening up peer review

At this point we at the BMJ thought that we would change direction dramatically and begin to open up the process. We hoped that increasing the accountability would improve the quality of review. We began by conducting a randomized trial of open review (meaning that the authors but not readers knew the identity of the reviewers) against traditional review.13 It had no effect on the quality of reviewers' opinions. They were neither better nor worse. We went ahead and introduced the system routinely on ethical grounds: such important judgements should be open and accountable unless there were compelling reasons why they could not be—and there were not.

Our next step was to conduct a trial of our current open system against a system whereby every document associated with peer review, together with the names of everybody involved, was posted on the BMJ's website when the paper was published. Once again this intervention had no effect on the quality of the opinion. We thus planned to make posting peer review documents the next stage in opening up our peer review process, but that has not yet happened—partly because the results of the trial have not yet been published and partly because this step required various technical developments.

The final step was, in my mind, to open up the whole process and conduct it in real time on the web in front of the eyes of anybody interested. Peer review would then be transformed from a black box into an open scientific discourse. Often I found the discourse around a study was a lot more interesting than the study itself. Now that I have left I am not sure if this system will be introduced.
Training reviewers

The BMJ also experimented with another possible way to improve peer review—by training reviewers.4 It is perhaps extraordinary that there has been no formal training for such an important job. Reviewers learnt either by trial and error (without, it has to be said, very good feedback), or by working with an experienced reviewer (who might unfortunately be experienced but not very good).

Our randomized trial of training reviewers had three arms: one group got nothing; one group had a day's face-to-face training plus a CD-rom of the training; and the third group got just the CD-rom. The overall result was that training made little difference.4 The groups that had training did show some evidence of improvement relative to those who had no training, but we did not think that the difference was big enough to be meaningful. We cannot conclude from this that longer or better training would not be helpful. A problem with our study was that most of the reviewers had been reviewing for a long time. `Old dogs cannot be taught new tricks', but the possibility remains that younger ones could.

TRUST IN SCIENCE AND PEER REVIEW


One difficult question is whether peer review should continue to operate on trust. Some have made small steps beyond into the world of audit. The Food and Drug Administration in the USA reserves the right to go and look at the records and raw data of those who produce studies that are used in applications for new drugs to receive licences. Sometimes it does so. Some journals, including the BMJ, make it a condition of submission that the editors can ask for the raw data behind a study. We did so once or twice, only to discover that reviewing raw data is difficult, expensive, and time consuming. I cannot see journals moving beyond trust in any major way unless the whole scientific enterprise moves in that direction.

CONCLUSION


So peer review is a flawed process, full of easily identified defects with little evidence that it works. Nevertheless, it is likely to remain central to science and journals because there is no obvious alternative, and scientists and editors have a continuing belief in peer review. How odd that science should be rooted in belief.



http://www.greenmedinfo.com/blog/evidence-based-medicine-coins-flip-worth-certainty

90% of peer-reviewed clinical research is completely false

…Of the 49 articles, 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated. If between a third and a half of the most acclaimed research in medicine was proving untrustworthy, the scope and impact of the problem were undeniable. That article was published in the Journal of the American Medical Association.

http://www.uncommondescent.com/intelligent-design/another-nobelist-denounces-peer-review/

http://kingsreview.co.uk/magazine/blog/2014/02/24/how-academia-and-publishing-are-destroying-scientific-innovation-a-conversation-with-sydney-brenner/

all the academics say we’ve got to have peer review. But I don’t believe in peer review because I think it’s very distorted and as I’ve said, it’s simply a regression to the mean. I think peer review is hindering science. In fact, I think it has become a completely corrupt system. It’s corrupt in many ways, in that scientists and academics have handed over to the editors of these journals the ability to make judgment on science and scientists. There are universities in America, and I’ve heard from many committees, that we won’t consider people’s publications in low impact factor journals. Now I mean, people are trying to do something, but I think it’s not publish or perish, it’s publish in the okay places [or perish]. And this has assembled a most ridiculous group of people. I wrote a column for many years in the nineties, in a journal called Current Biology. In one article, “Hard Cases”, I campaigned against this [culture] because I think it is not only bad, it’s corrupt. In other words it puts the judgment in the hands of people who really have no reason to exercise judgment at all. And that’s all been done in the aid of commerce, because they are now giant organisations making money out of it.

View user profile http://elshamah.heavenforum.com

Admin
Intelligent Design Is Peer-Reviewed, but Is Peer-Review a Requirement of Good Science?

http://www.evolutionnews.org/2012/02/intelligent_des056221.html

In an amusing letter titled "Not in our Nature," Campanario reminds the journal of four examples where it rejected significant papers:

   (1) In 1981, Nature rejected a paper by the British biochemist Robert H. Michell on signalling reaction by hormones. This paper has since been cited more than 1,800 times.

   (2) In June 1937, Nature rejected Hans Krebs's letter describing the citric acid cycle. Krebs won the 1953 Nobel prize in physiology or medicine for this discovery.

   (3) Nature initially rejected a paper on work for which Hartmut Michel won the 1988 Nobel prize for chemistry; it has been identified by the Institute of Scientific Information as a core document and widely cited.

   (4) A paper by Michael J. Berridge, rejected in 1983 by Nature, ranks at number 275 in a list of the most-cited papers of all time. It has been cited more than 1,900 times.4

Publication (which is but one element of peer review) is not a sine qua non of admissibility; it does not necessarily correlate with reliability, and in some instances well-grounded but innovative theories will not have been published. Some propositions, moreover, are too particular, too new, or of too limited interest to be published.


Admin
Why Most Published Research Findings Are False

There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.

1) http://journals.plos.org/plosmedicine/article?id=10.1371%2Fjournal.pmed.0020124
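Ioannidis's argument is quantitative: his paper derives the positive predictive value (PPV) of a claimed finding from the pre-study odds R that a probed relationship is true, the significance threshold α, the power 1−β, and a bias term u. As a rough illustration of that framework (a sketch of the paper's formula, not code from the paper):

```python
def ppv(R, alpha=0.05, power=0.8, bias=0.0):
    """Positive predictive value of a claimed research finding,
    following Ioannidis (2005, PLoS Medicine).

    R is the pre-study odds that a probed relationship is true;
    bias u is the fraction of analyses that would report a
    'finding' through bias even when the standard analysis
    would not."""
    beta = 1.0 - power   # type II error rate
    u = bias
    # Claimed findings that are actually true.
    true_positives = (1 - beta) * R + u * beta * R
    # All claimed findings, true and false.
    all_positives = R + alpha - beta * R + u - u * alpha + u * beta * R
    return true_positives / all_positives

# A well-powered trial of a plausible hypothesis (1:1 pre-study odds)
# fares well...
print(round(ppv(R=1.0), 2))                         # → 0.94
# ...but an exploratory search where only 1 in 100 probed relationships
# is real, with modest power and bias, mostly yields false claims.
print(round(ppv(R=0.01, power=0.5, bias=0.1), 2))   # → 0.04
```

The second case illustrates the paper's central claim: in fields with low pre-study odds, small studies, and some bias, a "statistically significant" finding is more likely to be false than true.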


Admin

Most Published Research Findings are False


Most published research findings are false:
http://www.economist.com/news/science-and-technology/21598944-sloppy-researchers-beware-new-institute-has-you-its-sights-metaphysicians
Bad Science Muckrakers Question the Big Science Status Quo: "... inherent biases and the flawed statistical analyses built into most 'hypothesis driven' research, resulting in publications that largely represent 'accurate measures of the prevailing bias.'"
http://www.forbes.com/sites/billfrezza/2014/07/13/bad-science-muckrakers-question-the-big-science-status-quo/
Linus Pauling PhD (two-time Nobel Prize winner): "Everyone should know that most cancer research is largely a fraud and that the major cancer research organizations are derelict in their duties to the people who support them."
http://nationalpress.org/images/uploads/programs/CAN2009_Marshall.pdf
"The Lancet": "The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness."
http://www.thelancet.com/pdfs/journals/lancet/PIIS0140-6736(15)60696-1.pdf
"Nature": "Ridding science of shoddy statistics will require scrutiny of every step, not merely the last one, say Jeffrey T. Leek and Roger D. Peng."
http://www.nature.com/news/statistics-p-values-are-just-the-tip-of-the-iceberg-1.17412
Publishers withdraw more than 120 gibberish papers: "The publishers Springer and IEEE are removing more than 120 papers from their subscription services after a French researcher discovered that the works were computer-generated nonsense." 
http://www.nature.com/news/publishers-withdraw-more-than-120-gibberish-papers-1.14763
The New England Journal of Medicine: "In August 2015, the publisher Springer retracted 64 articles from 10 different subscription journals “after editorial checks spotted fake email addresses, and subsequent internal investigations uncovered fabricated peer review reports,” according to a statement on their website.1 The retractions came only months after BioMed Central, an open-access publisher also owned by Springer, retracted 43 articles for the same reason."
http://www.nejm.org/doi/full/10.1056/NEJMp1512330
realclearscience.com: "A study that surveyed all the published cosmological literature between the years 1996 and 2008 showed that the statistics of the results were too good to be true. In fact, the statistical spread of the results was not consistent with what would be expected mathematically, which means cosmologists were in agreement with each other – but to a worrying degree. This meant that either results were being tuned somehow to reflect the status-quo, or that there may be some selection effect where only those papers that agreed with the status-quo were being accepted by journals." 
http://www.realclearscience.com/articles/2016/01/11/why_cosmology_is_in_crisis_109504.html
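The "too good to be true" claim above is a statistical statement: if N independent measurements of the same quantity have honestly estimated 1σ errors, the chi-square of the measurements about their weighted mean should come out near N−1; a spread much smaller than that suggests results are being tuned toward a consensus value or filtered before publication. A minimal sketch of that check (illustrative only, not the cited study's code):

```python
import random

def scatter_chi2(values, sigmas):
    """Chi-square of measurements about their weighted mean.
    For honest, independent Gaussian errors this is typically
    near len(values) - 1; a much smaller value means the results
    agree 'too well' given their quoted error bars."""
    weights = [1.0 / s ** 2 for s in sigmas]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    return sum(((v - mean) / s) ** 2 for v, s in zip(values, sigmas))

random.seed(0)
true_value, sigma, n = 70.0, 2.0, 30

# Honest measurements: scatter matches the quoted errors,
# so chi2 lands in the neighbourhood of n - 1 = 29.
honest = [random.gauss(true_value, sigma) for _ in range(n)]
print(scatter_chi2(honest, [sigma] * n))

# 'Tuned' measurements hugging the consensus value far more tightly
# than their quoted errors allow: chi2 falls well below n - 1.
tuned = [random.gauss(true_value, sigma / 4) for _ in range(n)]
print(scatter_chi2(tuned, [sigma] * n))
```

The hypothetical numbers (a quantity near 70 with 2-unit errors, loosely evocative of Hubble-constant measurements) are assumptions for the demonstration; the point is only the comparison between the two chi-square values.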
University of Oxford: "Half the world's natural history specimens may have the wrong name."
http://www.ox.ac.uk/news/2015-11-17-half-worlds-natural-history-specimens-may-have-wrong-name
NYTimes.com: "Dr. Prasad and Dr. Cifu extrapolate from past reversals to conclude that about 40 percent of what we consider state-of-the-art health care is likely to turn out to be unhelpful or actually harmful."
http://www.nytimes.com/2015/11/03/science/book-review-ending-medical-reversal-laments-flip-flopping.html
Retraction Watch
http://retractionwatch.com/
I Fooled Millions Into Thinking Chocolate Helps Weight Loss. Here's How.
http://io9.com/i-fooled-millions-into-thinking-chocolate-helps-weight-1707251800
"Der Spiegel protested all of this discussion with the statement, that what they hear is that 'journalists want to earn money, whereas scientists are only seeking the truth.' This brought loud guffaws from all three [professors]. 'Scientists,' answered Dr. Fischer, 'want success; they want a wife, a hotel room, an invitation, or perhaps a car!'"
http://www.uncommondescent.com/intelligent-design/der-spiegel-discovers-the-truth-from-science/
The History of Important Scientific Discoveries Initially Rejected and Ridiculed.
http://ncu9nc.blogspot.com/2013/04/a-history-of-scientific-discoveries.html


Admin
A paper by Maggie Simpson and Edna Krabappel was accepted by two scientific journals

http://www.vox.com/2014/12/7/7339587/simpsons-science-paper




[Image credit: 20th Century Fox]
A scientific study by Maggie Simpson, Edna Krabappel, and Kim Jong Fun has been accepted by two journals.
Of course, none of these fictional characters actually wrote the paper, titled "Fuzzy, Homogeneous Configurations." Rather, it's a nonsensical text, submitted by engineer Alex Smolyanitsky in an effort to expose a pair of scientific journals — the Journal of Computational Intelligence and Electronic Systems and the Comic Sans-loving Aperito Journal of NanoScience Technology.





[Image: One journal congratulates the authors on their paper being accepted. (Alex Smolyanitsky)]
These outlets both belong to a world of predatory journals that spam thousands of scientists, offering to publish their work — whatever it is — for a fee, without actually conducting peer review. When Smolyanitsky was contacted by them, he submitted the paper, which consists of totally incoherent, science-esque text generated by SCIgen, a random text generator. (Example sentence: "we removed a 8-petabyte tape drive from our peer-to-peer cluster to prove provably "fuzzy" symmetries’s influence on the work of Japanese mad scientist Karthik Lakshminarayanan.")
Then, he thought up the authors, along with a nonexistent affiliation ("Belford University") for them. "I wanted first and foremost to come up with something that gives out the fake immediately," he says. "My only regret is that the second author isn't Ralph Wiggum."





[Image: The paper, as published in the Aperito Journal of Nanoscience Technology. (Alex Smolyanitsky)]
One journal immediately accepted it, while the other took a month before accepting (perhaps as part of an effort to fake peer review), but has since published it — and now keeps sending Smolyanitsky an invoice for $459.
The fact that these journals would accept the paper is absurd, and the Simpsons connection is pretty funny. But it's also a troubling sign of a bigger problem in science publishing.

This isn't the first time a predatory publisher has been exposed

This is one of many times that low-quality, for-profit online journals have been exposed — either intentionally or by accident.
Most recently, one journal accepted a paper titled "Get me off Your Fucking Mailing List" that had been created by a pair of computer scientists as a joke to use in replying to unwanted conference invitations.





[Image: Figure 1 from the paper "Get me off Your Fucking Mailing List." (Mazieres and Kohler)]
In other cases, reporters have intentionally exposed low-quality journals by submitting substandard material to see if it would get published.
Last April, for instance, a reporter for the Ottawa Citizen named Tom Spears wrote an entirely incoherent paper on soils, cancer treatment, and Mars, and got it accepted by 8 of 18 online, for-profit journals. And last year, reporter John Bohannon and the prestigious journal Science collaborated on a similar stunt, getting a deeply flawed paper about a cancer-fighting lichen accepted by 60 percent of 340 journals. Using IP addresses, Bohannon discovered that the journals that accepted his paper were disproportionately located in India and Nigeria.
Earlier this year, I carried out a sting of a predatory book publisher — a company that uses the same basic strategy, but publishes physical books of academic theses and dissertations. When they contacted me offering to publish my undergraduate thesis for no fee, I agreed, so I could write an article about it. They gained the permanent rights to my work — along with the ability to sell copies of it for exorbitant prices online — but failed to notice that I'd stuck in a totally irrelevant sentence in towards the end, highlighting the fact that they publish without proofreading or editing.
Perhaps most troublingly, in February 2014, a pair of science publishers (Springer and IEEE) retracted more than 120 papers, some of which were pure nonsense (created by the same program used for the Simpsons paper) but had made it into their published conference proceedings. Both these publishers are generally seen as reliable — showing how far the problem of substandard quality control goes.

Inside the weird world of predatory journals

The existence of these dubious publishers can be traced to the early 2000s, when the first open-access online journals were founded. Instead of printing each issue and making money by selling subscriptions to libraries, these journals were given out for free online, and supported themselves largely through fees paid by the actual researchers submitting work to be published.
The first of these journals were and are legitimate — PLOS ONE, for instance, rejected Bohannon's lichen paper because it failed peer review. But these were soon followed by predatory publishers — largely based abroad — that basically pose as legitimate journals so researchers will pay their processing fees.
Over the years, the number of these predatory journals has exploded. Jeffrey Beall, a librarian at the University of Colorado, keeps an up-to-date list of them to help researchers avoid being taken in; it currently has 550 publishers and journals on it.
Still, new ones pop up constantly, and it can be hard for a researcher — or a review board, looking at a resume and deciding whether to grant tenure — to track which journals are bogus. Journals are often judged on their impact factor (a number that rates how often their articles are cited by other journals), and Spears reports that some of these journals are now buying fake impact factors from fake rating companies to seem more legitimate.
Scientists view this industry as a problem for a few reasons: it reduces trust in science, allows unqualified researchers to build their resumes with fake or unreliable work, and makes research for legitimate scientists more difficult, as they're forced to wade through dozens of worthless papers to find useful ones.


Admin
1,500 scientists lift the lid on reproducibility

25 May 2016 Corrected: 28 July 2016
More than 70% of researchers have tried and failed to reproduce another scientist's experiments, and more than half have failed to reproduce their own experiments. Those are some of the telling figures that emerged from Nature's survey of 1,576 researchers who took a brief online questionnaire on reproducibility in research.

The data reveal sometimes-contradictory attitudes towards reproducibility. Although 52% of those surveyed agree that there is a significant 'crisis' of reproducibility, less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature.

http://www.nature.com/news/1-500-scientists-lift-the-lid-on-reproducibility-1.19970

