The reliance upon, and use of, unreliable hearsay literature by expert testifiers is a challenging topic that cuts across the spectrum of complex litigation. Often, the literature consists of technical or scientific articles published in some journal with a claim that the published work product has been “peer reviewed.” Earlier articles have discussed the reliability of such out-of-court articles not authored by the testifier. Given the increasing trend toward “trial by literature,” it is worth revisiting the subject. Rather than diminishing, the problems seem to have grown worse.
In particular, there has been a global proliferation of journals whose quality-review practices function differently from the classic model we used to know. Many so-called “open-access” journals charge the author a fee for each article they accept, a dynamic that creates potential conflicts of interest. Many of these journals publish articles without peer review, and others conduct only a perfunctory or bogus review. Those weaknesses were exposed by a “sting” operation in which a Harvard-affiliated science journalist sent a fatally flawed science article to hundreds of journals. The shocking results are discussed below, after presentation of some background information and findings.
Background
There is a place and need for hearsay literature. Much of what we say or do is based on what we learn. Much of what we learn is based on what we read. Much of what we read is based on what others have read. Much of what those others have read is based on what others have written. And so it becomes inevitable that, sooner or later, much of expert testimony boils down to what experts have read or learned or confirmed from writings. There is nothing inherently wrong with that.
If the expert enhances his or her expertise by reading scientific, technical or professional writings, or benefits others by researching and writing as an expert, society is normally better off for the effort. Since the objective in the courtroom is to search for the truth and do justice, the writings the experts rely upon can enhance the expert's, and therefore the jury's, role in the truth-finding process, provided those writings are trustworthy, accurate and professionally reliable. If, however, the writings are “junk,” and the expert relies on them or professes them to be the truth, then the expert's testimony is no better than the junk on which s/he is relying.
Sometimes, the quality and trustworthiness of professional writings fall between the extremes of “reliability” and “junk,” into a vast gray area of “quasi-reliability” or “not-quite-junk.” The articles may be published by journals with professional-sounding names or by institutions or entities recognized in the technical world, thereby creating an aura of trustworthiness that masks the diminished quality of the substantive content. What happens when the expert relies on such less-than-reliable professional literature? What should be the consequences of such reliance?
In general, the justice system wants the expert to give juries the benefit of his or her expertise, not merely to read out-of-court, hearsay materials to jurors. If the expert becomes merely a reader of someone else's thoughts or opinions, then the “someone else” is really doing the testifying, not the expert. That might not be so bad if we could guarantee that the out-of-court writing is genuine, accurate, trustworthy, reliable, relevant and “fits” the facts and issues in the case. But how can we do that? Ordinarily, we cannot cross-examine the writing, and the author of the technical or scientific or specialized hearsay is not in court to answer questions. Only a surrogate, the so-called trial “expert,” is subject to cross-examination. But he or she often knows only what was stated in the article. Beyond the confines of the actual text, the published findings and explicit writing, the trial expert usually does not know (if s/he is truthful) or is speculating (if s/he indulges in belief or guesswork).
False Findings
Earlier articles have reported on serious problems with the reliability of many scientific articles, even those published in vaunted science or technical journals. (M. Hoenig, “Testifying Experts and Scientific Articles: Reliability Concerns,” New York Law Journal, Sept. 16, 2011, p. 3.) Moreover, the entire June 5, 2002, issue of the prestigious Journal of the American Medical Association (JAMA) was devoted to a soul-searching, critical analysis of major shortcomings in the articles' research, methodologies, and even the peer-review process. Important weaknesses often were not reported in the published articles. The published reports often masked “the true diversity of opinion among contributors about the meaning of their research findings,” resulting in a de facto “hidden research paper” behind the published article. Results sometimes were selectively reported and the authors “drew unjustified conclusions.” JAMA's details were arresting.
Then a respected epidemiologist, John P.A. Ioannidis, published an article in PLoS Medicine, a journal of the Public Library of Science (PLOS), dated Aug. 30, 2005, titled “Why Most Published Research Findings Are False.” An editorial in the same journal conceded that Dr. Ioannidis had argued “convincingly” and that his claim that most conclusions are false “is probably correct.”
Courts and litigants should be concerned about unreliable, junky literature masquerading as the testimony of a qualified expert because reliability of expert testimony is a bedrock principle behind admissibility of the testimony in the first place. Even a qualified expert must give testimony that is both relevant and reliable. If, however, the literature the expert relies upon is itself unreliable or partially junk, then his or her testimony can be no better, no matter how articulately stated. It is no better than the expert's guess, speculation or conjecture, and the justice system demands more.
Concerns about the reliability of scientific literature have increased over the last few years due, in large part, to the proliferation of so-called “open-access” (OA) journals, including titles from industry giants such as Sage, Elsevier and Wolters Kluwer. After a number of experiences involving prospective authors and article reviewers raised suspicions about OA journals' practices, the Science Magazine editorial staff contacted John Bohannon, the Harvard-affiliated science journalist mentioned above. Intrigued, Bohannon examined some of the journals' websites and contacted editors and reviewers. What he found was disturbing. He decided to submit a science paper of his own, under a fictitious name, to a journal of the Scientific & Academic Publishing Co. (SAP), and then devised a broader sting operation: to compare that one target with other publications, Bohannon would “replicate the experiment across the entire open-access world.”
'Sting' Operation
Bohannon created a “credible but mundane” paper with such “grave errors that a competent peer reviewer should easily identify it as flawed and unpublishable.” The hoax article described a simple test of whether cancer cells grow more slowly in a test tube when treated with increasing concentrations of a certain molecule. In a second “experiment,” Bohannon wrote that the cells were treated with increasing doses of radiation to simulate cancer radiotherapy. The data were the same across all versions of the paper, and so were the bogus conclusions: “The molecule is a powerful inhibitor of cancer growth, and it increases the sensitivity of cancer cells to radiotherapy.”
There were numerous “red flags” in the papers. The graph was inconsistent with, indeed the opposite of, accepted data. Any reviewer with more than a high-school knowledge of chemistry and the ability to understand a basic data plot should have spotted the paper's shortcomings immediately. The hoax paper was sent to 304 OA journals at a rate of about 10 a week. The sting article was accepted by 157 of the journals and rejected by 98. Of the 255 versions that went through the entire editing process to either acceptance or rejection, 60% did not undergo peer review. Of the 106 journals that did conduct peer review, some 70% accepted the paper. PLOS ONE, published by the Public Library of Science, was the only journal that called attention to the paper's potential ethical problems and, accordingly, rejected it within two weeks.
Only 36 of the 304 submissions generated peer-review comments recognizing any of the paper's scientific problems, and 16 of those papers were accepted by the editors “despite the damning reviews.” One-third of the journals targeted in the sting operation were based in India, although the publishing powerhouses that profited from those activities were in Europe and the United States. The U.S. was the next-largest base, with 29 acceptances and 26 rejections. As Bohannon reports, one major publication, without asking for any changes to the scientific content, sent an acceptance letter and an invoice for $3,100.
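For readers who want to see how these reported figures fit together, the short Python tally below works through the arithmetic. It is only an illustrative sketch built from the numbers quoted above; the counts derived from the reported percentages (60% and 70%) are approximations, and the variable names are illustrative rather than drawn from Bohannon's report.

    # Rough tally of the Bohannon sting figures reported above.
    # Counts derived from the quoted percentages are approximations.
    submitted = 304                        # hoax paper sent to 304 open-access journals
    accepted, rejected = 157, 98           # final decisions reported by Bohannon
    decided = accepted + rejected          # 255 versions completed the editing process
    pending = submitted - decided          # roughly 49 submissions never reached a decision

    no_review = round(0.60 * decided)      # about 60% of decided papers saw no peer review (~153)
    reviewed = 106                         # journals that did conduct peer review
    accepted_after_review = round(0.70 * reviewed)  # about 70% of reviewed papers accepted (~74)

    critical_reviews = 36                  # reviews that flagged the paper's scientific problems
    accepted_despite_criticism = 16        # accepted by editors despite the damning reviews

    print(decided, pending, no_review, accepted_after_review)  # 255 49 153 74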
How to Review Articles
Apart from the startling Bohannon experiment, many other problems persist. (See, e.g., Sarah Fecht, “What Can We Do About Junk Science?” Popular Mechanics, April 8, 2014, http://bit.ly/1BvX5lK; Henry I. Miller and Bruce Chassy, “Scientists Smell a Rat in Fraudulent Genetic Engineering Study,” Forbes, Sept. 25, 2012 (Op/Ed), http://onforb.es/1w0qcqK; Beate Wieseler et al., “Completeness of Reporting of Patient-Relevant Clinical Trial Outcomes: Comparison of Unpublished Clinical Study Reports with Publicly Available Data,” PLOS Medicine, Oct. 8, 2013, http://bit.ly/1xB4K0Z; David H. Freedman, “Lies, Damned Lies, and Medical Science,” The Atlantic, Oct. 4, 2010, http://theatln.tc/1s2suud.)
In the biomedical area, for example, “published research findings are often modified or refuted by subsequent evidence.” There is increasing concern about a publication “bias toward positive results,” a competition to “rush findings into print,” and an overemphasis on publishing “conceptual breakthroughs” in high-impact journals. Misleading papers result in considerable expenditure of time, money and effort by researchers “following false trails.” (Editorial, “Further Confirmation Needed,” Nature Biotechnology, Sept. 10, 2012, http://bit.ly/1s2sF8F.) Leaders at the U.S. National Institutes of Health, warning that the “checks and balances that once ensured scientific fidelity have been hobbled,” are planning “interventions” to ensure the reproducibility of biomedical research. Part of the problem is what is not published: “There are few venues for researchers to publish negative data or papers that point out scientific flaws in previously published work.” In addition, unpublished data can be difficult to access. (Francis S. Collins and Lawrence A. Tabak, “Policy: NIH Plans to Enhance Reproducibility,” Nature, Jan. 27, 2014, http://bit.ly/1BERpne.)
As a practical matter, this means that testifying experts relying on published science papers often do not have complete information on the subject because the flaws and negative critiques come later and are largely unpublished. This reality hampers the ability to challenge and cross-examine to expose the flaws and unreliability of the hearsay literature.
Published works need to be exposed to comment, critique, refutation, and the identification of errors and weaknesses. Although, nominally, scientists are welcome to air contradictory findings, typically by contacting the authors directly or by writing a letter to the journal's editor, these are lengthy processes whose results “likely will never be heard or seen by the majority of scientists.” Thus, “most scientists do not participate in formal reviews.”
Although a small number of scholarly journals have launched online fora for scientists to comment on published materials, commenting journal by journal is inconvenient. If one wants to comment on a paper in the journal Nature, one has to go to the Nature site, find the paper and comment. If it is a PLOS paper, one has to go to the PLOS site. Such efforts are major investments of time, “particularly when people may never see the comments.”
Several concerned scientists developed a new post-publication peer-review system called “PubMed Commons,” housed on the often-accessed National Center for Biotechnology Information (NCBI) biomedical research database. PubMed Commons was announced on Oct. 22, 2013. It allows users to comment directly on any of PubMed's 23 million indexed research articles. For a description of this new PubMed venture, the reader can consult Aimee Swartz's October 2013 article in The Scientist Magazine, called “Post-Publication Peer Review Mainstreamed,” available at http://bit.ly/1DgIs7V.
Such an organized post-publication peer review system could help “clarify experiments, suggest avenues for follow-up work and even catch errors,” said Stanford University's Rob Tibshirani, one of the Commons developers and a professor of health research and policy and statistics. Post-publication review “could strengthen the scientific process.” Approximately 2.5 to 3 million people access the online resource each day. Researchers thus may have a resource to check whether published papers have stimulated objections or controversy or, perhaps, have been retracted.
The Value of Peer Review
The Bohannon sting operation illustrates that peer review may be poorly done or entirely absent. Further, as Dr. Ioannidis has observed, peer review is not a guarantee of accuracy. Even if there are two qualified reviewers, Ioannidis notes that “journal reviewers don't typically scrutinize raw data, re-run the statistical analyses, or look for evidence of fraud.” What they are reviewing, says Ioannidis, “are mostly advertisements of research rather than the research itself.”
Thus, the peer review process does not guarantee the trustworthiness of the article. University of Miami Law Professor Susan Haack wrote an excellent law review article on this issue in 2007 titled “Peer Review and Publication: Lessons for Lawyers,” available at http://bit.ly/1AqBJ7J.
Haack's article elaborates on many of the limitations inherent in peer review that have been identified in my New York Law Journal columns. Her article sketches the origins of peer review and the many roles it now plays; the rationale for pre-publication review and its shortcomings as a quality-control mechanism; the changes in science and scientific publication that have put the peer-review system “under severe strain”; recent examples of flawed or even fraudulent work that passed peer review; and the role peer review ought to play in courts' assessments of “reliability.”
Imperfect System
Some persons are “tempted to exaggerate” the virtues of pre-publication peer review. Instead of viewing it as a “rough-and-ready preliminary filter,” they consider it a “strong indication of quality.” But, in reality, “the system now works very imperfectly.” Peer review cannot be expected to “guarantee truth, sound methodology, rigorous statistics, etc.” Scientific editors have stressed that they and their reviewers “have no choice but to rely on the integrity of authors.” In addition, when the author is not present to testify and be cross-examined, the testifier's parroting of the hearsay can create a testimonial integrity gap that should signal gatekeeping courts to be cautious.
Citing a noted editor, Haack describes the review process roughly like this: An editor classifies articles into self-evident masterpieces, obvious rubbish, and the remainder, which need careful consideration and constitute the large majority. The editor then chooses one or two reviewers to look at each selected paper, giving them a checklist against which to check aspects of style, presentation and certain kinds of obvious error. The reviewers are given a time limit, often no more than two weeks, to respond with their assessments and recommendations. Reviewers “spend an average of around 2.4 hours evaluating a manuscript.” Many journals do not check the statistical calculations in accepted papers, and reviewers are in no position to repeat authors' experiments or studies, which ordinarily have taken a good deal of time and/or money. Acceptance rates vary. Where the acceptance rate is low, most of the papers rejected by the “most desirable” journals eventually appear in some lower-ranked publication. A paper “may have been rejected by 10 or 20 journals before it is finally accepted.”
With more and more papers submitted to more and more journals, the quality of reviewers and the time and attention they can give to their task “is likely to decline.” Prestigious journal editors have expressed major concerns. Richard Smith, former editor of the British Medical Journal (BMJ), wrote that peer review is “expensive, slow, prone to bias, open to abuse, possibly anti-innovatory and unable to detect fraud.” Drummond Rennie of JAMA wrote: “[T]here seems to be no study too fragmented, no hypothesis too trivial, no literature citation too biased or too egotistical, no design too warped, no methodology too bungled, … no argument too circular, no conclusion too trifling or too unjustified, and no grammar or syntax too offensive for a paper to end up in print.”
Haack advises: The fact that a work has passed pre-publication peer review is “no guarantee that it is not flawed or even fraudulent; and the fact that it has been rejected by reviewers is no guarantee that it is not an important advance.” Publication does, in the long run, make the article available for the scrutiny of other scientists. This increases the likelihood that eventually any serious methodological flaws will be spotted. Haack's discussion of how this all affects the quest for “reliability” in the courtroom is too lengthy to review here. But she posits a “whole raft of questions” lawyers should ask that might throw light on the significance of publication in a peer-reviewed journal. These can be of value to litigators preparing to challenge the hearsay article.
Conclusion
The courts' task is to gatekeep expert testimony to ensure that scientific evidence is relevant and reliable before it is ruled admissible. Anything less obscures the search for the truth and distorts the justice system. Somehow, the professionally-reliable-hearsay exception, permitted only because it is supposed to be trustworthy, has morphed into a “trial by literature” stampede in which expert testifiers use all manner of hearsay articles to quote or to bolster their testimony. Often, the magic words “peer review” are flashed as a talismanic admissibility-gate-opener.
Sometimes, the tactic works. The search for the truth deserves better, however. Litigators have been furnished with abundant information about peer review's shortcomings. Armed with such knowledge, they can fashion compelling advocacy. A battle over reliability can dictate the lawsuit's outcome.