Author's Note: Readers interested in reviewing the research literature underlying the material presented herein can study the following: 1) Faust, D. & Ahern, D.C. (2011): Clinical Judgment and Prediction (in D. Faust: Coping with Psychiatric and Psychological Testimony, 6th ed.); 2) Garb, H. (1998): Studying the Clinician: Judgment Research and Psychological Assessment (APA); 3) Lamb et al. (2000): Accuracy of Investigators' Verbatim Notes of Their Forensic Interviews with Alleged Child Abuse Victims (Law and Human Behavior, vol. 6); 4) Stewart, R. & Chambless, D. (2007): Does Psychology Research Inform Treatment Decisions in Private Practice? (Journal of Clinical Psychology, vol. 63).
Assumptions
When forensic psychologists and psychiatrists accept appointments as evaluators or take the stand to testify in a custody matter, many assumptions about forensic practice circulate in the legal community, and even among litigants, that are questionable at best. Some are entirely inaccurate and can foster a misplaced acquiescence to the belief that the “doctor knows best.” A review of some of these misplaced assumptions, through the lens of clinical judgment research, reinforces the notion that zealously analyzing and challenging the data, methods and reasoning used by evaluators is critically important during any litigation that may profoundly affect a child's life.
Myth #1: More Years of Experience Yield Greater Assessment Accuracy
The gradual shift from the pre-Frye and Frye (see Frye v. United States, 293 F. 1013 (D.C. Cir. 1923)) evidentiary standards to the reasoning behind the decision in Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993), partially represents the realization that it is not the identity of the expert or of the expert's discipline, but the reliability of the method that was used by the expert that counts. However, it is common for attorneys favored by a forensic conclusion to highlight for the court the number of years an expert has been practicing or the number of assessments they have completed. It is not uncommon in Family and Civil Court matters for a moment to come in the court proceedings when, having been asked for the basis of a particular forensic opinion, the forensic evaluator utters the famous words, “Well, based on my clinical experience … ”
Unfortunately, the clinical judgment research indicates that once clinicians complete graduate training (which does appear to increase diagnostic accuracy), those who are presumed to be experts are no more accurate than clinicians with less experience. Interestingly, these findings have been replicated in studies of physicians, special education teachers, physical therapists, occupational therapists and social workers. (No one has gotten to attorneys yet!) The fact that a testifying custody evaluator has significant experience does not necessarily translate into improved forensic reliability undergirding the opinion (a disconnect that is most likely when the area of forensic practice offers the evaluator no corrective feedback about when s/he gets it right and when s/he gets it wrong, as in custody assessment).
Myth #2: More Information Yields a More Reliable Opinion
This author has peer-reviewed custody evaluations ranging from 20 to 350 pages. The longer assessments often involve the review of thousands of pages of material, along with hours of video-watching and audio-listening, numerous calls to long lists of collaterals, and many, many hours of interview time. There is an understandable inclination on the part of lawyers and judges to be in awe of the mega-report, based on the assumption that more information just has to be better than less information. However, the critical issue is not how much information was gathered and analyzed, but how much valid information was considered and how much invalid or distracting information was utilized.
Custody assessment is essentially an extraordinarily complex prediction task, often involving the processing of mounds of data over many hours. As the saying goes, “garbage in, garbage out.” Research in the area of clinical judgment makes clear that increasing the amount of information available to evaluators often fails to improve judgment accuracy. The central task for forensic evaluators is not to gather as much data as they can, but: 1) to gather data their literature says best predicts what they are trying to predict; and 2) to be vigilant for data their literature says should be considered unimportant or invalid distractions for that prediction. In other words, a short evaluation that focuses on valid information and selectively dis-attends to invalid information may well be far more reliable than the 300-page behemoth that included poorly chosen, distracting or useless information.
Myth #3: Evaluators Are Able to Reliably Combine, Weight, and Integrate Complex Data
There has been a decades-long controversy about whether, as clinicians try to make their predictions, we are more effective: 1) when we emphasize countable, actuarial, statistical data; or 2) when we sit back in our armchairs, think about all of the data we have gathered, analyze patterns unique to the case, and integrate it all using our professional experience and our powers of logical and clinical reasoning. The result: Emphasizing actuarial information, when it exists, almost always beats clinical reasoning.
However, those who are uncomfortable with mathematical predictions about difficult human problems often reply that computers cannot (yet!) perform the essential tasks of combining, weighting and integrating the wealth of complex, multi-level information we gather on families. They emphasize the importance of taking a holistic, “configural” approach to the analysis of the data we gather as evaluators. Unfortunately, research on clinical judgment calls this idea into question as well: Prominent reviewers (listed above) have failed to find any studies confirming that clinical evaluators as a group, reasoning in their armchairs, can perform complex, configural analysis that actually adds information and improves prediction. The implication: When analyzing a custody report we must remember that 1) the “factual” information that was gathered, rather than any complex abstract reasoning applied to that data, may be what is most useful; and 2) the complex reasoning that was applied may be nothing more than the effect of the expert's subjective opinions about parenting and family life.
Myth #4: Courts Can Trust That Evaluators Know and Use Their Research Literature
This author recalls the words of a senior professor in graduate school, 28 years ago: “Folks, remember that a PhD doesn't mean you are done — it means you are prepared to be a life-long learner.” Unfortunately, the notion that experts in custody cases can be trusted to have kept themselves abreast of the most recent custody-relevant literature in their field has, in this author's experience, little support. With the exception of New York State, where there is absolutely no requirement for psychologists to engage in ongoing continuing education (an amazing and embarrassing state of affairs), many states do require ongoing training for mental health professionals. However, this in no way guarantees that evaluators are studying the research literature most relevant to forensic issues.
The literature on professional behavior suggests that, despite training in the scientist-practitioner model, a meaningful percentage of practitioners pay little attention to the peer-reviewed research, and many carry on with the worrisome belief that clinical judgment alone is superior to research-guided practice.
Myth #5: Testifying Experts Objectively Consider the Data Regardless of Who Pays Them
A forensic expert's highest calling is to objectively apply his or her profession's library of knowledge to psycho-legal issues before the court, and to do so unencumbered by partisan demands and sources of influence. However, in the push-pull of legal advocacy, a common approach is to shop for expert opinions that work for one's client. When there is a lineup of experts paid by opposing sides, it is important to recall that there is a strong, measurable drift in expert opinions toward those that please the side that is writing the check. Research on such “retention-bias” has made clear that this form of human fallibility (and its attendant abandonment of the highest calling noted above) exerts a real effect on how experts gather data and reason about it, however unintended or unconscious the tendency may be.
Myth #6: Experts Know How Often Their Methods Get It Right and Wrong
When we visit a physician before surgery, perfectly sensible questions for the surgeon include, “Doc, how often does that test you used get it wrong?” or, “How often do patients who have this surgery get better?” In the area of assessing violence risk, or even general psychopathology, forensic psychologists are able to approximate the error rate associated with the methods they are using. For a long list of reasons, mental health professionals, acting as if they can reliably choose which parent should get custody or what the time-sharing plan should be for a child, actually have absolutely no idea how often they, or their discipline in general, get such decisions right or get them wrong. In this way, it is truly the Wild West out there. No expert custody evaluator has any idea how well they are doing at what they purport to be able to do.
Myth #7: Experts Are Able to Accurately Appraise Their Level of Certainty
It is not uncommon for testifying experts to be asked about their level of certainty regarding conclusions they have drawn about children or parents. It is also not uncommon for experts to freely and easily offer an answer (“I feel moderately confident” or “I feel highly confident about that conclusion.”). Unfortunately, while experts may “feel” this or that way about their opinions, the clinical judgment research is clear on the following: Clinicians are often overly confident in their own opinions, and when objective measures of accuracy are used, there is a negligible or weak association between that confidence and how accurate the opinion actually is. This overconfidence problem is likely most pronounced in clinical tasks where the expert receives little corrective feedback. (As noted above, this is exactly the situation for custody evaluators: lots of opinions with little feedback about when they get it right or wrong.) We also now know of a related, worrisome tendency: experts selectively remember data that supports a conclusion, and sometimes “recall” supportive data that never existed in the first place.
Myth #8: Experts Can Be Trusted To Create Accurate Records of What Happens
The original written record of what happens in the forensic examination room is a critically important source of information about the data set that was collected and the assessment method that was used. In the absence of videotapes of forensic sessions, it represents the closest attorneys and the court can get to what really happened, even though such details pass through the filter of the expert's memory, reasoning and biases as notes are taken. However, research on the accuracy of evaluator notes gives reason for pause: When clinician notes (in the area of sexual abuse assessment) of what happened in evaluation sessions were compared with videotapes of what actually happened in the sessions, there was a very worrisome rate of error (complete omissions of critically important information, misremembering of what method was used, etc.).
We conclude this article in next month's issue with a look at more common myths.
*****
Jeffrey P. Wittmann, Ph.D., a member of this newsletter's Board of Editors, is a licensed psychologist and trial consultant whose national practice concentrates on trial support for attorneys in custody and access matters and on forensic work-product reviews. He is author of Evaluating Evaluations: An Attorney's Handbook for Analyzing Child Custody Reports (MatLaw, 2013). Additional information can be found at childcustodyforensics.com.