
Lawyering and Psychological Research

By David A. Martindale
December 31, 2013

Model Standard 4.6(b) of the Association of Family and Conciliation Courts' Model Standards of Practice for Child Custody Evaluation [Family Court Review, 45:1, 70-91] urges evaluators “to utilize and make reference to pertinent peer-reviewed published research in the preparation of their reports.” If I were to assert that research shows that, since the publication of the Model Standards in 2007, more evaluators are citing research in their reports, I would expect to be asked what research I am alluding to. There is none.

Research Support

In reading transcripts of deposition testimony by evaluators, it is not uncommon to encounter allusions to research support for positions taken by the evaluators. Often, no amplification is sought by the deposing attorney. Far more often than not, there are tactical advantages to challenging assertions of research support, and no corresponding tactical disadvantages. If an adverse witness is thoroughly familiar with the relevant research, and if that research does, in fact, support opinions expressed by the evaluator, it is better to learn this in the course of a deposition than to see the evaluator's knowledge on display for the first time at trial. If, on the other hand, the evaluator is engaging in testimonial improvisation or if the research alluded to is flawed, this is best determined at deposition.

It would be naïve to expect professionals in any field to devote significant amounts of time to conducting research on topics in which they take little professional or personal interest. Rarely, if ever, are researchers disinterested students of the phenomena that are the subjects of their studies. Often, those who conduct research hold theoretical positions for which they hope to garner research support.

Readers may recall that the Daubert decision (509 U.S. 579) was prepared in seven parts. In part II-C (at 593), the Supreme Court stated that “some general observations are appropriate”; those general observations have become known to most of us as the Daubert criteria, and one of those criteria is “falsifiability.” Chief Justice Rehnquist, writing in partial dissent, acknowledged that he was “at a loss to know what is meant when it is said that the scientific status of a theory depends on its 'falsifiability'” (at 600).

With that as background, the best endorsement of the validity of a theory is surviving attempts to disprove it. Finding evidence in support of a theory is significantly easier. In a biography of William O. Douglas, it is reported that, after his appointment, Chief Justice Hughes shared with Douglas the view that “ninety percent of any decision is emotional. The rational part of us supplies the reasons for supporting our own predilections.” It is likely that some of the evaluator recommendations that attorneys wish to challenge are the product of visceral reactions to the litigants for which rational support has subsequently been developed.

Not all research cited by evaluators is applicable to the fact patterns of the cases in which they are involved, and not all research is worthy of consideration. The probability of just outcomes is increased if attorneys know what questions to pose when evaluators allude to research support for their opinions.

An examination, conducted at deposition, of an evaluator's familiarity with research that is presented as supportive of opinions expressed can be quite useful at trial. An evaluator who has prepared a report in June and is being deposed in July should be reasonably familiar with research that she asserts has been relied upon in formulating her opinions. One cannot rely upon that with which one is insufficiently familiar.

Though the evaluator can play catch-up between deposition and trial, a competent attorney can call attention to the discrepancy between the information being imparted in the course of trial testimony and the ignorance displayed at deposition. Where research has been alluded to, what matters is the depth of the evaluator's knowledge of the research at the time that she asserts it was utilized, not her knowledge at the time of trial.

Testimonial Improvisation

In response to an inquiry at deposition, an evaluator explains that her recommendations are “based upon empirical research on the long-term effects of alienation.” There is none. If there were, it would have been useful to explore the evaluator's knowledge of the methodology employed. It would have been prudent to ask: “Whose research are you alluding to?” If a name had been provided, an appropriately inquisitive attorney would have asked what specific long-term effects were documented in the research and how those effects were assessed.

No deposing attorney should sit quietly as an evaluator asserts that a particular parenting plan “yields better outcomes for children, and the research demonstrates this.” What is meant by “better outcomes”? What are the criteria employed in defining good outcomes and bad outcomes? On what basis were those criteria selected? How were the criteria assessed? In some studies of the post-divorce and post-custody-litigation adjustment of children, information concerning the children's functioning has been obtained only from the parent with whom the child spends the most time (the parent who, during the course of a trial, is often referred to as the “favored parent”). Is it reasonable to expect a “favored parent” to inform a researcher that the child placed in his or her primary care is functioning poorly?

Sometimes, nebulous concepts have been employed by researchers whose work evaluators state they have relied upon. Where this has occurred, it is only through familiarity with the originally published article (as opposed to a brief description in a secondary source) that one becomes aware that the researchers have claimed to have a solid grip on a cloud.

A researcher alludes to the “emotional maturity of each parent, seen in each parent's capacity to operate from the child's best interests.” In what manner do we assess a parent's capacity to operate from the child's best interests? A researcher studies “parents' emotional availability to the child, as experienced by the child.” Certainly we do not ask children to rate their parents' emotional availability on a scale from 1 to 5. So, how is the assessment performed? Is the measure of emotional availability reliable?

Attorneys who must contend with reports that allude to research findings should be familiar with two types of reliability: inter-judge reliability and test-retest reliability. The first pertains to the likelihood that two or more individuals faced with the task of assessing something (such as a child's perception of a parent's emotional availability) will arrive at reasonably similar conclusions. The second pertains to temporal stability: What is the probability that the measure I take a month from today will be reasonably similar to the measure I take today? Measures lacking good inter-judge reliability or lacking temporal stability are devoid of research value.

Only rarely do descriptions of research that appear in secondary sources offer an articulation of presumptions made by researchers. A researcher seeks to ascertain whether a child playing with toys looks at the parent to see if the parent is watching. The researcher presumes that, where this occurs, it reflects a child's insecurity with respect to the parent's emotional availability. Is that a reasonable presumption?

It is also worthy of note that the researcher who wishes to secure this information endeavors to obtain it not through the use of an objective observer, but, rather, by asking the parent. Should it be presumed that parents can provide reasonably accurate reports of their interactions with their children?

'Outcomes'

When an evaluator alludes to research on “outcomes” for children placed in the “primary care” of one parent, the curiosity of the deposing family law practitioner should be piqued. A short list of questions to be posed would include: How is the evaluator using the term “primary care”? How was the term defined by the researchers whose work is being cited by the evaluator? Is the evaluator using the term as it was used by the researchers?

When the gavel comes down at the conclusion of a trial addressing access to or custody of a child, parents often find that they must cope with an arrangement that has been judicially imposed and with which they are uncomfortable. Examples include judicially imposed joint custody and court-designated parenting coordinators. Research conducted with parents who have voluntarily entered into joint custodial arrangements sheds no light on the effectiveness of judicially imposed joint custodial arrangements. Research conducted with divorced parents who voluntarily seek professional assistance in order to co-parent more effectively sheds no light on the effectiveness of parenting coordinators whose involvement with parents has been ordered over their objection.

Conclusion

Finally, there are times when researchers look at their data, conclude that the data convey a hoped-for message, and simply fail to consider eminently plausible alternative explanations for the data that were collected. Promoters of computer-generated interpretations of psychological test data (known as CBTIs, computer-based test interpretations) have asserted that data from surveys of CBTI users attest to the accuracy of the computer-generated interpretations. What the promoters of CBTIs refer to as “studies of CBTI validity” are, in reality, customer satisfaction surveys collected from customers whose satisfaction with the product (the CBTI) is virtually assured. A price is paid for a CBTI; the narrative report is not provided free of charge.

Practitioners who believe (correctly or incorrectly) that CBTIs do not provide accurate descriptions stop ordering them. Those practitioners become former customers. Surveys are taken among current customers. Those who, if surveyed, might assign low accuracy ratings to the CBTIs are not heard from.
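The attrition effect described above is arithmetic, not conjecture, and a small simulation makes it visible. All numbers here are hypothetical: a population of practitioners holds private accuracy ratings of a product, the dissatisfied stop buying, and a survey of remaining customers necessarily overstates the population's opinion:

```python
import random

random.seed(1)

# Hypothetical population of 1,000 practitioners, each with a private 1-10
# accuracy rating they would assign to a CBTI-style product if asked.
population = [random.randint(1, 10) for _ in range(1000)]

# Practitioners who consider the product inaccurate (rating below 5) stop
# ordering it; only the remaining, satisfied customers are reachable by survey.
current_customers = [r for r in population if r >= 5]

pop_mean = sum(population) / len(population)
survey_mean = sum(current_customers) / len(current_customers)

print(f"true mean rating across all practitioners: {pop_mean:.2f}")
print(f"mean rating among surveyed customers:      {survey_mean:.2f}")
# The survey mean exceeds the true mean because the critics have self-selected out.
```

Because the filter removes only low ratings, the surveyed mean can never fall below the population mean; the "validity study" inflates itself by construction.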

More than a decade ago, opining on what I believe to be the defining attributes of an expert opinion, I wrote that “the requisite characteristics relate to the procedures that were employed in formulating the opinion and the body of knowledge that forms the foundation upon which those procedures were developed. If the accumulated knowledge of the expert's field was not utilized, the opinion expressed is not an expert opinion. It is a personal opinion, albeit one being expressed by an expert.” [Martindale, D.A. (2001). Cross-examining mental health experts in child custody litigation. J. Psychiatry & L., 29 (Winter 2001), 483-511, at 503.] I strongly encourage evaluators to utilize peer-reviewed research, and to cite applicable research in their reports. I also urge deposing attorneys to pose the questions that will enable them to ascertain whether evaluators citing research are familiar with the research they cite.


David A. Martindale, Ph.D., ABPP, a member of this newsletter's Board of Editors, is board certified in forensic psychology by the American Board of Professional Psychology, and is the Reporter for the Association of Family and Conciliation Courts' Model Standards of Practice for Child Custody Evaluation. He offers forensic psychological consulting services to psychologists, attorneys, and licensing boards. Additional information may be found at www.damartindale.com.
