Your Trial Message

(formerly the Persuasive Litigator blog)

Expect Trial Consulting Myths to Die Hard

By Dr. Ken Broda-Bahm:

No Snake Oil

A recent article in Pacific Standard, the online publication of the Miller-McCune Center for Research, Media and Public Policy, continues a debate that has become familiar to litigation consultants. The opinion piece by Seattle science writer Jane Hu argues that the trial consulting field, and specifically its role in jury selection, fails to deliver as advertised and has a corrosive effect on a fair trial. The article’s URL goes so far as to call it “quackery.” The piece has been republished at Undisputed Legal as well, and has been plugged, tweeted, Reddited, and Tumblr’d through dozens of other sites. In particular, it is featured in the National Center for State Courts’ current Jur-E Bulletin along with the note that it is a “well written article worth reading,” and also, oddly, that it “uses a fun piece of artwork.”

At heart, the piece parallels a 2012 piece by Joel Warner that appeared in Slate. My review of that piece calls out three misconceptions that apply just as well to Jane Hu’s article:

  • It confuses social science with physical science.
  • It assumes inaccurately that the purpose of scientific jury selection is to determine trial outcomes.
  • It discounts scientific jury selection’s focus on the proper goal of voir dire, which is to uncover bias.

Forty years into the history of trial consulting, this still seems to be a reality of consultants’ professional image: At least every couple of years, someone will dust off earlier published critiques of trial consulting (like this one or this one) and trot them out for an easy article as if they’re fresh new insights about a previously unknown field. By and large, these critiques are based on a highly skewed image of what trial consultants actually do. (For a more realistic picture, check out this book by Richard Gabriel, who worked on the Simpson, Spector, Anthony trials). As long as the misconceptions persist, however, it is up to working consultants to correct these perceptions wherever we see them. In this post, I will take up that challenge and walk through seven of the more important misconceptions raised in Jane Hu’s article.

1. Is Trial Consulting ‘Unscientific’? 

Let’s start with her title, “The Unscientific Science of Jury Selection.” As a field, it is only “unscientific” if one’s notion of science includes only those fields having something to do with rockets. If we look beyond the physical sciences to account for social science, then the methods used by most consultants and enshrined in the American Society of Trial Consultants’ standards and practice guidelines are as scientific as those used by the broader community of psychologists, sociologists, communication experts, and others who seek to measure and to influence public opinion. When people use the word “unscientific,” I suspect that what they actually mean is “uncertain.” And that is true: No knowledge of human perception and decision making will ever be certain. But that does not prevent such knowledge from being useful, particularly in an adversary context, where each party is expected to use the available tools to put forward its best case.

2. Does Trial Consulting ‘Incentivize the Use of Lazy Stereotypes’? 

The subtitle of Hu’s article is, “The techniques of jury consultants are unreliable, and often incentivize the use of lazy stereotypes.” The article then includes an example, apparently important enough to serve as the article’s only call-out text, regarding a litigant’s decision to exclude a Hindu individual from the jury. “Hindus tend,” he said, “to have feelings a good bit different from us.” That is pretty damning. And it would be both damning and on point if there were a trial consultant associated with that decision. But there is no mention of any trial consultant’s role in that anecdote. The party that made the challenge (a prosecutor) is probably the party least likely to use a consultant. At the end of her article (see my #7), Hu even acknowledges that lazy stereotypes are more likely without a consultant’s advice. That squares with both logic and experience. Without the help of an expert trained to discover actual attitudinal bias, attorneys are prone to make gross assumptions like the one Hu shares in the call-out. So the reasoning in the article undercuts its own subtitle: Trial consulting strongly disincentivizes the use of lazy stereotypes.

3. Is the Legal System Unsuited to a Consultant’s Role? 

One underlying premise of the article is that, by introducing nonlegal areas of expertise, the trial consulting field has upset the design of a fair trial. Hu notes that the system places trust in the judgment of common people, then adds, “but this system was designed well before social psychologists discovered how pervasive humans’ biases are.” But you can go back pretty far — all the way to Aristotle — and find an understanding of the pervasiveness of human bias. Those persuading the first juries in early Athens were well aware that they were persuading people and not machines. That isn’t a weakness of the jury system; that is the reason for it. And as long as unfair levels of bias are weeded out, the system works pretty well. Based on her opening example, Hu seems to imply that the dismissal of more than half the panelists for Jodi Arias’ sentencing phase is a sign of something wrong. But the fact that we are able to rely on our expertise in order to uncover and excuse widespread bias in a media-saturated, life-or-death case like this — that is the sign of something right.

4. Do Consultants Expand the Gap Between Rich and Poor Litigants? 

Another frequent criticism, and one that comes closer to the mark, focuses on the difference in the clients who end up with high-priced trial consultants. “Critics raise concerns,” Hu notes, “that these high fees mean only corporations and wealthy individuals can afford consultants, which widens the gap between the rich and poor in the courtroom.” Her suggestion of court-appointed trial consultants is a good one. That already happens, but it should happen more often. The idea of having courts require trial consultants to share their data with the other side, however, is a very bad one. It breaks the attorney work product privilege, which in turn breaks the adversary system itself. There is no reason that logic wouldn’t apply to all manner of expert advice the parties use, and even to the differential skill of the attorneys themselves. The better solution is to do what attorneys do: Offer pro bono assistance. Trial consultants do that too, and would do it more if they were asked. Anyone can email the chair of the American Society of Trial Consultants’ Pro Bono Committee (Michelle Ramos Burkhart) for assistance in obtaining trial consulting help on a pro bono matter.

5. Do Trial Consultants Fail to Deliver? 

Hu draws upon earlier critiques to advance the argument that trial consultants simply don’t improve jury selection. Further, she considers it a “best-case scenario” if scientific jury selection doesn’t work very well, because that prevents the inappropriate molding of juries for partisan ends. It is the familiar argument, premised on the assumption that a consultant’s role is limited to just Stack and Sway, the title of the field’s most familiar opposition book. She quotes Neil Kressel, one of its authors, on the question of whether scientific jury selection works, and the answer is, “Not very well.” The problem is what we mean by “works.” Relying on data that is familiar to trial consultants, Kressel notes that jury composition is only partially related to trial outcome because — thank goodness — the facts of a case matter more than the composition of the jury. So if the goal of jury selection is to determine trial outcome, then the consultant’s approach doesn’t, and shouldn’t, “work.” But if the goal of jury selection is to identify and remove those jurors who would draw upon their own bias to give undue weight to factors other than the evidence, then it works quite well. And there are plenty of tools in common use by consultants, based on both general and case-specific research, that are designed to identify bias.

6. Are Consultants Too Reliant on Intuition? 

My answer to that is, possibly so, but undoubtedly to a lesser extent than attorneys operating without a consultant’s help and data. Jane Hu’s article quotes well-known trial consultant Jo-Ellan Dimitrius saying that intuition “has always played a major part in my work.” In response, Hu notes the very real problems in human lie detection, even among trained and skeptical professionals. Because intuition extends beyond simple lie detection, it remains possible that some consultants over-rely on “gut.” But it is even clearer that too many trial lawyers do. By and large, the skills and the knowledge base consultants bring to the task will be more likely to lead attorneys away from unmoored intuition and toward firmer ground.

7. Do Consultants Abuse Race-Neutral Reasons for Strikes? 

Anyone who conducts voir dire knows the ability to strike is not unlimited. Batson v. Kentucky, and the cases that followed it, made strikes based on certain demographic characteristics (race and gender, for example) legally impermissible. That in turn has set up a situation where attorneys can be called upon to give the court distinct (e.g., race-neutral) reasons for their strikes when the strikes seem to follow a demographic pattern. To Jane Hu, this is where the abuse enters the picture. “Some argue,” she doesn’t say who, “that consultants exploit this loophole by twisting their data to find ‘race-neutral’ arguments.” It is certainly possible that attorneys, with or without a consultant’s help, could violate their role and ethics by lying about their reasons for strikes. But given that most consultants are well aware of the unreliability of demographics in predicting bias — an unreliability that Hu points out as well — it seems much more likely that consultants would lead attorneys away from demographic reliance. Hu acknowledges this as well: “Research suggests,” she says, “that asking jurors detailed questions actually decreases the possibility that attorneys strike based on surface characteristics like race or gender.” So the greater possibility is that a race-neutral reason isn’t a pretext for a demographic strike; it is the real reason for the strike.

There is one aspect of trial consulting that nearly always attracts some suspicion in these critiques. Hu’s piece for example references an article by Audrey Cleary of Villanova University noting that other professions with a comparable impact on the administration of justice are licensed, regulated, certified or held to high external standards. For trial consultants, though, “there are no such checks on the field” and, further, that “anyone can advertise and practice as a ‘trial consultant.'” That part is true, but as much as anything, it is a sign that, even as it enters its fifth decade, trial consulting is still a relatively young field. But even without those credentials, it remains true that the average trial consultant is much more informed than the average critic of trial consulting when it comes to the misconceptions identified above. The myths will die hard over time. And in the meantime, those who have questions about the foundation for trial consulting and jury selection assistance should ask an experienced practitioner, and should focus on the actual practices and not on the myths.

_______


Cleary, A. (2005). Scientific Jury Selection: History, Practice, and Controversy. URL: http://concept.journals.villanova.edu/article/viewFile/255/219

Hu, J. (2014). The Unscientific Science of Jury Selection. Pacific Standard, November 18, 2014: URL: http://www.psmag.com/navigation/politics-and-law/jody-arias-quackery-behind-scientific-jury-selection-94423/

Image credit: 123rf.com, used under license (edited). giulianocoman / 123RF Stock Photo