In Part I and Part II of this post, I began recounting my recent experiences as an eyewitness expert in an armed robbery trial. My testimony in the case focused on the general limitations of eyewitness memory, but the photo array that police presented to the victims was problematic in its own right.
When I looked at this array before the trial began, the image of the defendant seemed to leap off the page of nine photos. I wasn't entirely sure why: perhaps because his face filled more of the frame than the others, perhaps because his head was cocked at an angle unlike the others, or maybe it was the visible remnants of a mysterious string of numbers typed below his photo -- I assume some sort of record-keeping code that hadn't been totally removed when the photo was cut from its printed sheet and stuck on the array background.
Whatever it was, something about the array looked fishy.
After all, a fair test of eyewitness memory would present a series of comparable photographs, such that the only reason a picture would stand out to witnesses is that they've seen the person in it before -- not because that one photo is somehow different from the others.
Of course, I already knew which photo was the defendant's. Maybe that's why my eyes were drawn straight to his face. I needed a more objective test of my hunch. So I showed the same photo array to 31 individuals who matched the basic demographics of the actual mugging victims. All I told them was that it was a photo array for an armed robbery case -- nothing more. Based on this limited information, I asked them to look at the photos and pick out the guy they felt was most likely to have committed the crime. Then I let them make a second choice as well.
When you do the math, in a balanced nine-photo array, someone who knows nothing about the crime should have a 1/9 (11 percent) chance of picking any one photo. That is, if none of the photos stands out problematically, each one should have an 11 percent chance of being chosen. Giving respondents two choices doubles each photo's chance of selection to 2/9 (22 percent).
But in my photo array experiment, 23 percent of naïve respondents picked out the defendant with their first choice, knowing nothing at all about the crime. And a full 45 percent chose the defendant with either their first or second choice. Statistical analysis confirms that these are significant deviations from chance: for some reason (or reasons), the defendant in this array did stick out like a sore thumb, casting doubt on the usefulness of the actual victims' identifications in the case.
Now it's possible, of course, that had I presented my respondents with more information about the culprit, their photo array selections would have been different. All I had told them was that the crime was an armed robbery. So I ran a second test, this time giving people the same description that the victims originally provided of the culprit: a tall, clean-shaven, dark-skinned Black man in his 20s. I left out the detail that the mugger allegedly had jagged or missing teeth, since none of the men in the photos were smiling. Based on this description, I once again asked respondents to pick the person they thought had committed the crime (and then to make a second choice).
Giving a physical description did make a difference in how my mock witnesses performed: the photo array fared even worse. Twenty-nine percent of respondents chose the defendant first, compared with the 11 percent that chance would dictate. And 65 percent (!) picked him with either their first or second choice, far above the 22 percent chance rate.
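For the statistically curious, the chance comparisons above can be checked with an exact binomial tail calculation. The sketch below is illustrative only, not my actual analysis: it back-converts the rounded percentages into assumed raw counts of 7, 14, 9, and 20 picks (the whole numbers consistent with the reported figures, assuming 31 respondents in both rounds), and it treats "either of two picks" as a single binomial draw with p = 2/9, which is itself an approximation.

```python
from math import comb

def binom_tail(n: int, k: int, p: float) -> float:
    """Exact one-tailed probability P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

N = 31  # respondents (assumed for both rounds)

# Round 1: ~23% first picks and ~45% first-or-second picks of the defendant
# (7/31 ~ 22.6% and 14/31 ~ 45.2%, so 7 and 14 are the assumed raw counts).
p_first_1 = binom_tail(N, 7, 1/9)    # chance rate for a single pick: 1/9
p_either_1 = binom_tail(N, 14, 2/9)  # two picks: ~2/9 per photo (approximation)

# Round 2, with the culprit description: ~29% and ~65%
# (9/31 ~ 29.0% and 20/31 ~ 64.5%).
p_first_2 = binom_tail(N, 9, 1/9)
p_either_2 = binom_tail(N, 20, 2/9)

for label, pv in [("round 1, first pick", p_first_1),
                  ("round 1, either pick", p_either_1),
                  ("round 2, first pick", p_first_2),
                  ("round 2, either pick", p_either_2)]:
    print(f"{label}: one-tailed p = {pv:.4f}")
```

Under these assumed counts, the first-or-second-pick results in both rounds fall well below the conventional .05 threshold; the actual counts and test behind the significance claim above may of course differ.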
In other words, the deck was stacked against this defendant as soon as the police glued his photo into the array. He just looked guilty, even to people who knew nothing about the crime. And he really looked guilty to people who knew what the culprit was supposed to look like.
I was both dismayed and excited by the results. Dismayed at what they meant in a case in which eyewitness identification was the only evidence supporting serious criminal charges against a young man. Excited because while psychologists often seek to conduct research with real-world implications, here was quick data collection that couldn't have been more applicable to a real-life problem.
So I headed to the courthouse that morning, data summary in hand, ready to educate the court not only about the general limitations of eyewitness memory, but also about the problematic aspects of the photo array used in this case. The defendant had waived his right to a jury trial, and my testimony in front of the judge went well. By the time I arrived back home, the following email from the defense attorney was waiting in my inbox:
Sam, the not guilty verdict was handed down immediately after brief closing arguments. The judge was diplomatic. He offered words of sympathy to the eyewitnesses, praised the work of the investigators, and then uttered the all-important "however..." He adopted your rationale of "malleable confidence" and "post-event" influences to explain his ruling.
Justice served, at least this time around. Not a bad way to spend a Wednesday. Or to get a quick reminder of the great potential that behavioral science has to help resolve real-world dramas.
Like this post? Then check out the website for "Situations Matter: Understanding How Context Transforms Your World" (available now for pre-order). You can also follow Sam on Facebook here and on Twitter here.