SEE UPDATE BELOW
An article posted to CNN.com yesterday evening contains some interesting information, and some massive backtracking, on the recent voice analysis of the screams heard the night Trayvon Martin died.
First, Tom Owen, the “expert” who performed the computer analysis of the voice samples, now says that only a 60% characteristic match, not 90%, is required to determine whether two samples match:
“Using [his software], he found a 48% likelihood the voice is Zimmerman’s. At least 60% is necessary to feel confident two samples are from the same source, he told CNN on Monday — meaning it’s unlikely it was Zimmerman who can be heard yelling.”
Here’s what Owen said in a March 31, 2012 article in the Orlando Sentinel:
“‘I took all of the screams and put those together, and cut out everything else,’ Owen says.
The software compared that audio to Zimmerman’s voice. It returned a 48 percent match. Owen said to reach a positive match with audio of this quality, he’d expect higher than 90 percent.
‘As a result of that, you can say with reasonable scientific certainty that it’s not Zimmerman,’ Owen says, stressing that he cannot confirm the voice as Trayvon’s, because he didn’t have a sample of the teen’s voice to compare.”
Quite a change in two days. Now it is merely “unlikely” that it was Zimmerman, not “certainly not” Zimmerman. Updated to add: It has been pointed out to me that the word “unlikely” was the CNN reporter’s term, not Owen’s, and that is a fair point. However, the point still stands: Owen first said that because the “high quality” samples did not show greater than a 90% match, the screams were not Zimmerman’s “with reasonable scientific certainty,” and two days later he said that only 60% is necessary for a match.
The CNN article goes on to acknowledge:
“And standards set by the American Board of Recorded Evidence indicate ‘there must be at least 10 comparable words between two voice samples to reach a minimal decision criteria.’… But that board’s current chairman, Gregg Stutchman — who described Owens as a friend and well-respected in their field — said that exact metric doesn’t necessarily apply to the software Owens used.”
I’m glad they are addressing the issue of the number of words, but the article does not say why that standard wouldn’t apply to Owen’s software. The article also doesn’t address the other problems with the samples. Still, it’s a start. And there’s more backtracking after that:
“David Faigman, a professor of law at the University of California-Hastings and an expert on the admissibility of scientific evidence, said courts and the overall scientific community have mixed opinions about the reliability of such “voiceprint” analysis.”
Of course, the reporter can’t quite let go without one final dig:
“Still, [Mr. Faigman] said, it wouldn’t be too hard for Zimmerman’s attorneys to find an audio expert to offer an opposing opinion.
“These expert witnesses come out of the woodwork when money is concerned,” he said.”
I find that quite ironic, since I’ve read, but not confirmed, that the Orlando Sentinel paid for the original voice analysis. Out of the woodwork, indeed.
UPDATE: CNN has now updated the article. New link to article. In particular, the section regarding the questionable voice analysis has been expanded:
“But CNN and HLN legal analysts Beth Karas and Sonny Hostin raised questions about what the public should consider regarding the conclusions reached.”
This was followed by some discussion, which concluded with both analysts “urging caution” about drawing any conclusions from the results of the voice analyses. Please go read! And Mr. Owen has provided a non-explanation of why we should believe his software:
“Owen said the published American Board of Recorded Evidence standards apply only partially to the kind of test he conducted.
“These standards apply to the older aural-spectrographic analysis and software,” Owen said. “This only partially applies to the biometric software.”
Backwards and backwards and backwards…