Huffpost Entertainment
The Blog


Erik Lundegaard

Are Critically Acclaimed Movies More Popular? They Were Last Year


Last week I got called stupid 5,001 times. The extra time came from my seven-year-old nephew, whom I was picking up from golf lessons and driving to a friend's house, so I could take the two of them to, of all things, a Pokémon class for the afternoon. At the friend's house, my nephew, all enthusiasm, wanted to get out of the SUV's side doors, but I was unfamiliar with my sister's car -- the newest car I've ever owned is a '96 Honda Accord -- and didn't know there was an "Open" button located on the ceiling. "Open it!" he insisted. I held up my hands. "How do you open it?" I asked. Frustrated with an uncle whose newest car was five years older than he is, my nephew delivered the coup de grâce: "Stupid!" he said. I laughed.

The other 5,000 times came as a result of a Slate article I wrote. My nephew gets a pass: he's seven. The others, I assume, are a bit older.

David Poland on "The Hot Blog" is indicative. His critique of my article -- in which I wrote that, in general, a 2007 film that was well-reviewed (via Rotten Tomatoes' rankings) made $2,000 more per screen than a 2007 film that reviewers slammed -- is basically four-fold:

1. I love RT [Rotten Tomatoes]. It is a great site and a great idea [but] as a basis for statistical analysis, you should probably poll Patrick Goldstein's neighbors as soon as use those numbers for a factual analysis...
Some sympathy here. I didn't critique RT in the Slate article. In earlier drafts, yes, but you've only got so much space, even online (where attention spans are shorter), and besides, who wants to repeat themselves? Three and a half years ago I'd written about RT's shortcomings in the same manner Poland did, and those shortcomings are still true, but I still say that as an attempt to quantify quality -- which is what you need in a statistical analysis that uses quality as a frame of reference -- it's helpful.

2. The second HUGE mistake is, somehow, in spite of indicating a lot of knowledge in general, thinking that bulk numbers -- as in, every film released on as many as 100 screens -- can be used to analyze anything in a reasonable way. The math of the studio Dependents is quite different than the true indies, much less the small releases of under 300 screens and the behemoths of summer and the holiday season.
Obviously math from one place to another can't be "different" (2 plus 2... etc.), but if the box office numbers we're getting are being calculated differently, well, that would be good to know. But Poland doesn't continue. Maybe this "different math" is common knowledge in L.A., but it isn't with me. Part of the reason I wrote the piece is that those Monday morning box office numbers always seem half (or less) of the story. If there's more to the story that I'm missing, and that boxofficemojo -- the site from which I got most of my numbers -- is missing, I'd like to know.

3. The biggest, perhaps, problem of all, is that after trying to take a run at this idea, and examining his data, Lundegaard didn't just throw this junk science out. To wit ... what is the leggiest wide-release movie (domestically, since it is the only stat we can use for all US releases as of now) of The Summer of 2008? Anyone? What Happens In Vegas ... Rotten Tomatoes percentage? 27%.
Two things. He's equating popularity with legs, which isn't a bad method but has its own problems: namely, the problems he ascribes to my methodology in #2. But here's the second and more important point: There will always be exceptions. I don't understand why people don't get this. All I'm saying, all the numbers are saying, is that a 2007 film that was well-reviewed (via Rotten Tomatoes' system) generally did better, to the tune of $2,000 per screen, than a 2007 film that reviewers slammed. Are there exceptions? Of course. The 10th highest per-screen average belonged to National Treasure 2 and its 31 percent Rotten Tomatoes rating. Twelfth highest belonged to Alvin and the Chipmunks and its 24 percent rating. But generally it's true. The real question -- that no one seems to be asking -- is whether years besides 2007 will get similar results. I'm not sure. I haven't crunched those numbers.

4. And riddle me this... how can Lundegaard or anyone else assume that critics are increasing box office when "good" and "bad" are not the exclusive provenance of critics.
Three paragraphs later, Poland writes my answer: "There is nothing in Lundegaard's story that suggests in any sustainable way that critics reviews have a direct cause and effect on box office in a real way." Exactly. Because I'm not arguing direct cause and effect. I'm arguing correlation, not causation. I'm arguing that critics, perceived as elitist, are actually fairly good barometers of popular taste. I'm arguing something basic: that both critics and moviegoers like quality and don't like crap.

Is this revelatory? In a society that dismisses quality, and that holds up crap for imitation, it certainly feels revelatory.

The studios will always try to make their numbers look good, and it's part of our job to find out how they're lying with them. Is my method -- ranking films by the per-screen average for their entire run -- the best method? Probably not. But it's a method, a method we don't usually see, and, maybe, a method to build on.
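The per-screen-average method described above can be sketched in a few lines. Every title, gross, screen count, and Rotten Tomatoes score below is made up for illustration, not real 2007 data:

```python
# Sketch of the per-screen-average method: rank films by total gross
# divided by screen count, then compare well-reviewed vs. slammed films.
# All data here is hypothetical.

films = [
    # (title, total domestic gross ($), screen count, RT score %)
    ("Well-Reviewed Drama", 48_000_000, 2_400, 91),
    ("Panned Sequel",       66_000_000, 3_600, 22),
    ("Acclaimed Indie",      9_000_000,   450, 88),
    ("Slammed Comedy",      30_000_000, 3_000, 18),
]

# Per-screen average for the film's entire run.
ranked = sorted(
    ((title, gross / screens, rt) for title, gross, screens, rt in films),
    key=lambda row: row[1],
    reverse=True,
)

for title, per_screen, rt in ranked:
    print(f"{title}: ${per_screen:,.0f} per screen (RT {rt}%)")

# Average per-screen gap between "fresh" (RT >= 60) and "rotten" films.
fresh = [g / s for _, g, s, rt in films if rt >= 60]
rotten = [g / s for _, g, s, rt in films if rt < 60]
gap = sum(fresh) / len(fresh) - sum(rotten) / len(rotten)
print(f"Fresh films out-earn rotten ones by ${gap:,.0f} per screen")
```

Note that the raw Monday-morning gross ranking would put the panned sequel first; the per-screen ranking tells a different story, which is the point of the method.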