
Moneyball: Dead, Alive, or on Life Support?

03/18/2010 05:12 am ET | Updated May 25, 2011
Daniel Adler, Harvard Sports Analysis Collective Graduate Advisor

[Image: Moneyball tombstone]

This summer, much was made of the end of Moneyball. The teams with the five highest win totals—the Yankees, Angels, Dodgers, Red Sox, and Phillies—rank 1st, 6th, 9th, 4th, and 7th in opening day payroll. Most of those teams took on even more salary during the season, so their final payroll ranks may be even higher.

After a few years of increasing parity, in which we saw the A's flourish, the Twins stay consistently competitive, the Marlins win a World Series, and even the Rays win the American League, disparity appeared to be on the rise this season. Harkening back to the late 1990s and the environment that spawned the Blue Ribbon Panel, this season seemed to be an extreme case of the haves versus the have-nots. The small-market clubs that had recently feasted on undervalued players appear to have lost their competitive advantage since the big spenders started to buy into the same philosophies—in many cases hiring Ivy League stat geeks like ourselves. Michael Lewis' concept of Moneyball was not about On-Base Percentage, but rather about finding undervalued assets. Was this the year the big-market clubs finally started valuing assets correctly? Did money spent play a bigger role this season than in recent years? Intuition (and the Yankee World Series victory) says yes, but let's take a look at the numbers.

Methodology

Looking at opening day payroll and wins, we test to see in which seasons spending is the best predictor of wins. Since a team of replacement-level players (i.e., talent available for the league minimum) could win 49 games, we will consider the number of games a team wins above 49 (marginal wins). Also, we will consider only the money spent above the minimum salary threshold (marginal dollars). This methodology, popularized by Baseball Prospectus, is very common. Payroll data is courtesy of USA Today's salary database.
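
To make those two definitions concrete, here is a minimal sketch in Python; the team records and the payroll floor below are made-up placeholder numbers, not figures from the actual data.

```python
# Illustrative calculation of marginal wins and marginal dollars.
REPLACEMENT_WINS = 49     # wins expected from a roster of league-minimum players
PAYROLL_FLOOR = 11.0      # hypothetical minimum-salary payroll, in $M (placeholder value)

teams = [
    # (team, wins, opening day payroll in $M) -- invented numbers
    ("Team A", 95, 140.0),
    ("Team B", 84, 80.0),
    ("Team C", 70, 45.0),
]

for name, wins, payroll in teams:
    marginal_wins = max(wins - REPLACEMENT_WINS, 0)
    marginal_dollars = max(payroll - PAYROLL_FLOOR, 0.0)
    print(f"{name}: {marginal_wins} marginal wins on ${marginal_dollars:.1f}M marginal payroll")
```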

Results

Running a simple regression that predicts marginal wins as a function of marginal dollars spent (sketched in code after the list below), we will learn two things:

  1. Was marginal spending a significant (i.e., non-random) predictor of marginal wins? This is the p-value: the probability that we would see a relationship at least this strong by chance alone if spending actually had no effect on wins. If the p-value is .50, the observed relationship could easily be noise (spending does not impact wins). A lower p-value means spending was more clearly a real predictor.
  2. What percentage of variation in marginal wins can we predict if we know spending? This is called the r2. A high r2 means that we can predict more of the variation in wins based on spending. If r2 were .05, that would mean we could predict 5% of the variance in wins if we knew spending. For r2, higher means money spent was a greater predictor of wins.
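
For a single season, the regression itself is only a few lines of code. The sketch below uses scipy's linregress as one way to do it; the input lists are invented placeholder values, not real payroll figures.

```python
# Minimal single-season sketch: regress marginal wins on marginal dollars.
# The numbers below are placeholders for illustration only.
from scipy import stats

marginal_dollars = [10.0, 25.0, 40.0, 60.0, 95.0, 130.0]  # $M above the minimum payroll
marginal_wins    = [12,   20,   31,   38,   42,   47]      # wins above 49

result = stats.linregress(marginal_dollars, marginal_wins)

print(f"p-value: {result.pvalue:.4f}")       # how likely this strong a fit is by chance alone
print(f"r2:      {result.rvalue ** 2:.3f}")  # share of the variation in wins explained by spending
```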

First, let's run a quick regression looking at all years since 1989. To account for payroll inflation, I have brought everything up to present value using a discount rate of 8% (roughly the average rate of payroll inflation in recent seasons).
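
A minimal sketch of that adjustment, using the 8% figure quoted above (the helper name and example values are mine):

```python
# Inflate a past season's payroll to 2009 dollars at 8% per year.
RATE = 0.08
BASE_YEAR = 2009

def to_present_value(payroll_millions, season):
    return payroll_millions * (1 + RATE) ** (BASE_YEAR - season)

# e.g. a $50M payroll in 1999 is treated as roughly $108M in 2009 dollars
print(round(to_present_value(50.0, 1999), 1))
```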

[Figure: marginal wins vs. marginal payroll, 1989-2009]

Unsurprisingly, over the past 20 years, opening day marginal salary has been a significant predictor of wins. This estimate is not very precise since the 8% discount rate is an approximation and just allows us to consider all years on the same graph. We will get a clearer picture looking at each season individually.
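
Repeating the regression season by season is just a loop over years. Here is a rough sketch, assuming the data live in a pandas DataFrame with hypothetical column names ('season', 'marginal_dollars', 'marginal_wins'):

```python
# Sketch of season-by-season regressions (column names are assumed, not taken
# from the actual dataset). Requires pandas and scipy.
import pandas as pd
from scipy import stats

def per_season_regressions(df: pd.DataFrame) -> pd.DataFrame:
    """Return the p-value and r2 of the wins-on-spending regression for each season."""
    rows = []
    for season, grp in df.groupby("season"):
        res = stats.linregress(grp["marginal_dollars"], grp["marginal_wins"])
        rows.append({"season": season, "p_value": res.pvalue, "r2": res.rvalue ** 2})
    return pd.DataFrame(rows)
```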

[Figure: p-values and r2 by season]

p-values

As you can see, in the early 1990s, salary did not predict wins for many of the years (high p-values). However, considering the full seasons after the strike, we see that salary has always been a good predictor (with 2008 being the only year p > .05).

r^2 values

This is where things get interesting. In the years leading up to the
strike (1990-1993), salary was a poor predictor of marginal wins.
Immediately after the strike, there was a series of years in which
marginal salary was a good predictor of marginal wins. This is when
parity hit its nadir and the Blue Ribbon Panel was commissioned. In 2000 and
2001 (both years in which the Yankees made the World Series), money was
a very poor predictor of wins. Since then, the r2 has hovered in the .17-.29 range, with 2008 (the year of the Rays) being a notable outlier at .107.

Considering the past 8 seasons, this year was not particularly
notable in terms of payroll predicting success. However, the big-market clubs are spending smarter, particularly in the draft and on Latin American teenagers, which is not reflected in opening day payrolls. The margin for error for the little guys is razor-thin and the window of
opportunity can be very short (see: Indians’ downfall after 2007 and
Rays’ slide after 2008). However, the data suggest that this year may
not have called for the alarmist musings
by some members of the media. Competitive balance is probably not what
it should be, but this year was hardly different from most other recent
seasons.

Here are some plots from a few recent seasons:

[Plots: 2006-2009, via harvardsportsanalysis.wordpress.com]
