Going into this past weekend, two Ohio basketball teams were streaking, but within just 24 hours, both streaks were over. On Friday evening, the Cleveland Cavaliers' 26-game losing streak mercifully ended when they beat the Clippers in overtime. On Saturday afternoon, Ohio State's 24-game winning streak ended with a road loss to Wisconsin.

We wanted to know which streak was more improbable -- the Buckeyes' extended run of winning or the Cavaliers' extended stretch of futility. To calculate the odds of each streak, we used the Pythagorean win expectation formula, which gives the odds of winning a single game during the streak based on the points scored and allowed over that stretch (Cavaliers: 2455 scored, 2810 allowed; Buckeyes: 1868 scored, 1367 allowed).

Using the Pythagorean expectation formula with the appropriate exponents (14 for the NBA and 11.5 for college), the single-game win probability was 13.1% for the Cavaliers and 96.8% for the Buckeyes. This means that given the points scored and allowed during the streaks, we would expect the Cavaliers to win 3.4 games out of 26 and the Buckeyes to win 23.2 games during a 24-game stretch. Keep in mind that these figures do not adjust for blowout wins and losses (most notable was the Cavaliers' 55-point loss to the Lakers). If we capped margins of victory at 15 or even 20 points in our calculations, the Cavaliers would have had a higher win expectation and the Buckeyes a lower one.

Given this information, the chance that the Cavaliers would lose all 26 games was a slim 2.6% (.869^26). The Cavaliers did seem to get unlucky, as evidenced by the fact that they were actually ahead in five of the six games before they broke the streak. The Buckeyes' streak was not nearly as remarkable given their point differential: their odds of winning 24 games in a row were 46.3% (.968^24).
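The calculation behind those figures is short enough to sketch in full. One caveat: with an exponent of exactly 11.5, the Buckeye win probability comes out a bit above the article's 96.8%, so the original numbers likely used a slightly different exponent or unrounded inputs.

```python
def pythagorean(points_for, points_against, exponent):
    """Pythagorean expectation: single-game win probability."""
    return points_for ** exponent / (points_for ** exponent + points_against ** exponent)

p_cavs = pythagorean(2455, 2810, 14)     # NBA exponent
p_bucks = pythagorean(1868, 1367, 11.5)  # college exponent

p_cavs_streak = (1 - p_cavs) ** 26       # chance of losing 26 straight
p_bucks_streak = p_bucks ** 24           # chance of winning 24 straight

print(f"Cavs: win prob {p_cavs:.3f}, 26-loss streak {p_cavs_streak:.3f}")
print(f"Buckeyes: win prob {p_bucks:.3f}, 24-win streak {p_bucks_streak:.3f}")
```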

For our next test, we wanted to see how the Cavaliers and Buckeyes would fare if they played as poorly/well for an entire season as they did during their streaks.

Assuming the Cavs had the same chance of winning each game, we simulated 100,000 seasons to determine the team's longest losing streak within a season. The results, broken down in the chart above, indicate a franchise bound to endure prolonged streaks of futility. On average, the longest losing streak reached nearly twenty games -- only four teams in NBA history have suffered an in-season losing streak that long. The Cavs, however, relied on a particularly poor string of luck to break the all-time NBA record for consecutive losses. According to the simulations, the team had just an 18.5% chance of losing at least twenty-six games in a row.
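The simulation is easy to reproduce. This is our own sketch, not the original code: each season is 82 independent games at the Cavaliers' 13.1% single-game win probability, and we track the longest run of losses (fewer trials than the article's 100,000, for speed).

```python
import random

random.seed(7)

def longest_losing_streak(p_win, games=82):
    """Longest run of consecutive losses in one simulated season."""
    longest = current = 0
    for _ in range(games):
        if random.random() < p_win:  # a win resets the run
            current = 0
        else:
            current += 1
            longest = max(longest, current)
    return longest

trials = 20_000
streaks = [longest_losing_streak(0.131) for _ in range(trials)]

avg = sum(streaks) / trials
p_26 = sum(s >= 26 for s in streaks) / trials
print(f"average longest losing streak: {avg:.1f}")
print(f"chance of a 26+ game losing streak: {p_26:.3f}")
```

With these parameters, the output lands close to the article's "nearly twenty games" and 18.5% figures.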

Ohio State, on the other hand, embarked on a dominant run to be expected given its outsized point differential. By basic probability alone, we can calculate the chance of a perfect regular season, with thirty-one games played per this year's schedule, as .968^31 = 37.0%. Simulating 100,000 regular seasons yields a clearer distribution of win streaks and, as a testament to the team's abilities, Ohio State had a 56.6% chance of winning at least twenty-four games in a row. Interestingly enough, the longest winning streak averaged 24.3 games -- in that sense, Ohio State's streak reflects the ordinary accomplishments of an extraordinary team.

So all that leaves is the question of who would win if the Buckeyes played the Cavaliers.

For more analysis like this, check out the Harvard Sports Analysis Collective's site at harvardsportsanalysis.wordpress.com.


How is it possible that one person rises so far above the rest of the field? It seems that Francis' success on the horseshoe lane (pitch? field? alley? court?) is unrivaled in other sports. In most other games, there are certainly players whose play is far better than average, but the impressive thing about Francis is that there is not one person even close to his play. He seems to contradict our old friend (and Nassim Taleb's enemy), the normal distribution.

One theory for how Francis can be so far ahead of everybody else is that there are simply not many horseshoes players. Imagine a world in which we select 100 random people to play baseball and nobody else. That is probably a large enough number that their batting averages would look like a normal distribution. However, imagine if one of those players, by some stroke of luck, happened to be Albert Pujols, who we know is a very skilled player when compared to a universe of millions of baseball players (i.e. today's baseball environment). Pujols would not only be the best of the group, but he would be the best by a huge margin, much like Francis is in horseshoes. When the universe of players is smaller, the gap between the players at the top end is bound to be larger. Furthermore, if we actually have a one-in-500-million type of player in a sport with few participants, we are going to have a huge gap.
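That intuition about small talent pools can be checked with a quick simulation. This sketch is our own illustration, not from the original post: draw "skill" from a normal distribution and compare the average gap between the best and second-best player in small versus large pools.

```python
import random

random.seed(42)

def top_gap(pool_size):
    """Gap between the best and second-best draw from a standard normal."""
    best = second = float("-inf")
    for _ in range(pool_size):
        x = random.gauss(0, 1)
        if x > best:
            best, second = x, best
        elif x > second:
            second = x
    return best - second

trials = 500
gap_small = sum(top_gap(100) for _ in range(trials)) / trials
gap_large = sum(top_gap(10_000) for _ in range(trials)) / trials

print(f"avg top-two gap, pool of 100:    {gap_small:.3f}")
print(f"avg top-two gap, pool of 10,000: {gap_large:.3f}")
```

The small pool consistently shows a larger gap at the top, matching the argument above.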

Another player with similarly outlandish statistics is cricketer Donald Bradman. His batting average fell 4.4 standard deviations above the mean (approximately 1 in 185,000), eclipsing the outlier-ness of Pele, Ty Cobb, Michael Jordan, and others. It would be interesting to see where Francis falls in this pantheon of distinguished athletes. As for Bradman's performance in comparison to other elite players, he appears to have an even greater edge than Francis, with a test career batting average of 99.94 compared to an average of 60.97 for the next best player.
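The "1 in 185,000" figure is just the upper tail of a normal distribution at 4.4 standard deviations, which we can verify with the standard library:

```python
import math

def normal_tail(z):
    """P(Z > z) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

p = normal_tail(4.4)
print(f"P(Z > 4.4) = {p:.2e}  (about 1 in {1 / p:,.0f})")
```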

Maybe we should all head to Cedar Rapids, Iowa for the Horseshoe World Championships so we can tell our grandkids that we saw Jim Francis, the greatest athlete ever...or at least Jim Francis, a great athlete in a small sport. Is he a one in a million talent in a sport of 100,000 or is he perhaps a one in a billion talent in a sport of 100,000?


While the "maximum" salary varies depending on contract length and other factors, the important thing to know is that there is a maximum amount any team can offer a single player. For all teams besides the Cavaliers, that number was something like $96 million over five years. The system promotes players staying with their current teams: the Cavaliers were allowed to offer about $30 million more over a six-year contract. Is the concept of a maximum salary good for basketball?

It is important to define what we mean by good for basketball. For the moment, let's just consider whether it is good for competitive balance. There are some compelling reasons why perfect competitive balance is not good for basketball (people like to see dominant teams, big cities should win more frequently, etc).

A look at the top earners in the NBA shows a wide range of talents contained within a small range of salaries. Many of the players on the top 30 list earned the max (which depends on experience and when they signed the deal). Essentially, a max player like LeBron offers huge excess value (worth over actual earnings) to a team, while players who barely merit the maximum (Michael Redd and Rashard Lewis) offer little (if any) excess. The maximum salary system basically gives teams that have the top players extra salary cap space.

To put some actual numbers to this example, first we need to decide the value of a win. Using a very simple methodology (dividing total league payroll by total wins in the league), David Berri determines that the value of a win in 2010 was about $1.71 million. Using Berri's Wins Produced stat averaged over the past three seasons, Arturo Galletti shows that LeBron is worth $41 million, Bosh is worth $17 million, and Wade is worth $25 million (low due to missing most of the 2008 season)...yet they will all make nearly the same salary ($16 million or so), with Wade possibly getting more because he is re-signing with the Heat. Even those who disagree with Berri's method or the exact dollar figures know that LeBron is much more valuable than many players receiving maximum money. Take a look at Joe Johnson's huge payday in Atlanta.
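Berri's value-of-a-win figure and the excess-value idea reduce to simple arithmetic. The total payroll below is our own approximation, chosen because it reproduces the ~$1.71 million-per-win figure; the player dollar values are the ones quoted above.

```python
# Assumed input: a total league payroll of ~$2.1B (our approximation,
# picked to reproduce Berri's ~$1.71M-per-win figure).
total_payroll = 2.1e9
total_wins = 30 * 41                      # 30 teams, 82-game schedule
win_value = total_payroll / total_wins    # dollars per marginal win

def excess_value(production_dollars, salary):
    """Worth over actual earnings, per the discussion above."""
    return production_dollars - salary

# LeBron: ~$41M of production on a ~$16M max salary (figures from the post)
lebron_excess = excess_value(41e6, 16e6)
print(f"value of a win: ${win_value / 1e6:.2f}M")
print(f"LeBron's excess value: ${lebron_excess / 1e6:.0f}M")
```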

Let's try a thought experiment. Imagine that we still have a salary cap, but no maximum salary (similar to the NFL). Would LeBron, Bosh, and Wade be so willing to take A LOT less money to play together? True, they may sacrifice a couple million per season the way things stand now, but let's say teams could offer as much of the cap as they wanted. Would LeBron pass up $45 million per year just to play with his buddies? How much does he really value winning/friendship/living in Tony Montana's old house?

The next NBA collective bargaining agreement promises to bring a far different salary structure. In the past six months, I have spoken with and heard from people representing both the NBPA and the NBA, and neither side has mentioned eliminating the maximum salary. In fact, most people seem to feel that the maximum will actually decrease. Getting rid of the maximum salary is something the NBA should seriously consider if it values competitive balance.


Looking at the winners and losers of past Super Bowls, it appears that both winners and losers make the playoffs in the following season at a similar rate.

Two more Super Bowl winners have returned to the playoffs than losers, but the difference is not statistically significant.

Furthermore, when we consider how the teams perform in the regular season compared to their previous season, we actually see that the losers more frequently improve.

Of course, part of the reason for this is that Super Bowl winners have better regular season records on average. There is a lot of reversion to the mean for both winners and losers.

Looking at the years 1970-2008, we see that the Super Bowl winners have had stronger regular season records than the losers and the gap is of equal magnitude in the next season.

So both teams win 2.1 fewer games on average; however, the winners are 0.9 wins better to start. It appears that winning or losing the Super Bowl does not predict next year's success once we control for wins in the previous year.

A slightly more rigorous analysis incorporating multiple regression provides similar results. Once we control for previous-year winning percentage (or, better yet, scoring ratio), winning or losing the Super Bowl has no significant predictive power for the next regular season's winning percentage (.03, p=.31 for the winner dummy; .01, p=.69 for the loser dummy; i.e. not significant).
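To make the regression setup concrete, here is a sketch on entirely synthetic data (not the article's dataset): next-season winning percentage is generated from prior scoring ratio alone, and the fitted Super Bowl dummies come out near zero, mirroring the null result above.

```python
import random

random.seed(0)

def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y."""
    k = len(X[0])
    xtx = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    for col in range(k):  # Gaussian elimination with partial pivoting
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    beta = [0.0] * k
    for r in reversed(range(k)):
        beta[r] = (xty[r] - sum(xtx[r][c] * beta[c] for c in range(r + 1, k))) / xtx[r][r]
    return beta

# Synthetic team-seasons: next-year win% is driven by prior scoring ratio
# alone, even though the "Super Bowl" teams have better ratios on average.
rows, y = [], []
for i in range(300):
    winner = 1.0 if i < 20 else 0.0
    loser = 1.0 if 20 <= i < 40 else 0.0
    if winner:
        ratio = random.gauss(1.15, 0.05)
    elif loser:
        ratio = random.gauss(1.10, 0.05)
    else:
        ratio = random.gauss(1.0, 0.08)
    rows.append([1.0, ratio, winner, loser])
    y.append(0.5 + 0.8 * (ratio - 1.0) + random.gauss(0, 0.02))

beta = ols(rows, y)
print(f"ratio coef: {beta[1]:.3f}, winner dummy: {beta[2]:.3f}, loser dummy: {beta[3]:.3f}")
```

Once scoring ratio is in the model, the dummy coefficients are statistical noise, which is exactly the pattern the real regression found.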

However, my straw man may have a bit of a point. Since 1993, eight of 17 losers have made the playoffs, while 12 of 17 winners have returned to the playoffs. Additionally, 14 of the losers got worse and only one improved; 11 winners won fewer games after winning the Super Bowl and three improved.

Still, the Colts should not worry about a Raiders-like tailspin. There are many notable stories of post-Super Bowl-loss struggles, but losing the Super Bowl has not negatively impacted a team's chance of success in the subsequent regular season in a statistically significant way (at the 5% level...or even 10%). As for the playoffs...we'll look at that next time.



Wall Street firms such as AIG, which received $90 billion in government funds, and sports teams such as the Dallas Cowboys, who received roughly $325 million to build the monument to excess that is Cowboys Stadium, have more in common than being run by white males. Employees in both industries reap huge benefits due to government assistance.

Recently, Goldman Sachs has come under fire for paying lucrative year-end bonuses to its employees. It plans to pay $16.7 billion, an average of $700,000 per employee. Of course, this comes not many months after the US government, by way of AIG, paid Goldman $12.9 billion. Predictably, there has been much populist outrage, and Goldman has taken steps to quell public anger and (far more worrisome to them) government regulation. Goldman should not be expected to spurn the government funds, but it seems quite inappropriate for the government to clean up someone else's (in this case, AIG's) mess. The government is essentially funding those high salaries. Why is a company that receives billions in government support paying its top employees far more than the usual rate for labor in other industries?

In the sports world, let us consider the example of the Cowboys. This season, they opened what may be the most impressive stadium in history. To pay for the $1.2 billion palace, they received $325 million in government funds. Last season, the Cowboys spent $146 million on players. Why is a company that receives hundreds of millions in government support paying its top employees far more than the usual rate for labor in other industries?

The Cowboys are not the only team to receive government funding for their stadium. The Yankees and Mets each opened new stadiums this year and received plenty of government assistance; through some creative bond financing, the Yankees will save $787 million and the Mets $513 million. The New York teams ranked first and second in total payroll last season. The stadium savings and the high spending on players are not necessarily directly connected, but it is certain that the Cowboys, Yankees, and Mets were not exactly in dire need of government money.

It's not just the high spending clubs that receive government subsidies for their stadiums. In an article in the

Certainly, there are many advantages to a city having a professional sports team, some of which are not easily measurable. How do you quantify the value of a winning team to a region? I know surly Bostonians are more pleasant after a Red Sox victory. Siegfried and Zimbalist do find that new stadiums provide an economic boost to the area around the stadium, but it generally comes at the expense of other forms of entertainment and nearby neighborhoods, so the funds spent do not increase the tax base.

Sports salaries are at ludicrous levels because the entire system is predicated on a large portion of team costs being covered by the government. Why do we as fans accept this transfer of taxpayer funds to players (with the owners taking a little for themselves along the way)? In the UK, the government has taken action to limit bankers' pay by taxing their bonuses at a very high rate. I am not advocating that for athletes (or bankers). However, to say that athletes are deserving of high pay because, "that's what the market supports," is a little misleading. The market for pro athletes, like most markets, is heavily impacted by government activity. As long as the government is funding some stadiums, even those teams that do pay for their own stadiums (such as the San Francisco Giants and New England Patriots) are forced to pay more for players.

In both sports and finance, salaries are artificially high due to government support. Wall Street may not get large bailouts every year, but it seems likely that salaries are higher due to repeated government actions that signal to the banks that they are unlikely to fail. Neither the teams nor the banks are really at fault; we cannot blame them for asking. If someone offered me billions, millions, or even tens of dollars, I would also accept. It seems ridiculous that in both industries, government funds subsidize salaries that are far higher than necessary to retain the top talent.

The de facto Big East Championship between Pittsburgh and Cincinnati provides us here at HSAC another chance to analyze some interesting late game coaching strategy. Today, we will take a look at Cincinnati's Madden-like decision to (seemingly) lie down on defense at the end of Pitt's penultimate drive.

Cincinnati trailed most of the game, down 31-10 in the first half. With a stout second half defense and the awakening of their offense, the Bearcats tied the score at 38 on a rushing touchdown and two-point conversion in the fourth quarter. Pitt took the ball with 5:46 at their own 33. Using a combination of clock-eating runs, a key pass, and a costly Cincinnati personal foul, Pitt brought the ball to the Cincinnati 29 with 2:44. Cincinnati used their first timeout to stop the clock on 3rd and 9. The Panthers picked up a key first down to keep their drive going downfield. On 1st and 10 at the Cincinnati 13, the Panthers ran the ball to the Cincinnati 5.

On 2nd and 2, the Bearcats defense seemingly lay down and let Dion Lewis run into the end zone, practically untouched. With the touchdown and botched extra point, the Panthers took a 44-38 lead with 1:36 remaining and the Bearcats poised to receive the ball with two timeouts. Assuming they meant to allow Lewis to get the TD, did Cincinnati make the right move? Should Pitt have counteracted it by kneeling at the one like Brian Westbrook famously did in 2007?

By allowing Pitt to score, Cincinnati maintained both their timeouts and 1:36 on the clock. If Lewis had kneeled at the one yard line and then the Panthers took two knees, they would have forced Cincinnati to burn both their remaining timeouts and would have reached 4th down at the one yard line with roughly 0:50 remaining (1:36 on first down, 1:33 on second down, 1:30 on third down, and then a 40 second run-off). If they opted for the field goal and made the chip shot—not necessarily a given considering they missed an extra point—that would have given Cincinnati the ball with under 50 seconds, down by three with no timeouts. So which situation is better?
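The clock math in the kneel-down scenario can be laid out explicitly; a small sketch, with the play and runoff durations as approximations:

```python
def kneel_out(seconds_left, defense_timeouts, play_time=3, runoff=40):
    """Clock after three kneel-downs: the defense's timeouts stop the clock,
    after which each snap burns the play plus a full play-clock runoff."""
    snap_times = []
    for down in (1, 2, 3):
        snap_times.append((down, seconds_left))
        seconds_left -= play_time       # the kneel itself
        if defense_timeouts > 0:
            defense_timeouts -= 1       # clock stops immediately
        else:
            seconds_left -= runoff      # 40-second runoff before the next snap
    return snap_times, max(seconds_left, 0), defense_timeouts

snaps, fourth_down_clock, timeouts_left = kneel_out(96, 2)
print(snaps)              # [(1, 96), (2, 93), (3, 90)]
print(fourth_down_clock)  # 47 seconds, i.e. roughly 0:50 as described above
```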

Unfortunately, the data for college football is not as readily available as it is for the NFL, but using Brian Burke's terrific site, Advanced NFL Stats, we can come up with some numbers to start our discussion. Assuming they start on their own 20, the NFL numbers indicate that Cincinnati would have a .07 chance of winning if down by three with 50 seconds remaining; down by six with 1:36 remaining, the chance of winning would be .19. However, this is a little too nice to the Madden strategy since they should have assumed the extra point would be good. If they were down by 7 with 1:36 remaining, the chance of victory drops to .06. This is curious since it should be roughly 50% of the chance of victory if down by six (both require a touchdown; one gets a win, the other goes to overtime). These projections are for the NFL and timeouts are not a factor. Adding the timeout advantage to the Madden strategy would make it a better bet. Yes, touchdowns are tough to come by, but as Les Miles discovered a couple weeks ago, field goals are tough without timeouts.

I would take my chances with the Madden strategy (or kneel if I were Pitt). 1:36 and two timeouts with an explosive offense like Cincinnati's seems better than just 50 seconds and no timeouts, even though the latter situation requires just a field goal.

One other complicating factor is the weather. The snow was swirling and both teams had already missed extra points (Pitt with a botched snap/hold and Cincinnati with a kick off an upright).

An advantage to Pitt's play—calling it a strategy may be a bit strong since it may have been instinct alone that led Lewis into the end zone—is that it (should have) made a regulation loss very unlikely. If they had run the clock and taken a field goal, they could have lost to a Cincinnati touchdown. Ultimately, missing the extra point meant that this very situation did happen, however unlikely that occurrence was. Since coaches are generally risk averse, it is likely most would have made exactly the choice that Lewis did by running for the touchdown.

Few players have the wherewithal and self-control to make the decision not to score an easy touchdown. In this case, the decision would not have been unequivocally correct. In the case of Maurice Jones-Drew's kneel-down this year, I saw an ESPN pundit claim it was a bad choice because (to paraphrase): you should score whenever possible and let the defense handle the rest. Brilliant.

If there is anything we stat people have learned from the Belichick 4th down debate, it is that unsuccessful unconventional decisions are treated with extreme skepticism, even if the numbers support the decision. Pitt coach Dave Wannstedt and running back Dion Lewis should feel lucky that they lost in the conventional manner rather than an unorthodox way. If Pitt had lost after a kneel-down, the press would have skewered Pitt's mustachioed maestro.

For Cincinnati coach Brian Kelly, it was a good and gutsy decision…the type of decision they enjoy in South Bend. There's also a very large chance I am giving Kelly too much credit and the easy touchdown was just the result of a poor defense that allowed 36 points to a 3-9 Illinois team last week.


Recently, the winning percentage for teams with a 300-yard passer is on the rise. Mike Martz reasoned that teams are now using the pass to get ahead and not just passing desperately to catch up when trailing: "Teams are throwing because they want to now, not because they have to," Martz told The Journal.

However, looking at the outcome for 300-yard passers does not tell the whole story. We would expect those throwing for big yardage to have a good chance of winning—this makes the sub-.500 winning percentage in some years quite surprising. To really confirm Martz's story, we need to consider how teams fare when a quarterback attempts a certain number of passes. If teams really are passing more by choice, we would expect to see improved outcomes not just when a quarterback passes for over 300 yards, but also when he attempts a certain number of passes. The incidence of 40 attempts is relatively similar to the frequency of 300-yard passers, so it seems like a convenient starting point.

Using Pro-Football-Reference's fantastic new game index tool, I was able to confirm Smith's numbers and also check to see whether teams passing 40+ times per game have fared better in recent years. I also looked to see whether the incidence of 40+ attempts/300+ yards has increased. One note is that all numbers here are based on individual stats, so teams that reach 300+ yards or 40+ attempts with multiple quarterbacks do not make the list. The statistics include games played this Thanksgiving, but not this past weekend.

For the visual people out there:

So while there is a noticeable upward trend of 300+ yard passers winning, the same cannot be said of quarterbacks who attempt 40+ passes.

Unsurprisingly, there is a huge correlation (r^2=.49) between the win percentage for 300+ yard passers and 40+ attempt passers. There is a lot of overlap between these groups: 389 players appear on both lists (i.e. 46.5% of those who attempt 40+ passes also throw for 300+ yards, and 54.7% of those who throw for 300+ yards also attempt 40+ passes). Of quarterbacks who throw for 300+ yards and attempt 40+ passes, the win percentage is .382 (much higher than the .190 rate for those who throw 40+ passes for less than 300 yards). Those who throw for 300+ yards and attempt fewer than 40 passes win at a .764 rate. Throwing for lots of yards = good. Throwing lots of passes = bad. Doing both = not great.
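As a quick sanity check on those overlap percentages, the implied sizes of the two lists follow directly from the 389-game overlap:

```python
overlap = 389  # quarterback games on both the 40+ attempt and 300+ yard lists
attempts_list = overlap / 0.465  # 46.5% of the 40+ attempt list overlaps
yards_list = overlap / 0.547     # 54.7% of the 300+ yard list overlaps
print(f"implied 40+ attempt games: {attempts_list:.0f}")
print(f"implied 300+ yard games: {yards_list:.0f}")
```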

The win percentage for quarterbacks who throw for 300+ yards and attempt 40+ passes does appear to be on a bit of an upward trend lately; however, it is nothing historically surprising.

This year does not seem to represent any major shift in the way the game is played. Yes, throwing for 300 yards is good. It is very advantageous to do so with fewer than 40 attempts. Much of the success of this year's 300+ yard teams is driven by the fact that more passers than usual are gaining 300+ yards without attempting over 40 passes.

This season alone there have been 35 quarterback games with a 300-yard passer doing so on under 40 attempts (10.7% of all quarterback games). In previous seasons, this happened at a frequency of 4.7%-7.2%. One thing to keep in mind is that the numbers may be artificially high right now since later in the season, as the weather gets colder, passing games may be grounded.


Swoopo.com is a so-called "entertainment shopping" site. The premise is relatively simple and somewhat diabolical. Unlike on normal auction sites such as eBay—which operates as a modified second price auction—Swoopo.com bidders pay to place each bid in addition to paying the final selling price. Though the items sell for far less than retail price, the aggregate of money spent on bids is huge. Some auctions go in 1 cent increments, so the site rakes in 61 times the final sale price (60 cents for each cent the bid goes up, plus the actual price).
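The 61x figure follows from the bid mechanics: in a 1-cent-increment auction, a final price of P cents took P paid bids at 60 cents apiece, plus the sale price itself. A toy calculation:

```python
def swoopo_take_cents(final_price_cents, bid_fee_cents=60, increment_cents=1):
    """Total revenue: one paid bid per price increment, plus the sale price."""
    bids = final_price_cents // increment_cents
    return bids * bid_fee_cents + final_price_cents

total = swoopo_take_cents(2000)   # an item "selling" for $20.00
print(total / 100, total / 2000)  # dollars taken, and the 61x multiple
```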

Richard Thaler wrote my favorite sports economics paper, but this issue has an even greater connection to the world of sports. In the article, Thaler talks about the psychological principle of commitment, one of Robert Cialdini’s weapons of influence. Basically, once we start playing, we hate to lose.

While reading the article, I suddenly realized that little separates me as a sports fan from the Swoopo.com bidder who just spent $1000 on a $200 camera. As a fan of the Cleveland teams, I am a prime example of a person with a commitment problem—too much commitment. With each game I went to as a child, each hat I bought, each bag of peanuts I devoured, I was pulling myself deeper and deeper into the spiral of commitment. Now, I am stuck rooting for teams that have not won a championship since 1964.

This phenomenon does not just present itself in the world of sports fandom. Sports teams fall prey to the commitment principle. Frequently, they fail to understand the idea of sunk costs. Once they have committed, they are reluctant to drop a player. Take the Houston Texans, who were faced with a decision about quarterback David Carr after his fourth season. The team had already sunk a first overall pick and $21.75 million into the former Fresno State star, and it was clear he was not a very good player. To retain Carr's services for three more seasons, his contract called for an $8 million option bonus (in addition to his salary during those seasons). If they did not pay the option bonus, they would have to release Carr or try to get him to renegotiate his deal. Already deeply invested in Carr, the Texans exercised the option and ended up spending $13.25 million to keep David Carr around for a fifth season. They dropped him the next year.

If I were starting over, I would probably choose to root for more successful teams, but this is the hand my hometown has dealt me. Will it be that much sweeter if/when a Cleveland team finally wins? I can only commit to hoping that is true.


This summer, much was made of the end of Moneyball. The teams with the five highest win totals—the Yankees, Angels, Dodgers, Red Sox, and Phillies—rank 1st, 6th, 9th, 4th, and 7th in opening day payroll. Most of those teams took on even more salary during the season, so their final payroll ranks may be even higher.

After a few years of increasing parity in which we saw the A's flourish, the Twins constantly competitive, the Marlins win a World Series, and even the Rays win the American League, disparity appeared to be on the rise this season. Harkening back to the late 1990s and the environment that spawned the Blue Ribbon Panel, this season seemed to be an extreme case of the haves versus the have-nots. The small market clubs who had recently feasted on undervalued players appear to have lost their competitive advantage since the big spenders started to buy into the same philosophies—in many cases hiring Ivy League stat geeks like ourselves. Michael Lewis' concept of *Moneyball* was not about On-Base Percentage; it was about finding undervalued assets. Was this the year the big market clubs finally started valuing assets correctly? Did money spent play a bigger role this season than in recent years? Intuition (and the Yankees' World Series victory) says yes, but let's take a look at the numbers.

**Methodology**

Looking at opening day payroll and wins, we test to see in which seasons spending was the best predictor of wins. Since a team of replacement level players (i.e. talent available for the league minimum) could win 49 games, we will consider the number of games a team wins above 49 (marginal wins). Also, we will consider only the money spent above the minimum salary threshold (marginal dollars). This methodology, popularized by Baseball Prospectus, is very common. Payroll data is courtesy of USA Today's salary database.

**Results**

Running a simple regression trying to predict marginal wins as a function of marginal dollars spent, we will learn two things:

- Was marginal spending a significant (i.e. non-random) predictor of marginal wins? This is the p-value, which represents the chance that the relationship is random. If the p-value is .50, that means there is a 50% chance the relationship is random (spending does not impact wins). **A lower p-value means spending was more important.**
- What percentage of the variation in marginal wins can we predict if we know spending? This is called the r^2. A high r^2 means that we can predict more of the variation in wins based on spending. If r^2 were .05, that would mean we could predict 5% of the variance in wins if we knew spending. **For r^2, higher means money spent was a greater predictor of wins.**
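A minimal version of that regression, run here on made-up payroll/win pairs rather than the real data, shows how the two quantities are read off the fit (the p-value uses a normal approximation to the t-test, which is fine at this sample size):

```python
import math
import random

random.seed(1)

# Made-up league-season data: marginal dollars (in $M) and marginal wins.
dollars = [random.uniform(0, 120) for _ in range(30)]
wins = [0.3 * d + random.gauss(0, 10) for d in dollars]

n = len(dollars)
mx, my = sum(dollars) / n, sum(wins) / n
sxx = sum((x - mx) ** 2 for x in dollars)
sxy = sum((x - mx) * (y - my) for x, y in zip(dollars, wins))
syy = sum((y - my) ** 2 for y in wins)

r2 = sxy ** 2 / (sxx * syy)             # share of variance in wins explained
t = math.sqrt(r2 * (n - 2) / (1 - r2))  # t-statistic for the slope
p = math.erfc(t / math.sqrt(2))         # two-sided p, normal approximation
print(f"r^2 = {r2:.2f}, p = {p:.4f}")
```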

First, let's run a quick regression looking at all years since 1989. To account for payroll inflation, I have brought everything up to present value using a discount rate of 8% (the average rate of salary inflation in recent seasons).
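The present-value adjustment is just compounding each season's payroll forward at the assumed 8% rate (taking 2009, the season under discussion, as the base year):

```python
def to_present_value(payroll, season, base_year=2009, rate=0.08):
    """Compound a past payroll forward to base-year dollars at a flat rate."""
    return payroll * (1 + rate) ** (base_year - season)

# A $20M payroll in 1989 grows by a factor of 1.08^20 (about 4.66x)
adjusted = to_present_value(20e6, 1989)
print(f"${adjusted / 1e6:.1f}M")
```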

Unsurprisingly, over the past 20 years, opening day marginal salary has been a significant predictor of wins. This estimate is not very precise since the 8% discount rate is an approximation and just allows us to consider all years on the same graph. We will get a clearer picture looking at each season individually.

As you can see, in the early 1990s, salary did not predict wins in many of the years (high p-values). However, considering the full seasons after the strike, we see that salary has almost always been a good predictor (with 2008 being the only year p>.05).

This is where things get interesting. In the years leading up to the strike (1990-1993), salary was a poor predictor of marginal wins. Immediately after the strike, there was a series of years in which marginal salary was a good predictor of marginal wins. This is when parity hit its nadir and the Blue Ribbon Panel was ordered. In 2000 and 2001 (both years in which the Yankees made the World Series), money was a very poor predictor of wins. Since then, the r^2 has hovered in the .17-.29 range, with 2008 (the year of the Rays) being a notable outlier at .107.

Considering the past 8 seasons, this year was not particularly notable in terms of payroll predicting success. However, the big market clubs are spending smarter, particularly in the draft and on Latin American teenagers, which is not reflected in opening day payrolls. The margin for error for the little guys is razor thin and the window of opportunity can be very short (see: the Indians' downfall after 2007 and the Rays' slide after 2008). Still, the data suggest that this year may not have called for the alarmist musings of some members of the media. Competitive balance is probably not what it should be, but this year was hardly different from most other recent seasons.

Here are some plots from a few recent seasons:
