Washington Should Play Ball

This post was published on the now-closed HuffPost Contributor platform. Contributors control their own work and posted freely to our site.

Moneyball was a book, and later a movie, that celebrated a smart way to hire players when a baseball team has a tiny budget. It's all about data mining. (No, it does not involve the NSA, but it does require intelligence.) It asks a general manager to look at the fine print in a potential player's stats, scanning for on-base percentages rather than crowd-pleasing hits. Whoever gets on base the most, even with bunts and walks, is the one to hire.

A recent non-partisan magazine piece, co-authored by former economic advisers to George W. Bush and President Obama, proposed that the federal government start playing moneyball with taxpayer dollars. What Peter Orszag and John Bridgeland propose makes sense, up to a point.

The overview they provide is discouraging. Under both administrations, they saw a federal government basing its spending decisions on "good intentions, inertia, hunches, partisan politics and personal relationships." Programs clearly shown to produce little or no result -- or to make matters worse -- continue to get funding. The Program Assessment Rating Tool (PART), developed under Bush, identified lackluster programs, but efforts to discontinue or improve them got little response on Capitol Hill. PART was an oversight function developed and approved in partnership with Congress, yet Congress paid little attention to the intelligence it gathered and analyzed. As Orszag and Bridgeland write:

Since 1990, the federal government has put 11 large social programs, collectively costing taxpayers more than $10 billion a year, through randomized controlled trials. Ten out of the 11 ... showed "weak or no positive effects" on their participants.

And yet the statistics and information gathered by reviews such as this get little attention when it comes to renewing funding in Congress.

I wholeheartedly support some of the possible solutions offered by Orszag and Bridgeland:

  • As proposed by Results for America, a non-profit, we should use one percent of any program's funding to pay for studies of that program's actual results.
  • A "moneyball index" should be publicly accessible to offer a scorecard that rates every member of Congress on his or her support of programs proven not to work.
  • Tracking results is fundamental in the private sector. Successful businesses around the world make these evaluations for every single dollar invested.

A couple of caveats, however, are important. First, carefully understand the measures to be used. Asking the right question is both art and science, and you need a sense of how much credence to invest in the answers you get. I started in the research business of a marketing and communications company -- Young & Rubicam -- and I recall a wonderful story about the first head of Y&R's Research Department, George Gallup. With the help of W. Edwards Deming (who later became Japan's revered manufacturing guru -- essentially inventing the Total Quality Management movement), they created the first national probability survey in America.

The head of MGM in Hollywood, the legendary Louis B. Mayer, called Gallup with a vital issue. "Mr. Gallup," Mayer told him, "a publishing agent is negotiating with me on the film rights for a book. The publisher insists the book's readership is 10 to 20 times higher than the book's sales -- through pass-along copies. Can you tell me how many people have read this book?" "No problem," answered Gallup, and he promised Mayer an answer by the following morning.

That evening, a national study was conducted by telephone across the country. The following morning Gallup got the results: ninety-eight percent of America said they had read the book! Gallup knew the answer couldn't be right, so his people double-checked and made sure the operators didn't cheat. He called Mayer and said he needed 24 more hours. That night, Gallup put a different question to the projectable sample: "Do you intend to read the book in the next six months?" Back came the answer late that night: eighty-nine percent of America intended to read the book in the next six months! (By the way, the book was Gone With the Wind. Mayer bought the rights.)

The second caveat is also critical. There are worthy goals that must be pursued, and sometimes the problem with a program is the execution, not the concept. An example: Head Start has recently been categorized as ineffective. I remember that some years back this venerable program was judged vital to providing underprivileged kids with the nutrition needed to learn in school. You don't learn well if you're hungry or, worse, malnourished. So the issue here, unlike in Moneyball, is to understand why the program is no longer considered effective. Are we measuring the right outcomes? Is there a delivery issue? Are the kids getting the food they need every day? No analysis of results could sensibly conclude that children learn better when they're hungry.

Before we kill a program that passes every logical test, let's make sure that the program is executing the intended mission and that the measurements are properly designed to reflect the appropriate outcomes.

Putting it all together: to improve both the efficiency and effectiveness of our tax investments, we must use rigorous measurements and the wisdom to properly evaluate the validity of our mission. If programs fail, fix them or nix them. But do it with intelligence, thoughtfulness, and the courage to be decisive and responsible.
