The Basics of Bracket Science: Baseline Pool Performance and “PASE”

Whenever I tell people about my work with statistics and the NCAA tournament, I get one of two reactions. Those who have little interest in college ball give me a patronizing smile and tell me some story about how their aunt won her pool by picking the cutest animal mascots. Then there are the clued-in hoops fans. They let me ramble on a bit before saying, “Oh, you’re like a bracketologist.”

Most of the time, I'll nod and say, "Yeah, sort of." But if they don't wander away, I'll tell them the difference between bracketology and bracket science. Bracketology is the sort of thing that Jerry Palm and Joe Lunardi do. Leading up to Selection Sunday, they project who's going to make the tournament, where they're going to be seeded and which region they'll play in. Their jobs involve prolonged analysis about who's punched a ticket, who's on the bubble and who's on the outside looking in.

Bracket Science is different. I don’t care so much about all that. I’m more interested in examining which teams are going to advance in the tourney once the bracket gets set on Selection Sunday. My job involves intensive analysis about who’s going to overachieve in the tourney, who’s going to fall short of expectations and who’s most likely to cut down the nets.

From a business point of view, my gig doesn't have the shelf life of bracketology. Speculation on who will be in the bracket goes on for months. Analysis of who will advance through the bracket has a big audience for about four days—from Selection Sunday to the time when you have to submit your pool sheet.

About 70% of my traffic comes in those four “bracket pondering” days. At some level, everyone wants to know the same thing: how can I build a winning bracket? Some people come right out and ask me what the answers are, as if I have them at the ready and am just holding out. Others are more appreciative of the predictive challenge before them and the limits of statistics. They like to think alongside me and use the numbers I offer to develop their own theories.

But my analysis gets pretty deep pretty quickly, especially if you're giving yourself just a few hours to absorb all the guidance I provide and relate it to the 68 teams that make the dance. And time is a precious commodity after Selection Sunday. So I thought it might be a good idea to offer newcomers to this blog a few "Bracket Science 101" tips in advance of the dance. Let's get started building a better bracket…

What is baseline bracket prediction performance?

I’m here to tell you right now that nobody has a failsafe system for predicting the NCAA basketball tourney. Filling out your bracket is a complex exercise in assessing probabilities. You’re not going to build a perfect bracket. Quicken Loans and Warren Buffett are going to make all that marketing noise, grab all those email addresses…and never have to pay out on the billion dollar prize. There are nine quintillion different bracket combinations (two to the 63rd power or 9,223,372,036,854,775,808 to be exact). Sure, a lot of them—like a Final Four of all 16 seeds—are patently absurd—but even the number of plausible outcomes is astronomical.
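The combinatorics here are easy to verify for yourself. A quick sketch in Python (my choice of language for illustration, not anything the post prescribes):

```python
# Each of the 63 tourney games has two possible winners, so the number of
# distinct ways to fill out a complete bracket is 2 to the 63rd power.
total_brackets = 2 ** 63
print(total_brackets)  # 9223372036854775808 (roughly nine quintillion)
```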

Fortunately, your task is not to build a perfect bracket. It’s just to win your bracket pool. Most people play in pools of 25-100 people. These pools are typically won by someone who gets between 48 and 54 correct picks out of the 63 tourney games. That works out to about a 75-85 percent accuracy rate. If that seems lower than you expected, remember that inaccurate picks in early rounds reduce the number of later-round games in which you even have a chance to be right. So three out of four actually ain’t too bad.

If 48 out of 63 is the benchmark of winning bracket performance, what constitutes average tourney prediction success? Even the most clueless bracket pool player could adopt a strategy of advancing the higher seed in every matchup, so that should stand as the baseline for tourney prognostication. In this case, every top seed would reach the Final Four. To determine winners in the semis and finals, it's reasonable to advance the team with the higher victory margin, since statistics show that scoring margin is a solid tourney performance indicator and anyone could dig up the numbers.

I went back over the last 29 tourneys of the 64-team era and figured out how well the “higher seed/bigger margin” strategy would’ve done. Here’s the year-by-year breakdown:


The numbers show that you should be able to predict about two-thirds of the 63 tourney games correctly without even thinking about it. In fact, the baseline bracket picking strategy would've correctly identified nine of the 29 champions in the modern tourney era, including four of the last seven (Kentucky in 2012, North Carolina in 2009, Kansas in 2008 and Florida in 2007). Heck, if you had used the higher seed/bigger margin strategy in 2008, you would've correctly pegged the Final Four, called both the finalists—and picked the winner.

Of course, this kind of late-round accuracy is an anomaly. Before 2008, no tournament had ever seen all four top seeds reach the Final Four. And let's face it: picking your bracket solely by seeding isn't exactly a winning strategy. Using this approach would likely put you in contention in average-sized pools in only two of the 29 years—in 2007 (49 correct answers) and in 1993 (48 correct answers). If you were playing in a smaller pool, maybe you'd be among the leaders in 2008 and 2009 as well.

Look no further than last year's dance for proof that seeding alone won't get you very far in your pool. Gonzaga, of all teams, was the highest-margin one seed; the Zags bowed out to Wichita State in the second round. Moreover, the strategy's 37 correct picks out of 63 games were well below its average performance. Only 1990, 1999 and 2010 saw worse results.

There's no denying that seeding is the single best determinant of a team's tourney fate. It doesn't take a bracketmaster to figure out that the one, two, three and four seeds have accounted for 99 of the 116 Final Four teams and 27 of 29 champions in the 64-team era. The real question is: which factors beyond seeding help improve your ability to predict tourney outcomes above the baseline accuracy of 65 percent? Before we can answer that question, we need a way to reliably measure the impact of attributes on tourney performance. That's where PASE comes in.

PASE: Performance Against Seed Expectations

If March Madness always played out to chalky seed expectations, the best one seed would win six games, the second best top seed would win five and the other two would win four. Then, the two seeds would win three games (before losing to top seeds in the Elite Eight), three and four seeds would win two, and five through eight seeds would win one.
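Those chalk win totals can be tallied in a short sketch. This is just the paragraph above translated into Python; the dictionary values are the pure high-seed-dominance figures described there, nothing more:

```python
# Expected tourney wins per team at each seed position if every game
# went chalk (the higher seed always wins).
chalk_wins = {1: (6 + 5 + 4 + 4) / 4}   # champ, runner-up, two semifinal losers
chalk_wins[2] = 3                        # two seeds fall to ones in the Elite Eight
chalk_wins.update({3: 2, 4: 2})          # out in the Sweet 16
chalk_wins.update({s: 1 for s in range(5, 9)})    # out in the round of 32
chalk_wins.update({s: 0 for s in range(9, 17)})   # lose their openers

print(chalk_wins[1])  # 4.75, matching the average one-seed figure below
```

As a sanity check, four teams at each seed times these win totals adds up to exactly 63, the number of games in the bracket.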

Of course, the 29 tourneys of the modern era haven’t played out that way at all. Instead of the average one seed winning 4.75 games, as would be the case in perfect high-seed dominance, they’ve actually won 3.35 games per tourney. Here are the average win totals of a single team at each seed position:


It’s good to know these numbers as you make your bracket picks. If you think, for instance, that a second-seeded team is better than the average two seed, you may feel comfortable advancing them to the Elite Eight. But if you’re convinced they’re weaker than usual, it might make sense to have them bow out in the Sweet 16.

That's the most basic use for these "wins per team by seed" numbers. But they play a much more central role than that in my research. With these numbers, I've developed a metric that compares actual tourney performance to expected performance, and assesses the degree to which teams with certain attributes overachieve or underachieve.

This is the basic concept of “Performance Against Seed Expectations,” or PASE. Think of PASE as the equivalent of baseball’s WAR, Wins Above Replacement. Since we know that the average “replacement one seed” wins 3.35 games per tourney, we can say that a top seed like Louisville last year, which won six games, is 2.65 wins above expectations.

Using “wins per team by seed” values, you can tally the positive or negative differences between actual and expected wins for any attribute you want to study—whether it’s an individual coach, teams with more than five straight bids, teams with all-Americans, guard-oriented squads, etc. The total of these differences is then divided by the number of appearances to arrive at an average number of games the attribute either beat or fell short of expectations per tournament.

An example will make this clear. Let's look at VCU coach Shaka Smart. He's gone to the tourney three times. In 2011, he won four games as an 11 seed. The average 11 seed has won just .534 games per dance, so Smart was 3.466 games ahead of expectations that year. In 2012, Smart's twelfth-seeded squad won a single game in the tourney, .466 above expectations (the average 12 seed also wins .534 per dance). Last year, Smart's fifth-seeded Rams won another opener, then got beat by Michigan in the second round. Since five seeds average 1.293 games per tourney, Smart actually fell .293 below expectations. When you add 3.466 and .466, then subtract .293, you arrive at a total of 3.639 games above expectations. Divide that by three years and you get a PASE of +1.213. That's extremely high…and likely to go down as Smart makes more tourney appearances.
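Here's a minimal Python sketch of that calculation. The `pase` function and data layout are mine for illustration, and the per-seed averages are only the three quoted in the post, not the full historical table:

```python
# Average historical wins per team by seed, as quoted in the post.
AVG_WINS_BY_SEED = {5: 1.293, 11: 0.534, 12: 0.534}

def pase(appearances):
    """appearances: list of (seed, actual_wins) pairs, one per tourney trip.
    Returns average games above/below seed expectations per appearance."""
    diffs = [wins - AVG_WINS_BY_SEED[seed] for seed, wins in appearances]
    return sum(diffs) / len(appearances)

# Shaka Smart's three trips: 4 wins as an 11 seed, 1 as a 12, 1 as a 5.
smart = [(11, 4), (12, 1), (5, 1)]
print(round(pase(smart), 3))  # 1.213
```

The same function works for any attribute you want to study: pass it all the appearances by teams sharing that attribute and you get the attribute's PASE.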

Coaching is just one attribute we can analyze in this way. Say you want to know whether Big Six schools that win their conference tourney actually overachieve or underachieve in the national dance. You can get a precise measurement of that. Since 2002, when all the power conferences had tourneys, the 72 champions have beaten seed expectations at a +.188 PASE rate. That's a significant PASE. So it's worth keeping your eye on the teams that win the Big Six tourneys (okay…Big Seven with the American. Ugh.)

So here's what we learned in "Bracket Science 101":

  • Baseline bracket performance is about 41 correct picks in 63 games.
  • To win an average-sized pool, you need to get at least 48 picks right, an improvement in prediction accuracy of about 16 percent.
  • Every seed has an average number of wins by team per tourney.
  • With these numbers you can measure the degree to which any attribute contributes to over- or underachievement against seed expectations.
  • The PASE stat is a key metric for assessing the relative values of attributes.

5 Responses to The Basics of Bracket Science: Baseline Pool Performance and “PASE”

  1. Brian says:

    I've been spending some time on the Bracketmaster+ tool, and was wondering about the PASE values. Let's say I look up a coach's average PASE from 2010 to 2012. Are the seed expectations updated to include the 2013 tournament in the average? Or is PASE based on pre-tournament 2013 seed expectations? I figured the averages would not fluctuate much either way; just curious.

  2. Tommy says:

    Going back to last week’s post about power-conference overachievers having win streaks between 4-10 games, well, we know KU and Arizona will enter the tournament with no more than a 3-game winning streak, tops.

  3. ptiernan says:

    The PASE values don’t use the wins/seed totals of the years you searched. They still use the overall wins/seed of the last 29 years.

  4. Rick Bates says:

    The odds are not two to the 63rd power (9,223,372,036,854,775,808) of picking a perfect bracket!
    If you assume that the four #1 seeds win their first game in round one, which they have since the beginning of time (100%), then the odds are 2 to the 59th power.

    • ptiernan says:

      Well…I either call them theoretical odds or "possible combinations." Fact is, with 2 seeds winning their openers about 95% of the time, 3 seeds about 80%, and so on, the odds are a lot better than even 2 to the 59th power. But they're still gigantic.
