Quality Curve conforming to average…but what’s with Florida?

It’s been about three weeks since I evaluated how the KenPom performance ratings of this year’s top teams compare to their historical counterparts. If you’ll recall, there’s a connection between the quality of the top teams and tourney unpredictability. Basically, the years when the elite squads were relatively more efficient correlated with chalky dances, while the years when they were less efficient aligned with upset-laden tourneys.

I compared the 2011 and 2007 dances, the maddest and sanest of the 64-team era. 2011 saw the worst top 20 teams in terms of KenPom Pythag efficiency since the data became available in 2004. Conversely, 2007 featured the highest quality top 20. And what happened? The 2011 dance tied a record for upsets (13). Meanwhile, 2007 broke records for predictability, with just three upsets.
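
For anyone new to the stat: Pythag is a Pythagorean-expectation rating built from a team’s adjusted offensive and defensive efficiencies (points scored and allowed per 100 possessions, adjusted for opponent strength). Here’s a minimal sketch of the calculation, assuming the commonly cited exponent of 11.5; Pomeroy has tweaked the exponent over the years, so treat this as the concept rather than his exact current formula:

```python
def pythag(adj_oe, adj_de, exponent=11.5):
    """Pythagorean expected winning percentage from adjusted efficiencies.

    adj_oe / adj_de are adjusted points scored / allowed per 100 possessions.
    The 11.5 exponent is an assumption; Pomeroy has used different values.
    """
    return adj_oe ** exponent / (adj_oe ** exponent + adj_de ** exponent)

# Illustrative (made-up) numbers: a team scoring 118 and allowing 89
# points per 100 possessions rates out around 0.96.
print(round(pythag(118.0, 89.0), 4))
```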

When I did the line chart for this year’s top 20 on January 11, we saw that teams ranked 1-9 were markedly better than average, teams ranked 10-15 were about average, and teams ranked 16-20 were much worse than average. From this, I concluded that we might be in for a top-heavy tourney, with one and two seeds dominating the brackets.
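
If you want to draw your own version of the curve, the idea is simple: plot the Pythag ratings of the teams ranked 1 through 20 in the current field against the historical average, best and worst fields. Below is a rough sketch; the values are placeholders rather than real ratings, and the charts on this site weren’t necessarily built this way:

```python
import matplotlib.pyplot as plt

# Placeholder data: these Pythag values are invented for illustration.
# Real curves would use kenpom.com ratings for the teams ranked 1-20.
ranks = list(range(1, 21))
curves = {
    "Current field": [0.975 - 0.006 * r for r in ranks],
    "Average field": [0.970 - 0.007 * r for r in ranks],
    "Best field":    [0.980 - 0.005 * r for r in ranks],
    "Worst field":   [0.960 - 0.009 * r for r in ranks],
}

for label, values in curves.items():
    plt.plot(ranks, values, label=label)

plt.xlabel("KenPom rank (1-20)")
plt.ylabel("Pythag rating")
plt.title("Quality Curve: top-20 Pythag by rank")
plt.legend()
plt.show()
```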

The curve has undergone a significant change since then. Take a look at how the red line compares to the grey (average), orange (best field) and blue (worst field) lines:

[Chart: 2013 Quality Curve vs. average, best and worst fields, January 28, 2013]

One thing jumps out on this chart: Florida’s efficiency numbers are head and shoulders above the rest of the top 20. How good are the Gators’ KenPom stats? Chew on this: no team in the last nine years has gone into the tournament with a Pythag as high as Florida’s. Not Kentucky last year. Not the Gator squads that won back-to-back championships in 2006 and 2007.

The three other teams that would be slotted into the top seed—Michigan, Indiana and Louisville—are slightly better than their historical counterparts. But the gulf isn’t as big as it was three weeks ago. The same goes for the KenPom-projected two seeds (Duke, Kansas, Pitt and Syracuse) and three seeds (Minnesota, Ohio State, Arizona and Gonzaga). Miami, Creighton and Wisconsin rate out as markedly better potential four seeds, but Michigan State is below average. As for the projected five seeds, they’re more like the weak teams of 2011.

With the Quality Curve starting to conform more closely to average, I’m thinking that we may not see a top-heavy tourney. If there’s anything we’ve learned in the last week, it’s that any of the top 20 teams can beat each other on a given night. As for the anomaly that is Florida, I’ll be very interested to see whether those astronomical efficiency numbers stay so high. The pollsters have yet to fully credit Florida for their dominant play, but if the Gators keep this up, they won’t be sneaking up on anyone come tourney time.

9 Responses to Quality Curve conforming to average…but what’s with Florida?

  1. P.H. says:

    You’re always going to find value with KenPom numbers. Take teams like Wisconsin and Pitt: there’s no way they’re going to get that high a seed; heck, they might not even make the tourney. I just love Florida. They absolutely slaughtered Wisconsin and Marquette, both quality teams. If we had to fill out brackets today, they would be my pick.

  2. ptiernan says:

    For whatever reason, KenPom numbers always love Pitt and Wisconsin. Those are two teams that I’m leery of come tourney time. But there’s no denying that KenPom can help you find the undervalued gems…and steer clear of overvalued dogs.

  3. Brian says:

    The line for this year to date underscores the inherent volatility in Pomeroy’s system. I am skeptical of some teams’ efficiency rankings because they were earned against clearly inferior opponents. For example, I follow Indiana and I doubt they will have a top 20 defense come March. I definitely think this tournament is going to be a tough one to predict. There is a lot of parity up top but all these teams have some big question marks.

    • ptiernan says:

      Brian – I don’t know that KenPom data is volatile. His data actually factors in SOS. What I worry about is that KenPom efficiency numbers don’t actually factor in WHEN a team is efficient. Take a squad like Wisconsin, which his numbers always seem to love. I would contend they’re more efficient earlier in the game and less efficient when they need to score a basket late in the game. That’s one calculation you won’t get out of KenPom. Ken also calculates “Luck,” which is basically a metric measuring the deviation between his numbers and actual records. It’s an interesting stat to look at.
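
      A simplified sketch of that luck idea is actual record versus what the efficiency numbers predict. This is not Pomeroy’s exact method, which works game by game, just the basic concept:

      ```python
      def simple_luck(wins, losses, pythag):
          """Crude luck estimate: actual win pct minus Pythag-expected win pct.

          Pomeroy's published Luck stat is computed game by game, so treat
          this season-level version as an approximation of the concept only.
          """
          actual_pct = wins / (wins + losses)
          return actual_pct - pythag

      # A 20-3 team with a 0.80 Pythag has been a bit "lucky" (about +0.07).
      print(round(simple_luck(20, 3, 0.80), 3))
      ```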

  4. Jon says:

    Hey Peter,

    One thing that might be interesting to analyze is conference strength compared to how well teams from that conference do in the tournament. For instance, the Big Ten seems to be way better than the other conferences this year; how should we expect teams from that conference to do? In general, how well does a given team’s strength of schedule (or even strength of victories) predict how they do in the tourney? Just something I’ve always wondered. Keep up the awesome work.

    • ptiernan says:

      Jon,

      I’ve done something like this in the past. The one year that everyone was drooling over the Big East, they flopped massively in the dance. Tell you what…if you can dredge up which conferences were considered the best heading into the last, say, eight dances, I’ll run an analysis and post a blog on it.

      I used to do a lot more conference analysis, but with the impending craziness of conference musical chairs, I think it’s going to be a much harder stat to draw reliable conclusions about.
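
      For anyone who wants to tinker in the meantime, here is a rough sketch of how that conference comparison could be set up; the column names and numbers are placeholders, not data from this site:

      ```python
      import pandas as pd

      # Placeholder rows: one per tourney team, with its conference, a
      # hypothetical preseason conference rating, and its tournament wins.
      teams = pd.DataFrame({
          "conference":   ["Big Ten", "Big Ten", "Big East", "ACC"],
          "conf_rating":  [0.88, 0.88, 0.84, 0.81],
          "tourney_wins": [4, 2, 1, 3],
      })

      # Average tourney wins per team, grouped by conference, next to the rating.
      summary = (teams.groupby("conference")
                      .agg(conf_rating=("conf_rating", "first"),
                           avg_wins=("tourney_wins", "mean")))
      print(summary.sort_values("conf_rating", ascending=False))
      ```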

  5. Ed says:

    Hi Peter. Possible to share KenPom’s def. eff. ratings for the eventual Final Four teams just prior to the start of the tourney…last 10 years?

  6. Ed says:

    Rankings and not ratings. Thanks!

    • ptiernan says:

      Ed – Unfortunately, I don’t track the rankings against all teams, only against the tourney field. I realized this after doing a couple of the champ checks against rankings. The more accurate way to do this, probably, is with the raw Pythag numbers. I think the actual OE/DE thresholds are 21/31…not 17/25. UConn was the outlier on both ends of the court.
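
      To make the distinction concrete, here is a small sketch of what ranking against the tourney field means: re-rank each team’s defensive efficiency using only the tournament teams rather than all of Division I. The team names and numbers below are made up:

      ```python
      # Hypothetical tourney-field teams and adjusted defensive efficiencies
      # (lower is better). Values are invented for illustration.
      field = {"Louisville": 84.2, "Florida": 85.0, "Indiana": 90.1, "Duke": 89.3}

      # Rank within the field only: 1 = best defense among tournament teams.
      field_rank = {team: rank
                    for rank, (team, _) in enumerate(
                        sorted(field.items(), key=lambda kv: kv[1]), start=1)}
      print(field_rank)  # {'Louisville': 1, 'Florida': 2, 'Duke': 3, 'Indiana': 4}
      ```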
