We are a learning organization and, as such, are constantly fine-tuning our rankings methodology to reflect new insights and new concepts. When we update our methodology, we want to let you know. This is our latest update for basketball — an important update, as some teams will see big changes in rankings.
The basis for this change is the difficulty we have ranking teams early in the season. Without getting too deep into the intricacies of different ranking methodologies, no ranking system does very well in the early going. There just aren't enough games played to get a good bead on where teams stand. We need to see 8-10 games within the division to rank a team into what we consider its playing range.*
Our biggest flaw as a ranking system has been inflexibility in our early-season methodology. Teams performing significantly better than last season don't move up the ranks quickly enough, while teams performing significantly worse don't drop fast enough. In D1 this is usually not an issue, as most teams don't improve or decline dramatically from year to year, but lower-division teams can make big swings in one offseason, and our ranking system has had trouble quantifying that.
The original BennettRank methodology was based on two seasons of analysis of D1 Women's Soccer. This division is relatively stable from year to year. However, in D2 and D3, major changes can and do occur from one season to the next. Various attempts to adjust the rankings to reflect these major changes have been methodologically complex and sometimes difficult to explain. As such, we want to move to a methodology that is intuitively correct (i.e. makes sense) and can be administered efficiently across all sports and all divisions for all college teams.
To that end, our ranking methodology now has three distinct phases:
Phase 1 – Preseason Rankings (Week 1)
For preseason rankings, we take the previous season's final BennettRank and adjust each team's underlying power score by up to 15% based on conference performance. This is the same concept as in previous seasons.
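As a rough illustration, the Phase 1 adjustment might look like the sketch below. The 15% cap comes from the text; the -1..1 conference score, the linear form, and all function names are our illustrative assumptions, not BennettRank's actual code.

```python
# Sketch of a Phase 1 preseason adjustment, assuming a simple linear
# scheme: conference performance is scored on a -1..1 scale and the
# power score moves by up to 15% in either direction.

def preseason_power(last_season_power, conference_score, max_adjust=0.15):
    """Adjust last season's power score by up to +/-15%.

    conference_score: -1.0 (weakest showing) .. 1.0 (strongest showing)
    """
    # Clamp the score so no team can move more than the 15% cap.
    conference_score = max(-1.0, min(1.0, conference_score))
    return last_season_power * (1.0 + max_adjust * conference_score)

# A team with a power score of 80 coming off a dominant conference run
# moves up to 92; a middling showing would pull it down toward 74.
print(preseason_power(80.0, 1.0))
print(preseason_power(80.0, -0.5))
```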
One change we made this season was to ask our beat writers to examine the Top 25 teams in their division and suggest adjustments. These adjustments may reflect large graduating classes, known powerhouses hurt by weak conferences, or other intangibles our computers don't pick up. The term we use around the office is "face validity." Do the preseason Top 25 rankings make sense?
We make no adjustments for teams ranked outside the Top 25.
Phase 2 – Early Season Rankings (Week 2 to Week 8)
For the first eight weeks of basketball season, we will use our traditional methodology. This takes the previous week's rankings and adjusts them based on each team's performance against our expectations. We publish our predictions for the week each Monday morning, along with the rankings.
These game predictions are based on the difference in rank between the two teams involved and the location of the game. A victory earns a team automatic points, and if the result is better than predicted a larger adjustment is made. Teams can also slide down (even with a win) if their performance doesn't meet our expectations. In this way, each week is just an incremental change from the week before.
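The weekly incremental update described above can be sketched as follows. The structure (rank-based prediction, automatic points for a win, a larger adjustment for beating the prediction, and a possible slide even with a win) comes from the text; every constant and function name here is an illustrative assumption, not a published BennettRank parameter.

```python
# Sketch of the Phase 2 weekly update, assuming: a win earns a fixed
# base credit, and beating (or missing) the predicted margin adds a
# proportional adjustment on top of it.

def weekly_adjustment(predicted_margin, actual_margin, won,
                      win_bonus=2.0, surprise_weight=0.1):
    """Return the change applied to a team's power score this week."""
    points = win_bonus if won else 0.0
    # Outperforming the prediction adds credit; underperforming
    # subtracts it -- so a team can slide down even with a win.
    points += surprise_weight * (actual_margin - predicted_margin)
    return points

# Favored by 10 but winning by only 2: the win bonus is partly offset,
# leaving a smaller upward nudge than a routine win would earn.
print(weekly_adjustment(predicted_margin=10, actual_margin=2, won=True))
```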
Phase 3 – Mid to Late Season Rankings (Week 8 to Week 23)
For most sports it takes about one third of a season's worth of games to rank teams in their general playing range,* so we wait until this point to change anything about the way we rank teams. But this is where the changes to our ranking system come into play.
We have always calculated an RPI-like "body of work" measure, but until now we have never published it. There are several differences between our body of work measure and a true RPI ranking:
Margin of victory is the focus statistic
- Wins are important, but margin of victory is more important in our system. A big win will earn a team more points than a narrow one (up to a point, because a 35-point win doesn't teach us all that much more than a 25-point win). A team will earn full ranking credit for a margin of victory up to 18 in basketball. Past that, each additional point earns the team only 50% of the usual credit in the rankings.
Home-court advantage is explicitly recognized
- We expect home-court advantage to add about three points for the home team, on average, so we adjust our expected margin of victory to reflect that advantage. In terms of predicting games, this advantage represents about a 30-rank swing: meaning a team ranked BR-180 playing at home has about a 50% chance of beating a team ranked BR-150. We feel this simply cannot be ignored in a computer ranking methodology.
Recent games are weighted more heavily
- All games up until Week 8 are weighted the same. From Week 8 through the end of the season, each week's games are weighted 3% more heavily than the week before. Thus Week 18 games are about 30% more important than Week 8 games in our rankings. Teams go on streaks, and more recent wins and losses are more telling and more predictive of what's to come than early-season results.
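The three body-of-work ingredients above can be sketched directly. The margin cap (18), the home bump (about 3 points), and the recency rate (3% per week from Week 8) come from the text; the exact functional forms and names are our illustrative assumptions.

```python
# Sketches of the three body-of-work components: capped margin-of-victory
# credit, an explicit home-court shift, and linear recency weighting.

def margin_credit(margin, cap=18, overflow_rate=0.5):
    """Full credit per point of margin up to the cap; half credit past it."""
    if margin <= cap:
        return float(margin)
    return cap + overflow_rate * (margin - cap)

def expected_margin(rank_gap_points, home=True, home_bump=3.0):
    """Shift the expected margin about three points toward the home team."""
    return rank_gap_points + (home_bump if home else -home_bump)

def week_weight(week, base_week=8, rate=0.03):
    """Weeks 1-8 weigh the same; each later week adds 3% to the weight."""
    return 1.0 + rate * max(0, week - base_week)

print(margin_credit(25))            # 21.5 -- a 25-point win, 7 past the cap
print(round(week_weight(18), 2))    # 1.3 -- Week 18 counts ~30% more
```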
In development, we modeled and compared this methodology to the RPI in D1 men's and women's basketball. The RPI uses only wins and losses, has a very crude home-court advantage adjustment, and weights all weeks in the season the same. The subjective rankings for most D1 sports are pretty good (Coaches Poll); some are exceptional (e.g. D1W Soccer, D1M Basketball). As such, we tuned our new approach to converge on those rankings as quickly as possible.
From Week 8 on, we use this BennettRank "body of work" statistic as our sole ranking methodology. It intuitively moves teams up or down each week based on each week's performance and it has overall face validity. It will surely identify teams overlooked by the subjective polls. Usually 22-25 teams overlap in the major Top 25 polls in D1 (AP and Coaches Polls), and in D2/D3, very few teams enter or leave the Coaches Poll Top 25 each week. Our methodology will identify what we consider to be over- and under-ranked teams as compared to the subjective polls.
Week 8 will bring with it some big ranking differences for some teams. These teams are statistically playing much better or worse than last year; so much so that our Week 1-8 methodology just hasn't caught up to them yet.
As always, we encourage input and comments on our methodologies. We now have a methodology that recognizes early-season ranking issues and produces a more predictive statistic later in the season. Please feel free to contact us with any questions, comments, or concerns, and, as always, thanks for reading.
*Playing range is a term we use to describe the ~30-rank range a team should float within during the season. Think of this as the equilibrium, the point where the scale is balanced.