Monday, February 7, 2011

NCAA Doesn't Earn Any Trust Points With New Strength of Schedule Multiplier

When the NCAA released the 2011 version of the Men's Basketball Championship Handbook back in November, this little nugget caused some head-scratching around the D3 world:

Home/Away Multiplier. A multiplier of 1.4 shall be added to the OWP for those games played away from home. In addition, the same multiplier (1.4) shall be included for those games played on the road for the OOWP. A multiplier of 1.0 (no positive or negative effect) will be included in the OWP and OOWP for all neutral games. A multiplier of 0.6 shall be included in the OWP and OOWP for all home games.

This section (which can be found on page 16) introduces a new multiplier for the strength of schedule (SOS) calculation depending on where the game was played.

This makes some sense. A road game versus a specific team is usually harder to win than a home game against the same opponent, and they wanted to reflect this in a team's SOS. But why did the D-III committee pick these specific multipliers?
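Before going further, it helps to pin down what the handbook language means in practice. As I read it, each game's contribution to OWP (and OOWP) gets scaled by where that game was played. Here's a minimal Python sketch of that reading; the record format and function are mine, not the committee's, and whether the average is normalized by the weighted game count (as here) or the raw game count isn't spelled out in the handbook.

LOCATION_WEIGHT = {"away": 1.4, "neutral": 1.0, "home": 0.6}

def weighted_owp(games):
    """games: list of (location, opponent_win_pct) pairs, with
    location 'home', 'away', or 'neutral' from the team's view."""
    num = sum(LOCATION_WEIGHT[loc] * wp for loc, wp in games)
    den = sum(LOCATION_WEIGHT[loc] for loc, _ in games)
    return num / den

# Two road games against strong opponents now count for more than
# the same two games played at home:
print(weighted_owp([("away", 0.800), ("away", 0.750), ("home", 0.400)]))  # ~0.709
print(weighted_owp([("home", 0.800), ("home", 0.750), ("away", 0.400)]))  # ~0.573
# The unweighted average of either schedule is ~0.650.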

Pat Coleman, d3hoops.com editor-in-chief, was kind enough to paste a portion of an electronic conversation he had with a committee member on the 'Pool C' section of d3boards.com. Here's what the (anonymous) committee member said:

This (1.4 & .6) is the multiplier that Division I and II have used in their SOS or RPI indexes for the past few years.

Again, we may have to have further discussions on tweaking the system.

I take these sentences to mean that they borrowed the multiplier values from a similar adjustment used in D-I and D-II basketball, but didn't get very deep into discussing how to actually apply them.

I say this because D-I and D-II are applying the multipliers in a completely different manner than D-III is.

CollegeRPI.com collects and publishes RPI data, and they have an explanation of how the home/road multiplier is used in Division I.

For the 2004-05 season, the formula was changed to give more weight to road wins vs home wins. A team's win total for RPI purposes is 1.4 * road wins + neutral site wins + 0.6 * home wins. A team's losses is calculated as 0.6 * road losses + neutral site losses + 1.4 * home losses.

For example, a team that is 4-0 at home and 2-7 on the road has a RPI record of 5.2 wins (1.4 * 2 + 0.6 * 4) and 4.2 losses (0.6 * 7). That means that even though it is 6-7, for RPI purposes, it is above .500 (5.2-4.2).

This "weighted" record is only used for the 25% of the formula that is each team's winning percentage. The regular team records are used to calculate OWP and OOWP.

According to this explanation, the multipliers are applied to a team's winning percentage, not its strength of schedule. So there is no reason to expect that simply slapping the same multipliers onto the strength of schedule would yield useful results without first studying the potential effects of doing so.
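To make the contrast concrete, here's a short sketch of the D-I adjustment exactly as CollegeRPI.com describes it, reproducing their 6-7 example (the function name and signature are mine):

def weighted_record(home_w, home_l, road_w, road_l, neut_w=0, neut_l=0):
    """D-I style weighted record: road wins and home losses count
    extra, home wins and road losses count less. Per CollegeRPI.com,
    this feeds only the winning-percentage 25% of the RPI formula;
    OWP and OOWP still use actual records."""
    wins = 1.4 * road_w + neut_w + 0.6 * home_w
    losses = 0.6 * road_l + neut_l + 1.4 * home_l
    return wins, losses

# CollegeRPI.com's example: 4-0 at home, 2-7 on the road
wins, losses = weighted_record(home_w=4, home_l=0, road_w=2, road_l=7)
print(wins, losses)            # 5.2 4.2
print(wins / (wins + losses))  # ~0.553 -- above .500 despite being 6-7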

I wanted to test the differences between the two systems, so I selected twelve D-III teams from across the country in order to compare how the two different adjustments affect RPI. I tried to select a variety of teams ranging from Pool C locks to bubble teams. I know it has been said that the D-III committee doesn't actually calculate an RPI in their discussions, but I'm going to do it here for comparison's sake.

Here's a chart of these teams' respective RPI numbers as calculated the standard way (no multipliers), the D-III way (multipliers on SOS), and the D-I way (multipliers on winning percentage).

[Chart: standard, D-I, and D-III RPI values for each of the twelve teams]

The blue bar (first in each series) is the standard RPI calculation without any multiplier ('the way D3 used to do it'), the green bar (second) is RPI with the D-I multiplier method applied, and the red bar (third) is RPI with the current D-III multipliers applied.

If the D-III committee was hoping that the multipliers would serve as an adjustment, it looks like this experiment is a complete failure. Look how close the D-I RPI numbers stay to their respective standard RPI numbers. This is what an adjustment should look like: mostly the same, but, you know, slightly adjusted. In several cases the D-III numbers look like the output of an entirely different rating system. This can't be what the Division III committee was going for.

Here's how the teams rank according to these three RPI methods:

Rank   Standard             D-I                  D-III
1      St. Thomas           St. Thomas           Lewis and Clark
2      Amherst              Amherst              St. Thomas
3      St. Mary's (Md.)     St. Mary's (Md.)     Ramapo
4      Ramapo               Ramapo               Illinois Wesleyan
5      Kean                 Kean                 La Roche
6      Calvin               Lewis and Clark      St. Mary's (Md.)
7      Lewis and Clark      Calvin               Kean
8      Marietta             Illinois Wesleyan    Calvin
9      Centre               Marietta             Marietta
10     Illinois Wesleyan    Centre               Centre
11     Anderson             La Roche             Amherst
12     La Roche             Anderson             Anderson


Which one of these is not like the other?

This table makes my previous point, but here it's easier to see at a glance. The D-I version of the RPI calculation looks like the standard version for the most part, with a few teams adjusted up or down one or two spots. The D-III version sees La Roche climb seven spots and Amherst drop nine. Amherst has the 11th-best resume of the teams on this list, and Lewis and Clark the best? Really?

It appears that taking the D-I multipliers and applying them to the D-III strength of schedule is far too drastic a move.

This isn't to say that D-I is doing it correctly. I happen to agree with the D-III committee that a multiplier makes more sense on the strength of schedule portion of the criteria than on the winning percentage: playing at home or on the road doesn't change whether you won or lost, but it does change how difficult the game was. In my opinion, though, 1.4 and 0.6 are not the right multipliers for the job.
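For what it's worth, testing candidate values is mechanical. Here's a rough sketch of how one could sweep milder multiplier pairs and measure how badly each one scrambles the standard ordering. The toy season data and the symmetric-pair grid are mine, invented purely so the sketch runs; a real study would use full D-III results.

# Toy data: each team's winning pct, its opponents' winning pcts
# split by game location, and its OOWP.
teams = {
    "A": {"wp": 0.80, "away": [0.55, 0.60], "home": [0.50], "oowp": 0.52},
    "B": {"wp": 0.70, "away": [0.40], "home": [0.65, 0.70], "oowp": 0.55},
    "C": {"wp": 0.60, "away": [0.70, 0.65], "home": [0.45], "oowp": 0.50},
}

def rpi(t, away_mult=1.0, home_mult=1.0):
    # Standard RPI weights (25/50/25), with the multiplier applied
    # to OWP the way I read the D-III handbook.
    num = away_mult * sum(t["away"]) + home_mult * sum(t["home"])
    den = away_mult * len(t["away"]) + home_mult * len(t["home"])
    return 0.25 * t["wp"] + 0.50 * (num / den) + 0.25 * t["oowp"]

def ranking(away_mult=1.0, home_mult=1.0):
    return sorted(teams, key=lambda n: rpi(teams[n], away_mult, home_mult),
                  reverse=True)

baseline = ranking()
for away_mult in (1.05, 1.10, 1.20, 1.30, 1.40):
    home_mult = 2.0 - away_mult  # keep each pair symmetric around 1.0
    moved = sum(abs(baseline.index(n) - ranking(away_mult, home_mult).index(n))
                for n in teams)
    print(f"{away_mult:.2f}/{home_mult:.2f}: total rank displacement {moved}")

A pair worth adopting is one that nudges the numbers while keeping the displacement small.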

It took me only about an hour to determine that these multipliers are indeed far too extreme, and I suspect that, given another hour or two at most, a reasonable person could come up with agreeable multiplier values. Is due diligence too much to ask of the NCAA?