The College Football Playoff Selection Committee Isn't Following Its Rules, and That's a Good Thing

I'm glad they aren't following the guidelines to a T.


As I mulled over the freshly released College Football Playoff rankings last night, something occurred to me: the selection committee literally cannot do its job based on the instructions it was given. I realized shortly thereafter that it's a good thing that the committee is breaking its own rules. Let's break it down piece by piece.

Best, not Most Deserving

A point of emphasis is that the selection committee is supposed to go by which teams are the best, not which are the most deserving. That was true in April, and it was true when some media folks did their mock playoff exercise a little while ago. On the CFP website, the page on the protocol for the committee is literally titled, "How To Select the Four Best Teams to Compete for the College Football National Championship" (emphasis mine). That page tells committee members seven separate times to concern themselves with the "best" teams.

I want to highlight something from the article on the media exercise that Tony Barnhart wrote:

The committee's charge, as it was explained to us, is to pick the four "BEST" teams for the College Football Playoff--not the four most "DESERVING" teams, but the four best teams. And determining the difference in "best" and "deserving" is not merely a matter of semantics.

It's really not just semantics, though Barnhart doesn't go on to explain the difference. These terms, "best" and "most deserving", have very specific meanings in a college football context. They are a byproduct of the arguments of the BCS era, and they work like this:

  • Best: refers to whether a team would be favored on a neutral field. Whoever would be expected to win in a game tomorrow against anyone else in the country is the "best" team. This method is sometimes shorthanded as "the eye test" or "style points", though arguments about who is best shouldn't rely solely on looks.
  • Most Deserving: refers to the accumulated wins and losses of a team. A team with a loss might be more deserving than a team without a loss if that loss was to a good team and the one-loss team has more quality wins than the undefeated team does. This method is often described as "resume ranking".

Those are the definitions, and it couldn't be more clear that the selection committee is tasked with finding the "best" teams. These are the definitions I'll be working from for the rest of this piece.

Strength of Schedule

There is a list of things the committee must consider, and strength of schedule clearly ended up among the most important. The committee's instructions don't direct members to weight it that heavily, as each committee member has the freedom to judge teams however he or she chooses. But based on decisions like ranking Auburn third and putting TCU well ahead of Baylor, schedule strength was plainly a big concern.

Here is where it starts to get interesting, because how you apply schedule strength is different depending on whether you're going for "best" or "most deserving".

In my observations, most people apply schedule strength in the manner that "most deserving" dictates. In that method, you count the number of high quality wins and order teams with similar numbers of losses accordingly. An argument along these lines sounds like this: "Mississippi State should be No. 1 over Florida State because its wins over Auburn and LSU are better than FSU's wins over Notre Dame and Clemson." Or this: "Baylor beat TCU, but the Bears have no other quality wins and played nobody out of conference while TCU has defeated Oklahoma, Oklahoma State, and 6-2 Minnesota". Or, you might simply put Team A over Team B because, while they have the same record, Team A has a tougher strength of schedule rating than Team B does.
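To make the "most deserving" mechanics concrete, here's a minimal sketch of a resume-style ordering: fewest losses first, then the most quality wins. The team records and the quality-win cutoff are invented for illustration; the committee doesn't publish anything like this.

```python
# A toy "most deserving" ordering: fewest losses first, then most quality wins.
# The records and the quality-win definition here are invented for illustration.
from typing import NamedTuple

class Resume(NamedTuple):
    name: str
    losses: int
    quality_wins: int  # e.g., wins over opponents currently above .500

teams = [
    Resume("Team A", losses=1, quality_wins=3),
    Resume("Team B", losses=1, quality_wins=1),
    Resume("Team C", losses=0, quality_wins=2),
]

# Sort by losses ascending, then quality wins descending.
ranking = sorted(teams, key=lambda t: (t.losses, -t.quality_wins))
for spot, team in enumerate(ranking, start=1):
    print(spot, team.name)
```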

That is not how you apply strength of schedule when determining the "best" team. The best team is the best team no matter who it plays. The quantity of good wins is irrelevant to the discussion. Here, it's about how well a team does in each game relative to the strength of the opponent in that game.

The most common way to deal with this problem is through the use of opponent-adjusted stats. Examples include the systems we talk about around here like S&P+ and FEI, and the combination of the two known as F/+. Barnhart mentioned that the committee will get some other kinds:

How detailed were these statistics? Well, they included things like "relative scoring defense" which measured what opponents scored against a team relative to what the opponents had averaged. Example: Teams that played [2008] Alabama scored only 48.89 percent of their scoring average when they played the Crimson Tide.
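That "relative scoring defense" number is simple enough to reproduce in principle. Here's a minimal sketch of one plausible way to compute it, comparing what each opponent scored against a team to that opponent's season scoring average; the exact aggregation the committee's data provider uses isn't specified, so treat the averaging (and the sample numbers) as my assumptions.

```python
# One plausible reading of "relative scoring defense": for each opponent, take the
# points it scored against us as a fraction of its season scoring average, then
# average those fractions. The sample numbers below are invented for illustration.

games = [
    # (points the opponent scored against us, that opponent's season scoring average)
    (14, 31.0),
    (10, 24.5),
    (21, 35.2),
]

ratios = [scored / season_avg for scored, season_avg in games]
relative_scoring_defense = 100 * sum(ratios) / len(ratios)

print(f"Opponents managed {relative_scoring_defense:.2f}% of their usual scoring output.")
```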

There is some merit to looking at schedule strength this way instead of the other way. For instance, FSU can't help it that the ACC doesn't provide as many potential good wins as the SEC does. It can't help it that its big non-conference opponent is a sinking ship. It can only play the games on its schedule and do as well as it can in them. It's an argument Boise State pounded consistently in its heyday a few years back, because not only would a power conference not expand to include the Broncos, but nearly all power conference teams flatly refused to schedule them.

So to review: when you're supposed to be picking out the "best" team and not picking out the "most deserving" team, you shouldn't be counting the number of quality wins when talking about strength of schedule. You should be talking about how well teams did in games relative to the strength of those opponents.

Margin of Victory

The computer polls in the BCS were not allowed to consider margin of victory. The restriction came from a desire not to give teams an incentive to run up the score. Unfortunately for sportsmanship, margin of victory is necessary to make a formula-based ranking that actually works. Bill James made this case brilliantly a few years ago.

That misplaced belief in not including margin of victory lives on in the selection committee protocol. One of the things that the committee is instructed to consider is, "Comparative outcomes of common opponents (without incenting margin of victory)".

That one line doesn't quite cover it, though. Jerry Palm did, in his comments on the media's mock selection committee process:

There is no credible data provided to measure strength of schedule.

Of course, you can see the schedule itself. The opponent, the opponent's record and the result of each game are listed. They also separate out wins against teams with records above .500 and losses to those below .500. Other than that, all you get is the old BCS version of each team's collective opponents' record, and opponents' opponents' record. There are no other ratings.
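As an aside, the old BCS schedule-strength number Palm mentions combined exactly those two pieces; as I recall it, opponents' winning percentage was weighted twice as heavily as opponents' opponents' winning percentage. A rough sketch, with invented inputs:

```python
# The old BCS schedule-strength component, as I recall it: opponents' winning
# percentage counted twice as heavily as opponents' opponents' winning percentage.
# The inputs below are invented for illustration.

def bcs_schedule_strength(opp_win_pct: float, opp_opp_win_pct: float) -> float:
    return (2 * opp_win_pct + opp_opp_win_pct) / 3

print(round(bcs_schedule_strength(0.640, 0.555), 3))  # 0.612
```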

So the committee gets materials that help it apply the "most deserving" rubric of schedule strength, even though that's not what it's supposed to be deciding on. Great. More from Palm:

Only a handful of the stats provided correlate well to a team's winning percentage, and all of those are points-based. Naturally, there is some correlation to winning and the ability to gain yards or stop your opponent from doing so, but it's not nearly as strong as the scoring stats. ...

[A]n even more meaningful stat isn't even allowed in the room. Margin of victory. Nothing correlates as well to winning percentage as MOV, but that isn't politically correct.

Now, Heather Dinich's writeup from that same exercise does mention that MOV was something that some people considered:

All 17 of the mock committee members spoke briefly about what they thought the most important factors were, and there were varying opinions. Some thought margin of victory mattered, while others didn't.

Not all did.

Margin of victory is one of the key defining factors in applying strength of schedule in a "best" rather than "most deserving" context. Beating a team that finishes over .500 by 3 and beating that same team by 24 are not the same thing, yet they count exactly the same in one of the useless stats the committee will get regarding schedule strength. One of the key ways you figure out how well a team did relative to its opponents is to look at how much it beat them by.
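Here's a minimal sketch of what using margin of victory in a "best"-style evaluation might look like: compare each result to what a typical team manages against the same opponent. The "typical margin" numbers are invented placeholders, and real systems like S&P+ work at the play level rather than with raw final margins, so this is only a toy illustration.

```python
# Toy opponent-adjusted margin: how much better (or worse) than a typical team
# did we do in each game? All numbers below are invented for illustration.

results = [
    # (our margin in the game, margin a typical team manages against that opponent)
    (3,  -7.0),   # squeaked past a strong team most opponents lose to by a touchdown
    (24, -7.0),   # blew out that same strong team (very different information)
    (10, 14.0),   # underperformed against a weak team despite winning
]

performance = [our_margin - typical_margin for our_margin, typical_margin in results]
print("Per-game performance vs. expectation:", performance)
print("Average:", sum(performance) / len(performance))
```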

In short, you cannot apply schedule strength in the way the committee is supposed to without margin of victory. Yet, they're discouraged from applying it at all and prohibited from applying it in at least one case (common opponents).

Getting It Right

To sum it up, the committee cannot follow its instructions to the letter and produce something that's coherent. And speaking of coherence, those instructions lack it in a far more obvious way. Despite the CFP hammering "best" over "most deserving", here's what the CFP's protocol says when explaining why the committee method is better than what we've had before:

Under the current construct, polls (although well-intended) have not expressed these values; particularly at the margins where teams that have won head-to-head competition and championships are sometimes ranked behind non-champions and teams that have lost in head-to-head competition. Nuanced mathematical formulas ignore some teams who "deserve" to be selected.

Emphasis mine.

We have a committee that is selecting "best" and not "most deserving" in part because formulas might leave out deserving teams. Huh? What?

Fortunately, it doesn't sound like the committee is actually trying to follow its instructions to the letter:

The Oregon Ducks came in at No. 5, significantly higher than a one-loss team that beat them, No. 12 Arizona. [Committee chair Jeff] Long was asked how that was different from the Ole Miss-Alabama situation.

"You look at Oregon, they not only beat Michigan State, but they went on the road and had a good win at UCLA. So I think their body of work -- as you guys have said and we use a lot in the room as well -- is better than that of (Arizona)."...

"We look at the games that Kansas State has played; they played an excellent game at home in a close loss to Auburn and then the Oklahoma win was important for them."

These are not the words of someone going entirely off of who is "best". This is the telltale parlance of someone worried about who is "most deserving". When you're determining who is "best", the quality of opponent is largely irrelevant; it's how you did against the teams you played that matters. Collecting wins over quality opponents is not part of the "best" equation, yet here Long is talking about just that with Oregon and Kansas State.

This is a good thing, and not just because attempting to follow incoherent rules is a recipe for madness. There is no way to get two people to agree precisely on what makes a team "best", and I'm not just talking about among committee members.

For the system to work and not be constantly derided like the BCS was, it has to feel like there is some logic behind it. You can certainly apply logic in the "best" methodology, but it absolutely requires using formulas like S&P+ that account for schedule strength. People just aren't that good at accounting for differences in opponent strength when eyeballing box scores, so those kinds of formulas take care of the hard stuff. Most fans don't trust formulas, though. The computers always took worse criticism during the BCS era than the polls did.

The only way to apply a veneer of logic to the system, then, is to go with "most deserving". Saying, "Team A is ahead of Team B because it has more good wins" makes sense to most people. Most fans consider that a fair way of determining team order, even if they might not agree that Team A actually has more good wins than Team B does.

Not attempting to follow the instructions is the best thing the committee can do, and that's what it's doing. Maybe they can clean up the guidelines in the offseason, but for now, the committee is getting it right.