# Quarterback Ratings Actually Do Matter, So Why Not Understand Them?

It's hard to compare quarterbacks across different college football teams. There are so many different offensive systems that it's difficult to do accurately -- Cam Newton, for example, is asked to do different things at Auburn than Andrew Luck is asked to do at Stanford.

The great equalizer -- though few recognize it as such -- is the passer rating. Giving credit where it's due: I was watching Mike and Mike this morning, and they scoffed at the rating a bit -- before marveling that most of the teams headed to the NFL playoffs have one of the highest-rated passers in the league. But of course, it's just a useless stat that only math geeks and the like understand.

But the passer rating is a useful tool to compare quarterbacks -- more on that in a moment. And if you look at the Top 25 passer ratings in the NCAA -- well, nine of them belong to quarterbacks on nine of the 10 teams in the BCS. (The exception to this is Connecticut, which probably shouldn't count as a BCS team for the purposes of anything statistical.)

*[Table: Top 25 NCAA passer ratings]*

##### Source: NCAA

By the way, the No. 2 that's missing there is Kellen Moore of Boise State, who went to some trophy ceremony in New York City or something.

So with both NFL playoff teams and BCS teams leaning on highly rated quarterbacks, why is the formula so routinely scoffed at by pundits and fans? Well, it's hard to figure out -- by which I mean it requires people to actually do math. Okay, so maybe that's not entirely fair to most people.

The thing is, it's really not that complicated. It probably looks that way when you see it written out like this:

Rating = [ (8.4 × Yards) + (330 × Touchdowns) + (100 × Completions) − (200 × Interceptions) ] / Attempts

But let's break it down a little bit into the component pieces. First of all, we should note that the multipliers used are based on numbers that would have yielded a 100.00 for an average passer in 1965-79. Obviously, college offenses have changed since then, so you have to get more than 100 to have a good game this year, but that's the only part of the formula you might be able to quibble with.

• Multiply yardage by 8.4
• Multiply touchdowns by 330
• Multiply completions by 100
• Add those three numbers together
• Multiply the number of interceptions by 200; subtract this number from the sum of the other three numbers
• Divide all of that by attempts
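The steps above translate directly into a few lines of Python. The stat line in the example is made up for illustration -- it isn't any particular player's season:

```python
def ncaa_passer_rating(yards, touchdowns, completions, interceptions, attempts):
    """NCAA passing-efficiency rating, per the formula above (no caps)."""
    points = (8.4 * yards) + (330 * touchdowns) + (100 * completions)
    points -= 200 * interceptions
    return points / attempts

# Hypothetical season: 3,000 yards, 30 TD, 250-of-350 passing, 7 INT
print(round(ncaa_passer_rating(3000, 30, 250, 7, 350), 1))  # → 167.7
```

Note that nothing here is bounded: throw for more yards or more touchdowns and the rating keeps climbing, which matters for the next point.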

One thing I like about the college calculation, as opposed to the pro version, is that it doesn't arbitrarily cap the component values. (Those caps are why NFL passers can post a maximum rating, but not a perfect one, no matter what you hear it called -- just as 1600 was a maximum SAT score, not a perfect one.)
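For comparison, here's a sketch of the NFL version: it computes four per-attempt components and clamps each one to the 0-to-2.375 range. That clamping is why a "perfect" NFL game tops out at 158.3 instead of climbing with every extra yard or touchdown:

```python
def nfl_passer_rating(completions, attempts, yards, touchdowns, interceptions):
    """NFL passer rating: four per-attempt components, each clamped to [0, 2.375]."""
    clamp = lambda x: max(0.0, min(x, 2.375))
    a = clamp((completions / attempts - 0.3) * 5)        # completion percentage
    b = clamp((yards / attempts - 3) * 0.25)             # yards per attempt
    c = clamp((touchdowns / attempts) * 20)              # touchdown rate
    d = clamp(2.375 - (interceptions / attempts) * 25)   # interception rate
    return (a + b + c + d) / 6 * 100

# Once every component hits the cap, the rating can't go any higher
print(round(nfl_passer_rating(20, 25, 350, 5, 0), 1))  # → 158.3
```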

But if you look at all the things we generally consider important for quarterbacks -- passing yardage, touchdowns, completions -- and the one thing we don't like -- interceptions -- they're all there. And it's adjusted for attempts, so someone who throws 400 passes is graded on 400 passes, and someone who throws 250 is graded on 250. Maybe the weights aren't exactly right, but they apply equally to everyone, so any error evens out in the end.

And there are a few other arguments that will be made -- some in the sabermetric football community would probably question the reliance on passing touchdowns. But don't bash the quarterback rating because you don't understand it. That's easy enough to fix, and the results suggest the statistic is too important to be ignored.