Predicting Super Bowl 50 in 2016 was the ultimate test of the eye test against the numbers.
By the eye test, Carolina looked like the clear favorite over Denver. The Panthers had a stellar 17-1 record, and they destroyed two of the NFL’s best teams, Seattle and Arizona, to make the Super Bowl.
The eye test also favored Carolina at the quarterback position. Cam Newton had a Most Valuable Player caliber season, a touchdown machine at the pinnacle of his game.
However, the eye test for Super Bowl 50 didn’t hold up. In the first quarter, Von Miller stripped Cam Newton of the ball. Denver recovered for a touchdown that gave them a 10-0 lead.
Carolina’s offense never got off the ground. Despite their own anemic offense, Denver won 24-10 with the help of a few critical turnovers.
In contrast to the eye test, numbers suggested Denver wasn’t as overmatched as they seemed against Carolina in Super Bowl 50. This insight was based on computer rankings and their adjustments for strength of schedule.
Let me explain.
Margin of victory
It should be obvious that a team ranking system should consider margin of victory in games, not just wins and losses.
Do you care merely that Amazon has lower prices than your neighborhood book store? No. It’s the size of the savings, a 40% discount on all titles, that compels you to buy online.
The same lesson applies to computer rankings.
The Power Rank’s team rankings start with margin of victory in games. The Panthers had an average margin of victory of almost 13 points, by far the best in the NFL in 2015. However, this raw metric didn’t tell the entire story about Carolina.
Let’s take the next step.
Adjusting for strength of schedule
In a nutshell, computer ranking systems take a statistic like margin of victory and adjust for strength of schedule. That’s it.
This adjustment is more critical in college football than the NFL. In college, teams divide themselves into conferences of vastly differing strength. SEC teams play a much more difficult schedule than their neighbors in the Sun Belt.
In the NFL, the salary cap levels the playing field, which makes adjustments for strength of schedule less important than in college football. However, you shouldn’t ignore these adjustments, especially for Carolina during the 2015 season.
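The Power Rank’s actual algorithm is more involved, but a minimal sketch shows the principle. Assuming a simple iterative rating in the spirit of the NFL’s Simple Rating System, each team’s rating is its average margin of victory plus the average rating of its opponents; the teams and scores below are hypothetical.

```python
from collections import defaultdict

# Hypothetical round-robin results, not actual 2015 scores:
# (team_a, team_b, points_a, points_b)
games = [
    ("Panthers", "Falcons", 38, 10),
    ("Panthers", "Saints", 41, 20),
    ("Panthers", "Broncos", 24, 23),
    ("Broncos", "Falcons", 20, 13),
    ("Broncos", "Saints", 27, 17),
    ("Falcons", "Saints", 23, 20),
]

margins, opponents = defaultdict(list), defaultdict(list)
for a, b, pa, pb in games:
    margins[a].append(pa - pb)
    margins[b].append(pb - pa)
    opponents[a].append(b)
    opponents[b].append(a)

# Start each rating at the team's raw average margin of victory.
ratings = {t: sum(m) / len(m) for t, m in margins.items()}

# Iterate: rating = average margin + average opponent rating.
# Large margins against weak opponents get discounted by those opponents' low ratings.
for _ in range(200):
    ratings = {
        t: sum(margins[t]) / len(margins[t])
        + sum(ratings[o] for o in opponents[t]) / len(opponents[t])
        for t in ratings
    }
    # Re-center so the league average rating stays at zero.
    mean = sum(ratings.values()) / len(ratings)
    ratings = {t: r - mean for t, r in ratings.items()}

for team, rating in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{team}: {rating:+.1f}")
```

The Power Rank uses a different, more sophisticated algorithm, but the principle of folding opponent strength back into each team’s rating is the same.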
All of my team rankings take margin of victory in games and adjust for strength of schedule. Here are the NFL rankings prior to the Super Bowl; keep an eye on where Carolina’s opponents land.
1. Carolina, 10.2
2. Seattle, 8.3
3. Cincinnati, 7.0
4. Arizona, 6.9
5. Kansas City, 6.8
6. Pittsburgh, 6.1
7. New England, 6.0
8. Denver, 5.2
9. Green Bay, 4.6
10. Minnesota, 3.7
11. New York Jets, 1.0
12. Buffalo, -0.4
13. St. Louis, -0.5
14. Oakland, -1.0
15. Detroit, -1.0
16. Houston, -1.0
17. Baltimore, -1.7
18. Philadelphia, -1.9
19. Chicago, -2.0
20. New York Giants, -2.2
21. Washington, -2.3
22. Atlanta, -3.1
23. New Orleans, -3.1
24. San Diego, -3.6
25. Indianapolis, -3.7
26. Dallas, -5.6
27. Tampa Bay, -5.8
28. Jacksonville, -5.9
29. San Francisco, -6.2
30. Miami, -6.4
31. Cleveland, -6.8
32. Tennessee, -8.9
Carolina played three teams in the top half of my team rankings the entire season. Their 6 division games against Atlanta (22nd), New Orleans (23rd) and Tampa Bay (27th) didn’t present much competition. In addition, they faced the weak teams from the NFC East and AFC South in other games.
Despite this soft schedule, Carolina still ranked first in these points-based NFL rankings because of their large unadjusted margin of victory in games. To find a potential weakness for Carolina against Denver, we need to dig further.
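As a rough sanity check, the simple model sketched above, where a rating is approximately average margin plus average opponent rating, tells the same story: start from Carolina’s average margin of almost 13 points, subtract a few points for a schedule full of below-average opponents, and you land in the neighborhood of their 10.2 rating. The exact size of that schedule penalty is my back-of-the-envelope estimate, not a number from The Power Rank.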
Ranking pass offense and defense
The Power Rank algorithm can do more than rank teams on adjusted margin of victory. It can also rank offenses and defenses based on efficiency metrics.
To get better insight into the matchup between Carolina and Denver, let’s look at the rankings for pass offense and defense. To do this, we take yards per pass attempt and adjust for strength of schedule.
This list gives the pass defense rankings before Super Bowl 50; again, note where Carolina’s opponents fall.
1. Denver, 5.1
2. Carolina, 5.5
3. Seattle, 5.5
4. Kansas City, 5.6
5. Cincinnati, 5.6
6. Houston, 5.8
7. Green Bay, 5.9
8. New England, 6.0
9. Oakland, 6.0
10. St. Louis, 6.1
11. New York Jets, 6.1
12. Minnesota, 6.1
13. Pittsburgh, 6.2
14. Philadelphia, 6.2
15. Arizona, 6.3
16. Baltimore, 6.3
17. Buffalo, 6.6
18. Chicago, 6.6
19. Tampa Bay, 6.6
20. Indianapolis, 6.7
21. Detroit, 6.7
22. Dallas, 6.7
23. Washington, 6.8
24. Atlanta, 6.8
25. Tennessee, 6.9
26. Jacksonville, 6.9
27. San Francisco, 7.0
28. San Diego, 7.0
29. Miami, 7.1
30. New York Giants, 7.2
31. Cleveland, 7.3
32. New Orleans, 7.9
Cam Newton only faced three solid pass defenses all season. Three!
I should note that Arizona would have made a fourth solid pass defense until Carolina racked up 11.2 yards per attempt against them in the NFC Championship Game.
Carolina threw for almost 7 yards per attempt, 5th best in the NFL. However, strength of schedule adjustments drop Carolina to 11th in the pass offense rankings.
Also, Denver had the top-ranked pass defense heading into the Super Bowl. The number next to each team gives a rating, the expected yards per pass attempt allowed against an average pass offense. Denver’s 5.1 rating was significantly better than second-ranked Carolina’s 5.5.
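Since each rating is expressed in yards per pass attempt against an average opponent, you can use them to sketch an expected matchup number. The combination rule below (add each side’s deviation from league average) and the 6.3 league-average figure are my simplifying assumptions, as is the offense rating in the example; only Denver’s 5.1 defense rating comes from the table above.

```python
LEAGUE_AVG_YPA = 6.3  # assumed league-average yards per attempt, near the middle of the table above

def expected_ypa(off_rating: float, def_rating: float,
                 league_avg: float = LEAGUE_AVG_YPA) -> float:
    """Expected yards per pass attempt when an offense meets a defense.

    Each rating is already expressed against an average opponent, so a
    simple approach is to add both deviations from the league average.
    """
    return league_avg + (off_rating - league_avg) + (def_rating - league_avg)

# Hypothetical offense rating of 6.8 against Denver's 5.1 pass defense rating.
print(round(expected_ypa(off_rating=6.8, def_rating=5.1), 1))  # 5.6, a below-average passing day
```

Whatever the exact combination rule, the takeaway is the same: a top pass defense drags even a good pass offense well below its raw season numbers.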
Numbers over the eye test
It wasn’t easy trusting the numbers before Super Bowl 50. Everyone liked Carolina, as the markets closed with the Panthers as a 5-point favorite over the Broncos.
My member predictions, which use a number of metrics including the rankings discussed in this article, gave Carolina a 1-point edge. While that seemed low compared with the market’s 5, the matchup of Denver’s pass defense against Cam Newton gave the Broncos hope.
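To put that 1-point edge in perspective, here’s a standard back-of-the-envelope conversion from a point spread to a win probability. It assumes final margins scatter around the predicted edge roughly like a normal distribution with a standard deviation near 13.45 points, a conventional figure from football modeling, not something from my member predictions.

```python
from math import erf, sqrt

def win_probability(edge: float, sd: float = 13.45) -> float:
    """Chance the favored team wins, assuming the final margin is
    normally distributed around the predicted edge."""
    return 0.5 * (1.0 + erf(edge / (sd * sqrt(2.0))))

print(round(win_probability(1.0), 2))  # a 1-point edge: roughly 0.53
print(round(win_probability(5.0), 2))  # a 5-point favorite: roughly 0.64
```

Under that assumption, even the 5-point market line only made Carolina about a two-in-three favorite, and a 1-point edge is barely better than a coin flip.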
In the game, Newton threw for 4.1 yards per pass attempt, well below his season average. That played a big role in Carolina’s loss to Denver, as the numbers held up against the eye test.