Guest column by Benjamin Robinson
Every April, members of the NFL media put the finishing touches on their draft-grade columns as the name of the last player selected, the player lovingly referred to as “Mr. Irrelevant,” is read. But how do we know the grades themselves aren’t the real “Mr. Irrelevant” here? Are post-draft grades related to future value in the NFL at all?
Just like with my work on the value of mock drafts, the wisdom of the crowd seems like a good methodology to use to answer this question. In that vein, I put together a panel of draft graders from a sample of NFL media members covering the 2012 to 2017 NFL drafts. My panel included:
- Chris Burke, Sports Illustrated (now at The Athletic)
- Nate Davis, USA Today
- Vinnie Iyer, Sporting News
- Dan Kadar, SB Nation (currently a free agent)
- Mel Kiper Jr., ESPN
- Mark Maske, The Washington Post
- Pete Prisco, CBS Sports
- Rob Rang, NFL Draft Scout (now at Sports Illustrated)
- Evan Silva, Rotoworld (now at Establish the Run)
These analysts’ grades were also used in Football Outsiders’ annual NFL Draft Report Card series (see the most recent version of that here). In addition, I include a “Median Grader” meant to represent the panel’s overall views on each draft class.
Before digging into the future value of each draft class, I want to explore the grades to see if I can spot any systematic biases, such as grade inflation, in the data. Are NFL media members consistently too positive, too negative, or just right in terms of the distribution of their outlooks for each draft class?
Looking at how the panel of media members evaluated each draft class, it seems they had limited interest in giving out high or low grades, with the modal grade being around a B. This makes sense from a risk-aversion standpoint: graders would rather be lukewarm in their praise lest they be criticized for giving a poor grade for a draft class that outperforms expectations.
A great example of this is the Seattle Seahawks’ 2012 draft class. That class, which received a median grade of C from the panel, included cornerstones for the team on offense (Russell Wilson) and defense (Bobby Wagner) who were key contributors to consecutive Super Bowl appearances and have combined for 12 Pro Bowl selections in eight NFL seasons. It is safe to say the public is more likely to remember an analyst’s misses than their hits, so why not play it safe and minimize the risk of being attacked for a bold grade?
To draw any proper conclusions from this skewed data, I have to grade these drafts on a curve. A B means little when B is the modal grade, so the data needs to be transformed into a more normal distribution. I performed this transformation, known as normalization, by first converting each letter grade into a numerical value using the College Board’s Grade Point Average reference scale.
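As a rough sketch of that step, the conversion and normalization might look like the following. The exact GPA mapping and the choice to z-score each grader against their own mean are assumptions on my part; the article only specifies the College Board scale as the starting point.

```python
# Common letter-grade-to-GPA mapping (assumed; the article cites the
# College Board reference scale but does not list the values).
GPA = {"A+": 4.0, "A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0, "B-": 2.7,
       "C+": 2.3, "C": 2.0, "C-": 1.7, "D+": 1.3, "D": 1.0, "F": 0.0}

def normalize(grades):
    """Convert one grader's letter grades into z-scored GPA values,
    so a 'B' from a stingy grader counts for more than a 'B' from a
    generous one."""
    vals = [GPA[g] for g in grades]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    sd = var ** 0.5
    return [(v - mean) / sd if sd else 0.0 for v in vals]
```

Normalizing per grader also washes out individual tendencies, such as one panelist simply grading harder than another.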
Now that we’ve normalized the grades, let’s take a closer look at the data, starting with the panel.
Panel members vary quite a bit in how they grade. Some graders’ distributions (mainly Burke’s and Rang’s) look close to normal. Most panelists (Kadar, Silva, Maske, Kiper Jr., and Prisco) took a much more conservative approach, while the rest of the group (Davis and Iyer) had more inconsistent distributions.
Let’s also take a look at team-by-team grades to see if any biases exist there.
Some teams do get better grades overall than others, but even those teams show quite a bit of variation in their normalized draft grades, meaning no team was consistently graded better or worse than the others. Overall, the Minnesota Vikings (Rick Spielman has garnered a lot of respect over the years), Jacksonville Jaguars (give David Caldwell some props), Cincinnati Bengals (this coincides with their last run of contention), and Baltimore Ravens (Ozzie Newsome, enough said) were lauded most often, while the Carolina Panthers (David Gettleman strikes again), Buffalo Bills (pre-Brandon Beane), Cleveland Browns (management turnover galore), and Dallas Cowboys (the pre-Will McClay era of Jerry Jones) earned less praise.
How do I quantify future value? To avoid reinventing the wheel, I use Pro Football Reference’s Approximate Value (AV) metric (see here for more on how Approximate Value is calculated). Since few publicly available football metrics cover every player on the field and measure value on a single scale (the way Wins Above Replacement does in baseball), AV is the best option analysts have.
A plus of AV is that it covers every position, so we don’t have to rely on biased “counting” metrics that poorly capture play at positions such as offensive line, where players don’t accumulate those kinds of statistics. If I were to explore another metric, I would look into the total number of snaps (from Football Outsiders) a player has accumulated. However, because AV and snap count are strongly correlated at the draft-class level, I won’t be using snap counts in this analysis. That correlation makes sense given AV’s methodology and the simple hypothesis that more “valuable” players tend to play more snaps than less valuable ones.
Our main measure of future value is the total career AV of all the players in a draft class, divided by the number of picks in the class (more players mean more opportunities to accumulate value) and by the number of seasons since the class was drafted (more seasons allow more value to accumulate). With that in hand, I can look at how normalized draft grades relate to the future value of each class.
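The measure described above reduces to a one-line calculation. This is a minimal sketch; the function name and inputs are illustrative, not the author’s actual code.

```python
# Future-value measure: total career AV of a draft class, per pick, per season.
def class_value(player_avs, draft_year, current_year):
    """player_avs: list of career AV totals, one entry per pick in the class."""
    picks = len(player_avs)
    seasons = current_year - draft_year
    return sum(player_avs) / picks / seasons
```

For example, a four-pick 2012 class whose players have accumulated 72 total AV through 2020 would score 72 / 4 picks / 8 seasons = 2.25 AV per pick per season.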
So the takeaway here is that there is essentially no relationship between draft grades and future value. If there were a positive relationship, we would expect an upward-sloping trendline of AV as normalized grade point averages increase. Instead, the line is flat, indicating that panelists get the future value of draft classes wrong at least as often as they get it right at every GPA. Could there be a silver lining, though? Perhaps some of our panelists are more deserving of our attention than others? The answer is a resounding no. A few panelists show upward trendlines in places, but those are mostly artifacts of small sample sizes at either end of the grade distribution.
Grinding Without the Grades
We could end this article right here, but that would be disappointing. There has to be a better way to predict future value than draft grades alone, using the information available about a class as it’s drafted. With that in mind, I built a preliminary linear regression model to predict draft class AV. After running multiple models, I developed one that explains about 60% of the variation in AV. This improved model includes normalized draft grades but, more importantly, uses the team that drafted the class (relative to an average team), the number of years since the class was drafted, and the overall draft capital used to select the class (using the draft value chart made by Jason Fitzgerald of Over the Cap and Brad Spielberger of Pro Football Focus).
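The structure of that regression can be sketched as follows. This is purely illustrative: the data here is synthetic, and the variable names, dummy coding, and fitting method are my assumptions about the setup, not the author’s actual model.

```python
import numpy as np

# Illustrative regression structure: AV per pick per season as a function of
# normalized grade GPA, years since the draft, draft capital, and team dummies.
rng = np.random.default_rng(0)
n = 192  # e.g., 32 teams x 6 draft classes (2012-2017)

grade_gpa = rng.normal(0, 1, n)        # normalized draft grade
years_since = rng.integers(3, 9, n)    # seasons since the class was drafted
capital = rng.uniform(500, 3000, n)    # total draft-chart value of the picks
team = rng.integers(0, 32, n)          # index of the drafting team

# One dummy column per team, dropping the first as the baseline
# ("relative to an average team") to avoid the dummy-variable trap.
team_dummies = np.eye(32)[team][:, 1:]

X = np.column_stack([np.ones(n), grade_gpa, years_since, capital, team_dummies])
y = rng.normal(2.0, 0.5, n)            # placeholder outcome: AV/pick/season

coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With real data, inspecting the team-dummy coefficients in `coefs` is what would reveal the team effects the next paragraph describes.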
The results showed that the largest coefficients were associated with which team drafted a player more than with anything else. It is no big surprise, then, that the top five model coefficients were the dummy variables for teams that have played in Super Bowls in recent memory and have strong quarterback play: the New England Patriots, Carolina Panthers, Seattle Seahawks, Kansas City Chiefs, and Atlanta Falcons.
Using the results of this model, we can see that some classes truly rise above the rest. Particularly of note are the 2012 Seattle Seahawks draft class, referenced earlier, and the 2014 Oakland Raiders draft class, which included Khalil Mack and Derek Carr. We can also learn something from classes that failed to meet expectations. For example, the 2013 Miami Dolphins draft class (in which they surprised many by trading up to draft Oregon edge rusher Dion Jordan with the third overall pick) and the 2014 Houston Texans draft class (in which they unsurprisingly selected South Carolina edge rusher Jadeveon Clowney first overall) both fell short of expectations despite having high draft picks and ample time to develop.
Ultimately, the lesson here is a simple one: there are many things that are more important for predicting a draft class’s future value than draft grades. With no offense intended to the esteemed members of my panel, predicting the future is very hard! Maybe the bard, William Shakespeare, was thinking of draft grades when he was penning the great tragedy Macbeth:
Life’s but a walking shadow, a poor player
That struts and frets his hour upon the stage,
And then is heard no more. It is a tale
Told by an idiot, full of sound and fury,
Signifying nothing.
Benjamin Robinson is a data scientist living in Washington, D.C., and the creator of Grinding the Mocks, a project that tracks how NFL prospects fare in mock drafts. You can follow him on Twitter @benj_robinson.
Want to use NFL Draft Grades for your own analysis? You can click here to access the dataset.