A couple of years ago I put out my first draft projection in advance of the 2014 NBA draft. Now that a few years have gone by, I can run a preliminary test of the draft model on that 2014 draft class. The basic idea of the test is to compare each player's actual performance, as measured by my minutes-adjusted AWS efficiency metric in his third and fourth years (just the third year for this test of the 2014 draft), to the rank assigned by my model and to the player's spot in the actual NBA draft order.
I ran a similar test a couple of years ago on the 2012 draft class, which was out of sample for my analysis at the time, against the performance of the actual draft. In that case the model outperformed the NBA drafters, at least as measured by the metric I use for my draft modelling.
In terms of average error, without regard to direction, the model's out-of-sample projections came slightly closer to how players rank by actual performance this year than their draft positions did. The average error for the model was 6.9 spots from the "actual rank," while the average error for the draft order was 8.1 spots. The full table is at the bottom of the page.
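For concreteness, here is a minimal sketch of that error calculation in Python. The column names and example rows are hypothetical placeholders, not the real data; the actual inputs are the ranks in the full table at the bottom of the page.

```python
import pandas as pd

# Hypothetical rows: one per player, with the model's rank, the spot
# where the player was actually drafted, and his rank by third-year
# AWS performance.
df = pd.DataFrame({
    "player":      ["Player A", "Player B", "Player C"],
    "model_rank":  [1, 15, 30],
    "draft_rank":  [10, 2, 25],
    "actual_rank": [5, 12, 28],
})

# "Average error without regard to direction" = mean absolute
# difference between each projected rank and the actual rank.
model_error = (df["model_rank"] - df["actual_rank"]).abs().mean()
draft_error = (df["draft_rank"] - df["actual_rank"]).abs().mean()

print(f"Model average error: {model_error:.1f} spots")
print(f"Draft average error: {draft_error:.1f} spots")
```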
I want to concentrate first on the biggest misses and biggest hits for the model compared to where players were selected in the draft. I think those cases offer some of the most interesting lessons on what the model tells us relative to the draft decision-makers.
- Jordan Adams, -21 places worse: My initial model loved Adams, placing him at the top of the class. Unfortunately Adams suffered a major leg injury that ended his NBA career, leaving him among the least productive players in the draft class. Injuries are a risk of the game and of the draft. Whether you should include players with career-ending injuries in the model, however, depends on what question you are trying to answer. A model of the average expected value of a pick should include them, because injury is part of the draft risk. Translating college stats to NBA performance is a dicier question. There was nothing in Jay Williams's playing stats that would have told you he would end his career early in a motorcycle accident; in that case, a significant missing variable lies outside the question the model is trying to answer. In any event, Adams's rookie year was middling for a rookie, so it's not clear where he would be in his career right now had he stayed healthy.
- Jarnell Stokes, -17 places worse: Stokes is an interesting case. He was a box score monster in the NCAA, but his fit with the modern NBA is problematic as a big man who neither excels at protecting the rim nor has shooting range. He is under contract with Denver, but doesn't play.
- Damien Inglis, -18 places worse: Inglis was one of the youngest players in the draft and had decent numbers against grown men in France, a big factor in his rating. Inglis also suffered a fairly major injury in his rookie year. On the other hand, he has yet to show significant development since returning to basketball that would lead one to think he warranted a high draft selection.
- Zach LaVine, -13 places worse: LaVine is actually performing better, at least as measured by my metric, than either his draft slot or the model's projection of him as the 7th best player in his draft class. LaVine is scoring efficiently and torching it from three, both at a higher rate than we saw at UCLA.
- Rodney Hood, -12 places worse: Hood has also been better than either his draft spot or the model would have predicted.
- Nikola Jokic, +31 in rank error: Jokic rates as a top player in AWS terms this year, and as a passing young big man he was rated very highly by my model. Often players who dominate the box score and show flashes of everything but lack overt athleticism just don't stick in the NBA. However, an interesting indication from a different study I did, looking at "star" players and "busts" with both scout rankings and on-court performance, was that performance was relatively more important for becoming a "star," while scouting ratings were relatively more important for avoiding becoming a "bust." So guys who put up numbers but aren't as good on the eye test may be the real high-variance plays.
- Clint Capela, +19 in rank error: Capela has turned into a good young big man and rates 4th in AWS among this class. It is arguable that Capela is the kind of player a box-score-based rating overrates, but he is definitely better than the 22nd selection, where he was picked.
- Kyle Anderson, +15 in rank error: Anderson, nicknamed Slow-Mo, is perhaps the ultimate eye-test-versus-analytics player. In this case his efficiency rating falls between the two measures, but closer to the model's rating, whether or not the Spurs' development machine plays into it.
- Nik Stauskas, +14 in rank error: Stauskas has resurrected his career to an extent in Philly. However, he is still not one of the more productive players in his class, ranking 24th by AWS. In this case, we shouldn't necessarily have trusted the crowd-sourced process.
- Jerami Grant, +11 in rank error: Grant was rated as a late first-round talent, which is about what he's been.
- Honorable mention to Adreian Payne, +10, as an older player who didn't dominate in college and looked like a reach in the middle of the first round.
I should note that here we're testing the model against the same target variable it was trained on, which gives the model an advantage even out of sample. So I also ran a quick check of the model against win shares from Basketball Reference. In that case the gap was much closer, with an average error of 7.2 places for the model and 7.4 for the draft.
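Continuing the sketch above, swapping in the alternate target is a small change; the win-shares values here are again hypothetical placeholders rather than real Basketball Reference numbers.

```python
# Swap the target variable: rank the class by win shares
# (hypothetical values) and recompute the same average errors.
df["win_shares"] = [20.0, 8.5, 1.2]
df["ws_rank"] = df["win_shares"].rank(ascending=False, method="min")

model_error_ws = (df["model_rank"] - df["ws_rank"]).abs().mean()
draft_error_ws = (df["draft_rank"] - df["ws_rank"]).abs().mean()

print(f"Model vs win-shares rank: {model_error_ws:.1f} places")
print(f"Draft vs win-shares rank: {draft_error_ws:.1f} places")
```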
Below is the full table:
[Tableau embed: full 2014 draft class table comparing model rank, draft position, and actual AWS rank]