Last year I explored a benchmark system for draft prospects by Ed Weiland. The benchmarks track whether a prospect reaches minimum statistical marks in a number of different categories. In looking at Weiland's benchmarks, I found that the benefit of a benchmarking system is that it highlights potential red flags and rewards versatility in a way most draft models do not. The downside is the loss of information inherent in a simple pass/fail test: outstanding efficiency is treated the same as adequate efficiency. Another important factor may be the simplicity of conveying the information; meeting or not meeting specific benchmarks may be easier to explain to less analytically inclined readers than the output of a regression or machine learning model.
This year I am introducing my own benchmarking system, designed to work without picking a position for every player, a requirement that can make other systems less reliable for prospects with positional versatility. I am using four benchmarks:
- Scoring: Does the player score 21 points per 80 possessions?
- Efficiency: Do the player's two-point percentage and free-throw percentage add up to 1.25? (This allows positional diversity: players must demonstrate either highly efficient inside scoring or some evidence of shooting ability.)
- Offensive Activity: Do assists plus offensive rebounds equal at least 6? (The two have a strong negative correlation, reflecting the trade-off between size and skill.)
- Defensive Activity: Do steals plus blocks equal 2.5 or more?
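The four checks above can be sketched as a small function. The stat-line keys (`pts`, `fg2_pct`, etc.) are assumptions for illustration; the source only specifies the thresholds, not a data format.

```python
def benchmarks_met(p):
    """Return which of the four benchmarks a prospect's per-80-possession
    stat line meets. `p` is a dict with assumed keys: pts, fg2_pct,
    ft_pct, ast, orb, stl, blk."""
    return {
        "scoring": p["pts"] >= 21,                          # 21 pts per 80
        "efficiency": p["fg2_pct"] + p["ft_pct"] >= 1.25,   # 2P% + FT%
        "offensive_activity": p["ast"] + p["orb"] >= 6,     # AST + ORB
        "defensive_activity": p["stl"] + p["blk"] >= 2.5,   # STL + BLK
    }

# Hypothetical stat line that clears all four marks
line = {"pts": 24.0, "fg2_pct": 0.58, "ft_pct": 0.75,
        "ast": 3.5, "orb": 3.0, "stl": 1.2, "blk": 1.5}
met = benchmarks_met(line)
pct_met = sum(met.values()) / 4  # fraction of benchmarks reached
```

The fraction of benchmarks met (`pct_met`) is the "percent of benchmarks reached" figure used in the model comparison below.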
Comparison to Draft Model
While the point of the benchmarks is not to displace or mimic the methodology of a trained model, I did test both the individual benchmarks and the total percentage of benchmarks met against the data used for my draft models. In both cases the benchmark data, combined with age, had significant positive correlations with measures of NBA success (though weaker than the more detailed model's). Offensive Activity showed the strongest relationship, followed by Defensive Activity, Efficiency, and Scoring in descending order. Also notable: when combined with the draft-model data, the percentage of benchmarks reached had a modest positive effect, consistent with other measures of versatility I have tested in the past.
In a boosting regression using the two models as predictors, the relative importance in predictive power was 80% for the trained model and 20% for the benchmarks. (The measure used was the maximum performance in years two through four, as measured by a combination of box score stats and RAPM.)
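A comparison of that shape can be sketched with scikit-learn: fit a gradient boosting regressor on two predictor columns and read off the feature importances. The data here is synthetic and all names are illustrative, not the author's actual dataset or split.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500
model_score = rng.normal(size=n)       # stand-in for the trained model's output
benchmark_pct = rng.uniform(0, 1, n)   # fraction of the four benchmarks met

# Synthetic outcome dominated by the trained-model column, plus noise
y = 0.8 * model_score + 0.2 * benchmark_pct + rng.normal(scale=0.1, size=n)

X = np.column_stack([model_score, benchmark_pct])
gbr = GradientBoostingRegressor(random_state=0).fit(X, y)

# Normalized importances; the first (model) column should dominate here
importances = gbr.feature_importances_
```

With real data, `y` would be the outcome measure described above (max performance in years two through four), and the importances would give the 80/20-style split reported in the text.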
Examples from this Prospect Class
Looking at the benchmarks applied to this year's top prospects should give a better idea of how they work and what information they can reveal.
Six major prospects make all four benchmarks: DeAndre Ayton, Luka Doncic, Wendell Carter, Shake Milton, Josh Okogie, and Gary Clark.
Doncic, Ayton, and Carter are well known to anyone even casually following the draft. Milton and Clark are the older prospects on this short list, which takes a little of the shine off of their making the benchmarks. All of the prospects that meet every benchmark also rate reasonably well in my traditional model, placing in its top 30.
On the other side, three significant candidates fail to meet a single benchmark: Lonnie Walker, Sviatoslav Mykhailiuk, and Hamidou Diallo. The highest-rated of these are Walker and Diallo, both projected in the first round by ESPN; Walker is the only one of the three to rate in the top 40 via my traditional model.
Age is the big factor not explicitly addressed by the benchmarks. A 23-year-old hitting all four benchmarks is still not necessarily a strong prospect, and an 18-year-old hitting only two is not necessarily a non-prospect. In the attached link, I added a column that factors the prospect score by age in order to get a more appropriate evaluation.
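The actual age column lives in the linked spreadsheet; the sketch below is a purely hypothetical version of that idea, with a made-up baseline age and decay rate, just to show how the same benchmark count can be discounted as age rises.

```python
def age_adjusted_score(pct_benchmarks_met, age, baseline=19.0, decay=0.1):
    """Scale the fraction of benchmarks met by a hypothetical age penalty.

    `baseline` and `decay` are illustrative parameters, not the
    author's actual formula.
    """
    penalty = max(0.0, age - baseline) * decay
    return pct_benchmarks_met * max(0.0, 1.0 - penalty)

# The comparison from the text: a young two-of-four prospect can come
# out close to an older four-of-four prospect after the adjustment.
young = age_adjusted_score(0.5, 18)   # 18-year-old, two of four benchmarks
older = age_adjusted_score(1.0, 23)   # 23-year-old, all four benchmarks
```

Under these made-up parameters the 18-year-old keeps his full 0.5 while the 23-year-old's 1.0 is discounted toward it, which is the direction of adjustment the paragraph describes.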