Most NBA teams passed the twenty-game mark over the holiday weekend, approximately a quarter of the season. Statistically, however, we are closer to the halfway point, at least in the sense that a team’s average margin of victory (MOV) should predict a bit over half of the variation in win percentage over the rest of the season.

In some ways that’s a low bar to clear, since, based on history, margin of victory to date also won’t explain nearly 50% of the typical team’s win percentage variation from here on out. To give you a sense of the relationship, below is a plot of MOV in the first twenty games against win percentage over the rest of the season for 2010 to 2017.
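For readers who want the mechanics, a one-variable OLS fit and its R^2 are all that sits behind a plot like this. A minimal sketch, using made-up (MOV, rest-of-season win%) pairs rather than the actual 2010-2017 team data:

```python
def ols_r2(x, y):
    """One-variable OLS: returns (slope, intercept, R^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot

# Hypothetical (MOV through 20 games, rest-of-season win%) pairs
mov = [-7.9, -3.0, 0.0, 2.5, 5.0, 11.7]
rest_win_pct = [0.40, 0.45, 0.50, 0.55, 0.62, 0.68]
slope, intercept, r2 = ols_r2(mov, rest_win_pct)
```

On the real data the R^2 for MOV through twenty games comes out a bit over .5, per the discussion above; the toy pairs here are far cleaner than real team seasons.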

For those fans whose favorite team has been sub-par to date (hi Clips fans!), you can take solace in outliers above the line like the 2013-2014 Brooklyn Nets, who came out of the twenty-game mark at 6 and 14 with an MOV of -7.9, but still won 61% of their games going forward.

And for fans of teams exceeding expectations, before you get ahead of yourselves (you know who you are), just remember the 2011-2012 Philadelphia 76ers, who came in at 14 and 6 with a plus 11.7 MOV, yet managed to win less than half of their remaining games.

Another interesting thing is that, at this point, each team’s winning percentage is almost as good a predictor of rest-of-season performance as its MOV. In the last six seasons, winning percentage after twenty games has an R^2 of .489 with win percentage over the remaining schedule.

In some ways that’s not that surprising: the whole reason we’re interested in MOV is that it’s correlated with winning and stabilizes more quickly than wins and losses. But what we find in this analysis is that the gap by now is pretty small in terms of predicting the rest of the year.

To go further: a few years ago Benjamin Morris found on his site that, using the MOV and record from the other 81 games in a season, both winning percentage and MOV were statistically valid predictors, and that using both improved the prediction. So I wanted to use the MOV and win percent from just the first twenty games to predict rest-of-season wins. In an OLS model on the six seasons, as well as on various subsets of the data, and in a Partial Least Squares regression, both MOV and Win Pct were statistically significant predictors, and using both very marginally improved the prediction.

In the combined model, 67% of the prediction came from the MOV and 33% from a team’s win percentage, with some variation in the subsamples.1 Given how well correlated the two predictors are, it is tough to draw too much from the split, though it was encouraging to see the relative stability in the sub-samples. On the other hand, using bootstrap resampling, whether Win Pct was significant at the 95% level depended on which bootstrap method was chosen.
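For the curious, a percentile bootstrap works by refitting on many resampled versions of the data and asking whether the 95% interval for a coefficient excludes zero. A simplified sketch with a single predictor and made-up data (the real check resamples the full two-variable model):

```python
import random

def slope(x, y):
    """One-variable OLS slope."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

random.seed(0)
# Hypothetical (win% through 20 games, rest-of-season win%) pairs
x = [0.30, 0.70, 0.45, 0.55, 0.60, 0.35, 0.50, 0.65, 0.40, 0.75]
y = [0.38, 0.63, 0.47, 0.52, 0.58, 0.42, 0.50, 0.61, 0.45, 0.66]

boot = []
while len(boot) < 2000:
    idx = [random.randrange(len(x)) for _ in x]  # resample pairs with replacement
    if len(set(idx)) > 1:  # skip degenerate samples with zero x-variance
        boot.append(slope([x[i] for i in idx], [y[i] for i in idx]))
boot.sort()
lo, hi = boot[50], boot[1949]  # rough 95% percentile interval
significant = not (lo <= 0.0 <= hi)
```

With data this clean the interval easily excludes zero; with the real, highly collinear MOV and Win Pct predictors, the interval can straddle zero depending on the bootstrap variant, which is the ambiguity described above.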

So the most we can probably say is that average margin of victory is the stronger indicator of team quality at the twenty-game mark, but that Win Pct is ***probably*** something of an indicator as well. Given these two factors, if your favorite team is under-performing in close games, while that will mostly even out, it may not entirely.

The differences between the two models applied to the rest of the season are pretty small for most teams. Teams that have outperformed their MOV to date, like the Boston Celtics and Detroit Pistons, are projected to win just over one more game over the rest of the season than their MOV alone would indicate. The biggest movers by some distance are the Thunder, projected to win about three fewer games from here on out than in the MOV-only model.

Why winning might be a skill is tough to say definitively. Other research offers some interesting hypotheses, such as indications that teams in the lead simply relax and give up part of it, even controlling for who’s on the floor, or Ben Falk’s finding that projections improve when garbage time is excluded from the data. But there is probably need for more research.

Below are the model projections using both the MOV only and combined Win Pct and MOV model along with the differences:

A couple of years ago Evan Zamir built a model to convert Dean Oliver's Four Factors of basketball, effective field goal percentage (eFG%), rebounds, turnovers, and free throw rate, to net point differential. Last year I applied Zamir's formula to regressed early season four factor numbers to derive point differentials for each team.

This year I decided to redo Zamir's analysis using more recent seasons, as Zamir's had been done with seasons from the Tim Duncan and early Garnett Celtics era. One thing I noticed about Zamir's numbers is that there was slightly more weight on the defensive side of the four factors.

By contrast, in my similar analysis, regressing the four factors on both sides of the ball as eight variables against point differential by team over the last four years, I found that the offensive factors explained more of the net point differential than the defensive side. The breakdown, as shown below, is close to 55/45 in favor of offense.

The raw coefficients look a little closer than that. For example, the raw offensive eFG% coefficient is only 1.5% larger than the defensive coefficient. But the spread between the best and worst eFG% teams has been much larger on offense than on defense over the last four years. So when we standardize the variables, it indicates that eFG% has contributed more to winning and losing on offense over the last four years.
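The arithmetic behind that point is simple: a factor's contribution scales with its raw coefficient times its spread. A sketch with assumed coefficients and eFG% values chosen to mimic the situation described above, not the fitted numbers:

```python
def stdev(xs):
    """Sample standard deviation."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

# Assumed: offense's raw coefficient is ~1.5% larger, but the league
# spread in offensive eFG% is much wider than in defensive eFG%
off_coef, def_coef = 1.015, 1.000
off_efg = [0.48, 0.50, 0.52, 0.54, 0.56]    # wide spread across teams
def_efg = [0.51, 0.515, 0.52, 0.525, 0.53]  # narrow spread across teams

off_impact = off_coef * stdev(off_efg)  # standardized contribution
def_impact = def_coef * stdev(def_efg)
```

Even with nearly identical raw coefficients, the wider offensive spread makes the standardized offensive impact several times larger.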

The table below has the model results with the raw coefficients, standard deviation for each factor, and the contribution to variance. In addition there is a column that gives the cumulative percentage by offense and defense.

The second big difference comes from the gap between offensive rebounds and defensive rebounds, which is somewhat surprising given the low regard into which offensive rebounding has fallen for most teams.

In this case it is both the raw coefficient and the gap between teams that increase the variation. That is definitely interesting, and casts some question on how we value, or don't value, offensive rebounding, especially from centers who can grab o-rebs without messing up a team's defensive floor balance. But it doesn't tell the whole story: the reason teams have cut down on offensive rebounds is to put more shooters on the floor to increase their eFG%, and to focus on getting back on defense to cut down their opponent's eFG%, the number one and two factors in winning.

The only case where the defensive factor has more impact than its offensive counterpart is free throw rate. In that case we probably have Dwight Howard, Andre Drummond, DeAndre Jordan and every other Hack-a victim to blame.

We can then apply the Four Factors Point Differential Model to this season to date as an out-of-sample test, and the model performs almost as well as on the training data. The image below is the model compared to the margin of victory via Basketball Reference as of November 15th, with an R^2 of 96%.

The one little dot higher on the model estimate than the current MOV in the middle of the chart? Dwight Howard's Charlotte Hornets, where he's taking 30% of their free throw attempts and hitting only 41%. Otherwise the model explains point differential pretty well.

Lastly, here are a couple of the Added Variable Plots from the regression to get a visual of the difference in effect:

Contrasted with the less tightly aligned plot for ORebs.

And lastly, the offensive free throw rate, or FTA/FGA:

The outlier dot on the lower left happens to be the Dwight Howard 2015 Houston Rockets.

Small sample size theater time, those first fifteen games or so of the season, is the period of the basketball calendar that gives me the most conflicted feelings. Standout rookie performances are exciting! Players traded in the offseason underperforming is interesting! All of which is counterbalanced by the overreactions to minuscule on/off splits and unsustainable shooting streaks.

It’s less fun to play sample size cop on Twitter than you might think. But, there are two areas of expected regression that almost always deserve highlighting about this time of the year, opponent free throw percentage and opponent three point percentage.

Free throw defense is not a thing. Not only is there no correlation year to year, there is very little spread between teams by the end of the year. Last year opponents’ free throw percentage had a coefficient of variation (COV) of 1.1%. As of yesterday that COV was 3.8%, over three times as spread out.

Likewise, three point defense is much less of a thing than it appears at this time of the year. The spread between teams at the end of last year was a 3.7% COV, as opposed to a 10.2% COV as of Tuesday. In both cases, by far the most reasonable expectation is that the outliers will regress significantly toward the mean.
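The COV itself is just the standard deviation divided by the mean. A quick sketch with hypothetical opponent FT% values, tightly bunched for the end-of-season case and widely spread for the early-season case:

```python
def cov(xs):
    """Coefficient of variation: sample SD divided by the mean."""
    m = sum(xs) / len(xs)
    sd = (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5
    return sd / m

# Hypothetical opponent FT% by team
end_of_season = [0.765, 0.770, 0.775, 0.760, 0.772]  # tightly bunched
early_season  = [0.700, 0.820, 0.745, 0.790, 0.680]  # much more spread out
```

The early-season spread here comes out several times larger than the end-of-season spread, which is the pattern the real numbers above show.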

If we take that expectation and apply it to each team, we can get a sense of the degree of noise in the current team performances, at least on the defensive side of the ball. The table below has the top ten regression candidates in the downward direction.

The Defensive Efficiency Adjustment is calculated as if the team's opponents had shot at a league-average rate on both free throws and three point attempts so far, with some mitigation then applied via a generalized expectation of opponent offensive rebounds on the extra misses.
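To make the idea concrete, here is a rough sketch of that style of adjustment, not my exact formula: opponent free throws and threes are re-scored at league-average rates, and a generic offensive-rebound rate hands back some value on the extra misses. All inputs and rates are hypothetical:

```python
def adjusted_points_allowed(fta, tpa, tpp, other_pts,
                            lg_ftp=0.77, lg_tpp=0.36, oreb_rate=0.23):
    """Re-score opponents' FTs and 3s at league-average rates, then hand
    back roughly a point per offensive rebound on the extra 3P misses."""
    ft_pts = fta * lg_ftp                 # FTs at league-average accuracy
    tp_pts = tpa * lg_tpp * 3             # threes at league-average accuracy
    extra_misses = max(0.0, tpa * (tpp - lg_tpp))
    putbacks = extra_misses * oreb_rate   # mitigation for extra o-rebounds
    return ft_pts + tp_pts + other_pts + putbacks

# Opponents shooting a hot 42% from three and 80% from the line so far
raw = 300 * 0.80 + 400 * 0.42 * 3 + 800
adjusted = adjusted_points_allowed(fta=300, tpa=400, tpp=0.42, other_pts=800)
```

For a team whose opponents have run hot, the adjusted points allowed come out meaningfully lower than the raw number, with the o-rebound term clawing back only a small fraction.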

Below are the ten teams on the other side of basketball fortune so far. The Cavs may have reason to be a bit less worried than the overall defense numbers might indicate, as they have had the worst bounces in these two noisy measures, with the Phoenix Suns right behind.

To be clear, opponent free throw percentage and opponent three point percentage are not the only measures that are noisy at this point of the season, on offense or defense. The number of threes or free throws surrendered will itself settle out over the season, as will turnovers and rebounds. But those measures have at least somewhat more solidity to them at this point. So simply regressing, say, Utah's two point defense to league average, without weighing how good their rim protection has been over the last two years, probably does little to help actually understand their expected trajectory.

Free throw defense and three point defense act close enough to random for us to pump the brakes a little on the Orlando banner raising and at least some of the Cavs' October panic.

Given that the free agency period is winding down I decided to check in on the performance of the free agency models I built. Using data from the market over the last three years I built two different models to predict the average annual valuation of the contracts for this year’s free agent crop. One is a regression model. The other is a Bayesian machine learning variation of a random forest model.

The primary factors in both models for predicting AAV were Win Shares, Age, Usage, cap spike and playing time. For both models I used the percent of the salary cap in the first contract year as the target variable. The benefit of the regression model is that it gives straightforward coefficients, while the ML model gives the “importance” of each variable. However, the R stats package I worked with also provides a partial dependence graph that gives an idea of the shape and direction of each variable’s influence in the context of the model. (Context matters, since the partial dependence is shaped by the other variables included in the model as well as the sample being modeled.)
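For anyone unfamiliar with partial dependence: for each grid value of a variable, you set every observation's value of that variable to the grid point, run the model on all of them, and average the predictions. A sketch using a hypothetical stand-in function (not the fitted forest) and made-up players:

```python
def partial_dependence(model, rows, var, grid):
    """For each grid value: fix `var` at it for every row, average predictions."""
    out = []
    for v in grid:
        preds = [model({**row, var: v}) for row in rows]
        out.append(sum(preds) / len(preds))
    return out

# Hypothetical stand-in model: value flat until age 29, then a steep decline
def toy_model(row):
    base = 5.0 + 2.0 * row["win_shares"]
    penalty = 3.0 * max(0, row["age"] - 29)
    return max(0.0, base - penalty)

players = [{"age": 25, "win_shares": 3}, {"age": 31, "win_shares": 6}]
pd_age = partial_dependence(toy_model, players, "age", [25, 29, 33])
```

The resulting curve is flat from 25 to 29 and then drops, which is the general shape the age variable shows in the actual model below.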

Below are a couple of the more interesting variables in the ML model.

The age variable, for example, shows little effect on the percent of cap of a player’s contract until he hits twenty-nine, after which it declines quickly.

Win Shares also shows a nonlinear pattern in the model, taking off at around two.

And while minutes played looks relatively linear, being a starter takes one jump right around 41 games started, then stays flat.

In addition to learning about the free agency market, part of my motivation was to expand my modeling skills and experiment with an ML model. And, of course, I was interested to see which one would get better results out of sample. To measure overall success I used a simple mean absolute error (MAE), the average error regardless of direction. The ML model has so far slightly outperformed the regression model, with an MAE of 3.4 million dollars for the regression and 3.3 for the ML. But, as it turns out, the error of the two models averaged together is slightly better than either, at 3.2.
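The MAE and the blended model are both one-liners. A sketch with hypothetical contract values (in millions), set up so the two models miss in opposite directions and the average cancels some error, which is the mechanism behind the blend's edge:

```python
def mae(pred, actual):
    """Mean absolute error: average miss regardless of direction."""
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(actual)

# Hypothetical AAVs in millions; the two models lean opposite ways
actual     = [10.0, 24.0, 5.0, 15.0, 8.0]
regression = [13.0, 20.0, 6.5, 16.0, 10.0]
ml_model   = [ 8.0, 27.0, 4.0, 13.0,  7.0]
blend      = [(r + m) / 2 for r, m in zip(regression, ml_model)]
```

In this toy example the cancellation is exaggerated; in the real data the gain from blending was only about a tenth of a million per player.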

For perspective, the error from simply guessing that every player gets the average contract is 7.7 million dollars per player, so the models net a decent improvement.

But it does look like there are some systematic errors between the model and this year’s market. To start, so far the model has overestimated the contracts of centers and underestimated the contracts of point guards on average. Below is the blend of the two models plotted by position.

Whether that is a part of the league’s continuing evolution, or a reflection on this year’s free agent group is tough to say.

When I then compared the residuals to out-of-sample individual statistics, I found that Usage was still undervalued while age and blocks were overvalued. Though 40-year-old Vince Carter’s one year, $8 million deal seems to be more or less responsible for the age effect.

Lastly, there are the individual outliers. In the cases where the model comes in much lower than the player's contract, it's not clear whether it's a poor projection by the model or an overpay by the team. Last year there were cases like Timofey Mozgov that proved to be a warning of an overpay. However, the models just give a rough baseline of where the market may fall. This year one of the biggest "overpays" via the model was Stephen Curry, who is not only part of the undervalued point guard class, but received a Super Max contract that did not exist in the training data. The next two "overpays" via the model, JJ Redick and Paul Millsap, are short term contracts that are probably a bit high on a per-year basis, but that is purposely mitigated by attaching fewer contract years. The last two, Blake Griffin and Jrue Holiday, could potentially be a bit more concerning for the signing teams given the length of the contracts. Both were projected to be about $9 million lower than their AAV, and were given five year deals to stay with teams that had little to no leverage.

The best value contracts (or where the model was the most over) were Luc Mbah a Moute and Ersan Ilyasova, at around $7 million less than projected. The link to the list is attached here.

I couldn’t quite wait for Summer League to be officially over to run the numbers and see what, if anything, we can take from the rookies’ performance. But with the Lakers sitting virtually everyone of note for the Vegas Championship, I figured it’s over enough. (Note to the NBA: Vegas Summer League is too long; this is why teams start sitting their lottery talent.)

In the link here, I have the full run-down of per-40 one-number performances via Kevin Ferrigan’s Daily RAPM Estimate (DRE) and Alt Win Score (AWS), a metric I use quite often in my draft models, with data from RealGM. But, to me, the real focus of Summer League has to be the rookies.

Some of that is informed by a Kevin Pelton ESPN Insider article indicating that Summer League adds a bit of predictive power for rookie performance, but not for 2nd-year players. It’s also plain that we just have less information on rookies, so new info is relatively more valuable, as is seeing them in a new team setting.

To get a quick estimate of what to make of the stats out of SL, and maybe provide a bit of perspective, I re-ran my rookie performance draft model, first substituting the SL stats for the college or European league stats, and then using the SL stats to update my original Rookie Model.

First, the Summer League Only (SLO) run. The SLO model seems to match the buzz coming out of Vegas very well, with Lonzo Ball on top, followed by Dennis Smith Jr. and Jayson Tatum at two and three. Laker fans might want to frame this table, with three Lakers prospects in the top 10.

For reference, I should add that the projections are scaled so that a 5 is roughly equal to average production, or a 0 in plus/minus terms, meaning the SLO model projects every rookie but Ball to be below average next year.

For Celtics fans SLO is more mixed: Tatum comes out third best, Zizic performed respectably, and, uh, Semi Ojeleye was also there.

It is hugely unfair to project Markelle Fultz based on what amounted to about 64 minutes of playing time. Though he was actually helped by the regression I applied, since he had performed below average in his brief court time.

So then, how much should we actually adjust our expectations, if at all? Going again from the Pelton article, I ran the rookie model with the SL numbers added, weighted at 25% of the pre-NBA numbers. This gives a much more realistic weighting of Summer League’s importance.
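The weighting itself is just a weighted average, with the SL stats counted at a quarter of the pre-NBA weight. A sketch with hypothetical per-40 numbers (the stat names here are stand-ins, not the model's actual inputs):

```python
def blend_rates(pre_nba, summer, sl_weight=0.25):
    """Per-stat weighted average: pre-NBA numbers at weight 1, SL at sl_weight."""
    total = 1.0 + sl_weight
    return {k: (pre_nba[k] + sl_weight * summer[k]) / total for k in pre_nba}

# Hypothetical per-40 stats: a hot Summer League only nudges the inputs
pre = {"pts40": 18.0, "reb40": 6.0}
sl  = {"pts40": 26.0, "reb40": 4.0}
updated = blend_rates(pre, sl)
```

Even a scoring binge of 26 points per 40 only moves the blended input up a couple of points, which is why the adjustments in the tables below are so modest.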

For example, in the tables below, ordered by the SL Updated Rookie Model (appropriately, perhaps: SLURM), Fultz is still in the top 2. And the most any player has been adjusted up is three tenths, and down is five tenths (again, the model results are scaled so that this is roughly the same as being projected to be .3 better in a plus/minus model).

And below is the 2nd half of the summer league rookies by SLURM:

Maybe, we can hold off on the Kuzma Rookie Of the Year ceremony and wait to bury Fultz or Zach Collins. At least until the second game of Preseason.

One way to get a sense of a player's strengths and weaknesses is to chart them against their peers. The visual can give a more intuitive feel than numbers by themselves, and it's much easier to take in information on a bunch of different players at a time.

I created one for the draft that groups the prospects' stats into categories that are mostly independent and represent different aspects of my draft model's evaluation. All the stats are standardized as z-scores, with average players scoring 0 and the spread between best and worst scaled by the standard deviation. That way the differences in each category are set to the same visual scale.

The stats are then put in a box plot using position as the grouping variable, so a player's assists, for example, are compared to players at the same position, instead of comparing a power forward's assists to a point guard's.
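Computing the z-scores within position groups is what makes the comparison apples-to-apples. A sketch with hypothetical assist rates, where a power forward with modest assists grades out ahead of a point guard with more:

```python
def group_z(values, groups):
    """Z-score each value against the mean and SD of its own group."""
    stats = {}
    for g in set(groups):
        xs = [v for v, gg in zip(values, groups) if gg == g]
        m = sum(xs) / len(xs)
        sd = (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5
        stats[g] = (m, sd)
    return [(v - stats[g][0]) / stats[g][1] for v, g in zip(values, groups)]

# Hypothetical assists per 40, grouped by position
assists = [8.0, 6.0, 4.0, 2.0, 3.0, 1.0]
pos     = ["PG", "PG", "PG", "PF", "PF", "PF"]
z = group_z(assists, pos)
```

Here the power forward with 3 assists lands a full standard deviation above his positional peers, while the point guard with 6 is merely average for his.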

Below is an image of the rebounding category, with Celtics picks Semi Ojeleye's and Jayson Tatum's marks in the chart highlighted with commentary.

The data in the link is *interactive*, so you can change the filters or hover over the screen to see more information on a particular data mark.

Among the highlighted stats for Jayson Tatum, he was:

- A very good scorer, among the best from the wing positions
- An above average rebounder at the three, but not great.
- Distribution was his biggest weakness, one of the worst wings in that rating.
- Above average for his position in blocks and steals rating.
- He was one of the youngest players in the draft, and played a very competitive college schedule.

For Semi Ojeleye highlights were:

- Very good rebounder as a three, combo forward potential
- One of the best wing scorers in the draft
- Weakness in blocks and steals
- Slightly above average for a wing in distribution
- Ojeleye is one of the older prospects in the draft.

It's been almost three years since I posted my models to predict the three point accuracy of players coming into the NBA, a post called "Predictions Are Hard: Especially About Three Point Shooting". Along with presenting the coefficients for the models, I also gave the results for some of the first round selections from that summer's 2014 draft class. (The results for some of this year's class are up at a Nylon Calculus post here.)

The models were trained on players' shooting percentages over their first four years in the league, so we're not *quite* at the time frame for an exact test, or retrodiction, of those projections. But it's too close for me not to take a little look.

To test the models I compared the mean absolute error (MAE) of the model projections against the MAE of the player's pre-NBA (either college or overseas) three point percentage, and against the MAE from assuming the player is a league-average shooter. The two models I presented performed virtually identically, both better than the player's own three point percentage and better than the simple average-shooter assumption. The models had MAEs of 3% and 3.1%, while the MAE for the player's pre-NBA three point percentage was 5.7%, and for the NBA average three point percentage it was 4.2%.

The results for individual players featured in the original post are shown below, with their Pre-Draft three point shooting, NBA performance and the two models projections, then the error for each measure.

Overall this has not been a good shooting class, with a 32.9% average player shooting percentage, though slightly higher, 33.4%, if weighted by attempts. The models were biased slightly high compared to actual performance, projecting 34.9%; the players' pre-draft three point percentage averaged 35.7%, and league average since 2002 was around 35.5% coming into this year. The biggest miss was on Adreian Payne, who never developed the three point shot he needed to establish his game. James Young was a similar miss, under-performing his projections in shooting (along with many other areas of the game).

Since I started doing draft analysis I have been interested in ways to supplement the box score and demographic data I had, to help better predict NBA success. A good amount of this comes from scouts, including high school ranks, draft board rank, and specific trait ratings. That reflects my view that analytic models should complement scouting, not substitute for it.

Last year a feature at 538 included an interview with Ed Weiland, a draft analyst with his own site, Hoop Analyst. Weiland developed a benchmark system with minimum achievement levels in various statistical categories based on the player's position. For example, a shooting guard prospect is expected to:

- Hit at least 50% of his two pointers
- Hit 30% of his three pointers
- Score 18 points per 40 minutes
- Get 1.3 steals per 40 minutes
- Get a sum of 7 in rebounds, steals and blocks per 40
- Have an assist-to-turnover ratio of at least .8

The rest of the criteria are at the 538 interview.

The basic premise is that the more benchmarks a prospect misses, the more skeptical we should be about him performing in the NBA. The benchmarks are an interesting addition to the models for looking at prospects, I believe, as they look across multiple categories rather than summing them up as most draft models do. That lets us look at a player's versatility, or alternatively his potential fatal flaws. In the past I have modeled a few versatility measures, which had minor effects in the context of the model, and in the end I didn't adopt any, in part because of positional biases.

There are a couple of downsides to the benchmark approach compared to modeling. It limits information to a yes/no on each benchmark (and doesn't credit elite performance over merely adequate), and it drops demographic information like age (though Weiland is clear he considers it informally).

In any case, contrasting benchmark performance with my model estimates seemed like a good way to explore the current prospects as well as the flaws and benefits of each method. To do this I measured the percentage of benchmarks a player exceeds for his position: a 1.0 indicates the player exceeds all benchmarks, while a .2 indicates he exceeds only 1 of 5. To be clear, this is my own adaptation, made in order to compare the benchmarks to my model results.
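Scoring the adaptation is straightforward: count the positional benchmarks a prospect meets and divide by the total. A sketch using the shooting guard criteria listed above and a hypothetical prospect (the stat keys are my own shorthand, not Weiland's):

```python
def benchmark_score(stats, benchmarks):
    """Fraction of positional benchmarks the prospect meets or exceeds."""
    hits = sum(1 for key, floor in benchmarks.items() if stats[key] >= floor)
    return hits / len(benchmarks)

# Shooting guard criteria from the 538 interview, as listed above
sg_benchmarks = {"fg2_pct": 0.50, "fg3_pct": 0.30, "pts40": 18.0,
                 "stl40": 1.3, "reb_stl_blk40": 7.0, "ast_to": 0.8}

# Hypothetical prospect who misses on threes and steals
prospect = {"fg2_pct": 0.54, "fg3_pct": 0.28, "pts40": 21.0,
            "stl40": 1.1, "reb_stl_blk40": 8.5, "ast_to": 1.2}
score = benchmark_score(prospect, sg_benchmarks)
```

This prospect clears four of the six bars, for a score of about .67, with no extra credit for how far above any single bar he sits.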

Generally, it's the cases where my model disagrees with the benchmarks that I think are the most informative. So below is a chart of the players that are highly rated by the model but miss on a couple of benchmarks.

The chart here visualizes each prospect's performance using standardized measures centered on zero, which indicates average performance compared to other prospects.

Eight of the nine players under-performing on benchmarks are big men. That likely says something about the difference between the two methods by position: possibly that the benchmarks miss style issues with the modern big man, or that the model is overly enamored of scoring or rebounding specialists. Seven of nine are younger than the average prospect, which is to be expected given that age isn't explicit in the benchmarks.

But one other contrast that stands out is the treatment of specialists like Monte Morris in passing and distribution (the grey bar above), or Tony Bradley and Ivan Rabb in rebounds (the orange bar). Reliance on one skill should act as a caution flag, I think, for the model's results.

Next are the opposite cases: players not highly rated by the model, but who hit most of their benchmarks.

In contrast, all of these players are wings, mostly small forwards, and all are older than the average prospect.

As the stacked bar chart shows, the opposite issue comes up for benchmarking here: these prospects, for the most part, are competent across the board but don't excel in any skill. In order to top out at more than a possible 10th man, a prospect probably needs at least one elite skill.

Lastly, there were three players that did well on both the benchmarks and in my model, but were not highly rated in the Draft Express top 100: Josh Hart, Mikal Bridges and Ethan Happ.

Happ is an interesting case: he puts up numbers across the board, but he lacks the size to play center in the NBA and the shooting to play modern power forward, much less the wing. That, of course, raises the question of whether these particular benchmarks are the best possible for today's game. Neither big man position has any criteria for shooting; arguably free throw percentage could be used as a proxy, with a minimum that varies by position.

In any case, the exercise has convinced me that benchmarks are worth looking at more, and a useful way to examine prospects alongside my model results.

A couple of years ago I put out my first draft projection in advance of the 2014 NBA draft. Now, a few years on, I can run a preliminary test of the draft model on that 2014 draft class. The basic idea of the test is to compare the players' actual performance, as measured by my minute-adjusted AWS efficiency metric in their third and fourth years (just the third year for this test of the 2014 draft), to the rank assigned by my model and to the actual NBA draft order.

A couple of years ago I tested my draft model on the 2012 draft class, which was then out of sample for my analysis, against the performance of the actual draft. In that case the model outperformed the NBA drafters, at least as measured by the metric I use for my draft modelling.

In terms of average error, without regard to direction, the model's ranking predicted players' actual performance rank this year slightly better than their spot in the draft did. The average error for the model was 6.9 places from their "actual rank," while the draft order error was 8.1 spots. The full table is at the bottom of the page.

I want to concentrate first on both the biggest misses and biggest hits for the model compared to the place players were selected in the draft. I think those cases give some of the most interesting lessons on what the model tells us relative to the draft decision makers.

Below are the five worst estimates for the model compared to where they were drafted:

**Jordan Adams**, -21 places worse: My initial model loved Adams, placing him at the top of the class. Unfortunately Adams suffered a major leg injury that ended his NBA career, placing him among the least productive players in the draft class. Injuries are a risk of the game and the draft. However, whether you should include players with career-ending injuries in the model depends on what question you are trying to answer. A model of the average expected value of a pick should include them, because that is part of the draft risk. Translating college stats to NBA performance is a dicier question, however. There was nothing in Jay Williams's playing stats that would have let you know he would end his career early due to a motorcycle accident; in that case, there is a significant missing variable outside the question the model is trying to answer. In any case, Adams' rookie year was middling for a rookie, so it's not clear what stage of his career he would be in right now had he been healthy.

**Jarnell Stokes**, -17 places worse: Stokes is an interesting case. He was a box score monster in the NCAA, but his fit with the modern NBA is problematic as a big man who doesn't excel at protecting the rim or have shooting range. He is under contract with Denver, but doesn't play.

**Damien Inglis**, -18 places worse: Inglis was one of the youngest players in the draft and had decent numbers against grown men in France, a big factor in his rating. Inglis also suffered a fairly major injury his rookie year. On the other hand, he has yet to show significant development since returning to basketball that would lead one to think he warranted a high draft selection.

**Zach LaVine**, -13 places worse: LaVine is actually performing better, at least as measured by my metric, than either his draft slot or the model projection would suggest, coming in 7th best in his draft class. LaVine is scoring with high efficiency and torching from three, both at a higher rate than we saw at UCLA.

**Rodney Hood**, -12 places worse: Hood is also better than either his draft spot or the model would have predicted.

On the other side of the ledger, here are the five best model estimates compared to where the players were drafted.

**Nikola Jokic**, +31 in rank error: Jokic rates as a top player in AWS terms this year, and as a passing young big man he was very highly rated by my model. Often players that dominate box score numbers and show flashes of everything but lack overt athleticism just don't stick in the NBA. However, an interesting indication from a different study I did, looking at "star" players and "busts" with both scout rankings and on-court performance, was that performance was relatively more important for becoming a "star," while scouting ratings were relatively more important for avoiding becoming a "bust." So guys that put up numbers but aren't as good on the eye test might be the real high-variance plays.

**Clint Capela**, +19 in rank error: Capela has turned into a good young big man and rated 4th in AWS among this class. It is arguable that Capela is a guy who is overrated by a box score based rating, but he is definitely better than the 22nd selection where he was picked.

**Kyle Anderson**, +15 in rank error: Anderson, nicknamed Slow-Mo, is kind of the ultimate eye test vs. analytics model player. In this case his efficiency rating lands between the two measures, but closer to the model's, whether the Spurs' development machine plays into it or not.

**Nik Stauskas**, +14 in rank error: Stauskas has resurrected his career to an extent in Philly. However, he is still not one of the more productive players in his class, ranking 24th via AWS. In this case, we shouldn't necessarily have trusted the crowd-sourced process.

**Jerami Grant**, +11 in rank error: Grant was rated as a late first round talent, which is about what he's been.

Honorable mention to **Adreian Payne**, +10, as an older player who didn't dominate in college and looked like a reach in the middle of the first round.

I should note that in this case we're testing the model against the target variable used for its training, which gives the model an advantage even out of sample. So I took a quick look at the model vs. win shares from Basketball Reference. In that case the gap was much closer, with the average error for the model at 7.2 places and the draft at 7.4.

Below is the full table:


Useful analytics at the start of the NBA season are sometimes hard to come up with. Essentially you have a choice of noting how unprecedented Anthony Davis's start of the season is or how noisy any particular stat is.

There isn't anything wrong with either of those approaches, but I tend to want to estimate how much of the noisy data or unprecedented performance we can take forward. So I have been experimenting with in-season projections based on expected regression, using a Bayesian updating formula that weights the amount of noise in the sample against the stability expected over the entire season. The gist of the formula is that the noisier the sample, the less weight we give the new data.

I decided to apply that method to Dean Oliver's four factors, effective field goal percentage, rebounds, turnovers and free throw rate, on both offense and defense for each team as of Tuesday morning, via Basketball Reference. I used a team-based prior built from last season, regressed toward average according to the season-to-season correlation.
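A sketch of the two steps with made-up numbers (the actual priors and variances come from the team data): first regress last season's value toward the league average by the year-to-year correlation, then shrink the early-season sample toward that prior in proportion to its noise:

```python
def regressed_prior(last_season, league_avg, year_to_year_r):
    """Prior: last season's value pulled toward the mean by the y2y correlation."""
    return league_avg + year_to_year_r * (last_season - league_avg)

def bayes_update(prior_mean, prior_var, sample_mean, sample_var):
    """Precision-weighted average: noisier samples get less weight."""
    w = prior_var / (prior_var + sample_var)  # weight on the new data
    return prior_mean + w * (sample_mean - prior_mean)

# Hypothetical eFG% example: a stable prior vs three games of hot shooting
prior = regressed_prior(last_season=0.55, league_avg=0.52, year_to_year_r=0.6)
estimate = bayes_update(prior, prior_var=0.0001,
                        sample_mean=0.60, sample_var=0.0009)
```

With the sample variance nine times the prior variance, the hot start only moves the estimate a small fraction of the way toward the observed 60%, which is the whole point of the method.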

(See below for the individual team factors and the expected percent adjustment)

The adjusted expected four factors were then converted into net rating using a formula developed by Evan Zamir. In the table I have each team's current margin of victory, or net rating, and the expected rating after weighting with the prior.

The four factor method has the benefit of being relatively straightforward, but it leaves out a few noisy areas that can especially distort early-season results: three point shooting by the offense, and even more so by their opponents, both of which get hidden in eFG%, as well as my personal favorite, free throw defense. But the results above at least have an air of reasonableness, at least for a sample based on three games for most teams, the Warriors aside.

Individual four factors as of Tuesday, November 1 and the percent adjustment expected by the end of the season according to the model.