MAMBA Reworked Updated:
A fantasy end goal for this metric would be for it to be pitched to teams and seen on the same level as EPM or LEBRON. However, there was an issue with the original version: while I still find the idea of using Time Decayed RAPM rather than regular RAPM, to create less bias and put less emphasis on the box score, very interesting, it may reduce the practicality of the metric early or mid season, since the box score component as it stands may not be as powerful as Box LEBRON's or EPM's box score component.
Therefore, without losing sight of the philosophy behind the metric, I worked on creating a Box Prior that could stand on its own, so this metric could work midseason with a higher decay rate and still provide a good snapshot of the current season (simply starting the decay rate out very high and lowering it as the season goes on). If the Prior is good enough, I could perhaps provide a supplementary single year version as well.
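As a rough illustration of that decay-schedule idea, here is a minimal sketch; the rates and the linear ramp are placeholders I made up, not MAMBA's actual schedule:

```python
def in_season_daily_retention(games_played: int, early: float = 0.985,
                              late: float = 0.9955, full_season: int = 82) -> float:
    """Per-day retention factor used when weighting past games.

    A lower retention factor means a higher decay rate, so early in the season
    prior years are discounted aggressively; the rate then relaxes toward the
    end-of-season value as current-season data accumulates. Purely illustrative.
    """
    frac = min(max(games_played, 0) / full_season, 1.0)
    return early + frac * (late - early)
```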
Fundamentally, it first needed to work with a relatively high decay rate, so I increased the decay rate heavily:
In the original metric, the decay rate was set so that:
The last game of the previous season would be weighted at 68%
The first game of the previous season would be weighted at 40%
The last game of two seasons prior would be weighted at 28%
The first game of two seasons prior would be weighted at 17%
The current decay rate is set up so that:
The last game of the previous season would be weighted at 59%
The first game of the previous season would be weighted at 28%
The last game of two seasons prior would be weighted at 17%
The first game of two seasons prior would be weighted at 8%
This creates less bias coming from the previous years, as I do want this to be primarily a single year metric, with previous years there to help stabilize it.
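For concreteness, here is a minimal sketch of how such a weight could be applied per game, assuming a simple per-day exponential decay; the retention value is only a back-of-envelope fit to the percentages above, and the real parameterization (per day vs. per game, reference date, etc.) may differ:

```python
import datetime as dt

def decay_weight(game_date: dt.date, as_of: dt.date, daily_retention: float = 0.9955) -> float:
    """Weight applied to a past game's stints when fitting the time-decayed RAPM.

    Weight = daily_retention ** days_ago. A value around 0.9955/day lands in the
    same ballpark as the listed targets (e.g. ~59% for the last game of the
    previous season), but it is an illustration, not the metric's actual parameter.
    """
    days_ago = max((as_of - game_date).days, 0)
    return daily_retention ** days_ago
```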
Heavily Reworked Prior
Originally, I believed the box score prior wouldn't be as important with Time Decay RAPM as a factor. I still think that's partly true, but I did not want to end up with a weak box score prior just for the sake of leaning on Time Decay RAPM. And upon recreating the metric, the premise didn't fully hold up: while the test results were still generally better than EPM and LEBRON, the results themselves varied heavily depending on the Priors, often in ways that simply did not pass the sniff test. I still want this to be a bit more impact driven, but thinking about the practicality of the metric in-season, I wanted a powerful box score prior anyway, so the data is now regressed a bit more heavily to the Priors, though I'd assume still not by as much as the other all-in-ones in the sphere.
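To make "regressed more heavily to the Priors" concrete, here is a minimal dense-matrix sketch of the standard RAPM-with-a-prior setup, where the ridge penalty shrinks each player toward their box prior rather than toward zero and the penalty strength controls how hard the data is pulled to the prior. The function and variable names are mine, not the actual MAMBA pipeline:

```python
import numpy as np

def rapm_with_box_prior(X, y, prior, lam, weights=None):
    """Ridge RAPM shrunk toward a box-score prior instead of toward zero.

    Solves  min_b ||W^(1/2) (y - X b)||^2 + lam * ||b - prior||^2,
    i.e. ridge regression on the residuals the prior leaves unexplained.

    X       : (stints x players) design matrix (+1 offense, -1 defense, 0 off court)
    y       : stint point margins per 100 possessions
    prior   : per-player box prior, in the same units as the output
    lam     : larger lam = heavier regression to the prior
    weights : per-stint weights (possessions times the time-decay factor)
    """
    if weights is None:
        weights = np.ones(len(y))
    Xw = X * weights[:, None]                    # apply W without building a full diagonal
    resid = y - X @ prior                        # what the prior fails to explain
    A = X.T @ Xw + lam * np.eye(X.shape[1])
    d = np.linalg.solve(A, Xw.T @ resid)         # data-driven deviation from the prior
    return prior + d
```

In this framing the time-decay weights and the prior strength are the two knobs: heavier decay leans on the current season's stints, while a larger lam leans on the box prior.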
Offense:
Took out POE. This was my original innovation at the time for the Prior, but while it led to an overall accuracy gain, I ultimately found it wasn't worth keeping.
Added Transition POE, as players like Giannis and Lebron were underrated in the Prior.
Added some very limited interaction effects (as they can cause some very weird individual results, I was very conservative here and set a limit on how much they could alter the original data), and some inputs like Transition POE are shifted slightly, and very conservatively, depending on a player's overall POE efficiency.
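A minimal sketch of the capping idea described above; the cap size and the specific interaction shown are hypothetical, the point is just that any interaction adjustment is clipped so it can only nudge the original prior value:

```python
import numpy as np

def capped_adjustment(base, adjustment, max_shift):
    """Apply an interaction-based adjustment, but clip it so it can never move
    the original value by more than +/- max_shift (in prior points)."""
    return base + np.clip(adjustment, -max_shift, max_shift)

# Hypothetical example: shade Transition POE slightly by overall POE efficiency,
# with a tight cap so odd individual combinations can't swing the prior much.
# transition_poe = capped_adjustment(transition_poe, k * overall_poe_eff, max_shift=0.3)
```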
Defense:
Charges Drawn was heavily inflating some bigs who drew many charges but weren't great rim protectors, even though it was a very powerful predictor. While there are likely more sophisticated ways to handle this, simply setting caps on charges drawn by position, based on analysis of the dataset, ended up being a pretty solid way to do things. Bigs and bigger players were emphasized by other components anyway, so this helped balance things out to an extent.
Added Field Goals Missed Against, with a small blocks effect (+0.25 * blocks) added to it. Note: I don't actually believe this improved testing results at all, but the results generally made more sense, and I did want to emphasize bigs in the box score prior.
Some of those changes on defense may not have increased overall accuracy, but I wanted to emphasize rim protectors more for basketball reasons, and within the framework of this metric I believe doing so from the prior helps present players in a more "in a vacuum" way, while things like Charges Drawn balance the overall picture out. I made other changes and ran other tests as well, but this is a brief summary of the big ones.
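A small sketch of what those two defensive tweaks could look like; the positional cap values and the column names are placeholders, not the inputs actually used:

```python
import numpy as np
import pandas as pd

# Placeholder per-100 caps by position group; the real caps came from analysis
# of the dataset and aren't published here.
CHARGE_CAPS = {"G": 1.0, "W": 0.8, "B": 0.6}

def defensive_prior_inputs(df: pd.DataFrame) -> pd.DataFrame:
    """Cap charges drawn by position, and build a rim-protection term from
    field goals missed against plus a small blocks bonus (+0.25 * blocks)."""
    out = df.copy()
    caps = out["position_group"].map(CHARGE_CAPS)
    out["charges_capped"] = np.minimum(out["charges_per100"], caps)
    out["rim_protection"] = out["fg_missed_against_per100"] + 0.25 * out["blocks_per100"]
    return out
```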
Testing
Here, I will post the correlations to Offense, Defense and Overall for LebronBox and MambaBox, and below that, for MAMBA and EPM. EPM Priors are not available, but LEBRON priors are, and I want to test the Prior specifically as well.
The testing was slightly different for the sake of time, and since I'm just comparing MAMBABOX to LEBRONBOX and MAMBA to EPM. Since the only thing I really cared about was comparative accuracy between metrics, rookies were given a value of 0, and players who played under 250 minutes in the previous season were given replacement value. When actually trying to predict with these metrics as well as possible, rookies should be given replacement level values, but with diminishing returns on accuracy as you get higher up, this setup may demonstrate the differences a tad better.
So overall the process is the same as before but more simplified: get current minutes, give players under the 250 minute threshold replacement values, sum everything up, and get R^2 vs wins. I did the same for relative Offensive and Defensive net ratings and overall net ratings too. This will lead to generally lower R^2 all around than in my original test, but that's fine because I'm not trying to get the highest possible prediction, just to see how MAMBA and MAMBABOX stack up versus other metrics.
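A minimal sketch of that testing loop as I've described it; the column names, the replacement value, and the exact minutes handling are my assumptions:

```python
import numpy as np
import pandas as pd

REPLACEMENT_VALUE = -2.0   # placeholder replacement level; rookies get 0 in this comparison

def team_projections(players: pd.DataFrame) -> pd.Series:
    """Minutes-weighted team projection from the previous season's metric values.

    Expects columns: team, metric_prev (NaN for rookies), minutes_prev, minutes_cur.
    """
    vals = players["metric_prev"].copy()
    vals[players["minutes_prev"] < 250] = REPLACEMENT_VALUE   # low-minute players -> replacement
    vals = vals.fillna(0.0)                                   # rookies -> 0, per the setup above
    weighted = vals * players["minutes_cur"]
    return weighted.groupby(players["team"]).sum() / players.groupby("team")["minutes_cur"].sum()

def r2_vs_wins(projection: pd.Series, wins: pd.Series) -> float:
    """R^2 of team wins against the projection (simple linear fit)."""
    aligned = pd.concat([projection, wins], axis=1).dropna()
    return float(np.corrcoef(aligned.iloc[:, 0], aligned.iloc[:, 1])[0, 1] ** 2)
```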
Box Score Prior Evaluation:
Compared to BoxLebron, the Prior here shines more offensively. Defensively it's about a tossup. 2022 is a glaring miss for BoxMAMBA, although given that this is shared by EPM's overall numbers, maybe it's more a statement on the tracking data that year. Outside of that, the overall average accuracy is slightly higher, but not by anything remotely meaningful, while LebronDefense wins out in 5/9 years.
Now, I should note: when creating the Box Prior for Defense, I built it to emphasize bigs more, since their impact is a bit more stable across situations and, for basketball reasons, two elite rim protectors is fundamentally different from two elite perimeter defenders, for example. This is similar to how LEBRON does it. I will say that general accuracy improves, on the defensive end and overall, if I don't do that; but in a vacuum I think it may be the more accurate way to rank players, as long as you incorporate things to balance it out for truly elite-impact perimeter players so the final metric isn't just a list full of bigs. I also think this type of approach makes more sense in conjunction with this kind of approach on the impact side of things.
As I know the person behind BBI, I don't really want to display the LEBRON results in this testing. It would also be a tad unfair, because they do a lot of cool stuff with padding low-possession players that would not be represented here. But the overall gap in defensive prediction between the two metrics, with this testing methodology, was pretty large.
At a glance, its overall performance seems similar to or perhaps better than BoxLazyLebron, which I mentioned was an unreleased Prior for an unreleased metric called LazyLebron, discarded because of some spurious individual results at the top. (The final results themselves weren't released, but it was noted that Steven Adams, Caruso, Delon Wright and Clint Capela were all in the top 10 for 2022-23, as an example of the issue. That was in the final metric, not the box score prior.)
I will show the MAMBA results in the same format as before, but generally that wasn't much of an issue for my metric; I felt the current results passed the sniff test much better than the ones I previously published, for example, although there were still a few caveats and exceptions.
Last note: I would likely approach the Defensive Prior a bit differently, to be more precise, if/when I do a single year version of this. I wasn't necessarily trying to squeeze out the highest prediction accuracy I could, as a few iterations and variables I excluded led to higher prediction accuracy in testing without changing the results too drastically; I just felt that, with the TDRAPM still being part of it, being conservative here made sense.
EPM BOX is not available. I would guess EPM BOX is probably better than this Prior, as it also incorporates tracking data, and EPM Defense has always tested very well, outdoing the previous iteration of MAMBA's defense by a decent margin. I would likely try a more precise approach with tracking data if/when I create a single year version, which I am more interested in now after seeing the performance of the Prior.
Overall Metric Testing
Now, because rookies weren't given values at all (thus, they count as 0), the absolute accuracy results here are going to be lower than if I had given them replacement values, but to an extent I believe this might be better for demonstrating the differences in predictive accuracy between metrics.
In general, I would say it is about a tossup defensively, but offensively and in overall accuracy MAMBA seems to have an edge. The gap is larger than it was before, likely from slightly different methodology and, more glaringly, because rookies were given values of 0 instead of replacement values. The actual gap is likely smaller than it appears here, but MAMBA still performs a good deal better regardless.
Actual Results Breakdown
In the original writeup, I went over some results I thought were weird and got the corresponding results here. Here is what the new numbers say about those players.
2015: MAMBA: Lebron at 6, George Hill at 7 (EPM: Lebron at 5, George Hill at 7) (LEBRON: Lebron at 4, George Hill at 15)
MambaNew: Lebron at 3, George Hill at 5. George Hill jumping up a few spots is a bit odd.
2016: MAMBA: Lebron at 4 (EPM: 4) (LEBRON: 2)
MambaNew: Lebron at 2
2017: MAMBA: Durant 8, Lebron 3 (EPM: Durant 14, Lebron 7) (LEBRON: Durant 9, Lebron 3)
MambaNew: Durant 9, Lebron 1
2018: MAMBA: AD 15, KD 16 (EPM: AD 3, KD 15) (LEBRON: AD 8, KD 12)
MambaNew: AD 9, KD 16
2019: MAMBA: Kawhi 15, AD 10 (EPM: Kawhi 15, AD 5) (LEBRON: Kawhi 12, AD 2). Player of the year, of course; it's just low on him because Toronto did well without him playing sometimes, and it's an impact thing.
MambaNew: Kawhi 17, AD 5
2020: MAMBA: AD 10 (EPM: AD 8) (LEBRON: AD 5)
MambaNew: AD 6
2022: MAMBA: Luka 18 (EPM: Luka 17) (LEBRON: Luka 8)
MambaNew: Luka 22
2023: MAMBA: Luka 11, Giannis 12, AD 21 (EPM: Luka 7, Giannis 9, AD 10) (LEBRON: Luka 7, Giannis 2, AD 5)
MambaNew: Luka 12, Giannis 8, AD 17
2024: MAMBA: Giannis 7, EPM: Giannis 4, AD 18, LEBRON: Giannis 2, AD 6
MambaNew: AD 14, Giannis 6 (Note: originally, outside of the MVP candidates, it was PG and Mitchell above him; now it's only Bron and the MVP candidates, which seems a bit more reasonable.)
Overall, as you would expect, some of the eye-popping results that were pretty consistent across all-in-ones remained here. Outside of Luka, the differences trend in the direction you would expect, and generally land around the EPM range. Giannis jumped up a bit in 2023 and 2024, and instead of sitting behind PG and Donovan Mitchell outside of the MVP candidates, he's now behind only Bron, with some separation versus everyone else.
Outside of that: it has Jokic as #1 every year from 2022 to 2024, similar to LEBRON and unlike EPM, but is lower on him in 2021. It's generally higher on Curry and Lebron, but the most obvious fix is that AD is no longer severely underrated. While it still isn't necessarily high on him, and I do think LEBRON is more accurate in this case, it's more in line with EPM most years.
To wrap up, the goals were to:
Have results pass the sniff test a little more while maintaining predictive accuracy
Create a genuinely good Prior that can stand on its own, so this metric can be used in season rather than just at the end of seasons (showcased by maintaining accuracy while increasing the decay rate)
Overall, I think I achieved that. The results hold up despite the decay rate being increased substantially, the Prior is now tested and does pretty well, and while the results aren't incredibly different, for the most part the differences make more sense or are in line with odd results from other metrics rather than being alone in that regard, although it is REALLY high on Kemba now. I think there is potential for this to be a single year metric too, but for now it is at a point where it can still do what it's meant to do (reduce bias with less box score weight) while being usable in-season with a higher decay rate, without being too biased towards prior seasons, to get a good image of the current season. Here are the results. They aren't sorted by default, although it might seem that way, so click Mamba/the Overall column to sort: https://timotaij.github.io/LepookTable/