9Cat Dynasty Prospect Model Outcomes Database
Model Preamble
Very proud to finally present my prospect model outcomes. Using prospect data to model potential NBA fantasy success is something I've been tinkering with in my NRL fantasy analysis for a while at a small scale, and it's been done well by a few other analysts too. I've recently put in the hours and the mammoth amount of work it takes to get this designed, built, tested and ready to take on new prospects. Note that this page is the summary database of outcomes. The link HERE will take you to my current prospect analysis, which leverages the model grades as well.
​
The model looks at a range of metrics for incoming NBA Draft prospects. It places value on statistics that, in my evaluation and through the testing of my model, are strong indicators of fantasy scoring success in category leagues. Some of the metrics are custom, too, so this model is bespoke and cannot be replicated, which is cool.
​
Balancing and catering for different ages, sizes, roles and skillsets is the challenge, but I think my model does a decent job, as indicated by the confidence index and the sense-checking of the general returns in my database.
​
Now that the foundation is set through many hours of tweaking and working on the model and the database, I am confident this is the perfect addition to my eye-test and film evaluation of prospects, which has served me quite well so far.
I think this model has the LeBron on-court effect; it not only raises the floor of my evaluation quality but also the ceiling.
​
Confidence Index
The confidence index tracks how accurate my model's grade evaluations are compared to actual fantasy season outcomes.
Because my database only goes back to 2019 (with some testing anomalies such as KD, Curry and SGA), a lot of the prospects I've evaluated have not even begun to hit their stride in the league, so they haven't come close to returning their proper outcomes via rank.
Regardless of this known impact, my model demonstrates a strong level of accuracy, with an average prediction deviation of -0.86.
On average, it tends to predict grades slightly lower than the actual grades.
It's important to note that this deviation is within a completely acceptable range for my application of the data and the model's maturity level.
I'm actively working to further refine the model and enhance its predictive capabilities, and I expect the deviation to improve markedly within the next two seasons as some of my high-ranking talents finally convert their prospective talent into meaningful 9Cat season-long rankings.
The confidence index is not a perfect science.
It rewards the model for being accurate and punishes it for overestimating a prospect. Note that the main drawback of this approach is the non-linear scaling of my grades.
My current approach to evaluating accuracy aligns with my goal of having greater consequences for overestimating in the context of fantasy sports. I would only use the confidence index as an indicator (where evidenced as such) of my model's relative reliability. You'll see plenty of names in the dataset with grades that don't reflect what we've seen in fantasy rankings, for better or for worse. Lastly, remember that the model doesn't adjust for a prospect's role in college - some of the odder outcomes are due to odd roles within teams!
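To make the idea concrete, here's a minimal sketch of how an average deviation and an asymmetric "punish overestimates" score could be computed. The numeric grade scale, the penalty weight and the function names are all my assumptions for illustration - the actual model's internals aren't published.

```python
# Hypothetical sketch of the confidence-index idea: accuracy is rewarded,
# overestimating a prospect is punished more than underestimating.
# The numeric grade scale below is an assumption, not the model's real scale.
GRADE_SCALE = {
    "A+": 10, "A": 9, "A-": 8,
    "B+": 7, "B": 6, "B-": 5,
    "C+": 4, "C": 3, "C-": 2,
    "D+": 1,
}

def average_deviation(pairs):
    """Mean of (predicted - actual) on the numeric grade scale.

    A negative value means the model tends to grade prospects slightly
    lower than their actual outcomes, as described above.
    """
    diffs = [GRADE_SCALE[pred] - GRADE_SCALE[actual] for pred, actual in pairs]
    return sum(diffs) / len(diffs)

def confidence_score(pairs, over_penalty=2.0):
    """Asymmetric error score: overestimates cost more than underestimates.

    Lower is better; over_penalty is an assumed weighting.
    """
    total = 0.0
    for pred, actual in pairs:
        diff = GRADE_SCALE[pred] - GRADE_SCALE[actual]
        total += over_penalty * diff if diff > 0 else -diff
    return total / len(pairs)

# Toy example: (predicted grade, actual grade) pairs.
sample = [("A", "A-"), ("B+", "A"), ("C", "C")]
print(average_deviation(sample))  # negative => model under-graded on average
print(confidence_score(sample))
```

The asymmetry mirrors the stated goal: in fantasy, drafting a bust off an inflated grade hurts more than passing on a sleeper, so overestimates are weighted heavier.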
​
Grades
For grades on prospects, I've kept it relatively simple:
A+ = Top 25 Fantasy Season Ceiling
A = Top 40 Fantasy Season Ceiling
A- = Top 80 Fantasy Season Ceiling
B+ = Top 110 Fantasy Season Ceiling
B = Top 150 Fantasy Season Ceiling
B- = Top 180 Fantasy Season Ceiling
C+ = Top 210 Fantasy Season Ceiling
C = Top 250 Fantasy Season Ceiling
C- = Top 350 Fantasy Season Ceiling
D+ = 350+ Fantasy Season Ceiling
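The tiers above are just a rank-ceiling lookup, so for anyone wanting to script against the database, they can be expressed as a simple mapping. The cutoffs come straight from the list; the dict structure and function name are my own sketch.

```python
# Grade tiers from the list above as a lookup table.
# D+ has no top-N ceiling (350+), so it's represented as None.
GRADE_CEILINGS = {
    "A+": 25, "A": 40, "A-": 80,
    "B+": 110, "B": 150, "B-": 180,
    "C+": 210, "C": 250, "C-": 350,
    "D+": None,
}

def grade_for_rank(rank):
    """Return the grade tier whose ceiling covers a 9Cat season-long rank.

    Relies on dicts preserving insertion order (Python 3.7+), so tiers
    are checked best-to-worst.
    """
    for grade, ceiling in GRADE_CEILINGS.items():
        if ceiling is not None and rank <= ceiling:
            return grade
    return "D+"

print(grade_for_rank(30))   # "A"  (outside top 25, inside top 40)
print(grade_for_rank(400))  # "D+"
```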
Current Model Leaders
- Jaren Jackson Jr
- Alperen Sengun
- Chet Holmgren
- Zion Williamson
- Anthony Davis
- Kevin Durant
- Evan Mobley
- Grant Williams
- Jaxson Hayes
- Victor Wembanyama
- Keegan Murray
- Onyeka Okongwu
​