As a fantasy sports writer (or any sort of writer, actually), you always want to be known for something uniquely your own. For me, when I finally decide to hang up my keyboard (probably soon), I’d like to think that I’ll be remembered for some of the things I helped kick off: actual Consistency Rankings (don’t get me started on hacks who make up “consistency” rankings without a shred of math to back it up), relative position value (The Best Damn Draft Theory), and perhaps my most famous gimmick: the Curse of 370.
You know my deal: when a running back gets more than 370 “wear and tear” attempts (combined rushes and receptions) in a given season, it is a virtual certainty that the player will underperform the following season … and usually by a significant amount:
Nothing ever stays the same, however. As the game has evolved, running backs simply don't get anywhere near 370 touches a season any more. In fact, not one runner managed to crack the magic 370 threshold in 2015.
So, I decided to see if there was a lower threshold that could also help owners avoid drafting a bust runner. Here are the results when I drop the threshold to 325 touches a season (stats from 2007-15):
Not quite as striking as the "Curse of 370," but still pretty convincing.
But let’s take this a step further and determine if what I’ve stumbled onto here is something solid, or just a random deviation. Thanks to some ancient statistics courses I took in college and my built-in advantage of having used Greek lettering on occasion, I decided to run a test.
Not to get all super-nerdy, but statistical tests involve setting up hypotheses and applying a mathematical formula:
- Null Hypothesis (H0): A player who receives 325-plus attempts in a season will post an equal or better performance the following season (than the players who didn’t).
- Alternate Hypothesis (H1): A player who receives 325-plus attempts in a season will post a worse performance the following season (than the players who didn’t).
To decide whether or not to reject H0, we will conduct Welch’s T-test.
Here's the formula for Welch's test:

t = (X̄₁ − X̄₂) / √(s₁²/N₁ + s₂²/N₂)

where:
- α = the significance level = 0.05 (i.e., a 95 percent confidence level);
- X̄₁, X̄₂ = the mean of each population = −6.6433 and −22.7522, respectively;
- s₁, s₂ = the standard deviation of each population = 85.07 and 29.87, respectively;
- N₁, N₂ = the size of each population = 509 and 46, respectively;
- t = ((−6.6433) − (−22.7522)) / √(85.07²/509 + 29.87²/46) = 16.11/√33.62 = 16.11/5.80 = 2.7776;
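For readers who want to check the arithmetic themselves, the t-statistic above can be reproduced in a few lines of Python using nothing but the summary numbers from the bullets (a sketch from those published figures, not my original spreadsheet):

```python
import math

# Summary stats from the article: group 1 = backs under 325 touches,
# group 2 = backs with 325-plus touches (year-over-year performance change).
mean1, sd1, n1 = -6.6433, 85.07, 509
mean2, sd2, n2 = -22.7522, 29.87, 46

# Welch's t-statistic: difference of means over the combined standard error.
se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
t_stat = (mean1 - mean2) / se

print(round(se, 2))      # ≈ 5.80
print(round(t_stat, 2))  # ≈ 2.78
```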
The degrees of freedom (df) associated with this variance estimate is approximated using the Welch–Satterthwaite equation (which works out to 128 in this case). We need the df in order to calculate our p value, which is used to determine statistical significance. Calculation of p is too cumbersome to do manually; the values of t and df were fed into a computer algorithm to calculate p.
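The Welch–Satterthwaite approximation itself is simple enough to compute directly. Plugging in the same summary numbers lands at roughly 129; the small difference from the 128 quoted above comes down to rounding in the inputs:

```python
mean1, sd1, n1 = -6.6433, 85.07, 509  # backs under 325 touches
mean2, sd2, n2 = -22.7522, 29.87, 46  # backs with 325-plus touches

v1 = sd1**2 / n1  # per-group variance of the mean
v2 = sd2**2 / n2

# Welch–Satterthwaite: (v1 + v2)^2 / (v1^2/(n1-1) + v2^2/(n2-1))
df = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))

print(round(df))  # ≈ 129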
After running the algorithm, p = 0.0063, which is much less than α and allows for a rejection of the null hypothesis (i.e., our alternate hypothesis is very statistically significant and is not the result of randomness). Another way of stating this is that we can reject the Null Hypothesis with a 99.37 percent confidence.
Conclusion: If you are drafting running backs for your fantasy football team, avoid drafting runners who have logged 325+ attempts the previous season! Heading into 2016 that would mean Adrian Peterson and Devonta Freeman can be expected to perform about 23 percent below the league average for running backs (who log more than 50 carries).