This methodology, and others like it, form the basis of predictive models with accuracy in the 70% range. In other words, if you rank all teams by those talent numbers alone, without accounting for any sort of attrition or experience, you can predict the outcome of roughly 7 out of every 10 games (and roughly 90% of championship games).
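To make that baseline concrete, here is a minimal sketch of that kind of talent-only model. Everything in it is hypothetical (the teams, scores, and results are made up, not the author's actual data): rank every team by a composite talent score, predict the higher-scored roster wins each game, and measure the hit rate.

```python
# Minimal sketch of a talent-only prediction model (hypothetical data).
# Each team gets a composite talent score; the higher-scored roster is
# predicted to win every matchup, and we measure how often that holds.

talent = {  # hypothetical composite talent scores per team
    "Alabama": 95.1, "Georgia": 91.4, "Tennessee": 88.7, "Vanderbilt": 74.2,
}

# (winner, loser) pairs from actual results (also hypothetical here)
results = [
    ("Alabama", "Tennessee"),
    ("Georgia", "Vanderbilt"),
    ("Tennessee", "Georgia"),   # an "upset" relative to talent rank
]

# Count the games where the more-talented roster actually won
correct = sum(talent[w] > talent[l] for w, l in results)
accuracy = correct / len(results)
print(f"Talent-only model: {correct}/{len(results)} games ({accuracy:.0%})")
```

Run over a full season of real results instead of three toy games, this is the sort of calculation that lands in the 70% range described above.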
Having done this and similar evaluations, I can tell you that attrition is largely similar across all Division I teams (yes, there are a few outliers, and yes, UT is potentially one of them). But, in general, teams lose players at roughly the same rate, a rate similar enough that results stay within that 70% range.
Similarly, the idea that inexperience is a big factor is overplayed (notice I didn't say unimportant, I said overplayed). What matters more than experience is a baseline of athletic ability. Many people forget that top-flight recruits are arriving out of strength-and-conditioning programs superior to what recruits of even a decade, or half a decade, ago had. That means younger players at many positions are much closer to making a direct impact than many believe.
Insofar as predicting absolute outcomes (W or L), talent is by far the strongest of the variables (location, coaching, weather, etc.).
Many of your ideas about adjusting rosters for attrition, and thus latent talent, will not increase this prediction rate enough to make it worth your time unless you are being paid to do it (the time invested will run into the hundreds of hours if you do it for every team, and the prediction rate might only go up 5-10%).
Using this data as a baseline evaluation, there are ways to get huge jumps in the predictability of games, but they are so labor-intensive as to be prohibitive for anyone without a lot of free capital to invest. It isn't as simple, or as intuitive, as finding the latent talent on the two-deep and comparing an offense against a defense; you have to know exactly what sort of talent creates statistical mismatches before you get jumps in predictability substantial enough to begin using the data to beat the spread.
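As a sketch of what that unit-level comparison might look like (assuming you had already done the labor-intensive part of grading each two-deep by position group; all the grades and matchup pairings below are hypothetical), the idea is to compare each offensive unit against the defensive unit it actually plays against, rather than one roster-wide number:

```python
# Sketch of a unit-level mismatch comparison (all grades hypothetical).
# Instead of one roster-wide score, compare each offensive position
# group against the defensive group it primarily stresses.

# Hypothetical two-deep talent grades by position group
offense_a = {"OL": 88, "QB": 92, "RB": 85, "WR": 90}
defense_b = {"DL": 91, "LB": 84, "DB": 87}

# Which defensive group each offensive group primarily plays against
matchups = [("OL", "DL"), ("RB", "LB"), ("WR", "DB"), ("QB", "DB")]

for off_unit, def_unit in matchups:
    edge = offense_a[off_unit] - defense_b[def_unit]
    tag = "mismatch" if abs(edge) >= 5 else "even"
    print(f"{off_unit} vs {def_unit}: {edge:+d} ({tag})")
```

The hard part isn't the arithmetic; it's knowing, from film and results, which of these edges actually translates into statistical mismatches on the field.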
If you want to see how well this sort of simple model correlates with actual wins and losses on the field, here is a chart illustrating the SEC last season. As you can see, talent predicted 67% of all SEC games played. As for season-long predictions, only 4 of 14 teams finished exactly where talent predicted, but 9 of 14 teams (64%) ended the season within one game of their predictions, and 11 of 14 (79%) ended within two games.
[Attachment 75054: SEC teams' talent-predicted vs. actual records, last season]
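The "within one game" and "within two games" figures above are just counts of how far each team's actual win total landed from its talent-predicted total. A quick sketch of the tally (the records below are made up for illustration, not the chart's actual numbers):

```python
# Tallying how close each team's actual wins came to its talent-predicted
# wins (records here are hypothetical, for illustration only).

predicted = {"Alabama": 7, "Georgia": 6, "Tennessee": 5, "Vanderbilt": 2}
actual    = {"Alabama": 7, "Georgia": 5, "Tennessee": 3, "Vanderbilt": 3}

for n in (0, 1, 2):
    # A team counts as a "hit" if its actual win total is within n games
    hits = sum(abs(actual[t] - predicted[t]) <= n for t in predicted)
    print(f"Within {n} game(s): {hits}/{len(predicted)} teams")
```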
Tennessee, for instance, underperformed by two games, but 1) that cut the underperformance from Dooley's typical 3-4 games down to just 2, and 2) the model still predicted 10 of 12 games (83%) overall, or 75% of Tennessee's SEC games.
Here is how this list looks for the 2014 season:
[Attachment 75055: the same talent-based predictions for the 2014 season]
Or, if you would like to see how every team on UT's 2014 schedule ranks:
[Attachment 75059: talent rankings for every team on UT's 2014 schedule]