gfsdgsdf wrote:
He has been at Cornell long enough to give us stats (how many people improve year over year, how many regress, how many burn out, ...). Is it better than, let's say, old-school Arkansas (lots of hard, fast miles, if the training plans I have read are right)?
To make accurate comparisons would mean looking at scads of incoming freshman runners from across the country each year and determining how far they progressed within 4 years (not 5, since Ivy League runners don't normally get a 5th year). You could formulate a regression equation to estimate how much a 4:05 high school 1,600 runner would be expected to improve, how much a 4:10 guy could expect to improve, and so on. The slower high school guys would be expected to improve more than the superstars, but the number of 4:25 types at D-1 schools might be lower than the number of 4:15 guys, since 4:25 guys wouldn't usually be recruited by the powerhouses; you could still gather data on the 4:25 guys from smaller schools. Then do the same for other events (3,200 to 3,000, 5,000, or 10,000, for example).
This regression equation would give you the "national average" for improvement. You might determine that a 9:05 high school 3,200 runner should be expected to run 8:09 for 3,000 and 14:08 for 5,000 after 4 years of college, for example. Then you could measure any selected team's runners against that national average and see who's doing the best relative to it.

If, in a period of 8 years (4 fully developed senior classes - remember, we don't count a redshirt year), a team signs and graduates 30 recruits whose average 3,200 time in high school was 8:55 (not unreasonable for the Oregons or Stanfords of the world) and their average college 5,000 personal best is 13:45 while the expected value based on the regression is 13:53, you can compare how far ahead of that expected value those guys ran vs. the average of guys on other teams. The other teams don't have to recruit equally for the comparison to work, as long as the sample sizes are large enough to approach a normal distribution (about 30 will do it). If a different team has 30 recruits who averaged 9:15 in high school and come out of 4 years of college with an average 5,000 time of 14:15 while the national expected value for 9:15 high schoolers is 14:24, the team with the slower recruits actually outdid the expected value by a slightly larger percentage (9 seconds under expectation, or about 1.05%, vs. 8 seconds, or about 0.97%) than the team that would kill them in a meet. Based on expected development, the slower team got more out of its guys.
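The whole expected-value comparison can be sketched in a few lines of code. To be clear, everything below is illustrative: the "national sample," the fitted line, and the team numbers are invented to show the mechanics, not real recruiting data.

```python
# Sketch of the expected-value comparison described above.
# Fit a "national average" regression of college 5,000m PR (seconds)
# on high school 3,200m PR (seconds), then score each team by how far
# its recruits beat the expected value, as a percentage.

def seconds(t):
    """Convert 'm:ss' to total seconds."""
    m, s = t.split(":")
    return int(m) * 60 + float(s)

# Hypothetical national sample: (HS 3,200 PR, college 5,000 PR)
national = [("8:50", "13:40"), ("9:00", "13:58"), ("9:10", "14:12"),
            ("9:20", "14:30"), ("9:30", "14:44"), ("8:55", "13:50")]

xs = [seconds(h) for h, _ in national]
ys = [seconds(c) for _, c in national]

# Ordinary least squares by hand: slope and intercept
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

def expected_5k(hs_3200):
    """National-average college 5,000 (seconds) for a given HS 3,200."""
    return slope * seconds(hs_3200) + intercept

def pct_beat(hs_avg, college_avg):
    """Percent by which a team's actual average beats the expected value."""
    exp = expected_5k(hs_avg)
    act = seconds(college_avg)
    return 100 * (exp - act) / act

# Team A: 8:55 recruits running 13:45; Team B: 9:15 recruits running 14:15
print(f"Team A beat expectation by {pct_beat('8:55', '13:45'):.2f}%")
print(f"Team B beat expectation by {pct_beat('9:15', '14:15'):.2f}%")
```

With a real sample you'd fit on hundreds of runners per event, but the mechanics are the same: the regression supplies the expected value, and the percentage past it is the development score.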
One drawback of this method is that it doesn't account for how well runners perform in the most important races, since some teams don't chase times as much as others do. But it does give one quantitative way to measure development, and in a sport as quantitative by nature as track, at least we have that. As sample sizes get larger, the measurements become more trustworthy, since there will be plenty of runners on all of these teams who do chase times and get the chance to nail down their best possible personal bests.
You could also make a pretty good comparison using cross country recruits who didn't run quite as well in track. There have been Foot Locker finalists who weren't that great at the 1,600 or 3,200, but making the finals means you were one of the top 40 in the nation (at least on the day of the qualifying race). If you take the top 40 times in the nation for the year in the 3,200 (or converted 3,000 or 2-mile times), you get a sense of how fast a Foot Locker finalist would be if he were just as good on the track. For purposes of comparing college cross country teams, having 5 Foot Locker finalists, even ones who never ran that well in track, is likely better than having 5 9:00 3,200 runners who were not proven cross country commodities. Again, given large enough sample sizes over several years, the probability of excellent cross country recruits eventually performing well on the track increases.
Altitude times (Colorado, Utah, etc.) are a little harder to adjust for, since people vary quite a bit in how well they run at sea level when they come down from altitude.
Comparing teams whose recruits are equally fast but whose recruiting classes differ significantly in size will favor the team with more recruits. If team A recruits 40 middle-distance runners over a 4-year period with an average incoming 800 time of 1:55, while team B recruits 70 middle-distance runners who also average 1:55 before college, team B has a significant advantage in "diamond in the rough" potential. There are statistical methods for comparing improvement across samples of different sizes, and you have to use them to make accurate comparisons when warranted.
These comparisons can be made if you want to find out who's really coaching and who's just recruiting. It will take a lot of number crunching, but that's better than unsubstantiated name-calling. So for anybody who wants to run their mouth: let's see your numbers, and with some work we can see if you're full of crap or not.
Let's do it. Seriously. Let's put this issue to rest.