After reading through the comments, I don't understand why this was "published". The guy performing the study believes his new equipment has too much noise. Does that invalidate the whole test? Is there anything to glean from this?
Granted, very interesting, but it's all arbitrary numbers in an extremely low-powered (n=1?) study. As I said, really cool that he did this, but it has zero statistical significance.
Sure, but it's still interesting. Even tests with more statistical significance do not tell you what any particular shoe does for any particular runner—the error bars are large enough that a performance ranking built from a ton of runners still won't tell you what is best for anyone in particular.
Ideally, everyone would have access to individual testing and all the shoes, to figure out their own efficiency in each one… their own case studies of n=1. But that's only available to people with money, time, and connections.
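To make the power problem concrete, here's a toy simulation with made-up numbers (not data from the video): assume shoe A is truly 1% more efficient than shoe B, but day-to-day measurement noise is ~1.5%. With one trial per shoe, the measured difference can easily come out with the wrong sign; averaging many trials is what shrinks the error bars.

```python
# Hypothetical illustration of why n=1 per shoe has ~zero power.
# TRUE_DIFF and NOISE_SD are assumed numbers, not measured ones.
import random
import statistics

random.seed(42)

TRUE_DIFF = 1.0   # assumed true benefit of shoe A over B, in percent
NOISE_SD = 1.5    # assumed run-to-run noise, in percent

def trial_diff():
    """One paired trial: measured A-minus-B difference, noise included."""
    return TRUE_DIFF + random.gauss(0, NOISE_SD)

# n = 1: a single trial is dominated by noise.
single = trial_diff()

# n = 20: the mean converges toward the true difference, and we can
# finally estimate the noise itself (you can't compute a stdev from n=1).
trials = [trial_diff() for _ in range(20)]
mean = statistics.mean(trials)
sd = statistics.stdev(trials)

print(f"single trial: {single:+.2f}%")
print(f"mean of 20:   {mean:+.2f}% (sd {sd:.2f}%)")
```

With these assumed numbers the standard error of a single trial is 1.5%, larger than the effect being measured, which is the whole objection about statistical significance above.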
Sounds like the dude needs to redo the test at 5:30 pace…
And of course the real question is how much of the difference is just being used to a given shoe. How many miles does he have in each shoe before doing this test?
It is pretty hard to do experiments like this without having a bunch of factors that can't be controlled…
Study? It's most distinctly not a "study" and Dustin doesn't claim it's a study. He takes great pains to say it's just testing on himself. It's interesting in that he appears to be a super-responder to the AF1 but not with the AF3. Try to keep up, and don't be a moron.
I guess you missed the part where he posted "case study".