First off, thanks to all for perhaps my favorite thread in all of LR. Really cool to see thoughtful discussion of training with minimal detritus to distract. I’ve never posted on LR before and don’t plan to post anywhere but here. Tis nothing but an honor to be this thread’s thousandth post.
I’m in the middle of a marathon training block and don’t want to deviate too much, so I’m putting off my sub-T deep dive till after NYC. I do have one question before the thread dies off completely: has anyone here experimented with mixing and matching paces and rep lengths within a workout to provide a different stimulus without sacrificing the core principles of this philosophy? E.g., 3k @ a little under HM pace w/ 2:00 rest, 3x1k @ 12-15k pace w/ 1:00 rest, 8x400 @ 10k pace w/ 0:30 rest.
The closest thing I've personally done was 2 x (3K + 2K), so I went back and forth between them, with the 2K reps being faster. I must say that it was a nice change from the monotony of doing identical reps all workout long.
Ok, I understand you now. You've identified the mathematical problem with a two-parameter approach to a hyperbolic function: if you enter 360.26 as the speed in your formula, you get time = 274.99/0, which is mathematically undefined.
The Norwegian model does not need D' (or W'); we can use it with CV alone, because we operate just below CV. I see CV as a marker that can replace MLSS without requiring any lactate measurements to identify MLSS, and, for the purposes of this thread, a marker you should not cross (i.e., do not run faster than it). The CV calculation has limited accuracy, so doing the reps below CV with a small margin to CV is recommended.
It’s not even that the function is undefined at the critical speed. To me, it’s that the prediction for the duration, without violating any of the model assumptions, suggests you can maintain CS far longer than is actually possible.
Even if you ignore that inaccuracy and accept that the model isn't meant to provide both a speed and a duration (time or distance) for CS, aren't you left with a conflict? CS is stated to be the boundary between the severe and heavy intensity domains. Yet because the model suggests CS can be maintained for X minutes, X should also mark where those domains meet, and that is a point which, again, is observed in practice to be unrealistic. Isn't that a perplexing discrepancy? I don't want to go on about it here, as it's off the theme of the thread. Maybe this has already been addressed, but I suspect not, since I assume you would have said so.
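To put a rough number on how strange those predictions get, here's a quick Python toy. The D' of 274.99 m is the value quoted above; the CS of 4.0 m/s is a made-up value, purely for illustration:

```python
# A quick numeric look at the hyperbolic model t = D'/(s - CS). D' = 274.99 m
# is the value quoted earlier in the thread; CS = 4.0 m/s is a made-up value
# for illustration. As speed approaches CS from above, the predicted duration
# grows without bound, which is the unrealistic-at-CS behavior discussed above.
D_PRIME = 274.99  # metres
CS = 4.0          # m/s (assumed)

def predicted_minutes(speed):
    """Predicted sustainable duration (minutes) at a speed above CS."""
    return D_PRIME / (speed - CS) / 60

for s in [4.5, 4.2, 4.1, 4.05, 4.01]:
    print(f"{s:.2f} m/s -> {predicted_minutes(s):.1f} min")
```

At 4.01 m/s, barely above the assumed CS, the model already predicts over seven hours, which no one runs at near-critical speed.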
I’m going to start a thread on this topic though, as it’s curious to me (not so much in application but conceptually) with the hope that maybe some exercise physiologists or better informed individuals than myself will chime in.
In an attempt to summarize this concept though, as well as provide some more “grade school graphs” to the participants of this thread, I put together two documents. One is an overview of the critical speed model with some details not identified in the papers and videos I’ve seen. The second document uses VDOT to not only provide CS paces for all VDOT values but also identify the duration CS can be maintained based on a runner’s VDOT value and at what percent of 5km pace it occurs.
P.S. The second paper may also be of interest to undergraduates in applied maths and sciences, as a project. There’s some interesting ideas and applications, I think. Most especially if additional data were added to the mix. If, despite the lottery winning level of probability, someone does find this interesting, I can post the code (python) used to generate the results.
Critical Speed Model This is a brief overview of the model, its terms and derivation, for those interested in the concept. Time = D' / (Speed − Critical Speed). D’ and Critical Speed are parameters, whose values need to be estimate...
Overview Given the generalizability and predictive accuracy of Jack Daniels’ VDOT system, I thought it would be interesting to explore applying it using the framework of the critical speed model. Specifically, if an optimal t...
At first glance, Fig. 4 in the second document (critical speed duration based on VDOT) seems incorrect, as you calculate 38 minutes. Durations for paces at or just above CS cannot be calculated, because of the hyperbolic-function problem mentioned above.
I also have some IAAF data and will compare; that will take time.
So, the assumption I made was that while the hyperbolic model can’t predict the duration at critical speed, the critical speed value itself, which is estimated using the 5 test results in the 3-20 minute time frame, is the “true” critical speed. Therefore, that’s all that is needed from the hyperbolic model. From there, using VDOT, you can calculate the percent of VDOT that the CS occurs at for each VDOT value, then using an iterative algorithm (Newton-Raphson Method in this case) calculate for what duration of time that speed can be maintained. I’ll go through again though and read the code to make sure it executed as I conceived it.
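In case it's useful, here's a rough Python sketch of what that iterative step looks like, assuming the published Daniels-Gilbert formulas for oxygen cost and fractional utilization; the VDOT value and speed in the example are made up for illustration, not taken from my results:

```python
import math

# Rough sketch: given a VDOT value and a speed (m/min), solve by
# Newton-Raphson for the duration t (minutes) at which Daniels' fractional
# utilization curve matches the relative oxygen demand of that speed.

def vo2_demand(v):
    """Oxygen cost (ml/kg/min) of running at v metres per minute (Daniels-Gilbert)."""
    return -4.60 + 0.182258 * v + 0.000104 * v * v

def frac_vo2max(t):
    """Fraction of VO2max sustainable for t minutes (Daniels-Gilbert)."""
    return (0.8 + 0.1894393 * math.exp(-0.012778 * t)
            + 0.2989558 * math.exp(-0.1932605 * t))

def d_frac(t):
    """Derivative of frac_vo2max with respect to t."""
    return (-0.012778 * 0.1894393 * math.exp(-0.012778 * t)
            - 0.1932605 * 0.2989558 * math.exp(-0.1932605 * t))

def sustainable_minutes(vdot, v, t0=30.0, tol=1e-8, max_iter=50):
    """Newton-Raphson solve of vdot * frac_vo2max(t) = vo2_demand(v) for t.

    Only meaningful for speeds slow enough that a solution exists.
    """
    target = vo2_demand(v) / vdot
    t = t0
    for _ in range(max_iter):
        step = (frac_vo2max(t) - target) / d_frac(t)
        t -= step
        if abs(step) < tol:
            break
    return t

# e.g. a VDOT 50 runner at a hypothetical CS of 240 m/min (4:10/km)
print(round(sustainable_minutes(50, 240), 1))
```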
The appendix goes through what I did specifically to generate the calculations. I realize the math may not be accessible to everyone (which first assumes that the document itself is of interest to anyone haha), but part of the reason I included it at the end was, for those who are interested and can follow along, feel free to call me out if you think I made an error or an improper assumption. I’m more than willing to answer criticisms, as long as it's constructive.
Btw, lexel, I produced A LOT more data than I included in my “document dump” haha. If you want, I can send it to you? I pulled data from IAAF too and calculated a bunch of athletes critical speed curves but also fit performance curves to their race times from 1500m - marathon. Plus, I have additional data from the VDOT calculations. Let me know.
One other idea I had was to create a performance curve based on 1500m, 5000m, and 10,000m times (since those distances are frequently raced), then use the values estimated from that curve, at the same time intervals as I used in the VDOT document, to construct the hyperbolic function. From there, I could test how similar the VDOT values are to what the athletes have actually run. Maybe I’ll give that a go this weekend haha
While your discussion may be very pertinent to training, and even interesting for those interested in the subject you are discussing, I think this thread's goal is to make things simple.
I have a different approach, as I do not trust data without verifying that they are plausible. The same applies to the VDOT and even the IAAF data.
In the attached document you see different VDOT and IAAF score values with different race times. CV as well as D’ is calculated from them. It is more or less self-explanatory.
I made a discovery: D’ seems to be wrong for the VDOT values! The better the athlete, the closer the gap between CV and vVO2max, and therefore D’ should get smaller as CV gets higher. This is indeed true for the IAAF data but not for the VDOT data. I have to find out why.
To your point about the difference in D’: it’s not that it’s wrong, but it will be different, as you identified, using the IAAF table. The IAAF tables are more “generous” in the race times they consider equivalent. For example,
System: 5km, 10km, Half, Marathon
IAAF: 17:30, 37:32, 1:23:49, 3:05:35
VDOT: 17:30, 36:17, 1:20:14, 2:47:41
I do tend to think Daniels can be a little fast with the predictions the longer the race distance goes, but mainly because most people do not run enough mileage to be “equally” trained for the longer distance as they are/were for the shorter distance, not necessarily because it's physiologically impossible. Also, the longer out you project, the more variability, in general. I tend to think the reality lies somewhere between those two equivalencies, for most people. It’s a good point you make though, regarding the basis used for prediction.
P.S. You have very different D' values for your VDOT than I do, which I assume is due to a difference in the sample size and/or range of the test protocol used to estimate the CS and D' parameters. Maybe we can dig into that over email. I have a table that shows the percent difference in the estimated CS based on the number of tests (3 to 5) and the range of times used. It can be over a 5% difference in estimated CS depending on the input.
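To illustrate the kind of protocol sensitivity I mean, here's a toy Python sketch with a made-up runner (CS = 4.0 m/s, D' = 200 m) and made-up timing noise, not my actual table:

```python
import random

# Toy sensitivity check: fit CS from 3 vs 5 all-out time trials "run" by the
# same simulated runner (CS = 4.0 m/s, D' = 200 m, both made up), with a
# little distance-measurement noise, then compare the two estimates.
random.seed(1)
CS_TRUE, D_PRIME_TRUE = 4.0, 200.0

def trial(duration_s):
    """Distance covered in an all-out effort of the given duration, +/- noise."""
    return D_PRIME_TRUE + CS_TRUE * duration_s + random.gauss(0, 15)

def fit_cs(durations):
    """Least-squares slope of d = D' + CS * t over the given trial durations."""
    times = list(durations)
    dists = [trial(t) for t in times]
    t_bar = sum(times) / len(times)
    d_bar = sum(dists) / len(dists)
    return sum((t - t_bar) * (d - d_bar) for t, d in zip(times, dists)) / sum(
        (t - t_bar) ** 2 for t in times
    )

cs3 = fit_cs([180, 600, 1200])            # 3 trials across the 3-20 min range
cs5 = fit_cs([180, 360, 600, 900, 1200])  # 5 trials, same range
print(f"3-trial CS: {cs3:.3f} m/s, 5-trial CS: {cs5:.3f} m/s")
```

With real athletes the noise is much larger and less well-behaved than a Gaussian, which is where the 5%+ swings come from.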
I didn't see that last line at first. I would've answered it directly to save you the time. It has to do with what I mentioned previously about the IAAF having slower equivalent times than Daniels as the distance gets longer, plus the way you estimate the parameter D'.
D' = d - cs*t, where d is the average distance and t is the average time across the fitted trials.
Since you're using the same distances with different times, d stays constant throughout your calculations; cs and t are the variables that change. Because the IAAF times are slower, relative to the same starting distance (say 1000m, using your data), than Daniels', the CS calculation comes out slightly slower for the IAAF data, and the average time t comes out slightly higher. It's a very marginal change, but it produces a cs*t term that grows as the performances get faster, whereas Daniels' shrinks. And since the average distance is the same constant for IAAF and Daniels, subtracting an increasingly larger number from a constant results in an increasingly smaller D'. Hope that helps.
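If a toy example helps, here's that mechanism in Python. The two time sets are made up, chosen so the "IAAF-like" set is slower at every distance to mirror the pattern described; they're not taken from either table:

```python
# Toy illustration of the D' mechanism: fit d = D' + CS * t by least squares
# over the same distances with two made-up time sets, then recover
# D' = d_bar - cs * t_bar. The slower set yields a lower CS, a higher t_bar,
# a larger cs * t_bar, and therefore a smaller D'.
def fit_cs_dprime(distances, times):
    """Least-squares fit of d = D' + CS * t; returns (CS, D', t_bar)."""
    n = len(distances)
    t_bar = sum(times) / n
    d_bar = sum(distances) / n
    cs = sum((t - t_bar) * (d - d_bar) for d, t in zip(distances, times)) / sum(
        (t - t_bar) ** 2 for t in times
    )
    return cs, d_bar - cs * t_bar, t_bar

distances = [1200, 2400, 4800]        # same distances for both sets (metres)
daniels_like = [208.0, 448.0, 928.0]  # faster equivalent times (seconds)
iaaf_like = [222.7, 470.1, 964.9]     # slower equivalent times (seconds)

for label, times in [("Daniels-like", daniels_like), ("IAAF-like", iaaf_like)]:
    cs, d_prime, t_bar = fit_cs_dprime(distances, times)
    print(f"{label}: CS={cs:.2f} m/s, cs*t_bar={cs * t_bar:.0f} m, D'={d_prime:.0f} m")
```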
Easily the best thread for anyone who is a hobby jogger, perhaps in LRC history.
I've been reading since maybe page 5 in real time and have been training like this the last couple of months now. Game changer. Few things I can add:
Very hard to rein yourself in on days you feel good, especially if you are used to VO2max sessions. When I finish a sub-T session, I feel like I could do another rep.
Easy runs feel really easy; it's hard to get your head around. But it's all coming together and I've just PB'd at 5k. I almost didn't stick with it, but carried on. Thanks to sirpoc for the guide. The thread is kind of carrying on over in the Strava group; it's really interesting to see so many people putting it into action. It seems easier, but per runalyze my load is now higher and I'm faster. Quite the neat trick, this sweet spot of paces.
This thread is incredible and I am going to try it. I bought a Lactate Plus meter.
One question I have is: is there any difference to doing 3Q + LR vs. doing a subthreshold workout inside the LR? So 2Q+LR (w/ a subthreshold session inside)?
Reason I ask is I am mostly training for half marathon / marathon. Would the latter (doing 2Q+QLR) be better for half marathon / marathon?
And if not, would it be okay to do the LR the day after a Q session?
You will be fine for the half marathon if you follow this approach. 3 sub-threshold runs a week and a long run - which in this case is only a longer run at easy pace - after a sub-threshold day is the way to go.
Depends on how you define "better".
If you want a quantitative answer, then 3Q + LR would accumulate more stress points.
A qualitative perspective might suggest including 2Q + QLR for more specific training.
You can alternate both on a weekly basis, I don't see any harm in that.
Which metric are you looking at in runalyze? TRIMP? (That's what I have settled on for the moment.)
I only started collecting data in mid-September (I just started running again and didn't bother in August), so I don't yet have the 42 days of data that runalyze uses for ATL/CTL.
So I am interested in what works for you, as I should be there 2nd half of the month.
Not the same guy, but I’ve recently moved from runalyze to intervals.icu. I’ve found the interface a bit more user friendly and it looks better on mobile IMO. In icu, “Fitness” is the main metric I’m tracking. They have graphs with “Fatigue” and “Form” metrics superimposed with “Fitness” that are of use too.
For reference, I am running 40-45mpw / ~6h 30min per week. My last 5K in June was a 20:47 basically untrained. I've been training consistently since then on a Daniels program, but now switching to this. Have another 5K on 10/29 and half marathon in late April. Will report back.
The whole Ingebrigtsen training discussed here is the "base" phase. They do this and then maintain their gains and either start hard workouts or race frequently to get specific stimulus.
If I were using this to train for a half/full marathon, I would do the 3Q + easy LR, and then, for the last 4-8 weeks or so, do what you thought of, the 2Q + QLR, to maintain the straight-up LT benefits and also apply them specifically to running long. Bakken kind of talks about this with his hour warm-up before a 5x6 min session, and I think that is about ideal if you're trying to abide by this method. You can always adjust it to you, though! The important thing is that specific sessions should be specific: threshold matters less on those days, and the pace/form you expect to use come race day matter more.
Lots of great info on this thread. Gonna chime in with my recent experience. I just capped off a half marathon cycle where I peaked at 70 mpw and did the vast majority of my workouts at threshold, with a handful of double-T days once every 2-3 weeks. Everything seemed to be going well, I was feeling great in workouts, and I ran a 35 second 5k PR in August after only doing one race-pace session the week before. Then I completely bombed my goal half this past weekend, missing my goal time by four minutes. I think I mostly just had a bad day and was fit enough to run at least two minutes faster. Still, it seems like the Norwegian approach is best suited for 1500-10k (you know, the distances that the Ingebrigtsens run), and there's no substitute for high mileage for the longer stuff.
Since threshold is pretty similar to HM race pace, I thought I could benefit doubly from maximizing the volume I ran in that zone. I now think this was a flawed strategy, and I was essentially making the same mistake as a 5k specialist running all his workouts at VO2 pace. Still, I'm happy with the overall fitness gains I made in the past few months, and this system clearly works if you apply it correctly.