Anyone else here thinking rekrunner and Armstronglivs just need to get a room already?
Jack50 wrote:
Armstronglivs wrote:
No, it is dirtier than ever. There are more drugs available today and antidoping cannot keep up. Testing catches very few. As David Howman (former WADA head) says, the dopers are more sophisticated than those trying to catch them. It is a race that has been lost long ago.
Yes, exactly. Everyone is doping at the top. All casual fans / uneducated are in denial.
This is just not true. I personally know Olympic finalists and even sprint semi-finalists, and they are just not doping.
Not everyone is.
It's definitely cleaner. If it was dirtier or just as dirty, there is no possible way the mile and 1500m records stand as long as they have with improved tracks and spikes, and no possible way the 5k and 10k records stand for as long as they did only for them to conveniently get broken during a lapse in testing.
Down with the IDMC wrote:
This year the men's shot put record that was about 30 years old was broken, the men's 400m hurdles record was beaten twice and the second time the record was taken to Steve Austin/Usain Bolt land, the women's 400m hurdles record was beaten in a similar Bionic Woman way, and another record from the 90s was beaten in the women's triple jump.
Unless it's one of the great coincidences of all time then something is up.
The 400H records were soft. Most people knew that. And it was a coincidence that you had 4 top athletes doing it.
casual obsever wrote:
rekrunner wrote:
I didn’t say that 43.6% was non-sensical
Hahahahaha - this whole discussion was solely about your nonsense from three days ago, which you kept repeating too:
rekrunner wrote:
43.6% was visibly non-sensical
Wow you are funny - not.
And all your obvious trolling for a full three days came even though I was kind enough to avoid citing the abstract, for I know very well how much facts like that annoy you. In brief, with no interpretation whatsoever:
Results The estimated prevalence of past-year doping was 43.6% ...
we were unlikely to have overestimated the true prevalence of doping
In full so you don't accuse me of leaving out important facts:
Results The estimated prevalence of past-year doping was 43.6% (95% confidence interval 39.4–47.9) at WCA and 57.1% (52.4–61.8) at PAG. The estimated prevalence of past-year supplement use at PAG was 70.1% (65.6–74.7%). Sensitivity analyses, assessing the robustness of these estimates under numerous hypothetical scenarios of intentional or unintentional noncompliance by respondents, suggested that we were unlikely to have overestimated the true prevalence of doping.
Look what I read today; more scientists seeing the 43.6% as the result, what a surprise:
10.1016/j.peh.2018.01.001
In a study among elite athletes, the estimated prevalence of past-year doping was 43.6% at a track and field World Championship and 57.1% at the Pan-Arab Games (Ulrich et al., 2018).
Bye bye.
That was poor phrasing. I stand corrected.
43.6% is visibly nonsensical (to visualize it, see Figure 3) because the fastest top-10% estimates 96%, and the fastest top-30% estimates 72%. These estimates from the fastest responses are inexplicably nonsensical. The speculative reasons: hasty, careless, didn’t read instructions, etc., confirm the view that the data should be deleted, and deletion is necessary in order to avoid a “serious problem” and a “significant overestimate”.
This was the whole purpose of keeping track of response times, and performing the response time analysis and the deletion exercise — to filter out the predicted hasty, and careless responses that can only contaminate the “real” estimate.
You can quote the abstract, but even the full abstract omits one important fact: one of the “Key Points” of the paper, according to the authors. This is the main problem with the abstract, and what makes it incompatible with the “Discussion” section of the paper.
It looks like you found more scientists who also copy/pasted the non-value added survey result without distinguishing between the raw survey estimate and the post-analysis finding. I’m sure you can probably find more. Again, the published data speaks for itself. 43.6% means what it means, with all the identified issues, and 31.4% means what it means, with all the identified issues.
But it’s true we got derailed, because while the authors knew, or must have known, or found out, that 43.6% was nonsense, it turns out that 31.4% is potentially nonsense too, but the authors did not know that, relying too much on the anonymity of UQM to be able to give accurate results for a sensitive question like doping.
The survey needs to be refined, to address the flaws of UQM, and repeated, to give more accurate results. This is not just my wish, but also the conclusion of the authors: “we would urge continued use and refinement of this methodology to estimate the prevalence of doping in future sports events.”
casualrunner wrote:
It's definitely cleaner. If it was dirtier or just as dirty, there is no possible way the mile and 1500m records stand as long as they have with improved tracks and spikes, and no possible way the 5k and 10k records stand for as long as they did only for them to conveniently get broken during a lapse in testing.
10.54.
Karma Police wrote:
Jack50 wrote:
Yes, exactly. Everyone is doping at the top. All casual fans / uneducated are in denial.
This is just not true. I personally know Olympic finalists and even sprint semi-finalists, and they are just not doping.
Not everyone is.
You tested them?
Karma Police wrote:
Down with the IDMC wrote:
This year the men's shot put record that was about 30 years old was broken, the men's 400m hurdles record was beaten twice and the second time the record was taken to Steve Austin/Usain Bolt land, the women's 400m hurdles record was beaten in a similar Bionic Woman way, and another record from the 90s was beaten in the women's triple jump.
Unless it's one of the great coincidences of all time then something is up.
The 400H records were soft. Most people knew that. And it was a coincidence that you had 4 top athletes doing it.
All records that get broken were "soft". Especially when they are beaten by dopers.
rekrunner wrote:
casual obsever wrote:
Hahahahaha - this whole discussion was solely about your nonsense from three days ago, which you kept repeating too:
Wow you are funny - not.
And all your obvious trolling for a full three days came even though I was kind enough to avoid citing the abstract, for I know very well how much facts like that annoy you. In brief, with no interpretation whatsoever:
In full so you don't accuse me of leaving out important facts:
Look what I read today; more scientists seeing the 43.6% as the result, what a surprise:
10.1016/j.peh.2018.01.001
Bye bye.
That was poor phrasing. I stand corrected.
43.6% is visibly nonsensical (to visualize it, see Figure 3) because the fastest top-10% estimates 96%, and the fastest top-30% estimates 72%. These estimates from the fastest responses are inexplicably nonsensical. The speculative reasons: hasty, careless, didn’t read instructions, etc., confirm the view that the data should be deleted, and deletion is necessary in order to avoid a “serious problem” and a “significant overestimate”.
This was the whole purpose of keeping track of response times, and performing the response time analysis and the deletion exercise — to filter out the predicted hasty, and careless responses that can only contaminate the “real” estimate.
You can quote the abstract, but even the full abstract omits one important fact: one of the “Key Points” of the paper, according to the authors. This is the main problem with the abstract, and what makes it incompatible with the “Discussion” section of the paper.
It looks like you found more scientists who also copy/pasted the non-value added survey result without distinguishing between the raw survey estimate and the post-analysis finding. I’m sure you can probably find more. Again, the published data speaks for itself. 43.6% means what it means, with all the identified issues, and 31.4% means what it means, with all the identified issues.
But it’s true we got derailed, because while the authors knew, or must have known, or found out, that 43.6% was nonsense, it turns out that 31.4% is potentially nonsense too, but the authors did not know that, relying too much on the anonymity of UQM to be able to give accurate results for a sensitive question like doping.
The survey needs to be refined, to address the flaws of UQM, and repeated, to give more accurate results. This is not just my wish, but also the conclusion of the authors: “we would urge continued use and refinement of this methodology to estimate the prevalence of doping in future sports events.”
As you continue to show, flat-earthing takes many different forms.
Elaine Thompson-Herah would not have beaten Flo-Jo or Marion Jones at their peak. Put Elaine on an 80s/90s track with regular spikes, and her 10.54 would be worth maybe 10.69-10.72. Still a scary quick time, but not quite Flo-Jo. I believe the sport is as filthy dirty as it's always been. Various world records have proved that. Also, today's athletes are much more muscled, and younger athletes are running times that are just way too fast to be believed. Athletics isn't a growing sport. In fact I believe it's on a slow decline worldwide, yet I no longer believe what I'm seeing. The East German women couldn't dream of being as densely muscled as today's women athletes, and they all worked hard in the gym.
Holiday Inn Express wrote:
Anyone else here thinking rekrunner and Armstronglivs just need to get a room already?
I'll pass on that. A padded cell isn't my thing.
Armstronglivs wrote:
As you continue to show, flat-earthing takes many different forms.
Oh look who’s back, unsurprisingly with another awkward analogy. Let’s try another intelligence test:
Do you think the authors justified, through their sensitivity analyses, on the aggregate, that the negative biases were likely greater than the positive biases, despite having not measured any biases outside of the response time analysis?
Do you think that the UQM method is a reliably accurate method for doping?
I await your insight.
rekrunner wrote:
Armstronglivs wrote:
As you continue to show, flat-earthing takes many different forms.
Oh look who’s back, unsurprisingly with another awkward analogy. Let’s try another intelligence test:
Do you think the authors justified, through their sensitivity analyses, on the aggregate, that the negative biases were likely greater than the positive biases, despite having not measured any biases outside of the response time analysis?
Do you think that the UQM method is a reliably accurate method for doping?
I await your insight.
My insight, which I am happy to pass on, is that you have just disappeared up your rear.
rekrunner wrote:
Do you think the authors justified, through their sensitivity analyses, on the aggregate, that the negative biases were likely greater than the positive biases
Not a bad question – but then you had to add your spin onto its end. Regardless, the fact is that several scenarios were quantified, as detailed in the paper.
The answer is: Yes, the authors are correct in their conclusion that they likely underestimated the doping prevalence.
Here is a brief overview of the quantified results of the different sensitivity analyses, each giving a range of doping prevalence at Worlds depending on the chosen assumption (all part of the electronic supplement), sorted by prevalence:
Table 12A: 43.6% - 8.9%
Table 4: 43.6% - 29.9%
Table 7: 43.6% - 41.0%
Table 9C: 43.6% - 41.0%
Table 10C: 43.6% - 44.5%
Table 11: 43.6% - 44.5%
Table 10B: 43.6% - 46.0%
Table 9B: 43.6% - 46.7%
Table 10A: 43.6% - 47.6%
Table 9A: 43.6% - 51.4%
Table 8: 43.6% - 63.4%
Table 12B: 43.6% - 73.1%
In 4 of these scenarios the 43.6% would be an overestimate; in 8 it would be an underestimate.
Average: 44.8%; average discarding the two extreme values of Table 12: 45.6%. The 43.6% don't look so wrong now, do they?
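As a quick arithmetic check of the averages above (a sketch only: it simply averages the second endpoint of each listed range, which says nothing about how the scenarios should be weighted):

```python
# Second endpoint of each sensitivity-analysis range listed above, in the
# same order: Tables 12A, 4, 7, 9C, 10C, 11, 10B, 9B, 10A, 9A, 8, 12B
endpoints = [8.9, 29.9, 41.0, 41.0, 44.5, 44.5,
             46.0, 46.7, 47.6, 51.4, 63.4, 73.1]

avg_all = sum(endpoints) / len(endpoints)

# Discard the two Table 12 extremes (8.9 and 73.1)
trimmed = [x for x in endpoints if x not in (8.9, 73.1)]
avg_trimmed = sum(trimmed) / len(trimmed)

print(round(avg_all, 1))      # 44.8
print(round(avg_trimmed, 1))  # 45.6
```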
Hence the authors state in the results part of the abstract on page 1 (211), right after saying that the result is 43.6%:
Sensitivity analyses, assessing the robustness of these estimates under numerous hypothetical scenarios of intentional or unintentional noncompliance by respondents, suggested that we were unlikely to have overestimated the true prevalence of doping
And in the discussion part as the last sentence on page 8 (218):
there are numerous reasons to suspect that we may well have underestimated the true prevalence of doping among these athletes
All in all, I see nothing wrong, misleading, or incompatible in this paper. Likewise, neither do any of the several follow-up, peer-reviewed papers that cite this work with its 43.6% as the result.
And let's not forget that they asked only about doping in the last 12 months; dopers who stopped doping more than 12 months ago are not counted as dopers in this study.
I’ll try not to “spin”. Since you went to the trouble to average the tables, I will try to give my most fair assessment.
casual obsever wrote:
“Yes, the authors are correct in their conclusion that they likely underestimated the doping prevalence.”
We are generally in agreement about the authors' suggestion that they were unlikely to have overestimated — the disagreement is that you want to use 43.6% from the abstract, while I think it is correct to use their “at least 30%” from the “Key Points”.
Abstract:
The estimated prevalence of past-year doping was 43.6% …. Sensitivity analyses, assessing the robustness of these estimates under numerous hypothetical scenarios of intentional or unintentional noncompliance by respondents, suggested that we were unlikely to have overestimated the true prevalence of doping
Key Points:
After performing numerous sensitivity analyses, assessing the robustness of our estimates under various hypothetical scenarios of intentional or unintentional noncompliance by respondents, we found that the prevalence of past-year doping was at least 30% at WCA and 45% at PAG.
You asked what I don’t understand about that. Here it is in a nutshell: I don’t see how you can reconcile “Sensitivity analyses … suggested … (likely more than 44%)” with “After performing … sensitivity analyses, … we found that the prevalence of past-year doping was at least 30% at WCA”
I appreciate what you are trying to show, but there are a number of problems with your “average of all ranges” approach:
1) Table 4 is not like Tables 7 through 12. For a demonstration that they are not the same look at Table 14, and look for Table 4 in Table 14 — you will find Tables 7-12 but not Table 4. Table 4 is not considered a “form of non-compliance”. Table 4 are real results they observed, with a likelihood of 100%, while Tables 7-12 are theoretical models, which need an input they did not observe.
2) Averaging them somehow gives them equal weight or likelihood. The reality is that some are plausible/likely while others are not. Some are even mutually exclusive, like 9A/9B/9C and 10A/10B/10C — you can only pick one, and the right one might be a 9G and a 10G they didn’t calculate.
3) All of your ranges are relative to 43.6% — assuming the very thing under dispute.
The “not a bad question” was whether Tables 7-12 — something they didn’t measure and whose likelihood they cannot assess — could plausibly offset Table 4 — something they did measure — while avoiding the “serious problem” and “significant overestimate”.
rekrunner wrote:
You asked what I don’t understand about that. Here it is in a nutshell: I don’t see how you can reconcile “Sensitivity analyses … suggested … (likely more than 44%)” with “After performing … sensitivity analyses, … we found that the prevalence of past-year doping was at least 30% at WCA”
I don't see a contradiction between
1) I have at least $30 in my pocket (even under two pessimistic assumptions)
and
2) I have likely more than $44 in my pocket (all things considered).
rekrunner wrote:
1) Table 4 are real results they observed, with a likelihood of 100%, while Tables 7-12 are theoretical models, which need an input they did not observe.
Not really 100%.
Yes, Table 4 is different than e.g. Table 8.
Table 4 is based on the observation that the likelihood of saying yes to having doped decreases with increasing response time. The authors did not and could not measure whether the first 1, 2, ... or 50% gave a wrong (too hasty) response.
We then considered the possibility that athletes with very fast response times might produce unreliable answers as a result of haste or carelessness (as discussed in the main text of this paper). Therefore, for each of the three items, we divided the athletes' response times into deciles, and then performed sensitivity analyses in which we progressively deleted from the total population those deciles of athletes showing the fastest responses (e.g., the fastest 10%, 20%, 30%, 40%, and 50%).
"considered" / "possibility" / "might" -> those are not observations but assumptions.
The authors then calculate the changes assuming the first 0, 10, 20, 30, 40, 50% were too hasty, resulting in 44, 38, 34, 31, 31, 30% of dopers. The authors do not know whether the first 10, 20, 30, 40, or 50% were too hasty. Those are not "real results they observed, with a likelihood of 100%".
Table 8 is based on the observation from earlier studies that some respondents do not admit their sins truthfully, not trusting the anonymity (“underreported doping” because “self-protective”).
doping athletes directed to Question B might have lied and answered a self-protective ‘‘no’’ despite the assurance of anonymity.
"might".... see above
The authors then calculate the changes assuming 0, 10, 20, 30% lied, resulting in 44, 49, 55, 63% of dopers. The authors do not know whether 0, 10, 20, or 30% lied. Those are also not "real results they observed, with a likelihood of 100%".
rekrunner wrote:
3) All of your ranges are relative to 43.6% — assuming the very thing under dispute.
Exactly - each sensitivity analysis was relative to 43.6%, resulting in the comment that the 43.6% was likely not an overestimation when considering all sensitivity analyses.
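For anyone following along, the unrelated question method (UQM) being debated works like this in general form (a sketch only; the probabilities below are illustrative placeholders, not the actual design values from the study):

```python
def uqm_prevalence(yes_rate, p_sensitive, innocuous_prev):
    """Unrelated-question-model point estimate of a sensitive behaviour.

    Each respondent is randomly directed either to the sensitive question
    (with probability p_sensitive) or to an innocuous question whose
    population 'yes' rate (innocuous_prev) is known.  Only the combined
    'yes' fraction (yes_rate) is observed, so:
        yes_rate = p_sensitive * pi + (1 - p_sensitive) * innocuous_prev
    and we solve for the sensitive prevalence pi.
    """
    return (yes_rate - (1 - p_sensitive) * innocuous_prev) / p_sensitive

# Illustrative numbers only: 40% observed 'yes', half the respondents
# directed to the sensitive question, innocuous question true for 35%.
print(round(uqm_prevalence(0.40, 0.5, 0.35), 2))  # 0.45

# The self-protective-'no' bias discussed above: if a fraction f of the
# sensitive group lies, the estimator recovers pi * (1 - f), i.e. an
# underestimate; correcting for it divides the estimate by (1 - f).
```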
casual obsever wrote:
rekrunner wrote:
You asked what I don’t understand about that. Here it is in a nutshell: I don’t see how you can reconcile “Sensitivity analyses … suggested … (likely more than 44%)” with “After performing … sensitivity analyses, … we found that the prevalence of past-year doping was at least 30% at WCA”
I don't see a contradiction between
1) I have at least $30 in my pocket (even under two pessimistic assumptions)
and
2) I have likely more than $44 in my pocket (all things considered).
rekrunner wrote:
1) Table 4 are real results they observed, with a likelihood of 100%, while Tables 7-12 are theoretical models, which need an input they did not observe.
Not really 100%.
Yes, Table 4 is different than e.g. Table 8.
Table 4 is based on the observation that the likelihood of saying yes to having doped decreases with increasing response time. The authors did not and could not measure whether the first 1, 2, ... or 50% gave a wrong (too hasty) response.
We then considered the possibility that athletes with very fast response times might produce unreliable answers as a result of haste or carelessness (as discussed in the main text of this paper). Therefore, for each of the three items, we divided the athletes' response times into deciles, and then performed sensitivity analyses in which we progressively deleted from the total population those deciles of athletes showing the fastest responses (e.g., the fastest 10%, 20%, 30%, 40%, and 50%).
"considered" / "possibility" / "might" -> those are not observations but assumptions.
The authors then calculate the changes assuming the first 0, 10, 20, 30, 40, 50% were too hasty, resulting in 44, 38, 34, 31, 31, 30% of dopers. The authors do not know whether the first 10, 20, 30, 40, or 50% were too hasty. Those are not "real results they observed, with a likelihood of 100%".
Table 8 is based on the observation from earlier studies that some respondents do not admit their sins truthfully, not trusting the anonymity (“underreported doping” because “self-protective”).
doping athletes directed to Question B might have lied and answered a self-protective ‘‘no’’ despite the assurance of anonymity.
"might".... see above
The authors then calculate the changes assuming 0, 10, 20, 30% lied, resulting in 44, 49, 55, 63% of dopers. The authors do not know whether 0, 10, 20, or 30% lied. Those are also not "real results they observed, with a likelihood of 100%".
rekrunner wrote:
3) All of your ranges are relative to 43.6% — assuming the very thing under dispute.
Exactly - each sensitivity analysis was relative to 43.6%, resulting in the comment that the 43.6% was likely not an overestimation when considering all sensitivity analyses.
I think it has been established that neither of you will agree on what the survey said. Of course you won't. Casual obsever's understanding of what it says, which would suggest doping by nearly 1 in 2 athletes, can never be accepted by rekrunner. So his practised denying goes on.
Armstronglivs wrote:
Karma Police wrote:
The 400H records were soft. Most people knew that. And it was a coincidence that you had 4 top athletes doing it.
All records that get broken were "soft". Especially when they are beaten by dopers.
Exactly.
9.69 was soft.
19.30 was soft.
43.18 was soft.
1:41.01 was soft.
2:12.18 was soft.
12:35.36 was soft.
26:17.53 was soft.
2.44m was soft
8.90m was soft.
23.12m was soft.
6.16m was soft.
And many more soft records which were broken.
too obvious wrote:
All records that get broken were "soft". Especially when they are beaten by dopers.
12:35.36 was soft.
And many more soft records which were broken.
12:37.35
When a doper breaks some record, it is more indicative of a soft record than when a non-doper breaks some record? That's your very own logic, Armstronglivs?
I don't want to judge which WR is soft, but for a world record to be clean:
The most talented clean athlete has to be over 1-4% better than the most talented doper (depending on the discipline and expert), despite the high prevalence of dopers.
For example, banned drug-cheat coach Salazar estimated a gain of 2 minutes in the marathon, doping experts Ashenden and Parisotto estimated 15-30 seconds over 5,000 m, and Schumacher up to 60 seconds over 10,000 m.
Taking the marathon as the example:
WR = 2:01:39
WR + 2 min = 2:03:39
14 athletes (not counting Kipchoge) have run between 2:01:39 and 2:03:39. Any clean runner of those 14 could have run the WR – according to Salazar – if he doped. Imagine the temptation.
If you think Kipchoge was clean: he could have brought the official WR down to sub-2, no need for illegal conditions other than doping. Same goes for Bekele. Imagine the temptation.
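The marathon arithmetic above is easy to verify (the 2-minute figure is the Salazar estimate quoted earlier; the times are the ones in the post):

```python
def to_seconds(t):
    """Parse 'H:MM:SS' into seconds."""
    h, m, s = (int(x) for x in t.split(":"))
    return h * 3600 + m * 60 + s

def to_hms(sec):
    """Format seconds back into 'H:MM:SS'."""
    return f"{sec // 3600}:{sec % 3600 // 60:02d}:{sec % 60:02d}"

wr = to_seconds("2:01:39")   # the WR cited above
gain = 120                   # Salazar's estimated 2-minute gain

print(to_hms(wr + gain))  # 2:03:39 -- the cutoff used in the post
print(to_hms(wr - gain))  # 1:59:39 -- why a sub-2 official WR is implied
```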
Note also that one of Kipchoge’s coach’s athletes is currently banned for blood-doping (2019 – 2023).