Really? You are doubling down, arguing against the clear findings of a peer-reviewed study? Sorry for the length, but virtually everything you said is nonsensical. Your main objections appear to be that 2012 was not a "follow-up" of 2018, and that only 2 of the 3 alternative methods count as a "cross-check". Even accepting these objections, all of the main points still stand.
The peer-reviewed, published takeaway, well summarized in the abstract (something I thought you would appreciate), is that the UQM approach of the "2018" paper (the survey was actually conducted in 2011, with a well-known delay in publication) has not yet been proven reliably accurate for sensitive questions like doping. Not before 2018, and not after 2018.
In the 2012 "inflated estimate" study, there was a large discrepancy between UQM and THREE other methods that were all part of the same study: 1) SSC, 2) "social projection", and 3) "network scale-up", all administered to the same participants. It's a different athlete population, so the actual percentages don't matter, but a 300% discrepancy cannot be ignored, and since the WADA/IAAF-approved "2018" Tuebingen paper came later, it had every opportunity to address this discrepancy and did not.
"WADA-paid" is a lame appeal to emotion, not to mention nonsensical.
“WADA-paid” Petroczi is an author of both papers, and WADA paid for the “2018” Tuebingen study too.
The data in the 2012 "inflated estimate" paper are neither extrapolated from other cases nor wishful thinking; they were generated by and for that study.
Your other words are indeed other words. Your different conclusion is more hypothesis than conclusion, and appears to be something you just invented now for the sake of argument.
For one, UQM is not more anonymous than SSC. SSC has all the features of UQM, including anonymity, plus the added ability to estimate the magnitude of non-compliance. The UQM authors of the "2018" Tuebingen study unfortunately have no way to estimate survey non-compliance; this is the main failing. They cannot measure any form of non-compliance, which is required for all the models they generated, and therefore cannot determine the bias between the survey result and the true prevalence.
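To make the SSC point concrete, here is a minimal sketch of the Single Sample Count idea. All numbers (prevalence, item probabilities, sample size) are illustrative assumptions, not figures from either study. Each respondent reports only how many of five yes/no statements apply to them: four innocuous items with a known probability of 0.5 each, plus the sensitive doping item, so a bare count never reveals the individual's answer to the sensitive item.

```python
import random

random.seed(7)

true_prev = 0.10  # assumed true prevalence, for illustration only
n = 100_000

total = 0
for _ in range(n):
    count = sum(random.random() < 0.5 for _ in range(4))  # innocuous items
    count += random.random() < true_prev                  # sensitive item
    total += count

# E[count] = 4 * 0.5 + prevalence, so subtracting 2 recovers the estimate.
# Because the innocuous part has a known Binomial(4, 0.5) shape, departures
# from that shape in the observed counts can flag non-compliance, which is
# the diagnostic UQM lacks.
estimate = total / n - 2.0
print(f"SSC prevalence estimate: {estimate:.3f}")
```

With compliant respondents the estimator recovers a value near the assumed prevalence; the known distribution of the innocuous items is what gives SSC its built-in compliance check.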
A 15-18% figure among IAAF World Championship athletes does not contradict a 19.8% estimate for a different, "sub-elite athletic" population in the 2012 "inflated estimate" study.
Furthermore, the 15-18% from the IAAF studies is among endurance athletes. It still makes complete sense that the blood doping prevalence among elite endurance athletes would be of a similar magnitude to the overall doping prevalence across the larger population of all elite athletes in all events.
Are athletes falsely claiming they cheated? That is one naive interpretation. Guaranteed anonymity is supposed to reduce any motivation to "lie". This is what makes much of the "discussion" in the "2018" paper nonsensical: the authors discuss the motivation and likelihood to lie out of fear when responding to a doping question, despite recommending and using a method that should reduce that fear to a negligible level.
The main answer is apathy: the athletes weren't being truthful or lying, but simply didn't care to take the time to participate in the survey correctly according to the instructions. In the Tuebingen study, the authors recommended discarding the fastest 30% of responses because those respondents were hasty, didn't read the instructions, and just automatically tapped "yes" on all the screens, visibly skewing the results by roughly 12 percentage points (31.4% versus 43.6%); the authors knew that 43.6% was visibly nonsensical. In the 2012 "inflated estimate" study, the authors hypothesized that a significant number of respondents, rather than answering the sensitive doping question, switched questions after the initial selection and truthfully answered the birthday question instead, disproportionately skewing the results toward 50%.
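The question-switching mechanism above can be sketched with a small simulation. The parameters (assignment probability, birthday "yes" rate, true prevalence, switch rate) are illustrative assumptions, not values from either paper; the point is only the direction of the bias when respondents answer the birthday question instead of the sensitive one.

```python
import random

random.seed(1)

p = 0.7           # assumed chance of being assigned the sensitive question
pi_u = 0.5        # known "yes" rate of the unrelated birthday question
true_prev = 0.10  # assumed true doping prevalence, for illustration
n = 100_000

def uqm_estimate(switch_rate):
    """Fraction `switch_rate` of respondents assigned the sensitive
    question ignore the instructions and answer the birthday question."""
    yes = 0
    for _ in range(n):
        sensitive = random.random() < p
        if sensitive and random.random() < switch_rate:
            sensitive = False  # non-compliant switch to the birthday question
        yes += random.random() < (true_prev if sensitive else pi_u)
    lam = yes / n
    # The standard UQM estimator, which assumes full compliance:
    return (lam - (1 - p) * pi_u) / p

compliant = uqm_estimate(0.0)
switched = uqm_estimate(0.2)
print(f"full compliance: {compliant:.3f}")  # recovers roughly true_prev
print(f"20% switching:   {switched:.3f}")   # pulled toward pi_u (50%)
```

Algebraically, with switch rate s the estimator returns (1 - s) * true_prev + s * pi_u, so even modest non-compliance drags the estimate toward the birthday question's 50% rate, which is exactly the inflation the 2012 paper describes and which UQM alone cannot detect or correct.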
Neither study's authors answered why so many athletes fail to comply, but the necessary refinement is to identify the degree and types of non-compliance, and/or to find ways to reliably cancel the effect of non-compliance. As I showed you earlier, even the Tuebingen authors concluded that the UQM protocol they used needs refinement.