EPIC Flagpole wrote:
FP is a LOT smarter than you wrote:Really, Mr. "engineering degrees" (oooooooooohh!) ? Is that why you have never even attempted to show why the following quote (from you, I believe) makes any sense at all (after being invited to do so over and over again)?
"3.3 is off of the actual result of 2.1 by a factor of 1.57 (or "57% larger than 2.1"). That is not at all "cold clear accuracy." The bulk of the results (the "+4") were off the actual outcome by a factor of 2. That is not good."
Please feel free to explain why dividing these two margins of victory (one actual, one from the polls) makes any sense at all. Otherwise, yeah, you've been exposed.
Can't wait for your explanation. This ought to be good.
Since you're clearly trolling now (having been owned quantitatively), I'll try to keep this succinct. But I really do want you to learn how to analyze data, so it may run longer than I'd like.
1) My original message questioned Agip's description of the polls as "cold clear accuracy." There are many different ways to show that the polls were not accurate in a statistical sense (as opposed to a relative, "better than other polls" sense... which doesn't mean anything), and any of them will do. Following? Ok...
2) I have in fact already explained why the quote you keep hanging on to from my original message does make sense. I'll just copy from my follow-on message, since it answered the call:
"My lord you're dense. It certainly is meaningful and relevant, as being off by such a factor is not "cold clear accuracy", which was my point [that you continue to omit for some reason]. If the figure was 2.1 vs 2.2, or even 2.1 vs 2.5, I'd agree it was pretty accurate, as these are both low enough figures AND within an acceptable tolerance to be considered accurate (the standard deviation and confidence interval also supports this theory by the way). 2.1 vs 3.3, especially when including an outlier like the +2 for Trump, certainly does not reflect "cold clear accuracy", which again, was the point. (Removing the outlier results in 2.1 vs 3.8, which is far less "accurate", considering just a single poll in the sample even matches the 2.1 figure!)"
So, there you go. It's not about "dividing two numbers," as you stubbornly keep framing it. It's about realizing that being that far off, even when dealing with single-digit numbers, is not "accuracy" (again, the basis of this entire discussion) given the number of polls we had at our disposal. This wasn't a single poll we're dealing with; this was hundreds of polls, with 90%+ falling on one side of the eventual outcome.
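To spell out the arithmetic behind that quote, here's a quick sketch (using the 3.3 and 2.1 figures quoted above; nothing exotic):

```python
polled = 3.3   # polling-average margin, Clinton +3.3
actual = 2.1   # actual popular-vote margin, Clinton +2.1

ratio = polled / actual            # how many times larger the polled margin was
pct_larger = (ratio - 1) * 100     # the same thing as a percentage

print(f"{ratio:.2f}x, or {pct_larger:.0f}% larger")  # → 1.57x, or 57% larger
```

That's the whole "factor of 1.57": the polled margin was 57% larger than what actually happened.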
But I realized that I could do better... Still following?...
3) As I mention in #2, you can go a step beyond the basic five-second assessment I applied originally by using traditional statistical methods.
Confidence intervals are a great tool to use here because they test the reliability of the estimation procedure (the polls themselves, in this case).
Let's use these final 10 polls from the RealClearPolitics Poll of Polls:

Poll                     Dates         Sample    MoE  Clinton  Trump  Spread
Bloomberg                11/4 - 11/6    799 LV   3.5   46      43     Clinton +3
IBD/TIPP Tracking        11/4 - 11/7   1107 LV   3.1   43      42     Clinton +1
Economist/YouGov         11/4 - 11/7   3669 LV   --    49      45     Clinton +4
LA Times/USC Tracking    11/1 - 11/7   2935 LV   4.5   44      47     Trump +3
ABC/Wash Post Tracking   11/3 - 11/6   2220 LV   2.5   49      46     Clinton +3
FOX News                 11/3 - 11/6   1295 LV   2.5   48      44     Clinton +4
Monmouth                 11/3 - 11/6    748 LV   3.6   50      44     Clinton +6
NBC News/Wall St. Jrnl   11/3 - 11/5   1282 LV   2.7   48      43     Clinton +5
CBS News                 11/2 - 11/6   1426 LV   3.0   47      43     Clinton +4
Reuters/Ipsos            11/2 - 11/6   2196 LV   2.3   44      39     Clinton +5
Right off the bat you can see that "Clinton +2" is not listed even once, and the results lean strongly to one side of that number. The most common value is Clinton +4, and eight (8 of 10!!!) of the values sit on one side of the ultimate outcome. "Cold clear accuracy," you say? Come on.
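That tally is trivial to verify; a quick sketch (spreads in the table's order, with Trump +3 entered as -3):

```python
from collections import Counter

# Clinton margin in each of the ten polls above (negative = Trump lead).
margins = [3, 1, 4, -3, 3, 4, 6, 5, 4, 5]
actual = 2.1  # the eventual popular-vote result, Clinton +2.1

above = sum(m > actual for m in margins)
below = sum(m < actual for m in margins)
print(above, below)                      # → 8 2
print(Counter(margins).most_common(1))   # → [(4, 3)]  (Clinton +4 appears three times)
```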
Moving on... Take the sample values -3, 1, 3, 3, 4, 4, 4, 5, 5, 6 (the ten spreads above, with Trump +3 as -3; the mean is 3.2) and calculate the standard deviation and confidence intervals (Google a calculator).
The actual outcome of 2.1 starts falling outside the confidence interval at about the 84% level, meaning an outcome that far from the polling mean should occur only about 16% of the time. That 16% is split equally between the two tails of the hypothetical bell curve, so 16/2 = 8%: the actual outcome being 2.1 or smaller was likely to occur just 8% of the time. And that 8% covers Clinton +2.1 plus every number in Trump's direction (Clinton +2.0, +1.9... 0... all the way over to Trump +100, haha). So it was probably only about a 3% chance that the result would land right around Clinton +2.1.
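For anyone who wants to reproduce the confidence-interval step without a web calculator, here's a sketch in Python. It assumes the calculator used the population standard deviation and a normal curve (my assumption, since that roughly reproduces the ~84%/8% figures); with the sample (n-1) standard deviation the cutoff lands nearer 82%, and a t-distribution shifts it a few more points, but the conclusion is the same either way.

```python
import math
from statistics import NormalDist, mean, pstdev

# The ten final-poll spreads (Trump +3 entered as -3).
margins = [-3, 1, 3, 3, 4, 4, 4, 5, 5, 6]
actual = 2.1  # actual Clinton popular-vote margin

m = mean(margins)                       # 3.2
s = pstdev(margins)                     # population std dev, ≈ 2.44 (assumption:
                                        # the online calculator used this form)
se = s / math.sqrt(len(margins))        # standard error of the mean, ≈ 0.77

z = (m - actual) / se                   # ≈ 1.42 standard errors below the mean
coverage = 2 * NormalDist().cdf(z) - 1  # two-sided CI level where 2.1 hits the edge, ≈ 0.85
one_tail = 1 - NormalDist().cdf(z)      # chance of an outcome at 2.1 or lower, ≈ 0.077

print(f"mean {m:.1f}, sd {s:.2f}, CI level {coverage:.0%}, lower tail {one_tail:.1%}")
```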
So... "Cold clear accuracy"? I think not.
...But go ahead, come back and call me names; that'll show how much I was "exposed". LOL.