OK, I'm going to take off my partisan hat and put on my statistician hat. Virtually every Republican pollster, pundit, and party official was wildly wrong about this election, and many, like Karl Rove, ended up embarrassing themselves on election night and after. All seemed genuinely surprised by the outcome and met it with frank disbelief. Of course, Nate Silver and a number of other pollsters and poll aggregators were right on the money (although I would note that Nate has had extremely good luck, given that in 2008 he used no underlying economic data and in 2012 he used an extremely crude six-variable model). So it was possible to predict these results accurately, and quite a few folks actually did. There is an old axiom that to be disillusioned, you first must have illusions. The Lefties and the Nerds generally got it right, and the Republicans and conservatives missed it altogether.

This is an epic fail in prediction, and it has two important dimensions. In addition to the obvious horse-race dimension of who is winning and who is losing, there is another. If a poll can't measure what it purports to measure (statistical "validity") in one crucial area, are its results valid in other areas? This is the real reason for a Republican pollster postmortem. If they thought in the last days that Pennsylvania was in play, that North Carolina was a lock, and that they were ahead in Iowa, Virginia, and Colorado, then not only did they misallocate resources at the end of the campaign, they threw doubt on the content of those efforts. How were Colorado Hispanics going to vote? As Catholics or as Hispanics? Would the south Florida Cuban vote hold? How would Iowa farmers react to a Republican House screwing up a farm bill? I'm not arguing that the policies should have been better (that's a whole other discussion), but that the pollsters might not have had a grasp of which issues were winning or losing for them state by state.
They were boxing blindfolded and didn't see the sucker punches coming. So what went wrong with the polling? Most people are aware that the phrasing of questions and their placement in the interview can have a dramatic effect on the answers people give. That's true, and it is also a major factor in whether a particular poll predicts well and is reliable and valid. But the published polls also include the exact wording and placement, so these issues are subject to peer review. I really don't see an argument that this kind of problem introduced any systematic bias into the Republican polls. These are professionals, after all, and they generally know their craft. So put to bed the notion that the Republican polls were intentionally slanted or incompetent.

There are two other important choices a pollster must make, and these are often closely guarded secrets. One is the definition and filters for the "likely voter," and the other is the weighting to be given each demographic in the sample. This is where things went horribly wrong. Nate Silver uses historical data to determine a "house effect" to measure this for each pollster. For example, Pew is the most left-leaning poll, routinely giving a 3.2% edge to Democratic candidates COMPARED TO THE AVERAGE FOR ALL POLLS. Conversely, Gallup is tilted 2.5% toward the Republican candidate. CNN comes closest to the average with a 0.4% Democratic edge, and Fox News has a 1.5% Republican bias. For Nate Silver's comments and other pollster ratings, see Calculating 'House Effects' of Polling Firms - NYTimes.com. Republican candidates and pundits apparently believed that their polls were correct and all others were wrong. House effects always net out to zero, i.e., the sum of deviations from the average for all polls has to be zero. PPP leans Democratic and Rasmussen leans Republican; big surprise. But how accurate is that average? This is what poll aggregators try to answer.
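To make the "net out to zero" point concrete, here is a minimal sketch of a house-effect calculation. This is not Silver's actual method (his is regression-based and far more elaborate); it simply demeans each firm's average margin against the overall average. The firm names are real, but the margins are made-up illustrative numbers.

```python
# Simplified "house effect" sketch: each firm's average deviation from
# the overall polling average. Margins are hypothetical (Dem minus Rep,
# in points), NOT real 2012 data.
from statistics import mean

polls = {
    "Pew":      [+5.0, +4.0],
    "Gallup":   [-1.0,  0.0],
    "CNN":      [+2.0, +1.0],
    "Fox News": [ 0.0, +1.0],
}

# Average across every individual poll.
overall = mean(m for margins in polls.values() for m in margins)

# Each firm's house effect is its mean margin minus the overall mean.
house_effects = {firm: mean(margins) - overall
                 for firm, margins in polls.items()}

# With equal poll counts per firm, the effects sum to exactly zero:
# a firm can only lean left or right RELATIVE to the field.
```

The demeaning step is why a house effect says nothing about which pollster is *accurate*; it only measures lean relative to the pack, which is exactly the "how accurate is that average?" question the aggregators try to answer.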
Nate Silver has a regression model that factors in current economic data, time-weighted poll results, and tendencies from prior elections. He tries to improve on the movement of the poll average by adjusting for a number of other factors, and he has been pretty accurate. There are three or four others who try to do the same thing. I generally follow three of them each election; their results are generally similar, diverging in only a couple of states and correctly identifying the true "toss-up" states.

So what were the Republicans looking at that gave them such a different view? I think there were three effects that distorted it. First, they thought that their polls were better than others in some technical respects. This caused them to underestimate the turnout of black and Hispanic voters and skewed the predictions. Second, they got addicted to "horse race" polls that showed their candidate winning. While a certain amount of this is usual politics (you want to publish the good polls and quietly file the others), it seems that some in the Republican ranks didn't realize that they were getting poll disclosures meant for public consumption and using them to gauge their campaign success and strategy. Finally, they had an ideological position on what the "real voters" believed and were motivated by, and this caused them to suspect polls that did not confirm their views ("confirmation bias"). In short, they drank their own Kool-Aid. This turned out to be a fatal flaw and explains a number of missteps in the campaign. But the fault is not with the pollsters. They will correct their "likely voter" profile for next time and pay more attention to why their polls and other polls diverge. Again, they are professionals, and that is what professionals do when they get it wrong.
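As an aside, the "time-weighted poll results" ingredient mentioned above can be sketched in a few lines. This is a generic exponential-decay average with a hypothetical half-life, not Silver's actual weighting scheme, which also accounts for pollster ratings, sample size, and economic fundamentals.

```python
# Sketch of a time-weighted poll average: newer polls count more,
# with weight halving every `half_life_days`. The half-life value
# here is an arbitrary assumption for illustration.
import math

def time_weighted_average(polls, half_life_days=7.0):
    """polls: list of (margin, age_in_days) tuples."""
    decay = math.log(2) / half_life_days
    weights = [math.exp(-decay * age) for _, age in polls]
    return sum(w * m for w, (m, _) in zip(weights, polls)) / sum(weights)

# A poll from today outweighs an opposite poll from a week ago,
# so the average tilts toward the fresher number.
recent_heavy = time_weighted_average([(+2.0, 0), (-2.0, 7)])
```

The point of the decay is the same one the aggregators make: a simple average of all polls ever taken would be dominated by stale data, while a time-weighted one tracks the race as it moves.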
The fault is with the candidates and campaign managers who deluded themselves during the election, believing that because what they wanted to happen was "right" and "true," it was not possible for contrary poll numbers to be correct.