Polling outfits don't calibrate their party ID and go from there; they ask the people they're surveying what their party ID is and then report the answer. They were consistently getting between D+6 and D+8, and what do you know!
I suppose you are right, but if a polling outfit got a result of R+5 in Michigan, would they just run the results? No, they would likely throw it out and try again. There's a baseline of expectation they're operating on, based on past history.
I'm not an expert on survey science, but I'm pretty sure that "throwing out" a poll because it gives you an unexpected result is bad science and bad ethics. You'd publish and include the caveat that you think there's a good chance you botched it somehow.
And in any case, the baseline expectation was being set by other polls. When the vast majority of your polls show something, and it largely conforms to the results from different pollsters, the unbiased thing is to conclude that the polls are probably approaching the truth, unless there's something in the data showing a statistical bias (a certain gender, ethnicity, or locality being significantly over- or undersampled relative to its actual representation). Chambers et al were starting from their own intuition and then going back and bending the data towards it, rather than vice versa.
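To make the "over- or undersampled" check concrete, here's a minimal sketch of comparing a sample's demographic shares to known population shares and reweighting. All group names, shares, and support numbers are made up for illustration; real pollsters use far more strata and methods like full post-stratification or raking.

```python
# Sketch: detect over/undersampling in a poll sample and reweight it.
# Every number below is a made-up illustrative value, not real data.

population_share = {"men": 0.49, "women": 0.51}   # known actual representation
sample_share     = {"men": 0.60, "women": 0.40}   # what the poll actually drew

# Flag any group sampled well above or below its real-world share.
for group in population_share:
    ratio = sample_share[group] / population_share[group]
    if abs(ratio - 1.0) > 0.10:
        print(f"{group}: sampled at {ratio:.2f}x their actual share")

# Hypothetical per-group support for one candidate within the sample.
raw_support = {"men": 0.45, "women": 0.55}

# Unweighted estimate reflects the skewed sample; the reweighted one
# counts each group at its actual population share instead.
unweighted = sum(raw_support[g] * sample_share[g] for g in sample_share)
weighted   = sum(raw_support[g] * population_share[g] for g in population_share)
print(f"unweighted: {unweighted:.3f}, reweighted: {weighted:.3f}")
```

The point of the sketch is that a legitimate reason to adjust a poll is a measurable mismatch between sample and population, not a gut feeling that the topline "should" look different.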
edit: I get what you're saying, and it would've been reasonable for Republicans to expect 2010 turnout and Democrats to expect 2008 turnout if only a small amount of data had been available leading up to the election. But there was a ton of data, and I think it's pretty clear now that aggregated poll results leading up to an election are a lot more predictive than wishful thinking and gut feelings about enthusiasm.
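For what "aggregated poll results" means mechanically, here's a toy sketch: averaging poll margins weighted by sample size, so one outlier gets diluted rather than discarded. The margins and sample sizes are invented for illustration (real aggregators like FiveThirtyEight also weight by recency and pollster quality).

```python
# Sketch: combine several polls into one estimate, weighting by sample size.
# Margins are D minus R in points; all values are made-up illustrative numbers.

polls = [
    {"margin": 7.0, "n": 1000},
    {"margin": 6.0, "n": 800},
    {"margin": 8.0, "n": 1200},
    {"margin": -5.0, "n": 600},  # a lone R+5 outlier, kept rather than thrown out
]

total_n = sum(p["n"] for p in polls)
aggregate = sum(p["margin"] * p["n"] for p in polls) / total_n
print(f"sample-size-weighted average margin: {aggregate:+.1f}")
```

Note that the outlier pulls the average down a bit but doesn't dominate it, which is the honest alternative to re-fielding until the number looks right.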