The Myth of the Monolithic Wine Palate

If you have more than a passing interest in wine, you’ve no doubt heard some form of this common complaint: wine critic Robert Parker’s palate, with its emphasis on ‘hedonistic fruit bombs,’ has ruined the wine world, because now everyone makes (unappealing/monstrous/one-dimensional/sweet/spoofulated/choose-your-adjective) wines that taste the same and have the singular goal of a high point score from Parker.

I have long maintained that this “sky is falling” point of view (perhaps best typified by the irresponsible polemic Mondovino), and in particular its demonization of Robert Parker’s palate as monolithic, represents a sort of irrational fanaticism with little basis in reality.

My observations, for as long as I have been following the world of wine criticism, have led me to believe that, contrary to the whining and accusations of many, most of the world’s top wine critics largely agree with Parker about most of the top wines of the world.

And now there’s actually been a study that seems to bolster my anecdotal convictions. Conducted by the Center for Hospitality Research at Cornell University, this recently released study was commissioned to examine the hypothesis that the ordered ranking of Bordeaux Chateaux into First Growths, Second Growths, etc. that has been in place since 1855 may no longer be truly accurate. In the process of testing this hypothesis, the researchers have produced the only statistical analysis I have ever seen that compares the rankings of major wine critics across similar wines. And while it was not the purpose of their research, their findings on the correlation of scores between The Wine Spectator, Robert Parker, and Stephen Tanzer are quite remarkable.

In short: these three sources are in near complete agreement on which wines are the best, and they have been for three decades. This result utterly refutes the idea that somehow Parker’s “skewed” palate has driven the wine market to a place that it would not have otherwise gone on its own.

Here’s one of the charts from the report that pretty much says it all:

[Chart: ratings by chateau from Robert Parker, The Wine Spectator, and Stephen Tanzer]

This graphic shows the ratings for nearly 50 of the top wines of the Medoc region of Bordeaux by these three critical sources. The researchers’ primary findings are nicely visualized here: there are incredibly strong correlations between all three raters as to which wines are better and by how much, and the differences between the raters are consistent. Parker rates higher (by about a third of a point) than the Spectator, which in turn runs about a point higher than Stephen Tanzer. Across 30 years of data, even where these raters significantly disagree, the gap is rarely more than two or three points.

The only way this study could have proved my suspicions any better is if it had included scores from European critics like Jancis Robinson, Steven Spurrier, Michel Bettane, and Michael Broadbent.

But luckily enough, there’s a fairly easy way to answer that “what if?”, thanks to a phenomenally useful site called Bordoverview.com, which lists the scores for several hundred top Bordeaux wines across the past four vintages and a huge range of critics, including Parker, Robinson, Bettane, and the Spectator. A quick pass through the data on that site should be enough to put a nail in the coffin of the myth of the monolithic palate once and for all.

A comparison of the top 20 wines from each critic in every vintage since 2004 yields an overlap of more than 60%. I didn’t have the time (or the skill) to grab all the scores and run a regression analysis on them, but I’d bet good money they’d show the same level of correlation and internal consistency found by the Cornell study.
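For anyone who wants to repeat that overlap check with data pulled from the site, the arithmetic is trivial. Here is a tiny Python sketch; the chateau names and shortened top-5 lists are hypothetical examples, not actual rankings from Bordoverview.com:

```python
# Sketch of the "top-N overlap" comparison between two critics.
# NOTE: these top-5 lists are hypothetical examples, not real rankings.
def overlap_pct(list_a, list_b):
    """Percentage of wines appearing in both critics' top-N lists."""
    shared = set(list_a) & set(list_b)
    return 100 * len(shared) / len(list_a)

parker_top5 = ["Latour", "Lafite", "Margaux", "Ausone", "Petrus"]
robinson_top5 = ["Lafite", "Latour", "Cheval Blanc", "Margaux", "Haut-Brion"]

print(f"{overlap_pct(parker_top5, robinson_top5):.0f}% overlap")  # 3 of 5 shared
```

Run against full top-20 lists for each vintage, the same function gives the overlap percentages described above.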

Of course, there will be people who will say, “well, that’s just the top Chateaux of Bordeaux, what about California, or Burgundy, or Italy, or Australia?” It certainly would be great to do this sort of analysis on scores from the critics for all those regions. But the reality is that most wine critics don’t cover all those regions equally. Bordeaux, and the Left Bank in particular, is the ultimate benchmark for wine critics — every major critic covers nearly every one of these wines every year, and these are ostensibly the best wines on the planet, judging only by broad historical market prices and demand.

So let’s just put this one to rest, shall we? If anyone wants to persist in the argument that Robert Parker is ruining wine for the world, then they need to answer the following question: how can that possibly be, when the rest of the world’s major wine critics seem to agree with him (nearly wine for wine), and when some appear to have done so for decades?

I highly recommend that you check out the report from Cornell and spend some time playing with Bordoverview.com.

Oh, and about that 1855 Classification? Looks like it needs a significant overhaul.