
Wine Rating Inflation

Wine scores seem to get bigger every year. Are the wines better? Or is something else at play?
Photo by: Greg Gorman

Posted: May 15, 2013 1:40pm ET

Anyone who pays attention to wine ratings knows one thing: Critics are giving more 100-point scores than ever before. Are there really so many more "perfect" wines today than in the past?

It's indisputable that wines are better now than a generation ago. Vineyard management, winery technology, winemaker skill—all have progressed. And as wines have improved, ratings as reflected by scores have risen. Simply put, there are more 90-point wines, those wines of outstanding quality, today than in the past.

Yet there is something else going on. The surge of 100-point ratings is about much more than wine quality. In fact, it has little to do with the wine in the bottle. Awarding a wine a perfect rating is a powerful statement. It brings attention to the wine and the winemaker—and also to the critic.

One-hundred-point ratings are rare at Wine Spectator. Over the past 30 years, I've given only two perfect scores in tastings of new releases. In 2012, Wine Spectator editors reviewed more than 17,000 newly released wines; none received a 100-point score.

In contrast, one well-established publication gave 100-point scores to more than 50 new releases in 2012.

At the root of rising scores is the matter of method. Wine reviewers fall into one of two camps: those who taste blind and those who don't.

Non-blind means the taster knows the identity of the wine; essentially, the label is visible. Implicit in that knowledge is the vintner's reputation, the wine's price, and many other considerations that don't exist when the wine's identity isn't known. This knowledge opens the door to a cognitive error called "confirmation bias," which plays a large, but largely unacknowledged, role in everyday judgment. It is a powerful influence, regardless of the independence or integrity of the critic.

In the blind approach, the critic has some general information about the wine, such as the appellation, vintage or varietal, but not the producer's name or price. For me, blind tasting is the highest standard.

That's how we conduct all of our tastings of newly released wines at Wine Spectator. But it's a costly and time-consuming endeavor.

Tasting blind forces the taster to be cautious and critical. In that context, perfection is elusive—no matter how good a wine is, you can find something that would make it better. Most winemakers assess their own wines blind in their comparative, in-house tastings, knowing they could hardly be impartial about them otherwise.

Other critics take a different approach: They taste wines with the winemaker at their winery, or even in their homes. It's far easier and less expensive to review wines at industry-sponsored events and on-site tastings.

It's the non-blind method, especially with a vintner present, that I believe is at the root of today's escalating ratings and the increase in the number of 100-point wines. It's easier to bestow a perfect score upon a wine when you're staring at the bottle and are confident of the vintner's or the wine's reputation.

When objectivity is compromised, a taster can be less cautious, less critical, in assessing a wine than he would be were he tasting blind. This is not a judgment on people's ability to judge wines, but it is a strong condemnation of a methodology that inevitably leads to inflated scores.

A version of this blog post was published in the June 15 issue of Wine Spectator.

Louis Robichaux
Highland Village, Texas —  May 15, 2013 2:15pm ET
I agree with your comments. As someone who likes the challenge of tasting blind, I can say it's a humbling experience. The fact that Wine Spectator sticks to this gold standard is one of the top reasons I have no problem writing a check each year to renew my print and on-line subscriptions.
Tim Schultheiss
CA —  May 15, 2013 3:11pm ET
But you don't taste wines blind. You know the varietal, the year, and the appellation. Professionals with that much information who concentrate on a single region should be able to identify many of the wines they taste, especially the better-known ones.

I also wish that a 100-point score would not be interpreted as implying perfection in any way. It is merely the highest score available on the scale you have chosen. It would seem appropriate for any wine with a score higher than 99.5, assuming the assessment that leads to the score is continuous.

I realize this sounds petulant, but in my opinion WS makes more of its "blind" tasting than is deserved.
David A Zajac
Akron, Ohio —  May 15, 2013 3:23pm ET
I agree, but it's funny that there are other critics who don't taste blind (Burghound), yet you see no score escalation. I have a feeling it has more to do with marketing than anything else. I won't go so far as to say there are ulterior motives, as I am not sure the other publication would resort to that, but how else can you buy into the 50-plus perfect wines? They, and unfortunately a former colleague of yours, give out scores that are routinely silly, and I have a difficult time following them or giving them much credence based upon the overflow of 96-100 point ratings. However, you sure do see them referred to by anyone trying to sell wines, don't you?
Jeffrey D Travis
Sarasota, FL —  May 15, 2013 4:43pm ET
I would prefer that a critic say, "I did not taste blind; this is my review; I stand by it," than have a critic say, "I tasted blind, sort of," and therefore claim to be the better source of information, no matter what.
Blake Angove
Traverse City, Michigan —  May 15, 2013 11:09pm ET

I am glad you brought this topic up because I have had a question about some reviews that imply a non-blind setting without any disclosure of such a situation. For example, your review of the 2007 Artemis Cabernet reads as follows:

Notably earthy, herbal, drying and leathery, this is savory at best and mediocre in terms of overall quality. This simply lacks charm and appeal. Another sad offering from this once-prominent producer. Drink now through 2016. 54,851 cases made. –JL

You obviously knew what wine you were reviewing, even if tasting blind technically. If you are so familiar with the wines of Napa that you knew which producer it was, then you too could be susceptible to a degree of "confirmation bias": since the reviews of this wine are consistently lukewarm at best, you might think to yourself, "this is just another low-quality Artemis," and that would affect your impartiality. If it is true that you can identify certain wines by taste, then one could say you should recuse yourself from tasting such wines, because there is no chance of actually tasting "blind" as Wine Spectator claims to do in its methodology. Or maybe this is an editorial comment added to the review after the fact; if so, it should be noted in the review.

In any case, I am just curious how a blind tasting of a wine can have such a specific statement about a particular producer. Thanks for your time.
David Crowther
Tuscaloosa, AL USA —  May 16, 2013 5:09am ET
It's obvious some of the commenters did not read Wine Spectator's "How We Taste" section, especially Blake. It states clearly that the reviewer goes back over the tasting notes after the wine is revealed; extra information can then be added, but the score cannot be changed.
I think Wine Spectator does a great job. Scores are great. Your publication takes a lot of unjust and uneducated and often hypocritical negative criticism that is undeserved.
Keep up the good work.
Thomas Matthews
New York —  May 16, 2013 8:51am ET
We thank readers for their thoughts on this complex issue. We agree there is no "perfect" way to evaluate a wine -- every methodology has its virtues and its drawbacks. Our belief is that the "single blind" methodology we follow is best adapted to eliminating confirmation bias and allowing the wine's true character to be experienced.

David, thanks for your answer to Blake's question; you are correct.

Tim, I think you underestimate the subtlety and complexity of wine. Even with all of our experience, correctly identifying a given wine in a blind tasting is hardly easy. And it's not really the point; the goal is to evaluate the wine on its intrinsic qualities rather than its reputation.

Jeffrey, I understand your position, but I think enough studies have shown that when information irrelevant to quality -- such as label or price -- is not excluded, confirmation bias can operate no matter how "independent" a critic may be.

Louis, thank you for your vote of confidence. We know that Wine Spectator can only succeed if we earn the trust of our readers, both in our integrity and our expertise. We appreciate your support.

Thomas Matthews
Executive editor
Eric Bowgren
Chicago —  May 16, 2013 11:11am ET
I drink wines non-blind, thus have no issue with non-blind reviews. My own confirmation bias is likely comparable to that of a non-blind reviewer.

During a recent trip to the Napa Valley, it was interesting listening to feedback from winemakers and proprietors on which method they prefer. Many of them feel that blind reviews can "corner" a reviewer, meaning a blind reviewer does not gain perspective on vineyard management and the many carefully thought-out decisions made in winemaking (we're talking about fine wines here). Learning about these things likely causes confirmation bias, but personally I find value in the attention to detail in the creation of fine wines, and if this causes score inflation, I suppose I'm OK with that.

I do, however, have much respect for blind tasting; it's hard to argue that it is unfair. I reconcile by subscribing to multiple publications.
David A Zajac
Akron, Ohio —  May 16, 2013 11:35am ET
Let's also not forget that one person's 100 is another person's 98, based purely on the question of what merits a 100-point wine. To me, the best bottle of wine I have ever had is my 100-point wine; everything else starts at 99 and goes down from there. Obviously not all reviewers take that approach. Even within WS you see a diversity of top scores from one reviewer to another: some give out many scores of 96+, while for others they are very few and far between (regions covered taken into consideration).
Harvey Steiman
San Francisco —  May 16, 2013 11:39am ET
Early in my use of blind tastings I often convinced myself I could identify the wine I was tasting only to discover upon removing the bag that it was something else. I know better now, and try to focus on what's in the glass, not to try to ID the wine.

Also, our tasting coordinators are pretty wily. They will intentionally mislead us on wines in which the combination of appellation and type could give away the identity. The other day, for example, the "Victoria Shiraz" on my tasting sheet successfully obscured the identity of a Clonakilla Shiraz, from the cool but rarely-seen-here Hilltops region of New South Wales. Victoria is also known for its cool regions, but it is a totally different state with many other wines.
Steve Roth
New York, New York —  May 16, 2013 12:30pm ET
I strongly believe in blind tasting, with years of solid experience that demonstrates the bias from identification and other cues. As a long time practitioner of market research, I've tested many products both blind and identified. I've seen numerous cases where the same product is rated very differently when tested blind and identified. Underscoring the bias created by identification, I've seen many cases where the same product is rated very differently when identified by different brands.
Josh Moser
Sunnyvale, CA —  May 16, 2013 2:00pm ET
James - Great topic. I don't know if this would work, but what if you used a scoring curve? A statistical method of assigning scores designed to yield a pre-determined distribution of scores among the wines you taste in a sitting.
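(For illustration only, here is a minimal sketch of what such a curve could look like; the 80-100 target range, the even spacing and the sample flight are all made-up assumptions, not any publication's actual method.)

```python
# Hypothetical "scoring curve": re-rank a flight's raw scores onto a
# fixed target range so every sitting yields the same distribution.

def curve_scores(raw_scores, lo=80, hi=100):
    """Rank the raw scores, then spread the ranks evenly across [lo, hi]."""
    n = len(raw_scores)
    if n == 1:
        return [hi]
    # Sort positions by raw score; the lowest raw score maps to `lo`,
    # the highest to `hi`, with even steps in between.
    order = sorted(range(n), key=lambda i: raw_scores[i])
    curved = [0] * n
    for rank, i in enumerate(order):
        curved[i] = round(lo + (hi - lo) * rank / (n - 1))
    return curved

flight = [88, 93, 90, 95, 89]   # raw impressions from one tasting
print(curve_scores(flight))     # [80, 95, 90, 100, 85]
```

One drawback is visible immediately: such a curve would force a 100-point score in every flight, so score inflation would simply be replaced by score inevitability.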

I think it is more interesting to see the differences in scores between reviewers, and I have found that they really don't differ that much. Let's take the '09 La Lagune: Spectator gave it 93 to 96, another reviewer gave it 95 and a third reviewer gave it 89 to 92 points. In my opinion, that is not a huge difference. Now let's take the '05 Leoville Las Cases: Spectator gave it 100 and other reviewers gave the wine between 95 and 99 points. Now let's look at the '09 Robert Mondavi Napa Valley Cabernet: Spectator gave it 87 and three other reviewers gave it between 87 and 91 points.

Josh Moser
Founder of VinoServant

Thomas Matthews
New York —  May 16, 2013 4:18pm ET

I think it's worth emphasizing that Wine Spectator doesn't only taste blind. Our editors regularly travel to wine regions to taste with vintners, and that gives them invaluable experience and context. But when it comes to formal reviews, our goal is to eliminate as much bias as possible, while still providing enough information to evaluate typicity as well as intrinsic quality.

Thomas Matthews
Executive editor
Eric Bowgren
Chicago —  May 16, 2013 5:28pm ET
Thomas -

Noted: experience and context I've only started to gain.

There were two instances in the Valley last week where we didn't actually taste wine until after about an hour of vineyard touring, winemaking education, etc. The anticipation for the actual wine in these instances was incredible, especially given the attention to detail at these sites. It will be interesting to compare notes from vineyard tasting vs. at-home tasting in the future; more than likely a discrepancy will emerge.
David Stultz
Ohio —  May 16, 2013 9:13pm ET
As a true novice, I appreciate the efforts involved with tasting wine blind. There are advantages; avoiding "confirmation bias" is certainly one. But I would propose at least one limitation. I will grant that there are much more refined palates than my own, but I believe a reviewer would find themselves in the minority if they "accidentally" awarded a bottle of Two Buck Chuck a 100. Conversely, unless there were a truly awful vintage, most reviewers might be shocked to find they had awarded a subpar (85? 80? 75?) score to Latour. Thus, I believe there is a bias toward the middle (which nowadays seems to be about 90) when tasting blind. (I realize that many more wines are tasted than are reported in the magazine, and why would you dedicate a large amount of space to a bunch of sub-80 wines?) It is much easier to justify post hoc a 90-92 for a wine that might be slightly better or worse than that range than to justify a 96-100 score for a wine that is merely good.

Obviously, there is no perfect system - each has their advantages and disadvantages. It is interesting for the novice (me) to compare comments made from different reviewers from different methods of tasting!
Michael Nappi
Staten Island, New York —  May 18, 2013 9:03am ET
It is my understanding that previously scored wines are inserted into the tasting process from time to time and that wines that are scored unusually high or low are "re-tasted". These two steps would seem to me a very good way to validate an imperfect system.
Jordan Harris
Leesburg, Virginia —  May 19, 2013 9:49am ET
Tim stated above that you most certainly don't taste truly blind. I think you do the best job you can with the logistics involved, but as long as you know vintage, variety and AVA there will be a bias. It has been stated on a Wine Spectator blog in the past that certain up-and-coming regions start with a ceiling of 90 points and then work up with added experience tasting them. That is a bias. There is no way the editors enter a tasting of "Other US" wines with the same frame of mind as "Medoc." The bias is set before the tasting even starts. The same can be said for vintages and variety. No one on the planet is going to enter a tasting of 2002 Rhones with the same expectations as 2005, and the scores obviously show that. Sure, the 2002 vintage is clearly poor, but outliers would not get a fair shake at good scores. You could use vintages that are a little less dramatic, like 2010 and 2011 in Napa compared to 2007-2009. Does someone tasting Gamay ever enter their tastings with the thought that the top Moulin-a-Vent might score as high as the top Pinot from Chambertin? There are biases in knowing these things.

On top of this, while I appreciate that many of the editors at Wine Spectator may not hand out many or any 100-point scores, that is not magazine-wide. You had an editor for a very long time who was one of the biggest culprits in handing out 100-point scores. Why was this? I just don't think it is fair to slam in others' direction when the end result has at times been the same at Wine Spectator. Someone mentioned the 2005 Leoville Las Cases; that was one of how many 100-point scores from Bordeaux in the same vintage, never mind the 99s and 98s of the same year.
Southern New Jersey —  May 19, 2013 3:05pm ET
17 years ago, I was a wine drinker who married into a family that truly enjoys the whole wine experience. We do not work in the industry and we are far from being experts, but wine is a very important part of who we are. We are faithful consumers who are always on a quest to find that special bottle...that special value...that special pairing. We all generally agree on the virtues that constitute a great bottle of wine; however, wine preference is a very personal thing. My father-in-law (the family "expert") and I agree on most of our assessments, however we find ourselves at odds on occasion. Both of us long to be the one who finds the "Holy Grail"!

That being said, we sometimes turn to industry ratings to assist us in finding that perfect bottle. I have been disappointed on occasion when selecting wines based on ratings from sources other than WS. I can honestly say that, generally speaking, I have never been disappointed by WS. I really don't think there can be a "definitive" rating source, because wine tasting is such a personal thing. I appreciate the high standards of WS, and I think their ratings are educated, solid, and about as unbiased as you can reasonably be in this endeavor. They are as close as you can come to a definitive rating.

For our part we use WS as a guideline to help us explore new wines. We occasionally throw caution to the wind and try something obscure, or something local. It's part of the fun of being a wine lover!
Don Rauba
Schaumburg, IL —  May 20, 2013 12:31am ET
I nominate Jim Laube as "Most Intrepid Blogger" at WS! LOL... this topic never fails to generate a HEAP of criticism and controversy, but you never shy away! Good for you!

I'd still like to see a bit of double blind (a very odd name) ratings mixed in from time to time, along with occasional beat-swapping (I'd love to see Jim & Harvey swap temporarily, for example).

Question for you: Aside from other reviewers, to which you seem to point the dogs of accusation (perhaps rightly so), and speaking only of WS reviews, has there been ratings inflation at WS? What do you see as the primary reasons for that, speaking only of WS ratings? Surely improved winemaking is foremost?
Jim Gallagher
San Francisco, CA USA —  May 21, 2013 7:25pm ET
Very interesting discussion. I liked Harvey Steiman's point regarding distractions during wine assessment; for example, concern for a wine's identity. Some wine tasters require component identity, an analysis of their sensory experience, while others follow a more hedonistic, perhaps "right brain," holistic approach. Most critics fall somewhere between these poles in how they weight their assessments.
Bias is an interesting topic. After all when we are assessing wine we are searching for our bias.
Kale Anderson
Napa, CA —  May 22, 2013 4:04am ET
I have yet to taste a 100 point wine in my book. For me it is unattainable, something that I will always strive for, but never accomplish. 100 means perfection, and I don't believe it exists.
Hugh L Sutherland Jr-m
owens cross road,al 35763 —  May 23, 2013 1:55pm ET
Lots of good comments concerning blind vs nonblind tasting. A good way to prove blind is best is to judge the wine blind and then later come back and judge the same wine nonblind.

Another problem is that not all tasters have the same idea of what a wine should taste like. This is magnified by the fact that the same grape will taste different in different soils or appellations.

I feel that WS does a good job in evaluating wines. Since I cannot afford many 90+ wines, I am most interested in making sure that the wine is drinkable. As I have said before, how do you judge a wine 100 when you have never had one? How do you judge a wine 90 when you have never tasted a 100 wine? I would be very happy with "excellent," "very good," "good" and "forget."

I still feel that a consensus of three tasters testing the same wine at the same time and taking an average of the scores is the fairest way of being fair to the wine and winery.
Sergio F Marques
Sao Paulo, Brazil —  June 2, 2013 9:01am ET
I've got one question: what's the need for knowing the region (or sub-region), the vintage and other side information in advance? Is it because a certain sub-region would deserve a higher base rating because of its history and tradition, for instance? Wouldn't it also bring a certain level of bias?
Jacqueline Jauregui
Santa Fe, New Mexico —  June 14, 2013 5:58pm ET
Dan Ariely's book Predictably Irrational discusses some fascinating experiments with wine tasting and confirmation bias: the subjects were given false pricing information about wines they were tasting and (surprise) loved the "more expensive" wine, which was in fact identical to the cheap wine!
James, your scores really work for me, and I appreciate your efforts very much. In 17 years of reading the Spectator, I have never been disappointed in a Cab or a Chardonnay to which you have given a high score. Thanks!
James Laube
Napa —  June 14, 2013 6:01pm ET
Well, that's a great way to start the weekend, Jacqueline. Thank you for the compliment. Will keep me on my toes...
