
James Laube's Wine Flights

Grading the 100-Point Scale

Winemakers have come to embrace the 100-point rating system as much as consumers have (Photo: Greg Gorman)

Posted: Nov 11, 2013 12:10pm ET

One big benefit of the 100-point scale is that it has given winemakers a target. It's one way for critics to show vintners where their strike zone lies.

Consumers embraced the scoring system a long time ago. Vintners were more skeptical and cautious. They can rate their own wines intellectually, by flavor, density, balance—any number of ways—but assigning a number, or even using the esoteric descriptors most wine writers use, has never sat comfortably with them.

That's changed. Now many vintners I taste with use the 100-point scale. They use it in in-house tastings (often blind). They use it to calibrate their excitement about a wine.

Many vintners have told me that they never thought they'd like the scoring system, but that they now see its merits and enjoy playing the rating game. Some even try to predict a critic's score.

Ratings help sort out wines, and as winemakers assess their own wines and the competition, one method is to stage comparative tastings using the top-rated wines in a given category. An easy route is to collect a critic's top-scoring wines and see how their own tastes calibrate against the critic's. Many people buy wines from the Wine Spectator Top 100 Wines of the Year, taste them and rate them.

Over time, critics' notes and scores come to define the sweet spot. Whether or not a vintner agrees with a rating matters less than knowing how the critic rates certain aspects of a wine, in effect, setting up the strike zone.

David A Zajac
Akron, Ohio —  November 11, 2013 3:26pm ET
Wow, powerful and in my opinion oh so wrong at the same time. Your last comment is what is wrong with wine reviewers and why, imo, there seems to be less diversity today in wine styles than ever before. When you tell me that what the vintner likes or dislikes matters less than what some critic thinks, then the winemaker must, in that situation, know how to make a wine that a specific critic likes and how to then get the high score they all seek. So, if Jim or Bob or whoever likes high-alcohol, high-intensity wines, don't bring me a wine of finesse or the rating will be so low it won't sell? Way too much power in the hands of the critic... shouldn't you base it on what is in the glass?
Peter Vangsness
East Longmeadow, MA —  November 11, 2013 4:46pm ET
James,

Scoring, a subjective exercise at best, is less about the accuracy of the score relative to a wine's qualities and more about the consumer's capacity to align his or her tastes to the scorer. My tastes happen to be fairly aligned with the WS reviews, and not so much with the WA or Tanzer efforts.
While the scoring system probably influences SOME vintners, I do believe it to be a consumer benefit.
Jooyeon Kim
New York, NY —  November 11, 2013 4:57pm ET
The problem is that the tastings are never COMPLETELY blind...

A real, completely blind tasting should include anything from Premier Cru Bordeaux all the way to Two-Buck Chuck, and the taster should have no idea whatsoever what may be in the mix.

When one knows (for example) that the set of wines they are about to taste are all from a specific appellation/region and a specific year, there is an automatic reference point to past scores from that appellation/region.

Many cheers!
Douglas Johnson
Appleton, WI —  November 11, 2013 5:17pm ET
I agree with Peter Vangsness's comments, and like him, I have found my tastes aligned with most of the WS reviewing staff. However, I question the 100-point scale only because I've never seen a wine rated lower than 70 by anyone using it. Moreover, a one-point difference in most cases is just splitting hairs. So why not just use a 10-point scale?
David Bidwell
Cardiff, CA —  November 11, 2013 7:56pm ET
I have to disagree with Jooyeon. It makes no sense to taste wines from different regions blindly together and rate one better or higher than another. It only makes sense to compare wines from the same region and grape, and then do so against a standard. That's what Wine Spectator does, and that is a service to the readers. The relative (perceived) quality differences among similar wines are the value added. Who would ever try to argue that a well-crafted Napa Cabernet is better than a well-crafted Pinot from New Zealand?
Blake Holman
Marietta, GA —  November 11, 2013 8:21pm ET
Agree 100% with Doug's post above. Well stated.
Stephen Segretario
Middlebury, CT —  November 12, 2013 6:52am ET
While the 100-point scale helps sort out some wines according to quality, from where I sit in sales it has skewed the public's opinion of what is good. 90 is the standard for retail at this time. Anything below an 88 and the score isn't mentioned, and then price is a factor. Consider that a $50 estate Napa Cab may get a 91-point review, while a negociant-produced Napa Cab that sells for $20 gets a 90. The consumer only sees numbers, the score and the price. We're "dumbing down" the consumer population for the most part to just buy on review. And if it doesn't get a 90, it isn't worth it.

90+ Cellars, anyone? Case in point.
Dan Merry
Suffolk, England —  November 12, 2013 7:19am ET
For the consumer there is plenty of value in the rating scale as is. A $12 Rioja might not have gotten my attention, but Wine Spectator's 90-point assessment of the 2006 LAN Rioja at the same price surely did--and it was a wonderful find. A 10-point scheme could achieve the same effect, but I believe many would wonder why there weren't half-point increments.
John Noble
Columbus, OH —  November 12, 2013 9:08am ET
The point system provides a good guide to a particular wine's quality, but it still boils down to individual taste. I am also noticing "score inflation" recently as critics go out on their own (Suckling, Galloni) and get headlines (and free advertising) by having the highest score for a particular wine.
Saq Fleurimont
Sherbrooke —  November 12, 2013 11:48am ET
Where is that scale in your article???
Louis Shenk
Metairie, LA —  November 12, 2013 11:54am ET
Agree with John about score inflation. If 90-plus or 95-plus or whatever cutoff you decide designates the very best wines available, only a small percentage should receive that designation, and the percentage shouldn't change much over time. Score inflation is partly due to improving wine quality across the board and around the world (Argentina, Spain, Greece, South Africa) - but the 100-point scale loses its value if every wine is 90-plus. So perhaps there needs to be a "deflation" and increased standards for what makes a wine 90-plus - it's not just about the intrinsic quality of the wine but how it compares to everything else released. At least that's how it's most useful to the consumer.
James Laube
Napa —  November 12, 2013 12:03pm ET
Louis, wine quality has improved for many of the reasons you identify. For me, what I consider a good wine vs. outstanding is more or less the same as it's been. If a vintner's goal is a score/rating, the 100-point scale gives them an idea of what a given critic thinks is exceptional, or less. That and the description of the wine (body, tannin, acidity, ripeness-sourness, etc.) should provide enough information to validate the score. Not all wines are so easy to describe, believe me.

I don't believe most serious winemakers make wines to please critics. They make the kinds of wines they like, even if that takes time and adjustments. (see: http://www.winespectator.com/magazine/show/id/48835)

One reason ratings are higher is that wines are better; that, and vintners know where to aim if a score is their goal.
Glenn Keeler
SoCal —  November 12, 2013 12:05pm ET
The first paragraph of this blog is very sad. If winemakers are making wines just to please a few critics' individual preferences, then they have simply lost their way.

Wine criticism has no doubt raised the level of quality of wine being produced these days as consumers and producers are simply smarter than they used to be. Plus, I understand that this is a business and good scores help sell wine, but there has to be a balance. If the critic has gotten to the point that they shape the market instead of just commenting on the market, then the consumers ultimately suffer from a homogenized wine landscape.

Robert Taylor
New York —  November 12, 2013 12:08pm ET
Saq,

I've added a link to an explanation of Wine Spectator's 100-point scale to the story, and you can read it here: http://www.winespectator.com/display/show/id/scoring-scale

Robert Taylor
Associate Editor
WineSpectator.com
Brian Marshall
Springfield, MA —  November 12, 2013 1:36pm ET
I agree with Glenn. One of the most interesting things about wine is the differences one can observe from one vineyard/producer/vintage to the next. Why would we want all winemakers to aim for the same target or produce the same product (in regard to style)?
James Laube
Napa —  November 12, 2013 2:10pm ET
Lest this be confused, I'm not suggesting winemakers aim for a critic's strike zone, but being aware of where it is can be useful. I believe I have a pretty wide strike zone, as a wide range of wines and styles appeals to me.
Joseph Byrne
CA —  November 12, 2013 3:22pm ET
Just like with a movie or restaurant critic, you have to find the wine critic whose taste is similar to yours for the various regions. The notes really help describe what a wine may be like and whether you or the folks you are with would enjoy that taste. Another really good part of the WS tasting notes is the number of cases made. If there are only 30 cases made, good luck finding it. If there are 3,000 cases, it's much easier to find. The maturity years are a nice ballpark to let you know when it's best for your tastes.
In the end you have to read a lot of different critics to see what they like and how they describe the wine. A word like "sexy," which some critics have used in other publications, does not describe a flavor.

Joe
Lawrence J Cohen
Pittsburgh PA —  November 13, 2013 6:25pm ET
I hardly consider myself an expert when it comes to wine (more a novice with expensive tastes and a poor man's budget), but I find that as my knowledge of wine increases, my reliance on the scoring system decreases, to the point where I use it less as an oracle and more as one guideline among many. I like the scoring system (even if I don't always agree with it) and don't understand all the hoopla.
Austin Beeman
Maumee, Ohio —  November 13, 2013 7:21pm ET
James,
Saying that your standard for a 90-point wine hasn't changed, while acknowledging that more wines are now able to hit that mark, may make reviewing more objective, but less helpful.
Many people, myself included, would love to see WS grade on a curve, with 50% of tasted wines scoring 0-50 points and 50% of tasted wines scoring 51-100 points.
I believe that would be incredibly helpful to the wine-drinking public and the wine industry.

Thanks
Austin Beeman
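
As a rough illustration of the curve grading Austin proposes (the scores below are made up, and this is not Wine Spectator's methodology), each wine's curved score can be computed as the percentage of tasted wines scoring at or below it, so the median lands near 50 and a tightly bunched cluster spreads across the full scale:

# A minimal sketch of percentile-based curve grading.
# The raw scores are hypothetical, for illustration only.
def curve(raw_scores):
    """Map raw critic scores to percentile ranks on a 0-100 scale."""
    n = len(raw_scores)
    return [100.0 * sum(other <= s for other in raw_scores) / n
            for s in raw_scores]

# A hypothetical cluster of raw scores in the familiar 84-93 band:
raw = [84, 86, 87, 88, 88, 89, 89, 90, 90, 91, 93]
print([round(c) for c in curve(raw)])
# -> [9, 18, 27, 45, 45, 64, 64, 82, 82, 91, 100]

Raw scores bunched within nine points spread across the whole 0-100 range, which is the readability gain Austin is after.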
Donald Howes
London, London, UK —  November 14, 2013 12:16pm ET
Don't worry about what the critics say. It is only one man's opinion and at best, an indication of quality. Trust your own palate! After all, you'll be drinking the wine, not the critic. Enjoy!
Patrick Benton
Thousand Oaks —  November 14, 2013 2:30pm ET
Speaking of the 100-point scale, a competing critic just rated almost the entire Turnbull portfolio very highly, including a 100-point wine.

I've belonged to Turnbull for some time, and generally like Peter Heitz's work. But I also think there are some very good Napa Cabs at Turnbull's increasing price point. I haven't tasted the '10s yet and won't get to until next weekend. I'm anxious, but also skeptical based on prior vintages, including '09.

So I'm curious: was WS not given the opportunity to taste the 2010 lineup? I only see the inaugural release of their Oakville Cab.
James Laube
Napa —  November 14, 2013 3:13pm ET
Patrick, I have reviewed the Turnbull 2010s. The reviews will appear soon.
Ted A Hunt
Fort Lauderdale —  November 14, 2013 4:34pm ET
James - Since we have entered the realm of specifics with the last blog, I have a question regarding which ratings get published. This fall we bought some Chiarello Eileen Cabernet 2010 at his store, which we tasted and liked. I was continuing to build up our collection of the great 2010 Cabs and therefore passed on the highly rated 2009, which was still in stock. I then anxiously waited for the Cab issue of WS, and I found that the 2011 Eileen was highly rated in this year's issue while the 2010 is not mentioned anywhere. My question is this: did the winemaker not have the 2010 rated? Was it rated, and was it decided not to publish the rating because it was low? Is it the winemaker's prerogative to withhold the publishing of a rating?

I feel fortunate in that our tastes in California wines closely follow yours and we have relied on WS ratings for many years. Wondered often about this question. Thanks.
James Laube
Napa —  November 14, 2013 4:58pm ET
We usually review all of Michael Chiarello's wines (he submits them), but occasionally we miss one.
Kelly Carter
Colorado —  November 15, 2013 7:35pm ET
James,

This has generated a turbocharged discussion!

I think your first point is among the most salient: winemakers have a target. Few could argue that wines haven't improved markedly over the last two decades. There are a number of factors, but having something to offer a reference point (a grading scale) helps guide people. No scale is ever perfect, nor is it a substitute for one's own tastes.

I don't agree with some of your WS colleagues. However, their context offers clarity, and I know whose reviews and wines to avoid. Not because those reviews are inaccurate, but because those reviews and wines don't align with my own tastes.

I toast you and others who have attempted to help us as consumers navigate the shoals of what we like. You offer a steady, consistent, and cost-effective way for us to indulge in our passion.

Cheers!
Louis Barash
New York, NY —  November 18, 2013 12:28pm ET
There is no "100-point scale." Take the range of ratings for the most recent vintage of Chablis (2011), which seems to me typical of WS ratings generally. WS rates forty-four wines. The highest is rated 93 (one wine); the lowest are rated 84 (three wines): a less-than-ten-point swing between best and worst. Twenty-seven of the forty-four wines (61%) are rated 88 through 90. I find that hardly helpful in selecting the Chablis I'll buy. I would hope -- but I'm not sure -- that I would like the 93-pointer better than those rated 84. But I have no expectation that I would consistently like the 90-rated wines better than the 88s. And that's where most of the wines are rated. (This is not an aberration. In the 2010 Chablis vintage, 82 wines are rated by WS. Excepting three wines rated 82 or 83, all of those wines are also rated in the nine-point band between 84 and 93. Fifty-three (65%) are rated 88, 89 or 90.)

A 100-point scale would be useful if there were consistency in the reviews and a meaningful distinction (range) in the scores. Because there is neither -- and that may be the result of increased quality and/or vintners seeking to satisfy wine critics -- the current scale is really not all that helpful.
Ethan Bright
Ann Arbor, MI, USA —  November 18, 2013 10:50pm ET
Both as a consumer and as a scientist, I find the rating scale - as published just about everywhere - to be fraught with unintentional deception. Tasting a wine is like taking a sample; to get an idea of whether one gets a "true" value (i.e., what the larger population of consumers will likely perceive the wine to be, over time), one needs to take samples over time. Yes, Robert posted WS's scale explanation ("...one wine may be scored 85-88, another 87-90, another 89-92..."), but, unfortunately, a lot of info is missing: 1) how many observations (tastes, i.e., "n"); 2) how many observers (tasters); 3) were there multiple tastings by each observer; 4) what was the variation around the mean score; and 5) what was the mode? Actually, I could go on (e.g., what were the biases/pre-tasting preferences of the observers), but what would be really useful would be a distribution curve of the observations. (Yes, I know some websites post a range of scores for a particular wine, but there is no control for each observation.)

What does that mean for the "100-point scale"? A true 100-point wine would have mean and mode values of 100 with very little, if any, variation. And, beyond the flowery tasting notes and sensations that sometimes boggle the mind (why is "damp earth" good in one wine and bad in another?), reviewers should explicitly standardize their criteria. Yes, I know that's tough, for it would differ by varietal, for one thing. But it would allow one to say that a 100-point Petrus is equal to a 100-point d'Yquem or a 100-point Hermann Donnhoff Riesling Eiswein. And it would allow one to understand the difference between an 89-point and an 88-point wine.

Practically, displaying all this information would not be feasible (costs) in print, but I submit that it's entirely feasible - and necessary - via the web. And WS should hire some statisticians and database managers to implement this!

I need a drink...
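
As an illustration of the reporting Ethan describes (the numbers here are entirely hypothetical), a summary like the following could accompany a rating, reducing repeated blind tastings to n, mean, mode, and spread rather than a single point score:

# A minimal sketch of the summary statistics Ethan suggests publishing;
# all scores below are invented for illustration.
from statistics import mean, mode, stdev

# Hypothetical data: 3 tasters x 3 blind tastings of the same wine.
scores = [91, 89, 92,   # taster A
          88, 90, 90,   # taster B
          93, 90, 89]   # taster C

print(f"n     = {len(scores)}")        # number of observations
print(f"mean  = {mean(scores):.1f}")   # central tendency (90.2)
print(f"mode  = {mode(scores)}")       # most frequent score (90)
print(f"stdev = {stdev(scores):.1f}")  # spread around the mean (1.6)

With the spread published, a reader could see at a glance whether the gap between an 89 and an 88 is smaller than the wine's own tasting-to-tasting variation, which is exactly the distinction Ethan says a single number hides.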
Lyle Kumasaka
Arlington, VA —  November 19, 2013 12:57am ET
I don't find it illogical or objectionable that trained professionals can make fine quantitative distinctions based on qualitative differences--Olympic medals are awarded in gymnastics, figure skating, and diving on this basis. And lacking the time, wealth, and liver function to fully explore an individual wine vision, I'm happy to rely on the judgments (not just the qualitative descriptions, but the quantitative judgments) of trained professional reviewers with far more experience and discerning palates.
Tedd Potts
Estero —  November 23, 2013 7:46am ET
To me this discussion is similar to a discussion of markets in general. Over time, as markets evolve, experts and non-government regulators are selected by market participants based on their merits, and certain metrics and standards become commonplace. The wine market has selected the 100-point rating system, and it has selected a handful of rating services. In a free market the possibility will exist that changes will occur: that one or more services will fall out of favor, or a particular metric will be replaced by one the market considers superior. Also, just as in any market, not all participants will prefer the generally accepted experts, metrics, or services. But, provided the market remains free of government regulation, individuals will have many choices. In the case of the wine market, for example, individuals can find a dozen or more experts who use various rating methods. One day, however, at the pace we are on, the FDA will be in the business of rating wines, and we'll really have something to complain about!
Daniel Ades
Panama —  December 17, 2013 1:43am ET
James, which would you prefer: a $100, 95-point wine or a $200, 92-point wine?
