
The Costs of Blind Tasting

Handling nearly 20,000 wines a year requires serious investment in infrastructure and personnel
Photo by: Greg Gorman

Posted: Mar 11, 2013 3:00pm ET

Recently, my colleague Harvey Steiman tackled many of the issues, pro and con, of blind vs. non-blind tasting. One aspect I'd like to address is one that's rarely discussed: the cost of staffing and staging blind tastings.

Over the course of more than 20 years, Wine Spectator has developed a methodology for our blind tastings, one designed to keep them independent, consistent and fair to the wines. When you factor in the costs of handling nearly 20,000 wines each year, that methodology costs serious money.

First is the infrastructure. We have constructed dedicated tasting rooms in both our New York and Napa offices. We have also built cellars to store the wines we receive for tasting and keep them in optimum condition. When we taste off-site, as is the case in Bordeaux, for example, we rent space.

More important is the personnel. Wine Spectator has about 10 full-time employees working in our two tasting departments. They play an essential role in ensuring that we taste all the wines in impeccable conditions.

First, they source the wines. Most of the wines we review are submitted by producers or importers/distributors; others are purchased. This requires a great deal of communication and management. Then come the tasks of bagging, coding and organizing the tastings. Finally, and hardly insignificant, comes cleaning up and discarding the bottles.

It's far easier and less expensive for a reviewer to taste scores of wines at an industry-sponsored event such as Premiere Napa Valley than it is to set up a blind tasting of hundreds of wines. But of course the reviewer loses control at a large group tasting—of the wines presented, of the way they are organized, and of the time he or she is able to devote to assessing them. Avoiding distractions and interruptions at such events is impossible.

However, at least the industry-sponsored tastings could be blind. It would be easy for vintner groups to have the wines presented blind (delivered in bags by wineries or bagged up by the organization). I'm not sure how many wineries—or reviewers—would be comfortable with that, though. It's too easy to always be right when you're staring at the wine, or standing next to and listening to the winemaker while jotting notes or scores.

I believe it is in the industry's best interests to have its wines critiqued by independent reviewers, and blind tasting is the fairest and most objective way to do this. That's why we go to the lengths we do to set up our tastings.

Ed Frankoski
Huntington, NY —  March 11, 2013 6:31pm ET
James, you hit on a great point regarding industry-sponsored tastings when you wrote "avoiding distractions and interruptions at such events is impossible." According to the Wine Spectator School primer on "Getting the most out of wine tasting," a pause after tasting is required to "form an overall image of the wine." Cerebral activity like verbalizing and note writing as well as listening to others shifts the brain out of its highly focused sensing mode necessary for complex tasting. Therefore, one of the most important parts of "blind tasting" is the privacy afforded to the taster which allows them to form a sensory memory of their experience. I'll take my privacy over bags at any tasting. EJF
John Wilen
Texas —  March 11, 2013 7:01pm ET
JL, it is nice to know what goes on at WS blind tastings. Thanks for shedding some light on the "black box". But it is only half the story.

Wine and ratings. What other industry is set up so perfectly for fraud by producers? Start with the end consumer. Many wine drinkers feel inadequate when it comes to judging wine; deep down, they want and need ratings and reviews. Many are easily led, and susceptible to "expert" opinions.

Fortunately for them, there are a handful of critics ready to serve. Their writings wield enormous power; wine shoppers need and want their blessings. The fact that they "taste blind" even lends an air of impartiality. Fantastic!

The retail channel senses this symbiotic relationship, and is only too happy to promote wines with high ratings and to allocate them more shelf space. Ratings also support high prices. The circle is complete. Everybody wins with a high rating.

What's wrong with this picture? Nothing, if everybody plays by the rules. But increasingly, I think wineries don't. You see, the stakes are just too high these days to leave anything to chance. The ratings industry involves high stakes --- and wineries are playing to win. They have millions of dollars at risk, and million-dollar egos. They can't have their entire financial year hinge on a critic's 10-second sip-and-spit. So they have figured out how to beat the system --- your system.

Here's how I believe they do it.

The magazine's office requests bottle samples from wineries, which they provide. However, the wine in the bottles is not necessarily the 2010 "Winery X" Napa Cabernet as labeled. It might contain their high-scoring 2009 vintage. Or it might actually be a sister winery's high-scoring Cabernet, which the magazine has already touted. Or it might really be their 2010 Cabernet, but not the final blend, drawn instead from the "best" barrel.

Would wineries really do that? I believe they do. Avoidable? Absolutely. But it would require the magazine to come out of pocket and pay for samples off the shelf. That's the only way to ensure a bona fide, representative sample.

Wine and ratings. The perfect storm.
James Laube
Napa —  March 11, 2013 7:19pm ET

Regarding the possibility that a winery might misrepresent a sample (that is, send in a different wine or vintage than the label represents), I've never seen any evidence of that in some 30 years of reviewing wines. Were that to happen and a winery be caught, it would be, I would think, devastating to its credibility and reputation. As it is, we're pretty good at sizing up the wines, producers and vintages, and anything that's suspect (corked or spoiled bottles, etc.) is automatically reassessed.
Harvey Steiman
San Francisco —  March 11, 2013 7:58pm ET
John, what you're describing is a problem for any tasting, blind or not blind, on site or on neutral turf. It's a totally different issue.

As Jim says, I doubt if wineries would want to risk a fraud conviction by doing as you say.
John Wilen
Texas —  March 11, 2013 10:44pm ET
First of all, thank you to the blog editors who allowed my post above to be published. I had my doubts...

JL and HS -- it strikes me that the hypothetical that I posited, and that both of you heartily discredit, sounds and smells a lot like the Baseball Steroids scandal. In the early days "everyone" denied it: the players, baseball's union, league management, club owners, the media, the fans, etc. Everyone wanted to believe the game was clean. That nobody would ever cheat...

And what words were used back then when the issue was raised and questions were asked? Words eerily similar to both of yours: "were a player to be caught… it would be devastating to their credibility and reputation. I doubt if any player would want to risk being outed as a fraud. It would be too hard to go undetected. The peer pressure against it would be too strong." We know now that the MLB steroids scandal was rampant and had been going on for a long time. And all the reasons given for why no one would possibly cheat? Well, they look pretty silly in hindsight.

Sadly, people who had the chance to clean up the abuses turned a blind eye. They ducked the issue. They could have put in protections but chose not to. A regrettable arms race in MLB was allowed to flourish because an atmosphere was created where it was acceptable to look the other way, where those who spoke up or attempted to raise warnings were ignored. An environment was created where those who did not partake were faced with a devil's choice --- to potentially operate at a competitive disadvantage or to shrug and join in….

And why didn't those, that had the power, clean it up? They chose not to, either because they didn't know how, or they got too much pushback from those that benefitted from the status quo, or because it would be simply too costly to fix.

I don't find the arms race in wine scores to be that different from the arms race in baseball. Maybe that's why they call it "juiced"?
Adam Lee
Sonoma County, CA —  March 12, 2013 6:16am ET

I appreciate your comments and concerns about a winery misrepresenting samples. As a producer, I can only tell you my point of view.

Your mentioning of the baseball steroid issue is an analogy and, unfortunately, analogies only work to a certain degree. There are only 1200 professional baseball players.....that is a tiny number. A 100 case bottling is 1200 bottles. Virtually all of my wines are made in larger quantities than that and are spread around the country (and overseas) in restaurants and retailers. For between $14 and $60 anybody can pick up a bottle of mine and "test the wine" and see if it matches the reviews. That "anybody" includes Jim Laube, Harvey Steiman, Tim Fish, Mary Ann Worobiec and any of the other Wine Spectator Editors. I know from reading their blogs (and Twitter posts) that they do drink wines "off the clock" and those bottles could be my wine. In addition, I pour some of those reviewed wines at events including, but not limited to, Wine Spectator tastings....where they sometimes come by and taste the wines.

In other words, the availability of the wine (of virtually any wine) is far greater, and the opportunity for catching a fraud is far greater and far less expensive, such that the risks of submitting a false sample would truly outweigh the rewards.

Adam Lee
Siduri Wines
Jeffrey D Travis
Sarasota, FL —  March 12, 2013 10:35am ET
With only the highest respect for the staff and tasters at Wine Spectator, I have a lingering question. And without going into hypotheticals, my question is: “Do you recognize that the close personal and working relationship between the same tasters and staff, (and management) year after year, at WS, could pose the unintended perception of a rating that is not entirely ‘blind’”?
Thomas Matthews
New York —  March 12, 2013 2:01pm ET
Jeffrey Travis,

Your question seems to imply that one result of a "close working relationship" at Wine Spectator might be a desire to bend the rules of blind tasting for some nefarious reason. But in fact, the opposite is true: we are all deeply committed to our methodology and our ethics; we believe in both their principles and their results. One reason our staff has such a long tenure is that we all share the same values and goals: to give honest, fair and credible guidance to our readers.

Thomas Matthews
Executive editor
Philip A Chauche
Germantown, MD —  March 12, 2013 2:06pm ET
Very interesting discussion. I have to agree with Mr. Lee at Siduri, and perhaps take it a step further. I take the WS ratings as a starting point, not the finish line. I always taste a bottle before I buy more, which happily occurs between 85 and 90 percent of the time.

It's probably a fair point that absolute objectivity is not attainable, but the system WS has in place seems quite outstanding, and reminds me of the Winston Churchill quote regarding democracy.

"It's the worst form of government in the world, except for all the others that have been tried."
Harvey Steiman
San Francisco —  March 12, 2013 2:18pm ET
John, I don't take your hypotheticals lightly. We worry about the same things. But I don't think the parallel fits with steroids in sports and here's why.

In sports there were plenty of people pointing at specific athletes and wondering how their accomplishments were possible. To my knowledge no one has actually pointed to actual review bottles that are different from those being offered for sale. If anyone does, I promise you we would follow up.

There has been evident improvement across the board in wines. A parallel with steroids would be the use of processes such as spinning cones and the use of additives such as MegaPurple. But that would affect all the bottles of a given wine.
Maryann Worobiec
Napa, California —  March 12, 2013 2:45pm ET
As one of the members of the tasting department team, I would like to pipe in to say that we do not accept unlabeled or unfinished wines for review, and we keep an eye out for signs that a bottle has been tampered with (obviously hand-labeled or hand-corked).

I agree with Tom--we are committed to the process and fairness of blind tastings, as it is our own reputation at stake as much as the producers whose wines we are reviewing.

MaryAnn Worobiec
Senior editor
Wine Spectator
Stewart Lancaster
beaver, pa —  March 12, 2013 4:07pm ET
As I taste and buy more wines, your evaluations of the wines help in my selection process, but the score itself can be misleading because every taster has his own biases on the style of wine that he likes. So, what I like in a wine style may not be preferred by the taster from your magazine. Thus, as always, we must try for ourselves and judge whether we like it or not. That's why we taste double blind so frequently.
Jim Powers
California —  March 13, 2013 1:12am ET
I’m very curious whether the WS tasting staff has ever re-tasted the same flight of wines one month, six months, one year later? Have the blind results been duplicated (confirmed)? Or do some wines go up, others down. What I think many readers don’t realize is the score assigned a wine during a blind tasting is just a snapshot of that wine during that one moment in time. I think many of us have had the experience of tasting a wine numerous times over several months/years and noticed a difference in the perceived quality, both up and down. My biggest complaint about scores (and it’s only a small complaint), is the score lasts a lifetime while the wine is perpetually changing in the bottle.

This discussion reminds me of a tasting I attended many years ago. The tasting consisted of 12 high-end wines from a very prestigious wine region. The wine identities were known beforehand by all tasters but then bagged, numbered, and randomly poured in corresponding numbered glasses for review. There was complete silence in the room for almost 45 minutes while everyone swirled, sniffed, and tasted the wines. The moderator of the tasting, a very well-known wine reviewer sitting on the dais at the front of the room, then proceeded to pontificate on the bagged wines, one by one, proclaiming the first wine to be from Winery X because of such and such attributes and scoring the wines close, if not exactly, to what he had previously written in his published review from six months earlier. As the bags came off the first four wines, the room fell silent as all in attendance realized that the moderator had not one wine correct. In fact, wines that he had scored lower in his previous reviews were now scoring higher, while wines he had scored higher in his previous reviews were now scoring lower. At this point the moderator “gave up” trying to guess the provenance of the wines being tasted that evening, chuckling while declaring that the wines had obviously changed over the past year. Granted, this tasting was not done as the tastings are done at the Wine Spectator, but I hope I’ve made my point that tasting wines, in any venue, is just a snapshot of that wine at that moment in time. A wine that scores a 91 one night might end up being a 94, or an 89, two, six, or twelve months later.

As I stated at the beginning, I’m curious whether the WS tasting staff has experienced this same discrepancy in evaluating wines over a short time frame – say 3 to 12 months (I’m familiar with 10/20 year retrospectives and understand this is a rather long time frame, hence bigger, and perhaps expected, changes).
John Wilen
Texas —  March 13, 2013 11:47am ET
Critics can nitpick that my baseball scandal analogy isn't perfect. (I think it's quite appropriate.) But they are losing sight of the main point: that is, not fiercely guarding against winery shenanigans would be the Achilles Heel of an otherwise admirable blind tasting operation at WS.

Over the years, WS has refined and beefed up its blind tasting protocols and procedures. They have "opened the kimono" with detailed explanations of how they do their tastings, the expense involved, the impartiality they seek to protect, etc. All of this is necessary, very admirable and they should be lauded for their efforts.

My criticism was that WS appears to take few, if any, precautions to ensure that samples are bona fide, other than perhaps a quick visual inspection. There appears to be an assumption that just as WS is striving for impartiality and integrity internally, wineries will do the same. That wineries, too, will play fair. And that's where I have my doubts. The stakes are too high and the conditions are too perfect for cheating. Because if a winery sends in a couple of bottles taken from the best barrel (which is not at all representative of the final mixed blend), then Spectator's robust set of testing procedures really doesn't matter much at that point. Once the fraudulent sample has infiltrated the system, who cares about the rest?

My goal was not to accuse a specific winery or the wine industry of cheating, but rather to point out that despite WS going above and beyond to ensure the integrity of its tastings, it appears to be leaving the back door open. Given the importance of the rating and the apparent ease of having a tainted sample accepted, I think one has to be delusional to think that no winery would ever try to cheat the system.

As for the argument that cheating won't occur because people can simply pick up a bottle off the shelf and verify a score for themselves, let's get serious. How many people have done just that and said "What was reviewer X thinking? This tastes nothing like that review." One only needs to go to cellartracker and read 'regular guy' reviews to see comments like "Spectator sure got that one wrong". Do people suspect the old switcheroo? No, we all chalk it up to the natural bottle variation in wine.

Yes, that's another aspect of the industry that supports sample tampering. If people try a bottle and it doesn't taste like what they expected, the last thing they think of is a switch. Instead, they've been trained to think of just about any other reason like "corked", "my palate is off", "bad storage conditions", "shipped in hot weather", etc. Unlike the spirits industry where every bottle is virtually identical, wine isn't like that. Yes, wine's bottle variation is just another reason in the half-dozen I've provided as to why it is the perfect product to tamper with when a make-or-break WS rating is on the line.

Can anybody remember the first baseball players caught for steroids? No, what people remember is that Baseball, the institution, got a black eye. It was "The Game" and its Commissioner that lost all credibility, because they had the power to toughen up the rules and chose not to. As a result, MLB's "record book" now and forever has an asterisk. Similarly, if it ever were to come out that a winery cheated on its WS samples, yes, the winery would be scorned, but a hailstorm would rain down on WS for its lack of internal controls.

Yes, Mary Ann, I'd say that WS has more to lose than any winery. In the big scheme of things, who cares if Winery X doctors its sample and gets scored a 93 by JL instead of an 87, and that allows it to sell out its small production of 400 cases that year at a much higher net realized price than it would have otherwise earned? But if that fraud were discovered, even just once, all of WS's procedures would be called into question. You'd be back on your heels for years trying to recover the lost goodwill. So Harvey, if you don't take my hypotheticals lightly, let us hear in a future article about how WS is protecting the franchise. I can't think of a bigger priority.
Richard Lee
Napa —  March 13, 2013 1:23pm ET
I agree with you 100%. There are holes in the process and most companies don't resolve them until they get burned. One example comes to mind, a couple of years ago Schild out of Australia was caught selling one of their Shiraz under fraudulent conditions.

WS had given them a good rating, so after they sold out they produced more bottles of that same wine but from different and cheaper juice. Most people don't get caught the first time they cheat so it does make you wonder how often this is happening. So, what you are saying makes plenty of sense to me. You can only present your ideas, if others don't want to use them or consider them there is nothing you can do.
Ivan Campos
Ottawa —  March 13, 2013 3:20pm ET
While Mr. Wilen's theory may be a bit too difficult to prove, the fact that WS decided to include Schild in its Top 100 for 2012 sends a strong signal about its tolerance of unethical behaviour, legally permissible or not. Any and all publicity is "real estate", the 2012 Top 100 included, and I don't think such a winery should be reintegrated without rigorous effort and transparency on its part regarding how it is cleaning up its act and rebuilding consumer confidence. I know this enters into industry politics, but it is as relevant to the reader as the blind-tasting component. I would much rather the magazine spend the same space covering unheralded wines from corners unknown.
Harvey Steiman
San Francisco —  March 13, 2013 4:05pm ET
Schild did indeed cooperate with us when we investigated and reported on a second bottling of its 2008 Shiraz that came from sources that were different from the original bottling. They were dis-invited to the 2011 Wine Experience and could not show it with the other wines selected for the Top 10 that year.

Schild has been transparent in providing lot and bottling information on succeeding vintages, including the outstanding wine honored on the 2012 Top 100. I visited them in Barossa and believe the family understands that its decision to make a second bottling from different sources was a mistake.

To be clear, the issue with the 2008 Shiraz was not that they had made a special bottling for our review. The wine reviewed was indeed the same wine being offered until it sold out. A second bottling was discovered and Wine Spectator did not hesitate to expose it.
Tim Mc Donald
Napa, CA USA —  March 13, 2013 5:30pm ET
WOW, this is the most amusing string of posts I have seen in a while, with crackpot theorists, accusations of fraud, and even baseball juicing comparisons. I have been in the wine business for over 30 years and NO One does a better job reviewing and telling the stories of regions and the producers from those regions than WS. And although it is not perfect, it is near perfect considering that the process is subjective. John, your premise on cheating is absurd, and sending WS the wine from the best barrel simply does not happen. Although it could, I know almost for certain that it doesn't, and I would guess that if it did, the winery would be caught by the tasting coordinators. Ivan & Mr. Lee, the Schild fraud had zippo to do with the process in place @ WS; the winery ran out of the high-scoring wine and substituted a wine they felt was similar. They made a mistake, were caught and apologized. I look forward to more comments on discovering wine fraud, which reminds me a bit of voter fraud. It simply does not happen IMO, especially in the process of blind scoring wines.
Vince Liotta
Elmhurst, Il —  March 13, 2013 8:14pm ET
If one searches the WS site for the basic Rodet Pommard 1999, they will find 80 points for a "pretty" wine of "balsamic" character. In its publication, Wine Enthusiast described the same wine as having a charming and vivid red fruit character, giving it 89 points.

I had been at a pre-sell tasting for Rodet and Jacques Prieur Burgundies, and found that wine to be exactly as described by Wine Enthusiast, and at the pre-sell price an excellent value, so we purchased some for the store. When it arrived, I looked forward to opening a bottle and recalling that charming forward fruit character I had enjoyed at the tasting. To my surprise, the bottle that arrived at our store was precisely the bottle described by WS's tasting note, not the one I had tasted previously, nor described by Wine Enthusiast.


Michael Henderson
San Francisco —  March 13, 2013 8:22pm ET
Unfortunately I must agree with John from Texas. I too believe the wine industry has transformed to where scores are everything and wineries will do most anything for high scores.

Out right fraud? Maybe not. Close to it? Probably.
Michael Henderson
San Francisco —  March 13, 2013 8:36pm ET
Mr. McDonald, "They made a mistake and were caught and apologized" So what? They cheated and still got included in the Top 100. Since there was no penalty the chances are high they will do it again.
Vince Liotta
Elmhurst, Il —  March 13, 2013 9:53pm ET
And to the point above by Mr. Powers, that a wine changes over time. I recall tasting Chateau Angelus 2000 at a large tasting of Bordeaux. The wine was quite tight showing its Cabernet Franc with firm red fruit and perhaps a bit of herbs and spice. A nice effort, I thought but some other wines at the Grand Cru Classe tasting I thought showed better. Others I spoke with thought it was the wine of the tasting.

I tasted the wine again a few months later at a much smaller sit-down Bordeaux tasting. The same Chateau Angelus 2000 showed a plump and massive dark Merlot fruit character, the Cabernet Franc entirely in the background. I then understood the buzz of some at the previous tasting about its showiness (even though perhaps I personally still wouldn't have considered it the wine of the tasting).

Was it the time that passed that caused the wine to show so differently? The particular bottle I tried which varied from some others? The Graves I tasted immediately prior?

Tasting and evaluating is very complicated, and this is one more reason to advocate screwcaps. I believe they decrease bottle variation and increase the likelihood that a wine will show more consistently over time.

Lyle Kumasaka
Arlington, VA —  March 13, 2013 9:56pm ET
Thanks for the explanation, and in the hope that there will be more posts on your process, here is a tangentially related question:

Once wines are tasted and reviewed, how is it decided which reviews get printed in the magazine and which get put in the database only for website subscribers? A lot more wineries advertise in the dead tree edition than on the website, so there are some bad incentives present. To extend the baseball analogies, I assume this is a hanging curveball for you, but I would be interested in the answer.
Thomas Matthews
New York —  March 14, 2013 9:04am ET

It's true that we taste more wines than we can fit into the magazine's Buying Guide, and so as we prepare each issue of the magazine, we must choose from the available pool which reviews to publish in print, and which will be entered into our on-line database.

It's not a trivial issue: of the nearly 20,000 wines we review, only about half are published in the Buying Guide.

We can't apply strict criteria, as the pool of wine reviews available changes from issue to issue. However, we take several general guidelines into account.

First, we publish the wines we think will most likely be of interest to our readers. The Spectator Selections that lead off each Buying Guide comprise our highest recommendations from the issue's pool of available wines. Overall, this means wines with high scores, especially relative to their prices, wines whose high production makes them likely to be widely available, and wines whose quality and reputation make them benchmarks for wine lovers.

Second, we try to keep a varied mix of wines in the Buying Guide, so that most readers will find wines that suit their palate and their budget.

Third, we often focus on wines that are the subject of the issue's tasting reports. (And when we publish a tasting report of an especially successful or significant region or varietal, we will include a complete chart of all the wines tasted for the report, whether or not the full reviews have ever appeared in the Buying Guide, for readers' reference.)

So which wines are not published in the magazine? Wines with scores so low they can't be recommended (except for cases where a significant winery scores surprisingly low, and we want to alert our readers to the mis-step). Wines with productions so low as to be basically unavailable (except for very high-profile wines). Wines with high prices relative to their scores. In other words, wines that have relatively limited appeal to our national readership.

I would also note that reviews of wines of special interest are frequently published on the Web site before they appear in the Buying Guide (in our weekly Insider newsletter, for example). And remember: every wine we taste is reviewed, and every review is available through the database on our Web site. We are not trying to hide any review, only trying to make the wisest allocation of our resources.

This is more than I intended to write, and perhaps more than you wanted to know. But I hope it answers your question.

Thomas Matthews
Executive editor
John Wilen
Texas —  March 14, 2013 3:26pm ET
Thomas, FYI there is no way to use the advanced search function on the WS website to find just those wine reviews that were "Web only" (i.e., do not have a published issue date). A database redesign was promised but never happened...
James Laube
Napa —  March 14, 2013 3:48pm ET

I retaste wines constantly and you're correct. A review is a snapshot in time. You're right, too, about wines changing, which makes it a moving target. I'm confident that my reviews reflect my experience that day, and I'm confident too that the review will be of value for some time. The greater the time between when a wine is reviewed and consumed, the greater the chance the two won't jibe. That's a key consideration, too, in suggested drink windows, earlier always being the wiser choice.
James Laube
Napa —  March 14, 2013 3:54pm ET

The two greatest influences on a wine once bottled are storage conditions and seal. Many of us here at WS feel strongly that wine and consumers are far better served by twist offs than corks. Some day soon there will be a way to determine if a wine has been improperly stored (that is allowed to get too hot). Those two factors account for most bottle variation and the many instances where a wine shows great one day and poorly the next.
Staffan Bjorlin
Los Angeles —  March 14, 2013 3:58pm ET
Interesting blogs (this one and HS's blog). MHO is that it is important for both professionals and consumers that want to learn more about wine to taste both blind (to avoid bias) and not blind (to provide context). I don't understand all the arguing about one method being better than the other.

Regarding the fraud speculation, I tend to agree that the wide distribution and thousand of consumers drinking the wine would probably expose most attempts. I understand that the possibility of fraud could be a concern for wineries since it could give your competition an unfair advantage. But why do consumers care? I don't understand all the fuss about whether scores are correct or tasting notes provide any value. If you don't like them don't read them. Last night I had a Mayacamas Sauvignon Blanc 2007 and Anthill Farms Anderson Valley Pinot Noir 2008. Both really good wines. Neither of them rated by any publication.
Christopher Fay
Philadelphia, PA, USA —  March 16, 2013 4:22pm ET
I'm always suspicious of people who post under one name but sign the post as another (ie, Vince/Tom). I think this might be poster fraud, and not that dissimilar from the mislabeling at Schild... How can we trust their post? Who are they, really? Is this poster Vince to WE and Tom to WS? Are they eligible for the 2013 Top 100 posters?
Timothy C Mooney
Arlington, VA —  March 16, 2013 6:29pm ET
As an enthusiastic conspiracy theorist, I find John Wilen's above hypothesis highly credible. I would add another possibility for using high scores to manipulate consumers. A small- to medium-batch wine scores a 90+ rating from WS. The rating write-up lists a 3,000-case production run. The wine quickly sells out. A few months later the same vintage of the wine starts appearing in large national retail chains. For some reason the wine bears scant resemblance to the superlative aromatics mentioned in the WS tasting notes.
Neil H Levine
Maplewood, nj —  March 23, 2013 3:02pm ET
Another related question I have is: what about bottle variation? How good are wineries at ensuring that all the bottles of a specific wine taste the same? I have noticed very big differences in the same wine when I have tasted it at home or at a restaurant, etc. Could the differences in how a wine tastes be more bottle variation than storage or how our tastes change daily? Several other commenters have hinted at this issue. Is the Schild issue really an isolated situation?

BTW this string has been fascinating.
