
james laube's wine flights

Getting a Read on Readers

Photo by: Greg Gorman

Posted: Oct 24, 2006 12:53pm ET

One of the things I enjoy most about the Wine Experience is the chance to meet readers, old and new, and talk about what I’m (or we’re) doing right or wrong, or how we might improve.

In the span of nearly four days, I ran into dozens of readers, producers, restaurateurs and retailers at the walk-around tastings, dinners, restaurants, lunches and seminars—even in the coffee line at the Starbucks kiosk at the Marriott.

One topic that came up on several occasions was my critique and analysis of the 2003 Cabernets, which appeared in the Nov. 15 issue that many people had just received and read.

Although I’ve consistently called 2003 a mixed and challenging vintage for Napa Cabernet for years, and even written several pieces on this subject, some people were surprised by my reviews.

Well, I have to say I was surprised by some of their comments and conclusions.

For instance, a couple of people said, effectively, “Boy, you really trashed the ’03 vintage.”

Really? I gave the vintage an 85 rating, which puts it at the low end of very good. I did say it’s a weaker year than either 2001 or 2002, and I compared 2003 to the 2000 vintage, which, in general, yielded lean, less ripe wines.

Other common comments related to specific ratings for specific wines. Some readers concluded that when I gave a high-profile wine an 83, I hated it. An 83-point rating means I thought a wine was good, not horrible. You have to use, and respect, our scale to get an accurate picture of where a given wine falls in our rankings.

I'm sorry other critics rarely go below 85 points and don't tell you about wines they don't like, or wouldn't recommend. That makes those of us who use a fuller range of the rating scale look like we're the only ones who don't like certain wines. Truth is, other critics apparently don't want to weigh in on these wines, and that's their choice.

Yet another observation came after the two nights of Grand Tastings and seminars at which 2003 Cabernets were tasted. Previously, many people hadn’t tried the wines I’d reviewed in the issue, and once they had the opportunity at our events, most said they agreed with my reviews. And those who attended my Cabernet seminar also learned that there were some out-of-this-world 2003s.

I find it impossible to debate with people who haven’t tried a wine, but want to rely on a winery’s reputation or ascribe a motive to me. Any talk about me losing enthusiasm for Napa Cabernet, or trying to correct the market by giving “low” scores to big-name producers (as if an 88 or 89 is dumping on a wine), is simply wrong.

My goal is to provide you with my most accurate impression of a given wine. I taste virtually every wine twice, blind, and have one of my colleagues taste it blind as well. If I think it’s an off-bottle, I retry it. If an off-wine comes from a good producer, I retry it. Sadly, the issues of bad corks, bottle variation (including storage) and undesirable microbial activity in wines make it harder for all of us to know if we are tasting exactly the same wine.

If you have new comments or questions about the Cabernet issue, I’ll take my best stab at answering them.

John Wilen
Texas —  October 24, 2006 6:43pm ET
Unlike many people who are lighting up wine bulletin boards all over the Internet, I am not bothered at all by the score differentials between you and Parker and/or Tanzer on many '03 CA Cabernets. Normally our tasting group's collective palate aligns better with you than with them. But what continues to concern me is the ever-increasing gulf between your own barrel and bottle scores for a given wine. When there is a significant difference, the barrel scores tend to be higher, as is the case with a number of 2003s (BV Georges, Diamond Creek Volcanic, St. Clement Howell, Hundred Acre, WHL reserve, etc.). We skeptics have always believed that wineries often succumb to the temptation to preselect the best barrels for tasting. However, with the 2003s the pendulum has swung the other way as well. Wines like Pahlmeyer and Spottswoode improved dramatically from barrel to bottle. All these examples serve to raise the question: what is the value of rating barrel samples? Can we trust them? Alternatively, when you see a 6-10 point improvement in the finished product, does it make you wonder what the winery actually sent you to sample?
John Wilen
Texas —  October 24, 2006 6:45pm ET
I think your vintage ratings are often called into question because, to my knowledge, you have never defined how they are derived. Does the composite vintage score reflect the scores of a collection of benchmark wines, such as your Top 50? Or is it derived by looking at the distribution curve of hundreds of scores that year (e.g., x% are above 90, y% are below 80)? Or perhaps no math is involved at all, and it is just gut feel? An individual wine rating is a quantitative assessment of something very subjective. In my opinion, a vintage rating should not be. There is no need for it to be subjective, since we have hundreds of data points. In other words, by definition the vintage score should somehow be mathematically related to the individual wines released that year, either to all of them or to an intelligently selected cross-section. Not revealing this derivation contributes to critics becoming lightning rods. When there's no data, people begin to question motives. So tell us again: how exactly did you derive an 85 for 2003?
James Laube
Napa, CA —  October 24, 2006 6:54pm ET
John, I wrote about this a few days ago: http://www.winespectator.com/Wine/Blogs/Blog_Detail/0,4211,546,00.html
Mark Mccullough
GA —  October 25, 2006 12:42am ET
James, I think your comments and ratings on the 2003 vintage, especially relative to the previous three, have been exceptionally good. It takes a lot more courage to rate off-years than great years. It seems like most of the griping on WS blogs has been from those who are on the top mailing lists and are used to getting mid-90s-plus ratings for their $150+. When you have a difficult vintage where the top names are getting ratings in the same range as $15 bottles, the tendency is to argue with the messenger, rather than attribute the bad QPR to a mediocre year. The difference between you and your peers at other magazines/websites is that you taste everything blind, and name reputation doesn't matter. In my experience, that's why I think WS, and I'm including all the editors, is far more consistent and accurate than any other source (in particular, MUCH more accurate than Parker). Keep calling them exactly as you see them, James.
Colin Haggerty
La Jolla, California —  October 25, 2006 9:46am ET
James-I am one who greatly appreciates seeing ALL scores of ALL wines tasted. Other critics often just list those wines rated 85 or above, leaving the reader to speculate as to whether the critic ever even tasted the wine in question. Many thanks for expressing your honest opinions. That is what readers like myself are looking for.
Matt Devan
Fairhope, Alabama —  October 25, 2006 11:00am ET
Lower ratings and less-stellar tasting notes can be hard to swallow when expensive wines are purchased on speculation, or a particular buyer is linked to a particular winery. When consumers join mailing lists and purchase in quantity without pre-tasting, they are automatically placed in the position of wholesalers, defending brands and vintages by virtue of having to defend their reputation as buyers, and in some cases sellers. Take the Nov. 15 issue of Wine Spectator, for example: even the Top 50 Cabernet producers have less-than-stellar vintages. If they didn't, it would make you wonder how much effect terroir and climate had on the wines. It is certainly safe and easy to collect Caymus, Harlan and Shafer, but I prefer to reserve my loyalties for the best wines of each vintage, and I use Spectator and Parker as a starting point in my search. The folks who aren't complaining about your scores, vintage ratings and tasting notes are the ones who use your notes and scores as tools to navigate the sea of wine that floods in and ebbs out every year. How else could a non-traveling Alabama wine enthusiast zero in on brilliant wines while they are still available? Regardless of the quality of the vintage, there are exceptional wines at exceptional prices: St. Francis 2002 Cabernet Sauvignon, $20, 90 WS; Sequum Cabernet Sauvignon Napa Valley Four Soil Melange 2003, $55, 92 WS; Sebastiani has a few 2003 91-pointers for $30; Salvestrin Cabernet Sauvignon Napa Valley Salvestrin Estate Vineyard 2002, $48, 93 WS; plus Neal Family, Meyer, Match, Lewis, Ladera, John Anthony, Carter and Bennett Lane. I don't think people disagree with your tasting ability, James. Anger is a natural reaction when you choose a stock, a boxer, a horse or a spouse, and they don't perform up to prediction. You just happened to be the ref at the time.
John Wilen
Texas —  October 25, 2006 11:32am ET
Mark, while I agree that JL's assessment of the '03 vintage has surely bent a few noses out of shape among those who buy cult Cabernets at high prices, far and away the bigger fallout is going to be at retail. Once word gets out that 2003 is not a favored vintage, inventory will sit on shelves. Customers will postpone purchases and wait for the 2004s. (Just ask any retailer about 1998 and 2000. Many still have nightmares; most certainly have less hair!)

While you and I prefer our primary critic to taste blind, to use the full scoring scale, to ignore winery and winemaker reputation, to reveal the dogs rather than hide them, and to not let previous vintages of a wine influence the current evaluation, the retail trade DOES NOT. Wine retailing has become a cutthroat business as the Internet, brand proliferation, cheap imports and other factors have crippled margins. The last thing the industry wants is a powerful critic who is all over the map with scores. THEY seem to prefer the subtle approach that Parker uses, where his scores for a given wine do not swing wildly from year to year. If a normally great wine has an off year, RP will give it 88. That is a strong enough slap to send a message without upsetting the retail trade. His subscribers know how to look for the hints. His readers get the message without setting off WWIII at the wine store. WS appears to reject this approach.

WS's writers like JL believe their first priority is to the consumer, who can handle the truth. They believe that by calling them as they see them, they maintain integrity with the reader. But in reality, WS serves two constituencies: consumers and retailers. (I believe WS was the first to seed retailers with advance reviews so that sellers could stock up on the well-rated wines and have ample point-of-sale material when the review broke in the magazine a week later.) Having talked with many of them, I'd say retailers have a love-hate relationship with WS. They need them, but they resent their power. Scoring 2003 with an 85 will once again test the limits of that relationship.
James Laube
Napa, CA —  October 25, 2006 12:06pm ET
John, on barrel tastings, I've written about that too, and will address it in a future blog. But the bottom line is that I'm curious about the wines, many wineries are willing to submit their best honest samples, and after 20 years of trying to educate myself, from barrel tastings to retrospectives, I like the opportunity to see how the young wines are showing. I wish I could spend more time with other wines out of barrel as well.
Jonathon Wagner
San Francisco, CA —  October 25, 2006 12:13pm ET
I think it's a bit oversimplified to pretend as though nothing is different, and that you're utilizing the WS rating scale as you always have. Most of us former fans of yours have noticed some dramatic shifts in the past three years with regard to your palate sensitivities, investigative reporting, odd trends in barrel-to-bottle ratings, as well as final bottle ratings. While most people (connoisseurs and critics alike) feel the quality of CA wine is at its peak (especially with 2001 and 2002 back to back), your views have seemed to take a path less traveled from this perspective. Many of your readers don't know what to think, and have stopped taking you seriously because of your often erratic differences of opinion. While differences are fine and often valued, it becomes questionable when it happens as often as it has with you in recent years. This blog attempts to address this by simply denying it. Your fans (former and current) want to know: what's different with you?
James Laube
Napa, CA —  October 25, 2006 12:35pm ET
Jonathon, if anything has changed, it is that in the past few years I've tasted more wines, both from California and around the world, and feel more confident than ever in my tasting results. You seem to think I've missed California at its peak, yet I feel I've been right on track with Cabernet (both 2001 and 2002 earning outstanding ratings as vintages) and more so with Pinot Noir, Syrah, Chardonnay and even Sauvignon Blanc. As I indicated before, those who know me best and taste with me most often appreciate the time and effort I put in with the wines. Sorry you think I'm in denial. Not so.
George Pallas
Arcadia, Ca —  October 25, 2006 12:57pm ET
James, is '03 going downhill as time goes on? Some of the good '03s you rated as highly as their '02s, like Buehler, Caymus and Sebastiani AV, got early ratings as long as a year ago, but it seems the low ratings for '03s are getting more frequent as time goes on.
Mark Mccullough
GA —  October 25, 2006 2:03pm ET
John, you confirmed my view on Parker (that he is compromised and his scale is highly skewed) and why I generally ignore his ratings as a purchase guide and never use them as a sole source. Re: your point about retail, many retailers don't seem to have much restraint against immediately adding extra markups after the WS ratings and Top 100 are known, so I'm having trouble feeling sorry for them simply because a reviewer writes that a wine wasn't as good as last year's release and buyers might actually take that into consideration. Perhaps lagging inventories may spark some pushback on wineries upping their prices even when they know ahead of release that it's a mediocre vintage. For those who think JL is out to lunch on his 2003 ratings, aren't the low ratings a benefit, since now there may be "bargains" on these wines and you can stock up at a discount? Isn't wine "value" determined by your individual palate?
Mike Vessa
East Williston,NY —  October 25, 2006 2:25pm ET
Jim, could you please clarify your rating in this month's WS for the Carter Beckstoffer To-Kalon: was it the '02 or the '03 that received a 95-point rating? On pg. 138 the '03 seems to have received that score, but the tasting notes on pg. 224 and the text itself deal with the '02. Please advise. PS: I still think the '02 Lewis Cab and '01 Les Pavots are the best calls you've made! Regards, MPV
James Laube
Napa, CA —  October 25, 2006 2:35pm ET
Mike, the 95-point Carter rating is for the 2002. The 2003 got 91. Both are wonderful wines. Thanks for catching our typo; apologies for the confusion.
Michael Russak
October 25, 2006 2:53pm ET
I really don't understand all the confusion. In this critic's opinion, many 2003s are only "good" to "very good" and not "great" or "classic." It's a data point. If it's corroborated by other critics for a given wine, it becomes a consensus. If not, well, you'll just have to try the wine in question and see what you think, and whose opinion you then value more. All a critic can do is give an honest assessment of the wine. That is what James says he does, and there is no reason in the world to doubt him. If people feel his "non-consensus" reviews are way off the mark, then those people should ignore those reviews, because they are simply not relevant. If people complain that they don't want to shell out $100 to find out whom they agree with, well, then start by buying $15 wines that James likes and see if you like them too. Then you will know if this is a critic whose opinion will matter to your taste buds, and then you can follow his lead on more expensive wines (or not). Of course this post ignores the $$$ ramifications for retailers and flippers, but I don't care about that, because I am just a guy who likes wine, wants good wine at what I consider reasonable prices, and wants a critic to just say what he thinks about a wine in his expert opinion and leave it at that.
Jason Kadushin
Seattle, WA —  October 25, 2006 4:01pm ET
James,

Wow this piece elicited some lively discussion, to which of course I am going to add my two cents. First off, I respect what you and WS do and think that you offer a valuable consumer/trade resource.

That said, I have some problems with how WS uses rankings, data and numbers in general. Let's turn to the issue of vintage rankings. For 2003, 2002 and 2001, Napa Cab was given an 85, 93 and 93 respectively. As someone else pointed out, how this is arrived at is a big mystery and should be linked (at least somewhat) to wine ratings. I agree, and would think that you would as well. So I downloaded all of the ratings for Napa Cabs from 2001-2003 that are available on the WS website (2003 appears to be incomplete). The average Napa Cab (and blends) scores for these vintages were 86.7 ('01), 86.9 ('02) and 86.4 ('03). Yet the '01 and '02 vintages were each rated a 93. So then I took the top quartile for each, which averaged 90.8 ('01), 91 ('02) and 90 ('03). Again, how does this work?

Second, in your other note you said, "Basically, I use a system that relies on the percentage of outstanding wines from a given appellation," amongst other factors. Outstanding in WS-speak is 90+. So let's return to the '01-'03 vintages. In 2001, 30.9% of the wines rated 90+, and these averaged 92; in 2002, 32.5% (92.6 avg); in 2003, 28.9% (91.5 avg; for the record, 31.5% occurred at an avg of 89). I'm at a loss once more.
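[Editor's note: for readers who want to replicate this kind of arithmetic, here is a minimal Python sketch of the summary statistics Jason describes (mean score, top-quartile mean and share of 90+ wines). The score lists are invented placeholders, not actual Wine Spectator data.]

```python
# Minimal sketch of the vintage summary statistics discussed above.
# The score lists are invented placeholders, not actual WS data.

def vintage_summary(scores):
    """Return mean score, top-quartile mean and share of 90+ wines."""
    ordered = sorted(scores, reverse=True)
    quartile = ordered[: max(1, len(ordered) // 4)]  # top 25% of scores
    return {
        "mean": round(sum(scores) / len(scores), 1),
        "top_quartile_mean": round(sum(quartile) / len(quartile), 1),
        "pct_90_plus": round(100 * sum(s >= 90 for s in scores) / len(scores), 1),
    }

# Hypothetical bottle scores for three vintages.
vintages = {
    "2001": [94, 93, 91, 90, 88, 87, 86, 84],
    "2002": [95, 93, 92, 90, 89, 87, 85, 84],
    "2003": [92, 90, 88, 86, 85, 84, 83, 81],
}

for year, scores in vintages.items():
    print(year, vintage_summary(scores))
```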

This brings us to yet another issue: the rating scale. In the recent issue, WS attempted to devise a "quality price ratio" (QPR) for what was deemed the great Cab houses of 1990-2003. This was simply the average score divided by the average price. There are a number of flaws with this. First and foremost, it gives the reader the impression that the ENTIRE range of scores is used, which we all know is not true. From 1990-2003, 3,950 Cabs (and blends) were listed in the database.
Jason Kadushin
Seattle, WA —  October 25, 2006 4:04pm ET


Of these, 95.8% scored 80+, which means (according to WS ratings) that 95.8% of wines over 13 vintages were "good" and only 4.2% were "mediocre" or worse. How is it possible that less than 5% of wines were mediocre? There seems to be a problem with what defines a good wine, purely from a ratings perspective. Back to the QPR. Issue one: many of these wines have risen dramatically in price, some over fivefold, during this same period (btw, from 2001-2003 alone the avg price has increased ~$8). So looking at averages is deceptive. Second, and more importantly, what this neglects is the average for the Napa Cab market in general. The real question that many consumers ask is "is this bottle worth the EXTRA amount of $$$?" So a better analysis for WS to undertake for the "great Cab houses" is:

{(average price for X producer) - (average cab price)} / {(average score of X producer) - (average cab score)}

If they had wanted to take this even further, the analysis should have been done for each vintage for each producer and then averaged over time. This allows one to answer the question: how much does each additional point above average cost? And it takes into account how much better than the general market a given producer is over time. Now that seems like a nugget of information for wine buyers.
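[Editor's note: for illustration only, here is a small Python sketch of that "extra dollars per extra point" calculation; the producer and market averages are made up, not drawn from the magazine's data.]

```python
# Sketch of the "extra dollars per extra point" idea proposed above.
# All prices and scores are invented for illustration.

def dollars_per_extra_point(producer_price, market_price,
                            producer_score, market_score):
    """Premium paid per rating point above the market average."""
    score_edge = producer_score - market_score
    if score_edge <= 0:
        return None  # no premium is justified by score alone
    return (producer_price - market_price) / score_edge

# Hypothetical producer: a $120 average bottle scoring 93 on average,
# against a $45 market average scoring 87.
premium = dollars_per_extra_point(120.0, 45.0, 93.0, 87.0)
print(f"${premium:.2f} per point above the market")  # $12.50 per point
```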

By doing a simple price-per-point calculation, WS has rewarded wines that have often underperformed for a given vintage, or are excessively priced relative to the market. Furthermore, WS's QPR has made expensive wines look cheap because it distributed their cost over 100 points when in reality, 95.8% of the time, only 20 points are used (for the record, 75+ covers 98.8% of Napa Valley Cabs). WS should ask itself two related questions. First, should we redo our rating system? Second, in what WS calls the Golden Age of wine (an increase in quality and availability), should we reconsider what it means to be a "good" wine?

I am not on any top mailing lists so
John Wilen
Texas —  October 25, 2006 6:15pm ET
Mark, yes I agree that low ratings inevitably make higher priced wines harder to sell. That is the reality of the score-driven wine world we live in. For people who really liked the wines, or vertical collectors who can't bear to miss a vintage, IT IS a benefit in disguise. They should stop complaining about the low score and just go buy the wine at a fire sale. But they can't because of cognitive dissonance, the uncomfortable tension that comes from holding two conflicting thoughts at the same time. They struggle as their brain toggles between "I like the wine and it just got cheaper" AND "but a recognized expert says he thinks it is only average or worse". So what happens? They lash out at the critic, accusing him of incompetence, bias, hidden motives and/or changing the rules. Mature, huh?

Mike, you are correct that JL's view is only a data point, and that he is entitled to his opinion. But it is naive to think that it is just another data point, to be simply accepted or ignored by consumers. For better or for worse, WS (or RP) ratings move markets up or down. People's lives are impacted, people's businesses are impacted, people's careers are impacted. Believe me, JL knows his position wields enormous power, power that directly affects winery personnel, wholesalers and retailers --- the very industry he has spent a career studying and immersing himself in. I, for one, know he doesn't accept that responsibility lightly. Rather, he takes extra care to exert that power wisely and with integrity.
Richard Wilson
Texas —  October 25, 2006 6:29pm ET
I appreciate the reference to the scoring. I sometimes lose sight of that and wonder "Why an 88?" when I like a wine, before remembering that 88 is not a bad rating, and taste is in the eye of the beholder. Interestingly, I also wonder why some wines are not tasted or rated by WS, given the comment about others not rating wines if the score is below 85. I tried a Cabernet recently that I really enjoyed. I looked to see what rating was assigned, to gauge how much time I would have to buy a few more bottles (when WS goes 90 or better, it is amazing how fast wines disappear). Much to my surprise, I found no rating. At least I know I have a while to buy a few more of that one.
Robert Fukushima
California —  October 25, 2006 7:38pm ET
James, for your understanding, a read on a reader as it were: I do not really make purchases based on the score a wine receives from any reviewer/critic. Scores are benchmarks that can help me establish some sense of what is being said about a wine. I do not really use the vintage charts, as my wine budget doesn't allow me to make widespread purchases of vintages. I do have wines I buy that are not rated, reviewed or noted; they are simply wines I have stumbled upon whose flavor, style or vineyard I have a preference for. When I finally get around to reading your review of the 2003 Cabs, I will note the scores, but I care more about the descriptions when making any purchases.
James Laube
Napa, CA —  October 25, 2006 7:51pm ET
Robert, I like your thoughtful approach and think the same way when I'm looking at wines I'm not familiar with. Though I must say that when I'm tasting a wine I don't know much about, I like to get as much input as possible, and sometimes that can be a vintage chart.
Maryann Worobiec
Napa, CA —  October 25, 2006 8:14pm ET
Jason, wow, what research! I'm afraid, however, that all of your numbers, if they're based on the online database, are based on an incomplete set of data. In particular, the Nov. 15 issue that features our Cabernet report is not available in the online database yet.

To arrive at a vintage chart, we start by looking at the total number of wines from an area we've reviewed (excluding barrel scores) and the percentage of 90+ wines. I have access to all of those scores, and right now 2001 and 2002 are neck and neck (less than a tenth of a percentage point apart), so it's no surprise to me that they have the same score. Likewise, 2003 is better than 1998, and about the same as 2000. Again, the vintage scores reflect this.

This isn't the only thing we use to determine vintage scores; we also look at the percentage of wines that get 88+, or 85+. We read dozens of accounts of vintages from winemakers all over the state. We get out and kick the dirt around in the vineyards, and we see for ourselves what is happening.
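[Editor's note: to make that concrete, here is a rough Python sketch of the threshold percentages MaryAnn mentions. It is one illustrative reading of the method, with invented scores, and not Wine Spectator's actual formula.]

```python
# Rough sketch of the 90+/88+/85+ threshold shares described above.
# Scores are invented; this is not Wine Spectator's actual formula.

def threshold_shares(scores, thresholds=(90, 88, 85)):
    """Percentage of reviewed wines at or above each threshold."""
    n = len(scores)
    return {t: round(100 * sum(s >= t for s in scores) / n, 1)
            for t in thresholds}

scores_2003 = [92, 90, 89, 88, 87, 86, 85, 84, 83, 81]  # placeholder scores
print(threshold_shares(scores_2003))
# -> {90: 20.0, 88: 40.0, 85: 70.0}
```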

Anyway, thank you for your research--I do this all the time, so I know how much work it can be!

MaryAnn Worobiec Bovio
Tasting Coordinator
Wine Spectator
Thomas Altmayer
October 26, 2006 1:11am ET
James, do you think your preferences in California wines have changed over the years? Over the past decade, it seems to me Cabs have gotten bigger and bigger, both in terms of intensity of fruit, alcohol and even viscosity. I'm wondering if you agree that there has been this stylistic change and whether you have a preference of one style versus another.
John Wilen
Texas —  October 26, 2006 8:05am ET
Tom, a number of wine gurus have weighed in on your question. California winemaking consultant George Vierra calls these "social wines," because they are successful at wine tastings, and as topics of conversation, but far less suitable at the dinner table. Increasingly, these are "wines made to impress and amaze, rather than wines one can actually drink. With their often freakishly high alcohol, full body, port-like ripeness and strong oakiness, they are frequently overbearing at the dinner table, and owing to their low acidity they are incapable of cleaning and refreshing the palate. These wines may be best enjoyed as food substitutes."

Stephen Tanzer was the first to identify a key consumer insight. He said that "a powerful influence driving producers to vinify ever-riper fruit has been the apparent taste preferences of a new generation of wine drinkers, many of whom appear to be once-a-week drinkers who uncork the occasional trophy wine with friends as a sort of urban indoor sporting event, as opposed to more traditional imbibers who routinely enjoy wine with their dinner without making a fetish of it. People who view wine as an occasional indulgence are more likely to want their doors blown off than they are to prize subtlety."
James Zalenka
Pittsburgh PA —  October 26, 2006 8:48am ET
Does anyone know if the problems at BV still persist? I've seen recent vintages of Tapestry and Georges de Latour sell for considerably less than before. Rutherford was always my house Cab, and I've been afraid to buy it since the '98 vintage.
Jason Kadushin
Seattle, WA —  October 26, 2006 2:36pm ET
Maryann,

As I noted, "2003 appears to be incomplete."

As you said, "...all of your numbers, if they're based on the online database, are based on an incomplete set of data." If, as you say, ALL of the #s are wrong, does this mean that NOT all of the wines reviewed in the magazine are published online? (Again, I recognize that 2003 is yet to be uploaded online and that there are some WEB ONLY reviews; so are there MAG-only reviews?)

Also, when looking at the % of wines 90+ and then the avg of those 90+ wines, it seems the vintage gets some subjective bump up from this #.

I would also like to hear any of your thoughts on the QPR analysis that was done in the previous magazine.

Thanks for the response.
Maryann Worobiec
Napa, CA —  October 26, 2006 4:45pm ET
Jason, to add to what Dana has clarified, the Nov. 15 issue contains scores for Cabernets from multiple vintages, not just 2003 (although the bulk certainly are 2003s). The body of wine that we're looking at changes on a daily, almost hourly basis, as wines come in for review, we review them (daily), and that information is then transferred to print and digital for our readers. This is what I mean when I suggest your numbers and my numbers might vary a little. And please remember that this percentage of 90+ wines is just one of the criteria we use.

I appreciate your comments on the QPR; there are certainly many ways to look at it, and many ways to weigh these factors. What we did was try to quantify the relative value of these top 50 wines in as simple a snapshot as we could.

MaryAnn
Jason Kadushin
Seattle, WA —  October 26, 2006 7:39pm ET
Sorry, I get it. What I meant in my original post by "incomplete" was "not yet uploaded," since it takes time for reviews to hit the website. Which is why I looked at 2001 and 2002 to begin with.

Thank you.
John Lindsay
San Diego, California —  October 26, 2006 7:56pm ET
The biggest problem I have with your recent scores is the difference in ratings from ALL the other reviewers on the 2001 Montelena Estate:

Laube: 67
Parker: 95
Tanzer: 93+
Wine Enthusiast: 93
Decanter: 4 stars

How can five professional reviewers be this far apart, or is it just one who is off? I have had the wine on many occasions, and while it is maybe not a 95, it is surely not a 67.
Apj Powers
Dallas, TX —  October 27, 2006 6:20am ET
Two late additions to this string: I use vintage and btl ratings to "help" in my decisions for our restaurant's list. After a few yrs, MOST consumers get these confused, or forget, or never knew the ratings. I overhear misquoted vintage talk all the time. James Z: BV prices have dropped. I have recently picked up several vintages of 'Georges de Latour' and have had positive feedback on the '98, '99, '02 and esp. the '97. I have openly discussed their problems and controversies w/ staff and guests (if it is brought up), always offering to take the btls back even if it is just a personal-taste thing, and so far have had happy customers and reorders of 2nd btls. Does this mean they are 67 pts or 95 pts? I don't bring up the issues, but I actually like when they are brought up, because then I encourage guests to try the wine as a "no-lose" proposition. It's interesting to me, and to them, to debate the wines.
Michael Russak
October 27, 2006 11:50am ET
John, I can accept that you think it "naive" that I refer to James' scores as one data point; I prefer to call it "simple." Semantics aside, in the end the critic exists for one audience only: the consumer. All he or she need do is have integrity and honesty (and of course expertise concerning whatever he is critiquing) and describe his assessment in a forthright manner. That is exactly what James has done with 2003 Cali Cabs. It is that simple. The enormous collateral effects of a critic's review or score in the wine industry should not factor AT ALL into his score or review if a critic wants to maintain integrity. Those ripple effects are a function of the bizarre situation in which only two or three opinions matter, not to mention the obscene prices charged for some wines. This never happens with, say, a movie on which some critics disagree, because there are tons of critics and it's only $9 to find out for yourself what you think.
Matthew Letson
Wilmington, NC —  October 27, 2006 6:19pm ET
Dana, are you suggesting that Jim Laube has a higher sensitivity to TCA than Parker, Tanzer and the staff at Decanter? And if he does have a hypersensitivity to TCA that is much higher than 99% of the tire-kicking public's, one might begin to wonder how useful his scoring and tasting notes are (at least for those wines he finds tainted by TCA). Just curious...
