
Thursday, April 18, 2013

Wine Spectator moving away from numerical scores?

The May 31, 2013 issue of Wine Spectator is curious for several reasons (aside from being published a month in advance). First, the meat and potatoes of this issue revolve around sushi and sake. It is not unusual for Wine Spectator to feature stories on food or specific types of wine, but the focus on Nihonshu (sake is actually the general Japanese term for alcoholic beverages; 日本酒, or nihonshu, is the fermented rice beverage referred to as sake in English) is interesting and welcome. Having lived in Japan for a year almost eight years ago, I am probably more interested in Japanese food and beverages than the average wine drinker. Harvey Steiman wrote an interesting piece on sushi master Jiro Ono, the subject of the recent documentary Jiro Dreams of Sushi. Kim Marcus and Mitch Frank each added stories on Nihonshu. It was in Marcus' story that the second curious issue arose.

In "Cracking the Sake Code," Marcus does a great job of describing how sake is made and defining the various terms used to describe it. However, one thing missing was the terroir of sake. Just as with wine, the regional differences of sake are both clearly defined and endlessly argued in Japan. Sakes from Kyoto, Niigata and Yamagata are all very different; not because of the soils or climate, but because of the water, yeast and rice varieties used. Geography matters, but that isn't made apparent in the article. In fact, Marcus actually suggests that rice and water aren't usually locally sourced and that the source doesn't matter. It would have been nice to see the geography of sake addressed with something approaching the effort the magazine devotes to wine. Yet, that wasn't the impetus for this post.

What's missing?
The most striking thing missing from Marcus' story was Wine Spectator's 100-point scale. Instead of numerical scores, Marcus, along with Bruce Sanderson, blind-tasted the sake and used descriptive categories (words, not points) to reflect how highly they regarded each sake relative to other sake in different categories. Does 92 points describe something that "outstanding" does not? Do you gain more information knowing a wine rates 88 points as opposed to "very good?" As the precise score of a wine varies from palate to palate, I think categories are in fact more useful. I think the method was more effective at describing the sake than if they had used points, though I am clearly not an advocate of the 100-pt system. Is this a hint that Wine Spectator is moving away from numerical scores? If sake doesn't need scores, then why does wine?
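To make the point concrete: Wine Spectator's own published key already buckets its scores into descriptive bands, so the digits add less than they appear to. The sketch below (a hypothetical helper I wrote for illustration, not anything the magazine publishes) shows how a 92 and a 93 collapse to the same word:

```python
def category(score: int) -> str:
    """Map a 100-point score to Wine Spectator's descriptive band.

    Bands per the magazine's published key: 95-100 Classic,
    90-94 Outstanding, 85-89 Very Good, 80-84 Good,
    75-79 Mediocre, 50-74 Not Recommended.
    """
    bands = [
        (95, "Classic"),
        (90, "Outstanding"),
        (85, "Very Good"),
        (80, "Good"),
        (75, "Mediocre"),
        (50, "Not Recommended"),
    ]
    for floor, label in bands:
        if score >= floor:
            return label
    raise ValueError("Wine Spectator scores start at 50")

print(category(92))  # Outstanding
print(category(93))  # Outstanding -- same word as 92
print(category(88))  # Very Good
```

In other words, the sake article's word-based ratings are just the bands without the false precision of the digits inside them.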

8 comments:

  1. Kyle,

    Wine Spectator is not "moving away from numerical scores." Our readers would not accept that.

    In our opinion, a scale of judgment can be more precise as expertise is greater. After decades of experience tasting and evaluating wines, we feel confident in our ability to use the 100-point scale in a way that's consistent, reliable and useful for readers.

    We have much less experience with sake, and felt that broader categories would be more appropriate to express our opinions on their quality. However, I could easily see a critic with deeper experience in sake using the 100-point scale, and perhaps if we taste extensively enough, one day we will too.

    Glad you enjoyed the issue.

    Thomas Matthews
    Executive editor
    Wine Spectator

  2. Tom,

    I of course didn't think that Wine Spectator was getting rid of the 100-pt scale. Wishfully hoped, maybe... I just thought it was noteworthy that such a prominent feature story did not make use of the system. I fully understand the reasoning behind why it wasn't used. Like I said in the post, I think this method is much more meaningful to me and possibly many of your subscribers and potential subscribers. Obviously, you'll do what you think is best for your publication. Thanks for taking the time to read and comment! Cheers!

    Kyle

  3. I applaud Wine Spectator's humility, as expressed by Matthews, on this issue.

  4. Blake, I concur. I do wish, however, that the same sentiments were associated with wine reviews as well.

  5. OK, I am going to go out on a limb here... I love the 100-pt scale with the following caveats: (1) the reviewer must be blinded to the winery and the cost of the wine; (2) the reviewer must have a consistent palate (e.g., when they say "cherry," the flavor is consistent, even if I do not interpret it as cherry); and (3) my palate must have some reasonable calibration with the reviewer's (e.g., if a wine is scored highly, I generally like the wine, and vice versa). I find that WS consistently meets these three criteria, and thus I frequently use their recommendations to purchase wine. Now, I am not going to try to argue that a 93-pt wine is always better than a 92-pt wine... that would be silly and beside the point. The ratings are a great guide to assist with my purchases of wine that I cannot taste a priori, and they have led me to some wineries that I have enjoyed immensely (Rudius, Olabisi, McPrice Myers, Booker, Bedrock, Ravines, Prospect 772, among many others).

    As an aside, I've read your blog many times, although this is my first comment. I am a big fan of wines from AZ, and I appreciate your focus and devotion to Colorado wines. As a matter of fact, I used JMs recent high ratings of Infinite Monkey Theorem to purchase a 1/2 case... FYI, I loved the red blend and Semillon. I've tasted through a number of CO wines in the past when I have visited Denver, and there is definitely some variability (more than what I see in California... some are great, others not so much). There is NO WAY I would have spent $45 plus shipping on a wine from CO without either tasting it first (tough to do when I live in Sacramento) or having an independent 3rd party vouch for the quality. Another benefit of the 100-pt scale, I guess...

  6. Andrew, thanks for reading and commenting. I, too, often find my palate in agreement with some of the WS editors'. I'm not saying they don't know what they're doing. I just think a rating of "Outstanding," along with a tasting note, provides as much information to a consumer as a 92 or 93 with the same tasting note. Scores really should come with a +/- or a range (as they do for barrel samples). And of course reviews provide a considerable benefit to consumers making purchase decisions prior to tasting a wine. I think the false sense of precision the 100-pt system projects could easily be avoided by using the broader descriptive categories used in the sake article.

    I'm also glad to hear that you've purchased CO (and AZ) wines! And yes, there is a wide range of quality to be found in Colorado. Though I've found the same range in California and France. The orders of magnitude more wineries in each of those places perhaps make finding top-quality wine a bit easier... Though if you know what you're looking for in CO, AZ, MI, NY or even MD (check back tomorrow and Monday), you can find excellent juice. I just wish I could convince the best CO wineries that 3rd-party verification really does matter to consumers (like you!). Thanks again for your comments!

  7. Kyle,

    I too wish we could retire the 100-point scale in favor of a more meaningful, honest method of critique. As a career wine merchant I have to say, the 100-point scale is the snake oil of our business. Yes, scores sell a ton of wine. No question. The problem is that it's no more possible to apply an accurate, definitive numerical rating to a wine than it is to apply one to a song, a meal, a painting or a sunset. If it were possible, logically, wouldn't all the "expert reviewers" consistently come up with the exact same scores when reviewing the same wines? Like anything random, it does happen sometimes, just not consistently.

    In grade school, if you scored 89 points on an exam with 100 questions, you knew exactly why you earned the score you did. You could look at a "wrong" answer and know why you missed it. Not so with a wine score. Why did a wine get 89 points--or worse, 89+, whatever the hell that is? Where was it lacking? Who knows. Would others agree that it was lacking in that area? Likely, some would, some wouldn't. Maybe the reviewer did not feel well that day, did not care for that style of wine, was hungry, was tired, was still annoyed by the cab driver who made a wrong turn while en route to the tasting. Maybe the reviewer had palate fatigue after tasting dozens of other wines in the same sitting? The what-ifs and maybes are endless. And I won't even get into the issue of "pay to play" or "thanks for your ad business" reviews.

    The late Roger Ebert could convey whether a film was worth seeing with just the flick of his thumb. The respected Italian publication, Gambero Rosso, has been able to provide meaningful wine guidance using just three wine glasses. I would argue that even stars or smiley faces do a better job than randomly generated, "presented as definitive" numerical scores.

    To those who take offense at the notion that scores are arbitrarily assembled, please watch the YouTube video (the link is below) of former Wine Spectator reviewer James Suckling explaining how he generates his scores. It's like watching David Copperfield... at his home in Tuscany.

    http://www.youtube.com/watch?v=tiZ-_5j6LvU

  8. This comment has been removed by a blog administrator.

