May 30, 2012

I have read the Harvard Business School study about critics, and it is clueless on so many levels about the craft and mechanics of reviewing that it is astonishing major newspapers and magazines have taken it seriously.

By Bill Marx.

We seem to be in an age of confusion when it comes to criticism of the arts; the advent of technology and the rise of the ‘amateur’ reviewer, combined with the accelerating diminution of reviewing into a marketing tool (“the hottest show in town”), are spreading panic about the nature and fate of criticism, even among those who should know better.

Recent proof that ignorance reigns, not only at mainstream publications (many of which are downsizing their reviewing staffs as quickly as possible) but even at the respected Harvard Business School, is supplied by a recent HBS report arguing that Amazon reviews are just as likely to give “an accurate summary” of a book’s quality as critiques in professional newspapers. In contrast to Amazon’s consumer reviews, “What Makes a Critic Tick?” concludes that “experts tend to favor more established authors and the data suggests that media outlets cater reviews to their own audiences, who have a preference for books written by their own journalists and book-award winners, whereas consumers tend to favor first-time authors.”

I have read the HBS study (by Loretti I. Dobrescu, Michael Luca, and Alberto Motta), and it is clueless on so many levels about the craft and mechanics of reviewing that it is astonishing newspapers such as The Guardian have taken it seriously. The only explanation is that editors, no longer thinking it fashionable to stand up for what criticism should be, are becoming terminally defensive. The truth is, the authors of the study have no sense of what criticism is, as shown in this definition offered early on:

Reviews written by professional critics are appealing because these critics (and the publications they work for) may be able to provide an unbiased and accurate estimate of a product’s quality. Further, reviewers can build a reputation for both the quality of the review and the tastes of the reviewer.

Criticism should be fair, but it is never unbiased or objective, as if reviews were a form of scientific experiment. Readers want to hear the critic’s personal judgment, negative, positive, or mixed, as shaped by his or her sensibility and depth of knowledge. Critics are individuals, not measuring sticks. The authors of the study have no idea what book reviews are supposed to do: supply judgments backed by reasons. Reviews do not “summarize quality,” whatever in the world that means. Predictably, the bibliography of the HBS report doesn’t list a single publication that deals with the craft of reviewing. To the authors, a sentence-long “thumbs up” on Amazon is the same as a review in The New York Times. Because it does not discriminate between substantial reviews and guttural opinions, sensible considerations and hit jobs, the study proves nothing.

The report’s conclusion is old news. Is anyone surprised that major publications tend to review books by award-winning authors and writers who contribute to the magazine or newspaper? Partly it is marketing: familiar names generate more interest among readers than unknowns do. Partly it is log-rolling: publications often use their own pages to support friends, hot properties, and co-workers. That is why it is so hard for first-time writers without a celebrity background to be reviewed. There has always been dissatisfaction among outsiders with the cozy situation, but glad-handing in book review pages has been going on for hundreds of years. Web publications have the opportunity to turn that tradition of ingrained conflict of interest around, though Amazon, with its “hands free” editorial policy, is not going to go against the grain.

The varieties of corruption generated by consumer reviewing on Amazon are acknowledged by the report’s authors, but not to the point where they undercut its concluding points. There is little about writers paying “professional” reviewers to plug their books, and no research exploring how relatives, friends, and students of first-time writers pen reviews, often under a number of different names. Editors at major publications may slant coverage toward known quantities, but the “wild west” of reviewing on Amazon lacks credibility and quality. The same cheap sun of praise or blame shines on everybody, from award-winning writers to rookies. Just because someone claims he or she has written a review doesn’t mean it is one.

The response to the report’s silliness has been disappointing, to say the least. The non-fiction book editor of The Guardian says he would like to make use of consumer reviewers, “but they do have to be able to do what I want them to do. They have to be discriminating writers, with expertise, and stylish too. Writing the best, the liveliest kind of review takes unusual talent and it’s interesting that even many published authors make disappointing reviewers. Not many people can do the particular thing I’m looking for, which is one reason why the Guardian’s book pages are different from Amazon book reviews.”

We are not told what The Guardian editor wants his critics to do, beyond proffering style and expertise. What about the other qualities of successful book reviews? Shouldn’t they provide independent evaluations backed up by thoughtful analysis? Shouldn’t they contribute to a public discussion that invigorates the culture? Reviews are not valuable simply because of their verdicts—their credibility comes from the power and perceptiveness of their explanations. In this way, the reader is given the opportunity to critique the critic by grappling with (or questioning) the evidence and reasoning used to back up specific judgments.

I believe that under the right conditions serious criticism will thrive in the digital age. The Arts Fuse is testimony to my trust in the elemental human urge to share judgments about artistic value. But it is important to realize that the real battleground is not Amazon reviewers versus professionals, defensiveness regarding expertise (the old anti-elitism argument in new guise), or fretting about the predictable loss of critical power.

We are approaching a time when no one (not even editors and critics, let alone bloggers) expects criticism to be more than a verdict, a stylish yea or nay.

The challenge is to fit the traditional strengths of arts criticism—evaluation that raises artistic standards by way of analysis that encourages dialogue—into the digital age. But for the web reviews of the future to thrive there must be a strong sense of what, at its best, criticism was in the past. Unfortunately, amnesia prevails, and as time passes and readers are less exposed to examples of substantial journalistic criticism, which are fading out of the pages of major newspapers and magazines, the notion of the review as a reaction rather than a reasoned judgment will prevail. That is the essential problem—not just discriminating among old and new forms of critical perfidy.


Read more by Bill Marx

Follow Bill Marx on Twitter

Email Bill Marx

One Response to “Fuse Commentary: What Makes a Critic Tick? Harvard Business School Hasn’t a Clue”

  1. I found this so interesting that I tried a little experiment to see how non-fiction reviews in traditional media compared with Amazon reviews when using criteria that included reasoned judgment and depth of knowledge.

    The results don’t reflect that well on Amazon reviews, but they don’t reflect that well on “expert reviews” either.
