Mailvox: Objectivity

Northern Hamlet objects to my appeal to average Amazon ratings as evidence that the 2015 shortlist is objectively superior to recent previous Hugo shortlists:

By this criterion for distinctive works: Hemingway’s The Sun Also Rises at 3.8 < Vox’s A Throne of Bones at 4.2. Also, you’re nearly tied there with Twilight at 4.1 for distinctive storyness.

Online ratings are no more an accurate measure of distinctive works than sales are. They’re an extension of the same argument. Consider: we could predict that 1 million Big Mac sales might result in a large number of people saying they sure do like Big Macs. There’s brand loyalty there, among other things, while for lima beans, people might not report loving them as much. None of this has anything to do with healthiness, in the same way that sales and ratings have nothing to do with distinctiveness.

Think of the NYC art world. When they give Jeff Koons or Damien Hirst some award for their accomplishments in art, do you imagine that the average person would even understand anything about the pieces? You place an unneeded emphasis on reception (sales or ratings, take your pick here). Though art and literature’s quality can be assessed that way if we like, it’s hardly the only way, nor the common way these niche communities have developed in the past.

Now, you can go different ways with this: either Shakespeare was great because of how many people have learned to appreciate him, or Robbe-Grillet is great and we do need judges (gatekeepers, if you will) to help refine our understanding of the art and literature experience.

Northern Hamlet’s response is neither unfair nor unexpected. It does, however, manage to completely miss the point. His error is obvious: he substitutes “distinctive works” for “objective superiority” without realizing that the former is a subset of the latter. He further demonstrates that he still doesn’t grasp the purpose of citing the metric when SirHamster points out his mistake:

SirHamster: He provided an objective measure for Hugo recognition, not for story distinctiveness. Whether or not Amazon average ratings provide a measure of story distinctiveness, they provide an objective measure of user-perceived quality, which may have some relation to distinctiveness.

Northern Hamlet: Yes, and superior in ratings alone, not in reception. Because, well, we need it to mean anything the SJWs didn’t mean.

No, we don’t need it to mean anything at all beyond the fact that it is an objective measure of quality. We have been repeatedly informed, by people who admit that they have not even read the works concerned, that those works are inferior to other, previous works that those same people may or may not have read.

Now, we could appeal to the same subjective standard to which they are appealing, which is to say, our own opinions. We can even argue that our opinions are more informed and reliable than theirs: there are more people on this blog who have read the work of John Scalzi, Charles Stross, and George Martin than there are people at Whatever and Not A Blog who have read the work of John C. Wright, Tom Kratman, and Vox Day. It should be obvious that those of us who have read multiple works by all six authors can compare them far more fairly than those who have not.

But we don’t need to rely upon subjective metrics. We can cite objective metrics, and, lo and behold, whether we turn to Amazon or the more left-leaning Goodreads, we observe the same thing at work: the 2015 shortlist is more highly regarded than the previous shortlists. Marc DuQuesne did the math. Can you tell which list is objectively and quantitatively superior?

A: 4.60 Amazon, 4.16 Goodreads
B: 4.64 Amazon, 4.16 Goodreads
C: 4.46 Amazon, 4.11 Goodreads
D: 3.90 Amazon, 3.91 Goodreads
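For anyone who wants to check DuQuesne’s math, the computation is nothing more exotic than a per-list mean of each nominee’s average rating. A minimal Python sketch of the idea follows; the per-book ratings below are hypothetical placeholders, since the post only reproduces the list-level means, not the raw per-book data:

```python
# Sketch of the shortlist comparison: each list's score is the mean of
# its nominees' average ratings. The ratings here are placeholders,
# not DuQuesne's actual per-book numbers.
from statistics import mean

shortlists = {
    "A": {"Amazon": [4.6, 4.7, 4.5, 4.6, 4.6],
          "Goodreads": [4.2, 4.1, 4.2, 4.2, 4.1]},
    "D": {"Amazon": [3.9, 4.0, 3.8, 3.9, 3.9],
          "Goodreads": [4.0, 3.9, 3.9, 3.9, 3.85]},
}

for name, sources in shortlists.items():
    scores = ", ".join(f"{mean(r):.2f} {src}" for src, r in sources.items())
    print(f"{name}: {scores}")
```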

Let’s look at my list of Top 10 SF and Fantasy books of all time. For science fiction, my top ten averages 4.32 on Amazon; for fantasy, it averages 4.53, giving a net average of 4.43. This is considerably higher than the pre-Puppy 1986-2013 Hugo shortlist average of 4.00. Of course, my Top 10 list is wholly subjective, but review the list before you dismiss it; my more esoteric selections, such as China Miéville’s Embassytown and Tanith Lee’s The Book of the Damned, tend to bring the average down. So I would certainly invite similar comparisons to other all-time top 10 lists.
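One caveat worth making explicit: taking the simple mean of the two genre averages, (4.32 + 4.53) / 2 ≈ 4.43, is only equivalent to averaging all twenty books because both lists are the same size. For lists of unequal length, the combination would need to be weighted by list size; a hypothetical helper illustrating the distinction:

```python
def combined_mean(avg_a: float, n_a: int, avg_b: float, n_b: int) -> float:
    """Mean of two pooled lists, weighted by each list's size."""
    return (avg_a * n_a + avg_b * n_b) / (n_a + n_b)

# With equal-sized lists, the weighted mean reduces to the simple mean:
print(combined_mean(4.32, 10, 4.53, 10))  # 4.425, i.e. the 4.43 cited above
```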

This metric even picks up the perceived decline in the quality of Hugo nominees about which so many people have complained over the years:

1986 to 1995: 4.13
1996 to 2005: 3.93
2006 to 2013: 3.94
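The era breakdown is the same calculation bucketed by year: average the yearly shortlist means within each range. A sketch of the grouping, again with placeholder yearly values since the underlying data isn’t reproduced here:

```python
# Era breakdown: average the yearly shortlist means within each range.
# The yearly values are placeholders, not the actual Hugo data.
from statistics import mean

yearly_mean = {year: 4.0 for year in range(1986, 2014)}  # placeholder data
eras = {
    "1986 to 1995": range(1986, 1996),
    "1996 to 2005": range(1996, 2006),
    "2006 to 2013": range(2006, 2014),
}

for label, years in eras.items():
    print(f"{label}: {mean(yearly_mean[y] for y in years):.2f}")
```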

Now, Northern Hamlet may wish to entirely discount a metric which clearly shows the objective superiority of The Lord of the Rings (4.7) to The Sword of Shannara (3.7), Starship Troopers (4.4) to Redshirts (3.8), The Golden Age (4.1) to Rainbows End (3.6), and For Whom the Bell Tolls (4.5) to A Throne of Bones (4.2), in favor of opinions that are rooted in nothing objective at all. Unless he does, I suggest that, despite its occasional flaws, the average review rating is a perfectly reasonable measure that any sensible SF/F reader can use as a basic quality heuristic, given a sufficient number of reviews.