Blu-ray reviewing – verity or result?

In comments on an earlier post, Peter draws attention to another negative review of The Fellowship of the Ring. This gives 3/5 stars for video quality. The other review suggests 2.5/5.

Once again, the finger is pointed at digital noise reduction, among other things. Let me quote from this review:

Detail is far from consistent, as from shot to shot in any scene, it’s almost like watching the film from multiple grade sources culled together. There are moments were distance shots boast brilliant clarity and the finest of minute detail, then a close up will follow that’s muddled beyond belief. While I cannot say what created this issue, I can say that DNR (yes, Digital Noise Reduction) played a large part. There are numerous excessively smoothed and muddy moments, and they’re hardly difficult to spot.

Now here’s my problem: if you want to apply DNR to a movie to prepare it for transfer to Blu-ray — for whatever reason — how do you do it? Do you sit down and carefully apply different levels on a scene by scene, moment by moment basis? Or do you click the ‘Mild’, ‘Medium’ or ‘High’ radio button in the DNR section of the encoding software, and let the automatic processes do their work? I suspect the latter.

That would give a consistent look, though, not a variable one.
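To make that point concrete, here is a purely hypothetical sketch (no real encoder works this way, and the preset names and numbers are invented for illustration): a single global smoothing strength applied identically to every frame, which is consistent by construction.

```python
import numpy as np

# Hypothetical "one preset for the whole film" noise reduction.
# Real encoder DNR is far more sophisticated; this toy version just
# shows that a single global setting treats every frame identically.

PRESETS = {"mild": 1, "medium": 2, "high": 3}  # box-blur radius per preset

def box_blur(frame: np.ndarray, radius: int) -> np.ndarray:
    """Average each pixel with its neighbours within `radius` (greyscale frame)."""
    pad = np.pad(frame, radius, mode="edge")
    out = np.zeros(frame.shape, dtype=float)
    size = 2 * radius + 1
    for dy in range(size):
        for dx in range(size):
            out += pad[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    return out / (size * size)

def apply_dnr(frames, preset="medium"):
    """Apply the same preset to every frame: a consistent look by construction."""
    radius = PRESETS[preset]
    return [box_blur(f, radius) for f in frames]

# Noisy synthetic "frames": smoothing reduces pixel-to-pixel variation
# by the same amount everywhere, never shot by shot.
rng = np.random.default_rng(0)
film = [rng.normal(128.0, 20.0, size=(32, 32)) for _ in range(3)]
smoothed = apply_dnr(film, "high")
assert all(s.std() < f.std() for f, s in zip(film, smoothed))
```

A per-scene, hand-tuned pass would need something more like a list of (scene range, strength) pairs, which is precisely the labour the radio-button approach avoids.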

So what if all the complaints about this movie should apply not to the Blu-ray but to the movie itself? What if the Blu-ray rendition is actually a very good representation of the movie, as it originally appeared at the cinemas?

Might it be? I don’t think we’ll ever know.

You see, movie reviewers rarely seem to comment on technical issues, except when they interfere with the storytelling. When was the last time you read a movie review in a newspaper or magazine — and here I’m talking about current release movies, showing only at the cinema — which devoted a paragraph to the amount of detail in the cinematography, differences from scene to scene in film grain, and so on? No, they are all about the story and the characters.

In any case, back in 2001 the reviews I read suggested that the viewers were mostly blown away by the special effects. Now, nine years later, the special effects sometimes seem rather too obvious. At the time they didn’t. Each new generation of special effects seems convincing to that generation, but after a few years seems artificial. Especially animated ones. Presumably people back in the 1980s thought that Terminator Arnie’s metal skeleton walking through the fire, or his fake head having its eye extracted, were realistic (enough, anyway, to suspend disbelief). Likewise, ED-209 in RoboCop now looks horribly clunky.

Blu-ray reviewers are far more interested in the technical stuff. They’re selected for their roles by that interest. I’m like that too. Last night I watched a couple of episodes of ‘Claymore’ on Blu-ray and ended up spending more time rewinding and rewatching a few short segments, over and over, trying to nail down an issue in the picture quality, than actually watching the show as a story. How many reviewers will even notice that there’s a strange shimmer on the grille of a car 1:22 into Public Enemies? Movie? Disc? Equipment? Still to be determined.

All of which leads me to a question, and I’d appreciate feedback on this one. The ‘Video Quality’ rating on my reviews, and just about everyone else’s Blu-ray reviews, is meant to summarise our discussions of the picture quality. But should we give five stars for a great looking final result, or five stars for an extremely accurate rendition of the movie?

What if The Fellowship of the Ring was a bit weak in terms of picture quality at the time that it was released to cinemas (they were still learning, I expect, and would have gotten better in the next two installments)? What if they did (as I suspect) soften the overall image during the movie production just a little to better disguise the live-action/special effects transitions? Is this the fault of the Blu-ray producers?

I can easily imagine that the actual transfer to Blu-ray of a movie might be the most accurate ever achieved, indeed the most accurate achievable, yet score poorly because the movie itself is full of less than transparent digital effects.

And then we get to older movies. On Blu-ray Dr No looks good, and The Godfather looks much improved, but both of these were restored. How many stars do we give to the Lowry Process used on the former, even if some of the detail is actually invented by the automated digital processes, rather than originally appearing on the film?

So, should I give five stars because the Blu-ray of a movie looks good, or because it is accurate? And how do I know if it actually is accurate? And what do I do if those two things are in tension with each other?


This entry was posted in Blu-ray, Cinema, Rant, Testing.

6 Responses to Blu-ray reviewing – verity or result?

  1. james gifford says:

    Out of interest, are the Blu-ray releases the original or the director’s cut versions?

  2. treblid says:

    I prefer accurate, i.e. director’s intention…
    E.g. HP’s video was too dark for me at times, but that’s the intention. But as you said, how do you know when it comes to accuracy? :p

    Doesn’t really matter to me anyway, it’s just a subjective number, based on years of experience.

    Just can’t believe it’s been nine years already!!! Seemed only yesterday.

  3. Stephen Dawson says:

    Yes, these are the theatrical versions. I imagine that the long ones will come out later.

  4. Victor says:

    I don’t think there is a simple answer, but I think it makes more sense to rate the picture quality against the DVD release. By the time it is released on disc the only choice people have is “do I get it on DVD, or do I get it on Blu-ray”. Unless, of course, both are transferred so poorly that you might be inclined not to watch at all.

  5. Chris says:

    This is something I think about a lot as well (and it seems I just wrote too much about it below). I know, I should get a life!

    The simplest answer, of course, is not to assign scores at all and this must be a very attractive option for many reviewers. After all, time is spent crafting the words of the writeup itself and it is presumably at least partially the review author’s goal to have someone read them, rather than have them glance at a number and move on. However, I think the numbers are an invaluable navigational aid when dealing with a sea of reviews – a signpost of sorts that points to an interesting opinion. Although, exploiting the extremes of positive and negative opinion to catch attention with higher or lower than average scores is a pitfall in itself…

    Assuming for a moment that scores are actually useful (which I believe), how to assign them?

    A purely contextual artistic intent and source material representation scoring system might lead to things like assigning a 10/10 PQ score to the 28 Days Later Blu-ray, for example, as a “perfect” representation that cannot possibly be improved upon. Maybe that’s fair from this perspective, but the number itself is misleading to the casual reader and is hard to reconcile with a possibly lower non-perfect score given to 28 Weeks Later which is clearly better looking. There is also the problem of how this extends to lower-budget films. 28 Days Later is reasonably well-respected so maybe one can get away with giving it a perfect score, but should the same ranking criteria be applied to Jesus Christ Vampire Hunter or (insert your own low-budget BD)? Might they also deserve a 10/10 based on faithfulness to the source? And, of course, the elephant in the room… what exactly is the source and how to determine the degree of faithfulness? There are very few that actually have direct access to the negative, raw digital intermediate or a first-generation print. An A/B comparison against the memory of theatrical viewing is not yet practical and, even if it was, who knows if that’s really representative either?

    On the opposite end of the spectrum is the absolutist “tiers” approach as best exemplified by the AVS PQ Tier Thread. With very clear criteria for constructing a video score, any disc can be stripped of its artistic and historic context and given the closest possible approach to an objective and repeatable PQ score. All films are treated in the same manner – i.e. held to the highest available standard of sharpness, color, contrast, black level and detail. This obviously punishes something like 28 Days Later (which is arguably a fair warning to the casual reader), but what about Casablanca, The Third Man or The Wizard of Oz? What about the inherent difference of live action vs. 3D CGI vs. 2D animation vs. digital camera vs. 70mm film vs. 35mm vs. etc. in this kind of system?

    As a sidenote, there seems to be a prevalent misconception that excessive digital manipulation (DNR/EE/etc.) improves the chances of a film on this type of scale. From my experience, the opposite has been the case, at least for the discerning eyes at places like AVS, since the tools are too primitive and more often than not cause an overtly manipulated source to suffer too much on rating metrics in certain dimensions to make up for the improvements on others – i.e. they cause more problems than they solve. However, one can imagine a theoretically perfect “image enhancer” software tool that leads to an overall improvement to picture quality – like the Lowry process on steroids perhaps. Is this really a bad thing assuming that the essential artistic intent can be retained? And what about the cases where this manipulation of the source is approved by the filmmakers?

    It seems on the surface that the best approach for scoring might be a compromise between the contextual and absolutist scale, but that’s almost a tautology – at least in the sense that I didn’t need to write all that crap above to reach this conclusion! In the end, I really read reviews for opinions – someone’s unique opinion that I love (or at least love to hate). It may be a trap to get too caught up in appeasing the contextualists or the absolutists or, even worse, both at the same time. Much like the stars assigned to the content itself, I’m starting to think a review of video and audio quality might be best served by being the most honest reflection of the reviewer’s own personal aesthetic, with the score somehow being a crude (but not completely arbitrary) representation of this. For something like PQ that is inherently subjective, context is movable, intent is unknowable, objectiveness is impossible, but impressions are unassailable.

  6. Stephen Dawson says:

    Hey, Chris, great discussion there. Thanks. Especially the last couple of sentences. Perhaps the biggest mistake that we make so often is to think that there is a perfect objective score which all right-minded people would agree with. So, lots of opinions, lots of eyes (I notice things sometimes that no-one else apparently does, and no doubt I miss lots of stuff that other people do).

    So perhaps the best approach I could take, not just for the stars but also for the review content, is to adapt a disclaimer used by Dave Hitt at the end of his Quick Hitts podcasts, that they “are little more than a journal of one man’s opinion, and therefore should not be taken too seriously”.
