Bit perfect digital audio

In a comment to an earlier post, Fredrik mentioned that he had ‘read an interesting article by Chris Connaker at computeraudiophile.com. While testing Asus sound cards, he believes he has achieved bit perfect playback that still sounds awful’. A little while later jhans11 kindly provided a link to what appears to be the article Fredrik was talking about.

Here is the article. It is a review of a couple of Asus sound cards. The review is extremely detailed; admirably so. Any review can be agreed with or disagreed with. This review is so detailed that you can see pretty clearly why the reviewer made his judgements.

I disagree with at least one of those judgements, and indeed the reviewer himself is equivocal about the same thing: that is, his methodology for determining whether or not a sound card is passing through the digital audio ‘bit perfect’. He does this by plugging the coaxial digital audio output of the sound card into the digital audio input of his external DAC. The DAC apparently supports the HDCD encoding enhancement. He feeds it a digital signal carrying a HDCD flag, and sees whether the DAC properly detects the flag.

The flag is indicated by a particular pattern in the least significant bit (LSB) of the 16-bit signal (and, he says, in the LSB of 24-bit signals, which surprises me, since there would be no point in HDCD at 24 bits).

HDCD is, essentially, a compander system, although the company tends not to view it that way. If the flag is present, then the processor expands the louder passages so that they become louder still, and the softer passages so that they become softer. In other words, the dynamic range is extended beyond that available within the 16 bits of a CD. I hear they talk about it being equivalent to 20 bits of range.
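The arithmetic behind that ‘20 bits’ claim is easy to check: each bit of PCM resolution is worth roughly 6.02dB of dynamic range. A quick sketch (the 20-bit figure is the claim as reported, not something I have measured):

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of an n-bit PCM signal: 20*log10(2**n)."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 16-bit CD: about 96.3 dB
print(round(dynamic_range_db(20), 1))  # 20 bits: about 120.4 dB
```

So the claimed HDCD improvement amounts to roughly 24dB of extra range over a plain CD.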

What I don’t know is whether the HDCD flag is a continuing pattern in the digital signal (which would mean surrendering one bit of real resolution), or just a burst at a particular point at the start. With a reliable signal, the latter scheme should work fine.

Of course, with the latter scheme, kicking the HDCD system on says nothing about how accurately the subsequent bits are being conveyed, so let’s assume that the HDCD flag is ongoing.

The author thinks that because the HDCD flag is in the LSB, it is most susceptible to corruption, and therefore its working is a good indicator of signal integrity. Unfortunately this is analogue thinking. Mixing any noise (or dither) into the signal would certainly screw up the LSB HDCD flag. But in the digital world, each bit stands alone. At the risk of gross oversimplification, the receiving DAC collects 16 individual bits one after the other, and only then packages them together into the word which is used to modulate the analogue output signal. But each of those 16 bits is as likely to be corrupt, or as unlikely, as any of the others. It is perfectly possible for the LSB to be intact, while several of the more significant bits aren’t.
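The ‘each bit stands alone’ point can be made concrete. Flipping bit k of a sample changes its value by exactly 2^k, whatever happens to the other bits, so an LSB error and an error near the top of the word are numerically worlds apart. A minimal sketch (the sample value is arbitrary):

```python
def flip_bit(sample: int, k: int) -> int:
    """Flip bit k of a 16-bit sample value (bit 0 = LSB, bit 15 = MSB)."""
    return sample ^ (1 << k)

sample = 0x1234
# An LSB error shifts the sample by 1 part in 65,536 -- inaudible.
print(abs(flip_bit(sample, 0) - sample))   # 1
# An error in bit 14 shifts it by 16,384 -- an audible click.
print(abs(flip_bit(sample, 14) - sample))  # 16384
```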

The descriptions of the problems all point to digital audio data corruption. This could be timing: if the source device and the receiving device get out of sync, then either samples will be inserted or signal samples will be lost. But with S/PDIF as the carrier, this should not happen because the DAC slaves itself to a timing signal generated on the S/PDIF line by the source.

So I’d say the clicks indicate words corrupted by an incorrect bit (or bits). For them to be obvious, the bits would be in the more significant portion of the word. (Corruption of less significant bits will be happening at the same rate, but less noticeably.) Incidentally, a click doesn’t require much corruption at all. Open an audio file in a digital audio editor, and move one sample by, say, 25% of the full scale either up or down. Just one sample. Play it. There will be a clear click at that point.
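That experiment is easy to reproduce without an audio editor. A sketch that generates a second of 16-bit silence, bumps a single sample by 25% of full scale, and writes a WAV file you can audition (the filename is arbitrary):

```python
import struct
import wave

RATE = 44100
samples = [0] * RATE                     # one second of 16-bit silence
samples[RATE // 2] = int(0.25 * 32767)   # one sample moved 25% of full scale

with wave.open("click.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)                    # 2 bytes = 16-bit samples
    w.setframerate(RATE)
    w.writeframes(struct.pack(f"<{len(samples)}h", *samples))
```

Against silence the click is stark; mixed into music, a similar single-sample error still produces a clearly audible tick.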

So what’s a good way of checking whether the output of a digital audio device is bit perfect? One way would be to use a second computer with a digital audio input to record its output. Compare the original file with the newly recorded one. If they are perfect matches you know the output of the device is bit perfect.
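Once both captures exist as files, the comparison itself is trivial: any difference at all, anywhere, means the chain is not bit perfect. A minimal sketch (the file paths are placeholders):

```python
def bit_perfect(path_a: str, path_b: str) -> bool:
    """True only if the two files are byte-for-byte identical."""
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        return fa.read() == fb.read()
```

In practice you would compare the raw PCM data rather than whole container files, since WAV headers can legitimately differ between a rip and a recording of identical audio.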

But that presupposes that this recording computer’s sound card is bit perfect in recording. You can check this by playing a CD from the digital audio output of a CD player or DVD player into the input, recording a section. Then rip the CD and compare.
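One wrinkle with this rip-and-compare check: the recording won’t start at exactly the same sample as the rip, so you first need to locate the recorded section within the ripped data before comparing. A minimal sketch using a raw byte search over the PCM data (real captures may also need channel and byte-order handling, which I’m glossing over):

```python
def find_offset(rip: bytes, capture: bytes, probe_len: int = 4096) -> int:
    """Locate the start of the capture inside the ripped data.

    Uses the first probe_len bytes of the capture as a search key and
    returns the byte offset within the rip, or -1 if no match exists.
    """
    return rip.find(capture[:probe_len])

def capture_matches(rip: bytes, capture: bytes) -> bool:
    """True if the capture appears verbatim somewhere inside the rip."""
    off = find_offset(rip, capture)
    return off >= 0 and rip[off:off + len(capture)] == capture
```

If `capture_matches` fails even after a successful probe match, the recording side of the chain is corrupting bits somewhere after the start of the capture.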

I have done this myself in the past.

This entry was posted in Audio, Computer, How Things Work.

5 Responses to Bit perfect digital audio

  1. Craig says:

    I would rip a DTS-CD, and try to play those bits at the other end into a DTS decoder (any A/V receiver with a DTS decoder should work).

    If you get properly decoded DTS 5.1 audio, then it’s bit perfect.

    If you get noise, then something is messing up the bits.

    I have a heap of these DTS CDs, which were an early attempt to get 5.1 music to the consumer before SACD or DVD-Audio came about. They were 16-bit/44.1kHz sampling rate, but the *decoded* DTS was, I believe, 20 bit resolution.

  2. Fredrik says:

    I see that your kind readers helped you find the article I was referring to. I don’t know Chris Connaker, but he does indeed seem to have the patience required for this kind of investigation.

    By the way, I don’t mind you ranting a bit. I don’t always agree with you, but your candid style on these topics is refreshing 🙂

  3. Stephen Dawson says:

    Craig, I considered this as an option, and I’ve done similar myself to test data integrity. But the only real problem is I don’t know what level of error correction is used in the DTS bitstream. Since it was first developed for cinema use, my guess is that robustness of the data would have been a very high priority in development.

    I’ve got a number of DTS CDs, and they sound pretty good. One of the many interesting things to come out of Blu-ray has been seeing how accurate DTS 5.1 @ 1,509kbps is. The overall bitrate minus the 1,509kbps core gives a sense of how much correction is needed to bring the DTS back to perfection. Of the 74 DTS-HD MA 16/48 5.1 tracks I’ve checked, 23 have a total bitrate of under 2,000kbps, and 69 under 2,500kbps.

  4. Craig says:

    I used to have a CD player that could attenuate the SPDIF digital output in the digital domain. Only when the output level was at 100% could DTS-CDs be correctly decoded. Even reducing it by a fraction would corrupt the DTS signal, and result in data noise. This tells me that DTS-CDs need to be bit-perfect for decoding.

    Also, the bitrate on DTS-CDs is 1.270Mbit/s, less than the 1.411Mbit/s maximum of normal audio CDs. So they only really use 14 out of the available 16 bits for audio. This was to reduce the chance of ear-splitting noise, and damage to equipment, should a user not employ a DTS decoder on the output of the CD player. The resulting undecoded DTS data-noise was, I believe, 6dB below full scale.

    However, having said all that, I have ONE DTS-CD which breaks this rule and uses all 16 bits (1.411Mbit/s) of the data available. It is also the best sounding DTS-CD that I have, but I believe this has more to do with the high quality of production and mastering than simply having a little more data-rate to play with.

  5. Hi Stephen – Very interesting article. I really like your analysis of what I’ve done with my bit perfect testing.

    The HDCD flag on the LSB is continuous, so your assumption was correct.

    “But each of those 16 bits are as likely to be corrupt, or as unlikely, as any of the others. It is perfectly possible for the LSB to be intact, while several of the more significant bits aren’t.”

    I agree and disagree at the same time with your statement above. While it’s true all the bits are equally corruptible I think it would be extremely unlikely that only the LSB would remain untouched while all the others or some of the other bits suffered severe issues.

    “So I’d say the clicks indicate bytes corrupted by an incorrect bit (or bits). For them to be obvious, the bits would be in the more significant portion of the byte. (Corruption of less significant bits will be happening at the same rate, but less noticeably.)”

    I disagree here. Corruption of the LSB is perhaps even more obvious using the HDCD flag as an indicator because the HDCD indicator remains illuminated constantly if there are no issues. As soon as there is a problem the indicator either blinks or goes out. This is essentially immediate notification of a problem with the LSB. An issue with the other bits may go unheard if one is not listening carefully.

    As I said in my article, I’m not an expert and my testing would not hold up to rigorous testing standards. I’m all about seeking the truth and correct information to share with my readers and everyone I talk to in the industry. Currently I am writing this from my hotel room in Las Vegas at the Consumer Electronics Show. Lots of brain power here at the show as there are tons of brilliant engineers to discuss these concepts with.

    Have a wonderful day :~)
