A couple of things.
I had a guy email me who is one of the better guys on PinkFishMedia Forum - he's been a member there since 2003. I had mentioned this forum on PFM & he came to have a look. He wrote to me to say how shocked he was at how civilised it was here (in a good way). So guys we better start a fight or two soon or we will be considered too level-headed for forum participation :)
Dave, this is the wrong argument that these guys seem to be stuck on - that the bits aren't changing, therefore nothing is changing. Nobody ever maintained that the bits are changing; the whole concept of bit-perfect is exactly that - the bits are perfect & unchanged in the delivery of the digital audio data. It seems that most of them can't think about digital & what it means at the electrical level. Somehow it's considered to be a magic thing that just works & is perfect. When you read those articles you will see that at the electrical level it is the same as analogue signals, with the same electrical noise issues riding on the signal - it's just that digital is an agreed protocol that says any signal under 1.5V (let's say) will be treated as an off bit (0) & anything above that as an on bit (1). This simple extraction deals with most noise issues, as long as you stay in the digital domain. So digital data is robust in dealing with noise on the line - it ignores it by way of its agreed protocol. The noise hasn't gone away, it just hasn't got any effect on the 0s & 1s - they are still correctly interpreted.
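To make that "agreed protocol" idea concrete, here's a toy sketch (my own illustration, not any real receiver design - the voltage levels & noise amount are made-up numbers): noisy analogue voltages still decode to the intended bits, as long as the noise never pushes a sample across the threshold.

```python
# Toy model: bits -> noisy voltages -> bits, via a simple threshold.
import random

THRESHOLD = 1.5  # volts: below = 0, above = 1 (the "agreed protocol")

def transmit(bits, high=3.3, low=0.0, noise=0.4):
    """Turn bits into voltages & add random line noise (+/- noise volts)."""
    return [(high if b else low) + random.uniform(-noise, noise) for b in bits]

def receive(voltages):
    """Recover bits by comparing each voltage to the threshold."""
    return [1 if v > THRESHOLD else 0 for v in voltages]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
recovered = receive(transmit(bits))
print(recovered == bits)  # True: noise rode along, but the bits survived
```

With 0.4V of noise on 0V/3.3V levels, no sample can ever cross the 1.5V threshold, so the decode is always perfect - which is exactly the point: the noise is still physically there, it just has no effect on the interpreted bits.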
When we do a conversion from digital to analogue in our DACs we now have to become very aware of noise as it can cause all sorts of issues. So we are going from a system that is carefree about noise (digital) to one that is very sensitive to noise (analogue). It's a clash of two worlds :) We now have to analyse the digital signal to see what noise might be riding along with the actual signal & also analyse the ground plane (which is used as the reference point for generating the analogue signals) & what noise might be riding on this. Any of these noise disturbances will result in distortions on the final analogue signal.
Ok, so the argument is made - why can't you just measure what comes out of the DAC's analogue outputs & show this distortion? It's not as easy as it sounds. Firstly, the noise is most likely (almost certainly) generated by dynamic signals such as music that cause fluctuations. Steady signals like sinewaves, which are typically used as test signals, will not do the job. The next issue is how we differentiate noise from music in the analogue signal coming out of the DAC. This noise is probably low level but fluctuating, & very difficult to uncover with test equipment.
Audio DiffMaker has often been used & cited as a way of testing this. Yes, it's a good idea - subtract the input signal from the output signal to reveal the differences. But its implementation becomes problematic. We have to use an A-to-D conversion to get the DAC's analogue output back into digital. How accurate does this have to be? Very accurate - it must not introduce any distortions of its own that might mask what we want to reveal. The next problem is that for the output to be EXACTLY the same as the input, the subtraction should give a difference of zero - what is called a perfect null. This is never achieved - typically -90dB nulls are considered good enough, as "we can't hear down to this low a level".
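The null-depth arithmetic itself is simple - here's a minimal sketch (my own illustration, not DiffMaker's actual code; the signals & the size of the added error are made up): subtract the reference from the capture & express the residual in dB relative to the reference.

```python
# Null test sketch: how deep does the difference signal sit below the reference?
import math

def null_depth_db(reference, captured):
    """RMS power of (captured - reference) relative to reference, in dB."""
    diff_power = sum((c - r) ** 2 for r, c in zip(reference, captured))
    ref_power = sum(r ** 2 for r in reference)
    if diff_power == 0:
        return float("-inf")  # the perfect null that is never achieved
    return 10 * math.log10(diff_power / ref_power)

# A sine "reference" & a copy with a tiny error signal riding on it:
N = 1000
ref = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(N)]
cap = [r + 1e-5 * math.sin(2 * math.pi * 3000 * n / 44100)
       for n, r in enumerate(ref)]
print(round(null_depth_db(ref, cap), 1))  # about -100 dB for this tiny error
```

So even an error sitting 100dB down produces a measurable non-zero null - the question is whether the measurement chain (the A-to-D) is clean enough that what you measure is the DAC's residual & not your own.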
But here's a piece by Bob Katz, a well-known recording engineer, discussing the dithered volume control in JRiver. [Just for those that don't know what dither is or what effect it has - it avoids some errors that can occur in digital down at the very low levels (the 15th & 16th bits of a 16-bit file), i.e. down at -84dB or -90dB.]
http://yabb.jriver.com/interact/index.p ... #msg521299
Also, keep in mind that the noise of dither is usually inaudible, but the artifacts of not dithering (distortion, loss of depth, loss of soundstage and loss of warmth) are audible. It's also a matter of ear-training. Most people are first trained to recognize timbre, but soundstage depth and dimension, far more subtle quantities, are what are lost first when truncation is performed instead of dithering.
So here we have a well-respected recording engineer telling us that there are audible changes (ear-training necessary) to be heard at this low signal level. Does this sound like the sort of improvements we heard with the recent PS experiments on PC - increased depth, soundstage, body/warmth?
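The truncation-vs-dither difference Katz describes can be seen in a few lines of code. This is a toy illustration (my own sketch with made-up numbers, not Bob Katz's or JRiver's processing): quantise a sine sitting right around the 16th bit (~-90dB) with plain truncation & with TPDF dither, then check how strongly the error tracks the signal.

```python
# Truncation error follows the signal (harmonic distortion);
# TPDF-dithered error behaves like independent noise instead.
import math
import random

random.seed(0)                 # deterministic run
STEP = 1.0 / 32768             # one 16-bit quantisation step (LSB)

def truncate(x):
    return math.floor(x / STEP) * STEP

def tpdf_dither(x):
    # Triangular (TPDF) dither spanning +/- 1 LSB, added before truncation.
    d = (random.random() + random.random() - 1.0) * STEP
    return math.floor((x + d) / STEP) * STEP

# A sine about 90dB down, i.e. swinging only ~1.5 LSB:
sig = [1.5 * STEP * math.sin(2 * math.pi * n / 100) for n in range(10000)]

def signal_error_correlation(quantise):
    """How much the quantisation error tracks the signal itself."""
    return abs(sum(s * (quantise(s) - s) for s in sig))

print(signal_error_correlation(truncate) >
      signal_error_correlation(tpdf_dither))  # True
```

The truncation error is a deterministic function of the signal, so it's full of signal-related distortion products; the dithered error is decorrelated from the signal & just sounds like a faint steady hiss - which lines up with Katz's point that the dither noise is benign while truncation artifacts are not.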
Anyway, just some random thoughts.
Oh, btw, I urge you all to try a software player that a few of us have been watching/listening to for a while now as it was being developed & tweaked. The great thing about this software is that it is open-source & non-commercial, & the really interesting bit (for me, anyway) is that he spells out what he is changing in the code & what the audible effects are. The thread is here:
http://www.computeraudiophile.com/f11-s ... yer-15401/
Try it, you won't be disappointed!