nige2000 wrote: only reason i suggest it is because i believe the differences are large enough not to require exact level matching or long term listening

Diapason wrote: I know you don't want to get into science here, but this is a good example of the problem with these discussions! You have stated yourself that this test is predicated on your (personal) belief that the differences are large enough. To be honest, that renders the test pretty much useless, since if we're going to start by assuming exactly that which we're trying to prove, there's no point in doing the test at all.

Ok, let's treat both of these positions as biases - Nige's, that these devices are easily identified, & yours, that they sound the same. Isn't the blind part of Nige's test going to eliminate his bias by removing the knowledge of which device is playing & then having him nominate the device he thinks it is? Your bias can't be eliminated in any equivalent way - if you think they will sound the same, they will probably sound the same irrespective of the ACTUAL differences between the devices. In other words, you would report false negatives.
If I take the opposing point of view (that the items all sound identical) then if people hear differences/have preferences I can easily claim that it's due to different levels. And so we go on and on and on as people on the internet have always done.
Part of the problem is that the opposing sides disagree on what's "self-evident".
That's one of the reasons why I would like to see a control for false negatives in blind tests - I suspect a lot of these tests are riddled with exactly that kind of false negative.
I brought up this issue of controls for false negatives on Hydrogen Audio & couldn't believe the responses - very revealing of the agenda. I brought it up for the reasons I already gave, but also because this guy Arny Krueger (who claims to be the originator of ABX tests) produced some ABX test results showing a null overall result, i.e. no better than guessing. He posted the log from the ABX test, which shows the time & result for each trial, among other info. The problem with the results was that I & another guy (Amir) picked up on the fact that in the majority of the trials he had taken 3 seconds or less to listen & register his choice. We queried this because, having done some ABX tests myself, I knew this was not a reasonable time to listen to a snippet of X, decide whether it was A or B, & click the relevant on-screen button with the mouse - in some instances he took 1 sec, in some more he took 2 secs, & in others he took 3 secs.
Anyway, when he was queried on this he had some lame excuse that he had mistakenly deleted the original log & this was just a repeat of it. He never said any of this when he first posted the test, only when he was queried about the speed of the results. He then said that in the original ABX test (the one for which he had deleted the log) he had done a long listen & scored no better than chance, so this was just a repeat.
I suggested that his test results were invalid & that he had not listened to the files, he had just selected randomly. This was a great example of why controls like I suggested were needed in such tests, as we had no way of knowing if someone was just randomly selecting without listening - it was only the timing that alerted us to this fact.
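For what it's worth, the timing check we did by eye is easy to automate. Here's a rough sketch in Python - the log format & the 5-second threshold are my own inventions for illustration, not the actual ABX plug-in's log layout:

    # Flag ABX trials answered implausibly fast - too fast to have
    # actually auditioned X before choosing. The threshold is a
    # judgement call, assumed here at 5 seconds.
    MIN_PLAUSIBLE_SECONDS = 5.0

    def flag_fast_trials(trials):
        """trials: list of (trial_no, seconds_taken, was_correct) tuples."""
        return [t for t in trials if t[1] < MIN_PLAUSIBLE_SECONDS]

    # Hypothetical log resembling the one in question: 1-3 second answers.
    log = [(1, 2.0, False), (2, 1.0, True), (3, 3.0, False), (4, 2.0, True)]
    suspect = flag_fast_trials(log)
    print(f"{len(suspect)} of {len(log)} trials answered in under "
          f"{MIN_PLAUSIBLE_SECONDS}s - treat the run as suspect")

If most trials fall under the threshold, the sensible conclusion isn't "no audible difference", it's "nobody listened".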
So here's the bit that stuns me - a number of well-known posters, including a moderator, posted to the effect that if you don't hear a difference in the first couple of trials, why would you bother listening for the next 10 or so trials (16 trials is the default, I think) - their reason was that life was too short. This from the guy who claims he created ABX testing & from the Hydrogen Audio crowd - the home of so-called objectivists. In other words, they are quite happy with the results as long as the results are null, i.e. no difference heard.
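As an aside on the 16-trial default: "no better than guessing" is just the binomial distribution, nothing mysterious. A quick sketch in plain Python (the function name is mine):

    import math

    def p_at_least(k, n, p=0.5):
        """Chance of getting k or more trials right out of n by guessing."""
        return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k, n + 1))

    # With 16 trials you need 12 or more correct before guessing becomes
    # an unlikely explanation at the usual p < 0.05 level.
    print(p_at_least(12, 16))  # ~0.038
    print(p_at_least(11, 16))  # ~0.105 - still consistent with guessing

Which is exactly why switching off after a couple of trials & clicking through the rest guarantees a null - the guessed trials drag the score straight back to chance.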
The opposite side of this is that the guy I mentioned, Amir, has produced many positive ABX results showing that he can differentiate between high-res & RB (Redbook CD). I know, so what :) but this is another sacrosanct area for objectivists & they have been claiming blind tests have never shown anybody able to differentiate. So when Amir produced his results, & repeated them a number of times following the stipulations & suggestions of these objectivists, and they failed to find any flaw in his approach or procedure that could explain these positive results, they finally settled on accusing him of dishonesty & declared his results unacceptable unless the test was overseen by a trusted 3rd party. This guy Amir, btw, was a vice-president at Microsoft in charge of the audio production side of things, so he is well trained in both running blind tests & hearing distortions - & he also considers himself an objectivist.
So, it really is interesting to see the reactions of these people when faced with the evidence they have been demanding for years with nobody producing it - & when positive evidence finally is produced, they go into a tizzy of denial (the Foobar ABX software plug-in even had an update to make it more difficult to cheat).
So all of this made me even more convinced that blind test results need some internal checks to record how prone the test/tester is to producing false negatives.
Just to be clear, Diapason, I'm not putting you in the same category as these Hydrogen Audio guys, but your bias in this instance would likely produce no difference. The only way to tease this out is to have some internal, hidden controls that have known, agreed differences - if they are not picked up as different then the tester is biased against hearing differences, or he is tired & has lost focus, or the playback system is not revealing enough, or ........ in the case of the HA guys, they have monkeys sitting in for them to do the test because life is too short :)
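To make that concrete, here's a minimal sketch of the kind of protocol I have in mind, assuming you can prepare control pairs with a known, agreed audible difference. All the names & the 90% pass mark are mine, purely illustrative:

    import random

    def build_trial_list(real_trials, control_trials):
        """Interleave the real A/B trials with control trials whose
        difference is agreed to be clearly audible. The listener has
        no way of knowing which kind any given trial is."""
        trials = ([("real", t) for t in real_trials] +
                  [("control", t) for t in control_trials])
        random.shuffle(trials)
        return trials

    def assess(results):
        """results: list of (kind, was_correct) pairs from a finished run."""
        controls = [ok for kind, ok in results if kind == "control"]
        hit_rate = sum(controls) / len(controls)
        if hit_rate < 0.9:  # assumed pass mark, not a standard
            return ("invalid run: known-audible controls were missed "
                    "(inattention, fatigue, or an unrevealing system)")
        reals = [ok for kind, ok in results if kind == "real"]
        return f"valid run: {sum(reals)}/{len(reals)} correct on real trials"

    # Hypothetical finished session:
    session = [("control", True), ("real", False),
               ("control", True), ("real", True)]
    print(assess(session))

The point being: a null result only counts if the same listener, on the same system, in the same session, demonstrably could hear the differences everyone agrees are audible.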