In response to my post of a few days ago, New Paths, reviewer Tony Cordesman wrote a brief essay on the subject that I found illuminating.
"I never could figure out if Ivor was simply a con artist or really believed that first in the chain argument. It never made the slightest sense in terms of basic analytic modeling, particularly since he ignored all of the colorations in making the recording, and placed more emphasis on the turntable than the cartridge.
You might consider the basic principles of operations research. You evaluate any given item or process by developing an "error budget": analyzing each step and potential source of problems in the flow or function of what you are assessing. This includes a step-by-step identification and sub-analysis of every key factor that can affect the end result. You assign a relative weighting to each variable and then find some way to score the success of that and every other link in the total "error budget."
It shouldn't take you more than a minute with your technical background to realize two things that are clear to anyone observing military OR work.
- Even if you can set up some form of dynamic model or algorithm, and create a set of detailed OR subroutines for each variable, the end result still has to be highly subjective. Math can never fully substitute for judgment. Even deciding what technical measures to use and how important each is has to be a matter of subjective judgment and experience.
- The limiting variables in a system that works are almost always combinations of several weakest links in the chain; no single variable dominates unless something is badly wrong in one part of the design that overwhelms every other aspect of the error budget.
I've never seen anyone in audio attempt this kind of analysis of either the recording or playback side in a systematic way. A few focus on a handful of technical parameters, usually in one piece of equipment, at the expense of ignoring the rest. Others rely on their ears and some empirical experimentation in a few areas -- with or without limited technical measurement. This is partly a function of the sheer number of variables involved from mike to speaker. It is one hell of a large and complex error budget.
At a broader and more popular level, we have cycles of focusing on one variable like sample rates, transient intermodulation distortion (TIM), etc. In some OR work, this is called "suboptimization analysis." You focus on a key part of the problem you know you have to solve and ignore the rest of the process or device. What often happens, however, is that people become so centered on that aspect of performance that they see only that tree and forget about the forest.
From an audio perspective, this can sometimes lead to real advances in components no one previously really thought about. It also is a fact of life in OR, however, that people can also focus their attention far too much on something like the power cable or a given type of capacitor, tune their hearing to differences that really have comparatively petty musical impact, or simply convince themselves that some tiny piece of stuff can damp a room.
I gave up trying to get anyone in audio to pay attention to OR and related human factors work decades ago, and the odd thing is that having so many competing suboptimizers seems to act as the equivalent of a rational approach to engineering development over time.
High-end audio is a little like evolution: if evolution really worked efficiently, it would have produced a better species than us several million years ago. Fortunately, in practice, in a very flawed universe, even the seeming monsters and dysfunctional dead ends have some evolutionary value in moving things forward.
But, and it is a very real but, the variables you ignore in a small, sharply suboptimized error budget can also turn into monsters that come back to bite you. It's the whole audio chain -- mike to media to speaker, room, and listening position -- that ultimately counts."
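Cordesman's error-budget procedure -- assign each link in the chain a relative weight, score it, and combine the weighted scores -- is easy to make concrete. Here is a minimal sketch in Python; every component name, weight, and score below is invented purely for illustration, and the scores stand in for the subjective judgments he says the math can never replace:

```python
# A minimal sketch of an "error budget" in the OR sense: each link in the
# audio chain gets a relative weight (its importance) and a subjective
# score (0.0 = broken, 1.0 = transparent). All names and numbers are
# hypothetical, chosen only to illustrate the method.

def error_budget(links):
    """Return the weighted overall score, plus the links sorted by how
    much each drags the total down (weight times shortfall from 1.0)."""
    total_weight = sum(w for _, w, _ in links)
    overall = sum(w * s for _, w, s in links) / total_weight
    worst_first = sorted(links, key=lambda l: l[1] * (1.0 - l[2]), reverse=True)
    return overall, worst_first

# (link name, relative weight, subjective score) -- hypothetical values
chain = [
    ("microphone",                 3.0, 0.90),
    ("recording/mastering",        3.0, 0.80),
    ("source component",           2.0, 0.95),
    ("amplifier",                  1.0, 0.95),
    ("speakers",                   3.0, 0.70),
    ("room + listening position",  3.0, 0.60),
]

overall, worst_first = error_budget(chain)
print(f"overall score: {overall:.2f}")
for name, w, s in worst_first[:3]:
    print(f"weak link: {name} (weighted shortfall {w * (1.0 - s):.2f})")
```

Note what even these made-up numbers show: no single link dominates. The room and the speakers together account for most of the shortfall, which is exactly his point that the limiting factor is usually a combination of several weakest links, not one.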