As engineers, we focus our efforts on what we can quantify: measuring, evaluating, and finding some form of commonality we can all agree upon. Perhaps the easiest place to start is the ear.
We know what the average ear is supposed to do, and we've got reams of research on the subject. We know its frequency range as well as its maximum dynamic range and loudness limits. There's probably not much we don't know about those appendages on the sides of our heads, and so it's easy to put facts and figures on spec sheets showing how well our equipment will interface with our ears.
Only, our ears are little more than sensors. What they actually interface with is our brains, and there we have far less knowledge of what we can and cannot perceive. For example, we have a general idea of how much and what type of distortion the average person can tolerate before noticing something's amiss—but that's not a firm number. It depends on the kind of music, the listener's tolerance, and (maddeningly for engineers) the listener's mood.
Thinking of our ears as microphones is an interesting concept, but that's hardly how hearing works.