In my previous article on vacuum tubes in Issue 103, I talked about linearity. One of our readers requested a more thorough explanation of how linearity relates to audio and music.
The term “linearity” has a few common uses in relation to audio electronics. One refers to the frequency response of an audio circuit.
A “linear” (flat) frequency response means that, for a given input level, the circuit produces the same output level at every frequency within a specified bandwidth.
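To make “same output level” concrete, deviations from flatness are usually expressed in decibels relative to the level at a reference frequency, commonly 1 kHz. A minimal sketch, using hypothetical measured output voltages rather than data from any real product:

```python
import math

V_REF = 1.00  # output voltage at the 1 kHz reference (hypothetical)

# hypothetical output voltages measured with the same input level
measured = {20: 0.92, 100: 0.99, 1000: 1.00, 10000: 1.01, 20000: 0.95}

for f, v in measured.items():
    db = 20 * math.log10(v / V_REF)  # level relative to 1 kHz, in dB
    print(f"{f:>6} Hz: {db:+.2f} dB")
```

A perfectly flat circuit would show 0 dB at every frequency; the ±1 dB figures quoted in spec sheets bound how far these numbers may stray.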
The audio range of frequencies is commonly taken to extend from 20 Hz to 20 kHz. This is the average range that a healthy human can perceive as sound when the frequencies are presented as pure sinusoidal (sine wave) tones through loudspeakers or headphones.
This is far from the whole story, though: live music contains fundamental frequencies as low as 16 Hz (which can be produced by organ pipes) and harmonics extending to 100 kHz or beyond (certain percussive instruments). Furthermore, sound in nature never consists of pure sinusoids; it is made of complex waves in which many frequencies are blended together.
While we cannot “hear” 16 Hz, we can perceive its presence or absence with our other senses. Moreover, whether or not a piece of audio equipment reproduces 16 Hz linearly affects its phase response above 20 Hz, in the range we do hear!
The phase response can be thought of as the relative timing, or phase shift, imposed on each of the oscillating components that make up a complex sound. There are mathematical relationships (which can get rather involved) between the frequency response and the phase response of an electronic circuit. The key point is that frequency response errors outside the 20 Hz – 20 kHz range can cause phase response errors within it, at low and high frequencies alike.
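That last point is easy to see with a simple first-order high-pass filter (such as an input coupling capacitor) whose corner frequency sits at 16 Hz. This is a hypothetical sketch, not a measurement of any particular circuit: the phase lead of such a filter is arctan(fc/f), so even though its corner lies below the audible band, it still shifts phase well inside it:

```python
import math

def highpass_phase_deg(f, fc):
    """Phase lead (degrees) of a first-order high-pass with corner frequency fc."""
    return math.degrees(math.atan(fc / f))

fc = 16.0  # corner frequency in Hz (hypothetical coupling filter)
for f in (20, 100, 1000, 20000):
    # ~38.7 degrees of lead at 20 Hz, still ~9 degrees at 100 Hz
    print(f"{f:>6} Hz: {highpass_phase_deg(f, fc):6.2f} deg lead")
```

A filter whose amplitude error is confined below 20 Hz thus leaves a clear phase fingerprint on frequencies we do hear.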
Changing the relative phase between the frequency components of a complex wave actually changes the wave shape, as described mathematically by Fourier analysis. But is this audible? Dr. Milind Kunchur of the University of South Carolina conducted several experiments aimed at defining the temporal resolution of the human auditory system. The groundbreaking results were published in peer-reviewed academic journals, and Kunchur even proposed a neurophysiological model that could explain them: our hearing is much more sensitive to time-domain and phase response errors than to frequency-domain errors (frequency response). The temporal resolution found through his research is on the order of 4.7 µs, which would seem to imply that we should be able to hear the effects of frequencies much higher than 20 kHz. It is noteworthy that our temporal resolution does not appear to degrade much with age, whereas our hearing becomes progressively less sensitive to high frequencies as we grow older.
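The phase-changes-shape claim is straightforward to demonstrate numerically. The sketch below uses a toy signal (not data from Kunchur’s experiments): a fundamental plus a 1/3-amplitude third harmonic, synthesized with two different harmonic phases. The two waves carry identical energy (same RMS, same magnitude spectrum) yet have different shapes and peak levels:

```python
import math

N = 4800  # samples over one fundamental period

def wave(phase3):
    """Fundamental plus a 1/3-amplitude 3rd harmonic at the given phase offset."""
    return [math.sin(2 * math.pi * n / N)
            + (1 / 3) * math.sin(3 * 2 * math.pi * n / N + phase3)
            for n in range(N)]

def rms(x):
    """Root-mean-square level of a sampled waveform."""
    return math.sqrt(sum(v * v for v in x) / len(x))

a = wave(0.0)           # harmonic in phase with the fundamental
b = wave(math.pi / 2)   # harmonic shifted by 90 degrees

print(f"RMS:   a={rms(a):.4f}  b={rms(b):.4f}")             # identical
print(f"peaks: a={max(map(abs, a)):.3f}  b={max(map(abs, b)):.3f}")  # different
```

Both waves would measure identically on a frequency response plot, yet an oscilloscope would show two clearly different shapes, which is precisely why phase response deserves attention.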
We have essentially just stumbled upon a second potential use of the term “linearity.” This would be phase linearity.
Both frequency and phase response are easy to measure. The frequency response of an audio component or loudspeaker is often proudly displayed in product specification sheets, while the phase response is usually absent. Moreover, frequency response specs are typically stated as something like 20 Hz – 20 kHz ±1 dB, which tells us next to nothing about the product’s actual sound.