As audiophiles, we lean toward the extremes: lower distortion, greater dynamic range, lower noise, higher resolution.
So, what of the debate about the benefits of 24-bit resolution vs. 32 bits? Seems academic, right?
Let's look at what it all means.
The main difference between 24-bit and 32-bit digital audio is the level of precision or resolution in the audio data. A 24-bit audio sample can represent up to 16.7 million levels of amplitude, while a 32-bit audio sample can represent over 4.2 billion levels of amplitude.
In practical terms, this means that a 32-bit digital audio recording can capture a greater dynamic range than a 24-bit recording.
How much dynamic range are we talking here? The dynamic range of a 24-bit digital audio signal is about 144 dB, while 32 bits weighs in at about 192 dB.
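If you want to check the arithmetic yourself, both figures fall out of the same rule of thumb: each added bit doubles the number of amplitude levels and buys roughly 6.02 dB of dynamic range. Here's a quick sketch (assuming plain integer PCM, which is what these level counts describe):

```python
import math

def pcm_stats(bits):
    """Amplitude levels and theoretical dynamic range for integer PCM."""
    levels = 2 ** bits                           # each bit doubles the level count
    dynamic_range_db = 20 * math.log10(levels)   # works out to ~6.02 dB per bit
    return levels, dynamic_range_db

for bits in (24, 32):
    levels, dr = pcm_stats(bits)
    print(f"{bits}-bit: {levels:,} levels, ~{dr:.1f} dB dynamic range")
```

Run it and you get roughly 16.8 million levels and 144 dB for 24 bits, versus about 4.3 billion levels and 193 dB for 32 bits, right in line with the figures above.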
To put that in context, a full symphony orchestra has a dynamic range of about 100 dB: the quietest passages sit around 30 dB or so, while the loudest peaks inch up to 130 dB. For comparison, a typical conversation at normal speaking volume is around 60 dB, while a rock concert can be as loud as 120 to 130 dB.
144 dB is more than sufficient to cover this range. (The dynamic range of human hearing is typically considered to be around 120 dB.)
I'll probably be drawn and quartered to suggest 192 dB is overkill, but what the heck?
And increased resolution? Well, clearly, if you can break apart the same signal into billions of amplitude steps rather than mere millions, you should have finer resolution.
But is it audible? As long as I am already being drawn and quartered, I might as well stick my neck out and suggest no.
I've tried the experiment multiple times on our systems, some of the highest-resolution systems there are, and I can't hear even a hint of difference.
The jury is still out, but I am not losing any sleep over the subject.