Yesterday we discovered what 1-bit DSD looks like and how it’s fundamentally different from the standard CD format, PCM. I am going to spend a few posts helping folks understand DSD in a bit more detail, but today I wanted to make sure we all understand the importance of timing.
Timing is everything in digital audio. Regardless of the format, the accuracy of the sample rate in PCM, or of the 1-bit stream coming in with DSD, plays a critical role in how the audio will sound when we play it back in our systems.
Timing issues are commonly known as jitter – a term I thought must have something to do with drinking too much coffee in the morning – but it turns out simply to mean small variations in the timing of the stream.
If you’ve been following along in our little series it should start to be clear how important jitter can be to the eventual output music. Consider what happens in a PCM system if the timing of the samples, which are supposed to occur 44,100 times a second, starts changing in the encoding process or the playback process. The system only works well if the timing is correct because, at the end of the day, both DSD and PCM are timing-based systems. Change the timing and the results will always be something unexpected.
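To make that concrete, here is a minimal sketch of the idea (my own illustration, not from any particular DAC design): we sample a 1 kHz sine wave at the CD rate of 44,100 samples per second, once with a perfect clock and once with each sample instant nudged by up to a microsecond of random jitter. The wave itself never changes, yet the recorded amplitude values come out wrong.

```python
import math
import random

def sample_sine(freq_hz, sample_rate, n_samples, jitter_s=0.0, seed=0):
    """Sample a sine wave; optionally perturb each sample instant by random jitter."""
    rng = random.Random(seed)
    samples = []
    for n in range(n_samples):
        t = n / sample_rate              # the instant the clock *should* tick
        if jitter_s:
            t += rng.uniform(-jitter_s, jitter_s)  # the instant it actually ticks
        samples.append(math.sin(2 * math.pi * freq_hz * t))
    return samples

ideal    = sample_sine(1000.0, 44_100, 4_410)                 # perfect clock
jittered = sample_sine(1000.0, 44_100, 4_410, jitter_s=1e-6)  # ±1 µs of jitter

# Even though we "measured" the same signal, the stored values differ:
max_error = max(abs(a - b) for a, b in zip(ideal, jittered))
```

The error grows with both the amount of jitter and the frequency of the signal, which is why jitter tends to smear high-frequency detail first.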
Tomorrow we’ll start on the crazy process in 1-bit decoders of trying and failing to get it right.