Atomic clocks


We all understand that in digital audio, the better the clock, the lower the jitter and the better the sound. We've seen it in DirectStream: both the DAC and the new Memory Player. So, if what we've built into our products is low jitter, what about adding something even better? Like an expensive external clock? Like an atomic clock? Wow, that would be the be-all and end-all of clocks. Right? Nothing's more accurate than an atomic clock, and we see them available for more money than the DAC itself. We've even read how much better a particular DAC sounds when one is added. Remember how in yesterday's post I gave some examples of how we think one way, ignore something else, and wind up with the wrong answer? Atomic clocks are a perfect example of that kind of wrong thinking. Our digital guru, Ted Smith, is here to set us straight. This is lifted from our forums, in answer to a customer's question.

Atomic clocks are addressing a problem that's irrelevant to audio: long-term clock accuracy. It doesn't really matter for music listening if the clock is accurate over years. What really matters is how much the clock jumps around during a second or so. It's that jitter in time that frequency-modulates the audio.
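
To make that concrete, here is a minimal numpy sketch (not from Ted's post; the tone, the 100 Hz wobble, and the 5 ns timing error are invented, deliberately exaggerated numbers): a sampling clock whose edges wobble turns a pure tone into a tone plus modulation sidebands.

```python
# Minimal sketch (illustrative numbers only): sinusoidal clock jitter
# frequency/phase-modulates a pure tone, producing sidebands.
import numpy as np

fs = 48_000          # sample rate, Hz
f_tone = 1_000.0     # audio tone, Hz
f_jit = 100.0        # jitter (wobble) frequency, Hz
jit_amp = 5e-9       # peak timing error, seconds

n = np.arange(fs)                        # one second of samples
t_ideal = n / fs                         # where the samples should fall
t_actual = t_ideal + jit_amp * np.sin(2 * np.pi * f_jit * t_ideal)

clean = np.sin(2 * np.pi * f_tone * t_ideal)      # perfect clock
jittered = np.sin(2 * np.pi * f_tone * t_actual)  # wobbly clock

# The error spectrum shows sidebands at f_tone +/- f_jit: the clock wobble
# has been modulated onto the audio itself.
spectrum = np.abs(np.fft.rfft(jittered - clean)) / len(n)
freqs = np.fft.rfftfreq(len(n), d=1 / fs)
for f in (f_tone - f_jit, f_tone + f_jit):
    k = int(np.argmin(np.abs(freqs - f)))
    print(f"sideband at {f:.0f} Hz: {20 * np.log10(spectrum[k] + 1e-20):.1f} dB re full scale")
```

The sidebands land at 1 kHz plus and minus 100 Hz: the jitter spectrum is stamped directly onto the audio, which is why short-term phase noise matters far more than whether the clock gains a second per century.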

Atomic clocks actually aren't designed for low jitter (at least, that's not their primary design goal). In fact, they don't use the atomic reference at all in the short term. They have a free-running clock and push it around based on periodic reads of the atomic reference. It's the quality of that free-running clock that matters for audio, and whether varying its rate adds phase noise that will affect the audio... If they sound better at all in a system, it's just because they are better built overall than another clock, not because they are better for audio.
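
As a rough illustration of that "push it around" behavior, here is a toy simulation (my own sketch with invented gains and drift figures, not any real atomic or GPS-disciplined design): a wandering local oscillator that is periodically steered toward a reference. The steering keeps the long-term time error small, but the short-term behavior is still the local oscillator's, plus the small frequency steps the steering itself adds.

```python
# Toy model (invented numbers): a "disciplined" clock is a free-running
# oscillator nudged toward the reference every so often.
import random

STEER_EVERY_S = 10          # how often the reference is consulted
KP, KI = 0.5, 0.05          # steering-loop gains (illustrative)

def run(steered: bool) -> float:
    """Return the absolute time error (ns) after one simulated hour."""
    random.seed(0)          # same oscillator wander for both runs
    freq_err_ppb = 20.0     # local oscillator starts 20 ppb fast
    time_err_ns = 0.0
    prev_err_ns = 0.0
    for second in range(1, 3601):
        freq_err_ppb += random.gauss(0, 0.05)   # the oscillator wanders
        time_err_ns += freq_err_ppb             # 1 ppb over 1 s = 1 ns
        if steered and second % STEER_EVERY_S == 0:
            # Each correction is itself a small frequency step, i.e. added
            # short-term phase movement, traded for long-term accuracy.
            avg_freq_err = (time_err_ns - prev_err_ns) / STEER_EVERY_S
            freq_err_ppb -= KP * avg_freq_err + KI * time_err_ns
            prev_err_ns = time_err_ns
    return abs(time_err_ns)

print(f"free-running after 1 h: {run(steered=False):9.0f} ns")
print(f"disciplined  after 1 h: {run(steered=True):9.0f} ns")
```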

The ability to change the frequency of a clock to match an external source is exactly at odds with a low-phase-noise clock. That's the primary reason that PLLs have a bad reputation in audio. The act of trying to control the frequency of a clock adds jitter to that clock.
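
Here is a tiny model of that trade-off, under two simplifying assumptions of mine (an otherwise perfectly clean local clock and a reference whose edges carry white phase noise): a first-order tracking loop that steers the local clock toward the reference copies the reference's jitter onto its own output, more so the tighter it locks.

```python
# Tiny model (simplified assumptions, see lead-in): tracking an external
# reference transfers the reference's phase noise to the tracked clock.
import random

random.seed(1)
N = 200_000
ref_jitter_rms = 1.0                       # arbitrary phase-noise units

for loop_gain in (0.0, 0.001, 0.01, 0.1):  # 0.0 = free-running, no tracking
    out_phase = 0.0
    acc = 0.0
    for _ in range(N):
        ref_phase = random.gauss(0.0, ref_jitter_rms)     # jittery reference edge
        out_phase += loop_gain * (ref_phase - out_phase)  # steer toward it
        acc += out_phase * out_phase
    print(f"loop gain {loop_gain:>5}: output phase noise RMS = {(acc / N) ** 0.5:.3f}")
```

With zero gain (a free-running clock) none of the reference's noise appears on the output; the harder the loop works to follow the external source, the more of that noise it takes on.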

Conversely, running a clock over any distance adds jitter: going through various impedance discontinuities (e.g., cables and their connectors), being subjected to ground loops and other interference, going through conversions to optical and back, and so on. There's no method of distributing a clock that doesn't add phase noise at the frequencies that matter to audio.

Then there's the issue of having multiple clocks in a system. If a new clock is added to a DAC, what's the DAC supposed to do when that new clock runs at a slightly different rate than the incoming data? Asynchronous sample rate conversion is the standard answer, but what that really does is encode the clock-rate differences into the output audio, making it impossible to separate that jitter downstream... Not a good design for audio at all.
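
Here is a small sketch of that encoding effect (my illustration, not PS Audio's implementation; the input is an analytic sine so there is no interpolation error, only the clock-tracking effect): an asynchronous sample rate converter has to follow the rate at which input samples actually arrive, so if the incoming clock wanders relative to the DAC's clock, that wander gets evaluated into the output sample values.

```python
# Minimal sketch (illustrative): an ASRC tracking a wandering source clock
# writes the wander into the output data, where no later stage can remove it.
import numpy as np

fs_out = 48_000                              # DAC output rate (the "good" clock)
f_tone = 1_000.0                             # test tone carried by the input stream
t_out = np.arange(fs_out) / fs_out           # one second of output ticks

# Incoming clock nominally matches fs_out but wanders by +/-20 ppm at 2 Hz.
ratio = 1.0 + 20e-6 * np.sin(2 * np.pi * 2.0 * t_out)

# Position in the input stream the converter has reached at each output tick:
# it advances by the instantaneous rate ratio it must track.
src_pos = np.cumsum(ratio) / fs_out
resampled = np.sin(2 * np.pi * f_tone * src_pos)

ideal = np.sin(2 * np.pi * f_tone * (t_out + 1 / fs_out))   # same tone, steady clock
residual = resampled - ideal
print(f"wander encoded into the data: "
      f"{20 * np.log10(np.std(residual) / np.std(ideal)):.1f} dB rel. to the tone")
```

Once the wander is in the sample values, a later stage with a perfect clock will faithfully reproduce it; it has become part of the signal.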

So, what sounds like a good idea, that an external clock of the highest accuracy should be better, turns out to be worse. And when it is better, that only means the internal clock it replaced wasn't very good to start with. Adding the atomic clock is better, but only better than something that wasn't good enough in the first place.

Myths we cherish are often broken by science.

Paul McGowan

Founder & CEO
