Software jitter

April 29, 2012
 by Paul McGowan

Just when we thought we had it all figured out, along comes a new form of distortion to tackle: software jitter. The culprit here is, unfortunately, a very necessary component in the digital audio chain: the CPU (central processing unit) itself.

We first noticed this problem when we started releasing different versions of software and firmware: every release of our music management program eLyric sounds different, and every release of the Bridge firmware sounds different. This might seem obvious to you, but it wasn’t to our designers, since the changes we were making had “nothing” to do with the data stream or the audio itself. Sometimes a change in the front panel display code would cause a major upset in sound quality.

Turns out the core of this issue is our old “pal” the power supply – the problem we started working on in 1975 when we introduced external high-current power supply options, and again in 1997 with the Power Plant. Differences in code change how the CPU chugs along or gets wild with activity – which in turn modulates the power supply, causing tiny voltage shifts. These voltage shifts affect the transition area between a 1 and a 0 causing a temporal shift in the data called jitter.
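
To get a feel for the magnitudes involved, here is a back-of-the-envelope sketch in Python. The slew rate, droop, and threshold numbers are illustrative assumptions, not measurements from our gear; the point is simply that millivolt-scale supply shifts translate into picosecond-scale timing shifts.

```python
# Minimal sketch: a supply-induced shift in the logic threshold moves the
# moment a finite-rise-time edge crosses that threshold. For an edge with
# slew rate S (volts/second), a threshold shift dV gives dt = dV / S.

slew_rate = 3.3 / 1e-9               # assumed: a 3.3 V edge rising in 1 ns (V/s)
supply_droop = 0.05                  # assumed: 50 mV of rail droop under CPU load
threshold_shift = supply_droop / 2   # assumed: the threshold tracks mid-rail

jitter_s = threshold_shift / slew_rate
print(f"timing shift: {jitter_s * 1e12:.1f} ps")   # ~7.6 ps per 50 mV of droop
```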

Of course it should be obvious the way to fix this isn’t in the code that causes the changing flurry of CPU activity but in the hardware itself – a much bigger challenge. You can see some of this work reflected in our new MKII upgrade of the PWD where we went from a couple of localized regulators to 11 – all in an effort to minimize the effects of software jitter.

I wanted to write this post to keep you up to date with what’s being discovered in our industry. Look for changes in hardware and lots of controversy surrounding this finding. I am sure it’ll rise to the top shortly.


7 comments on “Software jitter”

  1. Great post Paul, thanks. And thanks for your work in this area. Maybe this is part of the explanation as to why some digital just doesn’t sound as good as it should. Can you imagine how insufficient those power supplies were in the early CD players we loved to hate? And for those “measurement” folks, why my LPs and tube amps are the best sound available to these ears, despite all their reproduction friendly limitations and distortions.

    Good listening!

  2. I am trying to understand this better, but am a bit confused still.

    The nature of this confusion is that the CPU is functioning in the digital domain, completely independent of the timing and stream of the digital content/music that it is processing. If a CPU is running at 3 GHz, 2 GHz, or anything in between (many change dynamically in a split second with Intel SpeedStep/Turbo Boost, etc.), it shouldn’t matter, as long as the processing power is sufficient to maintain the streaming speeds needed by the I/O subsystems that carry the stream.

    I think you may see my point above, so please do provide a bit more clarity for this, I do think it is interesting but I am not quite grasping it.

    Thanks again and look forward to reading more interesting posts.

    Jim

    1. It is perhaps difficult to explain in such a short venue. However, one simple explanation that might help is understanding jitter itself, which is basically a timing variance in the digital data stream. If that timing error is correlated with something (as opposed to being a simply random shift in timing), the ear hears these shifts and the music’s sound quality is affected negatively. Meanwhile, the bits are identical, always perfect, but where they are delivered in time is changing, and that is what causes the problem.

      The CPU, in the case I am referring to, is also handling the bits in perfect fashion but as it is working it is taxing the power supply feeding the system as well as spewing emissions to the other components in the digital chain. When the power supply gets taxed it varies in voltage (going down as the CPU works harder) and this up and down can affect where the transition voltages occur in time. This results in jitter and worse sound – although the bits themselves are unaffected.
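
      A quick way to see why correlation matters is to simulate it. The Python sketch below uses made-up numbers (a 1 kHz tone, 2 ns RMS of clock timing error, and a hypothetical 120 Hz supply ripple as the correlated source): the correlated error produces discrete sidebands at ±120 Hz around the tone, while the same amount of random error merely raises the noise floor, which the ear largely ignores.

      ```python
      import numpy as np

      fs, n, f0 = 48_000, 1 << 16, 1_000.0   # sample rate, FFT size, test tone
      t = np.arange(n) / fs
      rms_j = 2e-9                           # assumed 2 ns RMS timing error

      random_j = np.random.normal(0, rms_j, n)                      # uncorrelated
      ripple_j = np.sqrt(2) * rms_j * np.sin(2 * np.pi * 120 * t)   # 120 Hz correlated

      def spectrum_db(x):
          s = np.abs(np.fft.rfft(x * np.hanning(len(x))))
          return 20 * np.log10(s / s.max() + 1e-15)

      # Sampling the tone at (t + jitter) models a DAC clock with timing error.
      rand_db = spectrum_db(np.sin(2 * np.pi * f0 * (t + random_j)))
      corr_db = spectrum_db(np.sin(2 * np.pi * f0 * (t + ripple_j)))

      freqs = np.fft.rfftfreq(n, 1 / fs)
      for f in (f0 - 120, f0, f0 + 120):
          i = np.argmin(np.abs(freqs - f))
          print(f"{f:6.0f} Hz  correlated: {corr_db[i]:7.1f} dB  random: {rand_db[i]:7.1f} dB")
      ```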

      1. It should be self-evident to any electrical engineer that a piece of electronic equipment can perform no better than its power supply, and they should also understand exactly why. It’s not so evident to those not trained as electrical engineers; it is hard to explain and even harder to sell. Is it surprising, therefore, that much so-called high-end audio equipment is often built around marginal power supplies merely adequate for their function most of the time? It’s not just true for power amplifiers but for everything. This equipment is often designed by tinkerers and tweakers, not engineers. They have neither the understanding nor the test equipment to know what will happen to performance when circumstances are less than ideal.

        Power supplies for sensitive electronic instrumentation must be well regulated on both the line side and the load side, with excellent filtering and considerable reserve capacity beyond the worst case of what will be expected of them in use. Can you imagine a CAT scanner or an MRI failing to function properly because of utility power line disturbances or heavy internal demand?

        One of the flaws in the old IHF standards was that they allowed testing of equipment with substitute power supplies of the same voltage but better regulation than the manufacturer supplied. Those standards and manufacturer-published specifications were worthless, because you could never be sure what you were actually getting when you bought something. For mission-critical applications, at least one double static conversion UPS is mandatory. This will provide protection from all nine types of power-source disturbances, but it will not correct for load-side performance from marginally selected power supplies; it will not provide load-side regulation.

        So I ask: is re-clocking the digital bit stream, because heavy demand by the CPU loads down the power supply and alters the internal clock (changing the time when switching occurs), fixing the symptom rather than the cause? What other circuits in the same unit are affected by drifting power supply output voltage, the analog audio amplifier stage, for example?

        The question for someone with a show-me state of mind like mine is: how much distortion does digital jitter typically create, and what is the threshold of its audibility? Digital switching errors will eventually show up as analog waveform distortion, the D/A converter simply getting the wrong voltage during the sampling interval, where jitter manifests itself as reading the wrong number.

        Of the several different types of analog waveform distortion, the catch-all category invented for anything that doesn’t fit neatly into the other boxes is intermodulation distortion and noise. I suppose if it’s not steady-state it has to be called transient intermodulation distortion. (Otala’s usage of the term struck me as inappropriate, as slew-rate limiting at high power levels merely indicated a falloff in bandwidth with increasing output, a form of quasi-linear distortion.)

        So jitter causes noise; how much? To make a fair comparison, a circuit under test should include an analog input signal, an A/D converter of the best possible quality with no audible jitter, and a D/A converter where jitter-induced distortion can be varied for testing (the only variable in the test). Then the input and output analog waveforms are compared, both by analyzing the output’s measured distortion and in blind A/B testing, to see if just that one variable causes an audible difference. Otherwise how would anyone be convinced that the added cost of correcting it is justified? By just claiming it is, or by scaring them into buying it “just in case”? It strikes me that for high-end audio, that’s often the name of the game in advertising.
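
        For what it’s worth, the single-variable half of that test is easy to sketch in software. The toy Python below keeps the source tone fixed and sweeps only the RMS clock jitter of an otherwise ideal D/A converter, then reports the resulting error relative to the tone; the jitter levels are illustrative assumptions, not claims about any real converter. Whether a given level is audible is, of course, exactly the blind A/B question.

        ```python
        import numpy as np

        fs, n, f0 = 48_000, 1 << 16, 1_000.0
        t = np.arange(n) / fs
        ideal = np.sin(2 * np.pi * f0 * t)      # what a jitter-free DAC would output

        for rms_j in (10e-12, 1e-9, 100e-9):    # assumed jitter levels to sweep
            jittered = np.sin(2 * np.pi * f0 * (t + np.random.normal(0, rms_j, n)))
            err = jittered - ideal              # everything that is not the ideal tone
            level_db = 10 * np.log10(np.mean(err**2) / np.mean(ideal**2))
            print(f"{rms_j * 1e12:9.0f} ps RMS jitter -> error {level_db:6.1f} dB below the tone")
        ```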

  3. “These voltage shifts affect the transition area between a 1 and a 0 causing a temporal shift in the data called jitter.”

    Temporal shift???? Of course I know what this is, but please explain it to those who do not.

  4. Paul, I don’t see why you are branding this as a “new form of distortion, software jitter” when your description of the problem and the solution your designers implemented would clearly indicate the problem to be one of Electromagnetic Compatibility (EMC). In low-level sensitive circuits noise isolation has always been necessary, and one of the basic causes of noise in digital circuitry is switching transients: the current draw associated with electrical circuit activity draws down the voltage and puts high-frequency spikes (i.e. electrical noise) onto the power rails. If you have distributed power, then this is a prime means of coupling noise generated in one stage to others. This is a form of Electromagnetic Interference (EMI). EMI has long been recognized as one of the components of jitter.
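
    As a rough illustration of that mechanism, here is a back-of-the-envelope Python sketch; the inductance, resistance, and load step are assumed example values, not figures from any particular board.

    ```python
    # Rail disturbance from a load step: V = L*dI/dt (inductive spike on the
    # edge) plus R*dI (resistive droop while the load persists).
    L = 5e-9     # assumed 5 nH of trace/plane inductance
    R = 0.05     # assumed 50 milliohm rail resistance
    dI = 0.5     # assumed 500 mA CPU load step
    dt = 2e-9    # assumed 2 ns current edge

    print(f"spike: {L * dI / dt * 1e3:.0f} mV, droop: {R * dI * 1e3:.0f} mV")
    # -> spike: 1250 mV, droop: 25 mV -- why local regulation and bypassing matter
    ```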

    Some high end audio manufacturers have paid particular attention to EMC issues and it has resulted in recognition by reviewers of their audio performance relative to competitors. I am thinking specifically of ARCAM’s top of the line Home Theater receivers and processors, though I know that other manufacturers have addressed it as well. Turning off displays is one method of dealing with noise associated with multiplexing and modulation.

    The solution you describe for the Mk II version is in fact a small scale implementation of the Power Plant with the regulators effectively regenerating clean power from the power rails to provide isolation from the noise that’s on them.

    I am just surprised that you would categorize the issue as a software problem when it was in fact, based on your description, a hardware system implementation issue which was unmasked by software activity. I’ve been an EMC engineer for 25 years, and whenever functional problems have occurred they were usually put down to either software or EMC. Usually it’s been the case that we had to prove it wasn’t EMC, as 95% of the time it was in fact a software issue. This is the rare occasion where I can defend the software and state with a fair degree of certainty that it was an EMC issue.

    1. Certainly we recognize EMI and power supply spikes as contributors to sound quality (SQ), and we have always paid close attention to isolation and careful bypassing of these devices to keep power supply interaction low. But while what I am referring to is certainly related, it is also somewhat unique, at least in our experience.

      EMI (RF) is a common problem all engineers have to worry about, and we always have. But it is the correlated nature of the RF, as well as of the power supply modulation, that is unique in my experience. Any correlation between the audio signal and the power supply modulations, or any RF interference, sounds bad – and we now have the situation where the software program itself is responsible for the correlation between the program material and the CPU’s activity.

      Our ears hear correlated interference as tonal changes and mostly ignore uncorrelated “noise”.

      As you no doubt know, anything that modulates the power supply can cause jitter. One example of the significance of that power supply modulation is that it’s possible to use power supply variances to attack RSA public key encryption by statistically detecting the different loads caused by having ones vs. zeros in a single multiply at a given point in time. The mere fact that this is being done again points to the correlated nature between the software and the power supply issues which is what we’re referring to as software jitter.
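
      For readers who haven’t seen that attack, here is a toy Python sketch of the leak (an illustration of the principle, not a real attack): in naive square-and-multiply modular exponentiation, a key bit of 1 costs an extra multiply, so a per-bit trace of the work done, standing in for current draw, reads the exponent right back.

      ```python
      def modexp_with_trace(base, exponent, modulus):
          """Left-to-right square-and-multiply; trace counts multiplies per key bit."""
          acc, trace = 1, []
          for bit in bin(exponent)[2:]:
              acc = (acc * acc) % modulus       # one squaring for every bit
              ops = 1
              if bit == "1":
                  acc = (acc * base) % modulus  # extra multiply only for 1 bits
                  ops += 1
              trace.append(ops)                 # stands in for per-bit current draw
          return acc, trace

      _, trace = modexp_with_trace(7, 0b101101, 1009)
      print(trace)   # [2, 1, 2, 2, 1, 2] -- the 2s and 1s spell the key bits 101101
      ```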

      There are other ways for code changes in a CPU to manifest themselves besides jitter, e.g. via RF. CPU-generated RF can enter via direct connection or via any RF antenna formed by loops. The RF detector can be any non-linearity in any component in the downstream signal path. (The most extreme example of this is the crystal radio.)

      One should be wary of using diodes to do level clamping on an analog signal, but there are plenty of other non-linearities in the audio signal path: even the DAC itself is obviously a non-linear device. (Another example is that we’re always wary of “non-polar” electrolytics as well; the back-to-back caps are essentially two weak diodes.)

      So indeed we’re dealing with supply modulation that upsets the temporal references of the digital audio chain, inducing jitter, as well as RF problems causing other types of distortion – and good housekeeping rules would always force us to pay close attention to isolation, lots of bypass caps, local supplies, etc. But the correlated nature of the device modulating the supply and generating the RF is specific to the software running – so my point is that we need to pay close attention to all of it: hardware, power supplies, and the software itself, in an effort to minimize MIPS.

      I think the term “software jitter” seems appropriate since, no matter how careful we are, different software will always cause different SQ.
