analog vs. digital

January 6, 2022
 by Paul McGowan

We define analog as a continuous, unbroken stream, while digital is built from discrete bits.

But, of course, our definition of analog is not accurate. Sound itself is made from bits called cycles per second.

Like the discrete pixels or grains of silver that make up a photograph or the electrons and quarks that formed those pixels and grains, at some level everything in our world is actually formed from bits.

If everything is made from bits, does that suggest that the idea of a continuous stream is but a myth?

Perhaps, but then who cares? There’s the metaphysical argument and then there’s the practical. I may be made up of bits but I feel pretty solid.

For purposes of discussion let’s go with everything’s continuous at some level.

Could we instead suggest that analog is the medium that requires no further conversion when recording? That we ignore the conversion process of magnets and tape, or wiggling needles in plastic, because these do not further break down the cycles into smaller bits?

If that’s the case, I wonder where DSD fits into all this. The fact that we can take a DSD stream and inject it directly into an analog power amplifier and get music out the other end has to mean something other than simply categorizing it as digital or analog.

These are the kinds of questions that keep me up at night.


76 comments on “analog vs. digital”

  1. I can’t say I lose sleep about this.

    I remember reading James Gleick’s book “Chaos” decades ago and he starts with the example of measuring the distance between two points on a coastal map. If you then look closer and measure the route via intermediate points it gets longer, to the point that if you measure it down to the grains of sand on the beach it becomes infinite. Of course, when we travel from A to B our interest in the route only goes to a certain level of detail, and that will vary from almost none (flying) to lots (walking). All we need is the level of detail relevant to our safe passage, and for lots of people, me included, for a long time that has been a 16-bit level of detail 44,100 times per second.

    A sound wave goes through so many conversion processes starting with the microphone and ending at the speaker that it is more likely the lowest common denominator than the highest that is critical. DSD is of course widely used in audio signal processing, originally by dCS for professional recording, and how it improves sound quality (even of humble 16/44) I have no idea. All I know is that my system converts analogue signals from my turntable to 40/384 PCM, then runs various DSP algorithms, and converts back to analogue, and what comes out the other end is very enjoyable indeed. Would it sound different if sent through a pure analogue pre/power amplifier? Probably. Do I care? No.

    Never mind the digital processing side of things, which I will never understand in any detail (and what I don’t understand doesn’t bother me), it still amazes me that dragging a chip of old rock over a piece of spinning plastic can bring dead jazz musicians back to life.

    1. Yes Steven, I agree.
      Even though I haven’t ‘done’ vinyl since late 1987, it also amazes me that a tiny chip of old rock dragged through a plastic groove can bring so many ‘pops’, ‘clicks’ & ‘snaps’ back to life…like an open fireplace where none actually exists; it’s a miracle!

      Good on Bangladesh!
      SA looking hopeful…India needs a loss; it’s character building.
      Scott Boland tomorrow…should be good 😀

      1. The rain in Sydney was well planned so we could watch Bang topple the World Test Champions (whatever that means).

        No pops and clicks in this house. And if DSD is so simple, why does the DSD DAC cost £5,500?

        I seem to remember Scylla gobbled up some of Odysseus’s crew. Even the ones that survived took 10 years to get home from Troy, which is longer than DHL took in that Tom Hanks movie. So presuming it has not been devoured and even if you have to wait that long for your SACD, the good news is that by then the Oz team are still unlikely to have lost a home test to Eng.

        p.s. Well done all those responsible for locking up our least favourite tennis player. Made our day.

        1. Steven, you’re in top form today 🙂

          Emmanuel Macron’s quote of the day: “The unvaccinated? I really want to piss them off!” (& I’m pretty sure that was partly a warning to Djokovic)

          Today is the first anniversary of the Trump insurrection…time for a BBQ?

        2. I don’t want to appear picky but if we’re talking about the same Tom Hanks movie the company was FedEx. So much for the priceless publicity from product placement. Music fans might have noticed the cameo from singer Lari White at the end. I didn’t use the word ‘country’ in that description as the genre doesn’t seem to have much love here.

          ‘Cast Away’, it’s one of the few films I’ve watched more than once and by chance it was on television again only last Sunday (Ch5, UK). In a possible revised ending, Chuck delivers the package and the woman answers the door so he asks what’s in the box. The woman replies “a satellite phone, GPS locator, fishing rod, water purifier tablets and some seeds.”

          Thanks to the internet for that one.

          1. There’s a version of that film in FR’s head where she opens the box and his SACD is in it.

            I remember reading those wonderful stories by Antoine de Saint-Exupéry about his times as a postal pilot (Wind, Sand and Stars), flying single seaters over the Sahara. I reckon if PSA sent FR his SACD by Aéropostale he’d have had it ages ago.

  2. These are the Pro’s & Con’s of hitchhiking.

    “The fact that we can take a DSD stream and inject it directly into an analog (sic) power amplifier and get music out the other end…”…say WH-A-A-A-AT??
    Umm…no.
    A DSD stream still needs to go through a DAC before it can come out of an analogue amp & sound anywhere near music, Paul.
    From what I understand, DSD is still a whole bunch of ‘zeros’ & ‘ones’ that exist in the digital domain.
    I’d love to know what ‘proper’ DSD sounds like, but my ‘Octave Records’ CD is caught between Scylla & Charybdis…since September 22, 2021 🙁

    1. Hi Martin,
      Sorry to hear your PS Audio CD from 9/2021 is still being blocked by the Great Blue Whale. I didn’t think their habitat stretched all the way to Australia, or is AU being isolated by the CCP, or do you have to choose between Scylla and Charybdis for safe passage?

      AU still not accepting international first class and priority mail? Covid-19 is transmitted by a cough, sneeze or breath, not from handling packages. When I order CDs from Germany, Greece or the UK, I receive them within 2 weeks. Maybe it’s time to change AU Politico!

      1. Hi William,
        Yeah, it’s insane.
        My wife is receiving goods (shoes & handbags) from China within 3 weeks of ordering them on-line & ‘Royal Ascot Mike’ sent me a Jeff Beck CD from England last month, via basic postal services, & it too arrived here within 3 weeks, so I have no proper clue as to why I can’t receive CDs from the USA…crazy stuff!

        1. Hi Martin,

          USA is not an ally of AU? No mail from Colorado – mysterious. For PMcG and me, DSD is an excellent medium for “natural” sound. Zuill Bailey’s DSD excels in sound but Bach can require a lot of concentration.

          I know you are strictly a rock and roll man? If you want to see and hear an exceptional performance of Beethoven’s “Appassionata” Sonata, check out the link, and no one in AU or from Chicago will block YouTube?? Lilya Zilberstein plays: https://youtu.be/zhqlAAGeKF8

          1. Thanks William,
            Yes, 85% of my CD library is R ‘n R.
            My choice was the Gabriel Mervine (Jazz) CD.
            There’s a few of us who feel that the ZB (Bach) CD is ‘funeral music’…depressing, & so that one is definitely a no-go for me.
            Thanks for the link, I’ll check it out later on today when there’s a break in the Cricket 😉

  3. In analog there are only bits of cycles per second, which is natural sound. In digital there are both cycles per second and digital bits, and digital bits are not natural. Converting analog to digital and back to analog cannot improve the sound. In my opinion, pure analog recordings played back on the best analog equipment are as close to reality as it gets. Not to say digital playback of analog, or even pure digital, cannot be enjoyable. I’m not sure how you get pure digital, though, when microphones and speakers are analog. Everything starts and finishes with analog. Even when using fiber optic cables it’s converted back to analog. What happens to the sound during these transfers between digital and analog? Do you lose synergy or continuity? We might not hear it or be able to test for it, but we can sense it.

    1. I agree something is lost in the conversion that cannot be perfectly reproduced, like dynamic range, for example.
      Analog has amplitude and frequency.
      Binary is just a frequency of clocked highs and lows; dynamic range has to be synthesized.

    2. I hate to be pedantic but your statement “Even when using fiber optic cables it’s converted back to analog.” is not really how it works. The digital data is not converted back to analog over fiber optic transmission. An analog waveform is manipulated in a way agreeable to the other end-point to represent the digital bits. In fiber up to 10Gbps it is on-off keying or simply turning the laser on and off with NRZ. At speeds above 10Gbps, mainly 100Gbps and 400Gbps, some form of modulation is used like QPSK, QAM variants, or PAM variants.

      For example, early 100Gbps coherent systems used DP-QPSK, dual-polarization quadrature phase-shift keying. With QPSK we shift the phase of the carrier through all four quadrants of phase: 0 degrees, 90 degrees, 180 degrees, and 270 degrees. With four phases you can represent all the combinations of two bits: 00, 01, 10, and 11.
      The dual-polarization part exploits the fact that light in fiber has two orthogonal polarizations, so we can manipulate the other polarization as well for two more bits. This means each symbol can represent 4 bits, so we only need a symbol rate of 25 Gbaud to send 100Gbps. We do use more than that for forward error correction overhead, which is another long topic.
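
      For anyone who wants to see the arithmetic, here is a minimal sketch of the two-bits-per-phase idea. The particular bit-to-phase assignment below is illustrative only; real coherent transceivers use Gray coding and the exact mapping varies by standard.

```python
import cmath

# Map each 2-bit pair to one of four carrier phases (0, 90, 180, 270 degrees).
# This assignment is illustrative; real systems typically use Gray coding.
PHASES = {
    (0, 0): 0,
    (0, 1): 90,
    (1, 0): 180,
    (1, 1): 270,
}

def qpsk_modulate(bits):
    """Turn a flat bit list into complex symbols on the unit circle."""
    assert len(bits) % 2 == 0
    symbols = []
    for i in range(0, len(bits), 2):
        phase_deg = PHASES[(bits[i], bits[i + 1])]
        symbols.append(cmath.exp(1j * cmath.pi * phase_deg / 180))
    return symbols

def qpsk_demodulate(symbols):
    """Recover bit pairs by finding the nearest constellation phase."""
    inverse = {deg: pair for pair, deg in PHASES.items()}
    bits = []
    for s in symbols:
        deg = round(cmath.phase(s) * 180 / cmath.pi) % 360
        bits.extend(inverse[deg])
    return bits

payload = [0, 1, 1, 1, 0, 0, 1, 0]
assert qpsk_demodulate(qpsk_modulate(payload)) == payload
# Each symbol carries 2 bits; with two polarizations, 4 bits per symbol,
# so 25 Gbaud x 4 bits = 100 Gbps before FEC overhead.
```

      Doubling this across the second polarization is what gets the comment's 4 bits per symbol.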

  4. This is presenting the “3-card Monte”… Lipstick on a pig…

    A man claims that “all men have two legs…” ipso facto, (or ‘by reason thereof’ as the Latin would have it) “if it has two legs, it’s a man…” While the first clause is a concomitant rational statement, the second is manifest non-sense, for we know birds have two legs, and birds are not men.

    https://mojo-audio.com/blog/dsd-vs-pcm-myth-vs-truth

  5. To me that’s a pretty dogmatic approach. If music sounds good to me and brings me joy, why bother whether it’s all digital or all analogue?
    I was trying to feed my Cambridge Audio Azur 851N streamer/DAC with an Octave Records DSD file via USB from my PC. Maybe I’m too silly, but I didn’t get it to work. So even when I just want to check the sound quality of my few DSD files, it seems to be not so easy. Am I wrong??

    Btw: streaming via Qobuz is a very nice thing, when the recordings are well made. Same for vinyl. But you can’t have all the music in the world on plastic 😀

      “..you can’t have all the music in the world on plastic” but you also won’t have the time to listen to all the music in the world, so I’ll keep my plastic 🙂

      1. That’s true! Checking new releases is pretty comfortable when you can stream it. The music I really appreciate will then be purchased on physical media. That’s why I use both 😉

    2. Go to the CA website and search “Windows Audio USB Class 2 Driver” and download it. The standard driver will not play DSD.

      Or just stick to Qobuz.

      1. Thank you, Steven. I have already installed this driver and changed the settings on the Azur. I installed the JRiver media player, but each time I try to open the given DSD files I get an error message. I’m using a Lenovo ThinkPad P70 laptop with the simple onboard soundcard; that’s why I send the audio signal to the Azur. With hi-res PCM files there’s no trouble.

  6. It is against my better judgement that I am commenting here. Putting up a post on analog versus digital is like throwing gasoline on a fire.

    Hz are cycles per second; they are always continuous at audible frequencies, and even at the DSD sampling rates when people use Hz to describe how frequently an analog signal is sampled. At extremely high frequencies, such as the frequency of visible light (roughly 500 terahertz, or 5 x 10 to the 14th power Hz), using quantum mechanics it becomes easier to think of light as small packets of energy that are called photons. These frequencies are more than ten billion times higher than the frequencies of sound. There are no packets or bits of sound energy.

    As to sending a DSD signal directly into an amp, I would not recommend trying it. The DSD signal (which is pulse density modulation – PDM) does have an analog appearance, but the megahertz glitches in the signal due to sampling may cause the amp to go into an unwanted oscillation. A simple low-pass filter to remove the glitches produces an analog signal that is safe to amplify. I think Ted Smith uses transformers to do this in his DAC.
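
    A rough illustration of why a low-pass filter is all that is needed: the sketch below assumes a first-order delta-sigma modulator as the PDM encoder and a crude moving-average filter standing in for a real analog low-pass stage, and shows that locally averaging a 1-bit pulse-density stream recovers the underlying analog value.

```python
def delta_sigma_1bit(samples):
    """First-order delta-sigma modulator: samples in [-1, 1] become a
    +1/-1 pulse-density stream whose local average tracks the input."""
    acc, y, out = 0.0, 0.0, []
    for x in samples:
        acc += x - y          # integrate the error between input and feedback
        y = 1.0 if acc >= 0 else -1.0
        out.append(y)
    return out

def moving_average(stream, width):
    """A crude low-pass filter: average over a sliding window."""
    return [sum(stream[i:i + width]) / width
            for i in range(len(stream) - width + 1)]

# A constant "analog" level of 0.25, pulse-density modulated...
pdm = delta_sigma_1bit([0.25] * 2000)
# ...comes back out of the low-pass filter as roughly 0.25 again.
recovered = moving_average(pdm, 64)
assert all(abs(v - 0.25) < 0.05 for v in recovered[100:])
```

    Real DSD modulators are higher-order and the analog filter is gentler, but the averaging principle is the same.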

    1. Let us take a standard piano, a sophisticated mechanical device in which a very clever system, attributed to Bartolomeo Cristofori, of ‘hammers’ (felt-covered wooden mallets) strikes taut ‘strings’ to produce sounds, fundamentals and harmonic frequencies that propagate through the air. 88 discrete keys for what I will call the standard model (and, yes, I know about those wonderful Bosendorfers). These taut ‘strings’, wires really, of various lengths, gauges, and windings, are tuned to produce the desired notes. But this prelude has rambled on long enough; let’s take a look at a few notes in the, again, ‘standard’ tuning.

      Note: A4 @ 440 Hz fundamental frequency
      Note: A#4 @ 466.1638 Hz fundamental frequency
      Note: B4 @ 493.8833 Hz fundamental frequency
      . . .
      Note: A5 @ 880 Hz fundamental frequency
      Note: A#5 @ 932.3275 Hz fundamental frequency
      Note: B5 @ 987.7666 Hz fundamental frequency

      And 82 more keys, both higher and lower; you can look it up if so inclined. Anyway, not exactly a bunch of whole numbers for the frequencies as defined by the system used. Sorry Paul, not the best example or analogy. Wait, analogy? Analog…y? Hmmm.
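
      Those not-exactly-whole numbers all fall out of one formula: in twelve-tone equal temperament each semitone is a factor of 2^(1/12). Using the MIDI numbering convention (note 69 = A4 = 440 Hz, an assumption of standard tuning), the table above can be reproduced exactly:

```python
def note_freq(midi_note, a4=440.0):
    """Equal-temperament frequency: each semitone multiplies by 2**(1/12)."""
    return a4 * 2 ** ((midi_note - 69) / 12)

for name, n in [("A4", 69), ("A#4", 70), ("B4", 71), ("A5", 81)]:
    print(f"{name}: {note_freq(n):.4f} Hz")
# A4: 440.0000 Hz, A#4: 466.1638 Hz, B4: 493.8833 Hz, A5: 880.0000 Hz
```

      Only the octaves of A come out as whole numbers; every other key is irrational by construction.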

        1. My reading of Paul’s opening argument is that the physical bits of sound come in discrete whole numbers of cycles per second, not an essentially continuous suite of frequencies. At least not once you get above the quantum level.

          What is sound as we messy biological entities perceive it, literally? Consider sound as pressure waves propagating through the air (I know there are other mechanisms that can be perceived, but that is for another day). Air is only approximately an ideal gas. It consists mostly of molecules (N2, O2, H2O, CO2, CH4, et al.) and atoms (Ar, Ne, He, Kr, et al.), mostly N2 (78%) and O2 (21%) by mole fraction. These pressure waves, as the density of the molecules and atoms is temporarily but regularly compressed and rarefied above ambient conditions, interact with the tympanic membrane and mechanisms of the middle ear, and then the fluid-filled inner ear, where the cilia of specialized nerve cells begin the conversion of mechanical energy to electro-chemical pulses. These pulses are transmitted along a chain of nerve cells to the brain, where other nerve cells that are the physical basis of our consciousness interpret the transmitted signals as sounds. Sometimes, when we are fortunate, these sounds are interpreted as music. Yaayyy! It is all an enormous kludge that has evolved and been refined over billions of years since nonliving matter became life, and then life became increasingly elaborate in fits and starts on this “One Strange Rock”. And here we are. Whether you are religious or not, I think the term ‘miraculous’ is not inappropriate.

  7. Paul says that everything is made from bits.
    Tom Waits has a different take:
    Everything is made from dreams
    Time is made from honey slow and sweet
    Only the fools know what it means… Temptation

  8. I have a warm spot in my heart for records. I grew up in the era of fumbling through record stores for music, getting up and dropping the needle on the record and looking at album covers while I listen. Going to web sites/playback apps and trying to “virtually” experience it is, in a way, kind of creepy.

    As far as sound goes, my records don’t pop or make noise unless they are bad, dirty or something is wrong with my turntable setup. Back in the days of records, I seldom flipped components. Since the digital era, it seems I flip them all of the time. For whatever reason, a good analog setup has a less resolving but more coherent and pleasing sound to it. With digital, it’s more resolving, but I seem to constantly want to focus on pieces of the music vs. an overall experience.

    Oddly enough, a good DSD recording of a turntable playing a record sounds better than the straight DSD recording. That is what keeps me up at night.

  9. Hey Paul, I would love to see a video of a DSD stream playing plugged into an amp. I’m not doubting you, it’s just that I never knew it was possible.

    Ed

    1. Sure. Have a look at this: https://www.psaudio.com/wp-content/uploads/2022/01/dsd.jpeg which is a representation of what it looks like. Place that lower stream of thicker and thinner bits into an amp and you get music.

      Also, consider that’s exactly what we do with the DirectStream DAC. Literally. The output internal to the DS is that same bitstream. All we do is run it through some switches to make it bigger (louder) and then a simple filter to remove the rough edges.

      As to the filter, your speaker would be a perfect filter. We just don’t do that because some amps and preamps wouldn’t appreciate the high frequencies it can put out.

      1. Thanks Tony. That’s a JCorder custom Technics RS1500. I don’t have that one but I have a couple of other 1500s. Honestly I use them as an “audio art” display. I have loved R2Rs since the 1970s and still have a collection of 6 big end-of-the-era decks. They all work but I collect them for the cool factor.

        Ed

  10. “p.s. Well done all those responsible for locking up our least favourite tennis player. Made our day”
    And mine!
    I can only hope the Australian Border Patrol will stand firm.
    Today’s post, an invitation for another analog vs. digital discussion? No thank you.
    Or a “discussion” about the world of quarks? Way over our heads!

  11. Hi Paul
    Thought that you might be interested in the similar concept of wave and particle theories of light and sound. In the case of light the duality theory is required to account for diffraction (Huygens’ wave theory) and the photo-electric effect (light particles, photons). The example for sound is dominated by wave theory (propagation of sound from speakers), but phonons (sound particles) can be observed in liquid helium below 1 kelvin. My point is that whether it is digital or analogue source material, both have their correct place in explaining observation.
    Kind regards
    Frank

    1. Having done two years of work studying electron open orbits in single crystals at liquid helium temperatures (4 kelvin), I do not recommend cooling your stereo gear to 1 kelvin to produce phonons. 😉

    2. And if you cool certain gases down far enough, the atoms collapse into a single quantum state called a Bose-Einstein condensate (the quark ‘soup’, by contrast, where protons and neutrons break down, takes extreme heat rather than cold). My layman’s understanding as a ‘hand waving’ geologist cannot properly describe this phenomenon, so I will defer to the real physicists in this group. I merely raise this up for further discussion. Or not. Quantum mechanics takes much study to get beyond the rudimentary concepts.

        1. Thank you, Tony. I had given a quick look to Wikipedia, but not to the extent where enlightenment occurs. As you said a rabbit hole requiring (for me) some serious review first of vaguely recalled knowledge.

          But in parting for now, a bit of old jocularity:

          “Is your symmetry broken? Call the Quantum Mechanics. “

          1. LOL! Before you tackle BEC you might want to review the Strong Force and how it makes the atomic nucleus possible. Also, what I am not up to speed on is whether the discovery of the Higgs Boson helped, hurt or did nothing to BEC. What you really need to find is a young energetic theoretical physicist instead of an old, out-of-date experimental physicist like me.

  12. As you suggest, Paul, the interesting observation that the resolution of the magnetized bits of tape or the molecular structure of vinyl is higher than current digital resolutions doesn’t really lead us anywhere, as this is probably not the reason for the sound difference. Also, that there are always conversion processes of some kind, but extremely different ones, is comprehensible but not very insightful yet.

    So far, theories and even observations that seemed like solid facts have been quite misleading; otherwise, the first CD would have sounded better than tape and vinyl from the beginning.

    I really admired your pragmatism when you said something like “if DSD pressed on vinyl sounds better and even if we don’t know why yet, let’s do it. Then try to find out why and try to improve what we really aim for”.

    I think the practical approach while trying to improve makes most sense.

    I respect your way of fixing a goal (you want to record and produce in DSD, as it’s the theoretically best way, even if there still might be better results elsewhere, and even though different influences and priorities may actually lead to good sound more quickly than a theoretically perfect format does). I’m quite sure you will reach your goal, even if it takes a bit longer until the result is as leading as you wish.

    The practical approach (realizing the best solution now while simultaneously offering the status other formats actually have) is what Giulio Cesare Ricci of the label Fonè and your friend Bob Attiyeh of Yarlung seem to follow. They don’t fixate on DSD without exactly finding out about its real quality status, but do both in parallel.

    Ricci not only records everything in parallel digitally and in analog (since 1983 PCM and analog, since 1998 DSD and analog), he even, partly for test purposes, produced the same LPs once from the digital master and in parallel from the analog master. All to find out what really sounds better. One can even still buy those releases to compare. He, like Bob Attiyeh, who went a similar way, decided to produce their LPs from the analog recording, not from PCM or DSD. They use the digital recordings to produce the digital media.

    They actually tried out (and still do) what we theorize about or write our fingers to the bone about. And I think they just wait to see how each develops and use the best process at the time. Maybe some time they will use what you develop. But you only really know if you also try both.

  13. I am learning so much every day about the DSD versus Analog comparison.

    I truly believe that we are still in the early stages of smoothing out the rough edges of DSD sound.

    If this doesn’t come to fruition, I expect that there will be an announcement of the newest member of the PS Audio line, The “No Regrets” Ultra Stable Turntable, Tonearm, Laser Cartridge combo. Then we will have “A listening experience for all tastes”.

  14. We live in an analog world. Humans cannot make sense of binary square waves clocked in megacycles but “computers” can. Square waves produced from switching voltages between a binary high and low contain nasty harmonics that radiate and ring…
    That said, if digital has the ability to augment or synthesize the human experience, then I am all for it.

  15. At first I was not going to reply; I am totally ignorant of and confused by the new digital stuff (DACs, SACDs, etc.) and most of these posts. I will say that when I heard my first CD I was amazed at the dynamic range, sound, and lack of noise. I bought a lot of them in the ’80s; now I never use them except in the car CD player for convenience. I believe record players were used in early model vehicle radios, and later the idea was discarded for 8-tracks, cassettes and CDs. There is something very personal about fiddling around and setting up an 80 lb record player with all the fancy gadgets, such as the Ginko Cloud isolators, and trying to find methods to avoid feedback from my subs (one downfall of analog vs. digital, I guess). One is always left wondering if something could be adjusted a bit differently for better sound, such as azimuth. I guess a pitfall of being an audiophile is that one never really is satisfied, plus the love of tinkering. I loved (and still do) to page through the album photos while listening to music, especially Kiss Alive (1), and imagine myself at the concert. The tiny CD booklets were somewhat of a joke/rip-off, and after a while I found myself enjoying the sound quality of analog more than digital. Not sure why.

  16. In our world an analog signal could be defined as the continuously varying voltage of a transducer as function of the pressure of the sound impinging upon that transducer.
    A digital signal is a sampling of that signal that results in bits, or levels. To precisely match the analog value, interpolation is required. This interpolation reduces the approximation inherent in sampling.

    In the end it comes down to how much detail one can hear. In our audio world if you cannot perceive the difference between the digital signal and the analog signal then the approximation is working “perfectly” in terms of what you can hear.
    This, I think, gets us back to the issue of measurement versus hearing. If you can hear something bad, it is bad. If you have a bad measurement but it sounds excellent, the measurement does not matter. Perhaps amplifier distortion is an example – the amplifier with the higher distortion measurement often sounds better than the one with the lower distortion measurement.
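
    To put a number on “working perfectly”: with plain (undithered) 16-bit quantization, the approximation error can never exceed half of one step, which is where the familiar ~96 dB figure comes from. A minimal sketch (the 997 Hz test tone is just a conventional choice, not anything from the comment above):

```python
import math

def quantize(x, bits):
    """Round a sample in [-1, 1] to the nearest of 2**bits uniform levels."""
    steps = 2 ** (bits - 1)
    return round(x * steps) / steps

# One second of a 997 Hz sine at 44.1 kHz, quantized to 16 bits:
samples = [math.sin(2 * math.pi * 997 * n / 44100) for n in range(44100)]
worst = max(abs(s - quantize(s, 16)) for s in samples)

step = 1 / 2 ** 15
assert worst <= step / 2 + 1e-12   # error never exceeds half an LSB
# 20*log10(2**16) ≈ 96 dB: the usual "16-bit dynamic range" figure.
```

    Whether that residual error is audible is exactly the measurement-versus-hearing question raised above.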

  17. KiWimagic. Good explanation and so true. In my younger years I would buy…. say a Pioneer A-90 amplifier with .0002 THD @ 200 Wpc, and many years later, to my dismay, hear another brand, say a Bryston, which may have .01 THD at rated RMS power, and of course the Bryston would sound worlds better than the Pioneer. I have learned that the hard way. I don’t pay any attention to specs anymore.

    1. My priority was the elimination of transient intermodulation distortion, not total harmonic distortion. Transient intermodulation distortion wears out the eardrum and soon irritates it at higher volume levels. When you find yourself getting up to turn down the volume but you really can’t explain why, it’s probably TIM. THD is the wrong distortion spec to brag about, in my opinion.

  18. Dear Paul,

    “Sound itself is made from bits called cycles per second.”

    Respectfully, I cannot agree that this unsupported assertion is correct, or that it even makes any sense. I think bits and frequency are unrelated and irreconcilable concepts.

    If this is the starting point, then you have already assumed your conclusion.

    With warmest regards,

    Ron

    1. Not sure what you’re disagreeing with, Ron. A 100Hz bass note travels from zero output to full output (at whatever level it’s played at) 100 times per second. Thus, there are 100 zeros that ramp up to 100 loud bits and back down again in the shape of a modified sinewave. I’d say that pretty much sounds like discrete bits.

  19. If I am understanding it correctly? Here was an eye opener:

    “But, of course, our definition of analog is not accurate. Sound itself is made from bits called cycles per second.”

    So, one could say that a record with bass guitar may manifest bits all over the lower range. 30 and 50 bits, 100 bits.. 400 bits, etc. And, all in between?

    Could you imagine having a digital recording at 70 bits and 500 bits simultaneously?

    Maybe I am not getting it right?

    1. The “bits” or CPS (Hz) are between 20 and 20,000 for us to hear them. Bits higher or lower move the air but our ears can’t respond to them, in the same way our eyes cannot sense infrared and ultraviolet colors.

  20. Our ears probably can’t tell the difference between a sound wave composed of tens of thousands of discrete sound pulses per second and a smooth sound wave. What our ears hear is the artifacts from less than perfect digital to analog conversion in the recording and playback gear. Even if our loudspeakers could produce the discrete energy pulses of a digital sound wave without a DAC, by the time those pulses reached our ears the discrete pulses would have been smoothed by billions of air molecules colliding with each other, blending the energy pulses into a more continuous wave.

    Question: is there any speaker technology that can produce sound from digital signals without first converting from digital to analogue? Or do all drivers require analogue signal? I’m not talking about speakers that have within their circuitry DACs. I mean a speaker that produces sound directly from a digital signal input.

    1. There are features of our acoustico-electric transduction mechanism that act like a Real Time Analyzer filter, so we can discriminate waveform differences by timbral (spectral) content. Our time discrimination is much better, as in consonant-based languages where the transient waveforms of syllables starting and stopping are very different from vowel waveforms, which repeat over many cycles, and carry over half the meaning despite representing ~5% of the time and energy.

      Audio chains all have short-term energy storage and other resonance effects that distort the waveforms of musical consonants, such that audio listeners grow insensitive to time distortions from cognitive “break-in” to the universal time distortions of audio. This produces paradoxical observations.

      For example, Dr. Manfred Schroeder showed that phase information was at least as important as frequency information for speech intelligibility; but 80 years of auditory research has shown that listeners are phase deaf to music.

      Speech and music are both captured by the same physical and acoustico-electric pathway, so why do we respond so differently to these sounds? Here are the data:

      1. Musicians are insensitive to frequency response deviations in audio chains, but very sensitive to time distortion.

      2. MRI and brain damage studies have shown that speech is processed in a distinct region of the cerebral cortex, while music has a complex of processors in a different place.

      3. Further MRI studies have proven that professional musicians grow 10 billion more brain cells, in part in the music cognition processors, and hear time ten times better than the general public; and ten times better than theoretical analysis! (according to the Fourier Uncertainty Principle)

      4. All of modern audiology and audio research is based on measurements using borrowed RADIO technology. (raise your arm when you hear the tone > electronic oscillators and headphones)

      5. For the last 80 years, children in ‘developed’ countries have learned to hear speech in acoustic environments, but learned to hear music through radio and electronic phonograph, disc player, streamer, etc.

      6. Every step, every knob in an audio chain causes time distortion, including any EQ, compression, gating, mixing, panning, anti-aliasing, added reverb, negative feedback, wires, speaker drivers, cabinets, and crossovers, and the listening room.

      The answer: audio evolution is the result of measuring musical hearing that was developmentally stunted by listening to audio in a vicious circle. Most audiophiles have a hole in their head where discrimination of phase would be if they grew up listening to acoustic music or sounds of Nature instead of audio time distortions and other sonic pollution.

      It is possible to construct audio chains that preserve phase, but it means that all chains have to be one microphone to one speaker, with no processing, mixing, or mastering (zero-knob recording); “stereo” is best approximated by a NEAR coincident pair of pressure gradient microphones; DSF is used to minimize phase shift of anti-aliasing; speakers have to be minimum phase with low phase error across the full spectrum AT ALL ANGLES; and your listening room has to have a substantial portion of the room boundaries covered with a specific combination of absorption and diffusion, to cover all frequencies and all reflective pathways (enough to make partners, significant others, architects, and interior designers explode into invective or tears).

      Solstice Salutations!

      1. Your points one and three make so much sense to me. Even though I am a tech head who used to be an expert in data analysis and stuff like that, and even though I love music and have a pretty good stereo system, I am essentially deaf when it comes to timing in music. I hear musicians talk about how some musicians come in at the beginning of the beat and others come in at the end of the beat. I could not tell the beginning of the beat from the end of the beat if my life depended on it. I suspect that this is probably due to my early childhood when my only exposure to music was hearing the AM radio that my mother had on while she worked in the kitchen.

  21. There is data to support the theories of time and distance quanta, so the concept that “everything is bits” has some correspondence to reality. Shannon’s Law says that every information transfer can be measured in bits, so there is mathematical equivalence at some level, too.

    BUT, CPS (cycles per second, now defined as “Hertz”) arbitrarily chooses a 1-second window to measure frequency, so there can be fractional, even irrational frequencies. For example, the standard 12EDO scale used since the 19th Century in European music has frequencies spaced by 2^(1/12), so all intervals other than the octave and all notes other than A are irrational. This is apart from the fact that the second is defined as 1/86,400 of the terrestrial insolation cycle, which has no causal relationship to sound.

    I sometimes tune my bass B string to 10*π .
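
    To make the irrationality concrete, here is a minimal sketch of how 12EDO spaces its frequencies (the A4 = 440 Hz reference is the usual convention, chosen here for illustration):

    ```python
    import math

    # 12-tone equal temperament: adjacent notes differ by a factor of 2**(1/12),
    # so every interval except the octave is an irrational frequency ratio.
    A4 = 440.0  # reference pitch: an arbitrary convention, not a law of physics

    def edo12_freq(semitones_from_a4: int) -> float:
        """Frequency of a note n semitones above (or below) A4."""
        return A4 * 2 ** (semitones_from_a4 / 12)

    c4 = edo12_freq(-9)       # middle C, ~261.63 Hz: an irrational multiple of 440
    octave = edo12_freq(12)   # exactly 880 Hz: the one rational interval
    low_b = 10 * math.pi      # ~31.42 Hz, the B-string tuning mentioned above
    ```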

    Digital audio is usually expressed as rectangular, two-dimensional quantization of both time and pressure (or time and velocity for ribbon mics). Analog is a proportional transduction to a storage medium and back. It has time and level resolution limited by noise, which is a parameter of Shannon’s Law.
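
    As a sketch of the Shannon-Hartley form of that law (the 20 kHz bandwidth and 96 dB SNR figures below are illustrative choices, roughly a 16-bit channel):

    ```python
    import math

    def shannon_capacity(bandwidth_hz: float, snr_db: float) -> float:
        """Shannon-Hartley: C = B * log2(1 + S/N), in bits per second."""
        snr_linear = 10 ** (snr_db / 10)
        return bandwidth_hz * math.log2(1 + snr_linear)

    # A 20 kHz channel at ~96 dB SNR carries on the order of the CD bit rate
    # per channel (16 bits x 44,100 samples/s = 705,600 bit/s).
    c = shannon_capacity(20_000, 96)   # ~638,000 bit/s
    ```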

    BUT, Shannon’s Law breaks down because human hearing has the amazing ability to hear into the noise, and to resolve time differences far beyond the “frequency response”. Our physical transduction mechanism has a low-pass filter at 4KHz and we hear 20KHz when we are young, yet we can discriminate 3 microseconds of delay between right and left ears! This processing synchronization can also decode the directional phase encoding of the outer ear (pinna) to determine the direction of arrival of sound vectors, like gradient microphones and spatial arrays only with far more specificity. This phase encoding transmits three-dimensional information to the neural transducers. These factors together increase the effective information content of hearing >100X over a microphone that is flat to 20KHz!

    Further, the immense parallel processing and fractal memory matching of our sonic sense has real-time predictive powers that enable us to process and cognize the spatial information of room reflections up to 20dB BELOW THE NOISE FLOOR! This only works for analog media, because digital quantization wipes it out. Because the direct sound is stored in echoic memory, we can auto-correlate the waveform of room reflections in the same way that radio receivers recover signals from deep space probes with 70dB noise-to-signal ratios.

    Ears also have a noise threshold 10dB below the best microphones, and can determine frequency with a ten times smaller time sample than theoretically perfect microphones feeding 32 bit DACs, superseding the Fourier Uncertainty Principle!

    Experientially, I can barely notice a difference between 24/384 PCM and DSD, but it is there. DSF combines the best of both worlds by being somewhere in the grey area between analog and digital. It uses a few more bits than 24/96 encoding, but blows it away in sonic quality because it is more like how our neural circuitry works.

    (This post is for the nerds who want to yell “Fourier Theorem” and “Nyquist Criterion”)

    1. acuvox, I have never thought about PCM to DSD to analog the way you just described it. I have always thought of PCM as hard digital and DSD as soft digital ( soft is a good thing here ). But now I see DSD as the bridge between PCM and analog.

      I am perplexed that you closed with “This post is for the nerds who want to yell ‘Fourier Theorem’ …”.

      Fourier’s Theorem is always there when we talk about music. It teaches us that signals exist in a two-dimensional space with one axis being time and the other frequency, and that they can be formed from series of sine waves, cosine waves, and other sine wave variations. Without Fourier’s Theorem we would not have spectrum analyzers.

      1. The Fourier Theorem proves the information of the frequency domain is equivalent to the time domain under two key conditions: that you integrate over time from negative infinity to positive infinity, and also calculate the phase relationships of every sine wave frequency.

        Spectrum analysis would literally take forever under these stipulations, so spectrum analyzers utilize “Fast Fourier Transforms” with a defined window. These can’t express transients properly.
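
        One way to see the windowing problem: within a single FFT frame, the magnitude spectrum carries no information about when a transient occurred; that survives only in the phase, which a spectrum analyzer display discards. A minimal NumPy sketch:

        ```python
        import numpy as np

        N = 256
        early = np.zeros(N); early[10] = 1.0    # click near the start of the frame
        late = np.zeros(N); late[200] = 1.0     # the same click, much later

        # Both clicks have identical (flat) magnitude spectra...
        same_magnitude = np.allclose(np.abs(np.fft.rfft(early)),
                                     np.abs(np.fft.rfft(late)))

        # ...and differ only in phase, which a magnitude display throws away.
        phase_differs = not np.allclose(np.angle(np.fft.rfft(early)),
                                        np.angle(np.fft.rfft(late)))
        ```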

        Further, when the phase information is included, the graphs do not show what really happens when it goes over 180 degrees. This masks the mis-behavior of vented, bandpass, and “Wave(tm)” alignments and even the less distorted sealed box response; and waveform anomalies produced by high-order crossovers.

        A more esoteric problem is the interaction of high-complexity neural processing which can extract information from microsecond features of the received soundwaves and correlation to sound waves received in the previous few seconds, and memories of soundwaves accumulated over a lifetime. This puts you in a Bayesian prediction mode that can hear things much better than our most elaborate and expensive electronics, and APPEARS to violate the Fourier Uncertainty Principle. This is where it matters that our sensing neurons fire in TIME, not frequency.

        1. I have to ask where you learned of Bayesian probability and statistics. The professor that taught me advanced statistical mechanics was big on Bayesian. He used to say, “Only a fool bets on red when a roulette wheel has come up black ten times in a row.”

          1. Bayes’s mathematics came up in a social media debate about the future projections of Artificial General Intelligence (AGI). Biological neural networks are Bayesian, and researchers are now constructing Bayesian engines in silicon, an orders-of-magnitude improvement over Von Neumann, Harvard, flow process, vector, and graphics processor architectures.

    1. Yes, I agree. It “looks” continuous but technically, from my point of view, anything moving through zero is an isolated bit. I think where the confusion comes from is the notion that because it’s a wave it’s a continuous unbroken motion. That if you remove one of the cycles the whole thing collapses. Ergo it must be connected to the whole. But actually, that’s not what happens.

      In any case, it’s just a viewpoint. A way of looking at the world that is a little different in an effort at broadening our understanding of something that (quite frankly) is unfathomable.

      1. Throw a rock in water and we witness a continuous unbroken wave. A digital movie camera captures the event and is able to play it back. Engineering (and technology) have become very good at synthesizing reality. To develop technology a viewpoint is indeed needed to develop and build upon. We live better than King Tut or King Solomon. We can control light, heating & cooling, audio, and video at low cost. It’s a cool age to live in.

      2. One truth that Fourier Theorem teaches is that when you chop a wave into a single cycle, you add an infinitude of frequencies to it. A lot of my speaker design heroes are radio engineers – Linkwitz, Dunlavy, Modafferi, Sequerra, and they understand modulation theory. If you isolate a single cycle waveform, the modulation envelope adds to the spectrum the same as a pulse of the wave period.

        I have come to agreement with Harry F. Olson that the “rectangular gated sine wave” is the best test of speaker performance, because it represents typical musical transient information in a way that is easily interpreted graphically. One of my epiphanies in speaker design was realizing that Qms causes energy to delay in this waveform from the beginning to the end of the pulse, regardless of the number of cycles; and further, that this is a common waveform in music and the time distortion is clearly audible.

        The audibility is enhanced because this is ringing at an anharmonic frequency, to which hearing is very sensitive. If the note has a lot of cycles, or if the envelope is more gradual (Linkwitz preferred a cosine envelope to rectangular), then the distortion becomes very low level and therefore is missed by all other test waveforms.
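
        The spectral spreading of a gated wave is easy to demonstrate numerically. This sketch (sample rate and tone frequency are arbitrary choices for illustration) compares a steady 1 kHz sine with the same tone rectangularly gated to a single cycle:

        ```python
        import numpy as np

        fs, f0 = 48_000, 1_000
        n = np.arange(fs)                        # one second of samples
        steady = np.sin(2 * np.pi * f0 * n / fs)

        one_cycle = steady.copy()
        one_cycle[fs // f0:] = 0.0               # keep only the first cycle (48 samples)

        def energy_outside(x, f0, fs, band=100):
            """Fraction of spectral energy more than `band` Hz away from f0."""
            spec = np.abs(np.fft.rfft(x)) ** 2
            freqs = np.fft.rfftfreq(len(x), 1 / fs)
            return spec[np.abs(freqs - f0) > band].sum() / spec.sum()

        spread_steady = energy_outside(steady, f0, fs)     # essentially zero
        spread_burst = energy_outside(one_cycle, f0, fs)   # most energy lands off-tone
        ```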

        This explained why I have never liked vented boxes – too much “group delay”, which includes 360° phase rotation at the resonance frequencies.

        The FR30 is saved from audible forms of this distortion on commercial recordings because most instruments which extend below E1 (41Hz) have gradual buildup of the note – Contrabassoon, Contrabass Tuba, Piano, and Organ. Even though the piano is a percussion instrument and the bass strings reach maximum in the first half cycle, it takes more than one cycle to transfer the energy to the sound board. Further, commercial recordings filter out bass transients because nearly all consumer speakers will sound ‘muddy’, as do studio monitors (vented boxes).

        Note that 98% of bass guitars have the neck pickup at 22% or less of the string length, so the fundamentals are suppressed below the fifth fret. Bass drums in pop, rock, and even hip hop are tuned to 60Hz or above. Therefore, most content in the two bottom octaves is bass pulses and modulation envelopes rather than waves.

        The time distorting resonance of woofers, vents, and passive radiators also correlates inversely to “PRaT” cognition, and remains one of the frontiers of speaker design. It is symptomatic of the false criteria of frequency response. Human ears transduce sound as a time function, and yet I DON’T SEE SPEAKER MANUFACTURERS PUBLISHING CEPSTRAL RESPONSE.

        Care to start a trend?

        1. This is fascinating. In 2017 I spent seven or eight months trying to decide on upgrading to Wilson or Magico speakers. I kept coming back to the fact that the bass response of the Magico speakers sounded more natural. ( I eventually went with the Magico speakers. ) I wonder if it was as simple as vented versus sealed?

          1. It also depends on the Q of the driver and the cabinet. For example, the Sony ESS-M9 speaker was vented, but the vent and driver were low-Q. It had rather exceptional bass, which John Atkinson raved about. (He played bass professionally, so I tend to believe him.) I only heard them once, in a surround sound demo at a trade show, and the material was sub-optimal for my ear test.

            Most commercial recordings have modified bass to sound good on vented woofers. I collect esoteric recordings with accurate bass. Some you may be able to find are “Superbass” and “Superbass II”. Pretty much everything I have heard by Ray Brown sounds realistic, but these include a second and third bassist which makes the articulation of the bass tracks revealing. If you can’t find those, look for his work on Concord.

        2. This is interesting. I would like more information but don’t know where to start. “Note that 98% of bass guitars have the neck pickup at 22% or less of the string length, so the fundamentals are suppressed below the fifth fret”.

          The highest note on a 24-fret six-string guitar is 1,318Hz. The pickups are made to be sensitive to about 4 to 8 times higher than this to capture the strong harmonic content which is fundamental to the guitar sound. Playing on the pickup next to the bridge will provide a tight, snappy, bright sound. The neck position is characterized by a higher bass content. When I see someone playing a solo on a Gibson LP and they flip the selector switch to the neck position, the majority of the time they are going to play above the 12th fret. This pickup move is to avoid an unwanted shrill sound.
          Changing pickups during playing changes dynamics and tone.

          Thx!

  22. A day late….

    Just for the record (ha!), magnetic tape recording is “a lot like digital”. If you try to record an analogue signal onto tape without the bias oscillator, the extreme non-linearity of the tape is exposed and the recorded sound is horribly distorted. The use of a ~100KHz ‘bias oscillator’ at an appropriate level remedies this – but has the effect of sampling the analogue signal at the bias frequency…..

    Unlike ‘true digital’, the recorded signal isn’t digitized, but it is sampled.

    These days, we can avoid the use of tape completely by doing direct to digital, so the situation needn’t arise; tape’s still pretty distort-y if you want good signal to noise, anyway.

    And yes – worth noting if nobody has done so before me – a DSD signal is a pulse-density signal. If you amplify it, you’ll hear the audio. The speaker is the DAC. This is not the case with PCM, where the data is truly encoded in the bits.
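
    A minimal sketch of that idea, using a first-order delta-sigma modulator as a simplified stand-in for the higher-order modulators real DSD encoders use: the 1-bit stream’s pulse density tracks the signal, so plain low-pass averaging (which is effectively what the amplifier and speaker do) recovers the audio with no decoding step.

    ```python
    import numpy as np

    def delta_sigma_1bit(x):
        """First-order delta-sigma modulator: input in [-1, 1] -> a +/-1 bitstream."""
        out = np.empty_like(x)
        integrator = 0.0
        for i, sample in enumerate(x):
            integrator += sample - (out[i - 1] if i else 0.0)
            out[i] = 1.0 if integrator >= 0 else -1.0
        return out

    # A slow sine, heavily oversampled (a stand-in for a 64x-oversampled DSD stream).
    n = np.arange(4096)
    signal = 0.5 * np.sin(2 * np.pi * n / 1024)
    bits = delta_sigma_1bit(signal)        # every sample is exactly +1 or -1

    # Averaging the pulse density (a crude low-pass filter) recovers the waveform.
    recovered = np.convolve(bits, np.ones(64) / 64, mode="same")
    err = np.max(np.abs(recovered[512:-512] - signal[512:-512]))   # small residual
    ```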

    — P


Stop by for a tour:
Mon-Fri, 8:30am-5pm MST

4865 Sterling Dr.
Boulder, CO 80301
1-800-PSAUDIO
