The timeless dimension

December 3, 2021
 by Paul McGowan

When we finish with an Octave release, we upload the digital masters to a server owned by Sony and located in Austria. The upload takes between 20 minutes and an hour, depending on traffic.

Regardless of how long the upload takes, the data will be identical because our medium exists in a timeless dimension.

Here there are no clocks.

Time does not exist.

And that’s the thing about digital audio: we can transport it anywhere on or off the planet without loss. Using protocols known as checksums, we can be confident each upload or download is bit perfect in this timeless universe.
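
If you're curious what that verification amounts to, here is a minimal sketch of the idea in Python. To be clear, this is only an illustration, not the actual process or tooling Octave or Sony uses, and the filenames are hypothetical: a digest is computed before the upload and again on the received copy, and if the two match, every bit survived the trip.

```python
# Minimal sketch: verify a transferred file is bit perfect by comparing digests.
# Filenames are hypothetical; any upload/download mechanism could sit in between.
import hashlib

def file_digest(path: str) -> str:
    """Return the SHA-256 digest of a file, read in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

sent = file_digest("octave_master_24_96.wav")            # computed before the upload
received = file_digest("octave_master_downloaded.wav")   # computed on the far end

print("bit perfect" if sent == received else "transfer corrupted, resend")
```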

Once we get beyond the delivery phase and enter into the conversion process, our timeless medium enters a different universe, one where time rules the day.

Now everything matters: distance traveled, the road taken, how accurate the clocks are.

Moving from one dimension to the next requires a whole new set of rules and thinking, at least where great sound is the object.


40 comments on “The timeless dimension”

  1. I think that I mentioned ‘WeTransfer’ a few posts ago.

    Ti-i-i-ime is on my side, yes it is…wait…what?….oh!
    (Take 2) Ti-i-i-ime ain’t on my side, not any more 🙁
    (Old Fart Society)

    Shout-out to ‘tarheelneil’, are you out there…somewhere?
    I missed your input on yesterday’s topic…you know…because you’re so ordinary 😉
    I hope that you are well (better) & still ‘with us’.

  2. Paul, why then was the quality of burned CDs said to depend on noise and drive influences?
    Isn’t burning a CD a kind of pure copy or upload process in the spirit of today’s post?

    1. JN,
      A digital copy is going to be identical regardless of media; that’s kind of the whole point of digital storage: it’s always the same no matter how many times it’s copied or to what digital device. Where the noise and quality issues come in is when we take the information in that digital copy into the analog domain. That’s the conversion process Paul mentions, and it’s where the magic (or lack thereof) happens and where literally everything is critical. At least I think that’s what he means; I have to admit to a bit of mystification at his terminology this morning. I’m doing the insomniac thing tonight.

      1. Yes that’s exactly how I understood it.

        But you might remember the discussions about how important it supposedly is to have a certain quiet make of drive, or ripping/burning workstations with linear power supplies, to get the best disc quality when copying a CD or the best file quality when ripping it.

        I never cared, but was this all nonsense?

        1. So the quiet drive for ripping was likely nonsense but the quiet drive for playback was not.

          The difference is important. When you rip a file you extract the data and ignore the clock. Data is data, and as long as there are no errors you’re good (which is why Exact Audio Copy was a good program to use, as it had very few errors).

          Playback is different. When playing back a disc on a disc player, we now have a clock. In fact, the player generates the master clock that drives the entire system, including the DAC. That’s why our PWT was so great at what it did. We did not take the crap clock from the drive, but rather we stored the retrieved data in a buffer and then, at our leisure, generated our own perfect, low-jitter clock and output the data.

            Yes, the buffering of the CD drive was what I first noticed about PSA, and I thought that was really smart.

            But interesting that all the CD burning discussions were nonsense. I remember there were folks quite convinced of how important the burning equipment is and how different the burning results can sound. I think even audio magazines reported this.

      2. I don’t think Paul is talking about analogue at all.

        As Robert Harley explains (see below), applying a very accurate external clock to a purely digital signal path from the digital source to the DAC will reduce jitter and improve sound quality.

        He explains in his book that this is very easy to demonstrate, because you can listen to the sound without the master clock and, at the flick of a switch, turn the external clock on or off. When he did this it was very obvious.

        For all the people the other day preferring CDs to streaming, I suspect it is mainly because their disc spinners have better clocking and less noise than the laptop or cheapo streaming device they use. Investing in quality streaming devices (low electrical noise, low jitter) gives equally good or better sound because the data is the same wherever it comes from.

        Spinners will always have the problem of mechanical vibration. SSDs can generate a huge amount of electrical noise, which is why I chose a device with a SATA drive that is powered off when not in use. These are good reasons to keep music files on a separate network device from the streamer. This is why the dCS Bridge is so good: it has no onboard storage and a very high quality clock, but optimally you can connect it to the master clock controlling your dCS disc spinner and DAC.

        dCS pioneered this arrangement, but plenty of others like Esoteric and Auralic do the same thing.

        1. “…which is why I chose a device with a SATA drive that is powered off when not in use.”

          This is good, but if it’s only switched off when nothing is playing, that doesn’t matter; and if it switches off while a file extracted from it is playing, I’d be interested in how you achieved that.

          1. The music files are buffered depending on the source and the streaming software. Some sources don’t buffer at all. The devices with SSD have an additional internal linear power supply specifically for the SSD drive.

            It seems that the issues that have to be addressed in getting good sound from a server/streamer have been known for a long time, most manufacturers do similar things, and now it’s pretty much about how much you want to spend.

  3. Paul,

    I’ve been searching the site for a way to put a thumbnail on my posts and it has eluded me. Could you point me in the right direction?

    Thanks
    OHT

  4. Despite a century of audio technology advances, the greatest degradation of recorded data content occurs in the initial conversion of mechanical energy to electrical, and the final conversion of electrical energy back to mechanical.

  5. So my Octave files sit on my server, which sends them via a very low-noise processor (total server power consumption 15W) down a 1m ethernet cable (so galvanically isolated) to the streaming card of my all-in-one player, where a dedicated processor reclocks it. Any DSP is applied by the same card and it is then upsampled to 40/384 before wandering off to the nearby DAC.

    I have no idea how this stuff works but, as Tony the Tiger used to say, it sounds Ggggrrrrrreeattttttt!!!

    Jesus from Sonore used to write posts about reclocking on Roon Labs Community (he may still do) whose length and incomprehensibility were only challenged by Galen on cables. Actually Jesus was streets ahead of Galen and occasionally used the ultimate condescension, saying some things were so complicated that they were not worth explaining to mere mortals. The complete opposite of Paul, who teases us with no explanation at all.

    Robert Harley, in his comprehensive and thoroughly understandable “The Complete Guide to High-End Audio” (Fifth Edition, 556pp), explains on pages 190-192 the reasoning and effectiveness of external reclocking when the external clock is controlling the digital transport, upsampler, and DAC. His reference is the Esoteric G-0Rb master clock. He concludes that the alternative approach is to stick a high quality clock just before the DAC, as in my system.

    Reclocking is not rocket science. Rocket science is easier.

    1. Steven, As one who understands both reclocking and rocket science, I can assure you that rocket science is way harder than reclocking. Screw up your rocket science and people can die. Screw up your reclocking and your audio gear gets crappy reviews.

      1. I must disagree with you as the lady in that movie did rocket science in her head armed with nothing else but a stick of chalk. So it can’t be that difficult.

        As one with little scientific knowledge and certainly none about stuff that flies, for most of my life I lived in the fear that every time I got in an aeroplane it would get up in the air and the wings would fall off. The old adage that flying is two minutes of absolute fear separated by hours of boredom does not apply to me; the fear was the middle hours.

        It was then explained to me that the wings would never fall off – I was not convinced having flown in the DeHavilland Comet as a child, famous for wings falling off through metal fatigue. But I digress. The wings are forced up by air so, if anything, the fuselage would fall from between the wings and plummet to earth, whereas the wings would in theory keep going up. This of course was not reassuring at all.

        I have a video that I put on YouTube taking off from Thimphu in Bhutan in a commercial jet (737 I think). We had great weather and after a few minutes of bobbing and weaving the plane just clears a range of mountains. I was later told people avoid going there through fear of hitting the mountains. It only proves that when it comes to science, ignorance can be bliss.
        https://youtu.be/sYfaRZ9f4JI

  6. Once we are in a dimension where time matters (someplace between digital and on the way to analog, which is where clocking matters most), then wouldn’t the clocking on the D/A side be just as critical, or maybe even more so? Of course it’s the chicken and the egg thing, as you would have to convert back to analog to check it against the original analog signal.

    Having to deal with short femtoseconds (or shorter) of light virtually every day of the week, along with the complexity of phase relationships, very small localized temp variations, air currents, electrical noise, timing jitter, and a host of other variables, I can pretend to sorta understand the complexities.

    What I’m not convinced of is that every characteristic and minute detail of the original signal was captured in the first place…. because of input digital sampling timing.

    1. Mike, You are correct. The whole idea that a simple two times 20 kHz sampling rate and an anti-aliasing filter at just above 20 kHz was going to give us the perfect sound forever was a bad joke played on consumers of music everywhere.

      When I was a newly minted Ph.D. in 1980 and was working on acoustic signals used for oil exploration, the frequency range was 20 Hz to 20 kHz (sound familiar?). When we wanted to digitize these signals we used Burr Brown DAC chips that sampled at 100 kS.p.s. (not 40 kS.p.s.) because it was obvious that the higher sampling rate gave better fidelity.

      We were working on tape so there were no concerns about the size of a disc, the size of a symphony, or any of that nonsense. If Sony/Philips had done their homework and started with 2x the sampling rate (~88 kS.p.s.) there probably would never have been a clamor for hi-rez digital.

    2. So, having had some time to let this sink into my ‘bone head’ and after reading Paul’s reply to stimpy2, things have started to make more sense. Apparently clocking at a specific rate is only applicable to the playback side; the accuracy of the recording is determined by how often and how accurately the A/D process samples, as all that side is doing is creating the bits. Reassembling those bits in a “timely” and orderly fashion seems to be the key.

      So then, in addition to timing, I would think absolute voltage stability and accuracy are equally important. It also seems that noise-free voltage stability of the digitized signal is even more important for an accurate representation in the initial digitization process.

      All of that said, apparently most of the losses occur in the conversion of sound waves to mechanical energy and then to electrical energy. Which begs another question: why are vintage microphones still in use and in demand?

      1. Mike, I am sorry, I made a mistake in my post above. It was a Burr Brown ADC chip, not a DAC. All the same things apply to the ADC chip that apply to the DAC chip. If the ADC has poor timing (i.e., jitter) then the entire recording has jitter and is inaccurate. (I wish I had a chalkboard to explain this.) Let’s say that at the exact right moment when the ADC chip is supposed to record the amplitude, the amplitude is 5. Since the ADC chip has a lot of jitter it records the amplitude sooner than it should and records 4.8 instead of 5. Inaccuracy in the timing results in a loss of amplitude resolution (4.8 instead of 5). So instead of a 24-bit recording you end up with a 22- or 20-bit recording.

        The ADC chip and the clock are just as important in the recording process as the DAC chip and the clock are in the playback process. I hope this helps.
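
        If it helps, that 5-versus-4.8 idea can be put into a few lines of Python. This is only a back-of-the-envelope sketch with made-up tone and jitter numbers, not figures from any real converter:

        ```python
        # Rough sketch: a timing error at the ADC shows up as an amplitude error.
        # The tone frequency and jitter values below are illustrative assumptions.
        import math

        freq = 10_000.0   # 10 kHz test tone
        amp = 1.0         # full scale
        jitter = 1e-9     # 1 ns of clock error on this particular sample

        t = 12.3e-6                                                # when the sample should be taken
        ideal = amp * math.sin(2 * math.pi * freq * t)
        early = amp * math.sin(2 * math.pi * freq * (t - jitter))  # sampled a touch too soon

        print(f"amplitude error: {abs(ideal - early):.2e} of full scale")

        # Worst case is at a zero crossing, where the waveform changes fastest:
        # error ~= 2*pi*f*jitter, which is how jitter eats into effective resolution.
        print(f"worst-case error: {2 * math.pi * freq * jitter:.2e} of full scale")
        ```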

        1. Tony,
          Thanks. That’s what I’ve been trying to allude to over the last few ‘bit’ posts. So why do we not ‘hear’ more about that side of things? Is it because the errors are much smaller, or the clocks are much better? Or has it been that the focus and money are on the consumer (playback) side?

          I’m not passing judgement, just genuinely curious.

          1. Mike, It is actually probably the opposite. The playback side in good home systems may be better than the recording side in terms of high quality gear. As to why, it is hard to say. Basically the recording side keeps its mouth shut when it comes to saying what kind of gear is used. Paul is a noticeable exception.

            Neil put me on to an article in TAS Issue 321 written by Anthony H. Cordesman. Give it a read if you get the chance.

            1. Hey Tony,

              I read it once but will give it a read again.

              I’m not even sure why I was curious earlier, since I pretty much already knew the answers.

              It all matters in playback only. If the recording is less than perfect, then the natural next step is to record at a quality that shows off all the capability of the digital playback equipment.

              I’ve picked up some really well digitally remastered recordings from the late ’50s/early ’60s. They were really good recordings in their day, and judged (after the remaster) against a lot of today’s recordings they still stand out as great and maybe even exceptional. I’m starting to think that something in recording techniques and that art has been lost for some reason. Most likely for ease, profit, and speed.

              Audio things seem like a giant closely wound spiral. They go almost full circle but never quite touch.

  7. I’m sort of confused about the first step: uploading the masters, calling the process timeless, and then mentioning that Sony uses a checksum technique (of which there are many types of checksum algorithms) to ensure that they are bit-perfect.
    I have a problem understanding how this technique can be asynchronous, unless in the second step, where you bring time into the equation, there is a synchronous comparison of the masters packet by packet. Could you please explain how this is accomplished?

    1. When you are uploading or downloading or storing or retrieving a digital file (in any way) there is no clock associated with it. At this point it is raw bits that can be verified for accuracy.

      There is no timing associated with it. You could pull half the data out one day and the other half on the next and it wouldn’t matter.

      It isn’t until you get to the conversion process where we add a clock. That’s the point where everything begins to matter.
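
      To make that concrete, here’s a tiny sketch (with a hypothetical filename) showing that a file pulled out in two pieces, at whatever pace, reassembles into exactly the same bits:

      ```python
      # Sketch: retrieval timing is irrelevant; only the bits matter.
      with open("octave_master.flac", "rb") as f:   # hypothetical file
          original = f.read()

      half = len(original) // 2
      first_part = original[:half]    # "pulled out" today
      second_part = original[half:]   # "pulled out" tomorrow; the delay changes nothing

      print("identical" if first_part + second_part == original else "corrupted")
      ```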

      1. Can you recommend an article or a book that describes this verification process or is this proprietary? I’m interested in getting a better understanding of much of this new digital recording technology for my own head.

    1. Various checksums can be utilized in the arena of digital content. In relation to file transfer over Ethernet, the most common checksum is part of the TCP/UDP protocol stack (to check whether the packet is the same as it was when it left).

      A file’s verification can also use a hash algorithm to check its properties against the origination hash. These are mathematical functions based on the hash type (e.g., SHA-1 or MD5).

      It matters not the content or type of file.
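
      For anyone who wants to see the mechanics, here is a simplified Python sketch of both ideas. The checksum function mimics the 16-bit ones’-complement sum used by TCP/UDP/IP (the real protocols also cover header fields, omitted here), and the hashes are the SHA-1/MD5 digests mentioned above:

      ```python
      # Simplified 16-bit ones'-complement checksum, in the spirit of TCP/UDP/IP.
      # Real protocol checksums also cover header fields; this only sums a payload.
      import hashlib

      def internet_checksum(data: bytes) -> int:
          if len(data) % 2:                 # pad odd-length data with a zero byte
              data += b"\x00"
          total = 0
          for i in range(0, len(data), 2):
              total += (data[i] << 8) | data[i + 1]
              total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
          return ~total & 0xFFFF

      payload = b"any file content at all"   # content and file type don't matter
      print(hex(internet_checksum(payload)))            # transport-level check
      print(hashlib.md5(payload).hexdigest())           # file-level hash
      print(hashlib.sha1(payload).hexdigest())          # another file-level hash
      ```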

  8. OK, now I got it. When I was an engineer 40 years ago we employed checksums for entirely different purposes. When you work for a government contractor, asynchronous sequential switching circuits are the biggest no-no that you ever want to include in a digital circuit. Thanks for your answer; it was very understandable.

  9. Can someone explain to this dummy how the processor with the clock knows what timing to assign to the raw data? Does the raw data contain timing information that informs the clock? I assume there are packages of bits quantifying each tiny section of the analog signal curve. The spacing between these little packages must be according to the particular digital format. So, a processor (chip or software program) for that format must know and implement the right timing between those packages?

    Conveyor analogy: packages (containers, boxes, etc.) of different sizes are one by one positioned on a conveyor belt that is moving at a known, controlled speed. Each package has its own bar code label. These boxes are conveyed at varying speeds on a transport conveyor that does not maintain any particular distance between the packages. Then at the discharge end a bar code reader reads each package label and, using various techniques (such as metering belts), re-spaces the packages to their original spacings and discharges them onto an exit conveyor that is moving at exactly the same speed that the input conveyor belt was moving. That allows the packages at the exit end of the conveyor to arrive at exactly the same rate as the packages were originally introduced onto the belt. As long as the packages do not get out of order during their transport, using the package code information and the right space-adjusting technique, the original timing and spacing of the packages can be restored at the discharge end. In the case of audio, the packages are the batches of “0”s and “1”s associated with the discrete electrical voltages on the analog audio signal.

    Is my conveyor analogy basically correct?

    1. It really depends on how the digital audio is sent and by what means. In a CD transport the clock is always running at the same speed. In a DVD player that can send different data rates, the header of the 16/24 bit word contains the information as to what it is and how it should be handled.

      1. So, Paul, in my envisioned conveyor analogy, the series of packages on the conveyor is preceded by a package of coded instructions that tells the recipient at the end of the line the timing and procedure for opening the packages. The packages can arrive quickly or slowly, spread out or all bunched up, but they must arrive in correct sequence in order to be processed correctly. At the destination they are accumulated and can be stored that way or sent immediately for sequential processing. As long as the packages stay discrete and in correct sequential order they can at any time be sequentially opened according to a timing protocol (using a clock). If a package gets mangled or lost, it’s not the end of the world as long as the packages with the critical processing instructions make it through unscathed.

        In audio, I gather, the DAC is a recipient of the millions/billions of digital packages of sound information, and it recognizes the digital format (the “header” instructions at the beginning of the file?) and then, using a clock, it processes the individual sound bit packages in sequence, according to the format’s timing protocol. Depending on the DAC’s function, it can convert the digital packages to analogue output voltages or send the processed digital stream to another digital device.

        What I don’t understand is why a CD transport needs a clock. Why can’t it just read the stored data sequentially and send it sequentially to the DAC and let the DAC with its clock do all the work of setting the timing for opening the bit packages?

        1. Oh wait, I guess the bits read from the CD have to be metered out at a speed that doesn’t overwhelm the DAC. It would be like dumping more packages on a conveyor belt than the conveyor can handle. So a clock in the CD transport determines how fast the bit packages get delivered to the DAC. Duh.

          1. Yes, you’re getting it. One of the problems the engineers of the CD faced was instability in retrieving data from the spinning disc. Depending on the amount of correction and the number of rereads needed to find the data, what comes out isn’t a perfectly timed batch.

            Instead, the system relies upon a variable clock, and that clock varies depending on whether there’s more or less data on offer from the disc reader.

            One of the first things we pioneered was the Digital Lens which solved this problem. Because a variable speed clock has high jitter we wanted to instead output the data with a low jitter fixed clock. To do that with a variable speed data input we needed to build a buffer – a holding tank big enough to store up the bits and then present them in proper order to the fixed output clock.

            That’s why all our digital source products to this day still have the Digital Lens technology built in.
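
            For what it’s worth, the “holding tank plus fixed clock” idea can be reduced to a toy Python sketch. This is only an illustration of the concept, not PS Audio’s actual Digital Lens design: samples arrive in irregular bursts, sit in a FIFO, and leave one per tick of a steady output clock.

            ```python
            # Toy sketch of buffer-and-reclock: irregular input timing, steady output timing.
            import random
            from collections import deque

            fifo = deque()   # the "holding tank"

            def disc_reader(n_samples):
                """Simulate a drive delivering data in irregular bursts (jittery arrival)."""
                sample = 0
                while sample < n_samples:
                    for _ in range(random.randint(1, 8)):   # sometimes a little, sometimes a lot
                        if sample < n_samples:
                            fifo.append(sample)
                            sample += 1
                    yield                                    # wait until the drive delivers again

            def output_one():
                """One tick of the fixed output clock: emit the next sample, in order."""
                return fifo.popleft() if fifo else None      # None = buffer ran dry (underrun)

            reader = disc_reader(32)
            played = []
            for tick in range(48):
                if tick % 2 == 0:                            # the reader only delivers on some ticks
                    next(reader, None)
                s = output_one()
                if s is not None:
                    played.append(s)

            print(played)   # samples come out in order, one per tick, however lumpy the input was
            ```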

            1. Paul, you may wonder why I think in terms of a package conveyor. As an airport terminal architect and systems engineer, one of my specialized skillsets was designing airport baggage handling systems. A lot of the issues in digital word bit transmission and processing, I gather, are similar in concept to baggage and package handling. For example, we had to design our systems to accept bunches of passengers all showing up with their bags at the same time, overloading the baggage input belts. So we used indexing and metering feed belts and variable high-speed conveyors to move the bags away from the check-in area; then downstream we used long runs of indexing accumulating belts to store the bags until they could be absorbed back into the system for processing (comparable to the digital buffer you described). We used metering belts to space the bags evenly and uniformly for baggage security screening, the rate of which was limited by the throughput capacity of the CTX machines.

              Auto-sort systems in large facilities require hundreds, sometimes thousands, of individual conveyors controlled by a redundant, sophisticated computer system that keeps track of thousands of individual bags, each labeled with bar codes and/or RFID chips. When the system loses tracking of a bag or bags, they are automatically kicked offline and recirculated for re-read, a kind of error correction procedure. If they fail the second read they are sent to a manual encoding station for human intervention. Nearly all bags eventually make it to their destination.

              Horror stories include dogs getting out of their animal cages and roaming the baggage lines, fishermen’s styrofoam coolers breaking and spilling ice and fish all over the baggage room floor, and bags bursting and releasing their unmentionable contents. [Animal cages are supposed to be hand-carried, but dogs still can sometimes access the conveyor system. No, they don’t put body bags in the baggage system.] Hardly ever a dull, event-free day in a large airport baggage system.
