Personal DSP
Our hearing is a combination of what we receive from our ears and how those signals are processed by our brains.
As our ears change over time, so too does our brain's interpretation. What this means is that we can compensate for the peaks and valleys in our ears' response.
Think of it as an internal DSP.
Our Digital Signal Processor has been running a full-time feedback loop since our earliest days of childhood, continuously adjusting our perception of sound to match the physical reality of our environment.
Thanks to our internal DSP we recognize voices and instruments with as much accuracy today as we did when we first learned them, despite the fact our hearing has changed.
Which is why scratching one's head over the results of a hearing test is probably a waste of time. Our internal DSP hasn't yet learned to equalize for test tones the way it fills in the frequency gaps in the sound of a violin.
So if you’ve ever wondered why it takes a bit of time to adjust to a new system or upgrade, it’s your internal DSP fiddling with the knobs.
It’s enough to drive a measurementist crazy!
I am not convinced by the DSP analogy at all. I would rather guess that our brain is a highly sophisticated pattern-recognition device and an AI computer optimized for prediction. Pattern recognition means that we hold stored patterns as references for the patterns we actually sense. A problematic result of this way of "recognizing" the world is the phenomenon of visual and aural illusions! Thus I have also never been able to accept the marketing claim "You have to get adjusted to the new stereo system!" – a most popular claim of less-than-serious audio dealers!
It's hard to believe, but it is also my experience.
We get accustomed to a certain tonality or handling of transient response. This can mean that, once we have gotten used to a certain sound, we later prefer it even though we didn't at first. If we then never compare back again, or don't do so frequently, we don't really know which sound is ultimately better for us.
Only those with regular access to multiple setups (and, in parallel, to live music) really know their preference among the options. Those who listen to only one (evolving) setup for years may see it as the only truth, but they get stuck in a hole they can no longer leave. Such a setup can be "optimized to death" without real progress.
All this is most obvious with solid state vs. tubes, digital vs. analog, etc. You can limit yourself to one of them. If you don't, it gets expensive 😉
That's very true. One of the great benefits of a good dealer is manufacturer demonstrations. Many audio manufacturer owner/designers used to spend a lot of time travelling the world doing this and I've been to quite a few. It has informed certain key purchases. It is far better than audio shows, as the listening conditions are normally superior and more relaxed, and the sessions longer. These demonstrations cost you nothing other than time and you learn to appreciate different approaches to sound reproduction. You also learn to understand your own tastes, which for valid reasons may be very different from other people's. The end result is that you greatly reduce having to learn these things by buying and selling audio equipment. You also come to appreciate the limitations, if not complete irrelevance, of other people's opinions of perceived sound quality.
My experience is that people (even from different camps) mostly agree about the good sound quality of a setup once it has reached a certain very high level. That's where even different concepts start to sound more similar.
Where people have very individual opinions is at the stage of serious compromises, where there are very different preferences about what's most important, or tolerable, in the first place.
With all the experience I've gathered, I've come to the conclusion that in a full-range setup, bass quality and overall coherence are the most important priorities if one has to make compromises.
A lot of the problem of being a jazznut is that the stand-up bass has a frequency range of about 40-400 Hz, and the lower half of that is the hardest to get right in just about any audio system, not least because it is so room dependent.
I am definitely not a bass junkie, but the ability to hear the stand-up bass properly was a key factor in the decision to purchase the Wilson Sabrina. They would probably not be considered full-range (a single 8″ bass driver), but they just about do it, and I have a room trough at 80-90 Hz. In our new room I've been laying the groundwork for a single REL S/812 to be included.
Great exchange here Jazznut and Steven!
“opinions at the stage of serious compromise”. I think for many hobbies there is also a stage of exploration and learning regardless of financial compromise. I'm not sure trial and error can be avoided when on a personal journey toward increasing enjoyment of and engagement in your hobby. I would rather learn from mistakes and improve than settle and surrender to an "acquired" taste. The discoveries along the way can be just as satisfying as the destination.
It is very easy to hear peaks and troughs with a bass frequency sweep up to, say, 250 Hz. Tube amplifiers often have a peak in the midrange between about 1-2 kHz that gives added warmth with vocals, wind instruments, etc., and a trough in the presence region around 5-6 kHz, which takes the edge off the treble, whereas speakers with a trough in the midrange will sound flat and a rising presence region will be fatiguing.
The bass frequency sweep has to be done at home because it is so room dependent, more so than the mids and treble, which come down more to the speaker design. If a speaker is bright and fatiguing, I don't think there is anything you can do about it other than get a different pair.
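For anyone who wants to try such a sweep at home, here is a minimal Python sketch, assuming numpy and scipy are installed; the 20-250 Hz range, the 60-second duration, and the output file name are arbitrary illustrative choices, not anything prescribed above. Play the resulting file at a moderate level and listen for notes that boom or vanish.

```python
# Minimal sketch: write a slow 20-250 Hz logarithmic sine sweep to a WAV file,
# useful for hearing room-induced bass peaks and troughs from the listening seat.
# Assumes numpy and scipy are available; durations and file name are arbitrary.
import numpy as np
from scipy.signal import chirp
from scipy.io import wavfile

fs = 48000          # sample rate in Hz
duration = 60.0     # a slow sweep makes peaks and troughs easier to hear
t = np.linspace(0, duration, int(fs * duration), endpoint=False)

# A logarithmic sweep spends equal time per octave, which suits bass listening.
sweep = chirp(t, f0=20.0, f1=250.0, t1=duration, method='logarithmic')

# Fade in and out to avoid clicks, and keep the level modest to protect woofers.
fade = int(0.5 * fs)
envelope = np.ones_like(sweep)
envelope[:fade] = np.linspace(0.0, 1.0, fade)
envelope[-fade:] = np.linspace(1.0, 0.0, fade)
sweep = 0.5 * sweep * envelope

wavfile.write("bass_sweep_20-250Hz.wav", fs, (sweep * 32767).astype(np.int16))
```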
I have no doubt that a skilled speaker designer has a very clear understanding of how features of measured response will impact the sound produced by the speaker. Alan Shaw is well-known for having documented every day of his speaker design for the last 35 years in large books, running to many thousands of pages of data and observations. He is also one of the few designers who frequently addresses the limitations of human hearing and sound perception.
So I don't think this is an issue that can be trivialised, and personally I've found component changes (that make a difference) are very easy to perceive. If there is adjustment, it is probably more to do with the limitations of memory for sound. Could you recognise a speaker in a blind test you'd not heard for 20 years? Almost certainly not, but we can instantly recognise people we've not seen for 20 years.
Why 20 years?
Why not 5 years?
I'm pretty sure that if you put a pair of stock standard Celestion Ditton 66's in front of me, blind test of course, I'd recognize them in an instant; especially that MD-500 two & a quarter inch dome midrange driver…dunno if I've got 20 years left in me, the way my wife cooks.
Harbeth loudspeakers are breathtaking to listen to,
& no internal bracing; not one stick.
After owning & listening to the same loudspeakers for nearly 38 years, it does seem to be a bit of a challenge adjusting to the new ones.
Paul
The hypothesis that "all swans are white" can easily be falsified by observing a black swan.
That is why your thesis of the personal DSP is just an opinion and nothing more.
And in the case of audio equipment it does not help to develop better stuff.
On the contrary, it could lead to the view that it all makes no sense anyway because everyone's hearing is different.
Your conclusion "It's enough to drive a measurementist crazy!" may be correct for the people you know, but not for all.
I am looking forward to seeing audiovox’s comments on this one.
So our ears and our brain's interpretation define what we hear. They are a measurement tool by which we qualify and quantify sound reproduction equipment, and then blather on about degrees of correctness.
How many people… designers, reviewers, critics, etc… have taken the time to have their “tool” calibrated and compare those calibrations to ones from 30 or 40 years ago or even to one from last year?
So while measuring sound systems with instruments other than ears doesn’t necessarily equate to good sound, subjective judgement of sound systems without a baseline profile and record of individual hearing acuity equates to just individual opinion.
It seems to me anyone involved in this hobby is crazy 😉
Mike,
What's crazy is trying to define and be objective about an uber-subjective hobby.
There’s only one question that counts:
“Does it sound good enough to spend that sort of money on?”
That’s it; simple.
Now, what else can we talk about? 😉
FR,
Defining what one expects from their 2 channel recordings / setups is not so difficult. I think meeting the desired definition is what can become difficult (expectations).
For some it comes down to dollars and cents (or substitute sense 🙂 ) for others not so much.
If everyone hears slightly differently, then everyone has a different takeaway.
Now gotta go and reboot my DSP; it's stuck in an endless loop. Must be a software issue 😀
Well, I know I am.
Knowing is half the battle. 😀
Great email! Thank You!
Paul, I believe you are saying that our brain is applying corrective processing to restore sounds to fit a recognizable pattern. Therefore, we don’t have to worry about an age related fall off in frequency response, as our brain will process and compensate. I think this is true for all our senses, and the more we bring our senses together, the more intense the memory and internal compensation. For example, in my search for the perfect pizza, my standard got lodged in my brain more than 50 years ago and yet can be instantly recalled through a combination of taste, smell, and sight. However, I’ve come to realize that memories can be deceiving and unnecessarily limiting. In terms of loudspeakers, we can be tricked into falling for certain attributes at first hearing which provide diminished satisfaction over time. Anyway, a fascinating subject.
After 30-plus years with my used Adcom electronics and Snell speakers, I changed systems to a used Technics integrated and Canton stand speakers plus a Jamo sub. It sounded terrible at first, but within a few months, wow, it sounds great, way better than my old rig. So what this tells me is that chasing the best gear is a fool's errand; your brain will compensate and adjust to the gear you have, as evolution teaches us to adapt or die.
I buy used and locally, to maximize quality, so I can test listen before purchase, and so there's local recourse if there is an issue. Cost is my primary consideration along with the sound, i.e., the joy-to-cost ratio.
“Thanks to our internal DSP we recognize voices and instruments with as much accuracy today as we did when we first learned them, despite the fact our hearing has changed.” That depends on the level of accuracy and the degree to which hearing has changed 🙂
“Our internal DSP hasn’t yet learned to equalize for test tones in the same way it does to fill in frequency gaps on the sound of a violin.”
Congratulations Paul, you just killed off over 200 years of bedrock mathematics. Scientists, engineers and mathematicians have been calling me all morning crying to me, asking what will we do without Fourier’s mathematics. We don’t have a substitute. Ask Paul what he has as a replacement.
“So if you’ve ever wondered why it takes a bit of time to adjust to a new system or upgrade, it’s your internal DSP fiddling with the knobs.”
And here I thought it was the equipment that needed break in, not my brain. Ya larns sompin’ new every day. That’s what Roseanne Roseannadanna would have said. So I guess that puts a stake in the heart of the notion of absolute sound. Joseph Fourier and Harry Pearson are turning over in their graves. If you are very quiet you might hear them like I do. They’re also moaning in pain, their life’s work proven wrong.
https://www.youtube.com/watch?v=k59d-xMvooA
Even if the first part appears reasonable, the conclusion makes no sense at all. The field of psychoacoustics is full of reports showing how your ears and brain can fool you.
It reminds me of the old GP that claims that for years he has been treating his patients successfully until you measure his actual performance and show deterioration and worse outcomes for his patients. But in his experience he has done a great job! (It is usually men that do this).
In the field of rheumatology, Fred Wolfe (and Ted Pincus has validated this too), probably the most renowned "data-minded" researcher in his field, has shown for decades how the lack of measuring makes rheumatoid arthritis patients worse. Subjectivists versus objectivists.
You may “like” the sound you get, you definitely got used to it, comfortable, but that doesn’t necessarily mean you are getting the “correct” sound or the sound the musicians wanted you to hear.
It’s well beyond the current state of the art which isn’t going anywhere as far as I can tell.
Do you want to know how much our ears fool us? Look at this video and listen to it. The musicians "sound" (play?) in your room. But they were all over the world, sending their files by fiber or wire to a central master console. And then it was mixed so you hear it through two speakers. But what is real? The video or the sound?
https://playingforchange.com/videos/biko-around-the-world/
A sound field is a real physical entity. It is the job of the designer of high fidelity sound reproduction systems to recreate the likeness of those fields as closely as they might be heard live from recordings or transmissions. Given how primitive the state of the art of acoustic science is in regard to the ability to model, analyze, and measure acoustic fields it is hardly surprising that there has been practically no success in the efforts to duplicate them. Those who try don’t even bother to learn what little is known in that science. Their preconceived notions are way off the mark and are doomed to fail for many reasons. Those who think they have succeeded are either too inexperienced with live music to know what it should sound like or are kidding themselves.
The problem is far more complicated than these people think it is. Those in this world who have the ability to advance this science and art aren't interested in it. Were I starting out today I wouldn't be either. There are far better things to do with my time and energy. Even back in the day this was only good enough to be a hobby, not a career. Those who are in it (and I've met many, watched many others, read what many more have written) are IMO not nearly up to the task. They don't have the intellectual chops for it. It's simply beyond them. But it is fun to watch them like mice running a maze unable to find their way to the cheese. They never give up and each one thinks he's gotten a little closer to it than the others. Are these people worth the kind of money they charge for their failed efforts? No way. They aren't even worth a tenth of what they charge.
I think I am making a different point to you. You are saying that current SOTA is not good enough to reproduce music the way it sounds in concert halls. I don’t think anyone can disagree with that.
What I say is that almost all music makers, EXCEPT those whose music is intended to be heard in concert halls, are fully aware that it will be listened to with 2 speakers (unless it is heard in a car, but that is another issue), and they produce it FOR 2 speakers (or headphones). They KNOW this is what it is and they make it for that SOTA. They may or may not want to provide effects that make it appear around you. They will play with their equipment, their plug-ins, their studio and give you what you should hear with 2 speakers.
I just want to hear what they made for me. I don’t want to add or modify that signal.
The only "acoustic" instrument or voice that can sound relatively realistic at home with 2 speakers is a single instrument, or very few of them, or a single singer, where the acoustics of the recording venue interact minimally with, or contribute little to, the interaction of the speakers with your room.
Once someone invents a new reproduction system, musicians and producers will adapt to it. You get into a car to drive or be driven. But you don’t expect cars (these days) to fly.
Think of your ears as two very good microphones spaced apart by the width of the head. In a live setting they are like a pair of stereo microphones, each receiving sound waves and converting them via three bone structures into a scalar audio signal which, through electrical/chemical neural transmission, ends up in the brain for mysterious processing. Why can't two very good omnidirectional stereo recording mics do a similar thing: receive sound waves from multiple directions and convert them into a pair of audio signals which, when converted back to sound waves by our two-channel audio system, our ears can hear through headphones or a pair of left and right channel loudspeakers?
The main issues are the quality of the mics and the audio recording and playback system (including the room), which distort the signal to varying degrees, and the degree of crosstalk between the left and right channels. (Note: some crosstalk is natural, as some sound wave energy going to each ear does find its way to the other ear through and around the head, and there is probably some crosstalk in the neural pathways to the brain and within the brain as well.) But if the mics are great, the recording devices and playback system are great (source, amplification, loudspeakers including optimal placement, non-reflecting room surfaces), and we limit crosstalk by sitting close to the loudspeakers (or listen through quality headphones), we should hear a reasonably good rendition of the live sound, whether it is a single instrument, multiple instruments, single voice, multiple voices or choir, or any combination, in a dead or highly reverberant room or hall. Our ears will hear what the two stereo mics recorded, and if properly positioned those mics are recording the sound waves our two ears would have received had we been there.
I know I am speaking simplistically when I say our ears are microphones, but that is indeed what they are: the best microphones ever made. The fact that we have a stereo pair of microphones in our head is the basis of the success of stereo music recording and playback. Stereo recording and playback works for single voices in anechoic chambers, symphonies in reverberant halls, and virtually all other instrument and voice combinations. Through technology, microphones will eventually become as sensitive to omnidirectional sound wave energy as our ears, and as our recording and playback systems become virtually free of distortion, there is no reason every recording can't sound live.
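To put a rough number on the "spaced apart by the width of the head" point, here is a minimal Python sketch of the classic Woodworth spherical-head approximation for interaural time difference. The 9 cm head radius, 343 m/s speed of sound, and the function name are illustrative textbook assumptions, not figures taken from the comment above.

```python
# Minimal sketch: rough interaural time difference (ITD) for a far-field source,
# using the Woodworth spherical-head approximation ITD = (a/c) * (theta + sin(theta)).
# Head radius and speed of sound below are typical textbook values (assumptions).
import math

HEAD_RADIUS_M = 0.09      # ~9 cm, roughly half the width of an adult head
SPEED_OF_SOUND = 343.0    # m/s at room temperature

def itd_seconds(azimuth_deg: float) -> float:
    """Approximate ITD for a source at the given azimuth (0 = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

for angle in (0, 30, 60, 90):
    print(f"{angle:3d} deg -> {itd_seconds(angle) * 1e6:6.0f} microseconds")

# At 90 degrees this comes out near 650-700 microseconds: the arrival-time cue
# that two spaced microphones (or two ears) capture and stereo playback relies on.
```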
Hey, if you assume lots of things then you can get where you want. The problem is that we don’t (yet) have the technology to get what you are assuming.
As I said, based on the current technology, this can’t happen. You can tease your brain to think something close to it, but not there yet.
It is like the old economists' joke. How does a bunch of economists get off a deserted island where they are stranded? They "assume" a boat….
It is happening with current technology. Some excellent recordings using excellent omni mics and gear played on excellent playback audio systems sound live. It is not a pipedream. Some of us experience it with excellent recordings on our own home systems, and we are not inexperienced in what live music sounds like.
WRONG.
1) Your external ears (pinnae) are directional phase encoders that translate direction of arrival into waveform modifications that can be de-convolved to track echo patterns and cognize acoustic space.
2) The physical path down the ear canal maintains 2 dimensional encoding of 3D information expressed in echo patterns, which passes through 3D motion of the ossicles and projects in a 2D pattern onto the basilar membrane, for two orders of magnitude more spatial information than can be encoded via scalars.
3) The two channel stereo modality is a learned delusion. People who learned to hear music acoustically have to train their ears for some time before any “stereo” illusion develops.
4) Trained musicians grow 10 billion more neurons than a matched control group because of the superior aural information in acoustic sounds over audio, and they hear music ten times better than speaker listeners. This is by MRI studies, no subjectivity involved.
5) The success of stereo is from the convenience of radio and phonograph, which replaced ear training to acoustic music with ear training to scalar music with massive temporal, transient and spatial distortions. Within eight years of the 1925 loudspeaker patent most people in industrialized US and Europe learned to hear music through electronics (and a few still through acoustic phonographs), so all of the audio research since 1933 was done on loudspeaker trained listeners with developmentally stunted hearing. This includes Alan Blumlein’s seminal stereo experiment, which was mainly Decca Radio Engineers.
6) This major failing in science, perhaps the largest selection bias in history, is what fueled the marketing hype convincing consumers to buy two speakers and two amplifiers. The adaptability of hearing also provided convincing "proof" that 2 channel sound is real stereo for both people who have no idea what real music sounds like, and even those who make music for a living but hear recordings more often than live music and compartmentalize the two divergent experiences.
You can say WRONG in caps all you want. I said it is a simplification to say that ears are microphones, but they are like stereo microphones in principle no matter how many times you say WRONG. I did not say microphones are as good as ears. Of course there are some differences. I have, like you, read about people who have only one ear but can still perceive direction due to ear lobe features and through training to compensate for what they lost by not having two functioning ears. Microphones do not yet have the facility to replicate a human ear including the lobes. But that does not make me wrong in the microphone analogy. Stereo via two ears is the fundamental feature of our hearing that stereo music recording and reproduction emulates, and it works well for most people.
I don't know about your ears and hearing, but my ears were not trained to perceive stereo music as realistic. The first time I heard stereo music through headphones it sounded realistic and convincing. Of course musicians through experience can identify sounds and nuances that others may not be attuned to, just like some people can pick out camouflaged objects more easily than others through visual experience and sight training. We know that the brain creates neurons through experience and need; for example, deaf people often have better visual perception because the brain dedicates more neurons to that task, and blind people have a heightened sense of touch through training and the brain dedicating more resources to that sense. And it is common knowledge that hearing can be improved through training and experience. The brain has amazing abilities to adapt.
But all these do not refute the validity of stereo music as a realistic portrayal of live sound. It is not a perfect portrayal, but it is close enough. You and some other posters just love to pooh-pooh stereo as a technological innovation and put down 2-channel system designers and manufacturers, and yet you offer no practical alternative to the music world.
“Once someone invents a new reproduction system, musicians and producers will adapt to it. ”
I did that 47 years ago. I even have a US Patent to prove it. Nobody was interested. I admit I’m not much of a salesman. If I had to make a living selling anything I’d starve. Paul heard my prototype. He doesn’t talk about it. I can’t say I blame him. It violates every rule in the book. There’s nothing I’d trade it for at any price. It will remain one of a kind until the day someone else reinvents it.
Bill Gates did not invent Windows. Go figure. Steve Jobs did not invent the Apple system either. There was this dude Wozniak around. Go figure. Who do people remember?
CtA,
Many people “remember” Steve Wozniak & many people are very aware of his achievements within the field of computing.
What exactly is your point?
Do you in fact have a point here?
The ignorant is back! I’m not surprised you don’t understand. It’s likely beyond your level of comprehension. Never mind. Go write about cricket. That should keep you entertained.
Wozniak is revered more because he is rich than because he was a kick-ass engineer and programmer – and the success of Apple is due more to the TI and Fairchild inventors of the integrated circuit, Douglas Engelbart who invented the mouse, and the programmers at Xerox who invented the GUI than to Jobs and Woz together.
As for Gates, he licensed MSDOS from Orange Micro, who cloned it from Gary Kildall's CP/M and DRDOS, he got the idea of a GUI from Xerox the same as Jobs, and he released three versions of Windows that were a TOTAL FLOP in the marketplace. The factors that turned Win3 into the most successful product in history included the trick memory management software that permitted loading the multi-tasking and network kernels above 640K, which Microsoft cloned from my HiDOS and Hi386 software developed at RYBS Electronics.
Fine. Good for you. It doesn’t matter how much I try to convince myself, this is not happening. “Some of us”, good for those with superior recordings and home systems.
Maybe mine is not “superior” enough. I can get a pretty good facsimile, as I said, with very small ensembles. But I know I have to suspend disbelief.
Sounds really interesting – violating every rule in the book, would like to know what it is you invented.
Being ‘not much of a salesman’ can’t you get someone else (beyond Paul) to bring it to market for you nowadays?
It would be interesting to hear from someone with one foot in Pro Audio and the other foot in High End, if the progress in the development of microphones has been commensurate with the development of loudspeakers over say, the last 30 years.
Unfortunately, I think most of you have gone to bed, or are listening to music at this hour.
I dropped out of pro audio in 1980 and came back in 2007. There were two innovations in microphones in the last 40 years: the RF technology in the Sennheiser MKH series, and the upgrade of ribbon microphones to neodymium magnets and active phantom-powered electronics. MEMS sensors are innovative, but have yet to equal, let alone surpass, the best dynamic, ribbon and condenser mics. Other than that, I was surprised to see the same models on the shelves after 27 years, with a few updates and Asian, Russian and American clones.
There have been a variety of “surround” microphone arrays, but the best never made it into production (James Johnston’s work at AT&T Labs); and the ITU standard came from the need of commercial cinema theaters, rather than optimized for psycho-acoustics. I have a proprietary ITU mic array that will challenge any out there.
There are lately a whole crop of clones of Michael Gerzon’s “Ambisonic” microphone: Oktava MK-4012, Sennheiser Ambeo, Zoom HV-3R, Core Sound Tetramic and Octamic, Zylia ZM-1 and the descendant of the original licensee CalRec, the RODE NT-SF1. These are not new tech, as I heard a demonstration from Michael Gerzon at the 1978 AES Convention. The measurement, compensation and convolution software is a lot better than 30 years ago, but this was tech pioneered by Richard Heyser starting with his first paper on TDS in 1967 (a real life “rocket scientist” at JPL who was an amateur audio researcher).
As for speakers, the last important invention was the Heil Tweeter patented in 1972 and the 21st Century innovations were DSP crossovers and temporal compensation, digital amplifiers in powered speakers, Neodymium magnets, DUMAX and Klippel analysis, FEA design, the Unity and Tractrix horns, and Tom Danley’s tapped delay line. If I am counting math and measurement, Meyer Sound MNoise is also a critical advance, a signal that emulates real, live music better than any prior tech.
To illustrate the speed of scientific progress (or lack thereof), the tractrix curve was discovered and named in the 17th Century by Leibniz, the co-re-inventor of calculus with Sir Isaac Newton. Calculus was originated by Archimedes over 2,000 years ago, but his books were burned as pagan literature by Christian terrorists. If a Roman centurion hadn't killed Archimedes out of ego, the Roman Empire might have eliminated the Dark Ages and produced the Industrial Revolution 2 millennia earlier. We also still don't know what knowledge of Imhotep was lost to Pharaonic IP protection.
Acuvox, thanks for your informative and detailed response. I only regret that the post was so late in the day; very few will ever see it. In more general terms, it seems like speakers have made significant gains in the past 30 years, if not by new inventions, then by refinement of existing technologies. Microphones, on the other hand, I know little about. I do recall one American company out of the Northeast, possibly Rhode Island, that tried to break into the industry with a tube-based microphone, but I don't think they have survived. I also appreciate your comments on the grand scheme of civilization and the technologies that might have been, except for unfortunate circumstances. Thank you, again.
Since the human body is all analogue, would it not be better to call it an equalizer rather than a DSP? The contents of the post are perfectly understandable. If this is the case, then why do people spend enormous amounts of money on hearing aids made specially for audio? I wonder. Does it have something to do with loudness? Regards.
Luckily for me I’m still young. No changes in hearing yet or at least that I’m aware of. I’m still treble sensitive and have very little tolerance towards sibilance. Probably why I hate a lot of remastered recordings. Louder is not better for me.
DSP is a bad analogy on many levels. As was pointed out, DSP is a serial Turing/Von Neumann machine operating on scalars, while the brain is idiomatic neural circuits grown to mirror external reality transduced by massively parallel but crude raw sensors, a CPU custom built to optimally decode actual acoustic events by ignoring the quadrillions of mathematical permutations that do not exist in physical reality like statistical reverb, a Violin the size of a cow, or a piano with an 8″ sounding board.
We operate as a Bayesian inference engine in massively parallel arrays of computing elements that are analog in both intensity and timing to encode massive amounts of data in linked, parallel and hierarchical fractal compression stores.
The neural timing delays mirror real-time external events like echoes, static values fade from consciousness, and changes are amplified by non-linear differentiators with hysteresis. This enables temporal, spectral and spatial re-construction filters such that we can hear <0.01% of anharmonic inter-modulation distortion and 3 microseconds of time delay, an apparent violation of the Fourier uncertainty principle.
Think about blind bicyclists riding between traffic and parked cars. How would you program DSP to do this? Correct, the whole concept of stereo collapses into un-workability. It also collapses from the one ear test.
When you move around a room, or when a sound source moves around the room, your ears recognize it as the same; but it takes a minute or two of re-calibration for DSP, and even then it doesn't sound exactly the same. Well trained human ears can map a room and its contents from a single un-calibrated transient event, like a twig snapping. Real-time, passive signal correction requires DSP to recognize the 3D sonic object, its orientation, and the acoustic space in real time. Good luck with that!
OTOH, DSP is not delusional and would never mistake two speakers for a phantom center!
Hello:
My early days involving stereo dealt with skiing on the very icy slopes of Cannon Mountain. If you could hear your skis' edges it was going to be a good day! The crowds stayed away from icy slopes. When hand-tuning my skis, I learned to leave microscopic burrs on the edges. The edges worked great. Next, I graduated into reading how all the vibrations were graphed: soft shovels and tails for slalom, and much stiffer front and back for giant-slalom cruiser skis. Heaven before ski chatter! Immediately upon skiing to my front door I hit a tall glass of 100% HOT DAMN watered down with 2-3 Buds. What a nap!
As my family trucking business grew I transported any and all things used in modern shipbuilding, i.e., compound ceramics for hulls and propeller blades for frigates, destroyers and submarines. Much of this freight involved critical tests in sine-wave diffusion, sound elimination and sonar recognition. What a trip: to go from ski measurements, to sound elimination, to tonnage restrictions, bridge weights and specified routes.
Paul, what stories I could tell you over a tall glass of Hot Damn 100's plus 3 PBR's. All wavelength related…
First run of the day to the Last run…AMEN!!!