Reverb
Reverb is short for the word reverberation. It’s a method of adding “life” to an otherwise lifeless recording.
You hear reverb mostly on voices. Recording studios add it when the original voice recording was made devoid of any room interactions.
To explain reverb another way: it’s created when a sound or signal reflects off the room and the objects in it, builds up, and then decays as the sound is absorbed by the surfaces in the space.
That’s a long and complicated way of saying it’s the difference between walking into an acoustically dead room and a live one. The difference lies in how much sound is reflected or absorbed.
When I walk into a new room I am always listening for how that room interacts with my voice: alive or dead. Perfect rooms don’t suck away sound nor do they overly reflect it.
Reverb is that essential element in any listening environment.
The element that brings acoustic life or death.
Everybody clap your hands!
…then listen intently 😉
‘The sound of one hand clapping’ is supposed to be silence, however it IS possible to ‘clap’ with one hand.
Completely loosen your fingers, but keep them together, & very quickly ‘flick’ or ‘snap’ your hand back & forth, keeping a stiff wrist so that only your fingers can flop back & forth & voilà, you’ve got one hand actually making a clapping sound…myth busted! 🙂
I clap using my opposable thumb. Not too useful for a quick reverb test.
acuvox,
😉
If you want a really good listening room, you need to build a reverberation chamber around it. You’re going to need a few extra dollars, but it’s worth it. It’s not the sort of thing you can do without the wife noticing. A good example is the Eldborg Concert Hall at Harpa in Reykjavík. The entire concert hall sits inside another structure that creates a gap of between 5m and 10m. The extent to which the sound can escape the main hall and bounce back is controlled by 1,200 electronically controlled panels, mostly lined in felt. The reverberation can be changed over a range of about 0.5s to 2.3s. Ceilings are also a bit of an issue so, if you can, you want to raise your ceiling about 20m above the existing one and suspend some panels that can be tilted. If you can also raise and lower the tilt of your floor, that also helps.
Here’s the lovely Ingibjörg Fríða Helgadóttir, who does the guided tours at Harpa, when not singing opera. She sings during all her tours.
https://www.youtube.com/watch?v=NTWqkDrDjic
This concert hall is magnificent internally as well as a true marvel from the outside
Larry
The sound is magnificent as well. The outer shell is a third layer of the building designed by the world-famous artist Olafur Eliasson, whose studio is in Borgarnes, on the other side of the bay. He’s done all sorts of things; I even saw a Paris Opera Ballet production for which he did the sets. He was meant to be designing a new concert hall in London for the London Symphony, but it doesn’t seem to be happening very fast, as a result of which the conductor Simon Rattle has decided to go back to Germany.
https://www.theguardian.com/uk-news/2019/jan/21/first-designs-revealed-new-288m-london-concert-hall
The best acoustic is apparently in the Elbphilharmonie in Hamburg. It cost $1 billion to build. I’ve tried getting tickets; it’s not far from London, but it sells out a year ahead and costs an absolute fortune. One day I’ll get there.
Steven, I was in the Elbphilharmonie in the summer of 2019 when a concert of the Schleswig-Holstein Musik Festival took place there. They gave J. S. Bach’s Mass in B minor (H-Moll-Messe) and it was breathtaking.
There are only a few places where you can hear the music as a whole and yet every single instrument and every single voice can be heard like in Hamburg. As far as I know, German television recorded the event. If you’re interested, try to find it on Google.
That’s the amazing thing I experienced at Eldborg. You can have 120 musicians thrashing away and you hear the smallest detail from some guy at the back. They played a piece by Kalevi Aho (Minea) and it was just astonishing. The B Minor Mass must have been something else. I will get to Hamburg sooner or later.
Thanks, that’s exactly what I need…although a completely sound isolated room would do it for the moment. I’m a more or less “original volume” listener (to the current occasional disadvantage of surrounding humans and pets).
Wow Steven, this made my morning. Amazing. Never heard of such innovation.
Well Paul, I bet that you’re scratching your head now wondering how to redesign the studio at ‘Octave Records’ to get the right reverb 😉
Steven,
Not only is my wife going to notice, although her lack of concern would be the least of my worries in that regard, but so will the Strata management here & my real estate agent.
Reverb is a simple filter applied to improve (mask) vocals that aren’t good or strong enough to sound decent without it. It’s been an easy modification to less than great vocal strength for decades. You want to sing but you realistically aren’t good at it? Let’s slap some reverb on you.
That was my first thought too, before I realized the discussion was about halls. Unless it’s used by design and effectively — thinking Agnes Obel — it exposes the singer. In general, I prefer the unreinforced voice, where I can like it or not.
Reverb is NOT a filter. A filter removes part of the signal.
Reverb is a split, delay, attenuate and feedback function, so it adds more copies of the signal to itself. Reverb algorithms and physical reverb devices do have frequency shaping to mimic the frequency shaping of real reverb, but they do not fool well-trained ears.
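The split/delay/attenuate/feedback structure described above is, in essence, a feedback comb filter, the basic building block of classic Schroeder-style digital reverbs. Here’s a minimal sketch in Python; the delay length, feedback amount, and wet/dry mix are illustrative values, not taken from any particular product:

```python
def comb_reverb(dry, delay_samples, feedback=0.6, wet_mix=0.5):
    """Mix the input with delayed, attenuated, fed-back copies of itself."""
    out = []
    buf = [0.0] * delay_samples          # circular delay line
    idx = 0
    for x in dry:
        delayed = buf[idx]
        buf[idx] = x + feedback * delayed  # feed output back into the line
        idx = (idx + 1) % delay_samples
        out.append((1 - wet_mix) * x + wet_mix * delayed)
    return out

# an impulse comes back as a train of echoes, each `feedback` times weaker
impulse = [1.0] + [0.0] * 299
tail = comb_reverb(impulse, delay_samples=100, feedback=0.5)
```

A real unit stacks several of these combs (with mutually prime delay lengths) plus allpass filters so the echoes blur into a smooth tail rather than a discrete flutter.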
Reverberation does not attenuate sound; it amplifies it in time, space, and loudness. The same sound is heard repeated over and over again, from many directions, in rapid succession, in intervals short enough that the human brain regards all of it as part of the original sound. It alters tonality and dynamics, and creates new harmonies and dissonances. It can create envelopment that is not possible any other way. The devil is in the details. It’s so powerful it can greatly enhance or ruin the enjoyment of music. It can make it impossible to understand speech.

That, BTW, is how the science of acoustics was born in 1895, through Wallace Sabine of the physics department at Harvard University. People complained about the lack of intelligibility of lecturers at Fogg Hall. He was asked to investigate why and what could be done about it. Later he was the acoustic consultant for Boston Symphony Hall, regarded by many experts as the best room in the United States for listening to music and one of the two or three best in the world.
Reverb is essential to a pipe organ’s voicing and character. If a pipe organ is close miked, most of the room’s reverb is lost in the recording and the organ sounds dry and boring, almost synthesized. In my digital pipe organ Hauptwerk system, there are separate stereo channels of recorded reverb from mikes located at different places in the hall or cathedral, so the player can adjust in real time where he wants to be in the room. For digital samples without separate channels of recorded reverb, I process an auxiliary stereo line through a Bricasti M7 reverb module, which allows reverb time and characteristics to be adjusted, or presets modelled after actual concert halls and recording studio rooms can be selected on the fly. With the reverb channels played through separate speakers behind the listening position, the effect is very realistic. I love the ability to change the reverb and liveliness on the fly, as highly reverberant sound can be impressive for a few minutes, but become fatiguing after a while. My favorite reverb time for spaces with pipe organs is around 1.8 to 2.8 seconds. The ideal reverb time depends a lot on the piece being played and how clearly one wants to hear the articulation of individual notes. Long reverb times mask sloppy playing and hide player errors, which is why I often indulge!
JLG,
Wow, a 2.8 second delay on pipe organ music…
I’d feel like I was travelling in the Tardis!
That’s nothing. Many large cathedrals have reverb times of over 6 seconds. Concert halls on the other hand are lower. Carnegie Hall and Boston Symphony Hall have reverb times of 1.7 and 1.8 seconds, respectively.
Reverberation time = time for a 60 dB drop from the original sound level. Remember that reverberation consists of many sound reflections, the earlier of which are typically stronger in intensity and the later of which get progressively weaker. Different frequencies have different reverberation times because surfaces and air absorb, reflect, and diffuse them differently. Even the number and distribution of people in the audience affects reverb time, because they absorb sound.
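The RT60 definition above connects directly to Sabine’s classic formula, RT60 ≈ 0.161·V/A (metric units), where V is the room volume and A the total absorption: each surface’s area times its absorption coefficient, summed. A rough sketch, with a made-up room and textbook-style coefficients:

```python
def rt60_sabine(volume_m3, surfaces):
    """Sabine estimate. surfaces: (area_m2, absorption_coefficient) pairs."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

# hypothetical 6 x 5 x 3 m room (volume 90 m^3)
rt60 = rt60_sabine(
    volume_m3=90.0,
    surfaces=[
        (30.0, 0.10),  # floor: carpet on concrete
        (30.0, 0.02),  # ceiling: painted plaster
        (66.0, 0.05),  # walls: drywall
    ],
)
# adding absorbers (or an audience) raises A and shortens the reverb time
```

Since the coefficients vary with frequency, running the same calculation per octave band gives the frequency-dependent reverb times the comment describes.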
Like herding cats eh 😉
The RT of very large reverberant spaces like cathedrals can be as high as 11 seconds, although most churches are not nearly so long. Most of my simulations are around 5.5 to 6.5 seconds. This is why so much pipe organ and choral music meant to be performed and heard in these spaces is slow. Beranek’s paper comparing the acoustics of 59 concert halls shows a negative 80 percent correlation between RT and clarity. Von Karajan liked Boston Symphony Hall better than Vienna’s Großer Musikvereinssaal because the mid-frequency RT of Boston is 1.8 seconds while Vienna is 2.0 seconds. That allowed him to conduct at a faster tempo.

In many musical compositions the music builds up, getting faster and faster and louder and louder, until at one point the orchestra stops momentarily. The conductor waits for just the right moment, when the reflections die out, to play the next note. It’s the musical principle of tension and release. When done properly it adds enormous drama and impact to a performance. When absent, as on many recordings, it is a discontinuity that is musically very disappointing if you know what the real thing should sound like. The lack of suitable reverberation as heard in a great concert hall or other suitable performance site makes high fidelity sound reproducing systems utter failures IMO. This is why I spent so much time and effort learning how to reconstruct it. Getting it right is no easy thing to do.
You are right, RT for some cathedrals is 11 seconds or longer. RT for St. Paul’s Cathedral in London is 13 seconds! On rare occasions people fill the cathedral and reverb time drops as much as 50 percent. When playing organ or listening to organ music I prefer concert hall and medium-size church reverb times in the range of 1.8 to 2.8 seconds because that allows the character of the individual pipe voices to be heard yet still provides the spatial bloom necessary for the organ to sound full and majestic. I only bump the RT above concert hall level when I want to impress guests or experience temporary euphoria. The nice thing about Hauptwerk is you can switch among chamber organs, concert hall organs, and large cathedral organs in seconds without leaving the console bench.
So I guess when we are listening to headphones we are listening in what would be an acoustically dead room. Hmm, I wonder why headphones sound good in that environment but speakers cannot? I guess if speakers were made to sound good in an acoustically dead space, as headphones are, they would sound terrible in our rooms. Unless our rooms were acoustically dead. Open-ear headphones as opposed to closed-ear do have a different sound, but both are pretty much still in an acoustically dead environment.
If there is reverb in the recording and we also have our room reflections, are we listening to double or enhanced reverb? Headphones don’t have double reverb. You hear the recording as intended. Still, I prefer room sound over headphone sound, since live music is in a room and I want the musicians in the room as opposed to a small space between my ears. That said, there are some headphones that do a great job of out-of-head sound, like my Beyerdynamic DT990 Pro and my Sennheiser collection. They need to be open-ear headphones to do that. My B&W P7 headphones are closed-ear for privacy but sound extremely musically accurate and transparent. Great bass that gives you a sense of feeling the bass as speakers do.
A room with reflective surfaces does indeed add to the reverb in recordings, although most rooms are not big enough to add much. Rooms in homes, unless huge, have reverb times of a small fraction of a second. The reverb time for each frequency varies depending on the geometry of the room and the materials of the reflective and absorptive surfaces.
Headphone reproduction of recorded reverb is FAKE reverb because it does not have vector information like real reverb. Typical commercial “mixes” attempt to predict what they will sound like in the listeners’ rooms and fit into the accidental reverb. Fake “stereo” mixes of multi-mono track recordings all have artificial reverb, which sounds worse through headphones than through speakers.
The best reproduction of reverb is near coincident pair recordings in real performance venues through headphones or a well diffused listening room – but that is still a learned illusion, just a more real and more consistent one.
Well recorded real reverb sounds most realistic to me through headphones. My loudspeaker system as fine as it is with many drivers strategically spread throughout the room cannot quite match the quality, clarity and realism I hear through headphones. But I hate wearing headphones, so I settle for almost as good.
As for vector information, our ears are like microphones in that they receive sound vector information (sound from all directions) and convert it to a scalar signal via the inner ear bones (malleus, incus and stapes). Unfortunately recording microphones don’t do as good a job of receiving and summing vector information as our ears. Microphones have limited sensitivity to reflected multi-directional sound. If stereo pairs of microphones were as sensitive and efficient as our ears in the collection and summation of multi-directional sound wave energy, there is no reason what we hear in stereo headphones using a good audio system could not virtually match reality. The vector-to-scalar conversion would just have been done by the microphones instead of our ears.
Headphones take the room factor out of the equation and we hear the recording as it was intended. It gives us a nice reference as to what our speakers should sound like in a bigger, more open environment. I don’t like wearing headphones too long, but good open-ear headphones are the way to go. I do like the closed B&W P7 for privacy. Great bass and ruthlessly revealing of the recording. Comfort is very important and the headphone manufacturers are going out of their way to address that.
My dream has always been to hear stereo loudspeakers that can match the sound reproduction of good stereo headphones. So far I have not heard loudspeakers or studio monitors that do. Loudspeakers are generally more laid back and easier to listen to, but to me the clarity, resolution of detail, full frequency presentation and ability to create an enveloping, truly convincing soundstage with the air and reverberation of the recorded venue at all sound levels is where headphones hold the edge. Having said that, headphones are not perfect and I hardly ever listen through them. Their ruthless accuracy can be fatiguing or grating with bad recordings. “Room effect” is not always a bad thing.
My vintage EPI speakers could make my room sound like headphones. Also my NHT 2.9 or 3.3.
My loudspeakers sound “like headphones” but not the same, especially on high resolution large hall recordings where I want to hear the last ounce of attack and reverberant decay. Also, through headphones I can hear the most subtle nuances of organ pipe tone and speech, such as the hollowness of the Hohlflöte, the shimmer of vibrating tin on the Diapasons, the movement of air at the pipe mouths and all the buzz of the reeds–subtleties that are not as precisely rendered by loudspeakers or monitors. When I use headphones, I’m not just plugging run of the mill cans into the headphone jack of my DAC. I use a balanced Woo Audio headphone amp in which I have replaced stock tubes with N.O.S. tubes, including the rare and wonderful Tung-Sol 6SN7GTB. I also use Stefan Audio Art’s balanced headphone cable on my Sennheiser. To me the sound is the best there is.
JosephLG, I think based on my experience so far I have to agree with you. Headphones are some seriously immersive beasts and they are my first love when it comes to serious listening. Best part of all: you can put together a very elite headphone setup that sounds like $120,000 speakers for a fraction of the price. I’m with you on this.
Absolutely. I think people who berate headphones have just never listened to a really fine headphone system. Components of headphone systems have to be selected and tweaked just like loudspeaker systems to sound their best. Also, I don’t think my ears or brain had to be “trained” to enjoy headphones. I remember I was wowed at the realism the first time I put on a stereo headset in college in the music listening room at the library. Having said that, I still enjoy my room-filling 35-driver surround loudspeaker system for my digital pipe organ and my 2-channel loudspeaker system for CDs. Headphones are great when you want the maximum “being there” experience or don’t want to annoy the neighbors or others in the house with your music.
I love headphones; I just hate wearing them. Not that they are uncomfortable, because manufacturers have improved on that since the Koss cans; I just cannot stand the wires and isolation, even with open-air headphones. Yeah, there are wireless ones, but there is something missing with those.
Hey Joe. Yeah, I get you, and wired is unfortunately still the best way to go. With wireless streaming and Bluetooth, I feel a loss of transients is the biggest problem wireless music faces. That to me is the major thing that’s missing; however, streaming/Bluetooth and wireless headphones are improving a lot these days.
An early way of adding reverb to a recording was a microphone and speaker within a separate bare room using the reverb characteristics of that room.
Also beloved of guitarists etc. are the spring and the plate reverb, though nowadays both can be simulated digitally.
Not a fan of adding too much reverb; it’s like singing in a bathroom or spicing up a poor recording.
Yes or making a singer sound better than they are.
Yes, agreed Joe. At least it’s not quite as bad as Antares Auto-Tune. Sometimes both are used together… Arghhhh )-:
A few years ago I saw a very interesting documentary on German television about the construction of the Elbphilharmonie in Hamburg and in particular the special attention to acoustics.
If I ever get the opportunity, if only to see the interior, I’ll go there. Sad they are losing a lot of money right now.
And talking about big projects: last week I saw an also very interesting 4hr documentary about the construction of the Sydney Metro.
The first part opened in 2019. Final stages/extensions planned to be ready around 2050. In terms of costs we’re talking tens of billions here.
jb4,
The ‘bit’ that goes past where I live will be completed by 2025.
But by 2050…I’ll be 90.
We’re paying for it by selling about 130 billion dollars worth of Iron Ore to the Chinese every year; which of course they will use to build up their navy ships & their armaments…it’s all good 🙂
I didn’t want to hurt your feelings Fat Rat, but now that you mention it….
If I remember well, the TBMs (Tunnel Boring Machines), or at least big parts of them, were constructed in CHINA. (Clearly the Australian government didn’t ask for your advice.)
But correct me if I’m wrong.
jb4,
As I’ve stated to ‘Soundmind’ in our recent discussions, it is impossible to hurt my feelings as they are protected by a generous coating of Teflon (probably also made in China).
I’m very much a believer in free speech & freedom of expression even if it is against myself.
I’m not one of these fragile ‘Namby Pamby’ egos who can dish it out but can’t take it.
Please refrain from feeling the need to apologize in advance, or at any time for that matter, about my feelings…they will survive 🙂
Yes you are correct, they didn’t come to me for advice.
We (Australia) are having a new fleet of submarines built by a French company, which really surprised me since everything else does appear to be built in China.
Of course the CCP, in its infinite totalitarian wisdom, has stopped buying our wheat, barley, lobsters, wine, coal…basically everything that they agreed to buy from us, except iron ore of course, just because we (Australia) won’t let them take over the place with their ‘Belt & Road’ initiative, with their Huawei 5G network & by bringing in new laws that forbid foreign interference into our affairs.
They can’t do without our iron ore though & if it was up to me I’d stop selling them that just to show them that they don’t call the shots…but apparently they do.
However, what they (the CCP) are really pissed off about is that little Australia suggested to the WHO that a team of scientists should go to Wuhan in China to establish the origins of the coronavirus, which WHO scientists & virologists are now in the process of doing, to basically find out who is lying about what (as if we don’t already know)…it’s all good 😉
As luck would have it I was exposed to and was fascinated by reflections, echoes, and reverberation since I was an infant. In fact within days of being born. For the first 2 1/2 years of my life I was exposed to them at least several times a day. 47 years ago using analytical tools I’d learned in school I found a way to turn it from a crude art audio engineers use and a primitive science I later learned acousticians had developed into an exact science using a unique physical and mathematical model that allowed me to understand it completely. That model is straightforward with few tricks that should be easily understandable to physicists. I learned how to measure it, analyze it, and engineer it using methods other than just using the architecture of an environment. I even adapted it to use with commercially made recordings, CDs being the best source. I can mold sound pretty much anyway I want to.
In this regard the term “accuracy” the way audiophiles use it makes no sense. As a scientific problem it can only be approached with anechoically made recordings, precise measurements of a space to be duplicated, and a special laboratory that is still enormously complex, expensive, and difficult. However, similarly convincing results can be achieved, and while not easy, it can be done. I’ve released some of the information in my patent application but not all. It has generated no commercial interest and likely could not be developed into a commercial product. It has been cited a number of times by other patent applicants and others.
It has been a great deal of fun building and experimenting with it over these years, I’ve learned a lot from it, and I’ve really enjoyed the unique pleasure this one of a kind system provides. Since these reflections are most of what you hear at a live performance they really need to be far better understood and dealt with if you have any hope of achieving high fidelity to the sound of live music. As I see it the high fidelity industry dipped its toe in the edge of the ocean of this problem with quadraphonic sound and drowned because it was in over its head.
“In this regard the term “accuracy” the way audiophiles use it makes no sense.” I cringe when I hear audiophiles talk about “realistic” vocals. The vast majority of recorded vocals are close miked in mono and then located in their place in the stereo mix by an engineer or producer using a “panpot”, and are as often as not enhanced by artificially added reverb. Part of most singers’ technique is using the “proximity effect” of close microphone placement as part of their “sound”.
I’ve been experimenting with pop singers like Julie London, Peggy Lee, Frank Sinatra, Judy Garland etc. and getting some very pleasing results. It strikes me that when the older pop singers prior to the rock era sang “the standards” I always enjoyed them. I’m not 100 percent any one genre. I’ve been listening to Dave Brubeck, George Shearing, Billy Taylor, Marian McPartland, all with excellent success. I have a wide range of adjustments and my sound system can make any recording made after about 1958 sound great. Unfortunately I’ve got quite a few from earlier eras that were never enhanced, as can sometimes be done with modern software, and they just don’t please me.
“Since these reflections are most of what you hear at a live performance ”
I take issue with this! While it is true that most of the seats in a symphonic or operatic hall of 19th Century dimensions are beyond the Schroeder limit as you suggest, I believe that acoustic music is best experienced in the front seats which receive more direct sound energy than reverb. Historically, instrumental music evolved for >43,000 years in less reverberant spaces – clearings in the forest, plains, longhouses, open plan temples and palaces, rice paper walls, well furnished rooms, etc.
Choral and sacred music was performed inside heavy reflective walls and ceiling of chapels and cathedrals, or caves in earlier times; but they do not serve well for plucked and struck instruments, or lyric intelligibility as in popular song or call and response.
During the Medieval, Renaissance, Baroque and Classical eras, elite architecture of music patrons was reverberant as built, but not so much as furnished – except for sacred spaces, which eschewed fabrics and furnishings. Sacred instruments like the mighty church organs (which took several humans to power) were constructed and orchestrated for high reverberation, but chamber instruments like lute, recorder, reeds, and viol needed a more hushed space for nuances of articulation.
The large reverberant halls in Vienna, Amsterdam, Bayreuth and Boston are an artifact of economics in music after the fall of the aristocracy and the need to sell thousands of tickets to the bourgeoisie. The music adapted, becoming simpler, less articulated and louder from instrument design and massed sections of like instruments.
The huge caverns enabled by the excesses of the steel age are just too much. This hit an extreme with Philharmonic/Fisher/Geffen Hall where the reverb swamped the music in the first row! I have avoided it nearly all of my time in New York. Stern and Starr are reasonable from the seventh row, but I far prefer 600 seat Zankel, 400 seat Weill or 250 seat Gilder-Lehrman.
I developed the opposite, a 55-seat chamber hall with carefully tuned acoustics to contain a Steinway Model D concert grand. This had enough REAL reverberation to feel enveloped, but with a delay between a healthy dose of initial reflections (for intimacy, per Beranek) and the longer fade into silence, for unparalleled articulation.
Here is a commendation of my results:
https://www.nytimes.com/2014/03/27/arts/music/piano-by-jacob-greenberg-and-reinier-van-houdt-at-spectrum.html
If you listen to the same sounds outdoors, without any amplification either electronic or from reflections, and compare them to the same kinds of sounds indoors, even in a high school auditorium as I did when I was about 12 or 13 years old, you’ll hear an astonishing difference in many ways, but chief among them is loudness. The sound heard with no reflections is exactly the first sound you’d hear anyplace if you were at the same position and distance in relation to the source. The rest is all due to reflections.

There’s plenty of documentation that confirms the relative proportion of loudness of reflected sound to the first arriving sound. If Dr. Bose got nothing else right, nothing else of importance, the one thing he said in his original white paper for his 901 speaker is that, measured 16 feet from the performing stage of Boston Symphony Hall, 89 percent of the sound arrived from reflections. He also had a graph in that white paper showing that the further away from the stage you went, the greater the proportion of reflected sound. This makes complete sense, since the first arriving sound is reduced by 6 dB with every doubling of distance while the loudness of the reflected sound is quite uniform throughout the space. Beranek’s favorite seat in that hall was at the center of the first row of the first balcony.

The volume of Boston Symphony Hall and Avery Fisher Hall is about the same, roughly 670,000 cubic feet. But there is a world of difference in the sound. Boston is regarded as among the best while Avery Fisher is regarded as awful, despite many attempts by “experts” and a lot of money spent trying to fix it in the 55 years since it first opened. BTW, the acoustic consultant for it was Beranek. He argued with the project manager, the architects, and the owners, but he lost all of his arguments. He should have quit before he was fired. What went wrong? Nearly everything possible.
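The arithmetic behind this comment (direct sound falling about 6 dB per doubling of distance while the diffuse reverberant level stays roughly uniform) can be sketched in a few lines. The levels below are illustrative assumptions, not measurements of any real hall:

```python
import math

def direct_level_db(ref_db, ref_dist, dist):
    """Inverse-square law: level drops ~6 dB per doubling of distance."""
    return ref_db - 20 * math.log10(dist / ref_dist)

# assumed levels: 92 dB direct at 4 m; diffuse reverberant field ~80 dB
reverberant_db = 80.0
levels = {d: direct_level_db(92.0, 4, d) for d in (4, 8, 16, 32)}
# direct sound: 92 dB at 4 m, ~86 at 8 m, ~80 at 16 m, ~74 at 32 m;
# beyond roughly 16 m the uniform reverberant field dominates,
# so most of what reaches a mid-hall seat is reflected energy
```

This is why the reflected proportion grows with distance from the stage, exactly as the Bose white-paper graph described.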
Whenever I hear the word “reverb” I immediately think of the recording technique and the way it was so well used to capture James LaBrie’s voice on the track “Voices.”
If anyone is interested the band is Dream Theater and the album is called “Awake”. For me it is the best progressive metal album to come out in 1994 and one of my favorite albums of all time. It is an absolute beast with very engaging sonics. The talent is off the charts and David Prater’s mix is beautiful.
Nephilim,
I have Dream Theater’s 1992 album “Images and Words” and the 1999 album “Metropolis Pt. 2: Scenes from a Memory”.
Both excellent albums.
I also like “The Astonishing”. (opinions about this album vary !).
To me “Awake” sounds sometimes a bit more like (prog-)metal than progrock.
But I’m certainly no expert in all these different styles.
There are still a few DT albums I didn’t listen to, so that’s a job for the near future 🙂
My friend JB4, have a listen to them all! Dream Theater’s entire discography is worth exploring; however, with keyboardist and multi-effects wiz Kevin Moore I feel DT were at their strongest. 1994’s Awake is no doubt prog metal and is an album that really taught me a lot about how electric guitar can sound. 🙂
Getting outside the Moore era, I absolutely recommend their latest release, “Distance Over Time”, on Blu-ray Audio. Jimmy T. Meslin did a fantastic job mixing the album and giving us DT fans the best fidelity of any DT studio album ever done.
It is a winner, but you need that Blu-ray disc because the CD mastering is way louder. 🙁
Paul,
You missed your chance – or perhaps you aren’t enough of a geezer.
You said:
The element that brings acoustic life or death.
When, for us geezers, you could have said:
The element from which “springs” acoustic life or death.
I apologize in advance, as kids today, who haven’t played a guitar that required actual cords to the amp, probably won’t get this.
=;0)
Here are different types of artificial reverb. They all sound, well, “artificial.” Only a few reverb processors can come close to approximating actual concert hall or cathedral acoustics. The best are modelled after actual spaces, or utilize actual reverberation recordings of the spaces they simulate.
https://www.google.com/search?q=how+reverb+was+done+in+early+days&ei=A6AVYM7gA4f2tAXwsoDYAw&start=10&sa=N&ved=2ahUKEwiO7trGn8TuAhUHO60KHXAZADsQ8NMDegQIGBBI&biw=1824&bih=891
I think this pretty much covers it. https://anotherproducer.com/online-tools-for-musicians/delay-reverb-time-calculator/
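For what it’s worth, the arithmetic behind tempo-synced delay/reverb calculators like the one linked is just fractions of the beat length: 60,000 ms divided by the BPM. A sketch using common music-production note divisions (the divisions and the dotted-eighth convention are generic assumptions, not values from that particular page):

```python
def delay_times_ms(bpm):
    """Tempo-synced delay times: one beat (quarter note) = 60000 / bpm ms."""
    beat = 60000.0 / bpm
    return {
        "1/4": beat,
        "1/8": beat / 2,
        "1/16": beat / 4,
        "1/8 dotted": beat * 0.75,  # a common choice for rhythmic echoes
    }

times = delay_times_ms(120)  # at 120 BPM a quarter note lasts 500 ms
```

The same beat length is often used to set a reverb’s pre-delay so the tail breathes with the track’s tempo.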
As JosephLG suggested, the other cited descriptions of reverb by RT60 are ridiculously reductionist, and most acoustic designers (including the fabled John Storyk) miss the essential complexity. tonyplachy’s link is a good example of how reductionism degrades musical reality in over-produced pop music.
Artificial reverb is really all you can cram into 2 channels, which is a good argument for why less is more in recordings. The closest you can get in commercial fake stereo is the Near Coincident Pair technique, which is exceedingly rare. I find the commercially necessary artificial reverb in mixed multi-track recordings mostly unlistenable. I watched an hour-long documentary on the making of “Aja” and found even their use annoying, and it was probably all plate reverb, the most realistic electro-mechanical system. The composition, virtuosity and orchestration were fascinating, but I found the dry solo tracks more pleasing than the mix. They revealed many subtle tracks that were buried in the mix by reverb. I would LOVE to take the multi-track master and play it back with one speaker per track on a stage using real room reverb. This is the only accurate rendering.
Soundminded is correct that real reverb comes from all directions, but it is more complex than that. Well trained human ears can decode dozens, even hundreds of individual reflection vectors to simultaneously map room boundaries and triangulate all sound sources contained therein, in a game of three, four, five and more cushion sonic billiards.
99% of digital reverb is “statistical”, meaning a histogram of the delay and decay instances matches target room statistical measurements. This is scrambled reverb, and it scrambles brains.
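A minimal sketch of what “statistical” reverb means in practice: an impulse response that is nothing but exponentially decaying random noise, shaped so its level falls 60 dB over a target RT60. No room geometry is modeled; only the decay statistics match. (The RT60 value and sample rate below are illustrative.)

```python
# "Statistical" reverb sketch: the impulse response is random noise
# under an exponential envelope that reaches -60 dB at t = RT60.
import random

def statistical_ir(rt60_s: float, sample_rate: int = 48000) -> list[float]:
    random.seed(0)  # fixed seed so the sketch is reproducible
    n = int(rt60_s * sample_rate)
    ir = []
    for i in range(n):
        t = i / sample_rate
        # amplitude envelope: -60 dB at t = rt60  ->  10 ** (-3 t / rt60)
        env = 10 ** (-3.0 * t / rt60_s)
        ir.append(env * random.uniform(-1.0, 1.0))
    return ir

ir = statistical_ir(1.5)  # 1.5 s decay, 72000 samples at 48 kHz
```

Every “reflection” here is a random sample with no physical path behind it, which is the scrambling being described.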
JosephLG cited attempts to capture real reverb mathematics either by physical modeling (acoustic ray tracing, analogous to scene rendering in graphics) or convolution reverb, which uses a measured room impulse response. I have tried the latter and found it the best digital reverb, but it still has the fault of offering only one position: every instrument in the mix gets reverb as if it sat at the single point in the captured hall where the impulse generator stood. Physical modeling has the ability to calculate reverb corresponding to a real stage layout, though probably not including the shadowing of the musicians on the stage; and the processing load would be enormous, demanding a floating-point GPU.
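Convolution reverb itself is just this: the wet signal is the dry signal convolved with the measured impulse response. A toy sketch using direct convolution (real processors use partitioned FFTs for speed; the three-tap “room” below is a made-up illustration, not a real measurement):

```python
# Bare-bones convolution reverb: wet = dry (*) impulse_response.

def convolve(dry: list[float], ir: list[float]) -> list[float]:
    wet = [0.0] * (len(dry) + len(ir) - 1)
    for i, d in enumerate(dry):
        for j, h in enumerate(ir):
            wet[i + j] += d * h
    return wet

# Toy example: a single click through a 3-tap "room".
click = [1.0, 0.0, 0.0]
room = [1.0, 0.5, 0.25]      # direct sound plus two decaying echoes
wet = convolve(click, room)  # the click now carries the room's tail
```

The single-position fault described above is inherent here: the same `room` response is applied to everything in the mix, regardless of where each instrument actually sat.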
Since I grew up in the ’60s, my ears are acclimated to springs and plates. I still have a 2 channel spring reverb, a Furman RV-2, which I prefer to the digital reverb algorithms in stage mixers. I have also used a Bricasti, which is very good for a box, and I would probably use one again if it were available.
There is a new type of artificial reverb being implemented in halls, which is a hybrid of digital and electro-acoustical. It consists of a cross-feed matrix of speakers and microphones which are fed programmed combinations of delayed and attenuated signals of all permutations. When it is fine tuned by well-trained ears (musician’s ears, not engineer’s ears), it can augment room reverb very musically.
I construct ad-hoc versions of this when performers call for reverb, but with only a few mics and speakers so I am really amplifying the existing room reverb (my acoustic designs include an integral reverberator). This suggests a mechanism for home listening, recording one or more tracks of room reverb to surround channels. Chesky and Lipinsky have released albums made with a surround microphone array with excellent playback realism in a well tuned Home Theater room, and James Johnston made stunning surround recordings with his proprietary Bell Labs microphone array and processing system, which has the biggest sweet spot short of OVOMOS. (unfortunately the patent was buried)
My acoustic designs intuitively implemented principles that renowned acoustician Christopher Jaffe developed experimentally starting in the 1960s. Key factors begin with the delay and density of first reflections, which are critical to the musicians hearing each other on stage. They need to be closer than ten feet to the nearest wall for a feeling of intimacy, and to separate out the multiple sound sources. Halls with wings for dramatic entrances are deficient, and likewise fly arches eat the near-term vertical reflections. The side walls should not be parallel, however, and neither should the stage floor and ceiling.
The next factor is the delay until the first reverb reflection. This should leave a large gap so that note onsets are sharply etched with waveform accuracy and the startling +18dB transients of real music are discernible, even though they last such a short time. This is why first reflection points have to be acoustically treated in home listening rooms: you want a clear space from 10ms to at least 50ms to hear the musical consonants intelligibly.
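To see why those first reflection points matter in a small room, the arrival delay of a side-wall bounce can be estimated with the mirror-image method: the extra path length divided by the speed of sound. The geometry below is a made-up illustration, not a measurement of any particular room:

```python
# Mirror-image estimate of a side-wall reflection's arrival delay.
# Assumes source and listener are both wall_offset_m from the wall.
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def first_reflection_delay_ms(src_to_listener_m: float, wall_offset_m: float) -> float:
    direct = src_to_listener_m
    # reflected path = distance to the mirror image of the source
    reflected = math.hypot(src_to_listener_m, 2.0 * wall_offset_m)
    return (reflected - direct) / SPEED_OF_SOUND * 1000.0

delay = first_reflection_delay_ms(3.0, 1.5)
```

At a 3.0 m listening distance with a wall 1.5 m away, the bounce arrives only about 3.6 ms after the direct sound, smearing the onset rather than leaving the gap described above clear, which is why that wall point gets absorption or diffusion.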
The body of reverb needs to be diffuse. Jaffe accomplishes this by coupling reverb chambers close to the stage, but with a long path length from stage to internal first reflection point and thence to the audience. In my favorite chamber hall, Zankel, this is implemented by two open stairwells flanking the stage. There is an eight-foot-high wall on the stage separating the vertical space, so stage sound travels over the top of the wall and bounces back out, long enough that the articulation is maintained to the last row of the balcony. I know this because I had a seat in the back of the mezzanine for Michael Tilson Thomas’s “Islands”, which is scored for eight percussionists, each wielding four mallets, on four six-octave marimbas: extremely dense tonal percussion.
I further had an opportunity to run the hall during a rehearsal to check the timbral balance in Zankel, and it was perfect in nearly every seat.
So, the amount of reverb is critical, the shape of the reverb envelope is critical, each individual reflection in the reverb has to be physically possible, and for true accuracy the direction of arrival of each reflection has to represent a physical space. This is why the only accurate reproduction of a musical performance has to be one microphone per instrument and one speaker per microphone in a performance space. This is the ultimate destination for audio, coming to a theater near you in the distant future, or sooner if you, the high-end consumer, start asking for it.
Fascinating subject, and I am going to have to find some time to read all that is here. I have played some gigs where there is just too much sound bouncing around. Very distracting. I know one musician who, when he walks into a space where he is going to perform, claps his hands; that gives him a good idea of how good the reverberation in the space will be.