COPPER

A PS Audio Publication

Issue 61 • Free Online Magazine

THE COPPER INTERVIEW

Conversing With Choueiri: Part 2, Going to the Mattresses


In Part I of my conversation with Professor Edgar Choueiri, he laid out the basis of how we perceive a three-dimensional soundscape, and the cues our ear/brain systems use to conjure up a 3D image. Let’s continue …

RM. So we are obviously receiving those cues from our loudspeakers in our listening rooms, because they are fully captured in a binaural recording, yet we only perceive a vague illusion of a sonic image. So why is it that these cues are insufficient to regenerate the original 3D audio sound field under normal stereo listening?

EC. These differential cues, the ITD, ILD, spectral cues and reverberant ratios, these are all fully captured in a binaural recording. But because both of the stereo speakers are radiating into the room, both of our ears receive sounds emitted from both of the speakers, whereas what we need is for the sound from the left speaker to be heard by our left ear only, and the sound from the right speaker to be heard by our right ear only.

In effect, the system suffers from crosstalk. Try this experiment. Place your speakers quite close together and angle them in towards your head. Now get a mattress and stand it vertically between the two speakers so that it butts up against your face. This will eliminate a lot of the crosstalk, so that when you play a binaural recording the left ear will hear only the left speaker and the right ear will hear only the right speaker. With this peculiar setup you will hear a remarkably clear and precise 3D image. And, unlike with headphones, you can rotate your head and you won’t lose that image. Furthermore, this system will pass my proposed test, as we can reposition either of the speakers without affecting the image! There are actually a small number of enthusiasts around the world who fully understand this problem, and who have built listening rooms with a barrier down the middle – they sit there behind it so they can enjoy true 3D imaging.
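The crosstalk problem can be written down compactly: each ear receives a mix of both speakers, so playback behaves like a 2×2 matrix acting on the two channels, and the mattress trick amounts to zeroing the off-diagonal terms. A minimal sketch, with illustrative gains rather than measured data:

```python
import numpy as np

# Each row is an ear, each column a speaker. The off-diagonal entries are
# the crosstalk paths; the 0.6 leakage gain is illustrative, not measured.
H = np.array([[1.0, 0.6],   # left ear:  direct from L, leakage from R
              [0.6, 1.0]])  # right ear: leakage from L, direct from R

snap = np.array([1.0, 0.0])    # a sound intended for the left ear only
ears = H @ snap
print(ears)                    # the right ear still hears 0.6 of it

barrier = np.diag(np.diag(H))  # the mattress: off-diagonal paths blocked
print(barrier @ snap)          # now only the left ear hears the snap
```

Crosstalk cancellation, discussed next, is essentially an attempt to invert this matrix electronically rather than block its off-diagonal paths physically.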

RM. That conjures up quite a mental image!

EC. So the critical question is, can we do this crosstalk cancellation without having to erect a barrier? It is important to understand that this is a well-established challenge, and that research on crosstalk cancellation has been going on since as early as 1961. Initially it was done using all-analog circuitry, and some interesting results were obtained. More recently, digital audio has come along, and we have been able to construct cancellation filters in the digital domain, but even that has been going on for a long time before I got involved!

Imagine we record someone snapping their finger very close to your left ear. We record that on a binaural head, and play it back through a pair of loudspeakers. The sound comes almost exclusively out of the left speaker, but still the right ear manages to hear it, although it will be something like 4-5dB quieter than what the left ear hears. This is crosstalk, and we have designed a special filter to try to eliminate it.

How it works is that it first sends the sound of the finger snap out of the left speaker, and then after a slight delay, sends a negative image of the same sound out of the right speaker, but attenuated by 4-5dB and timed to reach your right ear at the exact same instant as the original finger snap sound from the left speaker. And because it is a negative image, it causes them both to cancel out. But that cancellation snap, having done its job at your right ear, will go on to be heard after another slight delay by your left ear, and that will upset the 3D image. So to deal with it, a third cancellation pulse, attenuated by a further 4-5dB, has to be sent out from the left speaker. This continues, back and forth, until the correction signal is no longer audible, and the net result is that the original sound of the finger snap was heard only by the left ear and not by the right ear. If it is done properly the whole process lasts no more than 300μs, and is quite seamless, and the ear/brain is fooled into hearing the original 3D sound field.
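This ping-pong process can be simulated in a few lines. The delay, attenuation, and signal length below are hypothetical stand-ins chosen only to make the mechanism visible, not BACCH parameters:

```python
import numpy as np

# Ping-pong cancellation sketch. Geometry: the direct (same-side) path has
# unit gain and no delay; the crossing path is ~4.5dB quieter and d samples
# late. All values here are illustrative.
g = 10 ** (-4.5 / 20)   # attenuation on the crossing path
d = 3                   # extra delay of the crossing path, in samples
N = 60

spk = {"L": np.zeros(N), "R": np.zeros(N)}
side, t, amp = "L", 0, 1.0
while abs(amp) > 1e-3 and t < N:
    spk[side][t] = amp            # first pulse is the snap itself
    side = "R" if side == "L" else "L"
    t += d
    amp *= -g                     # next pulse: inverted, ~4.5dB quieter

def at_ear(direct, crossed):
    # direct path: unit gain, no delay; crossing path: gain g, delay d
    return direct + g * np.concatenate([np.zeros(d), crossed[:-d]])

ear_L = at_ear(spk["L"], spk["R"])
ear_R = at_ear(spk["R"], spk["L"])
print(ear_L[0])                   # the snap arrives intact at the left ear
print(np.max(np.abs(ear_R)))      # the right ear hears essentially nothing
```

Each crossing pulse is inverted and a few dB quieter than the one before, so the train dies out geometrically, which is why the whole exchange can finish so quickly.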

RM. That sounds pretty incredible. Does it actually work?

EC. There are two problems with the crosstalk cancellation system I just described. Number one, with just a slight movement of the head to the left or right all bets are off, because the delays will all be calculated wrongly. So you would need to recalculate the filter for that new position. But this is just a technological problem – once the technology is in place to detect the head motion and to switch from one filter to another in real time, we would just incorporate those capabilities into the system. And today, that technology is in place – a laptop computer can comfortably handle it. So problem number one is effectively solved.

Problem number two is more subtle, and a lot of people didn’t understand it very well, but it really is a major obstacle. These “perfect” cancellation filters, incorporating real-time head-tracking and multiple filters, they do an exceptional job of recreating a perfectly stable 3D sound field and behave very, very well from a crosstalk cancellation point of view. But the sound quality is awful! It suffers from dreadful tonal distortion – in other words the frequency response is extremely bad and cannot be simply corrected. No audiophile in their right mind will pay real money to listen to a perfect 3D image of a piano that sounds like a xylophone!

RM. They certainly wouldn’t! But are you saying those errors shouldn’t be there?

EC. This puzzled a lot of people, because on paper these crosstalk-correction systems should have a flat frequency response … the mathematics is quite simple really. But in practice they had spikes as high as 34dB, which not only sounded unacceptably bad, but had the capability of driving the associated electronics into clipping, and maybe blowing a drive unit. But the explanation also turned out to be quite simple. If you design the perfect filter, it provides perfect performance at the one point in space where it is asked to do that. But if you move even slightly out of position everything goes to hell. Including the frequency response. And this is what we were seeing. But it took many years to actually understand what was happening.

RM. But you eventually solved it.

EC. Yes. And this, finally, is where we come into the story. We developed a way to fix the frequency response problem by deciding what price we were willing to pay in terms of crosstalk cancellation, and that’s a lot more difficult and complicated than it sounds. In summary, a perfect crosstalk cancellation filter will provide close to infinite attenuation, but in practical terms we don’t need that. Something like 20dB turns out to be more than enough, so by reducing the crosstalk cancellation requirement from infinity down to 25dB we found we could go from a frequency response with 34dB peaks to one that was flat – and I mean ruler flat. In effect we traded in a degree of crosstalk performance we didn’t need for a degree of tonal performance that we did. And that right there was our invention! It’s called the BACCH filter, and it was patented and trademarked by the University. In fact it has now become the third most lucrative patent in the history of Princeton University, if you can believe that!
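The trade-off can be illustrated numerically. The sketch below uses a generic Tikhonov-regularized matrix inverse as a stand-in for the actual BACCH design (which is not public in this form), and every plant value in it is hypothetical. The exact inverse of the 2×2 speaker-to-ear matrix develops huge gain spikes at frequencies where the matrix is nearly singular; capping the cancellation depth collapses them:

```python
import numpy as np

# Hypothetical two-speaker "plant" at each frequency: direct paths have
# unit gain, crossing paths have gain g and an extra delay of d samples.
# None of these values come from BACCH; they only illustrate the trade-off.
g, d, fs = 0.9, 3, 48000.0
freqs = np.linspace(10, 20000, 2000)

def worst_gain_db(lam):
    """Worst-case gain (dB) of the regularized crosstalk-cancelling filter."""
    worst = 0.0
    for f in freqs:
        z = g * np.exp(-2j * np.pi * f * d / fs)   # crossing-path response
        H = np.array([[1.0, z], [z, 1.0]])
        # lam = 0 is the "perfect" filter (exact inverse); lam > 0 caps the
        # cancellation depth in exchange for bounded filter gain.
        C = H.conj().T @ np.linalg.inv(H @ H.conj().T + lam * np.eye(2))
        worst = max(worst, np.abs(C).max())
    return 20 * np.log10(worst)

print(worst_gain_db(0.0))    # exact cancellation: large response spikes
print(worst_gain_db(0.05))   # modest regularization: spikes collapse
```

With lam = 0 the filter demands unlimited cancellation and its worst-case gain spikes; a small lam bounds the gain at every frequency, which is the "trade cancellation depth for flat response" idea in miniature.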

RM. It sounds a lot more appealing than an upright mattress down the middle of the listening room … but is it up to the demands of hard-core high-end audiophiles? In other words, how well does it work with the high-quality real-world recordings that we like to listen to?

EC. Our accomplishment is that we’ve made crosstalk cancellation tonally transparent, so you can take any album you have and listen to it in 3D without tonal distortion. And here’s the kicker – I said “any album you have” – I didn’t say “any binaural album you have”. I think I’ve made it clear now why a binaural album should work. But I haven’t suggested any reasons why any stereo album should work, and the answer is actually also very clear.

Any properly recorded stereo album has ITD and ILD cues embedded in the recording, and it is these cues that present the normal stereo image. Spaced omnis, for example, will capture a strong ITD signal, along with the reverb of the recording environment. ORTF recordings use cardioid mikes, which tend to emphasize the ILD cues, since the capsules are typically too close together for a strong ITD signal. So most acoustically made recordings will produce an extraordinarily impressive and satisfying 3D spatial image – a very strong 3D image, not the normal stereo image locked to the speakers. It just won’t necessarily be spatially accurate. [For that you’d need a binaural recording, made with a dummy head with your own personal HRTF … and then you would be able to recreate the exact original 3D acoustical image – RM.]
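The difference between the two techniques is easy to estimate from geometry: for a far-field source, the arrival-time difference between two microphones is the path-length difference divided by the speed of sound. The spacings below are typical textbook figures (60cm for a spaced-omni pair, 17cm for the ORTF standard), not taken from any particular session:

```python
import math

# Far-field ITD estimate: path-length difference over the speed of sound.
c = 343.0                   # speed of sound in air, m/s
theta = math.radians(30)    # source 30 degrees off the pair's axis

itd = 0.60 * math.sin(theta) / c        # spaced omnis, 60cm apart
itd_ortf = 0.17 * math.sin(theta) / c   # ORTF capsules, 17cm apart

print(f"{itd * 1e6:.0f} us")        # ~875 us: a strong ITD cue
print(f"{itd_ortf * 1e6:.0f} us")   # ~248 us: much weaker, so ORTF leans on ILD
```

The spaced pair yields several times the delay of the near-coincident ORTF pair, which is why ORTF imaging relies more on level differences between its angled cardioids.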

The question remains as to how pop music, or studio-generated music, will work. These are often assembled from individual tracks recorded with mono microphones, or generated by electronic instruments. But a good recording engineer is trying to construct a good stereo image using panning, reverb, and so on, and these techniques effectively add the very cues which allow a well-defined 3D image to develop outside of the speakers and in a consistent 3D space. That image can still be very satisfying, but it is not real. But neither is the conventional stereo image – that isn’t real either, it’s just the construct of the recording engineer. By the way, as a user of the BACCH system, you can just bypass the crosstalk cancellation at the touch of a button and listen in normal stereo whenever you want, but we don’t know any customers who prefer to do that.

RM. What about headphones?

EC. We have developed a new patented technology, called BACCH-HP, that emulates BACCH-filtered speakers through headphones. The result is that you would hear a fully head-externalized 3D sound field from your headphones that is virtually indistinguishable from what you would hear if your speakers were playing. It’s all done in software, apart from a camera for head tracking. Essentially we use the headphones to emulate the loudspeakers. It works so well because headphones are so much more accurate than loudspeakers. We can simulate the most expensive loudspeakers in the world over a $100 set of headphones, and you won’t be able to tell the difference. [That is possibly the most remarkable claim I have ever heard made in the history of high-end audio – RM]

RM. Who knew such amazing things were happening in the field of audio research! Can we expect the world of high-end audio to be turned upside-down by an onslaught of new developments?

EC. I record orchestras for fun, and I’ve been doing that as a hobby since high school and college. I’ve been recording my university orchestra for many years. And a year and a half ago I was invited to Berlin to record my favourite orchestra, the Berlin Philharmonic [invitations to record don’t come any more prestigious than that – RM], and for that I developed a special 3D mixer. So now you can navigate your way through the 3D sound space. For instance, if you want to listen to the timpani you can “walk over” and position yourself next to them!

A lot of the breakthroughs that are happening right now in audio research, especially over the last five years, way overshadow everything that happened in the previous 20 or more years. And a lot of the young PhD candidates and researchers doing this don’t give a damn about tubes or cables. They are dealing with much tougher problems, but only a few of these problems have direct relevance to the high-end audio field. I’m really an outlier here.

These efforts are all driven by AR/VR research (Augmented Reality / Virtual Reality), where the challenges are not only greater, the requirements are much tougher. For example, in AR one of the present challenges is to have the voice of a virtually added person in a real room sound as realistic as the sound of a real person in the same room (which requires the listener’s HRTF, on-the-fly modeling of the room’s acoustics, and more). Another example we’re working on is a system for cars where the driver and the passengers are simultaneously listening to different music of their choice, played through the same set of speakers! Think about that …

RM. I will indeed try to think about that, and all the other things you have described. But I have to tell you, my head is spinning … and I just hope your head-tracking algorithms will be able to keep up with it! Thank you so much for taking the time to talk with me, and for sharing both your insights and your remarkable developments with Copper’s readers.
