
Issue 131

Is There a Thinking Cap?

Frank Doris
I wonder if artificial intelligence is the natural result of human evolution. It’s not a unique thought; science fiction author John W. Campbell’s short story “The Last Evolution” looked at this in 1932, and Ray Kurzweil’s 2005 book The Singularity Is Near suggests it may not only be inevitable but closer than we think. Should it be feared or embraced? Read Kurzweil’s book – I won’t give any spoilers here. I find the subject fascinating.

We’ve lost two more titans. Keyboardist and composer Chick Corea (79) had an influence on the birth of jazz fusion that cannot be overstated; WL Woodward offers a tribute in this issue. And it’s safe to say that everyone reading this has heard the work of Rupert Neve (94), a designer of pro audio equipment that shaped the sound of music as we know it today. His recording consoles, found in Abbey Road Studios and facilities worldwide, are prized.

Also in this issue: we’ve got a trifecta of interviews, as John Seetoo talks with Nason Tackett of Hear Technologies, Don Lindich goes Rogue with Mark O’Brien of Rogue Audio and Rich Isaacs interviews musical synthesizer pioneer Patrick Gleeson. Larry Schenbeck considers Bach’s monumental Passion settings. Adrian Wu weighs in on the unending analog vs. digital debate. Tom Gibbs reviews new releases from Steven Wilson, the Staves and a solo album from Hayley Williams of Paramore. Ken Sander spends time in Peter Tosh’s Jamaica. Anne E. Johnson gets down with Mary Chapin Carpenter and jazz drumming legend Max Roach. Jay Jay French dives into streaming audio, and Andy Schaub starts a series on the technology behind music streaming software. Galen Gareis of ICONOCLAST and Belden continues his deep exploration into audio cables. J.I. Agnew makes a very big move. Russ Welton gives us more speaker setup tips. I speculate about whether audio systems can ever attain perfection.
We wrap things up with James Whitworth racking them up, Peter Xeni contemplating the sound of silence, leisurely listening and a rave new world.


Dr. Patrick Gleeson: The Interview

Rich Isaacs

Musician, Engineer, Producer, Professor of 18th Century English Literature?!

You may not be familiar with the name Patrick Gleeson, but he has quite a résumé. He ditched a career as a college instructor to become an electronic music pioneer in the late 1960s and 1970s. He created a synthesized version of Gustav Holst’s The Planets that was nominated for a Grammy, composed soundtrack music for television and independent films, ran a recording studio in San Francisco (Different Fur Trading Company), and was a member of Herbie Hancock’s band. Gleeson also recorded synthesizer performances of Antonio Vivaldi’s The Four Seasons and the music from Star Wars, and collaborated with other jazz and electronic music artists.

Rich Isaacs: You didn’t start out as a professional musician, so where and in what field did you get your degree?

Patrick Gleeson: I got my PhD in 18th Century English Literature. I did my course work at UC Berkeley and my dissertation under a man at the University of Washington who I wanted to work with. Unfortunately, the 18th century guy at that time at UC Berkeley was everything I hated: a conservative, so forth and so on. And the guy at Washington was really brilliant, and also he had a wonderful attitude toward higher learning, which got me my PhD in a hurry. He became my thesis advisor. He asked, “so how long are you planning to spend writing this dissertation?” I thought he probably wanted to hear something like two years, but I wanted to be out of there in a year if I could. So I said, “maybe a year?” He just shook his head. I said, “longer?” And he said, “no, shorter. Much shorter.” And I said, “well, what are you thinking about?” He said, “why don’t you aim for 30 days?” So I wrote my thesis in 33 days, and I got an offer to publish the damn thing.

Patrick Gleeson.

After a year at the University of Victoria, I began teaching at San Francisco State University. Then there was a huge turning point: I got very involved politically at San Francisco State. I was on the losing side of things – that was when S.I. Hayakawa became our president, there was a lot of turmoil and demonstrations, and I was sitting in with the students and committing numerous other transgressions. This prompted a tenure hearing on me.

It wasn’t really until probably 1966 that I just became so interested in electronic music that I wanted to make my own. And I was listening to Bartok’s first violin concerto, released posthumously. One night, I’d smoked some grass or maybe had a little acid, I can’t remember which – it was the ‘60s – and I listened to this and it just tore through me like a tornado. I thought, “my god, what am I doing with my life? This is not right. I’m not doing what I want to do. I want to make music like this” – a grandiose ambition. But I think the only way you succeed in the arts is by having grandiose ambitions. So with that, I was on my way out. By that time, the tenure hearing had started, and it was so political. I went home for Christmas vacation in 1967 and didn’t even return to clear my desk out.

RI: What was your musical background? (answer from www.patrickgleesonmusic.com by permission)

PG: I began piano lessons at six. By the third grade, jazz had hit me hard. I started playing out of Mary Lou Williams’ jazz piano books. After school, my best friend Jeff and I would listen to jazz records in his parents’ den – Art Tatum, Benny Goodman, Teddy Wilson, Lionel Hampton and the rest.

When I was practicing, I’d change the music to make it sound more like jazz. Mom caught on and she’d yell from the kitchen, “Doesn’t sound like your lesson, Pat!” We didn’t know this was called improvising.

A hip cousin, Mary Gleeson, was dating Norm Bobrow, a jazz musician and DJ for Seattle’s “race music” station. When I was thirteen, they set up this meeting between me and Ernestine Anderson’s accompanist, a local piano player whose playing I adored. Mary asked him if he ever accepted students. The guy looked at me and said, “why don’t you fall by the pad and let’s see what happens.” Fall by the pad? My god!

I ran home to tell my mother, who wasn’t exactly thrilled. For this Irish immigrant couple that had planned on me being a doctor, this seemed…umm, risky. Mom told me that if my regular piano teacher approved, I could take jazz piano lessons in addition to my regular lessons. Unfortunately, that teacher, whom I disliked anyway, decided this wouldn’t do. My parents agreed. I was devastated and quit music for 15 years.

RI: Getting back to electronic music, you’ve progressed through synthesizers from the early Buchla and Moogs to the E-mu and Synclavier and beyond. What are you currently using?

PG: From the time that MIDI became available, I really transitioned out of big keyboards. So at this point, I’m entirely in the box [computer]. I’ve got a laptop and software (Ableton Live). And I don’t have any hardware at all.

RI: I assume there’s a keyboard hooked up to the laptop?

PG: Yeah.

RI: But the sounds are all in the software now?

PG: Yes. My experience has been so different from the young guys. For them, I can see the romance of these early synthesizers and why they love them so much. Younger guys call me up all the time and say, “guess what, man? I just copped a Moog! Wanna come over and see it?” “Well, um, not so much.” Really, when I think back on it, what I was doing when I was with Herbie was just terrifying. To go out there with six incredible jazz musicians – arguably some of the best in the world – with an instrument that played one note at a time, was not touch-sensitive, had no patch memory, and the entire set was improvised… I’ll tell you a funny story about that. When Herbie called me up after I’d done work on his album and said he wanted me to join the group, I thought, “how am I going to work this?” Thank god for the ARP 2600, which had just been released. So I thought, I know I’m going to have to change patches quickly, so I’ll color-code the patches. I’ll have this little rack right alongside my keyboard that all the patch cords will be hanging from by color. Then I’ll just keep track of that and plug them in. Well, forget that! I never even had the time to look over at this collection of patch cords. I just grabbed the nearest one. I think after about the third week, I got rid of everything but one color that was the longest, and just went with that.

Patrick Gleeson and Herbie Hancock working with an E-mu Systems modular synthesizer.

RI: I can see how that would be terrifying.

PG: It really was! And the first night I played live with them – just to emphasize the jeopardy of it – the other guys in the band (Billy Hart, Eddie Henderson, Bennie Maupin, Julian Priester, and Buster Williams) were not enthusiastic about having me in the band at all. It was racial, cultural, professional, and regarding synthesizers, “this isn’t even music.” Despite the initial wariness, they have since become lifelong friends.

RI: But it probably seemed like a bit of elitism, too, didn’t it?

PG: That is a part of it. You figure these guys – after I was out on the road with them for a year, I wondered how they were as nice to me when I first joined as they were, when I thought of all the sh*t they would take every goddamn week from white guys. And I was brought to the band – in their view – by the white record producers.

 

RI: Tell me how you came to join the band. (answer from www.patrickgleesonmusic.com by permission)

PG: At Different Fur, after the last recording session of the day, I’d go into the studio and work late into the night on a synthesizer orchestration I was improvising over Miles Davis’ Bitches Brew. It sounded incredible, I told David Rubinson, San Francisco’s one big-time producer. When he signed Herbie Hancock, I began badgering David about letting me play on Herbie’s then-new record.

David told Herbie, “look, the guy’s not a musician of your caliber, but he’s good with synths – maybe he can set up some patches for you.”

Herbie and I met at Different Fur. He’d brought one side of what would become Crossings, the breakout recording for Herbie’s Mwandishi band. We put on the tape and began listening to “Water Torture.” About 30 seconds in, Herbie said, “maybe add something here.” I began patching the Moog as fast as I could, afraid Herbie wouldn’t be impressed and would walk out. Soon, I had a sound like a flock of birds ascending into the music. I said, “you could try that.” “You didn’t record it?” Herbie said. “Well, no, I thought you’d play it.” He added: “You’re fine: record it.”

We continued this process, working our way through the tune. After an hour or so, Herbie said, “Look, I’ll come back tomorrow. Keep going.”

I stayed up all night and by the time Herbie returned I’d overdubbed one side of the album. Later, he told several music magazines that the experience had blown his mind – he’d never heard anything like it. A few months later, I’d joined Herbie’s band and was on the road.

 

RI: How did it happen, and how exciting was it to have Wendy Carlos contribute liner notes for Beyond the Sun?

PG: That was such a wonderful surprise. I’m not sure who asked whom first. She, at the time, was sharing a brownstone in New York with her producer, Rachel Elkind, so Rachel began corresponding with me. She was very tentative at first, and said, “I need to ask you, are you aware of Wendy’s fairly radical medical change?” or something like that. Wendy had addressed me up to that point as “W. Carlos.” I said, “Sure. I think that’s great.” She had just wanted to make sure that was no issue. When we went back to New York, we had lunch with Wendy and Rachel and then went over to Wendy’s studio, and she showed us what she was doing, which was just fascinating.

In retrospect, I really think that the only person who ever really nailed the arrangement of classical music on a synthesizer was Wendy. I don’t think I nailed it, and I don’t think anybody else did. I never heard anybody do it. It seems like, in a way, that it’s a very simple thing to do. But it isn’t just the technology, that’s almost the least of it. Wendy was peculiarly well-suited for doing that. When you would meet her, you were aware that she was very, very different, obsessive in a certain way. She would travel all around the world to see eclipses – that was a big passion of hers. And, of course, doing synthesizer music, the way she did it at that time, you had to be fairly obsessive. She brought this peculiar obsessiveness to it, but also maybe because she was kind of outside the mainstream sociologically, once she had gone through her surgery. She had a very independent streak and take on almost everything, and I think that extended to the Bach music she was doing.

I think, in a way, she just took it with the right degree of seriousness. I think I was too serious. And also, I was influenced by German prog stuff and liked “metronomic” stuff (I still do), but that approach is not particularly well suited for synthesized classical music – you really need to have ritardandos and accelerandos, etc. And my performance doesn’t really have those. So at the time, I thought [her stuff] was just wonderful. Wendy commented that that was the only area where she wasn’t totally in agreement with what I was doing. She said we’d have an interesting discussion of that at some point. If we were to have that discussion now, I would say, no, you were completely right, I was wrong. I think she was the only one who respected the music enough to really explore its essential nature, and at the same time didn’t take it so seriously that she didn’t realize that what she was doing was essentially a popular performance and the first thing it needed to do was to please. I think Tomita obviously pleased people – sold a lot of records – but didn’t particularly respect the music. And Wendy’s recordings did both. And I think my performance respected the music, and just was not enough fun. I just wish, if I were going back to do that now, I would do The Planets so differently.

RI: Interesting. In my opinion, yours is by far a more serious work than Tomita’s. I’ve always thought it was a shame that the timing of its release put you in direct competition with Tomita, who was coming off a hit with his synthesized Debussy album, Snowflakes are Dancing. Whenever I’ve told people about your album versus Tomita’s, I’ve said, “Patrick is an artist; Tomita is a cartoonist.”

PG: It’s too bad he died relatively early in life. When I did Beyond the Sun, I did so on spec. There was nobody saying “we’re going to release this album.” Most everything significant I’ve done, I’ve done that way. I sent it first to RCA and I got this strange letter back. After some very complimentary language about the album, the guy said, “Unfortunately, we have already committed to a synthesizer rendition of the same music by Tomita,” which was the first time I’d heard the name Tomita. That was the first complication. So RCA was off the table, and I went with Mercury Classics, which was a division of Polygram at the time. I would’ve preferred to have been on RCA. Mercury was well-meaning but sort of lead-footed, and they, in general, really didn’t get it. They were nice people, but as I say, not the most adroit. The second interesting thing that happened was they needed to get permission to release the album from the Holst estate, and the heiress to the estate, Imogen Holst, was a well-known British conductor. And she abhorred the idea in the extreme and turned it down totally and immediately.

RI: Just the concept in general?

PG: Yeah, just period. Maybe she hated my version, undoubtedly she did, but she also hated just the very idea of it all. So at that point, Mercury or Polygram’s lawyers – whoever the lawyers were – wrote and reminded her that the Canadian branch of her publishing company had already extended that permission to Tomita, and that it would be construed as prejudicial and discriminatory if they then refused the same thing to me. So her lawyers advised her to let it go. But she didn’t want either one released, which I think is a classic instance of taking life too seriously. Because it’s popular music to begin with, in a way.

RI: Back to keyboards. If you had to go back to playing a self-contained synthesizer keyboard, what would you choose?

PG: I’m so out of that loop, I don’t think I’m competent to say.

RI: How about of the ones you’ve used? Do you think you could go back to self-contained synthesizers/keyboards?

PG: I wouldn’t want to use any of them. I think there are some big expensive combo keyboards like the new $10K Moog, for example – the Moog Matrix One. Something like that would probably be what I’d use, but I wouldn’t be very happy with it. The problem with a synthesizer that’s not virtual, but is configured so that it’s programmable in this modern way, is that it’s not very versatile. You can’t jump into any patch point and initiate something that’s never been done before. The designers of the instruments have to have preplanned that. And often they do, to a considerable degree, but then what you have is a very complicated instrument that is not immediately accessible. And with Ableton, I probably have a shameful number of apps – probably 200 or 300 different synth programs of one kind or another. In a given piece of music, I might use 40 or 50 of them.

End of Part One.

(In part two we’ll learn how Different Fur got its name, along with more stories of performing with Herbie Hancock and others.)

Header image of ARP 2600 synthesizer courtesy of Wikimedia Commons/Daniel Spils.


The Sound of Silence

Peter Xeni

A Conversation With Mark O’Brien of Rogue Audio

Don Lindich

Rogue Audio manufactures a wide range of tube, solid-state and hybrid audio components including integrated amplifiers, preamplifiers, power amps, phono stages and headphone amps. Located in Brodheadsville, Pennsylvania, the company states that its engineering goals include superior sonics, high quality and reliability, appealing design and high value. We spoke with Mark O’Brien, president and general manager of Rogue Audio.

Don Lindich: Please tell us a little more about yourself. Where are you from and how did you become an audiophile?

Mark O’Brien: I’m originally from New Jersey but have lived most of my life here in Pennsylvania. As a kid I was fascinated by electronics and started messing around with speaker design in my early teens. Being interested in both electronics and acoustics, I studied physics in college and earned my BS from California Polytechnic State University. I took some further grad school courses in physics but wound up getting an MBA so that I could better understand how to run a successful business.


Mark O’Brien of Rogue Audio.

DL: When and where did you launch Rogue Audio, and how did you get your start?

MO: I became really interested in amplifier design while I was working at Bell Laboratories in the early nineties. I was fortunate because I was working with some really bright electrical engineering PhDs who were a never-ending source of both information and inspiration. At the time, the amplifiers and preamplifiers I built were all for my own use. The early versions were pretty crude but after a while, I started getting some really pleasing results. Eventually I convinced two of my colleagues to jump ship with me and start Rogue Audio in 1996.

DL: Where are your products made now? Are any Rogue products made overseas?

MO: All of our products have always been hand-built here in Pennsylvania. We also locally source most of the parts we use to build them. A couple of years ago we built a brand new factory from the ground up in Brodheadsville. It was a really nice step up from the old industrial building we had been working in for the previous twenty years (think air conditioning!). Rogue Audio has always had a great work environment in terms of our company culture and the people who work here. I would never want to change that.

DL: What is your design process and philosophy?

MO: I would say that our overarching design philosophy is to create great-performing audio products at attainable prices. That doesn’t mean that they are inexpensive, but rather that we offer excellent value. From an engineering standpoint, we design our products to be reliable, work properly with other well-designed products, and most importantly, to remain faithful to the original audio signal. We don’t try to “flavor” our sound by using components that artificially alter the signal. We also design our products to have low output impedances and high input impedances so they will work well with other solid-state or tube brands.

DL: Looking at your product line, Rogue components use various combinations of tube and solid-state electronics in their designs, and some novel applications of Class D amplifier technology. Is there any combination of these technologies that is your favorite or that you think leads to the best overall sound?


Sphinx 3 integrated amplifier.

MO: That’s a great question. While we are primarily a tube amp company, almost all of our products incorporate solid-state devices in their design to one degree or another. In the case of our hybrid products, we have taken advantage of the best of both technologies. We use a proprietary technology we call TubeD that forces the solid-state devices to sound (and test!) like large high-performance tube amps. Essentially TubeD employs a small amount of feedback from the tubes to create tube-like behavior in the Class D output modules.

We were all very proud when The Absolute Sound chose our new DragoN amp, which is a hybrid tube/Class D design, as a 2020 Solid-State Power Amplifier of the Year. Our hybrid products are perfect for people who want tube sound without having any tube maintenance.


Magnum preamplifier.

Personally, I really enjoy designing in both spheres as well as writing the software to operate the products. Many companies have their embedded engineering (the software) written by outside companies. We bit the bullet several years ago and brought that technology in-house. All of the software we use to control the displays, the remote control operation, the input switching et cetera is all developed at Rogue Audio. This gives us the luxury of super-fast turnaround when we want to make any changes.

DL: What tubes do you use and how do you choose them?

MO: Our primary considerations are sound and reliability. For the small-signal tubes (used in the preamps and input stages of power amps) we mainly use 12AU7 and 12AX7 tubes because they are readily available and work great for audio applications. For the larger output tubes in our power amps, we use the KT120 tube. It sounds excellent and has proven to be extremely reliable. One of the fun aspects of tube amplification is being able to fine-tune the sound by swapping out different tubes. As a manufacturer we need to use tubes that are currently in production, but the end user has loads of choices in terms of what they can use in their gear – the world is truly their oyster.

DL: What are your most popular products?

MO: Needless to say, we sell more $3,000 Cronus Magnum III integrated amplifiers than we do $15,000 Apollo Dark monoblock amps but on the whole, our products are pretty popular across the board. I believe that they all offer terrific sound and meet a wide variety of needs.

DL: Who is your target customer, and what are the reasons they should buy a Rogue Audio product compared to other choices on the market?


One of several listening rooms at Rogue Audio.

MO: Our target customers are critical listeners who are not only passionate about their music but are also intelligent buyers. They recognize that our products not only sound great but are a good long-term investment in their audio systems.

DL: Your products are what most audiophiles would consider “affordable high-end.” Do you ever see Rogue expanding into the mass market, or conversely, into the more expensive and esoteric ultra-high-end market?

MO: No and no. I view our employees as craftspeople rather than assemblers. For example, the people who hand-solder our circuit boards do so at what I would consider an artisanal level. It takes several months to train someone to [even] begin to solder boards at the level we expect and a year or more to fully come up to speed. The same holds true for all of the other positions here. This level of craftsmanship pretty much precludes the possibility of mass-market production. As far as more esoteric products are concerned, it simply isn’t who we are as a company. I view most of the super-expensive gear as electronic jewelry more than hi-fi gear. Much of the pricing seems to be arbitrary or a result of costs that don’t really have anything to do with performance.


Adeline of Rogue Audio is adept at the fine art of soldering.

DL: What is the origin of your logo?

MO: I have always been a bird lover and the raven is a highly intelligent bird that doesn’t necessarily go with the flock. When we started Rogue Audio we saw that as symbolic of our company and our goals. That still holds true, but now has also become symbolic of our terrific customers.

DL: Anything else you would like to add?

MO: Only that I feel truly gifted to be able to do such interesting work alongside great people and within a really fun industry.


RP-9 preamplifier.

 


The factory in Brodheadsville, Pennsylvania.

All images courtesy of Rogue Audio.


Superposition: Getting Speaker Placement Right

Russ Welton

In Issue 130, Russ noted that he’s been reappraising his audio system and went over some basic ideas about speaker setup. The series continues here.

When placing your speakers in any given room, you may initially be concerned with all the factors you can’t control: the size of the room, its orientation, what furniture must go in there, if there is a hard wooden floor or large exposed glass surfaces that will cause unwanted sonic reflections, and so on. Although each of these individual issues can be examined and dealt with separately, let’s look at one of the factors we can readily control, so that we can happily say, “I’ve found a super position for my speakers! They sound great here.”

In quantum mechanics, particles can be in two or more states at the same time. (I wish I could work and sleep at the same time! Wouldn’t that be cool?) This is known as superposition. “But what has this got to do with our speaker positioning?” I hear you holler.

It may not be a precise analogy, but if our speakers are placed optimally, they can be in two “states” at the same time: occupying the physical locations where we set them down while, sonically, seeming to “disappear.” Like our subatomic particles, they could be thought of as having two states or properties.

How do we get our speakers to be both “there” and “not there?”

When we get the soundstage correct, we can look and still know where the speakers are, but according to our ears, the sound produced will seem as if the speakers aren’t even there. Instead, we hear the band or artists as an event, and are immersed in the performance. The experience, not the hardware, becomes exciting.

By making fine adjustments we can perhaps even suspend disbelief entirely.

So how can we improve our listening experience? What I’m going to suggest is a bit different from the usual setup articles. First, a tip on what to listen for.

Do you close your eyes when you listen to music? Depriving yourself of sight may enhance your sense of hearing and listening. As you listen, think about how significant the vocals are in the mix, and how the song may have been produced with the intention of drawing you in as the listener, by getting you to engage with the emotion of the piece.


Close your eyes and these Magneplanar 3.7i speakers will disappear into a seamless sonic presentation. Courtesy of Magnepan.

Now consider the fact that vocals are one of our first natural references for communicating, and using our own voice can help us in setting up our speakers.

You are likely most familiar with the sound of your own voice, and its natural properties are firmly imprinted in your mind. You know what you sound like. This can assist you in determining how far from the rear wall you should place your main speakers. If you stand with your back against that wall and speak out loud at normal talking volume, pay attention to the tone of your voice. You may notice more reverb and/or delay than usual. Your voice may sound closed and less open. Move slightly forward away from the wall and repeat your recital. Again, notice if there’s a change in your voice. Does it sound less “slappy,” slightly warmer and less echoey? Keep gradually advancing forward until you like the tone of your voice, where it sounds most natural and familiar to you without that excessive reverb and hardness from being too close to the walls. Then, try placing your speakers at this same distance from the wall and listen to their tonal balance, and make further adjustments from there. You’ll likely notice they also sound more natural and develop more openness and breadth in their tone; literally sounding less hard and closed-in. They say that talking to yourself is the first sign of madness but perhaps it’s worth it in this case!

In the previous article I mentioned the importance of reading the manufacturers’ guidelines for speaker placement, both in relation to their distance from each other and from the back wall. (Remember, these are guidelines, not absolute requirements.) But what if there is no such information? Many manufacturers will not state a specific optimal distance from the rear wall, because this can vary according to the dimensions of the room itself. The room will heavily dictate the overall sound, because its dimensions will determine the areas of bass reinforcement and cancellation and the behaviour of standing waves in the room. If you place the speakers in an area of cancellation or reinforcement, the tonal balance can suffer greatly.
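As a rough illustration of why the room’s dimensions dictate where reinforcement and cancellation occur: axial standing waves for each dimension fall at f = n × c / (2 × L). This small Python sketch (the helper name and the example room length are mine, not from the article) lists the first few for one dimension:

```python
# Hypothetical helper, not from the article: axial room-mode
# frequencies f = n * c / (2 * L) for one room dimension.
SPEED_OF_SOUND_FT_S = 1125.0  # approximate speed of sound in air, ft/s

def axial_modes(dimension_ft, count=3):
    """Return the first `count` axial mode frequencies (Hz), rounded."""
    return [round(n * SPEED_OF_SOUND_FT_S / (2 * dimension_ft), 1)
            for n in range(1, count + 1)]

# A 19-foot room dimension puts axial modes near 29.6, 59.2 and 88.8 Hz,
# which is where bass peaks and nulls will cluster along that dimension.
print(axial_modes(19))
```

Each of the room’s three dimensions contributes its own series, which is why changing speaker or seat position along any axis changes the bass balance.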

You may discover that a good starting point for your main speakers is to place them one fifth of the total room length into the room and one fifth of the room width in from the side walls. Alternatively, you can try the famous “rule of thirds” you’ve probably heard of before, placing the speakers one third of the room length into the room. Measure your distances from the front and center of each speaker, as this is the acoustic source of your sound. If you measure from the rear of the speaker you could end up placing them further into the room than necessary. Save yourself some space – measure from the front. Also, measure precisely – it’s important to get the speakers placed as accurately and symmetrically as possible. Even fractions of an inch can make a difference.
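The one-fifth starting point is simple arithmetic, and can be jotted down as a minimal Python sketch (the function name and example room dimensions are illustrative, not from the article):

```python
# A sketch of the one-fifth starting point: distances are measured
# to the front center of each speaker.
def fifth_rule(room_length_ft, room_width_ft):
    """Starting distances (ft) from the front wall and the side walls."""
    return {
        "from_front_wall": room_length_ft / 5,
        "from_side_walls": room_width_ft / 5,
    }

# For a 20 ft x 12 ft room, start about 4 ft out from the front wall
# and 2.4 ft in from each side wall, then fine-tune by ear.
print(fifth_rule(20, 12))
```

Treat the result as a starting point only; the fine adjustments described above still have to be made by listening.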

Placing the speakers as far apart from each other as possible will allow for a wide soundstage. However, if they’re too far apart you’ll get a “hole in the middle” rather than a seamless spread of sound with a focused center image, which is what you want. To get your sweet spot, angle the speakers in toward your listening position, which will increase the focus of the image, making it more solid. Toeing in your speakers increases the ratio of direct to reflected sound. Check for increased brightness and adjust to taste as you make these incremental adjustments.


Here’s a suggested speaker setup starting point, from Audiophile’s Guide: The Stereo by Paul McGowan. Illustration by James Whitworth.

Be aware, though, that some speakers are specifically designed to be positioned without any toe-in, as their responses are very even both on and widely off axis. This is great for consistency of sound across a wider seating area, and toeing in these speakers may yield no improvement at all. Some speakers actually sound better with no toe-in purely as a characteristic of their “personality,” and may well be bright enough already.

If the sound is still too boomy, it may be that your speakers are still too close to the wall. Gradually bring the speakers away from the walls until that boominess is gone. Also, the depth of the sound field can suffer if the speakers are too close to the wall, and moving them further out into the room can really get the sound to open up.


A guide for 7.1 surround-sound speaker setup. Courtesy of Sound & Vision.

Similarly, a good starting position for your listening chair is about one fifth of the length of the room in from the rear wall because, again, you typically won’t be located where standing-wave peaks and troughs occur. Again, experiment. Moving the chair even a few inches forward or back can have a big effect. I realize, however, that many of us simply don’t have the freedom to put our listening chair in the ideal spot, or may decide against it because we don’t like the way it looks. If you can, it’s good to give yourself a reference for how good the sound is in the ideal spot; then you can aim for that within whatever compromises or further decisions you make afterwards. But at the least, try moving the listening position to different places, if at all possible.

What else can you try? Given that you want to avoid sitting where standing waves build up, experiment with placing your seating at a point where this problem is not compounded, steering clear of locations such as the exact middle of the room. Measure the width of the wall behind your front speakers. Multiply this by 1.25 and place your seating this far back from that wall, in the middle of the room’s width. So, let’s say your room is 12 feet wide: 12 feet multiplied by 1.25 equals 15 feet. Place your chair 15 feet from the wall behind the speakers and test drive your music from here.
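The arithmetic above as a one-liner (the function name is mine, for illustration only):

```python
def seat_distance_ft(room_width_ft):
    """Seating distance from the wall behind the speakers:
    1.25 times the room width."""
    return 1.25 * room_width_ft

print(seat_distance_ft(12))  # 15.0, matching the 12-foot-wide room example
```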

But what if you find yourself competing with furniture or general access through the room? If this is the case, you may choose to reduce the distance between your main speakers so that your seat can be placed equidistant from them in a simple equilateral triangle. (In fact, many speaker setup articles will recommend such a triangle configuration between you and the speakers as a starting point, and it’s a tried-and-true method for many listening situations.) You may compromise some of the spaciousness of the soundstage – but not necessarily – as you place yourself in a more intimate position closer to the speakers. If you find this to be too focussed and direct-sounding, experiment with toeing your speakers out a few degrees for your own personalised listening position.


We think they’ve got it! From Audiophile’s Guide: The Stereo by Paul McGowan. Illustration by James Whitworth.

Header image: Klipsch Forte IV loudspeakers.


Analog vs. Digital: An Unending Debate

Adrian Wu

One of the most controversial topics in the audiophile universe is the digital versus analog debate. After the introduction of the compact disc in the early 1980s, the sales of analog music formats (LPs and cassette tapes mainly) declined steadily until 2007, when there was a revival of interest in vinyl. Since then, the market for vinyl LPs has seen a double-digit percentage rise each year, whereas CDs are gradually being replaced by music streaming, such that the value of LPs sold in 2020 exceeded that of CDs for the first time since the 1980s. Even the compact cassette tape is making a comeback, with the recent resumption of blank tape manufacturing.

There are several reasons for the revival of vinyl LPs. While some audiophiles claim that LPs sound better than digital formats, sales growth is being driven not by audiophiles (who only represent a small fraction of the record-buying public and are mostly more mature adults), but by young music lovers getting into the format for the first time. Music has become a commodity, something that is so easily available from streaming sites, and this allows music lovers to acquaint themselves with music from years past. My son listens to pretty much the same music as I did at his age; he is no longer confined to what is on the charts. He can explore and decide for himself what he likes. The revival of interest in the music of times past also stirs up the desire to own the music in the formats that people used to have in that era. I can still remember the thrill of opening a copy of Pink Floyd’s The Dark Side of the Moon that I bought with the money I earned at my summer job. There is no such thrill when I click a song to play on Tidal. The whole package, with the illustrated jacket, the black disc, the printed lyrics and posters, instills a pride of ownership. As audiophiles, we should thank these consumers for giving record companies a reason to continue manufacturing vinyl LPs.


This reissue of Stravinsky’s Firebird from the Mercury Living Presence catalog, done by Classic Records (now part of Analogue Productions) is one of the best LP reissues I have experienced. In fact, in terms of sound quality, it is probably one of the best classical LPs ever made. The dynamics on this LP are frightening. I am still looking for a good copy of this master tape. It was the combination of Robert Fine, the recording engineer, with his 3-microphone (Schoeps M201) technique and the recording venue (Watford Town Hall) that created this magic. https://www.stereophile.com/content/fine-art-mercury-living-presence-recordings

Going back to the question of which is better, digital or analog: this is not an easy question to answer, and it depends on the perspective of the user. If you ask professionals (recording and mastering engineers), you will probably hear a completely different answer than if you ask audiophiles. It is not so much due to differences in how these two groups evaluate sound quality as to their very different experiences in using the two technologies. The experience of a typical audiophile is often limited to compact discs, SACDs, high-resolution PCM and DSD file playback, streaming, and vinyl LPs. For the professionals, it is the high-resolution formats (nobody records in Red Book format anymore) with the associated hardware and software, versus analog tape and the associated analog hardware. While audiophiles only care about the sound quality of the end product, professionals have to take into account the production process.

I am not a professional, but I have been making recordings for many years as a hobbyist, so I have some idea about the production process. Music production nowadays invariably involves multiple tracks, and digital technology has made this infinitely easier. All the mixing and editing can be done in post-production with a digital audio workstation and computer. A large variety of plug-ins are available to apply different effects to the sound. Expensive hardware is no longer necessary; there are plug-ins that emulate the sound of famous vintage microphones, plate reverbs, compressors and so on. And the changes made are fully reversible, whereas a bad splice of the analog master tape can become a disaster. It is like the difference between using a typewriter and Microsoft Office.

The danger is in relying too much on post-production and not paying enough attention during the recording process. During the early stereo era, sessions were recorded onto two or three-track tape recorders. Some companies such as Mercury only used three microphones, and the three tracks from the microphones were mixed down to stereo during mastering. Companies such as Decca that used multiple microphones would mix the tracks in real time into stereo during the recording session. This meant the balance engineers had to get everything right during the expensive recording sessions, as there was no way to remix the tracks afterwards. Microphone placement was of paramount importance. After the introduction of multitrack analog recorders, mixing could be done during post-production (and Dolby noise reduction, introduced in 1965, aided in the process). However, tape was (and still is) expensive and editing must be done manually (by cutting and splicing the actual tape!), giving the engineers the incentive to get everything right during the recording session. As my partners and I always record in analog (with digital as backup), we are well aware of these pitfalls.


The combination of Kenneth Wilkinson, the recording engineer for Decca, and Kingsway Hall as a recording venue is a guarantee for stupendous sound quality. The Decca Tree technique of using three omnidirectional microphones (Neumann M50) was developed by Wilkinson, Roy Wallace and Arthur Haddy. It is still widely used today for recording in large spaces. This reissue LP from Speakers Corner is excellent and comes very close to the master tape.

However, I have attended professional multi-miked recording sessions where one microphone was used for each player of an orchestra, placed casually and without regard for phase cancellation. The idea is that everything can be corrected during post-production, which is actually not true. The natural acoustics of the recording venue and the perspective of the orchestra can never be re-created by simply mixing the individual instruments together. This might be one of the reasons why recordings made 60 years ago still sound better than many modern recordings, despite the technological advances that have happened since then.

Old analog recordings often sound better because of how the music business is run today. In the past, large labels had their own recording teams with highly experienced recording and mastering engineers, along with an apprentice system to train the next generation. The engineers were intimately familiar with the recording venues and produced consistently excellent recordings. During the heyday of the music business, labels were able to make good profits from record sales. Nowadays, the revenue stream from sales of physical media has dried up, and the income from streaming is minuscule. Recording projects are often outsourced to the lowest bidder, and artists sometimes have to pay for the recordings themselves. Nobody can afford to take on projects such as Decca’s Wagner Ring cycle.

Another reason why early stereo recordings are often better is that the record-buying public in that era cared about sound quality. Buying a stereo system involved a significant financial outlay, and there was no distinction between “audiophile” and consumer equipment, at least not until the Japanese companies entered the market with mid-fi and mass market products in the 1960s and dominated it in the 1970s. In other words, anyone buying LPs or open reel tapes in those days was what we would now call an audiophile. All major classical labels were in effect audiophile labels, and sound quality was a major selling point, in addition to the quality and reputation of the artists.

Music nowadays is mostly played on smartphones, car sound systems and computer speakers. The number of people who still sit and listen in front of a stereo system is very small. Music is therefore mastered in such a way as to optimize the quality when played through these modern means of listening. That means compression is used so that soft passages can be heard even in noisy environments outdoors or in a car, and equalization is used to compensate for the limited bandwidth of these devices. This obliterates the dynamic shading and tonality of the music when played through a high-quality stereo system.

This is not to say that there are no high-quality recordings being made nowadays. Many small independent labels still produce recordings with sound quality in mind, using the latest high-resolution digital technology. Ironically, some engineers feel that passing a digital recording through analog tape makes it sound more natural. This might have to do with the higher noise floor of analog tape. This noise mimics the background noise of natural acoustic environments, whereas the almost noise-free background of digital recordings actually sounds unnatural. There are now plug-ins that add tape noise, tape saturation and other analog artifacts to digital recordings!

Many people have offered opinions as to why digital recordings do not sound as good as analog in their estimation. As I have very limited technical knowledge of digital audio technology, I will not comment on the merits of these arguments. Through the monitoring system we use during recording sessions, switching from the live feed to high-resolution digital, especially DSD, sounds indistinguishable to me. However, during playback at home, the tape often sounds more dynamic and natural, but this could be due to the quality of the playback equipment, as I have not invested anywhere near the same amount on my digital front end as on my analog front end.

For the audiophile, comparing analog and digital often comes down to a comparison between LPs and CDs or high-resolution digital formats. Again, the quality of the respective playback equipment matters, and for LPs, proper setup of the record player is a must. The question is, do LPs represent the best analog has to offer? LPs have a lot of inherent limitations. The linear velocity of the groove decreases towards the center of an LP, and the lower velocity at the end of a side leads to an increase in distortion and makes tracking more difficult. For symphonic music, it is often the end of a piece that has the greatest dynamics, right where the groove velocity is the lowest. Compression (dynamic range limiting) is therefore often necessary to prevent mistracking. Longer pieces require narrower grooves to fit onto one side of an LP, which again can require compression.
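The falling groove velocity is easy to quantify: linear velocity is 2πr times the rotational rate. The specific start- and end-of-side radii below are typical figures I’ve assumed for a 12-inch LP, not values from any standard cited here:

```python
import math

RPM = 100 / 3  # 33 1/3 revolutions per minute

def groove_velocity_ips(radius_in):
    """Linear groove velocity, in inches per second, at a given radius."""
    return 2 * math.pi * radius_in * (RPM / 60)

outer = groove_velocity_ips(5.75)  # near the start of a side (assumed radius)
inner = groove_velocity_ips(2.35)  # near the end of a side (assumed radius)
print(f"outer: {outer:.1f} ips, inner: {inner:.1f} ips")
print(f"velocity drops by a factor of {outer / inner:.1f}")
```

Roughly 20 ips at the lead-in groove falls to about 8 ips near the lead-out: the same musical detail is squeezed into well under half the vinyl per second, which is why distortion and mistracking appear just where symphonic climaxes tend to land.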

The whole process of LP production involves multiple steps, with potential for sonic degradation at each stage. LPs that are made with new stampers sound better than those made with worn out stampers. Background noise is a function of the quality of the pressing process and of the vinyl material. An off-center spindle hole will cause pitch instability that is more evident with certain instruments such as a piano. Only when all the stars are aligned will one get a perfect record. Digital recordings, on the other hand, are always consistent. They sound the same whether you have played them once or a thousand times. Whereas music with limited bandwidth and dynamic range, such as a folk singer with a guitar, might sound better on an LP, a Mahler symphony will almost certainly sound more dynamic on high-resolution digital, given the superior signal to noise ratio and dynamic range of digital recordings compared to LPs.

There are around 60 recordings for which I have both the LP and a copy of the master tape (mostly copies of the production or safety masters). Most of these are Decca, EMI and RCA recordings from the late 1950s to mid-1970s. In no case is the LP superior. In over half, the quality gap is wide, and all the LPs that sound close to the tapes are modern reissues. Certain prominent magazine reviewers past and present have touted the superiority of first pressings, and some of these now cost an arm and a leg as a result. Examples include some RCA Living Stereo “Shaded Dog” LPs (Nipper, the RCA dog, is pictured against a shaded red background; later “Plain Dog” pressings have a plain red background), Mercury Living Presence LPs and “wide-band” Decca LPs (so called because the silver band on the label that says “Full Frequency Stereophonic Sound” is wider).


An RCA Living Stereo “Shaded Dog” label.

In my experience, these old LPs rarely live up to their reputation, which I don’t find surprising. Vinyl record production technology has advanced by leaps and bounds since the late 1950s, so it would not make any sense that these ancient LPs should be better than those reissued today, unless the master tapes have significantly deteriorated. The original issues were also made in larger numbers, whereas modern audiophile reissues are made in far smaller quantities, with smaller production runs from each stamper to ensure more consistent quality. Rather than spending the money on these vintage collector’s items, why not spend the money on reissues to support today’s manufacturers and ensure they will continue to be available in the future?


Do LPs represent the best analog has to offer? Compare them to the original master tapes and you can decide.

So, here are my conclusions. Assuming the quality level of the playback equipment for digital and analog is comparable, I would go for a digital format if the original recording was in digital. It makes no sense to me to produce an LP from a digital source (except for DJs who use turntables for scratching). For music that was originally recorded in analog, the choice comes down to the type of music. For music that is large scale and dynamic, I would go for a high-resolution digital remastering as long as it was done correctly, in order to avoid problems associated with LPs such as end-of-side distortion, compression and noise. For other forms of music, it comes down to the quality of the LP pressings versus the quality of the digital remastering. Given a choice, I prefer the DSD format. DSD cannot be edited directly beyond simple “splicing”; any further editing requires conversion to PCM (usually 24-bit, 352.8 kHz, also called the Digital eXtreme Definition or DXD format) before re-converting back to DSD. Whether this causes any appreciable loss in quality is debatable. For conversion of analog materials to DSD, it is best to do the remastering in the analog domain before conversion.

Dealing with music originally recorded in the Red Book CD standard (16-bit, 44.1 kHz) is another matter. Early digital recordings suffer from a loss of low-level detail. In an article about his early experiences with digital recording, a recording engineer described hearing the steps of the recording artist as she entered the studio: on the analog tape, he could also hear the reverberation following each step, but on the digital recording played back at the same level, he could only hear the feet striking the floor, not the reverberations. This loss of low-level information is what makes early digital recordings sound unnatural and less dynamic when compared to analog tape. Early analog-to-digital converters had an effective bit depth of only 14 bits even though 16 bits were specified. This gave a dynamic range of 84 dB, and overloading the converter would result in highly unpleasant non-harmonic distortion. The Nyquist limit (the highest frequency that can be encoded without aliasing, which is half of the sampling frequency) of 22.05 kHz is just at the limit of the audio band, thus requiring steep anti-alias filtering before digitization. These steep analog filters can introduce amplitude and phase non-linearities as well as ringing. The eventual adoption of oversampling allowed the use of more gentle filter slopes. Unfortunately, the loss of low-level detail and the artifacts introduced by anti-alias filters cannot be undone during remastering. We therefore have a decade’s worth of recordings that will always remain problematic.
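The numbers in this paragraph follow from two textbook rules of thumb – roughly 6 dB of dynamic range per bit of linear PCM, and a Nyquist limit of half the sampling rate:

```python
def dynamic_range_db(bits):
    """Approximate dynamic range of linear PCM: about 6.02 dB per bit."""
    return 6.02 * bits

def nyquist_hz(sample_rate_hz):
    """Highest frequency encodable without aliasing: half the sample rate."""
    return sample_rate_hz / 2

print(round(dynamic_range_db(14)))  # 84 dB: the effective range of early converters
print(round(dynamic_range_db(16)))  # 96 dB: true 16-bit Red Book
print(nyquist_hz(44_100))           # 22050.0 Hz: right at the edge of the audio band
```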

I have bought very few CDs over the years; most of my digital music collection comes from converting LPs and tapes to DSD, and from high-resolution downloads. For standard Red Book listening, either from CD rips or streaming services, I prefer real-time conversion to DSD128 during playback using Audirvana software.

There are many recordings made during the golden age of music performances, decades before the digital era, featuring artists such as Furtwangler, Walter, Kleiber, Callas, Oistrakh, Kogan, Du Pre, Cortot and Richter to name just a few. On the rock music front, most of the important releases from the Beatles, Pink Floyd, Led Zeppelin, Hendrix and other greats were made during the analog era, not to mention many classic jazz and blues recordings. Some of these have been remastered into new LP and digital releases, but we are at the mercy of the mastering engineers, since many of the original artists are no longer with us and cannot ensure that their original intent is properly preserved. This is why many people still seek out the original LPs. Some of these recordings are now being released in open reel tape format, mostly 1:1 copies from master tapes (copies made at the original playback speed rather than high-speed duplication, which is used for expediency but can create sonic degradation) and without additional manipulation. In my view, this is the ideal format for preserving recordings from the pre-digital era. I will further discuss this new trend in future articles.

Header image courtesy of Pexels/cottonbro.


Will A Perfect Audio System Ever Exist?

Frank Doris

As a hard-core audiophile, I’ve spent the better part of my life working on improving my audio systems. I’ll admit – mostly because of selfishness. I want to hear music reproduced as perfectly as possible. I don’t want merely good. I want incredible, mind-blowing.

I’m extremely happy with the way my system sounds now, but I know it could be better.

Will we ever have audio systems that literally sound like the real thing? The obvious answer is no. After all, as Galen Gareis (see his articles in this issue and in Issue 130) has noted, you can’t beat the laws of physics. But what if you could, or at least work around them? Why not dream of the day when music systems can sound exactly like live music?

This is going to involve some pie-in-the-sky speculation and I invite readers to tell me I’m completely crazy, or laugh uproariously at my lack of scientific knowledge. But, we all want perfect audio reproduction. (Except for looking at mics, I’m going to skip over the fact that the recording chain would also have to achieve perfection.) How can we get it? Not only don’t I know the answers, I don’t even know if I’m asking the right questions. I’m putting this out there as food for thought, and to encourage comments.

The deviation from sonic reality starts right at the beginning of the recording process. As soon as the sound of the vocalist, instrument or whatever acoustic waves that are traveling through the air hits the microphone, it’s already game over. The microphone diaphragm has mass, and inertia. Objects at rest tend to stay at rest and objects in motion tend to stay in motion, whether a car or a microphone diaphragm. No matter how delicate the mic’s diaphragm, it can’t move in an exact reproduction of the sound hitting it.


The song doesn’t remain the same: as soon as the sound hits the mic, even an excellent one like this Neumann, it loses something.

So how do you solve that? Eliminate the mass! Create a massless microphone. In fact, there have been attempts at this, including plasma microphones and laser beamforming, where laser-induced (air) breakdown (LIB) generates an audio signal. Here’s a laser-and-smoke proof-of-concept microphone. This example doesn’t sound good, but it’s a prototype of a patented technology created years ago, so who knows where it could lead?

At an AES convention once, I mentioned the idea of a massless microphone to the director of engineering of a well-known audio company. The person gave me a sharp look and replied, “we’ve actually got some ideas about that but if I told you, I’d have to kill you!”

The converse of the sound hitting the microphone is, of course, the sound coming out of the loudspeakers. Here the problem of overcoming mass and inertia is greater, since we’re dealing with the movement of much larger speaker diaphragms and voice coil assemblies. A partial solution already in use is the servo control mechanism, where motional feedback from the driver (via an accelerometer attached to it) is sent back to the amplifier, which then corrects its output in an attempt to control the “overhang” of the driver. This is typically used with woofers and subwoofers. However, I don’t know if it’s ever been tried with midrange drivers or tweeters. Anyone?
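For readers curious what “correcting the output” means in practice, here is a toy proportional feedback step. The signal names and gain value are invented for illustration and don’t describe any actual servo woofer design:

```python
def servo_correct(target_accel, measured_accel, drive, gain=0.5):
    """One step of a (much simplified) motional-feedback loop.

    An accelerometer on the cone reports measured_accel; the amplifier
    nudges its drive signal so the cone tracks the acceleration the
    input signal calls for.
    """
    error = target_accel - measured_accel
    return drive + gain * error

# If the cone overshoots (more acceleration than the signal calls for),
# the drive is pulled back:
print(servo_correct(target_accel=1.0, measured_accel=1.5, drive=2.0))  # 1.75
```

Real servo designs run a loop like this continuously, in analog circuitry or DSP, with far more sophisticated compensation to keep the loop stable.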

As another approach, maybe some kind of DSP that doesn’t actually measure the motional feedback from the drivers, but “anticipates” the drivers’ behavior might be worth looking into. Musical signals and driver behavior can be extremely complex, but maybe it’s just an engineering problem.

If we can dream of a massless diaphragm, why not a massless speaker? Actual massless speakers have in fact been demonstrated – dig this YouTube video featuring The Audiophiliac, Steve Guttenberg, and designer Nelson Pass of Pass Laboratories talking about his Ion Cloud speaker.


Nelson Pass and his Ion Cloud speaker on the cover of Stereophile, Volume 6, No. 1, 1983.

Those of you lucky enough to have heard an Ionovac tweeter can attest to its almost spooky purity. But so far, such designs have proven impractical or impossible to implement on a mass-market scale; as examples, the Ion Cloud produced high levels of ozone, and the legendary Hill Plasmatronics speaker had to be fueled with helium.

Interesting work is being done with carbon nanotube speakers, but they simply have less mass, not none, and the sound quality isn’t there. Yet. And at a CES a few years ago, someone demonstrated a system that involved beaming an audio signal on an ultrasonic carrier wave, or something like that, but I couldn’t attend the demo and don’t remember the company’s name. As I understand it, the system still needed a transducer to send the signal. Can any readers help?

Maybe there’s a way to manipulate air that no one’s thought of yet.


A pair of DuKane Ionovac plasma tweeters. If you’ve heard these, you know.

Let’s consider our music sources. In the case of analog, it involves dragging a rock (the stylus) through a plastic medium (the record), then sending a minuscule signal created by a cantilever and electromagnetic generator (the rest of the cartridge) to an equalization circuit (the phono preamp). I think it’s safe to say that such a system is never going to attain perfection. Analog tape seems no less odd when you really think about it – running a thin ribbon of plastic, coated with magnetically-responsive particles, through an electromagnetic tape head for recording and playback. On the other hand, like a bumblebee flying, it never ceases to amaze me at how such Rube Goldbergian devices can sound so utterly fantastic.

I can’t help but think: might there be an entirely new way to record and reproduce a perfect analog musical signal that doesn’t involve imperfect analog playback hardware? Some kind of as-yet-un-invented optical technology, maybe? I know, I know, there would be transduction involved in the acoustic-to-optical-to-electrical signal, but a man can dream, can’t he?

Then there’s digital. I’m not an engineer and don’t want to debate what sample rate is really adequate (we can leave that for the comments), but the idea of chopping up the audio signal and reconstructing it just seems…weird to me. (I know, I know, and I’m not that much of a techno-rube, but…still.) I do find the sound of high-resolution audio to be satisfying and enjoyable. But is it the sonic perfection we’re looking for?

How about an audio system’s preamplifiers and amplifiers? Many of us are familiar with the concept of “straight wire with gain,” where the ideal amplification circuit would simply amplify the signal and add no coloration of its own. How could we make it happen, especially when veteran circuit designers and hobbyists know that not only can parts quality make a difference, but even the physical layout of the components on a board (because of RF susceptibility and other issues) can have a sonic impact?

The first thought might be to make a circuit (for other audio components, as well as preamps and amps) as simple as possible. It seems intuitive – simple circuits might yield greater sonic purity. But then, every part in the sonic “recipe” of the circuit becomes more critical! And when you get into the real world, the simpler-is-better notion simply falls apart. My second thought is, make the actual product smaller. Minimize the distance between the internal components. OK, maybe not – just try to make a conventional power amp with inadequate output transformers and see how it performs. Still, the remarkably small size of Class D amplification circuitry is a tantalizing glimpse of what can be done. And have we really explored the limits of what integrated circuit or passive component miniaturization might sound like?


An ICEPower 1000A Class D mono amp module. It measures about 4 inches square and puts out 1,000 watts.

Maybe digital signal processing (DSP) is the answer. In much the same manner that servo mechanisms can control the behavior of loudspeakers, and negative feedback can improve the sound in amplifiers, DSP can correct for all kinds of audio behavior, not the least of which is loudspeaker room-response correction. But perhaps other sonic areas could benefit in ways that no one’s thought of yet.

Speaking of the room, we encounter another major issue. The room the recording was made in won’t match the acoustics of your listening room – one will be overlaid on top of the other. How on Earth will that ever be overcome? Why do the initials “DSP” appear in my head yet again? It would be a daunting if not impossible task – how would we ever quantify the uncountable acoustic signatures of bazillions of recordings and figure out how to eliminate the acoustic effects of each person’s listening room? Call me the Man of La Mancha.

 

Back to a straight wire with gain. Wouldn’t cables literally be the closest manifestation of this? If only. Cables have resistance, capacitance and inductance as well as other variables like skin effect and (thank you again Galen) different velocity of propagation across the frequency band. Then there are the impedance mismatches between amplifier, cable and speaker, or between the electronics in the system (for example, the preamp and amplifier) to consider. How to eliminate all that? Are you thinking what I’m thinking? Wireless signal transmission. But then, you need to have transducers at the signal source and the playback device to convert the wireless signal back to an electrical one, and how much fidelity are you going to lose in the process? (And here you have an argument in favor of integrated amplifiers or all-in-one components that reduce the number of wired connections.) Still, the idea of a completely transparent wireless technology is intriguing.

Getting into the realm of science fiction, how about bypassing an audio system altogether with some kind of a direct neural implant? Cochlear implants are already a reality and research is ongoing, so who’s to say an implanted high-end audio system couldn’t be done someday? Of course, the first thing you’d have to listen to would be Steely Dan’s “Aja.” (Dan fans will get the reference.) Even better – the implant could include a direct brain interface with a streaming service, so all you’d have to do is think of a song and it would play. You could adjust the sound just by thinking, and have a soundstage as vast as the Grand Canyon if you wanted.


Wired for sound: a cochlear implant. Courtesy of Wikimedia Commons/Blausen.com staff.

I’ve spent all this time considering the hardware. What about us, the actual people who are going to be listening to all this stuff? Could there be some way to put us into a state of mind or affect us in a way that makes our audio systems seem more “real”? I know what you’re thinking…but I don’t know if psychoactive drugs are gonna get you there. But seriously, could a drug be developed that gives us better hearing acuity? Or some kind of audio-enhancement hearing aid that lets us hear our systems with better fidelity?

One last thought. Many industry giants have worked to advance the field of high fidelity. Perhaps some great ideas have been lost, and are waiting to be rediscovered.

As Walt Disney once didn’t say, if you can dream it, you can do it.

Header image courtesy of Pixabay/Gerd Altmann.


Cable Design and the Speed of Sound, Part Two

Galen Gareis

In Part One of this series (Issue 130), Galen Gareis of ICONOCLAST cables and Belden Inc. began an extensive exploration into a critical but not often discussed aspect of cable design: the velocity of propagation (Vp) of audio signals. In this installment, he looks at practical ways to change the velocity of propagation and improve signal linearity for the benefit of better cable performance, and examines related subjects.

Also, introductory material on the subject by Galen and Gautam Raja is available in Copper Issues 48, 49 and 50.

What can we do with the insight we gained in the last installment, that Vp fundamentally changes with frequency? Can we do something to improve the signal linearity in audio cables?

Here is an example of what might happen in a cable that is designed to have varying levels of Vp differential based on managing its capacitance. We can do this by varying the size of the insulation, or even the insulation material. For simplicity, we’ll hold DCR constant to isolate the effects of capacitance on Vp.

Notice that the change is well within the audio range, and the Vp change is pretty extreme on an absolute basis. (Short cable lengths can allow us to ignore Vp non-linearity; as a first approximation they are too short to have a meaningful propagation time difference.) What if we do not want to ignore this issue, but instead achieve a better balance in performance by manipulating other cable parameters? If our objective is to make cable better overall, why not? Granted, electromagnetically better cable is far more complex and expensive to make.

In the example above we looked only at capacitance. But inductance is determined by loop area. The farther apart the wires are from one another (the larger the loop area), the lower their capacitance. But the equations for inductance tell us that moving the wires farther apart also raises the inductance. What to do?

A cable’s capacitance can be designed in several ways. If we want to retain a low inductance, which keeps changes in the phase of a signal and its resultant frequency-response anomalies to lower levels, we need to keep the inductive loop area small. The initial phase alignment (the time alignment of the signals applied to the cable) is small in audio cables, so many feel it can be ignored. The real issue is how much worse that alignment gets as the signal travels down the cable, thanks to the Vp differences across frequency. This is called group delay: how far the fastest and slowest signal components separate going down a cable after their initial time alignment. A cable shouldn’t make frequency time alignment worse, but it does.

If we consider a square wave, we can get a better idea what group delay and phase delay are, since in order for a square wave to maintain its integrity, its frequency components have to be kept in proper phase alignment with one another.

As iowahills.com puts it (http://www.iowahills.com/B1GroupDelay.html): “A square wave is square only because its frequency components are in proper phase alignment with one another. If we pass a square wave through a device and expect it to remain square, then we need to ensure that the device doesn’t misalign these frequency components. A Group Delay measurement shows us how much a device causes these frequency components to become misaligned.”

Keeping the signal in correct phase right from the start is imperative, but group delay, which is caused by the differential in the velocity of propagation, is how a cable makes things worse.

Our objective is to attain the best cable performance possible. How do we do that?

One way is to lower inductance and capacitance. Thicker insulation does not lower inductance; it increases loop area (the space between the wires), which as we have seen increases inductance. To keep the loop area as small as we can for low inductance, but not increase capacitance, we need to use the most efficient dielectric(s) we can. Air is the best dielectric, and Teflon is the best solid material. A low “E,” or dielectric constant, in an insulating material will allow two wires to be as close together as possible while reaching the lowest possible capacitance.

When E, the dielectric constant, is high, the capacitance is higher at a set RF impedance. We can use the capacitance value calculated at RF throughout the audio band, because L and C are both fixed across frequency. Only at RF, however, is Vp = 1/SQRT(E). (We test RF coaxial cable capacitance at 1 kHz, for example.)

 

The graph above shows how capacitance and the velocity of propagation are directly related to the dielectric constant for a 100-ohm RF cable type. The capacitance value can be used at audio frequencies, but not the dielectric’s RF velocity.

At RF:

Vp = (1/SQRT (dielectric constant)) or Vp = (1/SQRT (L*C))

L and C are constant from low frequencies through RF with a set dielectric material. We’ll look at this in more depth later in the article.
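The RF relationship between dielectric constant and velocity is easy to check numerically. Here is a minimal sketch; the dielectric constants below are typical published values, assumed for illustration rather than taken from any particular cable:

```python
import math

# Velocity of propagation at RF as a fraction of the speed of light:
# Vp = 1 / sqrt(dielectric constant). Values are typical published
# numbers for common dielectrics, used here only as illustrations.
dielectrics = {
    "air": 1.0,
    "foamed PE (typical)": 1.5,
    "solid Teflon (PTFE)": 2.1,
    "solid polyethylene": 2.3,
}

for name, e in dielectrics.items():
    vp = 1.0 / math.sqrt(e)
    print(f"{name:22s} E = {e:4.2f}  Vp = {vp:0.2f} ({vp * 100:.0f}% of c)")
```

Note how a lower E pushes Vp toward the speed of light, which is why air is the reference dielectric.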

We certainly want to start with the lowest “E” value possible, and not just for its effect on capacitance alone. But in designing a cable, does capacitance have to be as low as possible and then we’re done? Not exactly.

The equation for low-frequency Vp, Vp = SQRT(2ω/(R*C)), also contains the variable R (resistance). Resistance is almost always considered a “passive” element, thought to be responsible for attenuation only, like turning a volume knob up and down. However, it influences Vp non-linearity, too. Higher DCR flattens the Vp curve through the audio band – but only if the DCR seen in each cable “circuit” is sufficiently isolated from other electrical paths. The data below shows what happens when resistance is varied while we hold the capacitance at 15 pF/foot. And do we even want zero R or C? What happens if we ignore the Vp differential and lower the resistance as far as we can?

Vp ACROSS LOW FREQUENCY BY AWG

The chart and table above show that if we decrease the wire size, which increases resistance, we can also manage the Vp differential across the audio band. This allows us to use lower capacitance if, if, IF we can utilize higher DCR wire. Designs can use multiple smaller wires, but beware what happens to C and L when we use more aggregate wires to reach a low bulk DCR.

Physics says we can’t speed up the low frequencies; we can only slow down the higher frequencies. The curve flattens below 250 Hz. And rather than accepting too high a capacitance as the price of lowering the Vp differential, we can instead raise the wire DCR, which lets us lower the capacitance. We balance R and C.
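The R-versus-C trade can be sketched from the full telegrapher’s-equation velocity, which reduces to SQRT(2ω/(R*C)) at low frequency and to 1/SQRT(L*C) at RF. The per-meter L, C and AWG resistance figures below are rough illustrative assumptions, not the article’s measured table:

```python
import cmath
import math

# Vp from the telegrapher's equation: Vp = omega / Im(gamma), where
# gamma = sqrt((R + j*omega*L) * (j*omega*C)), G assumed ~0.
c0 = 2.998e8                    # speed of light, m/s
L = 0.2e-6 / 0.3048             # 0.2 uH/ft -> H/m (assumed value)
C = 15e-12 / 0.3048             # 15 pF/ft -> F/m (from the text)
awg_r = {12: 0.00521, 18: 0.0210, 24: 0.0842}   # ohm/m, approx. copper DCR

def vp_fraction(f, R):
    """Velocity of propagation at frequency f, as a fraction of c."""
    w = 2 * math.pi * f
    gamma = cmath.sqrt((R + 1j * w * L) * (1j * w * C))
    return (w / gamma.imag) / c0

for f in (20, 100, 1000, 10000, 20000):
    row = "  ".join(f"AWG{g}: {vp_fraction(f, r):0.3f}" for g, r in awg_r.items())
    print(f"{f:>6} Hz  Vp/c ->  {row}")
```

Smaller wire (higher R) lowers Vp at every audio frequency, which shrinks the absolute Vp spread between the bottom and top of the band, exactly the flattening described above.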

Observations

Let’s look at a few things to better understand what is available to us in designing cable, and where these factors are working. R, resistance, isn’t stable with frequency, as a wire’s skin effect (its self-inductance, which is predominant at higher frequencies) and its proximity effect (inefficiency in passing current, which is predominant at lower frequencies) can cause attenuation that varies with the audio signal’s frequency.

The tables below on inductance and capacitance show a few cables’ response across frequency. They are close to a constant to the first approximation. Do we see this in audio cables?

Does the ICONOCLAST cable really show flat L and C, too? It does. The table below shows R, L and C measurements up to 1 MHz for an earlier design prototype. Notice that Rs (swept resistance) increases as we go up in frequency. Why? Some of this is caused by skin effect and some is the result of closely-spaced conductor wires. The proximity effect concentrates the current flowing in the same direction near the wire surfaces nearest one another, and pushes the current apart in two closely-spaced wires that carry current in opposite directions. Both of these factors superimpose to decrease wire efficiency (less current uniformity across the wires’ cross sections).

ICONOCLAST SPEAKER CABLE PROTOTYPE, LUMP (TOTAL VALUE) / ADJUSTED TO PER-FOOT ELECTRICALS

Higher frequencies, which don’t require much current, need a larger surface area, not the overall volume of wire, to propagate with low attenuation.

High-current applications need a larger wire volume for low attenuation at low frequencies.

If you have high current and high frequencies, you get a double whammy for attenuation. This kind of wire would be very inefficient.

This chart, which was shown earlier, shows that the impedance curve is non-linear and needs three separate approximation equations to characterize three different regions of test performance. The low-frequency equation contains the imaginary component “j” times omega (ω), where omega equals 2πf. We saw this set of variables in the Vp equation at low frequencies, too: Vp = SQRT(2ω/(R*C)). At RF, capacitance relates directly to velocity through Vp = 1/SQRT(E).
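The low-frequency impedance rise can be sketched from the general characteristic-impedance formula Z0 = SQRT((R + jωL)/(G + jωC)) with G ≈ 0. The L and C figures below are the per-foot ICONOCLAST values quoted elsewhere in this article; the R value is an assumption for illustration:

```python
import cmath
import math

# Characteristic impedance Z0 = sqrt((R + j*omega*L) / (j*omega*C)).
# At RF this settles toward sqrt(L/C); at low frequency the R term
# dominates and |Z0| rises roughly as 1/sqrt(f).
L = 0.08e-6 / 0.3048   # 0.08 uH/ft -> H/m (figure quoted in the article)
C = 45e-12 / 0.3048    # 45 pF/ft -> F/m (figure quoted in the article)
R = 0.01               # ohm/m, assumed aggregate DCR for illustration

def z0(f):
    """Magnitude of the characteristic impedance at frequency f."""
    w = 2 * math.pi * f
    return abs(cmath.sqrt((R + 1j * w * L) / (1j * w * C)))

for f in (20, 60, 500, 1000, 20000, 1e6):
    print(f"{f:>9.0f} Hz  |Z0| = {z0(f):7.1f} ohms")
```

Even with very good L and C numbers, |Z0| climbs into the hundreds of ohms below a few hundred Hz, which is the “impossible to match 4 to 16 ohms” problem the text describes.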

Why is increasing impedance through audio frequencies a problem? Below we see a graphic from one of Paul McGowan’s Daily Posts that shows the energy spectrum of typical music. (I have these types of graphs too, but Paul’s is better than mine!) If we want to match the power transfer of the cable to the musical spectrum, we need to do it in the region with the highest average power distribution.

What cable needs to do is match the impedance where the most power is being distributed, the power energy spectrum. Where is this region, actually? It is below 500 Hz, smack dab in the region where the impedance curve rises. This makes a true cable-to-low-impedance load match technically impossible. It is great to think about, but the physics says we can’t get there: Vp drops too much and too fast as frequency drops, raising the impedance just when we really need it lowered. At “zero” Hz, Vp is by definition zero, so we know we’re going to see a change with frequency.

“Tweeter Power,” August 31, 2020, by Paul McGowan.

There is a nearly 1,000-watt peak at 60 Hz. The impedance of a cable can’t be close to 4 to 16 ohms in this region due to Vp non-linearity. The physics says low-impedance measurements through the audio range can’t be achieved with referenced open-short impedance measurements – and yet the open-short method is the closest to how audio cables actually work, and is how they need to be measured.

True, and honest impedance graphs of a speaker cable show this to be the case. Better cable can indeed decrease the low-end impedance rise, but not eliminate the physics we are working against that cause that impedance rise. The impedance and phase curves below exhibit proper open-short impedance measurements.

ICONOCLAST is a 0.08 uH/foot and 45 pF/foot design with an 11.5 AWG aggregate, all very good values for a complex design with 24 0.020-inch wires in each polarity. This serves to flatten the Vp curve and, as we have seen, lower the impedance rise at low frequencies.

Below is what a typical ported-speaker impedance trace actually looks like. The solid line is impedance and the dashed line is phase. Superimpose this onto the above graph. This is the true situation we have to deal with, and illustrates how speaker cable really “matches” with a speaker. We can’t match “8-ohm” cable.

What do some other cables do at low frequencies? The chart below graphs several measured cables. If we look at good old POTS (Plain Old Telephone System)-type cable, we see cable that measures 600 ohms at 1 kHz! Yes, the low-frequency drop in Vp raises the impedance to about 600 ohms. We’ve reduced that effect to just 270 ohms in ICONOCLAST speaker cable, but 2 to 16 ohms is impossibly low in a referenced open-short test. That’s because the capacitive reactance, Xc = 1/(2πfC), keeps going up (increasing opposition to AC electrical energy flow) as the Vp keeps going down (raising impedance even more).
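The capacitive-reactance point is easy to verify numerically. A quick sketch of Xc = 1/(2πfC) for a 45 pF/foot cable, over an assumed 8-foot run (the run length is an illustration, not from the article):

```python
import math

# Capacitive reactance Xc = 1/(2*pi*f*C) rises as frequency falls.
C_per_ft = 45e-12      # 45 pF/ft, the figure quoted in the article
length_ft = 8          # assumed run length for illustration
C_total = C_per_ft * length_ft

for f in (20, 60, 1000, 20000):
    xc = 1 / (2 * math.pi * f * C_total)
    print(f"{f:>6} Hz  Xc = {xc:>12,.0f} ohms")
```

The reactance climbs by three decades between 20 kHz and 20 Hz, the same direction as the impedance rise in the open-short measurements above.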

In the next and final installment we’ll look at resistance, the effects of various dielectrics, wire geometry, skin effect and other considerations, and summarize our findings.


Talking With Nason Tackett of Hear Technologies

John Seetoo

When it comes to comparing audio equipment like speakers or headphones, it is difficult to avoid biased opinions from manufacturers and designers, since it is inevitable that each one has formulated its business model based on an individual preference and an ideal concept.

With this in mind, it stands to reason that a company that makes amplifiers, consoles, or mixing systems might have a more neutral perspective about the types of devices that might be used with its equipment, and the various pros and cons of each.

Nason Tackett is the senior design engineer of Hear Technologies. The company has already made a reputation with Tackett’s cutting-edge monitor mixing units and recording interfaces, as well as other audio/video support devices. Hear offers tablet-sized mixing consoles that allow musicians and vocalists to customize their respective headphone mixes for recording in the studio or for live performance. Although small, these consoles can submix up to 128 channels with a frequency response of 20 Hz to 20 kHz and a sampling rate up to 192 kHz.

Nason Tackett of Hear Technologies.

 

Nason graciously made time to share some insights about headphones, in-ear monitors (IEMs) and earbuds from the perspective of the company’s musician, producer and engineering clients, as well as looking at the gray zone that separates audiophile and professional audio equipment.

John Seetoo: Owsley Stanley, best known for his audio R&D involvement with the Grateful Dead and (sound and musical instrument company) Alembic, held a philosophy that hi-fi home audio gear and pro audio gear should have negligible performance differences apart from the durability requirements of pro audio gear. This is an outlook shared by Pat Quilter of QSC Audio (Copper Issues 118, 119 and 120) and John Meyer of Meyer Sound (Copper Issues 99, 100 and 101).

Do you share or differ with this philosophy?

Another question: apart from mixing capabilities, in what ways does your design of the Hear Back mixing units for musicians differ from or are comparable to audiophile headphone DAC/amps?

Nason Tackett: I do share this philosophy. If I am designing equipment that will be used in the creation of music recordings, I want the accuracy to be at least as accurate if not more accurate than what someone will listen to the playback of the recording on, even if [they will be listening on] a very high-end system. You want to hear all the detail [as well as] all the ugly parts as you make the recording. You don’t want to cover them up and then have someone hear them during playback on a [home] hi-fi system.

The headphone monitoring systems that I have designed are very much like audiophile headphone DAC /amps. The goals in designing them were to be accurate, to avoid coloring the sound, and [to be] as low [in] noise as possible. I like to be able to turn up the master volume and have no hiss. Thankfully, I am in a situation where I’m able to design the best piece of gear I can without anyone telling me to cut corners or cut costs. I just do what it takes to make it the best that I can.

I ran my own live sound and recording company for over a decade, so while designing these products I brought my pair of near-field studio monitors in that I was familiar with, and used them to critically listen to every product I made. I trusted my ears as much as what the test equipment was showing me.

JS: While IEMs are the likely default listening platform for musicians using Hear Back PRO monitoring units in a live setting, headphones of all types probably would be deployed when used in the recording studio. Can you discuss a bit about the differences in headphone types that clients have been using, and any advice for optimal use? For example, would someone in the studio using AKG headphones have an easier time than someone using Dr. Dre Beats?


A Hear Back PRO mixer at Muscle Shoals Sound Studio. Photo by Jessica Coleman.

NT: I will refer to headphones and IEMs (in-ear monitors) collectively as “phones.”

We see people using everything! There is no right answer to what someone should use. It depends on your needs. Some people think [that] using something that has a razor-flat frequency response is the right answer, but this is not necessarily true. Everyone’s hearing is different. People have hearing loss in different areas, so what works for one person might not for another. Your brain gets used to equipment [that] you use frequently.

People who spend the money to get a good-quality pair of IEMs molded to their ears often bring these into the studio to record with because they are used to the way they sound. And this makes good sense. If you don’t have phones [that] you are used to, I recommend visiting a trade show or a shop where you can try a lot of different types to [hear] for yourself. You will want to bring a recording that you are very familiar with and see if you can hear everything, especially the things you really need to hear to perform as a musician.

I don’t recommend a majority of the consumer phones because they are designed to listen to mastered recordings where the dynamics are very controlled. When monitoring live music, the dynamics are not as controlled so the phones usually get hit with a lot more energy that will typically damage consumer-grade phones. Consumer phones also tend to have exaggerated low and high frequencies, which may cause a musician to change their tone to compensate and result in a difference between what the musician and the audience hears. However, there are some really nice consumer phones out there that will work just fine in a live environment. It is important to investigate each design individually to find what works best for you.


Christopher Currie and Bruce Krombholtz using Switch Back M8RX monitoring interfaces at Spice Radio in Huntsville, Alabama. Photo courtesy of Spice Radio.

JS: Would there be a way for an engineer or musician to use open-backed headphones or even electrostatic headphones during studio sessions?

NT: It really depends on the application. In a studio, if they are in an area that is not very noisy and there are no microphones that could pick up the sound of the headphones, then someone could use open-backed or electrostatic headphones. These are good if someone wants to monitor audio but still be able to hear someone talking to them. They just don’t work if you need to block out room noise or if there are microphones that might pick up sound coming from the headphones. I have a pair of open-back headphones I love to use while designing and testing, but the room I [use them] in is not noisy. My co-workers do get to listen to whatever is in my phones, though.

JS: Hear Technologies has chosen Future Sonics as its preferred IEM supplier, and has stated a preference for their dynamic driver design over balanced armature-based technology (where an armature is balanced between two magnets and is connected to a diaphragm that produces sound). Can you please elaborate on the pros and cons between the two platforms, and in which circumstances, if any, would balanced-armature technology be preferable?


Future Sonics Spectrum Series G10 in-ear monitors.

NT: Like many aspects of audio, IEM preference is subjective. We like Future Sonics’ approach because it’s simple and it works. They use one dynamic driver, and it works exceptionally well. They were also the first company to commercially produce IEMs, so they have been doing it the longest. [Other companies] have adopted armature technology from the hearing aid industry. [However], hearing aids are designed to reproduce voice frequencies and are made to be as absolutely tiny [and invisible] as possible. With IEMs, [on the other hand], nobody is ashamed to be wearing them, so you can get away with a larger piece in your ear.

Armature technology is limited in the frequency response it can reproduce. To overcome this, designers will use multiple armatures, each reproducing a different frequency range. This leads to a much more complex design. More parts logically means more possible failure points. Also, because the Future Sonics design is simple, the cost is significantly less than multi-armature technology. The number of well-known musicians who use Future Sonics attests to their quality.

Ultimately, you’ll want to listen to a lot of different IEMs yourself, [again], using a recording you are very familiar with and see what sounds the best. Don’t let anyone tell you. Go listen!


Vocalist for Android Lust rehearses with a Hear Back OCTO mixer to create a custom monitor mix. Photo courtesy of Android Lust.

JS: As some musicians also like to hear the sound of the room when performing, they might only use one IEM in one ear, or IEMs or earbuds that are not sealed to enable them to hear some room sound. Do you have clients that subscribe to this, and are there any tips you would give them?

NT: The best choice really depends on the application. I don’t really recommend the open-back design for situations where the music you are playing can both come into the port of the phones [physically] and also through the phones electrically, because now you have multiple [sonic inputs] that are probably out of time and phase. [This] could throw you off or make things sound funny. If your stage is totally silent and you want to hear the audience, then maybe an open design could work, but you might still get reflections of the main PA coming back in at a delay, [which could] possibly throw you off. When in doubt, use a sealed design.

If you choose to use just one IEM, make sure you disconnect the other earpiece, as IEMs are designed to have back pressure on them. When you take them out of your ear, there is no back-pressure and you can easily damage the driver.

JS: When listening via a networked audio interface, are there challenges in delivering the same audio quality you’d get with a standard wired connection, or are there audible tradeoffs that can be heard?

NT: Once [an audio signal is] converted to digital, the only thing you really need to worry about is the latency. It’s not really going to matter if [you’re listening via an] Ethernet or an AES/EBU, S/PDIF, MADI et cetera connection. They just move bits from one location to the other. In the very beginning, digital did not sound as good as analog, but now with all the oversampling that the D/A converters do, it’s pretty much an exact copy of the signal, especially with a sampling rate of 96 kHz. Anything higher than this really has no value because even the highest harmonics of the musical instruments that we are capturing are under 40 kHz (and the sampling rate needs to be greater than double the frequency you want to capture). Going higher just wastes more data.

Latency, on the other hand, can become a problem. It’s not so much an issue for live sound and using stage monitors because the time it takes the sound to get from the speaker to your ear is in the milliseconds range. The latencies for digital audio are typically 6 milliseconds or less. But issues arise when you are using IEMs. You have that little speaker so close to your eardrum that there is almost zero acoustical latency. Those few milliseconds are going to be noticeable. It won’t sound like a delay; it will sound more like a comb filter or like if you cup your hands in front of your face and talk.
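Nason’s comb-filter description can be put in rough numbers. If the direct (bone-conducted) sound mixes with a copy delayed by the monitor latency – assuming roughly equal levels, a simplification – the notches fall at odd multiples of 1/(2 × delay). A small sketch:

```python
# Comb-filter notch positions when a signal sums with a delayed copy
# of itself: nulls at f = (2k + 1) / (2 * delay), k = 0, 1, 2, ...
# Equal-level summing is assumed here purely for illustration.
for delay_ms in (0.5, 2.0, 6.0):
    tau = delay_ms / 1000.0
    nulls = [(2 * k + 1) / (2 * tau) for k in range(4)]
    pretty = ", ".join(f"{n:,.0f} Hz" for n in nulls)
    print(f"{delay_ms} ms latency -> first notches at {pretty}")
```

At 2 milliseconds the first notches land at 250, 750 and 1,250 Hz, squarely in the vocal range, which is consistent with singers noticing even small latencies.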


Switch Back M8RX preamp and multichannel headphone monitoring interface, featuring zero-latency analog mixing.

Singers and drummers really tend to notice even 2 milliseconds. [Singers] hear an acoustic copy of their voice in their own head through bone conduction, and if the in-ear is a couple milliseconds behind, it will sound weird. A lot of singers have had to use an analog monitor console just for their in-ear mix so there is zero latency just for their vocal channel. So whatever digital protocol is used, the latency should be considered. If it is below 2 milliseconds (our digital equipment ranges from .25 to 2 milliseconds) it will likely be a non-issue.

JS: What are some of the differences between pro IEMs and consumer or audiophile in-ear headphones?

NT: In general, consumer earbuds are not designed to keep up with the energy and dynamics of live music. [However], there are some pretty well-designed consumer earbuds, so it’s hard to paint all [of them] with a broad brush. We do notice that many praise and worship bands seem to use consumer earbuds. We find out about it when they complain about distortion and it turns out they are using a cheap pair of earbuds – those just can’t keep up with the energy of live music. Our equipment can only sound as good as the phones you hook up to it.

Every design is different, so it is important to look at each individually. Many consumer and professional earbuds share the same drivers. There are good and bad consumer phones and good and bad professional designs. Professional [models] tend to have higher power handling capacity and more rugged [construction]. Consumer [phones] also tend to have exaggerated low and high frequencies. But there are plenty of exceptions. Comfort is another factor. With IEMs, having a molded design is essential to long-term comfort. If you are going to have something stuffed in your ears for hours, it needs to be comfortable.

JS: How has the pandemic affected your company?

NT: We have amazing and dedicated customers. Like everyone [else], 2020 sent us some unexpected challenges. But we’ve increased our online outreach and digital trade show presence, and we’ve been able to connect with people we wouldn’t otherwise be able to reach. That has been really good. And since we build our products in the US we’ve been able to keep everything in stock, which has been an advantage. Our team has made sure Hear Technologies’ ability to respond quickly to orders and support hasn’t changed.

JS: Is there anything else you’d like to add?

NT: Go listen to it before you buy it!


Drummer Alvin Ford Jr. uses a Hear Back PRO mixer and in-ear monitors at Esplanade Studios in New Orleans. Photo courtesy of Rachel June/iamnotracheljune.com.
Special thanks to Katie Stallcup of Hear Technologies for arranging this interview.

 


Chick Corea Returns to Forever


WL Woodward

Armando Anthony Corea, known to the world as “Chick,” passed to the other side on February 9. We’ve been losing music icons in the last few years because of an aging population. But this one hurt. The news was shocking: not only had he just released Trilogy 2 at the end of 2018, but we had no idea Chick was ill, and his passing took everyone by surprise. Sure, he was 79 years old. But the closer I come to that maturity the younger it sounds.

Chick Corea could have died at 105 and that would have been a sad day. But 79? Nope. No sir, way too soon. He was in no way done sparkling. There is a YouTube vid of him playing the Blue Note with Patitucci, Weckl, Marienthal, and Gambale on Chick’s 75th birthday and he was finer than frog hair.

Wait! Here it is!

 

Shiver me timbers. Smiles all around. Except Weckl, who always looks like Mitch McConnell back there. Flippin’ Patitucci still looks like a kid.

Chick’s father was a working musician. The lad was exposed to jazz and other forms at an early age. Chick remembers his dad and friends listening to Miles with tears coming down their faces. One of my favorite stories was related by Chick himself in an interview. At four years old his sleeping area was the couch in the living room. He remembered hearing workmen outside the third-floor picture window. The window had been removed, and an upright piano appeared on a block and tackle. His mom had bought this for little Chickie for $45 including cartage. Those were the days. The upright was installed in the living room and as Chick related, “that’s where it all started.”

As with most musicians, Corea started playing with high school groups. He later moved to New York and studied music at Columbia and eventually Juilliard. Amazingly, he was disappointed in the experience and quit formal education. You know, I have heard that story from others. Interesting.

In 1962, at the age of 21, Chick was playing for Mongo Santamaria, then Herbie Mann and Stan Getz. Corea started getting attention quickly. He released his first album, Tones for Joan’s Bones, in 1967. Here is the title track. I love this. That’s Steve Swallow on bass and Joe Chambers on drums. A review of the album at the time stated, “at a time when the popularity of jazz has waned there are still examples of real beauty and this album is one.” You already hear the “conversation” style that would wind through Corea’s music his entire life. Note the drums are on one side and the piano on the other, with the bass full on like it’s the lead. That is classic Chick man, even early on.

 

I am not going to do a Chick Corea discography. That has been done many times and by better men. I’d rather touch on periods or compositions that influenced me and millions of others.

In 1968 he released Now He Sings, Now He Sobs, which contained a song that would become a jazz standard, “Windows.”

 

In mid-1968 Chick got lucky when Herbie Hancock, who had been playing with the Miles Davis Quintet, contracted food poisoning on his honeymoon in South America. Tony Williams contacted Chick and told him Miles wanted Chick to fill in on a date in Baltimore. Chick played and impressed Miles enough that Miles kept Corea on. Sorry Herbie. I have heard Hancock tell this story completely without rancor. Hancock was a big fan of Corea’s and was happy for him. Herbie had some exploding to do anyway and commenced to do just that.

Corea left Miles in 1970, along with Dave Holland, to form an avant-garde group, then did a few solo albums. By 1971 he had put together one of his most iconic bands, Return to Forever. Their first album was Latin-influenced, but by the third album, Hymn of the Seventh Galaxy, Corea had been blown away by John McLaughlin and the Mahavishnu Orchestra. He heard the power of the fusion of rock and jazz and wanted to go exploring. He replaced the earlier members of Return to Forever with Lenny White on drums/percussion and Bill Connors on guitar, and kept Stanley Clarke on bass.

This period of 1973 – 1976 saw the release of five more albums which, along with albums by Weather Report and the Mahavishnu Orchestra, were quintessential jazz-fusion staples and turned all of us on our ears.

Weather Report was my first exposure to fusion and Jaco was my hero. (He joined the band in 1976.) Because of Jaco, I even pulled the frets out of my Fender Jazz Bass to make it fretless, and changed to Rotosound roundwound strings. Still my precious.

When I listen to some of these works today I still get that chill, the sense of being in another reality, a need to study and immerse and revel. Return to Forever will continue to delight the senses.

In 1974, a 19-year-old (!) Al Di Meola joined the Return to Forever lineup, and it was this ensemble that released my personal favorite, Romantic Warrior. Here doing the title track are Chick Corea, Stanley Clarke, Al Di Meola, and Lenny White. Lenny White…crisp as a fall apple.

 

Ridiculous.

Return to Forever had developed within Chick Corea a lyrical style that no longer needed vocals, a conversational style that was really evident in all of Corea’s live performances, and a melodic sensibility that came through no matter the style he would utilize to express himself for the remainder of his career.

Besides Corea’s virtuosity and his magnificent approach to melody and grandeur, I will always cherish having watched him live. With a band he would constantly alternate between watching the other members and the audience, his lips moving as if speaking, even while ripping off amazing riffs and solos. He rarely looked down at his instrument.

Listening to his groups I am always reminded of Duke Ellington. Both surrounded themselves with the best musicians they could find, then wrote for them as soloists, recognizing their unique styles and incorporating each into their music.

By the mid-1980s he had moved in a new and exciting direction. With the Elektric Band he released an album of often rowdy arrangements that were just a blast to listen to. The new lineup featured Scott Henderson and Carlos Rios on guitars, John Patitucci on bass and Dave Weckl on the kit. I had their first CD and I forced my son to listen to it so many times that I get the eye roll when I talk about this band to this day. Eventually Rios and Henderson were replaced with guitarist Frank Gambale. Here we have the Gambale lineup doing a song from the first Elektric album called “Rumble.” Dig the horrific robot imitations. Remember, we’re talking mid-1980s here. Check out the monster 6-string bass Patitucci’s playing.

 

An offshoot, the Chick Corea Akoustic Band, followed, featuring a pared-down lineup of Corea, Patitucci and Weckl. Here is the first song off the first album, “Bessie’s Blues.”

 

Yep.

I was immersing myself in all things Chick after his passing (isn’t that always the way?) and came across an interview with Corea, Stanley Clarke and Lenny White, ostensibly discussing the Return to Forever band. They were talking about a piece from Romantic Warrior, “Medieval Overture,” and Chick was saying how he liked to use overture concepts at the beginning of his works as a bombastic opening fantasy, which of course is what an overture is. White remembered a favorite overture that they had never recorded. Chick leans over Clarke and says, “I recorded that with the Elektric Band.” Lenny fakes an ice pick to the neck and slumps over. Clarke leans back and says, “Oh. Those guys.” Clarke then pulls the imaginary ice pick out of White’s neck. Hilarious. I’d never thought of it before, probably because Corea had so many different lineups, but these guys probably had a bit of a rivalry going on.

Chick Corea constantly played in duet situations as well. In the 1970s he began working with Gary Burton, and that musical relationship went on for 40 years. Chick performed duets off and on with Herbie Hancock as well. In 2007 he recorded an album with Béla Fleck, The Enchantment, just the two of them in a studio. I have that CD and it’s a monster.

Corea’s spirit encompassed all forms and all folks. His generosity in music is documented in his works and his friends. Carlitos del Puerto, a wonderful bass man who played with Chick for years and finally in the Spanish Heart Band, talks of Corea’s rehearsal persona. “You’d make a mistake, and Chick would stop and start laughing. Then, ‘OK let’s go again,’ still chuckling.”

Chick Corea would play with anyone who asked, and who had talent. I wish I had his phone number in the day. Alas, I had neither the number nor the talent.

Chick Corea went on to follow his muse with classical, Latin, and fusion themes his entire career. Every time I poked my head in to see what he was up to, he was doing something marvelous. His was a talent of giant stature, and as a true jazz icon he leaves us breathless.

I would be remiss to not send condolences to his family, especially his wife Gayle. Chick and Gayle were loving, kindred spirits. Gayle’s reminiscences of Chick working on something in his studio, then running upstairs to haul her downstairs to show her something astounding, were heartwarming and now heartbreaking.

Let’s leave with a sense of wonder and another pair of kindred spirits. This last is Herbie and Chick live in Germany in 1978. They start out plucking the strings inside the piano. Watch for Chick using the piano’s top support as a percussion instrument.

 

Header image courtesy of Wikimedia Commons/Ice Boy Tell, cropped to fit format.


An Overview of Audiophile Playback Software, Part One


Andy Schaub

It’s a safe bet that many Copper readers are interested in getting involved with higher-quality audio streaming services and digital music servers if they’re not using them already. In this series, Andy Schaub, a contributor to Positive Feedback and other publications, explains the technology behind streaming audio and what’s involved in getting set up with high-quality streaming and digital music playback.

Streaming Audio: A Glossary

In order to gain a fundamental understanding of the principles of streaming audio, we’ll begin with a glossary of terms. We’ll build upon things from there.

Streaming

In the context of home audio systems, streaming means to receive and play a digital data stream containing “bits” of music from somewhere on the internet (from a source like Spotify, TIDAL, Pandora, etc.) or from a local cache of digital data (music) files you have stored on a fixed drive or drives. In other words, there’s no physical media involved; you access and control the source, routing and playback of the music with software.

Server (Distributed Server System)

A server is usually a fairly powerful computer or dedicated audio component that is connected to a LAN (Local Area Network) or WAN (Wide Area Network) to perform tasks that are too intense for a client computer or other device that controls music playback. The control device can be a mobile phone, tablet, laptop or other device.

The server (the computer where the music resides or is streamed from) may perform tasks for more than one client and can also deal with information exchange between clients and the routing of music in the form of data files from one point to another. Many servers are called “headless,” because they don’t normally need a keyboard, screen or mouse and can be accessed over the web. However, they are still just computers and sooner or later you’ll need to reboot them, so you’ll need to have a monitor, keyboard, and mouse on hand even for such “headless” systems.

High-Resolution Audio

High-resolution audio is such a generic term that it’s hard to define outside of a context. For our purposes, we’ll assume that high-resolution audio means three things (all in the digital domain).

  1. Little to no loss of information even after the process of compressing and decompressing audio files.
  2. By convention, better than the CD-quality 16-bit audio bit depth and 44.1 kHz sample rate. Bit depth can be up to 24-bit with typical sampling rates of 96 kHz and 192 kHz, although other sample rates exist.
  3. There’s usually some attempt to ensure that the reconstructed musical waveform is as true as possible to the original, through various methods of maintaining the proper timing of the musical data with clock signals, and the use of optimal (and often minimal) filtering (AKA interpolation), using math to rebuild what might be missing in the original signal.

All this being said, if the music sounds convincingly “real,” then it’s high-fidelity audio, regardless IMHO of the actual measurable resolution or specs.
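To put the bit-depth and sample-rate figures above in perspective, here is a quick back-of-the-envelope sketch (my own illustration in plain Python, not drawn from any particular product) of the raw, uncompressed PCM data rates involved:

```python
# Rough, uncompressed stereo PCM data rates for the formats mentioned above.
# Illustrative only: real-world streams add container overhead, and lossless
# compression such as FLAC typically cuts these figures roughly in half.

def pcm_bitrate(bit_depth: int, sample_rate_hz: int, channels: int = 2) -> int:
    """Raw PCM bit rate in bits per second."""
    return bit_depth * sample_rate_hz * channels

cd = pcm_bitrate(16, 44_100)       # CD quality: 16-bit/44.1 kHz
hires = pcm_bitrate(24, 192_000)   # "hi-res": 24-bit/192 kHz

print(f"CD:     {cd / 1e6:.2f} Mbps")    # 1.41 Mbps
print(f"Hi-res: {hires / 1e6:.2f} Mbps") # 9.22 Mbps
```

In other words, a 24-bit/192 kHz stream carries more than six times the raw data of a CD-quality stream, which is why both your internet connection and your home network matter for hi-res streaming.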

Remote App

A remote app is a “lightweight” app (meaning that it doesn’t use a lot of system resources) that runs on client devices to control a music streaming system. Most apps don’t actually stream the music. They just tell the device (the computer or audio component that functions as the music server) where to get the files and where to send them to play. Things get a little complex after that, which we’ll go over later.

NAS – Network Attached Storage

NAS or Network Attached Storage refers, in general, to any data storage device that stores and retrieves data at very high speeds with little to no error or delay. Its purpose is to “serve” music to a system, via the transportation of digital files over a LAN or WAN as opposed to analog electrical signals flowing through a wire.

It’s often an array of fixed drives in a RAID (Redundant Array of Independent Disks) configuration, housed in a separate box with a simple operating system and no user interface.

Companies like Synology and QNAP make NAS units that serve both general-purpose and also home-entertainment needs. Melco is one brand that makes NAS products that are targeted specifically towards audiophiles. Some Melco and other brands have a built-in CD drive and audio ripping software to convert CDs to audio files – very handy if you want to transfer your CD library for easy-access listening.

Network Bridge

Traditionally, a DAC, or digital-to-analog converter, receives digital audio from a disc transport (such as a CD or SACD transport, or the output from a CD, SACD, Blu-ray or DVD player), using an S/PDIF, AES, USB, optical digital (Toslink) or other “legacy” connection. A network bridge is an interface component that takes the data – packets of information – coming from a server before the data goes to the DAC, sorts through the packets, splices them together, and buffers the information to create a stream of bits in which almost every “bit” is spaced exactly equally far apart in time, to reduce or eliminate distortions. The network bridge then converts that stream to a USB, Ethernet or other output and sends it to the DAC. Typically, you can control a network bridge using a remote app.

Firmware

Simply put, firmware is just software that’s loaded directly onto a chip into ROM (read-only memory). This allows the chip to serve a specific function much faster and more efficiently than if the software was running somewhere else. Some firmware can be updated by the user, some can’t. It depends on the architecture of the chip and if the product can be connected to an external source (such as a USB stick, to name one example) to do the update.

UPnP – Universal Plug and Play

UPnP, Universal Plug and Play, is simply a standard protocol and interface that allows streamers and data storage devices to communicate with each other. It’s a little like USB but at the software level and enables hardware devices to “speak the same language.” It’s sort of like a Rosetta Stone for streaming, applications, and devices. Here’s the Wikipedia definition:

  • Universal Plug and Play (UPnP) is a set of networking protocols that permits networked devices, such as personal computers, printers, Internet gateways, Wi-Fi access points and mobile devices to seamlessly discover each other’s presence on the network and establish functional network services for data sharing, communications, and entertainment. UPnP is intended primarily for residential networks without enterprise-class devices.

Let’s start digging more deeply.

What Kinds of Software Do Audiophiles Use?

There are many streaming services, systems and devices available, but only some are capable of delivering high-resolution/better-than-CD-quality audio. If you’re new to hi-res streaming, how do you sort it all out?

First of all, we need to step away from thinking of recorded media, whether digital or analog, as “software.” It’s actually data that can be stored, transmitted and reproduced. Software is something that tells a computer how to do something. Audiophile-oriented software tends to fall into one of three categories:

  1. Remote control apps
  2. Distributed server systems
  3. Firmware and DSP (digital signal processing)

Remote Control Apps

Remote control apps are a good place to begin, because they’ve actually been around for quite a while, mostly as UPnP (Universal Plug and Play)-compliant websites and web pages originally used in conjunction with early digital music playback devices from companies like Bryston and Magnum Dynalab. Notable current examples include apps like the Aurender Conductor app for Aurender music servers; the Simaudio MOON MiND controller; and Audirvana, Amarra, and Roon Remote, to name just a few.

All of these apps serve the same basic purpose, which is to give you a “rich” visual control interface for choosing and playing your music from a laptop, smartphone, tablet or dedicated hardware device. The music can be streamed from an internet service like TIDAL, Qobuz, Apple Music, Amazon Music, Spotify, internet-based radio stations and so on, or from a music server or NAS (Network Attached Storage) device. Numerous audio companies offer music servers, including Bluesound, Sony, NAD, Meridian, Linn, Innuos, Wolf Audio Systems, Technics and many others. NAS drives are available from a number of audio companies and other manufacturers. You could also just use a hard drive on your computer along with Plug and Play-compatible software – Twonky is one example – and a remote-control app.

Note that while music services such as Apple Music and others all have their own interfaces, dedicated apps like Aurender Conductor are often required to operate an audio company’s particular music playback component. These apps can also have other advantages, such as making all of your various playback devices, such as multiple hard drives and so on, look like they’re coming from a single source on the app – more about that in a bit – which is a very convenient way to access a music collection scattered on different devices.

Confusing? A little! Put together properly, though, you get to choose all the music in the world (almost literally) or albums and tracks from your own collection (in the form of digital files), just by tapping on a screen that might look something like this:


This illustration shows a Roon navigation screen (from the Roon Labs website).

Roon is a distributed music server system in which multiple “target” devices (DACs, basically, nowadays) and several remote apps are “seen” as a single entity on a single Roon interface.

The Aurender Conductor app and the Simaudio MiND app, as other examples, have much the same functionality but are designed to regard the target device itself (in these examples, an Aurender device or a Simaudio product) as the “brains” of the operation and the actual thing that delivers the sound. Alternately, Roon, Meridian’s Sooloos and other systems use a separate server computer as the “brains” to route and process the data files (aka “music”) to the “target,” a DAC of some kind behind a network bridge, or a “streaming DAC” or other connected device, like a Sonos system or Apple TV.

So – the remote apps all, at a minimum, let you control the “brains” (the “engine”) of the system, regardless of whether the “brains” of the system reside in the playback device itself or in the remote app, computer or in all of these.

Next, where does actual signal processing of the digital files occur? It can either take place in a dedicated DAC (digital to analog converter), or in a distributed music server system with built-in DSP (digital signal processing). Some systems are proprietary (the Meridian Sooloos as one example) and some are more universal (like Roon, or sonicOrbiter by Small Green Computer).

Distributed Server Systems

Distributed server systems are those in which the server is a dedicated computer that among other things acts as a “traffic cop” for the system.

Technically speaking, all processing and manipulation of a digital data stream is a form of DSP, because it really just means that you are doing things with sequences of numbers. However, DSP means something specific in most audiophile-oriented literature. It refers to the manipulation of said numbers to a necessary or even “better” effect (meaning better sound quality), and includes everything from digital volume adjustment (yes, that can involve DSP), to sample rate conversion, to manipulation of the frequency response for room-correction equalization.
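To make the simplest case above concrete: digital volume adjustment is nothing more than multiplying every sample by a linear factor derived from a decibel value. This little sketch (my own illustration, not any product's actual code) shows why even "just turning the volume down" means the output is no longer the bit-perfect original:

```python
def apply_gain_db(samples, gain_db):
    """Scale PCM samples by a gain expressed in decibels.

    Even this trivial operation is DSP: every output value is a
    recomputed number, so the stream is no longer 'bit-perfect'.
    """
    factor = 10 ** (gain_db / 20)  # dB -> linear amplitude ratio
    return [s * factor for s in samples]

# -6 dB is roughly half the amplitude (the exact factor is ~0.501):
quieter = apply_gain_db([0.5, -0.25, 1.0], -6.0)
```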

If you think of that “engine” as the place that does 90 percent of the audio-related processing (as opposed to non-audio-related functions like controlling the music selection), with the rest being done by the DAC itself, then it makes sense to put that engine onto a separate computer. This allows for much more powerful processing capabilities, the particulars of which will be covered later.

Some people feel that “digital signal processing” – any extra audio “enhancement” or deviation from the original “bit-perfect” data stream – is a bad thing. However, keep in mind that all digital data streams require some manipulation to be heard as music, so where does one draw the line?

All that aside, there’s a major practical advantage in having the server app run on a separate computer: it allows you to have multiple audio streams go to different devices simultaneously. For example, this allows different family members to listen to different music at the same time.

There’s a lot of effort nowadays to sell dedicated music-server-specific computers, like the Roon Nucleus, Small Green Computer i5, Antipodes CX, and others, but they’re all really just computers and they all run more or less the same kind of software. The advantage is pre-packaged convenience, which, admittedly, can be a major benefit.

One thing to consider in creating a digital music system: you’ll have to decide if you want to use a computer as a multi-purpose device to share running the audio server app in a distributed system along with other household tasks (like family members surfing the net or playing games), or if you want a dedicated server machine. The advantages of a dedicated computer/music server, of course, are more storage space and access to more and faster processing power without competing with other tasks.

You can configure a computer-based music system to be operated using anything from a smartphone or tablet-based device to a good old keyboard, mouse and monitor. I just use an old, mostly-dedicated 13-inch MacBook Pro and it’s fine.

Firmware and How It Relates to DSP

As we noted in the Glossary section, firmware is simply software that is written for a task, and resides on a specific device or chip in non-volatile memory. Examples of firmware include the software embedded in DAC chips that convert the digital data stream into analog information, aka music.

Where is the firmware? It’s everywhere. Almost everything a DAC chip does – not just the actual digital-to-analog conversion – is controlled by software embedded within (stored on) the chip itself. How does this relate to the quality of the music that you’ll hear? Consider: even at this level, with just one chip, the signal processing is at least somewhat distributed and probably not actually bit-perfect, especially once you consider jitter, or timing errors in the signal. It’s necessary to have the timing of the zeroes and ones that comprise the digital audio signal correct, which requires a temporal point of reference – a timecode or clock signal – and a very accurate oscillator or word clock. Without such firmware, you have a piece of rock, not a music-playback device. (Outboard master clocks are available for audiophile and recording studio applications.)
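To make the jitter point tangible, here is a toy model (purely my own sketch; real DAC clocking is far more involved): sample a 1 kHz sine wave at the ideal instants and again at instants displaced by random clock error, and compare the results.

```python
import math
import random

def sine(t, freq=1000.0):
    """An ideal 1 kHz test tone."""
    return math.sin(2 * math.pi * freq * t)

def jitter_error(jitter_rms_s, fs=44_100, n=1000, seed=0):
    """Worst-case amplitude error when a waveform is sampled at instants
    displaced by Gaussian clock jitter (a toy model, not a real DAC)."""
    rng = random.Random(seed)
    worst = 0.0
    for i in range(n):
        t = i / fs                                  # ideal sample instant
        t_jittered = t + rng.gauss(0.0, jitter_rms_s)  # displaced instant
        worst = max(worst, abs(sine(t_jittered) - sine(t)))
    return worst

# The amplitude error scales with the timing error: a microsecond of
# jitter is roughly a thousand times worse than a nanosecond of jitter.
print(jitter_error(1e-9))
print(jitter_error(1e-6))
```

The takeaway matches the text: the bits can be perfectly intact, yet if they are converted at slightly wrong moments, the reconstructed waveform is slightly wrong, which is why accurate word clocks matter.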

What it boils down to is that all of digital audio is essentially digital signal processing. However, let’s assume for the sake of argument that when most people talk about “DSP” with respect to audio, it refers to deliberately altering the audio signal to suit a technical purpose or accommodate someone’s listening preference. Examples of the former would be digital RIAA correction of a phono input, or digital room correction EQ.

To leave you with a final thought for now: computer-based music listening systems can accommodate analog. I once tried ripping an album (converting the vinyl to a digital file) using a very low-noise FET-based mic preamp and a Goldring cartridge. The RIAA correction was all software-based using an app called Pure Vinyl, with no need to employ an outboard phono stage. Either because of less phase distortion, a more accurate RIAA EQ, less signal loss or all of the above, the result was amazing.
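For the curious, the RIAA playback curve that such software applies is a published standard defined by three time constants (3180, 318 and 75 microseconds). This is a minimal sketch of the ideal magnitude response (my own illustration of the standard curve, not Pure Vinyl’s actual code):

```python
import math

# Standard RIAA playback time constants, in seconds.
T1, T2, T3 = 3180e-6, 318e-6, 75e-6

def riaa_gain_db(f):
    """Magnitude of the ideal RIAA playback EQ at frequency f (Hz),
    normalized to 0 dB at 1 kHz."""
    def h(freq):
        s = 1j * 2 * math.pi * freq  # complex frequency, s = j*omega
        return (1 + s * T2) / ((1 + s * T1) * (1 + s * T3))
    return 20 * math.log10(abs(h(f)) / abs(h(1000.0)))

print(round(riaa_gain_db(20), 2))     # about +19.3 dB of bass boost at 20 Hz
print(round(riaa_gain_db(20000), 2))  # about -19.6 dB of treble cut at 20 kHz
```

Doing this in the digital domain, as the author describes, sidesteps the component tolerances of an analog phono stage, which is one plausible reason the software-based result sounded so good.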

Header image: Audirvana app, from the Audirvana website.


The Big Move, Part One


J.I. Agnew
Ten years ago, I moved to a new building carrying a couple of truckloads of equipment, with which I was planning on building up a mastering studio, an electronics workshop and a very basic machine shop, where I could work on all kinds of audio equipment. What started as a fairly small operation, built up on the leftovers of my previous business, quickly grew and soon outgrew the building it was housed in.
The Neumann-based custom disk mastering system at Magnetic Fidelity, with an MCI JH-110M preview head tape machine visible in the background.

The studio developed into a recording facility, along with a mastering suite, complete with a disk mastering lathe to cut lacquer disks for vinyl record manufacturing. Within a few years, the number of tape machines in the facility exceeded 20, in various formats including stereophonic, monophonic and multitrack. Some of them were rare, custom-made oddball machines.

 

Several tape machines in a row at the tape editing room at Magnetic Fidelity.

The collection of vintage guitar and bass amplifiers also grew rapidly, enhanced by various electric organs and my custom modular synthesizer, known by his stage name “Bob,” as well as a respectable collection of highly sought after effects units. Massive plate reverbs were constructed and installed, along with various more reasonably-dimensioned spring reverb boxes.

 

A home-made analog modular synthesizer called “Bob,” which the author started building when he was 16 years old. It eventually escalated into an audio-visual synthesizer. Photo courtesy of the author’s vault of audio adventures.

In the corners of the studio some weird instruments were lurking around – circuit-bent Casios (inspired by Q. R. Ghazala’s work), vintage glockenspiels, a self-made Theremin, blues harps, an old police band drum, and a World War II air-raid siren.

 

A vacuum tube electrochemical audio synthesizer, designed and built by the author: It generates audio signals as a result of an electro-chemical reaction in a reactor cell, filled with a liquid electrolyte solution.

The operation of the studio generated a large number of master tapes, test pressings, stampers, mothers and reference cuts on lacquer, along with calibration tapes and records, safety copies, archival copies, working copies, cassette tapes, CDs, and so on. This steadily growing collection necessitated a generous amount of storage space, consisting of properly ventilated shelves in a climate-controlled environment.

 

The Agnew Analog electronics workshop: tube tester, spectrum analyzer, oscilloscope on the bench.

The electronics lab also grew accordingly. It was originally intended to cater to the maintenance of all the studio equipment, but soon evolved into a boutique manufacturing operation of custom audio equipment. It started with a couple of soldering stations, an oscilloscope, a signal generator and various multimeters, but soon extended to include tube testers, LCR bridges, various ’scopes, spectrum analyzers, coil and transformer winding equipment, various power supplies, measurement transducers, vibration analysis equipment and a lifetime supply of rare electronic components. When other studios saw what we were doing with our tape machines and disk recording lathes, they started asking us to work on their machines too, which we did. We then started manufacturing custom effects units, synthesizers, tube amplifiers and specialized electronics for mastering studios.

 

The Magnetovolt Beyonder is a massive 200-watt vacuum tube musical instrument amplifier, based on a unique transformer-coupled circuit invented by the author.

As if this wasn’t already intense enough, the machine shop also grew out of proportion, to include some of the world’s finest lathes, milling machines, measurement instruments, precision work-holding devices and eventually a full blown mechanical engineering workshop.

 The author engaged in precision machining on a 1930s Lorch precision lathe: The machined parts are so tiny that a microscope is required!

This of course included a stock room, where all kinds of raw material to be worked on (along with tons of packaging materials) would be stored, plus heat treatment facilities for hardening parts, grinding, lapping and polishing facilities and of course some basic lifting equipment (a crane and associated equipment) to be able to lift and move massively heavy machines around. Extensions to the building were constructed, to house even more equipment, including a “pneumatics” room, where industrial air compressors, vacuum pumps, air filters and regulators lived. The machine shop became the basis of a new business for making precision parts for disk recording lathes, tape machines and transducers. Word spread and we were soon making parts for many of the world’s finest studios, record pressing plants and original equipment manufacturers.

 Small parts for disk recording cutter heads, machined by J. I. Agnew.

All of the above required a specialized HVAC (heating, ventilation and air conditioning) system as well as a very complicated electrical installation, generating clean AC power from a massive battery bank, in both 60 and 50 Hz, 115 and 230 VAC, with separate supplies for lights, HVAC, analog audio electronics, digital audio electronics, and transport systems for lathes, turntables and tape machines. There was also a 380 VAC three-phase supply to the machine shop and balanced power (positive phase and negative phase instead of phase and neutral) to the audio areas and laboratory instruments.

 

Part of an electrical board during the installation phase in the workshop.

Every ten years, I seem to be making some kind of major life change and this decade was no exception: Shortly after the birth of my son, Nicholas Thermion Agnew, we decided we needed a more comfortable living space, in an area which would be more exciting for a child to grow up in. At the same time, we realized that for the past couple of years, the primary factor limiting further growth of the business side of things was that we had completely run out of space! Our building was totally packed full of equipment and all possible arrangements for creating more space had already been implemented long ago. It was impossible to put anything more in there and it was even getting hard to work, as the workshop was permanently full of several lathes, tape machines and other projects we were working on for our customers, in addition to our own equipment that was permanently located there. I had already gotten used to having to climb over boxes to reach my equipment, to the point where my wife Sabine eventually refused to enter the lab due to safety concerns…

 

One of the several custom lathe projects filling up the lab: a heavily-modified 1940s Presto lathe, converted to direct-drive and computer control!

So, we started looking for a new building, but kept on dismissing building after building as unsuitable for our purposes. We were beginning to think that it might be impossible to find what we needed and started considering the idea of just buying land and constructing the building from the ground up, although it was clear that with running a demanding business and my fatherly duties, I would never find the time to be personally involved in such a construction project.

 

Beautiful countryside around the new building. Photo courtesy of Sabine Agnew.

But then the perfect building was suddenly found: a quiet location in the beautiful countryside, secluded enough to offer privacy for celebrity musicians and exceptionally low levels of ambient noise, but still in close proximity to the city infrastructure, with shops, restaurants, bars, luxury hotels, VIP-grade private hospitals (in case the band members disagree on who shall sing the chorus) and some of the world’s most attractive beaches, all within a short car journey. The building itself was roomy, with distinct architecture and very solidly built. Our initial measurements showed that it fell comfortably within the NC-20 (noise criterion) line even before any sound isolation or acoustic treatment. So, we decided it was time to move our entire business and build a new studio and workshop in the empty shell of that new building!

 

An 8-ton Hitachi excavator on tracks, during the construction work necessary around the new building.

What had been built up over ten busy years now had to be packed up, loaded onto several large trucks, moved several hours away and put back together again as fast as possible, by the bare minimum of a crew, observing all safety precautions for the pandemic. What had taken ten years to build up, now had to be moved and rebuilt within weeks. Having constructed several studios and audio engineering laboratories for my own use over the years, as well as having worked as a consultant for many others, there were also plenty of new things I wanted to try out this time and I was determined to outdo all my previous efforts.

 

One of the several crates containing audio and industrial equipment that come and go on a regular basis.

In addition to all the equipment from the previous facility, we searched the world for rare items that would be needed in the new one, including a 120-year-old John Broadwood & Sons grand piano, discussed in Issue 129 and Issue 130. This created various logistics nightmares, which often involved lifting ridiculously heavy machines and maneuvering them in unbelievably tight spaces around both the old and the new buildings. The master plan called for a total of three forklift trucks and five cranes, three of which came in the form of 20-ton trucks. One of the forklift trucks would be required on a permanent basis, to cater for the requirements of daily operations even after the move was done. One of the neighboring businesses where our previous building was located had a forklift truck, and we had been working with them when we needed to load massive disk mastering lathes, and other equipment used in the industrial side of media manufacturing, in and out.

 

A vintage Toyota forklift truck lifting the Hardinge HLV lathe out of the old building.
The Toyota forklift taking the scenic route.

But our new building did not have suitably equipped neighbors, so it was time to buy our own forklift truck. (The other two forklift trucks that would be used for the move were an ancient Toyota and a strange vintage truck powered by a Perkins diesel engine.) We looked around and, in typical Agnew fashion, fell in love with a German BKS forklift, powered by a 1952 Daimler-Benz “oil” engine (which means it is designed to run on anything that can pass for oil, from diesel to corn oil, chip fat, or any mix thereof). Some quick fluid changes later, it was busy at work, carrying pallet after pallet of equipment.

 

A scene from the latest Mad Max film, where a global pandemic has forced audio professionals to take to post-apocalyptic oil-burning forklift trucks…no, wait, this is reality. The author next to his “new” BKS forklift truck, outside the new building.

Among the machines that were to be transported were some rare examples of the world’s most accurate machine tools, of the kind usually encountered in national laboratories and research institutes. There was a 1954 Hardinge HLV lathe, a 1961 Moore Special Tools jig borer, a 1930s Lorch optical lathe that came from the Leitz factory which made the microscopes found on Neumann disk mastering lathes (and the lens assemblies for Leica cameras), and an unusual Harnisch and Rieth milling machine, originally intended for manufacturing medical implants, along with various other museum pieces that could make seasoned mechanical engineers weep. Transporting these treasures made my hair a bit thinner from stress.

 

The author operating a forklift truck powered by a 1952 Daimler-Benz oil engine, carrying a 1954 Hardinge HLV lathe. Keeping it period-correct!

By comparison, moving the 1,000 lb. Broadwood piano and similarly heavy disk mastering lathes seemed easy, and the big tape machines suddenly felt small and lightweight! As for tube amplifiers? What, you mean that cute thing in the corner that you don’t even need a crane for?

 

Bringing in the Hardinge lathe: The forklift lowered it onto a custom-made industrial trolley, on which it is rolled to the new machine shop.

At one point, my assistant was carrying a box of thoriated tungsten filament triodes and told me: “This is suspiciously lightweight!” To which I replied, “Anything lightweight is most probably radioactive, so just make sure you don’t drop it!”

To be continued.

 

Inside the new building, a crane lifts the Hardinge lathe off the trolley and to its final position, using a load balancer to prevent it from tilting towards the heavier headstock side.
All images courtesy of Agnew Analog Reference Instruments unless otherwise noted.

A Time in Peter Tosh’s Jamaica


Ken Sander
In speaking with Eppy, he mentioned that he was promoting a concert in Jamaica with Peter Tosh as the headliner. A bunch of disc jockeys from Long Island’s FM station WBAI were going, and my then wife and I were invited too. This was a reasonably inexpensive vacation; it would just cost us round-trip air fare and an all-inclusive hotel package, and of course, admission to the concert was free.
Peter Tosh (with Robbie Shakespeare in background), 1978. Courtesy of Wikimedia Commons/TimDuncan.

Michael “Eppy” Epstein was the owner of My Father’s Place in Roslyn, Long Island. It was a well-established rock and roll club that opened in 1971 and has had a long and storied history. (After closing in 1987, it reopened at the Roslyn Hotel in 2010.) Hundreds of acts played there, many before they became stars, including Bruce Springsteen and the E Street Band, Talking Heads, Television, Patti Smith, the Ramones, the Good Rats, John Prine, The Police, Aerosmith, Hall and Oates and countless others.

The timing for the Jamaica trip was good for me. I was between tours, so I had the time, and so did Jessica, my wife at the time. Jessica was in early pregnancy, about three months along, and had just started showing. It was winter, so Jamaica and the beach would be a nice break from the bleak New York weather. This seemed like a good diversion, and with the impending birth of my son, I could not know whether such an opportunity would present itself again.

The concert was scheduled for February 24th at the Trelawny Hotel on the beach, and that was where we were staying. This was the first date of the 1978 Bush Doctor tour. His next date would be the One Love Peace Concert, which would take place two months later at the National Stadium in Kingston, Jamaica. After that concert, Tosh and the band went on to the Stateside portion of the tour. For most of the US dates they would be opening for the Rolling Stones and playing coliseums.

 

We all bought tickets on the same Air Jamaica flight and met at JFK. Once on board the stewardesses plied us with a rum punch drink that was delicious and quite intoxicating. The flight, which was noisy at first, became very quiet as most everyone fell asleep (make that, passed out) until landing. After picking up our luggage we boarded a bus that the hotel provided. Check-in was quick and we deposited our luggage in our room.

A quick change into bathing suits and we hit the beach. It was early afternoon and a beautiful day. The sun was bright, the beach was exceptional, the water was warm and there were plenty of beach chairs. Around 4:00-ish Jessica and I decided we had had enough sun for the day and went back to our room. After showering and unpacking we readied for dinner. We noticed we were sunburned, but not too badly, and congratulated ourselves for being smart enough to get out of the sun in time.

Ken and Jessica in Jamaica.

 

Dinner was served in a large room big enough to hold two to three hundred people, with big round tables that seated eight. The dinner was fish stew, and it was not bad, not great but OK. That was the dinner – no options. The servers were local and had a bit of an attitude. That is when I found out we were in a government-owned hotel. Every employee was a civil servant. Imagine a hotel run by the personnel of the Department of Motor Vehicles. Even when they tried to be nice, they could not quite get there. After dinner, a drink at the bar and then bed.

At breakfast the next morning I really saw the reluctance of the staff. None of the tables had a coffee pot. Instead, they had a couple of servers walking around the room with coffee pots. The idea was, you’d raise your hand or make some kind of signal and one of them would come over and pour you some coffee. Here’s the rub. The servers would walk around the room and almost never “see” or acknowledge a guest motioning for coffee. It was amazing and frustrating, especially since the Jamaican coffee was delicious. If it wasn’t so annoying it would have been funny. You could never make eye contact with them or get their attention. They were always looking in another direction. They had it down to an art form. If they refilled two cups of coffee a minute, that would mean they were working too hard. Another trick they’d use was that they’d pour a drop of coffee in your cup and say the pot was empty, and that they had to go back to the kitchen and refill it, thus putting the guest back at square one.

I did not know the extent to which the Jamaicans were angry, or that there was political violence in Jamaica, particularly between the Jamaica Labour Party (JLP) and the People’s National Party (PNP). An example of this was the Green Bay massacre, which had happened just a month earlier, on January 5th, 1978, in which five Jamaica Labour Party supporters were ambushed and shot dead. I did know that a few years earlier some American tourists had been murdered on a hotel golf course, but my situational awareness was not up to speed. These were bad times politically, the economic disparity had persisted for years, and things were coming to a head. This was not a civil war, but very close to one. The political parties vied for supporters through political patronage and their ties to Jamaican trade unionism. They needed tourist dollars, but on the other hand they resented the hell out of us.

After breakfast Jessica and I ran into some of our fellow passengers from the plane. This couple had been sitting next to us. They were badly sunburned and in real pain. The guy had gel all over his exposed areas. I asked him what the gel was, and he said, “Preparation H.” That surprised me and I said, “I do not think I would have thought of that.” He replied, “Look, if it shrinks hemorrhoids then it should help with sunburn.” Made sense – the pain from sunburn is from swelling. I said, “But how about taking an aspirin? That way you will not get that gel all over your room furniture and bed.” “That is a thought,” he answered, and then he and his girlfriend went back to their room and we did not see them again till the last day.

 

In the afternoon it started to cloud up, so we decided to take a ride on the hotel’s glass bottom boat. The boat’s route was through a passageway opening in the reef to a shallow bay just down the coast, where there were tons of colorful fish and plenty of underwater activity. It was okay, but as usual, the boat crew was not into it and the boat was grungy and showed signs of wear; the glass bottom was foggy. On the way back we were going through the split in the reef when a big wave came up behind the boat and almost swamped us. The crew was not paying attention and they were just as surprised as we were. The boat was pushed sideways and almost tipped over on its side. I then realized that the boat crew were not sailors but just hotel employees. This revelation really pissed me off – that the hotel would thoughtlessly jeopardize our lives by having incompetents crew the boat. A few minutes later we disembarked at the hotel. I was still pretty annoyed that my pregnant wife and unborn child were put in danger! I never thought we would have our safety threatened on a touristy glass bottom boat ride.

Later that night it started to rain. It continued for the next two days and without the sun there was nothing to do and it was really boring. I could not blame the resort; it was the weather, but still, we were on vacation with nothing to do.

Finally, on the day of the concert, the rain stopped and it started to clear up. It was not a beach day though, so some of us walked to a nearby forest and took a hike. It was lush and tropical, but after a few miles the humidity took its toll and we were dripping with sweat, so we turned back. Being pregnant, my wife did not come on the hike but instead took a book and sat near the beach in the shade and read.

At 5:00 pm everyone on the hotel grounds could hear the sound check, because this was an outdoor concert. Near dusk, the concert started. Initially it was not that crowded, but the audience was local Jamaicans, so I figured many were still getting off work. Pretty quickly the audience area was filling up. The reggae music was a tonic settling on the crowd, and people started dancing. It was a happy concert, with good sound and visibility, and the music freed the audience from their cares. For the first time since I had arrived in Jamaica I felt at ease and comfortable, maybe even welcomed.

 

Jessica was joined by a bunch of Jamaican gals around her age; they were laughing and talking about pregnancy, makeup and men. The air reeked of ganja, and rum was the drink. Being pregnant, Jessica would touch none of that, but I wasn’t as restricted, and she did not mind (she was good-natured about stuff like that). I turned to a group of Jamaican guys and they welcomed me like I was a long-lost cousin and passed me a spliff. I got really buzzed.

It was an all-Jamaican audience and the only outsiders were us, the hotel guests. But the music bonded us like we were one. Peter Tosh was amazing, and he talked to his fellow countrymen as his neighbors. It was a special concert moment, almost like a family gathering, and even though we were inebriated we could feel there was a great vibe in the air.

After the concert Jessica helped me up to our room. I was still really ripped and once in the room it started to spin. I started to get undressed and halfway through I fell face-forward on to the bed. I looked up and Jessica was taking pictures of me and laughing her butt off. I started laughing too but was also begging her to stop.

Ken begged her not to take this picture.

The next morning at breakfast it was the same deal as before, trying to get the attention of the coffee servers. But Eppy sat down at the table and was given coffee without even having to ask for it. He did not look good, so I asked him, “What’s wrong?” He said, “I got killed last night. They broke my back.” “Really? I thought you had a full house, at least a couple of thousand people. That is a lot of tickets,” I noted. “Yeah, if they paid for them it would be.” “What happened?” I asked. “We sold a few hundred tickets and the rest of the audience were let in through the kitchen.” The kitchen staff had snuck them in. Probably the whole hotel staff knew what was happening and even helped. “Damn, man, that is terrible. Any recourse?” Eppy replied, “No, not a thing I can do.” “That sucks. I wonder if Peter Tosh knew of the gate crashers?” Eppy just gave me a long look.

We went up to our rooms, picked up our luggage and loaded up on the shuttle bus back to the airport. The flight home was uneventful, and Jessica and I got back to our one-bedroom duplex apartment on 18th Street in Manhattan by 5 pm. It was not one of our best vacations. The service was terrible, the food and hotel at best just mediocre. But it was not all bad – there were some moments. The concert was very special and the whole affair was like being given the privilege of participating in a warm, private family gathering.

Header image courtesy of Pixabay/jemacb.



Islands In the Stream


Jay Jay French

In past articles I have referenced the differences between how my audio system sounds vs. the way my friend Ira’s sounds.

My system is analog-based but also has great CD/SACD playback capability. Ira’s is solely CD/SACD based.

Neither of us were into streaming.

We wanted to remain “pure!” (That is weird to refer to digital as “pure”…LOL.)

We both had come to terms, over many years, with what we considered to be the best way to listen to music. We both agonized for hours over the greater and smaller differences between every aspect of our systems.

The last thing either one of us wanted to do was to open up another door.

Best to keep it closed. After all, streaming is just another way of playing back digital files, and we both have great CD collections and players. We thought, let’s just leave well enough alone. And all of the options involving high-resolution streaming playback made it all the more daunting. How would we choose between Qobuz and TIDAL (to name just two)?

Roon?

Oh yeah, you have to buy a streamer and maybe a DAC (or a combo component that handles both), and then decide whether to connect it through Wi-Fi or Ethernet. Not to mention, how are you going to connect all these devices – what kinds of cables do you need and want, as well as the power cords for each piece?

Once hooked up and paying for the highest-possible quality tier, then we’d have to compare the sound of the stream vs. the CD/SACD vs. the vinyl….

And Roon?

Too damn exhausting.

But (there is always a “but”)…

If you stream, you can open up a world of music choices so vast that it would, could, justify the plunge. But both of us really didn’t want to deal with the new technology.

Until someone gave me a Bluesound NODE 2i streamer. He is a modern-day drug dealer. (“Hey kid, wanna try somethin’ new? You’ll love it. I promise!”) So, I did.

Bluesound NODE 2i Wireless Multi-Room Hi-Res Music Streamer.

 

This is not an equipment promotion or endorsement for the Bluesound NODE 2i (Bluesound is owned by NAD). It was simply a device that I was given so I could test the waters. I left it in the box for a couple of weeks, then opened the box, followed the instructions (and I had an Ethernet port close by) and within a half-hour had it up and running after signing on to TIDAL.

OMG, I was hooked.

So many options of music were available! I wasn’t being very critical about the sound at this point. I was just reveling in the vast amount of musical options.

I called Ira. He hung up on me…LOL. I called back and said, “just borrow it for the weekend.”

I too was becoming a dope dealer…

Ira really didn’t want to. I could tell that he wanted the idea of streaming to just go away. Why? Because I knew that once we both got into it, started going into chat rooms and started delving into the minutiae of streaming hi-res audio, the options for high-quality playback would lead to an almost never-ending quest to get better and better sound.

What happened next blew me away.

We were amazed by the possibilities, even though I could see storm clouds on the horizon. The storm clouds were all the insane variables built into the entire music chain of the streaming audio experience. But in our minds, we needed to get better, faster gear!

I wound up loving the whole streaming experience so much that within two weeks I jumped up to a PS Audio DirectStream Digital D/A converter with a Bridge II streaming unit built in.

Ira, after getting hooked on the convenience and choices available through the Bluesound, bought an Auralic ARIES G2.1 Wireless Streaming Transporter (not to be confused with a CD transport – the ARIES G2.1 is solely a streamer with no D/A capabilities). Ira uses the built-in 1-bit DSD DAC in his Marantz SA-10 SACD player as his D/A converter. And the PS Audio DirectStream and Auralic units are just two of the many hi-res streaming audio components out there.

Auralic ARIES G2.1 Wireless Streaming Transporter.

The output of the Auralic has many options and the setup is critical. I used Ethernet for the streaming audio connectivity and Ira used Wi-Fi. Which is better? Just one of many variables. Ira had no choice but to use Wi-Fi as he doesn’t have close access to an Ethernet port. Me?

I do, and I bought a very expensive Ethernet cable that plugs into an English Electric 8Switch, an active eight-port, re-clocking gigabit Ethernet switch designed for streaming audio applications. A very-high-quality short Ethernet cable connects it to my streamer.

The games had just begun!

The streaming capabilities of the more expensive products sound better than the Bluesound Node 2i, but only as a matter of degree. The differences, as with most “better” hi-fi gear, only show up if your system has the ability to resolve the information. The Bluesound Node 2i is actually terrific and a great way to enter this world. (It sells for $549.) I mention these products as points of reference to act as a guide only, not as comparative reviews per se.

Ira and I went back and forth with TIDAL vs. Qobuz. (Other subscription services are available, and there are differences in monthly fees, choices of music and audio quality. But we kept to TIDAL and Qobuz.)

Then I decided to get Roon. Roon is not an easy thing to explain. It’s an app that acts as the “brain” of a streaming audio setup. It manages all of your audio devices, streaming services and any music files you may have that are stored on a music server or computer. But what’s really impressive is that Roon supplies a huge amount of metadata – so much that it makes the information on an album cover (something that most of us grew up with and depended on to learn about the artist whose music we were listening to) seem slight. There is so much additional information about the album, the artist, the songwriters, the producers, as well as other albums by the artist available through Roon that it seems like overkill.

Roon Nucleus music server, rear panel.

 

I love it.

To be clear, both TIDAL and Qobuz (Ira and I both think Qobuz sounds a bit better, less processed) give you information to read through while you listen. One does not need Roon for this. Roon also offers a standalone, Roon-manufactured server (sold separately) called the Nucleus to run the software on. You don’t need that either, as I use my Apple notebook to run Roon. The advantage of a standalone component like the Nucleus is that it is always on, it removes all the processing work that your computer would otherwise have to deal with, and it can be used as a “quarterback” to control all aspects of music delivery from any source (streaming service, NAS drive, music stored on your computer or what have you) and send it to any connected audio system in your house.

I have spoken to several Roon reps as well as other high-end professionals about using Roon. I’m sure I have not used its full potential and I’m not sure that I ever will. I will say that the metadata alone is, however, pretty incredible. If paying a monthly, yearly or “lifetime” fee (yes, they have a pay-once-for-a-lifetime subscription deal) appeals to you, then quest forth.

The point of all this is: the more technology is supposed to make our lives easier, the more complicated it seems to get.

I haven’t even gotten into the ethics of streaming as it regards the royalty rates that are paid to artists. [For more about that subject, see Copper’s Stream-o-Nomics article in Issue 130.]

I’m an artist and I ain’t happy about it.

I will say that it does allow our music to be heard by millions more people than would normally have access to it.

As far as the quest for the absolute sound: what I’m hearing is that streaming, as it stands today, is close to but still not quite CD or SACD quality. I don’t care what the bit depth is (16- or 24-bit) or the sampling rate (44.1, 48, 88.2, 96, 176.4 or 192 kHz), or whether the digital audio file is encoded in MQA or FLAC or whatever. So far, as a direct comparison, my CDs sound just a tad better. I think the reason for this is that anything streamed is compressed. Period. MQA is supposed to rebuild the sonics to pre-streamed levels.

It’s all too much. And don’t take my word for any of this, as I’m a partially deaf rock musician!

Bottom line, I now love streaming, and Ira, the once-devoted CD user who swore that you would have to pry his SACD player from his cold dead fingers, is now a committed streamer and doesn’t listen to his CD player anymore.

Me? I love the options of choice (and sound quality) of streaming, but on a Saturday night, when my wife and I share some wine and peace and quiet, I will always go back to vinyl.

And read the back of the album covers!


Max Roach: Bebop Pioneer


Anne E. Johnson

Born in North Carolina swamp country in either 1924 or 1925 (he wasn’t sure himself) and raised in Brooklyn, Max Roach listened to his mother sing gospel music and was inspired to start playing bugle and drums. By the time he was 18, he was subbing on drums in the Duke Ellington Orchestra.

Roach is one of bebop’s original pioneers. He contributed his innovative rhythms to recordings and performances led by the likes of Miles Davis, Dizzy Gillespie, and Charlie Parker. One of his early projects was the co-founding of a bebop record label, Debut, with Charles Mingus, which was supposed to give artists an alternative to the unfavorable contracts from major labels. But the pull of the giants was too strong, and the label lasted only five years.

Besides the various bop and hard bop ensembles he formed in the late 1950s and 1960s, Roach was also involved in the Civil Rights movement. From 1962 to 1970 he was married to singer and activist Abbey Lincoln and often accompanied her performances, supporting her work toward racial justice both personally and musically.

Believing that jazz drums were an orchestra unto themselves, in the 1980s Roach performed a series of solo concerts to demonstrate this. He also wrote incidental music for many dance and theater pieces, including a play by Sam Shepard, and used his flair for dramatic timing to create a percussive accompaniment track to be played under the Rev. Martin Luther King, Jr.’s “I Have a Dream” speech. His career continued through the 1990s, until illness forced him to retire; he died in 2007.

Enjoy these eight great tracks by Max Roach.

  1. Track: “I’ll Take Romance”
    Album: Jazz in 3/4 Time
    Label: EmArcy
    Year: 1957

Jazz in 3/4 Time was a strong statement by Roach that the rhythmic norms of swing – and even of bebop – could be questioned and rebuilt. It’s not that nobody had ever played jazz in a waltz meter before (Fats Waller’s “Jitterbug Waltz” dates from 1942), but turning triple time into the default through an entire album was new and daring.

Several of the tracks are jazz arrangements of Broadway and Hollywood musical standards, including “I’ll Take Romance.” This 1937 number, with music by Ben Oakland and lyrics by Oscar Hammerstein II, was composed for the film of the same name. Here Kenny Dorham is on trumpet, Billy Wallace on piano, and Sonny Rollins on tenor sax. What starts as a sweet, lyrical swing deconstructs quickly into a laid-back bebop cubism of the tune.

 

  2. Track: “Rounder’s Mood”
    Album: The Defiant Ones
    Label: United Artists
    Year: 1958

This album, co-led with trumpeter Booker Little, was also released under the title Booker Little 4 and Max Roach. Joining the ensemble were George Coleman on tenor sax, Tommy Flanagan on piano, and Art Davis on bass.

Little wrote the intense bebop tune “Rounder’s Mood.” Roach’s playing defies the logic of the apparent meter while making perfect sense in its own syncopated stratosphere. His brushwork is exquisite, changing coloration for each instrument’s solo.

 

  3. Track: “Moon Faced, Starry Eyed”
    Album: Moon Faced and Starry Eyed
    Label: Mercury
    Year: 1959

This album is exceptional for many reasons, including its imaginative track list. Abbey Lincoln (a few years before she married Roach) adds a couple of vocals, but there are also some great instrumental interpretations of songs that originally had lyrics.

One of those is the title track, “Moon Faced, Starry Eyed,” with music that Kurt Weill wrote for the musical Street Scene, with lyrics by Langston Hughes. While we don’t get to hear the words, Roach’s brush-on-cymbal patterns are well worth a listen, not to mention Ray Bryant’s piano work.

 

  4. Track: “Garvey’s Ghost”
    Album: Percussion Bitter Sweet
    Label: Impulse!
    Year: 1961

Lincoln can be heard contributing a wordless vocalise at the opening of the stunning “Garvey’s Ghost,” a track that combines African-inspired rhythms with the angular dissonances of cutting-edge post-bop.

Even the earnest, almost angry solos by Clifford Jordan on tenor saxophone and Booker Little on trumpet (one of his last recordings before his untimely death) can’t overshadow the percussive sounds. Eugenio Arango plays the cowbell in a thrilling trio with drum kit (Roach) and congas (Carlos Valdés) leading up to the track’s climax.

 

  5. Track: “Pay Not, Play Not”
    Album: The Max Roach Trio Featuring the Legendary Hasaan
    Label: Atlantic
    Year: 1964

Incredibly, this is the only recording ever released by Hasaan Ibn Ali, an American jazz pianist who played with and impressed many of the greats of bebop, but somehow didn’t end up in the studio. The third member of this temporary configuration labeled the Max Roach Trio is Roach’s longtime bassist, Art Davis.

Hasaan also composed all the tracks on the album. On “Pay Not, Play Not,” his piano virtuosity is matched by Roach’s on the drums. The completely original approach to meter and style makes it clear that the failure to capture more of Hasaan’s playing on record is quite a tragedy.

 

  6. Track: “Equipoise”
    Album: Members, Don’t Git Weary
    Label: Atlantic
    Year: 1968

Although Members, Don’t Git Weary was marketed as a Roach album, all its tracks are by piano/keyboard player Stanley Cowell, who also plays in the record’s five-man lineup. One change for Roach is that Jymie Merritt is on electric bass rather than the upright acoustic found on the earlier albums.

“Equipoise” features the duo sounds of Charles Tolliver on trumpet and Gary Bartz on alto sax. Roach’s triplet figures hold down the compound meter, even when the horns seem to float away in duple time.

 

  7. Track: “Acclamation”
    Album: Streams of Consciousness
    Label: Baystate Records
    Year: 1977

Streams of Consciousness is a duet album with South African pianist Dollar Brand (also known as Abdullah Ibrahim). Roach produced the record himself. Reportedly, the four-track album was largely improvised in the studio, including the 21-minute title track, which takes up all of Side A. Yet the music has a complexity and sense of internal organization that gives it a composed quality.

“Acclamation” is all about that famous Roach cymbal touch. Brand plays a walking bassline and bluesy barrelhouse chords, the perfect companion to the high-frequency percussive symphony rolling off Roach’s kit. At about the nine-minute mark, Brand changes course and starts quoting from African-American spirituals while Roach creates a frantic, train-like rhythmic pattern.

 

  8. Track: “One in Two – Two in One: Part 1”
    Album: One in Two – Two in One
    Label: Hathut
    Year: 1979

Another of Roach’s duo albums, One in Two – Two in One is a collaboration with saxophonist Anthony Braxton. It was recorded live in Switzerland at the Willisau Jazz Festival.

The gifted Braxton is equally at home on any size saxophone, flute, or clarinet. He uses the soprano sax here in ethereal swirls against the Japanese-flavored chimes and gongs that Roach explores.

Header image courtesy of Wikimedia Commons/The Library of Congress @ Flickr Commons, cropped to fit format.


Mary Chapin Carpenter: Let Her Into Your Heart


Anne E. Johnson

Winning five Grammy Awards is impressive enough, but when four of them come in consecutive years for Best Female Country Vocal Performance, that is a unique achievement. Mary Chapin Carpenter accomplished that feat. Yet a careful listening to her output may convince you that her style reaches beyond country.

The New Jersey native, born in 1958, grew up on The Beatles and The Mamas & The Papas. As a teen she learned the songs of Judy Collins and John Denver while testing out her own songwriting wings. Armed with a degree in American civilization from Brown University, she thought of music as a love, not a way to make a living. But her relationship with songwriting kept getting more serious. She signed with Columbia in 1986, and never looked back.

Her debut, Hometown Girl (1987), was produced by John Jennings, whom she’d met in DC during her college years. They would end up making many records together. Right out of the gate, Carpenter showed spectacular taste in session musicians: fiddler Mark O’Connor and guitarist Tony Rice in particular gave this record a sound more folk/bluegrass than country. (Sadly, Rice passed away on Christmas Day, 2020.)

There were no hit singles from Hometown Girl, but that’s not to say there are no great tracks. Along with many songs by Carpenter, the album also includes a cover of Tom Waits’ “Downtown Train.” It’s a thoughtful, original interpretation of the song.

 

The next album, State of the Heart (1989), is considered more on the country side of things, and its success on the country charts shows that to be true. The addition of accordion and pedal steel contributes to the Nashville sound. The following year, Shooting Straight in the Dark performed even better on the country charts, with the high-energy Cajun-flavored single “Down at the Twist and Shout” hitting the No. 2 spot.

But Carpenter hadn’t left her bluegrass foundation behind. You can hear it in the tightly rhythmic strumming and plucking on her song “Halley Came to Jackson.” O’Connor provides a touching fiddle line.

 

By the time Come On, Come On was released in 1992, Carpenter was a bona fide country star. Seven of the album’s 12 tracks became hit singles. She and Jennings continued their streak of attracting great musicians to their sessions: the backing vocals personnel list includes Rosanne Cash and the Indigo Girls.

Amid all her success, Carpenter has only had one No. 1 album, 1994’s Stones in the Road. And its biggest single, “Shut Up and Kiss Me,” was her first to make it to the top of the country charts. This time the eclectic and imaginative roster of session players includes Irish tin whistle player and singer Paul Brady, soprano saxophonist Branford Marsalis, and country and blues master Lee Roy Parnell on electric slide guitar. Jennings is still on hand as producer.

The quiet, wistful “The End of My Pirate Days” is one of the album’s lesser-known tracks. It starts simply, with just acoustic guitar and percussion, but builds slowly in a skillfully crafted arrangement that surrounds Carpenter’s steady voice; she never over-sings.

 

After A Place in the World (1996), it was another five years before Carpenter released Time* Sex* Love*. One notable thing about this album is the collaborative composer credits on several songs, with Carpenter acknowledging creative input from the likes of Jennings, Gary Burr, and Kim Richey.

Richey has an impressive songwriting resume, boasting recordings of her works by Brooks & Dunn, Trisha Yearwood, and others. “Swept Away” is the track she co-wrote with Carpenter, a dark, sophisticated heartbreak song using Carpenter’s lower register on the verses and her upper range on the chorus. The dissonances in the guitar part highlight the pain in the lyrics.

 

 

Carpenter has never slowed her songwriting or recording productivity, and her fans have always rewarded her consistency. Between Here and Gone came out in 2004, reaching the No. 5 spot on the charts. That was followed by The Calling in 2007, her first album after leaving Columbia Records. By necessity, that meant leaving behind Jennings as her producer. The Calling, released on Zoë Records, was produced by musician and composer Matt Rollings, who would later win a Grammy for his work with Willie Nelson.

This seems to be a deeply empathetic album for Carpenter, judging by the lyrical content of its songs. Take “Houston,” for example, introduced by Rollings at the piano, in which the life experiences of a struggling family are vividly portrayed.

 

Rollings also co-produced The Age of Miracles with Carpenter in 2010 as well as Ashes and Roses two years later. On the latter, sought-after session drummer Russ Kunkel (he’s worked with Bob Seger, Warren Zevon, Carly Simon, and countless other luminaries) helps bring a solid rock edge to the proceedings.

“I Tried Going West” is a beautiful waltz with a bittersweet lyric in which joy edges out sorrow. The arrangement engulfing its flowing melody uses elements of country, bluegrass, and pop. The electric guitar solos are by Duke Levine.

 

For the past few years, Carpenter has been releasing her albums through Lambent Light Records. These releases have a rich, intimate sound quality. One of those is The Things That We Are Made Of (2016), produced by six-time Grammy winner Dave Cobb, who also contributes several types of guitars and synthesizers.

As with many technical jobs in the arts, the production that sounds most effortless and off-hand often has the most work behind it. The song “The Middle Ages” is a good example: it seems simple, with just Carpenter and her guitar, until you realize there’s an intricate synth-based sound environment that has seeped into the background, coloring the mood.

 

Carpenter’s most recent album came out in August of 2020. Although Ethan Johns produced The Dirt and the Stars, Matt Rollings is back on piano and Hammond organ. Carpenter writes about the personal struggles that come with self-doubt and aging. But she also reaches outside her comfort zone for a timely political piece.

The funky and snide “American Stooge” shows a different side of her, and it’s hard not to wonder what her body of work might be if she’d done more of this type of songwriting throughout her career, commenting on earlier eras of American history. But that’s not how creativity operates; it develops at its own pace.

 

This is one songwriter who shows no signs of slowing down, so it will be fun to hear what’s next.

Header image courtesy of Wikimedia Commons/Mike Evans, cropped to fit format.


Speaker Misplacement

Frank Doris
In its day the Teac A-3340S was the machine for musicians recording at home or making demos. We hope the photo shoot didn't take too long...that thing is heavy! From Audio, July 1977.

Groovy girl: digging the Radiola RA 1044 A, mid-1960s. C'est transistorisé!

The first McIntosh preamplifier, the incredibly rare mono model AE2. It was sold from circa 1949/1950 to 1952 and was made in Silver Spring, Maryland. From the Audio Classics collection.

We'd spend the money on the Mighty-9 though. Or convert the charging unit from the kit into a fuzz box.

Guess this 1962 Radio Shack catalog didn't include any tips on speaker placement.


Rave New World

Frank Doris
Taken at the Electric Zoo festival, Randall's Island, New York, 2017. By Michael Vazquez @musicfestivalstreetphotography, copyright 2021. Canon EOS-1D X Mark II camera with Canon EF 70-300mm f/4.5-5.6 DO IS USM lens, ISO 200, f/5.6, 1/250 sec. Rising above the isle of Randall like a stately, foofy-eared 21st century Sphinx, this festy-fortified Wook is living her best-life-halcyon days, adorned in full rave regalia, rendered functionally efficient by way of a hydration pack and metallic pyramid-motif dust mask with matching bra (she was one of several wearing this scarf ‘n top combo, and it was heartening to see ravers amused by their coinciding style choices, rather than taking same as an existential affront). With a backpacked hoodie at the ready for the alternating climes of a Labor Day weekend, and well-strapped Trip Glasses serving as eye-soul guardians of her riddle, and enhancers of alternating mind states, she was ready to experience – individually and with her fellow pilgrims – profound personal wonder at the late summer sunset and coming, charmed twilight.

Double Duty

James Whitworth

New Releases: One Disappointing, One Overproduced, and a Great One!

Tom Gibbs

Steven Wilson – The Future Bites

When Steven Wilson released his EP The B-Sides Collection late last year ahead of this new LP, The Future Bites, it was easily among my favorites of the year. My review in Copper Issue 127 was nothing short of a flat-out rave, and I’ve been waiting in rapt anticipation for the full album to arrive. If the finished project proved anywhere near as exhilarating a listen as the B-Sides, it would definitely be on my early short list for best of 2021. The wait is now over!

The B-Sides Collection included four non-album tracks that highlighted the same musical brilliance that Steven Wilson has shown in all his past endeavors; the songwriting and instrumentation offered a surprisingly entertaining blend of electronic pop, rock, and prog. Only one of the four songs from the EP, “King Ghost,” actually appears on the new album, albeit in a much truncated version compared to the nine-plus-minute remix featured on the EP. That said, The Future Bites is maybe a bit disappointing to me in that most of the tracks clock in at anywhere from three to four-or-so minutes in length – being an old-school progger at heart, I’m generally not at all unhappy with tunes that spread across an entire album side. And the shortened song lengths give the album much more of a poppy feel – yeah, it’s a Steven Wilson album, but it just doesn’t have the extended grooves of, say a Porcupine Tree album, or even The B-Sides Collection, for that matter. The only song of any length is “Personal Shopper” at over nine minutes, and it’s more typical of what one has come to expect from SW. The album only clocks in at a shade over 41 minutes, which is kind of brief in terms of what I was expecting to see (and hear!) based on the EP – its shortest song was six minutes.

Regardless, the songs here are good, if not generally great, in spite of the pop-ish musical direction and the lack of extended instrumental embellishment. This would probably be a great record to hear live in concert. And the album has the deepest bass you’ll hear outside of maybe a Kraftwerk disc – my whole house shook like never before while The Future Bites was playing. And, of course, if you spring for the Blu-ray disc, I’m hoping you’d get all the extended remixes as part of the package, though I haven’t been able to confirm that. At the very least, if you have a streaming account, you can always hear the accompanying tracks from the EP to help supplement what is regrettably a pretty short album experience. The Future Bites is definitely Steven Wilson lite.

The album has been released in probably the largest selection of format variants I’ve ever seen for a recent release; you can take your pick of CD, standard 180-gram LP, limited-edition red vinyl LP, Blu-ray Pure Audio disc, or cassette. And there are a variety of bundles available that combine the LP version with the CD, cassette, and/or Blu-ray. If that’s not enough, there’s also a limited-edition box set that combines everything in deluxe packaging. Steven Wilson definitely gives you lots of choices! I did all my listening with the 24/96 digital stream from Qobuz, and the sound quality was beyond superb, so unless you’re obsessed with beautiful objects (like LPs!), what else could you ask for? In spite of my misgivings about the album with regard to its thematic content and its relative brevity, The Future Bites (especially in combination with the EP material) is nonetheless recommended.

Arts & Crafts Productions, CD/LP/limited-edition LP/Blu-ray/cassette/limited-edition box set, various bundles (download/streaming [24/96] from Qobuz, Tidal, Amazon, Google Play Music, Pandora, Deezer, Apple Music, Spotify, YouTube, TuneIn)

 

The Staves – Good Woman

The Staves are an English trio of sisters: Emily, Jessica, and Camilla Stavely-Taylor. They originally toured as folk trio the Stavely-Taylors, and about a decade ago shortened the name to simply the Staves. Renowned for their angelic vocal harmonies, they’ve been in constant demand by a bevy of mainstream artists, providing background vocals for the likes of Tom Jones (yes, that Tom Jones!), Leonard Cohen, Lucy Rose, and Bruce Hornsby. Good Woman is the group’s fifth studio album, and is a significant departure from the folkish and even somewhat jazzy presentations that have populated much of their body of work. Originally slated to be self-produced by the Staves, somewhat late in the process they brought on John Congleton (St. Vincent, Sharon Van Etten) to take over the production chores. He’s crafted a record that’s nothing like anything else in their album catalog; if they wanted to break the mold with a bold move in a more pop/rock-oriented direction, Good Woman definitely accomplishes that.

Apparently, there was some personal turmoil prior to the recording process: their mother died, sister Emily gave birth and needed to take a year off, and another sister was going through the end of a five-year relationship. The sisters had already started to take a more adventurous approach to their pure folk stylings with the 2017 album The Way Is Read, which featured some almost avant-garde jazz and classically-influenced musical accompaniment. They wanted the new record to express how they were dealing with the emotional baggage of controlling exes, gender inequality, and the travails of motherhood. And perhaps a more abrupt shift in their musical direction – and a new producer – might help them accomplish that.

Unfortunately, I don’t think bringing John Congleton aboard was the correct decision for the group. His over-the-top production style, with almost unbearably ultra-deep bass and cavernous drums mixed to near-confrontational levels, just doesn’t mesh well with the Staves’ incredible vocal harmonies. Those harmonies are still there, they’re just almost completely drowned out by the accompaniment, which is often excessive. This album came highly recommended to me, but based on my impressions of them from the past – like the live album, Pine Hollow (2018), which is a showcase of the group’s incredible vocal talents – I’m having a tough time getting on board. YMMV, and the 24/44.1 digital stream was decent quality, but the overproduced music completely got in the way of my enjoyment of the album.

Atlantic Records, CD/LP (download/streaming [24/44.1] from Qobuz, Tidal, Amazon, Google Play Music, Deezer, Apple Music, Spotify, YouTube, TuneIn)

 

Hayley Williams – Flowers For Vases/descansos

Hayley Williams is best known for her sometimes over-the-top vocal histrionics with the band Paramore; they’ve been on hiatus since 2019, but the band members have stated online that they’ll return when the time is right. And, of course, there’s the pandemic, which has thrown a monkey wrench into the existence of just about every band and performer out there. Williams hasn’t been satisfied to sit around during the pandemic; she released an excellent EP late last year, Petals For Armor, and this new release, Flowers For Vases/descansos, is something of an extension of that effort. It’s also an unannounced, almost complete surprise to her (and Paramore’s) fans.

Like just about everyone else, Hayley Williams has been holed up in her Nashville residence, where she has a fully-equipped home studio. The time has given her an opportunity to chill and reflect on the nature of her (and our) existence, and she recently released a statement regarding the recording of the new album: “For me, there’s no better way to tackle these individual subjects other than holistically. The ways I’ve been given time (forcibly, really) to stew on certain pains long enough to understand that they in fact, need to be released…indefinitely. I may never have been offered such a kindness; an opportunity to tend to the seeds I’d planted, to harvest, and to weed or prune what is no longer alive, in order to make space for the living. I wrote and performed this album in its entirety. That’s a career first for me. I recorded it at my home in Nashville, the home at which I’ve resided since Paramore released After Laughter. 2020 was really hard but I’m alive and so my job is to keep living and help others to do the same.”

Flowers For Vases/descansos was produced in Williams’ home studio by Daniel James (Miley Cyrus, Britney Spears, Selena Gomez, Nicki Minaj, The Veronicas). The album reflects her process of dealing with the isolation of quarantine, and all the free time one has to work through the personal stuff that one rarely has time to confront in a more normal reality. The album’s subtitle, descansos, is the Spanish word for “a break” or “rest”; it can also refer to a cross placed at the site of an unanticipated death. It’s intended here as a metaphor not only for her relationship with Paramore, but also for the extended break imposed on everyone by the pandemic.

Impressively, Hayley Williams wrote all the songs here and played all the instruments; her skill as both a vocalist and musician is on full display. The record is much more laid-back and introspective than the typical fare from Paramore, but everything works perfectly, and she proves a really great singer even in the much more intimate setting offered by Flowers For Vases/descansos. She takes on the mantle of plaintive, confessional songwriter and makes it work — she completely owns it here, and makes no apologies. The opening of “HYD” is one of the most impressively realistic representations of a singer accompanied by an acoustic guitar I’ve ever heard on my home stereo — this is a really great-sounding album!

The 24/96 digital stream via Qobuz was impressively musical and dynamic; Flowers For Vases/descansos may not be particularly typical of anything from Hayley Williams or Paramore, but it’s a very well-recorded album, and the digital stream from Qobuz presents it in its full glory. Although there’s no current information available regarding a CD or LP, her last EP from late last year, Petals For Armor, was made available in most formats, including cassette, so I expect the same for this excellent release. Highly recommended.

Atlantic Records, (download/streaming [24/96] from Qobuz, Tidal, Amazon, Google Play Music, Deezer, Apple Music, Spotify, YouTube, TuneIn)

Header image of The Staves courtesy of Wikimedia Commons/Justin Higuchi, cropped to fit format.


Passions

Lawrence Schenbeck

Last month I made a single New Year’s resolution: to devote space in Copper to Bach’s two monumental Passion settings. These works are central masterpieces in Western art music, as important in their own way as Beethoven’s Nine Symphonies or Wagner’s Ring. Yet for one reason or another I’d never discussed them. I’m going to begin that discussion this week.

Musical settings of the Passion – the biblical story of Christ’s betrayal, suffering, and crucifixion – have been a staple of Christian worship since the Middle Ages. They still form an essential component of Holy Week, often embedded in the somber rituals of Good Friday. But Passion music underwent radical changes in style and structure during the Baroque Era. As a result, today we are more likely to encounter Bach’s Passions in concert halls than in churches; moreover, in either venue we may find ourselves watching staged or semi-staged performances. Blame it on Claudio Monteverdi (1567–1643).

Short explanation: Monteverdi’s successful production of Orfeo in 1607 opened a door to dramatized musical presentations of classic tragedy; many more followed.

Longer, better explanation: 17th-century Italy witnessed a profound reorientation on the part of creative musicians, one outcome of which was the rise of opera. Instead of devising elaborate musical systems (e.g., rules of counterpoint and consonance) that enabled them to construct sonic paradigms of order (e.g., motets, masses, madrigals), composers henceforth committed to text-dominated ways of expressing chaos (i.e., emotion!). These late-Renaissance innovators, Monteverdi chief among them, wanted to provoke powerful, almost involuntary emotional responses, just as their ancient Greek predecessors had done.

So perhaps it is less useful to situate Bach’s Passions within their narrow – and increasingly archaic – liturgical function than to regard them as a special category of Baroque opera or oratorio. We need to get our bearings there before venturing onward. To that end, I’m recommending two new recordings that illustrate the historic transformations described above. They may work nicely as warm-up exercises for your own Bach Passion Experience, although they’re too good to be relegated to mere warm-up status. (Sometimes a tasting menu is much more satisfying than a seven-course dinner.)

Passions, from Les Cris de Paris, dir. Geoffroy Jourdain (Harmonia Mundi), offers a rich assortment of Venetian music from Giovanni Gabrieli (d. 1612) through Antonio Caldara (d. 1736) with nods to Monteverdi, Lotti, Legrenzi, and others along the way. Sacred and secular, vocal and instrumental, prima e seconda pratica, Jourdain covers a lot of ground. He’s helpfully arranged the music in three- and four-track sequences that allow for intense short-term listening. Each sequence except for the last (1–4, 5–7, 8–10, 11–13, 14–16, 17–20) includes a different setting of the Crucifixus from the Mass Credo. That lends focus to a program that might otherwise seem too diffuse; in the album booklet Jourdain confesses that universal human passions, the Passion of Christ, and his personal passion for Venetian music played overlapping roles in his choice of music.

Two works in particular show off the advantages of that overlap: Tarquinio Merula’s Hor ch’è tempo di dormire (Now that it is time to sleep), tr. 1; and Legrenzi’s Dialogo delle due Marie (Dialogue of the Two Marys), tr. 9. The former, sung with singular intensity by Michiko Takahachi, is a haunting lullaby on a two-note “rocking” bass, in which Mary sings her divine child to sleep while foreseeing every detail of his eventual torture and death. In the latter, Takahachi is joined by Adèle Carlier in poetically imagined laments and prayers that Mary Magdalene and Mary, mother of James and Joseph, offer at the foot of the cross. It’s heartfelt and voluptuous, spiritual and corporeal. (It may be helpful to keep the album booklet open to the translations; this is seconda pratica, so text really does matter!)

NB: The YouTube video below includes the complete album; access individual tracks by clicking on the icon in the upper-right-hand corner and scrolling to the track you want.

 

I began my explorations with an eye on Bach – emphatically not a Venetian composer – so I was eager to hear the music on Andreas Hammerschmidt: “Ach Jesus stirbt” (Ricercar). Hammerschmidt (c. 1611–1675) was one of many 17th-century Germans who brought the new Italian sensibilities north, creating a cosmopolitan Baroque style manifested not only in Bach’s music but also in that of Handel and Rameau. There is no record of Hammerschmidt actually journeying to Italy. (His older countryman Heinrich Schütz visited Venice twice, to check out Gabrieli, then Monteverdi.) Nevertheless, his church music abounds in the polychoral, madrigalian, and theatrical touches that so enlivened Venetian ceremonies and services.

“Ach Jesus stirbt” comes to us from Lionel Meunier’s A-list ensemble Vox Luminis. It’s a worthy successor to their album Kantaten, which offers church music from four Bachs: Heinrich, Johann Christoph, Johann Michael, and J. S. himself. That one’s been in heavy rotation at my house for some time; I expect the new Hammerschmidt set will keep steady company with it.

There’s so much fine music in “Ach Jesus.” Where to begin? First, open the album booklet window so you can read the song texts and translations, beginning on p. 23.

Now start with tr. 4, “Ach Gott, warum hast du mein vergessen.” In this dialogo, the despairing words of Jesus (tenor) are met by a hopeful response from Mary (soprano) and her companions. Eventually their faith prevails; an exuberant alleluia ends the piece. Track 6 offers a more extended concerto (a term which, in that era, meant voices and instruments sounding variously together), “Bis hin an des Creutzes Stamm,” scored for five vocalists, five-part choir, and five-part strings. At its very end, with the words “An dem Holze stirbt,” the music swells in a grand peroration, emphasizing the crucifixion’s fulfillment of prophecy.

More straightforwardly joyous is tr. 10, “Triumph, Triumph, Victoria,” for soloists, five-part choir, and brass. The most oratorio-like number in the collection may be tr. 12, “Wer wälzet uns den Stein,” a little scena in which the two Marys (sopranos) approach Jesus’ tomb, where they encounter two angels (alto and tenor) and hear the voice of Jesus (bass) informing them that the grave has given up its intended inhabitant.

If you have Vox Luminis’ Kantaten handy, you will already be familiar with J. S. Bach’s powerful early cantata, Christ lag in Todesbanden (BWV4), based on a well-known Lutheran chorale. “Ach Jesus” includes Hammerschmidt’s skillful variations on that chorale for three singers, three trombones, and continuo (tr. 7). Its presence here reminds us of the central importance of these congregational songs in Lutheran worship. Another, quite different nod to venerable tradition is tr. 15, “Siehe, wie fein und lieblich ists,” a Gabrielian motet for three choirs, replete with the sort of double-echo effects associated with St. Mark’s Cathedral.

 

I haven’t said much about the sound of these recordings, but it’s first-rate. From the deepest organ pedal tones to the silvery contributions of Baroque strings and trumpets, there’s an unforced, elegant blend of orchestral colors, with solo singers – like Ms. Takahachi – placed appropriately forward in the mix. You wouldn’t hear these works this well in a typical church or cathedral acoustic, but here there’s just enough reverberation to reinforce the musicians’ expressivity. (I auditioned both albums via 24/96 streams from Qobuz.)