AES Show Fall 2021 Highlights, Part One

Written by John Seetoo

As 2021 entered its fourth quarter, the Audio Engineering Society (AES) held its annual fall show online in October, as the pandemic was still in effect. Thankfully, the show’s seminars and interviews were recorded for on-demand viewing, so once again attendees could participate virtually, and reporters and others could cover the show. While AES is still adding videos as of press time, here are some selected highlights available to registered users:

The Soul of the Machine: Do Electronic Instruments Have a Personality?

Some of the most iconic sounds of the 1980s came not from guitars, but from synthesizers and drum machines. While the late Eddie Van Halen is one of the most revered guitar players of the modern era, Van Halen’s “Jump” was the band’s biggest single, driven by Eddie playing the Oberheim OB-Xa synthesizer. The song’s introductory riff is so immediately recognizable that other synthesizer brands that emulate a similar-sounding program now usually label the preset as “Jump.”

Peter Gabriel’s So is his biggest-selling album, featuring a number of classic songs such as “Sledgehammer,” “Big Time,” “Red Rain,” and “Don’t Give Up.” Much of the sound of So was created with the now-classic Sequential Circuits Prophet-5 synthesizer, one of the first analog synthesizers able to digitally recall programmed patches and store them as presets.

The more recent advent of digital modeling of guitar sounds, now at the point where it can rival the sounds created by the beloved amps and guitars designed in the 1950s and the pedals of the 1960s, as well as present-day guitar and amp sounds, has sparked an ongoing analog vs. digital debate regarding the use of these devices. (Sound familiar?) Strangely enough, the camps are not solely generational. For example, relatively new bands like Rival Sons embrace old-school analog, with guitarist Scott Holiday reveling in his Orange tube amp stacks and fuzz pedals, while prog rock pioneer Steve Howe of Yes became an early convert to Line 6 modeling amps and has played them for decades. He even used an often-derided beginner-model Line 6 Spider series amplifier for every track on the latest Yes release, The Quest.

The Soul of the Machine: Do Electronic Instruments Have a Personality? offered an interesting discussion with some of the architects of these and other synthesizers, devices and sounds. The panelists included Dave Smith, founder of Sequential Circuits; Marcus Ryle, a co-founder of Line 6 and former Oberheim synthesizer designer; and Jennifer Hruska, who was involved in the creation of the Akai MPC Renaissance MIDI controller favored by hip-hop artists, as well as the Solina Redux software (a hybrid combination of the ARP Solina String Ensemble with a sequencer and analog synth) and other products.

Dave Smith and Jennifer Hruska at AES Fall 2021.

Along with Michael Bierylo, Chairman of Electronic Production and Design at Berklee College of Music, the participants gave a number of fascinating perspectives on the development of electronic instruments, analog and digital technologies, and how electronic instruments can assume musical and tonal personalities much like their guitar and keyboard analog cousins.

Dave Smith noted that polyphonic synthesizers in the early 1980s were initially sought for their emulative qualities in recreating orchestral strings, brass, woodwinds, and percussion. He cited how digital instruments such as the Yamaha DX7 and the sample-based Korg M1 assumed market dominance. The current resurgence of analog synths has as much to do with the music and sounds of that era standing the test of time as with a retrospective appreciation for the intrinsic sounds of the instruments themselves.

Marcus Ryle took the notion a step further, explaining how the Hammond organ and even pipe organs were originally designed as “early synthesizers,” with the concept of emulating certain timbres from other instruments, such as flutes or trumpets.

Jennifer Hruska concurred, citing how modular systems (synthesizers built from separate electronics modules rather than an all-in-one keyboard configuration) have also experienced a renaissance of sorts, with musicians and producers now actively seeking specific analog-type sounds for their own qualities rather than looking for emulations of other instruments.

The tactile aspects of playing actual keys, turning knobs, and pushing buttons and switches also have an appeal to the creative spark that many feel is missing from using a mouse and manipulating software on a computer screen.

Conversely, all of the participants agreed that software is wonderful for obtaining sounds that would be economically or physically unfeasible to get in the real world, such as the Notre Dame pipe organ or a vintage Bösendorfer grand piano.

Marcus Ryle even explored the notion that software synthesizers could also have a “soul,” based on the level of expertise of the programmer. He posited that a truly creative programmer could create a plug-in (a software sound or effect) with all of the character and interactive qualities of an analog synthesizer, and make the two almost indistinguishable. Perhaps his success with Line 6 and their guitar amps and Helix modelers has informed this point of view, which definitely sends the debate into a gray area. He did note that programmers tend to write software based on how analog synths sound when notes are triggered, but don’t tend to account for the way the sound may change when notes are held for a longer period, which can be a shortcoming of current plug-ins.
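
To make Ryle’s observation concrete, here’s a minimal sketch in Python, purely our illustration rather than anything presented at the panel, contrasting a naive digital voice that sounds identical for as long as a note is held with one that adds the kind of slow pitch drift analog oscillators exhibit during sustain. The function names and drift figures are hypothetical:

```python
import numpy as np

SAMPLE_RATE = 44_100  # samples per second

def static_voice(freq_hz: float, seconds: float) -> np.ndarray:
    """Naive plug-in-style voice: the tone never changes while the note is held."""
    t = np.arange(int(seconds * SAMPLE_RATE)) / SAMPLE_RATE
    return np.sin(2 * np.pi * freq_hz * t)

def drifting_voice(freq_hz: float, seconds: float,
                   drift_cents: float = 4.0, drift_rate_hz: float = 0.3) -> np.ndarray:
    """Analog-style voice: the pitch wanders by a few cents while the note is
    held, standing in for the oscillator instability of real hardware."""
    t = np.arange(int(seconds * SAMPLE_RATE)) / SAMPLE_RATE
    cents = drift_cents * np.sin(2 * np.pi * drift_rate_hz * t)
    inst_freq = freq_hz * 2.0 ** (cents / 1200.0)           # cents -> frequency ratio
    phase = 2 * np.pi * np.cumsum(inst_freq) / SAMPLE_RATE  # integrate to get phase
    return np.sin(phase)
```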

Recalling their respective moments of validation as creators of new musical instruments, Smith cited the 1978 NAMM show, where David Bowie and numerous other artists were sufficiently wowed by the Prophet-5’s first public demonstration that they ordered units for themselves that same day. Ryle related Tom Oberheim’s revelation that while Oberheim was aware of all of the sounds that could be made on his synths, it wasn’t until he heard Weather Report’s Joe Zawinul use them on “Birdland” that he truly comprehended what he had achieved.

A reissue of the classic Sequential Circuits Prophet-5 synthesizer.

Reminiscing on the rivalry between Prophet-5 and Oberheim fans, which in many ways parallels the ongoing debate over the merits of Gibson vs. Fender electric guitars, both Ryle and Smith humorously noted that today’s musicians are open to buying both, and acknowledged that the virtual-instrument software versions make owning both even more affordable. As long as these instruments offer such a wide palette of sound choices, artists will find something in them with which to create music.

The panelists also lamented the dogmatic attitude of purists who care less about how the instruments sound than about whether they have only pure analog circuitry, with no digital elements intruding into the signal path.

Sound System Optimization: The Good, The Bad and The Ugly

In a session that all live-sound-loving Copper readers could probably appreciate, Sound System Optimization: The Good, The Bad and The Ugly was hosted by Bob McCarthy. The workshop was designed for those under time pressure, and with less-than-optimum equipment and resources, who must achieve the best sound possible from a sound system, whether they have two days, two hours, or two minutes. The session considered situations in the theatrical, fixed-installation, and touring worlds, as well as what is involved in setting up in spaces with less-than-optimal acoustics. The participants, all engineers with practical experience, included Carolina Anton (freelance), Michael Lawrence (Rational Acoustics), Jessica Paz (freelance, Tony Award winner for Hadestown), Finlay Watt (freelance), and Jim Yakabuski (Yak Sound).

The 90-minute presentation began with a historic overview of hardware, ranging from Koenig’s phonautograph (an early device that traced sound waves onto a soot-blackened cylinder), to the Shure Unidyne 545 mic (popular in the 1970s), and early real-time frequency analyzers. McCarthy illustrated how far sound system optimization has developed by recalling his own history:

  • Shure 545 SD or SM58 mics were used for everything, including vocals and instruments, until later in the 1980s.
  • Speakers were stacked to achieve the required volume, and more care was taken over making sure they didn’t topple over than over how they sounded.
  • Everyone would painstakingly check cabling and crossovers, only to listen to how the system sounded using a mono cassette player!

Early time alignment was conducted by physically moving speakers closer to or farther from the edge of a stage, with the resulting sound measured on an oscilloscope. Linear analysis of frequency response during the 1980s was a seven-step process when using a SIM System I (SIM stands for Source Independent Measurement; the device was created by Meyer Sound), and one would end up with a paper printout roll of the waveform. McCarthy noted that a 1990s SIM calibration test system that previously sat in a 600-pound rolling rack (and used a $1,100, 1 MB memory card) is now software that can be run on a laptop.
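
Physically sliding speakers has long since given way to electronic delay, and the arithmetic behind it is simple: delay the nearer speaker by the extra time sound takes to arrive from the farther one. Here’s a back-of-the-envelope Python sketch, with distances invented purely for illustration:

```python
SPEED_OF_SOUND_M_PER_S = 343.0  # in air at roughly 20 degrees C

def alignment_delay_ms(main_distance_m: float, fill_distance_m: float) -> float:
    """Delay to apply to the nearer (fill) speaker so its sound arrives at the
    listener together with the sound from the more distant main array."""
    path_difference_m = main_distance_m - fill_distance_m
    return 1000.0 * path_difference_m / SPEED_OF_SOUND_M_PER_S

# Mains 30 m from the listening position, front fill 4 m away:
print(f"delay the fill by {alignment_delay_ms(30.0, 4.0):.1f} ms")  # ~75.8 ms
```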

From Bob McCarthy's historic overview slide of SIM Systems.

Live sound system optimization has the goals of uniform coverage throughout the space and maximum SPL capability with a minimum of phase variance. In addition, latency and loudspeaker time-delay need to be dealt with. A variety of considerations are interrelated, yet need to be addressed by category:

Loudspeakers: factors include placement, the speakers’ coverage angle, setting crossover points, and aiming and spacing the speakers.

Outboard Electronics: EQ, compression, delay and other outboard electronics must be properly used to optimize the performance of the loudspeakers.

Acoustic Modification: this involves the use of baffles, drapes or other physical materials to change the sound in a room.

Yakabuski and Lawrence shared their similar methodology of measuring the sound at the front-of-house (FOH) mixing position in order to pinpoint any setup errors early on, such as mis-wired cabling or an out-of-phase speaker cab within an array. Paz noted that it was important to time-align the speakers first (starting with the subwoofers) before doing any EQ or other processing; otherwise, there was a high likelihood that all of the initial work in tweaking the sound would have to be scrapped if the system had latency, phase, or other issues.
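
As a toy illustration of how such a measurement can catch a reverse-polarity cabinet, not the panelists’ actual workflow (dedicated analyzers do far more), one can cross-correlate the test signal with the captured signal and check the sign of the dominant peak:

```python
import numpy as np

def polarity_flipped(reference: np.ndarray, measured: np.ndarray) -> bool:
    """Return True if the captured signal best matches an inverted copy of the
    reference, suggesting a mis-wired (reverse-polarity) speaker."""
    corr = np.correlate(measured, reference, mode="full")
    dominant_peak = corr[np.argmax(np.abs(corr))]
    return dominant_peak < 0

# Quick self-test: a noise burst, delayed and polarity-inverted
rng = np.random.default_rng(0)
test_signal = rng.standard_normal(4096)
captured = -np.concatenate([np.zeros(100), test_signal])  # 100-sample delay, flipped
print(polarity_flipped(test_signal, captured))  # prints True
```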

Anton mentioned that she often uses multiple mics, measuring platforms like SMAART, and a variety of audio processors when tuning post-production studios. She prefers to measure the acoustic properties of a live venue (with no sound system installed) first, since the genre of music often will force certain sound design choices. A Placido Domingo concert will require a more nuanced system than a hip-hop or DJ event where sheer amplifier power is the dominant consideration.

Carolina Anton.

One tip that Michael Lawrence offered was that engineers should have ballpark expectations for how a system will measure, so that if something falls notably outside those parameters, they know to deal with the issue immediately. He learned from experience in tuning a sound system for West Side Story that it’s important for the sound engineers to actually get an audience perspective of the sound in each different area of a venue, since measurements can sometimes give a false sense of what the audience is actually hearing. This is even more critical in the case of outdoor sound systems, where audience size, weather, and other elements can affect the sound.

Additionally, some sound systems have practical considerations that may override achieving optimum audio quality. Lawrence amusingly cited the example of a system he designed at a college that purposely needed to have one speaker pointed in an odd direction, so the sound could be heard by the Dean in her office. It was a mandatory requirement that was actually written into the contract.

Michael Lawrence.

McCarthy pointed out that today’s sound systems are considerably more complex due to the advent of digital technology, where any number of parameters can be adjusted with a few strokes on an iPad. However, this makes it more difficult to determine if, say, a speaker isn’t working because it’s blown or miswired, or because it was simply shut off accidentally.

As system engineers, Lawrence and Yakabuski both aim for a balanced and flat sound from the main left and right speaker arrays at the front of house (mixing) position, with a +/- 2 dB differential between the main and ancillary zones and not more than +6 dB of additional level from the subwoofers. In that way, a mix engineer will have a fairly clean canvas upon which to work. The exceptions are live sound for hip-hop or EDM (electronic dance music), where mix engineers may want the low end to be boosted as much as 18 dB to deliver extra low-frequency energy for these music genres, with less emphasis on an even overall balance.
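
Those targets translate directly into a quick tolerance check. The sketch below uses invented zone levels purely for illustration:

```python
# Hypothetical broadband levels (dB SPL) per zone; all figures are invented.
main_level = 96.0
sub_level = 103.5
zone_levels = {"front fill": 95.1, "under-balcony delays": 98.5, "out fill": 96.7}

for zone, level in zone_levels.items():
    if abs(level - main_level) > 2.0:
        print(f"{zone}: {level - main_level:+.1f} dB vs. mains -- outside the +/-2 dB window")
if sub_level - main_level > 6.0:
    print(f"subs: {sub_level - main_level:+.1f} dB over mains -- beyond the +6 dB allowance")
```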

For theaters, Paz said she prefers a relatively flat frequency response, but rolls off 3 dB on the high and low ends, and as much as 6 dB if the house has excessive reverberation.

Jessica Paz.

Yakabuski, who also engineers frequently for corporate clients, will use an extremely flat response, so that tweaking the individual lavalier mics often used at these types of events is easier to accomplish on the fly. He will add back some low end on some playback tracks to make them sound “bigger,” which, as he notes, usually makes for a contented client. He has certain ballpark frequency curve templates for corporate, rock, and hip-hop projects and venues. Outdoor festivals, he says, pose different challenges. System engineers (the ones who set up the sound system) who tailor the frequency curve for a particular music genre need to clearly communicate to the mix engineers the degree of deviation from a flat response the system has been tuned to, as early as possible before the event begins.

Anton, who works frequently in Mexico City, says that because of her previous work with certain artists, she has often found herself having to check the work of a systems engineer who may have left the premises hours earlier, in order to gauge what frequency response curve the system has been set to and whether or not it can cleanly handle the demands of those artists’ music. As a result, she often has to formulate workarounds for the FOH mix engineer in advance, as there is usually insufficient time for a full soundcheck, so her depth of experience is crucial.

Finlay Watt’s experience with sound system rentals in the UK has given him a wide mix of experiences. He tries to communicate in advance with the designated mix engineer to get an idea of the engineer’s preferences, in order to customize a system as close as possible to his or her taste during the system-tuning phase.

McCarthy is often hired to design permanent installations, and he estimates that 80 percent of the time is spent on determining proper speaker placement in order to attain optimum audio fidelity. He has an interesting technique of creating a line array of microphones that corresponds with the hung speaker arrays, and then works to get the performance of the speakers to match what he hears from the mics.

Given her specialty in theater sound, Paz advised calibrating all the microphones that will be used in measuring a venue, since she uses 15 mics on average when configuring a system setup. Variances even among mics of the same brand and model can be too wide to make them acceptable for use in these situations.
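
A minimal sketch of that kind of relative calibration, assuming every mic has captured the same calibrator tone; the levels and the tolerance below are invented, not figures from the session:

```python
reference_db = 94.0  # level the reference mic reads from the calibrator
mic_levels_db = {"mic 01": 94.1, "mic 02": 93.2, "mic 03": 94.9}
TOLERANCE_DB = 0.5   # assumed acceptance window, not a quoted figure

for mic, level in mic_levels_db.items():
    offset = reference_db - level  # correction to apply to this mic's readings
    verdict = "ok" if abs(offset) <= TOLERANCE_DB else "recheck or set aside"
    print(f"{mic}: apply {offset:+.1f} dB correction ({verdict})")
```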

Anton also noted that live sound has no fixed rules, because each location has different obstacles to be overcome, whereas in studio sound design, there are very specific rules that need to be observed. In designing setups for smaller spaces and venues, Watt routinely re-uses his own reference tones, which enable him to more easily factor in the effects of the reflective surfaces often encountered in smaller spaces. He’ll then use a piano and vocals to tune the system for the room, and adjust for live drums if needed. As smaller spaces often have considerable amounts of reflective concrete and glass, the primary goal is to achieve maximum clarity despite these elements, which may be harder to EQ later on when mixing for an audience.

McCarthy half-jokingly offered the adage, “Never underestimate the power of a Twin Reverb (an extremely loud Fender guitar amp) to destroy [the fill from a speaker].” Electric guitar sound pressure levels are often the hardest to deal with when players insist on using ear-shattering stacks of 100-watt amps in small venues where a 15-watt combo would be more than enough.

While mix engineers depend on EQ and outboard processing (such as compression and reverb), system setup engineers find that these are actually the weakest tools in their arsenal, as “you can’t EQ your way out of poor speaker placement.” All of the panelists still use tried-and-true reference music tracks that they are thoroughly familiar with in order to tune specific parts of the frequency spectrum, although they’ll use pink noise for initial system measurements.
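
Pink noise carries equal energy per octave, which is why it’s the standard stimulus for these initial measurements. One common way to generate an approximation, sketched here in Python by shaping a white noise spectrum by 1/√f:

```python
import numpy as np

def pink_noise(n_samples, seed=None):
    """Approximate pink (1/f power) noise by spectrally shaping white noise."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(rng.standard_normal(n_samples))
    freqs = np.fft.rfftfreq(n_samples)
    scale = np.ones_like(freqs)
    scale[1:] = 1.0 / np.sqrt(freqs[1:])   # -3 dB per octave; skip the DC bin
    pink = np.fft.irfft(spectrum * scale, n=n_samples)
    return pink / np.max(np.abs(pink))     # normalize to +/-1 full scale

noise = pink_noise(2 ** 16, seed=42)  # about 1.5 seconds at 44.1 kHz
```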

Part Two will feature coverage of a presentation on stereo panning, a keynote speech by Grammy award-winning record producer Peter Asher, and an analytic look at low frequencies.
