Friday, March 25, 2011

Just say "NO!" to phase cancellation

Have you ever watched a sitcom or a soap opera on TV and heard the sound swirl around as two characters passed each other?  That's phase cancellation.  It happens when two or more microphones pick up the same source from different distances.  On TV it's more apparent because the mics are moving, so the effect keeps changing.

Sound consists of waveforms, like ripples in a pond.  Two separate microphones will pick up the same waveform at different times and at different points between the peaks and valleys.  Sound is slow.  It only moves around 1,130 feet per second, which works out to roughly 1 millisecond of delay for every foot of distance.  Depending on the type of sound, we start hearing discrete echoes at 15 milliseconds.  That's only 15'!
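The foot-per-millisecond rule of thumb is easy to sanity-check.  Here's a quick sketch in Python, assuming a speed of sound of roughly 1,130 ft/s (it varies a bit with temperature):

```python
def delay_ms(distance_ft, speed_ft_per_s=1130.0):
    """Propagation delay in milliseconds for sound travelling distance_ft.
    ~1,130 ft/s is typical at room temperature."""
    return distance_ft / speed_ft_per_s * 1000.0

for d in (1, 5, 15):
    print(f"{d:2d} ft -> {delay_ms(d):.2f} ms")
```

A 15' path difference gives about 13 ms of delay, right at the edge of where we start hearing a discrete echo.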

When two identical but delayed signals are blended (mixed) together, some frequencies will be cancelled.  The resulting sound is "hollow" or "thin."  High frequencies can sound "swishy" or "wishy-washy."  The same thing happens when recording musical instruments.  If the instrument you're recording sounds "off," try changing the placement of a mic, or change mics.  Experiment.
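Which frequencies cancel is predictable: when a copy delayed by time t is summed with the original, the deepest nulls fall at odd multiples of 1/(2t), the classic "comb filter."  A sketch (the 1 ms delay below is just an example):

```python
def null_frequencies(delay_ms, max_hz=20000):
    """Frequencies cancelled when a copy delayed by delay_ms is mixed
    with the original: the odd multiples of 1 / (2 * delay)."""
    delay_s = delay_ms / 1000.0
    f = 1.0 / (2.0 * delay_s)          # first (deepest) null
    nulls = []
    while f < max_hz:
        nulls.append(round(f))
        f += 1.0 / delay_s             # nulls repeat every 1/delay Hz
    return nulls

# A 1 ms delay (about a 1-foot path difference) notches
# 500 Hz, 1.5 kHz, 2.5 kHz, ... -- hence the "hollow" sound.
print(null_frequencies(1.0)[:4])
```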

By the way, I have Mr. Mackey from South Park stuck in my head.  He's saying, "Phase cancellation's bad, mmmmkaayyyy."

Any time two or more mics are used, there are bound to be some phase issues.  Fortunately, there are some methods to ease the audible effects of phasing.  Here are a few guidelines to follow.  As I've stated previously, there aren't many hard and fast rules pertaining to music production.  However, the following come close.  Rules are made to be broken.  Once you understand the rules, you also understand how to break them to your advantage.

The first rule is fairly simple.  Put the microphones as close together as possible.  Putting two mics together in an XY or Blumlein configuration requires the capsules to be right next to one another.  These are also known as coincident pairs.  Because the capsules are so close, they receive the sound at approximately the same time.  Therefore, very minimal phasing.  The drawback to an XY coincident pair is that the recording can sound fairly sterile.  It's a good-quality stereo image, but it lacks pizzazz.  Other near-coincident pairs (NOS, ORTF) offer similar sounds with a wider stereo image.  You can read about those in my post about stereo miking techniques.

The second rule is also quite simple.  It's the 3:1 Distance Rule.  Simply put, microphones should be at least three times as far apart as they are from the source.  Let's take an acoustic guitar for example. If one mic is 6" away from the guitar, the second mic should be 18" away from the first mic.  In the case of an orchestra, the mics should be at least 30' apart if they are 10' away from the orchestra.
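The 3:1 Distance Rule is simple enough to put into a tiny helper.  A sketch, using the distances from the examples above (any consistent unit works):

```python
def satisfies_3_to_1(mic_spacing, source_distance):
    """True if two mics obey the 3:1 rule: at least three times as far
    from each other as each is from the source (same units for both)."""
    return mic_spacing >= 3 * source_distance

print(satisfies_3_to_1(18, 6))    # acoustic guitar, inches: True
print(satisfies_3_to_1(30, 10))   # orchestra, feet: True
print(satisfies_3_to_1(12, 6))    # too close together: False
```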

Close-miking with a directional microphone is also effective in combating phasing.  When a mic is within 6" of an instrument, the bleed from other instruments is low compared to the primary instrument.  The drawback is that this can make the instrument sound unnatural.  Proximity effect is another side effect of close-miking.  With directional mics, there is a low-frequency build-up as the mic gets closer to the instrument.  That's why those smooth-jazz radio DJs sound so rich.  They get up close and personal with their mics.

Another method for minimizing phase cancellation is to use a microphone's pickup pattern to your advantage.  Let's return to our acoustic guitar example.  Instead of recording the guitar in stereo, now we want to capture the guitarist singing while she plays.  We'll still use two microphones on two different sources.  However, the vocal mic will pick up some guitar and vice versa.  There are ways to cut down on the amount of leakage between mics.  One method I like to use in this scenario is a bidirectional (a.k.a. figure-8) mic on the guitar.  Since this mic rejects very well from the sides, I point the side towards her mouth while the front picks up the spot on the guitar I like.  The same holds true for the vocal mic.  It can be a cardioid pattern.  I'll position the microphone so that its rear is aimed towards the guitar.

So far, I've only mentioned two microphone situations.  What happens when there are more than that?  Being a drummer, I believe that getting a good drum sound is the cornerstone to a good recording.  Once the drums sound good, the rest falls into place.  Did you know that on a typical five-piece drum kit, I'll use as many as 13 mics?  I can't always follow all the rules at the same time.  Or can I?  Let's run down the different mics, shall we?

Two mics on the kick drum.  One inside and one outside.  The inside is there to capture the sound of the beater against the batter head, while the outside mic captures the boom of the resonant head.  Each is close-miked on its spot, the two about 18" apart while only 2-4" from their source.  So far, so good.

Two mics on the snare.  One on top to get the stick hit and tone.  One on the bottom for some added "snap."  They are 6-8" apart while being 1-2" from their source.  Another close-miking situation here.  The bleed from the kick drum is minimal.  And I like to face the rear of the top mic towards the hi-hat to keep the snare mic from picking up too much hi-hat.

Three mics for the rack and floor toms.  Again, this is a close-miking technique.  Any bleed-through of other instruments is minimized by facing the rears of the mics towards the cymbals.

One mic on the hi-hat.  This one is different.  Often, you'll see hi-hats miked right up close towards the top hat.  I point mine away from the hi-hats and the rest of the drum kit.  Sometimes as far as 3' away.  The reason is that cymbals resonate outwards from their sides.  I want more of the sizzle of the hi-hat and less stick hitting the hi-hat.  So, I point the hi-hat mic 1-3' away towards where the sound will be.  Try it!  Very little bleed from the kit.

Two mics for overheads.  These two mics pick up the overall drum kit.  Generally, I'll space them 6-9' apart while they are 2-3' above the kit.  I've also been known to place them beyond the kit to pick up more cymbals than toms.  Same principle as the hi-hat mic here.

Finally, two room mics.  Depending on the size of the room, I'll place these mics 10' away from the kit and 30' apart.  I've had one 8' off the floor and the other 3" off the floor.  If my room isn't that big, another miking rule comes into play.

The Reflective Surface Rule is similar to the 3:1 Distance Rule.  In a single-microphone application, the mic should be at least twice as far from the nearest reflective surface as it is from the source.  For example, a single vocal mic is about 6-12" away from the singer and should be at least 3' from the floor or wall.  Sound will reflect off hard surfaces back into the microphone, causing phase issues.
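The same kind of quick check works here.  A sketch, using the vocal-mic distances from the example (units in feet):

```python
def reflection_safe(source_distance, surface_distance):
    """True if a mic obeys the Reflective Surface Rule: at least twice
    as far from the nearest reflective surface as from the source."""
    return surface_distance >= 2 * source_distance

print(reflection_safe(1.0, 3.0))   # vocal mic 1' from singer, 3' from the wall: True
print(reflection_safe(1.0, 1.5))   # too close to the wall: False
```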

In the case of room mics, it's perfectly acceptable to break that rule.  The more room sound, the better.  In fact, reverse the Reflective Surface Rule and your drums will sound bigger.  They should be twice as far from the source as they are from the nearest reflective surface.

One method is not as much phase-related as it is polarity-related.  In professional audio, different pieces of equipment are connected with XLR connectors.  These are 3-pin connectors.  One pin is grounded while the other two carry the audio, one of them 180 degrees out of phase - otherwise known as reverse polarity.  Not all equipment is wired the same.  By that I mean some manufacturers design equipment to send the positive signal on pin 2 (a.k.a. pin-2-hot) while others send the hot signal out pin 3.  For example, Shure SM58s used to be wired pin-3-hot.  Ampex and Tascam tape machines were pin-3-hot.  Those signals will be out of polarity with equipment that is wired pin-2-hot.  Most of today's equipment is pin-2-hot, but back in the 1980s, you had to know which was which.  In any case, try flipping the polarity of one mic and listen to what it does for your sound.
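The effect of a polarity flip is easy to demonstrate with a pair of synthetic "mic" signals (NumPy here; the 100 Hz tone is just an illustration):

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
mic_a = np.sin(2 * np.pi * 100.0 * t)   # both mics capture the same tone...
mic_b = -mic_a                          # ...but one path is wired reverse-polarity

summed = mic_a + mic_b                  # mixed together: total cancellation
print(np.max(np.abs(summed)))           # 0.0

fixed = mic_a + (-mic_b)                # flip the polarity of one channel
print(np.max(np.abs(fixed)))            # back to full level (~2.0)
```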

Lastly, when everything is recorded, there is one extra measure that I will go to for my drums.  Back in the days of tape, this wasn't possible.  With today's digital technology, we can go an extra step.  I'm talking about track alignment.  Pro Tools does this exceptionally well since I'm able to edit at the sample level.  Basically, I will align the various drum tracks to be phase aligned to each other, realizing that there is bound to be some bleeding of the instruments.

Starting with the kick drum mics, I'll zoom in to the sample level and nudge the outside kick mic's track forward to line up with the inside mic.  Then, I'll adjust the overheads to those.  The snare, hi-hat and tom mics will be adjusted to the overheads.  The room mics are left alone.  If done properly, the drums become more open.
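Outside a DAW, the same idea can be sketched with cross-correlation: find the lag at which two tracks best line up, then shift one by that many samples.  A toy example with NumPy (the noise "tracks" and the 40-sample delay are made up for illustration):

```python
import numpy as np

def align(reference, track):
    """Estimate the lag (in samples) of `track` relative to `reference`
    via cross-correlation, then shift `track` back into alignment."""
    corr = np.correlate(track, reference, mode="full")
    lag = int(np.argmax(corr)) - (len(reference) - 1)
    return np.roll(track, -lag), lag

rng = np.random.default_rng(0)
inside_kick = rng.standard_normal(1024)       # stand-in for the inside mic track
outside_kick = np.concatenate([np.zeros(40), inside_kick[:-40]])  # arrives 40 samples late

aligned, lag = align(inside_kick, outside_kick)
print(lag)   # 40
```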

That's all the noise I have for this post.  I hope it was all in phase (coherent).  Ha!  Sometimes, I crack myself up!

Rock.  Roll.  Repeat.

Thursday, March 17, 2011

Did you hear the one about...?

The other day I was sharing a few amusing anecdotes about some personal audio experiences.  My colleague (we'll call him... Chuck) suggested I write a book.  Well, since I have a blog, perhaps this is a good place to start.  The stories you are about to read are true.  The names have been changed to protect the innocent.

Most of the stories I shared with Chuck were about wireless microphone systems.  The nature of wireless systems is such that frequencies often cross over in wireless-rich environments.  Here are two of my favorites.

Back in the mid-90's, I managed the Jimmy Durante stage at the Del Mar Fair in San Diego.  It was a grueling 14-hour schedule filled with local country bands, dance troupes and roaming entertainment.  The Durante stage happened to be next to the horse arena where they would host rodeos, tractor pulls and monster truck rallies.  In case you're wondering, it's pointless to run sound during a tractor pull or monster truck race.

One year, a gentleman performed a flea circus on my stage.  His show was quite enjoyable.  One day, the fleas were right in the middle of their flying trapeze act, when a woman ran frantically towards me from the horse arena.  She was yelling, "Turn it down!  Turn it Down!"  I didn't know why she was so upset, so I followed her into the arena where they were changing over the cattle for the rodeo.  Over the loudspeakers I heard the fleas flying through the air!  Like a scene out of a foreign film.

Apparently, one of the rodeo clowns' mics was on the same frequency as my flea circus ringman.  When the clown turned off his mic, the receiver was still on!

Another story from the Del Mar Fair happened at a friend's stage.  His stage was in the infield and, hence, called the Infield Stage.  It was located about 100 yards behind the Main Stage, where acts such as Wynonna Judd and Brandy would perform each evening.  During the day, they would rehearse.

One day, The Village People were rehearsing their show for the evening.  As suspected, one of the performer's microphones shared a frequency with a wireless mic my friend used for his stage.  As I was visiting him, we were treated to a solo performance of YMCA via the wireless receiver.  During the chorus, we heard, "Yyyyyy, MCA.... oh, $h!#!  I don't know the !@#&ing words to this song!"

A learning story happened at San Diego Symphony Hall.  I call this a learning story because there's a moral to be gleaned.  It does not involve wireless microphones and is not intended to provoke laughter.  However, after a couple of decades have passed, I can laugh about it now.

I was hired to provide sound for an up-and-coming singer who was to perform at Symphony Hall.  To provide enough sound, I rented a system from the Back Stage at San Diego State University.  Along with the system came a couple of interns to assist in the setup.

All went smoothly through the setup.  The 2-hour rehearsal also went swimmingly.  We enjoyed a little break to eat some dinner and relax before the show.  The show started on time, and halfway through the first number, everything went quiet.  Nothing but drums and vocals.

The PA was dead.  The monitors were dead.  The guitars and keyboard amps were dead.

It turns out the assistants from SDSU had plugged all 10,000 watts of power amps into the same 20-amp circuit.  Anyone who knows amplifiers can tell you that's too much for one circuit.  In case you're not one of those people, here's the math.  10,000 watts divided by 120V yields about 83 amps.  83 amps on a 20-amp circuit.  I have no idea how we made it through rehearsal without tripping a breaker.  But we did.
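The arithmetic, for the curious (120V is the US nominal mains voltage):

```python
def current_amps(watts, volts=120):
    """Current drawn by a load, from power = volts * amps."""
    return watts / volts

print(round(current_amps(10_000), 1))   # 83.3 amps on a 20-amp breaker. Ouch.
```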

I have other stories that involve generators and bass heads catching fire, but I've omitted them due to their audio irrelevance.  There are a plethora of other stories, but not enough time or room to share them all.

I hope you've enjoyed this little trip down memory lane as much as I have.  There are always lessons to be learned from mistakes.  Often, these mishaps become funny stories to be shared for years to come.  Some of them are even blog-worthy.  At least according to Chuck.

Rock.  Roll.  Repeat.

Tuesday, March 8, 2011

Compression is not as scary as it sounds.

Big Rob is a good friend and a talented engineer, and I'd like to thank him for this installment's topic.

Talk to a musician about compression and they'll usually get a glazed look in their eyes.  Unless they are technically savvy, most musicians know what a compressor is capable of, but not really how to use it to their advantage.  They know they can get an instrument louder in the mix, but that's it.  Using compression properly is what separates the wheat from the chaff.

Case in point - a guitar-playing friend asked me to listen to a mix and wanted my opinion on how to improve it.  Most noticeable was the lack of dynamics and the abundance of noise.  Looking more closely, I discovered he had a compressor on every track, and each was reducing the gain by 10dB (decibels) or more!  His gain structure (another blog topic for another day) was completely out of whack.  I explained to him that his overuse of compression was wreaking havoc with his gain structure and, therefore, his mix.  The most efficient fix would be to remove all the compressors and start over.

There are times when overuse of compression can create a desirable effect.  The Who experimented with compression on cymbals to create locomotive sounds.  Most often, however, "pumping" and "breathing" are the side effects of too much compression.

We all hear compression every day.  When you listen to the radio, the audio is passed through several compressors before it is broadcast.  Any commercially released music has passed through at least one compressor, if not several.  Heck, even our middle ear has a compressor.  That's part of what makes VU or RMS meters more relevant.  But that's yet another discussion altogether.

The compressor was designed, primarily, as a level-control device.  Back in the days of vinyl, a Mahler symphony, with a dynamic range of 136dB, would need to fit onto a record with a total dynamic range of 68dB.  Radio stations have an even smaller dynamic range for the same material.  That's why the first compressors were called "leveling amplifiers."  This is known as "downward compression" (compressing down the audio peaks) and is the most common usage.

I like to think of compressors in terms of plumbing.  The water level is the signal coming into the unit, and the compressor circuit is like a valve.  Basically, compressors "compress" a signal by a defined ratio once it has crossed a certain threshold.  The input:output ratio is fairly simple.  A ratio of 4:1 indicates that for every 4dB the input rises above the threshold, the output rises only 1dB.  A ratio of 10:1 or higher is known as "limiting."
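The ratio math is easy to sketch as a static compression curve (the threshold and ratio values below are just examples):

```python
def compress_db(level_db, threshold_db, ratio):
    """Static downward-compression curve: signal above the threshold
    rises at 1/ratio the rate it would otherwise."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# 4:1 with a -20dB threshold: an input 8dB over the threshold
# comes out only 2dB over it.
print(compress_db(-12, -20, 4))    # -18.0
# 20:1 is well into limiting territory: 8dB over barely moves.
print(compress_db(-12, -20, 20))   # ~-19.6
```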

Most compressors have a few simple controls.  Optical compressors, such as Universal Audio's LA-2A, are the simplest, having only two controls: input and output gain.  Other compressor designs allow for more flexibility.  The UREI 1176 had four buttons for different ratios, plus attack and release controls along with the input and output controls.  The "attack" control delays the time before the compressor kicks in, and the "release" control sets how long the compressor takes to let go after the signal drops below the threshold.  Digital compressors can have a function called "look-ahead."  Because analog compressors have to react to a signal as it arrives, they tend to distort easily.  The look-ahead feature allows the compressor to see an audio peak before it comes and prepare for it.  The result is more gain before distortion.  Different units have even more controls.  There just isn't space here to discuss them all.
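To make attack and release concrete, here is a toy feed-forward compressor: an envelope follower with separate attack/release smoothing driving the static gain curve.  This is a teaching sketch, not how any particular hardware unit works, and all the numbers are made up:

```python
import math

def compress(samples, sample_rate, threshold_db, ratio, attack_ms, release_ms):
    """Toy feed-forward compressor with attack/release smoothing."""
    atk = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    env = 0.0
    out = []
    for x in samples:
        level = abs(x)
        coeff = atk if level > env else rel        # attack when rising, release when falling
        env = coeff * env + (1.0 - coeff) * level  # smoothed envelope
        env_db = 20.0 * math.log10(max(env, 1e-9))
        if env_db > threshold_db:
            over = env_db - threshold_db
            gain_db = over / ratio - over          # reduction dictated by the ratio
        else:
            gain_db = 0.0
        out.append(x * 10.0 ** (gain_db / 20.0))
    return out

# A constant full-scale signal, -20dB threshold, 4:1 ratio:
# once the attack settles, gain reduction approaches 15dB.
loud = [1.0] * 4800
print(round(compress(loud, 48000, -20.0, 4.0, attack_ms=1.0, release_ms=100.0)[-1], 3))
```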

In the early days of recording, engineers like Geoff Emerick (whose book "Here, There and Everywhere" is currently on my nightstand) used a compressor on John Lennon's acoustic guitar to level out his playing.  If you've ever heard an acoustic guitar picked in person, the level can be inconsistent.  There could be a note here and there that pops out.  Compression helps even out the performance.  Bass guitars benefit greatly from compression.  Geoff also began using compression on drum mics to prevent his console from distorting.

On that note (pun intended), here is a fun and really popular method for using compression with drums.  It's called New York compression, or parallel compression.  Basically, send your drum mix to two parallel outputs.  On one output, insert a compressor with a low ratio, low threshold, a fast attack and a slow release.  This is a method of upward compression (bringing the quiet parts up) and is very transparent.  It's also a good way to get drums to sound more aggressive when the drummer has had a few too many drinks or has poor technique.
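A minimal sketch of the idea: blend the untouched drum bus with a heavily squashed copy.  A crude static compressor stands in here for a real attack/release design, and the gain and sample values are made up:

```python
import math

def squash(x, threshold_db=-40.0, ratio=10.0):
    """Crude heavy compression on one sample (no attack/release)."""
    level_db = 20.0 * math.log10(max(abs(x), 1e-9))
    if level_db <= threshold_db:
        return x
    out_db = threshold_db + (level_db - threshold_db) / ratio
    return math.copysign(10.0 ** (out_db / 20.0), x)

def parallel(dry, wet_gain=0.5):
    """NY compression: the dry bus plus a squashed copy, mixed together.
    Quiet details come up; the dry peaks keep their punch."""
    return [x + wet_gain * squash(x) for x in dry]

drums = [0.9, 0.05, -0.4, 0.01]   # pretend drum-bus samples
mixed = parallel(drums)
# The quiet samples gain proportionally more level than the loud ones.
```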

A handy method for compressing vocals is called "serial compression."  As the name indicates, there is more than one compressor, in series, on a given channel.  Typically there are two: one to gain control of the overall level, and a second to take out the loud peaks the first lets through.  Sterling Winfield (Hellyeah, Pantera) uses a variation on this technique.  Using two Tube-Tech CL1Bs, he'll feed a vocal through both: one with an optical-compressor-type setting, the other with a slow attack and release.  The result is a musical and controlled vocal performance.

In mastering, limiters are used to compress the peaks down and increase the overall output.  One side effect of this practice is that instruments tend to sound smaller as they become more compressed.  Often, instruments can be practically "crushed" out of a mix.  One limiter I've become fond of is the Sonnox Limiter.  It has a feature no other digital limiter on the market has - an attack control.  By slowing down the limiter's attack, I can hear the drums get bigger.  The guitars get more rhythmic.  In fact, the entire mix improves.

With compression, the sky is the limit.  This seemingly innocuous and common utility can play a major role in the quality of our music.  As I've said before in other postings, there are no hard and fast rules when it comes to audio.  My best advice: don't fixate on the numbers on the faceplate; use your ears.  Play with the settings until it sounds "right."

Rock.  Roll.  Repeat.