Monday, June 13, 2011

Less Is More...More or Less

Every time I get in a rental car, the bass and treble of the stereo are ALWAYS boosted to their limits.  AND the "Loudness" button is engaged.  It only takes a couple of minutes to zero everything.  But, still, it's troubling.  Another typical trend is the "smiley face" eq curve.  You know...the shape a graphic eq wears on a friend's stereo system?  I envision the setup.  The moment the graphic eq is pulled from the box and plugged into the system, the "smiley face" is immediately carved into the once-boring flat line.  I wonder if any thought went into it.

Why?  Why do people adjust eq so radically?  Do they listen while making adjustments?  Did they see a friend do it and think that's how it was supposed to be?  Or were there small adjustments over time that culminated in the final curvature?  Perhaps it's to compensate for hearing loss as a result of listening to someone else's eq abuse?  Is the Bose marketing campaign that effective?

Whatever the reason, it bothers me.  It's the equivalent of turning the brightness, contrast, and all the color controls up fully on your TV or computer monitor.  Does that look good?

Most systems, nowadays, sound decent without all the hype.  It used to be that we would use eq to compensate for the lack of quality in our equipment.  Today's consumer equipment sounds much better.  I wonder if EQs are even necessary.

What is eq?  EQ is short for equalizer, or equaliser in the Queen's English.  An equalizer in a playback system was designed to compensate for, or "equalize," an environment's acoustical deficiencies.  Yet, today, an eq is used more as a shaping tool, much like the ones we audio engineers use during the recording process.  Mastering engineers are aware of this trend and adjust accordingly.  Occasionally, I'll get clients new to mastering who comment on how flat their project sounds in the studio.  Once they get it out to their car or their home stereo system, they don't have the same comment.

I bet most people don't even know what the "Loudness" feature was designed to do.  It was intended to compensate for the Fletcher-Munson curves at low volume.  The key part of the previous sentence is "AT LOW VOLUME."  The Fletcher-Munson curves (equal-loudness contours), simply put, describe the human ear's insensitivity to bass and treble at low volume.  The Loudness feature on stereo systems compensates for this lack of sensitivity by boosting the bass and treble.  The trouble comes when the Loudness feature is engaged at high volume.  It's unnecessary!

As an audio engineer, have you ever mixed a song that sounded great in the studio and when you played it in your car or home stereo it sounded thin and tinny?  You might blame the Fletcher-Munson curves and mixing too loudly. I can tell when someone mixed at too high a volume.  The bass is gone.  The treble is gone.  All that's left is mid-range.  I actually prefer to mix at low volumes.  Almost too low for some people. Distortion is easier to hear at low volume.  Everything sounds good loud.  It's easy to make something sound decent at high volume.  If you can make it sound good at a low volume, imagine how much better it will sound when you turn it up!

An issue that arises today for fledgling audio engineers is listening fatigue.  Extended periods of loud music not only damage your hearing, they're tiring.  As fatigue sets in, it's easy to reach for the treble frequencies.  You think it sounds better.  Your ear has become accustomed to the increased level.  And, it might sound better after a 12-hour day.  Get some sleep and come back the next day.  Does it still sound good?  If it does, great!  It's a good idea to have some reference material handy.  Go back and check yourself every hour or so.

You'll notice that until now, all that has been mentioned is boosting or adding eq.  I've often told people that I prefer to cut or subtract eq.  Taking out troublesome frequencies can be more effective than boosting other frequencies.  For example, if an instrument sounds dark or muddy, try taking out bass or low frequencies instead of reaching for the treble.  The same goes for an instrument that's too bright.  Cut the high frequencies.  This technique is more effective because it only affects the offending frequencies.


I am fond of high-pass filters.  They're simple and effective.  It is time to filter out my noise now.
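
If you like seeing ideas as code, here's a minimal Python sketch of that kind of subtractive move, using scipy.  The 80 Hz corner frequency and the made-up "rumble" are just for illustration, not a recipe.

import numpy as np
from scipy.signal import butter, lfilter

fs = 44100                      # sample rate in Hz
t = np.arange(fs) / fs          # one second of time

# Fake "muddy" source: a 40 Hz rumble under a 440 Hz tone
rumble = 0.5 * np.sin(2 * np.pi * 40 * t)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
x = rumble + tone

# 2nd-order Butterworth high-pass at 80 Hz: cut the mud, leave the tone alone
b, a = butter(2, 80, btype="highpass", fs=fs)
y = lfilter(b, a, x)

print("RMS before: %.3f  after: %.3f" % (np.sqrt(np.mean(x**2)), np.sqrt(np.mean(y**2))))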

Rock.  Roll.  Repeat.

Friday, March 25, 2011

Just say "NO!" to phase cancellation

Have you ever watched a sitcom or a soap opera on TV, and the sound swirled around as two characters passed each other?  That's phase cancellation.  It happens when two or more microphones are picking up the same source from different distances.  On TV it's more apparent and changes because the mics are moving. 

Sound consists of waveforms, like ripples in a pond.  Two separate microphones will pick up the same waveform at different times and at different points between the peaks and valleys.  Sound is slow.  It only moves at around 1,130 feet per second, which works out to roughly 1 millisecond of delay for every foot of distance.  Depending on the type of sound, we start hearing discrete echoes at around 15 milliseconds.  That's only 15'!

When two identical but delayed signals are blended (mixed) together, some frequencies cancel.  The same thing happens when recording musical instruments.  The resulting sound is "hollow" or "thin."  High frequencies can sound "swishy" or "wishy-washy."  If the instrument you're recording sounds "off," try changing the placement of a mic or change mics.  Experiment.
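
For the curious, here's a rough Python sketch of the math behind that hollow sound.  The distances are invented; the point is how a small arrival-time difference turns into deep notches when the two mics are summed.

# Illustrative numbers only, not a setup guide
SPEED_OF_SOUND_FT_PER_S = 1130.0   # approximate, at room temperature

def comb_nulls(dist_near_ft, dist_far_ft, how_many=4):
    """First few frequencies that cancel when the two mics are summed equally."""
    delay_s = (dist_far_ft - dist_near_ft) / SPEED_OF_SOUND_FT_PER_S
    # Nulls fall where the delayed copy arrives half a cycle late: f = (2k + 1) / (2 * delay)
    return delay_s, [(2 * k + 1) / (2 * delay_s) for k in range(how_many)]

delay, nulls = comb_nulls(1.0, 3.0)   # one mic 1' from a guitar, another 3' away
print("delay: %.2f ms" % (delay * 1e3))
print("first nulls (Hz):", [round(f) for f in nulls])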

By the way, I have Mr. Mackey from South Park stuck in my head.  He's saying, "Phase cancellation's bad, mmmmkaayyyy."

Any time two or more mics are used, there are bound to be some phase issues.  Fortunately, there are some methods to ease the audible effects of phasing.  Here are a few guidelines to follow.  As I've stated previously, there aren't any hard and fast rules pertaining to music production.  However, the following are a couple of rules.  Rules are made to be broken.  Once you understand the rules, you also understand how to break them to your advantage.

The first rule is fairly simple.  Put the microphones as close together as possible.  Putting two mics together in an XY or Blumlein configuration requires the capsules to be next to one another.  These are also known as coincident pairs.  By putting the capsules close together, they receive the sound at approximately the same time.  Therefore, very minimal phasing.  The drawback to an XY coincident pair is that the recording can sound fairly sterile.  It's a good quality stereo image, but it lacks pizzazz.  Other near-coincident pairs (NOS, ORTF) offer similar sounds with a wider stereo image.  You can read about those in my post about stereo miking techniques.

The second rule is also quite simple.  It's the 3:1 Distance Rule.  Simply put, microphones should be at least three times as far apart as they are from the source.  Let's take an acoustic guitar for example. If one mic is 6" away from the guitar, the second mic should be 18" away from the first mic.  In the case of an orchestra, the mics should be at least 30' apart if they are 10' away from the orchestra.
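
And here's the rough arithmetic behind the rule, sketched in Python.  Assuming a free field and the inverse-square law, keeping the second mic at least three times farther away leaves its pickup of the source roughly 9.5 dB quieter than the close mic's, which is usually enough to keep the comb filtering from being obvious when the two are summed.

import math

def relative_level_db(near_ft, far_ft):
    """Level difference, by the inverse-square law, between a mic at near_ft
    and a mic at far_ft from the same source (free field, no reflections)."""
    return 20 * math.log10(far_ft / near_ft)

# Acoustic guitar example: first mic 6" (0.5') away, second mic at least 18" from the first.
# Treating the far mic as roughly three times the distance from the source:
print("%.1f dB quieter in the far mic" % relative_level_db(0.5, 1.5))   # ~9.5 dB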

Close-miking with a directional microphone is also effective in combating phasing.  When a mic is within 6" of an instrument, the bleed from other instruments is low compared to the primary instrument.  The drawback is that this can make the instrument sound unnatural.  Proximity effect is another side effect of close-miking.  With directional mics, there is a low-frequency build-up as the mic gets closer to the instrument.  That's why those smooth jazz radio DJ's sound so rich.  They get up close and personal with their mic.

Another method for minimizing phase cancellation is to use a microphone's pickup pattern to your advantage.  Let's return to our acoustic guitar example.  Instead of recording the guitar in stereo, now we want to capture the guitarist singing while she's playing.  We'll still use two microphones on two different sources.  However, the vocal mic will pick up some guitar and vice versa.  There are ways to cut down on the amount of leakage between mics.  One method I like to use, in this scenario, is a bidirectional (a.k.a. figure-8) mic on the guitar.  Since this mic rejects very well from the sides, I point the side towards her mouth while the front is picking up the spot on the guitar I like.  The same holds true for the vocal mic.  It can be a cardioid pattern.  I'll position the microphone so that its rear is aimed towards the guitar.

So far, I've only mentioned two microphone situations.  What happens when there are more than that?  Being a drummer, I believe that getting a good drum sound is the cornerstone to a good recording.  Once the drums sound good, the rest falls into place.  Did you know that on a typical five-piece drum kit, I'll use as many as 13 mics?  I can't always follow all the rules at the same time.  Or can I?  Let's run down the different mics, shall we?

Two mics on the kick drum.  One inside and one outside.  The inside mic is there to capture the sound of the beater against the batter head, while the outside mic captures the boom of the resonant head.  Each is close-miking its spot, about 18" apart while only 2-4" from the source.  So far, so good.

Two mics on the snare.  One on top to get the stick hit and tone.  One on the bottom for some added "snap."  They are 6-8" apart while being 1-2" from their source.  Another close miking situation here.  The bleed from the kick drum is minimal.  And I like to face the rear of the top mic towards the hi-hat to prevent the snare mic from picking up too much hi-hat.

Three mics for the rack and floor toms.  Again, this is a close-miking technique.  Any bleed-through from other instruments is minimized by facing the rears of the mics towards the cymbals.

One mic on the hi-hat.  This one is different.  Often, you'll see hi-hats miked right up close towards the top hat.  I back mine away from the hi-hats and the rest of the drum kit, sometimes as far as 3'.  The reason is that cymbals resonate outwards from their edges.  I want more of the sizzle of the hi-hat and less of the stick hitting the hi-hat.  So, I place the hi-hat mic 1-3' out to the side, pointed towards where the sound will be.  Try it!  Very little bleed from the kit.

Two mics for overheads.  These two mics pick up the overall drum kit.  Generally, I'll space them 6-9' apart while they are 2-3' above the kit.  I've also been known to place them beyond the kit to pick up more cymbals than toms.  Same principle as the hi-hat mic here.

Finally, two room mics.  Depending on the size of the room, I'll place these mics 10' away from the kit and 30' apart.  I've had one 8' off the floor and the other 3" off the floor.  If my room isn't that big, another miking rule comes into play.

The Reflective Surface Rule is similar to the 3:1 Distance Rule.  In a single-microphone application, the mic should be at least twice as far from the nearest reflective surface as it is from the source.  For example, a single vocal mic is about 6-12" away from the singer and should be at least 3' from the floor or wall.  Sound will reflect off hard surfaces back into the microphone, causing phase issues.

In the case of room mics, it's perfectly acceptable to break that rule.  The more room sound, the better.  In fact, reverse the Reflective Surface Rule and your drums will sound bigger.  They should be twice as far from the source as they are from the nearest reflective surface.

One method is not so much phase-related as polarity-related.  In professional audio, different pieces of equipment are connected with XLR connectors.  These are 3-pin connectors.  One pin is grounded while the other two carry the audio.  One is 180 degrees out of phase with the other - otherwise known as reverse polarity.  Not all equipment is wired the same.  By that, I mean some manufacturers design equipment to send the positive signal on pin 2 (a.k.a. pin-2-hot) while others send the hot signal out pin 3.  For example, Shure SM58's used to be wired pin-3-hot.  Ampex and Tascam tape machines were pin-3-hot.  Those signals will be out of polarity with equipment that is wired pin-2-hot.  Most of today's equipment is pin-2-hot.  Back in the 1980's, you had to know which was which.  In any case, try flipping the polarity of one mic and listen to what that does for your sound.
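
In the digital domain, flipping polarity is nothing more than multiplying the waveform by -1.  Here's a tiny Python illustration.  Pure sine waves cancel completely; real mic signals never line up this perfectly, so you'll hear a tonal shift rather than total silence.

import numpy as np

fs = 48000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 100 * t)         # pretend this is one of the kick mics

flipped = -sig                            # the polarity (phase reverse) button
summed_wrong = sig + flipped              # opposite polarity: complete cancellation
summed_right = sig + sig                  # matched polarity: 6 dB louder

print("opposite polarity peak:", np.max(np.abs(summed_wrong)))   # ~0.0
print("matched polarity peak:", np.max(np.abs(summed_right)))    # ~2.0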

Lastly, when everything is recorded, there is one extra measure that I will go to for my drums.  Back in the days of tape, this wasn't possible.  With today's digital technology, we can go an extra step.  I'm talking about track alignment.  Pro Tools does this exceptionally well since I'm able to edit at the sample level.  Basically, I will align the various drum tracks to be phase aligned to each other, realizing that there is bound to be some bleeding of the instruments.

Starting with the kick drum mics, I'll zoom in to the sample level and move the outside kick mic forward to line up with the inside mic.  Then, I'll adjust the overheads to those.  The snare, hi-hat and tom mics get adjusted to the overheads.  The room mics are left alone.  If done properly, the drums sound more open.
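
If you're curious how that offset could be found programmatically, here's a rough Python sketch using cross-correlation.  It isn't my exact Pro Tools workflow (there, I nudge regions by eye at the sample level), and the numbers are just a toy example.

import numpy as np
from scipy.signal import correlate

def find_offset_samples(reference, delayed):
    """Estimate how many samples `delayed` lags behind `reference`."""
    corr = correlate(delayed, reference, mode="full")
    return int(np.argmax(corr)) - (len(reference) - 1)

# Toy example: the outside kick mic is a copy of the inside mic, 40 samples late
rng = np.random.default_rng(0)
inside = rng.standard_normal(48000)                       # one second at 48 kHz
outside = np.concatenate([np.zeros(40), inside])[:48000]

offset = find_offset_samples(inside, outside)
print(offset)                                             # 40
aligned_outside = outside[offset:]                        # nudge the late track earlier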

That's all the noise I have for this post.  I hope it was all in phase (coherent).  Ha!  Sometimes, I crack myself up!

Rock.  Roll.  Repeat.

Thursday, March 17, 2011

Did you hear the one about...?

The other day I was sharing a few amusing anecdotes about some personal audio experiences.  My colleague (we'll call him...Chuck) suggested I write a book.  Well, since I have a blog, perhaps this is a good place to start.  The stories you are about to read are true.  The names have been changed to protect the innocent.

Most of the stories I shared with Chuck were about wireless microphone systems.  The nature of wireless systems is such that frequencies often cross over one another in wireless-rich environments.  Here are two of my favorites.

Back in the mid-90's, I managed the Jimmy Durante stage at the Del Mar Fair in San Diego.  It was a grueling 14-hour schedule filled with local country bands, dance troupes and roaming entertainment.  The Durante stage happened to be next to the horse arena where they would host rodeos, tractor pulls and monster truck rallies.  In case you're wondering, it's pointless to run sound during a tractor pull or monster truck race.

One year, a gentleman performed a flea circus on my stage.  His show was quite enjoyable.  One day, the fleas were right in the middle of their flying trapeze act, when a woman ran frantically towards me from the horse arena.  She was yelling, "Turn it down!  Turn it Down!"  I didn't know why she was so upset, so I followed her into the arena where they were changing over the cattle for the rodeo.  Over the loudspeakers I heard the fleas flying through the air!  Like a scene out of a foreign film.

Apparently, one of the rodeo clowns' mics was on the same frequency as my flea circus ringmaster's.  The clown had turned off his mic, but the arena's receiver was still on!

Another story from the Del Mar Fair happened at a friend's stage.  His stage was in the infield and, hence, called the Infield Stage.  It was located about 100 yards behind the Main Stage, where acts such as Wynonna Judd and Brandy would perform each evening.  During the day, they would rehearse.

One day, The Village People were rehearsing their show for the evening.  As you might suspect, one of the performers' microphones shared a frequency with a wireless mic my friend used for his stage.  As I was visiting him, we were treated to a solo performance of "Y.M.C.A." via the wireless receiver.  During the chorus, we heard, "Yyyyyy, MCA.... oh, $h!#!  I don't know the !@#&ing words to this song!"

A learning story happened at San Diego Symphony Hall.  I call this a learning story because there's a moral to be gleaned.  It does not involve wireless microphones and is not intended to provoke laughter.  However, after a couple of decades have passed, I can laugh about it now.

I was hired to provide sound for an up-and-coming singer who was to perform at Symphony Hall.  To provide enough sound, I rented a system from the Back Stage at San Diego State University.  Along with the system came a couple of interns to assist in the setup.

All went smoothly through the setup.  The 2-hour rehearsal also went swimmingly.  We enjoyed a little break to eat some dinner and relax before the show.  The show started on time, and halfway through the first number, everything went quiet.  Nothing but drums and vocals.

The PA was dead.  The monitors were dead.  The guitars and keyboard amps were dead.

It turns out the assistants from SDSU had plugged all 10,000 watts of power amps into the same 20-amp circuit.  Anyone who knows amplifiers can tell you that's too much for one circuit.  In case you're not one of those people, here's the math.  10,000 watts divided by 120 volts yields about 83 amps.  83 amps on a 20-amp circuit.  I have no idea how we made it through rehearsal without tripping a breaker.  But, we did. 

I have other stories that involve generators and bass heads catching fire.  But, I've omitted them due to their audio irrelevance.  There are a plethora of other stories, but not enough time or room to share them all.

I hope you've enjoyed this little trip down memory lane as much as I have.  There are always lessons to be learned from mistakes.  Often, these mishaps become funny stories to be shared for years to come.  Some of them are blog worthy.  At least according to Chuck.

Rock.  Roll.  Repeat.

Tuesday, March 8, 2011

Compression is not as scary as it sounds.

Big Rob is a good friend and a talented engineer, and I'd like to thank him for this installment's topic.

Talk to a musician about compression and they'll usually get a glazed look in their eye.  Unless they are technically savvy, most musicians know what a compressor is capable of, but not really how to use it to their advantage.  They know they can get an instrument louder in the mix, but that's it.  Using compression properly is what separates the wheat from the chaff.

Case in point - a guitar-playing friend asked me to listen to a mix and wanted my opinion on how to improve it.  Most noticeable was the lack of dynamics and the abundance of noise.  Looking more closely, I discovered he had a compressor on every track, and each was reducing the gain by 10dB (decibels) or more!  His gain structure (another blog topic for another day) was completely out of whack.  I explained to him that the overuse of compression was wreaking havoc with his gain structure and, therefore, his mix.  The most efficient fix would be to remove all the compressors and start over.

There are times when overuse of compression can create a desirable effect.  The Who experimented with compression on cymbals to create locomotive sounds.  Most often, however, "pumping" and "breathing" are the side effects of too much compression.

We all hear compression every day.  When you listen to the radio, the audio is passed through several compressors before it is broadcast.  Any commercially released music you listen to has passed through at least one compressor, if not several.  Heck, even our middle ear has a compressor.  That's what makes VU or RMS meters more relevant.  But, that's yet another discussion altogether.

The compressor was designed, primarily, as a level control device.  Back in the days of vinyl, a Mahler symphony, with a dynamic range of 136dB, would need to fit onto a record with a total dynamic range of 68dB.  Radio stations have an even smaller dynamic range to fit the same material.  That's why the first compressors were called "leveling amplifiers."  This is known as "downward compression" (pushing the audio peaks down) and is the most common usage.

I like to think of compressors in terms of plumbing.  The water level is the signal coming into the unit.  The compressor circuit is like a valve.  Basically, compressors "compress" a signal by a defined ratio once it has reached a certain threshold.  The input:output ratio is fairly simple.  A ratio of 4:1 indicates that for every 4dB of input above the threshold, only 1dB comes out the other side.  A ratio of 10:1 or higher is known as "limiting."
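
Here's the ratio math as a small Python sketch.  The threshold and ratio are arbitrary example settings, not a recommendation.

def compressed_level_db(input_db, threshold_db=-20.0, ratio=4.0):
    """Static downward-compression curve: below the threshold nothing happens;
    above it, every `ratio` dB of input only produces 1 dB more output."""
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio

for level in (-30, -20, -12, 0):
    print("%4d dB in -> %6.1f dB out" % (level, compressed_level_db(level)))
# -30 -> -30.0, -20 -> -20.0, -12 -> -18.0, 0 -> -15.0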

Most compressors have a few simple controls.  Optical compressors, such as Universal Audio's LA2A, are the simplest, having only two controls: peak reduction and makeup gain.  Other compressor designs allow for more flexibility.  The Urei 1176 had four buttons for different ratios, plus attack and release controls along with the input and output controls.  The "attack" control sets how long the compressor waits before clamping down, and the "release" control sets how long it takes to let go after the signal has dropped back below the threshold.  Digital compressors can have a function called "look-ahead."  Because analog compressors have to react to a signal after the fact, they tend to distort easily.  The look-ahead feature allows the compressor to see an audio peak before it arrives and prepare for it.  The result is more gain before distortion.  Different units have even more controls.  There just isn't space here to discuss them all.
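
And here, roughly, is what attack and release are doing under the hood: smoothing how quickly the gain reduction clamps down and lets go.  This is a generic one-pole smoother sketched in Python, not any particular unit's circuit.

import numpy as np

def smooth_gain(target_gain_db, fs=48000, attack_ms=10.0, release_ms=200.0):
    """Apply attack/release smoothing to a gain-reduction curve (in dB, zero or negative)."""
    attack_coeff = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    release_coeff = np.exp(-1.0 / (fs * release_ms / 1000.0))
    out = np.zeros(len(target_gain_db))
    g = 0.0
    for i, target in enumerate(target_gain_db):
        # Attack while diving into gain reduction, release while recovering from it
        coeff = attack_coeff if target < g else release_coeff
        g = coeff * g + (1.0 - coeff) * target
        out[i] = g
    return out

# Example: 20 dB of gain reduction suddenly demanded for half a second, then nothing
target = np.concatenate([np.zeros(24000), np.full(24000, -20.0), np.zeros(48000)])
smoothed = smooth_gain(target)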

In the early days of recording, engineers like Geoff Emerick (whose book "Here, There and Everywhere" is currently on my nightstand) used a compressor on John Lennon's acoustic guitar to level out his playing.  If you've ever heard an acoustic guitar picked in person, you know the level can be inconsistent.  There could be a note here and there that pops out.  Compression helps even out the performance.  Bass guitars benefit greatly from compression.  Geoff also began using compression on drum mics to prevent his console from distorting. 

On that note (pun intended), here is a fun and really popular method for using compression with drums.  It's called New York compression or parallel compression.  Basically, send your drum mix to two parallel outputs.  Across one output, insert a compressor with a low ratio, low threshold, a fast attack and a slow release.  This is a method of upward compression (bringing the quiet parts up) and is very transparent.  It's also a good method for getting drums to sound more aggressive when the drummer has had a few too many drinks or has poor technique.
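
As a sketch of the routing (not a mixing recipe), here's the parallel blend in Python.  The "compressor" here is deliberately crude, with no attack or release, just to show the two paths being summed.

import numpy as np

def crude_compressor(x, threshold_db=-40.0, ratio=2.0):
    """Static, sample-by-sample gain reduction; for illustration only."""
    level_db = 20 * np.log10(np.abs(x) + 1e-12)
    over = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over * (1.0 - 1.0 / ratio)
    return x * 10 ** (gain_db / 20.0)

# A fake drum "hit": a noise burst with a decaying envelope
rng = np.random.default_rng(1)
drums = rng.standard_normal(48000) * np.exp(-np.linspace(0.0, 6.0, 48000))

crushed = crude_compressor(drums)       # the parallel (wet) path: low threshold, low ratio
parallel_mix = drums + 0.5 * crushed    # dry drums with the compressed copy tucked underneath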

A handy method for compressing vocals is called "serial compression."  As the name indicates, more than one compressor is placed in series on a given channel.  Typically there are two: the first is used to gain control of the overall level, and the second takes out the loud peaks the first lets through.  Sterling Winfield (Hellyeah, Pantera) uses a variation on this technique.  Using two TubeTech CL1B's, he'll feed a vocal through both.  One has an optical-compressor-type setting and the other a slow attack and release.  The result is a musical and controlled vocal performance.

In mastering, limiters are used to compress the peaks down and increase the overall output.  One side effect of this practice is that instruments tend to sound smaller as they become more compressed.  Often, instruments can be practically "crushed" out of a mix.  One limiter I've become fond of is the Sonnox Limiter.  It has a feature no other digital limiter on the market has: an attack control.  By slowing down the attack on the limiter, I can hear the drums get bigger.  The guitars get more rhythmic.  In fact, the entire mix improves.

With compression, the sky is the limit.  This seemingly innocuous and common utility can play a major role in the quality of our music.  As I've said before in other postings, there are no hard and fast rules when it comes to audio.  My best advice is to ignore the numbers on the faceplate and use your ears.  Play with the settings until it sounds "right."

Rock.  Roll.  Repeat.

Monday, February 21, 2011

Have we come full circle?

I have a theory.  Should I take a poll?  When was the last time you actually sat down and listened to a full-length recording from start to finish?  Unless you're an audiophile or an engineer, it's probably been a while.  I suspect it's been several years.  My theory is this: we've returned to being consumers of single songs, much like the early days of the recording industry.  Only this time, it's become a soundtrack for our lives.

When Thomas Edison first invented the wax cylinder, it became possible for us humans to listen to a previously performed song in the privacy of our own living room.  Before then, one would have to witness the musical event in person.  Among the limitations of the cylinder was the time limit.  Early cylinders would hold only about 2 minutes of audio.  Later, it would advance to 4 minutes.  Just about enough time for one song.

Then along came disc recordings.  At 78 RPM, disc phonographs would play about the same length per side.  But, because there were two sides, there was room for two songs.  If a record label were to release more than two songs from an artist, multiple records would be packaged together in an "album."  Due to the cost of producing records, careful attention was paid to crafting and picking the songs that would be included.  The result was a collection of good, well-written songs.

Technology changed again and we could now fit up to 25 minutes per side of a 33 1/3 RPM Long Playing (LP) record.  The term "album" stuck through the consolidation of music to a single disc. 

45RPM records appeared around the same time and were a smaller format and cheaper to produce.  With their sonic superiority, they were intended to replace the 78RPM discs.

45's were more commonly purchased by teenagers, while LP's were purchased primarily by adults who had more disposable income.  Initially, while LP's would sell, single 45's made the most money for record labels.  Often, labels would pressure artists for singles they could release and get radio play to generate money to produce full-length records.  It was common for a single to be on the radio and in stores within days of being recorded.

LP's took more time and cost to produce.  As I said before, it was important to carefully consider the material that went onto the LP.  Those of us who remember (and still have) our vinyl records, distinctly remember songs that didn't make the radio.  There were many.  A great album was a collection of songs that could stand on their own as singles.

8-tracks enjoyed success from 1965 to the late 1970's.  They were the first "portable" music format.  You can't play records very well in your car.  Even if you could, it would be impractical.  Radio was fine in the car, but the station chose what to play.  With 8-tracks, the choice was ours.  8-tracks consisted of 4 stereo tracks on an endless-loop tape cartridge.  Only one stereo track would play back at a time.  A full LP could fit onto an 8-track.  However, sometimes songs would need to be shuffled into a different order.  Often, a song might fade out at the end of a loop and then fade back in on the next track.  While 8-tracks died out with disco in the general population, they continued to serve radio broadcasters as "carts" for commercials and short material through the late 1990's.

Cassettes came shortly after the 8-track, offering a much more compact portable medium.  With blank cassettes, we could record our favorite LP's and take them with us in the car.  We could also make "mixtapes" and give them to anyone we wanted.  We could make a mixtape of love songs for our sweetheart or a workout mixtape to listen to while we exercised.

In 1979, Sony introduced the Walkman, the first personal portable cassette player.  The headphones were miniaturized as well.  We could now listen to our music while walking, riding, skating or any number of ambulatory methods that tickled our fancy.  All without bothering anyone else.

Compact Discs emerged in 1983.  Initially, CD's were expensive to manufacture.  Previously released albums were remastered and re-released on CD.  Playback systems were also expensive.  As costs lowered, it became a more popular format.  Eventually, CD's became portable too.

As CD's began to take over the LP market, record labels were beginning to push artists to release their contractually obligated albums.  Less attention was paid to writing quality songs.  Songwriters would write "filler" songs to fill out the rest of the album time.  It reached a point where there were only two or three songs on a CD that were worthy of airplay.  CD singles cost just as much to manufacture as full-length CD's, so record labels decided they weren't worth the effort.  Few artists were carefully crafting their projects.  Most of the popular stuff was generic.  But, people wanted the songs they heard on the radio and would cough up the extra money for the full-length disc.

Enter the consumer digital age and MP3's.  Did you know that MP3 is a format designed by video people?  MP3 is short for MPEG-1 or MPEG-2 Audio Layer-3.  MPEG stands for Moving Picture Experts Group.  Anyway, MP3's were small and could be shared via the internet.  If you wanted to share a song with someone, you could email it to them.  Websites like Napster and Rhapsody quickly rose to help facilitate people sharing files.  Labels cried "murder" (again) and started suing.  Laws were passed and children were being fined for piracy.

Then came iTunes.  Steve Jobs essentially dictated terms to the record labels and slowly helped the record industry realize it could survive in the modern age.  Labels needed to adapt to the technology.  Why would someone spend $17 for a CD filled with fluff for two songs they really wanted, when they could now buy the songs they wanted for $2?  And they could do it legally.

The demographics haven't changed much.  Most music is still purchased by 13-year-old girls.  They hear a song on the radio and they have to get it on their iPod right away.  The CD as a merchandising tool is on its way out.  Anyone can post music to iTunes, and they do.  It's become more cost effective to produce and release your own music.  Artists are selling "download cards" for an EP (usually 3-6 songs) at their shows and charging $5.

Speaking of iPods, music has become so portable now, a lot of us take it for granted.  It used to be, we would sit around and listen to music.  Now, it's on while we're doing dishes or dusting.  It's background noise while we work.  It's on in grocery stores while we shop.  It has permeated every facet of our existence.  So much so, it's become almost mundane.

There was a time when the "single" was king.  I believe we've returned to that era - albeit with a twist.  As Dennis Miller used to say, "That's my opinion.  I could be wrong."

Rock.  Roll.  Repeat.

Tuesday, February 8, 2011

Oops! I'm only human.

Okay, we all watched or heard about the Superbowl XLV halftime show.  I have to say, I was amazed at the level (or lack thereof) of quality in the audio production.  There were some obvious oversights and some not so obvious.  Whether or not you like Black Eyed Peas, Slash or Usher, or any of the music performed, is not the point.  I've seen a live Black Eyed Peas concert from 10 feet off the stage.  I was really impressed then.  What I hope to accomplish in this post is to offer some plausible explanations for the less than stellar audio production of said halftime show.

Let's begin by describing what I found to be the most egregious mistakes.  When the group began singing, it was obvious that Fergie's microphone was not on.  will.i.am's microphone was too loud.  As the songs progressed, the mix did not improve much.  Furthermore, the music track was too low.

At least we know they weren't lip syncing!

Having said all that, let's take a look at some possible explanations for how a multimillion dollar production, viewed by hundreds of millions around the world, could allow such errors.

1.  Lack of rehearsal - Dallas was hit pretty hard by a snow storm days before the Superbowl.  Ice was sliding off Cowboys Stadium the day before the big game.  I know there were plenty of preproduction meetings.  But, there wasn't much time to rehearse and fine tune everything.  However, the sound we heard broadcast could have been much better - even by amateur standards.  In addition, with today's technology, it is possible to take a "snapshot" of the mixing board during rehearsal.  That snapshot would contain levels, effects and any channels that needed to be turned on/off.  It sounded to me like someone was asleep at the wheel.

2.  Lack of redundancy - Equipment failure happens.  Those of us who have ever done a live event have our fair share of horror stories.  I have a few of my own centered around equipment failure.  It happens in the studio too, albeit a lot less frequently.  More often than not, these mistakes are preventable through redundancy.

Avid's Venue console was designed by professionals, like Robert Scovill (Rush, Tom Petty, Sting), who've had more than their fair share of horror stories.  They intentionally designed redundancy into the boards.  Each section of Venue has two power supplies in case one fails.  There is redundant cabling.  If the computer crashes (which it rarely does), you can still run sound while it reboots.

With a production like the Superbowl, I imagine they have failsafes in place.  At least that's what I would assume.

3.  An emergency - Whether medical, accidental, or restroom-related, emergencies happen.  Again, I refer to point #2.  If Mixing Engineer A is in the hospital from some bad sushi he ate the night before, there should be a Mixing Engineer B standing by to fill in.

4.  Inferior equipment - Each audio geek knows that we make judgement calls based on what we hear.  If the monitors are bad and the room has acoustic deficiencies, we may think something sounds great when in fact, it sounds truly horrific.  Again, I'd like to point out that the Superbowl is a big deal with big budgets.  I would assume, they weren't mixing the live feed to the world on a pair of computer speakers in the backseat of a minivan.

On a tangent, if the production company was trying to justify upgrading some equipment to their superiors, they might have blown the mix on purpose, and made the case the upgrade would have made the mix perfect.  Knowing that my name would be associated with a production would prevent me from actually following through on this kind of conspiracy.  I doubt anyone would stoop to such a level.

5.  ESO/ID10T error - Simple human error.  Most likely the cause.  All it takes is one little oversight and panic sets in.  It's possible.  ESO/ID10T errors happen all the time.  At the level of the Superbowl, ESO is less common.  ESO (Equipment Superior to Operator) usually happens with inexperienced people.  ID10T (a silly way of expressing the word "idiot") errors can happen to anyone.  We're only human after all.

There is no use in rehashing what happened during the halftime show, except to learn from it.  If any of you come across an article explaining what happened, please share.

None of us are perfect.  As I said before, we all make mistakes.  We're human.  We have to learn how to get over those mistakes, learn from them and move on.  We can't go back and fix it.

Rock.  Roll.  Repeat.

Tuesday, February 1, 2011

Where are we and how fast are we going?

Thank you, Justin, for this post's topic. He would like me to explain why time code is not dead. At Rocky Mountain Recorders, engineers like Justin breathe time code. It isn't just old-fashioned methodology. It's Alive! I imagine Justin suggested it because in today's computer-driven society, time code has been brushed aside as an afterthought. Most consumer or "pro-sumer" video editors don't even know what time code is, why it exists, or how unbelievably necessary it is. There are many DAW's out there that claim you don't need to use time code anymore in post-production. This simply is not true.

In the production of movies, television and music, time code is used to convey two vital pieces of information. Location and speed. In other words: Where are we and how fast are we going?

Since this is a fairly technical topic, I'll try to simplify it for those who aren't familiar with it. If I geek-out a bit, please forgive me. My brain is full of useless information. Actually, its uselessness is still to be determined.

First, let's define what time code is and how it came to be. Have you ever watched an early black and white film and it looks like everyone is running around in fast motion? It looks that way because early cameras were hand-cranked. That meant that the film speed was up to the camera operator's strength and endurance. In order for it to appear normal, the playback would have to precisely mimic the operator's speed and variances. A standard was needed to make it all look normal. So, it was decided that film should run at 24 frames per second. What that means is that 24 still images pass by each second. This is only the "how fast" part of the time code information.

As films became longer and edits more complex, film editors would make an "edit decision list," or EDL. In the EDL, they would notate the location of a cut by using the reel's feet + frames position. That's the other part of the time code: where are we? I can still see the editor's marks on the film at each edit in today's releases. I won't ruin it for you if you don't know what it looks like.

Separate audio recorders were used to record "talkies." The audio was then married to the film before editing. We've all seen the "clapboards," where someone comes in front of the camera, announces the scene and take number, claps the board and runs out of the frame. This was done for two reasons. Firstly, to record on film and tape which take and scene to use. Secondly, the audible "clap" could be lined up with the precise frame of film for playback. Special machines were used to run at consistent speeds, and the audio would match the film from that point forward.

This system wasn't perfect. Over the course of longer pieces of film, the audio would "drift." In other words, the audio would increasingly move out of sync with the film. Enter the Society of Motion Picture and Television Engineers (SMPTE).

SMPTE developed a method for matching the audio machine to the film regardless of its starting point, while continually checking and adjusting its speed. They developed an audio signal that could be recorded on both the film camera and the audio recorder that carried hours, minutes, seconds and frames information. They called it (insert trumpet fanfare here) "SMPTE Time Code." The audio playback machines had special decoders that could translate the signal, control the machine and keep it in sync. So now, it didn't matter where you started playback. Today, various forms of time code exist, but the most popular is SMPTE.
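
To make the "where are we" half concrete, here's a small Python sketch that turns a non-drop-frame SMPTE position into an absolute frame count. I'm assuming 24 fps since we're talking film; video rates like 29.97 drop-frame complicate the math and are beyond this little example.

def timecode_to_frames(tc, fps=24):
    """Convert a non-drop-frame SMPTE string 'HH:MM:SS:FF' to an absolute frame count."""
    hours, minutes, seconds, frames = (int(part) for part in tc.split(":"))
    return ((hours * 60 + minutes) * 60 + seconds) * fps + frames

def frames_to_seconds(frames, fps=24):
    return frames / fps

start = timecode_to_frames("01:00:10:12")
print(start)                         # 86652 frames into the day
print(frames_to_seconds(start))      # 3610.5 seconds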

In any synchronization scheme there are two parts: a master and a slave. In the previous scenario, the film machine is the master and the audio machine is the slave. The master dictates the location and speed via SMPTE. The slave constantly chases the master signal through its time code translator. The first time I saw it in action, it was like magic. One machine was moving on its own!

There have been many technological advancements in film and music over the past 50 years. Film became video and television. Tape machines added more tracks. In the 60's Sir George Martin synched two four-tracks using a sine-wave signal and grease pencil marks on tapes to gain just two more tracks for a total of six. Twenty years later, 24-track tape machines began synching to each other, netting 46 recordable tracks for those guitarists who couldn't get enough. Even as Pro Tools and other DAW's started replacing tape machines, synchronization was still key.

Today, most video and audio editors don't have to think about time code as much. It's easy enough to import the footage of your family reunion, make a few edits and spit out a few DVD's. However, professional productions require proper use of time code. Time code is paramount. I'll give you a few real world examples of time code uses and misuses; music examples too.

An example of a disaster production that took way more time than it should have: a four-camera shoot of a live concert. I was in charge of recording the multi-track audio. I asked the video director for a time code feed so I could synchronize my audio to the cameras. He looked at me, dumbfounded, as if I had asked him to shred his shirt, smother it in ketchup and eat it. He couldn't fathom why I, an audio dood, wanted time code. I was successful in at least getting a clocking signal so my system would, at the very least, run at the same speed. It was later apparent this guy hadn't a clue about time code. Each of the four cameras was running at a different time code rate. So, when it came time to edit the video, it was a mess. The video editors spent months lining up video clips by eye. To further salt the wound, they were using my audio mixes as guide tracks. Had they used proper time code techniques (and had the guy on roving camera #4 not been fixated on the well-endowed blonde in the audience), the whole production could have been finished in weeks instead of months.

At post-production facilities around the world, time code is a way of life. Broadcast media require exacting standards. A simple commercial may seem trivial to the general populace. But, there are many man-hours spent on each one by various people. Furthermore, each commercial may have several different versions. There may be a 30-second, a 20-second, a 15 and a 10-second version. They're all very similar and often will be produced simultaneously. A video editor will make the rough audio cuts with the video and it's up to the audio editor to clean them up. All versions will likely appear in the same session and start at predetermined time code positions. These are predetermined so that everyone working on the project knows where to look. It's more efficient.

Moreover, quite a few facilities will transfer their productions between environments on tape. Yes, I said tape. Would it shock you further, if I said that tape was beta? Well, it is. Digibeta. Each tape is formatted or "striped" with time code. When either "ingesting" the material from the tape or "laying back" to the tape, everything needs to be put in its proper place. This is made possible because of time code.

Have you ever had Pro Tools crash on you while recording, before you could hit the Save button? And when you rebooted the machine and launched the session, all the audio was missing? You look on the hard drive and the files are there. How do you get them back? The answer is easy. Time code. Each audio file, as it begins recording, has a time code starting point embedded into it. In Pro Tools, an audio file can be "spotted" on a track using its original time stamp. Spot each audio file to its respective track at the original time stamp, and you're back in business.

It's easy to become complacent when dealing with time code. Most people don't ever pay attention to it. More don't even know it exists. It's the hamster that makes the wheel go 'round. Without it, we'd be lost.

Rock. Roll. Repeat.

Monday, January 17, 2011

Best Day in the Studio. Ever.

Every once in a while, in music, there are moments that are truly magical. I suppose that's why we audio engineers do what we do. We want to be a part of that moment. We want to be there when that combination of talent, equipment and sweat culminates in the golden silence following a take, because goosebumps are being felt and everyone in the building knows that was IT. I was blessed enough to be a part of at least one of those moments.

A couple of years ago, I was in the fortunate position of being first engineer on a tracking session for a superb Americana band by the name of Great American Taxi. We spent ten days in the studio together, living, eating and recording music. They hired an excellent producer, Tim Carbone of Railroad Earth, to help focus their arrangements and offer his musical prowess on the fiddle.

I'm always nervous when working with a band and a producer for the first time. Thankfully, in our pre-session conversations, we roughed out a plan of attack for our setup and procedure. I really appreciate a producer who knows what they want and knows how to get it done. Careful planning also took care of booking extra musicians for overdubs. I was excited to get rolling.

Our first day didn't start until the afternoon because the band had a late gig the night before. All we had planned for that first day was to get instruments setup with levels and tones. We spent a fair amount of time getting everything just right. Since we would be recording thirteen songs, continuity would be a factor during mixing. Furthermore, both Tim and I prefer to get the sound we want going to tape (or hard disk, as the case was). It saves a lot of time later in the mixing stage.

Our first day of recording went well. We were able to record and punch in on three songs in a ten-hour day. I should tell you, I was really impressed with the level of musicianship in Great American Taxi. These guys could (and can) play. Vince Herman and Chad Staehly front the band, and their collaborative songwriting effort was phenomenal. I was digging the music and we were getting some good takes. Days two and three were also wonderful. Excellent energy. It felt as if we were starting to gel and settle into a groove as a whole.

Day four was the magical day. It began just as effortlessly as the first three days. A couple of songs during the daylight hours. Punches were smooth. We were on a roll. As the sun settled behind the Rockies, the studio lights were dimmed and it was time for another song.

The song began with acoustic piano (Chad Staehly), drums (Chris Sheldon) and bass (Edwin Hurwitz) playing a sweet, slow half-time shuffle. A little acoustic guitar by Vince Herman for flavoring, ornamented with electric guitar (Jim Lewin) and pedal steel (Barry Sless) fills between stanzas. The first chorus was a little heavier, a little darker. The song continued to build through another verse, chorus and bridge. Just when it felt like it was about to climax, the bottom dropped out to a single piano note, with the drums and bass following into the quiet groove from the top. Another verse and chorus built to an outro that was bigger than the bridge. When Jim Lewin's smoking guitar solo came in, I about jumped out of my chair. The sound was bigger than life. The song ended on a sustained chord that dropped off gradually.

Once the music had stopped, nothing could be heard in the studio for what seemed like an eternity. Everyone was looking at each other silently. Nobody wanted to spoil the mood. We knew that was it. The first and only take. I felt the goosebumps.

Then Tim got on the talkback and announced that "was IT." I remember thinking to myself, I could die a happy man right then. There was nothing left I needed to experience in my life.

The band adjourned to the control room and we listened to the playback. Man, what a great song! Great performances. We still hadn't added the Black Swan Singers or all of Tim's copious string parts yet, and it was already an exceptional song.

I'm actually getting goosebumps from reliving that night. It is definitely one of the highlights of my career. These are the moments we all live for. I'm very happy to have had one. Everyone deserves to experience magic at least once in their lives.

Rock. Roll. Repeat.

Tuesday, January 4, 2011

Worst Day in the Studio. Ever.

Nice title, eh? Caught your attention?

Next blog, I'll describe my best day in the studio. This entry is about my worst day in the studio. Mine is nowhere near as bad as Mixerman's, but close. If you haven't read Diary of a Mixerman, you should. It's a humorous daily account of a typically bad LA studio session. It's only funny because it's true.

A band booked a day with me in the studio early on in my career. I was also the acting producer. Let's call them The Whenevers. That appeared to be their motto. Their communication skills were severely lacking. Not a word from them before the session except to confirm the date, time and location. Regardless, I went through the trouble of booking the studio and personnel.

On the day of the session, I arrived two hours before the start time to supervise and assist in setting up and preparing the studio. I didn't know the instrumentation, so I prepared for basic rock instrumentation. Since this was a meager studio in SoCal, we didn't have studio instruments or amps. All I could do was set up mics, set up the patchbay and calibrate the tape machine (yes, this was before Pro Tools).

Okay, session start time arrives and The Whenevers have not. No phone calls. Not a word. I tried calling all the numbers I had for them. No one answered.

An hour into the session time, still no band or a single phone call.

Finally, ninety minutes into the session time, I received a phone call: the band had played a show the night before that went late, and they were just now waking up. Their ETA was another thirty minutes.

Upon their arrival, it was apparent they hadn't brought any instruments. They had carpooled in the singer's Nissan Sentra which barely fit the musicians, much less any real equipment.

After debating for a while about who was most sober, the bass player went to retrieve the band's van.

A half hour later, the van arrived with all the band's equipment. The equipment was road worn. Old strings on the guitars. Beat-up drum heads. A keyboard missing a key. Not exactly studio ready. But, it was equipment, and we were hours late getting started as it was. And, since that was the only day we had to record, we had best get crackin'.

An hour later (three and a half hours late), the instruments and amps were set up and they had attempted to tune. Time to position mics and get tones.

I usually start with drums. I'm a drummer. Egotistically, I believe once a good drum sound has been achieved, everything else falls into place. Kick drum - sounds good. Snare top - sounds good (as good as can be with a dead head). Snare bottom - sounds good. Toms - sound reasonable given the dilapidated state of the heads. Hi-hat.... Hi-hat... no sound from the hi-hat. Hmmmm. Let's move on to the overheads and have the second engineer trace the cabling and solve the hi-hat issue. Overheads.... Overheads....

Uh, oh. Upon further troubleshooting, it was determined that the phantom power supply in the console had passed on to the next life. A friend of mine once joked that all electronics ran on smoke. Once the smoke was released, the equipment stopped working. The phantom power supply must have released all its smoke. It wasn't the end of the world. In fact, given how the day started, I half expected it.

Another half hour to replace the condenser mics with dynamics. Not a lot of good options here either. Replacing KM84's with SM57's and 414's with RE20's. These are not ideal substitutes.

Thankfully, the band was patient. Given my name for them, they had no choice. I thanked them anyway.

It took four more hours to get tones and levels. I recorded a little onto tape to get the band's opinion. They were flat about the experience and simply said, "Sounds fine." Fine? Fine wasn't good enough for me. I wanted to see that sparkle in their eyes. I wanted to hear the air rushing into their lungs as they gasped in amazement. Fine is not acceptable.

I spent another half hour trying different mic placements on instruments and playing with eq's to get the sound that would make their eyes pop out of their heads. But, to no avail. They still seemed unmoved. I shrugged it off as hangoveritis. Never having had a hangover myself, I can't relate. But, most people I've run across who've had them tell me they can only focus on getting rid of the pain. I could sympathize with them a little. They were quickly becoming a pain for me.

We proceeded to tracking. Only eight hours late. I had been there ten hours already myself.

The Whenevers turned out to be pretty good. I'm sure we could've done a much better job had we been in communication throughout the entire process and had the band actually given a horse's derriere. Equipment failures are inevitable. Although, usually not as catastrophic.

All in all, we recorded two songs, which they subsequently submitted to Musicians magazine, and they were ranked among the top twenty best unsigned bands in America.

Rock. Roll. Repeat.