Thursday, June 17, 2010

Bring back the dynamics!

A couple of days ago, I had the pleasure of visiting a campus in Raleigh, North Carolina. Living Arts College is a modern educational facility with cutting-edge curricula. Everyone was friendly and energetic. The faculty and staff were brilliant. The students were eager to learn. An excellent recipe for success. Thank you, LAC, for your warm welcome and hospitality. I had a great time!

Tuesday evening I taught a clinic on audio mastering. The room was packed. The air conditioning wasn't effective in cooling down this mass of humanity. But, that didn't seem to bother anyone.

One of the topics we covered was compression/limiting. Within that was the subject of over-compression/limiting. Ever since Brian Gardner at Bernie Grundman Mastering discovered the "NOVA" button on the UV22, we've had what seemed like the NEBOL (never-ending battle of loudness). I played examples of the negative effects of this practice.

However, NEBOL is not new. Before digital, the limits of NEBOL were set by the medium. For example, tape would reach a saturation point. Beyond that came tape compression, high-frequency loss and, finally, square-wave distortion. Vinyl was limited by the pitch as the side was being cut. Not enough pitch would create cross-cuts. On a turntable that didn't track properly, this would sound like a "broken record".

With digital, the limitation is hard and concrete: 0dBFS, or 0 decibels relative to full scale. Given any bit depth, 0dBFS is the maximum and the scale goes negative from there. Let's use 16 bits as an example, since it is the standard for audio CDs. If all 16 bits are ones, that's it. There's no room to go any higher. Taken a step further, with an audio CD's sampling frequency of 44.1kHz, one full-scale sample out of 44,100 per second is not audible. Two or three consecutive full-scale samples and it's arguable on some playback systems. Four or more, and the distortion is apparent and nasty. While analog distortion can be used musically and sound pleasing at times, digital distortion is just plain awful.
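If you want to see this for yourself, here's a rough sketch that counts runs of consecutive full-scale samples in a 16-bit signal (Python and NumPy are just my choice for illustration; nothing in the CD standard says anything about code):

import numpy as np

FULL_SCALE = 32767  # largest positive value a signed 16-bit sample can hold

def longest_full_scale_run(samples):
    # Longest stretch of consecutive samples pinned at the digital ceiling.
    at_ceiling = np.abs(samples.astype(np.int32)) >= FULL_SCALE
    longest = current = 0
    for hit in at_ceiling:
        current = current + 1 if hit else 0
        longest = max(longest, current)
    return longest

# A 1kHz tone at 44.1kHz pushed 6dB past full scale, then clipped to fit.
fs = 44100
t = np.arange(fs) / fs
tone = 2.0 * np.sin(2 * np.pi * 1000 * t)
clipped = np.clip(tone * FULL_SCALE, -32768, FULL_SCALE).astype(np.int16)

print(longest_full_scale_run(clipped))  # well past four in a row -- audible and nasty

One stray full-scale sample won't trip this; a master pushed too hard will.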

The problem posed to mastering engineers when digital first came out was how to master for this medium and get the most out of its dynamic range. The loudest peak needed to reach 0dBFS at some point; otherwise, dynamic range was being wasted. When the NEBOL first surfaced, simply placing a hard limiter on the ADC (analog-to-digital converter) was sufficient.

Then the A&R (Artist and Repertoire) people got involved and wanted their artists' projects to stand out. They mistakenly thought that making the projects louder would make them sound louder on the radio. All radio stations have a slew of compressors and other processing before the final transmission because they have such limited bandwidth. Therefore, every track comes out at the same volume. And because the A&R people wrote the checks, it was pressed upon the mastering engineer to push the level. As much as we tried to educate, it fell on deaf ears. Pretty sad for an industry that required listening for its survival, huh?

Anyhow, the problem became more prevalent when we started using compressors and digital limiters with "look-ahead" capabilities. These new techniques and technologies allowed for an increase in perceived loudness. Because digital has a hard ceiling, the only way to accomplish this is to chop the peaks and raise the average level. The negative effect, however, is a decrease in dynamic range. Some push it too far and squeeze the dynamics out of the music altogether. This is over-compression/limiting.
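Here's the trade-off in miniature, as a crude sketch (a straight hard clip in Python, which is not how a real look-ahead limiter smooths its gain, but the loudness arithmetic is the same):

import numpy as np

def crest_factor_db(x):
    # Peak-to-RMS ratio in dB -- a rough stand-in for dynamic range.
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(peak / rms)

def loudness_maximize(x, ceiling_db=-6.0):
    # Chop everything above the ceiling, then make up the gain back to full scale.
    ceiling = 10 ** (ceiling_db / 20)
    chopped = np.clip(x, -ceiling, ceiling)
    return chopped / ceiling

# A toy "mix": a quiet bed with occasional loud transients.
rng = np.random.default_rng(0)
mix = 0.1 * rng.standard_normal(44100)
mix[::4410] = 1.0  # the peaks

print(round(crest_factor_db(mix), 1))                     # plenty of dynamic range
print(round(crest_factor_db(loudness_maximize(mix)), 1))  # louder on average, flatter

The average level goes up, the peak stays at the ceiling, and the space between them, the dynamics, is what you gave away.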

A theory I subscribe to is that eliminating dynamics is detrimental to the music. It makes music boring. Of the four basic elements of music (melody, harmony, rhythm and dynamics), the only one we affect as engineers is dynamics. And by reducing or removing that element, we are robbing the music of the impact the artist is trying to make on their audience.

Another symptom of over-compression/limiting is that the instruments appear to get smaller. When a snare drum in a rock band that should sound the size of a large trash can sounds the size of a baseball instead, it's bad.

There has been a large backlash from the engineering community. There are groups of engineers whose sole mission is to educate and clear up the myths and misconceptions surrounding this dangerous practice. Thankfully, people are listening.

Recently, new releases have started to bring the dynamics back into music. Foo Fighters' "Echoes, Silence, Patience and Grace" is one of my favorite examples of this. They use dynamics to get your attention when they want it. The listening experience is compelling and full of impact. It snares my attention and keeps it. I, for one, am glad for this. Let's keep it rolling!

Rock. Roll. Repeat.

Tuesday, June 1, 2010

Surround Mixing in a Non-Standards World (Part 2 of 2)

One of my favorite scenes from Jurassic Park is when Samuel L. Jackson's character says "Hold on to your butts" with a cigarette butt hanging out of his mouth. What was about to unfold was a wild ride. Sometimes, I feel that way about this blog.

With all the standards surrounding surround sound (say that ten times really fast), the only consistency is the lack of one. I think standards are supposed to establish consistency. Unfortunately, everyone thinks their method is better and, therefore, should be the "new" standard. The P&E Wing of NARAS has some excellent guidelines for surround mixing. They aren't trying to establish a single standard. However, the guidelines do help establish consistency.

Recently, I was asked to mix a music DVD in surround. The project was Leftover Salmon's New Year's Eve show at the Boulder Theater. It also happened to be their 20th anniversary show. A blue moon that evening added to the mysticism. Thankfully, I knew how it was tracked, since I helped set up the recording rig.

When thinking about how to approach the project, I wanted to plan out my Pro Tools session for efficiency as well as create the framework for down mixing. For those unfamiliar with down mixing, it is the process of creating a stereo (two-channel) mix from the six-channel surround mix while minimizing phase issues, low-frequency buildup and other anomalies. We do the same thing all the time in stereo mixing by listening in mono. If any instrument disappears, we have a problem.
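For the curious, the commonly cited fold-down looks something like this (a sketch in Python; the -3dB coefficients are the textbook ITU-style values, not necessarily what any particular decoder or DVD player will apply):

import numpy as np

def downmix_5_1_to_stereo(left, right, center, lfe, ls, rs,
                          center_gain=0.7071, surround_gain=0.7071):
    # Fold a 5.1 mix down to two channels. Center and surrounds come in
    # at -3dB; the LFE channel is usually dropped entirely on fold-down.
    lo = left + center_gain * center + surround_gain * ls
    ro = right + center_gain * center + surround_gain * rs
    return lo, ro

If an instrument thins out or vanishes in the fold-down, that's the surround equivalent of losing it in the mono check.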

I thought it best to start with a stereo mix and expand my soundstage from there. To start, I needed to build a series of sub masters. I created a 5.1 auxiliary return, a quad return, and two mono returns. The 5.1 return was for the reverb. The quad return was for everything and incorporated a low-cut filter. One mono return was the center channel and the other was the LFE (Low Frequency Effects) channel. The center channel had a low-cut filter. The LFE channel had a low-pass filter. Any track I wanted to be heard would be bussed through these sub masters. The sub masters, in turn, were sent directly to the 5.1 master fader.
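Sketched out (in Python purely as a way to picture it; the names are mine, and only the routing idea comes from the actual session):

sub_masters = {
    "reverb_return": {"width": "5.1",  "filter": None},
    "quad_return":   {"width": "quad", "filter": "low-cut"},
    "center_return": {"width": "mono", "filter": "low-cut"},
    "lfe_return":    {"width": "mono", "filter": "low-pass"},
}
# Every track busses into these returns, and the returns feed the 5.1 master fader.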

I know the Dolby Digital standard requires the crossover frequency for the LFE channel to be 110Hz. The DTS standard calls for 80Hz. See, already there's a conflict. Personally, I like the DTS standard. 80Hz still seems high to me. I can localize frequencies down to around 50Hz. But, I wanted my mixes to translate to as many playback systems as possible. Therefore, I chose 80Hz.

Now for the LFE decision. According to ITU and SMPTE, the recommendation is to turn up the LFE channel 10dB on playback. Most playback systems in people's homes follow this idea. It's the equivalent of the smiley-face EQ curve or leaving the "loudness" button on. Most people think it just "sounds better that way". At first, my thought was to leave the LFE channel alone. But, after playing some early mixes back on various systems, I decided to turn the LFE down 10dB to compensate for this bump. The only instruments I wanted in the LFE were the kick drum and bass guitar. So, I bussed their signals through send outputs on their respective channels.
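As a back-of-the-napkin sketch (Python/SciPy, with a generic Butterworth standing in for whatever filter you'd actually reach for, and a 48kHz sample rate assumed since this was headed for DVD), the LFE feed boiled down to this:

import numpy as np
from scipy.signal import butter, sosfilt

def lfe_feed(kick, bass, fs=48000, cutoff_hz=80.0, trim_db=-10.0):
    # Sum the kick and bass sends, low-pass at the 80Hz crossover,
    # and pull the whole feed down 10dB to offset the playback bump.
    sos = butter(4, cutoff_hz, btype="low", fs=fs, output="sos")
    summed = kick + bass
    filtered = sosfilt(sos, summed)
    return filtered * 10 ** (trim_db / 20)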

Now the center channel quandary. What to put in the center channel, and how much? If I were mixing for myself, I'd have all the vocals in the center channel. Maybe a pinch of bass guitar and snare added for flavor. That's because I have a center speaker and it works. It works well. Apparently, I'm the black sheep. Most people either don't have a center speaker or, if they do, it is misplaced, misused or miscalibrated (not a real word, I know, but you get the point).

If the goal is to have these mixes play well on as many systems as possible, I needed to account for the mishandling of the center speaker in most environments. So, I opted for adding in just a touch of the vocals and even less of the bass guitar and snare. This effectively tightened those instruments to the center. The vocals were brought up to about 10dB below the level sent to the front stereo pair. Bass and snare were around 25dB below.
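In gain terms, the center feed amounted to something like this (a quick sketch with made-up track names; the offsets are just where I landed by ear):

def db_to_gain(db):
    # Convert a dB offset to a linear multiplier.
    return 10 ** (db / 20)

def center_feed(vocals, bass, snare):
    # Vocals sit 10dB below their front-pair level; bass and snare about 25dB below.
    return (db_to_gain(-10) * vocals
            + db_to_gain(-25) * bass
            + db_to_gain(-25) * snare)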

The next order of business was the audience mics. At the show, there were two mics placed inconspicuously on the stage facing the audience. They were not placed in the back of the room. Okay, fine. I sent those tracks to the rear of the quad sub master. Initially, this sounded very cool, but it quickly became tiring and distracting. I then moved those channels in between the fronts and the rears, favoring the rears slightly, and turned them down a little. This added the "live" element and helped set the stage without detracting from what was happening on stage.

I then focused my attention on the stereo mix and was careful to not go crazy with panning and effects. I did bring the outside instruments further beyond the regular stereo field. This adds to the coolness factor of surround while maintaining phase coherency and down mixing compatibility.

The end result is a surround mix that sounds good in stereo too. By listening to just the left and right channels alone, I was able to gauge how it would sound on a stereo system. The room I used has a Grace 906 monitoring system, which is extremely desirable for surround mixing. I could solo or mute any of the 5.1 channels. Now, if Blu-ray's 7.1 format ever takes off at the consumer level, I'll have to remix it. But, at least it will be close.

Next up for this blog: over-compression and over-limiting. How do we fight back?

Surround Mixing in a Non-Standards World (Part 1 of 2)

I seem to have a lot to write about. I'm having trouble keeping the word count to a minimum. I always feel a little background is helpful when discussing complex topics. So, once again, this first part (and feel free to take out this part if you don't like it, Marx Brothers fans) will be background for Part 2. Here we go:

Mono was paradoxically easy and difficult. Your audience had one speaker. The best method to record was with one microphone. Easy, right? Just put the microphone in the room where it sounds the best. Ah, not so easy now. With one microphone, it's sometimes easier to move the musicians and their instruments to "mix" the song. Even if multiple mics were used, the engineer would need to blend them to fit the frequency spectrum of a single speaker.

Along came stereo. A whole new world opened up. By using two speakers it was now possible to effectively and accurately reproduce a group's performance for the listener. The simplest method to record was to use two microphones - one representing each ear.

Some organizations developed standards for microphone placement. There was your NOS pair, your ORTF pair, the MS or Mid-Side pair, the proverbial spaced omnis and my two favorites, the Blumlein pair and Decca Tree (actually three mics). We can discuss these in another blog. The point here is more than one "standard" was created for recording and mixing stereo. Throughout the years more microphones were added and mixed to create a balance between the instruments.

And then, in the 1970s, there was "quad". Quad, as the name implies, requires four speakers for the listener - two in front and two behind. The first surround sound system. With quad, it was now possible to place the listener inside the band. To be in the middle of your favorite rock group while they performed was cool. Unfortunately, producers and engineers took this to extremes. Some recordings had the drums coming from behind you to one side while the bass came from the front and opposite side. It was too weird. Quad went the way of the Dodo bird, and stereo remained king for decades. Some would argue it still is.

In the 1980s, the movie industry decided to reinvigorate the surround concept to create a better movie-going experience. The early days of surround for film were variations on quad. Four speakers were used, but there were now three in front and one behind the listener. The center speaker was placed there to help focus the dialog, while the "surround" speaker was for special effects and used sparingly. The format is known as LCRS (Left-Center-Right-Surround). The idea flourished.

Organizations such as SMPTE, AES, Dolby Laboratories and the ITU got involved and began creating standards for levels and speaker placement. LCRS gave way to 5.1. The surround channel became stereo and a subwoofer was added (the .1 in 5.1). Now it was easier to localize sounds and get the audience involved. Laser shots could be heard whizzing past one's head.

It was time for music to venture back into the realm of surround. This time we were more conservative in our approach to mixing, careful not to confuse our audiences. When DVDs were born, the concert-going experience became more accessible than ever to the masses. We can now relive a concert we never attended from the comfort of our own living rooms, without having beer spilled down the backs of our shirts.

Surround for music was here to stay. We needed to develop mixing standards to create the best possible listening experience. That is what Part 2 will be about. What do you put in the center channel? Which crossover frequency is best? What do you put into the surround speakers? How much?

Rock. Roll. Repeat.