
Material Information

Title:
Surround sound for the DAW owner
Creator:
Bayley, Michael John ( author )
Place of Publication:
Denver, CO
Publisher:
University of Colorado Denver
Publication Date:
Language:
English
Physical Description:
1 electronic file.

Subjects

Subjects / Keywords:
Pro Tools ( lcsh )
Pro Tools ( fast )
Surround-sound systems ( lcsh )
Computer sound processing ( lcsh )
Computer sound processing ( fast )
Surround-sound systems ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Review:
The purpose of this thesis portfolio project is to broaden the population of surround sound listeners and producers by presenting an eight-song album of multi-channel audio in the form of Pro Tools session files, so that people who own a Digital Audio Workstation (DAW) and an audio interface with more than two outputs are encouraged to set up their own three, four, or five-channel surround sound environment with equipment they already have at home. This is both an experiment in audio advocacy, and a showcase of my best production work. The idea is that while 5.1-channel surround sound is already in place in home theater and elsewhere, there still exists a growing and untapped population of people who have a basic home studio setup that they have not yet realized allows for playback and creation of multi-channel audio. All that these end users must do to listen to my project (and begin mixing in surround sound themselves) is plug in anywhere from one to three additional speakers to their audio interface's additional outputs, put my data DVD in their disc drive, and open the appropriate session files for the number of speakers they have. There they will find multi-channel stems of all the different instruments and vocals from my recordings, pre-routed and mixed for their surround sound enjoyment. If they would like to make changes, such as moving the location of an element in the mix, they can do so freely. Now, the artist's foot is in the door, and the hope is that they will be inspired to create and share surround mixes of their own using their new home setup. The written portion of this thesis presents further explanation and justification for the project idea, as well as thorough documentation of how the end product was created.
Thesis:
Thesis (M.S.)--University of Colorado Denver. Recording arts
Bibliography:
Includes bibliographic references.
System Details:
System requirements: Adobe Reader.
General Note:
College of Arts and Media
Statement of Responsibility:
by Michael John Bayley.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
879386369 ( OCLC )
ocn879386369


Full Text
SURROUND SOUND FOR THE DAW OWNER
by
MICHAEL JOHN BAYLEY
B.A., Pomona College, 2008
A thesis submitted to the
University of Colorado Denver
in partial fulfillment
of the requirements for the degree of
Master of Science
Recording Arts
2012


This thesis for the Master of Science
degree by
Michael John Bayley
has been approved for the
Master of Science in Recording Arts Program
by
Lorne Bregitzer, Chair
Leslie Gaston
Ray Rayburn
October 23rd, 2012


Bayley, Michael John (B.A., Sound Technology)
Surround Sound for the DAW Owner
Thesis directed by Assistant Professor Lorne Bregitzer
ABSTRACT
The purpose of this thesis portfolio project is to broaden the population of
surround sound listeners and producers by presenting an eight-song album of multi-
channel audio in the form of Pro Tools session files, so that people who own a Digital
Audio Workstation (DAW) and an audio interface with more than two outputs are
encouraged to set up their own three, four, or five-channel surround sound environment
with equipment they already have at home. This is both an experiment in audio
advocacy, and a showcase of my best production work. The idea is that while 5.1-
channel surround sound is already in place in home theater and elsewhere, there still
exists a growing and untapped population of people who have a basic home studio setup
that they have not yet realized allows for playback and creation of multi-channel audio.
All that these end users must do to listen to my project (and begin mixing in surround
sound themselves) is plug in anywhere from one to three additional speakers to their
audio interface's additional outputs, put my data DVD in their disc drive, and open the
appropriate session files for the number of speakers they have. There they will find
multi-channel stems of all the different instruments and vocals from my recordings, pre-
routed and mixed for their surround sound enjoyment. If they would like to make
changes, such as moving the location of an element in the mix, they can do so freely.
Now, the artist's foot is in the door, and the hope is that they will be inspired to create
and share surround mixes of their own using their new home setup. The written portion
of this thesis presents further explanation and justification for the project idea, as well as
thorough documentation of how the end product was created.
The form and content of this abstract are approved. I recommend its publication.
Approved: Lorne Bregitzer


DEDICATION
I dedicate this thesis to Leslie Gaston, whose Surround Sound course got me
excited enough about surround sound to come up with a way to make my own surround
setup at home, even when the cost of an official setup was out of reach. That
inspiration is what led to this thesis project.


ACKNOWLEDGEMENT
Many thanks to my advisor, Lorne Bregitzer, for always being supportive of my
work and my ideas throughout my time at University of Colorado Denver. While the
criticism I've received throughout the program has certainly helped me improve as an
engineer, I really needed the positive encouragement I got from Lorne to have enough
confidence to continue creating. Cheers my man.


TABLE OF CONTENTS
CHAPTER
I. INTRODUCTION
      Purpose of the Project
      Scope of the Project
            Speaker Placement
            Calibration
      Limitations
II. PROJECT DOCUMENTATION
      Artist/Group Information
      Tracking
            Instrumental Tracking (Tracks 1-4)
            Instrumental Tracking (Tracks 5-8)
            Vocal Tracking (Tracks 1-8)
            Keyboard Tracking (Tracks 1-8)
            Other Tracking Notes
      Editing
      Stereo Mixing
            Phasing
            Panning
            Equalization
            Dynamics
            Delays and Reverb
            Levels
      Mastering
      Surround Mixing
III. CONCLUSION
BIBLIOGRAPHY


LIST OF FIGURES
Figure
1.1 Three-Speaker Arrangement
1.2 Four-Speaker Arrangement
1.3 Five-Speaker Arrangement
2.1 Kick Drum
2.2 High/Rack Tom
2.3 Low/Floor Tom
2.4 Snare Drum
2.5 Drums, Overheads
2.6 Drums, Room Mikes
2.7 Guitar 1, All Mikes
2.8 Bass, Close Mikes
2.9 Bass, Room Mikes
2.10 Drums, All Mikes
2.11 Guitar 2, All Mikes
2.12 Bass, Live Tracking
2.13 Vocals, Close Mike
2.14 Vocals, Room Mike


CHAPTER I
INTRODUCTION
Purpose of the Project
If a great surround sound recording is made in the studio, but no one else ever
hears it, does it make a sound? Much has been written about the problematic history of
surround sound that has prevented it from gaining more widespread popularity in the
consumer market (Holman, 1; Sessions, 1; Rumsey, x; Glasgal, Yates 93). Competing
formats, high start-up costs, and difficulties with placing the additional speakers amount
to a series of obstacles that many consumers are not willing to overcome. For the
purposes of this thesis, however, I would like to set the discussion of the average
consumer aside, and instead focus on a unique population of potential surround sound
listeners: Digital Audio Workstation (DAW) owners.
The number of people with home recording studios has been growing rapidly over
the last five years (Denova, 8). Included in this population are the literally hundreds of
thousands of students currently enrolled in Recording Arts programs, as well as those
who have already graduated, who purchased recording equipment as part of their
education (Education-Portal.com). Unlike the average consumer, whose focus may be
directed more toward surround sound for home theater than for music alone, this
population of studio owners has already shown their interest in music by the fact that they
have invested in music recording equipment for their home. These days, even the most
basic studios must have some form of audio interface in order to get the audio from the
computer to the speakers (Collins, 19; Harris, 43). A survey of the current market shows
that a large percentage of these interfaces offer more than two outputs (Sweetwater.com).
Thus, we have a large, growing, untapped population of potential surround sound
listeners who need only add another speaker or two to their setup to begin enjoying
surround sound music. Lastly, since it is common, recommended practice to have two
sets of studio monitors to compare mixes on, many of these studio owners will already
have the additional speakers in their possession (Owsinski, 75).
The basic setup is simple. A person with a computer, Digital Audio Workstation
(such as Pro Tools), and an audio interface can simply plug additional speakers into the
additional audio outputs on their interface, place the surround speakers how they please,
and route various channels of audio to the different speakers in their DAW. Now the
drummer can be behind you, rich guitar layers can wrap around you, and voices can echo
from all directions. Fading the sound level down in one speaker and up in another
creates an exciting dynamic pan across the room. All of a sudden, your audio experience
has become three-dimensional.
Like many good ideas, this one is simple, but not obvious. Chances are, most
studio owners do not realize they can do this with just a native Pro Tools setup. The
entry cost for an official consumer-level surround sound setup is over $2000
in software alone (Avid). Although running an actual surround session in Pro Tools does
afford the engineer some additional benefits in the way of convenience, such as a dedicated
dynamic panner, surround sessions are also very taxing on the computer's resources, and
some consumer systems would not be able to handle a full session. For those who
already have surround sound up and running, cheers. For those who do not, this project
is for you.


Scope of the Project
My project is an eight-track album of rock music, presented in three, four, and
five-channel surround sound, in the form of Pro Tools 9 sessions on DVD data disc.
Stereo files are included as well. If you own Pro Tools, simply put the data disc in your
computer's disc drive, open a song folder, and select the session representing the number
of speakers you have. Within the session, you will find multi-channel stems of the
different instruments and vocals on different tracks, allowing you to solo or mute
instruments and make changes as you please.
Speaker Placement
I have produced the tracks with the following speaker arrangements in mind.
Channel numbers are in bold, and ideally, all speakers should be equidistant from the
listening position. This can be achieved by using a string or cable to measure the
distance from the listening position to each speaker.
Three Speakers
1 (front left) 2 (front right)
A (listening position)
3 (rear center)
Figure 1.1: Three-Speaker Arrangement


Four Speakers
1 (front left)
2 (front right)
A
(listening position)
3 (rear left)
4 (rear right)
Figure 1.2: Four-Speaker Arrangement
Five Speakers
1 (front left) 3 (front center) 2 (front right)
A
(listening position)
5 (rear left)
6 (rear right)
Figure 1.3: Five-Speaker Arrangement
My decision to use these arrangements and channel orders is based on the
following logic. With three speakers, one way to arrange the speakers would be to have a
left, a right, and a center channel, in what is called 3-0 stereo (Rumsey, 83). However,
this is not really a surround setup, and if the user has only three speakers, I feel that their
experience will be much more exciting if material can be coming from the back. In the
rare case that the user's audio interface only has three outputs, this channel order ensures
that the session is already routed correctly. With four speakers, another option would be
to arrange them with a left, a right, a center, and one surround, in what is called 3-1
stereo, or LCRS Surround (Rumsey, 84). Although this is used more commonly than
the Quadraphonic arrangement I chose for my project, I simply find the square speaker
arrangement more interesting for music. Since the intention of this project was never to
facilitate the most ideal or commonly-used surround setup, I chose the arrangement
that allowed for more unique and interesting panning possibilities, rather than the most
stable frontal stereo image. As for the channel order, I used the same reasoning that
channels 1-4 would need to be the ones used in case the end user's audio interface only
has four outputs. With five speakers, though, I wanted the channel order to correspond to
the ITU standard for 5.1 surround sound (Holman, 122). You will see that channel four
is skipped in this arrangement, due to the fact that I did not employ the LFE channel, as
discussed in the section titled Limitations below.
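To make these channel orders concrete, the following is a minimal Python sketch (my own illustration; the sessions themselves contain no code) of the output-to-speaker mapping each set of sessions assumes.

# Output-to-speaker mappings for the three layouts described above. Channel
# numbers refer to audio interface outputs; note that the five-speaker layout
# skips output 4, the unused LFE slot in the ITU ordering.

LAYOUTS = {
    3: {1: "front left", 2: "front right", 3: "rear center"},
    4: {1: "front left", 2: "front right", 3: "rear left", 4: "rear right"},
    5: {1: "front left", 2: "front right", 3: "front center",
        5: "rear left", 6: "rear right"},
}

def describe(num_speakers):
    """Print which interface output feeds which speaker position."""
    for output, position in sorted(LAYOUTS[num_speakers].items()):
        print(f"output {output} -> {position}")

if __name__ == "__main__":
    describe(5)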
Calibration
Calibration is the process of fine-tuning a system for the particular parameters of
the situation (McCarthy, 429). In our case here, every situation will be different, as users
are encouraged to scrape together whatever speakers they have available to make a
surround setup. Since the focus of this project is simply getting new users started with
surround sound, with the fewest obstacles possible, calibration should be a quick, simple
process, aimed at getting different speaker levels in the ballpark, so that the mixes
come across more or less as intended. Below are some rough guidelines for how to do
this.
To calibrate the speaker levels, one can use a decibel meter (including the ones
now available in smartphone app stores), or, worst case, one's ears. I used an iPhone app
called Decibel Meter Pro 2, sold as part of a package called Audio Tool. Included
with each disc's worth of tracks is a folder labeled calibration, containing sessions for
three, four, and five-channel calibration. Each session consists of tracks with the Pro Tools
Digirack noise generator plug-in set to output pink noise. After choosing the session
corresponding to the number of channels you plan to use, un-mute channel 1. You
should hear pink noise coming out of your front left speaker. Using a decibel meter, hold
the device in the listening position, facing speaker #1. Standard calibration levels for
music are in the range of 78-93dB (Holman, 69). For the purposes of enjoying my
project, however, you can simply adjust the speaker to any desired level and get a
reading. Now solo channel 2, point the decibel meter at speaker #2, and adjust the
speaker level until the meter reads the same level as speaker #1. Chances are, you will
only need to do this for the additional channels (speakers #3, 4, and/or 5), in order to
match their level to the two stereo speakers you already have. If you do not have a
decibel meter available, just do your best to get the pink noise sounding roughly the same
volume from the listening position when played through each of the speakers you set up,
and you are good to go.
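For readers who would rather script the calibration signal than open the included sessions, the sketch below is one rough way to do it in Python. It assumes the third-party numpy and sounddevice packages, and it is not the plug-in-based method used on the disc; it simply generates approximately pink noise and plays it out one interface output at a time so levels can be matched at the listening position.

# Approximate pink noise for speaker level matching: shape white noise with a
# 1/sqrt(f) spectrum, normalize to -12 dBFS RMS, then play it to each output
# in turn. Output numbers and the 48kHz rate are assumptions.

import numpy as np
import sounddevice as sd

def pink_noise(seconds, samplerate=48000, seed=0):
    """Return roughly pink noise, normalized to -12 dBFS RMS."""
    rng = np.random.default_rng(seed)
    n = int(seconds * samplerate)
    spectrum = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, d=1.0 / samplerate)
    freqs[0] = freqs[1]                       # avoid dividing by zero at DC
    spectrum /= np.sqrt(freqs)                # -3 dB/octave slope = pink
    pink = np.fft.irfft(spectrum, n)
    target_rms = 10 ** (-12 / 20)
    return pink * (target_rms / np.sqrt(np.mean(pink ** 2)))

if __name__ == "__main__":
    noise = pink_noise(10)
    for channel in (1, 2, 3, 4, 5):           # play each output in turn
        print(f"Playing pink noise on output {channel}; match levels now.")
        sd.play(noise, 48000, mapping=[channel], blocking=True)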
Limitations
Perhaps the biggest limitation in using a discrete surround setup like the ones
suggested for this project is the lack of a dedicated dynamic panning tool. Dynamic pans
are certainly still possible (and used within my project), but they require fading the level
down in one channel and up in another, rather than simply moving a joystick. For
two-channel panning, this is not too much of a burden, but it does make certain effects,
like circular pans across all the speakers, much more difficult to pull off.
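The level-fade panning described above has a simple mathematical core. As an illustration (my own sketch, not a tool included with the project), an equal-power crossfade keeps the combined loudness roughly constant while a sound moves between two speakers:

# Equal-power crossfade between two speakers: as the pan position moves from
# 0.0 (entirely speaker A) to 1.0 (entirely speaker B), the gains follow a
# quarter-cycle cosine/sine so the summed power stays constant.

import math

def equal_power_gains(position):
    """Gains for two speakers as a sound pans from 0.0 (all A) to 1.0 (all B)."""
    angle = position * math.pi / 2
    return math.cos(angle), math.sin(angle)   # cos^2 + sin^2 = 1, constant power

if __name__ == "__main__":
    for step in range(5):
        pos = step / 4
        a, b = equal_power_gains(pos)
        print(f"pan {pos:.2f}: A gain {a:.3f}, B gain {b:.3f}")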
Another limitation in suggesting that users simply add whatever additional
speakers they have to make a surround setup is that ideally, all speakers in a surround
setup are supposed to be identical (Rumsey, 89). Once again though, the purpose of this
project is to get people's feet in the door to surround sound, not to worry about ideals.
Many surround speaker packages sold today are unmatched anyway.
Although I initially considered presenting the project in multiple DAW formats,
such as Logic or Ableton, I ultimately chose to present in Pro Tools alone. Pro Tools is,
after all, the world's leading DAW (Collins, 1). I also do not own Logic or Ableton.
Space was a consideration as well, considering that the project already contains three data
DVDs' worth of information as it stands. Presentation in other DAW formats would be
worthwhile future work.
Lastly, I chose not to employ the sixth 0.1 channel for Low Frequency Effects
used in standard 5.1 setups. Admittedly, this was in part due to the fact that my
subwoofer broke partway through this project. Additionally, though, I find that proper
calibration of a subwoofer in a room is already the most difficult part of setting up a
monitoring system, and then sending more bass to it in a separate channel complicates the
issue further. Adding this sixth channel presents new routing challenges for the end user
as well. I wanted there to be the fewest obstacles possible in order for new users to begin
enjoying surround sound in their homes, so I capped the channel number at five.


CHAPTER II
PROJECT DOCUMENTATION
Artist/Group Information
Group: We's Us
We's Us was formed from the melting pot that is the Denver music scene. Started
by Michael "WeezE" Dawald, the members of We's Us have played together under many
different names and decided to create something new. All of the members come from
wide-ranging musical backgrounds and geographic regions. All eight songs on this
album are originals written by Michael WeezE Dawald, with contributions from the
artists listed below. The following is a list of all the artists that performed on this album.
Performers:
Michael "WeezE" Dawald: Guitar, Bass, Vocals
Blake Manion: Drums
Seth Marcus: Bass
Michael Bayley: Keys
Terone McDonald: Drums
Lawrence Williams: Percussion
Patrick Dawald: Vocals
When I first met WeezE in the summer of 2011, he was performing with Blake
and Seth as a three-piece group under the name Eager Minds. I agreed to record the
group for my thesis project. Before the sessions began that fall, the group disbanded over
personal differences. However, they agreed to come together again to lay down the first
four tracks appearing on this album. For the next four tracks, WeezE decided to fly in his
long-time friend and professional drummer Terone. WeezE played the bass parts himself
on all of the next four tracks, except Metaphor, which features Seth on bass again.
Lawrence then added hand percussion on State of Mind, Aaron, West, and In the
Clouds. After all other instruments were tracked, I added various organ and synth parts
to all of the tracks, except State of Mind, which I kept as-is.
Today, We's Us is playing at venues all across town, including Cervantes'
Masterpiece Ballroom, Larimer Lounge, Lion's Lair, Hi Dive, Herman's Hideaway, and
more. The current line-up features WeezE on guitar, Blake back on drums, and a close
friend of WeezE's on bass, by the name of Chris Crantz. I performed with the group on
keys for the first few shows, but ultimately was unable to commit fully, due to my full-
time job and obligation to complete this thesis project. I look forward to working with
these artists again in the future, as well as the opportunity to pursue more production
projects of my own.
Producer/Engineer: Michael Bayley
I tracked, edited, mixed, and mastered all eight tracks myself. Details on each
part of this process are explained below.


Tracking
Instrumental Tracking (Tracks 1-4)
Control Room: Studio H (Rupert Neve Portico)
Tracking Room: Arts 295
DAW: Pro Tools 9/10
Interface: Digidesign 192
Drum Tracking: 9/23/11, 1-10pm
Figure 2.3: Low/Floor Tom Figure 2.4: Snare Drum


Figure 2.5: Drums, Overheads Figure 2.6: Drums, Room Mikes
Microphone List:
(1) AKG D112 (Kick)
(2) Shure SM81s (Toms)
(1) Sennheiser 441 (Snare, top)
(1) Beyer M500 (Snare, bottom)
(2) AKG C414s (Overheads)
(2) Neumann U87s (Rooms)
I have been polishing my drum mic setup for some time now. For kick, I have
tried everything from one microphone to three, and found that a single D112 gets enough
attack and thump for my desired sound. As with all mikes, I have the drummer
repeatedly hit the drum, while another person slowly moves the microphone around until
I hear the sound I like in the studio. For the kick, this normally results in a placement
partway into the hole on the drum, as shown above.
For the toms, I have traditionally used Sennheiser 421s, but after having a
problem with one of them in this session, I switched to Shure SM81s. I couldn't be
happier: these small-diaphragm condensers provided a much more crisp-sounding attack
on the toms than the 421s did, and although they picked up a little more noise from the
rest of the kit, I believe this sharp transient response helped when gating the tom mikes
during mixing. I plan to use these mikes for future drum tracking sessions, and would be
interested to hear how they sound on other instruments.
Snare is the drum I have had the hardest time getting to sound how I like. I have
used the traditional Shure SM57 on top, but have always ended up cranking the highs in
mixing, often resulting in unpleasant distortion. I found that the Sennheiser 441's
brighter frequency response brought me closer to my desired sound from the get-go. For
the bottom, the Beyer M500 ribbon microphone has won out for me over time. Although
I do usually end up scooping out some low-mid mud in this microphone, the ribbons
seem to deliver a smoother-sounding representation of high-end material like snares,
which can otherwise sound brittle, especially when mixed with the other microphones.
For overheads, I immediately fell in love with the AKG C414s. These are
perhaps my favorite microphones that I have ever used. The bump in their high-end
frequency response around 12.5kHz helps make cymbals shimmer, and when placed as a
stereo split pair, the entire kit from kick to crash seems to come through crystal-clear. In
his book Shaping Sound: in the Studio and Beyond, Gottlieb makes specific reference to
the quality of the AKG 414 in this capacity, pointing out that these microphones are
excellent in most situations, particularly for instruments that need a strong edge at
higher frequencies (Gottlieb, 142).


Guitar Tracking: 9/24/11, 1-9pm; 10/1/11, 2-10pm; & 10/15/11, 12-8pm
Figure 2.7: Guitar 1, All Mikes
Microphone List:
(2) Beyer M500s (Close)
(2) Neumann TLM193s (Mid, front)
(2) Neumann U87s (Rooms, rear)
This was the first time I tried recording guitar using two amps simultaneously.
WeezE brought both amps in so we could decide which we liked better, but upon
listening, I felt that the ideal sound would come from a combination of the two. The
sound he liked to get from his Fender tube amp had a brighter high end, but could border
on being harsh to the ear. The old Peavey amp sounded very warm, but lacked definition
in the upper range. My decision to use both amps (and six microphones) was based
partly on my knowledge that these tracks would ultimately be presented in surround
sound. Note the seventh microphone, seen at the bottom, was used only for talk-back.
I was very happy with the sound we got. The guitar heard in the intro to 2012
is a good example of a single guitar take coming from both amps. State of Mind
features a very full-sounding guitar double, where each side of the stereo field contains a
different take from each amp.
There is definitely still some amp noise that can be heard on the record, but I do
not feel that it is enough to detract from the enjoyment of the sound. Tightening some
screws on the Peavey amp certainly helped reduce the low-end rumble that it had when
we first set it up, and turning down the lows on this amp's EQ reduced this noise further.


Microphone List:
(1) AKG D112 (Close, top)
(1) Sennheiser 421 (Close, bottom)
(2) Neumann U87s (Rooms)
The bass for these recordings was actually re-amped from DI sessions we did the
previous week at my house using my Apogee Ensemble and Pro Tools 9. This is a
strategy I am very likely to use in the future: record bass, guitar, keys, etc. DI at my
house, then send those clean signals out through the amp and any outboard processing
gear the artist has once we are in the studio. During the DI sessions, the signal is split, so
that the interface sees only a clean DI signal from the instrument, while the artist hears a
fully processed/amplified version of their sound in the room while they record.
This technique has a number of benefits. For one, it cuts down greatly on the time
needed in a professional studio, because once the takes are edited, it only takes the length
of the song to record the final sound. Editing is also a little easier using only a DI signal,
because one does not have to consider the tail ends of reverberation from the room in
various mikes as the takes are spliced together. Finally, if the artist or producer is
unhappy with the sound that was achieved in the studio, the performance is still intact, so
that a new sound can later be achieved using the same performance.
The downsides are subtle, but worth noting. There is a slight degradation in the
precision of the audio with the additional analog-to-digital and digital-to-analog
conversions that take place in this process. Also, some artists may perform better in a
professional studio setting, surrounded by high-end equipment, with the feeling that this
is the final sound. For me personally, I like the convenience of the other approach.


Instrumental Tracking (Tracks 5-8)
Control Room: Studio H (Rupert Neve Portico)
Tracking Room: Arts 295
DAW: Pro Tools 9/10
Interface: Digidesign 192
Drum Tracking: 11/5/11, 11am-7pm
Figure 2.10: Drums, All Mikes
Microphone List:
(1) AKG D112 (Kick)
(3) Shure SM81s (Toms)
(1) Sennheiser 441 (Snare Top)
(1) Beyer M500 (Snare Bottom)
(2) AKG 451s (Overheads)
(2) Neumann U87s (Rooms)


For the drum sessions with Terone, I used a nearly identical mic setup. The only
difference was that I had to use the AKG 451s as overheads instead of the C414s, due to a
theft that had occurred the previous weekend in the studio. This change, although it was
not by choice, did have its merits. The cymbals, particularly the hi-hat, sounded even
more clear and crisp in tracking with the 451s. I also found that I did not need to scoop
out as much of the low end in the 451s as I did with the 414s later in mixing.
Both drummers I recorded were exceptional in different ways. Blake was very
dynamic, and had some very beautiful transitions and build-ups planned throughout these
tracks that he knew so well. Terone was exceptionally consistent, particularly with how
he hit the snare. This dynamic consistency made processing of his drums much easier, as
I found that I did not need as much dynamic compression. Both were able to supply
some fantastic drum fills, and in Terone's case, a very wide variety of them.


Microphone List:
(2) Beyer M500s (Close)
(2) Neumann TLM193s (Mid, front)
(2) Neumann U87s (Rooms, rear)
For the second set of four tracks, WeezE decided to borrow his friend's Mesa amp
in hopes of achieving an even better guitar sound. Although there was only one amp
being used this time (the Peavey amp to the right of the picture was not being used this
session), I decided to still use the same six microphones to track the sound.
I personally was not as happy with the sound we got with this amp. For one,
although there was not as much amp noise when the guitarist was not playing, the left
speaker on the amp had a low-end rumble during play that I could not tame to a
satisfactory level. Secondly, I was unable to achieve the same fullness in the stereo
image during mixing that I had been able to with the two-amp setup. Lastly, I found that
more equalization was needed to create a tone that I found pleasing, whereas with the two
amps, the combination of their two tones blended into one that was much closer to my
desired sound from the start. In future projects with WeezE, I will ask that we use his
amps.


Bass Tracking: 12/10/11, 6-10pm
Figure 2.12: Bass, Live Tracking
Microphone List:
(1) AKG D112 (Close, top)
(1) Sennheiser 421 (Close, bottom)
(2) Neumann U87s (Rooms)
This was a very efficient day in the studio. For the first half, we finished off the
guitar for the second set of four tracks. For the second half, I got the identical amp and
microphone setup ready from the previous bass-tracking session, ran three of the four
tracks worth of bass through (from DI recordings WeezE had done at my house during
the previous weeks), and still had time for Seth to perform bass live in the studio for the
final track.
Metaphor was the only track where bass was performed live in the studio, rather
than being re-amped through the process described above. I could not say whether I like
the sound any better. Comparison was difficult, for a number of reasons. For one, Seth
played in a somewhat different style for this track than he did for some of the others,
including a slap-bass section, which required an adjustment of the pre-amp level to
prevent clipping. Also, between the drummer change, guitar amp change, and having
WeezE play bass on three of these four tracks, we decided early on that this would be a
two-sided EP rather than a cohesive set of eight tracks. Thus, I used the opportunity to
try some different processing for each set of tracks, including for the bass.
What I do know for sure was that Seth seemed to be feeling much more pressure
recording in the studio, with limited time, than he did at my house. He expressed a fair
amount of frustration with himself during this session. Also, as stated above, editing was
indeed easier for me with the DI takes. For these reasons, I prefer the re-amp approach.
Percussion Tracking: 11/19/11, 6-8pm
I unfortunately did not obtain a photo of the percussionist and his setup in the
short time we had him in the studio. He appeared for a two hour window during our
November 19th guitar session, recorded four tracks worth of percussion, and then had to
leave promptly for a show. I pulled from the same set of microphones that I had used for
the drums and the guitar to mic his percussion set.
Microphone List:
(1) AKG D112 (Low conga, bottom)
(1) AKG C414 (High conga, top)
(2) Beyer M500s (Bongos, top)
(2) Neumann TLM193s (Close, narrow split pair)
(2) AKG 451s (Overheads, wide split pair)
(2) Neumann U87s (Rooms)


This was my first time miking a full-fledged percussion setup like this.
Admittedly, ten microphones was probably overkill, but I was able to pick up every part
of his kit as hoped, including the wind chimes and shakers. I also found that the wide
variety of frequency responses obtained from the large mic collection resulted in a very
balanced response that ultimately did not require any equalization in mixing. The overlap
in mic selection between this and previous sessions helped the percussion blend in easily
with the rest of the instruments.
Vocal Tracking (Tracks 1-8)
Control Room: Studio H (Rupert Neve Portico)
Tracking Room: Arts 295
DAW: Pro Tools 9/10
Interface: Digidesign 192
Vocal Tracking: 3/9/12, 2-10pm & 3/10/12, 2-10pm
Figure 2.13: Vocals, Close Mike Figure 2.14: Vocals, Room Mikes


Microphone List:
(1) ADK Area 51 Tube Mic (Close)
(2) Neumann U87 AIs (Rooms)
The vocals for all eight tracks were recorded in a period of two days. We flew in
WeezE's cousin, Patrick Dawald, to sing lead on the majority of the tracks. WeezE sang
the lead on Fall Haze, Aaron, and Drama.
We did a microphone shoot-out to decide which mic sounded best with Patricks
voice. We compared the ADK Area 51 and Rode K2 (both tube mikes), as well as the
Neumann U87 AI, Audio Technica AT4040, and Shure Beta 58. After deciding on the
Area 51, we were able to commit the studio's two U87 AIs to the room. The Area 51
was also well-suited for WeezE's voice, and as we were interested in keeping the sound
consistent, we had him use the same setup.
Patrick was able to lay down most of the tracks fairly quickly, usually in three to
five takes. I could tell that his accuracy in pitch would require very little Auto Tune, if
any, as was the case later in mixing. For the tracks that Patrick was less confident in, I
very much found myself in the role of producer: coaching him through lyrics, helping get
his confidence up, and making some artistic decisions as to the delivery of the lyrics.
Keyboard Tracking (Tracks 1-8)
Control Room: Home
Tracking Room: Home
DAW: Pro Tools 9/10
Interface: Apogee Ensemble
Numerous sessions, April-June 2012


Once all other instruments were tracked, I began adding some keyboard parts of
my own to help fill out the tracks. I recorded these DI into my Apogee Ensemble at
home. Had I finished the other parts sooner, I might have liked to re-amp these parts
through my Roland amp and mic it up in the studio, much like we did with the bass.
Things being as they were, the studios were closed for the summer, and DI had to suffice.
Each time I came up with a part, I had WeezE listen to confirm or deny its
addition. I ended up adding keys to every track except State of Mind. The keyboard
parts I had for that song went through numerous revisions, until I finally gave up and
decided to release the track without keys. Thanks to my additions on the other tracks
though, I was invited to play a number of shows with the band before the deadline
approached to complete this thesis documentation.
Other Tracking Notes
There was a fair amount of planning and foresight that went into my tracking
technique, and a few aspects of this are worth discussing in more detail. In the following
paragraphs I will address my overall strategy for placement of instruments, mikes, and
baffles in the room, as well as my decision not to use a metronome (or click) for
tracking these songs.
Before I even began placing the instruments and microphones in the room, I took
time to explore the acoustic space using clapping and my voice. I knew that I wanted a
big room sound on the record, so that very little reverb would need to be added in mixing.
Although I have used some of the best reverb plug-ins available, historically, I have
found that this is the type of plug-in I am least satisfied with. Understandably, reverb is
very difficult to emulate, considering that the sound consists of thousands or even
millions of individual reflections that can each sound slightly different and arrive at
different times (Shepherd, 182). So, when a large, treated room like Arts 295 is
available, I prefer to have the majority of the reverberation occur naturally in the room
itself. Thus, I did not place any baffles around the instruments, as you can see in the
photos. Instead, I used the baffles to address some of the odd echoes and rattling I could
hear in the corners. In one corner, I found that a particular arrangement of three baffles
created a pleasing slap-back echo that sounded especially good on the snare drum. I
chose to have all the instruments in the center of the room, facing the corner that had the
echo.
As for microphones, I generally used the rule of thumb for surround sound that
there should be at least as many microphones as there are channels in the final
presentation (Robjohns, 4). This definitely meant a greater number of microphones on
each instrument than I was used to using, but I found that this approach afforded me
some distinct advantages. As I mentioned with the percussion, I found that having a wide
variety of frequency responses from the different microphones helped to create a
balanced presentation of each instrument, rather than having the sound heavily colored
by the frequency response of just one microphone. This worked well for my desired
sound, which I hope comes across as more natural and live than artificial or processed. I
also liked the ability to exclusively hard pan the different mikes, reducing the potential
for phase distortion, and create a perceived location of an instrument in the stereo field by
adjusting the relative levels of the mikes rather than moving the pan position.
My microphone placement was based very much on prior experience, as well as
experimentation during each tracking session. Dynamic microphones were consistently
placed off-axis, and condenser microphones were consistently used as split pairs. While I
have experimented with many other techniques in previous projects, I have been happiest
with the ones I chose here. The ultimate rule with microphone placement was, of course,
to use the placement that sounded the best. For the drums, I used a technique similar to
what some would call Recorderman Technique (Des, 1). I adjusted the position of the
overheads until I was able to measure that they were both equidistant from the kick and
the snare. This helped ensure that the close mikes were in phase with the overheads.
My decision not to use a metronome turned out to be very much a double-edged
sword. We began experimenting with and without a click during our early drum-tracking
sessions, and for Blake, his performances were consistently better without one. Purely
looking at the performances we came away with, I cannot complain too much about the
rhythmic consistency. I think any deviation in tempo in these tracks is often offset by the
expressiveness of the performances. Drama is a particularly good example of a
dynamic track that benefits from the freedom to deviate slightly above and below the
standard tempo of the song, helping to create moments of calm and excitement. If you
listen closely, the tempo of In the Clouds ends up significantly faster than it begins.
While this was a concern early on, I feel that with all the other instruments in place, the
listener is brought along smoothly for the ride from dreamy-slow to rocking-fast. West
has some noticeable hiccups in tempo that would have been far easier to edit with a
click, but in the grand scheme of the listening experience, I do not consider them too
major. In this day and age, I find it refreshing when a record shows any sign that a song
was actually performed live rather than in tempo-corrected pieces, something that has
become increasingly rare in the past few decades. More than half of these songs consist
of one complete drum performance, and the greatest number of edits made to any drum
performance was four.
The biggest downside in not using a click was in the tracking and editing process
for the other instruments. There were numerous segments that could have easily been
copied and pasted had we recorded to a grid, but instead I had to come up with a unique
and tempo-accurate take for every single section of every song. This time-consuming
burden alone may steer me back toward using a click for tracking future projects. When
all is said and done though, it is nice to be able to say that at every moment of every song,
the listener is hearing a unique performance from each instrument.
Editing
Editing is the part of the process that feels the most like work to me. It does
indeed involve artistic choices, and sometimes choosing between a couple guitar solos or
vocal takes can be fun, but most of the time I am looking to finish as quickly as possible
so that I can begin mixing. I did become much more efficient at editing during this
project. This was the first time I began using the playlist view in Pro Tools, where
every take on a given track is displayed at once, in different colors. This made the
comping process much easier. Although the term comping has other meanings in
music, in audio production, a comp track is an audio track composed of segments
copied from other tracks, for instance to combine the best portions of various recorded
takes of the same performance (Sams, 40). In some instances, such as with the drums,
and the lead guitar on Drama, I was able to use whole takes. On the opposite end of
the spectrum, however, I found myself needing to splice together more than 40 different
pieces of percussion in order to have the comp track fit tightly with the drums.


As I mentioned above, the fact that I did not use a click track during tracking
came back to haunt me in the editing process. I estimate that editing would have taken at
most half the time that it did had I been able to copy and paste segments across
the tracks. I do not necessarily regret not using a click track, for the reasons described
above, but for efficiency's sake in future projects, I am likely to recommend that we do.
There were some pleasant surprises in editing, like the fact that I was often able to
get complete guitar doubles from only three takes of rhythm guitar. Other times, I really
struggled to fill in the gaps. Another repercussion of not recording to a click was that it
was often difficult to begin playing the intros on time in overdubbing. In some cases I
was able to get Blake to do a count-off with the drums, and in others I was able to edit a
form of count-off onto the front end of the track, but there were still places where for
some reason I never came away with a solid intro other than the scratch guitar. This is
why both Aaron and In the Clouds begin with a different, thinner-sounding guitar
than what is heard in the rest of the song. I did my best to make this sound like an
intentional effect, but whether or not I succeeded is for the listener to decide.
Probably the most challenging of all editing tasks was putting together the vocals
on Drama. As you can see in the session, the vocals are on quite a few different tracks.
In some cases, this is because multiple layers of vocals are playing back simultaneously,
but for the most part, this was necessary because the final performance came from three
different sessions (including one scratch vocal session that I did not refer to in the
tracking section), requiring separate processing in order to make the takes sound
cohesive. Sometimes WeezE's best take for the first part of a line would come from one
vocal tracking session, and then the only good take of the rest of the line would come from
another. Blending these was challenging. Sometimes I was able to get a good vocal
double for a line from WeezE, and other times I needed to use one from Patrick. In the
end, though, I am happy with the final performance as it appears on the record, and I
think that track overall is perhaps the strongest on the album.
Stereo Mixing
To me, mixing is the fun part. Mixing is where the engineer uses a wide array of
tools to shape the sound of the audio, adjust levels, affect dynamics, and add effects like
delays, reverb, and distortion. Although my final product is presented in surround sound,
the vast majority of the time and effort spent on this project went into first crafting stereo
mixes that I felt were my best work yet. From there, I was comfortable expanding these
stereo tracks into a surround environment, a process described in the next section. For
this section, I will break down the mixing process into six main steps, which I more or
less performed in order: phasing, panning, equalization, compression, delays and reverb,
and levels.
Phasing
I was very meticulous about adjusting phase relations of microphones for this
project. I ended up going over my work multiple times before I was happy with the
sound. The phasing I am referring to in this discussion is the timing difference as the
same sound arrives at different microphones, rather than the intentional offsetting of
timing one sometimes uses as an added effect (Sams, 146; Borwick, 332). If the timing
of the sound from different microphones is not aligned correctly, unpleasant distortion
occurs.


Despite my best efforts to carefully place the overheads equidistant from the kick
and snare in tracking, I found that small adjustments were still needed in order to get the
entire kit sounding crisp and clean. I definitely spent the most time on the drums. To
make phasing decisions, I sometimes measured distances in samples between points
where a particular transient of a drum hit crossed zero on two different microphones, and
sometimes just used my ears. I had not made any effort to get the kick or snare centered
in the room mikes, so this took adjusting. I knew I wanted the reverberant room sound in
the room mikes to arrive a little later than the more direct sound in the overheads, so I did
not try to adjust the room mikes directly into phase with the overheads, but I did use my
ears to adjust their timing so that the cymbals rang out in a pleasing way when both the
overheads and room mikes were playing. I did delay the timing of the close mikes on the
kick, snare, and toms to get them into phase with the overheads.
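For anyone repeating this kind of alignment, the conversion from an extra acoustic path length to a delay in samples is simple arithmetic. The sketch below uses illustrative numbers, not measurements from these sessions:

# Convert an extra acoustic path length into a delay in samples, the figure one
# nudges a close mic by to line it up with the overheads. The 0.9 m distance
# and 48kHz rate are illustrative assumptions.

SPEED_OF_SOUND_M_PER_S = 343.0        # approximate, at room temperature

def path_difference_to_samples(extra_path_m, samplerate=48000):
    """Samples of delay corresponding to an extra acoustic path length."""
    return extra_path_m / SPEED_OF_SOUND_M_PER_S * samplerate

if __name__ == "__main__":
    print(round(path_difference_to_samples(0.9)))   # about 126 samples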
The guitar did not take quite as much time to adjust as the drums. For the
majority of the tracks, I did not like the sound introduced by the TLM193s I used as middle-
distance microphones, so I dropped these and just used the M500s and room mikes. This
made adjusting phase much easier.
For the bass, I am still not certain whether I made the right decision in terms of
adjusting phase on the room mikes. I was definitely able to get the DI track in time with
the microphones (a delay of only four samples), but after much debate, I ultimately
settled on a timing for the room mikes that involved one being moved up by a few
thousand samples, and the other staying untouched. For whatever reason, this seemed to
improve the clarity of the bass, which I suppose is all that matters.


Panning
I had some sense of where I planned to pan the different microphones while I was
adjusting phase, so these two processes were not entirely separate. If the sound from two
microphones was going to be hard-panned to opposite sides anyway, I did not make an
effort to align their phase. This does mean my mixes may not be especially mono-
compatible, but in this day and age, that is a risk I am willing to take.
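One rough way to check how much a mix suffers when folded to mono is to sum the channels and compare levels. The sketch below is my own illustration, assuming the numpy and soundfile packages and a placeholder file name:

# Rough mono-compatibility check: fold a stereo file to mono and report the
# change in RMS level relative to the stereo original. A large drop suggests
# cancellation between hard-panned, unaligned microphones. Assumes a
# two-channel file; "mix.wav" is a placeholder name.

import numpy as np
import soundfile as sf

def mono_fold_change_db(path):
    audio, _ = sf.read(path)                      # shape: (frames, 2)
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20 * np.log10(rms(audio.mean(axis=1)) / rms(audio))

if __name__ == "__main__":
    print(f"{mono_fold_change_db('mix.wav'):+.1f} dB when folded to mono")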
I suppose I did not do anything especially out-of-the-ordinary with my stereo
panning artistically. For the drums, I hard-panned the overheads and room mikes to the
left and right. The kick and snare mikes were panned dead center, and the close mikes on
the toms were placed over their perceived image in the stereo field created by the
overheads and rooms. I oriented the drums from the drummer's perspective, as
discussed in Lorne Bregitzer's book Secrets of Recording (159).
For the guitars, I hard-panned nearly every microphone, generally with the close
mic on one side, and the rooms hard-panned left and right. For the tracks where I
recorded guitar out of two amps, I usually panned the close mikes hard left and right, and
either kept the instrument with a wide stereo sound if it was a lead part in the song, or
turned up one mic relative to the other if I wanted it to sound like it was coming from the
left or right.
The bass was panned dead center, with the room mikes hard-panned left and right.
Same went for the vocals. The keyboards were recorded in stereo, but often needed to be
positioned to one side, so I simply hard-panned again and turned up the relative level of
the side I wanted the keys to be on. For the percussion, I used a process similar to what I
used with the drums, positioning close mikes where they were heard in the stereo field of
the overheads and rooms.
Equalization
I consider equalization the most interesting part of the mixing process. This is
where I feel that the engineer has the most freedom to impact the overall sound of the
recording. Equalization, or EQ, is defined as a circuit or device for frequency-selective
manipulation of an audio signal's gain (Sams, 76). EQ is a big part of what makes the
kick drum thump and the snare crack. It helps to make the different elements and layers
in the mix sound distinct from one another, and along with level and panning, helps to
draw attention to important elements, while leaving others in the background.
As Bregitzer points out, "There is no right or wrong way to begin a mix; each
engineer is different. The most common technique, however, is to begin with the drum
tracks" (159). This is what I chose to do. Using the overheads as a reference point, I
began making adjustments to the various close mikes. I gave the kick a big boost down
around 50Hz to help the thump come through the mix. The snare top mike needed a
scoop around 500Hz and a low shelf cut below 400Hz to tame the mids and remove some
unneeded low end. For the snare bottom, I actually boosted the low-mids at 180Hz to
fatten up the sound, and added a high shelf boost above 2.5kHz. The high tom had bell
boosts at 700Hz and 300Hz; the low tom had bell boosts at 300Hz and 100Hz. Both had
some high-end boosts as well. For the overheads, I ended up doing some rather unusual
EQ that was quite different on the two sides. The ride was sounding too sharp on the
right, so I cut 7kHz to tame it, but then I decided the whole stereo image sounded best
when a reciprocal boost was added to the left side at the same frequency. Despite my
best efforts during tracking to get both kick and snare centered in the overheads, I
found that when the snare sounded centered, the kick still sounded heavier on the right.
So, I cut 75Hz on the right and boosted it on the left. This solved my problem. For the
room mikes, I simply used a low shelf cut below 400Hz. The U87s otherwise captured
the room sound quite nicely.
This was definitely the most elaborate, processing-intensive drum work I have
done, but I think the final product sounds fairly clean and natural. In the end, I would
often use the API 550A and 550B plug-ins for different EQ adjustments before
compression, then do some very subtle adjustments with the PuigTec EQP1A, use
another compressor, add tube saturation and plate reverb, and then remove some
remaining mud with the Waves Q-Clone EQ after that. As explained in the dynamics
section, I used parallel compression for most of the drums, so sometimes the compressed
and uncompressed tracks had different EQs on them. Fine tuning the entire kit to my
liking took an extremely long time.
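None of the plug-in settings above translate directly into code, but the bell boosts and cuts they describe are standard peaking biquads. Purely as an illustration (the RBJ Audio EQ Cookbook filter, not an emulation of the API or PuigTec units, and the +6dB gain and Q values are assumptions), here is the kind of 50Hz kick boost described above:

# Peaking ("bell") EQ biquad from the RBJ Audio EQ Cookbook, applied with
# scipy. The gain, Q, and white-noise stand-in signal are illustrative only.

import numpy as np
from scipy.signal import lfilter

def peaking_biquad(freq_hz, gain_db, q, samplerate=48000):
    """Return (b, a) coefficients for a peaking EQ biquad."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * freq_hz / samplerate
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

if __name__ == "__main__":
    b, a = peaking_biquad(50, gain_db=6.0, q=1.0)   # bell boost around 50Hz
    kick = np.random.standard_normal(48000)         # one second of noise as a stand-in
    boosted = lfilter(b, a, kick)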
EQing the bass was certainly easier than the drums, but still took a few revisions.
My final decision involved very little EQ on either the DI signal or the miked signals. I
added the 550B to each track for a little color, but the only actual adjustment I made was
to cut below 50Hz on the DI signal to make room for the kick. Thanks to the re-amp
process through which the bass was recorded, I found that I was able to focus all my
efforts on obtaining a good sound during tracking, leaving little to be done afterward.
In terms of the guitars, I generally found that any scratch guitar tracks that I
decided to use took the most EQ work. I tried to use the same microphones as I did for
the full guitar tracking sessions, but we only used one amp in the isolation booth (during
drum tracking), and I also think the lack of room sound is part of what necessitated heavy
processing in order to make these scratch tracks fit with the rest. The lead guitar in
Drama, for example, was all one scratch take recorded during drum tracking. The
drums were one solid take as well. I made WeezE try to replicate his performance later
with the full two-amp setup in the room, but nothing came close. That performance had
brought tears to his eyes when he finished playing, and had to be kept. To make it sound
fuller and warmer, I ended up using the Waves Q-clone EQ plug-in multiple times,
essentially alternating a wide cut and a wide boost in the same frequency range for a net
result that mainly just sounded more processed and colored by harmonics. Without the
warmth of the room, the scratch guitar also needed a high shelf cut above 12.5kHz to
avoid sounding harsh.
Most of the other guitars were pretty easy to mange, and did not need a whole lot
of EQ. I did, however, use quite a few low shelf and bell cuts around 300-400Hz to
remove the ubiquitous low-mid mud. Other than that, a few high boosts around 5kHz left
the guitars sounding bright and edgy where needed.
For keys, I used little to no EQ across the board. The way I see it, these samples
are already EQ'd, and much more processing starts to sound strange. Occasionally I
would reduce some low end in an organ or Rhodes to ensure that it did not compete with
the bass. I did not use any EQ on the percussion either, as I felt that the wide variety of
microphones used helped to accumulate a nice, balanced sample of the sounds.
For the vocals, I really credit the microphone shoot-out we did and the quality of
the ADK Area 51 microphone that we selected for the fact that next to no EQ was
needed. I tried a number of different adjustments, but in the end, all that was added was a
small 2.5kHz boost for the vocals on the first four tracks, and for the second set of four, I
did no EQ whatsoever. The main reason I made an adjustment on the first four was that
after comparing my mixes to some professional mixes that I admire using iZotope
oZone's matching EQ tool, I found that the only real difference was that my mixes had
a little less 2kHz. Even after adding that boost to the vocals in the 2kHz range, this was
still true. The result of this overall EQ difference, to me, is that compared to some tracks,
my mixes can be played very loudly without sounding harsh in the high mids. The
expense is that at lower volumes, this 2kHz range is very good at grabbing our ears' attention,
due to the fact that speech carries a lot of important frequency content there, and my
mixes have a little less total content in this range (Edis-Bates, 1).
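The comparison described above can be approximated outside of oZone by averaging the spectra of two mixes and looking at the difference around 2kHz. The sketch below is a rough stand-in for a matching EQ display; the file names are placeholders, and the scipy and soundfile packages are assumed:

# Long-term average spectrum comparison between two mixes, reporting the level
# difference in a band around 2kHz. Both files are assumed to share a sample
# rate.

import numpy as np
import soundfile as sf
from scipy.signal import welch

def average_spectrum_db(path):
    audio, samplerate = sf.read(path)
    if audio.ndim > 1:
        audio = audio.mean(axis=1)                 # fold to mono for the comparison
    freqs, power = welch(audio, fs=samplerate, nperseg=8192)
    return freqs, 10 * np.log10(power + 1e-20)

if __name__ == "__main__":
    freqs, mine = average_spectrum_db("my_mix.wav")
    _, reference = average_spectrum_db("reference_mix.wav")
    band = (freqs > 1500) & (freqs < 3000)
    diff = np.mean(mine[band] - reference[band])
    print(f"My mix sits {diff:+.1f} dB relative to the reference around 2kHz")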
On the whole, I have learned that the particular EQ chosen can be about as
important as the adjustments made. Many of the EQ plug-ins I have will distinctly color
the sound immediately upon being added to the track, and the number of possible
adjustments that can be made varies greatly between plug-ins. The most delicate
adjustments seem to be high boosts, and for these I found that high-quality emulations of
analog EQs provided the purest sound.
Dynamics
Dynamic processing includes use of gates, expanders, compressors, and limiters,
as well as forms of tape or tube saturation. The need for this type of processing is
perhaps less obvious to the average music listener, but many engineers would argue that
it is the most important part of our work. Renowned mix engineer Andy Johns, for
example, argues that "...compressors can modify the sound more than anything else"
(Owsinski, 58). Generally speaking, dynamic processing affects the sound by altering
how quickly the level of the sound increases or decreases.
I used the expander in oZone to gate both the top and bottom snare drum, as well
as the toms. I have always had issues with the hi-hat coming through too loudly in my
mixes, but careful gating of the snare mikes does help with this issue greatly by removing
the hi-hat sound from these microphones most of the time. My attack times for the
expander were extremely short, so that the transients of the hits could come through. The
releases were a little longer, and their exact length depended on the length that the
particular drum made a sound (shorter for the snare, longer for the high tom, then even
longer for the low tom). I have always found it bothersome when I can hear a gate or
expander in a mix boosting the cymbals up each time it opens, so I went to great lengths
to minimize this effect for these tracks. The only time I really notice that effect in these
mixes is in the intro to Drama, where the soft snare hits required a low threshold for the
expander. I opted not to gate or expand any other tracks.
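The following Python sketch shows the basic gate/expander behavior described above: an envelope follower with a very short attack so transients pass, a longer release matched to the drum's decay, and a gain that closes when the signal falls below the threshold. It only illustrates the principle; it does not reproduce Ozone's expander, and the parameter values are placeholders:

    # Minimal gate/expander behavior: fast attack, drum-dependent release.
    # Parameter values are placeholders; this is not Ozone's expander.
    import numpy as np

    def gate(signal, rate, threshold_db=-40.0, attack_ms=0.5, release_ms=80.0):
        thresh = 10.0 ** (threshold_db / 20.0)
        attack = np.exp(-1.0 / (rate * attack_ms / 1000.0))
        release = np.exp(-1.0 / (rate * release_ms / 1000.0))
        env, gain = 0.0, 0.0
        out = np.empty_like(signal, dtype=np.float64)
        for i, x in enumerate(np.abs(signal)):
            coef = attack if x > env else release
            env = coef * env + (1.0 - coef) * x          # envelope follower
            target = 1.0 if env > thresh else 0.0        # open above threshold
            coef = attack if target > gain else release
            gain = coef * gain + (1.0 - coef) * target   # smooth to avoid clicks
            out[i] = signal[i] * gain
        return out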
Compression, meanwhile, came mostly from the Empirical Labs EL-8 Distressor,
a wonderful piece of outboard gear that I decided to purchase last year. Although I hear
that a plug-in emulation exists, I have never come across anything that I like as much as
the real thing in terms of compression. This device is highly versatile, and for drums in
particular, has a way of getting big gain reductions while still keeping transients intact. I
used the longest attack (11) and nearly the shortest release (0-0.1) pretty much every
time, so that the early transients came through, and the sound was only compressed
briefly before returning to its normal dynamic state. These kinds of compressor settings
are known to help give "punch" to a sound (Shepherd, 137). Just about all the drums
were processed with a 4:1 ratio, with the other settings in their neutral state. I found
that limiting the number of circuits I ran the drums through on the Distressor (e.g. high-
pass filters, distortion settings, and other optional circuits on the device) helped to keep
the cymbals sounding clean and clear across the board. For the bass, I did decide to
apply the "Distortion 3" setting, and in one case, the "Distortion 2" setting. For the
drums, the reduction meter normally read between 1dB and 4dB. The bass reductions
were in that range as well, or sometimes a little higher during louder sections. I opted not
to use the Distressor on the other instruments, mainly because running each channel out
through the device in real time became very time-consuming.
For the drums, I also used parallel compression, blending a heavily compressed copy of
the drum bus back in underneath the uncompressed signal to add density without
flattening the transients.
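A bare-bones sketch of those two ideas is shown below: a simple feed-forward compressor with a slow attack and fast release (so transients pass and the gain recovers quickly), and a parallel blend that adds the compressed copy underneath the dry drums. The threshold, ratio, and blend values are placeholders, and this in no way models the Distressor's circuitry:

    # Simple feed-forward compressor (slow attack, fast release) plus a parallel
    # blend. Values are placeholders; this does not model the Distressor.
    import numpy as np

    def compress(x, rate, threshold_db=-18.0, ratio=4.0,
                 attack_ms=30.0, release_ms=5.0):
        atk = np.exp(-1.0 / (rate * attack_ms / 1000.0))
        rel = np.exp(-1.0 / (rate * release_ms / 1000.0))
        env_db = -120.0
        out = np.empty_like(x, dtype=np.float64)
        for i, sample in enumerate(x):
            level_db = 20.0 * np.log10(abs(sample) + 1e-9)
            coef = atk if level_db > env_db else rel
            env_db = coef * env_db + (1.0 - coef) * level_db
            over_db = max(0.0, env_db - threshold_db)
            gain_db = -over_db * (1.0 - 1.0 / ratio)     # e.g. 4:1 above threshold
            out[i] = sample * 10.0 ** (gain_db / 20.0)
        return out

    def parallel_compress(drums, rate, blend=0.5):
        # blend a heavily compressed copy back in under the untouched drums
        return drums + blend * compress(drums, rate)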
My go-to plug-in compressor was the PuigChild 670, a Waves emulation of a
Fairchild compressor. I found that this compressor often achieved my desired result with
the default settings alone. Like the Distressor, this compressor introduces a noticeable
harmonic coloration of the sound, beyond just the dynamic range reduction, that I found
pleasing to the ears. I used the 670 on most instruments that didn't go through the
Distressor, and for the bass, I actually used both the Distressor and the 670, in that order.
The only other plug-in compressor I really ended up using was McDSP's Analog
Channel on the kick and snare buses, for a little extra analog-sounding fatness.
Delays and Reverb
I really did not make very extensive use of reverb in these recordings, because as
explained in the tracking section, I was able to capture much of the natural reverberation
in the tracking room, Arts 295. The vast majority of the audible reverb in these
recordings (like on the drums, for example) is from the room itself. A very small amount
of plate reverb was added to each instrument and vocal bus using the reverb portion of
iZotope Ozone, but the wet signal was only set at about 5-12% relative to the dry signal,
depending on the instrument.
Delays were used, but normally just on the vocals. I used sends from the vocal
buses to the Massey TD5 delay plug-in. I used the tap tempo feature to get the delay in
time with the track, normally with a simple quarter-note division. Guitar was the only
other instrument that had delay on it, and in many cases, I was simply accenting delay
that WeezE already had from his pedals. Two examples of very audible delay are the
the lead guitar in "Drama," with a fairly loud delay sent to the left channel to help balance
the track, and the vocals in "In the Clouds," where I occasionally automated the delay
level up quite hot as an effect.
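As a simple reference for the tempo-synced setting described above, a quarter-note delay is just 60,000 divided by the tempo in BPM, in milliseconds. The sketch below computes that value and applies a plain feedback delay; the tempo, feedback, and mix values are placeholders, and the Massey TD5 itself is not being modeled:

    # Quarter-note delay time from tempo, applied as a plain feedback delay.
    # Tempo/feedback/mix values are placeholders; this is not the Massey TD5.
    import numpy as np

    def quarter_note_ms(bpm):
        return 60000.0 / bpm                    # e.g. 120 BPM -> 500.0 ms

    def feedback_delay(x, rate, delay_ms, feedback=0.35, mix=0.25):
        d = int(rate * delay_ms / 1000.0)
        line = np.zeros(len(x))                 # delay line contents
        out = np.empty_like(x, dtype=np.float64)
        for i in range(len(x)):
            delayed = line[i - d] if i >= d else 0.0
            line[i] = x[i] + feedback * delayed # feed the echo back in
            out[i] = x[i] + mix * delayed       # wet signal under the dry
        return out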
Levels
Levels were obviously being adjusted throughout the recording process, but for
my final levels in each track, my process tended to be as follows. I first made certain I was
happy with the drum tracks. The challenge was often getting the kick and snare loud
enough without sounding too dry, since I was primarily using the overhead and room
mikes for reverb. Accomplishing this was the result of adjustments made in all six of the
categories of mixing presented in this section. Once I was content with my drum levels, I
added in bass, using the comparative level with the kick as my reference point. Then, I
added all the guitars, and adjusted their levels to fit with the drums and bass. I made sure
that rhythm guitars were softer than lead guitars, and sometimes waited to add the guitar
solos until after I added the vocals. I then added keyboards, making these pieces fit
without standing out too much, with a few exceptions where I felt the keys should draw
attention to themselves. Lastly, I added the vocals, adjusting their level to a range where
I thought every lyric could be deciphered, but no louder.
I did do a fair amount of automation, but more so on the first four tracks than on the
second set of four. This had to do mainly with Blake's drumming style, but also with the
nature of the tracks. "Drama" required a fair amount of automation of the kick and snare in
order to keep these pieces consistent and sitting well in the mix with overhead and room
mikes. Terone was so consistent that his drums already sounded compressed to me, and
with a little kiss of added compression from the Distressor, I did not have to automate
his drums at all. Vocals were automated on every track, sometimes to extremely
thorough levels. One example that took quite a bit of time was in "2012," near
the end, when he says, "In the shining of the full moon / I am always looming in the
room." The words "full," "looming," and "room" were far louder than the rest of the
vocals, causing some terrible distortion in the microphone. Compression only made the
distortion worse. Instead, I carefully painted volume curves to offset these level changes
as they took place, and simultaneously automated the reverb/delay sends to compensate
in a way that created a cool effect. In the end, it might be the coolest-sounding part of the
vocals on that track.
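For readers curious how "painted" volume curves translate into numbers, the sketch below applies breakpoint automation as interpolated dB offsets. The breakpoint times and gain values are invented for illustration and are not the curves used on these tracks:

    # Apply breakpoint volume automation as interpolated dB offsets.
    # The breakpoint times and gains here are invented for illustration.
    import numpy as np

    def apply_automation(signal, rate, times_s, gains_db):
        t = np.arange(len(signal)) / rate
        gain_db = np.interp(t, times_s, gains_db)    # linear ramps between points
        return signal * 10.0 ** (gain_db / 20.0)

    # e.g. pull a word that jumps out down by 6 dB, then return to unity:
    # tamed = apply_automation(vocal, 44100, [10.0, 10.1, 10.4, 10.5],
    #                                        [0.0, -6.0, -6.0, 0.0])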
Mastering
Any mastering engineer will tell you that you should not master a recording
yourself (Bregitzer, 184). Going against this recommendation, I chose to master my
own project, with the goal in mind of an entirely self-produced thesis project. I took a
topics course on mastering during my studies at University of Colorado Denver, and this
gave me some perspective on the mastering process. I arrived at a number of different
conclusions after taking the course. One of these is that high-quality, highly expensive
equipment in an acoustically optimized space is obviously going to be better for making
judgments about the sonic qualities of a mix. Indeed, professional mastering houses tend
to be better equipped in this way (Owsinski, 86).
The other conclusion I arrived at, however, is that whenever possible, it's
generally better to address issues of balancing frequency content (i.e. equalization) at the
mixing stage. This is for two reasons. For one, when a mix is sounding imbalanced
(heavy or weak) in a certain frequency range, and an overall EQ adjustment made in
mastering improves the mix on the whole, it's rare that every component in the mix will
actually sound better after this adjustment. For example, I gave some of my mixes to a
former classmate now working for Sony in a studio in San Diego, and he was able to
identify a build-up around 100Hz in my mixes. I tried cutting some of the 100Hz content
with an EQ on the master bus, and while this did resolve the issue, I thought some of the
guitars now sounded thin. So instead, I used the same EQ adjustment (a bell cut with Q
Clone) on the drum, bass, and some guitar buses, and left the rest untouched. This
sounded better. The second reason I feel that EQ should be handled in the mixing stage
is that the vast majority of EQs, even those that are considered mastering-grade,
seem to introduce unpleasant, audible distortion when placed across the master bus.
Perhaps this is not true for some of the best mastering-grade hardware EQs, but it is for
all of the ones I own. Mastering EQ seems to improve certain elements of the mix while
hurting others and degrading clarity, and so I prefer to make all EQ adjustments in the
mixing phase.
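For reference, the kind of bell cut described above can be expressed as a standard peaking biquad. The sketch below uses the common RBJ "cookbook" formulas with illustrative gain and Q values; it is a generic digital bell, not Q Clone, which captures the curve of a hardware EQ:

    # Generic peaking (bell) EQ cut around 100 Hz using the RBJ biquad formulas.
    # Gain and Q values are illustrative; this is not Q Clone.
    import numpy as np
    from scipy.signal import lfilter

    def peaking_eq(x, rate, f0=100.0, gain_db=-2.0, q=1.0):
        A = 10.0 ** (gain_db / 40.0)
        w0 = 2.0 * np.pi * f0 / rate
        alpha = np.sin(w0) / (2.0 * q)
        b = np.array([1.0 + alpha * A, -2.0 * np.cos(w0), 1.0 - alpha * A])
        a = np.array([1.0 + alpha / A, -2.0 * np.cos(w0), 1.0 - alpha / A])
        return lfilter(b / a[0], a / a[0], x)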
Thus, mastering for me meant purely limiting, using the Waves L3
Ultramaximizer, matching levels, and timing the beginnings and endings of the tracks. I
placed the limiter directly across the master bus, rather than bouncing to a new session, as
I found that the clarity of the audio was improved by avoiding multiple bounces. For this
project, I used the plug-in's basic profile, with nearly the shortest release time, for
reasons described in the dynamics section above. Reduction was generally around 1-
3dB, and I simply used my ears over the course of multiple listening sessions to get the
track levels in comparable ranges.
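Conceptually, the limiting stage can be pictured as in the sketch below: gain is pulled down instantly whenever a peak would exceed the ceiling, then recovers with a very short release. The real L3 is a far more sophisticated multiband processor, so this only illustrates the idea, and the ceiling and release values are placeholders:

    # Conceptual single-band peak limiter: instant gain reduction at the ceiling,
    # very short release. Illustration only; the L3 is far more sophisticated.
    import numpy as np

    def limit(x, rate, ceiling_db=-0.3, release_ms=1.0):
        ceiling = 10.0 ** (ceiling_db / 20.0)
        rel = np.exp(-1.0 / (rate * release_ms / 1000.0))
        gain = 1.0
        out = np.empty_like(x, dtype=np.float64)
        for i, sample in enumerate(x):
            peak = abs(sample)
            target = min(1.0, ceiling / peak) if peak > 0.0 else 1.0
            if target < gain:
                gain = target                              # clamp immediately
            else:
                gain = rel * gain + (1.0 - rel) * target   # recover smoothly
            out[i] = sample * gain
        return out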
Surround Mixing
Once the stereo mixes were done, the surround mixes came along quite quickly.
Having already achieved the sounds I wanted for each instrument, my challenge
consisted entirely of placing these different elements and their components across a larger
sonic space. I began bouncing stems of each instrument and vocal element, creating first
a two-channel stereo stem, and then three-channel stems with separate left, right, and
center information. For the three-channel stems, the left and right channels usually
represented the room mikes, with the center channel representing close mikes.
This approach afforded me a number of benefits. For one, it kept intact the tonal
qualities of each instrument that I had worked so hard to achieve during stereo mixing.
Having already spent so much time on the stereo tracks, it also saved me from the
temptation to continue tweaking EQ and compressor settings endlessly. Next, the left-
right-center material from the three-channel stems gave me plenty of flexibility in terms
of creating perspective in surround sound, because I had separate components of each
element that sounded close up or far away. I also had the stereo stems with both close
and room sounds together for when I wanted to keep that combination together the way it
was in the stereo mix. Lastly, I knew that I would want to check my mixes on the
school's better-optimized surround systems, and the school does not yet have many of the
plug-ins that I used. Bouncing stems printed the processing from these plug-ins into the audio, solving this
problem.
One could argue that a surround sound mix created from stems, even three-
channel ones, is not technically a full surround sound mix. I would argue that, in addition
to providing the benefits described above, this approach was actually sort of a necessity
for me with my system. By the end, my stereo mixing sessions were already using just
about all the processing power my computer had. Even when I was mixing on the
school's computers during my Surround Sound course, sessions less complex than these
were somewhat of a struggle for the computer to process when converted to surround
sessions. By bouncing stems, I was able to get a fresh start, and focus primarily on
panning, levels, and perspective. The way I see it, I was playing with Legos during
stereo mixing, and Duplos during surround mixing.
For these surround mixes, I went for a "middle of the band" perspective, as
opposed to an "audience" perspective (Owsinski, 118; Holman, 7). True to our
positioning on stage, the drummer is in back, the bassist is front left, vocalist is front and
center, keys are usually back right, and I often tried to make the guitars sound like they
are coming from everywhere. For my exact positioning and levels of the different
channels, refer to the sessions themselves.
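To make the stem-to-speaker idea concrete, the sketch below routes a three-channel drum stem (left room, right room, center close) into the four-speaker layout with a simple gain matrix. The gain values are invented purely for illustration; the actual levels and positions used on this album live in the session files themselves:

    # Illustrative routing of a three-channel stem (L room, R room, C close)
    # into a four-speaker layout. Gains are invented, not the album's values.
    import numpy as np

    SPEAKERS = ["front_left", "front_right", "rear_left", "rear_right"]

    # rows = stem channels, columns = speakers (linear gains)
    DRUM_GAINS = np.array([
        [0.2, 0.0, 0.9, 0.0],   # left room mike, mostly to the rear left
        [0.0, 0.2, 0.0, 0.9],   # right room mike, mostly to the rear right
        [0.5, 0.5, 0.3, 0.3],   # close mikes, spread toward the front
    ])

    def route_stem(stem, gains):
        """stem: (samples, 3) array -> (samples, len(SPEAKERS)) array."""
        return stem @ gains

    quad_drums = route_stem(np.zeros((44100, 3)), DRUM_GAINS)   # silent example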
CHAPTER III
CONCLUSION
I am certainly not the first to do surround sound with discrete channels, or even to
suggest that equipment normally meant for stereo can be set up for surround. Many
authors have discussed these options (Sessions, 67; Holman, 112). However, using a
collection of Pro Tools sessions as the final presentation medium was not something I
came across in my research, and I believe this is the innovative piece of my project.
Chances are, this would not have been a worthwhile distribution method until very
recently, given how much the number of home studios has grown over the last five years
(Denova, 8).
Once a user has gotten a foot in the door with surround listening and mixing,
adjustments to the setup can be made. We often fuss about ideal speaker setups, levels,
and placement, when the reality is, very few people have surround sound at all. What
matters is just getting the additional speakers added, and beginning to enjoy this beautiful
medium for music. After all, as Owsinski points out in his section on surround sound,
"Speaker placement is forgiving: Yes, there are standards for placement, but these tend to
be noncritical... In fact, stereo is far more critical in terms of placement than surround
sound is" (116). With that in mind, we have all seen stereo speaker setups that are far
less than ideal, and music is still enjoyed on those. The same goes for surround.
The next step for broadening the target audience with this surround presentation
method is to make these mixes available for other DAWs, such as Logic or Ableton. This
would inspire even more people to create their own surround setups. Also, I would be
very excited to explore the possibility of obtaining stems of other surround mixes to use
on my new setup. These would not need to be stems of the individual instruments of
course, the way I presented my mixes, but simply the four or five separate channels of
audio used in the mix. If users such as myself could listen to Pink Floyd's Dark Side of
the Moon with our new setups, for instance, this would create an even greater appeal.
Even if only one new person makes themselves a surround setup as a result of this
project, I would consider it a success. In the quest for a broader surround sound market,
we may have to inspire people one population at a time. The same way that people used
to be skeptical that stereo would ever take off, many feel that surround sound will never
enjoy broad popular appeal. I think this is equally short-sighted. Over time, surround
sound will continue to become more accessible, and there is no time like the present to
begin enjoying the medium.
BIBLIOGRAPHY
"Audio Interfaces." Sweetwater.com. N.p., 2012. Web. 13 Oct. 2012.
"Complete Production Toolkit." Avid. Avid Technology, Inc., 2012. Web. 13 Oct. 2012.
"Top Recording Arts Schools and Colleges in the U.S." Eduction-Portal.com. Eduction-
Portal.com, n.d. Web. 13 Oct. 2012. p ortal. com/recording_arts_school s. html>.
Borwick, John. Sound Recording Practice. Oxford: Oxford University, 1980. Print.
Bregitzer, Lorne. Secrets of Recording: Professional Tips, Tools & Techniques.
Amsterdam: Focal/Elsevier, 2009. Print.
Collins, Mike. Pro Tools for Music Production: Recording, Editing and Mixing. Oxford:
Focal, 2004. Print.
Denova, Antonio. "Audio Production Studios." IBIS World. N.p., Aug. 2012. Web. 10
Oct. 2012.
Des. "Recorderman Overhead Drum Mic Technique." Hometracked. Hometracked, 12
May 2007. Web. 13 Oct. 2012.
Edis-Bates, David. "Speech Intelligibility in the Classroom." Edis Education. Edit Trading
(HK) Limited, 2010. Web. 17 Oct. 2012.
Glasgal, Ralph, and Keith Yates. Ambiophonics: Beyond Surround Sound to Virtual Sonic
Reality. Northvale, NJ: Ambiophonics Institute, 1995. Print.
Gottlieb, Gary. Shaping Sound in the Studio and Beyond: Audio Aesthetics and
Technology. Boston: Thomson Course Technology, 2007. Print.
Harris, Ben. Home Studio Setup: Everything You Need to Know from Equipment to
Acoustics. Amsterdam: Focal/Elsevier, 2009. Print.
Holman, Tomlinson. Surround Sound: Up and Running. Amsterdam: Elsevier/Focal,
2008. Print.
Janus, Scott. Audio in the 21st Century. Hillsboro, OR: Intel, 2004. Print.
Keene, Sherman. Practical Techniques for the Recording Engineer. Hollywood, CA:
Sherman Keene Publications, 1981. Print.
McCarthy, Bob. Sound Systems: Design and Optimization: Modern Techniques and
Tools for Sound System Design and Alignment. Amsterdam: Focal/Elsevier, 2010.
Print.
Morton, David. Sound Recording: The Life Story of a Technology. Westport, CT:
Greenwood, 2004. Print.
Moylan, William. The Art of Recording: Understanding and Crafting the Mix. Boston,
MA: Focal, 2002. Print.
Nisbett, Alec. The Sound Studio. Oxford: Focal, 1993. Print.
Owsinski, Bobby. The Mixing Engineer's Handbook. Boston: Thomson Course
Technology, 2006. Print.
Owsinski, Bobby. The Recording Engineer's Handbook. Boston, MA: Artist Pro Pub.,
2005. Print.
Pohlmann, Ken C. Principles of Digital Audio. 6th ed. New York: McGraw-Hill, 2011.
Print.
Robjohns, Hugh. "You Are Surrounded." Sound on Sound. Sound on Sound, Nov. 2001.
Web. 13 Oct. 2012.
Rumsey, Francis, and Tim McCormick. Sound and Recording: An Introduction. Oxford:
Focal, 2006. Print.
Rumsey, Francis. Spatial Audio. Oxford: Focal, 2001. Print.
Sams, Howard W. Digital Audio Dictionary. Indianapolis, IN: Prompt Publications, 1999.
Print.
Sessions, Ken W. 4 Channel Stereo: From Source to Sound. Blue Ridge Summit, PA: G/L
Tab, 1974. Print.
Shepherd, Ashley. Plug-in Power!: The Comprehensive DSP Guide. Boston, MA:
Thomson Course Technology, 2006. Print.
Traylor, Joseph G. Physics of Stereo Quad Sound. Ames: Iowa State UP, 1977.
Print.
46


Full Text

PAGE 1

SURROUND SOUND FOR THE DAW OWNER by MICHAEL JOHN BAYLEY B.A., Pomona College, 2008 A thesis submitted to the University of Colorado Denver in partial fulfillment of the requirements for the degree of Master of Science Recording Arts 2012

PAGE 2

ii T his thesis for the Master of Science degree by Michael John Bayley has been approved for the Master of Science in Recording Arts Program by Lorne Bregitzer, Chair Leslie Gaston Ray Rayburn October 23 rd 2012

PAGE 3

iii Bayley, Michael John (B.A., S ound Technology) Surround Sound for the DAW Owner Thesis directed by Assistant Professor Lorne Bregitzer ABSTRACT The purpose of this thesis portfolio project is to broaden the population of surround sound listeners and producers by presenting an eight song album of multi channel audio in the form of Pro Tools session files, so that people who own a Digital Audio Workstation (DAW) and an audio interface with more than two outputs are encouraged to set up their own three, four, or five channel surround s ound environment with equipment they already have at home. This is both an experiment in audio advocacy, and a showcase of my best production work. The idea is that while 5.1 channel surround sound is already in place in home theater and elsewhere, there still exists a growing and untapped population of people who have a basic home studio setup that they have not yet realized allows for playback and creation of multi channel audio. All that these end users must do to listen to my project (and begin mixin g in surround sound themselves) is plug in anywhere from one to three additional speakers to their audio interface's additional outputs, put my data DVD in their disc drive, and open the appropriate session files for the number of speakers they have. Ther e they will find multi channel stems of all the different instruments and vocals from my recordings, pre routed and mixed for their surround sound enjoyment. If they would like to make changes, such as moving the location of an element in the mix, they ca n do so freely. Now, the artist's foot is in the door, and the hope is that they will be inspired to create

PAGE 4

iv and share surround mixes of their own using their new home setup. The written portion of this thesis presents further explanation and justificatio n for the project idea, as well as thorough documentation of how the end product was created. The form and content of this abstract are approved. I recommend its publication. Approved: Lorne Bregitzer

PAGE 5

v DEDICATION I de dicate this thesis to Leslie Gaston, whose Surround Sound course got me excited enough about surround sound to come up with a way to make my own surround setup at home, even when the cost of an "official" setup was out of reach. That inspiration is what l ed to this thesis project.

PAGE 6

vi ACKNOWLEGEMENT Many thanks to my advisor, Lorne Bregitzer, for always being supportive of my work and my ideas throughout my time at University of Colorado Denver. While the criticism I've rec eived throughout the program has certainly helped me improve as an engineer, I really needed the positive encouragement I got from Lorne to have enough confidence to continue creating. Cheers my man.

PAGE 7

vii TABLE OF CONTENTS CH APTER I. INTRODUCTION..1 Purpose of the Project..1 Scope of the Project.....3 Speaker Placement...3 Calibration5 Limitations...6 II. PROJECT DOCUMENTATION.8 Artist/Group Information.....8 Tracking.10 Instrumental Tracking (Tracks 1 4)..10 Instrumental Tracking (Tracks 5 8)..16 Vocal Tracking (Tracks 1 8).21 Keyboard Tracking (Tracks 1 8)...22 Other Tracking Notes23 Editing26 Stereo Mixing28 Phasing..28 Panning.30 Equalizatio n...31 Dynamics..34 Delays and Reverb36

PAGE 8

viii Levels37 Mastering...38 Surround Mixing40 III. CONCLUSION42 BIBLIO GRAPHY..44

PAGE 9

ix LIST OF FIGURES Figure 1.1 Three Speaker Arrangement 1.2 Four Speaker Arrangement 1.3 Five Speaker Arrangement 2.1 Kick Drum 2.2 High/Rack Tom 2.3 Low/Floor Tom 2.4 Snare Drum 2.5 Drums, Overheads 2.6 Drums, Room Mikes 2.7 Guitar 1, All Mikes 2.8 Bass, Close Mikes 2.9 Bass, Room Mikes 2.10 Drums, All Mikes 2.11 Guitar 2, All Mikes 2.12 Bass, Live Tracking 2.13 Vocals, Close Mike 2.14 Vocals, Room Mike

PAGE 10

1 CHAPTER I INTROD UCTION Purpose of the Project If a great surround sound recording is made in the studio, but no one else ever hears it, does it make a sound? Much has been written about the problematic history of surround sound that has prevented it from gaining more wi despread popularity in the consumer market (Holman, 1; Sessions, 1; Rumsey, x; Glasgal, Yates 93). Competing formats, high start up costs, and difficulties with placing the additional speakers amounts to a series of obstacles that many consumers are not willing to overcome. For the purposes of this thesis, however, I would like to set the discussion of the average consumer aside, and instead focus on a unique population of potential surround sound listeners: Digital Audio Workstation (DAW) owners. The n umber of people with home recording studios has been growing rapidly over the last five years (Denova, 8). Included in this population are the literally hundreds of thousands of students currently enrolled in Recording Arts programs, as well as those who have already graduated, who purchased recording equipment as part of their education (Education Portal.com). Unlike the average consumer, whose focus may be directed more toward surround sound for home theater than for music alone, this population of stud io owners has already shown their interest in music by the fact that they have invested in music recording equipment for their home. These days, even the most basic studios must have some form of audio interface in order to get the audio from the computer to the speakers (Collins, 19; Harris, 43). A survey of the current market shows that a large percentage of these interfaces offer more than two outputs (Sweetwater.com).

PAGE 11

2 Thus, we have a large, growing, untapped population of potential surround sound lis teners who need only had another speaker or two to their setup to begin enjoying surround sound music. Lastly, since it is common, recommended practice to have two sets of studio monitors to compare mixes on, many of these studio owners will already have the additional speakers in their possession ( Owsinski, 75). The basic setup is simple. A person with a computer, Digital Audio Workstation (such as Pro Tools), and an audio interface can simply plug additional speakers into the additional audio outputs o n their interface, place the surround speakers how they please, and route various channels of audio to the different speakers in their DAW. Now the drummer can be behind you, rich guitar layers can wrap around you, and voices can echo come from all direct ions. Fading the sound level down in one speaker and up in another creates an exciting dynamic pan across the room. All of a sudden, your audio experience has become three dimensional. Like many good ideas, this one is simple, but not obvious. Chances a re, most studio owners do not realize they can do this with just a native Pro Tools setup. The entry cost for an "official" consumer level surround sound setup is priced at over $2000 in software alone (Avid). Although running an actual surround session in Pro Tools does afford the engineer some additional benefits in the way of convenience, such a dedicated dynamic panner, surround sessions are also very taxing on the computer's resources, and some consumer systems would not be able to handle a full sess ion. For those who already have surround sound up and running, cheers. For those who do not, this project is for you.

PAGE 12

3 Scope of the Project My project is an eight track album of rock music, presented in three, four, and five channel surround sound, in t he form of Pro Tools 9 sessions on DVD data disc. Stereo files are included as well. If you own Pro Tools, simply put the data disc in your computer's disc drive, open a song folder, and select the session representing the number of speakers you have. W ithin the session, you will find multi channel stems of the different instruments and vocals on different tracks, allowing you to solo or mute instruments and make changes as you please. Speaker Placement I have produced the tracks with the following spea ker arrangements in mind. Channel numbers are in bold, and ideally, all speakers should be equidistant from the listening position. This can be achieved by using a string or cable to measure the distance from the listening position to each speaker. Thre e Speakers 1 (front left) 2 (front right) ^ (listening position) 3 (rear center) Figure 1.1: Three Speaker Arrangement

PAGE 13

4 Four Speakers 1 (front left) 2 (front right) ^ (listening position) 3 (rear left) 4 (rear right) Fi gure 1.2: Four Speaker Arrangement Five Speakers 3 1 (front left) (front center) 2 (front right) ^ (listening position) 5 (rear left) 6 (rear right) Figure 1.3: Five Speaker Arrangement My decision to use these arrangements and channel orders is based on the following logic. With three speakers, one way to arrange the speakers would be to have a left, a right, and a center channel, in what is called "3 0 stereo" (Rumsey, 83). However, this is not really a surround setup, and i f the user has only three speakers, I feel that their experience will be much more exciting if material can be coming from the back. In the rare case that the user's audio interface only has three outputs, this channel order ensures that the session is al ready routed correctly. With four speakers, another option would be to arrange them with a left, a right, a center, and one surround, in what is called "3 1" stereo, or "LCRS Surround" (Rumsey, 84). Although this is used more commonly than the "Quadrapho nic" arrangement I chose for my project, I simply find the square speaker arrangement more interesting for music. Since the intention of this project was never to

PAGE 14

5 facilitate the most "ideal" or commonly used surround setup, I chose the arrangement that al lowed for more unique and interesting panning possibilities, rather than the most stable frontal stereo image. As for the channel order, I used the same reasoning that channels 1 4 would need to be the ones used in case the end user's audio interface only has four outputs. With five speakers, though, I wanted the channel order to correspond to the ITU standard for 5.1 surround sound (Holman, 122). You will see that channel four is skipped in this arrangement, due to the fact that I did not employ the LFE channel, as discussed in the section titled "Limitations" below. Calibration "Calibration is the process of fine tuning a system for the particular parameters of the situation" (McCarthy, 429). In our case here, every situation will be different, as use r are encouraged to scrap together whatever speakers they have available to make a surround setup. Since the focus of this project is simply getting new users started with surround sound, with the fewest obstacles possible, calibration should be a quick, simple process, aimed at getting different speaker levels "in the ballpark," so that the mixes come across more or less as intended. Below are some rough guidelines for how to do this. To calibrate the speaker levels, one can use a decibel meter (includin g the ones now available in Smart Phone app stores), or worst case, one's ears. I used an iPhone app called "Decibel Meter Pro 2," sold as part of a package called "Audio Tool." Included with each disk worth of tracks is a folder labeled "calibration," co ntaining sessions for three, four, and five channel calibration. Each session consists of tracks with Pro Tools

PAGE 15

6 Digirack noise generator plug in set to output pink noise. After choosing the session corresponding to the number of channels you plan to use, un mute channel 1. You should hear pink noise coming out of your front left speaker. Using a decibel meter, hold the device in the listening position, facing speaker #1. Standard calibration levels for music are in the range of 78 93dB (Holman, 69). F or the purposes of enjoying my project, however, you can simply adjust the speaker to any desired level and get a reading. Now solo channel 2, point the decibel meter at speaker #2, and adjust the speaker level until the meter reads the same level as spea ker #1. Chances are, you will only need to do this for the additional channels (speakers #3, 4, and/or 5), in order to match their level to the two stereo speakers you already have. If you do not have a decibel meter available, just do your best to get t he pink noise sounding roughly the same volume from the listening position when played through each of the speakers you set up, and you are good to go. Limitations Perhaps the biggest limitation in using a discrete surround setup like the ones suggested for this project is the lack of a dedicated dynamic panning tool. Dynamic pans are certainly still possible (and used within my project), but they require fading the level down in one channel and up in the another, rather than simply moving a joystick. For two channel panning, this is not too much of a burden, but it does make certain effects, like circular pans across all the speakers, much more difficult to pull off. Another limitation in suggesting that users simply add whatever additional speakers t hey have to make a surround setup is that ideally, all speakers in a surround

PAGE 16

7 setup are supposed to be identical (Rumsey, 89). Once again though, the purpose of this project is to get people's feet in the door to surround sound, not to worry about ideals. Many surround speaker packages sold today are unmatched anyway. Although I initially considered presenting the project in multiple DAW formats, such as Logic or Ableton, I ultimately chose to present in Pro Tools alone. Pro Tools is, after all, the wor ld's leading DAW (Collins, 1). I also do not own Logic or Ableton. Space was a consideration as well, considering that the project already contains 3 data DVDs worth of information as it stands. Presentation in other DAW formats would be worthwhile futu re work. Lastly, I chose not to employ the sixth "0.1" channel for Low Frequency Effects used in standard 5.1 setups. Admittedly, this was in part due to the fact that my subwoofer broke partway through this project. Additionally, though, I find that pr oper calibration of a subwoofer in a room is already the most difficult part of setting up a monitoring system, and then sending more bass to it in a separate channel complicates the issue further. Adding this sixth channel presents new routing challenges for the end user as well. I wanted there to be the fewest obstacles possible in order for new users to begin enjoying surround sound in their homes, so I capped the channel number at five.

PAGE 17

8 CHAPTER II PROJECT DOCUMENTATION Artist/Group Infor mation Group: "We's Us" We's Us was formed from the melting pot that is the Denver music scene. Started by Michael "WeezE" Dawald, the members of We's Us have played together under many different names and decided to create something new. All of the memb ers come from wide ranging musical backgrounds and geographic regions. All eight songs on this album are originals written by Michael "WeezE" Dawald, with contributions from the artists listed below. The following is a list of all the artists that perfor med on this album. Performers: Michael "WeezE" Dawald Guitar, Bass, Vocals Blake Manion Drums Seth Marcus Bass Michael Bayley Keys Terone McDonald Drums Lawrence Williams Percussion Patrick Dawald Vocals When I first met WeezE in the summe r of 2011, he was performing with Blake and Seth as a three piece group under the name "Eager Minds." I agreed to record the group for my thesis project. Before the sessions began that fall, the group disbanded over

PAGE 18

9 personal differences. However, they a greed to come together again to lay down the first four tracks appearing on this album. For the next four tracks, WeezE decided to fly in his long time friend and professional drummer Terone. WeezE played the bass parts himself on all of the next four tr acks, except "Metaphor," which features Seth on bass again. Lawrence then added hand percussion on "State of Mind," "Aaron," "West," and "In the Clouds." After all other instruments were tracked, I added various organ and synth parts to all of the tracks except "State of Mind," which I kept as is. Today, We's Us is playing at venues all across town, including Cervantes Masterpiece Ballroom, Larimer Lounge, Lion's Lair, Hi Dive, Herman's Hideaway, and more. The current line up features WeezE on guitar, Blake back on drums, and a close friend of WeezE's on bass, by the name of Chris Crantz. I performed with the group on keys for the first few shows, but ultimately was unable to commit fully, due to my full time job and obligation to complete this thesis project. I look forward to working with these artists again in the future, as well as the opportunity to pursue more production projects of my own. Producer/Engineer: Michael Bayley I tracked, edited, mixed, and mastered all eight tracks myself. Detai ls on each part of this process are explained below.

PAGE 19

10 Tracking Instrumental Tracking (Tracks 1 4) Control Room: Studio H (Rupert Neve Portico) Tracking Room: Arts 295 DAW: Pro Tools 9/10 Interface: Digidesign 192 Drum Tracking: 9/23/11, 1 10pm Figu re 2.1: Kick Drum Figure 2.2: High/Rack Tom Figure 2.3: Low/Floor Tom Figure 2.4: Snare Drum

PAGE 20

11 Figure 2.5: Drums, Overheads Figure 2.6: Drums, Room Mikes Microphone List: (1) AKG D112 (Kick) (2) Shure SM81s (Toms) (1) Sennheiser 441 (Snare, top) (1) Beyer M500 (Snare, bottom) (2) AKG C414s (Overheads) (2) Neumann U87s (Rooms) I have been polishing my drum mic setup for some time now. For kick, I have tried everything from one microphone to three, and found that a single D112 gets e nough attack and thump for my desired sound. As with all mikes, I have the drummer repeatedly hit the drum, while another person slowly moves the microphone around until I hear the sound I like in the studio. For the kick, this normally results in a plac ement partway into the hole on the drum, as shown above. For the toms, I have traditionally used Sennheiser 421s, but after having a problem with one of them in this session, I switched to Shure SM81s. I couldn't be happier these small diaphragm condense rs provided a much more crisp sounding attack on the toms than the 421s did, and although they picked up a little more noise from the

PAGE 21

12 rest of the kit, I believe this sharp transient response helped when gating the tom mikes during mixing. I plan to use th ese mikes for future drum tracking sessions, and would be interested to hear how they sound on other instruments. Snare is the drum I have had the hardest time getting to sound how I like. I have used the traditional Shure SM57 on top, but have always end ed up cranking the highs in mixing, often resulting in unpleasant distortion. I found that the Sennheiser 441's brighter frequency response brought me closer to my desired sound from the get go. For the bottom, the Beyer M500 ribbon microphone has won ou t for me over time. Although I do usually end up scooping out some low mid "mud" in this microphone, the ribbons seem to deliver a smoother sounding representation of high end material like snares, which can otherwise sound brittle, especially when mixed with the other microphones. For overheads, I immediately fell in love with the AKG C414s. These are perhaps my favorite microphones that I have ever used. The "bump" in their high end frequency response around 12.5kHz helps make cymbals shimmer, and when placed as a stereo split pair, the entire kit from kick to crash seems to come through crystal clear. In his book Shaping Sound: in the Studio and Beyond Gottlieb makes specific reference to the quality of the AKG 414 in this capacity, pointing out that these microphones are "excellent in most situations, particularly for instruments that need a strong edge at higher frequencies" (Gottlieb, 142).

PAGE 22

13 Guitar Tracking: 9/24/11, 1 9pm, 10/1/11, 2 10pm, & 10/15/11, 12 8pm Figure 2.7: Guitar 1, All Mi kes Microphone List: (2) Beyer M500s (Close) (2) Neumann TLM193s (Mid, front) (2) Neumann U87s (Rooms, rear) This was the first time I tried recording guitar using two amps simultaneously. WeezE brought both amps in so we could decide which we liked bet ter, but upon listening, I felt that the ideal sound would come from a combination of the two. The sound he liked to get from his Fender tube amp had a brighter high end, but could boarder on being harsh to the ear. The old Peavey amp sounded very warm, but lacked definition in the upper range. My decision to use both amps (and six microphones) was based

PAGE 23

14 partly on my knowledge that these tracks would ultimately be presented in surround sound. Note the seventh microphone, seen at the bottom, was used onl y for talk back. I was very happy with the sound we got. The guitar heard in the intro to "2012" is a good example of a single guitar take coming from both amps. "State of Mind" features a very full sounding guitar double, where each side of the stereo field contains a different take from each amp. There is definitely still some amp noise that can be heard on the record, but I do not feel that it is enough to detract from the enjoyment of the sound. Tightening some screws on the Peavey amp certainly hel ped reduce the low end rumble that it had when we first set it up, and turning down the lows on this amp's EQ reduced this noise further. Bass Tracking: 10/22/11, 12 5pm Figure 2.8: Bass, Close Mikes Figure 2.9: Bass, Room Mikes

PAGE 24

15 Mi crophone List: (1) AKG D112 (Close, top) (1) Sennheiser 421 (Close, bottom) (2) Neumann U87s (Rooms) The bass for these recordings was actually re amped from DI sessions we did the previous week at my house using my Apogee Ensemble and Pro Tools 9. This is a strategy I am very likely to use in the future: record bass, guitar, keys, etc. DI at my house, then send those clean signals out through the amp and any outboard processing gear the artist has once we are in the studio. During the DI sessions, the s ignal is split, so that the interface sees only a clean DI signal from the instrument, while the artist hears a fully processed/amplified version of their sound in the room while they record. This technique has a number of benefits. For one, it cuts down greatly on the time needed in a professional studio, because once the takes are edited, it only takes the length of the song to record the final sound. Editing is also a little easier using only a DI signal, because one does not have to consider the tail ends of reverberation from the room in various mikes as the takes are spliced together. Finally, if the artist or producer is unhappy with the sound that was achieved in the studio, the performance is still intact, so that a new sound can later be achiev ed using the same performance. The downsides are subtle, but worth noting. There is a slight degradation in the precision of the audio with the additional analog to digital and digital to analog conversions that take place in this process. Also, some ar tists may perform better in a professional studio setting, surrounded by high end equipment, with the feeling that "this is the final sound." For me personally, I like the convenience of the other approach.

PAGE 25

16 Instrumental Tracking (Tracks 5 8) Control Room : Studio H (Rupert Neve Portico) Tracking Room: Arts 295 DAW: Pro Tools 9/10 Interface: Digidesign 192 Drum Tracking: 11/5/11, 11am 7pm Figure 2.10: Drums, All Mikes Microphone List: (1) AKG D112 (Kick) (3) Shure SM81s (Toms) (1) Sennheiser 441 (Snar e Top) (1) Beyer M500 (Snare Bottom) (2) AKG 451s (Overheads) (2) Neumann U87s (Rooms)

PAGE 26

17 For the drum sessions with Terone, I used a nearly identical mic setup. The only difference was that I had to use the AKG 451s as overheads instead of the C414s, due a theft that had occurred the previous weekend in the studio. This change, although it was not by choice, did have its merits. The cymbals, particularly the hi hat, sounded even more clear and crisp in tracking with the 451s. I also found that I did not need to scoop out as much of the low end in the 451s as I did with the 414s later in mixing. Both drummers I recorded were exceptional in different ways. Blake was very dynamic, and had some very beautiful transitions and build ups planned throughout th ese tracks that he knew so well. Terone was exceptionally consistent, particularly with how he hit the snare. This dynamic consistency made processing of his drums much easier, as I found that I did not need as much dynamic compression. Both were able t o supply some fantastic drum fills, and in Terone's case, a very wide variety of them. Guitar Tracking: 11/19/12, 2 10pm & 12/10/12, 2 6pm Figure 2.11: Guitar 2, All Mikes

PAGE 27

18 Microphone List: (2) Beyer M500s (Close) (2) Neumann TLM193s (Mid, front) (2) N eumann U87s (Rooms, rear) For the second set of four tracks, WeezE decided to borrow his friend's Mesa amp in hopes of achieving an even better guitar sound. Although there was only one amp being used this time (the Peavey amp to the right of the picture was not being used this session), I decided to still use the same six microphones to track the sound. I personally was not as happy with the sound we got with this amp. For one, although there was not as much amp noise when the guitarist was not playing the left speaker on the amp had a low end rumble during play that I could not tame to a satisfactory level. Secondly, I was unable to achieve the same fullness in the stereo image during mixing that I had been able to with the two amp setup. Lastly, I found that more equalization was needed to create a tone that I found pleasing, whereas with the two amps, the combination of their two tones blended into one that was much closer to my desired sound from the start. In future projects with WeezE, I will a sk that we use his amps.

PAGE 28

19 Bass Tracking: 12/10/12, 6 10pm Figure 2.12: Bass, Live Tracking Microphone List: (1) AKG D112 (Close, top) (1) Sennheiser 421 (Close, bottom) (2) Neumann U87s (Rooms) This was a very efficient day in the studio. For the first half, we finished off the guitar for the second set of four tracks. For the second half, I got the identical amp and microphone setup ready from the previous bass tracking session, ran three of the four tracks worth of bass through (from DI recordings WeezE had done at my house during the previous weeks), and still had time for Seth to perform bass live in the studio for the final track. "Metaphor" was the only track where bass was performed live in the studio, rather than being re amped th rough the process described above. I could not say whether I like the sound any better. Comparison was difficult, for a number of reasons. For one, Seth

PAGE 29

20 played in a somewhat different style for this track than he did for some of the others, including a slap bass section, which required an adjustment of the pre amp level to prevent clipping. Also, between the drummer change, guitar amp change, and having WeezE play bass on three of these four tracks, we decided early on that this would be a two sided EP rather than a cohesive set of eight tracks. Thus, I used the opportunity to try some different processing for each set of tracks, including for the bass. What I do know for sure was that Seth seemed to be feeling much more pressure recording in the stud io, with limited time, than he did at my house. He expressed a fair amount of frustration with himself during this session. Also, as stated above, editing was indeed easier for me with the DI takes. For these reasons, I prefer the re amp approach. Perc ussion Tracking: 11/19/12, 6 8pm I unfortunately did not obtain a photo of the percussionist and his setup in the short time we had him in the studio. He appeared for a two hour window during our November 19 th guitar session, recorded four tracks worth o f percussion, and then had to leave promptly for a show. I pulled from the same set of microphones that I had used for the drums and the guitar to mic his percussions set. Microphone List: (1) AKG D112 (Low conga, bottom) (1) AKG C414s (High conga, top) ( 2) Beyer M500s (Bongos, top) (2) Neumann TLM193s (Close, narrow split pair) (2) AKG 451s (Overheads, wide split pair) (2) Neumann U87s (Rooms)

PAGE 30

21 This was my first time miking a full fledged percussion setup like this. Admittedly, ten microphones was probab ly overkill, but I was able to pick up every part of his kit as hoped, including the wind chimes and shakers. I also found that the wide variety of frequency responses obtained from the large mic collection resulted in a very balanced response that ultima tely did not require any equalization in mixing. The overlap in mic selection between this and previous sessions helped the percussion blend in easily with the rest of the instruments. Vocal Tracking (Tracks 1 8) Control Room: Studio H (Rupert Neve Port ico) Tracking Room: Arts 295 DAW: Pro Tools 9/10 Interface: Digidesign 192 Vocal Tracking: 3/9/12, 2 10pm & 3/10/12, 2 10pm Figure 2.13: Vocals, Close Mike Figure 2.14: Vocals, Room Mikes

PAGE 31

22 Microphone List: (1) ADK Area 51 Tube Mic (Close) (2) Neumann U87 AIs (Rooms) The vocals for all eight tracks were recorded in a period of two days. We flew in WeezE's cousin, Patrick Dawald, to sing lead on the majority of the tracks. WeezE sang the lead on "Fall Haze," and "Aaron," and "Drama." We did a microph one shoot out to decide which mic sounded best with Patrick's voice. We compared the ADK Area 51 and Rode K2 (both tube mikes), as well as the Neumann U87 AI, Audio Technica AT4040, and Shure Beta 58. After deciding on the Area 51, we were able to commit the studio's two U87 AIs to the room. The Area 51 was also well suited for WeezE's voice, and as we were interested in keeping the sound consistent, we had him use the same setup. Patrick was able to lay down most of the tracks fairly quickly, usually i n three to five takes. I could tell that his accuracy in pitch would require very little Auto Tune, if any, as was the case later in mixing. For the tracks that Patrick was less confident in, I very much found myself in the role of producer coaching him through lyrics, helping get his confidence up, and making some artistic decisions as to the delivery of the lyrics. Keyboard Tracking (Tracks 1 8) Control Room: Home Tracking Room: Home DAW: Pro Tools 9/10 Interface: Apogee Ensemble Numerous sessions, A pril June 2012

PAGE 32

23 Once all other instruments were tracked, I began adding some keyboard parts of my own to help fill out the tracks. I recorded these DI into my Apogee Ensemble at home. Had I finished the other parts sooner, I might have liked to re amp th ese parts through my Roland amp and mic it up in the studio, much like we did with the bass. Things being as they were, the studios were closed for the summer, and DI had to suffice. Each time I came up with a part, I had WeezE listen to confirm or deny i t's addition. I ended up adding keys to every track except "State of Mind." The keyboard parts I had for that song went through numerous revisions, until I finally gave up and decided to release the track without keys. Thanks to my additions on the othe r tracks though, I was invited to play a number of shows with the band before the deadline approached to complete this thesis documentation. Other Tracking Notes There was a fair amount of planning and foresight that went into my tracking technique, and a few aspects of this are worth discussing in more detail. In the following paragraphs I will address my overall strategy for placement of instruments, mikes, and baffles in the room, as well as my decision not to use a metronome (or "click") for tracking these songs. Before I even began placing the instruments on microphones in the room, I took time to explore the acoustic space using clapping and my voice. I knew that I wanted a big room sound on the record, so that very little reverb would need to be a dded in mixing. Although I have used some of the best reverb plug ins available, historically, I have found that this is the type of plug in I am least satisfied with. Understandably, reverb is very difficult to emulate, considering that the sound consis ts of "thousands or even

PAGE 33

24 millions of individual reflections that can each sound slightly different and arrive at different times" (Shepherd, 182). So, when a large, treated room like Arts 295 is available, I prefer to have the majority of the reverberatio n occur naturally in the room itself. Thus, I did not place any baffles around the instruments, as you can see in the photos. Instead, I used the baffles to address some of the odd echoes and rattling I could hear in the corners. In one corner, I found that a particular arrangement of three baffles created a pleasing "slap back" echo that sounded especially good on the snare drum. I chose to have all the instruments in the center of the room, facing the corner that had the echo. As for microphones, I g enerally used the rule of thumb for surround sound that there should be at least as many microphones as there are channels in the final presentation (Robjohns, 4). This definitely meant a greater number of microphones on each instrument than I was used to using, but I found that this approach afforded me some distinct advantages. As I mentioned with the percussion, I found that having a wide variety of frequency responses from the different microphones helped to create a balanced presentation of each inst rument, rather than having the sound heavily "colored" by the frequency response of just one microphone. This worked well for my desired sound, which I hope comes across as more natural and live than artificial or processed. I also liked the ability to e xclusively hard pan the different mikes, reducing the potential for phase distortion, and create a perceived location of an instrument in the stereo field by adjusting the relative levels of the mikes rather than moving the pan position. My microphone pla cement was based very much on prior experience, as well as experimentation during each tracking session. Dynamic microphones were consistently

PAGE 34

25 placed off axis, and condenser microphones were consistently used as split pairs. While I have experimented wit h many other techniques in previous projects, I have been happiest with the ones I chose here. The ultimate rule with microphone placement was, of course, to use the placement that sounded the best. For the drums, I used a technique similar to what some would call "Recorderman Technique" (Des, 1). I adjusted the position of the overheads until I was able to measure that they were both equidistant from the kick and the snare. This helped ensure that the close mikes were in phase with the overheads. My d ecision not to use a metronome turned out to be very much a double edged sword. We began experimenting with and without a click during our early drum tracking sessions, and for Blake, his performances were consistently better without one. Purely looking at the performances we came away with, I cannot complain too much about the rhythmic consistency. I think any deviation in tempo in these tracks is often offset by the expressiveness of the performances. "Drama" is a particularly good example of a dynami c track that benefits from the freedom to deviate slightly above and below the standard tempo of the song, helping to create moments of calm and excitement. If you listen closely, the tempo of "In the Clouds" ends up significantly faster than it begins. W hile this was a concern early on, I feel that with all the other instruments in place, the listener is brought along smoothly for the ride from dreamy slow to rocking fast. "West" has some noticeable "hiccups" in tempo that would have been far easier to e dit with a click, but in the grand scheme of the listening experience, I do not consider them too major. In this day and age, I find it refreshing when a record shows any sign that a song was actually performed "live" rather than in tempo corrected pieces something that has become increasingly rare in the past few decades. More than half of these songs consist

PAGE 35

26 of one complete drum performance, and the greatest number of edits made to any drum performance was four. The biggest downside in not using a cl ick was in the tracking and editing process for the other instruments. There were numerous segments that could have easily been copied and pasted had we recorded to a grid, but instead I had to come up with a unique and tempo accurate take for every singl e section of every song. This time consuming burden alone may steer me back toward using a click for tracking future projects. When all is said and done though, it is nice to be able to say that at every moment of every song, the listener is hearing a un ique performance from each instrument. Editing Editing is the part of the process that feels the most like work to me. It does indeed involve artistic choices, and sometimes choosing between a couple guitar solos or vocal takes can be fun, but most of th e time I am looking to finish as quickly as possible to that I can begin mixing. I did become much more efficient at editing during this project. This was the first time I began using the "playlist view" in Pro Tools, where every take on a given track is displayed at once, in different colors. This made the "comping" process much easier. Although the term "comping" has other meanings in music, in audio production, a "comp track" is "an audio track composed of segments copied from other tracks, for insta nce to combine the best portions of various recorded takes of the same performance" (Sams, 40). In some instances, such as with the drums, and the lead guitar on "Drama," I was able to use whole takes. On the opposite end of the spectrum, however, I foun d myself needing to splice together more than 40 different pieces of percussion in order to have the comp track fit tightly with the drums.

PAGE 36

27 As I mentioned above, the fact that I did not use a click track during tracking came back to haunt me in the editin g process. I estimate that editing would have taken at most half the time that it did would I have been able to copy and paste segments across the tracks. I do not necessarily regret not using a click track, for the reasons described above, but for effic iency's sake in future projects, I am likely to recommend that we do. There were some pleasant surprises in editing, like the fact that I was often able to get complete guitar doubles from only three takes of rhythm guitar. Other times, I really struggle d to fill in the gaps. Another repercussion of not recording to a click was that it was often difficult to begin playing the intros on time in overdubbing. In some cases I was able to get Blake to do a count off with the drums, and in others I was able t o edit a form of count off onto the front end of the track, but there were still places where for some reason I never came away with a solid intro other than the scratch guitar. This is why both "Aaron" and "In the Clouds" begin with a different, thinner sounding guitar than what is heard in the rest of the song. I did my best to make this sound like an intentional effect, but whether or not I succeeded is for the listener to decide. Probably the most challenging of all editing tasks was putting togethe r the vocals on "Drama." As you can see in the session, the vocals are on quite a few different tracks. In some cases, this is because multiple layers of vocals are playing back simultaneously, but for the most part, this was necessary because the final performance came from three different sessions (including one scratch vocal session that I did not refer to in the tracking section), requiring separate processing in order to make the takes sound cohesive. Sometimes WeezE's best take for the first part o f a line would come from one vocal tracking session, and then the only good take of the rest of line would come from


Stereo Mixing

To me, mixing is the fun part. Mixing is where the engineer uses a wide array of tools to shape the sound of the audio, adjust levels, affect dynamics, and add effects like delays, reverb, and distortion. Although my final product is presented in surround sound, the vast majority of the time and effort spent on this project went into first crafting stereo mixes that I felt were my best work yet. From there, I was comfortable expanding these stereo tracks into a surround environment, a process described in the next section. For this section, I will break down the mixing process into six main steps, which I more or less performed in order: phasing, panning, equalization, compression, delays and reverb, and levels.

Phasing

I was very meticulous about adjusting the phase relationships of microphones for this project. I ended up going over my work multiple times before I was happy with the sound. The "phasing" I am referring to in this discussion is the timing difference as the same sound arrives at different microphones, rather than the intentional offsetting of timing one sometimes uses as an added effect (Sams, 146; Borwick, 332). If the timing of the sound from different microphones is not aligned correctly, unpleasant distortion occurs.


Despite my best efforts to carefully place the overheads equidistant from the kick and snare in tracking, I found that small adjustments were still needed in order to get the entire kit sounding crisp and clean. I definitely spent the most time on the drums. To make phasing decisions, I sometimes measured distances in samples between points where a particular transient of a drum hit crossed zero on two different microphones, and sometimes just used my ears. I had not made any effort to get the kick or snare centered in the room mikes, so this took adjusting. I knew I wanted the reverberant room sound in the room mikes to arrive a little later than the more direct sound in the overheads, so I did not try to adjust the room mikes directly into phase with the overheads, but I did use my ears to adjust their timing so that the cymbals rang out in a pleasing way when both the overheads and room mikes were playing. I did delay the timing of the close mikes on the kick, snare, and toms to get them into phase with the overheads.

The guitar did not take quite as much time to adjust as the drums. For the majority of the tracks, I did not like the sound introduced by the TLM193s I used as middle-distance microphones, so I dropped these and just used the M500s and room mikes. This made adjusting phase much easier. For the bass, I am still not certain whether I made the right decision in terms of adjusting phase on the room mikes. I was definitely able to get the DI track in time with the microphones (a delay of only four samples), but after much debate, I ultimately settled on a timing for the room mikes that involved one being moved up by a few thousand samples, and the other staying untouched. For whatever reason, this seemed to improve the clarity of the bass, which I suppose is all that matters.
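
As a rough illustration of the arithmetic behind these sample-offset measurements (and not the workflow I actually used in Pro Tools), the short Python sketch below estimates the lag between two microphone signals with a cross-correlation and then delays the earlier arrival to match. The signal names, lengths, and sample rate are synthetic stand-ins rather than anything taken from my sessions.

import numpy as np

SAMPLE_RATE = 44100  # Hz

def estimate_lag(reference, delayed, max_lag=2000):
    """Return the lag (in samples) of `delayed` relative to `reference`,
    found with a cross-correlation and searched within +/- max_lag samples."""
    corr = np.correlate(delayed, reference, mode="full")
    lags = np.arange(-len(reference) + 1, len(delayed))
    window = (lags >= -max_lag) & (lags <= max_lag)
    return lags[window][np.argmax(corr[window])]

def shift(signal, lag):
    """Delay (lag < 0) or advance (lag > 0) a signal by |lag| samples, zero-padded."""
    if lag > 0:
        return np.concatenate([signal[lag:], np.zeros(lag)])
    if lag < 0:
        return np.concatenate([np.zeros(-lag), signal[:lag]])
    return signal

# Synthetic stand-ins: the same snare transient reaching the overhead
# about 3 ms (132 samples) after the close mike.
transient = np.exp(-np.linspace(0, 8, 512)) * np.sin(np.linspace(0, 60, 512))
close_mic = np.concatenate([transient, np.zeros(4000)])
overhead = np.concatenate([np.zeros(132), transient, np.zeros(3868)])

lag = estimate_lag(close_mic, overhead)          # ~132 samples
close_mic_aligned = shift(close_mic, -lag)       # delay the close mike to match
print(f"estimated lag: {lag} samples ({1000 * lag / SAMPLE_RATE:.2f} ms)")

At 44.1kHz, one millisecond is about 44 samples, so the four-sample offset on the bass DI mentioned above corresponds to well under a tenth of a millisecond.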


Panning

I had some sense of where I planned to pan the different microphones while I was adjusting phase, so these two processes were not entirely separate. If the sound from two microphones was going to be hard panned to opposite sides anyway, I did not make an effort to align their phase. This does mean my mixes may not be especially mono compatible, but in this day and age, that is a risk I am willing to take.

I suppose I did not do anything especially out of the ordinary with my stereo panning artistically. For the drums, I hard panned the overheads and room mikes to the left and right. The kick and snare mikes were panned dead center, and the close mikes on the toms were placed over their perceived image in the stereo field created by the overheads and rooms. I oriented the drums from the "drummer's perspective," as discussed in Lorne Bregitzer's book Secrets of Recording (159). For the guitars, I hard panned nearly every microphone, generally with the close mic on one side and the rooms hard panned left and right. For the tracks where I recorded guitar out of two amps, I usually panned the close mikes hard left and right, and either kept the instrument with a wide stereo sound if it was a lead part in the song, or turned up one mic relative to the other if I wanted it to sound like it was coming from the left or right. The bass was panned dead center, with the room mikes hard panned left and right. The same went for the vocals. The keyboards were recorded in stereo, but often needed to be positioned to one side, so I simply hard panned again and turned up the relative level of the side I wanted the keys to be on. For the percussion, I used a process similar to what I used with the drums, positioning close mikes where they were heard in the stereo field of the overheads and rooms.
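
The level-based positioning described above boils down to the familiar constant-power pan law. The sketch below is only a generic illustration of that relationship; the function, gain curve, and position value are assumptions for the example, not panner settings from the sessions.

import numpy as np

def constant_power_pan(mono, position):
    """Pan a mono signal across a stereo pair.

    position runs from -1.0 (hard left) through 0.0 (center) to +1.0
    (hard right); the left/right gains keep total power constant."""
    angle = (position + 1.0) * np.pi / 4.0   # maps -1..+1 onto 0..pi/2
    return np.cos(angle) * mono, np.sin(angle) * mono

# Illustrative use only (not a setting from the sessions): a close mike
# placed partway toward the right, where it sits in the overhead image.
mono = np.random.default_rng(0).normal(size=44100).astype(np.float32)
left, right = constant_power_pan(mono, position=0.33)
print(f"left gain {np.cos((0.33 + 1.0) * np.pi / 4.0):.2f}, "
      f"right gain {np.sin((0.33 + 1.0) * np.pi / 4.0):.2f}")

At the center position each side sits at about 0.707 of full gain (roughly 3dB down), which is the usual reason a centered source and a hard-panned source read as comparably loud.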


Equalization

I consider equalization the most interesting part of the mixing process. This is where I feel that the engineer has the most freedom to impact the overall sound of the recording. Equalization, or EQ, is defined as "a circuit or device for frequency selective manipulation of an audio signal's gain" (Sams, 76). EQ is a big part of what makes the kick drum thump and the snare crack. It helps to make the different elements and layers in the mix sound distinct from one another, and along with level and panning it helps to draw attention to important elements while leaving others in the background. As Bregitzer points out, "There is no right or wrong way to begin a mix; each engineer is different. The most common technique, however, is to begin with the drum tracks" (159). This is what I chose to do.

Using the overheads as a reference point, I began making adjustments to the various close mikes. I gave the kick a big boost down around 50Hz to help the "thump" come through the mix. The snare top mike needed a scoop around 500Hz and a low shelf cut below 400Hz to tame the mids and remove some unneeded low end. For the snare bottom, I actually boosted the low mids at 180Hz to fatten up the sound, and added a high shelf boost above 2.5kHz. The high tom had bell boosts at 700Hz and 300Hz; the low tom had bell boosts at 300Hz and 100Hz. Both had some high-end boosts as well. For the overheads, I ended up doing some rather unusual EQ that was quite different on the two sides. The ride was sounding too sharp on the right, so I cut 7kHz to tame it, but then I decided the whole stereo image sounded best when a reciprocal boost was added to the left side at the same frequency.
Despite my best efforts during tracking to get both kick and snare centered in the overheads, I found that when the snare sounded centered, the kick still sounded heavier on the right. So, I cut 75Hz on the right and boosted it on the left. This solved my problem. For the room mikes, I simply used a low shelf cut below 400Hz. The U87s otherwise captured the room sound quite nicely. This was definitely the most elaborate, processing-intensive drum work I have done, but I think the final product sounds fairly clean and natural. In the end, I would often use the API 550A and 550B plug-ins for different EQ adjustments before compression, then do some very subtle adjustments with the PuigTec EQP1A, use another compressor, add tube saturation and plate reverb, and then remove some remaining mud with the Waves Q-Clone EQ after that. As explained in the dynamics section, I used parallel compression for most of the drums, so sometimes the compressed and uncompressed tracks had different EQs on them. Fine-tuning the entire kit to my liking took an extremely long time.

EQ'ing the bass was certainly easier than the drums, but it still took a few revisions. My final decision involved very little EQ on either the DI signal or the miked signals. I added the 550B to each track for a little color, but the only actual adjustment I made was to cut below 50Hz on the DI signal to make room for the kick. Thanks to the re-amp process through which the bass was recorded, I found that I was able to focus all my efforts on obtaining a good sound during tracking, leaving little to be done afterward.

In terms of the guitars, I generally found that any scratch guitar tracks that I decided to use took the most EQ work. I tried to use the same microphones as I did for the full guitar tracking sessions, but we only used one amp in the isolation booth (during drum tracking), and I also think the lack of room sound is part of what necessitated heavy processing in order to make these scratch tracks fit with the rest.
The lead guitar in "Drama," for example, was all one scratch take recorded during drum tracking. The drums were one solid take as well. I made WeezE try to replicate his performance later with the full two-amp setup in the room, but nothing came close. That performance had brought tears to his eyes when he finished playing, and it had to be kept. To make it sound fuller and warmer, I ended up using the Waves Q-Clone EQ plug-in multiple times, essentially alternating a wide cut and a wide boost in the same frequency range for a net result that mainly just sounded more processed and colored by harmonics. Without the warmth of the room, the scratch guitar also needed a high shelf cut above 12.5kHz to avoid sounding harsh. Most of the other guitars were pretty easy to manage, and did not need a whole lot of EQ. I did, however, use quite a few low shelf and bell cuts around 300 to 400Hz to remove the ubiquitous low-mid mud. Other than that, a few high boosts around 5kHz left the guitars sounding bright and edgy where needed.

For keys, I used little to no EQ across the board. The way I see it, these samples are already EQ'd, and much more processing starts to sound strange. Occasionally I would reduce some low end in an organ or Rhodes to ensure that it did not compete with the bass. I did not use any EQ on the percussion either, as I felt that the wide variety of microphones used helped to accumulate a nice, balanced sample of the sounds.

For the vocals, I really credit the microphone shoot-out we did and the quality of the ADK Area 51 microphone that we selected for the fact that next to no EQ was needed. I tried a number of different adjustments, but in the end, all that was added was a small 2.5kHz boost for the vocals on the first four tracks, and for the second set of four, I did no EQ whatsoever. The main reason I made an adjustment on the first four was that after comparing my mixes to some professional mixes that I admire using iZotope oZone's "matching" EQ tool, I found that the only real difference was that my mixes had a little less 2kHz. Even after adding that boost to the vocals in the 2kHz range, this was still true. The result of this overall EQ difference, to me, is that compared to some tracks, my mixes can be played very loudly without sounding harsh in the high mids. The expense is that at lower volumes, this 2kHz range is very effective at grabbing our ears' attention, due to the fact that speech carries a lot of important frequency content there, and my mixes have a little less total content in this range (Edis Bates, 1).

On the whole, I have learned that the particular EQ chosen can be about as important as the adjustments made. Many of the EQ plug-ins I have will distinctly color the sound immediately upon being added to the track, and the number of possible adjustments that can be made varies greatly between plug-ins. The most delicate adjustments seem to be high boosts, and for these I found that high-quality emulations of analog EQs provided the purest sound.
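
To make the "bell" adjustments above concrete, the sketch below implements a single peaking EQ band in the common RBJ biquad form. It is a generic filter, not a model of the API, PuigTec, or Q-Clone plug-ins, and the frequencies, gains, and Q values are placeholders rather than my session settings.

import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, freq, gain_db, q=1.0):
    """One peaking ("bell") EQ band in the standard RBJ biquad form."""
    a_lin = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * freq / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1.0 + alpha * a_lin, -2.0 * np.cos(w0), 1.0 - alpha * a_lin])
    a = np.array([1.0 + alpha / a_lin, -2.0 * np.cos(w0), 1.0 - alpha / a_lin])
    return lfilter(b / a[0], a / a[0], x)

fs = 44100
kick = np.random.default_rng(1).normal(size=fs)   # noise standing in for a kick track

# The same general shape as the moves described above, with illustrative
# frequencies, gains, and Q values rather than the actual session settings:
kick = peaking_eq(kick, fs, freq=50.0, gain_db=+6.0, q=0.8)    # low "thump" boost
kick = peaking_eq(kick, fs, freq=350.0, gain_db=-3.0, q=1.2)   # low-mid mud cut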


Dynamics

Dynamic processing includes the use of gates, expanders, compressors, and limiters, as well as forms of tape or tube saturation. The need for this type of processing is perhaps less obvious to the average music listener, but many engineers would argue that it is the most important part of our work. Renowned mix engineer Andy Johns, for example, argues that "compressors can modify the sound more than anything else" (Owsinski, 58).
Generally speaking, dynamic processing affects the sound by altering how quickly the level of the sound increases or decreases. I used the expander in oZone to gate both the top and bottom snare drum mikes, as well as the toms. I have always had issues with the hi-hat coming through too loudly in my mixes, but careful gating of the snare mikes helps greatly with this issue by removing the hi-hat sound from these microphones most of the time. My attack times for the expander were extremely short, so that the transients of the hits could come through. The releases were a little longer, and their exact length depended on how long the particular drum made a sound (shorter for the snare, longer for the high tom, then even longer for the low tom). I have always found it bothersome when I can hear a gate or expander in a mix boosting the cymbals up each time it opens, so I went to great lengths to minimize this effect for these tracks. The only time I really notice that effect in these mixes is in the intro to "Drama," where the soft snare hits required a low threshold for the expander. I opted not to gate or expand any other tracks.

Compression, meanwhile, came mostly from the Empirical Labs EL8 Distressor, a wonderful piece of outboard gear that I decided to purchase last year. Although I hear that a plug-in emulation exists, I have never come across anything that I like as much as the real thing in terms of compression. This device is highly versatile, and for drums in particular, it has a way of getting big gain reductions while still keeping transients intact. I used the longest attack (11) and nearly the shortest release (0 to 0.1) pretty much every time, so that the early transients came through and the sound was only compressed briefly before returning to its normal dynamic state. These kinds of compressor settings are known to help give "punch" to a sound (Shepherd, 137).
Just about all the drums were processed with a 4:1 ratio, with the other settings in their "neutral" state. I found that limiting the number of circuits I ran the drums through on the Distressor (e.g., high-pass filters, distortion settings, and other optional circuits on the device) helped to keep the cymbals sounding clean and clear across the board. For the bass, I did decide to apply the "Distortion 3" setting, and in one case, the "Distortion 2" setting. For the drums, the reduction meter normally read between 1dB and 4dB. The bass reductions were in that range as well, or sometimes a little higher during louder sections. I opted not to use the Distressor on the other instruments, mainly because running each channel out through the device in real time became very time consuming.

For the drums, I used a technique called "parallel compression," blending a heavily compressed copy of the signal back in with the uncompressed original. My go-to plug-in compressor was the PuigChild 670, a Waves emulation of a Fairchild compressor. I found that this compressor often achieved my desired result with the default settings alone. Like the Distressor, this compressor introduces a noticeable harmonic coloration of the sound, beyond just the dynamic range reduction, that I found pleasing to the ears. I used the 670 on most instruments that didn't go through the Distressor, and for the bass, I actually used both the Distressor and the 670, in that order. The only other plug-in compressor I really ended up using was McDSP's Analog Channel, on the kick and snare buses, for a little extra analog-sounding "fatness."
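
The ratio, attack, and release behavior described above can be illustrated with a stripped-down compressor model. The sketch below is only a didactic approximation, with none of the harmonic character of the Distressor or the PuigChild 670; the threshold, times, and test signal are made up, and the parallel blend simply sums the dry track with a compressed copy of itself, which is the basic idea behind parallel compression.

import numpy as np

def simple_compressor(x, fs, threshold_db=-18.0, ratio=4.0,
                      attack_ms=10.0, release_ms=80.0):
    """A bare-bones feed-forward compressor: smooth the level with separate
    attack/release times, then scale anything over the threshold by the ratio.
    No knee, no lookahead, no saturation; settings are illustrative only."""
    atk = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    gain = np.ones_like(x)
    for n, sample in enumerate(np.abs(x)):
        coeff = atk if sample > env else rel
        env = coeff * env + (1.0 - coeff) * sample
        over_db = 20.0 * np.log10(env + 1e-10) - threshold_db
        if over_db > 0.0:
            # a 4:1 ratio keeps only 1/4 of the overshoot above the threshold
            gain[n] = 10.0 ** (-over_db * (1.0 - 1.0 / ratio) / 20.0)
    return x * gain

def parallel_compress(dry, fs, wet_level=0.5, **settings):
    """Parallel ("New York") compression: blend the untouched track with a
    heavily compressed copy of itself."""
    return dry + wet_level * simple_compressor(dry, fs, **settings)

fs = 44100
# Made-up test signal: a decaying noise burst standing in for a drum bus.
drum_bus = np.random.default_rng(2).normal(size=fs) * np.exp(-np.linspace(0, 4, fs))
blended = parallel_compress(drum_bus, fs, wet_level=0.5,
                            threshold_db=-24.0, ratio=4.0)

With a 4:1 ratio, a level 8dB over the threshold is pulled down by 6dB, leaving 2dB of the overshoot.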


Delays and Reverb

I really did not make very extensive use of reverb in these recordings because, as explained in the tracking section, I was able to capture much of the natural reverberation in the tracking room, Arts 295. The vast majority of the audible reverb in these recordings (like on the drums, for example) is from the room itself. A very small amount of plate reverb was added to each instrument and vocal bus using the reverb portion of iZotope oZone, but the wet signal was only set at about 5 to 12% relative to the dry signal, depending on the instrument. Delays were used, but normally just on the vocals. I used sends from the vocal buses to the Massey TD5 delay plug-in. I used the "tap tempo" feature to get the delay in time with the track, normally with a simple quarter-note division. Guitar was the only other instrument that had delay on it, and in many cases, I was simply accenting delay that WeezE already had from his pedals. Two examples of very audible delay are the lead guitar in "Drama," with a fairly loud delay sent to the left channel to help balance the track, and the vocals in "In the Clouds," where I occasionally automated the delay level up quite hot as an effect.
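
Setting a tempo-synced delay by hand comes down to simple arithmetic: a quarter note lasts 60,000 ms divided by the tempo in beats per minute. A tiny sketch, using a hypothetical tempo rather than one from the album:

def delay_time_ms(bpm, division=1.0):
    """Delay time in milliseconds for a note division at a given tempo.
    division is in quarter notes: 1.0 = quarter, 0.5 = eighth, 0.75 = dotted eighth."""
    return 60000.0 / bpm * division

# Hypothetical tempo, not one taken from the album:
print(delay_time_ms(96))        # quarter note at 96 BPM -> 625.0 ms
print(delay_time_ms(96, 0.5))   # eighth note -> 312.5 ms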


Levels

Levels were obviously being adjusted throughout the recording process, but for my final levels in the track, my process tended to be as follows. I first made certain I was happy with the drum tracks. The challenge was often getting the kick and snare loud enough without sounding too dry, since I was primarily using the overhead and room mikes for reverb. Accomplishing this was the result of adjustments made in all six of the categories of mixing presented in this section. Once I was content with my drum levels, I added in the bass, using its level relative to the kick as my reference point. Then, I added all the guitars and adjusted their levels to fit with the drums and bass. I made sure that rhythm guitars were softer than lead guitars, and sometimes waited to add the guitar solos until after I added the vocals. I then added keyboards, making these pieces fit without standing out too much, with a few exceptions where I felt the keys should draw attention to themselves. Lastly, I added the vocals, adjusting their level to a range where I thought every lyric could be deciphered, but no louder.

I did do a fair amount of automation, but more so on the first four tracks than the second set of four. This had to do mainly with Blake's drumming style, but also with the nature of the tracks. "Drama" required a fair amount of automation on the kick and snare in order to keep these pieces consistent and sitting well in the mix with the overhead and room mikes. Terone was so consistent that his drums already sounded compressed to me, and with a little "kiss" of added compression from the Distressor, I did not have to automate his drums at all. Vocals were automated on every track, sometimes to extremely thorough levels. One particular example that took quite a bit of time was in "2012," near the end, when he says "In the shining of the full moon / I am always looming in the room." The words "full," "looming," and "room" were far louder than the rest of the vocals, causing some terrible distortion in the microphone. Compression only made the distortion worse. Instead, I carefully painted volume curves to offset these level changes as they took place, and simultaneously automated the reverb/delay sends to compensate in a way that created a cool effect. In the end, it might be the coolest-sounding part of the vocals on that track.

Mastering

"Any mastering engineer will tell you that you should not master a recording yourself" (Bregitzer, 184). Going against this recommendation, I chose to master my own project, with the goal in mind of an entirely self-produced thesis project. I took a topics course on mastering during my studies at the University of Colorado Denver, and this gave me some perspective on the mastering process.
I arrived at a number of different conclusions after taking the course. One of these is that high-quality, highly expensive equipment in an acoustically optimized space is obviously going to be better for making judgments about the sonic qualities of a mix. Indeed, professional mastering houses tend to be better equipped in this way (Owsinski, 86). The other conclusion I arrived at, however, is that whenever possible, it is generally better to address issues of balancing frequency content (i.e., equalization) at the mixing stage. This is for two reasons. For one, when a mix sounds imbalanced (heavy or weak) in a certain frequency range, and an overall EQ adjustment made in mastering improves the mix on the whole, it is rare that every component in the mix will actually sound better after this adjustment. For example, I gave some of my mixes to a former classmate now working for Sony in a studio in San Diego, and he was able to identify a build-up around 100Hz in my mixes. I tried cutting some of the 100Hz content with an EQ on the master bus, and while this did resolve the issue, I thought some of the guitars now sounded thin. So instead, I used the same EQ adjustment (a bell cut with Q-Clone) on the drum, bass, and some guitar buses, and left the rest untouched. This sounded better. The second reason I feel that EQ should be handled in the mixing stage is that the vast majority of EQs, even those that are considered "mastering grade," seem to introduce unpleasant, audible distortion when placed across the master bus. Perhaps this is not true for some of the best mastering-grade hardware EQs, but it is for all of the ones I own. Mastering EQ seems to improve certain elements of the mix while hurting others and degrading clarity, and so I prefer to make all EQ adjustments in the mixing phase.


Thus, "mastering" for me meant purely limiting, using the Waves L3 Ultramaximizer, matching levels, and timing the beginnings and endings of the tracks. I placed the limiter directly across the master bus, rather than bouncing to a new session, as I found that the clarity of the audio was improved by avoiding multiple bounces. For this project, I used the plug-in's "basic profile," with nearly the shortest release time, for reasons described in the dynamics section above. Reduction was generally around 1 to 3dB, and I simply used my ears over the course of multiple listening sessions to get the track levels into comparable ranges.
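
Level matching can be sanity-checked numerically as well as by ear. The sketch below only measures sample peak and RMS in dBFS and reports the gain that would bring a file's peak to a chosen ceiling; it is not a model of the L3's look-ahead limiting, and the track names and levels are hypothetical.

import numpy as np

def peak_dbfs(x):
    """Sample-peak level in dBFS."""
    return 20.0 * np.log10(np.max(np.abs(x)) + 1e-12)

def rms_dbfs(x):
    """Whole-file RMS level in dBFS (a crude stand-in for loudness)."""
    return 20.0 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

def gain_to_ceiling(x, ceiling_db=-0.3):
    """Linear gain that would bring the sample peak to the chosen ceiling."""
    return 10.0 ** ((ceiling_db - peak_dbfs(x)) / 20.0)

# Hypothetical bounced mixes: noise bursts standing in for real audio.
rng = np.random.default_rng(3)
tracks = {"track_a": 0.10 * rng.normal(size=44100),
          "track_b": 0.25 * rng.normal(size=44100)}
for name, audio in tracks.items():
    print(name,
          f"peak {peak_dbfs(audio):+.1f} dBFS,",
          f"RMS {rms_dbfs(audio):+.1f} dBFS,",
          f"gain to a -0.3 dBFS ceiling: x{gain_to_ceiling(audio):.2f}")

Matching sample peaks is not the same as matching perceived loudness, which is why repeated listening across several sessions remained the final judge of track-to-track levels.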


Surround Mixing

Once the stereo mixes were done, the surround mixes came along quite quickly. Having already achieved the sounds I wanted for each instrument, my challenge consisted entirely of placing these different elements and their components across a larger sonic space. I began bouncing stems of each instrument and vocal element, creating first a two-channel stereo stem, and then three-channel stems with separate left, right, and center information. For the three-channel stems, the left and right channels usually represented the room mikes, with the center channel representing the close mikes. This approach afforded me a number of benefits. For one, it kept intact the tonal qualities of each instrument that I had worked so hard to achieve during stereo mixing. Having already spent so much time on the stereo tracks, it also saved me from the temptation to continue tweaking EQ and compressor settings endlessly. Next, the left-right-center material from the three-channel stems gave me plenty of flexibility in terms of creating perspective in surround sound, because I had separate components of each element that sounded close up or far away. I also had the stereo stems with both close and room sounds together for when I wanted to keep that combination together the way it was in the stereo mix. Lastly, I knew that I would want to check my mixes on the school's better-optimized surround systems, and the school does not yet have many of the plug-ins that I used. Bouncing stems printed these plug-ins onto the audio, solving this problem.

One could argue that a surround sound mix created from stems, even three-channel ones, is not technically a full surround sound mix. I would argue that, in addition to providing the benefits described above, this approach was actually something of a necessity for me with my system. By the end, my stereo mixing sessions were already using just about all the processing power my computer had. Even when I was mixing on the school's computers during my Surround Sound course, sessions less complex than these were somewhat of a struggle for the computer to process when converted to surround sessions. By bouncing stems, I was able to get a fresh start and focus primarily on panning, levels, and perspective. The way I see it, I was playing with Legos during stereo mixing, and Duplos during surround mixing.

For these surround mixes, I went for a "middle of the band" perspective, as opposed to an "audience" perspective (Owsinski, 118; Holman, 7). True to our positioning on stage, the drummer is in back, the bassist is front left, the vocalist is front and center, the keys are usually back right, and I often tried to make the guitars sound like they are coming from everywhere. For my exact positioning and levels of the different channels, refer to the sessions themselves.
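
As a rough picture of the routing idea behind these stems (and only as a toy illustration: the channel order, gain values, and placement below are assumptions for the example, not the routing in my Pro Tools sessions), a three-channel stem can be distributed into a five-channel bed with nothing more than a small gain matrix:

import numpy as np

CHANNELS = ["L", "R", "C", "Ls", "Rs"]   # assumed 5.0 channel order

def place_stem(stem_l, stem_r, stem_c, gains):
    """Route a left/right/center stem into a 5.0 bed.
    `gains` maps each stem channel to per-output gains, e.g. sending the
    room mikes (stem L/R) mostly to the surrounds."""
    bed = np.zeros((len(CHANNELS), len(stem_c)))
    for stem, name in ((stem_l, "stem_l"), (stem_r, "stem_r"), (stem_c, "stem_c")):
        for channel, gain in gains[name].items():
            bed[CHANNELS.index(channel)] += gain * stem
    return bed

# Illustrative drum placement for a "middle of the band" perspective:
# the close-mike (center) material toward the rear, the room mikes kept wide.
rng = np.random.default_rng(4)
stem_l, stem_r, stem_c = (rng.normal(size=44100) for _ in range(3))
drum_gains = {
    "stem_l": {"L": 0.5, "Ls": 0.8},
    "stem_r": {"R": 0.5, "Rs": 0.8},
    "stem_c": {"C": 0.2, "Ls": 0.6, "Rs": 0.6},
}
bed = place_stem(stem_l, stem_r, stem_c, drum_gains)
print(bed.shape)   # (5, 44100)

In the sessions themselves this routing is done with Pro Tools output paths and the surround panner rather than code; the point of the sketch is only that a three-channel stem plus a handful of gains is enough to place an element anywhere in a five-channel field.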


CHAPTER III
CONCLUSION

I am certainly not the first to do surround sound with discrete channels, or even to suggest that equipment normally meant for stereo can be set up for surround. Many authors have discussed these options (Sessions, 67; Holman, 112). However, using a collection of Pro Tools sessions as the final presentation medium was not something I came across in my research, and I believe this is the innovative piece of my project. Chances are, this would not have been a worthwhile distribution method until very recently, as the number of home studios has increased greatly over the last five years (Denova, 8).

Once a user has gotten in the door to surround listening and mixing, adjustments to the setup can be made. We often fuss about ideal speaker setups, levels, and placement, when the reality is that very few people have surround sound at all. What matters is just getting the additional speakers added and beginning to enjoy this beautiful medium for music. After all, as Owsinski points out in his section on surround sound, "Speaker placement is forgiving: Yes, there are standards for placement, but these tend to be noncritical... In fact, stereo is far more critical in terms of placement than surround sound is" (116). With that in mind, we have all seen stereo speaker setups that are far less than ideal, and music is still enjoyed on those. The same goes for surround.

The next step for broadening the target audience with this surround presentation method is to make these mixes available for other DAWs, such as Logic or Ableton. This would inspire even more people to create their own surround setups. Also, I would be very excited to explore the possibility of obtaining stems of other surround mixes to use on my new setup.
These would not need to be stems of the individual instruments, of course, the way I presented my mixes, but simply the four or five separate channels of audio used in the mix. If users such as myself could listen to Pink Floyd's Dark Side of the Moon on our new setups, for instance, this would create an even greater appeal.

Even if only one new person builds a surround setup as a result of this project, I would consider it a success. In the quest for a broader surround sound market, we may have to inspire people one population at a time. Just as people used to be skeptical that stereo would ever take off, many feel that surround sound will never enjoy broad popular appeal. I think this is equally short-sighted. Over time, surround sound will continue to become more accessible, and there is no time like the present to begin enjoying the medium.


BIBLIOGRAPHY

"Audio Interfaces." Sweetwater.com. N.p., 2012. Web. 13 Oct. 2012. <http://www.sweetwater.com/shop/computer audio/audio_interfaces/>.
"Complete Production Toolkit." Avid. Avid Technology, Inc., 2012. Web. 13 Oct. 2012.
"Top Recording Arts Schools and Colleges in the U.S." Eduction Portal.com. Eduction Portal.com, n.d. Web. 13 Oct. 2012.
Borwick, John. Sound Recording Practice. Oxford: Oxford University, 1980. Print.
Bregitzer, Lorne. Secrets of Recording: Professional Tips, Tools & Techniques. Amsterdam: Focal/Elsevier, 2009. Print.
Collins, Mike. Pro Tools for Music Production: Recording, Editing and Mixing. Oxford: Focal, 2004. Print.
Denova, Antonio. "Audio Production Studios." IBIS World. N.p., Aug. 2012. Web. 10 Oct. 2012.
Des. "Recorderman Overhead Drum Mic Technique." Hometracked. Hometracked, 12 May 2007. Web. 13 Oct. 2012.
Edis Bates, David. "Speech Intelligibility in the Classroom." Edis Education. Edit Trading (HK) Limited, 2010. Web. 17 Oct. 2012.
Glasgal, Ralph, and Keith Yates. Ambiophonics: Beyond Surround Sound to Virtual Sonic Reality. Northvale, NJ: Ambiophonics Institute, 1995. Print.
Gottlieb, Gary. Shaping Sound in the Studio and Beyond: Audio Aesthetics and Technology. Boston: Thomson Course Technology, 2007. Print.


Harris, Ben. Home Studio Setup: Everything You Need to Know from Equipment to Acoustics. Amsterdam: Focal/Elsevier, 2009. Print.
Holman, Tomlinson. Surround Sound: Up and Running. Amsterdam: Elsevier/Focal, 2008. Print.
Janus, Scott. Audio in the 21st Century. Hillsboro, OR: Intel, 2004. Print.
Keene, Sherman. Practical Techniques for the Recording Engineer. Hollywood, CA: Sherman Keene Publications, 1981. Print.
McCarthy, Bob. Sound Systems: Design and Optimization: Modern Techniques and Tools for Sound System Design and Alignment. Amsterdam: Focal/Elsevier, 2010. Print.
Morton, David. Sound Recording: The Life Story of a Technology. Westport, CT: Greenwood, 2004. Print.
Moylan, William. The Art of Recording: Understanding and Crafting the Mix. Boston, MA: Focal, 2002. Print.
Nisbett, Alec. The Sound Studio. Oxford: Focal, 1993. Print.
Owsinski, Bobby. The Mixing Engineer's Handbook. Boston: Thomson Course Technology, 2006. Print.
Owsinski, Bobby. The Recording Engineer's Handbook. Boston, MA: Artist Pro Pub., 2005. Print.
Pohlmann, Ken C. Principles of Digital Audio. 6th ed. New York: McGraw-Hill, 2011. Print.
Robjohns, Hugh. "You Are Surrounded." Sound on Sound. Sound on Sound, Nov. 2001. Web. 13 Oct. 2012.
Rumsey, Francis, and Tim McCormick. Sound and Recording: An Introduction. Oxford: Focal, 2006. Print.
Rumsey, Francis. Spatial Audio. Oxford: Focal, 2001. Print.
Sams, Howard W. Digital Audio Dictionary. Indianapolis, IN: Prompt Publications, 1999. Print.


Sessions, Ken W. 4 Channel Stereo: From Source to Sound. Blue Ridge Summit, PA: G/L Tab, 1974. Print.
Shepherd, Ashley. Plug-in Power!: The Comprehensive DSP Guide. Boston, MA: Thomson Course Technology, 2006. Print.
Traylor, Joseph G. Physics of Stereo/Quad Sound. Ames: Iowa State UP, 1977. Print.