Monday, November 22, 2010

The Lucky 13 Song Mixing Tips

Before I get started I just want to reinforce something I've mentioned in earlier posts - sometimes a reduction in parameters actually generates more creativity. Being aware of a set of limitations, or guidelines, can actually allow you much more creative control over your final mix. This could mean limiting the number of effects you allow yourself to use, or, more obviously, only using a particular set of effects that suits the genre or style. If you have permission to do it, perhaps editing tracks or even removing "surplus" instrumentation or vocals is the first step.

Approach-wise, ideally you want all aspects of a song to reinforce each other and create a stronger impact, and if you aren't aware of what you're doing, it's very possible (in fact more common than you think) to get a generally nice balance of instruments that somehow doesn't "gel". You can hear everything, but it lacks emotional impact.

So here's a bunch of ideas to think about next time you're mixing a song - there are many more ideas and concepts to experiment with than these, but I stopped myself before the post became a novel.

1  Know what the song's about. Clues are in the lyrics. Knowing what it's about gives you the opportunity to amplify the concept rather than inadvertently fighting it. That doesn't mean you have to "follow" the lyrics with the mix in a literal sense - you might do nothing at all in that regard, but at least you won't be fighting the meaning of the song without even realising it, and when it comes to trying to think of creative mix directions, it's yet another clue to help you.

2  Know the context of the music. What's the genre or style of the artist? How does it relate to the artist's identity? Being aware of this makes it much more likely that you'll promote the artist's identity and overall concept, plus the artist will be more likely to appreciate what you do with the mix. For example, does the artist exemplify "authenticity", where a raw, "character" sound with any intonation problems left unfixed is most desirable? Or is it about slick and smooth production?

3  Be adventurous. A mix is not a simple balance of instrument levels - it's about featuring the aspects you think the listener would like to hear, or more accurately needs to hear, at any given section of the song. Pretend it's a movie - how do you present each section of the song? Don't be scared to go "over the top" with effects, fader moves and featured mix elements - you can always tone it back if need be. Don't be scared to turn the vocal up loud - trying to hide a weak vocal makes it even worse. Even ugly actors have to have close-ups in a movie to make it all work.

4  Think about texture and tone. It's partly tone, partly level, partly how dominant something is in the mix. If you compress something, its texture changes. Listen out for it tonally as a sound rather than just checking its variation in level. How pervasive is it compared to everything else, despite its volume in the mix?
How does it link into the overall texture of the song? Textures are like a tonal colour palette - you probably don't want to mix a neon green element in with some nice earth tones (remember there are no rules!), but then again you don't want everything the same shade of beige.

5  It's about melody. Even in the most distortion-fest mixes, our human nature will use our built-in pattern-detecting algorithms to extract a melody out of it somewhere, whether it be in the movement of the harmonics in a wall of guitar noise or in a groovy bassline. Make sure there's one dominant melody at any given instant, or if there's more than one, that they aren't fighting and canceling each other out.

6  The pocket. It's more than something to put your wallet in. It's that magic interaction of instruments when it all suddenly locks into a groove. Spend some time adjusting the relative timing of instruments to see if you can help the groove "gel". You'll know when it happens because it's magic and you'll start moving with the music whether you want to or not. Note that Beat Detective and other forms of quantization can fight this effect - a pocket is "felt" rather than sitting on an exact grid. That said, if the playing is too loose then a timing grid is a step up.

7  Keep it simple, stupid. Less is more. These things are fundamental truths, despite our over-familiarity often leaving them as meaningless statements in our minds. Think about the mix as a photo - the more people you want in the photo, the smaller they'll have to be. Don't be scared to bring the main things to the foreground and push other things back to the point of blurriness, or even hide them behind the main elements. A good mix is not about individual band members' egos, it's about the overall blend. When you think about it, the individual band members have the least idea of what the mix should sound like - they all hear completely different versions of it depending on where they stand or sit when they perform.

8  Three "Tracks". Back in the olden days, after mono and stereo, there were three tracks. One was for "Rhythm" (and could include drums, bass, percussion and rhythm guitar for example), one for Vocals and one for "Sweetening" which might be things like brass, strings, lead instruments etc. This strategy is still a great one to keep in mind for mixing. It forces you to think about your rhythm section as one single thing, and you need to make it all gel. Bass needs to lock in the pocket with the kick drum. Sweetening nowadays is whatever else you need outside rhythm and vocals. Think carefully about which mix elements fit into each of these three roles, and if all three are already populated - maybe it's time to do some cutting. Note that some instruments such as guitars might switch between modes depending on what they're playing at the time - rhythm, fills or lead.

9 One thing at a time. Rather than thinking of one of the aforementioned three tracks as just "Vocals" perhaps it's better to look on it as "Melody". The melody line often chops and changes between vocal, instrumental fills and solos. If you think of these three elements as playing a similar role at different times in the song, it makes it easy when trying to decide on levels/sounds between the three. It also highlights that you shouldn't have any of those melodies crossing over each other and fighting at any point - keep 'em separated!

10  Getting the bass sitting right is tricky - especially when it needs to work on both large and small speaker systems. Try mixing the bass while listening on the smallest speakers you have, to get it sitting at the right level. Then adjust the tonal balance while listening on bigger speakers to rein any extreme frequencies back in.

11  Don't over-compress everything. Listen to the TONE while compressing each instrument and keep it sounding natural if possible. Pay close attention to the start and end (attack and release) of the notes of each instrument you compress. Your final mix should sit at an average RMS level of about -12 dBFS, with peaks no higher than around -3 dBFS. Leave the mastering engineer to do the final compression and limiting. Remember to leave dynamic range in the mix - contrast! Our ears need some sort of contrast to determine what's loud and soft. If you hammer all the levels to the max you may as well just record the vacuum cleaner at close range and overdrive the mic/preamp. Hmmm. Might have to try that.
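If you want to sanity-check those numbers outside your DAW's meters, here's a minimal Python sketch (assuming numpy and a float buffer normalised to ±1.0 full scale - my assumptions, not any particular DAW's convention) of how RMS and peak translate to dBFS:

```python
import numpy as np

def mix_levels_dbfs(samples):
    """Return (rms_dbfs, peak_dbfs) for a float buffer scaled to +/-1.0 full scale."""
    rms = np.sqrt(np.mean(samples ** 2))            # average level across the buffer
    peak = np.max(np.abs(samples))                  # largest single sample
    to_db = lambda x: 20 * np.log10(max(x, 1e-12))  # guard against log(0) on silence
    return to_db(rms), to_db(peak)

# Example: a 440 Hz sine at half of full scale
t = np.linspace(0, 1, 44100, endpoint=False)
rms_db, peak_db = mix_levels_dbfs(0.5 * np.sin(2 * np.pi * 440 * t))
print(f"RMS {rms_db:.1f} dBFS, peak {peak_db:.1f} dBFS")  # approx -9.0 and -6.0
```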

12  Easier than Automation. These days it's easy to spend inordinate amounts of time tweaking automation changes on instruments or vocals between different sections of a song (e.g. adding more reverb to the vocals in the chorus, or adjusting rhythm gtr levels in the bridge). With today's digital audio workstations, extra tracks are usually in ready supply, so rather than fluffing about with automation for a specific section of the song, why not move that part over onto a duplicated track instead, then make whatever changes you need to suit that section. Much quicker than continually mucking around with automation on the same track. By the way - make sure your mix is dynamic. A mix is a performance in itself, not a static set of levels.

13  Use submix busses for each element of the mix, e.g. a drum subgroup, guitar subgroup, vocal subgroup etc. Rather than sending all your drums straight to the L/R or Stereo mix, first send them all to an Aux return channel, then send that Aux to the L/R/Stereo mix. (Tip: disable solo on the Auxes.) This makes it simple to do overall tweaks to your mix even after you've automated levels on individual tracks.
You need to be careful about aux effects returns and where they come back though, as their balance might change slightly if you adjust the instrument subgroups.
And hey, what about creating just three subgroups - Rhythm, Melody, Sweetening? Let me know if it works ;o)
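If the routing's hard to visualise, here's a toy Python/numpy sketch of the gain structure - tracks sum into subgroup buses, each bus gets one trim, and only the buses feed the stereo mix. The track names and trim values are purely illustrative:

```python
import numpy as np

def sum_to_bus(tracks, trim_db=0.0):
    """Sum track buffers into one subgroup bus, then apply the bus trim."""
    gain = 10 ** (trim_db / 20)              # dB to linear gain
    return gain * np.sum(tracks, axis=0)

n = 44100                                    # one second of placeholder (silent) audio
kick, snare, overheads, lead_vox, backing_vox = (np.zeros(n) for _ in range(5))

drum_bus  = sum_to_bus([kick, snare, overheads], trim_db=-2.0)  # whole kit down 2 dB
vocal_bus = sum_to_bus([lead_vox, backing_vox],  trim_db=+1.0)  # all vocals up 1 dB

stereo_mix = drum_bus + vocal_bus            # buses, not individual tracks, feed the mix
```

That single -2 dB drum trim rescales the whole kit without touching any of the fader automation on the individual drum tracks - which is exactly the point of tip 13.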

Tuesday, June 8, 2010

Relieving Threshold Shift (Temporary Hearing Loss) with Acupressure

This is a handy tip for those moments when you've gone to see a loud band and forgotten to take earplugs, and one that I've used numerous times to "reset" my ears after a gig. I was shown this trick about 20 years ago by a friend and have been using it ever since, and in preparing this blog I've also found plenty of supporting material on the web that reinforces the basic concept. It has definitely and audibly worked for me and for others I've shown it to, and it really can't hurt to try. Actually, it does hurt a bit when you find the right spot to press, and I have to admit it looks a bit stupid when you're doing it, so best not to do it when actually walking out of the gig - at least wait till you're in the car where nobody can see you.

Press and hold the area shown in the diagram - it's in the hollow just in front of the ear lobe. If you press the right spot it will feel tender, and after a few minutes you should feel the "cotton-wool" feeling diminish and your hearing begin to return.

Threshold shift, for those who don't know, is the muffled high frequencies, pressure or ringing in your ears that you notice as you're walking out of a loud gig. It's extremely dangerous in the long term, and has even more significance nowadays with long-term headphone and ear-bud use.

Long-term exposure to loud sounds:

What happens is that when the brain perceives loud noise, it attempts to protect your hearing by tightening the muscles inside the ear to reduce the amount of sound passing through the ear mechanism. A fantastic system really, but not designed for a lifetime of loud music or industrial noise.

This muscle constriction can also restrict blood flow to the inner ear, and if it happens repeatedly it can cause long-term damage to the nerve cells in the inner ear, which eventually end up dying. Fatally. As Motorhead almost said - "Killed by Deaf".

Seriously - long-term noise exposure can cause permanent hearing damage.

This acupressure trick relieves the constriction of the muscles around and in your ear, and hence allows full blood flow again to the nerves in the ear, hopefully extending the life of your hearing a bit longer. Obviously it won't suddenly reincarnate the dead nerve cells in your inner ear, but if used early and often enough it will hopefully at least minimise the damage somewhat. No guarantees of course.

* Threshold shift and its associated long-term damage aren't the only ways to hurt your hearing. I have met people who have lost hearing from a single exposure to a loud impulse sound (someone pressing the wrong button on the mixing console and blasting maximum volume through the headphones, or a massively loud click through a PA system at a gig), as well as others who have ended up with tinnitus (ringing in the ears), which can last FOR THE REST OF YOUR LIFE. Apart from these problems there are other odd things that can happen related to your inner ear - upsetting your sense of balance, for example. Not much fun - I had continuous vertigo for a few days when I had a nasty flu last year, no laughing matter when you ride a motorcycle to work.

Noise vs Music - I've often pondered this while being assaulted by a band that sounds like crap: as long as you perceive the music as, well, music, your brain isn't trying to shut your ears down, but if the band sucks and it sounds like obnoxious noise, they're effectively killing your ears! Obvious solutions - drink more alcohol to thin the blood and keep that oxygen getting to the ear cells, or try to psych yourself into believing the band is awesome, thereby fooling your own brain.
My wife says "why don't you just leave?", but I view that as defeatist.

Factoid 1: Research disputes what I just said above. Studies have shown that musicians suffer as much hearing damage as people exposed to industrial noise of equivalent level. I'd argue that musicians are never exposed only to music they like (we usually all have to share gigs with other bands), not to mention other loud noise, so it's hard to prove this either way without adequate methods and controls.

Factoid 2: Published Acceptable Exposure Time vs Sound Level graphs are based on industrial noise, not music. At 110 dBA your acceptable daily exposure time is 1 min 30 seconds!
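For the curious, that 1 min 30 sec figure falls out of the standard occupational formula behind those graphs - an 85 dBA / 8-hour baseline with a 3 dB exchange rate (every extra 3 dB doubles the sound energy and halves the allowed time). A quick sketch, assuming that NIOSH-style criterion:

```python
def allowed_minutes(level_dba, base_level=85.0, base_minutes=480.0, exchange_db=3.0):
    """Permissible daily exposure under an 85 dBA / 8 h criterion, 3 dB exchange rate."""
    return base_minutes / 2 ** ((level_dba - base_level) / exchange_db)

for level in (85, 94, 100, 110):
    print(f"{level} dBA: {allowed_minutes(level):.1f} min")
# 85 dBA: 480.0 min, 94 dBA: 60.0 min, 100 dBA: 15.0 min, 110 dBA: 1.5 min
```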

Other (more serious) solutions:
Obviously, considering all this, the best solution is to avoid loud sound or wear appropriate hearing protection. Go get some proper earplugs. The custom-moulded "musician's" earplugs are pretty darn good - relatively "flat" and uncolored - though custom-fitted plugs can be quite expensive (they do last for years with careful use), and there are slightly cheaper options as well. The problem I've found with them is that you can truly hear how out-of-tune the singer is when watching a live band, which might slightly ruin your enjoyment or "perception of talent". That said, I've been to gigs wearing -15dB custom plugs and my eardrums have still been distorting painfully at times. You can get plugs with -25dB or more reduction, and some come with both inserts as options so you can swap them.
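Tying that back to Factoid 2's formula: attenuation buys you exposure time exponentially. Reusing the same NIOSH-style criterion as above (same assumptions, same caveats):

```python
def allowed_minutes(level_dba):                # same NIOSH-style criterion as above
    return 480.0 / 2 ** ((level_dba - 85.0) / 3.0)

print(f"{allowed_minutes(110):.1f} min")       # ~1.5 min unprotected at a 110 dBA gig
print(f"{allowed_minutes(110 - 15):.1f} min")  # ~47.6 min behind -15 dB plugs
print(f"{allowed_minutes(110 - 25):.0f} min")  # the full 8 h baseline with -25 dB plugs
```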

And finally, an observation - isn't it weird how society is au fait with people wearing glasses to correct their vision, yet wearing a hearing aid has a stigma attached to it? You see graphic artists, photographers, directors and numerous other industry professionals (who rely on visual acuity!) wearing glasses, but would you trust an audio engineer with a hearing aid? Hmmmmm.
Not that I need one YET, just paving the way for the future.

References:
Acupressure Points

Sunday, May 16, 2010

The Apogee GIO and Mainstage Experiment Part 2

Well, I got through my solo gig in one piece and with reasonable success, but some things became immediately apparent that I will definitely change for next time.


My Setup:

17" Apple MacBook Pro (mine's an older one) running MainStage 2

PreSonus Firestudio audio interface. It uses a FireWire connection, which in my experience gives lower latency than most USB-based interfaces (see the quick buffer-latency arithmetic after this list).

Fender guitar, and for vocals a Shure Green Bullet microphone, both plugged into the PreSonus
(I usually have another Shure Beta58 mic set up for percussion loops, but I didn't bother for this gig).

Novation 49SLII keyboard controller, connected (and powered) via USB to the laptop, for playing the occasional keyboard line and controlling levels etc.

The Apogee GIO connected (and powered) via USB to the laptop for playing backing and loops, with my expression pedal connected to it for guitar bits.
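For a rough feel for the latency numbers mentioned above (back-of-envelope arithmetic, not measurements of this particular interface): the buffering component of latency is simply buffer size divided by sample rate, on top of whatever driver and converter overhead the interface adds. A quick Python sketch:

```python
def buffer_latency_ms(buffer_samples, sample_rate=44100):
    """One-way buffering latency in ms (driver/converter overhead not included)."""
    return 1000.0 * buffer_samples / sample_rate

for buf in (64, 128, 256, 512):
    print(f"{buf:>4} samples -> {buffer_latency_ms(buf):.1f} ms one-way")
# 64 -> 1.5, 128 -> 2.9, 256 -> 5.8, 512 -> 11.6 (ms, at 44.1 kHz)
```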



Come time to perform, the laptop conformed to Murphy's law of gigs and played up despite being solid at every rehearsal, and I had to boot it three times before it played nice - including one forced shutdown when it froze up.

The Novation keyboard comes with its own Automap software, which runs automatically when you start a MIDI-compatible application so it can act as an intermediary between the application and the keyboard - but in this case it locked up searching for the Novation (which was plugged in with all its lights going), forcing the restart.
Of course it goes without saying that this was an agonizingly long time while standing on stage with my guitar waiting to play.

Also - for some reason the GIO didn't recognise my expression pedal - a bit of a major, since I need it to cross-fade between some of my guitar tones. I have it set up so it either cross-fades between two separate channel strips with, for example, verse and chorus guitar patches (rather than a complete patch switch - I often like to mix in a bit of "clean" guitar with the "distorted" guitar, as it adds clarity), or the pedal turns up a second "layering" channel strip with some pad-like or weird character guitar effects at appropriate times in the song.
I suspect the GIO likes to see the expression pedal plugged in as it fires up, and on the third laptop reboot it finally discovered it (after I had decided it must be the cable!). The GIO doesn't have a power switch - it just turns on when you plug it in.
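For anyone curious about what that pedal cross-fade is doing, here's a sketch of an equal-power crossfade law - a common choice for this kind of blend, though I'm not claiming it's the exact curve MainStage uses - mapping a 0-to-1 pedal position onto the two channel-strip gains:

```python
import math

def crossfade_gains(pedal):                 # pedal position: 0.0 (heel) to 1.0 (toe)
    """Equal-power crossfade: both gains sit near -3 dB mid-travel."""
    clean = math.cos(pedal * math.pi / 2)   # e.g. the "clean" channel strip
    dirty = math.sin(pedal * math.pi / 2)   # e.g. the "distorted" channel strip
    return clean, dirty

for p in (0.0, 0.5, 1.0):
    clean, dirty = crossfade_gains(p)
    print(f"pedal {p:.1f}: clean {clean:.2f}, dist {dirty:.2f}")
# mid-travel gives ~0.71/0.71, keeping perceived loudness roughly constant
```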

The Novation and the GIO both get their power from their USB connections, and although it normally doesn't seem to make any difference, I made sure to turn on the Novation well after the laptop booted on that third attempt. At home I also usually have a computer keyboard, wireless Bluetooth mouse dongle and external hard drive all running happily off USB power as well, so the lappie should be able to run just the Novation and GIO.

Mix Issues

Once it was all up and going, the issues were mainly mix-based.

The trick, of course, is getting something that works out front as well as in the foldback monitors, and although it actually sounded fine in the foldback, the vocals were apparently too quiet out front.
Trying to turn them up brought the mic a bit too close to feedback, which meant turning down the backing instead, which meant some of the backing became just a bit TOO quiet to hear. One song had a triangle rhythm intro that ended up way too quiet, and I got out of sync - needing a restart of the song. A wee bit embarrassing.

So - before the next gig the main thing I will do is:

Create separate audio outputs to the PA system for the different mix elements.

Or at the very least create a separate physical output for the vocals, since they're one of the most critical things to get happening properly in both monitors and out front.


For the gig I did actually create separate subgroups for each type of sound (Vocals, Guitars, Drums, Backing, Keys, FX) so I could use the nifty little faders on the Novation to balance the overall mix, but it wasn't enough. Each element has to be a separate output from the audio interface into its own channel on the PA mixer.

Backing Tracks

Apart from that, the only other niggles were with the backing tracks - their start times were a little inconsistent, thanks to the too-quiet monitoring.

I have it set up so I can switch between sections of a song using the GIO's "wait for next bar" setting - meaning you have to hit the foot-switch within the last bar before you want the next section to start. If you're a fraction too early or late, the whole backing is out by a bar.
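To make that timing window concrete, here's a toy sketch of the idea (my own illustration of bar-quantised switching, not MainStage's actual code): the press is simply held until the next bar line, so pressing anywhere inside the final bar lands the change on time, while pressing a fraction after the downbeat costs you a whole extra bar:

```python
import math

def next_section_start(press_beat, beats_per_bar=4):
    """Quantise a footswitch press (in beats from song start) to the next bar line."""
    return math.ceil(press_beat / beats_per_bar) * beats_per_bar

print(next_section_start(15.0))  # inside the last bar -> section changes at beat 16
print(next_section_start(16.1))  # a fraction too late -> the change waits until beat 20
```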

I'm still not sure of the ideal way to set these backing tracks up. I've tried having the entire backing for the song as one track, but it leaves no flexibility for jamming out on sections or padding it out a bit if you stuff up or something.

I've also tried having just the one backing track with some song section markers that you can cycle within when necessary, and to be honest that wasn't too bad, so I may go back to that method.

The beauty of the way I was doing it this time though is that you can jump to any section of the song if you feel like it, but that flexibility comes with its own risks and problems.

The thing is to try to keep it all as simple as possible for the performance itself, so I'll need to experiment a bit more with the ideal method.


Finally, I'd like to come up with a better system for using Ultrabeat drum machines in my setup and find a way to simply switch between patterns - I might map the bottom few keys on the Novation for that purpose or perhaps assign some of the many buttons on it.

Overall, I'm pretty pleased with the whole setup apart from those few tweaks I'll need to make.
I really like MainStage 2 - it's an incredibly powerful live performance program with only a few minor bugs that will hopefully be sorted soon.

Wednesday, April 21, 2010

The Apogee Gio and Mainstage Experiment

I have a solo gig coming up and have decided that being yet another singer-songwriter is boring as hell. Especially as I haven't been blessed with one of those voices that could make singing the shopping list sound awesome.

So I need to use everything in my power to add value and variety to the gig - hence the MainStage experiment.

I wanted to be able to go from simple vocal and guitar to full-on backing based on my recorded songs, while keeping it all "live" and interactive so I can jam it out a bit if the opportunity arises.

The beauty of MainStage 2 is that it's basically the guts of Logic Pro bundled into an application for performing live. That means you get the same instruments and effects, plus any of your third-party plug-ins as well.


It means you can also add bounced backing tracks for your songs - with markers that you can loop around or jump to. The markers allow you to see what song section's coming up next in case you forgot.
And there's a cool Looper plug-in that recreates the current trend of those dinky guitar looping pedals, letting you build up your own musical or percussive layers during a live set. You just play something in, hit the pedal and it loops around while you play something over the top - or you can keep recording more layers, undo the last one, or clear it all and start fresh.
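Conceptually a looper like this is tiny - a fixed-length buffer that keeps summing new passes on top of old ones. A toy Python sketch of the overdub/undo/clear idea (purely my illustration; nothing to do with how MainStage's Looper is actually implemented):

```python
import numpy as np

class ToyLooper:
    """Fixed-length overdub looper: each pass is summed onto the existing loop."""
    def __init__(self, loop_len_samples):
        self.loop = np.zeros(loop_len_samples)
        self.layers = []                      # kept so the last pass can be undone

    def overdub(self, audio):
        layer = np.zeros_like(self.loop)
        n = min(len(audio), len(layer))
        layer[:n] = audio[:n]                 # trim or zero-pad the pass to loop length
        self.layers.append(layer)
        self.loop += layer

    def undo(self):
        if self.layers:
            self.loop -= self.layers.pop()    # peel off just the newest layer

    def clear(self):
        self.layers.clear()
        self.loop[:] = 0.0                    # wipe everything and start fresh
```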


MainStage allows you to create your own user-interface - you can customise what you are looking at on the computer screen, and also create objects that will be controlled by whatever pedals, buttons, knobs, faders or keyboards you have connected to it in the real world.

Hence me also getting an Apogee Gio - this allows me to have 12 buttons on the foot controller that I can assign to whatever I need to per song, and I can also plug in my expression pedal to do my chucka-chucka-wah-wah thing.


The Gio also has a built-in audio input for guitar or bass, which actually sounds great. Apogee are renowned for their great-sounding converters and it's nice to find even their cheap-ish ones are good. Definitely a good way of getting your instrument into MainStage.

The only hassles came when I wanted to plug in a microphone as well as my guitar - meaning I had to use a second audio interface, in this case an M-Box Pro.

Apple's OS X allows you to combine two separate interfaces into an aggregate device so they appear as one source to the audio application, but no matter which way I did it, they didn't play nice with each other, eventually degrading the audio quality.

So I had to ditch the awesome sound of the Apogee for the more average M-Box one.
Oh well - at least the Gio buttons still worked and looked pretty.
The little LED indicators change color to suit what the pedals are mapped to in MainStage - ooooh aaaaah....

When you use the Gio with Logic, and apparently GarageBand as well, the foot controls are automatically mapped to Record, Play, Rewind, Fast Forward etc. for hands-free recording, which is a bonus.

Build quality of the Gio is great by the way - it's a solid little unit - quite heavy in fact, so it's going to stay put on stage, and feels fairly indestructible.

So, for the moment I'm still wrestling my way through customizing MainStage for the upcoming gig - there's still a trick or two I need to learn. There's a Concert/Set/Patch hierarchy that's important to get your head around (otherwise the backing stops when you change guitar patches, for example), and the synchronisation options for backing tracks and loops have some quirks.

But I'm getting there bit by bit, so I'll let you know how it goes...

Links:
The Gio
MainStage