1st supplement to “The hitchhiker’s guide to music production”
It’s been a while since I’ve written anything new for the website.
As I’ve probably mentioned a few times, I pride myself on trying to learn something new every day. Well, “pride” might not be the right word, since I think that continuing to work on knowledge and skills is just the right approach to life in general.
Therefore I decided to think about what I learnt in this past year or so and make a sort of list, since apparently that’s the only way I’m able to express concepts that are important to me.
When I first started working on and mixing songs I was always editing everything to the grid. I edited most notes and/or hits and quantised them even if they were just slightly off. I would use whatever tool best fitted the task, such as cut and drag on the grid, Beat Detective or elastic audio. I thought that since I had those tools I might as well use them. I would also replace all inconsistent hits with better ones. I soon realised that I was just killing the groove. So I started a very simple practice that is supposed to be obvious to most people, but apparently it wasn’t to me: I started to have a listen before doing anything else. I soon realised that some performances didn’t need editing at all, and that it was much better to adjust the grid to the performance, for automation’s sake, than to do the opposite. This is very basic and it might seem stupid, but at the beginning I had almost forgotten what recording music is all about: capturing a musician’s performance. There are a few further considerations. For example, let’s say that we are listening to some band’s recorded material; there are usually two cases:
They might have played all together, and if the interplay between them is tight enough there’s seriously no need to edit the audio.
They might have played separately, and usually the drums are recorded first. In that case we need to understand whether the rest of the instruments were played to the click or to the drums themselves; in the latter case there’s still no need for major editing.
With this simple practice the songs I was working on started to have a better feel, their own life and groove. They started to offer a much better listening experience.
One thing the digital age has brought us is a total, and sometimes destructive, abundance of available tools. The previous point about editing surely falls into this category. Another topic that falls into the same category is plugins. I used to put an EQ and a compressor on every track for the sole reason that they were easily accessible through a drop-down menu. At the same time I completely forgot about basic things like checking for phase issues and using simple filters to give each instrument its own space in the mix. As I previously stated, I started having a listen and doing rough mixes without any processing before doing anything else, and after that, realising that some tracks didn’t need compression and/or EQing was simple enough. Now I use just what’s needed and I try to keep my mixing sessions as simple as possible.
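To make the phase point a bit more concrete, here’s a tiny sketch. The mic setup and numbers are entirely made up for illustration; in a real session you’d flip the polarity switch and trust your ears, not a script. It just shows why two mics on the same source, one capturing the drum head moving the opposite way, can fight each other until you flip one of them:

```python
import math

sr = 44100
n = sr // 10  # 100 ms of signal

# Hypothetical top/bottom snare mics: the bottom mic often captures the
# head moving the opposite way, i.e. with inverted polarity.
top = [math.sin(2 * math.pi * 200 * t / sr) for t in range(n)]
bottom = [-0.8 * x for x in top]  # quieter, polarity-inverted copy

def rms(sig):
    return math.sqrt(sum(x * x for x in sig) / len(sig))

as_recorded = rms([a + b for a, b in zip(top, bottom)])
flipped = rms([a - b for a, b in zip(top, bottom)])  # bottom polarity flipped

# If flipping the polarity makes the sum noticeably louder, the two
# tracks were partially cancelling each other.
print(flipped > as_recorded)  # True
```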
Talking about compressors: I discovered they are my favourite type of processing, as they are extremely powerful for shaping the sound of a drum kit. What I would do in the early days of my career as a sound engineer was think about the nature of the sounds I was listening to. Let’s say I was listening to some bass drum or snare hits; since they are transient-based instruments, I would use a fast attack on the compressors to control their amplitude. However, I would often end up with material that sounded OK on its own but hugely struggled to cut through the mix, since I was also using ratios way above what I’m accustomed to now. In the meantime I often heard the phrase “let the drums breathe”, so I started to rethink how I approach compressors, how they affect the sound, and what I’d need to preserve to reach the impact I wanted. I discovered that often letting the attack of the note pass through the compressor would give me the result I was looking for. Obviously none of this should be taken as a general rule, as you need to adjust the settings to what you are working on and what kind of instrument you want to affect; e.g. I would behave differently when processing a vocal track. Also, right now I rarely compress cymbals and overheads unless we are talking about room mics. I love squashing room mics’ sound to reach that huge impact that I like so much.
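For the curious, here’s a rough sketch of why attack time matters so much. The compressor below is a heavily simplified, hypothetical feed-forward design with one-pole envelope smoothing, not any particular plugin; the drum hit is synthetic. The point it demonstrates: a very fast attack clamps the initial transient, while a slow attack lets it through before the gain reduction catches up.

```python
import math

def compress(signal, threshold_db=-20.0, ratio=4.0, attack_ms=1.0,
             release_ms=100.0, sample_rate=44100):
    """Toy feed-forward compressor: smoothed level detector + gain computer."""
    attack_coef = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    release_coef = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    env_db = -120.0  # smoothed signal level in dB
    out = []
    for x in signal:
        level_db = 20 * math.log10(max(abs(x), 1e-9))
        # Rise with the attack coefficient, fall with the release one.
        coef = attack_coef if level_db > env_db else release_coef
        env_db = coef * env_db + (1 - coef) * level_db
        over_db = max(env_db - threshold_db, 0.0)
        gain_db = -over_db * (1 - 1 / ratio)
        out.append(x * 10 ** (gain_db / 20))
    return out

# A synthetic "drum hit": a 100 Hz tone with a fast exponential decay.
sr = 44100
hit = [math.exp(-t / (0.05 * sr)) * math.sin(2 * math.pi * 100 * t / sr)
       for t in range(int(0.2 * sr))]

fast = compress(hit, attack_ms=0.1)   # clamps the transient almost instantly
slow = compress(hit, attack_ms=30.0)  # lets the initial attack pass through

print(max(abs(x) for x in fast) < max(abs(x) for x in slow))  # True
```

With the slow attack the first peak survives nearly untouched, which is exactly the “letting the attack of the note pass through” behaviour described above.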
One topic that always baffles me, and that I keep thinking about, is which instrument you start mixing from. I know people who start from drums and others who start from vocals. I’m generally part of the former group: I start with drums. The reason is mostly personal taste and my experience as a listener. I listen to a lot of instrumental music, and my approach is always to get the beat and the groove done first, keep them as the core of the song, and then treat every melodic instrument/voice so as to give each its own space and importance. I do this because that’s the order I focus on when I listen to music. Up until recently I would start working on drums beginning with their essential components: kick and snare. However, I discovered recently that sometimes it’s useful to switch perspective, look at the big picture and start with the overheads and room mics, then focus on the close mics to add what’s missing, like impact, definition and clarity. I realised that sometimes I would end up with a very busy mix from just the close mics and then struggle to find a proper space for the overheads. So in those cases it’s useful to change approach.
New paragraph, new topic: I seriously don’t understand why people send me stereo tracks for mono instruments. I understand the pursuit of wider mixes, and some people might think that having every instrument in stereo with some sort of movement in the image is the way to go. However, having everything as stereo tracks limits the way you can pan them, especially with some sort of stereo modulation on, as those tracks, downmixed to mono, can lead to phase issues. The best way to achieve wider mixes is simply to pan mono instruments, possibly panning complementary parts opposite each other. A stereo bass sound is almost always pointless, as most of its energy goes to the centre anyway. A stereo hi-hat with a stereo widener on is also useless most of the time. Also watch out for stereo synth presets: they might sound great on their own, with a lot of movement, but most of them keep most of their energy in the centre and are therefore useless for widening the stereo image and giving each instrument its proper space. Please send mono instruments as mono and let me take care of widening the mix. I made the same mistake myself in the past, so don’t worry, we all have something to learn.
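Here’s a tiny sketch of the worst case, assuming a deliberately naive “widener” that just inverts the polarity of one channel (a hypothetical, extreme example, but real wideners that rely on channel differences fail the same way to a lesser degree). On headphones it sounds huge; summed to mono, the instrument disappears completely:

```python
import math

sr = 44100
n = sr // 10  # 100 ms
tone = [math.sin(2 * math.pi * 440 * t / sr) for t in range(n)]

# Naive "widener": right channel is a polarity-inverted copy of the left.
left = tone
right = [-x for x in tone]

# Mono downmix (L+R)/2: the out-of-phase channels cancel entirely.
mono = [(l + r) / 2 for l, r in zip(left, right)]

def rms(sig):
    return math.sqrt(sum(x * x for x in sig) / len(sig))

print(round(rms(left), 3))  # 0.707 (RMS of a full-scale sine)
print(rms(mono))            # 0.0 — the instrument vanishes in mono
```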
On a similar topic, refrain from applying reverbs and other spatial effects to the tracks themselves, as they greatly limit my freedom to achieve clarity and definition during mixing.
Furthermore, realise that prominent sub frequencies are not the way to achieve a powerful bass or kick sound, as they clog the low end of the spectrum with material that is mostly undefined. Having significant material in the 30 Hz region is not the same as having nice subharmonics, which are achievable in different ways. That nice low end you hear in so many chart songs is quite surely not achieved by transposing your synths down an octave. Let me take care of enhancing your low end.
I know I’m far from being very good at what I do; that’s why I try to think about it logically and see if I can get any better, and this is what I really love. One thing I’m trying to achieve now is more dynamic mixes with a greater crest factor. I’ll keep you posted on how this goes.
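For anyone unfamiliar with the term, crest factor is just the ratio between peak level and RMS level, usually expressed in dB. A quick sketch with synthetic signals (not any real measurement standard, just the basic arithmetic):

```python
import math

def crest_factor_db(signal):
    """Crest factor: peak level over RMS level, in dB."""
    peak = max(abs(x) for x in signal)
    rms = math.sqrt(sum(x * x for x in signal) / len(signal))
    return 20 * math.log10(peak / rms)

sr = 44100
# A steady sine has a crest factor of about 3 dB...
sine = [math.sin(2 * math.pi * 100 * t / sr) for t in range(sr)]
# ...while a decaying drum-like hit is far more "peaky".
hit = [math.exp(-t / (0.02 * sr)) * math.sin(2 * math.pi * 100 * t / sr)
       for t in range(sr)]

print(round(crest_factor_db(sine), 1))  # 3.0
print(crest_factor_db(hit) > crest_factor_db(sine))  # True: transients raise it
```

A heavily squashed mix pushes everything towards that steady-sine figure; preserving transients keeps the number, and the perceived punch, higher.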
Good luck to everyone and keep experimenting with your craft.
I'm a sound engineer based in London, UK.
Music has always been my main passion, and now, after more than 14 years of studying music and then audio engineering, I'm finally trying to make a living as a sound engineer/sound designer.