Mix Analysis: 'Sol'
- lgleeson98
- May 2, 2017
- 6 min read
Hey hey!
So off the back of my previous single ‘Valleys’, amongst a variety of different creative projects, I’ve been writing a ton of new music and laying the groundwork for a lot of strong songs with potential. Today I wanted to show you one of the new tracks I’ve been working on and give a bit of a mix analysis of how I’ve improved the overall mix quality, sound design and techniques compared to my last couple of originals.
So this song is called ‘Sol’ (a WIP title at this stage), and it’s a more downtempo, minimal yet groovier take on melodic house. Below is a private stream of the track, which I’ve also given a master through Ozone.
One of the first things I’d like to cover is a technique I’ve carried over from past productions: clap layering + delaying.

In this case I’ve got two layers of claps, delayed by -15ms and -6ms respectively. What I aim to craft with this technique is a chunky, full clap sound that feels a little more organic while still allowing the transient of the kick to punch through. I’ve found this works particularly well with kicks that have clicky transients or are just bright in general. In a lot of situations where I’ve experimented with different layering and delays, giving the kick that space for its own transient also makes things easier on the compressor at the mastering stage, as there is less build-up of frequencies in that split second.
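If you want to see what those millisecond offsets actually do to the audio, here’s a rough numpy sketch of the idea. This isn’t Ableton’s track-delay implementation — just a hypothetical stand-in where decaying noise bursts play the role of the clap samples:

```python
import numpy as np

SR = 48_000  # project sample rate (the post works at 48kHz)

def ms_to_samples(ms, sr=SR):
    """Convert a millisecond track delay to a whole number of samples."""
    return round(ms * sr / 1000)

def layer_claps(layers, length):
    """Sum clap layers into one buffer, nudging each by its ms offset.

    Negative offsets pull a layer earlier -- the way the -15ms and -6ms
    track delays shift the claps so the kick's transient gets its own space.
    """
    out = np.zeros(length)
    pad = ms_to_samples(20)  # headroom so negative offsets stay in bounds
    for clap, offset_ms in layers:
        start = pad + ms_to_samples(offset_ms)
        out[start:start + len(clap)] += clap[:length - start]
    return out

# Two decaying noise bursts as stand-in clap layers at -15ms and -6ms.
rng = np.random.default_rng(0)
n = ms_to_samples(100)
clap = rng.standard_normal(n) * np.exp(-np.linspace(0.0, 6.0, n))
mix = layer_claps([(clap, -15.0), (clap, -6.0)], length=SR)
```

At 48kHz, -15ms works out to 720 samples earlier — a small shift, but enough to keep the clap bodies out of the way of a clicky kick transient.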
Another element I made sure I kept in mind from ‘Valleys’, is the mix relationship between the Kick & Bass.

The ‘mda Stereo’ plugin (part of a huge range of fantastic, free plugins you can grab here: http://mda.smartelectronix.com) in the bottom right corner lets me quickly give a channel either a comb- or Haas-style stereo-width effect, with minimal CPU usage. Some light comb filtering has given this bass sound, which was extremely mono, a lot of dimension, and it gives the kick enough space in the centre of the stereo image to really punch through.
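For the curious, here’s a rough numpy sketch of what these two widening styles do under the hood. The delay time and mix amounts are my own illustrative choices, not mda Stereo’s actual internals:

```python
import numpy as np

def widen(mono, sr, delay_ms=10.0, mode="haas"):
    """Sketch of a comb/Haas-style stereo widener.

    'haas': left stays dry, right gets a short delayed copy, so the ear
    localises the image off-centre. 'comb': the delayed copy is mixed in
    with opposite polarity per side, combing each channel differently.
    Returns an (n, 2) stereo array.
    """
    d = round(delay_ms * sr / 1000)
    delayed = np.concatenate([np.zeros(d), mono])[:len(mono)]
    if mode == "haas":
        left, right = mono, delayed
    else:  # comb
        left, right = mono + 0.5 * delayed, mono - 0.5 * delayed
    return np.stack([left, right], axis=1)
```

A nice property of the comb version: the delayed copies cancel when you sum left and right, so the mono fold-down stays essentially the dry signal — handy on a bass you still need to translate on mono systems.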
I think a lot of producers/engineers rely too much (or even solely) on EQ and sidechain compression to craft this mix relationship.
I’ve found it’s a much more effective process to: work out the balance of levels between instruments (gain staging), start panning to give instruments their own space in the stereo field, THEN flick on that mono switch and start EQing to correct, shape and enhance the overall sound and presence of the mix.
I believe what helped me get through this process much quicker, with a cleaner mix as the result, is that I nailed the sound design right from the start. Good sound sources made everything so much easier, and allowed me to focus more on the songwriting rather than continuously hitting walls cleaning up poor sound design choices.
Between the kick and the bass, there is some subtle EQ in terms of shaping each sound around their fundamental frequencies:
Kick:

Bass:

and also some compression via Ableton’s Glue Compressor on the bass. I really love the character the Glue Compressor gives to bass instruments, as it really warms up the overall sound and gives the transients some edge and punch.
The kick didn’t require any other processing apart from EQ, as it was already quite a well-treated sample - and this one in particular has quickly become one of my favourite kick sounds.
I was at a masterclass on vocal mixing towards the end of last year, and one of the main things I picked up was that it helps greatly to have a core selection of solid samples that always work and act as your ‘go-tos’. That way you’ve got the bulk of your sound design and hours of processing out of the way, which leaves more time and effort for the most important element: the song!
Over the past couple of years I’ve been building up a large sample library. However, since developing my live performance I’ve created a core selection of my preset sounds, and I’m working towards making folders of favourite samples for different genres, each with their own Ableton template, to make my workflow at the start of a new session even more efficient.

So let’s talk about the Rhodes/clav keys breakdown at 1:12. I hadn’t touched the ‘convert to MIDI’ option in a long while, but I’ve found it has improved dramatically in Live 9. The way I ended up using it on Sol: I found a soul/funk chord progression sample, and rather than just using that sample and warping it into time, I pulled the MIDI information from it! The great thing was that, with the quality of the recording, Ableton was able to pick out some of the intricacies of the performance, and it turned out sounding great on the keys. I definitely recommend giving this a go with any recordings or samples you have lying around. You never know how Ableton is going to process it, and more often than not it will come out with a crazy riff or melodic element you might never have thought of.

In terms of other techniques I’ve carried across from older projects: as soon as I start EQing different elements, I will first ask myself, ‘okay, which channels do not require or contain bass information?’ Generally, I will cut every channel other than the kick and the bass at 150Hz. Once I’m happy with how all the frequencies are working with each other and I have my bass-frequency instruments under control, that’s when I start pulling back that low-cut on certain channels to re-apply some warmth. As I’m doing this, I’ll be asking myself: does it sound better after this change? If the answer is no, I bring the low-cut back to at least 150Hz, as otherwise that channel is just taking up space needed by my important bass frequencies.
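To make the low-cut idea concrete outside the DAW, here’s a minimal numpy sketch. A real EQ low-cut is usually a steeper filter; this is just a first-order high-pass standing in to show how a 150Hz cut leaves bass content heavily attenuated while the presence range passes through:

```python
import numpy as np

def lowcut(x, sr, cutoff_hz=150.0):
    """First-order high-pass filter -- a rough stand-in for an EQ low-cut."""
    rc = 1.0 / (2.0 * np.pi * cutoff_hz)
    dt = 1.0 / sr
    alpha = rc / (rc + dt)
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = alpha * (y[n - 1] + x[n] - x[n - 1])
    return y

def rms(x):
    return np.sqrt(np.mean(x ** 2))

sr = 8_000  # small sample rate to keep the demo quick
t = np.arange(sr) / sr
low = lowcut(np.sin(2 * np.pi * 40 * t), sr)     # sub/bass content: heavily cut
high = lowcut(np.sin(2 * np.pi * 2000 * t), sr)  # presence range: passes through
```

Comparing `rms(low)` and `rms(high)` shows the 40Hz tone knocked down to a fraction of its level while the 2kHz tone is barely touched — exactly the space-clearing effect you’re after on non-bass channels.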
As I reached the pre-master stage of this project, my master channel in Ableton was peaking around -10dBFS. I like leaning towards extra headroom, and since I work in 32-bit (at a 48kHz sample rate), there’s a huge amount of dynamic range to play with before the noise floor becomes a concern. The higher the quality going into the master stage the better, as it means less destructive conversion later on and will most likely give a better result when your track is pushed through the crushing stage of SoundCloud compression (and that of a lot of other streaming services), or when your work is used in video or games and run through a codec.
In Ozone, the RMS levels coming into the software for this track were sitting around -16 to -19dB. As I’ve worked on more and more mixes, I’ve been hitting this RMS range more consistently, which is a great mini-achievement and provides ample headroom to work with.
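The peak and RMS figures above are simple arithmetic on the samples, and it can help to see that spelled out. Here’s a small numpy sketch (the 440Hz test tone and the -10dB gain are just illustrative values):

```python
import numpy as np

def peak_dbfs(x):
    """Peak level relative to full scale (0 dBFS = a sample magnitude of 1.0)."""
    return 20.0 * np.log10(np.max(np.abs(x)))

def rms_db(x):
    """RMS level in dB relative to full scale."""
    return 20.0 * np.log10(np.sqrt(np.mean(x ** 2)))

# A full-scale sine peaks at 0 dBFS and sits at about -3 dB RMS;
# scaling it by -10 dB mimics a mix peaking around -10dBFS.
sr = 48_000
sine = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
quiet = sine * 10 ** (-10.0 / 20.0)  # apply -10 dB of gain
```

The gap between peak and RMS is why a mix peaking at -10dBFS can comfortably read several dB lower on the RMS meter — real program material is far less dense than a sine, so landing around -16 to -19dB RMS leaves plenty of headroom.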

I’ve talked a lot about Ozone’s Imager in the past, so how did I use it on this track? For Sol, I didn’t end up using it on individual channels, as it would’ve eaten a lot of CPU, and between the mda Stereo plugin and the auto-panning used to create movement on a lot of the tracks, I felt the mix didn’t really need the extra stereo enhancement.
In the mastering process, I’ve used it to accentuate the movement and space I’ve given to the percussion and high-frequency elements, whilst lowering the width of anything below 80Hz. I didn’t want to touch the low-mids, as I had already positioned the bass in the mix exactly how I liked it.

When will I release this track? Well, for now you’ve got a private stream of the song right here. However, I do believe it needs something else (possibly a topline) and/or something taken away, so that a hook element can really shine and raise the entire vibe of the track when it gets to the busier sections. In the future I’m going to be doing a lot more blogs on the songwriting process, as between that and the live performance side I’m really focussing on skilling myself up and pushing myself into new perspectives and workflows in order to open up a new flow of creativity! So stay tuned for all of that, plus another mix analysis (on a punk/rock band!!) this week.
Lachy :)