Monday, October 19, 2020

"Purge and disable", a.k.a "How I work, the 2020 edition".

 

As mentioned in my earlier blog text, my previous workflow consisted of different phases where I demoed in one project, arranged in another, mixed in another and mastered in yet another one. With this new system, I wanted to combine all those parts under one project as much as humanly possible. So it's not only about playable templates but more about a complete change of my workflow as I used to know it.
This text also contains pretty much zero metal stuff and may bore the fuck out of you.


The purpose is the following:

1. To create a project which also serves as a quick demo platform.
2. The ability to add and remove different VST instruments, with a flexible routing system and pre-mixing options.
3. A (mostly) unified articulation and programming system for maximum efficiency.
4. Being able to export the different musical parts as stems.
5. Being able to divide and export the different musical parts horizontally according to my markers.
6. The ability to still do the final mix in audio and replace some of the instruments with real ones.
7. Delivering final in-the-box masters on demand for different purposes.


 

 

1. THE SKETCHING AND DEMO PHASE

Having played the piano since childhood, it's the instrument I feel I can utilize best when trying to express and improvise my musical ideas. So unless I know exactly what I'm doing, I start with that until I've finally crafted something worth polishing and re-evaluating. But being a spoiled brat of today's technology and taking some shortcuts, I also keep quick sketching tracks for strings, brass and woodwind ensembles in case I want to augment or even skip the piano completely. Usually, though, I leave them alone and work only with the piano.

The first ideas start with only those sketching tools and the tempo track visible. I usually work so intuitively that many times, when I'm finally done with the demo, I need some sort of transcription of the initial idea in front of me to remember what's actually coming next. For that, I've found Cubase's chord track an invaluable tool. Even though my music usually isn't strictly chord-based, pretty much every musical idea in the world can be transcribed and reduced down to chords, and being a very visually oriented person when it comes to analytical tasks, I find it extremely helpful.

The most valuable use I've found for the chord track, though, is when I need to write a quick transcription of someone else's song that I'm performing on top of. It's a fucking lifesaver on those sessions!

I recently started to use the different "views" (which are also synced with the mixer channels): if you look at the upper left side of the picture, which I conveniently circled in red, you can see the word "SKETCH" there, which means that I've hidden every single track I don't need in this particular view in order not to clutter the space or distract me. I have different views for sketching, working and mixing, and they make my life way easier than it used to be when working with a kazillion tracks.


 

2. THE WORKING PHASE: SAMPLES, ROUTING AND PRE-MIXING

While this is technically called a template, it lacks all the instruments. Yet. When I'm done with the demo phase, I think about the following things:

- The instrumentation in general: is this traditional, hybrid, techy or a weird ethnic underscore mishmash?
- The purpose of the music: is this for an ad, a game, or a musical tapestry played in a lobby?
- The size and sound of the ensembles: are we doing an intimate small-sounding piece or going full traileresque hell?
- Special needs - traditional instruments: is the music meant to sound like it was made in a certain era, like the 70's?
- Special needs - additional instruments: are there certain period or other special instruments needed?
- Efficiency: how much time and resources am I granted?
- Authenticity and realism: how deep does the programming and articulation switching need to go?


Considering all these options, I then decide which sample libraries I will use as the backbone of my sound and which instruments need to be specifically included in the palette. And I add a ton more on the go based on my ideas and needs. As for the orchestral sample libraries, I have earlier built something I refer to as a "skeleton midi template", which is pictured above. It holds almost all of my orchestral stuff as disabled tracks, all of them having a unified keyswitch system and the 6-9 most common articulations (depending on the library and its possibilities) included. Choosing the right ones for the project's needs and importing them into the current template takes only a small amount of time, and after that I start thinking about all the other, more specific instruments and additional flavours such as ensembles, FX and whatnot.

For each imported or on-the-fly added instrument, I will do the balancing ONCE in the following order:

1. Routing: All instruments are routed to their respective outputs and get a small amount of the section's default reverb. More on that later!

2. Microphone choices: Based on the sound and on the instrument's importance and proximity, I always start with the mic best suited for the job and add others to augment things if needed. For CPU and RAM reasons, I try to survive with as few as possible without compromising anything. I tend not to route different mics to different outputs unless I feel it's crucial for the mixing process, for example having the close mic on its own mixer channel when using solo instruments.

3. Articulation balancing and possible tweaking: Many times, the different articulations of an instrument are not balanced against each other. Sometimes the pizzicatos are abnormally loud compared to the sustains, or the stereo image is completely bonkers on one articulation. Or you may realize that a particular legato transition is way too slow by default, or that a short articulation needs to be stacked with a more aggressive one because it's too anemic alone.

4. Volume and panning: I do this strictly within the VST instrument and keep all the mixer channels at unity until I start to mix. Panning things internally in the instrument usually lets you pan them without tilting the whole room in the process, and by gain staging already at the input, you ensure that all your dynamics plugins play nicely later. And it's bliss to start the mix with everything sounding balanced at unity. <3

5. Corrective EQ: Removing unnecessary low rumble or overly sizzly highs, and going through possible nasty resonances of the instrument with either traditional or dynamic EQ. I utilize M/S processing if needed, especially on some libraries where I feel the stereo image is a bit out of place compared to the rest. In many cases, the most audible part of the room sits between 100 and 700 Hz, and taming that area down from the sides makes the instrument feel much more natural and puts it in the same space as the rest. In case the volume changes too much due to an excessive need for corrective EQ (not usual, but it may happen), I balance it using the EQ output to ensure unity gain.
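
For the curious, here's roughly what that mid/side band-taming boils down to, as a minimal Python/NumPy sketch. This is just the concept (attenuating roughly the 100-700 Hz band on the side channel only), not how any particular EQ plugin does it, and the file name and the ~6 dB amount are made up for illustration.

# Minimal sketch: tame the 100-700 Hz "room" band on the side channel only.
# Assumes a stereo file; names and the ~6 dB cut are purely illustrative.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

audio, rate = sf.read("violins_close.wav")   # shape: (samples, 2), hypothetical file
left, right = audio[:, 0], audio[:, 1]

mid = (left + right) / 2.0                   # M/S encode
side = (left - right) / 2.0

# Isolate the 100-700 Hz band of the side signal and subtract half of it,
# which attenuates that band by roughly 6 dB without touching the mid.
sos = butter(2, [100, 700], btype="bandpass", fs=rate, output="sos")
side_tamed = side - 0.5 * sosfiltfilt(sos, side)

left_out = mid + side_tamed                  # M/S decode back to L/R
right_out = mid - side_tamed
sf.write("violins_close_ms_tamed.wav", np.column_stack([left_out, right_out]), rate)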


All these balanced and prepared instrument tracks are also ready to be imported into any future project, so this method isn't only very future-proof, but also saves me a ton of time in the long run as I basically have to do it only the first time I build each patch for use. I also purge each patch completely before starting to use them, and whenever I decide not to use something, I disable it. In fact, I purge and disable like hell because I'm only running 32GB of RAM on an i5 instead of a monster machine. Gods, please send me a monster machine so I can still complain like I was playing in Wintersun!

It would be extremely hard to choose my "go-to" libraries, because it all depends rather much on the project. Generally I'm more interested in the size and character of the library than anything else, because those are the things you cannot change much. I mix and match a lot and probably have way too many sample libraries in general, but some of my favourite desert island ones are SCS, SSW, Cinebrass (both), the Swing!-series, CSS and Cineperc. I don't tend to use that many synths or hybrid things in general, but when I do, it's most likely Omnisphere and the Heavyocity stuff.

Concerning Kontakt and other players in general, I'm running mostly at a 20-60 kB buffer and load my most used samples from SSDs and the rest from HDDs, using symlinks to keep the paths easily switchable and easy to back up. I load them with Quickload (arranged by manufacturer) and keep only my most used libraries in the left Kontakt panel to avoid clutter. Yes, that includes the factory library. Never underestimate the factory library!
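
If the symlink trick sounds abstract: the point is that Kontakt and the DAW always see the same path while the actual data can live on whichever drive has room. Here's a minimal Python sketch of the idea; the paths are made up, and on Windows you need admin rights or developer mode to create symlinks.

# Minimal sketch: move a sample library to a slower drive but keep the old
# path alive as a symlink so Kontakt/DAW paths never break. Paths are made up.
import shutil
from pathlib import Path

old_path = Path("D:/Samples/RarelyUsedLibrary")        # fast SSD, hypothetical
new_path = Path("E:/SampleArchive/RarelyUsedLibrary")  # big HDD, hypothetical

if old_path.exists() and not old_path.is_symlink():
    shutil.move(str(old_path), str(new_path))                 # move the actual data
    old_path.symlink_to(new_path, target_is_directory=True)   # leave a pointer behind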



3. ARTICULATIONS AND PROGRAMMING

Before I got a valuable tip about expression maps from my ex-boss, traditional keyswitches were driving me crazy. I couldn't see which articulation I had chosen from the piano roll, the chasing wasn't working, and on top of everything it messed up my notation if I had to do some. So I decided to take advantage of Cubase's expression maps some years ago and haven't looked back since. I've recently switched from "direction" to "attribute" due to the fact that when cutting parts, a directional articulation does NOT follow over the cut but resets to default (empty) every time a new block starts. Going through every track after a cut and redoing all the starting articulations isn't exactly fun, I can tell you.

I operate on a one track/one instrument basis. That means no single articulation per track, because it would make my workflow a living hell to switch tracks a million times within a simple phrase. Besides, I'm still more of a notation-oriented guy, and that would confuse me a lot. Not to mention the clutter. The only exceptions to this are the ensemble patches, which have longs and shorts divided into different tracks for ease of use.
 

Most of the time, I don't need those sul D# pizzicatos or overblown flautandos either. They are GREAT to have when you actually need them, but most of the time the basic articulations are more than enough for me. I prefer using multiple midi channels within the instruments instead of those big keyswitch patches (especially as in some libraries the big patches don't necessarily have the articulations I want but have those overblown harmonics at the edge of silence instead) for easier balancing, but whenever that's not a possibility (CSS, CSB, etc.) I use tailored keyswitches. However, in the end it doesn't matter as long as I'm using those expression maps, which is exactly why I love them.

When I build those new combination patches from different articulations, I always use the same order and the generic maps as much as possible, because it makes it very easy to switch libraries on the fly when all the articulations are unified into "sustains at C-2 or Ch.1", "legatos at D-2 or Ch.3", and so on. And as a clever bastard, I mapped those keys to some otherwise useless buttons on my controller keyboard, so now I can actually switch articulations whenever I feel like it instead of having to program one before being able to play with it. Steinberg, you may want to look into this at some point.
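
To make that "same order everywhere" idea concrete, this is the kind of lookup I mean, written out as a tiny Python sketch. The sustain and legato slots match what I described above; the rest of the assignments are just examples, and my real maps obviously live inside Cubase's expression maps, not in code.

# Illustrative only: one unified articulation order, expressed two ways -
# as a keyswitch note (for big multi-articulation patches) and as a MIDI
# channel (for libraries loaded as one articulation per channel).
# Only sustain/legato come from the text above; the rest are example slots.
UNIFIED_ARTICULATIONS = {
    "sustain":   {"keyswitch": "C-2", "channel": 1},
    "legato":    {"keyswitch": "D-2", "channel": 3},
    "staccato":  {"keyswitch": "E-2", "channel": 4},
    "pizzicato": {"keyswitch": "F-2", "channel": 5},
    "tremolo":   {"keyswitch": "G-2", "channel": 6},
}

def slot_for(articulation: str, uses_keyswitches: bool) -> str:
    """Return where a given articulation lives for a library, either way."""
    entry = UNIFIED_ARTICULATIONS[articulation]
    return entry["keyswitch"] if uses_keyswitches else f"Ch.{entry['channel']}"

print(slot_for("legato", uses_keyswitches=False))   # -> Ch.3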

Concerning programming in general, I tend not to go overboard with it. Someone mentioned at VI-C that it would be possible to define default values which tracks always reset to, but right now I'm using a short midi block on every track where all the most useful values for the current sound are pre-set in advance. That's basically CC1 (60), CC11 (80) and possibly CC2 (0) for vibrato for 90% of the regular tracks. I never touch CC7, for it messes up my balances faster than I can say "oh shit". If I need another lane of CCs, I midi-learn a slider from my keyboard controller and draw it in on the piano roll.
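
If you ever want to generate such an init block outside the DAW, something like this would do it: a minimal sketch using the mido library that writes a tiny MIDI file setting CC1 to 60, CC11 to 80 and CC2 to 0 as described above. The file name is arbitrary.

# Minimal sketch: a tiny MIDI file that just sets the default controller
# values mentioned above (CC1 = 60, CC11 = 80, CC2 = 0). Drag this onto a
# track as an "init block". Uses the mido library; file name is arbitrary.
import mido

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)

for controller, value in [(1, 60), (11, 80), (2, 0)]:
    track.append(mido.Message("control_change", control=controller, value=value, time=0))

track.append(mido.MetaMessage("end_of_track", time=1))
mid.save("cc_init_block.mid")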


 

4a. THE ROUTING (a.k.a "here be dragons")

Brace yourselves, for this will be confusing. Let's start with an imaginary situation where I'm doing a light, more intimate piece and use only strings. I've decided to go with SCS Violin 2 and Viola using close mics with some added outriggers, and with Con Moto for cellos using blended mics. Please note that "sum", "stem" and "bus" essentially mean the same thing in Cubase: the group channel.

First I route both the violin and viola close mics to a "strings close bus" and the outriggers to a "strings outrigger bus", both of which are then routed to a strings sum bus. The lonely Con Moto cello goes straight to that strings sum, like any other extra strings that aren't part of the "main strings library". If the strings need overall processing, it's done at this stage (the STEM channel only gets shared master processing, which will be explained later), and from there it goes to the "strings STEM", which will be my main output when exporting them. So, as a recap of the signal flow:

SCS Violin out 1 (close mic) -> Str close  -> Str Sum -> Str STEM
SCS Violin out 2 (outrigger mic)  -> Str outrigger  -> Str Sum -> Str STEM
SCS Viola out 1 (close mic) -> Str close  -> Str Sum -> Str STEM
SCS Viola out 2 (outrigger mic) -> Str outrigger  -> Str Sum -> Str STEM
Con Moto Cello (mic mix) -> Str Sum -> Str STEM

Still with me? Great! Let's add some reverb next!

I choose my reverbs based on the samples and on the style and purpose of the music. I might put something in as a "placeholder" first and change it later when I know better what the sound needs. Sometimes it's Spaces 2 and the SoCal instrument-specific ones, sometimes a TC6000-styled natural and discreet one... or whenever I want to watch the world burn, the Lexicons. Whatever suits the job, really. I have too many reverbs anyway. And hey, the fastest way to check which settings you want (or which reverb to use) is to slap it as an insert on the master bus at 100% wet. When done, just drag the plugin to the reverb channel and enjoy.

To add that reverb to the close strings, I activate the sends of the two close channels (violin and viola close) routed to the "str close" bus, while that reverb's output is routed to the same bus as well. Both string group outputs (close and outrigger) then get a bit of string reverb 2 on their sends and are summed to a bus called "strings sum", where the output of string reverb 2 is also routed. The Con Moto cello gets some reverb 2 too. Confused much? Let's try to lay out the reverb routing like we did before:


String Reverb 1 -> Str close  [-> Str Sum -> Str STEM]
String Reverb 2 -> Str Sum [-> Str STEM]

And this is how the sends are opened:

SCS Violin out 1 (close mic) -> send open, string reverb 1
SCS Viola out 1 (close mic) -> send open, string reverb 1
Con Moto Cello -> send open, string reverb 2
str close (group) -> send open, string reverb 2
str outrigger (group) -> send open, string reverb 2


You can also skip the aux (send) reverbs completely and just use them as inserts wherever you want. I personally like to keep them as sends for easy tweaking, changing and balancing, plus I'm also horribly old school. YMMV. On your right there's a badly drawn chart explaining it all more visually!
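
And if neither the prose nor my drawing skills help, the same routing can be written out as a simple lookup. This is only an illustrative Python sketch of the graph above, nothing Cubase-specific.

# Illustrative sketch of the routing above: each source feeds exactly one
# destination, and following the chain always ends at the strings STEM.
ROUTING = {
    "SCS Violin close":     "Str close",
    "SCS Violin outrigger": "Str outrigger",
    "SCS Viola close":      "Str close",
    "SCS Viola outrigger":  "Str outrigger",
    "Con Moto Cello":       "Str Sum",
    "String Reverb 1":      "Str close",
    "String Reverb 2":      "Str Sum",
    "Str close":            "Str Sum",
    "Str outrigger":        "Str Sum",
    "Str Sum":              "Str STEM",
}

def path_to_stem(source: str) -> str:
    """Follow the routing from any source or group down to its stem."""
    chain = [source]
    while chain[-1] in ROUTING:
        chain.append(ROUTING[chain[-1]])
    return " -> ".join(chain)

print(path_to_stem("SCS Violin close"))
# SCS Violin close -> Str close -> Str Sum -> Str STEM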

 


 

4b. VERTICAL EXPORTING AND PROCESSING STEMS

But wait, there's more! What if we want to do some sort of master bus processing? When exporting the whole mix, I may slap on a tiny bit of Lexicon reverb tail (about 5-10%) for glue, add Gullfoss for housekeeping and finally Ozone 9 for leveling and a possible very quick master. These are never in use unless I need to export the whole thing. But even if they were, they wouldn't be heard on the individual stem exports, because when exporting stems, their outputs never reach that master bus. And even if they did, I'd be adding ALL the master processing cumulatively to each stem. Play those stems together after export... and I'd have 8 glue reverbs on top of each other. Enjoy the mush!

This is also why I have those separate stem outputs after the sum outputs. If I wanted to do unified master processing on the stems using sidechains, it wouldn't be possible using only those "sum" groups as outputs because, among other things, I couldn't get the sidechaining working. There are a ton of tutorials around on how to do it, and the purpose is to make the stem compression and limiting behave as if the dynamics processing were still "hearing" everything else when a stem is exported solo. All my instrument sum groups send information to the stem compressors' sidechains via their sends, as pictured at right.

[Note: You can also do this by making a single new group purely for sidechaining purposes, and it's mandatory if you need more than 8 sidechain sends. Just route all your groups' sends to that new group, disable its output and use that group's sends to control the SC input of the stem inserts. In my other template, I have 8 stems which each use a sidechained compressor PLUS limiter, which means that I have built an SC group for each insert, a.k.a. 8x2 sends.]

As I want the same dynamics settings for each stem export, the best way to do it is to link the channel inserts together (I'm sure your DAW allows that) so that whatever you do in one channel's insert slot gets done to the others as well.
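
To make the "still hearing everything else" part concrete, here's roughly what an externally keyed stem compressor does, as a bare-bones Python/NumPy sketch. Real plugins use proper attack/release and lookahead; the threshold, ratio and smoothing values below are arbitrary illustration numbers.

# Bare-bones sketch of sidechain (keyed) compression: the gain reduction is
# computed from the KEY signal (the summed "everything" bus), then applied
# to the stem being exported. Assumes mono float arrays of equal length;
# threshold/ratio/smoothing are arbitrary here.
import numpy as np

def keyed_compress(stem, key, threshold_db=-18.0, ratio=4.0, smooth=0.999):
    env_db = -120.0                      # running envelope of the key, in dB
    gains = np.empty_like(stem)
    for i, k in enumerate(key):
        level_db = 20.0 * np.log10(max(abs(k), 1e-9))
        env_db = max(level_db, smooth * env_db + (1.0 - smooth) * level_db)
        over = env_db - threshold_db
        gain_db = 0.0 if over <= 0 else -over * (1.0 - 1.0 / ratio)
        gains[i] = 10.0 ** (gain_db / 20.0)
    return stem * gains                  # the stem ducks even where IT is quiet

# Usage idea: key = strings + brass + perc + ...; stem = strings only.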

Naturally, at this point I can do other "master" processing as well, with synced EQs or anything else if needed. Using this method also gives me the possibility to listen to the processing in its cumulative state, which means that I can tailor all the processing accordingly instead of bracing for nasty surprises later. And most importantly: I never use stem processing AND master processing simultaneously, for obvious reasons.

Another reason for using stem groups is the fact that in most DAWs they can be exported with a single click. Having earlier been soloing, exporting, soloing, exporting, soloing and so forth until the end of eternity, I appreciate the fact that with some pre-planning I'm able to do all this much more simply. You can of course route the instruments to the stems however you wish and not necessarily divide them by instrument groups, but that means you need to build your original sum bus groups according to your needs. Naturally, you can also just export track stems the traditional way by soloing the ones you want to export, in case things change and you don't want to reroute everything again. For this, you can use folders for quick solo/mute grouping. Note, though, that if you are using the sidechain dynamics on stems, that won't work unless you do the routing accordingly.

Below you can see my typical routing in action: the inserts marked in red are for master processing (either stems or the master bus) and are both turned on simultaneously here for visual purposes. Clicking the picture bigger will show you better how the routing is done (shown in the uppermost rows of both mixers).

Last but not least: when using many tracks with multiple mic outputs, complex routings and stem bussing, it's very important to stay organized. I use color-coding, folders (lol protools), simple and understandable naming, and I disable stuff I don't use. The more I keep adding stuff while skipping the housekeeping, the more royally fucked I'll be at some point when I realize the stems are completely all over the place and that snare reverb is coming out of the woodwinds stem 10 dB louder than the winds. Shortcuts are awesome when used with keyboards, not so much in our workflow!

 


5. HORIZONTAL EXPORTING AND CYCLE MARKERS

When working with multiple cues in one project, I always use cycle markers to quickly give me an overview of the exact length and location of any cue. They are easy to set up and even easier to export, and they also ensure your exported files always start and end at the same points when doing revisions or other changes.

Where they really start to shine, though, is in game music composing, because I can insert multiple cycle markers per cue and line them up. By doing that, I can export different seamless parts of the cue as separate audio files for horizontal resequencing, or just loop them to see how they work.

With songs being divided for horizontal resequencing, I have two options: either exporting the cycle markers brutally "as is", or including the instrument and reverb tails of each cycle, which of course means that I need to insert at least a couple of bars of silence between each part of the cue. While being a bit less natural-sounding, the former usually works ok-ish and is easier to implement, but the best way is to include the tails of the parts in the files and define the loops and post-exits in Wwise or your middleware of choice afterwards. Naturally, the more flexibility you want to have in the game with the length of the parts, the more you need to do the eternal cuts, insert-silences and repeats. Horizontal resequencing done properly, however, takes a lot of planning, optimizing and construction, and there isn't a clear consensus on the best practices.
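
As a concrete example of the second option: once you know each part's musical start and end plus how long the tail rings, the export regions and the loop/post-exit info for the middleware are just arithmetic. A rough Python sketch below; the marker times and the 4-second tail are invented for illustration, and your middleware's terminology may differ.

# Illustrative arithmetic only: each exported file runs from the musical
# start to the musical end PLUS the ringing tail; the loop point itself stays
# at the musical end, and everything after it becomes the post-exit tail.
# Marker times and tail length are made-up example values (in seconds).
TAIL = 4.0
cycle_markers = [
    ("intro",  0.0,  16.0),
    ("loop_a", 16.0, 48.0),
    ("loop_b", 48.0, 80.0),
]

for name, start, end in cycle_markers:
    export_start, export_end = start, end + TAIL
    print(f"{name}: export {export_start:.1f}-{export_end:.1f} s, "
          f"loop/exit point {end - start:.1f} s into the file, "
          f"post-exit tail {TAIL:.1f} s")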


 
6. AUDIO VS MIDI: THE ADDITIONAL RECORDING AND MIXING PROCESS

I have a love/hate relationship with VST instruments. They're usually fast to work with, have a great sound out of the box, and by using them I have access to a kazillion instruments and ensembles that I could never record myself nor have the budget for. But it is never like the real deal. 100% midi sounds less lively, less energetic and less... human, no matter what you do. It's like witnessing too much CGI. So I tend to implement at least some real stuff into my music all the time, because it's amazing how much even the little things can alter the overall picture.

My previous workflow consisted of doing quick midi demos first (which were usually not implemented into the game) and then, when they got greenlit, redoing everything from scratch, also using some real instruments for the final versions. That was extremely risky and felt like running amok blindfolded when it came to the music implementation, especially with adaptive stuff. Nowadays I do everything in midi first (i.e. all the phases up to chapter #5 in this text), implement it in the game, iterate the fuck out of it, and when we finally feel everything is 100% locked, I continue to the final phase. Remember the chapters on stems and ready-made cycle markers? This is where it all pays off for real, because I can now actually replace stuff as much as I want and STILL export everything with just a couple of button clicks.

So when everything is proven to work in the game like it should, I render all the VST instruments to audio and start replacing some of the instruments with real ones, and maybe add some more. I use the render in place function in Cubase to transform all my midi into audio while keeping the channel settings as they are. Every plugin I used in the pre-mix is still completely tweakable and every routing is the same; the only difference is that the VST instrument output is now an audio file. After that I disable and hide the original midi tracks. Naturally, if and when I have to revisit some of the original midi tracks, it's easy as hell to do the changes, render them into audio again and replace the previous audio with the new one. Rinse and repeat.
 
Even if I didn't have any real instruments, I can't stand the idea of mixing without the possibility to manipulate audio files and see the waveforms on my screen. (Yeah yeah, tape recorders, blabla 80's, whatever... yes, I'm spoiled.) I want to do manual fades, quick volume adjustments, editing and everything else that's impossible to do with midi. So I always mix the final product in audio regardless of the original sound sources. Besides, operating in audio leaves me a lot of processing power headroom to utilize all sorts of plugins I need, and it also ensures the project opens up nicely in future years as well.

But sometimes we're just in a rush. And in that case, especially if we're only delivering a single stereo output file, I skip the whole audio thing completely and mix straight on the midi channels. Slapping a few compressors here and there, possibly adding a long insert reverb tail to the solo flute, EQ'ing the brass group a bit and generally just making it better with quick, larger moves. Then I finally add a bit of master processing and I'm ready to export the whole thing. And to be honest, it's usually more than enough anyway.
 

 
7. THE FINAL MIX AND MASTERING

I never master my mixes in the mix project unless they are going into the game, in which case I use either stem (bus) mastering or master bus processing with possible outboard hardware, depending on the layering. So, let's divide the two processes first:

1. Mastering for the game.

This is always done in the mixing project. When I start the final compression and EQ tweaking, I always have the gameplay sound FX and ambience playing simultaneously on another track in order to hear the music in its actual context and balance it against the rest of the sounds. I have recently started to use a brilliant piece of software called "Listento", which enables me to stream my DAW output straight to my phone (as 95% of my projects are for mobile games), where it's also easy to pinpoint balance issues. Generally, you don't want the music to fight with the sound FX, and you may need to carve a lot of high mids out when mastering for mobile. And just like in any other game, your music is not there to give you (and hopefully the rest) a spontaneous erection but to support the game, enhance the story and deepen the immersion. Don't balance it too loud, because if you don't make it fit the rest of the game sounds, someone else will.

You also usually need way more compression for the ingame stuff than you think. I remember reading that the suitable dynamic range for music is actually no more than six (yes, 6) dB for PS4, and while most of us prolly don't compose and implement those final master assets for PS4 games, it's a good benchmark to keep in mind. For this, parallel and even dual compression combined with a transparent limiter is good practice. I have the luxury of using a hardware tube Vari-Mu compressor myself which is incredibly transparent, but unfortunately I can only use it when delivering single tracks instead of stems. For final volumes on mobile, we use -1 dB true peak and levels around -18 LUFS, but YMMV.
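
If you want to sanity-check those numbers outside the DAW, a library like pyloudnorm can measure integrated loudness; true peak strictly requires oversampling, so the plain sample-peak check in this rough Python sketch is only an approximation. The file name is made up and the targets mirror the ones mentioned above.

# Rough sketch: check a final mobile master against roughly -18 LUFS / -1 dB peak.
# Note: a plain sample-peak check underestimates true peak (no oversampling here).
import numpy as np
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("final_master.wav")          # hypothetical file
loudness = pyln.Meter(rate).integrated_loudness(data)
peak_db = 20.0 * np.log10(max(np.max(np.abs(data)), 1e-9))

print(f"Integrated loudness: {loudness:.1f} LUFS (target around -18)")
print(f"Sample peak: {peak_db:.1f} dBFS (true-peak target -1, needs oversampling to verify)")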

2. Mastering for soundtrack.
 
Making a listenable soundtrack consisting of adaptive music pieces is a huge fucking mess, and I always need to "compile" the tracks first before I can even think about the mastering process. That's usually at least a day's work, sometimes even three times more if it's a scattered trainwreck of files and stems. After that, the world is my oyster, really. When I master myself, I always import the final tracks into a new project and master them like I'd master anything for a client.

Sometimes I may only do the mixing and use the World's Best Mastering Engineer Mika Jussila, and sometimes I do the masters myself. It really depends on the project's scope, budget and overall importance. I tend to do compiled soundtracks for every single game I work on, whether they get released or not, mostly because it's fun and sometimes proves useful later.

As this blog post isn't about mastering, I won't go deeper into the subject, but my final advice is that for any commercial soundtrack release, the mastering can either make everything even better... or simply destroy it. Make sure that you have the best possible people and tools to work with. Also, clipping is so 2000's.

You should never use the same masters for the game and the commercial soundtrack, because if done right, they serve completely different purposes and need to be treated differently. And as both versions are mastered to serve solely their own purpose, they will definitely also sound different in the end. Don't use your ingame masters for the soundtrack.

 
8. ADDENDUM

As we're living in this dreaded age of video vloggers, "influencers" and pleasedontforgettolikeandsubscribe everywhere, I understand that some of you may think "why the hell didn't he just make a video out of all this."

First of all, I write because I want to. For every sentence that I write down, I feel like I learn it even better myself. So I think that in the end I may actually be writing for myself, and publishing it for the others who may find it useful is just a bonus. And as a big fan of written text and literature, I like the idea that you can advance at your own pace, instead of a video which keeps on going and going whether you understood the last sentence or not.

Secondly, I'm not a camera-oriented person. Some people do the video thing really well, and I actually enjoy watching some composers' Youtube channels while I work out at home, like Jason Graves, Trevor Morris and the James Bond of media music, mr. Guy Michelmore. But I'm not planning to go down that route myself, at least for now. Besides, I don't know shit about video editing anyway, so it would probably take me a year to come out with the first one, and by the time it was released I'd already hate it and it would be completely outdated.

Lastly, I know this flood of information may feel a bit confusing, and I can't be sure this will be my exact working method in the coming years. Hell, I even changed a couple of things on the fly while writing this, because I got a couple of awesome small tips from the latest GameSoundCon, which I implemented instantly into my workflow. But right now I feel all this is working really nicely for me, and in case you are struggling to find a method for yourself or want some tips to speed up your workflow, this text might be of help in the future.

And now I guess I'm finally ready to press that "publish"- button. Don't forget to like and subscrib...wait, what?

3 comments:

  1. Thank you so much for this in-depth explanation, Henri. Some parts were a bit tough to understand because you're much more professional than I am, but I liked reading every bit of your articles. It's amazing to learn how other composers work. Can't wait to try some of your ideas!

  2. No new posts coming..?
    Regarding the comments on the earlier post: I got some FabFilter software, and with it my production levels have definitely gone up. Also, my home studio is now fully acoustically treated and soundproofed, which has helped my work in many ways too.

  3. More posts are coming as soon as I have the time! That acoustic treatment is definitely a good idea, and the most important step you shouldn't skip with a home studio. The FabFilters are lovely!

    /H
