Planning - Posted 13/10/2016
The aim of this logbook is not only to meet the requirements of my Creative Project module for my University degree, but to also help me reflect on my creative process and keep on track. My planning stage is done. My Gantt Chart is ready. All that's left to do is actually..."do".
This project will be more challenging than anything I've done in audio before. My aim is to recreate the sound of a popular sci-fi horror video game called Dead Space 2. I've chosen this particular game as I'm very keen on creating engrossing atmospheres through sound design: that harrowing background ambience that keeps the player on edge and their heart racing. That was the first thing that drew me to the horror genre. While I'm uncertain whether I want to focus on learning game audio specifically at this point, I saw this as a golden opportunity to try some real sound effect design for a game, as I've never done it before. The implementation of sound through middleware tools like Wwise and FMOD is still alien to me, but it's also one of the few things I see separating game sound design from film sound design. The sound creation process remains the same, so this experience will be beneficial to me in any sound design field.
This week's work involved a small bit of basic setup. The main objective was to decide which footage I would be using. It had to be somewhere that featured both monster encounters and tense atmospheres, combat scenarios and quiet time. I really wanted a wealth of different sound design opportunities at my disposal, and I believe the footage I ended up recording will be suitable for the project. I recorded my own gameplay footage, as the videos on YouTube were riddled with bad pacing and pause menus. By recording myself playing I could create my own opportunities for sound implementation.
My second task was to get to grips with a new DAW I purchased not too long ago called Studio One by PreSonus. My initial interest in it came from hearing reliable sources rave about how smooth and easy its workflow is. Any DAW's job is to get the music from your head to your computer with minimal fuss. I gave Studio One a trial run and have to say I was very impressed. I'm taking the next two weeks to learn its interface so I can work as efficiently as possible when it comes to implementing my sound. My sound design project last year was completed entirely in Cubase, another DAW I had never touched at the time, so I have no worries about producing good work in Studio One with my limited knowledge of its power.
Next week I'll be writing up my spot sheet for the video as well as planning out the file structure for my sound library. In the meantime I have reports to write and life to hate.
Setting the Scene - Posted 01/11/2016
Things are progressing quite smoothly so far, though I've not stuck to my Gantt chart very well. The spot sheet that should have been created by now has been overshadowed by reports I have to write for classes. On top of that, while implementing my video and playing around with sounds in Studio One I got sucked into an experimental trance. I've recorded and implemented some fully synthesised work as well as Foley recordings to test the waters, so I've not fallen behind as such; I've just done things in a different order. As I become more familiar with the steps involved in sound for picture as a whole I'll be much more capable of planning ahead. For now I'm satisfied with my experimental workflow.
So, the things I've accomplished so far: I've got my video up and working in Studio One, along with all the necessary hardware I'll need. Everything seems to be in working order. I've started placing my recorded Foley sounds in a suitable folder hierarchy and naming my sounds something recognisable, an example being an ambient impact sound labelled "AMBI_IMPACT_1", or an environmental metal clanking sound labelled "ENVI_METAL_1". I'm still unsure about a lot of things to do with folder labelling and the audio that goes into each folder. At the moment I'm assuming only a completely raw capture would be put into the library, with any effects added in the project. Thinking back to the research I did on film sound, I was under the impression that by the time the Foley is given to the re-recording mixer it will already have had any necessary processing done to it. For example, if there was a Foley capture that needed to be pitched down to sound like a dinosaur roaring, would this be done first and then the file stored, or would the raw sound be stored to be called up and edited later? Is room sound added before it is stored? It's all a bit confusing.
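To keep myself honest with the naming scheme, I knocked together a tiny helper. Purely illustrative: the only category tags the scheme actually has so far are AMBI and ENVI, and the function name and the idea of validating against a fixed list are my own invention.

```python
# Tags the library uses so far; this set will grow as new categories appear
CATEGORIES = {"AMBI", "ENVI"}

def library_name(category, descriptor, take):
    """Build a library file name like AMBI_IMPACT_1.wav from its parts."""
    if category not in CATEGORIES:
        raise ValueError(f"unknown category tag: {category}")
    return f"{category}_{descriptor.upper()}_{take}.wav"

# e.g. library_name("AMBI", "impact", 1) gives "AMBI_IMPACT_1.wav"
```

Keeping the descriptor uppercased in one place means I can't accidentally mix "Metal" and "METAL" in the library.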
Another thing I'm confused about is what would be considered "music". I've been working on ambient synth patches that aren't really musical; they're more droning, with pitch shifting and distortion. It's all non-diegetic though. The ambience isn't coming from the room itself; it's essentially doing the job of the music score. What even counts as ambience? I've got a lot of questions to ask...
The sounds themselves I'm very happy with. The first was an Omnisphere patch with a very cold feeling to it that suited the cryogenic stasis area in the video. It wasn't unsettling enough, so I applied a bit of frequency modulation and it became crackly and distorted. This was perfect for setting the scene as the player walks into the crypt. I also placed the sound to trigger in the room prior to entering the crypt, to raise anticipation of what's ahead. After walking for a short while there's a large jumpscare as the character hallucinates horrific images of screaming dead bodies. I accidentally found an incredible patch on my synth for the jumpscare segment by turning on the distortion function. It's terrifying. I was so happy with it that I'm going to deconstruct how it was built instead of just using it and forgetting about it. Hopefully I'll learn a lot from the synthesis involved and could maybe even enhance it. I plan to layer Foley recordings of screams on top as well, because it sounds a tad too synthesised right now. The jumpscare was one of the main reasons I chose this segment of the game to redub. It had so much potential for experimentation and really pushing the horror, so it's important to me that I get it right. After the jumpscare I've attempted to increase the dread while the player's heart is still racing. The sound I used is actually inspired by the original soundtrack, and I managed to build it completely from scratch. It's a very low, rumbly sound whose pitch is automated by an LFO so it continues to swing back and forth, assaulting the player's senses and keeping them on edge.
As of today my Gantt chart is telling me the Foley recording and implementation process is due to begin. That's going to be more difficult without a spot sheet, so I'll need to crack on with that ASAP. I've decided not to do a traditional film spot sheet that details the time at which certain sounds make an appearance. This is still game audio I'm making, so I'll categorise the sounds as I would in the folders. Categories will include characters, weaponry, environment, ambience etc.
Creating the 'Plasma Cutter' - Posted 07/1/2017
After weeks of writing reports and pretending I like Christmas, I've finally got back to the project. According to my Gantt chart I've fallen very far behind, but if I focus heavily on the task from now on I should be fine. I've realised my Gantt chart is flawed anyway; for instance, implementing all the Foley into the video before any processing occurs is a bit absurd. I won't know if the Foley I've captured will work until it's processed to fit into the scene; after all, I can't simply record myself walking in a similar environment. I'd also set up tasks over the period of a week that can be done in an hour, such as the spot sheet and sound library planning, which can be done as a single piece of work. To ease myself back into things I got comfortable on the sofa with a pen and paper and watched the video a few times, noting down the sounds I would need in a simple and clear way. I started with the character and noted down things like boots, breathing and vocals. I dissected the movements of the weapons being used into laser bursts, servos etc. I made a section for room tone consisting of ventilation sounds and engine hums. Just by doing this I had passively planned my sound library without even realising. From here I decided that instead of capturing all my Foley, then processing all of it, then applying creative synthesis to all of it, I would simply work on one object at a time and complete all the processing for it. It's far easier that way.
In around two hours I had managed to almost finish the sound for the first weapon, the Plasma Cutter. I didn't really know where to begin with creating weapons, but a small amount of internet research on pitch modulation and sub layering caused a snowball effect. Pitch modulation is the main factor for energy-based weapons like this. I made it so that every time the weapon fires the pitch descends, creating a "powering down" effect after each shot. It's very subtle but effective. The sub layering was useful for adding punch to the weapon. Finally I added some high-end fizz. The hardest part was trying to balance all three layers to sound like a single entity. Knowing when to stop adding sounds is also proving difficult. You see, the weapon has mechanisms that move after each shot, and while I'm adding quite convincing sounds for them I'm finding it hard to make them worth being there, since they need to be so quiet compared to the firing sound itself. This can be seen in the video to the left of this paragraph.
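The three-layer recipe is easier to see in code than in words. Here's a rough Python sketch of the idea (descending pitch body, a sub layer, high-end fizz); every frequency, gain and envelope value is a guess for illustration, not the settings from my actual patch.

```python
import numpy as np

SR = 44100  # sample rate

def laser_shot(dur=0.35, f_start=900.0, f_end=300.0):
    """Sketch of a 'powering down' energy shot: a tone whose pitch glides
    downward after the trigger, plus a sub layer for punch and a quiet
    noise burst for fizz. All numbers are made up for illustration."""
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    # Exponential pitch glide from f_start down to f_end
    freq = f_start * (f_end / f_start) ** (t / dur)
    phase = 2 * np.pi * np.cumsum(freq) / SR
    body = np.sin(phase)
    # Sub layer: a short low thump for weight
    sub = np.sin(2 * np.pi * 60 * t) * np.exp(-t * 18)
    # High-end fizz: a fast-decaying noise burst, kept quiet
    fizz = np.random.randn(len(t)) * np.exp(-t * 25) * 0.15
    # Overall amplitude envelope so the shot dies away
    env = np.exp(-t * 10)
    return (0.6 * body + 0.8 * sub + fizz) * env

shot = laser_shot()
```

The balancing problem I described is exactly those three gain factors: nudge any one of them and the layers stop reading as a single sound.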
After that I began working on the mechanisms of the gun to make it more real. I also thought the sound I had made was too "lasery". Plasma is fire-based, so I EQ'd out some of the mid-highs and eased off on the range of pitch modulation I was using. I also wanted some sort of sizzling effect after each shot to emphasise the heat of the gun, so I recorded a soluble tablet dissolving in water to capture the fizzy sound it made. I attempted to add this but just couldn't make it sound natural. The fact that I wanted it to be heard was ruining the balance of the weapon sound I already had and making it less defined. When I started adding mechanical sounds the fizzing was completely drowned out, so I decided to remove it entirely as unnecessary. That was until I realised it made a rather good subtle servo sound. I layered it quietly underneath the main sound just to fill the space a bit. Listening closer, I realised my low end was messy and the weapon lacked punch, so I EQ'd out some of the low end and replaced it with a dance kick drum for a cleaner punch. I still wanted some kind of subtle plasma effect, and my flatmate told me to look up plasma being created in microwaves using grapes. Strangest thing I'd ever heard of, but they sounded really cool. I managed to make a similar buzzing sound with a live guitar lead pressed against my guitar's input. Layering this subtly over everything gave it the kick it needed. A recording of an electric shaver gave me the sound I needed for the rotating barrel, and with that I had a completed weapon sound. I added a placeholder reverb in the final demo for added context, but I still plan to record and use my own impulse responses later.
Footsteps and Room Tone - Posted 23/1/2017
During the last couple of weeks I worked on getting the footsteps and suit movement sounds recorded, processed and implemented, as they were likely to be the least engaging and most repetitive task of the project. They also make up a huge chunk of the immersion, and without them I felt the project wasn't really coming together. My first attempt involved banging an empty tin can on a variety of surfaces. I knew I needed a metallic impact sound, since the floor in the video is metal, but I just wasn't coming up with anything. My flatmate said to me, "Can you not just walk on metal?". Initially I thought that sounded too good to be true. I was in a mindset that made me believe the only way to make sounds was by combining other sounds together, and that an easy way out didn't exist. I tried it anyway by laying a metal speaker stand on its side and recording a walking motion with my foot. It worked like a charm! Some steps had more thud and less metallic ring or vice versa, so I layered them together to create a convincing footstep.
After I had implemented all the files into the project I still wasn't convinced, so I started playing with the EQ. I found that reducing the high frequencies by 3-6 dB and creating a thin notch at 2 kHz created the illusion of the steps being in front of me instead of inside my head. I understood why the high-frequency removal had this effect, because of how easily high frequencies drop off over distance, but the 2 kHz notch was what confused me. I took to a forum to ask some professionals why this worked and learned a couple of things. The region of 2-5 kHz is known as "presence", and reducing it can have a large effect on the perceived distance of a sound source. I remembered from looking at Fletcher-Munson curves that the 2-5 kHz range is the area our ears are most sensitive to, as human speech lies there, so this made sense to me rather quickly. As for the notch filter, I was simply told that maybe my own ear is used to having a notch in that range, which makes sense as everybody will be different in some way. However, it worried me that what I think sounds good might sound odd to others, so I changed the notch into a scoop to lessen it and reduced the presence further. It still sounds perfectly fine. By using fades at the beginning of the clips I could control the transients as well; lessening the impact made the footsteps sound naturally farther away. I needed a slight bit of sound from the suit too. While it's quite a tight fit and wouldn't make a lot of noise, it still needed something, so I recorded myself twisting some cling film back and forth. Pitching this down and keeping it quiet in comparison to the footsteps added a light rubbing sound and gave the suit more life. In the grand scheme of things, when all the other sound and music is playing, it may have been a completely unnecessary step, but I'll keep it for now.
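To make the "push it back" EQ concrete, here's a crude frequency-domain sketch of the two moves: a broad scoop in the 2-5 kHz presence region and a gentle high-frequency cut. This is not how Studio One's EQ works internally (a real EQ would use smooth filter curves, not hard band edges), and the dB amounts are placeholders rather than my mix settings.

```python
import numpy as np

SR = 44100  # sample rate

def distance_eq(x, presence_cut_db=-4.0, hf_cut_db=-4.5):
    """Rough FFT-based sketch of a 'distance' EQ: scoop the 2-5 kHz
    presence band and trim everything above 6 kHz. Crude brick-wall
    bands, illustrative only."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / SR)
    gain = np.ones_like(freqs)
    # Presence scoop: our ears are most sensitive around 2-5 kHz
    presence = (freqs >= 2000) & (freqs <= 5000)
    gain[presence] *= 10 ** (presence_cut_db / 20)
    # High-frequency cut: highs drop off first over distance
    gain[freqs > 6000] *= 10 ** (hf_cut_db / 20)
    return np.fft.irfft(spec * gain, n=len(x))
```

Running a footstep sample through this should measurably reduce energy in the presence band while leaving the low mids alone, which is the whole trick.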
Next I decided to tackle the room tone, because there was just no life to the production yet; the environment still had no immersion. The main thing I knew I wanted was a deep rumble to give the impression you're on a massive moving spaceship. I needed a constant engine-like sound that I could pitch really low, and the extractor fan in the kitchen was perfect. While I was in there I noticed the fridge making a crackling/humming sound at the back, so I recorded that too, just in case. After implementing the extractor fan I checked out the sound of the fridge and decided it was perfect for the cryo chambers. Using automation I moved the sound in relation to the character as he walked past them, and it really added to the immersion.
Optimising Workflow - Posted 10/2/2017
My workflow so far has been experimental at best and embarrassing at worst. Before continuing the project I needed to nail down a solid system of audio processing and folder management. Some sounds that required multiple layers had their own project files associated with them, while less complex ones I created in the final mix project file. It had been plaguing my mind how a professional would approach this. My audio files and project files were scattered, and my final mix project was looking for audio in many locations. It also dawned on me that the final video is only half of the Creative Project assignment: I also need to create a structured hierarchy of sound files, and some sounds I had created in the final mix project hadn't even been bounced out as their own individual audio files.
I spent the morning thinking through the best way to approach sound design. I find it highly unlikely that professionals create their sounds in the final mix project. It's much more likely they have a project file for every single sound they make, and the biggest thing they will do is COMMIT to the sound they've made before dragging it into the final mix. A worry I've been having is: how do I know if the sound will work in the final mix without building it in that project file? I could always build a sound in its own project file, bounce it, test it, and if it doesn't work go back, edit and bounce again until I'm happy. Then I realised that a sound is a singular entity that should be convincing by itself above all! If I wanted to make a chainsaw sound, it should sound like a chainsaw before it's placed into the picture's environment. If it's believable by itself then it will be believable in the picture. It's a rule I've created for myself and I think it's a safe one to stick to. To make things easier I'm able to view the video clip in each individual sound project file, where I can demo reverbs and such, and this moves me onto my next thought...
Where does the sound creation stop? This is one I was really struggling with. Should I add EQ and effects to the sound and bounce it with the processing, or should all that processing be done in the final mix? I have committed to a rule here as well: effects and EQ can be used in the sound bounce provided they have nothing to do with placing the sound in the environment. That's really all I've found it comes down to. If I want the sound of a monster growling behind a metal wall, that will require a serious low-pass filter and a lot of reverb for the metal resonance, but the sound will always be specifically a monster growling behind a metal wall. When it comes to adding reverb, EQ or panning to create distance or positioning, that processing will happen in the final mix project. It's a dangerous game though, as I find myself in the sound creation stage syncing the sound to the picture instead of thinking about the singular sound itself without environmental impact. I need to make it sound correct without context.
I Built a Vocal Booth! (sort of) - Posted 17/2/2017
I was frankly sick of having nowhere quiet to record my audio. I live right next to a busy main road with rickety single-glazed windows and the noise is horrendous, not to mention all the rooms have bare walls! So my only option was to build myself a small treated space. I call it a vocal booth, but as far as I know I can't sing (UPDATE: I tried. I can't.). I tore the shelves out of my bedroom closet and nailed a sleeping bag around the walls. It is incredibly effective as a sound absorber, perhaps too effective! That only covered half of the cupboard vertically, so I nailed pillows around some of the remainder and left the very top bare. It's not glamorous, but at least now I can record in peace.
Creating "The Ripper" - Posted 24/2/2017
The thought of trying to create this weapon without access to an actual circular saw was daunting to say the least, but I think the process of building it from the ground up was far more rewarding, and it possibly sounds better than recording the real thing! This was the test run for the rule I established in my previous post: "a sound is a singular entity that should be convincing by itself above all". It was also my chance to try to remain organised from the initial recording to the final processing, and I'm very pleased with how I fared. The entire process took roughly 10 hours, and I'm feeling far more confident about being able to create a solid sound effect beginning with nothing but a theory. So here's the final sound I ended up with...
I also added a reload sound after I recorded the video demo. You can see from the project file image that I'm far more organised, keeping each weapon element in a group folder, nicely colour coded. I'm also making better use of busses to control my group levels and inserts easily. For the sound creation I began with the motor sound of a stick blender that I had already recorded for a previous project. On top of that I added multiple recordings of a handheld mini fan. One recording was just the fan spinning normally, to blend in with the other motor sounds. I needed a cutting sound, and I found that letting the fan blades hit off a piece of paper, as well as the microphone itself, worked quite well.
Then came the difficult part...
I needed a sound that made the listener believe it's a sharp metal blade, because as it was it sounded very dull. In my head I had the sound of a piece of metal rubbing against a spinning grindstone, which was something I had no access to. In fact, I'd be more likely to find an actual circular saw. I thought about the building blocks of the sound and theorised that if I were to record a single metallic sound and then trim and loop it repeatedly, it would be a good start. Then I remembered the granular synthesis engine in Omnisphere. It has the ability to repeat a specific point of an audio file very rapidly while also adding granular artifacts to avoid it sounding repetitive. I recorded the sound of a knife being sharpened, granulised the middle section of the audio file, and it worked! I then did the same thing with another take of the audio, but this time focused on the tail because it had a very metallic ring to it. By layering the two together I created a very believable metallic saw! For the aiming and reloading sounds I used a variety of clicks from various plastic things in my toolkit and a tape measure retracting. For the firing sound I bitcrushed the tape measure to create an exploding effect.
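For anyone curious what "repeat a specific point of an audio file very rapidly" actually means, here's a bare-bones Python sketch of the idea: overlapping windowed grains taken from around one position in a sample, with a little random position jitter standing in for Omnisphere's granular artifacts. This is nothing like Omnisphere's real engine; grain size, hop and jitter values are all made up.

```python
import numpy as np

SR = 44100  # sample rate

def granular_loop(sample, center, dur=2.0, grain_ms=40, jitter_ms=8):
    """Sustain one point of `sample` (index `center`) for `dur` seconds
    by overlapping short windowed grains, jittering the read position a
    little so the result doesn't sound like a static loop."""
    grain_len = int(SR * grain_ms / 1000)
    hop = grain_len // 2  # 50% overlap between grains
    jitter = int(SR * jitter_ms / 1000)
    out = np.zeros(int(SR * dur) + grain_len)
    window = np.hanning(grain_len)  # fade each grain in and out
    rng = np.random.default_rng(0)
    for start in range(0, int(SR * dur), hop):
        pos = center + rng.integers(-jitter, jitter + 1)
        grain = sample[pos:pos + grain_len]
        if len(grain) < grain_len:
            break
        out[start:start + grain_len] += grain * window
    return out[:int(SR * dur)]
```

Feed it the middle of a knife-sharpening recording and you get a continuous grinding texture; feed it the ringing tail and you get the sustained metallic shimmer.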
Capturing Impulse Responses - Posted 5/3/2017
One of the things I wanted to achieve in my Creative Project was the capture of my own impulse response to use as a reverb in the video. Studio One has its own convolution reverb plugin called 'Open AIR', so this was the perfect opportunity to get to grips with it. I found the process to be both incredibly easy and rewarding. The building I live in has a stairwell made almost entirely of stone, so it's fairly reflective. I recall reading that if you're capturing a stairwell you should create the impulse one floor above or below where the microphone is placed. I placed the microphone on the top floor and used a balloon pop as my impulse on the second floor. The capture wasn't as reflective as I expected, but I was really happy with the result.
I used the Ripper firing sound effect I had created previously to demo the impulse in use. Using the length and pre-delay controls in Open AIR I was able to fit the reverb to the in-game environment. An EQ was added to the reverb channel to remove some of the low and low-mid build-up retained from the balloon sound. This was my final result...
Full Progress Summary - Posted 6/3/2017
This is a demonstration of everything I have achieved so far, and I'll also explain some changes I've made and minor details that have been added along the way. Firstly, I have completely redone the footsteps. I wasn't at all happy with the original attempt, as it felt so dry and lifeless. The suit movements were barely noticeable and didn't sound real. I also had the suit sounds and footsteps as separate files, which doesn't really make sense since the suit moves when the character takes a step. I created a brand new project file and layered multiple sounds on top of the footsteps to add more animation and realism. These included plastic clicks, jacket rustling and keys jingling. It all came together spectacularly.
Another neat thing happened while I was creating breathing sounds for the character. I wanted to see what my breath recordings sounded like through Omnisphere's granular engine, since it's served me very well throughout multiple projects now. By reducing the intensity and spreading the grains I managed to create very ghostly-sounding breaths. This can be heard at 0:33.
I realised that my overall mix was lacking a lot of high end. I'm not sure why I felt the need to cut so much out of so many sounds. I think to my ears it sounded more realistic, but in this case the realism was letting down the entertainment value. My new movement sounds have much more clarity, and I've had to bring the high-end content of the other sounds up to match, which has given the overall production a lot more sheen that I'm very happy with.
Something incredible I learned was how to make a sound appear to be behind the listener. It was as simple as flipping the phase of one side of a stereo channel. It's not perfect yet, and I'm sure there's a more sophisticated way that gives a better result. You can hear this at 0:37 as the character walks past the tank on his left and stands in front of the door. The tank should sound as though it moves behind you when the camera turns.
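The trick itself is one line of DSP. Here it is as a Python sketch on a stereo buffer, with the caveat that which channel you flip, and how convincing the "behind" illusion is, depends heavily on playback (it collapses badly to silence if the mix is summed to mono):

```python
import numpy as np

def flip_behind(stereo):
    """Invert the polarity of one channel of an (N, 2) stereo buffer.
    The out-of-phase image loses its centre and can read as coming
    from behind the listener."""
    out = stereo.copy()
    out[:, 1] *= -1.0  # flip the right channel; left is left untouched
    return out
```

Worth remembering the mono caveat: a fully phase-flipped pair cancels completely when the two channels are summed, which is one reason a "more sophisticated way" (proper HRTF processing) exists.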
The big things I have left to do are the sounds of the doors opening, the Necromorph growls and movements, and the gore sounds. I want to work on the jumpscare sound to make it more intense and add more environmental sounds to build tension, such as banging on the walls. I also need to find a way to record glass breaking, but there's a chance I may cheat a bit there and use a sound library. I don't have a treated and controlled space to do it in, and the vocal booth I built is far too small. Also, I'm worried my cat might tread on leftover broken glass. To make it less "cheaty" I'll use it as an opportunity to research the different places where sounds are available online. I already know many professionals use them, so it's important I know how to as well.
Doors, Sidechains & White Noise - Posted 8/3/2017
Today marked another pretty incredible discovery for me. Before I started working on my redub for the day I was conducting a substantial amount of research into sidechaining. This subject had always felt intimidating to me because it sounded very complex. While researching, I began to apply the techniques to a song I was mixing. I started with the most common application of sidechaining: using a compressor to duck a bassline to make room for the kick drum. Then I felt more adventurous, ducking rhythm guitars to make room for the snare drum and ducking the strings to make room for the vocals. The process was so incredibly simple, especially in Studio One, that I couldn't believe I had been avoiding it for so many years! The session peaked when I discovered I could create a fake snare tail using a channel of white noise and an expander. The expander would open up as the snare hit and then slowly close, creating a very nice decaying tail. Giving the noise a slight stereo spread filled out the drum section, and everything felt much more glued.
I moved onto my redub work for the day and tried to create the sound of the large mechanical doors. The task was proving incredibly difficult no matter what I tried. I had a number of mechanical and metallic sounds, but no matter how far down I pitched them or how I layered them, I just couldn't add a feeling of weight to the sound. A theory popped into my head: if I added white noise with the highs cut out underneath the metal, it would add the weight I was missing. But the thought of drawing in individual MIDI notes and adjusting the envelopes to be perfect felt like such a painstaking process (you can see where I'm going with this). Then the eureka moment struck. I could create a channel with a white noise generator that plays continuously but is blocked by an expander. Whenever the metal sounds play (the door moves), the expander not only lets the white noise through but also automatically adjusts the noise level depending on the amplitude of the metallic SFX bus it's sidechained to. It worked like an absolute charm! Suddenly my door sounded far closer to the real thing, and I had to do almost no mixing or editing whatsoever! The most incredible part was that I could then build the rest of the door by placing more metallic sounds, and they automatically sounded great! The heavier sounds opened the expander further and let more noise through, so more weight was automatically added to the sounds that were supposed to sound heavier. This was an unbelievable discovery in both sound design and workflow for me, and I plan to keep using this technique as often as possible to see how far it can go. I created a video documenting the discovery.
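Stripped of the DAW routing, the technique is an envelope follower on the metal bus multiplying a noise channel. Here's a toy Python version of that signal flow; the release coefficient and noise gain are invented numbers, and a real expander has a threshold, ratio and attack stage that I've collapsed into "instant attack, slow release" for clarity.

```python
import numpy as np

def sidechain_noise(metal_bus, release=0.9995, noise_gain=0.4):
    """Sketch of the door trick: follow the amplitude envelope of the
    metallic SFX bus and use it to open a white-noise channel, so
    heavier hits automatically let more noise (weight) through."""
    rng = np.random.default_rng(1)
    noise = rng.standard_normal(len(metal_bus))
    env = np.zeros(len(metal_bus))
    level = 0.0
    for i, s in enumerate(metal_bus):
        # instant attack, slow exponential release, like a side-chain detector
        level = max(abs(s), level * release)
        env[i] = level
    return noise * env * noise_gain
```

The self-balancing behaviour I described falls straight out of the maths: the noise layer's level is literally proportional to the metal layer's envelope, so louder hits get proportionally more weight with no extra automation.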
The White Noise Glass Test - Posted 9/3/2017
There has been a recurring saying throughout the sound design books I've been reading: the brain will believe what it sees (or something along those lines). I remember watching a presentation where the designer said, "I'm going to show you three videos of rain falling, but one of the videos has the sound of bacon frying instead". I'm paraphrasing, but you get the idea. Each video sounded so real I thought he was going to say that none of them were bacon frying. It turned out that ALL of them were bacon frying! My brain believed it was rain because of what my eyes were seeing, and I've discovered you can get away with some very sloppy sound design because of this (not that I wish to be sloppy).
In my video there are three instances where monsters break out of glass cryo chambers. I needed the sound of glass smashing, but I refused to download a sample online, and I had nowhere in my home where I felt comfortable smashing glass, let alone a treated area with enough floor space. I thought I might try clanging and scraping a drinking glass and then running it through granular synthesis, but it sounded too tonal. I added some white noise and a flanger to it, but the glass sounds themselves still weren't right. I listened to just the white noise and flanger and started to wonder if I could get away with omitting the glass recordings completely. I turned to my flatmate and asked, "Does this sound like glass smashing?". He wasn't looking at the screen when I played the sound and gave me a pretty firm "No". He said it sounded like paper tearing. He asked me to play it again, and this time he was watching the footage. His "No" immediately turned into a "Yes".
Well that was a short work day.
Making & Breaking Monsters - Posted 14/3/2017
It was time to attempt the sound effects for the monsters, or 'Necromorphs' as they're named in the game. There are two types of Necro, the 'Spitter' and the 'Slasher', and I felt it was important that they sounded different from each other. Spitters, as the name suggests, attack you by vomiting various fluids, so I imagined them having a very gargly sound. Fortunately, a few days prior I had stumbled upon somebody attempting something similar who created the sound by gargling water in front of the microphone. On top of this effect I layered general recordings of me making monster-like growls, pitched down. For the Slasher I removed the gargling sounds and pitched the voice layers down to a lesser degree. This gave me enough diversity between the two monsters.
Next I had to make some evisceration sounds, as a large part of the gameplay involves dismembering the Necromorphs' limbs to cripple them. I knew watermelons were very popular for gore noises, so I proceeded to smash one with a hammer without a second thought, as any respectable member of society would. I did this in my new vocal booth and managed to get melon bits everywhere, but the results were fantastic! They do still need some work, as the watermelon is flimsy by itself. I plan to record the sound of a wet cloth being dropped onto a hard surface to get a chunkier gore feel. Below is an edited sound file I made for the Ripper's evisceration.
The Music - Posted 18/3/2017
I had already created a lot of the ambient music at the beginning of the project and over time added some small things here and there. One rather happy accident was when I ran the sound of a door handle moving through Omnisphere's granular engine and it produced an incredibly eerie whining sound. Some of the clicks it made had a percussive element that added a lot of animation to the sound.
The main musical elements needed to happen when the monsters appeared. The concept I was going for was a sensory assault: screeching string instruments, thumping bass and a plethora of other noises that create a feeling of panic. I had recently watched a presentation by Mick Gordon, the composer for the 2016 'Doom' reboot. That game had a uniquely gritty soundtrack, and I was keen to learn how it came to fruition. He used a sine impulse pitched down to a sub frequency and sent it through four parallel FX chains of various distortion pedals, phasers, reverbs etc. The fourth chain was interesting: he miked up a mini amp and fed the mic signal back into the amp to create feedback. He called this his 'Doom Instrument', and by feeding rhythmic sine pulses into it and twisting the knobs it made a very unique, distorted sound. His philosophy behind this was "Change the process, change the outcome", and I'll always remember that. By changing the way he did things, his end product was unique. I tried to replicate something similar, but I have no hardware guitar pedals and very few software distortion plugins, so it didn't work to the same effect. However, I did manage to get a nice saturated bass sound that would become the foundation for my battle music. From here I layered in multiple strings with detuning modulation, bowed guitars and lots of ghostly whispering. It all came together into a swarm of audio assaulting the player's ears, and I was really happy with the outcome.
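The core of the "sine pulse into distortion" idea can be sketched in a few lines, minus the pedals and feedback loop. This is very much my loose software take, not Mick Gordon's rig: a tanh waveshaper standing in for a whole chain of distortion hardware, with every frequency, duration and drive value a guess.

```python
import numpy as np

SR = 44100  # sample rate

def doom_pulse(freq=45.0, pulses=4, pulse_dur=0.2, gap=0.1, drive=8.0):
    """Rhythmic sub-frequency sine bursts pushed through a tanh
    waveshaper, a crude stand-in for a distortion chain."""
    pulse_n = int(SR * pulse_dur)
    gap_n = int(SR * gap)
    t = np.arange(pulse_n) / SR
    # Windowed sub sine burst so each pulse starts and ends cleanly
    burst = np.sin(2 * np.pi * freq * t) * np.hanning(pulse_n)
    out = []
    for _ in range(pulses):
        out.append(np.tanh(burst * drive))  # saturate hard: adds harmonics
        out.append(np.zeros(gap_n))         # silence between pulses
    return np.concatenate(out)
```

The tanh stage is what turns a pure sub tone into something gritty: it clips the sine towards a square-ish wave, generating the odd harmonics that give the saturated bass its edge.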
Finalising Things - Posted 22/3/2017
There were very few things left to do before the final mix. The gore sounds needed a bit more love in a few places, especially the moment Isaac stamps on the dead Necromorph. It needed more weight, so I layered in more watermelon samples, a sub sine impulse and some detuned metallic clangs to make it more convincing. The main thing I had yet to complete was the jumpscare I had begun right at the start of the project. I still only had a synthesised sound, and while it was good it didn't have any human element to it. Luckily, a good friend from University is in a ridiculous metal band and happened to have WAV files of him and his bandmates screaming in pain! I wasn't in the least bit surprised. After layering them in and bitcrushing them for some extra grittiness, this was the final result...
From here it was just finalising the mix, which was pretty much already done, as I had been mixing the entire time anyway. I have to say though, what really made it pop in the end was adding a limiter to the master track. Everything became far more in-your-face without losing its spatialisation in the environment.