Production Plan - 8D Mix
Hardware Devices: Sony Stereo Headphones MDR-V500, and a MacBook Pro (16-inch, 2021).
Software: Ableton Live 11 Suite, Envelop for Live plug-in, latest VLC Player
Delivery Platform: YouTube 360
For this production I will create a spatialised audio experience in the style of other 8D mixes, and will later pair it with a video that suits the lyrical and sonic storyline. This 8D mix will feed into the Immersive Mix that follows, which will focus on enhancing the lyrical content through effects such as panning, direction, delay, and reverb, ultimately creating movement around the listener. Creative decisions about where that movement happens will require research into how human ears derive meaning from the sonic space around us. This research will then be used to plan a mix that effectively tells a story paralleling the lyrical content.
The plug-in I'll be using is Envelop for Live: not only was it the most compatible with Ableton (the DAW I'm using), it was also free. The only other plug-in I could find that was compatible with Ableton cost €80,000; here is a link to it:
The XP.1.11 >> https://www.xp4l.com/produit/xp/
The plug-in worked really well; the visual element makes it more user friendly, especially for an amateur Ableton user. The workflow setup is great: you only need to set up a bus/master and then start adding tracks.
The software has binaural playback, which "simulates a spatial scene when content is played back using headphones" (Slee, 2018). Ableton allows videos to be imported and synced to the music; the video acts as a separate track. The software includes a "B-Format Sampler" which "plays back existing Ambisonics wav files" (Kirn, 2019).
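To give a concrete sense of what B-format playback means, here is a rough sketch of first-order Ambisonic encoding. This is my own illustration of the general technique, not Envelop's code, and the function name is made up: a mono source at a given azimuth and elevation is spread across four channels, W, X, Y and Z.

```python
import math

def encode_first_order(sample, azimuth_deg, elevation_deg):
    """Encode a mono sample into traditional first-order B-format.

    W carries the omnidirectional component (scaled by 1/sqrt(2));
    X, Y, Z are figure-of-eight components pointing front, left, and up.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample / math.sqrt(2)
    x = sample * math.cos(az) * math.cos(el)
    y = sample * math.sin(az) * math.cos(el)
    z = sample * math.sin(el)
    return w, x, y, z

# A source dead ahead (azimuth 0, elevation 0) lands entirely in W and X.
w, x, y, z = encode_first_order(1.0, 0, 0)
```

Binaural playback then decodes these channels through head-related filters so that, over headphones, the source appears to sit at that angle around the listener.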
There are some great tutorials online, the first one really helped me understand the possibilities beyond what they gave as examples.
Here are a few links:
https://www.youtube.com/watch?v=iAHzJJhJVSQ - this one is from their own YouTube channel.
There isn’t much there, but other users of the software post a fair few tutorials.
Like this one: https://www.youtube.com/watch?v=Ki8JEykaN9k
The only problem I had at the start was that the program required Rosetta rather than Apple Silicon, but it was just a matter of changing the setting in Ableton's Get Info window. This took me less than five minutes to solve, and I am terrible when it comes to tech issues.
I will most definitely be using this for my 8D music mix, especially the extra effects the program offers, such as the delay and the Auraverb; I have yet to learn what the others do.
This SoundCloud link is just me trying out the program with some samples I pulled from Ableton.
Figure 1. Playing around with Envelop.
Planning My Mix
Since I wrote the song, I decided it was probably best to explore the storyline once again. The project was a great opportunity to further explore the themes I wrote about lyrically and tried to get across musically through harmony, drum pattern, bass riff, and so on. Taking it one step further by trying to paint the storyline sonically in 360 made me think about how I wanted to elicit certain emotional responses from listeners, and possible ways to go about it.
Preliminary Sound Design Plan
To undertake this creative project with the theme of fear in mind, I will draw on several areas of research. I will consider how film composers use sound to elicit fear through aesthetic means such as frequency, decibel and dynamic changes. In addition, I will consider how humans localise sounds and why this is an important element of our survival (Schnupp et al., 2010).
Song Structure & Initial ideas:
Want it to increase in volume in one or a few directions, so the listener doesn't know where it's coming from.
Think Jumanji drums.
Pan the drums in time with their rhythmic patterns behind the listener. I wanted to give the effect that something unknown was coming for the listener, similar to the way films elicit fear through "first-person perspective": the actor is staring at something in shock and horror, but the audience can't see what it is yet; we can only imagine (Miller, 2021).
Get the snare drum hit to act as the thing that jumps at you out of nowhere by quickly panning it in.
High, sustained notes to be kind of lost at the tail ends somewhere off in the distance.
Chords, and saxophone to follow a slow pan around.
Panning stays at the back, moving between the left and right behind the listener.
Pre-Chorus & Chorus
Moving slowly to the hard left and rights
Some movement with the synth sound, kind of like it's creeping in
The delay could be amplified in the panning
Chorus “rage” coming through the middle, like it's coming through the listener. Total envelopment at this small moment.
Then release the panning to the front, like it's ahead of them and out of reach, but slowly bring it back in.
I am currently waiting on the individual tracks from the mixing engineer. They should be sent through soon.
The master track
First recording - the 8D Mix
Unfortunately, I have not yet received the tracks from the mixing engineer. However, I still attempted to explore the sound design plans with the master track.
This SoundCloud link lets you hear the planning, mainly the panning, in practice.
Unfortunately, I ran into a couple of problems when trying to upload the 360 clip to YouTube to share the 8D mix. After downloading the videos that were 360 compatible, I tried downloading the Spatial Media Metadata Injector. However, the app kept crashing whenever I tried to open it. I troubleshot this with the help of some online sources (Mills, 2022) and took the following steps:
Tried to find the latest version of the Spatial Media Metadata Injector. I tried downloading every version I could find, but the crashing continued.
Downloaded the latest Apple software update, macOS Monterey 12.5.1. I had only recently bought my MacBook Pro (16-inch, 2021), so I didn't think this was the problem.
After updating and restarting, I downloaded the Spatial Media Metadata Injector again; it still kept crashing.
I tried deleting the app's preference cache, but could not find anything, as I had been deleting the files and emptying the trash after each crash.
I then tried downloading the Spatial Media Metadata Injector to my older Apple desktop computer, thinking it might only work with older Macs. My old desktop runs OS X 10.9.5. However, the Injector still crashed there.
After this I turned to a friend who owns a PC and tried downloading it there. It finally worked!
After injecting the metadata, I checked that it worked in VLC, and it did!
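The Injector works by writing a block of GSpherical XML metadata into the MP4 file. As a quick sanity check before uploading, you can scan the file's bytes for that tag. This is a crude sketch of my own, not part of the Injector; a proper tool would parse the MP4 box structure rather than searching raw bytes.

```python
# The XML tag the Spatial Media Metadata Injector writes into the file.
SPHERICAL_TAG = b"<GSpherical:Spherical>true</GSpherical:Spherical>"

def has_spherical_metadata(data: bytes) -> bool:
    """Crude check: does this MP4's raw data contain the spherical tag?"""
    return SPHERICAL_TAG in data

# Usage (hypothetical filename):
# with open("clip_injected.mp4", "rb") as f:
#     print(has_spherical_metadata(f.read()))
```

If the tag is missing, YouTube treats the upload as an ordinary flat video rather than 360.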
After all these attempts I was finally able to upload the 360 clip with the 8D mix to YouTube. This mix achieved better quality than the preliminary sound design plan I tested and uploaded to SoundCloud. The clip took a while to process on YouTube; however, online tutorials did advise that YouTube may take a while to recognise the clip as 360 (Downs, 2022).
Here is a link to the YouTube clip.
Next sessions will be recorded here as I move on to creating an Immersive Music Mix. I hope to work out the problems with distortion during panning, and I hope I can alleviate them once I get the individual tracks from the engineer.
Immersive Music Mix
Working on the drum ‘RUMBLE’ intro
For this session I worked intensely on the opening drum part, as I wanted the intro to be quite intense and really 'start the show'. I made a note to stick to the original aim of keeping the drums behind the listener in order to elicit the 'fear of the unknown', as put forward in my mixing plan.
To help make this happen I worked with Envelop's Source Panner, Multi-Delay and Convolution Reverb.
For the Source Panner I left the signal as stereo in order to get a bigger sound from the drum intro, as the two channels gave the sound much more natural reverb. I spread the channels apart to achieve sonic width. The delay and reverb effects helped move the sound around, as I felt the panner lost a lot of the frequency information. I was still not happy with the weakness of the sound, since it lost a lot of the bottom end, so I added an EQ and raised the amount of bass frequencies coming through. However, much as I liked the big bass sound, I remembered that when sounds are further away from us, or simply at a low volume, we recognise the higher frequencies first. This is supported by the research of Howard & Angus (2017), who note that human hearing is most sensitive in the audible mid-range, and that amplitude plays a huge role in how we perceive the mix of sounds around us (pp. 91-95). Therefore, it made sense to keep the thinner sound there in order to give the illusion that the sound was far away.

To make it feel as though the sound were coming closer, I knew I needed to record automation, because it was when I was moving the sound around and playing with reverb that I got closest to the effect I wanted. I looked online for a tutorial and found one by MusicTech (2020) which helped immensely. After experimenting, I ended up recording a change in the wet and dry signal of the Convolution Reverb: intensely wet at the start to give that far-away feeling, then drying out as the sound moved closer. I also automated the Source Panner slightly, moving the sound around at the back so that no matter where you turned, it was hard to find the sound source. One aspect of the Source Panner that I did not fully understand was the elevation trigger.
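The wet/dry automation I recorded can be pictured as a simple curve: fully wet when the drums should feel distant, drying out as they approach. Here is a minimal sketch of such a breakpoint ramp; it is illustrative only, since in practice I recorded this directly in Ableton rather than generating it.

```python
def wet_dry_ramp(num_steps, start_wet=1.0, end_wet=0.0):
    """Linear automation curve for a reverb's wet/dry mix:
    fully wet (distant-sounding) at the start, fully dry (close) at the end."""
    if num_steps < 2:
        return [end_wet]
    step = (end_wet - start_wet) / (num_steps - 1)
    return [start_wet + i * step for i in range(num_steps)]

curve = wet_dry_ramp(5)  # [1.0, 0.75, 0.5, 0.25, 0.0]
```

Five breakpoints give the gradual far-to-near transition described above; more steps simply smooth the same movement.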
When playing around with it I noticed the sound moving slightly up and down. I gathered that elevation relates not just to a position above or below the listener but, when interacting with the other triggers, to its position behind and in front as well. One study found that the "elevation tuning observed in low-level auditory cortex… is contingent on experience with spectral cues and covaries with the change in perception" (Trapeau & Schönwiesner, 2018). With this in mind, it seemed that changes in perceived elevation occur, and are most easily identified, as other parameters of the sound change. I settled on keeping the sound behind but slightly above the listener to give a looming effect, similar to a high-angle shot in film: "generally, a high angle is used to make the subject within the frame seem small, isolated, vulnerable or less powerful" (Into Film, 2021).
Painting the sonic landscape
For this session I worked on setting the scene, or rather, painting the sonic landscape. My last session worked on the one drum track 'Rumble', but with the influx of new material from all the separate tracks I needed to start thinking about each layer's potential to engage with the overall theme of fear. In a 360 video mix, foley sound is used to create the illusion of reality, as sounds are assigned to the objects in the video associated with them (Ament, 2022, p. 1). To create that same illusion, I needed to start thinking about the tracks as more than just musical layers. Instead, I needed to think about them as contextual factors of the space I was planning to place the listener in. To do this, I needed to start assigning tracks to their embodiment of a particular metaphorical object in space.
I received all the tracks for the Monstress song from the audio engineer/producer who was working on it with us a couple of years back. All the tracks are WAV files; I received 16 tracks in total (Figure 2).
Figure 2. Tracks received. The top row indicates the musical type of each track, and the bottom row the total number of tracks of that type.
After brainstorming for a while I decided to change the mixing plans in order to create tension and difference.
Updated Mixing Plans
Drums and keys to create space.
Bass stays stagnant and overarching as a foundational layer.
Sax lines to be quite spontaneous in their movement
Vocal line to also be spontaneous, more like a ghost.
Lots of delay on the sax line to start to set the mood for the chorus.
All tracks moving together except the bass, which stays as the foundational looming layer.
Movement to be slow around the listener, similar to other 8D mixes I’ve heard.
The other 8D mixes I had as reference tracks came from the following 8D mixer:
The aim was to make the instrumentation (bass, piano and drums) exist within the sonic environment as the context, whilst the voice acted as the antagonist and the saxophone as the sidekick: the Iago to Jafar.
I began by working with the drum parts. I had about six tracks with continuous sound to work with, including Rumble (from the intro). The other drum tracks came in only when the tune called for it, as musical colours; they were not present throughout these sections.
At first I added quite a lot of reverb and some delay effects, but after a few hours of tinkering I settled on reverb on a couple of tracks, leaving the others dry in the space. Since I had so many tracks to work with, it made no sense for both the drum room and the hi-hats to have reverb. When they both did, the sound was often quite distorted and too many audible frequencies came through, which distracted from the antagonist role of the vocal line. To create the context of the space as noted earlier, it became clear that the drum tracks were the perfect way to do it, especially since there were eight (Figure 2). After taking out the effects and keeping the drum lines moving in a fixed motion within the space, each pattern became the metaphorical furniture of the space. This in turn enhanced the roles of the vocal and sax lines as the antagonist and the sidekick, since they were the only overtly spontaneous moving parts in the mix.
When mixing the bass, I kept the space quite wide and back in order to achieve a big, overwhelming presence. Bass frequencies elicit fear and danger more than treble frequencies because "all humans are programmed to recognise… low sounds with large things, and high sounds with small things", so low sounds are more strongly associated with a potential predator (Sideways, 2015). Therefore, the bass in this song was mixed with the intention that it was like an approaching predator. I tried to give it a looming effect by keeping it back and wide, but still close enough that its presence could not be escaped. I did not have it moving within the space because, as planned, there had to be a foundational layer tying everything together, and the bass fit this role perfectly. I also EQ'd the bass, as it lost a fair bit of its punch whilst mixing in 8D.
I aimed to give the vocal line spontaneous movement around the listener, as though they were being interrogated. To get the effect I wanted, I refreshed my memory on recording automation; Ableton's (n.d.) website was a great help, as was a tutorial by MusicTech (2020). From these sources I learnt about the changes I could make to a sound's envelope and the ways to do it: through MIDI, manually drawing it in, different waves, and so on. After various attempts I finally settled on a recording that moved behind the listener. It took a while to get as close to the desired result as I could, because I needed to get used to the way it sounded in different spots of the plug-in's Source Panner. I tried to make it sound as though she were in your ear and then just strolling behind you, like a ghost, or a shadow.
The pre-chorus is where the saxophone had its moment. I kept its position within the space in the Source Panner, but included the Multi-Delay and the Convolution Reverb. At first it was just the delay, and I recorded automation for the amount and timing of the delay to suit the rhythm and dynamics of the melodic motif. After this I added the reverb and recorded automation for it to match the delay patterns I had created. Both of these effects worked really well in creating tension, which I needed to start building as the chorus approached; this was the moment to bring in a lot of movement.
Figure 3. Working on the Verse & Pre-chorus
Eye of the Tornado
In this session I worked on the chorus and aimed to evoke the feeling of a sonic tornado with the listener floating in the eye. I was incredibly intrigued by accounts of people who had survived after being caught in the eye. The following are excerpts from an article recounting the experiences of two survivors (Bryant, 2021):
"Once inside the swirling cloud, Keller said that everything was 'as still as death.' He reported smelling a strong gassy smell and had trouble breathing. When he looked up, he saw the circular opening directly overhead, and estimated it to be roughly 50 to 100 feet in diameter and about a half a mile high. The rotating cloud walls were made clearly visible by constant bursts of lightning that 'zigzagged from side to side.' He also noticed a lot of smaller tornadoes forming and breaking free, making a loud hissing noise."
"He claims to have seen green sheets of rain just before the tornado formed. After baseball-sized hail started coming down, he went inside. He then heard a loud rumbling followed by complete silence. The walls began to shake, and to his surprise, his roof was ripped away and thrown into the woods nearby. At this point, he looked up to find the tornado directly overhead. He described the inside as a smooth wall of clouds, with smaller twisters swirling around the inside before breaking free. Once again, non-stop lightning created a bluish light, enabling him to see everything clearly."
I took these accounts as a model to help inform the arrangement and flow of the chorus. To begin, I took out all the instruments just before the chorus began, for only about one and a half to two seconds. This was done to achieve the stillness described in the first person's account, and the silence that occurred just before the roof was ripped off the house in the second person's account. The vocals lead the listener into the chorus, and I started the vocal panning into rotation here. I placed the vocals up close and then quickly moved them away to the outer edges to make it seem as though they were being sucked away.
In order to create the effect of rotation I had the drums and piano moving together around the listener, whilst I had the sax and voice moving at a slightly different rate. However, the natural reverb of a lot of the sounds — especially the Rhodes piano sound — afforded much more space outwards to the listener. I kept the drums together in order to give the feeling of immediate chaos as everything I had set up in the context of the verse section was immediately stripped away from the listener. Even the antagonist and their sidekick had lost the ability to stay within range of the listener, their absence acting to better inform the chaos of this sonic world.
In order to achieve this rotation in Ableton, I created additional tracks with the same material but changed the settings in the E4L Source Panner to mono. For the drums and keys, I grouped them and then automated the slow rotation to match the music. I also did this for a separate saxophone track, but tried to slightly delay its rotation. For the vocals I allowed more movement by keeping the Source Panner mono but adding delay. I did this to make it seem as though the source of the vocal line were the sort of hero/five-star general of this world, and thus had much more of a presence. Using mono worked effectively to create the dramatic shifts in movement I wanted for the chorus section.
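The rotation automation can be thought of as a set of (time, azimuth) breakpoints, with the saxophone given a phase offset so it lags the drums-and-keys group while rotating at the same rate. A small sketch of that idea follows; the function and parameter names are my own illustration, not E4L's API.

```python
def rotation_automation(duration_s, period_s, step_s=0.5, phase_deg=0.0):
    """Breakpoints (time in s, azimuth in degrees) for a source slowly
    circling the listener. phase_deg offsets one source (e.g. the sax)
    relative to another rotating at the same rate."""
    points = []
    t = 0.0
    while t <= duration_s:
        az = (360.0 * t / period_s + phase_deg) % 360.0
        points.append((round(t, 3), round(az, 1)))
        t += step_s
    return points

drums = rotation_automation(8.0, period_s=8.0)                  # one full turn
sax = rotation_automation(8.0, period_s=8.0, phase_deg=-30.0)   # lags 30 degrees
```

A negative `phase_deg` starts the sax 30 degrees behind the drums-and-keys group, so its rotation trails theirs throughout the chorus.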
I kept the bass part the same as in the verse throughout this section, as I wanted to keep a foundational layer of envelopment. I liked that the sound wasn't placed low, and was glad I hadn't made it so during the verse, as I didn't want it to feel grounding. Mixes often place the bass at the back of a piece, which usually provides a sense of comfort. Placing the bass behind and slightly elevated, to give the looming effect desired earlier in the mixing process, removes that sense of comfort listeners are familiar with when listening to music.
To create the hissing inside this sonic tornado, I used the Casio audio layer that is introduced in the chorus as an additional colour. I wanted to add delay, as the accounts of hissing from small tornadoes forming and breaking free suggest a sound occurring all around the individual. However, I wasn't happy with the placement of the delay in the plug-in, so instead I added another track with the same audio, copied the automations, lowered the volume and slightly pushed it back in time to get the exact delay I wanted.
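The duplicate-track trick, copying the audio, lowering its volume and nudging it back in time, is effectively a single-tap delay. A toy sketch of the idea, my own illustration on a list of samples rather than real audio:

```python
def manual_delay(signal, delay_samples, level=0.5):
    """Mimic the duplicate-track trick: mix the original signal with a
    time-shifted, quieter copy of itself."""
    out = list(signal) + [0.0] * delay_samples
    for i, s in enumerate(signal):
        out[i + delay_samples] += level * s
    return out

mixed = manual_delay([1.0, 0.0, 0.0], delay_samples=2)
```

Here `manual_delay([1.0, 0.0, 0.0], delay_samples=2)` returns `[1.0, 0.0, 0.5, 0.0, 0.0]`: the original impulse plus a half-volume copy two samples later, which is exactly what duplicating and offsetting the track achieves.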
Let's hear a solo
After the chorus I went back to the same arrangement as the first verse, where the voice was the only track with spontaneous movement matching the lyrical content. This then led to another chorus, which also followed the tornado as a model. However, in this chorus the piano plays some extra bits of colour, and these were highlighted with extra movement around the listener.
Finally, we were up to the fantastic solo by Jenna. Just like the verse, every other track remained in position whilst the saxophone moved around freely. I didn't move it around too much, and kept the source close to the back of the listener in order to elicit the feelings of the unknown mentioned earlier. Also, as the sidekick to the antagonist, it made sense to have the sax line delayed, as it's almost as if it's laughing at the listener.
After finishing the mix, I cleaned up the balance by adjusting the volume levels on all the tracks. Once this was done, I injected the metadata into my chosen 360 video and uploaded the video to YouTube.
The Mix can be found here.
The Plug-In 'Envelop'
Overall, the Ableton Live plug-in Envelop was a great tool for creating an Immersive Music Mix. I would have preferred more tutorials online, either videos or written articles by the developers, to better understand how the effects function. There were a few moments when I tried to use some of their tools but hit dead ends because I couldn't understand how they worked. They also showed an LFO tool in their tutorial, but when I downloaded the plug-in, no LFO tool could be found. This would have been a fantastic addition, as I tried using Ableton's LFO but struggled to achieve the same results as heard in the tutorial video on Envelop's website. The Auraverb and Convolution Reverb were great effects: I could really hear the sound source moving freely around the virtual space. I did think I lost quite a bit of bass from the instruments, and I was constantly trying to counteract this with EQ. However, this may just be part of mixing in 360.
I really enjoyed working on the Immersive Mix; it provided me with far more creative possibilities than the 8D mix. Additionally, the whole process was a fantastic opportunity to stretch my creative potential, in that I had to connect musical ideas and lived experiences to form a sonic narrative. Usually while writing lyrics I think of a perspective rather than a storyline, but through the process of this mix I turned the perspective I had written from into a grander narrative. That narrative came about by working through other mediums whose traditional storytelling structures I was already familiar with: film and animation. Mixing in 360 is definitely a skill I will consider developing in the future, and future endeavours will look into working with ambisonic microphones.
Ableton. (n.d.). 21. Automation and Editing Envelopes. https://www.ableton.com/en/manual/automation-and-editing-envelopes/
Ament, V. T. (2022). The foley grail: the art of performing sound for film, games, and animation (3rd ed.). Routledge.
Bryant, C. W. (2021, April 16). What is it like in the eye of the tornado? HowStuffWorks. https://science.howstuffworks.com/nature/natural-disasters/eye-of-tornado.htm
Downs, W. (2022, March 5). How to post an Insta360 video to YouTube the easy way [Online video]. YouTube. https://www.youtube.com/watch?v=V895NteI1_w
Howard, D. M., & Angus, J. (2017). Acoustics and psychoacoustics. Taylor & Francis Group.
Into Film. (2021). 7 camera shots and angles to use in filmmaking. Future Learn. https://www.futurelearn.com/info/courses/sustainability-through-film/0/steps/266342
Kirn, P. (2019, May 30). Free Tools for Live Unlock 3D Spatial Audio, VR, AR. Ableton. https://www.ableton.com/en/blog/free-tools-live-unlock-3d-spatial-audio-vr-ar/
Miller, A. (2021, September 28). Seven Ways Horror Likes to Scare Its Audience. No Film School. https://nofilmschool.com/ways-horror-likes-scare-its-audience
Mills, J. (2022, February 9). Apps Crashing on macOS Monterey? Here's the Fix. Dr.Buho. https://www.drbuho.com/how-to/fix-crashing-apps-macos
MusicTech. (2020, February 12). Ableton Live Tutorials: Recording automation [Online video]. YouTube. https://www.youtube.com/watch?v=PgOfLEa7tq8
Schnupp, J., Nelken, I., & King, A. J. (2010). Auditory Neuroscience: Making Sense of Sound. MIT Press.
Sideways. (2015, August 27). How to make music sound scary [Online video]. YouTube. https://www.youtube.com/watch?v=S-u9YDDrTFo
Slee, M. (2018, April 20). System Overview. GitHub. https://github.com/EnvelopSound/EnvelopForLive/wiki/System-Overview
Trapeau, R., & Schönwiesner, M. (2018). The Encoding of Sound Source Elevation in the Human Auditory Cortex. The Journal of neuroscience : the official journal of the Society for Neuroscience, 38(13), 3252–3264. https://doi.org/10.1523/JNEUROSCI.2530-17.2018