sound design experimental foundations: just finished my sound course after many weeks – i took my time! and learned a lot – about the theory of listening, about producing, manipulating and designing sound, and about finding my way around the Ableton programme to create a sound story. next step: get to grips with my MIDI controller. anyway, below are my course notes – 6 sessions.
session one:
time and structure, planning sound composition: the aim is always to produce new sounds by any means necessary. studios of the past had limited means and required a lot of physical mixing, but could still create new sounds and compositions. they came up with new notation methods, playing methods, recording methods and new electronic means of creating sounds. now we use a laptop instead of a physical studio space. be open, organised and free to explore. start by downloading audacity and ableton live.
foundations for structuring your experiments: work efficiently, structure your working sessions, and change between software as needed. use a voice memo app to collect sounds from nature and transfer them to specific folders. make notes and notations, and record. a sketchbook is also useful – for ideas unmediated by the computer and its software. organise your folders. file management is important – it keeps your workflow fluid and lets ideas cross-fertilise. naming and versioning need to be considered.
going over media folder hierarchies: use a consistent system such as name plus number, with underscores and leading zeros.
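a sketch of what that could look like for my own library (the layout and names below are my invention, not the course's prescribed scheme):

```
sound_library/
  01_raw/
    fan_heater_001.wav
    typing_key_001.wav
    kettle_boiling_001.wav
  02_edited/
    fan_heater_001_loop_01.wav
  03_sessions/
    soundscape_story_v003.als
```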
auditory vantage point exercise: make 2 detailed lists of the sounds you hear, each with 20 sounds. take a position in your home/work environment and listen to everything you hear – inside. write down the first 10 sounds you hear, then the next 10, which will be harder. repeat this exercise in another place – outside. note the direction each sound comes from and whether it changes, and describe the sound. the brain is said to filter out most of the sounds around us to allow us to focus on tasks such as walking and talking. this exercise asks us to turn our active listening perception on and off, which takes practice. it also makes us aware of sounds when we are recording and collecting – the ones we want and the ones we do not.
exercise one: make a list of 20 things you hear in your space and 20 things you hear outside your space.
list one: fan heater, typing keys, branch against window, creak of chair, wall heater hum, computer fan, slight tinnitus in my ear, passing distant car, slight high pitched rattle like a spoon, cloth on my arm moving against my body, my breath in my nostrils, a trickle sound in a pipe in the corner, my knee shifts against my other knee, another hum of a car passing, a faint hiss above me, a sniff from my nose, my lips parting, my foot shifting on the rug, the sound of my chin rubbing against my collar, my nail scratching the keyboard.
list two: car passing, leaves of plants hitting against wall, a bird’s short tweet, distant dog barking, a truck passing, something like a wheel going over a puddle, the rumble of something (an engine?), wind shaking tree branches, wind chime plays a note, a longer bird tweet, my breath, ivy blowing against wall, plastic tag shaking on skip bag, my coat shifting as I turn, I sniff, my foot against the grass, then my foot moving against gravel, a plane overhead, the sound of my mouth opening, my pulse in my head, the distant chatter of people passing by the back lane.
session two:
early tape studio techniques: check out louis and bebe barron’s early analogue studios. new drum machines influenced production – pushing these to their limits began the ‘techno’ sound. film sound design also pushed the limits through experimentation, developing its own sonic personality. ideas lead the process and lead the way to innovation. John Cage was part of this experimental approach, as were those using recorded sounds from the everyday to mix and remix – a squeaking door by Pierre Henry (1963), and Otto Luening’s tape pieces adapting a flute timbre.
musique concrete theory: identify and then categorise sounds – a system of describing sound to communicate with yourself and others – it is important to name sounds. capture and manipulation of sounds – trains, crowds – which in the studio were sped up, distorted, cut etc. this affected popular music and film scores. pierre schaeffer’s treatise on musical objects is a book worth reading – it describes 4 ways of listening, the listening modes: to listen (actively and passively), aural perception (awareness that a sound has occurred – conscious or subconscious, the ear has no choice but to perceive), to hear (listening carefully to the sound itself and ignoring its source – a scientific listening), and comprehension (we focus on the meaning of the sound, a context is added, meaning is assigned). reduced listening is this deliberate attention to the sound itself, apart from its source. in creating sound we need to move between these modes. sampling can have this effect – making familiar sounds faster, distorted etc.
the cut bell: Schaeffer took a bell recording and cut off its attack so that it became unrecognisable.
the closed loop: Schaeffer splices and loops – speeding up to create tones and slowing down to create rhythms.
exercise two: when trying a new programme, scroll through all the dropdowns to see what is possible. open audacity and explore all the dropdown menus to see their capabilities. also look for commonalities – effects etc. that you are familiar with. always locate the ‘transport’ area – where you play and pause your sounds. the cursor tools are also important; the selection tool is the most used. it is important to have a good set of speakers or headphones for careful listening. the ‘onset’ transient is the beginning of the sound – try deleting it and see how that affects the sound. try the envelope tool to experiment further.
experimenting with the closed loop: learn to cut and paste a clip multiple times in audacity, which can make something concrete seem abstract and/or rhythmical. a little python sketch of the same idea is below.
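the closed loop outside audacity – my own sketch, assuming numpy and the soundfile package are installed (neither is a course tool), with a mono clip named in my own underscore style:

```python
import numpy as np
import soundfile as sf

# load a short mono clip from my library
clip, sr = sf.read("typing_key_001.wav")

loop = np.tile(clip, 40)        # splice the clip end-to-end, tape-loop style

# crude "speed up": take every 8th sample (no interpolation) -
# the rhythm of the repeats collapses towards a pitched tone
tone = loop[::8]

# crude "slow down": repeat every sample 4 times -
# the loop stretches out and reads as rhythm rather than tone
rhythm = np.repeat(loop, 4)

sf.write("closed_loop_tone_001.wav", tone, sr)
sf.write("closed_loop_rhythm_001.wav", rhythm, sr)
```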
exercise: revisit the list of sounds from session one and categorise them according to their listening mode.
building a sound library assignment: go back to your lists from assignment one and name and describe the sounds.
list and describe –
list one: fan heater (constant hum – two-toned), typing keys (light, high, changes in pitch depending on how a key is struck), branch against window (light, quiet), creak of chair (light, high, almost inaudible), wall heater hum (low, constant), computer fan (constant, low and quiet, sporadically louder when updating), slight tinnitus in my ear (when conscious of it), passing distant car (changes dynamics as it passes), slight high-pitched rattle like a spoon (not able to find the source – sounds like a vibration).
list two: car passing (changes in dynamics, louder than from inside), leaves of plants against wall (faint, textured), bird’s short tweet (staccato, high-pitched), distant dog barks (changes in dynamics as it passes, rhythmic), truck passing (low tone, changes in dynamics as it passes – a rumble), wheel going over puddle (short, high splash), rumble of something (an engine?), wind shaking tree branches (high, quiet, builds in loudness), wind chime plays a note (short, high), a longer bird tweet (like a rat-tat-tat), my breath (always there when conscious of it).
record –
record 3 short momentary sounds (impacts, quick sounds with a defined attack and release) – less than 2 seconds long, mono. examples: a tap on a desk, a pencil breaking, the pop of a balloon. mine: typing key 001, scissors point on table 001, finger tap on table 001 (all 2 sec, mono).
record 3 medium-short sounds with a clearly defined attack, decay, sustain and release envelope – between 1 and 3 seconds long, mono. examples: picking up keys, a fridge door opening, a single bite of an apple. mine: crunch bag 001, shades placed on table 001, can opening 001 (all 3 sec, mono).
record 3 medium sounds with some kind of internal complexity… sounds that consist of multiple sounds – between 2 and 5 seconds long, stereo. examples: opening a door (knob turn, door mechanism, door opening, sound on the other side of the door), taking a bite of food from a plate with a fork. mine: french door opening, closing fridge door 001, spray bottle 001 (all 3 sec, stereo).
record 3 long sustained sounds – ambiences of the site, machines that make sustained sounds – between 10 seconds and 1 minute long, stereo. mine: fan heater 001, kettle boiling 001 (both 10 sec, stereo).
session three:
electronic studio instruments: now software manipulates sound, but in the past knobs and oscillators did. we need to learn the manipulation techniques in our software to work towards our expressive sound goals. trial and error is important. midi controllers may help bridge hardware and software manipulation.
making a sawtooth from scratch with audacity: additive synthesis through audacity. a sawtooth wave, also known as a ramp wave, is made up of both odd and even harmonic partials. use the generate menu in audacity.
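the same additive idea sketched in python – a minimal version assuming numpy and soundfile are available (my own sketch, not the audacity method):

```python
import numpy as np
import soundfile as sf

sr, dur, f0 = 44100, 2.0, 110.0           # sample rate, seconds, fundamental (Hz)
t = np.arange(int(sr * dur)) / sr

# a sawtooth contains every harmonic partial, odd and even,
# each at amplitude 1/n relative to the fundamental
saw = np.zeros_like(t)
for n in range(1, 40):                    # 39 partials, all well below nyquist
    saw += np.sin(2 * np.pi * n * f0 * t) / n

saw /= np.abs(saw).max()                  # normalise to avoid clipping
sf.write("sawtooth_additive_001.wav", saw, sr)
```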
additive and subtractive synthesis: the creative application of filters to a sound source, by hand. there are 4 filter types – pass and shelf filters alter the amplitude of frequencies above or below a cutoff, while band and notch filters alter the amplitude around a centre frequency. you can carve away frequencies (subtractive) or add them (additive). pink noise is like white noise but with quieter high frequencies. use the equaliser (under effects) to carve away at the frequencies and create a new timbre.
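and a subtractive sketch to match – carving the highs off white noise with a low-pass filter (assuming scipy; the 800 Hz cutoff is just an illustrative value):

```python
import numpy as np
import soundfile as sf
from scipy.signal import butter, lfilter

sr = 44100
noise = np.random.randn(sr * 2) * 0.3            # 2 s of white noise, kept quiet

b, a = butter(4, 800 / (sr / 2), btype="low")    # 4th-order low-pass at 800 Hz
darker = lfilter(b, a, noise)                    # highs carved away: a duller timbre

sf.write("noise_lowpass_001.wav", darker, sr)
```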
speed change, reverse and doubling: use audacity to explore changing the speed of a sound, reversing the sound, equalising the sound, and doubling up – copying and pasting the sound over itself in multiple tracks.
experimenting with tape techniques using ableton: export each wave from audacity and import it into ableton, then experiment with the chorus effect and the modulation of each wave track.
assignment: take some clips from your sound library and play around with them in ableton – change speed, repeat, reverse etc. and upload them in a row. take the long sound clips and bring them in and out of focus through ableton. i completed this assignment – ‘the seamless mixing of worlds’ – and am getting more familiar with ableton.
session four:
self analysis – training your ear for production analysis: you need to be objective when listening – deciding what needs editing, what to leave as is, etc. our listening attention fatigues when editing, so we can sometimes miss something crucial.
directed listening – some strategies for re-engaging tired ears: directed listening can be trained like a muscle. use tools to help you, such as mute and solo, which focus your attention on your sound material and the tracks’ interactions with each other. also play with the levels of different aspects of your sound to see where it is at and where it can go if you choose. you can also use the EQ to create a frequency window – this allows you to focus on one layer of the sound by cutting out some frequency bands of the track.
ear reset techniques – recovering from detail fatigue: take a walk, step away, clear your head. switch to passive listening as opposed to active listening. change listening method – earphones, stereo, different speakers. change environment – listen in the home, outside, in a new venue, in a larger or smaller room/space. set a timer for working on your sound and take breaks after.
graphing form in example tracks: to counteract listening fatigue in your own work, look at the sound work of someone else – e.g. a sound effect from a film. start graphing all the sounds you hear and their interactions. list what you hear, then place the sounds on a graphic timeline showing when they come in, what plays together, etc. you can use coloured pencils, and a thickened line where a sound comes in strongly. this really helps you to analyse your own work.
markers to delineate form and time: use markers in the ableton timeline to show when sounds come in and out that might not be noticed otherwise. to do this in ableton, place your cursor at a point of attention and go to create > add locator. you can do this multiple times. you can assign these locators to different keys on your computer to get to these points easily and quickly – use the edit key map function to assign keys to the locator points.
assignment: possible sound worlds – write a short narrative of a possible character or event based on your sound library of edited and unedited sounds. a title plus 3-5 sentences.
assignment: create a 3-5 minute soundscape story composition based on your possible sound worlds story, using your sound library of edited and unedited sounds. think about figure and ground – what is to the fore and what plays in the background. build using your sounds rather than creating new sounds from scratch. your story can change as you edit, but make a note of what you change/add/erase from your story.
session five:
spectral balance – timbral orchestration: how the ear physically hears sound, and psychoacoustics – how we hear sound psychologically. find sounds that the ear usually ignores or that go unnoticed. now we further analyse how frequency and timbre affect our listening.
frequency range and sound spectrum: the human ear can hear and interpret sound between 20 hz and 20,000 hz – roughly 3 octaves below middle c to an octave above the highest note an instrument can play. western composition developed around the frequency range and timbre of each instrument, but digital and synthesized composition can exceed those frequencies and timbres. paying attention to the frequency range will make sure your composition is not mid-range heavy. it’s important to identify the highest and lowest frequency for each sound/voice/sound layer in your piece. the waveform of a sound really only shows you its loudness and not much more. the spectrogram view (in the dropdown menu in audacity) gives you more information about the sound – a better way to gauge the high and low frequencies than the amplitudes of the waveform.
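a quick way to eyeball the same thing outside audacity – a sketch assuming scipy and matplotlib are installed, using a clip name from my own library:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

sr, clip = wavfile.read("fan_heater_001.wav")    # a long clip from my library
if clip.ndim == 2:
    clip = clip.mean(axis=1)                     # fold stereo down to mono

f, t, sxx = spectrogram(clip, fs=sr)
plt.pcolormesh(t, f, 10 * np.log10(sxx + 1e-12), shading="auto")   # dB scale
plt.xlabel("time (s)")
plt.ylabel("frequency (Hz)")
plt.show()
```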
masking and precedence effect: sounds overlap and may occupy the same frequency range. when 2 sounds in the same frequency range and at the same volume are played at the same time, masking occurs. this happens all the time – for example in a restaurant, where you raise your voice to overcome it. it also occurs in your composition mix, and you will need to change something to overcome it – for example make one sound a little louder than the other, change the EQ of one, or pan them differently. these changes help the ear to differentiate the sounds. if 2 sounds are played within about 30 milliseconds of each other, they fuse into one and sound like a single altered timbre – this is the precedence or Haas effect, where the ear localises to whichever sound arrives first. beyond about 30 milliseconds we hear them separately. micro delays can make a sound pop or give it more weight. you can experiment with this in audacity – linking 2 tracks and playing around with the delay intervals.
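a little sketch of that 30 millisecond threshold, assuming numpy and soundfile (my own experiment, using one of my short mono clips):

```python
import numpy as np
import soundfile as sf

clip, sr = sf.read("finger_tap_on_table_001.wav")   # a short mono clip of mine

def delayed_pair(clip, sr, delay_ms):
    """stereo pair: the clip on the left, the same clip delayed on the right."""
    pad = np.zeros(int(sr * delay_ms / 1000))
    left = np.concatenate([clip, pad])
    right = np.concatenate([pad, clip])
    return np.stack([left, right], axis=1)

# under ~30 ms the copies fuse into one altered timbre;
# over ~30 ms the ear starts to hear two separate events
for ms in (5, 15, 30, 60):
    sf.write(f"haas_test_{ms:03d}ms.wav", delayed_pair(clip, sr, ms), sr)
```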
equal loudness curves: the ear does not hear all frequencies equally – very low and very high frequencies need more level to sound as loud as the mid range. this means that minor adjustments to the frequency content of a sound affect its ability to be heard.
spectral density: most sound compositions build up in the mid range, though bass and high frequencies can also build up. the meaning of a sound can blur our listening and hearing of sounds. we can make adjustments to allow certain sounds to come into focus and be heard. to do this we need to listen to our sound paying attention to one sound/frequency at a time – scan-listening one sound at a time, like listening to speech and only listening for the vowels or the T attacks. start exploring your sound mix by changing the EQ and frequency scale in audacity.
coursework: apply detailed attention to your first draft of your piece – address the spectrum of the total mix of your piece. use volume and equalisation in audacity to adjust the spectral mix of the frequencies in your piece. refine your piece through active listening. make a note of what you change and why. deep listening and analysis: things to focus on – volume, equalisation, panning. keep notes
refinement notes:
overall – my aim is to increase dynamics in terms of volume (greater distances between levels), equalisation (greater low and high frequencies, not all in the mid range) and panning from left to right, sound to sound.
1: change the graduated transition into the first sound by stretching out the fade in
2: change the volume of the second sound – lower it by 2 db
3: bring down the eq on the lower-frequency clips and bring up the eq on the alternating higher clips – a more up-and-down sequence
4: lengthen the fade in on the 3rd fan clip
5: halve the tempo on this 3rd fan clip
6: bring in the high staccato clip faster – more abrupt
7: decrease the volume of the underlying fan sound as the staccato clip plays
8: change the mid gain on this clip also – low shelf gain
9: lower the pitch on fan clip 5
10: increase the volume of fan heater 001 by 4 db to increase contrast in the piece
11: decrease the high frequencies in this clip also
12: lengthen the fade out of this loud clip so the volume dissolves sooner
13: bring up the eq on the fizzy sounds that come in at 10 – more zing
14: increase the volume of the whipping sound for greater contrast
15: change the transition to a slower fade in on the fan clip at 2.31
16: delete the last whip sound
17: lower the volume of the melodic clip
18: finally, go through the tracks and set some panning for the piece according to the impact of incoming sounds and the rhythm of the piece.
session six: the sound field – designing space
the sound stage: spatial depth and time are important components of sound design. this is referred to as the sound stage – the virtual space into which all the production elements are placed. like a real stage, it has 3 dimensions that need to be considered – left and right (panning), front to back (depth), and sometimes height (hard to achieve on headphones). consider your piece in terms of space, openness and intimacy. how do we create a spatial experience for our listeners?
panning vs balance: left and right space – placing mono sounds left or right. stereo sounds are already placed left and right, so we can only address their balance. mono is very effective and powerful and should be used carefully – dry mono sounds usually push themselves to the front of a mix because they come out of each speaker equally.
panning: mixers are getting more sophisticated. early devices moved sounds from left to right manually, using magnetically charged rings or spinning tables, to create surround sound. panning is now much easier using left/right controls, with the azimuth value given as 0 in the middle and – or + for left and right. it helps situate the listener relative to the object creating the sound, or suggests the listener’s movement. midi controllers are also useful for creating movement.
acoustic cues for space: whether the sound is mono or stereo, we hear binaurally – with our 2 ears. this lets us know whether a sound is coming from the left, the right or behind us. interaural time difference (itd) – if a sound comes from the right it reaches the right ear before the left, so we know where it is situated. interaural level difference (ild) – the sound is also louder in the nearer ear, which situates it too. reverberation situates sound as well – ‘worldising’, making a world around the sound: placing a dry sound in a space and letting it reverberate in the size and materiality of that space.
the interaction between frequency, reverb, delay and distance: it is important to know how a sound behaves in a free space – out in the open without material boundaries. understanding this helps us fake or set up distance in our mixes. sound waves take up physical space – high-frequency waves are shorter and take up less space than low-frequency waves, which are longer. sounds that are closer to you have a shorter distance to travel than sounds that are further away. we can use this to our advantage – for example, play a sound at the same time as another but with a delay on one, which suggests that it is further away.
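the arithmetic behind that, as a quick sketch (the speed of sound figure of roughly 343 m/s in air is my assumption, not a course figure):

```python
# wavelength = speed of sound / frequency
speed_of_sound = 343.0                 # m/s in air, approximately
for f in (100, 1000, 10000):
    print(f"{f} Hz -> {speed_of_sound / f:.3f} m")
# prints: 100 Hz -> 3.430 m, 1000 Hz -> 0.343 m, 10000 Hz -> 0.034 m
```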
graphing space: think of your sound piece as a physical space. work out where you want the source sounds to be placed in the space for the listener. consider which sounds can work together and which work separately in their own space. a good rule of thumb is to always follow a reverb with an eq, so that the reverb is controlled – reverbs sustain frequencies, and frequencies can build up and weigh down your mix.
producing depth and creating distance part 1: if you want a sound to appear to be coming from a distance, do these 3 things – decrease the loudness, roll off the high frequencies on the eq, and apply a tiny amount of delay (milliseconds). when working on space, save to a new document. try adjusting the dry/wet mix in the auto pan setting. play around with turning stereo and mono channels on and off to see how this affects the piece. look through the EQ eight settings and play around with the EQ levels.
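the three moves rolled into one function, as a hedged sketch (assuming numpy and scipy; the gain, cutoff and delay values are illustrative, not from the course):

```python
import numpy as np
from scipy.signal import butter, lfilter

def push_back(clip, sr=44100, gain=0.4, cutoff=3000.0, delay_ms=15):
    """make a mono clip read as further away: quieter, duller, slightly late."""
    b, a = butter(2, cutoff / (sr / 2), btype="low")   # roll off the highs
    duller = lfilter(b, a, clip) * gain                # drop the level
    pad = np.zeros(int(sr * delay_ms / 1000))
    return np.concatenate([pad, duller])               # tiny arrival delay
```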
producing depth and creating distance part 2: experiment with raising and lowering the send level on the reverbs, as well as the decay and diffusion levels (these are on the reverb device itself). work on tracks one at a time in this manner.
conclusion: the course tied together the history and practice of sound mixing and engineering – learning to structure sound pieces, record, manipulate sound files, and develop sound space and balance. listen closely and refine your sound piece – pay attention.

sound stage: sound placement – distance, middle distance, close, intimate, panning left right centre

sound story: graphing and plotting through sound