Glacier Sounds

Overlapping loops of varying duration to represent natural cycles.


In October I collaborated with Wade Kavanaugh and Stephen P. Nguyen to compose and perform the sounds of a glacier for their installation at the Gem theatre in Bethel, Maine. The glacier was made from paper.

Wade and Stephen:


A time-lapse video of the project:

A time-lapse video of a similar project they did in Minnesota in 2005:

The approach was to take a series of ambient loops and organize them by duration. The longer loops would represent the slow movement of time. Shorter loops would represent events like avalanches. One-shot samples would represent quick events, like the cracking of ice.

It took several iterations to produce something slow and boring enough to be convincing. I used samples from Ron MacLeod’s Cyclic Waves library from Cycling 74. Samples were pitched down to imply largeness.

[screenshot]

Each vertical column in an Ableton Live set represents a time-frame of waves. That is, the far left column contains quick events and the far right column contains long cycle events. Left to right, the columns have gradually increasing cycle durations. I used a Push controller to trigger samples in real time as people walked through the theatre to see the glacier.

The theatre speakers were arranged in stereo but from front to back. Since the glacier was also arranged along the same axis, a slow auto-panning effect sent sounds drifting off into the distance, or vice versa. Visually and sonically there was a sense that the space extended beyond the walls of the theatre.
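The drifting effect can be sketched as a slow equal-power crossfade between the front and rear speaker feeds. This is a generic illustration in Python with NumPy, not the actual Live auto-pan device; the cycle length is an arbitrary choice:

```python
import numpy as np

def autopan(mono, sr, cycle_s=30.0):
    """Slowly pan a mono signal between two speaker feeds (front/rear).

    Equal-power panning: the pan angle sweeps 0..pi/2 on a slow sine,
    so front**2 + rear**2 always equals the input power.
    """
    t = np.arange(len(mono)) / sr
    theta = (np.pi / 4) * (1 + np.sin(2 * np.pi * t / cycle_s))
    front = np.cos(theta) * mono
    rear = np.sin(theta) * mono
    return front, rear

# example: a short test tone drifting between the speaker pairs
sr = 8000
tone = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
front, rear = autopan(tone, sr, cycle_s=0.5)
```

Because cos² + sin² = 1, the total power stays constant while the sound appears to move off into the distance.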

In the “control room” above the theatre… using Push to trigger samples and a Korg NanoKontrol to set panning positions of each track:


The performance lasted about 45 minutes. Occasionally the cracking of ice would startle people in the room. There were kids crawling around underneath the paper glacier. Afterwards we just let the sounds play on their own. A short excerpt:


Photographs by Rebecca Zicarelli.

Presets in Max for Live

How to use the Max preset object inside of M4L.

[screenshot]

There is some confusion about how to use Max presets in a M4L device. The method described here lets you save and recall presets with a device inside of a Live set, without additional files or dialog boxes. It uses pattrstorage. It works automatically with the Live UI objects.

It also works with other Max UI objects by connecting them to pattr objects.

It’s based on an article by Gregory Taylor:


Folder: presets

Patch: aaa-preset3.amxd

How it works:

Instructions are included inside the patch. You will need to add objects and then set attributes for those objects in the inspector. For best results, set the inspector values after adding each object.

Write the patch in this order:

A1. Add UI objects.

For each UI object:

  1. check link-to-scripting name
  2. set the long and short names to the actual parameter name

[screenshot]

A2. (optional) Add non-Live UI objects (i.e., Max UI objects)

For each object, connect the middle outlet of a pattr object (with a parameter name as an argument) to the left inlet of the UI object. For example:

[screenshot]

Then in inspector for each UI object:

  1. check parameter-mode-enable
  2. check initial-enable

[screenshot]

B. Add a pattrstorage object.

[screenshot]

Give the object a name argument, for example: pattrstorage zoo. The name can be anything; it’s not important. Then in the inspector for pattrstorage:

  1. check parameter-mode-enable
  2. check auto-update-parameter-initial-value
  3. check initial-value
  4. change the short name to match the long name

[screenshot]

C. Add an autopattr object

[screenshot]

D. Add a preset object

[screenshot]

In the inspector for the preset object:

  1. assign the pattrstorage object name from step B (zoo) to the pattrstorage attribute

[screenshot]


The preset numbers go from 1 to n. They can be fed directly into the pattrstorage object, for example if you want to use an external controller.

You can name the presets (slotnames). See the pattrstorage help file.

You can interpolate between presets. See the pattrstorage help file.

Adding new UI objects after presets have been stored

If you add a new UI object to the patch after pattrstorage is set up, you will need to re-save the presets with the correct setting of the new UI object. Or you can edit the pattrstorage data.



The Live Object Model in Max for Live

Introduction to getting, setting, and observing Live parameters with Max for Live.

[screenshot]


folder: lom

patch: aaa-lom-examples


This example device shows several ways of working with Ableton Live parameters in a M4L patch. It can be a confusing process, and there are many different ways to accomplish the same result.

The examples here will use the LOM (Live Object Model) directly, and via built-in LiveAPI abstractions and choosers – available from the context menu that appears when you <ctrl>-click inside an open patcher window.

[screenshot]

The snippet above shows how to continuously monitor Live’s tempo.

  • live.this.device sends out a bang when the device loads
  • the “path live_set” message tells live.path to get an id number for the current set. This id is sent to the right inlet of live.observer, telling it that we want to observe the current Live set
  • the “property tempo” message asks for the current tempo value
  • if the tempo changes, the value is automatically updated
live.object (set)

[screenshot]

The snippet above shows how to set Live’s tempo.

  • Get the Live set path id using the same method as in the observing example above
  • the “set tempo” message sends a tempo value to live.object
live.object (get)

[screenshot]

The snippet above shows how to get Live’s tempo.

  • Get the Live set path id using the same method as in the observing example above
  • the “get tempo” message requests the current tempo value from live.object
live.object (call)

[screenshot]

The snippet above shows how to start or stop the Live transport by calling functions.

  • Get the Live set path id using the same method as in the observing example above
  • the “call start_playing” message tells live.object to start the Live transport. “start_playing” is the name of a function built into the Live set.
LiveAPI abstractions

The LiveAPI abstractions provide convenient shortcuts for working with Live parameters. Copy them into your patch by <ctrl>-clicking inside an open (unlocked) patcher window and selecting “Paste from -> LiveAPI abstractions”.

[screenshot]

Observing the Live transport

[screenshot]

The snippet above shows how to monitor whether the Live transport is running.

  • paste the abstraction into your patch as explained above
Selecting the master track

[screenshot]

The snippet above shows how to select the Master track.

  • paste the abstraction into your patch as explained above
LiveAPI choosers

The LiveAPI choosers provide convenient shortcuts for selecting Live parameters from menu objects. Copy them into your patch by <ctrl>-clicking inside an open (unlocked) patcher window and selecting “Paste from -> LiveAPI choosers”.

[screenshot]

Setting device parameters remotely with live.remote

[screenshot]

The snippet above shows how to remotely set the volume of the master track.

  • paste the chooser into your patch as explained above
  • connect the left outlet of “substitute” to the right inlet of live.remote
  • send values to live.remote to change the selected parameter

Patches for Cycling 74 “Programming in Max for Live” videos

These are my versions of some of the patches, not Cycling 74’s. If you find official versions, please let me know. The list isn’t complete: the simpler (beginning) patches are not included, but the interesting ones are.

[screenshot]


folder: c74-video-tutorials


  • API-step-sequencer
  • dub-delay
  • poly-synth
  • velocity-sequencer
  • wobble-bass

In Live, create new Max for Live devices, as instructed in the videos – and then copy and paste from these patches into your new patches.



Hearing voices

A presentation for Berklee BTOT 2015

(KITT dashboard by Dave Metlesits)

The voice was the first musical instrument. Humans are not the only source of musical voices. Machines have voices. Animals too.

  • synthesizing voices (formant synthesis, text to speech, Vocaloid)
  • processing voices (pitch-shifting, time-stretching, vocoding, filtering, harmonizing)
  • voices of the natural world
  • fictional languages and animals
  • accents
  • speech and music recognition
  • processing voices as pictures
  • removing music from speech
  • removing voices


We instantly recognize people and animals by their voices. As artists, we work to develop our own voice. Voices contain information beyond words. Think of R2-D2 or Chewbacca.

There is also information between words: “Palin Biden Silences” David Tinapple, 2008:

Synthesizing voices

The vocal spectrum

What’s in a voice?

Singing chords

Humans acting like synthesizers.

More about formants
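A formant is essentially a resonance of the vocal tract. The idea can be sketched in Python with NumPy: excite a few two-pole resonators with an impulse train (a crude glottal source). The formant frequencies and bandwidths below are ballpark values for an “ah”-like vowel, not a vetted vowel model:

```python
import numpy as np

def resonator(x, sr, freq, bw):
    """Two-pole resonant filter: a crude model of one vocal-tract formant."""
    r = np.exp(-np.pi * bw / sr)                 # pole radius from bandwidth
    a1 = -2 * r * np.cos(2 * np.pi * freq / sr)
    a2 = r * r
    g = 1 - r                                    # rough gain normalization
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = (g * x[n]
                - a1 * (y[n - 1] if n >= 1 else 0.0)
                - a2 * (y[n - 2] if n >= 2 else 0.0))
    return y

sr = 8000
# glottal source: an impulse train at roughly 120 Hz
src = np.zeros(sr)
src[:: sr // 120] = 1.0
# sum of resonators at approximate "ah" formant frequencies
vowel = sum(resonator(src, sr, f, 80.0) for f in (700.0, 1100.0, 2600.0))
```

The spectrum of the result shows energy clustered around the formant frequencies, which is what gives a vowel its identity.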
Text to speech

Teaching machines to talk.


  • phonemes (unit of sound)
  • diphones (combination of phonemes) (Mac OS “Macintalk 3 pro”)
  • morphemes (unit of meaning)
  • prosody (musical quality of speech)
  • articulatory (anatomical model)
  • formant (additive synthesis) (Speak & Spell)
  • concatenative (building blocks) (Mac OS)

Try the ‘say’ command (in Mac OS terminal), for example: say hello

More about text to speech

Combining the energy of voice with musical instruments (convolution)

  • Peter Frampton “talkbox”: (about 5:42) – Where is the exciting audience noise in this video?
  • Ableton Live example: Local file: Max/MSP: examples/effects/classic-vocoder-folder/classic_vocoder.maxpat
  • Max vocoder tutorial (In the frequency domain), by dude837 – Sam Tarakajian (local file: dude837/4-vocoder/robot-master.maxpat)
More about vocoders
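The core idea of a channel vocoder can be sketched in Python with NumPy: measure the modulator’s energy in each frequency band, frame by frame, and impose those band envelopes on the carrier. This is a generic FFT-based sketch, not the patch from the tutorial above, and the band/frame sizes are arbitrary choices:

```python
import numpy as np

def channel_vocoder(modulator, carrier, n_bands=16, frame=1024, hop=256):
    """Impose the modulator's per-band envelopes on the carrier (crude FFT vocoder)."""
    win = np.hanning(frame)
    out = np.zeros(len(carrier))
    edges = np.linspace(0, frame // 2 + 1, n_bands + 1, dtype=int)  # band boundaries in bins
    for pos in range(0, min(len(modulator), len(carrier)) - frame, hop):
        m = np.fft.rfft(modulator[pos:pos + frame] * win)
        c = np.fft.rfft(carrier[pos:pos + frame] * win)
        for b in range(n_bands):
            lo, hi = edges[b], edges[b + 1]
            m_env = np.sqrt(np.mean(np.abs(m[lo:hi]) ** 2))          # modulator band energy
            c_env = np.sqrt(np.mean(np.abs(c[lo:hi]) ** 2)) + 1e-12
            c[lo:hi] *= m_env / c_env        # scale carrier band to the modulator's energy
        out[pos:pos + frame] += np.fft.irfft(c, n=frame) * win       # overlap-add
    return out

sr = 8000
t = np.arange(2 * sr) / sr
carrier = np.sign(np.sin(2 * np.pi * 110 * t))                 # buzzy square-wave carrier
gate = (np.sin(2 * np.pi * 2 * t) > 0).astype(float)
modulator = gate * np.random.default_rng(0).normal(size=2 * sr)  # gated noise as a stand-in voice
vocoded = channel_vocoder(modulator, carrier)
```

Where the modulator is silent the output is silent; where it is active, the carrier “speaks” with the modulator’s spectral envelope.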

Vocaloid

By Yamaha

(text + notation = singing)

Demo tracks:

Vocaloop device demo:

Processing voices


Pitch transposing a baby

Real time pitch shifting

Autotune: “T-Pain effect” (I-am-T-Pain by Smule), “Lollipop” by Lil Wayne, “Woods” by Bon Iver

Autotuna in Max 7

by Matthew Davidson

Local file: max-teaching-examples/autotuna-test.maxpat

InstantDecomposer in Pure Data (Pd)

by Katja Vetter

Autocorrelation: (helmholtz~ Pd external) “Helmholtz finds the pitch”

(^^ is input pitch, preset #9 is normal)

  • local file: InstantDecomposer version: tkzic/pdweekend2014/IDecTouch/IDecTouch.pd
  • local file: slicejockey2test2/slicejockey2test2.pd
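The autocorrelation idea behind pitch trackers like helmholtz~ can be sketched generically in Python with NumPy (a naive version, not Katja Vetter’s algorithm): the best-matching lag between a signal and a shifted copy of itself gives the period.

```python
import numpy as np

def autocorr_pitch(x, sr, fmin=50.0, fmax=1000.0):
    """Estimate fundamental frequency by picking the strongest autocorrelation lag."""
    x = x - np.mean(x)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..len(x)-1
    lo = int(sr / fmax)                                  # shortest allowed period
    hi = int(sr / fmin)                                  # longest allowed period
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

sr = 8000
t = np.arange(4000) / sr
estimate = autocorr_pitch(np.sin(2 * np.pi * 220 * t), sr)   # should land near 220 Hz
```

Real-world trackers add windowing, normalization, and octave-error correction; this shows only the core idea.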
Phasors and Granular synthesis

Disassembling time into very small pieces


Adapted from Andy Farnell, “Designing Sound”. Download these patches from folder: granular-timestretch

  • Basic granular synthesis: graintest3.maxpat
  • Time-stretching: timestretch5.maxpat
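The core overlap-add idea behind granular time-stretching can be sketched generically in Python with NumPy (this is not Farnell’s patch, just the principle): read short windowed grains from the source at a slowed-down rate, and write them out at the normal rate.

```python
import numpy as np

def granular_stretch(x, stretch=2.0, grain=1024, hop=256):
    """Stretch duration without changing pitch: overlap-add windowed grains
    whose read positions advance more slowly than the write positions."""
    win = np.hanning(grain)
    out_len = int(len(x) * stretch)
    out = np.zeros(out_len + grain)
    norm = np.zeros(out_len + grain)
    for out_pos in range(0, out_len, hop):
        src_pos = int(out_pos / stretch)          # read pointer moves at 1/stretch speed
        if src_pos + grain > len(x):
            break
        out[out_pos:out_pos + grain] += x[src_pos:src_pos + grain] * win
        norm[out_pos:out_pos + grain] += win
    norm[norm < 1e-8] = 1.0                       # avoid divide-by-zero in the gaps
    return out[:out_len] / norm[:out_len]

sr = 8000
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
stretched = granular_stretch(tone, stretch=2.0)   # twice as long, same pitch
```

A real time-stretcher also aligns grain phases to avoid artifacts; here the grains are simply overlap-added.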

More about phasors and granular synthesis
Phase vocoder

…coming soon

Sonographic sound processing

Changing sound into pictures and back into sound

by Tadej Droljc

(Example of 3d speech processing at 4:12)

local file: SSP-dissertation/4 – Max/MSP/Jitter Patch of PV With Spectrogram as a Spectral Data Storage and User Interface/basic_patch.maxpat

Try recording a short passage, then set bound mode to 4, and click autorotate

Speech to text

Understanding the meaning of speech

The Google Speech API

A conversation with a robot in Max

Google speech uses neural networks, statistics, and large quantities of data.

More about speech to text

Voices of the natural world

Changes in the environment reflected by sound

Fictional languages and animals

“You can talk to the animals…”

Pig creatures example:

  • 0:00 Neutral
  • 0:32 Single morphemes – neutral mode
  • 0:37 Series, with unifying sounds and breaths
  • 1:02 Neutral, layered
  • 1:12 Sad
  • 1:26 Angry
  • 1:44 More Angry
  • 2:11 Happy

What about Jar Jar Binks?


The sound changes but the words remain the same.

The Speech accent archive

Finding and removing music in speech

We are always singing.

Jamming with speech
Removing music from speech

by Xavier Serra and UPF

Harmonic Model Plus Residual (HPR) – Build a spectrogram using STFT, then identify where there is strong correlation to a tonal harmonic structure (music). This is the harmonic model of the sound. Subtract it from the original spectrogram to get the residual (noise).

[screenshots]

Settings for above example:

  • Window size: 1800 (SR / f0 * lobeWidth: 44100 / 200 * 8 = 1764)
  • FFT size: 2048
  • Mag threshold: -90
  • Max harmonics: 30
  • f0 min: 150
  • f0 max: 200
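A drastically simplified version of the harmonic/residual split can be sketched in Python with NumPy. This uses plain bin masking rather than the sinusoidal modeling in Serra’s sms-tools, so it is only an illustration of the subtraction idea:

```python
import numpy as np

def harmonic_residual(frame, sr, f0, n_harm=20, bw=30.0):
    """Crude HPR: zero out spectral bins near harmonics of f0 and
    resynthesize what's left -- the residual (noise) part of the frame."""
    n = len(frame)
    spec = np.fft.rfft(frame * np.hanning(n))
    freqs = np.fft.rfftfreq(n, 1.0 / sr)
    mask = np.ones(len(spec))
    for k in range(1, n_harm + 1):
        mask[np.abs(freqs - k * f0) < bw] = 0.0   # remove the harmonic model
    return np.fft.irfft(spec * mask, n=n)

sr = 8000
n = 2048
t = np.arange(n) / sr
rng = np.random.default_rng(0)
frame = np.sin(2 * np.pi * 200 * t) + 0.1 * rng.normal(size=n)
residual = harmonic_residual(frame, sr, f0=200.0)   # mostly just the noise remains
```

After subtracting the harmonic part, almost all of the tonal energy is gone and the residual is dominated by the noise component.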
Feature detection

  • time dependent
  • low-level features: harmonicity, amplitude, fundamental frequency
  • high-level features: mood, genre, danceability

Acoustic Brainz: (typical analysis page)

Essentia (open source feature detection tools)

Freesound (vast library of sounds): – look at “similar sounds”

Removing voices from music

A sad thought

Phase cancellation encryption

This method was used to send secret messages during World War II. It’s now used in cell phones to get rid of echo. It’s also used in noise-canceling headphones.
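The principle reduces to adding an inverted copy of the unwanted signal. A toy demonstration in Python with NumPy (not how any particular headphone implements it, since those must estimate the noise in real time):

```python
import numpy as np

rng = np.random.default_rng(0)
sr = 8000
t = np.arange(sr) / sr
wanted = np.sin(2 * np.pi * 440 * t)      # the signal we want to keep
noise = 0.5 * rng.normal(size=sr)         # unwanted sound, assumed known exactly
mic = wanted + noise                      # what the microphone picks up
anti = -noise                             # phase-inverted copy of the noise
cleaned = mic + anti                      # the noise cancels
```

In practice the anti-noise is only an estimate, so cancellation is never this perfect.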


Center channel subtraction

What is not left and not right?

Ableton Live – utility/difference device: (Alison Krauss example)

Local file: Ableton-teaching-examples/vocal-eliminator
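The utility/difference trick reduces to subtracting one channel from the other: anything mixed identically to both channels (the center) cancels, while side-panned material survives. A toy Python/NumPy version with stand-in signals:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8000
vocal = rng.normal(size=n)     # mixed to the center: identical in L and R
guitar = rng.normal(size=n)    # panned hard left
left = vocal + guitar
right = vocal
karaoke = left - right         # the center-panned vocal cancels
```

This is also why the trick fails on stereo reverb tails: anything not identical in both channels is left behind.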

More experiments


  • Why do most people not like the recorded sound of their voice?
  • Can voice be used as a controller?
  • How do you recognize voices?
  • Does speech recognition work with singing?
  • How does the Google Speech API know the difference between music and speech?
  • How can we listen to ultrasonic animal sounds?
  • What about animal translators?