Office hours: Tuesday 1-2 PM or Tuesday 4-5 PM, at the EPD office (#401) at 161 Mass Ave. Please email or call ahead.
Assignments and class notes will be posted to this blog: https://reactivemusic.net before or after each class. Search for "ep-341" to find the notes.
Examples, software, links, and references demonstrated in class are available for you to use. If there is something missing from the notes, please ask about it. This is your textbook.
Syllabus:
Prototyping is the focus. Max is a seed that has grown into music, art, discoveries, products, and entire businesses.
After you take the course, you will have developed several projects. You might design a musical instrument or a plugin. You will have opportunities to solve problems. But mostly you will have a sense of how to explore possibilities by building prototypes in Max. You will have the basic skills to quickly make software to connect things, and answer questions like, "Is it possible to make something that does x?"
You will become familiar with how other artists use Max to make things. You will be exposed to a world of possibilities – which you may embrace or reject.
We will explore a range of methods and have opportunities to use them in projects. We’ll look at examples by artists – asking the question: How does this work?
Success depends on execution as well as good ideas.
Topics: (subject to change)
Max
Reverse engineering
Transforming and scaling data
Designing user interfaces
Messages and communication: MIDI/OSC
Randomness and probability
Connecting hardware and other devices
Working with sensors, data, and APIs
Audio signal processing and synthesis
Problem solving, prototyping, portfolios
Plugins and Max for Live
Basic video processing and visualization
Alternative tools: Pd
Max externals
How to get ideas
Computers and live performance
Transcoding
Grading and projects:
Grades will be based on projects, several small assignments/quizzes, and class participation. Please see Neil Leonard's EP-341 syllabus for details. I encourage and will give credit for: collaboration with other students, outside projects, performances, independent projects, and anything else that will encourage your growth and success.
I am open to alternative projects – for example, if you want to use this course as an opportunity to develop a larger project or continue a work in progress.
Around the year 1700, several startup ventures developed prototypes of machines with thousands of moving parts. After 30 years of engineering, competition, and refinement, the result was a device remarkably similar to the modern piano.
What are the musical instruments of the future being designed right now?
New composition tools
Reactive music
Connecting things
Sensors
Voices
Brains
Notes:
predictions?
Ray Kurzweil’s future predictions on a timeline: http://imgur.com/quKXllo (The Singularity will happen in 2045)
Are there patterns in the ways that artists adapt technology?
For example, the Hammond organ borrowed ideas developed for radios. Recorded music is produced with computers that were originally designed as business machines.
Instead of looking forward to predict future music, let's look backwards and ask, "What technology needed to happen to make today's musical instruments possible?" The piano relies on a single escapement (1710) and later a double escapement (1821). Real-time pitch shifting depends on Fourier transforms (1822) and fast computers (~1980).
Artists often find new (unintended) uses for tools – the printing press, for example.
New pianos
The piano is still in development. In December 2014, Eren Başbuğ composed and performed music on the Roli Seaboard – a piano-style keyboard made of three-dimensional sensing foam:
Here is Keith McMillen’s QuNexus keyboard (with Polyphonic aftertouch):
Here are tools that might lead to new ways of making music. They won’t replace old ways. Singing has outlasted every other kind of music.
These ideas represent a combination of engineering and art. Engineers need artists. Artists need engineers. Interesting things happen at the confluence of streams.
Analysis, re-synthesis, transformation
Computers can analyze the audio spectrum in real time. Sounds can be transformed and re-synthesized with near zero latency.
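Here's a minimal sketch of the idea in Python with NumPy (the frame size, hop, and window are arbitrary choices; a real-time version would process one frame per audio vector):

import numpy as np

def stft(x, frame=1024, hop=256):
    # Analysis: window each frame of the signal and take its FFT.
    w = np.hanning(frame)
    return np.array([np.fft.rfft(w * x[i:i + frame])
                     for i in range(0, len(x) - frame, hop)])

def istft(spectra, frame=1024, hop=256):
    # Re-synthesis: inverse FFT each frame and overlap-add the results.
    y = np.zeros(hop * len(spectra) + frame)
    for n, spectrum in enumerate(spectra):
        y[n * hop : n * hop + frame] += np.fft.irfft(spectrum)
    return y

# Analyze a 440 Hz test tone, then rebuild it from its spectra.
sr = 44100
x = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
y = istft(stft(x))  # y approximates x, scaled by the window overlap

Transformations (pitch shifting, filtering, cross-synthesis) happen between the two steps, by modifying the spectra.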
Infinite Jukebox
Finding alternate routes through a song.
by Paul Lamere at the Echonest
Echonest has compiled data on over 14 million songs. This is an example of machine learning and pattern matching applied to music.
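Here's a rough sketch of how "alternate routes" can work, assuming each beat of the song has already been reduced to a feature vector (as the Echonest analysis provides); the threshold and jump probability are made-up tuning knobs:

import random
import numpy as np

def similar_beats(features, threshold=0.1):
    # Link each beat to other beats whose feature vectors are close.
    return {i: [j for j, b in enumerate(features)
                if i != j and np.linalg.norm(a - b) < threshold]
            for i, a in enumerate(features)}

def infinite_route(features, steps=100, jump_prob=0.2):
    # Walk through the song, occasionally branching to a similar beat.
    links = similar_beats(features)
    i, route = 0, [0]
    for _ in range(steps):
        if links[i] and random.random() < jump_prob:
            i = random.choice(links[i])    # jump to a similar-sounding beat
        else:
            i = (i + 1) % len(features)    # otherwise play the next beat
        route.append(i)
    return route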
Harmonic Model Plus Residual (HPR) – Build a spectrogram using STFT, then identify where there is strong correlation to a tonal harmonic structure (music). This is the harmonic model of the sound. Subtract it from the original spectrogram to get the residual (noise).
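A rough sketch of that subtraction, reusing the stft() helper above and assuming the fundamental frequency f0 is already known (a real implementation, like the sms-tools code from Serra's course, tracks f0 and peak locations per frame):

def harmonic_plus_residual(spectra, f0, sr=44100, frame=1024, n_harmonics=20):
    # Copy the bins near each harmonic of f0 into the harmonic model,
    # and zero those bins in the residual.
    harmonic = np.zeros_like(spectra)
    residual = spectra.copy()
    bin_hz = sr / frame                        # width of one FFT bin
    for h in range(1, n_harmonics + 1):
        k = int(round(h * f0 / bin_hz))        # bin index of harmonic h
        lo, hi = max(k - 2, 0), min(k + 3, spectra.shape[1])
        harmonic[:, lo:hi] = spectra[:, lo:hi]
        residual[:, lo:hi] = 0
    return harmonic, residual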
"Detecting Drops in EDM" – by Karthik Yadati, Martha Larson, Cynthia C. S. Liem, Alan Hanjalic at Delft University of Technology (2014) https://reactivemusic.net/?p=17711
Polyphonic audio editing
Blurring the distinction between recorded and written music.
What is the speed of electricity? The best round-trip latency (via fiber) from the U.S. east coast to the west coast is 70-80 ms. If you were jamming over the internet with someone on the opposite coast, it might be like being about 90 feet away from them in a field: sound travels roughly 1100 feet per second in air, and 0.08 s × 1100 ft/s ≈ 88 feet.
Global communal experiences – Bill McKibben – 1990 “The Age of Missing Information”
There is a quickening of discovery: internet collaboration, open source, Linux, GitHub, Raspberry Pi, Pd, SDR.
“Robots and AI will help us create more jobs for humans — if we want them. And one of those jobs for us will be to keep inventing new jobs for the AIs and robots to take from us. We think of a new job we want, we do it for a while, then we teach robots how to do it. Then we make up something else.”
“…We invented machines to take x-rays, then we invented x-ray diagnostic technicians which farmers 200 years ago would have not believed could be a job, and now we are giving those jobs to robot AIs.”
We instantly recognize people and animals by their voices. As artists, we work to develop our own voice. Voices contain information beyond words – think of R2D2 or Chewbacca.
There is also information between words: “Palin Biden Silences” David Tinapple, 2008: http://vimeo.com/38876967
Ableton Live example: Local file: Max/MSP: examples/effects/classic-vocoder-folder/classic_vocoder.maxpat
Max vocoder tutorial (in the frequency domain), by dude837 – Sam Tarakajian https://reactivemusic.net/?p=17362 (local file: dude837/4-vocoder/robot-master.maxpat)
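The rough idea behind a frequency-domain vocoder (this is a sketch of the general technique, not dude837's patch), reusing the stft()/istft() helpers above: impose the modulator's (voice's) frame-by-frame magnitude envelope onto the carrier's (synth's) spectrum.

def vocode(modulator, carrier):
    # Apply the modulator's magnitude envelope to the carrier's spectrum.
    M = stft(modulator)
    C = stft(carrier)
    n = min(len(M), len(C))
    env = np.abs(M[:n]) / (np.abs(M[:n]).max() + 1e-12)  # normalized envelope
    return istft(env * C[:n])  # carrier timbre, shaped by the voice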
Recall the Harmonic Model Plus Residual idea above: build a model of part of the signal and subtract it from the spectrum. This method was used to send secret messages during World War II. It's now used in cell phones to get rid of echo. It's also used in noise-canceling headphones.
– from X. Serra (2014), "Audio Signal Processing for Music Applications"
Low level vs. high level
Single events vs. groups of events
Combinations of descriptors
Order of events (Markov chains) – see the sketch after this list
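Here's a minimal sketch of the Markov-chain idea in Python (hypothetical note names, first-order chain): learn which event tends to follow which, then random-walk the transitions to generate a new sequence.

import random
from collections import defaultdict

def train(events):
    # Count first-order transitions: which event follows which.
    transitions = defaultdict(list)
    for a, b in zip(events, events[1:]):
        transitions[a].append(b)
    return transitions

def generate(transitions, start, length=8):
    # Random-walk the chain, weighted by observed frequency.
    out = [start]
    while len(out) < length and transitions[out[-1]]:
        out.append(random.choice(transitions[out[-1]]))
    return out

melody = ["C", "E", "G", "E", "C", "G", "E", "C"]
print(generate(train(melody), start="C"))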
Humans are very good at pattern recognition. Is it a survival mechanism? People who listen to music are very good at analysis. Compared to the abilities of an average child, computer music information retrieval has not yet reached the computational ability of a worm: https://reactivemusic.net/?p=17744
First test: Use sounds by querying “saxophone”, tag=”alto-sax” https://www.freesound.org/people/clruwe/sounds/119248/
I am using the same descriptors [0,2,9] that worked well in the previous section, with K=3. I tried various values of K with this analysis, and it always came out matching 'violin', which I think is correct.
In [26]: SA.classifySoundkNN("qs/saxophone/119248/119248_2104336-lq.json", "tmp", 33, descInput = [0,2,9])
This sample belongs to class: violin
Out[26]: 'violin'
Second test: I am trying out the freesound “similar sound” feature. Using one of the bassoon sounds I clicked “similar sounds” and chose a sound that was not a bassoon – “Bad Singer” (male).
Running the previous descriptors returned a match for violin. So I tried various other descriptors, and was able to get it to match bassoon consistently by using: [0,5,10] which are lowlevel.spectral_centroid.mean, lowlevel.spectral_contrast.mean.0, and lowlevel.mfcc.mean.0.
I honestly don't know the best strategy for choosing these descriptors, and tried to go with ones that seemed the least esoteric. The value of K does not seem to make any difference in the classification.
Here is the output:
In [42]: SA.classifySoundkNN("qs/175454/175454/175454_2042115-lq.json", "tmp", 13, descInput = [0,5,10])
This sample belongs to class: bassoon
Out[42]: 'bassoon'
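classifySoundkNN comes from the course's analysis code; here's a minimal sketch of what a k-nearest-neighbor classifier over descriptor vectors does (the example vectors and values are made up):

import numpy as np
from collections import Counter

def knn_classify(query, examples, k=3):
    # examples: list of (descriptor_vector, class_label) pairs.
    # Return the majority class among the k nearest vectors.
    nearest = sorted(examples,
                     key=lambda ex: np.linalg.norm(np.array(ex[0]) - np.array(query)))
    labels = [label for _, label in nearest[:k]]
    return Counter(labels).most_common(1)[0][0]

# Hypothetical descriptors: (spectral centroid, spectral contrast, mfcc0).
examples = [([1200.0, 0.4, -310.0], "violin"),
            ([ 800.0, 0.6, -280.0], "bassoon"),
            ([1150.0, 0.5, -305.0], "violin"),
            ([ 820.0, 0.7, -275.0], "bassoon")]
print(knn_classify([810.0, 0.65, -278.0], examples))  # likely "bassoon"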
Gibber by Charlie Roberts: http://gibber.mat.ucsb.edu/ (for demos, select all code, press <ctrl>-Enter to start, press <ctrl>-. to stop) (updated, but check GitHub)
The project was derived from computer technology, but the overall effect was that people would go into a mysterious room, for a minute, and when they emerged, they would be smiling and happy.
The Max patch is based on a tutorial by dude837 called “Automatic Silly Video Generator”
download
The patch at the download link in the video is broken, but the JavaScript code for the Max js object is intact. You can download the entire patch from the Max-projects archive: https://github.com/tkzic/max-projects folder: maxvine
Internet APIs
APIs (application programming interfaces) provide methods for programs (other than web browsers) to access Internet data. Any app that accesses data from the web uses an API.
For example, if you copy this URL into a web browser address bar, it will return a block of data in JSON format about the most popular videos on Vine: https://api.vineapp.com/timelines/popular
HTTP requests
An HTTP request transfers data to or from a server. A web browser handles HTTP requests in the background. You can also write programs that make HTTP requests. A program called "curl" runs HTTP requests from the terminal command line. Here are examples: https://reactivemusic.net/?p=5916
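Here's the same GET request as a sketch in Python (standard library only), digging into the JSON the way the js code later in these notes does. Note the Vine endpoint may no longer be live:

import json
from urllib.request import urlopen

# Fetch the "popular" timeline and parse the JSON response.
with urlopen("https://api.vineapp.com/timelines/popular") as response:
    body = json.load(response)

# Pull out the URL of the most popular video.
print(body["data"]["records"][0]["videoUrl"])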
Response data
Data is usually returned in one of 3 formats:
JSON
XML
HTML
JSON is the preferred format because it's easy to access the data structure.
Max HTTP requests
There are several ways to make HTTP requests in Max, but the best method is the js object. Here is the code that runs the GET request for the Vine API:
// Send an asynchronous GET request for the given URL.
function get(url)
{
    var ajaxreq = new XMLHttpRequest();
    ajaxreq.open("GET", url);
    // Call readystatechange() when the response arrives.
    ajaxreq.onreadystatechange = readystatechange;
    ajaxreq.send();
}

// Parse the JSON response and send the most popular
// video's URL out the left outlet of the js object.
function readystatechange()
{
    var rawtext = this._getResponseKey("body");
    var body = JSON.parse(rawtext);
    outlet(0, body.data.records[0].videoUrl);
}
The function: get() formats and sends an HTTP request using the URL passed in with the get message from Max. When the data is returned to Max, the readystatechange() function parses it and sends the URL of the most popular Vine video out the left outlet of the js object.
Playing Internet audio/video files in Max
The qt.movie object will play videos, with the URL passed in by the read message.
Unfortunately, qt.movie sends its audio to the system, not to Max. You can use Soundflower or another virtual audio routing app to get the audio back into Max.