Machine learning library for Max and Pd.
By Ali Momeni and Jamie Bullock
https://github.com/cmuartfab/ml-lib
By Stefan Brunner at Cycling 74
https://cycling74.com/2014/12/19/music-hack-for-max-7-elevator-music-generator/
A presentation for Berklee BTOT 2015 http://www.berklee.edu/faculty
Around the year 1700, several startup ventures developed prototypes of machines with thousands of moving parts. After 30 years of engineering, competition, and refinement, the result was a device remarkably similar to the modern piano.
What are the musical instruments of the future being designed right now?
Ray Kurzweil’s future predictions on a timeline: http://imgur.com/quKXllo (The Singularity will happen in 2045)
In 1965 researcher Herbert Simon said: “Machines will be capable, within twenty years, of doing any work a man can do”. Marvin Minsky added his own prediction: “Within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved.” https://forums.opensuse.org/showthread.php/390217-Will-computers-or-machines-ever-become-self-aware-or-evolve/page2
Are there patterns in the ways that artists adapt technology?
For example, the Hammond organ borrowed ideas developed for radios. Recorded music is produced with computers that were originally designed as business machines.
Instead of looking forward to predict future music, let's look backwards and ask, "What technology had to happen to make today's musical instruments possible?" The piano relies upon the single escapement (1710) and later the double escapement (1821). Real-time pitch shifting depends on Fourier transforms (1822) and fast computers (~1980).
Artists often find new (unintended) uses for tools, like the printing press.
The piano is still in development. In December 2014, Eren Başbuğ composed and performed music on the Roli Seaboard – a piano keyboard made of three-dimensional sensing foam:
Here is Keith McMillen’s QuNexus keyboard (with Polyphonic aftertouch):
https://www.youtube.com/watch?v=bry_62fVB1E
Here are tools that might lead to new ways of making music. They won’t replace old ways. Singing has outlasted every other kind of music.
These ideas represent a combination of engineering and art. Engineers need artists. Artists need engineers. Interesting things happen at the confluence of streams.
Computers can analyze the audio spectrum in real time. Sounds can be transformed and re-synthesized with near zero latency.
Finding alternate routes through a song.
by Paul Lamere at the Echonest
Echonest has compiled data on over 14 million songs. This is an example of machine learning and pattern matching applied to music.
http://labs.echonest.com/Uploader/index.html
Try examples: "Karma Police", or search for "Albert Ayler".
“Mindblowing Six Song Country Mashup”: https://www.youtube.com/watch?v=FY8SwIvxj8o (start at 0:40)
Local file: Max teaching examples/new-country-mashup.mp3
Looking at music under a microscope.
First you have to separate them.
by Xavier Serra and UPF
Harmonic Model Plus Residual (HPR) – Build a spectrogram using STFT, then identify where there is strong correlation to a tonal harmonic structure (music). This is the harmonic model of the sound. Subtract it from the original spectrogram to get the residual (noise).
Settings for above example:
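The subtraction step can be sketched in a few lines of JavaScript. This is a simplified stand-in for Serra's actual sms-tools implementation, not his code; the spectrogram arrays here are made up:

```javascript
// Harmonic-plus-residual sketch: subtract the harmonic model's
// magnitude spectrum from the original, frame by frame. Whatever
// the harmonic model did not capture remains as the residual.
function residualSpectrum(originalMags, harmonicMags) {
  return originalMags.map(function (frame, i) {
    return frame.map(function (mag, bin) {
      // Clamp at zero: magnitudes cannot go negative.
      return Math.max(0, mag - harmonicMags[i][bin]);
    });
  });
}

// Toy example: one frame, four bins. The harmonic model explains
// most of bins 1 and 2; the residual keeps the rest.
var residual = residualSpectrum([[1, 9, 8, 2]], [[0, 8, 7, 0]]);
// residual → [[1, 1, 1, 2]]
```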
Finding the drop
“Detecting Drops in EDM” – by Karthik Yadati, Martha Larson, Cynthia C. S. Liem, Alan Hanjalic at Delft University of Technology (2014) https://reactivemusic.net/?p=17711
Blurring the distinction between recorded and written music.
by Celemony
http://www.celemony.com/en/start
A minor version of “Bohemian Rhapsody”: http://www.youtube.com/watch?v=voca1OyQdKk
“How Shazam Works” by Farhad Manjoo at Slate: https://reactivemusic.net/?p=12712, “About 3 datapoints per second, per song.”
Transforming music through pictures.
by Tadej Droljc
https://reactivemusic.net/?p=16887
(Example of 3d speech processing at 4:12)
local file: SSP-dissertation/4 – Max/MSP/Jitter Patch of PV With Spectrogram as a Spectral Data Storage and User Interface/basic_patch.maxpat
Try recording a short passage, then set bound mode to 4, and click autorotate
Spectral scanning in Ableton Live:
http://youtu.be/r-ZpwGgkGFI
Web browser is the new black
by Joe Berkowitz
http://www.noteflight.com/login
by Dinahmoe
http://labs.dinahmoe.com/plink/
What is the speed of electricity? 70-80 ms is the best round-trip latency (via fiber) between the U.S. east and west coasts. If you were jamming over the internet with someone on the opposite coast, it might be like being 100 feet away from them in a field (sound travels about 1100 feet per second in air).
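The comparison is simple arithmetic: treat the network latency as acoustic time-of-flight. A quick sketch:

```javascript
// How far away would a given latency "feel" acoustically?
// distance (feet) = latency (seconds) * speed of sound in air (feet/second)
function equivalentDistanceFeet(latencySeconds) {
  var speedOfSoundFtPerSec = 1100; // approximate, at room temperature
  return latencySeconds * speedOfSoundFtPerSec;
}

// An 80 ms coast-to-coast round trip is acoustically like standing
// about 88 feet apart -- roughly the "100 feet away in a field" comparison.
var feet = equivalentDistanceFeet(0.080);
```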
Global communal experiences – Bill McKibben – 1990 “The Age of Missing Information”
Computers finding meaning
https://reactivemusic.net/?p=9834
The Google speech API uses neural networks, statistics, and large quantities of data.
Making music from sounds that are not music.
by Katja Vetter
(InstantDecomposer is an update of SliceJockey2): http://www.katjaas.nl/slicejockey/slicejockey.html
Transforming motion into music
camera based hand sensor
“Muse” (Boulanger Labs) with Paul Bachelor, Christopher Konopka, Tom Shani, and Chelsea Southard: https://reactivemusic.net/?p=16187
Max/MSP piano example: Leapfinger: https://reactivemusic.net/?p=11727
local file: max-projects/leap-motion/leapfinger2.maxpat
Detecting motion from the Internet
https://reactivemusic.net/?p=5859
https://reactivemusic.net/?p=5786
MBTA bus data
Sonification of Mass Ave buses, from Harvard to Dudley
https://reactivemusic.net/?p=17524
https://reactivemusic.net/?p=12029
By Steve Hensley
Using Max/MSP/Jitter
local file: tkzic/stevehensely/shensley_maxvine.maxpat
By Christopher Konopka at future, music, technology
http://futuremusictechnology.com
Sensing motion with video using frame subtraction
by Adam Rokhsar
https://reactivemusic.net/?p=7005
local file: max-projects/frame-subtraction
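In Max this would be done with Jitter objects, but the core idea of frame subtraction fits in a few lines; a sketch on made-up grayscale pixel arrays:

```javascript
// Frame subtraction: motion shows up as the per-pixel difference
// between the current video frame and the previous one.
function frameDifference(current, previous) {
  return current.map(function (pixel, i) {
    return Math.abs(pixel - previous[i]);
  });
}

// Sum the differences to get a single "amount of motion" value,
// which can then be mapped to a musical parameter.
function motionAmount(current, previous) {
  return frameDifference(current, previous).reduce(function (a, b) {
    return a + b;
  }, 0);
}

// Toy 4-pixel grayscale frames: only the last pixel changed.
var prev = [10, 10, 10, 10];
var curr = [10, 10, 10, 50];
// motionAmount(curr, prev) → 40
```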
Music is stored all across the brain.
The Allen institute
https://reactivemusic.net/?p=17758
“Hacking the soul” by Christof Koch at the Allen institute
(An Explanation of the wiring diagram of the mouse brain – at 13:33) http://www.technologyreview.com/emtech/14/video/watch/christof-koch-hacking-the-soul/
A complete simulation of the nematode worm, in software, with a Lego body (302 neurons): https://reactivemusic.net/?p=17744
Harold Cohen’s algorithmic painting machine
https://reactivemusic.net/?p=17778
A perfect pitch pill? http://www.theverge.com/2014/1/6/5279182/valproate-may-give-humans-perfect-pitch-by-resetting-critical-periods-in-brain
Could we grow music producing organisms? https://reactivemusic.net/?p=18018
There is a quickening of discovery: internet collaboration, open source, Linux, GitHub, Raspberry Pi, Pd, SDR.
“Robots and AI will help us create more jobs for humans — if we want them. And one of those jobs for us will be to keep inventing new jobs for the AIs and robots to take from us. We think of a new job we want, we do it for a while, then we teach robots how to do it. Then we make up something else.”
“…We invented machines to take x-rays, then we invented x-ray diagnostic technicians which farmers 200 years ago would have not believed could be a job, and now we are giving those jobs to robot AIs.”
Kevin Kelly – January 7, 2015, reddit AMA http://www.reddit.com/r/Futurology/comments/2rohmk/i_am_kevin_kelly_radical_technooptimist_digital/
Will people be marrying robots in 2050? http://www.livescience.com/1951-forecast-sex-marriage-robots-2050.html
“What can you predict about the future of music” by Michael Gonchar at The New York Times https://reactivemusic.net/?p=17023
Jim Morrison predicts the future of music:
A presentation for Berklee BTOT 2015 http://www.berklee.edu/faculty
(KITT dashboard by Dave Metlesits)
The voice was the first musical instrument. Humans are not the only source of musical voices. Machines have voices. Animals too.
We instantly recognize people and animals by their voices. As artists, we work to develop our own voices. Voices contain information beyond words. Think of R2D2 or Chewbacca.
There is also information between words: “Palin Biden Silences” David Tinapple, 2008: http://vimeo.com/38876967
What’s in a voice?
Humans acting like synthesizers.
Teaching machines to talk.
Try the ‘say’ command (in Mac OS terminal), for example: say hello
Combining the energy of voice with musical instruments (convolution)
By Yamaha
(text + notation = singing)
Demo tracks: https://www.youtube.com/watch?v=QWkHypp3kuQ
Vocaloop device http://vocaloop.jp/ demo: https://www.youtube.com/watch?v=xLpX2M7I6og#t=24
Transformation
Pitch transposing a baby https://reactivemusic.net/?p=2458
Autotune: the “T-Pain effect” (I Am T-Pain by Smule), “Lollipop” by Lil Wayne, “Woods” by Bon Iver https://www.youtube.com/watch?v=1_cePGP6lbU
by Matthew Davidson
Local file: max-teaching-examples/autotuna-test.maxpat
by Katja Vetter
http://www.katjaas.nl/slicejockey/slicejockey.html
Autocorrelation: (helmholtz~ Pd external) “Helmholtz finds the pitch” http://www.katjaas.nl/helmholtz/helmholtz.html
(^^ is input pitch, preset #9 is normal)
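helmholtz~ uses a more refined method, but plain autocorrelation shows the idea: the lag at which a signal best matches a shifted copy of itself is its period. A sketch with a synthetic sine (the sample rate and lag range here are arbitrary):

```javascript
// Autocorrelation pitch sketch: score each candidate lag by how well
// the signal correlates with itself shifted by that lag.
function bestLag(signal, minLag, maxLag) {
  var best = minLag;
  var bestScore = -Infinity;
  for (var lag = minLag; lag <= maxLag; lag++) {
    var score = 0;
    for (var i = 0; i + lag < signal.length; i++) {
      score += signal[i] * signal[i + lag];
    }
    if (score > bestScore) {
      bestScore = score;
      best = lag;
    }
  }
  return best;
}

// A sine with a period of 100 samples; at 44100 Hz that is 441 Hz.
var signal = [];
for (var n = 0; n < 1000; n++) {
  signal.push(Math.sin(2 * Math.PI * n / 100));
}
var period = bestLag(signal, 50, 200); // → 100
var hz = 44100 / period;               // → 441
```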
Disassembling time into very small pieces
Adapted from Andy Farnell, “Designing Sound”
https://reactivemusic.net/?p=11385
Download these patches from: https://github.com/tkzic/max-projects (folder: granular-timestretch)
…coming soon
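Farnell's patches do the real audio work; the grain-scheduling arithmetic behind granular time-stretching can be sketched like this (the grain size and hop are arbitrary values, not taken from his patches):

```javascript
// Granular time-stretch sketch: chop the input into short overlapping
// grains, then replay them at a slower rate. Each grain keeps its
// original pitch, so duration changes without changing pitch.
function grainStarts(inputLength, grainSize, hop, stretch) {
  // Read positions advance through the input at 1/stretch speed,
  // while output grains stay "hop" samples apart.
  var starts = [];
  for (var readPos = 0; readPos + grainSize <= inputLength; readPos += hop / stretch) {
    starts.push(Math.floor(readPos));
  }
  return starts;
}

// Stretching 2x: roughly twice as many grains are read from the same
// one second of input, so the output lasts about twice as long.
var normal = grainStarts(44100, 2048, 512, 1).length;
var slow = grainStarts(44100, 2048, 512, 2).length;
```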
Changing sound into pictures and back into sound
by Tadej Droljc
https://reactivemusic.net/?p=16887
(Example of 3d speech processing at 4:12)
local file: SSP-dissertation/4 – Max/MSP/Jitter Patch of PV With Spectrogram as a Spectral Data Storage and User Interface/basic_patch.maxpat
Try recording a short passage, then set bound mode to 4, and click autorotate
Understanding the meaning of speech
A conversation with a robot in Max
https://reactivemusic.net/?p=9834
Google speech uses neural networks, statistics, and large quantities of data.
Changes in the environment reflected by sound
“You can talk to the animals…”
Pig creatures example: http://vimeo.com/64543087
What about Jar Jar Binks?
The sound changes but the words remain the same.
The Speech accent archive https://reactivemusic.net/?p=9436
We are always singing.
by Xavier Serra and UPF
Harmonic Model Plus Residual (HPR) – Build a spectrogram using STFT, then identify where there is strong correlation to a tonal harmonic structure (music). This is the harmonic model of the sound. Subtract it from the original spectrogram to get the residual (noise).
Settings for above example:
Acoustic Brainz: (typical analysis page) https://reactivemusic.net/?p=17641
Essentia (open source feature detection tools) https://github.com/MTG/essentia
Freesound (vast library of sounds): https://www.freesound.org – look at “similar sounds”
A sad thought
This method was used to send secret messages during World War II. It's now used in cell phones to get rid of echo. It's also used in noise-canceling headphones.
https://reactivemusic.net/?p=8879
max-projects/phase-cancellation/phase-cancellation-example.maxpat
What is not left and not right?
Ableton Live – utility/difference device: https://reactivemusic.net/?p=1498 (Allison Krause example)
Local file: Ableton-teaching-examples/vocal-eliminator
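The utility/difference trick reduces to one subtraction per sample; a sketch on made-up stereo sample values:

```javascript
// Vocal eliminator sketch: vocals are usually mixed to the center
// (equal in both channels), so subtracting the right channel from
// the left cancels them while keeping material panned to the sides.
function eliminateCenter(left, right) {
  return left.map(function (sample, i) {
    return sample - right[i];
  });
}

// A centered "vocal" at 0.5 in both channels disappears; a
// left-only element at 0.25 survives.
var left = [0.75, 0.5];
var right = [0.5, 0.5];
// eliminateCenter(left, right) → [0.25, 0]
```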
Questions
oggrx~ and oggtx~
by Robin Gareus
At Cycling 74 forum: https://cycling74.com/forums/topic/streaming-internet-radio-in-maxmsp/
I was able to receive mp3 files from a server in Max 6.1.8 using oggrx~. There doesn't appear to be transport control – so this would need to be built in for synchronization.
Unexpected find: The external uses “secret rabbit code” for resampling. So it works in Max. And we have the source code but not the i386 libs that were used to compile it.
There is no binary for v.7 of oggrx~.mxo, but there is one for v.6
I managed to get Robin Gareus' externals. They are available here, though they are unmaintained.
The binaries are still online at:
http://gareus.org/d/oggZmax-v0.6-i386.zip
http://gareus.org/d/oggZmax-v0.7-rc2-i386.zip
It’s been more than 3 years (OSX 10.5) since I last looked at it, it
should still work, but I don’t know. Please let me know if you encounter
any problems, so that I can warn others.
I don’t maintain this external anymore. I neither have a MAX/MSP
license, nor do I own any Apple devices. On the upside, complete
source-code is available from
By André Baltazar, Carlos Guedes, Bruce Pennycook, and Fabien Gouyon at INESC Porto, Portuguese Catholic University – School of the Arts, and UT Austin
https://www.academia.edu/824925/A_REAL-TIME_HUMAN_BODY_SKELETONIZATION_ALGORITHM_FOR_MAX_MSP_JITTER
Data.
Building a Max patch that displays, transforms, and responds to internet data.
Both available from Cycling 74 http://cycling74.com/
The Max patch is based on a tutorial by dude837 called “Automatic Silly Video Generator”
The patch at the download link in the video is broken – but the JavaScript code for the Max js object is intact. You can download the entire patch from the Max-projects archive: https://github.com/tkzic/max-projects (folder: maxvine)
APIs (application programming interfaces) provide methods for programs (other than web browsers) to access Internet data. Any app that accesses data from the web uses an API.
Here is a link to information about the Vine API: https://github.com/starlock/vino/wiki/API-Reference
For example, if you copy this URL into a web browser address bar, it will return a block of data in JSON format about the most popular videos on Vine: https://api.vineapp.com/timelines/popular
An HTTP request transfers data to or from a server. A web browser handles HTTP requests in the background. You can also write programs that make HTTP requests. A program called "curl" runs HTTP requests from the terminal command line. Here are examples: https://reactivemusic.net/?p=5916
Data is usually returned in one of three formats:
JSON is the preferred format because it's easy to access the data structure.
There are several ways to make HTTP requests in Max, but the best method is the js object: Here is the code that runs the GET request for the Vine API:
function get(url) {
    var ajaxreq = new XMLHttpRequest();
    ajaxreq.open("GET", url);
    ajaxreq.onreadystatechange = readystatechange;
    ajaxreq.send();
}

function readystatechange() {
    var rawtext = this._getResponseKey("body");
    var body = JSON.parse(rawtext);
    outlet(0, body.data.records[0].videoUrl);
}
The function: get() formats and sends an HTTP request using the URL passed in with the get message from Max. When the data is returned to Max, the readystatechange() function parses it and sends the URL of the most popular Vine video out the left outlet of the js object.
The qt.movie object will play videos, with the URL passed in by the read message.
Unfortunately, qt.movie sends its audio to the system, not to Max. You can use Soundflower, or another virtual audio routing app, to get the audio back into Max.
https://reactivemusic.net/?p=12570
There is a large archive of examples here: Internet sensors: https://reactivemusic.net/?p=5859
We will look at more of these next week. Here is a simple Max patch that uses the Soundcloud API: https://reactivemusic.net/?p=17430
Gokce Kinayoglu has written a java external for Max called Searchtweet: http://cycling74.com/toolbox/searchtweet-design-patches-that-respond-to-twitter-posts/
Many APIs require complex authentication, or money, before they will release their data. We will look at ways to access these APIs from Max next week.
There are API services that consolidate many APIs into one API. For example:
Look at the Max tutorial (built in to Max Help) called “Data : data scaling” It contains most of what you need to know to work with streams of data.
Using the Vine API patch that we built during the class as a starting point: Build a better app.
Ideas to explore:
This project will be due in 2-3 weeks. But for next week please bring in your work in progress, and we will help solve problems.
An example of the Soundcloud API in Max.
At the Cycling 74 Wiki
http://cycling74.com/wiki/index.php?title=MaxURL_SoundCloud
local version: tkzic/max teaching examples/souncloud-test
Twitter search for Max/MSP.
Uses mxj and the twitter4j library
By Gokce Kinayoglu
http://cycling74.com/toolbox/searchtweet-design-patches-that-respond-to-twitter-posts/
Note: installation instructions in readme.txt file with download. Requires copying of java classes and library files.