MBTA API in Max

Sonification of Mass Ave buses, from Nubian to Harvard.

Updated for Max 8 and Catalina

This patch requests data from the MBTA API to get the current locations of buses, using the Max js object. Latitude and longitude data are mapped to oscillator pitch. Data is polled every 10 seconds, but the results might be more interesting at a slower polling rate, because the updates don’t seem that frequent. And buses tend to stop a lot.
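
Here is a rough sketch of the kind of request mbta.js makes, written in the same style as the other js examples in these notes. It is an illustration, not the shipped code: the v3 /vehicles endpoint, the filter parameter, and the JSON field names are assumptions that should be checked against the MBTA API documentation.

// Sketch only – not the shipped mbta.js. Endpoint and field names are assumptions.
var url = "https://api-v3.mbta.com/vehicles?filter[route]=1&api_key=YOUR_KEY";

function poll()
{
    var req = new XMLHttpRequest();
    req.open("GET", url);
    req.onreadystatechange = done;   // callback runs when the response arrives
    req.send();
}

function done()
{
    var body = JSON.parse(this._getResponseKey("body"));
    // one message per bus: label, latitude, longitude
    for (var i = 0; i < body.data.length; i++) {
        var a = body.data[i].attributes;
        outlet(0, a.label, a.latitude, a.longitude);
    }
}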

Original project link from 2014: https://reactivemusic.net/?p=17524

MBTA developer website: https://www.mbta.com/developers

This project uses version 3 of the API. There are quality issues with the realtime data. For example, some bus stops are not associated with the route, the direction_id and stop_sequence data from the buses are often wrong, and buses that are not in service are neither removed from the vehicle list nor flagged as such.

The patch uses a [multislider] object to graph the position of the buses along the route – but due to the data problems described above, the positions don’t always reflect the current latitude/longitude coordinates or the bus stop name.

download

https://github.com/tkzic/internet-sensors

folder: mbta

patches:

  • mbta.maxpat
  • mbta.js
  • poly-oscillator.maxpat
authentication

You will need to replace the API key in the message object at the top of the patch with your own key, or you can probably just remove it. The key distributed with the patch is fake. You can request your own developer API key from the MBTA. It’s free.

instructions
  • Open mbta.maxpat
  • Open the Max console window so you can see what’s happening with the data
  • Click the yellow [getstops] message to get the current bus stop data
  • Toggle the metro (at the top of the patch) to start polling
  • Turn on the audio (click speaker icon) and turn up the gain

Note: there will be more buses running during rush hours in Boston.  Try experimenting with the polling rate and ramp length in the poly-oscillator patch. Also, you can experiment with the pitch range.

Spotify segment analysis player in Max

Echo Nest API audio analysis data is now provided by Spotify. This project is part of the internet-sensors project: https://reactivemusic.net/?p=5859

There is an older version here using the discontinued Echo Nest API: https://reactivemusic.net/?p=6296

Note:  Last tested 2024/01/21

The original analyzer document by Tristan Jehan can be found here (for the time being):  https://web.archive.org/web/20160528174915/http://developer.echonest.com/docs/v4/_static/AnalyzeDocumentation.pdf

This implementation uses node.js for Max instead of Ruby to access the API. You will need to set up a developer account with Spotify and request API credentials. See below.

Other than that, the synthesis code in Max has not changed. Some of the following background information and video is from the original version.

What if you used that data to reconstruct music by driving a sequencer in Max? The analysis is a series of time-based quanta called segments. Each segment provides information about timing, timbre, and pitch – roughly corresponding to rhythm, harmony, and melody.
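
For reference, a single segment in the analysis response looks roughly like the object below. The field names follow the Spotify audio-analysis documentation; the values here are invented for illustration.

// One segment from the audio-analysis response (values invented for illustration).
// "pitches" is a 12-element chroma vector (C through B, each 0–1);
// "timbre" is 12 abstract spectral-shape coefficients; times are in seconds.
var segment = {
    start: 12.345,
    duration: 0.287,
    confidence: 0.82,
    loudness_start: -23.1,
    loudness_max: -11.4,
    loudness_max_time: 0.05,
    pitches: [0.9, 0.1, 0.05, 0.3, 0.7, 0.1, 0.02, 0.6, 0.1, 0.2, 0.05, 0.1],
    timbre: [42.1, 18.7, -5.3, 0.9, 12.0, -7.2, 3.3, 1.1, -2.0, 4.5, 0.2, -1.8]
};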

spotify-synth1.maxpat

download

https://github.com/tkzic/internet-sensors

folder: spotify2

files

main Max patch
  • spotify-synth1.maxpat
abstractions and other files
  • polyvoice-sine.maxpat
  • polyvoice2.maxpat
node.js code
  • spot1.js
node folders and infrastructure
  • /node_modules
  • package-lock.json
  • package.json
dependencies:
  • You will need to install node.js
  • the node package manager will do the rest – see below.

Note: Your best bet is to just download the repository, leave everything in place, and run it from the existing folder

authentication

You will need to sign up for a developer account at Spotify and get an API key. https://developer.spotify.com/documentation/general/guides/authorization-guide/

Edit spot1.js, replacing the clientID and clientSecret with your Spotify credentials.
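
If you are curious what those credentials are used for, here is a minimal sketch of the client-credentials flow and the audio-analysis request in plain node (Node 18+ for the built-in fetch). It is an illustration of the API calls, not necessarily how spot1.js is organized.

// Minimal sketch of the Spotify client-credentials flow – an illustration,
// not necessarily how spot1.js is structured. Requires Node 18+ (built-in fetch).
const clientID = "YOUR_CLIENT_ID";          // from the Spotify developer dashboard
const clientSecret = "YOUR_CLIENT_SECRET";

async function getToken() {
    const res = await fetch("https://accounts.spotify.com/api/token", {
        method: "POST",
        headers: {
            "Authorization": "Basic " + Buffer.from(clientID + ":" + clientSecret).toString("base64"),
            "Content-Type": "application/x-www-form-urlencoded"
        },
        body: "grant_type=client_credentials"
    });
    return (await res.json()).access_token;
}

async function getAnalysis(trackId) {
    const token = await getToken();
    const res = await fetch("https://api.spotify.com/v1/audio-analysis/" + trackId, {
        headers: { "Authorization": "Bearer " + token }
    });
    return res.json();    // contains sections, bars, beats, segments, ...
}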

node for max install instructions (first time only)

  •  Open the Max patch: spotify-synth1.maxpat
  • Scroll the patch over to the far right side until you see the green panel

  • Click the [script npm init] message – this initializes the node infrastructure in the current folder
  • Then click each of the two [script npm install] messages – this installs the necessary libraries

Instructions

  •  Open the Max patch: spotify-synth1.maxpat
  •  Click the green [script start] message
  • Click the Speaker icon to start audio
  • Click the first dot in the preset object to set the mixer settings to something reasonable
  • Open the Max Console window so you can see the Spotify API data
  • From the 2 menus at the top of the screen, select an Artist and Title that match, for example: Albert Ayler and “Witches and Devils”
  • Click the [analyze] button – the console window should fill with interesting data about your selection.
  • Click [play]
  • Note: if you hear a lot of clicks and pops, reduce the audio sample rate to 44.1 kHz.
Alternative search method:

Enter an artist and song title in the text boxes, then press the buttons for title and artist, then press the /analyze button. If it works you will see messages in the terminal window and the Max console, and the time in seconds in the upper right corner of the patch.

troubleshooting

If there are problems with the analysis, it’s most likely due to one of the following:

  • artist or title spelled incorrectly
  • song is not available
  • song is too long
  • API is busy
Mixer controls

The mixer channels, from left to right, are:

  • bass
  • synth (left)
  • synth (right)
  • random octave synth
  • timbre synth
  • master volume
  • gain trim
  • HPF cutoff frequency
You can also adjust the reverb decay time and the playback rate. Normal playback rate is 1.

programming notes

Best results happen with slow abstract material, like the Miles (Wayne Shorter) piece above. The bass is not really happening. Lines all sound pretty much the same. I’m thinking it might be possible to derive a bass line from the pitch data by doing a chordal analysis of the analysis.

Here are screenshots of the Max sub-patches (the main screen is in the video above)

Timbre (percussion synth) – plays filtered noise:

Random octave synth:

Here’s a Coltrane piece, using roughly the same configuration but with sine oscillators for everything:

There are issues with clicks on the envelopes and the patch is kind of a mess but it plays!

Several modules respond to the API data:

  • tone synthesizer (pitch data)
  • harmonic (random octave) synthesizer (pitch data)
  • filtered noise (timbre data)
  • bass synthesizer (key and mode data)
  • envelope generator (loudness data)

Since the key/mode data is global for the track, bass notes are probable guesses. This method doesn’t work for material with strong root motion or a variety of harmonic content. It’s essentially the same approach I use when asked to play bass at an open mic night.
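
As an illustration of that kind of guessing (my own sketch, not the logic in the patch), you could pick bass notes from chord tones of the global key, weighted toward the root, using the analysis convention of key as a pitch class (0 = C) and mode 1 = major, 0 = minor:

// Illustration only – not the code in the patch. Picks a bass pitch from chord
// tones of the track's global key/mode, weighted toward the root and fifth.
function guessBassNote(key, mode)            // key: 0–11 (0 = C), mode: 1 = major, 0 = minor
{
    var third = (mode === 1) ? 4 : 3;        // major or minor third
    var choices = [0, 0, 0, 7, 7, third];    // root-heavy weighting
    var interval = choices[Math.floor(Math.random() * choices.length)];
    return 36 + ((key + interval) % 12);     // MIDI note in a low octave
}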

additional notes

Now that this project is running again, I plan to write additional synthesizers that follow more of the spirit of the data. For example, distinguishing strong pitches from noise.

I would also like to make use of the [section] data as well as the rhythmic analysis. There is an amazing amount of potential here.

Glacier Sounds

Overlapping loops of varying duration to represent natural cycles.


In October I collaborated with Wade Kavanaugh and Stephen P. Nguyen to compose and perform the sounds of a glacier for their installation at the Gem theatre in Bethel, Maine. The glacier was made from paper.

Wade and Stephen:


A time-lapse video of the project:

A time-lapse video of a similar project they did in Minnesota 2005:

The approach was to take a series of ambient loops and organize them by duration. The longer loops would represent the slow movement of time. Shorter loops would represent events like avalanches. One-shot samples would represent quick events, like the cracking of ice.

It took several iterations to produce something slow and boring enough to be convincing. I used samples from Ron MacLeod’s Cyclic Waves library from Cycling 74 (https://www.ableton.com/en/packs/cyclic-waves/). Samples were pitched down to imply largeness.


Each vertical column in an Ableton Live set represents a time-frame of waves. That is, the far left column contains quick events and the far right column contains long cycle events. Left to right, the columns have gradually increasing cycle durations.  I used a Push controller to trigger samples in real time as people walked through the theatre to see the glacier.

The theatre speakers were arranged in stereo but from front to back. Since the glacier was also arranged along the same axis, a slow auto-panning effect sent sounds drifting off into the distance, or vice versa. Visually and sonically there was a sense that the space extended beyond the walls of the theatre.

In the “control room” above the theatre… using Push to trigger samples and a Korg NanoKontrol to set panning positions of each track:


The performance lasted about 45 minutes. Occasionally the cracking of ice would startle people in the room. There were kids crawling around underneath the paper glacier. Afterwards we just let the sounds play on their own. A short excerpt:

 

Photographs by Rebecca Zicarelli.

Synscape

A soundscape that responds to color.

By Helen Trevillion

The Max patch is not available. From the video it appears that many channels of sound are playing concurrently. Color values are assigned to faders for each channel.

Boids sonification in Max

“Boids is a bird flight and animal flock simulator. It is based on the same algorithm which was used in “Jurassic Park” for herding dinosaurs.”

Max external by Singer, Jasch, Sier and Smith. Tutorial by dude837

download

https://github.com/tkzic/max-projects

folder: boids

project: boids23

patch: main-tz.maxpat (slight modification to enable existing presets to work)


externals
  • Download version 1.1 from http://s373.net/code/ (in the section called “boids for max”)
  • Then add the path to the downloaded folder in Max’s file preferences (Options | File Preferences)


New musical instruments

A presentation for Berklee BTOT 2015 http://www.berklee.edu/faculty 


Around the year 1700, several startup ventures developed prototypes of machines with thousands of moving parts. After 30 years of engineering, competition, and refinement, the result was a device remarkably similar to the modern piano.

What are the musical instruments of the future being designed right now?

  • new composition tools,
  • reactive music,
  • connecting things,
  • sensors,
  • voices, 
  • brains

Notes:

predictions?

Ray Kurzweil’s future predictions on a timeline: http://imgur.com/quKXllo (The Singularity will happen in 2045)

In 1965 researcher Herbert Simon said: “Machines will be capable, within twenty years, of doing any work a man can do”. Marvin Minsky added his own prediction: “Within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved.” https://forums.opensuse.org/showthread.php/390217-Will-computers-or-machines-ever-become-self-aware-or-evolve/page2

Patterns

Are there patterns in the ways that artists adapt technology?

For example, the Hammond organ borrowed ideas developed for radios. Recorded music is produced with computers that were originally designed as business machines.

Instead of looking forward to predict future music, let’s look backwards and ask, “What technology needed to happen to make these musical instruments possible?” The piano relies on a single escapement (1710) and later a double escapement (1821). Real-time pitch shifting depends on Fourier transforms (1822) and fast computers (~1980).

Artists often find new (unintended) uses for tools. Like the printing press.

New pianos

The piano is still in development. In December 2014, Eren Başbuğ composed and performed music on the Roli Seaboard – a piano keyboard made of three-dimensional sensing foam:

Here is Keith McMillen’s QuNexus keyboard (with Polyphonic aftertouch):

https://www.youtube.com/watch?v=bry_62fVB1E

Experiments

Here are tools that might lead to new ways of making music. They won’t replace old ways. Singing has outlasted every other kind of music.

These ideas represent a combination of engineering and art. Engineers need artists. Artists need engineers. Interesting things happen at the confluence of streams.

Analysis, re-synthesis, transformation

Computers can analyze the audio spectrum in real time. Sounds can be transformed and re-synthesized with near zero latency.

Infinite Jukebox

Finding alternate routes through a song.

by Paul Lamere at the Echonest

Echonest has compiled data on over 14 million songs. This is an example of machine learning and pattern matching applied to music.

http://labs.echonest.com/Uploader/index.html

Try examples: “Karma Police”, or search for “Albert Ayler”.

Remixing a remix

“Mindblowing Six Song Country Mashup”: https://www.youtube.com/watch?v=FY8SwIvxj8o (start at 0:40)


Local file: Max teaching examples/new-country-mashup.mp3

More about Echonest

Feature detection

Looking at music under a microscope.

removing music from speech

First you have to separate them.

SMS-tools

by Xavier Serra and UPF

Harmonic Model Plus Residual (HPR) – Build a spectrogram using STFT, then identify where there is strong correlation to a tonal harmonic structure (music). This is the harmonic model of the sound. Subtract it from the original spectrogram to get the residual (noise).
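
In rough terms (my paraphrase, not a formula from the SMS-tools documentation), the residual is what is left after the detected harmonic partials are subtracted from the original signal:

x_{harm}[n] = \sum_{h=1}^{H} A_h[n]\,\cos\!\left(2\pi\, h\, f_0[n]\, n / f_s + \phi_h[n]\right), \qquad x_{res}[n] = x[n] - x_{harm}[n]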


Settings for the above example:

  • Window size: 1800 – roughly SR / f0 × lobeWidth = 44100 / 200 × 8 = 1764
  • FFT size: 2048
  • Mag threshold: -90
  • Max harmonics: 30
  • f0 min: 150
  • f0 max: 200
Many kinds of features
  • Low-level features: harmonicity, amplitude, fundamental frequency
  • High-level features: mood, genre, danceability
Examples of feature detection
Music information retrieval

Finding the drop

“Detecting Drops in EDM” – by Karthik Yadati, Martha Larson, Cynthia C. S. Liem, Alan Hanjalic at Delft University of Technology (2014) https://reactivemusic.net/?p=17711

Polyphonic audio editing

Blurring the distinction between recorded and written music.

Melodyne

by Celemony

http://www.celemony.com/en/start

A minor version of “Bohemian Rhapsody”: http://www.youtube.com/watch?v=voca1OyQdKk

Music recognition

“How Shazam Works” by Farhad Manjoo at Slate: https://reactivemusic.net/?p=12712, “About 3 datapoints per second, per song.”

  • Music fingerprinting: https://musicbrainz.org/doc/Fingerprinting
  • Humans being computers. Mystery sounds. (Local file: Desktop/mystery sounds)
  • Is it more difficult to build a robot that plays or one that listens?

Sonographic sound processing

Transforming music through pictures.

by Tadej Droljc

 https://reactivemusic.net/?p=16887

(Example of 3d speech processing at 4:12)

local file: SSP-dissertation/4 – Max/MSP/Jitter Patch of PV With Spectrogram as a Spectral Data Storage and User Interface/basic_patch.maxpat

Try recording a short passage, then set bound mode to 4, and click autorotate

Spectral scanning in Ableton Live:

http://youtu.be/r-ZpwGgkGFI

Web Audio

Web browser is the new black

Noteflight

by Joe Berkovitz

http://www.noteflight.com/login

Plink

by Dinahmoe

http://labs.dinahmoe.com/plink/

Can you jam over the internet?

What is the speed of electricity? 70–80 ms is the best round-trip latency (via fiber) from the U.S. east coast to the west coast. If you were jamming over the internet with someone on the opposite coast, it might be like standing about 100 ft away from them in a field: sound travels roughly 1100 feet per second in air, so 80 ms of delay corresponds to about 88 feet.

Global communal experiences – Bill McKibben – 1990 “The Age of Missing Information”

More about Web Audio

Conversation with robots

Computers finding meaning

The Google speech API

https://reactivemusic.net/?p=9834

The Google speech API uses neural networks, statistics, and large quantities of data.

Microsoft: real-time translation

Reverse entropy

InstantDecomposer

Making music from sounds that are not music.

by Katja Vetter

(InstantDecomposer is an update of SliceJockey2): http://www.katjaas.nl/slicejockey/slicejockey.html

  • local: InstantDecomposer version: tkzic/pdweekend2014/IDecTouch/IDecTouch.pd
  • local: slicejockey2test2/slicejockey2test2.pd
More about reactive music

Sensors and sonification

Transforming motion into music

Three approaches
  • earcons (email notification sound)
  • models (video game sounds)
  • parameter mapping (Geiger counter)
Leap Motion

camera based hand sensor

“Muse” (Boulanger Labs) with Paul Bachelor, Christopher Konopka, Tom Shani, and Chelsea Southard: https://reactivemusic.net/?p=16187

Max/MSP piano example: Leapfinger: https://reactivemusic.net/?p=11727

local file: max-projects/leap-motion/leapfinger2.maxpat

Internet sensors project

Detecting motion from the Internet

https://reactivemusic.net/?p=5859

Twitter streaming example

https://reactivemusic.net/?p=5786

MBTA bus data

 Sonification of Mass Ave buses, from Harvard to Dudley

https://reactivemusic.net/?p=17524


Stock market music

https://reactivemusic.net/?p=12029

More sonification projects
Vine API mashup

By Steve Hensley

Using Max/MSP/jitter

local file: tkzic/stevehensely/shensley_maxvine.maxpat

Audio sensing gloves for spacesuits

By Christopher Konopka at future, music, technology

http://futuremusictechnology.com

Computer Vision

Sensing motion with video using frame subtraction

by Adam Rokhsar

https://reactivemusic.net/?p=7005

local file: max-projects/frame-subtraction

The brain

Music is stored all across the brain.

Mouse brain wiring diagram

The Allen institute

https://reactivemusic.net/?p=17758 

“Hacking the soul” by Christof Koch at the Allen institute

(An Explanation of the wiring diagram of the mouse brain – at 13:33) http://www.technologyreview.com/emtech/14/video/watch/christof-koch-hacking-the-soul/

OpenWorm project

A complete simulation of the nematode worm, in software, with a Lego body (302 neurons)

https://reactivemusic.net/?p=17744

AARON

Harold Cohen’s algorithmic painting machine

https://reactivemusic.net/?p=17778

Brain plasticity

A perfect pitch pill? http://www.theverge.com/2014/1/6/5279182/valproate-may-give-humans-perfect-pitch-by-resetting-critical-periods-in-brain

DNA

Could we grow music producing organisms? https://reactivemusic.net/?p=18018

 

Two possibilities

Rejecting technology?
An optimistic future?

There is a quickening of discovery: internet collaboration, open source, Linux, GitHub, r-pi, Pd, SDR.

“Robots and AI will help us create more jobs for humans — if we want them. And one of those jobs for us will be to keep inventing new jobs for the AIs and robots to take from us. We think of a new job we want, we do it for a while, then we teach robots how to do it. Then we make up something else.”

“…We invented machines to take x-rays, then we invented x-ray diagnostic technicians which farmers 200 years ago would have not believed could be a job, and now we are giving those jobs to robot AIs.”

Kevin Kelly – January 7, 2015, reddit AMA http://www.reddit.com/r/Futurology/comments/2rohmk/i_am_kevin_kelly_radical_technooptimist_digital/

Will people be marrying robots in 2050? http://www.livescience.com/1951-forecast-sex-marriage-robots-2050.html

“What can you predict about the future of music” by Michael Gonchar at The New York Times https://reactivemusic.net/?p=17023

Jim Morrison predicts the future of music:

More areas to explore

MBTA bus data in Max

Sonification of Mass Ave buses, from Harvard to Dudley.


This patch sends requests to the MBTA developer portal to get the current locations of buses, using the Max js object. Latitude and longitude data are mapped to oscillator pitch. Data is polled every 10 seconds, but the results might be more interesting at a slower polling rate, because the updates don’t seem that frequent. And buses tend to stop a lot.
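
The pitch mapping itself can be as simple as rescaling latitude into a MIDI range. Here is a rough sketch (not the exact code in mbta.js, and the coordinate bounds for the Route 1 corridor are approximate):

// Rough sketch of the latitude-to-pitch idea – not the exact code in mbta.js.
// The latitude bounds for the Route 1 corridor are approximate.
function latToMidi(lat)
{
    var lo = 42.33, hi = 42.375;                       // ~Dudley Square to ~Harvard Square
    var norm = Math.min(Math.max((lat - lo) / (hi - lo), 0), 1);
    return Math.round(48 + norm * 36);                 // MIDI notes 48–84
}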

MBTA developer portal: https://reactivemusic.net/?p=17511

Here is the get request URL used in the patch:

http://realtime.mbta.com/developer/api/v2/vehiclesbyroute?api_key=wX9NwuHnZU2ToO7GmGR9uw&route=01&format=json
download

https://github.com/tkzic/internet-sensors

folder: mbta

patches:

  • mbta.maxpat
  • mbta.js
  • poly-oscillator.maxpat
authentication

You will not need authentication to run this patch. It uses the default developer API key for testing. Please read the terms of service at the MBTA developer portal. Data should not be polled more often than every 10 seconds. You can also request your own developer API key from the MBTA.

instructions
  • Open mbta.maxpat
  • Toggle the metro (at the top of the patch) to start polling
  • Turn on the audio (at the bottom of the patch) and turn up the gain

Note: there will be more buses running during rush hours in Boston.  Try experimenting with the polling rate and ramp length in the poly-oscillator patch. Also, you can experiment with the pitch range.

 

data-stream-switch.maxpat

 

ep-413 DSP week 9

Data.

Data

Building a Max patch that displays, transforms, and responds to internet data.

building materials
  • Max (6.1.7 or newer)
  • Soundflower

Both available from Cycling 74 http://cycling74.com/

The Max patch is based on a tutorial by dude837 called “Automatic Silly Video Generator”

download

The patch at the download link in the video is broken – but the javascript code for the Max js object is intact. You can download the entire patch from the Max-projects archive: https://github.com/tkzic/max-projects folder: maxvine

Internet APIs

APIs (application programming interfaces) provide methods for programs other than web browsers to access Internet data. Any app that accesses data from the web uses an API.

Here is a link to information about the Vine API: https://github.com/starlock/vino/wiki/API-Reference

For example, if you copy this URL into a web browser address bar, it will return a block of data in JSON format about the most popular videos on Vine: https://api.vineapp.com/timelines/popular

HTTP requests

An HTTP request transfers data to or from a server. A web browser handles HTTP requests in the background. You can also write programs that make HTTP requests. A program called “curl” runs HTTP requests from the terminal command line. Here are examples: https://reactivemusic.net/?p=5916

Response data

Data is usually returned in one of 3 formats:

  • JSON
  • XML
  • HTML

JSON is the preferred format because it’s easy to access the data structure.

Max HTTP requests

There are several ways to make HTTP requests in Max, but the best method is the js object. Here is the code that runs the GET request for the Vine API:

// get() formats and sends an HTTP GET request using the URL passed in from Max
function get(url)
{
    var ajaxreq = new XMLHttpRequest();
    ajaxreq.open("GET", url);
    ajaxreq.onreadystatechange = readystatechange;  // callback runs when the response arrives
    ajaxreq.send();
}

// readystatechange() parses the JSON response and sends the URL of the
// most popular video out the left outlet of the js object
function readystatechange()
{
    var rawtext = this._getResponseKey("body");     // raw response text
    var body = JSON.parse(rawtext);
    outlet(0, body.data.records[0].videoUrl);
}

 

The get() function formats and sends an HTTP request using the URL passed in with the get message from Max. When the data is returned to Max, the readystatechange() function parses it and sends the URL of the most popular Vine video out the left outlet of the js object.

Playing Internet audio/video files in Max

The qt.movie object will play videos, with the URL passed in by the read message.

Unfortunately, qt.movie sends its audio to the system, not to Max. You can use Soundflower, or another virtual audio routing app, to get the audio back into Max.

Audio from video

https://reactivemusic.net/?p=12570

Video from audio

https://reactivemusic.net/?p=12570

Other Internet API examples in Max

There is a large archive of examples here: Internet sensors: https://reactivemusic.net/?p=5859

We will look at more of these next week. Here is a simple Max patch that uses the Soundcloud API: https://reactivemusic.net/?p=17430

Gokce Kinayoglu has written a java external for Max called Searchtweet: http://cycling74.com/toolbox/searchtweet-design-patches-that-respond-to-twitter-posts/

Many APIs require complex authentication, or money, before they will release their data. We will look at ways to access these APIs from Max next week.

Aggregators

There are API services that consolidate many APIs into one API.

Scaling data

Look at the Max tutorial (built in to Max Help) called “Data: data scaling”. It contains most of what you need to know to work with streams of data.
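
If you would rather do the scaling inside the js object, a linear rescaling function along the lines of Max’s [scale] object is only a few lines (a sketch, without [scale]’s exponential mode):

// Linear rescaling, similar in spirit to Max's [scale] object (no exponential mode).
// Maps x from the range [inLo, inHi] to the range [outLo, outHi].
function rescale(x, inLo, inHi, outLo, outHi)
{
    var norm = (x - inLo) / (inHi - inLo);
    return outLo + norm * (outHi - outLo);
}

// Example: rescale(64, 0, 127, 220., 880.) returns roughly 552.6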

Assignment

Using the Vine API patch that we built during class as a starting point, build a better app.

Ideas to explore:

  • Is it possible to run several API requests simultaneously?
  • Recording? Time expansion? Effects that evolve over time?
  • Generate music from motion, data, and raw sound?
  • Make a video respond to your instrument or voice?
  • Design a better user interface or external controller?
  • Will this idea work in Max For Live?
  • How would you make adjustments to the loop length, or synchronize a video to other events?
  • Make envelopes to change the dynamic shape?
  • Destruction? Abstraction?
  • Find or write a Max URL streaming object?
  • What about using a different API or other data from the Vine API?

This project will be due in 2-3 weeks. But for next week please bring in your work in progress, and we will help solve problems.