MBTA API in Max

Sonification of Mass Ave buses, from Nubian to Harvard.

Updated for Max8 and Catalina

This patch requests data from the MBTA API to get the current location of buses, using the Max js object. Latitude and longitude data are mapped to oscillator pitch. Data is polled every 10 seconds, but the results might be more interesting at a slower polling rate, because the updates don’t seem that frequent. And buses tend to stop a lot.
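As a rough illustration of the mapping (this is not the actual mbta.js code – the route endpoints and the pitch range below are assumptions), the latitude of each bus can be scaled into a pitch like this:

// Hypothetical sketch: scale a bus latitude to a pitch along the route.
// The approximate endpoints of route 1 (Nubian and Harvard) and the MIDI-style
// pitch range are illustrative assumptions, not values from mbta.js.
var LAT_NUBIAN = 42.329;
var LAT_HARVARD = 42.373;
var PITCH_LOW = 48;   // C3
var PITCH_HIGH = 84;  // C6

function latitudeToPitch(lat) {
    var norm = (lat - LAT_NUBIAN) / (LAT_HARVARD - LAT_NUBIAN); // 0..1 along the route
    norm = Math.max(0, Math.min(1, norm));                      // clamp stray readings
    return PITCH_LOW + norm * (PITCH_HIGH - PITCH_LOW);
}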

Original project link from 2014: https://reactivemusic.net/?p=17524

MBTA developer website: https://www.mbta.com/developers

This project uses version 3 of the API. There are quality issues with the realtime data. For example, there are bus stops not associated with the route. The direction_id and stop_sequence data from the buses is often wrong. Also, buses that are not in service are not removed from the vehicle list or indicated as such.

The patch uses a [multislider] object to graph the position of the buses along the route – but due to the data problems described above, the positions don’t always reflect the current latitude/longitude coordinates or the bus stop name.

download

https://github.com/tkzic/internet-sensors

folder: mbta

patches:

  • mbta.maxpat
  • mbta.js
  • poly-oscillator.maxpat
authentication

You will need to replace the API key in the message object at the top of the patch with your own key – or you can probably just remove it. The key distributed with the patch is fake. You can request your own developer API key from the MBTA. It’s free.
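For reference, a v3 request for vehicles on route 1 (the Mass Ave bus) looks roughly like this – the endpoint and parameter names come from the MBTA v3 documentation, and YOUR_KEY_HERE is a placeholder:

https://api-v3.mbta.com/vehicles?filter[route]=1&api_key=YOUR_KEY_HERE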

instructions
  • Open mbta.maxpat
  • Open the Max console window so you can see what’s happening with the data
  • Click on the yellow [getstops] message to get the current bus stop data
  • Toggle the metro (at the top of the patch) to start polling
  • Turn on the audio (click speaker icon) and turn up the gain

Note: there will be more buses running during rush hours in Boston.  Try experimenting with the polling rate and ramp length in the poly-oscillator patch. Also, you can experiment with the pitch range.

Soundcloud API in Max8

updated version of  https://reactivemusic.net/?p=5413

This project is part of the internet-sensors project: https://reactivemusic.net/?p=5859

In this patch, Max uses the Soundcloud API to search for tracks and then download and play a selected result. It uses the node.js soundcloud-api-client: https://github.com/iammordaty/soundcloud-api-client

For information on the Soundcloud API, see http://developers.soundcloud.com/docs/api/reference
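For reference, here is a minimal node.js sketch of a track search. It is not the actual scnode.js – it calls the older public /tracks endpoint directly rather than going through soundcloud-api-client, so treat the endpoint, the client_id query parameter, and the field names as assumptions:

// Hypothetical search sketch (not the actual scnode.js).
const https = require('https');

const CLIENT_ID = 'YOUR_CLIENT_ID';                  // replace with your own client-id
const query = encodeURIComponent('field recording');

https.get('https://api.soundcloud.com/tracks?q=' + query + '&client_id=' + CLIENT_ID, (res) => {
    let body = '';
    res.on('data', (chunk) => { body += chunk; });
    res.on('end', () => {
        const tracks = JSON.parse(body);
        // print the title and stream URL of each result
        tracks.forEach((t) => console.log(t.title, t.stream_url));
    });
});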

download

https://github.com/tkzic/internet-sensors

folder: soundcloud

files

main Max patch
  • sc.maxpat
node.js files
  • scnode.js

authorization

  • The Soundcloud client-id is embedded in scnode.js – you will need to edit this file to replace the placeholder client-id with your own. To get a client-id you will first need a Soundcloud account. Then register an app at: http://soundcloud.com/you/apps

first time instructions

  • Open the Max patch: sc.maxpat
  • In the green panel, click on [script npm init]
  • In the green panel, click on [script install soundcloud-api-client]

instructions

  • Open the Max patch sc.maxpat
  • Open the Max console window so you can see the API data
  • Click [script start]
  • Click the speaker icon to start audio
  • Type something into the search box and press <enter>, or click the button to the left to search for what is already in the box
  • Select a track from the results menu and wait for it to download and start playing

Spotify segment analysis player in Max

Echo Nest API audio analysis data is now provided by Spotify. This project is part of the internet-sensors project: https://reactivemusic.net/?p=5859  and updates the 2013 Echo Nest project described here: https://reactivemusic.net/?p=6296

 

The original analyzer document by Tristan Jehan can be found here (for the time being):  https://web.archive.org/web/20160528174915/http://developer.echonest.com/docs/v4/_static/AnalyzeDocumentation.pdf

This implementation uses node.js for Max instead of Ruby to access the API. You will need to set up a developer account with Spotify and request API credentials. See below.

Other than that, the synthesis code in Max has not changed. Some of the following background information and video is from the original version.

What if you used that data to reconstruct music by driving a sequencer in Max? The analysis is a series of time-based quanta called segments. Each segment provides information about timing, timbre, and pitch – roughly corresponding to rhythm, harmony, and melody.
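As a sketch of how segment data can drive a sequencer (illustrative only – this is not code from the patch; the segment fields come from the Spotify audio-analysis response, but the note mapping is an assumption):

// Turn analysis segments into rough note events.
function segmentsToNotes(analysis) {
    return analysis.segments.map((seg) => {
        // the strongest of the 12 chroma bins becomes the note's pitch class
        const pitchClass = seg.pitches.indexOf(Math.max(...seg.pitches));
        // map loudness_max (roughly -60..0 dB) to a MIDI-style velocity
        const velocity = Math.round(Math.max(1, Math.min(127, (seg.loudness_max + 60) / 60 * 127)));
        return {
            time: seg.start,          // seconds
            duration: seg.duration,   // seconds
            pitch: 60 + pitchClass,   // chroma bin mapped near middle C
            velocity: velocity
        };
    });
}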

spotify-synth1.maxpat

download

https://github.com/tkzic/internet-sensors

folder: spotify2

files

main Max patch
  • spotify-synth1.maxpat
abstractions and other files
  • polyvoice-sine.maxpat
  • polyvoice2.maxpat
node.js code
  • spot1.js
node folders and infrastructure
  • /node_modules
  • package-lock.json
  • package.json
dependencies:
  • You will need to install node.js
  • the node package manager will do the rest – see below.

Note: Your best bet is to just download the repository, leave everything in place, and run it from the existing folder

authentication

You will need to sign up for a developer account at Spotify and get an API key. https://developer.spotify.com/documentation/general/guides/authorization-guide/

Edit spot1.js, replacing the clientID and clientSecret with your Spotify credentials.
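The Spotify Web API wants an access token rather than the raw credentials. Here is a minimal sketch of the client-credentials exchange (this is not the actual spot1.js code; it assumes Node 18+ for the global fetch()):

// Exchange clientID/clientSecret for a bearer token (client-credentials flow).
async function getAccessToken(clientID, clientSecret) {
    const auth = Buffer.from(clientID + ':' + clientSecret).toString('base64');
    const res = await fetch('https://accounts.spotify.com/api/token', {
        method: 'POST',
        headers: {
            'Authorization': 'Basic ' + auth,
            'Content-Type': 'application/x-www-form-urlencoded'
        },
        body: 'grant_type=client_credentials'
    });
    const json = await res.json();
    return json.access_token;   // use as a Bearer token on analysis requests
}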

node for max install instructions (first time only)

  • Open the Max patch: spotify-synth1.maxpat
  • Scroll the patch over to the far right side until you see the green panel
  • Click the [script npm init] message – this initializes the node infrastructure in the current folder
  • Then click each of the two [script npm install] messages – this installs the necessary libraries

Instructions

  •  Open the Max patch: spotify-synth1.maxpat
  •  Click the green [script start] message
  • Click the Speaker icon to start audio
  • Click the first dot in the preset object to set the mixer settings to something reasonable
  • Open the Max Console window so you can see the Spotify API data
  • From the two menus at the top of the screen, select an Artist and Title that match – for example, Albert Ayler and "Witches and Devils"
  • Click the [analyze] button – the console window should fill with interesting data about your selection.
  • Click [play]
  • Note: if you hear a lot of clicks and pops, reduce the audio sample rate to 44.1 kHz.
Alternative search method:

Enter an Artist and Song title for analysis in the text boxes. Then press the buttons for title and artist, and then press the /analyze button. If it works, you will get prompts in the terminal window and the Max window, and you should see the time in seconds in the upper right corner of the patch.

troubleshooting

If there are problems with the analysis, it’s most likely due to one of the following:

  • artist or title spelled incorrectly
  • song is not available
  • song is too long
  • API is busy
Mixer controls

The mixer channels, from left to right, are:

  • bass
  • synth (left)
  • synth (right)
  • random octave synth
  • timbre synth
  • master volume
  • gain trim
  • HPF cutoff frequency
You can also adjust the reverb decay time and the playback rate. Normal playback rate is 1.

programming notes

Best results happen with slow abstract material, like the Miles (Wayne Shorter) piece above. The bass is not really happening. Lines all sound pretty much the same. I’m thinking it might be possible to derive a bass line from the pitch data by doing a chordal analysis of the analysis.

Here are screenshots of the Max sub-patches (the main screen is in the video above)

Timbre (percussion synth) – plays filtered noise:

Random octave synth:

Here’s a Coltrane piece, using roughly the same configuration but with sine oscillators for everything:

There are issues with clicks on the envelopes and the patch is kind of a mess but it plays!

Several modules respond to the API data:

  • tone synthesizer (pitch data)
  • harmonic (random octave) synthesizer (pitch data)
  • filtered noise (timbre data)
  • bass synthesizer (key and mode data)
  • envelope generator (loudness data)

Since the key/mode data is global for the track, bass notes are probable guesses. This method doesn’t work for material with strong root motion or a variety of harmonic content. It’s essentially the same approach I use when asked to play bass at an open mic night.
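A sketch of that guessing approach (illustrative only, not the patch’s actual logic – key is 0–11 and mode is 1 for major, 0 for minor, as reported in the track-level analysis):

// Pick a probable bass note from the track-level key and mode.
function guessBassNote(key, mode) {
    const root = 36 + key;                           // put the root in the bass register
    const candidates = (mode === 1)
        ? [root, root + 7, root + 5]                 // major: root, fifth, fourth
        : [root, root + 7, root + 3];                // minor: root, fifth, minor third
    return candidates[Math.floor(Math.random() * candidates.length)];
}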

additional notes

Now that this project is running again, I plan to write additional synthesizers that follow more of the spirit of the data – for example, distinguishing strong pitches from noise.

I would also like to make use of the [section] data as well as the rhythmic analysis. There is an amazing amount of potential here.

Updating Max/MSP internet sensor projects

Notes for updating from Max6 to Max8 in Mac OS Catalina

In general, 32-bit code will not work.

Link to internetsensors project: https://reactivemusic.net/?p=5859

Github: https://github.com/tkzic/internet-sensors

1. mxj object

The mxj object needs an updated Java installation, but the Oracle download link leads to a dead-end message. Go to the Oracle download page https://www.oracle.com/java/technologies/javase/javase-jdk8-downloads.html – but instead of pressing the green download button, <ctrl>-click and save the link, as described in the instructions from intrepidOlivia in this link: https://gist.github.com/wavezhang/ba8425f24a968ec9b2a8619d7c2d86a6

2. aka.objects

I have used aka.shell and aka.speech, among others. These objects no longer work. Replace them with Jeremy Bernstein’s shell object: https://github.com/jeremybernstein/shell/releases/tag/1.0b2

NOTE: There’s a problem with [shell] – it rejects input that has been converted to a symbol using [tosymbol].

This can be fixed by using [fromsymbol] – or by just eliminating [tosymbol]. This may affect the stderr/stdout redirection tokens, i.e., “>” and other special characters, but for now [shell] does not accept symbol input.

aka.speech can be replaced by using the “say” command in the shell. More details to follow about voice parameters.

‘say’ has similar parameters to aka.speech, e.g., voice name and rate. There are voices for specific languages. This feature could be used, for example, to match the language of a Tweet to an appropriate voice.
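A sketch of driving ‘say’ with a voice and rate from node.js – in the patch it is the shell object that builds this command line, so the wrapper below is illustration only (the voice name and rate are arbitrary examples):

// Speak text through the macOS 'say' command.
const { execFile } = require('child_process');

function speak(text, voice, rate) {
    // -v selects the voice, -r sets the speaking rate in words per minute
    execFile('say', ['-v', voice, '-r', String(rate), text], (err) => {
        if (err) console.error('say failed:', err);
    });
}

speak('Hello from Max', 'Alex', 175);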

3. Twitter streaming API

I revised the php code for the Twitter streaming project to use the coordinates of a corner of the city polygon bounding box. That seems to be more reliable than the geo coordinates, which are absent from most Tweets.

There is a new API in the works – but it’s difficult to decipher the Twitter API docs because they have so many products and the documentation is obtuse.

It would also be interesting to extract the “language” field and use it to select which voice to use in the speech synthesizer – or even to offer an English translation option.

4. Echonest API

Echonest was absorbed into Spotify. The API is gone, but the Spotify API does have some of the feature-detection and analysis code. However, it doesn’t allow you to submit your own audio clips. There are also some efforts to preserve some of the Echonest work, like the blog by Paul Lamere and the remix code. Here are a few links I found to get started.

Spotify API (features) https://developer.spotify.com/console/get-audio-features-track/?id=06AKEBrKUckW0KREUWRnvT

Echonest blog:  https://blog.echonest.com/

Amen – algorithmic remix project:  https://github.com/algorithmic-music-exploration/amen

5. Google speech to text

Several issues:

  • Replacing [aka.shell] with [shell] – avoiding [tosymbol], as described above, seems to help

  • Have now rewritten all of the recording code and the shell interactions with Google.
  • Still need to work on voice options for the ‘say’ command (text to speech)
  • The Pandorabots API problems turned out to be that the URL needed to be https instead of http

6. twitter curl project

Looks like xively.com is gone. Maybe purchased by Google? Anyway – this project is toast.

7. Twitter via Ruby

Got this working again.

8. Bird calls from xeno-canto.org

This patch has been completely rewritten. The old API was obsolete. This version uses [dict] and [maxurl] to format and execute the initial query. Then it uses [jit.uldl] to download the mp3 file with the bird-call audio. Interestingly, [maxurl] would not download the file using the “download” URL – it only worked with a URL containing the actual file name.
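For reference, here is a node.js sketch of the same two-step idea outside Max: query the API, then take the mp3 URL from the first result. The api/2/recordings endpoint and the recordings/file/en field names are taken from the public xeno-canto API documentation and should be treated as assumptions:

// Query xeno-canto and print the species name and mp3 URL of the first hit.
const https = require('https');

https.get('https://xeno-canto.org/api/2/recordings?query=wood+thrush', (res) => {
    let body = '';
    res.on('data', (chunk) => { body += chunk; });
    res.on('end', () => {
        const result = JSON.parse(body);
        const first = result.recordings[0];
        console.log(first.en, first.file);   // English name and the file URL to download
    });
});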

9. ping

Needed to reinstall ruby gems using xcrun (see above)

There seems to be a problem with Mashape:

Could not resolve host: igor-zachetly-ping-uin.p.mashape.com (Patron::HostResolutionError)

[Mashape was acquired by rapidapi.com – so the code in the ruby server will need to be refactored.]

 

MBTA bus data in Max

Sonification of Mass Ave buses, from Harvard to Dudley.


This patch sends requests to the MBTA developer portal to get the current location of buses, using the Max js object. Latitude and longitude data are mapped to oscillator pitch. Data is polled every 10 seconds, but the results might be more interesting at a slower polling rate, because the updates don’t seem that frequent. And buses tend to stop a lot.

MBTA developer portal: https://reactivemusic.net/?p=17511

Here is the GET request URL used in the patch:

http://realtime.mbta.com/developer/api/v2/vehiclesbyroute?api_key=wX9NwuHnZU2ToO7GmGR9uw&route=01&format=json
download

https://github.com/tkzic/internet-sensors

folder: mbta

patches:

  • mbta.maxpat
  • mbta.js
  • poly-oscillator.maxpat
authentication

You will not need authentication to run this patch. It uses the default developer API key for testing. Please read the terms of service at the MBTA developer portal. Data should not be polled more often than every 10 seconds. You can also request your own developer API key from MBTA.

instructions
  • Open mbta.maxpat
  • Toggle the metro (at the top of the patch) to start polling
  • Turn on the audio (at the bottom of the patch) and turn up the gain

Note: there will be more buses running during rush hours in Boston.  Try experimenting with the polling rate and ramp length in the poly-oscillator patch. Also, you can experiment with the pitch range.

 

data-stream-switch.maxpat

 

Stock market music in Max

Make music from the motion of stock prices.

This program gathers stock prices into a database. It generates Midi data, mapping price to pitch and mapping trading volume to velocity and rhythmic density. It uses ancient Web technology: an HTML/javascript front-end with a php back-end accessing a mysql database.
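The mapping idea, as a sketch (this is not code from the project – the ranges are arbitrary illustrations):

// Map a stock quote to a Midi-style note: price movement to pitch, volume to velocity.
function quoteToMidi(price, prevPrice, volume) {
    var pitch = 60 + Math.round((price - prevPrice) * 4);               // moves shift pitch around middle C
    var velocity = Math.round(Math.min(127, (volume / 1000000) * 127)); // ~0..1M shares -> 0..127
    return {
        pitch: Math.max(0, Math.min(127, pitch)),
        velocity: Math.max(1, velocity)
    };
}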

Case study: http://zproject.wikispaces.com/stock+market+music

To run this project, you will need a server (preferably linux) with the following capabilities:

  • mysql + phpmyadmin
  • php (and ability to run php over the web)
  • netcat (nc)
  • network access

All of this is pretty standard, so I won’t talk about it here. I am running it on Ubuntu Linux. There are many other ways to get the project working using the layout described here.

download

https://github.com/tkzic/internet-sensors

folder: stock-market

files

Max

stock_market_music.maxpat

HTML/javascript web client
  • newstock3.html: (web page interface)
  • selectstock3.js:  (front-end)
php server
  • getstock3.php (back-end server to get quotes and save them to a database)
  • play3.php: (back-end server to retrieve quotes, analyze, and map to Midi sequence to send to Max)
  • udp.php: Osc library
Set execute privileges on php files so they can be run from your web server. (chmod +x)

 

database

The getstock3.php program harvests stock quote data and stores it in a mysql database.

The database name is stocks, and the table is quotes.

Table structure:

The table is a basic flat representation of a stock quote, indexed by the ticker symbol. It contains price, volume, high/low/change, timestamp, etc. For our purposes, the price, volume, and timestamp are essentially all we need.

SQL to create the table:

CREATE TABLE `stocks`.`quotes` (
  `ticker` varchar(12) NOT NULL,
  `price` decimal(10,2) NOT NULL,
  `qtime` datetime NOT NULL,
  `pchange` decimal(10,2) NOT NULL,
  `popen` decimal(10,2) NOT NULL,
  `phigh` decimal(10,2) NOT NULL,
  `plow` decimal(10,2) NOT NULL,
  `volume` int(11) NOT NULL,
  `ttime` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `spare` varchar(30) DEFAULT NULL,
  UNIQUE KEY `id` (`id`),
  KEY `ticker` (`ticker`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 COMMENT='stock quote transactions';

creating the database, user, and table

  • Log into  phpmyadmin as root
  • Create a new database called ‘stocks’
  • In privileges, add a user called 'webdb1' with a password of '34door' (note: you can change the password later)
  • In SQL, copy in the above query to create the ‘stocks’ table

Instructions

Web client

The webpage control program allows you to select stocks by ticker symbol and get either a single quote or quotes at a regular time interval. Each quote is inserted into the stock table for later retrieval and analysis.

 

The web front end is quirky so I will describe it in terms of how you might typically use it:

market is open – and you just want to play music based on current stock prices
  1. Enter the ticker symbols for your stocks
  2. Press ‘tracking’ button – so the quotes get saved
  3. Enter the IP address of the computer running Max
  4. Press the ‘auto’ button in the upper left corner – it will run and play forever
market is closed – or you want to play historical data you have saved
  1. Enter the ticker symbols for your stocks
  2. Enter the IP address of the computer running Max
  3. Set start and end dates
  4. Press the ‘play’ button to play once or press ‘loop’ to play continuously (using time interval in seconds)
market is open – you just want to collect stock quote data
  1. Enter the ticker symbols for your stocks
  2. Press the ‘tracking’ button so quotes will get saved
  3. Press the ‘get quotes’ button to get a current quote, or press the ‘loop’ button (on the same line) to retrieve quotes continuously every 30 seconds.

Instructions

Max patch

  1. Make sure the IP address is set to the address of your server
  2. Select the Midi port for output
  3. Play a few test notes
  4. Select either ‘one instrument’ mode (piano) or multi-instrument mode. Each time you click the multi-instrument button, it randomly selects a new combination.

notes on stock market data

To look at historical trends, you would need access to historical stock data. To use it as a tool for short term analysis, you would need access to real-time quote data in an API. At the time, both of these cost money.

However, it doesn’t cost money to get recent quotes from Yahoo throughout the day and store them in a database – so that’s the approach I took.

If I were to do this project today, I’d look for a free online source of historical data in machine-readable form – because the historical data provides the most interesting and organic sounds when converted into music. The instant high-speed trading data would probably make interesting sounds as well, but you still need to pay for that data.

notes on local files

Google Maps in Max

Draw points on a Google Map from Max by sending latitude and longitude to a Web client via Osc and web sockets.

Uses Ruby, WebSockets, Chrome, the Google Maps API, Osc, Max, jQuery, and Node.js… But the Max patch is actually quite simple.

Based on this geocoding tutorial:  http://www.sitepoint.com/google-maps-api-jquery/
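The browser side boils down to receiving a coordinate over the websocket and dropping a marker on the map. A sketch of that idea (the port and the "lat,lon" message format are assumptions here – the real protocol is whatever markers.html and mapserver.rb agree on):

// Receive "lat,lon" messages over a websocket and add markers to an existing map.
var socket = new WebSocket('ws://localhost:8080');

socket.onmessage = function (event) {
    var parts = event.data.split(',');
    var lat = parseFloat(parts[0]);
    var lng = parseFloat(parts[1]);
    new google.maps.Marker({
        position: new google.maps.LatLng(lat, lng),
        map: map                      // 'map' is the google.maps.Map created at page load
    });
};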

download

https://github.com/tkzic/internet-sensors

folder: google-maps

files

main Max patch
  • googlemaptest.maxpat
html and javascript for Google API
  • js/ (folder containing javascript code for map client)
  • markers.html (web client)
ruby
  • mapserver.rb (Osc and Websockets server)
node.js (optional)
  • nodeserver.js (local node.js webserver)

running node.js local web server (optional)

To run the project locally, you will either need to install node.js or have a local web server. The instructions assume that you have installed node, as well as the http package.

If you don’t want to bother with node, there is also an online version of the web client running at  http://zerokidz.com/gmap/markers/markers.html
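A minimal static file server along these lines will also work – this is a sketch, not necessarily identical to nodeserver.js; it serves the current folder on port 8081 to match the URL used in the instructions below:

// Serve files from the current folder on port 8081.
const http = require('http');
const fs = require('fs');
const path = require('path');

http.createServer((req, res) => {
    const file = path.join(__dirname, req.url === '/' ? 'markers.html' : req.url);
    const type = file.endsWith('.html') ? 'text/html'
               : file.endsWith('.js') ? 'text/javascript' : 'application/octet-stream';
    fs.readFile(file, (err, data) => {
        if (err) { res.writeHead(404); res.end('not found'); return; }
        res.writeHead(200, { 'Content-Type': type });
        res.end(data);
    });
}).listen(8081);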

installing ruby gems

This runs under Ruby 2.0 and requires the following gems:

  • osc-ruby
  • em-websocket
  • json

instructions

1. If you are using the online Web client, go to this URL in a Google Chrome browser: http://zerokidz.com/gmap/markers/markers.html then skip to step 4.

2. In a terminal window start the node webserver

node nodeserver

3. Launch a Google Chrome web browser and type in this URL

127.0.0.1:8081/markers.html

4. In another terminal window start the ruby server for Osc and websockets

ruby mapserver.rb

5. Now, in the Web client (Chrome), press the “OSC” button underneath the map to open the web-sockets connection with the ruby server.

6. Open the Max patch:

googlemaptest.maxpat

7. Now you should be able to click on the message boxes for Bethel and Rumford in the Max patch to add location markers to the map in the browser.

 

More conversations with robots in Max

Using Google speech API and Pandorabots API

(updated 1/21/2021)

This project is an extension of the speech-to-text project: https://reactivemusic.net/?p=4690. You might want to try running that project first to get the Google speech API working.

features

  • Everything runs in one Max patch
  • menu selection of chat bots and voices (currently disabled)
  • filtering of non speakable text (like HTML tags)
  • python script now runs under current directory of patch using relative path
  • refinements to recording and chatbot engines

download

https://github.com/tkzic/internet-sensors

folder: google-speech

files
main Max patch
  • robot-conversation7.maxpat
abstractions and other files
  • clean-html.js
  • xml2json/xml2json.py
  • JSON-google-speech.js
  • JSON-pandorabot.js
  • ms-counter.maxpat (timer for recording messages)
  • pandorabots.txt
Max external objects
external programs:

sox: the sox audio conversion program must be in the computer’s executable file path, e.g., /usr/bin – or you can rewrite the [sprintf] input to [aka.shell] with the actual path. In our case we installed sox using MacPorts. The executable path is /opt/local/bin/sox, which is built into a message object in the subpatcher [call-google-speech].

get sox from: http://sox.sourceforge.net
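A sketch of the kind of sox conversion the patch shells out to. The 16 kHz mono FLAC output is an assumption about what the speech API expects, not necessarily what the patch actually sends – only the sox flags themselves are standard:

// Convert a recorded file to 16 kHz mono FLAC with sox (path per the MacPorts install above).
const { execFile } = require('child_process');

execFile('/opt/local/bin/sox', ['input.wav', '-r', '16000', '-c', '1', 'output.flac'], (err) => {
    if (err) console.error('sox conversion failed:', err);
});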

Instructions

  • Open robot-conversation7.maxpat and turn on audio
  • Select chatbot as the destination
  • Press the spacebar to start recording.
  • Ask a question.
  • Press the spacebar to stop recording.

notes

Need to fix the selection of voices.

revision history

  • 1/21/2021: complete rewrite for Max8 and Catalina
  • 4/24/2016: need to have explicit path to sox, in the call-google-speech subpatch. In my Macports version the path is /usr/local/opt/bin/sox.
  • 6/6/2014: re-added missing pandorabots.txt (list of chatbots) – also noticed that pandorabots.com was not available. May need to look for another site.
  • 5/11/2014: The newest version requires Max 6.1.7 (for JSON parsing). Also have updated to Google Speech API v2.
  • Note: instructions for getting a real key from Google – which will need to be inserted into the patch – are at http://www.chromium.org/developers/how-tos/api-keys. So far we have been getting by with common keys from a github site (see notes in the next link)

Also please see these notes about how to modify the patch with your key – until this gets resolved: https://reactivemusic.net/?p=11035

Max data recorder

This has been around for a while and is being used in several internetSensor projects. It records and plays back lists (this version allows lists where the first element is a symbol), and it can record and play back simultaneously at different rates. It was originally adapted from MZ’s CNMAT example. And subsequently ruined.

download

https://github.com/tkzic/internet-sensors

folder: data-recorder

files

  • data-recorder-wrapper.maxpat (main entry point)
  • data_recorder_list-tz.maxpat (recorder abstraction)

There is also an example of multi-rate streaming in: data-recorder-tester.maxpat

 

Max Twitter client using ruby

Send and receive Tweets using Max via OSC to a background ruby server.

An advantage of this method is that both the patch and the server are compact and easy to understand. The Max patch does things in a Max way, and likewise with the ruby scripts.

download

https://github.com/tkzic/internet-sensors

folder: twitter-ruby

files

Max
  • twitter-client.maxpat
ruby
  • twitter-server-send.rb (for sending Tweets)
  • twitter-server-get.rb (for receiving Tweets)
ruby gems

The ruby scripts require installation of the following gems:

  • json
  • osc-ruby
  • twitter

For example:

# sudo gem install twitter


Twitter authorization

In addition to having a Twitter account, you will need to set up a Twitter application from the developer site here:

https://dev.twitter.com/apps

Good instructions on how to do this can be found in this stackoverflow.com post under this heading: So you want to use the Twitter v1.1 API?

http://stackoverflow.com/questions/12916539/simplest-php-example-for-retrieving-user-timeline-with-twitter-api-version-1-1

When you get to step 5 in the instructions, instead of writing your own code, just use a text editor to copy your access tokens into these ruby programs:

  • twitter-server-send.rb
  • twitter-server-get.rb

Replace the strings in this line of code by copying and pasting the appropriate ones from your Twitter application:

twitterClient = Twitter::REST::Client.new do |config|
  config.consumer_key = "mqQtoYh16343tDFG3BK7QQ"       
  config.consumer_secret = "X0KexjlK49fhhrnn9EztapZfATCQqWCc5fXVJH2pE"      
  config.oauth_token = "205589709-5krgh9FR3KkLGRDnewiU7GKKBMA6i2La84c"       
  config.oauth_token_secret = "LNARAeooN2vkklkF006GRdihQ5D8YYkm8dYvEs68M"  
end
Yeah – it’s cryptic, but trivial compared to writing the oauth code. Just a reminder: if even one letter or quote mark is out of place, the authorization will fail.

instructions

(Note: currently running with ruby version 2.0.) Display your ruby version by typing: ruby --version

Sending Tweets
  • Open the Max patch: twitter-client.maxpat
  • In a terminal window run the ruby script:
# ./twitter-server-send.rb

  • In the Max patch, type in a Tweet. Press the green button to send. 
  • When you have tweeted enough, end the ruby server program by typing <ctrl-c>
Receiving Tweets
  • Open the Max patch: twitter-client.maxpat
  • In a terminal window run the ruby script:
# ./twitter-server-get.rb

  • From Twitter, send a Tweet to the user name embedded in the server

Both ruby servers can run at the same time.

What’s next?

  • Parse incoming Tweets into various components
  • Combine the 2 Ruby servers

revision history

  • 5/21/2014 – refactored app names. Added receive server
  • 5/19/2014 – moved to twitter-ruby folder
  • 1/18/2014 – minor fixes to ruby server for current ruby version 2.0
  • 9/7/2013 – uses oauth to communicate directly to Twitter from ruby