note: 6/2021 – Everything in this post no longer works, except the idea. See the Internetsensors posts for current methods.
notes
pachube.com and cosm.com have now become xively.com. Also, Twitter has migrated to the version 1.1 API. The old method of sending tweets with cosm.com no longer works as of June 2013.
Here are revised instructions for setting up an intermediary program that allows you to send Tweets from Max/MSP using a xively.com “device” (feed). This setup is required by the tweetCurl patches in the Max/MSP internet sensors project:
Summary: the Max program will issue an HTTP PUT request to your xively.com feed with the Tweet text as data. When the data is received, it triggers a request to zapier.com, which logs into your Twitter account and sends the Tweet. zapier.com handles authentication using OAuth.
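For reference, here is a minimal Ruby sketch of the kind of PUT request the Max patch issues. The URL and JSON body follow the xively v2 feed API format of the time; the feed ID and API key are placeholders (see step 4 below):

require 'net/http'
require 'json'
require 'uri'

FEED_ID = 'YOUR-FEED-ID' # placeholder, copied from the xively device screen (step 4)
API_KEY = 'YOUR-API-KEY' # placeholder

def send_tweet(text)
  uri = URI("https://api.xively.com/v2/feeds/#{FEED_ID}")
  http = Net::HTTP.new(uri.host, uri.port)
  http.use_ssl = true
  req = Net::HTTP::Put.new(uri.path, 'X-ApiKey' => API_KEY, 'Content-Type' => 'application/json')
  # the "tweet" channel created in step 2 carries the message text
  req.body = { version: '1.0.0', datastreams: [{ id: 'tweet', current_value: text }] }.to_json
  http.request(req)
end

send_tweet('hello from Max/MSP')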
1. Set up a xively.com account (self-explanatory).
2. Add a device. From the xively.com home page, select “develop” from the “web tools” menu.
Click on +Add Device.
Fill in the fields; the names and choices can be whatever you want. Then click the “Add Device” box.
Now click on +Add Channel. Fill in the Channel ID field with the name: tweet, then click the “save channel” box.
3. Set up zapier to send the Tweet. You will need to sign up for a free zapier account.
Assign Twitter as the action service for your “zap”.
For the ‘message’ field of your zap, select “body trigger datastream value value”. This will be the actual text of the tweet coming from xively.com.
Note that the free account is limited to 10 tweets per hour.
When you select Twitter as an action in zapier, it will attempt to log you in to do the OAuth authentication. If you are already logged into Twitter, make sure it’s the correct account.
zapier will provide you with the URL to insert into the trigger field in your xively.com device.
4. From the xively.com device screen, copy the feed-ID and API-key into the appropriate fields of your Max patch.
update 12/2013 – see this post for an example of how to display geo-coded data from Max/MSP on Google Maps: https://reactivemusic.net/?p=8115
original post
This is a reference to some notes. In June, I wrote a Max patch to communicate with my brother David’s Tesla Model S, using an API which runs on Tesla servers and communicates with the car. You can do things like honk the horn, flash the lights, and open the doors – and also receive data on speed, position, and battery condition.
Can’t really test the control part of this without the possibility of causing a car accident in California, but here’s a screenshot of the files. Essentially I just ran a node server for the API and communicated from Max using OSC.
The last thing I did was to track his return trip from SFO to Santa Cruz and plot points on a map. We will eventually update this prototype to plot data on a Google Map.
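For the curious, here is a hypothetical Ruby sketch of that bridge idea, using the osc-ruby gem in place of the node server I actually ran; the port, address, and handler are made up, and the Tesla API call itself is elided:

require 'osc-ruby'

server = OSC::Server.new(7400) # port the Max patch would send to (made up)

server.add_method '/car/honk' do |message|
  # the node version would call the Tesla REST API here
  puts "honk requested: #{message.to_a}"
end

server.run # blocks, waiting for OSC messages from Max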
What if you used Echo Nest analysis data to reconstruct music by driving a sequencer in Max? The analysis is a series of time-based quanta called segments. Each segment provides information about timing, timbre, and pitch – roughly corresponding to rhythm, harmony, and melody.
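As a rough sketch of the idea, assuming the analysis has been saved as JSON with a segments array (each segment carrying start, duration, loudness_max, and 12-element pitches and timbre vectors), and a Max patch listening for OSC on a made-up port:

require 'json'
require 'osc-ruby'

client = OSC::Client.new('localhost', 3333) # port is a placeholder
segments = JSON.parse(File.read('analysis.json'))['segments']

segments.each do |seg|
  chroma = seg['pitches']
  pitch_class = chroma.index(chroma.max) # strongest of the 12 chroma bins
  client.send(OSC::Message.new('/segment', seg['duration'].to_f, seg['loudness_max'].to_f, pitch_class))
  sleep seg['duration'] # crude timing, no look-ahead
end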
Edit the ruby server file, echonest-synth2.rb, replacing the API key with your new API key from echonest.
installing ruby gems
Install the following ruby gems (from the terminal):
gem install patron
gem install osc-ruby
gem install json
gem install uri
instructions
1. In Terminal run the ruby server:
./echonest-synth2.rb
2. Open the Max patch: echonest-synth4.maxpat and turn on the audio.
3. Enter an Artist and Song title for analysis in the text boxes. Then press the green buttons for title and artist. Then press the /analyze button. If it works you will get prompts from the terminal window and the Max window, and you should see the time in seconds in the upper right corner of the patch.
If there are problems with the analysis, it’s most likely due to one of the following:
artist or title spelled incorrectly
song is not available
song is too long
API is busy
If the ruby server hangs or crashes, just restart it and try again.
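One quick sanity check is to poke the server from a second terminal with osc-ruby. The port and message names here are guesses based on the patch controls, so confirm them against echonest-synth2.rb:

require 'osc-ruby'

client = OSC::Client.new('localhost', 8000) # check the actual port in echonest-synth2.rb
client.send(OSC::Message.new('/artist', 'John Coltrane'))
client.send(OSC::Message.new('/title', 'Naima'))
client.send(OSC::Message.new('/analyze'))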
4. Press one of the preset buttons to turn on the tracks.
5. Now you can play the track by pressing the /play button.
The mixer channels, from left to right, are:
bass
synth (left)
synth (right)
random octave synth
timbre synth
master volume
gain trim
HPF cutoff frequency
You can also adjust the reverb decay time and the playback rate. Normal playback rate is 1.
programming notes
Best results happen with slow abstract material, like the Miles (Wayne Shorter) piece above. The bass is not really happening – the lines all sound pretty much the same. I’m thinking it might be possible to derive a bass line from the pitch data by doing a chordal analysis of the analysis.
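Here is an untested sketch of that idea: sum the 12-bin pitch (chroma) vectors over a window of segments and take the strongest bin as a guessed root.

def guess_root(segments)
  totals = Array.new(12, 0.0)
  segments.each do |seg|
    seg['pitches'].each_with_index { |v, i| totals[i] += v }
  end
  totals.index(totals.max) # 0 = C, 1 = C#, ... 11 = B
end

# for example, one bass note per four segments:
# roots = segments.each_slice(4).map { |window| guess_root(window) }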
Here are screenshots of the Max sub-patches (the main screen is in the video above)
Timbre (percussion synth) – plays filtered noise:
Random octave synth:
Here’s a Coltrane piece, using roughly the same configuration but with sine oscillators for everything:
There are issues with clicks on the envelopes and the patch is kind of a mess but it plays!
Several modules respond to the API data:
tone synthesizer (pitch data)
harmonic (random octave) synthesizer (pitch data)
filtered noise (timbre data)
bass synthesizer (key and mode data)
envelope generator (loudness data)
Since the key/mode data is global for the track, bass notes are probable guesses. This method doesn’t work for material with strong root motion or a variety of harmonic content. It’s essentially the same approach I use when asked to play bass at an open mic night.
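A sketch of that guessing approach, assuming the track-level key (0-11, where 0 = C) and mode (0 = minor, 1 = major) fields from the analysis; the note choices are just one way to fake a bass line:

MAJOR = [0, 2, 4, 5, 7, 9, 11] # scale degrees as semitone offsets
MINOR = [0, 2, 3, 5, 7, 8, 10]

def bass_note(key, mode)
  scale = (mode == 1 ? MAJOR : MINOR)
  degree = [0, 0, 0, 4, 3].sample # mostly roots, with some fifths and fourths
  36 + ((key + scale[degree]) % 12) # a low MIDI note, starting from C2
end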
The envelopes click at times – it may be due to the relaxed method of timing, i.e., none at all. If the clicks don’t go away when timing is corrected, this might get cleaned up by adding a few milliseconds to the release time – or looking ahead to make sure the edges of segments are lining up.
[update] Using the Max [poly~] object cleared up the clicking and distortion issues.
Timbre data drives a random noise filter machine. I just patched something together and it sounded responsive – but it’s kind of hissy – an LPF might make it less interesting.
Haven’t used any of the beat, tatum, or section data yet. The section data should be useful for quashing monotony.
another update – 4/2013
Tried to write this into a Max for Live device – so that the pitch data would be played by a MIDI (software) instrument. No go. The velocity data gets interpreted in mysterious ways – plus each instrument has its own envelope, which interferes with the segment envelopes. Need to think this through. One idea would be to write a device which uses Echo Nest analysis data for beats to set warp markers in Live. It would be an amazing auto-warp function for any song. Analysis wars: Berlin vs. Somerville.
I downloaded the fork version from ‘dewb’, as it has been converted to run in Max 6. It looks like the object retrieves all of the analysis data. It would actually be instructive to read the source code to see how they implemented libcurl and JSON for the HTTP requests.