EP-3xx41 Max – week 1

Design

  • People
  • Ideas
  • Connections

Max patches

(Max patch download)

Projects

Conversation with robots

https://reactivemusic.net/?p=4710

Frame subtraction remixer by Adam Rokhsar

Using video to control audio

Twitter Streaming Radio

https://reactivemusic.net/?p=5786

AR-drone Quadcopter

code not available

The sound of a new machine

https://reactivemusic.net/?p=5945

“Designing Sound” by Andy Farnell. Max examples: Helicopter, TOS transporter,
SynthCar, Jet Engine, Granular Timestretch.

Mira by Sam Tarakajian

The [gizmo~] help file
Demonstration
Little Tikes Piano controller

https://reactivemusic.net/?p=6993

Assignment

Build a control panel in Max. It should look amazing. It should be the coolest control panel you can imagine. Use any objects, colors, shapes that you can find. But… it shouldn’t actually control anything.

Due: Have a first draft on your computer for next week. Final version due before class on 9/23. Email me the Max patch (maxpat) file or a link.

Adam Rokhsar’s Max frame subtraction example

Uses video frame subtraction in Jitter to control playback of an audio clip.

download

https://github.com/tkzic/max-projects

folder: frame-subtraction

patch: frameSubtraction_example.maxpat

You will also need an audio file (AIFF, WAV, etc.) to load into a Max buffer.

dependencies

You will need the cv.jit library (computer vision): http://jmpelletier.com/cvjit/

Add the location of these files to your path in Max using Options | File Preferences.

Note: When I loaded the patch in Mac OS 10.8 – the computer automatically downloaded and installed Java updates.

instructions

  • Load an audio file for playback
  • Try setting minimum summed pixels to 150,000 or less for greater effect, depending on the amount of light in the room

Tweet from Max with ruby

Update 6/2014: working version here: https://reactivemusic.net/?p=7013

notes

The zapier.com trigger method of sending tweets from Max is limited by the number of tweets and the sync rate. So it would be nice to set up an intermediary server program in ruby or php that eliminates the middleman and sends tweets directly.

Or you could use the mxj searchTweet program, which has been updated to do this on the search side.

twitter gem

update: Got it working with this gem: https://github.com/sferik/twitter. It's much easier than dealing with xively.

docs: http://rdoc.info/gems/twitter

how to destroy a tweet: http://stackoverflow.com/questions/10640811/twitter-gem-how-to-get-the-id-of-a-new-tweet

how to get a timeline: http://bobbyprabowo.wordpress.com/2010/01/01/simple-twitter-gem-tutorial/
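
For reference, here is a minimal sketch of posting a tweet with the gem. This assumes a recent version of the gem (v5+), which uses a REST client object; older versions used module-level Twitter.configure / Twitter.update calls. The keys below are placeholders for your own credentials.

require 'twitter'

# placeholder credentials - copy the real ones from your Twitter application
client = Twitter::REST::Client.new do |config|
  config.consumer_key        = 'YOUR_CONSUMER_KEY'
  config.consumer_secret     = 'YOUR_CONSUMER_SECRET'
  config.access_token        = 'YOUR_ACCESS_TOKEN'
  config.access_token_secret = 'YOUR_ACCESS_SECRET'
end

tweet = client.update("hello from Max via ruby #{Time.now.to_i}")
puts tweet.id    # keep the id if you want to destroy the tweet later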

example of error handling code:

require 'twitter'   # this example uses the older module-level API of the twitter gem

MAX_ATTEMPTS = 3
num_attempts = 0
begin
  num_attempts += 1
  retweets = Twitter.retweeted_by_user("sferik")
rescue Twitter::Error::TooManyRequests => error
  if num_attempts <= MAX_ATTEMPTS
    # NOTE: Your process could go to sleep for up to 15 minutes but if you
    # retry any sooner, it will almost certainly fail with the same exception.
    sleep error.rate_limit.reset_in
    retry
  else
    raise
  end
end

 

Another useful SO post: http://stackoverflow.com/questions/16618037/twitter-rate-limit-hit-while-requesting-friends-with-ruby-gem/16620639#16620639

 

Flying an AR_drone quadcopter using Max

This project uses Max/MSP to control and track a Parrot AR-drone quadcopter, using an intermediary server which runs the (open source) node-ar-drone in node.js. https://github.com/felixge/node-ar-drone

download

https://github.com/tkzic/internet-sensors

folder: ar-drone

files

main Max patch
  • drone4.maxpat
abstractions and other files
  • data-recorder-list-tz.maxpat
  • data-recorder-wrapper.maxpat
node.js
  • drone5.js (AR_drone server)
  • bigInt.js: (OSC support)
  • byteConverter.js: (OSC support)
  • libOsc.js: (OSC library)
  • tz-dronestream-server/app-tz.js (video server)
  • tz-dronestream-server/index.html (video client – called automatically by the video server)

installing node.js and dependencies:

Install node.js on your computer.  Instructions here: http://nodejs.org

The following node packages are required. Install using npm. For example:

npm install request
  • request
  • xml2js
  • util
  • http
  • socket.io

Also, install the following packages for the ar-drone and video streaming:

  • ar-drone
  • dronestream

how to run

For this experiment, we will be running everything on the same computer.

1. Connect your computer to the AR-drone WiFi network (for example, mine is ardrone2_260592). Note: after you do that, you will not be able to read this post on the Internet.

2. Run both of the node programs from terminal windows:

Since you are running the Max control dashboard on the same computer as the server, you can call it without arguments, like this:

node drone5.js

Then from another terminal window start the video server:

node tz-dronestream-server/app-tz.js

3. In a Chrome web browser, go to the following URL (you can make multiple simultaneous connections to the video server). You should see the video from the AR-drone in the browser.

127.0.0.1:5555

4. Load the following Max patch (control dashboard)

drone4.maxpat

5. In the Max patch, try clicking the /takeoff and /land messages.

Max programming tips

To control the drone from Max, use [udpsend] and [udpreceive] with ports 4000 and 4001, respectively. You can't make multiple OSC connections (and it would probably not be a good idea while flying), but you can specify a target IP address for telemetry when running the OSC server.

We will eventually publish a complete list of commands, but they follow the API from the node-ar-drone README, converted into OSC style. For example (see the sketch after this list):

  • /takeoff
  • /up .5
  • /animate flipAhead 2000
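
Outside of Max, you could test the same OSC commands from a short ruby script using the osc-ruby gem (used elsewhere in these projects). This is just a sketch, assuming drone5.js is listening for OSC on port 4000 on the same machine, as described above.

require 'osc-ruby'   # gem install osc-ruby

client = OSC::Client.new('127.0.0.1', 4000)   # the node server's OSC port, per the tips above

client.send(OSC::Message.new('/takeoff'))
sleep 5
client.send(OSC::Message.new('/up', 0.5))     # climb at half speed
sleep 2
client.send(OSC::Message.new('/land'))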

More notes on video…

You can capture the video stream into Max either by capturing the Chrome window using Jitter, or by using Syphon. But for demo purposes I have just run the Chrome window side by side with the Max control patch.

See this post for setting up Syphon in Max: https://reactivemusic.net/?p=8662

running separate server and control computers

You may find it more practical to run the node.js server on a separate computer. If you do that, you will need to:

  • modify the dronestream script app-tz.js to insert the proper IP address in the server.listen() call, which should be the last line of the program. You will also need to use that address as the URL in Chrome, for example: 192.168.1.140:5555
  • include the controller IP address on the command line, as shown below

When testing this I set up a dual IP address on my MacBook with a static IP (192.168.1.140) so this would always be the server. I ended up getting rid of it because it caused problems with other software.

Here is a link to how to set up a dual IP address: https://reactivemusic.net/?p=6628

Here is the command you would use to specify a separate IP address when launching the server. For example, if your Max control program is on 192.168.1.104 and you want to run in outdoor mode, use this command:

node drone5.js 192.168.1.104 TRUE

 

program notes

These students are just about to send the quadcopter into the air using control panels developed in Max. Ali's control panel uses speech via the Google API. Her computer is connected to the Internet via WiFi and also connected to Chase's computer via a MIDI/USB link. Her voice commands get translated into MIDI. Chase's control panel reads the commands. Chase's computer is on the same WiFi network as the quadcopter. Chase's control panel sends commands to my computer, which is running Max and the ar-drone software in node.js. Occasionally this all works. But there is nobody to hold a camera.

We’re now running two node servers, one for Max and one for web video streaming – which can be accessed by other computers connected to the same LAN as the AR-drone.

We did have a mishap where Chase's control panel sent an "/up" command to the quadcopter. Then his MacBook battery died as the quadcopter was rising into the sky. I managed to rewrite the server program, giving it a /land command, then restarted it. It was able to re-establish communication with the quadcopter and make it land.

Unfortunately we did not get video of this experiment but here are a few seconds of video showing the quadcopter taking off and landing under control of Max – while indoors.

EchoNest segment analysis player in Max

The Echo Nest API provides sample-level audio analysis.

http://developer.echonest.com/docs/v4/_static/AnalyzeDocumentation.pdf

What if you used that data to reconstruct music by driving a sequencer in Max? The analysis is a series of time-based quanta called segments. Each segment provides information about timing, timbre, and pitch, roughly corresponding to rhythm, harmony, and melody.
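
To make that concrete, here is a rough ruby sketch (not the project server) of reading an analysis JSON file and forwarding each segment to Max as an OSC message. The field names (start, duration, loudness_max, pitches, timbre) follow the Analyze documentation; the port and the /segment address are just examples.

require 'json'
require 'osc-ruby'   # gem install osc-ruby

client   = OSC::Client.new('127.0.0.1', 9000)        # assumed port for a [udpreceive 9000] in Max
analysis = JSON.parse(File.read('analysis.json'))    # saved response from the analyze API

analysis['segments'].each do |seg|
  # pitches and timbre are 12-element vectors; send one flattened list per segment
  args = [seg['start'].to_f, seg['duration'].to_f, seg['loudness_max'].to_f] +
         seg['pitches'].map(&:to_f) + seg['timbre'].map(&:to_f)
  client.send(OSC::Message.new('/segment', *args))
  sleep seg['duration']    # crude timing; the real patch handles sequencing in Max
end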

download

https://github.com/tkzic/internet-sensors

folder: echo-nest

files

main Max patch
  • echonest-synth4.maxpat
abstractions and other files
  • polyvoice-sine.maxpat
  • polyvoice2.maxpat
ruby server
  • echonest-synth2.rb

authentication

You will need to sign up for a developer account at The Echo Nest, and get an API key. https://developer.echonest.com

Edit the ruby server file echonest-synth2.rb, replacing the API key with your new API key from Echo Nest.

 

installing ruby gems

Install the following ruby gems (from the terminal):

gem install patron

gem install osc-ruby

gem install json

gem install uri

instructions

1. In Terminal run the ruby server:

./echonest-synth2.rb

2. Open the Max patch: echonest-synth4.maxpat and turn on the audio.

3. Enter an artist and song title for analysis in the text boxes. Then press the green buttons for title and artist. Then press the /analyze button. If it works, you will get prompts from the terminal window and the Max window, and you should see the time in seconds in the upper right corner of the patch.

If there are problems with the analysis, it's most likely due to one of the following:

  • artist or title spelled incorrectly
  • song is not available
  • song is too long
  • API is busy
If the ruby server hangs or crashes, just restart it and try again.

4. Press one of the preset buttons to turn on the tracks.

5. Now you can play the track by pressing the /play button.

The mixer channels, from left to right, are:

  • bass
  • synth (left)
  • synth (right)
  • random octave synth
  • timbre synth
  • master volume
  • gain trim
  • HPF cutoff frequency
You can also adjust the reverb decay time and the playback rate. Normal playback rate is 1.

programming notes

Best results happen with slow abstract material, like the Miles (Wayne Shorter) piece above. The bass is not really happening. Lines all sound pretty much the same. I’m thinking it might be possible to derive a bass line from the pitch data by doing a chordal analysis of the analysis.

Here are screenshots of the Max sub-patches (the main screen is in the video above)

Timbre (percussion synth) – plays filtered noise:

Random octave synth:

Here’s a Coltrane piece, using roughly the same configuration but with sine oscillators for everything:

There are issues with clicks on the envelopes and the patch is kind of a mess but it plays!

Several modules respond to the API data:

  • tone synthesizer (pitch data)
  • harmonic (random octave) synthesizer (pitch data)
  • filtered noise (timbre data)
  • bass synthesizer (key and mode data)
  • envelope generator (loudness data)

Since the key/mode data is global for the track, bass notes are probable guesses. This method doesn't work for material with strong root motion or a variety of harmonic content. It's essentially the same approach I use when asked to play bass at an open mic night.
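
As a rough illustration of that guessing, here is a minimal ruby sketch (not the project code) that picks a bass pitch from the track's global key and mode, assuming the Echo Nest convention of key as a pitch class 0-11 (0 = C) and mode 1 = major, 0 = minor.

KEY_NAMES = %w[C C# D D# E F F# G G# A A# B]

def bass_note(key, mode)
  interval    = rand < 0.7 ? 0 : 7          # favor the root, sometimes the fifth: a probable guess
  pitch_class = (key + interval) % 12
  midi_note   = 36 + pitch_class            # keep it down around C2
  { name: KEY_NAMES[pitch_class], midi: midi_note, mode: mode == 1 ? :major : :minor }
end

p bass_note(2, 1)   # key of D major: returns root D or fifth A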

The envelopes click at times. It may be due to the relaxed method of timing, i.e., none at all. If they don't go away when timing is corrected, this might get cleaned up by adding a few milliseconds to the release time, or by looking ahead to make sure the edges of segments are lining up.

[update] Using the Max [poly~] object cleared up the clicking and distortion issues.

Timbre data drives a random noise filter machine. I just patched something together and it sounded responsive, but it's kind of hissy. An LPF might make it less interesting.

Haven’t used any of the beat, tatum, or section data yet. The section data should be useful for quashing monotony.

another update – 4/2013

tried to write this into a Max4Live device, so that the pitch data would be played by a MIDI (software) instrument. No go. The velocity data gets interpreted in mysterious ways, plus each instrument has its own envelope which interferes with the segment envelopes. Need to think this through. One idea would be to write a device which uses EN analysis data for beats to set warp markers in Live. It would be an amazing auto-warp function for any song. Analysis wars: Berlin vs. Somerville.

en_analyzer~

Echo Nest analysis in Max.

By Michael Dewberry

I downloaded the fork version from ‘dewb’ as it has been converted to run in Max 6. It looks like the object retrieves all of the analysis data. It would actually be instructive to read the source code to see how they implemented libcurl and JSON for the HTTP requests.

https://github.com/Dewb/en_analyzer 

domain ping machine in Web Audio

A ‘mini’ version of the Google domain ping synthesizer from the internet-sensors collection (using the Mashape API). This one runs in Web Audio, using the Web Audio Playground with OSC.

Looks like a card game. Anyway it sounds cool. Doesn’t have the panning of the original, but it has an organic sound due to portamento in frequency changes, and more ‘beating’. Here’s a short excerpt.

Another example of Max controlling WAP https://reactivemusic.net/?p=6193

download

https://github.com/tkzic/WebAudio

folder is: WebAudio/osctest/

files

  • wapOSCserver-ping.rb
  • wapPingTest.maxpat
  • WAP patch: ping2 (5 oscillators -> 5 gains -> 1 master gain) – ping2.json
  • Web Page: WebAudio/index.html

instructions

update: you can run an online version of the WAP Web client at http://zerokidz.com/wap/index.html – if you load this page, skip to step 3.

1. Run the node webserver in the WebAudio folder:

node nodeserver.js

(it will run on localhost port 8081 – for example http://127.0.0.1:8081)

2. In the Chrome web browser, go to: 127.0.0.1:8081/index.html

3. From a terminal window, go to the osctest/ folder and start the server by typing:

./wapOSCserver-ping.rb

4. Load the Max patch:

wapPingTest.maxpat

5. In Chrome, click the OSC button – the ruby server should open a socket connection

6. Also in Chrome, load the patch: ping2 (note that there is a json copy of this patch ping2.json that can be pasted in, if it doesn’t show up in the menu)

6.5. In WAP, click the square buttons on the 5 oscillators to start them playing. You should hear sounds at this point.

7. Now back in the Max patch, click the green toggle to start polling. You will probably want to increase the polling rate to about 50 ms instead of 1000 ms.

suggestions

  • If it doesn't seem like there is much action in the patch, try adjusting the FREQ_MULT and GAIN_MULT values inside the ruby script (see the sketch after this list).
  • You will probably also want to open the developer JavaScript console in Chrome to see what is going on.
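
Here is a hypothetical sketch of the kind of mapping the ruby script performs: scaling a ping time (in ms) into an oscillator frequency and gain, then sending the result to the WAP client over OSC. FREQ_MULT and GAIN_MULT correspond to the constants mentioned above; the port and OSC addresses are illustrative, not the script's actual values.

require 'osc-ruby'

FREQ_MULT = 4.0       # tune these to taste, as suggested above
GAIN_MULT = 0.005

client = OSC::Client.new('127.0.0.1', 9000)   # assumed port; check wapOSCserver-ping.rb

def ping_to_osc(client, osc_index, ping_ms)
  freq = 100.0 + ping_ms * FREQ_MULT          # longer pings produce higher pitches
  gain = [1.0, ping_ms * GAIN_MULT].min       # clamp gain at 1.0
  client.send(OSC::Message.new("/osc/#{osc_index}/frequency", freq))
  client.send(OSC::Message.new("/osc/#{osc_index}/gain", gain))
end

ping_to_osc(client, 1, 42.0)   # e.g. a 42 ms ping drives oscillator 1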

Internet sensors projects

overview

A series of projects that use Internet APIs for interactive media.

updated 2/14/2021.

Projects have been tested on Max8 and Mac OS Catalina, except where noted. Other dependencies are listed on individual project pages.

My goal is to show a variety of methods to get data to and from Max. APIs come and go, as do the libraries that support them.

download

internet-sensors is on Github at:  https://github.com/tkzic/internet-sensors

Each project is in a separate folder.

authorization

Some projects require passwords and API-keys from providers.

For example, for the ‘Twitter streaming API in Max’ project you’ll need to set up a Twitter application from your account to get authorization credentials.

For projects that need authorization, you'll usually just need to modify the patches/source code with your user information, as directed in the instructions. The API keys embedded in the code will not work unless specifically mentioned, as with the Google speech API.

help

APIs used in the projects change fairly often, so there's no guarantee they'll work. If you find problems or have ideas, please post them to the GitHub repository, or email me at [email protected].

projects

1. Twitter streaming API in Max (FM, php, curl, geocoding, [aka.speech], Soundflower (optional), Morse code, OSC, data recorder, Twitter v1.1 API, Twitter Apps, Oauth)

https://reactivemusic.net/?p=5786

2. Sending tweets from Max using curl ([sprintf], [aka.shell], xively.com API, zapier.com API, JSON, javascript Twitter v1.1 API, Oauth)

deprecated 2/11/2021 – old project link here: https://reactivemusic.net/?p=5447

3. Send and receive tweets in Max using ruby (ruby, API, JSON, javascript Twitter v1.1 API, OSC, Oauth)

New! – use the project above to send tweets using a Fisher Price “Little Tikes” piano: https://reactivemusic.net/?p=6993

4. Speech to text in Max (Google speech API, JSON, javascript, sox, Twitter v1.1 API, Oauth)

Note: Send Tweets using speech as well.

https://reactivemusic.net/?p=4690

5. A conversation with a robot in Max (Google speech API, sox, JSON, pandorabots API, python, [aka.speech])

https://reactivemusic.net/?p=9834

7. Playing bird calls in Max (xeno-canto API, [jit.uldl], [jit.qt.movie])

https://reactivemusic.net/?p=4225

8. Soundcloud API in Max (node.js)

https://reactivemusic.net/?p=20120

 

9. Real time train map using Max and node.js (XML, JSON, OSC, data recorder, web sockets, Irish Rail API)

https://reactivemusic.net/?p=5477

10. stock market music in Max (OSC, netcat,  php, mysql, html, javascript, Yahoo API, linux)

…updates in progress…

https://reactivemusic.net/?p=12029

11. Using weather forecast data to drive weather sounds in Pure Data (ruby, OSC, JSON, openweathermap API, “Designing Sound” by Andy Farnell)

https://reactivemusic.net/?p=5846

… updates in progress…

12. Using ping times to control oscillators in Max (Mashape ping-uin API, ruby, OSC, JSON)

https://reactivemusic.net/?p=5945

13. Spotify segment analysis player – sonification of audio analysis data from the Spotify (Echo Nest) API (node, Max/MSP)

https://reactivemusic.net/?p=20096

14. Quadcopter AR_drone – Fly a quadcopter using Max – with streaming Web video. (node.js, AR_drone, Google Chrome, OSC, Max/MSP)

deprecated 2/14/2021 – old project link: https://reactivemusic.net/?p=6635

15. Adding markers to Google Maps in Max – (node.js, ruby, Google Chrome, OSC, Max/MSP, websockets, Google Maps API, jQuery, javascript)

deprecated 2/14/2021 – old project link: https://reactivemusic.net/?p=11412

16. Max data recorder –  Record and play back streams of data simultaneously at various rates

https://reactivemusic.net/?p=8053

17. MBTA bus data in Max –  Sonification of Mass Ave buses, from Harvard to Dudley

… updates in progress…

https://reactivemusic.net/?p=17524


Twitter streaming API in Max

World map and radio simulation

note August 3, 2022 –

This program is broken on Mac OS Monterey. The PHP code is throwing errors in the OSC library. I'm not certain there is a reasonable workaround at this point and will be looking at replacing the PHP code with node.js or another more reliable platform. Also, as noted below, PHP is no longer installed in Mac OS, so it requires Homebrew or MacPorts.

features

  • Twitter streaming v1.1 API and Twitter Apps (using http requests and Oauth in php)
  • lat/lon conversion and map plotting in Max
  • sending data to Max using OSC in php
  • ‘speaking’ tweets using several voices (text to speech)
  • Using geo-coordinates to control an FM synthesizer
  • Converting Tweet text to Morse code
  • Using a data recorder to replay/save data streams (Max lists)

Compare to satellite photo of earth – note the pattern of lights.

download

https://github.com/tkzic/internet-sensors

folder: twitter-stream

files

Max
  • world3.maxpat (main patch)
  • data_recorder_list-tz.maxpat (abstraction for recording data)
  • data-recorder-wrapper.maxpat (abstraction for recording data)
  • worldMap.jpg
  • twitter-morning.txt (sample data – not required)
php
  • ctwitter_max3.php (main program)
  • ctwitter_stream_max3.php (twitter engine)
  • udp.php (OSC client)

3 August, 2022

(Note: starting with Mac OS Monterey, PHP is no longer included in Mac OS. You can install it with Homebrew. See this post: https://www.ergonis.com/products/tips/install-php-on-macos.php)

externals

[note] The project displays Tweets without the speech externals (see the revision history below), but you won't hear any speech.

authorization

In addition to having a Twitter account, you will need to set up a Twitter application from the developer site here:

https://dev.twitter.com/apps

Good instructions on how to do this can be found in this stackoverflow.com post under this heading: So you want to use the Twitter v1.1 API?

http://stackoverflow.com/questions/12916539/simplest-php-example-for-retrieving-user-timeline-with-twitter-api-version-1-1

When you get to step 5 in the instructions, instead of writing your own code, just use a text editor to copy your access tokens into the provided php program:

  • ctwitter_max3.php

Replace the strings in this line of code by copying and pasting the appropriate ones from your Twitter application:

$t->login('consumer_key', 'consumer secret', 'access token', 'access secret');

 

So it will end up looking something like this:

$t->login('ZdzfNaeflihFydfOHeOA', 'eXzUOfhif4riifgRbCTnnSN0T7neYtg8dIWDC7j3bs', '205589709-5kRI1fllJvU94jjffeerSn9LrTajtxSrvO8', 'u5MuSxPseBemUIBWlMxEFaw899feedXA0eHlReCnQ');

Yeah – it's cryptic…

instructions

1. Open the Max patch: world3.maxpat

2. In a terminal window, run the php program ctwitter_max3.php. [note] It runs forever; press <ctrl-c> when you want to stop streaming Tweets.

php ./ctwitter_max3.php

3. Switch back to world3.maxpat to see dots populating the map

4. In Max, press the speaker icon (lower left) to turn on audio.

5. Activate the voice synth/Morse code using the blue toggle (lower left)

6. Clear the map by pressing the blue message box: “clear, drawpict a 0 0”

7. Stop the Tweet stream by pressing <ctrl-c> in the terminal window

special voice fx

If you have Soundflower installed, the Mac OS speech synth output can be routed back to Max for audio processing. This is somewhat complicated, but shows how to process audio in Max from other sources.

  • In MacOS System Preferences, set audio output device to Soundflower 2ch
  • Turn up hardware volume control on your computer
  • In Max, Options | Audio Status, set input device to Soundflower 2ch
  • In world3.maxpat, double-click [p audio engine] (lower left). Then in the audio-engine sub-patch, activate the toggle (lower right) for voice-fx

data recording

The built-in data recorder/playback is on the left side of world3.maxpat:

  • toggle ‘record’ (red toggle)  to start or stop data recording
  • Note that data will only be recorded when the php program is streaming Tweets in the terminal window (see above)
  • Press /play message or other transport controls to replay data

revision history

1/19/2021

Updates for Max8 and Catalina:

Replaced [aka.speech] external with Jeremy Bernstein’s [shell] external and the Mac OS command line ‘say’ command.

Reinstalled Java Development Kit for [mxj] object

I revised the php code for the Twitter streaming project to use the coordinates of a corner of the city polygon bounding box. That seems to be more reliable than the geo coordinates, which are absent from most Tweets.
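
In ruby terms (the project itself does this in PHP), that fallback looks something like the sketch below, assuming the Twitter v1.1 payload structure: coordinates are GeoJSON [lon, lat], and place.bounding_box.coordinates[0] is the list of polygon corners.

def tweet_lat_lon(status)
  if status['coordinates']
    lon, lat = status['coordinates']['coordinates']                    # GeoJSON order: [lon, lat]
  elsif status['place'] && status['place']['bounding_box']
    lon, lat = status['place']['bounding_box']['coordinates'][0][0]    # first corner of the city polygon
  else
    return nil                                                         # no usable location in this Tweet
  end
  [lat, lon]
end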

  • updated 3/26/2014 – fixed runtime error in php server
  • updated 2/2/2014 – simplified user interface and updated audio engine
  • updated 9/2/2013 for Twitter v1.1 API with Oauth – note that older versions of this project are broken due to discontinued Twitter v1.0 API as of June 2013