Max for Live Pluggo abstractions

By Cycling74

Max/MSP folder: applications/max6.1/patches/m4l-patches/Pluggo for Live Resources/patches

Here you will find things like pluggo.randomPreset, which gets called via sends/receives in the Pluggo patches. A challenge: how do you randomize ‘any’ parameter in any patch, without knowing the ranges?

 

Programming Max For Live devices

By Cycling 74

Highlights
(by video number in the YouTube series)
  1. overview/intro
  2. * a way to show how many MIDI notes are being played, using [borax], and building a list of them using [bag]
  3.  how to sync a [phasor~] signal to the Live transport (1 bar)
  4.  transport controlled velocity step sequencer.
  5.  making a MIDI instrument
  6.  ADSR control of the MIDI instrument
  7. *  filtering with sync to get dubstep bass
  8.  working with buffer~ and samples, and drop
  9. *  [poly~] polyphonic synth : http://cycling74.com/download/polysynthset.zip
  10. * live API and remote (to control external devices) and live.step, with [phasor~] control
  11. * parameter sequencer

 

EP-3xx41 Max – week 1

Design

  • People
  • Ideas
  • Connections

Max patches

[wpdm_file id=3]

Projects

Conversation with robots

https://reactivemusic.net/?p=4710

Frame subtraction remixer by Adam Rokhsar

Using video to control audio

Twitter Streaming Radio

https://reactivemusic.net/?p=5786

AR-drone Quadcopter

code not available

The sound of a new machine

https://reactivemusic.net/?p=5945

“Designing Sound” by Andy Farnell. Max examples: Helicopter, TOS transporter,
SynthCar, Jet Engine, Granular Timestretch.

Mira by Sam Tarakajian

The [gizmo~] help file
Demonstration
Little Tikes Piano controller

https://reactivemusic.net/?p=6993

Assignment

Build a control panel in Max. It should look amazing. It should be the coolest control panel you can imagine. Use any objects, colors, shapes that you can find. But… it shouldn’t actually control anything.

Due: Have a first draft on your computer for next week. Final version due before class on 9/23. Email me the Max patch (maxpat) file or a link.

Adam Rokhsar’s Max frame subtraction example

Uses video frame subtraction in Jitter to control playback of an audio clip.

download

https://github.com/tkzic/max-projects

folder: frame-subtraction

patch: frameSubtraction_example.maxpat

You will also need an audio file: aiff, wav, etc., to load into a Max buffer.

dependencies

You will also need the cv.jit library (computer vision): http://jmpelletier.com/cvjit/

Add the location of these files to your path in Max using Options | File Preferences.

Note: When I loaded the patch in Mac OS 10.8 – the computer automatically downloaded and installed Java updates.

instructions

  • Load an audio file for playback
  • Try setting minimum summed pixels to 150,000 or less for greater effect – depending on the amount of light in the room
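The arithmetic behind that threshold is easy to sketch outside of Jitter. Here is a rough illustration in Ruby – the real patch does this with cv.jit on Jitter matrices, and the frames below are just flat arrays of grayscale pixel values (0–255):

```ruby
# Rough sketch of frame subtraction: sum the absolute pixel differences
# between consecutive frames. (Illustrative only -- the actual patch uses
# cv.jit on Jitter matrices, not Ruby arrays.)
def motion_amount(prev_frame, frame)
  frame.zip(prev_frame).sum { |current, previous| (current - previous).abs }
end

# The "minimum summed pixels" parameter is this threshold:
def motion_trigger?(prev_frame, frame, threshold = 150_000)
  motion_amount(prev_frame, frame) >= threshold
end

motion_amount([0, 0, 0, 0], [255, 255, 0, 0])  # => 510
```

Lowering the threshold makes smaller movements (or dimmer rooms) trigger the remixer.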

Tweet from Max with ruby

Update 6/2014: working version here: https://reactivemusic.net/?p=7013

notes

The zapier.com trigger method of sending tweets from Max is limited in the number of tweets and the sync rate. It would be nice to set up another intermediary server program in ruby or php that eliminates the middle-man and just sends tweets directly.

Or you could use the mxj searchTweet program, which has been updated to do this on the search side.

twitter gem

update: Got it working with this gem: https://github.com/sferik/twitter. It’s much easier than dealing with xively.

docs: http://rdoc.info/gems/twitter

how to destroy a tweet: http://stackoverflow.com/questions/10640811/twitter-gem-how-to-get-the-id-of-a-new-tweet

how to get a timeline: http://bobbyprabowo.wordpress.com/2010/01/01/simple-twitter-gem-tutorial/
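Putting those pieces together, a direct-tweet relay might look something like this. This is a sketch, assuming the twitter gem’s v5-style REST client; the credential variable names and the length-clamping helper are mine, not from the project:

```ruby
# Sketch of a direct-tweet relay using the sferik/twitter gem (v5-style
# REST client). Credential env var names and clamp_tweet are illustrative.
def clamp_tweet(text, limit = 140)   # Twitter's limit at the time of writing
  text.length <= limit ? text : text[0, limit - 1] + '…'
end

if ENV['TWITTER_CONSUMER_KEY']   # only attempt a real send when credentials exist
  require 'twitter'              # gem install twitter
  client = Twitter::REST::Client.new do |config|
    config.consumer_key        = ENV['TWITTER_CONSUMER_KEY']
    config.consumer_secret     = ENV['TWITTER_CONSUMER_SECRET']
    config.access_token        = ENV['TWITTER_ACCESS_TOKEN']
    config.access_token_secret = ENV['TWITTER_ACCESS_TOKEN_SECRET']
  end
  client.update(clamp_tweet('Hello from Max!'))
end
```

In the relay scenario the text would come in from Max (e.g. via OSC) instead of being hard-coded.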

example of error handling code:

require 'twitter'   # note: this example uses the pre-5.0 module-level API

MAX_ATTEMPTS = 3
num_attempts = 0
begin
  num_attempts += 1
  retweets = Twitter.retweeted_by_user("sferik")
rescue Twitter::Error::TooManyRequests => error
  if num_attempts <= MAX_ATTEMPTS
    # NOTE: Your process could go to sleep for up to 15 minutes but if you
    # retry any sooner, it will almost certainly fail with the same exception.
    sleep error.rate_limit.reset_in
    retry
  else
    raise
  end
end

 

Another useful SO post: http://stackoverflow.com/questions/16618037/twitter-rate-limit-hit-while-requesting-friends-with-ruby-gem/16620639#16620639

 

Flying an AR-drone quadcopter using Max

This project uses Max/MSP to control and track a Parrot AR-drone quadcopter, using an intermediary server which runs the (open source) node-ar-drone in node.js. https://github.com/felixge/node-ar-drone

download

https://github.com/tkzic/internet-sensors

folder: ar-drone

files

main Max patch
  • drone4.maxpat
abstractions and other files
  • data-recorder-list-tz.maxpat
  • data-recorder-wrapper.maxpat
node.js
  • drone5.js (AR_drone server)
  • bigInt.js: (OSC support)
  • byteConverter.js: (OSC support)
  • libOsc.js: (OSC library)
  • tz-dronestream-server/app-tz.js (video server)
  • tz-dronestream-server/index.html (video client – called automatically by the video server)

installing node.js and dependencies:

Install node.js on your computer.  Instructions here: http://nodejs.org

The following node packages are required. Install using npm. For example:

npm install request
  • request
  • xml2js
  • util
  • http
  • socket.io

Also, install following packages for the ar-drone and video streaming:

  • ar-drone
  • dronestream

how to run

For this experiment, we will be running everything on the same computer.

1. Connect your computer to the AR-drone WiFi network. For example, mine is: ardrone2_260592. Note: after you do that, you will not be able to read this post on the Internet.

2. Run both of the node programs from terminal windows.

Since you are running the Max control dashboard on the same computer as the server, you can call it without args, like this:

node drone5.js

Then from another terminal window start the video server:

node tz-dronestream-server/app-tz.js

3. In a Chrome web browser, open the following URL (you can make multiple simultaneous connections to the video server). You should see the video from the AR-drone in the browser.

127.0.0.1:5555

4. Load the following Max patch (control dashboard)

drone4.maxpat

5. In the Max patch, try clicking the /takeoff and /land messages.

Max programming tips

To control the drone from Max, use [udpsend] and [udpreceive] with ports 4000 and 4001 respectively. You can’t make multiple connections with OSC – and it would probably not be so cool while flying – but you can specify a target ip for telemetry when running the OSC server.

We will eventually publish a complete list of commands, but they follow the API from the ar-drone docs readme file – converted into OSC style. For example:

  • /takeoff
  • /up .5
  • /animate flipAhead 2000
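If you want to drive the server from something other than Max, the same messages can be sent from any OSC client – the osc-ruby gem works, or even a dependency-free sketch like the one below. The packer here is a hypothetical minimal OSC encoder (it handles only int and float arguments, so it covers /takeoff and /up but not the string argument of /animate):

```ruby
require 'socket'

# Minimal OSC message packer (hypothetical -- int/float arguments only).
# OSC strings are null-terminated and padded out to 4-byte boundaries.
def osc_pad(s)
  s + "\x00" * (4 - (s.bytesize % 4))
end

def osc_message(address, *args)
  typetags = ',' + args.map { |a| a.is_a?(Float) ? 'f' : 'i' }.join
  payload  = args.map { |a| a.is_a?(Float) ? [a].pack('g') : [a].pack('N') }.join
  osc_pad(address) + osc_pad(typetags) + payload
end

# Send the same commands the Max dashboard sends, to the drone server:
sock = UDPSocket.new
sock.send(osc_message('/takeoff'), 0, '127.0.0.1', 4000)
sock.send(osc_message('/up', 0.5), 0, '127.0.0.1', 4000)
```

Telemetry would come back the other way, on port 4001.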

More notes on video…

You can capture the video stream into Max, either by capturing the Chrome window using Jitter, or by using Syphon – but for demo purposes I have just run the Chrome window side by side with the Max control patch.

See this post for setting up Syphon in Max: https://reactivemusic.net/?p=8662

running separate server and control computers

You may find it more practical to run the node.js server on a separate computer. If you do that, you will need to:

  • modify the dronestream script app-tz.js to insert the proper ip address in the server.listen() call – which should be the last line of the program. You will also need to use that address as the URL in Chrome, for example: 192.168.1.140:5555
  • include the controller ip address on the command line as shown below

When testing this I set up a dual IP address on my Macbook with a static IP: 192.168.1.140 – so this would always be the server. I ended up getting rid of it because it caused problems with other software.

Here is a link to how to set up a dual IP address: https://reactivemusic.net/?p=6628

Here is the command you would use to specify a separate IP address when launching the server. For example, if your Max control program is on 192.168.1.104 and you want to run in outdoor mode, use this command:

node drone5.js 192.168.1.104 TRUE

 

program notes

These students are just about to send the quadcopter into the air using control panels developed in Max. Ali’s control panel uses speech via the Google API. Her computer is connected to the Internet via WiFi and also connected to Chase’s computer via a MIDI/USB link. Her voice commands get translated into MIDI. Chase’s control panel reads the commands. Chase’s computer is on the same WiFi network as the quadcopter. Chase’s control panel sends commands to my computer, which is running Max and the ar-drone software in node.js. Occasionally this all works. But there is nobody to hold a camera.

We’re now running two node servers, one for Max and one for web video streaming – which can be accessed by other computers connected to the same LAN as the AR-drone.

We did have a mishap where Chase’s control panel sent an “/up” command to the quadcopter. Then his Macbook battery died as the quadcopter was rising into the sky. I managed to rewrite the server program, giving it a /land command – then restarted it. It was able to re-establish communication with the quadcopter and make it land.

Unfortunately we did not get video of this experiment but here are a few seconds of video showing the quadcopter taking off and landing under control of Max – while indoors.

EchoNest segment analysis player in Max

The Echo Nest API provides sample-level audio analysis.

http://developer.echonest.com/docs/v4/_static/AnalyzeDocumentation.pdf

What if you used that data to reconstruct music by driving a sequencer in Max? The analysis is a series of time based quanta called segments. Each segment provides information about timing, timbre, and pitch – roughly corresponding to rhythm, harmony, and melody.
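As a sketch of the idea: each segment in the analysis JSON carries start, duration, a 12-value pitch chroma, and loudness fields (field names per the Analyze documentation), so a crude mapping to sequencer note data might look like this. The scaling choices here are mine for illustration, not what the patch does:

```ruby
require 'json'

# Crude segment -> note mapping (illustrative; the scaling is arbitrary).
# Field names (start, duration, pitches, loudness_max) are from the
# EchoNest Analyze documentation.
def segment_to_note(seg)
  chroma = seg['pitches']                  # 12 values (C..B), each 0.0-1.0
  pitch_class = chroma.index(chroma.max)   # strongest pitch class wins
  {
    midi:     60 + pitch_class,            # park it in the middle octave
    start:    seg['start'],
    duration: seg['duration'],
    velocity: [((seg['loudness_max'] + 60) * 2).round, 127].min  # dB -> 0..127
  }
end

seg = JSON.parse('{"start":0.5,"duration":0.25,"loudness_max":-12.0,
  "pitches":[0.1,0.9,0.2,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1]}')
segment_to_note(seg)  # => {:midi=>61, :start=>0.5, :duration=>0.25, :velocity=>96}
```

In the actual project, data like this is relayed to the Max patch over OSC and drives the synth voices described below.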

download

https://github.com/tkzic/internet-sensors

folder: echo-nest

files

main Max patch
  • echonest-synth4.maxpat
abstractions and other files
  • polyvoice-sine.maxpat
  • polyvoice2.maxpat
ruby server
  • echonest-synth2.rb

authentication

You will need to sign up for a developer account at The Echo Nest, and get an API key. https://developer.echonest.com

Edit the ruby server file, echonest-synth2.rb, replacing the API key with your new key from The Echo Nest.

 

installing ruby gems

Install the following ruby gems (from the terminal):

gem install patron

gem install osc-ruby

gem install json

gem install uri

instructions

1. In Terminal run the ruby server:

./echonest-synth2.rb

2. Open the Max patch: echonest-synth4.maxpat and turn on the audio.

3. Enter an Artist and Song title for analysis in the text boxes. Then press the green buttons for title and artist. Then press the /analyze button. If it works, you will get prompts from the terminal window and the Max window, and you should see the time in seconds in the upper right corner of the patch.

If there are problems with the analysis, it’s most likely due to one of the following:

  • artist or title spelled incorrectly
  • song is not available
  • song is too long
  • API is busy
If the ruby server hangs or crashes, just restart it and try again.

4. Press one of the preset buttons to turn on the tracks.

5. Now you can play the track by pressing the /play button.

The mixer channels, from left to right, are:

  • bass
  • synth (left)
  • synth (right)
  • random octave synth
  • timbre synth
  • master volume
  • gain trim
  • HPF cutoff frequency
You can also adjust the reverb decay time and the playback rate. Normal playback rate is 1.

programming notes

Best results happen with slow abstract material, like the Miles (Wayne Shorter) piece above. The bass is not really happening. Lines all sound pretty much the same. I’m thinking it might be possible to derive a bass line from the pitch data by doing a chordal analysis of the analysis.

Here are screenshots of the Max sub-patches (the main screen is in the video above)

Timbre (percussion synth) – plays filtered noise:

Random octave synth:

Here’s a Coltrane piece, using roughly the same configuration but with sine oscillators for everything:

There are issues with clicks on the envelopes and the patch is kind of a mess but it plays!

Several modules respond to the API data:

  • tone synthesizer (pitch data)
  • harmonic (random octave) synthesizer (pitch data)
  • filtered noise (timbre data)
  • bass synthesizer (key and mode data)
  • envelope generator (loudness data)

Since the key/mode data is global for the track, bass notes are probable guesses. This method doesn’t work for material with strong root motion or a variety of harmonic content. It’s essentially the same approach I use when asked to play bass at an open mic night.
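The guess can be sketched in a few lines: the track analysis gives one global key (0–11, C = 0) and mode (1 = major, 0 = minor), so the bass simply walks degrees of that single scale for the entire song. The octave placement below is my choice, not the patch’s:

```ruby
# The "probable guess" bass in miniature. Key (0-11, C=0) and mode
# (1=major, 0=minor) are the single global values from the track analysis.
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]
MINOR_SCALE = [0, 2, 3, 5, 7, 8, 10]

def bass_note(key, mode, degree)
  scale = (mode == 1 ? MAJOR_SCALE : MINOR_SCALE)
  36 + key + scale[degree % scale.length]   # MIDI, around octave 2
end

bass_note(2, 1, 0)  # => 38 (D, the root)
bass_note(2, 1, 4)  # => 45 (A, the fifth)
```

Which is exactly why it falls apart on material with real root motion – every note is drawn from one scale no matter what the harmony is doing.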

The envelopes click at times – it may be due to the relaxed method of timing, i.e., none at all. If the clicks don’t go away when timing is corrected, this might get cleaned up by adding a few milliseconds to the release time – or looking ahead to make sure the edges of segments are lining up.

[update] Using the Max [poly~] object cleared up the clicking and distortion issues.

Timbre data drives a random noise filter machine. I just patched something together and it sounded responsive – but it’s kind of hissy – an LPF might make it less interesting.

Haven’t used any of the beat, tatum, or section data yet. The section data should be useful for quashing monotony.

another update – 4/2013

tried to write this into a Max4Live device – so that the pitch data would be played by a MIDI (software) instrument. No go. The velocity data gets interpreted in mysterious ways – plus each instrument has its own envelope, which interferes with the segment envelopes. Need to think this through. One idea would be to write a device which uses EN analysis data for beats to set warp markers in Live. It would be an amazing auto-warp function for any song. Analysis wars: Berlin vs. Somerville.