Transcribing non-musical events

Originally this idea involved birds at the seashore, but yesterday I was in a food court and heard an air conditioner playing a sus chord, with rhythmic beeps on the root and fourth. It would be interesting to score some of these.

 

get web video into Jitter

notes

1. First, QC + Syphon

http://cycling74.com/forums/topic/using-jitter-to-display-a-website/

I have installed this plugin with Quartz Composer and used the Simple Browser.qtz example, in tkzic/quartz

http://code.google.com/p/cogewebkit/

2. Also there is the [jweb] object

http://williamjturkel.net/code/interacting-with-multimedia-in-max-6/

3. An approach using [jweb] and [jit.desktop]

http://cycling74.com/forums/topic/import-live-feed-into-jitter/

 

Flying an AR-drone quadcopter using Max

This project uses Max/MSP to control and track a Parrot AR-drone quadcopter, using an intermediary server which runs the open source node-ar-drone library in node.js: https://github.com/felixge/node-ar-drone

download

https://github.com/tkzic/internet-sensors

folder: ar-drone

files

main Max patch
  • drone4.maxpat
abstractions and other files
  • data-recorder-list-tz.maxpat
  • data-recorder-wrapper.maxpat
node.js
  • drone5.js (AR-drone server)
  • bigInt.js (OSC support)
  • byteConverter.js (OSC support)
  • libOsc.js (OSC library)
  • tz-dronestream-server/app-tz.js (video server)
  • tz-dronestream-server/index.html (video client – called automatically by the video server)

installing node.js and dependencies:

Install node.js on your computer.  Instructions here: http://nodejs.org

The following node packages are required. Install using npm. For example:

npm install request
  • request
  • xml2js
  • util
  • http
  • socket.io

Also, install the following packages for the ar-drone and for video streaming (a combined install command follows below):

  • ar-drone
  • dronestream
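
If you prefer, the whole set can be installed in one step. Note that util and http are Node built-ins, so they normally don’t need a separate install (this assumes the package names above are current on npm):

npm install request xml2js socket.io ar-drone dronestream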

how to run

For this experiment, we will be running everything on the same computer.

1. Connect your computer to the AR-drone Wi-Fi network. For example, mine is: ardrone2_260592 – Note: after you do that, you will not be able to read this post on the internet.

2. Run both of the node programs from terminal windows:

Since you are running the Max control dashboard on the same computer as the server, you can call it without arguments, like this:

node drone5.js

Then from another terminal window start the video server:

node tz-dronestream-server/app-tz.js

3. In a Chrome web browser, enter the following URL (you can make multiple simultaneous connections to the video server). You should see video from the AR-drone in the browser.

127.0.0.1:5555

4. Load the following Max patch (the control dashboard):

drone4.maxpat

5. In the Max patch, try clicking the /takeoff and /land messages.

Max programming tips

To control the drone from Max, use [udpsend] and [udpreceive] with ports 4000 and 4001 respectively. You can’t make multiple OSC connections – which would probably not be such a good idea while flying anyway – but you can specify a target IP for telemetry when running the OSC server.

We will eventually publish a complete list of commands, but they follow the API from the ar-drone readme file, converted into OSC style. For example (see also the sketch after this list):

  • /takeoff
  • /up .5
  • /animate flipAhead 2000
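
For testing without Max, you can fire the same commands from node. Here’s a minimal sketch, assuming the node-osc package (which is not part of this repo – it stands in for [udpsend] here):

// send a few OSC test commands to the drone server on port 4000
// assumes: npm install node-osc
var osc = require('node-osc');
var client = new osc.Client('127.0.0.1', 4000);

client.send('/takeoff');
setTimeout(function () { client.send('/up', 0.5); }, 1000);
setTimeout(function () { client.send('/land'); }, 6000);
// stop with ctrl-c - the UDP socket keeps the process alive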

More notes on video…

You can capture the video stream into Max either by grabbing the Chrome window with Jitter, or by using Syphon – but for demo purposes I have just run the Chrome window side by side with the Max control patch.

See this post for setting up Syphon in Max: https://reactivemusic.net/?p=8662

running separate server and control computers

You may find it more practical to run the node.js server on a separate computer. If you do that, you will need to:

  • modify the dronestream script app-tz.js to insert the proper IP address in the server.listen() call – which should be the last line of the program (see the sketch after this list). You will also need to use that address as the URL in Chrome, for example: 192.168.1.140:5555
  • include the controller IP address on the command line, as shown below
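
The edit to app-tz.js would look roughly like this (a sketch – the exact variable name depends on how the script creates its server; 5555 is the video port used above):

// last line of app-tz.js: bind to the LAN address instead of localhost,
// so other machines can reach the video server on port 5555
server.listen(5555, '192.168.1.140');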

When testing this I set up a dual IP address on my MacBook with a static IP, 192.168.1.140, so that machine would always be the server. I ended up getting rid of it because it caused problems with other software.

Here is a link to how to set up a dual IP address: https://reactivemusic.net/?p=6628

Here is how to specify a separate IP address when launching the server. For example, if your Max control program is on 192.168.1.104 and you want to run in outdoor mode, use this command:

node drone5.js 192.168.1.104 TRUE

 

program notes

These students are just about to send the quadcopter into the air using control panels developed in Max. Ali’s control panel uses speech via the Google API. Her computer is connected to the internet via Wi-Fi and also connected to Chase’s computer via a MIDI/USB link. Her voice commands get translated into MIDI. Chase’s control panel reads the commands. Chase’s computer is on the same Wi-Fi network as the quadcopter. His control panel sends commands to my computer, which is running Max and the ar-drone software in node.js. Occasionally this all works. But there is nobody to hold a camera.

We’re now running two node servers, one for Max and one for web video streaming – which can be accessed by other computers connected to the same LAN as the AR-drone.

We did have a mishap where Chase’s control panel sent an “/up” command to the quadcopter. Then his MacBook battery died as the quadcopter was rising into the sky. I managed to rewrite the server program, giving it a /land command – then restarted it. It was able to re-establish communication with the quadcopter and make it land.

Unfortunately we did not get video of this experiment but here are a few seconds of video showing the quadcopter taking off and landing under control of Max – while indoors.

Max and AR-drone update

update 6/2014. This project is now part of the Internet sensors projects: https://reactivemusic.net/?p=5859

original post

Have now got Max controlling the AR-drone and receiving telemetry for altitude, battery, throttle, etc.

  • tkzic/internetsensors/drone-test-max2.maxpat
  • tkzic/internetsensors/drone-test3.js (node) – requires the ar-drone node module

video feed using dronestream:
  • tkzic/ar-drone/node-dronestream-master/example/express/app.js (node)

To access video, open localhost:3000 in a Chrome browser.

Finally got this working – the big issue was WebGL. I was running in Safari, but it needs to run in Chrome. So I got the Express version of the dronestream example working.

Also look at the new Vizzie abstractions in:

Ended up needing a bunch of npm installs, but I’m not sure which ones now because I wasted so much time dealing with WebGL.

Requires: dronestream, ws, buffy, express, broadway, jade…

I think we can use the [jit.desktop] object to get the video from the browser into Max. Maybe [jit.render], [jit.gl.node], or something similar will work for connecting the video to Vizzie.

 

controlling Parrot AR drone2 with Max

5/2014: see latest version here: https://reactivemusic.net/?p=6635

update – Got the drone today and ran a successful test of takeoff, rotate, and land. The next thing to check out is how to get the drone on an existing Wi-Fi network…

http://nodecopter.com/guides/connect_to_access_point

Working with a WPA network requires installing a patch on the drone: https://github.com/daraosn/ardrone-wpa2

One workaround for school networks would be to use two computers – one for the internet, one for the drone – and then connect them with MIDI or something.

This is a work in progress. I’m going to use the node.js code from the Irish train project for the OSC communication with Max.

First, install ar-drone

$ npm install ar-drone

Here’s the generic Max <=> node.js code using OSC. It handles OSC commands bidirectionally.

  • internetsensors/generic-node-OSC.js
  • internetsensors/generic-node-OSC.maxpat
The next step is to plug in the basic drone commands, like takeoff and land, from the example code – roughly as in the sketch below.
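
The wiring would look something like this – an untested sketch, assuming the node-osc package for receiving (the generic code above uses its own OSC library, which may differ). The ar-drone calls (createClient, takeoff, land, up) come from the node-ar-drone readme:

// drone-osc-sketch.js: route incoming OSC messages to drone commands
// assumes: npm install ar-drone node-osc
var arDrone = require('ar-drone');
var osc = require('node-osc');

var drone = arDrone.createClient();
var server = new osc.Server(4000, '0.0.0.0');

server.on('message', function (msg) {
  // msg is an array: [address, arg1, arg2, ...]
  switch (msg[0]) {
    case '/takeoff': drone.takeoff(); break;
    case '/land':    drone.land();    break;
    case '/up':      drone.up(msg[1]); break;
    default: console.log('unhandled:', msg);
  }
});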

Here’s the initial drone testing code

  • internetsensors/drone-test1.js
  • internetsensors/drone-test-max1.maxpat

EchoNest segment analysis player in Max

The Echonest API provides sample-level audio analysis.

http://developer.echonest.com/docs/v4/_static/AnalyzeDocumentation.pdf

What if you used that data to reconstruct music by driving a sequencer in Max? The analysis is a series of time-based quanta called segments. Each segment provides information about timing, timbre, and pitch – roughly corresponding to rhythm, harmony, and melody (see the example segment below).
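
Roughly, each segment in the analysis JSON looks like this (abridged and with illustrative values – see the Analyze documentation above for the full field list):

{
    "start": 0.28,
    "duration": 0.32,
    "confidence": 0.84,
    "loudness_start": -23.1,
    "loudness_max": -11.5,
    "loudness_max_time": 0.05,
    "pitches": [0.1, 0.9, 0.2, ...],
    "timbre": [42.3, -11.8, 5.2, ...]
}

pitches is a 12-element chroma vector (one value per pitch class) and timbre is a set of 12 spectral coefficients; the loudness values are in dB.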

download

https://github.com/tkzic/internet-sensors

folder: echo-nest

files

main Max patch
  • echonest-synth4.maxpat
abstractions and other files
  • polyvoice-sine.maxpat
  • polyvoice2.maxpat
ruby server
  • echonest-synth2.rb

authentication

You will need to sign up for a developer account at The Echo Nest, and get an API key. https://developer.echonest.com

Edit the ruby server file echonest-synth2.rb, replacing the API key with your new key from The Echo Nest.

 

installing ruby gems

Install the following ruby gems (from the terminal):

gem install patron

gem install osc-ruby

gem install json

gem install uri

instructions

1. In Terminal run the ruby server:

./echonest-synth2.rb

2. Open the Max patch: echonest-synth4.maxpat and turn on the audio.

3. Enter an artist and song title for analysis in the text boxes. Then press the green buttons for title and artist. Then press the /analyze button. If it works you will get prompts in the terminal window and the Max window, and you should see the time in seconds in the upper right corner of the patch.

If there are problems with the analysis, it’s most likely due to one of the following:

  • artist or title spelled incorrectly
  • song is not available
  • song is too long
  • API is busy
If the ruby server hangs or crashes, just restart it and try again.

4. Press one of the preset buttons to turn on the tracks.

5. Now you can play the track by pressing the /play button.

The mixer channels, from left to right, are:

  • bass
  • synth (left)
  • synth (right)
  • random octave synth
  • timbre synth
  • master volume
  • gain trim
  • HPF cutoff frequency
You can also adjust the reverb decay time and the playback rate. Normal playback rate is 1.

programming notes

Best results happen with slow, abstract material, like the Miles (Wayne Shorter) piece above. The bass is not really happening – the lines all sound pretty much the same. I’m thinking it might be possible to derive a bass line from the pitch data by doing a chordal analysis of the analysis.

Here are screenshots of the Max sub-patches (the main screen is in the video above)

Timbre (percussion synth) – plays filtered noise:

Random octave synth:

Here’s a Coltrane piece, using roughly the same configuration but with sine oscillators for everything:

There are issues with clicks on the envelopes and the patch is kind of a mess but it plays!

Several modules respond to the API data:

  • tone synthesizer (pitch data)
  • harmonic (random octave) synthesizer (pitch data)
  • filtered noise (timbre data)
  • bass synthesizer (key and mode data)
  • envelope generator (loudness data)

Since the key/mode data is global for the track, bass notes are probable guesses (sketched below). This method doesn’t work for material with strong root motion or a variety of harmonic content. It’s essentially the same approach I use when asked to play bass at an open mic night.
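
For illustration, the guess might look something like this (a hypothetical sketch – the ruby server’s actual logic may differ; in the analysis data, key is a pitch class 0-11 and mode is 0 = minor, 1 = major):

// pick likely bass pitch classes from the track's global key/mode
function bassCandidates(key, mode) {
  // root, fifth, fourth - plus the major or minor third
  var intervals = mode === 1 ? [0, 7, 5, 4] : [0, 7, 5, 3];
  return intervals.map(function (i) { return (key + i) % 12; });
}

// e.g. key = 2 (D), mode = 1 (major) -> [2, 9, 7, 6] (D, A, G, F#)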

The envelopes click at times – it may be due to the relaxed method of timing, i.e., none at all. If the clicks don’t go away when timing is corrected, this might get cleaned up by adding a few milliseconds to the release time – or by looking ahead to make sure the edges of segments are lining up.

[update] Using the Max [poly~] object cleared up the clicking and distortion issues.

Timbre data drives a random noise filter machine. I just patched something together and it sounded responsive – but it’s kind of hissy – an LPF might make it less interesting.

Haven’t used any of the beat, tatum, or section data yet. The section data should be useful for quashing monotony.

another update – 4/2013

Tried to write this into a Max for Live device – so that the pitch data would be played by a MIDI (software) instrument. No go. The velocity data gets interpreted in mysterious ways – plus each instrument has its own envelope, which interferes with the segment envelopes. Need to think this through. One idea would be to write a device which uses EN analysis data for beats to set warp markers in Live. It would be an amazing auto-warp function for any song. Analysis wars: Berlin vs. Somerville.

echonest API wind experiment

Since I had been thinking a lot about how to use data streams to compose music, it seemed like it would be cool to reverse the process – to use music as a data stream to control something.

The Echonest API http://developer.echonest.com analyzes audio content at the sample level. I requested an analysis of ‘Free Bird’ by Lynyrd Skynyrd, thinking that this song’s dynamic variation might sound like a wind event. Segment loudness data was connected to wind speed in the simulation, and you could hear, as the song progressed, how the wind matched the song’s dynamic levels. But there wasn’t much variation – mostly a steady build from very low to very high. I tried connecting other analysis data to wind speed and found that the segment confidence values sounded like a ‘gusty’ kind of wind. Not wanting to lose the excitement of the song, the loudness peaks now trigger thunder – using the pd threshold object – with random timing to thin it out a bit. Also, there’s no thunder until the loudness values stay above the threshold for several seconds (a sketch of this mapping follows).
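
The mapping could be sketched like this (hypothetical names and thresholds – the actual script differs, and the threshold logic lives in the pd patch):

// map one analysis segment to wind/thunder control values
var THRESHOLD_DB = -12;   // loudness level that can trigger thunder
var HOLD_SECONDS = 4;     // loudness must stay above threshold this long
var aboveSince = null;

function mapSegment(seg, now) {
  // confidence (0..1) drives gust speed; loudness drives thunder
  var windSpeed = seg.confidence * 30;   // scale to 0..30 (arbitrary units)

  var thunder = false;
  if (seg.loudness_max > THRESHOLD_DB) {
    if (aboveSince === null) aboveSince = now;
    // random timing thins out the thunder hits
    thunder = (now - aboveSince > HOLD_SECONDS) && Math.random() < 0.2;
  } else {
    aboveSince = null;
  }
  return { windSpeed: windSpeed, thunder: thunder };
}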

API data is being served via OSC from a ruby script. For this first test I ran the analysis using curl and saved it to a file. I was thinking of using this ruby gem for the API calls:

https://github.com/youpy/ruby-echonest

But I will first try hooking them up using Patron (curl), because it offers more flexibility to get at all aspects of the API.

This project is not included in internet-sensors repo yet.

Local files (in tkzic/internetsensors/):

  • wind-echonest.rb
  • echonest.txt (curl examples)
  • enfb-analysis.json (analysis data)
  • wind-echonest1.rb

 

 

OpenWeatherMap API queries

edit 4/27/2015

The curl examples for this API are broken – the API now requires a key: http://openweathermap.org/api (see the example below).
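
For reference, a keyed request would look roughly like this (hypothetical – using the appid parameter described in the API docs; the current API version also differs from the 2.1 URLs below):

curl "http://api.openweathermap.org/data/2.5/forecast?q=santa%20cruz,US&appid=YOUR_API_KEY"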

notes

I’m using this API now instead of the CORDC wind forecast data for the internet-sensors wind example. You can get 7-day forecast data for practically anywhere. Historical data is also available. And it’s free.

Get forecast by city id

Weather forecast in the city for the next 7 days.

http://api.openweathermap.org/data/2.1/forecast/city/{CITY_ID}

http://api.openweathermap.org/data/2.1/forecast/city/524901

{"message":"","cod":"200","calctime":0.0189,"list":[
{"dt":1345251600,
    "main":{"temp":286.6,"humidity":98,"pressure":1002,"temp_min":286,"temp_max":287},
    "wind":{"speed":0,"deg":-2},
    "rain":{"3h":2},
    "clouds":{"all":56},
    "weather":{"id":803,"main":"Clouds","description":"broken clouds", "img":"..." }
....

This gets the city code for Santa Cruz – 5393052:

curl http://api.openweathermap.org/data/2.1/find/name?q=santa%20cruz,US

This gets 7 days of Santa Cruz forecasts with timestamps:

curl http://api.openweathermap.org/data/2.1/forecast/city/5393052

typical JSON response (for one datapoint):

{
		"dt": 1364176800,
		"main": {
			"temp": 285.54,
			"temp_min": 282.6,
			"temp_max": 287.5,
			"pressure": 1015.99,
			"humidity": 87.6,
			"temp_kf": 2.94
		},
		"weather": [{
			"id": 801,
			"main": "Clouds",
			"description": "few clouds",
			"icon": "02n"
		}],
		"clouds": {
			"all": 17,
			"low": 0,
			"middle": 0,
			"high": 17
		},
		"wind": {
			"speed": 4.29,
			"deg": 311,
			"gust": 5.1
		},
		"dt_txt": "2013-03-25 02:00:00"
	}

So… to get precipitation, we just need to look for the “rain” or “snow” indicator, as in the sketch below.
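
A minimal check against one datapoint like the example above (the field names come from the JSON samples; the 3h sub-key holds the accumulation):

// detect precipitation in a single forecast datapoint
function hasPrecipitation(point) {
  // "rain" or "snow" appear as objects like {"3h": 2} when present
  return Boolean(point.rain || point.snow);
}

// usage: hasPrecipitation({ rain: { "3h": 2 } }) -> true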

 

domain ping machine in Web Audio

A ‘mini’ version of the Google domain ping synthesizer from the internet-sensors collection (using the Mashape API). This one runs in Web Audio, using the Web Audio Playground (WAP) with OSC.

It looks like a card game. Anyway, it sounds cool. It doesn’t have the panning of the original, but it has an organic sound, due to portamento in the frequency changes, and more ‘beating’. Here’s a short excerpt.

Another example of Max controlling WAP https://reactivemusic.net/?p=6193

download

https://github.com/tkzic/WebAudio

folder is: WebAudio/osctest/

files

  • wapOSCserver-ping.rb
  • wapPingTest.maxpat
  • WAP patch: ping2 (5 oscillators -> 5 gains -> 1 master gain) – ping2.json (see the sketch after this list)
  • Web Page: WebAudio/index.html
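
For reference, here is roughly what the ping2 patch wires up, expressed directly in the Web Audio API – a sketch, not the actual WAP patch. The setTargetAtTime calls produce the portamento mentioned above:

// 5 oscillators -> 5 gains -> 1 master gain, with portamento
var ctx = new AudioContext();
var master = ctx.createGain();
master.gain.value = 0.5;
master.connect(ctx.destination);

var voices = [];
for (var i = 0; i < 5; i++) {
  var osc = ctx.createOscillator();
  var gain = ctx.createGain();
  osc.connect(gain);
  gain.connect(master);
  osc.start();
  voices.push({ osc: osc, gain: gain });
}

// glide a voice to a new frequency/level (heard as portamento)
function setVoice(i, freq, level) {
  voices[i].osc.frequency.setTargetAtTime(freq, ctx.currentTime, 0.1);
  voices[i].gain.gain.setTargetAtTime(level, ctx.currentTime, 0.1);
}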

instructions

update: you can run an online version of the WAP web client at http://zerokidz.com/wap/index.html – if you load this page, skip to step 3.

1. Run the node webserver in the WebAudio folder:

node nodeserver.js

(it will run on localhost port 8081 – for example http://127.0.0.1:8081)

2. In the Chrome web browser, open: 127.0.0.1:8081/index.html

3. From a terminal window, go to the osctest/ folder and start the server by typing:

./wapOSCserver-ping.rb

4. Load the Max patch:

wapPingTest.maxpat

5. In Chrome, click the OSC button – the ruby server should open a socket connection

6. Also in Chrome, load the patch ping2 (note that there is a JSON copy of this patch, ping2.json, which can be pasted in if it doesn’t show up in the menu)

7. In WAP, click the square buttons on the 5 oscillators to start them playing. You should hear sounds at this point.

8. Now, back in the Max patch, click the green toggle to start polling. You will probably want to increase the polling rate to about 50 ms instead of 1000 ms.

suggestions
  • If it doesn’t seem like there is much action in the patch, try adjusting the FREQ_MULT and GAIN_MULT inside the ruby script.
  • You will probably also want to open the developer JavaScript console in Chrome to see what is going on.