By Marty Quinn
A NASA sonification project.
By Anton Schertenleib & Robert Candey
“Sonic Interaction Design (SID) is the study and exploitation of sound as one of the principal channels conveying information, meaning, and aesthetic/emotional qualities in interactive contexts.”
By Davide Rocchesso
http://www.cost.eu/media/publications/11-32-Explorations-in-Sonic-Interaction-Design
Edited by Thomas Hermann, Andy Hunt, John G. Neuhoff
PDF version available for download…
http://sonification.de/handbook/
Early sonification device.
By Dr. Edmund Fournier d’Albe
http://en.wikisource.org/wiki/1922_Encyclop%C3%A6dia_Britannica/Optophone
http://www.probertencyclopaedia.com/cgi-bin/res.pl?keyword=Optophone&offset=0
The Echo Nest API provides sample-level audio analysis.
http://developer.echonest.com/docs/v4/_static/AnalyzeDocumentation.pdf
What if you used that data to reconstruct music by driving a sequencer in Max? The analysis is a series of time-based quanta called segments. Each segment provides information about timing, timbre, and pitch – roughly corresponding to rhythm, harmony, and melody.
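For a feel of the data, here’s a minimal Ruby sketch that walks the segments in a saved analysis file and prints a rough note estimate per segment. The field names come from the Analyze documentation linked above; the filename is hypothetical.

require 'json'

# Load a saved Echo Nest analysis (filename is hypothetical --
# fetch and save the analyze JSON first).
analysis = JSON.parse(File.read('analysis.json'))

analysis['segments'].each do |seg|
  pitches = seg['pitches']                # 12 chroma values, 0.0-1.0 (C..B)
  note    = pitches.index(pitches.max)    # strongest chroma bin
  puts format('%8.3fs dur %5.3fs note %2d loud %6.2f dB',
              seg['start'], seg['duration'], note, seg['loudness_max'])
end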
https://github.com/tkzic/internet-sensors
folder: echo-nest
You will need to sign up for a developer account at The Echo Nest, and get an API key. https://developer.echonest.com
Edit the Ruby server file echonest-synth2.rb, replacing the API key with your new key from The Echo Nest.
Install the following ruby gems (from the terminal):
gem install patron
gem install osc-ruby
gem install json
gem install uri
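To sanity-check the gems before starting the server, a two-line osc-ruby test works – the port is an assumption, so match it to the [udpreceive] object in the Max patch:

require 'osc-ruby'

# Send a test OSC message to Max (port 7400 is an assumption).
client = OSC::Client.new('localhost', 7400)
client.send(OSC::Message.new('/test', 440.0))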
1. In Terminal run the ruby server:
./echonest-synth2.rb
2. Open the Max patch: echonest-synth4.maxpat and turn on the audio.
3. Enter an artist and song title for analysis in the text boxes. Then press the green buttons for title and artist. Then press the /analyze button. If it works you will get prompts from the terminal window and the Max window, and you should see the time in seconds in the upper right corner of the patch. (The request behind /analyze is sketched after this list.)
If there are problems with the analysis, it’s most likely due to one of the following:
4. Press one of the preset buttons to turn on the tracks.
5. Now you can play the track by pressing the /play button.
The mixer channels, from left to right, are:
Best results come from slow, abstract material, like the Miles (Wayne Shorter) piece above. The bass is not really happening – the lines all sound pretty much the same. I’m thinking it might be possible to derive a bass line from the pitch data by doing a chordal analysis of the analysis.
Here are screenshots of the Max sub-patches (the main screen is in the video above)
Timbre (percussion synth) – plays filtered noise:
Random octave synth:
Here’s a Coltrane piece, using roughly the same configuration but with sine oscillators for everything:
There are issues with clicks on the envelopes and the patch is kind of a mess but it plays!
Several modules respond to the API data:
Since the key/mode data is global for the track, bass notes are probable guesses. This method doesn’t work for material with strong root motion or a variety of harmonic content. It’s essentially the same approach I use when asked to play bass at an open mic night.
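The guessing logic is simple enough to sketch – build a scale from the global key/mode and weight the root and fifth heavily. This is a reconstruction of the idea, not the actual patch:

# Guess a bass note from the track's global key/mode.
# key: 0-11 (C..B), mode: 1 = major, 0 = minor (Echo Nest conventions).
MAJOR = [0, 2, 4, 5, 7, 9, 11]
MINOR = [0, 2, 3, 5, 7, 8, 10]

def bass_note(key, mode)
  scale = (mode == 1 ? MAJOR : MINOR)
  # mostly roots and fifths, occasionally another scale tone
  interval = [0, 0, 0, 7, 7, scale.sample].sample
  (key + interval) % 12
end

puts Array.new(8) { bass_note(9, 0) }.inspect  # 8 guesses in A minor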
The envelopes click at times – it may be due to the relaxed method of timing, i.e., none at all. If they don’t go away when timing is corrected, this might get cleaned up by adding a few milliseconds to the release time – or looking ahead to make sure the edges of segments are lining up (sketched after the update below).
[update] Using the Max [poly~] object cleared up the clicking and distortion issues.
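The look-ahead check is just arithmetic: each segment’s start plus its duration should land on the next segment’s start, and any gap or overlap is a place an envelope can click. A quick sketch over the parsed analysis:

require 'json'

segments = JSON.parse(File.read('analysis.json'))['segments']

# Flag boundaries that don't line up (potential click points).
segments.each_cons(2) do |a, b|
  gap = b['start'] - (a['start'] + a['duration'])
  puts "#{(gap * 1000).round(2)} ms gap at #{b['start']}s" if gap.abs > 0.001
end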
Timbre data drives a random noise filter machine. I just patched something together and it sounded responsive – but it’s kind of hissy – an LPF might make it less interesting.
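The mapping itself is crude: a couple of timbre coefficients scaled into filter parameters. The Analyze docs say the first coefficient tracks loudness and the second brightness; the scaling constants below are made up, just to land in audible ranges:

# Map a segment's timbre vector to noise-filter parameters.
# timbre[0] ~ loudness, timbre[1] ~ brightness (per the Analyze docs);
# the scaling is arbitrary.
def filter_params(timbre)
  { cutoff: 200 + timbre[1].abs * 40,         # Hz: brighter -> higher cutoff
    gain:   [timbre[0].abs / 60.0, 1.0].min } # normalized 0..1
end

p filter_params([48.2, 12.1, -5.0])  # => {:cutoff=>684.0, :gain=>0.80...}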
Haven’t used any of the beat, tatum, or section data yet. The section data should be useful for quashing monotony.
Another update – 4/2013
Tried to write this into a Max for Live device – so that the pitch data would be played by a MIDI (software) instrument. No go. The velocity data gets interpreted in mysterious ways – plus each instrument has its own envelope, which interferes with the segment envelopes. Need to think this through. One idea would be to write a device which uses Echo Nest analysis data for beats to set warp markers in Live. It would be an amazing auto-warp function for any song. Analysis wars: Berlin vs. Somerville.
The curl examples below for this API are broken – the API now requires a key: http://openweathermap.org/api
I’m using this API now instead of the CORDC wind forecast data for the internet-sensors wind example. You can get 7-day forecast data for practically anywhere. Historical data is also available. And it’s free.
Weather forecast in the city for the next 7 days.
http://api.openweathermap.org/data/2.1/forecast/city/{CITY_ID}
http://api.openweathermap.org/data/2.1/forecast/city/524901
{"message":"","cod":"200","calctime":0.0189,"list":[ {"dt":1345251600, "main":{"temp":286.6,"humidity":98,"pressure":1002,"temp_min":286,"temp_max":287}, "wind":{"speed":0,"deg":-2}, "rain":{"3h":2}, "clouds":{"all":56}, "weather":{"id":803,"main":"Clouds","description":"broken clouds", "img":"..." } ....
This gets the city code for Santa Cruz – 5393052:
curl "http://api.openweathermap.org/data/2.1/find/name?q=santa%20cruz,US"
This gets 7 days of Santa Cruz forecasts with timestamps:
curl http://api.openweathermap.org/data/2.1/forecast/city/5393052
Typical JSON response (for one datapoint):
{ "dt": 1364176800, "main": { "temp": 285.54, "temp_min": 282.6, "temp_max": 287.5, "pressure": 1015.99, "humidity": 87.6, "temp_kf": 2.94 }, "weather": [{ "id": 801, "main": "Clouds", "description": "few clouds", "icon": "02n" }], "clouds": { "all": 17, "low": 0, "middle": 0, "high": 17 }, "wind": { "speed": 4.29, "deg": 311, "gust": 5.1 }, "dt_txt": "2013-03-25 02:00:00" }
So… to get precipitation, we just need to look for a “rain” or “snow” key.
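In Ruby that check is one line per datapoint. A sketch using the old keyless 2.1 endpoint from the examples above – newer versions of the API require an appid key, per the note at the top:

require 'json'
require 'open-uri'

# Fetch the Santa Cruz forecast and report precipitation per datapoint.
url = 'http://api.openweathermap.org/data/2.1/forecast/city/5393052'
data = JSON.parse(open(url).read)

data['list'].each do |point|
  precip = point['rain'] || point['snow']  # present only when precipitating
  puts "#{point['dt_txt']}: #{precip ? precip.inspect : 'dry'}"
end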