Google Maps in Max

Draw points on a Google Map from Max by sending latitude and longitude to a Web client via OSC and WebSockets.

Uses Ruby, WebSockets, Chrome, the Google Maps API, OSC, Max, jQuery, and Node.js… But the Max patch is actually quite simple.

Based on this geocoding tutorial:  http://www.sitepoint.com/google-maps-api-jquery/

download

https://github.com/tkzic/internet-sensors

folder: google-maps

files

main Max patch
  • googlemaptest.maxpat
html and javascript for Google API
  • js/ (folder containing javascript code for map client)
  • markers.html (web client)
ruby
  • mapserver.rb (OSC and WebSockets server)
node.js (optional)
  • nodeserver.js (local node.js webserver)

running node.js local web server (optional)

To run the project locally, you will either need to install node.js or have a local web server. The instructions assume that you have installed node, as well as the http package.

If you don’t want to bother with node, there is also an online version of the web client running at  http://zerokidz.com/gmap/markers/markers.html

installing ruby gems

Requires Ruby 2.0, as well as the following gems:

  • osc-ruby
  • em-websocket
  • json
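
They can be installed from the terminal, for example:

# sudo gem install osc-ruby em-websocket json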

instructions

1. If you are using the online Web client, go to this URL in a Google Chrome browser: http://zerokidz.com/gmap/markers/markers.html then skip to step 4.

2. In a terminal window start the node webserver

node nodeserver

3. Launch a Google Chrome web browser and type in this URL

127.0.0.1:8081/markers.html

4. In another terminal window, start the ruby server for OSC and WebSockets

ruby mapserver.rb

5. Now in the web client (Chrome), press the “OSC” button underneath the map to open the WebSocket connection with the ruby server.

6. Open the Max patch:

googlemaptest.maxpat

7. Now you should be able to click on the message boxes for Bethel and Rumford in the Max patch to add location markers to the map in the browser.
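
For reference, here is a rough sketch of how an OSC-to-WebSocket relay like mapserver.rb can work, using the osc-ruby and em-websocket gems listed above. The OSC address and both port numbers are illustrative – check mapserver.rb for the actual values.

require 'osc-ruby'
require 'em-websocket'
require 'json'

sockets = []

# listen for marker messages from Max: /marker <latitude> <longitude>
osc = OSC::Server.new(3333)                                # OSC port is illustrative
osc.add_method '/marker' do |message|
  lat, lng = message.to_a
  payload = { lat: lat, lng: lng }.to_json
  EM.schedule { sockets.each { |ws| ws.send(payload) } }   # forward to every connected web client
end
Thread.new { osc.run }

# the web client (markers.html) connects here when you press the OSC button
EM.run do
  EM::WebSocket.run(host: '0.0.0.0', port: 8080) do |ws|   # WebSocket port is illustrative
    ws.onopen  { sockets << ws }
    ws.onclose { sockets.delete(ws) }
  end
end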

 

More conversations with robots in Max

Using Google speech API and Pandorabots API

(updated 1/21/2024)

All of these changes are local – for now.

Replace the path to sox with /opt/homebrew/bin/sox in [p call-google-speech].

Also had to write a new python script to convert XML to JSON. It’s in the subfolder xml2json/xml2json.py

The program came from this link: https://www.geeksforgeeks.org/python-xml-to-json/

Also, inside [p call-pandorabots] the path to this python script must be the explicit full path on your computer. This will vary depending on your python installation.

Also, note that you must install a dependency with pip:

pip install xmltodict

After all that, I was actually able to have a conversation. These bots now seem primitive, but loveable, compared to ChatGPT. Guess it’s time for a new project.

Also, the voice selection for the speech synth is still not connected.

(updated 1/21/2021)

This project is an extension to the speech-to-text project: https://reactivemusic.net/?p=4690 You might want to try running that project first to get the Google speech API running.

features

  • Everything runs in one Max patch
  • menu selection of chat bots and voices (currently disabled)
  • filtering of non speakable text (like HTML tags)
  • python script now runs under current directory of patch using relative path
  • refinements to recording and chatbot engines

download

https://github.com/tkzic/internet-sensors

folder: google-speech

files
main Max patch
  • robot-conversation7.maxpat
abstractions and other files
  • clean-html.js
  • xml2json/xml2json.py
  • JSON-google-speech.js
  • JSON-pandorabot.js
  • ms-counter.maxpat (timer for recording messages)
  • pandorabots.txt
Max external objects
external programs:

sox: the sox audio conversion program must be in the computer’s executable file path, i.e., /usr/bin – or you can rewrite the [sprintf] input to [aka.shell] with the actual path. In our case we installed sox using MacPorts. The executable path is /opt/local/bin/sox – which is built into a message object in the subpatcher [call-google-speech].

get sox from: http://sox.sourceforge.net

instructions

  • Open robot-conversation7.maxpat and turn on audio
  • Select a chatbot as the destination
  • Press the spacebar to start recording.
  • Ask a question.
  • Press the spacebar to stop recording.

notes

Need to fix the selection of voices.

revision history

  • 1/21/2021: complete rewrite for Max8 and Catalina
  • 4/24/2016: need to have explicit path to sox, in the call-google-speech subpatch. In my Macports version the path is /usr/local/opt/bin/sox.
  • 6/6/2014: re-added missing pandorabots.txt (list of chatbots) – also noticed that pandorabots.com was not available. May need to look for another site.
  • 5/11/2014: The newest version requires Max 6.1.7 (for JSON parsing). Also have updated to Google Speech API v2.
  • Note: Instructions for getting a real key from Google – which will need to be inserted into the patch.  http://www.chromium.org/developers/how-tos/api-keys – so far we have been getting by with common keys from a github site (see notes in next link)

Also please see these notes about how to modify the patch with your key – until this gets resolved: https://reactivemusic.net/?p=11035

Max data recorder

This has been around for a while and is used in several internet-sensors projects. It records and plays back lists (this version allows lists where the first element is a symbol), and can record and play back simultaneously at different rates. It was originally adapted from MZ’s CNMAT example. And subsequently ruined.

download

https://github.com/tkzic/internet-sensors

folder: data-recorder

files

  • data-recorder-wrapper.maxpat (main entry point)
  • data_recorder_list-tz.maxpat (recorder abstraction)

There is also an example of multi-rate streaming in: data-recorder-tester.maxpat

 

Max Twitter client using ruby

Send and receive Tweets using Max via OSC to a background ruby server.

An advantage of this method is that both the patch and the server are compact and easy to understand. The Max patch does things in a Max way, and likewise with the ruby scripts.

download

https://github.com/tkzic/internet-sensors

folder: twitter-ruby

files

Max
  • twitter-client.maxpat
ruby
  • twitter-server-send.rb (for sending Tweets)
  • twitter-server-get.rb (for receiving Tweets)
ruby gems

The ruby scripts require installation of the following gems:

  • json
  • osc-ruby
  • twitter

For example:

# sudo gem install twitter


Twitter authorization

In addition to having a Twitter account, you will need to set up a Twitter application from the developer site here:

https://dev.twitter.com/apps

Good instructions on how to do this can be found in this stackoverflow.com post under this heading: So you want to use the Twitter v1.1 API?

http://stackoverflow.com/questions/12916539/simplest-php-example-for-retrieving-user-timeline-with-twitter-api-version-1-1

When you get to step 5 in the instructions – instead of writing your own code, just use a text editor to copy your access tokens into these ruby programs:

  • twitter-server-send.rb
  • twitter-server-get.rb

Replace the strings in this line of code by copying and pasting the appropriate ones from your Twitter application:

twitterClient = Twitter::REST::Client.new do |config|
  config.consumer_key = "mqQtoYh16343tDFG3BK7QQ"       
  config.consumer_secret = "X0KexjlK49fhhrnn9EztapZfATCQqWCc5fXVJH2pE"      
  config.oauth_token = "205589709-5krgh9FR3KkLGRDnewiU7GKKBMA6i2La84c"       
  config.oauth_token_secret = "LNARAeooN2vkklkF006GRdihQ5D8YYkm8dYvEs68M"  
end
Yeah – it’s cryptic, but trivial compared to writing the oauth code. Just a reminder: if even one letter or quote mark, or anything, is out of place, the authorization will fail.
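
For orientation, here is roughly what the send server does with those credentials: it listens for an OSC message from Max and posts the text as a tweet. This is a simplified sketch – the OSC address and port are illustrative, and twitter-server-send.rb has the actual logic.

require 'osc-ruby'
require 'twitter'

twitterClient = Twitter::REST::Client.new do |config|
  config.consumer_key       = "YOUR-CONSUMER-KEY"
  config.consumer_secret    = "YOUR-CONSUMER-SECRET"
  config.oauth_token        = "YOUR-ACCESS-TOKEN"
  config.oauth_token_secret = "YOUR-ACCESS-TOKEN-SECRET"
end

server = OSC::Server.new(7400)                 # port is illustrative
server.add_method '/tweet' do |message|
  text = message.to_a.join(' ')                # the tweet text arrives as OSC arguments
  twitterClient.update(text)                   # post it
  puts "tweeted: #{text}"
end
server.run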

instructions

(note: currently running with ruby version 2.0) Display your ruby version by typing: ruby --version

Sending Tweets
  • Open the Max patch: twitter-client.maxpat
  • In a terminal window run the ruby script:
# ./twitter-server-send.rb

  • In the Max patch, type in a Tweet. Press the green button to send. 
  • When you have tweeted enough, end the ruby server program by typing <ctrl-c>
Receiving Tweets
  • Open the Max patch: twitter-client.maxpat
  • In a terminal window run the ruby script:
# ./twitter-server-get.rb

  • From Twitter, send a Tweet to the user name embedded in the server

Both ruby servers can run at the same time.
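
The receive server works in the opposite direction – it polls Twitter and relays incoming tweet text to Max over OSC. Another simplified sketch (the polling interval, OSC address, and port are illustrative; twitter-server-get.rb has the actual logic):

require 'osc-ruby'
require 'twitter'

twitterClient = Twitter::REST::Client.new do |config|
  config.consumer_key       = "YOUR-CONSUMER-KEY"
  config.consumer_secret    = "YOUR-CONSUMER-SECRET"
  config.oauth_token        = "YOUR-ACCESS-TOKEN"
  config.oauth_token_secret = "YOUR-ACCESS-TOKEN-SECRET"
end

max = OSC::Client.new('localhost', 7401)       # port is illustrative
last_id = nil

loop do
  options = last_id ? { since_id: last_id } : {}
  twitterClient.mentions_timeline(options).reverse_each do |tweet|
    max.send(OSC::Message.new('/tweet', tweet.text))   # forward the text to Max
    last_id = tweet.id
  end
  sleep 30                                     # stay well inside the API rate limits
end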

What’s next?

  • Parse incoming Tweets into various components
  • Combine the 2 Ruby servers

revision history

  • 5/21/2014 – refactored app names. Added receive server
  • 5/19/2014 – moved to twitter-ruby folder
  • 1/18/2014 – minor fixes to ruby server for current ruby version 2.0
  • 9/7/2013 – uses oauth to communicate directly to Twitter from ruby

Send Tweets with a Little Tikes piano

This project uses the Max fzero~ object to detect which key of the piano gets pressed and send a pre-written Tweet like “Signs point to yellow” based on the color of the key.

It works with the Internet sensors project that sends Tweets from Max using Ruby. https://reactivemusic.net/?p=7013

download

https://github.com/tkzic/internet-sensors

folder: twitter-ruby

instructions

  1. Follow instructions here to send Tweets using Max and Ruby: https://reactivemusic.net/?p=7013
  2. At this point you will have a Max patch open and a Ruby server running in a terminal window.
  3. Now open little-tikes.maxpat
  4. Carefully play individual tones on the Little Tikes piano. 

notes

fzero~ is probably not the best choice for this. It doesn’t work above 2500 Hz, which means it probably won’t distinguish between the lowest and highest keys, which are an octave apart. In fact the Little Tikes piano, for a pitched instrument, is difficult to analyze – due to the relatively equal weight of partials to the fundamental, and the quick decay. Other choices would be pitch~ (Jehan) or fiddle~ (Puckette).

I remember seeing an Arduino project where somebody did this in reverse – actually built a motorized striker to play the piano.

… insert link to video here…

 

EchoNest segment analysis player in Max

The Echo Nest API provides sample-level audio analysis.

http://developer.echonest.com/docs/v4/_static/AnalyzeDocumentation.pdf

What if you used that data to reconstruct music by driving a sequencer in Max? The analysis is a series of time based quanta called segments. Each segment provides information about timing, timbre, and pitch – roughly corresponding to rhythm, harmony, and melody.
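
To give a sense of the data involved, here is a minimal sketch of what the ruby server essentially does with an analysis document: walk the list of segments and send each one to Max over OSC. The segment field names follow the Analyzer documentation linked above; the file name, OSC address, and port are illustrative.

require 'json'
require 'osc-ruby'

max = OSC::Client.new('localhost', 7402)            # port is illustrative

analysis = JSON.parse(File.read('analysis.json'))   # the document returned by the analysis URL
analysis['segments'].each do |seg|
  # each segment has timing, loudness, 12 pitch (chroma) values, and 12 timbre values
  max.send(OSC::Message.new('/segment',
    seg['start'], seg['duration'], seg['loudness_max'],
    *seg['pitches'], *seg['timbre']))
end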

download

https://github.com/tkzic/internet-sensors

folder: echo-nest

files

main Max patch
  • echonest-synth4.maxpat
abstractions and other files
  • polyvoice-sine.maxpat
  • polyvoice2.maxpat
ruby server
  • echonest-synth2.rb

authentication

You will need to sign up for a developer account at The Echo Nest, and get an API key. https://developer.echonest.com

Edit the ruby server file: echonest-synth2.rb, replacing the API key with your new key from The Echo Nest.

 

installing ruby gems

Install the following ruby gems (from the terminal):

gem install patron

gem install osc-ruby

gem install json

gem install uri

instructions

1. In Terminal run the ruby server:

./echonest-synth2.rb

2. Open the Max patch: echonest-synth4.maxpat and turn on the audio.

3. Enter an Artist and Song title for analysis in the text boxes. Then press the green buttons for title and artist. Then press the /analyze button. If it works you will get prompts from the terminal window and the Max window, and you should see the time in seconds in the upper right corner of the patch.

If there are problems with the analysis, it’s most likely due to one of the following:

  • artist or title spelled incorrectly
  • song is not available
  • song is too long
  • API is busy
If the ruby server hangs or crashes, just restart it and try again.

4. Press one of the preset buttons to turn on the tracks.

5. Now you can play the track by pressing the /play button.

The Mixer channels from Left to right are:

  • bass
  • synth (left)
  • synth (right)
  • random octave synth
  • timbre synth
  • master volume
  • gain trim
  • HPF cutoff frequency
You can also adjust the reverb decay time and the playback rate. Normal playback rate is 1.

programming notes

Best results happen with slow abstract material, like the Miles (Wayne Shorter) piece above. The bass is not really happening. Lines all sound pretty much the same. I’m thinking it might be possible to derive a bass line from the pitch data by doing a chordal analysis of the analysis.

Here are screenshots of the Max sub-patches (the main screen is in the video above)

Timbre (percussion synth) – plays filtered noise:

Random octave synth:

Here’s a Coltrane piece, using roughly the same configuration but with sine oscillators for everything:

There are issues with clicks on the envelopes and the patch is kind of a mess but it plays!

Several modules respond to the API data:

  • tone synthesizer (pitch data)
  • harmonic (random octave) synthesizer (pitch data)
  • filtered noise (timbre data)
  • bass synthesizer (key and mode data)
  • envelope generator (loudness data)

Since the key/mode data is global for the track, bass notes are probable guesses. This method doesn’t work for material with strong root motion or a variety of harmonic content. It’s essentially the same approach I use when asked to play bass at an open mic night.
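
A purely illustrative sketch of that kind of guess (not the actual patch logic, which lives in Max): given the track-level key and mode, pick the strongest chroma bin in a segment that belongs to the scale and treat it as the bass note.

MAJOR = [0, 2, 4, 5, 7, 9, 11]
MINOR = [0, 2, 3, 5, 7, 8, 10]

# key: 0..11 (C..B), mode: 1 = major, 0 = minor, pitches: 12 chroma values from a segment
def guess_bass_note(key, mode, pitches)
  scale = (mode == 1 ? MAJOR : MINOR).map { |step| (key + step) % 12 }
  in_scale = pitches.each_with_index.select { |_, pc| scale.include?(pc) }
  in_scale.max_by { |strength, _| strength }.last     # pitch class with the strongest chroma
end

puts guess_bass_note(7, 1, [0.1, 0.0, 0.8, 0.1, 0.2, 0.0, 0.0, 0.9, 0.0, 0.3, 0.0, 0.1])   # => 7 (G)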

The envelopes click at times – it may be due to the relaxed method of timing, i.e., none at all. If they don’t go away when timing is corrected, this might get cleaned up by adding a few milliseconds to the release time – or looking ahead to make sure the edges of segments are lining up.

[update] Using the Max [poly~] object cleared up the clicking and distortion issues.

Timbre data drives a random noise filter machine. I just patched something together and it sounded responsive – but it’s kind of hissy – an LPF might make it less interesting.

Haven’t used any of the beat, tatum, or section data yet. The section data should be useful for quashing monotony.

another update – 4/2013

Tried to write this into a Max4Live device – so that the pitch data would be played by a MIDI (software) instrument. No go. The velocity data gets interpreted in mysterious ways – plus each instrument has its own envelope, which interferes with the segment envelopes. Need to think this through. One idea would be to write a device which uses EN analysis data for beats to set warp markers in Live. It would be an amazing auto-warp function for any song. Analysis wars: Berlin vs. Somerville.

domain ping machine in Web Audio

A ‘mini’ version of the Google domain ping synthesizer from the internet-sensors collection (Using the Mashape API). This one runs in Web Audio, using the Web Audio Playground with OSC.

Looks like a card game. Anyway it sounds cool. Doesn’t have the panning of the original, but it has an organic sound due to portamento in frequency changes, and more ‘beating’. Here’s a short excerpt.

Another example of Max controlling WAP https://reactivemusic.net/?p=6193

download

https://github.com/tkzic/WebAudio

folder is: WebAudio/osctest/

files

  • wapOSCserver-ping.rb
  • wapPingTest.maxpat
  • WAP patch: ping2 (5 oscillators -> 5 gains -> 1 master gain) – ping2.json
  • Web Page: WebAudio/index.html

instructions

update: you can run an online version of WAP Web client at http://zerokidz.com/wap/index.html – If you load this page, skip to step 3.

1. run the node webserver in WebAudio

node nodeserver.js

(it will run on localhost port 8081 – for example http://127.0.0.1:8081)

2. In Chrome web browser, run: 127.0.0.1:8081/index.html

3. From a terminal window, go to the osctest/ folder and start the server by typing:

./wapOSCserver-ping.rb

4. Load the Max patch:

wapPingTest.maxpat

5. In Chrome, click the OSC button – the ruby server should open a socket connection

6. Also in Chrome, load the patch: ping2 (note that there is a json copy of this patch ping2.json that can be pasted in, if it doesn’t show up in the menu)

6.5 In WAP, click the square buttons on the 5 oscillators to start them playing. You should hear sounds at this point.

7. Now back in the Max patch – click the green toggle to start polling. You will probably want to increase the polling rate to about 50 ms instead of 1000 ms.

suggestions
  • If it doesn’t seem like there is much action in the patch, try adjusting the FREQ_MULT and GAIN_MULT inside the ruby script (see the sketch after this list).
  • You will probably also want to open the developer javascript console in Chrome to see what is going on.
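
For a sense of what FREQ_MULT and GAIN_MULT do: the server scales each ping time into an oscillator frequency and gain before sending it on to the browser. Conceptually it is something like this – the numbers and formulas here are made up for illustration; the real code is in wapOSCserver-ping.rb.

FREQ_MULT = 2.0      # Hz per millisecond of ping time (illustrative value)
GAIN_MULT = 0.005    # gain units per millisecond (illustrative value)

ping_ms = 48.0                        # measured round-trip time for one domain
freq = 60.0 + ping_ms * FREQ_MULT     # slower responses -> higher pitch
gain = [ping_ms * GAIN_MULT, 1.0].min
puts "freq=#{freq} gain=#{gain}"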

The sound of a new machine

Using internet ping data to control a synthesizer in Max

This project uses ‘ping’ times to about 40 Google domains, like google.ca, google.de, etc., to control pitch and amplitude of a 20 voice droning synthesizer.

Imagine working in a Google control center. A soothing low-pitched drone fills the room. Then suddenly you hear a slowly rising pitch. You check the monitors – Google Paraguay is experiencing network failure. You light a cigarette and wait for things to calm down.

update 2/6/2021

No longer using ruby to ping, due to the API shutting down. The new version uses the Max [shell] external to ping from the command line.
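
For example, a single ping from the command line looks like this:

ping -c 1 google.de

Each reply line ends with a time= value in milliseconds; that round-trip time is what drives the synth (exactly how [p shellping] parses it may differ).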

download

https://github.com/tkzic/internet-sensors

folder: ping

files

main Max patch
  • sound-of-a-new-machine3.maxpat
abstractions and other files

instructions

  • Open the Max patch: sound-of-a-new-machine3.maxpat
  • Turn on audio. Turn up the gain.
  • In the Max patch, click the toggle box to start polling. It may take a minute to hear any sounds, while the oscillators are loading. Increase polling speed to 400 or so if you can’t wait.
  • Another reason you might not hear anything interesting is if the clip threshold is too low. Watch the incoming ping times and set the clip threshold above the average level.
  • Adjust the pitch multiplier to your desired pitch range. It will take some time for all of the oscillators to adjust after a pitch change.
  • If you hear clicks or pops, try reducing the sample rate to 44.1 KHz, or increasing the IO vector size (in Options | Audio Status).

note: Occasionally the server program will time-out when it’s launched. Try launching again, or edit it and increase the timeout value in [p shellping].


deprecated information for previous version using ruby

The server is a ruby script which handles the http: requests using the Mashape ping-uin API and sends messages to Max using OSC

The synth has a weird clustering drone like effect like some kind of alien life force.

The patch design is kind of embarrassing. It’s obvious I forgot how to use [poly~]. Maybe by the time you read this, we’ll have addressed this. Hey, billions of patch cords look cool.

Here’s an example of the Mashape API in curl

curl --include --request GET 'https://igor-zachetly-ping-uin.p.mashape.com/pinguin.php?address=google.ca' \
  --header 'X-Mashape-Authorization: YOUR-MASHAPE-API-KEY'
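
In the ruby server the same request is made with the patron gem and the result is relayed to Max with osc-ruby – roughly like the sketch below. The JSON field name and the OSC address/port are illustrative; domain-ping.rb has the real code.

require 'patron'
require 'json'
require 'osc-ruby'

max  = OSC::Client.new('localhost', 7403)                  # port is illustrative
http = Patron::Session.new
http.base_url = 'https://igor-zachetly-ping-uin.p.mashape.com'
http.headers['X-Mashape-Authorization'] = 'YOUR-MASHAPE-API-KEY'

%w(google.ca google.de google.fr).each do |domain|
  response = http.get("/pinguin.php?address=#{domain}")
  ping = JSON.parse(response.body)['ping']                 # field name is a guess
  max.send(OSC::Message.new('/ping', domain, ping.to_f))
end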

Here’s a list of Google domains

http://en.wikipedia.org/wiki/List_of_Google_domains

download

https://github.com/tkzic/internet-sensors

folder: ping

files

main Max patch
  • sound-of-a-new-machine2.maxpat
abstractions and other files
  • google.txt (list of domains for the [coll] object)
ruby
  • domain-ping.rb

ruby gems

install the following ruby gems using: sudo xcrun gem install <gem-name>

  • osc-ruby
  • patron
  • json

authorization

  • Register with mashape http://mashape.com to get an API-key for ping-uin
  • Then edit domain-ping.rb to enter your mashape API-key.

instructions

  • Open the Max patch: sound-of-a-new-machine2.maxpat
  • Turn on audio. Turn up the gain.
  • From a terminal window type the following command
# ./domain-ping.rb


  •  In the Max patch, click the toggle box to start polling. It may take a minute to hear any sounds, while the oscillators are loading. Increase polling speed to 400 or so if you can’t wait.
  • Another reason you might not hear anything interesting is if the clip threshold is too low. Watch the incoming ping times and set the clip threshold above the average level.
  • Adjust the pitch multiplier to your desired pitch range.
  • When you’ve had enough, type <ctrl-c> in the terminal window to stop the server.

note: Occasionally the server program will time-out when it’s launched. Try launching again, or edit it and increase the timeout value.

 

Internet sensors projects

overview

A series of projects that use Internet APIs for interactive media.

updated 2/14/2021.

Projects have been tested on Max8 and Mac OS Catalina – except where noted. Other dependencies are listed on individual project pages.

My goal is to show a variety of methods to get data to and from Max. APIs come and go, as do the libraries that support them.

download

internet-sensors is on Github at:  https://github.com/tkzic/internet-sensors

Each project is in a separate folder.

authorization

Some projects require passwords and API-keys from providers.

For example, for the ‘Twitter streaming API in Max’ project you’ll need to set up a Twitter application from your account to get authorization credentials.

For projects that need authorization, you’ll usually just need to modify the patches/source code with your user information – as directed in the instructions. The API keys embedded in the code will not work unless specifically mentioned, like with the Google speech API.

help

APIs used in the projects change fairly often, so there’s no guarantee they’ll work. If you find problems or have ideas – please post them to the github repository. Or email me at [email protected].

projects

1. Twitter streaming API in Max (FM, php, curl, geocoding, [aka.speech], Soundflower (optional), Morse code, OSC, data recorder, Twitter v1.1 API, Twitter Apps, Oauth)

https://reactivemusic.net/?p=5786

2. Sending tweets from Max using curl ([sprintf], [aka.shell], xively.com API, zapier.com API, JSON, javascript Twitter v1.1 API, Oauth)

deprecated 2/11/2021 – old project link here: https://reactivemusic.net/?p=5447

3. Send and receive tweets in Max using ruby (ruby, API, JSON, javascript Twitter v1.1 API, OSC, Oauth)

New! – use the project above to send tweets using a Fisher Price “Little Tikes” piano: https://reactivemusic.net/?p=6993

4. Speech to text in Max (Google speech API, JSON, javascript, sox, Twitter v1.1 API, Oauth)

Note: Send Tweets using speech as well.

https://reactivemusic.net/?p=4690

5. A conversation with a robot in Max (Google speech API, sox, JSON, pandorabots API, python, [aka.speech])

https://reactivemusic.net/?p=9834

7. Playing bird calls in Max (xeno-canto API, [jit.uldl], [jit.qt.movie])

https://reactivemusic.net/?p=4225

8. Soundcloud API in Max (node.js)

https://reactivemusic.net/?p=20120

 

9. Real time train map using Max and node.js (XML, JSON, OSC, data recorder, web sockets, Irish Rail API)

https://reactivemusic.net/?p=5477

10. stock market music in Max (OSC, netcat,  php, mysql, html, javascript, Yahoo API, linux)

…updates in progress…

https://reactivemusic.net/?p=12029

11. Using weather forecast data to drive weather sounds in Pure Data (ruby, OSC, JSON, openweathermap API, “Designing Sound” by Andy Farnell)

https://reactivemusic.net/?p=5846

… updates in progress…

12. Using ping times to control oscillators in Max (Mashape ping-uin API, ruby, OSC, JSON)

https://reactivemusic.net/?p=5945

13. Spotify Segment analysis player – sonification of audio analysis data from the Spotify (Echo Nest) API (node, Max/MSP)

https://reactivemusic.net/?p=20096

14. Quadcopter AR_drone – Fly a quadcopter using Max – with streaming Web video. (node.js, AR_drone, Google Chrome, OSC, Max/MSP)

deprecated 2/14/2021 – old project link: https://reactivemusic.net/?p=6635

15. Adding markers to Google Maps in Max – (node.js, ruby, Google Chrome, OSC, Max/MSP, websockets, Google Maps API, jQuery, javascript)

deprecated 2/14/2021 – old project link: https://reactivemusic.net/?p=11412

16. Max data recorder –  Record and play back streams of data simultaneously at various rates

https://reactivemusic.net/?p=8053

17. MBTA bus data in Max –  Sonification of Mass Ave buses, from Harvard to Dudley

… updates in progress…

https://reactivemusic.net/?p=17524


 

Using wind forecast data to generate wind sounds with Pd

This project originally received wind data from the U.S. Coastal Observing Research and Development Center (CORDC) at Scripps Institution of Oceanography. The current version uses the openweathermap.org API (see the revision history below).

http://cordc.ucsd.edu/projects/models/coamps/api/

http://openweathermap.org/API

It uses the data to set the ‘wind speed’ of Andy Farnell’s wind sound patches from “Designing Sound”

http://aspress.co.uk/sd/index.php

Error 4/26/2014

Getting a parsing error on the ruby script. Will be debugging shortly!

/Users/tkzic/.rvm/gems/ruby-2.0.0-p353@global/gems/json-1.8.1/lib/json/common.rb:155:in `initialize': A JSON text must at least contain two octets! (JSON::ParserError)
from /Users/tkzic/.rvm/gems/ruby-2.0.0-p353@global/gems/json-1.8.1/lib/json/common.rb:155:in `new'
from /Users/tkzic/.rvm/gems/ruby-2.0.0-p353@global/gems/json-1.8.1/lib/json/common.rb:155:in `parse'
from ./wind-open-forecast.rb:92:in `<main>'

———————————————–

The http: requests are managed by a ruby server script and sent via OSC to Pure Data.

Two of Andy Farnell’s patches are also running (with very slight mods): wind4a.pd, and thunder4a.pd

From the pd patch you can select which city to use for data (Santa Cruz or San Diego), and control the rate at which the data is replayed from the ruby server. The forecast cycle is about 3 days long.

Note: the wind speed expected by Andy Farnell’s wind patch is between 0 and about 0.7
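
Here is a minimal sketch of that fetch-and-scale step in ruby, assuming the current openweathermap endpoint and response format (wind.speed in meters per second; newer accounts also need an &appid= API key). The OSC address and port are illustrative, and the real script (wind-open-forecast.rb) also handles the forecast playback described above.

require 'json'
require 'net/http'
require 'osc-ruby'

pd = OSC::Client.new('localhost', 9001)       # port is illustrative

url  = URI('http://api.openweathermap.org/data/2.5/weather?q=San%20Diego,US')
data = JSON.parse(Net::HTTP.get(url))
wind_mps = data['wind']['speed']              # meters per second

# scale into the 0 .. 0.7 range expected by Farnell's wind patch (treating ~20 m/s as a practical maximum)
wind = [wind_mps / 20.0 * 0.7, 0.7].min
pd.send(OSC::Message.new('/wind', wind))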

download

https://github.com/tkzic/internet-sensors

folder: pd-weather

files

main Pd patches
  • wind-open-machine.pd
  • thunder4a.pd
  • wind4a.pd
pd abstractions

designingSound/ folder:

Add this folder to the path list in Pd before the system files because the distance.pd abstraction has the same name as a system file used for map distances.

  • distance.pd
  • fcpan.pd
  • strike-pattern.pd
  • strike-sound.pd
  • udly.pd
ruby
  • wind-open-forecast.rb
required gems include:
  • osc-ruby
  • patron
  • json

authorization

  • none required

instructions

1. Open all three pd patches
  • wind-open-machine.pd
  • thunder4a.pd
  • wind4a.pd
2. Turn audio on
3. start the ruby script from a terminal window by typing

./wind-open-forecast.rb
(NOTE: Sometimes the ruby script will time-out when you first launch it. Just run it again.)

4. You can tweak various parameters to scale the effect of the wind data on the sounds

revision history

5/19/2014

currently getting a parsing error in the ruby server (see the error above) – need to rewrite

  • update 3-25-2013: If you are running this project and getting errors from the ruby script – CORDC is not producing wind data – so please use this workaround, which uses the openweathermap.org API