Because I didn't have this blog last year, I couldn't find my notes about git and GitHub repositories.
So I ran into situations where I tried to push committed projects from the local repository up to GitHub – without realizing that I needed to pull any new commits down from GitHub first, using:
# git pull origin master
Merging after the fact just doesn't seem to work well at all. So my advice to self would be: always pull from GitHub before you start working on your code – and even right after creating a new GitHub repository with a README.md file.
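For the record, the sequence that avoids the merge mess looks something like this – plain git at the terminal, assuming the branch is master: pull first, then edit, commit, and push.
# git pull origin master
# git add .
# git commit -m "describe the change"
# git push origin master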
In this version, the Max patch communicates via OSC with a background server written in ruby. An advantage of this method is that both the patch and the server are compact and easy to understand. The Max patch does things in a Max way, and likewise with the ruby script.
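For flavor, here's a minimal sketch of the receiving side, assuming the osc-ruby gem, a /tweet address, and port 7001 (the names and port in the actual script may differ):
require 'osc-ruby'

server = OSC::Server.new(7001)            # port number is an assumption
server.add_method '/tweet' do |message|
  text = message.to_a.first               # the Tweet text sent from Max
  puts "received: #{text}"
  # here the script would hand the text off to xively (see authorization below)
end
server.run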
Here’s a screen shot of the Max patch:
files
Max: ruby-max-tweet.maxpat
ruby: ruby-max-tweet.rb
The ruby script requires installation of the following gems:
patron
osc-ruby
For example:
# gem install patron
authorization
The xively.com feed id and api-key are embedded in the ruby script.
To get this project to work, you'll need a Twitter account. You'll also need to set up a device (feed) at xively.com and a 'zap' at zapier.com, as directed in this post, which explains how to send Tweets using triggers: https://reactivemusic.net/?p=6903
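The handoff to xively looks roughly like this – a minimal sketch using the patron gem, with the feed id, api-key, and datastream name as placeholders rather than the script's actual values:
require 'patron'
require 'json'

FEED_ID = 'your-feed-id'                  # placeholder
API_KEY = 'your-xively-api-key'           # placeholder

sess = Patron::Session.new
sess.base_url = 'https://api.xively.com'
sess.headers['X-ApiKey'] = API_KEY

# update the feed's 'tweet' datastream with the text received from Max
body = { version: '1.0.0',
         datastreams: [ { id: 'tweet', current_value: 'hello world' } ] }.to_json
sess.put("/v2/feeds/#{FEED_ID}.json", body)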
instructions
Open the Max patch: ruby-max-tweet
In a terminal window run the ruby script:
# ./ruby-max-tweet.rb
In the Max patch, type in a tweet. Press the green button to send.
When you have Tweeted enough, end the ruby server program by typing <ctrl-c>
download
The files for this project can be downloaded from the internet-sensors archive at GitHub.
ipadMidiOsc
-----------
March 4, 2013
version 1.0
This program is a simulator to test Midi and Osc communication in iOS. There is a companion Max/MSP patch in the archive (oscmiditest3.maxpat). The Max patch lets you control the user interface on the iPad, and it will display incoming messages from the iPad.
I have only tested the default iOS midi networking devices via Mac OS, and an iRig Midi interface.
This is the only documentation right now - but there are big plans, yeah, for a programming guide, and a free app store app, along the lines of audioGraph.
I wanted to get this initial version out before the spacecraft lands in the backyard.
Acknowledgements:
The Midi code was derived from PGMidi by Pete Goodliffe
The Osc code was derived from OscPack by Ross Bencina
Thank you.
Tom Zicarelli
[email protected]
Notes:
Local Project files are in: tkzic/oscapps/ipadmiditest4
I made the update described here for iOS 6 compatibility:
Today I'm attempting to update audiograph to run under the current iOS/Xcode releases. Here are some helpful solutions for resolving compilation errors and warnings…
(update) Had problems with git, because I forgot to pull Michael Tyson's changes down from GitHub before I changed the files locally and committed them.
Ended up doing a wholesale copy and a lot of duplicated effort – anyway, it seems to work now.
Very important: The current local version of audiograph is in tkzic/coreaudio/audiograph
I have also submitted a new version 1.1 to the App Store, but it's in the same folder I just mentioned.
runtime view controller error:
'A view can only be associated with at most one view controller at a time!'
The only really required thing to do is to add a launch image named "Default-568h@2x.png" to the app resources, and in the general case (if you're lucky enough) the app will work correctly.
If the app does not handle touch events, make sure that the key window has the proper size. The workaround is to set the proper frame: self.window.frame = [[UIScreen mainScreen] bounds];
This program is broken on Mac OS Monterey. The PHP code is throwing errors in the OSC library. I'm not certain there is a reasonable workaround at this point, and will be looking at replacing the php code with node.js or another more reliable platform. Also, as noted below, php is no longer installed in Mac OS, so it requires homebrew or macports.
features
Twitter streaming v1.1 API and Twitter Apps (using http requests and OAuth in php)
lat/lon conversion and map plotting in Max
sending data to Max using OSC in php (see the sketch after the instructions below)
‘speaking’ tweets using several voices (text to speech)
Using geo-coordinates to control an FM synthesizer
Converting Tweet text to Morse code
Using a data recorder to replay/save data streams (Max lists)
Compare to a satellite photo of earth – note the pattern of lights.
1. When you get to step 5 in the instructions, instead of writing your own code, just use a text editor to copy your access tokens into this php program, which is provided:
ctwitter_max3.php
Replace the strings in the line of code that holds the tokens by copying and pasting the appropriate ones from your Twitter application.
2. In a terminal window, run the php program: ctwitter_max3.php. [note] It runs forever; press <ctrl-c> when you want to stop streaming Tweets.
php ./ctwitter_max3.php
3. Switch back to world3.maxpat to see dots populating the map
4. In Max, press the speaker icon (lower left) to turn on audio.
5. Activate voice synth/morse code using the blue toggle (lower left)
6. Clear the map by pressing the blue message box: “clear, drawpict a 0 0”
7. Stop the Tweet stream by pressing <ctrl-c> in the terminal window
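Under the hood, the php program pushes each Tweet to Max as an OSC message. For illustration only – the project's server is php – the equivalent send in ruby looks something like this, with the port and address assumed:
require 'osc-ruby'

client = OSC::Client.new('localhost', 7400)   # port number is an assumption
# send latitude, longitude, and the Tweet text to the Max patch
client.send(OSC::Message.new('/tweet', 42.35, -71.06, 'hello from Boston'))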
special voice fx
If you have Soundflower installed, the Mac OS speech synth output can be routed back to Max for audio processing. This is somewhat complicated, but shows how to process audio in Max from other sources.
In Mac OS System Preferences, set the audio output device to Soundflower 2ch
Turn up hardware volume control on your computer
In Max, Options | Audio Status, set input device to Soundflower 2ch
In world3.maxpat, double-click [p audio engine] (lower left). Then, in the audio-engine sub-patch, activate the toggle (lower right) for voice-fx.
data recording
The built-in data recorder/playback is on the left side of world3.maxpat:
toggle ‘record’ (red toggle) to start or stop data recording
Note that data will only be recorded when the php program is streaming Tweets in the terminal window (see above)
Press the /play message, or the other transport controls, to replay data
Replaced the [aka.speech] external with Jeremy Bernstein's [shell] external and the Mac OS command line 'say' command (sketched below).
Reinstalled Java Development Kit for [mxj] object
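The 'say' replacement, sketched in ruby for illustration (the patch itself drives 'say' through the [shell] external, and the voice name here is arbitrary):
# speak a line of text with the built-in Mac OS 'say' command, choosing a voice
system('say', '-v', 'Vicki', 'hello from the Tweet stream')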
I revised the php code for the Twitter streaming project to use the coordinates of a corner of the city's bounding-box polygon. That is more reliable than the geo coordinates, which are absent from most Tweets.
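In ruby terms, for illustration (the project code is php, and the field names follow the Twitter v1.1 streaming payload), the fallback looks like:
require 'json'

line  = STDIN.gets              # one Tweet (JSON) from the streaming connection
tweet = JSON.parse(line)

if tweet['coordinates']         # exact geo coordinates: present on few Tweets
  lon, lat = tweet['coordinates']['coordinates']
elsif tweet['place']            # fall back to a corner of the city bounding box
  lon, lat = tweet['place']['bounding_box']['coordinates'][0][0]
end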
updated 3/26/2014 – fixed runtime error in php server
updated 2/2/2014 – simplified user interface and updated audio engine
updated 9/2/2013 for Twitter v1.1 API with OAuth – note that older versions of this project are broken, because the Twitter v1.0 API was discontinued in June 2013
This patch demonstrates the Twitter search API. It's self-contained within Max, using the [mxj search tweet] external. This object allows you to input:
keyword
maximum number of results
The response is:
username
Tweet text
date/time
This patch is a great way to get Tweets into Max. You can use a [metro] object to poll the API. There are no additional programs running outside of Max.
The limitation is lack of flexibility: you don't have access to any of the other parameters in the response – for example, geographic data. Also, it can be difficult to install and maintain Java [mxj] programs in Max.
Here's a screenshot of a patch which takes the output of the above patch and sends it to the [aka.speech] object, which runs the Mac OS built-in text-to-speech program.