ipadOscMidi app with Max patch

Have set up a Github repository for the ipadOscMidi simulator app: https://github.com/tkzic/ipadoscmidi

There is also a companion Max patch for testing

Here is the README file for the project:

ipadMidiOsc
-----------
March 4, 2013
version 1.0

This program is a simulator to test Midi and Osc communication in iOS. There is a companion Max/MSP patch in the archive (oscmiditest3.maxpat). The Max patch lets you control the user interface on the iPad, and it will display incoming messages from the iPad.

I have only tested the default iOS midi networking devices via Mac OS, and an iRig Midi interface. 

This is the only documentation right now - but there are big plans, yeah, for a programming guide, and a free app store app, along the lines of audioGraph.

I wanted to get this initial version out before the spacecraft lands in the backyard.

Acknowledgements:

The Midi code was derived from PGMidi by Pete Goodliffe
The Osc code was derived from OscPack by Ross Bencina

Thank you.

Tom Zicarelli
[email protected]

Notes:

Local Project files are in: tkzic/oscapps/ipadmiditest4

I made the update described here for iOS 6 compatibility:

http://stackoverflow.com/questions/12548856/coremidi-pgmidi-virtual-midi-error-in-ios6

Updates to audiograph for iOS 6

notes

Today I’m attempting to update audiograph to run under the current iOS/Xcode releases. Here are some helpful solutions for resolving compilation errors and warnings…

(update) Had problems with git because I forgot to pull Michael Tyson’s changes down from GitHub before I made changes to the files locally and committed them.

Ended up doing a wholesale copy and a lot of duplicated effort – anyway it seems to work now.

Very important: The current local version of audiograph is in tkzic/coreaudio/audiograph

I have also submitted a new version 1.1 to the App Store, but it’s in the same folder I just mentioned.

runtime view controller error:
<code>'A view can only be associated with at most one view controller at a time!'</code>

http://stackoverflow.com/questions/12434937/uiviewcontrollerhierarchyinconsistency-when-trying-to-present-a-modal-view-contr

group table view default color warning:

http://stackoverflow.com/questions/12539861/group-table-view-background-color-is-deprecated-in-ios-6-0

deprecated AVAudioSession methods (iOS 6.0)

I commented these out and added the updated methods.

see this link for setDelegate

http://stackoverflow.com/questions/13078901/cocos2d-2-1-delegate-deprecated-in-ios-6-how-do-i-set-the-delegate-for-this

miscellaneous

http://developer.apple.com/library/ios/#documentation/AVFoundation/Reference/AVAudioSession_ClassReference/DeprecationAppendix/AppendixADeprecatedAPI.html

see Apple docs for everything else

The ‘play’ button on the bottom toolbar didn’t work on the 5th-generation iPod touch (with the taller screen).

Here’s what fixed it (in applicationDidFinishLaunching…)

from this stack overflow post: http://stackoverflow.com/questions/12395200/how-to-develop-or-migrate-apps-for-iphone-5-screen-resolution

The only really required thing to do is to add a launch image named “[email protected]” to the app resources, and in the general case (if you’re lucky enough) the app will work correctly.

In case the app does not handle touch events, then make sure that the key window has the proper size. The workaround is to set the proper frame:

<code>[window setFrame:[[UIScreen mainScreen] bounds]];</code>

There are other issues not related to screen size when migrating to iOS 6. Read the iOS 6.0 Release Notes for details.

Twitter streaming API in Max

World map and radio simulation

note August 3, 2022 –

This program is broken on macOS Monterey. The PHP code is throwing errors in the OSC library. I’m not certain there is a reasonable workaround at this point and will be looking at replacing the PHP code with node.js or another more reliable platform. Also, as noted below, PHP is no longer installed in macOS, so it requires Homebrew or MacPorts.

features

  • Twitter streaming v1.1 API and Twitter Apps (using http requests and Oauth in php)
  • lat/lon conversion and map plotting in Max
  • sending data to Max using OSC in php
  • ‘speaking’ tweets using several voices (text to speech)
  • Using geo-coordinates to control an FM synthesizer
  • Converting Tweet text to Morse code
  • Using a data recorder to replay/save data streams (Max lists)

Compare to satellite photo of earth – note the pattern of lights.

download

https://github.com/tkzic/internet-sensors

folder: twitter-stream

files

Max
  • world3.maxpat (main patch)
  • data_recorder_list-tz.maxpat (abstraction for recording data)
  • data-recorder-wrapper.maxpat (abstraction for recording data)
  • worldMap.jpg
  • twitter-morning.txt (sample data – not required)
php
  • ctwitter_max3.php (main program)
  • ctwitter_stream_max3.php (twitter engine)
  • udp.php (Osc client)
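
For orientation, here is a minimal sketch of what an OSC-over-UDP client in PHP can look like. This is illustrative only – not the actual udp.php – and the /tweet address, arguments, and port 7400 are assumptions for a Max patch listening with [udpreceive]:

<?php
// Minimal OSC-over-UDP sketch (illustrative, not the project's udp.php).
// Assumes a Max patch is listening with [udpreceive 7400] on localhost.

// OSC strings are null-terminated and padded to a multiple of 4 bytes
function osc_pad_string($s) {
    $s .= "\0";
    while (strlen($s) % 4 != 0) {
        $s .= "\0";
    }
    return $s;
}

// Build an OSC message from an address pattern plus string/float arguments
function osc_message($address, $args) {
    $typetags = ",";
    $data = "";
    foreach ($args as $arg) {
        if (is_int($arg) || is_float($arg)) {
            $typetags .= "f";
            $data .= pack("G", (float)$arg);   // 32-bit big-endian float (PHP 7+)
        } else {
            $typetags .= "s";
            $data .= osc_pad_string((string)$arg);
        }
    }
    return osc_pad_string($address) . osc_pad_string($typetags) . $data;
}

// Example: send a tweet's text and lat/lon to Max
$packet = osc_message("/tweet", array("Window Seat #photo #cats", 42.35, -71.06));
$udp = fsockopen("udp://127.0.0.1", 7400, $errno, $errstr);
fwrite($udp, $packet);
fclose($udp);
?>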

3 August, 2022

(note: starting with macOS Monterey, PHP is no longer included in macOS. You can install it with Homebrew. See this post: https://www.ergonis.com/products/tips/install-php-on-macos.php)

externals

[note]  The project displays Tweets without these externals, but you won’t hear any speech

authorization

In addition to having a Twitter account, you will need to set up a Twitter application from the developer site here:

https://dev.twitter.com/apps

Good instructions on how to do this can be found in this stackoverflow.com post, under the heading “So you want to use the Twitter v1.1 API?”:

http://stackoverflow.com/questions/12916539/simplest-php-example-for-retrieving-user-timeline-with-twitter-api-version-1-1

When you get to step 5 in the instructions, instead of writing your own code, just use a text editor to copy your access tokens into the php program which is provided:

  • ctwitter_max3.php

Replace the strings in this line of code by copying and pasting the appropriate ones from your Twitter application:

$t->login('consumer_key', 'consumer secret', 'access token', 'access secret');

So it will end up looking something like this:

$t->login('ZdzfNaeflihFydfOHeOA', 'eXzUOfhif4riifgRbCTnnSN0T7neYtg8dIWDC7j3bs', '205589709-5kRI1fllJvU94jjffeerSn9LrTajtxSrvO8', 'u5MuSxPseBemUIBWlMxEFaw899feedXA0eHlReCnQ');

Yeah – it’s cryptic…

instructions

1. open the Max Patch: world3.maxpat

2. in a terminal window run the php program: ctwitter_max3.php. [note] it runs forever. Press <ctrl-c> when you want to stop streaming Tweets.

php ./ctwitter_max3.php

3. Switch back to world3.maxpat to see dots populating the map

4. In Max, press the speaker icon (lower left) to turn on audio.

5. Activate  voice synth/morse code using the blue toggle (lower left)

6. Clear the map by pressing the blue message box: “clear, drawpict a 0 0”

7. Stop the Tweet stream by pressing <ctrl-c> in the terminal window

special voice fx

If you have Soundflower installed, the Mac OS speech synth output can be routed back to Max for audio processing. This is somewhat complicated, but shows how to process audio in Max from other sources.

  • In MacOS System Preferences, set audio output device to Soundflower 2ch
  • Turn up hardware volume control on your computer
  • In Max, Options | Audio Status, set input device to Soundflower 2ch
  • In world3.maxpat double click on [p audio engine] (lower left). Then in the audio-engine sub-patch activate the toggle (lower right) for voice-fx

data recording

The built-in data recorder/playback is on the left side of world3.maxpat:

  • toggle ‘record’ (red toggle)  to start or stop data recording
  • Note that data will only be recorded when the php program is streaming Tweets in the terminal window (see above)
  • Press /play message or other transport controls to replay data

revision history

1/19/2021

Updates for Max8 and Catalina:

Replaced [aka.speech] external with Jeremy Bernstein’s [shell] external and the Mac OS command line ‘say’ command.

Reinstalled Java Development Kit for [mxj] object

I revised the php code for the Twitter streaming project to use the coordinates of a corner of the city polygon bounding box. That seems to be more reliable than the geo coordinates, which are absent from most Tweets.

  • updated 3/26/2014 – fixed runtime error in php server
  • updated 2/2/2014 – simplified user interface and updated audio engine
  • updated 9/2/2013 for Twitter v1.1 API with Oauth – note that older versions of this project are broken due to discontinued Twitter v1.0 API as of June 2013

Thoughts on APIs and Max

After looking at hundreds of APIs over the past months, models begin to emerge:

  • Visualization: Looking at data by filtering, analysis, or factors defining movement.
  • Synthesis: producing  feeds from sources, in combination – fusion
  • Transcoding: changing one type of signal into another

Or some combination of all three.

The best tools for getting data in and out of Max:

  • curl (or variants in client libraries)
  • JSON
  • Osc
  • string parsing outside of Max
  • database tools (or data recorder in Max)
  • basic data filtering and scaling tools in Max
  • for complex networked systems: node.js
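
As a very rough sketch of how the first few of these fit together – fetching JSON over HTTP with curl (here via PHP’s curl bindings) and decoding it before passing values on to Max over OSC – something like this, where the URL is just a placeholder:

<?php
// Sketch: fetch JSON with PHP's curl bindings and decode it.
// The URL is a placeholder; decoded values would then be sent to Max
// over OSC (see the udp sketch in the Twitter streaming section above).

$ch = curl_init("https://example.com/some-api.json");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);   // return the body instead of echoing it
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);   // follow redirects
$body = curl_exec($ch);
curl_close($ch);

$data = json_decode($body, true);                 // decode into an associative array
print_r($data);
?>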

Installing pycurl on Mac OS

notes

update: 11/1/2014 installed version 7.19.5 using easy_install

Here’s the website: http://pycurl.sourceforge.net

It worked – possibly because Xcode was installed?

sudo env ARCHFLAGS="-arch x86_64" /usr/bin/easy_install setuptools pycurl==7.19.0

Here are instructions, but really all I needed was the above command

http://blog.carlotorniai.net/lion-and-pycurl/

Pycurl examples: http://www.angryobjects.com/2011/10/15/http-with-python-pycurl-by-example/

Local file examples in tkzic/python-api/

There are other Python libs like requests, human_curl, urllib…

Processing, Twitter, OSC, and Max

A variation on the Twitter mood-lamp program.

note 6/2014 – this may not work due to changes in oauth

local files:

  • Processing: /documents/processing/osc_max_testing
  • Max: tkzic/max teaching examples/processing-osc

It grabs a ‘feed’ or any URL which returns a bunch of text. Then it does some analysis on the text and uses the results to send RGB data back to Max using OSC.

Raspberry-Pi FX pedal

Running in Pure Data

(update) Tried this with guitarist John Drew today (2/26/2012). We ran the guitar directly into the iMic (switched to microphone, not line) and the output of the iMic into an amplifier. The R-Pi was plugged into the wifi router with an Ethernet cable, so we could use touchOSC to control the delay parameters. It sounded great.

We talked about the possibility of making this into a ‘product’. One idea would be to ditch the Osc controls and build a simple hardware interface – some encoders, switches, and LEDs. You could map everything in PD and download new patches using an ethernet cable, a usb wifi connection, or even some kind of serial/usb link.

Yesterday I programmed a simple variable delay effect in pd to run on Raspberry-Pi. Control was via touchOSC as described in previous posts. I ran the effect in mono at a 32k sampling rate – and it sounded great. Also it’s using the Griffin iMic for sound.

Here’s the command line to set the sample rate and number of channels:

pd-extended -r 32000 -nogui -channels 1 delay-effect-osc.pd

Local files:

Parsing Tweets into spoken language

notes

I’ve revised the php program that streams Tweets and sends them to Max, so that it removes hyperlinks, RT indicators, user mentions, and ascii art. Now it works better with text-to-speech.
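
As a rough illustration of that kind of cleanup (not the actual revised code), the filtering can be done with a few regular expressions in php:

<?php
// Illustrative sketch of cleaning tweet text for text-to-speech
// (not the actual revised streaming code).

function clean_tweet_for_speech($text) {
    $text = preg_replace('/https?:\/\/\S+/', ' ', $text);    // hyperlinks
    $text = preg_replace('/\bRT\b:?/', ' ', $text);           // retweet indicators
    $text = preg_replace('/@\w+:?/', ' ', $text);             // user mentions
    $text = preg_replace('/[^\x20-\x7E]/', ' ', $text);       // ascii art, emoji, etc.
    return trim(preg_replace('/\s+/', ' ', $text));           // collapse whitespace
}

echo clean_tweet_for_speech("RT @zooloo: Window Seat #photo #cats http://t.co/sf0fHWEX2S");
// prints: Window Seat #photo #cats
?>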

things that could be done in a future project:

  • figure out which #hashtags are integral to content, and which are just tagged onto the end of a tweet
  • remove extraneous hyperlinks which don’t get parsed by the API
  • translate symbols like > into “greater than” or “better than”
  • translate (or at least flag) foreign languages – this could be aided by geocoding data
  • translate slang acronyms like OMG, LOL
  • natural language parsing (see Stanford open source program) for content and grammatical analysis
  • replace hyperlinks/picture-links with a ‘title’ from the actual target
  • natural language equivalents of things like: RT @zooloo:

things to try

  • Running the output of text-to-speech through musical analysis tools, to detect pitch and rhythm
  • Chaining: Use the content of one tweet to direct a search for the next one. For example say you search for cats and get: “my cat is turning purple” – then you would search for ‘purple’ and get: “I’ve never eaten a purple cow” – then you would search for “cow” and so forth

Twitter streaming API

from https://dev.twitter.com/docs/streaming-apis

The diagram in those docs shows the process of handling a stream, using a data store as an intermediary.

The JSON response breaks out various components of the tweet, like hashtags and URLs, but it doesn’t provide a clean version of the text – which, for example, could be converted to speech.

Here’s a sample response which shows all the fields:

{
	"created_at": "Sat Feb 23 01:30:55 +0000 2013",
	"id": 305127564538691584,
	"id_str": "305127564538691584",
	"text": "Window Seat #photo #cats http:\/\/t.co\/sf0fHWEX2S",
	"source": "\u003ca href=\"http:\/\/www.echofon.com\/\" rel=\"nofollow\"\u003eEchofon\u003c\/a\u003e",
	"truncated": false,
	"in_reply_to_status_id": null,
	"in_reply_to_status_id_str": null,
	"in_reply_to_user_id": null,
	"in_reply_to_user_id_str": null,
	"in_reply_to_screen_name": null,
	"user": {
		"id": 19079184,
		"id_str": "19079184",
		"name": "theRobot Vegetable",
		"screen_name": "roveg",
		"location": "",
		"url": "http:\/\/south-fork.org\/",
		"description": "choose art, not life",
		"protected": false,
		"followers_count": 975,
		"friends_count": 454,
		"listed_count": 109,
		"created_at": "Fri Jan 16 18:38:11 +0000 2009",
		"favourites_count": 1018,
		"utc_offset": -39600,
		"time_zone": "International Date Line West",
		"geo_enabled": false,
		"verified": false,
		"statuses_count": 10888,
		"lang": "en",
		"contributors_enabled": false,
		"is_translator": false,
		"profile_background_color": "1A1B1F",
		"profile_background_image_url": "http:\/\/a0.twimg.com\/profile_background_images\/6824826\/BlogisattvasEtc.gif",
		"profile_background_image_url_https": "https:\/\/si0.twimg.com\/profile_background_images\/6824826\/BlogisattvasEtc.gif",
		"profile_background_tile": true,
		"profile_image_url": "http:\/\/a0.twimg.com\/profile_images\/266371487\/1roveggreen_normal.gif",
		"profile_image_url_https": "https:\/\/si0.twimg.com\/profile_images\/266371487\/1roveggreen_normal.gif",
		"profile_link_color": "2FC2EF",
		"profile_sidebar_border_color": "181A1E",
		"profile_sidebar_fill_color": "252429",
		"profile_text_color": "666666",
		"profile_use_background_image": false,
		"default_profile": false,
		"default_profile_image": false,
		"following": null,
		"follow_request_sent": null,
		"notifications": null
	},
	"geo": null,
	"coordinates": null,
	"place": null,
	"contributors": null,
	"retweet_count": 0,
	"entities": {
		"hashtags": [{
			"text": "photo",
			"indices": [12, 18]
		}, {
			"text": "cats",
			"indices": [19, 24]
		}],
		"urls": [{
			"url": "http:\/\/t.co\/sf0fHWEX2S",
			"expanded_url": "http:\/\/middle-fork.org\/?p=186",
			"display_url": "middle-fork.org\/?p=186",
			"indices": [25, 47]
		}],
		"user_mentions": []
	},
	"favorited": false,
	"retweeted": false,
	"possibly_sensitive": false,
	"filter_level": "medium"
}
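
As a hedged sketch (field names are taken from the sample above; the filename is just a placeholder), pulling a few of those components out in php looks roughly like this:

<?php
// Sketch: extract a few fields from a decoded tweet like the sample above.
// 'sample_tweet.json' is a placeholder file holding that JSON.

$tweet = json_decode(file_get_contents("sample_tweet.json"), true);

$text        = $tweet['text'];                   // raw text, still contains the t.co link
$screen_name = $tweet['user']['screen_name'];    // "roveg"

$hashtags = array();
foreach ($tweet['entities']['hashtags'] as $tag) {
    $hashtags[] = $tag['text'];                  // "photo", "cats"
}

$coords = $tweet['coordinates'];                 // often null, as in this sample
?>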