A vocoder example using an excerpt from an Obama acceptance speech.
local file: obama vocoder example in Ableton teaching examples.
Vocoder tutorials from Huston Singletary at his Ableton Live Clinic Youtube channel:
Apply envelopes and effects to ambient sounds to create “music”
Start with the sound of an orchestra warming up, traffic, sea lions, or any field recording – then apply global envelopes to shape the sound over time.
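The "global envelope" idea above can be sketched in a few lines. This is a minimal illustration, assuming the field recording is already loaded as a list of float samples (real code would read a WAV file with the stdlib `wave` module); the function names are hypothetical, not from any Ableton device.

```python
def linear_envelope(n, breakpoints):
    """Build an n-sample gain envelope from (position, gain) breakpoints,
    where position is a fraction 0.0-1.0 of the total length."""
    env = []
    for i in range(n):
        t = i / max(n - 1, 1)
        # find the surrounding breakpoints and interpolate between them
        for (t0, g0), (t1, g1) in zip(breakpoints, breakpoints[1:]):
            if t0 <= t <= t1:
                frac = (t - t0) / (t1 - t0) if t1 > t0 else 0.0
                env.append(g0 + frac * (g1 - g0))
                break
    return env

def apply_envelope(samples, breakpoints):
    """Multiply each sample by the envelope gain at that point."""
    env = linear_envelope(len(samples), breakpoints)
    return [s * g for s, g in zip(samples, env)]

# fade the recording in over the first half, out over the second
shaped = apply_envelope([1.0] * 8, [(0.0, 0.0), (0.5, 1.0), (1.0, 0.0)])
```

The same breakpoint list could shape a filter cutoff or send level instead of amplitude.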
updated 4/19/2015
Based on a tutorial by Sonic Academy: https://reactivemusic.net/?p=2431
Local file is in Ableton teaching examples: dummy clips band recorder 11-08b (under the dummy clips example project). This version uses ambient traffic sounds.
A version with a high-school band horsing around is in: dummy clips band recorder3z
The actual set will be posted here – eventually…
This project is an effects processor made with dummy clips – it has a simple hip-hop drum track.
(edit) You may want to just delete the example clip that Slice to MIDI makes, because it will totally screw everything up when you try to play it.
sound effects from here http://www.pachd.com/sounds.html
I’m not sure exactly what this idea is yet, but it would translate radio wave field strength into light, musical notes, bar graphs, mechanical stuff, etc.
Chrome now includes online speech recognition
By Jer Thorp
http://blog.blprnt.com/blog/blprnt/updated-quick-tutorial-processing-twitter
Local files: Processing sketchbook: jerTwitterExample.pde
Basically it shows how to use the twitter4j library to grab all the tweets with a given hashtag.
Here’s the twitter4j documentation: http://twitter4j.org/en/index.html
Apparently there is a limitation on sending more than one MIDI channel from a track:
http://cycling74.com/forums/topic.php?id=24020
This presents an issue for receiving OSC and splitting it to various instruments…
(update) What I’ve been trying to do is set up a generalized way to trigger MIDI from web data. Currently I’m getting the web data via Processing and sending OSC messages to Max for Live (or Max).
Once the OSC gets to m4l, I use an m4l device in a track which receives all the OSC messages, then sends them off on new UDP ports using [udpsend] – for example, a separate port for what would normally be a separate MIDI channel.
Then another m4l receiver lets you set the appropriate port number to get the channelized data stream and scale it to MIDI notes.
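The "scale it to MIDI notes" step is just a linear mapping. Here's a minimal sketch in Python, assuming the web data arrives as floats in a known range – the function and the port table are hypothetical, not taken from the actual m4l devices.

```python
def scale_to_midi(value, in_lo, in_hi, note_lo=36, note_hi=84):
    """Linearly map value from [in_lo, in_hi] to an integer MIDI note,
    clamped to the output note range."""
    if in_hi == in_lo:
        return note_lo
    frac = (value - in_lo) / (in_hi - in_lo)
    frac = min(max(frac, 0.0), 1.0)  # clamp out-of-range input
    return int(round(note_lo + frac * (note_hi - note_lo)))

# one UDP port per logical "channel", mirroring the [udpsend] fan-out
PORT_FOR_CHANNEL = {1: 9001, 2: 9002, 3: 9003}

note = scale_to_midi(0.5, 0.0, 1.0)  # middle of the input range -> 60
```

Clamping matters because web data often spikes outside its nominal range.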
[edit – need link to sample live set]
update – the “internet sensors” project shows some easier ways to do the translation of internet APIs to OSC.
Use [pong~ 1.] instead of [%~ 1.] – see the note below about negative numbers…
hi julien,
for non-negative input you will get the same output from [%~ 1.] as from [wrap~]. However, for the whole domain, the wrapping pong~, [pong~ 1], does the job (although it costs more)
Krzysztof
juli…@tremplin-utc.net wrote: …
is the Max/MSP object [%~ 1.] an equivalent to the Pd object [wrap~] ?
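The difference Krzysztof describes is easy to see numerically: a C-style modulo keeps the sign of its input, while wrap folds everything into [0, 1). A quick check in Python, where `math.fmod` behaves like [%~]:

```python
import math

def wrap(x):
    """Fold x into [0, 1), like Pd's [wrap~] or Max's [pong~ 1]."""
    return x - math.floor(x)

print(math.fmod(-0.25, 1.0))  # -0.25 : sign preserved, not wrapped
print(wrap(-0.25))            #  0.75 : folded into [0, 1)
print(math.fmod(0.25, 1.0), wrap(0.25))  # identical for positive input
```

So for a phase signal that can go negative, [%~ 1.] leaves negative values untouched while [wrap~]/[pong~ 1] folds them back into range.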
We are trying to measure RPM of a bicycle wheel by reading the accelerometer data stream of a Wiimote wedged between the spokes of the wheel.
When the Wiimote is in the bicycle wheel it generates a stream of numbers much like a sine wave. Let’s say we just want the speed of the wheel – that would be the frequency of these ‘sine’ pulses.
Strategy: use the [past] object to send a bang once each cycle. Then use a tap-tempo patch to convert pulses into bpm, mph, etc.
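The tap-tempo math can be sketched outside of Max. Given the times (in seconds) at which [past] bangs once per revolution, the average period gives RPM, and a wheel circumference converts that to mph. The 26-inch wheel diameter here is an assumption for illustration, not from the original patch.

```python
import math

def rpm_from_bangs(times):
    """Average revolutions per minute from a list of bang timestamps."""
    if len(times) < 2:
        return 0.0
    period = (times[-1] - times[0]) / (len(times) - 1)  # seconds per rev
    return 60.0 / period

def mph_from_rpm(rpm, wheel_diameter_inches=26.0):
    """Convert wheel RPM to miles per hour (63360 inches per mile)."""
    circumference_miles = math.pi * wheel_diameter_inches / 63360.0
    return rpm * 60.0 * circumference_miles

bangs = [0.0, 0.5, 1.0, 1.5]   # one bang every half second
rpm = rpm_from_bangs(bangs)    # 120 RPM
```

Averaging over several bangs smooths out jitter in when [past] fires, which matters with noisy accelerometer data.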