Remote radio – client

How to set up the client side of the remote radio system.

(Under construction) The patches have not been uploaded to GitHub yet.

This client works with the server described in the previous post. We are running a MacBook Pro (OS X 10.11.4) with the following:

  • LogMeIn Hamachi VPN
  • Soundjack VOIP
  • Max/MSP
  • Novation Launchpad
  • DJ-Tech CDJ-101 controller

VPN

Install and set up LogMeIn Hamachi. It is free for a limited number of computers. Set up Hamachi on both the server and the client and join them to the same Hamachi network. It should look something like this:

 

VOIP

We are using Soundjack VOIP. It is also free. Use the following parameters on the client side.

  • mic: soundflower 2ch (or something that is not currently producing input!)
  • headphone: default output (or whatever you want to listen on)

Screen Shot 2016-04-05 at 12.23.54 AM

You can ignore the rest of the settings, since we are not sending audio. Most of the configuration is done on the server side.

Max

We are using several patches, depending on which hardware controllers you are using. Make sure that the hardware controllers are connected before opening Max.

patches:
  • eagle-ui5.maxpat – User interface and main entry point for client communication and CDJ-101 abstractions

Screen Shot 2016-04-05 at 12.28.56 AM

  • freqdb4.maxpat – database handler

Screen Shot 2016-04-05 at 12.29.17 AM

  • lp_radio2.maxpat – launchpad driver

Screen Shot 2016-04-05 at 12.28.41 AM

After the Max patches are loaded, you should be able to control the radio using the UI and the CDJ-101 controller. The red Mixer button in the upper right corner of the Launchpad should be lit.
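While the patches are offline, here is a rough picture of what the client does over the network: UI and controller actions are turned into OSC messages and sent (udpsend) to the server over the Hamachi VPN. The Python sketch below illustrates only the idea, not the actual patch; it uses the python-osc package, and the OSC address, port, and Hamachi IP are placeholders.

  # Illustrative only: roughly what the client patch does with udpsend.
  # Requires the python-osc package (pip install python-osc).
  # The OSC address (/eagle/freq), port, and Hamachi IP are placeholders;
  # substitute the values used in your own patches and network.

  from pythonosc import udp_client

  SERVER_HAMACHI_IP = "25.0.0.1"   # hypothetical Hamachi address of the base station
  OSC_PORT = 7400                  # must match the udpreceive port in the server patch

  client = udp_client.SimpleUDPClient(SERVER_HAMACHI_IP, OSC_PORT)

  # ask the server to tune the radio to 7200 kHz (hypothetical address pattern)
  client.send_message("/eagle/freq", 7200000)

The server patch answers these messages by exchanging CAT commands with the radio.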

Instructions:

to be continued…

Remote radio – server

How to run the server side (base station) of the remote shortwave radio system.

(Under construction) The patches have not been uploaded to github yet.

This assumes that the radio and antenna system are already operating. We are using an internet-connected MacBook Pro running OS X 10.9.5, with a MOTU 828 MK3 audio interface.

VPN

Install and set up LogMeIn Hamachi. It is free for a limited number of computers. Set up Hamachi on both the server and the client and join them to the same Hamachi network. It should look something like this:

 

VOIP

We are using Soundjack VOIP. It is also free. Use the following parameters on the server side.

Local Settings:
  • mic: audio interface channel that is connected to radio audio output
  • headphone: doesn’t matter
  • volume: 0
  • audio block samples: 512
  • channels: 2
  • network packet samples: 512
  • quality: high
  • userlist: manual

Screen Shot 2016-04-05 at 12.38.03 AM

User list:

UDP/IP: enter the Hamachi IP of the client.

When Soundjack is set up on the client, press the green start button on the right side of the user list window on the server.

If all goes well, you should hear the radio on the client. Note: the input meter under Local Settings should be registering audio from your radio. If not, there is a problem with the audio interface.

Max Server

The Max/MSP server exchanges CAT commands with the radio via the server's serial port. The command data is exchanged with a Max patch on the client using OSC (over UDP).

Screen Shot 2016-04-05 at 12.46.02 AM

patch:

eagle-cat8.maxpat

instructions:
  • select the radio serial port from the menu (for example: usbmodem 14531)
  • initialize port settings
  • set toggle to poll the serial port

At this point you should be able to try the example commands, for instance to get the version or set frequency. If the commands are not working, it indicates a problem with the serial connection to the radio.
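If you want to sanity-check the serial link outside of Max, the sketch below does roughly what the patch does with the serial object: open the radio's USB serial port, send a CAT query, and poll for the reply. It is Python using pyserial, and the port name, baud rate, and command bytes are placeholders; take the real values from the radio's CAT programming guide.

  # Illustrative only: a rough pyserial equivalent of the serial exchange
  # in eagle-cat8.maxpat. Port, baud rate, and command bytes are placeholders.

  import serial

  PORT = "/dev/tty.usbmodem14531"   # the port selected from the menu in the patch
  BAUD = 9600                       # placeholder; use your radio's serial rate
  VERSION_QUERY = b"?V\r"           # placeholder command; check the CAT reference

  with serial.Serial(PORT, BAUD, timeout=0.5) as radio:
      radio.write(VERSION_QUERY)
      reply = radio.read(64)        # poll for up to 64 bytes of response
      print(reply)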

Next, check the IP address of the udpsend object. It should be the Hamachi IP of the client.
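On the client side, udpreceive picks up whatever the server sends back. A minimal sketch of that receiving role, again in Python with python-osc (the port number and address pattern are placeholders):

  # Illustrative only: the receiving half of the OSC link (what udpreceive does).
  # The port number and address pattern are placeholders.

  from pythonosc import dispatcher, osc_server

  def handle_freq(address, *args):
      # for example, the server echoing back the radio's current frequency
      print(address, args)

  d = dispatcher.Dispatcher()
  d.map("/eagle/freq", handle_freq)

  server = osc_server.BlockingOSCUDPServer(("0.0.0.0", 7401), d)
  server.serve_forever()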

 

Remote controlled shortwave radio system

Under construction…

The first in a series describing a system for internet remote control of a shortwave radio station. It's not something new. There are commercial products that provide remote operation of amateur radio transceivers. The purpose of this project is to make it possible to use shortwave radio sounds in musical performance, without the need for an antenna system.

Features:

  • Max/MSP for USB serial control of the radio, OSC remote interface, user interface, Midi device handling, and an SQLite database of preset frequencies (a rough sketch of this table appears after this list).
  • Low latency, good quality audio using Soundjack by Alex Carot.
  • Hardware control of the radio using Midi controllers (CDJ-101 and Launchpad)
  • Bi-directional OSC and VOIP using LogMeIn Hamachi VPN
  • Additional hardware control of AC power and antenna selection using an Arduino and a WeMo switch.
  • TouchOSC iPad audio mixer control using MOTU CueMix
  • TeamViewer remote desktop software for logging in to the base station computer
  • Optional radio user interface control with TouchOSC on an iPod, a Griffin PowerMate dial, and a Korg nanoKONTROL.
  • Optional VOIP backup using Mumble.
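The preset frequency database mentioned in the first bullet is a small SQLite table that the Max patches read and write. Here is a rough sketch of the idea in Python; the table name and columns are assumptions, not the actual schema used by the patches.

  # Illustrative only: a minimal preset-frequency table in SQLite.
  # Table name and columns are assumptions, not the schema used by freqdb4.maxpat.

  import sqlite3

  db = sqlite3.connect("presets.db")
  db.execute("""CREATE TABLE IF NOT EXISTS presets (
                    name  TEXT,
                    freq  INTEGER,   -- frequency in Hz
                    mode  TEXT       -- e.g. AM, USB, LSB, CW
                )""")
  db.execute("INSERT INTO presets VALUES (?, ?, ?)", ("WWV", 10000000, "AM"))
  db.commit()

  for name, freq, mode in db.execute("SELECT name, freq, mode FROM presets ORDER BY freq"):
      print(name, freq, mode)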

System diagram

base station:

remote-radio-sys1

remote control:

remote-radio-sys2


Glacier Sounds

Overlapping loops of varying duration to represent natural cycles.

glacier1

In October I collaborated with Wade Kavanaugh and Stephen P. Nguyen to compose and perform the sounds of a glacier for their installation at the Gem theatre in Bethel, Maine. The glacier was made from paper.

Wade and Stephen:

wadeandsteven

A time-lapse video of the project:

A time-lapse video of a similar project they did in Minnesota in 2005:

The approach was to take a series of ambient loops and organize them by duration. The longer loops would represent the slow movement of time. Shorter loops would represent events like avalanches. One-shot samples would represent quick events, like the cracking of ice.

It took several iterations to produce something slow and boring enough to be convincing. I used samples from Ron MacLeod's Cyclic Waves library from Cycling 74 https://www.ableton.com/en/packs/cyclic-waves/. Samples were pitched down to imply largeness.

Screen Shot 2015-12-21 at 1.09.59 AM

Each vertical column in an Ableton Live set represents a time-frame of waves. That is, the far left column contains quick events and the far right column contains long cycle events. Left to right, the columns have gradually increasing cycle durations.  I used a Push controller to trigger samples in real time as people walked through the theatre to see the glacier.

The theatre speakers were arranged in stereo but from front to back. Since the glacier was also arranged along the same axis, a slow auto-panning effect sent sounds drifting off into the distance, or vice versa. Visually and sonically there was a sense that the space extended beyond the walls of the theatre.

In the “control room” above the theatre… using Push to trigger samples and a Korg NanoKontrol to set panning positions of each track:

glacer2

The performance lasted about 45 minutes. Occasionally the cracking of ice would startle people in the room. There were kids crawling around underneath the paper glacier. Afterwards we just let the sounds play on their own. A short excerpt:

 

Photographs by Rebecca Zicarelli.

Internet shortwave radio using Max, Hamachi, and Mumble

How to control an amateur radio transceiver over the internet, using Osc (Open Sound Control), VOIP (Voice over Internet Protocol) and VPN (Virtual Private Networks).

What problem does this solve?

Using a shortwave radio receiver in a  live performance without installing a large antenna system.

This method gives low-latency real-time access to audio, and radio control using a laptop computer from anywhere. I suppose it could also remote-control a synthesizer, if you’re into that kind of thing.

CAT

Modern ham radio receivers can be controlled with serial commands using the CAT (Computer Aided Transceiver) protocol. Usually this is done via a USB port. There are hardware solutions for remote controlling radios over the internet, like RemoteRig http://www.remoterig.com/wp/. But there is also a free, or low-cost, solution using software.

System diagram

Screen Shot 2015-12-21 at 1.23.10 AM

The ‘base’ computer is connected to the radio and antenna. The ‘remote’ computer is a laptop that could be anywhere with a WiFi connection.

For this experiment we used a TenTec Eagle transceiver connected to a MacBook USB port. The audio output of the radio connects to the audio input of the MacBook. The MacBook is directly connected to an internet WiFi router using an ethernet cable.

VOIP

A Mumble client https://en.wikipedia.org/wiki/Mumble_(software) runs on the base computer and also on the remote laptop. Both clients are connected to a Mumble server (Murmur) at Mumble.com http://www.mumble.com/mumble-download.php. You could also run your own server. I set the audio to the best quality and muted the microphone on the remote laptop. We are only using the laptop as a receiver. For transmitting, you could simply open up another channel on the Murmur server. Mumble has very low latency (compared to Skype) and decent audio quality.

Bi-directional commands using VPN and OSC

CAT commands go in both directions – to and from the radio. For example, you would send a command to the radio to change frequency. The radio would send acknowledgements back to the remote laptop.

This is a problem for networks that use NAT (Network Address Translation) because local IP addresses are private, hidden behind routers. The solution that eventually worked was using a VPN called Hamachi https://secure.logmein.com/products/hamachi/download.aspx on both the remote and base computers. Hamachi servers are set up on both computers and connected to each other. This allows the computers to ‘see’ each other as if they were on a local network.

Max and Osc

Max patches are run on both the base and remote computers. The Max patch on the base computer connects to the radio using the serial object and passes commands back and forth over the internet using udpsend and udpreceive (which use Osc).

The Max patch on the remote MacBook sends and receives commands from the base computer using udpsend and udpreceive. With the Hamachi VPN, Osc works just like it does on a LAN (local area network).

Automatic reconfiguration of clients

The main advantage of this system is that when you move the remote MacBook to a new location – for example, a coffee shop with public Wifi – both the Mumble and Hamachi clients automatically reconfigure for the location. So you don’t need to know the actual IP address of your computer in the coffee shop. The reconfiguration usually happens within seconds after the Wifi connection is made.

Alternatives

If you are just working across a LAN, you don’t need a VPN. Osc will run on a local network using private IPs.

You could also try Ross Bencina’s Oscgroups http://www.rossbencina.com/code/oscgroups, although I was only able to get Oscgroups working on a LAN.

For uni-directional Osc communication from remote to base, in a WAN (wide area network) you can use a static IP address for the target.

Skype is another (free) solution for transmitting VOIP audio. Set the base computer in auto-answer mode and call it from the remote computer. Skype processes the audio more than Mumble, with noise gates and such, and the latency is higher. But it's very easy to set up.

Development

The next step is to build a remote interface for the radio that uses Midi/Osc controllers, so that, for example, you can turn a dial on the Midi controller to change frequency or filter settings on the base radio.
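Here is a sketch of one way that mapping could work, written in Python rather than Max just to show the idea: read a control-change message from a dial and translate it into an Osc message for the base station. The CC number, Osc address, port, Hamachi IP, and frequency mapping are all made up for the example.

  # Illustrative only: map a Midi dial to a frequency-change Osc message.
  # Requires mido (with python-rtmidi) and python-osc. All numbers, names,
  # and addresses below are placeholders.

  import mido
  from pythonosc import udp_client

  osc = udp_client.SimpleUDPClient("25.0.0.1", 7400)   # hypothetical Hamachi IP and port
  TUNING_CC = 16                                       # hypothetical dial CC number

  with mido.open_input() as port:                      # default Midi input
      for msg in port:
          if msg.type == "control_change" and msg.control == TUNING_CC:
              # map 0-127 to a 7.0-7.3 MHz tuning range (arbitrary example)
              freq = 7000000 + int(msg.value * (300000 / 127))
              osc.send_message("/radio/freq", freq)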

to be continued…

RF mixer simulation in Max

Audio simulation of an RF circuit.

Screen Shot 2015-03-29 at 4.49.36 PM

The simulation serves no purpose, but it's fun. There are 4 versions. I think the third one sounds best (rf-mixer-sim3.maxpat). It's interesting to hear how much spectral distortion happens from multiplying sawtooth waves.
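The distortion makes sense if you think of the mixer as multiplication: every harmonic of one sawtooth mixes with every harmonic of the other, producing sum and difference frequencies. A quick numpy/scipy sketch of that effect (this is not the Max patch, and the frequencies are arbitrary):

  # Illustrative only: multiply two sawtooth waves and look at the spectrum.
  # Expect mixing products at sums and differences of the harmonics,
  # e.g. 440 - 317 = 123 Hz and 440 + 317 = 757 Hz among the strongest bins.

  import numpy as np
  from scipy import signal

  sr = 44100
  t = np.arange(sr) / sr                      # one second of samples
  rf = signal.sawtooth(2 * np.pi * 440 * t)   # "RF" input
  lo = signal.sawtooth(2 * np.pi * 317 * t)   # local oscillator
  mixed = rf * lo                             # the mixer: simple multiplication

  spectrum = np.abs(np.fft.rfft(mixed))
  freqs = np.fft.rfftfreq(len(mixed), 1 / sr)
  strongest = freqs[np.argsort(spectrum)[-10:]]
  print(sorted(strongest.astype(int)))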

Screen Shot 2015-03-29 at 4.46.55 PM

Download

https://github.com/tkzic/max-projects/

folder: rf-mixer

patches:

Note: please set the signal vector size to 1 (or as low as possible) and enable overdrive and audio interrupt

Screen Shot 2015-03-29 at 5.22.30 PM

Four versions:

  • rf-mixer-sim.maxpat (initial attempt)
  • rf-mixer-sim2.maxpat (uses sah~ and rate~ objects)
  • rf-mixer-sim3.maxpat (uses gate~ objects with a phasor~ clock)
  • rf-mixer-sim4.maxpat (bandpass filter on RF input)

 

Basic synth in Max – part 2

Yet another Basic synthesizer design

Screen Shot 2015-03-26 at 3.16.19 PM

See part 1 here: https://reactivemusic.net/?p=18511

New features

Drag to select buffer start/end points

waveform~ object

Screen Shot 2015-03-26 at 3.21.17 PM

Sample recording

record~ object.

Screen Shot 2015-03-26 at 3.22.25 PM

How to design voice activated recording?
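One common answer is an envelope follower with a threshold: measure the short-term level of the input and start recording when it rises above a floor (in Max, that result could drive record~). A rough sketch of the idea, in Python with arbitrary numbers:

  # Illustrative only: a level-threshold gate for voice-activated recording.
  # Window size and threshold are arbitrary.

  import numpy as np

  def voice_gate(samples, window=1024, threshold=0.05):
      """Return the sample index where the input level first exceeds the threshold."""
      for start in range(0, len(samples) - window, window):
          block = samples[start:start + window]
          rms = np.sqrt(np.mean(block ** 2))
          if rms > threshold:
              return start      # begin recording here
      return None               # input stayed below the threshold

  # example: half a second of silence followed by half a second of "voice"
  audio = np.concatenate([np.zeros(22050), 0.3 * np.random.randn(22050)])
  print(voice_gate(audio))      # 21504: the first block that overlaps the burst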

Time compress/stretch

groove~ (Max 7 only)

Presets

Screen Shot 2015-03-26 at 3.23.30 PM

M4L preset management: https://reactivemusic.net/?p=18557

Polyphony

poly~ object

Polyphonic Midi synth in Max

https://reactivemusic.net/?p=11732

local: poly-generic-example1.maxpat (polyphonic)

Polyphonic instrument in Max for Live

Wave~ sample player: https://reactivemusic.net/?p=18354

local: m4l: poly-synth1.als (aaa-polysynth2.amxd)

Screen Shot 2015-03-22 at 10.01.22 PM

Max For Live

automation and UI design (review)

Distributing M4L devices

How to create a Live ‘Pack’

by Winksound

  • save set
  • collect all and save
  • file manager
    • manage project
      • packing : create live pack

 

Presets in Max for Live

How to use the Max preset object inside of M4L.

Screen Shot 2015-03-22 at 8.06.14 PM

There is some confusion about how to use Max presets in a M4L device. The method described here lets you save and recall presets with a device inside of a Live set, without additional files or dialog boxes. It uses pattrstorage. It works automatically with the Live UI objects.

It also works with other Max UI objects by connecting them to pattr objects.

It's based on an article by Gregory Taylor: https://cycling74.com/2011/05/19/max-for-live-tutorial-adding-pattr-presets-to-your-live-session/

Download

https://github.com/tkzic/max-for-live-projects

Folder: presets

Patch: aaa-preset3.amxd

How it works:

Instructions are included inside the patch. You will need to add objects and then set attributes for those objects in the inspector. For best results, set the inspector values after adding each object.

Write the patch in this order:

A1. Add UI objects.

For each UI object:

  1. check link-to-scripting name
  2. set the long and short names to the actual parameter name

Screen Shot 2015-03-22 at 8.44.23 PM

A2. (optional) Add non-Live UI objects (i.e., Max UI objects)

For each object, connect the middle outlet of a pattr object (with a parameter name as an argument) to the left inlet of the UI object. For example:

Screen Shot 2015-03-22 at 8.30.24 PM

Then in inspector for each UI object:

  1. check parameter-mode-enable
  2. check initial-enable

Screen Shot 2015-03-22 at 8.51.10 PM

B. Add a pattrstorage object.

Screen Shot 2015-03-22 at 8.35.28 PM

Give the object a name argument, for example: pattrstorage zoo. The name can be anything; it's not important. Then in the inspector for pattrstorage:

  1. check parameter-mode enable
  2. check Auto-update-parameter Initial-value
  3. check initial-value
  4. change short-name to match long name

Screen Shot 2015-03-22 at 8.42.49 PM

C. Add an autopattr object

Screen Shot 2015-03-22 at 8.34.21 PM

D. Add a preset object

Screen Shot 2015-03-22 at 8.34.53 PM

In the inspector for the preset object:

  1. assign the pattrstorage object name from step B (zoo) to the pattrstorage attribute

Screen Shot 2015-03-22 at 8.52.11 PM

Notes

The preset numbers go from 1 to n. They can be fed directly into the pattrstorage object – for example, if you wanted to use an external controller.

You can name the presets (slotnames). See the pattrstorage help file.

You can interpolate between presets. See the pattrstorage help file.

Adding new UI objects after presets have been stored

If you add a new UI object to the patch after pattrstorage is set up, you will need to re-save the presets with the correct setting of the new UI object. Or you can edit the pattrstorage data.

 

 

ep-341 Max/MSP – Spring 2015 week 7

The Live Object Model in Max for Live.

Screen Shot 2015-03-04 at 12.43.39 PM

Several ways of working with Ableton Live parameters in an M4L patch. (This is an improved version of the patch we built in class.) https://reactivemusic.net/?p=18401

The Live Object Model description: https://cycling74.com/docs/max5/refpages/m4l-ref/m4l_live_object_model.html

In the coming weeks we will build synthesizers and work with control surfaces in M4L.

Assignment

Build 3 or more M4L devices, including one of each of the following:

  • An audio effect
  • An instrument
  • A Midi effect

It's ok to adapt and “improve” an existing device.

Please bring in your work in progress for next week and be prepared to demonstrate something. The entire assignment will not be due until March 31.

 

ep-426 syllabus – Spring 2015

Interactive video programming and performance

Spring 2015

teacher: Tom Zicarelli – http://tomzicarelli.com

You can reach me at:  [email protected]

Office hours: Tuesday 1-2 PM, or Tuesday 4-5PM, at the EPD office #401 at 161 Mass Ave. Please email or call ahead.

Assignments and class notes will be posted to this blog: https://reactivemusic.net before or after the class. Search for: ep-426 to find the notes

Examples, software, links, and references demonstrated in class are available for you to use. If there is something missing from the notes,  please ask about it. This is your textbook.

Syllabus:

Everybody calls this course “The Jitter class” – referring to Max/MSP Jitter from Cycling 74. You will learn to use Jitter, but the objective is to create interactive visual art. Jitter is one tool of many available.

The field of interactive visual art is constantly evolving.

After you take the course, you will have designed projects. You might design a new tool for other artists. You will have opportunities to solve problems. You will become familiar with how others make interactive art. You will explore the connection between sound, video, graphics, sensors, and data. You will be exposed to a world of possibilities – which you may embrace or reject.

We will explore a range of methods and have opportunities to use them in projects. We’ll look at examples by artists – asking the question: How does that work?

Topics: (subject to change)

  1. Jitter
  2. Matrices
  3. Reverse engineering
  4. Visualization of audio
  5. Visualization of live data, APIs
  6. Video analysis (realtime)
  7. Video hardware and controllers
  8. Prototyping
  9. Video signal processing
  10. OpenGL
  11. Other tools: Processing, WebGL, Canvas, 2d graphics
  12. Portfolios
  13. Live performance

Grading and projects:

Grades are based on two projects that you will design – and class participation. Please see Neil Leonard’s EP-426 syllabus for details. I encourage and will give credit for: collaboration with other students, outside projects, performances, independent projects, and anything else that will foster your growth and success.

I am open to alternative projects. For example, you might want to use this course as an opportunity to develop a larger project or continue a work in progress.

Reference material

https://cycling74.com/wiki/index.php?title=Max_Documentation_and_Resources