
Gqrx SDR with Max on Mac OS

Notes on gqrx – piping the I/Q audio stream into Max

Update: This method has limitations – see below. It is an alternative to CubicSDR, but not as robust, especially with audio streaming. There is something wrong with the I/Q output from gqrx – I can’t get a consistent set of signals across the band. Information here on adding hardware drivers.

Install gqrx with macports:

sudo port install gqrx

Install gtelnet with macports (Mac OS has jettisoned telnet)

sudo port install inetutils

Here’s a list of telnet commands that work with gqrx:

For some reason gqrx is not accepting IPv4 addresses. Send the frequency commands over telnet using this:

gtelnet ::ffff: 7356

Actually, this netcat command works too. Make sure to use straight double quotes:

echo "F 7015000" | nc -w 1 ::ffff: 7356
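The remote-control port speaks rigctl-style commands like the “F” (set frequency) command above. Here is a minimal Python sketch of the same exchange the telnet and netcat examples perform; the helper names are mine, and it assumes gqrx is running locally with remote control enabled on the default port 7356:

```python
import socket

def rigctl_line(command, *args):
    """Format one rigctl-style command line, e.g. ('F', 7015000) -> 'F 7015000\\n'."""
    return " ".join([command, *map(str, args)]) + "\n"

def send_to_gqrx(line, host="127.0.0.1", port=7356):
    """Send one command line to gqrx's remote-control TCP port and return
    the one-line reply; a set command answers 'RPRT 0' on success."""
    with socket.create_connection((host, port), timeout=1.0) as s:
        s.sendall(line.encode("ascii"))
        return s.recv(64).decode("ascii").strip()

# With gqrx running: send_to_gqrx(rigctl_line("F", 7015000))  # tune to 7.015 MHz
```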

In gqrx, select the I/Q demodulator and set the audio output device to BlackHole 2ch

For some reason, you can’t set the audio output sample rate to anything other than 48 kHz. This is apparently a feature, not a bug. So the I/Q output bandwidth is limited – therefore no wideband FM.
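To put numbers on that limitation, Carson’s rule estimates the bandwidth an FM signal occupies; a quick sketch (function name is mine, values are the usual broadcast-FM figures):

```python
def carson_bandwidth(deviation_hz, max_audio_hz):
    """Carson's rule: FM bandwidth is roughly 2 * (peak deviation + highest audio frequency)."""
    return 2 * (deviation_hz + max_audio_hz)

# Broadcast WBFM uses ~75 kHz deviation and ~15 kHz audio:
# carson_bandwidth(75e3, 15e3) -> 180000.0 Hz, far wider than a 48 kHz I/Q stream
```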

Unfortunately UDP audio streaming is limited to 1 channel, so no chance of I/Q streaming:


Updating Max/MSP internet sensor projects

Notes for updating from Max6 to Max8 in Mac OS Catalina

In general, 32-bit code will not work

Link to internetsensors project:


1. mxj object

Need to update, but the Oracle link leads to a dead-end message. Go to the Oracle download link, but instead of pressing the green download button, <ctrl>-click and save the link, as described in the instructions from intrepidOlivia in this link

2. aka.objects

I have used aka.speech – among others. These objects no longer work. Replace them with Jeremy Bernstein’s shell object:

NOTE: There’s a problem with [shell] – it rejects input that is converted to a symbol using [tosymbol].

This can be fixed by using [fromsymbol] – or by just eliminating [tosymbol]. It may affect the stderr/stdout redirection tokens, i.e., “>” and other special characters, but for now [shell] does not accept symbol input

aka.speech can be replaced using the “say” command in the shell. More details to follow about voice parameters.

‘say’ has similar params to aka.speech, e.g., voice name and rate. There are voices for specific languages. This feature could be used, for example, to match the language of a Tweet to an appropriate voice
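A minimal sketch of that language-to-voice idea in Python. The mapping below is hypothetical – the voice names are examples that ship with many macOS versions, but check `say -v '?'` for what is actually installed:

```python
import subprocess

# Hypothetical mapping from a tweet's language code to a macOS voice name.
VOICE_FOR_LANG = {"en": "Samantha", "es": "Monica", "fr": "Thomas", "de": "Anna"}

def say_args(text, lang="en", rate=180):
    """Build the argument list for the macOS 'say' command (-v voice, -r words per minute)."""
    voice = VOICE_FOR_LANG.get(lang, "Samantha")  # fall back to an English voice
    return ["say", "-v", voice, "-r", str(rate), text]

# On macOS: subprocess.run(say_args("bonjour", lang="fr"))
```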

3. Twitter streaming API

I revised the PHP code for the Twitter streaming project to use the coordinates of a corner of the city polygon bounding box. That seems to be more reliable than the geo coordinates, which are absent from most Tweets.
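The bounding-box fallback can be sketched in a few lines. Field names follow the v1.1 streaming payload; the function name is mine:

```python
def tweet_corner(tweet):
    """Return (longitude, latitude) for a tweet: the exact geo point when
    present, otherwise the first corner of the place bounding box."""
    geo = tweet.get("coordinates")
    if geo:  # exact point – absent from most tweets
        return tuple(geo["coordinates"])
    place = tweet.get("place")
    if place:  # GeoJSON polygon: [[[lon, lat], ...]]
        return tuple(place["bounding_box"]["coordinates"][0][0])
    return None
```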

There is a new API in the works – but it’s difficult to decipher the Twitter API docs because they have so many products and the documentation is obtuse.

Also, it would be interesting to extract the “language” field and use it to select which voice to use in the speech synthesizer. Or even have an English translation option.

4. Echonest API

Echonest was absorbed into Spotify and the API is gone. The Spotify API does have some of the feature detection and analysis code, but it doesn’t allow you to submit your own audio clips. There are also some efforts to preserve parts of Echonest, like the blog by Paul Lamere and the remix code. Here are a few links I found to get started.

Spotify API (features)

Echonest blog:

Amen – algorithmic remix project:

5. Google speech to text

Several issues:

  • Replacing [] with [shell] – instead of using [tosymbol], this workaround seems to help
  • Now have rewritten all of the recording code and shell interactions with Google.
  • Still need to work on voice options for the ‘say’ command (text to speech)
  • pandorabots API problems: turned out that the URL needed to be https instead of http

6. twitter curl project

Looks like it’s gone. Maybe purchased by Google? Anyway – this project is toast

7. Twitter via Ruby

Got this working again.

8. Bird calls from

This patch has been completely rewritten. The old API was obsolete. This version uses [dict] and [maxurl] to format and execute the initial query. Then it uses [jit.uldl] to download the mp3 file with the bird-call audio. Interesting that [maxurl] would not download the file using the “download” URL. It only worked with a URL containing the actual file name.

9. ping

Needed to reinstall ruby gems using xcrun (see above)

There seems to be a problem with Mashape:

Could not resolve host: (Patron::HostResolutionError)

[Mashape was acquired by – so the code in the Ruby server will need to be refactored.]



Returning after a long long time

For many years this website lived on an Amazon cloud server. Around 2016 the attacks began. Malware and viruses swarmed the site and used it to send bizarre emails. Then the people at Amazon threatened to shut it down if I didn’t do something. I added recommended security features. The attacks continued. Eventually I closed the site to all outside IP addresses. Until today. I found a new hosting platform.


Voltage controlled variable capacitors

Also called varicaps, they are diodes operated in a reverse-bias condition. As the reverse voltage increases, the capacitance decreases.
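That voltage-to-capacitance relationship follows the standard junction-capacitance model. A small sketch, with illustrative component values (the 100 pF zero-bias figure is just an example):

```python
def varactor_capacitance(v_reverse, c0=100e-12, phi=0.7, n=0.5):
    """Junction-capacitance model: C(V) = C0 / (1 + V/phi)**n.
    c0:  zero-bias capacitance (illustrative 100 pF)
    phi: junction potential (~0.7 V for silicon)
    n:   grading coefficient (0.5 for an abrupt junction, ~0.33 graded)"""
    return c0 / (1.0 + v_reverse / phi) ** n

# Sweeping the reverse bias from 0 V upward shrinks the capacitance,
# which raises the resonant frequency of an LC tank built around the diode.
```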

Read this first

Tutorial by Phillip Atchely, KO6BB

More tutorials

Using Varactors by Stefan Hollos and Richard Hollos:

Tutorial by Ian Poole at Radio-Electronics 

Another tutorial from Radio-Electronics?:

Threads from The RadioBoard Forum:

Varactor-tuned regenerative radio by Tony G4WIF, “The Two Dollar Regen”: … ntid=28430. More information here:


Slice//Jockey help

A compilation of Pd help screens.

Slice//Jockey is an interactive audio performance instrument

by Katja Vetter

Screen Shot 2015-04-19 at 5.03.09 PM


Click any of the screenshots to see a full size image.

Screen Shot 2015-04-19 at 4.57.49 PM

X-Y field

The x-y origin is in the lower left corner. In the x plane numbers increase from left to right. In the y plane numbers increase from bottom to top.

  • white: left slice unit
  • grey: right slice unit
  • black: global

Screen Shot 2015-04-19 at 5.59.03 PM

Notes on notes:

  • x: pitch low to high (independent of BPM)
  • y: rhythmic variation low to high

Screen Shot 2015-04-19 at 4.58.27 PM

Screen Shot 2015-04-19 at 4.59.05 PM

Screen Shot 2015-04-19 at 4.59.17 PM

Screen Shot 2015-04-19 at 5.01.49 PM

Screen Shot 2015-04-19 at 5.02.03 PM

Screen Shot 2015-04-19 at 5.02.13 PM

Screen Shot 2015-04-19 at 5.02.36 PM

Screen Shot 2015-04-19 at 4.58.52 PM

Audio Settings

Screen Shot 2015-04-19 at 5.01.27 PM

Screen Shot 2015-04-19 at 4.59.33 PM

I/O section

Screen Shot 2015-04-19 at 5.00.32 PM

Screen Shot 2015-04-19 at 5.00.41 PM

Screen Shot 2015-04-19 at 5.00.58 PM

Screen Shot 2015-04-19 at 5.00.49 PM


Patterns and slices

Slice units left and right

Screen Shot 2015-04-19 at 4.59.57 PM

Screen Shot 2015-04-19 at 5.56.19 PM

Screen Shot 2015-04-19 at 5.00.06 PM

Screen Shot 2015-04-19 at 5.00.17 PM

Screen Shot 2015-04-19 at 5.53.28 PM

Screen Shot 2015-04-19 at 5.53.08 PM

Global recorder

Screen Shot 2015-04-19 at 4.59.42 PM

ep-341 Max/MSP – Spring 2015 week 13

Algorithmic composition and generative music – part 2


Reactive music

With reactive music, audio is the input. Music is the output. Music can also be the input.

from Wikipedia:

“Reactive music, a non-linear form of music that is able to react to the listener and his environment in real-time.[2] Reactive music is closely connected to generative music, interactive music, and augmented reality. Similar to music in video games, that is changed by specific events happening in the game, reactive music is affected by events occurring in the real life of the listener. Reactive music adapts to a listener and his environment by using built-in sensors (e.g. camera, microphone, accelerometer, touch-screen and GPS) in mobile media players. The main difference to generative music is that listeners are part of the creative process, co-creating the music with the composer. Reactive music is also able to augment and manipulate the listener’s real-world auditory environment.[3]

What is distributed in reactive music is not the music itself, but software that generates the music…”

Ableton Live field recorder

Uses dummy clips to apply rhythmic envelopes and effects to ambient sound:

InstantDecomposer and Slice/Jockey

Making music from sounds that are not music.

by Katja Vetter

InstantDecomposer is an update of Slice//Jockey. It has not been released publicly. Slice//Jockey runs on Mac OS, Windows, and Linux – including Raspberry Pi

Slice//Jockey help:

Slice//Jockey is written in Pd (Pure Data) – open source – the original Max.

By Miller Puckette

Local file reference

  • local: InstantDecomposer version: tkzic/pdweekend2014/IDecTouch/IDecTouch.pd
  • local: slicejockey2test2/slicejockey2test2.pd


A music factory.

By Christopher Lopez

Inception and The Dark Knight iOS apps:

As of iOS 8.2, Dark Knight crashes on load. Inception only works with “Reverie Dream” (lower left corner)

Running RJDJ scenes in Pd in Mac OS X

Though RJDJ is a lost relic in 2015, it still works in Pd. The example scenes used here are meant to run under libpd on iOS or Android, but they will actually work in Mac OS X.

Screen Shot 2015-04-18 at 10.13.01 PM

First, use Pd-extended. OK, maybe you don’t need to.

1. Read the article from Makezine by Mike Dixon

2. Download sample scenes from here: The link is under the heading “RJDJ Sources”

3. Download RJLIB from here:

4. Add these folders in RJLIB to your Pd path (in preferences)

  • pd
  • rj
  • deprecated

5. Now, try running the scene called “echelon” from the sample scenes you downloaded. It should be at rjdj_scenes/Echelon.rj/_main.pd

  • turn on audio
  • turn up the sliders
  • you should hear a bunch of crazy feedback delay effects

Note: with Pd-extended 0.43-4 the error message “base: no method for ‘float’” fills the console as soon as audio is turned on.

Scenes that I have got to work:

The ones marked with a * seem to work well without modification or an external GUI. They all produce error messages – and they are really meant to run on mobile devices, so a lot of the sensor input isn’t there.

  • Amenshake (you will need to provide accelerometer data somehow)
  • Atsuke (not sure how to control)
  • CanOfBeats (requires accelerometer)
  • ChordSphere (sounds cool even without accelerometer)
  • Diving*
  • DubSchlep* (interesting)
  • Ehchelon*
  • Eargasm*
  • Echochamber*
  • Echolon*
  • Flosecond (requires accelerometer)
  • FSCKYOU* (Warning, massive earsplitting feedback)
  • Ghostwave* (Warning, massive earsplitting feedback)
  • HeliumDemon (requires accelerometer)
  • JingleMe*
  • LoopRinger*
  • Moogarina
  • NobleChoir* (press the purple button and talk)
  • Noia*
  • RingMod*
  • SpaceBox*
  • SpaceStation (LoFi algorithmic synth)
  • WorldQuantizer*

to be continued…

Random RJDJ links

Including stuff explained above.



I’m thinking of something: