WebSDR as a remote receiver with Max/MSP and the Elecraft K4

A remote receiver project using WebSDR as an alternative to a local receiver.

A demonstration of a Max/MSP program that connects an amateur radio transceiver to WebSDR – transmitting locally from Maine (USA) while receiving remotely using a radio in the Netherlands. The Max program reads the frequency from an Elecraft K4 transceiver to control the WebSDR sites. It also loads the remote receivers and controls audio routing, mode, filter, and waterfall display settings. An iPad running TouchOSC acts as a control panel. Up to 4 remote receivers operate at the same time. WebSDR is a remarkable system, developed by PA3FWM at http://websdr.org/, that lets you control remote receivers worldwide from your Web browser.

Components:

  • Max/MSP
  • Websdr
  • TouchOSC
  • Elecraft K4 transceiver with antenna system
  • Skookumlogger (logging software)

Max Patches:

websdrjweb7.maxpat : main control program. Contains [jweb] objects for launching WebSDR instances, plus code for injecting javascript to control parameters like frequency, filter, and volume. This patch acts as an intermediary between TouchOSC and WebSDR, allows external MIDI control, and accepts frequency input from CAT-controlled radios like the Elecraft K4.
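As a rough illustration of the injection approach, the snippet below is the kind of javascript that might be sent to a [jweb] instance. The actual WebSDR pages expose their own tuning functions, whose names vary by site, so the names used here are placeholders, not the real page API.

```javascript
// Illustrative only: the kind of javascript injected into a websdr page.
// setfreq/setband are placeholder names, guarded so the snippet is harmless
// if the page defines different functions.
function tuneRemote(freqKHz, low, high) {
  if (typeof setfreq === 'function') setfreq(freqKHz);   // tune the receiver
  if (typeof setband === 'function') setband(low, high); // set the passband
}
tuneRemote(14025.45, -650, -250); // e.g. a CW signal received in LSB mode
```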

websdrCATaudio.maxpat : handles serial port interaction for the K4. Also reads the audio stream from either the K4 receiver (via USB) or the WebSDR receiver (via BlackHole). I created an aggregate audio device called K4sdr to allow Max to read both devices at the same time. Audio switching and levels are handled with a Korg nanoKONTROL2, for example to switch between the audio streams or listen to both.
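For reference, the K4 speaks an Elecraft/Kenwood-style CAT protocol over the serial port. Here is a minimal node.js sketch of the kind of frequency parsing involved; the "FA" response format follows Elecraft's documented convention, but treat the details as an assumption and not as the patch's actual code.

```javascript
// Minimal sketch: parse Elecraft-style CAT responses such as "FA00014025000;"
// (VFO A frequency in Hz, 11 digits). Assumes Kenwood/Elecraft framing with ';'.
function parseCAT(buffer) {
  const messages = buffer.split(';').filter(Boolean);
  return messages.map((msg) => {
    if (msg.startsWith('FA') || msg.startsWith('FB')) {
      return { vfo: msg.slice(0, 2), hz: parseInt(msg.slice(2), 10) };
    }
    return { raw: msg }; // other commands (mode, filter, etc.) pass through
  });
}

console.log(parseCAT('FA00014025000;')); // -> [ { vfo: 'FA', hz: 14025000 } ]
```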

Optional: arduino-ptt-detect2.maxpat : reads serial data from an Arduino connected to the amplifier keying line to determine whether the radio is in transmit mode, so the patch can switch back to the local audio stream and avoid the latency of hearing your own signal via WebSDR. See the subsequent post about this setup…

TouchOSC

websdrCW3.touchOSC : controls all 4 WebSDR channels, i.e., volume, mute, filter, CW offset, and filter shift. Also handles window management, loading js code, zooming the WebSDR waterfall in and out, and selecting channel waterfall views or Max code views.

CW Offset

WebSDR doesn’t have a control for CW pitch offset. To stay in sync with the frequency of the K4, the WebSDR is run in LSB mode with a frequency offset equal to the CW pitch setting in the transceiver, e.g., 450 Hz. This works for most of the WebSDR sites, but unfortunately some of the SDRs are off frequency. You can usually compensate by adjusting the CWfreqOffset for that channel (in Max or TouchOSC).

Setting the offset also requires shifting the filter so it is centered over the actual signal.
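A tiny sketch of the arithmetic, assuming the convention that the WebSDR channel is tuned above the K4 dial frequency by the CW pitch (the sign may need flipping for a particular off-frequency SDR):

```javascript
// Sketch of the CW offset arithmetic. With the websdr in LSB mode, a carrier
// below the dial frequency produces an audio tone equal to the difference,
// so tuning (dial + pitch) puts the signal at the desired sidetone pitch.
function websdrDialKHz(k4FreqKHz, cwPitchHz, cwFreqOffsetHz = 0) {
  return k4FreqKHz + (cwPitchHz + cwFreqOffsetHz) / 1000;
}

websdrDialKHz(14025.0, 450);      // -> 14025.45 kHz
websdrDialKHz(14025.0, 450, -30); // per-channel correction for an off-frequency SDR
```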

Files

This is currently a work in progress, not available on GitHub. Local files are in the max teaching examples folder.

Videosync remix projects

New experiments using Videosync with Ableton Live.

A remix of “Gappa the Triphibian Monster” (1967), directed by Haruyasu Noguchi. Produced using Ableton Live 12 and Videosync. LosslessCut was used for slicing and CapCut for editing. The underwater sequence uses the Ableton Audio Effects Rack “Dawn Shimmers”.

Yet another remix of “Invasion of The Neptune Men” by Koji Ota. Produced with Videosync, Ableton Live, and LosslessCut. The editing was done entirely in Live, which is probably not a great way to do extensive video edits. There was also some trouble with Live 11, but the problems were resolved by installing Live 12.

Another tribute to “The Invasion of The Neptune Men” by Koji Ota, produced with Videosync in Ableton Live. This was my first effort with Videosync. I tried to edit the video based on the sounds of the clips.

Amateur radio contesting time-lapse map

Mapping geocoded contest log data using node.js and openlayers.

The goal was to make something that looks like the Reverse Beacon Network map, only for contest log files. I use RBN for testing antennas now. That map display gives you a pretty good idea of your actual antenna pattern.

Code is written in node.js (javascript) and html.

Part 1: Read a Cabrillo log file containing QSO: records. Look up each callsign, get latitude and longitude, and rewrite the file as json data tagged with geo coordinates. I originally tried getting the data from hamQTH but it was not current, so I ended up using the qrz.com xml callsign lookup. For callsigns “not found” I used the qrz.com dxcc prefix lookup to get general coordinates for the country. There are still a few bad/missing data issues to resolve, like European stations with coordinates at the South Pole.
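A condensed sketch of that lookup flow, assuming the qrz.com XML interface's session-key scheme (get a session key with your account credentials, then query each callsign) and node 18+ fetch. The real index.js also handles the DXCC prefix fallback and writes geocab.json; treat the endpoint and field names here as assumptions and check the qrz.com documentation.

```javascript
// Sketch: geocode a callsign via the qrz.com XML interface.
const QRZ = 'https://xmldata.qrz.com/xml/current/';

async function qrzSession(user, pass) {
  const text = await (await fetch(`${QRZ}?username=${user};password=${pass}`)).text();
  return text.match(/<Key>([^<]+)<\/Key>/)?.[1]; // session key for later queries
}

async function lookup(key, callsign) {
  const text = await (await fetch(`${QRZ}?s=${key};callsign=${callsign}`)).text();
  const lat = text.match(/<lat>([^<]+)<\/lat>/)?.[1];
  const lon = text.match(/<lon>([^<]+)<\/lon>/)?.[1];
  return lat && lon ? { callsign, lat: +lat, lon: +lon } : null; // null -> DXCC fallback
}
```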

Part 2: I tried various mapping frameworks, like leaflet, arcgis, and openlayers. I wanted to use a great-circle projection (azimuthal equidistant) like the big ARRL world map, and may still figure this out, but working with map projections and coordinate transforms is way worse than doing a Smith Chart. I ended up hacking a flight tracking example from openlayers.org and basically replacing airplanes with QSOs – that is why the lines are animated from source to destination. I also added a day/night layer and a QSO/time status display.
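The core of the OpenLayers part is just turning each geocoded QSO into a line feature on a vector layer. A stripped-down sketch (without the flight-path animation or day/night layer) might look like this; the record field names are illustrative, not necessarily those in geocab.json:

```javascript
// Stripped-down sketch: plot one QSO as a line from the home QTH to the
// contacted station on an OpenLayers map (standard Web Mercator view).
import Map from 'ol/Map';
import View from 'ol/View';
import TileLayer from 'ol/layer/Tile';
import OSM from 'ol/source/OSM';
import VectorLayer from 'ol/layer/Vector';
import VectorSource from 'ol/source/Vector';
import Feature from 'ol/Feature';
import LineString from 'ol/geom/LineString';
import { fromLonLat } from 'ol/proj';

const home = fromLonLat([-70.5, 44.0]); // approximate Maine QTH
const qsoSource = new VectorSource();

new Map({
  target: 'map', // <div id="map"> in index.html
  layers: [new TileLayer({ source: new OSM() }), new VectorLayer({ source: qsoSource })],
  view: new View({ center: home, zoom: 3 }),
});

// One geocoded QSO record (field names are illustrative)
const qso = { call: 'DL1ABC', lat: 51.0, lon: 10.0 };
qsoSource.addFeature(
  new Feature({ geometry: new LineString([home, fromLonLat([qso.lon, qso.lat])]) })
);
```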

It probably makes sense to get rid of the flight animation and just display the entire path in sync with the QSO data – with a color code for each band (K1KP), speed control on the time lapse, etc. – so you can get a better sense of rate and propagation.

It would be cool to have a website where you could upload a log file and generate maps.

Note: this project is not yet available

Files

local files:

generating data:

internetsensors/cabrillomap

Put the cabrillo data in testdata.cbr (use QSO: records only for now); records should be sorted chronologically.

run:  node index.js

the output file will be: geocab.json (which is used as input to the mapping program)

mapping

internetsensors/oltest

main.js = node source with ol mapping and data processing

index.html = web page for map

geocab.json = geocoded cabrillo json test data

to run, type: npm start

Then open: http://localhost:5173/ in a browser

Additional work / current issues

Some of the qrz.com callsign data has bad geo coordinates. In particular some of the records show a latitude of -89 and longitude of -179 – need to check for these numbers and replace them with dxcc coordinates.
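A simple guard of the kind described, assuming geocab.json records carry lat/lon fields; dxccCoords() here is a hypothetical helper standing in for the existing DXCC prefix lookup:

```javascript
// Replace obviously bad qrz.com coordinates (the near-South-Pole values)
// with DXCC-derived country coordinates. dxccCoords() is a hypothetical helper.
function fixCoordinates(record) {
  if (record.lat <= -89 || record.lon <= -179) {
    const fallback = dxccCoords(record.callsign); // e.g. { lat: 51.0, lon: 10.0 }
    return { ...record, lat: fallback.lat, lon: fallback.lon, geoSource: 'dxcc' };
  }
  return record;
}
```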

There should be an argument to the node program to pass in the datafile. Also the program should clean up any non-QSO: records, like the file header info and any X-QSO records.

Also need to clean up the async/await stuff – currently there are several methods for handling state transitions.

mapping ideas:

As mentioned above, it's probably a good idea to make a version of the code without the flight animation, and add controls to stop/start the data playback, look at individual QSOs, adjust the playback speed, etc.

azimuthal equidistant projection: there are some links to examples in leaflet and arcgis that handle complex projections. In documents, look at: “map links for projection stuff.txt”

leaflet test version:

In the internetsensors/cabrillomap folder there’s a test file, cbworld1.html, that works using websockets when you run the index.js file to generate test data. It uses a leaflet map, but the lines don’t adapt to great circle polar paths.

arcgis

I believe the arcgis examples are in internetsensors/projected geometries

And: internetsensors/pe-gs-projection

The former is a very nice world projection with some point markers. The latter is an example that shows how to switch out various projections in realtime.

X API update

Attempts to use the X API (formerly Twitter) for projects with Max/MSP have been disappointing at best. Most of the API is behind a paywall now. The cost is $5000 per month to implement the streaming API used in projects like this: https://reactivemusic.net/?p=5786

The free tier only allows basic tweeting and user lookup. Search is not available.

I was able to find only one node example that actually worked in the free tier, by “Coding with Ado”. The code requests a token from X, and then you enter a timestamped PIN number to continue, which makes it worthless for programs and bots. https://youtu.be/G5ZW5j5cwHk?si=vbAtGa0bQ3T_tga9

A local copy of the source code for this is in tkzic/nodetweet3/index.js

Other options

Another option with X is to use a service like Socialdata. https://socialdata.tools/

Their service sits in the middle to handle X API calls. You are charged by the number of calls. It doesn’t offer streaming either, but you can simulate it by calling a search every few seconds.
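A generic polling loop of that kind might look like the sketch below; the endpoint, auth header, and response fields are placeholders, not Socialdata's actual API, so check their documentation before using it.

```javascript
// Pseudo-streaming by polling a search endpoint every few seconds.
// The URL, auth header, and response fields below are placeholders.
const seen = new Set();

async function poll(query, apiKey) {
  const res = await fetch(`https://example-search-endpoint/?q=${encodeURIComponent(query)}`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  const { results = [] } = await res.json();
  for (const post of results) {
    if (!seen.has(post.id)) {
      seen.add(post.id);
      console.log(post.text); // in practice, hand off to Max here
    }
  }
}

setInterval(() => poll('#hamradio', process.env.API_KEY), 5000); // every 5 seconds
```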

Other social media options

There are APIs for other social sites like Facebook, Instagram, TikTok, etc.

Milford Graves Experiment

Milford Graves was one of my all-time favorite musicians. His approach to percussion, and to music generally, was unique in a way that defies explanation.

I sampled a bunch of clips of his drumming into Ableton Live and then experimented with the Buffer Shuffler 2.0 device to see if I could randomize small slices, i.e., several seconds each, of longer samples – without losing the “texture” of the original recordings.

Here is an example of what it sounds like:

This video shows a clip from David Murray’s “Real Deal” running through Buffer Shuffler using slices only about 2-3 seconds in length. The slicing rate is just arbitrary, since there is no warping or specific clock pulse.

Local files: tkzic/aardvark/milfordgraves1 project/milfordgraves1a.als

Spotify segment analysis player in Max

Echo Nest API audio analysis data is now provided by Spotify. This project is part of the internet-sensors project: https://reactivemusic.net/?p=5859

There is an older version here using the discontinued Echo Nest API: https://reactivemusic.net/?p=6296

Note:  Last tested 2024/01/21

The original analyzer document by Tristan Jehan can be found here (for the time being):  https://web.archive.org/web/20160528174915/http://developer.echonest.com/docs/v4/_static/AnalyzeDocumentation.pdf

This implementation uses node.js for Max instead of Ruby to access the API. You will need to set up a developer account with Spotify and request API credentials. See below.

Other than that, the synthesis code in Max has not changed. Some of the following background information and video is from the original version.

What if you used that data to reconstruct music by driving a sequencer in Max? The analysis is a series of time-based quanta called segments. Each segment provides information about timing, timbre, and pitch – roughly corresponding to rhythm, harmony, and melody.
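Each segment arrives as a small record; a sketch of how one might map it to a note event is below. The field names follow the Spotify audio-analysis response (pitches, timbre, loudness_max), but treat the mapping itself as illustrative rather than what the patch does.

```javascript
// Map one analysis segment to a simple note event. pitches[] is a 12-element
// chroma vector (C..B, 0-1), timbre[] is 12 abstract timbre coefficients,
// loudness_max is in dB.
function segmentToEvent(seg) {
  const strongest = seg.pitches.indexOf(Math.max(...seg.pitches)); // 0 = C ... 11 = B
  return {
    timeMs: seg.start * 1000,
    durationMs: seg.duration * 1000,
    midiNote: 60 + strongest,                       // place the chroma in octave 4
    velocity: Math.min(127, Math.round(127 * Math.pow(10, seg.loudness_max / 20))),
    brightness: seg.timbre[1],                      // 2nd coefficient ~ brightness
  };
}
```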

spotify-synth1.maxpat

download

https://github.com/tkzic/internet-sensors

folder: spotify2

files

main Max patch
  • spotify-synth1.maxpat
abstractions and other files
  • polyvoice-sine.maxpat
  • polyvoice2.maxpat
node.js code
  • spot1.js
node folders and infrastructure
  • /node_modules
  • package-lock.json
  • package.json
dependencies:
  • You will need to install node.js
  • the node package manager will do the rest – see below.

Note: Your best bet is to just download the repository, leave everything in place, and run it from the existing folder

authentication

You will need to sign up for a developer account at Spotify and get an API key. https://developer.spotify.com/documentation/general/guides/authorization-guide/

Edit spot1.js, replacing the clientID and clientSecret with your Spotify credentials.
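For reference, those credentials are used for Spotify's client-credentials token flow. Here is a standalone sketch of that flow (separate from spot1.js, which may structure things differently), using node 18+ fetch:

```javascript
// Standalone sketch of the Spotify client-credentials flow: exchange the
// clientID/clientSecret for a bearer token, then fetch a track's analysis.
async function getToken(clientID, clientSecret) {
  const res = await fetch('https://accounts.spotify.com/api/token', {
    method: 'POST',
    headers: {
      Authorization: 'Basic ' + Buffer.from(`${clientID}:${clientSecret}`).toString('base64'),
      'Content-Type': 'application/x-www-form-urlencoded',
    },
    body: 'grant_type=client_credentials',
  });
  return (await res.json()).access_token;
}

async function audioAnalysis(token, trackId) {
  const res = await fetch(`https://api.spotify.com/v1/audio-analysis/${trackId}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  return res.json(); // contains segments, sections, bars, beats, tatums
}
```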

node for max install instructions (first time only)

  •  Open the Max patch: spotify-synth1.maxpat
  •  Scroll the patch over to the far right side until you see this green panel:

  • Click the [script npm init] message – this initializes the node infrastructure in the current folder
  • Then click each of the 2 [script npm install] messages – this installs the necessary libraries

Instructions

  •  Open the Max patch: spotify-synth1.maxpat
  •  Click the green [script start] message
  • Click the Speaker icon to start audio
  • Click the first dot in the preset object to set the mixer settings to something reasonable
  • open the Max Console window so you can see the Spotify API data
  • From the 2 menus at the top of the screen select an Artist and Title that match, for example: Albert Ayler and “Witches and Devils”
  • Click the [analyze] button – the console window should fill with interesting data about your selection.
  • Click [play]
  • Note: if you hear a lot of clicks and pops, reduce the audio sample rate to 44.1 KHz.
Alternative search method:

Enter an Artist and Song title for analysis in the text boxes. Then press the buttons for title and artist. Then press the /analyze button. If it works you will get prompts from the terminal window and the Max window, and you should see the time in seconds in the upper right corner of the patch.

troubleshooting

If there are problems with the analysis, it's most likely due to one of the following:

  • artist or title spelled incorrectly
  • song is not available
  • song is too long
  • API is busy
Mixer controls

The mixer channels, from left to right, are:

  • bass
  • synth (left)
  • synth (right)
  • random octave synth
  • timbre synth
  • master volume
  • gain trim
  • HPF cutoff frequency
You can also adjust the reverb decay time and the playback rate. Normal playback rate is 1.

programming notes

Best results happen with slow abstract material, like the Miles (Wayne Shorter) piece above. The bass is not really happening. Lines all sound pretty much the same. I’m thinking it might be possible to derive a bass line from the pitch data by doing a chordal analysis of the analysis.

Here are screenshots of the Max sub-patches (the main screen is in the video above)

Timbre (percussion synth) – plays filtered noise:

Random octave synth:

Here’s a Coltrane piece, using roughly the same configuration but with sine oscillators for everything:

There are issues with clicks on the envelopes and the patch is kind of a mess but it plays!

Several modules respond to the API data:

  • tone synthesizer (pitch data)
  • harmonic (random octave) synthesizer (pitch data)
  • filtered noise (timbre data)
  • bass synthesizer (key and mode data)
  • envelope generator (loudness data)

Since the key/mode data is global for the track, bass notes are probable guesses. This method doesn’t work for material with strong root motion or a variety of harmonic content. It’s essentially the same approach I use when asked to play bass at an open mic night.
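Just to illustrate the guesswork involved, here is a sketch of one way to pick bass notes from the global key/mode plus a segment's chroma vector. This is not necessarily how the Max patch does it; the key/mode encoding follows the track-level analysis data.

```javascript
// Guess a bass note from the track-level key/mode plus a segment's chroma.
// key: 0-11 (C..B), mode: 1 = major, 0 = minor.
function guessBassNote(key, mode, pitches) {
  const chordTones = mode === 1 ? [0, 4, 7] : [0, 3, 7];        // root, third, fifth
  let best = chordTones[0];
  for (const tone of chordTones) {                               // favor the chord tone
    if (pitches[(key + tone) % 12] > pitches[(key + best) % 12]) best = tone;
  }
  return 36 + ((key + best) % 12);                               // MIDI note in a bass octave
}
```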

additional notes

Now that this project is running again, I plan to write additional synthesizers that follow more of the spirit of the data. For example, distinguishing strong pitches from noise.

I would also like to make use of the [section] data as well as the rhythmic analysis. There is an amazing amount of potential here.

Analog video synthesis

Generative art in motion.


https://vimeo.com/cskonopka

By Christopher Konopka

Background

In the past year, Chris has published nearly 2500 improvised video pieces.


You may be familiar with analog modular audio synthesis. The hardware to produce video looks nearly identical – a maze of patch cords and dials.

Television


Analog video is television. A CRT (cathode ray tube) resynthesizes video information by demodulating signals from a camera. Vintage televisions have dials to adjust color and vertical sync. When you turn the dials you are synthesizing analog video. Distortion, filtering, and feedback – either at the source (camera) or the destination (tv screen) – offer up an infinite variety of images.

Analog vs. Digital

Today all media is digital. Like the screen you are looking at. The difference with analog is in how it’s produced. Boundaries are less definite. Lines curve. Colors waver. Feedback looks like flames. Every frame is a painting.

https://vimeo.com/172035463

Patterns

Images can be generated electronically using modules – without a camera.

Filters

Like with audio sampling, anything is a source: movies, YouTube, live television, even Felix the Cat.

https://vimeopro.com/cskonopka/analogvideo-december-2015/video/153312961

Feedback

When you aim a guitar at an amplifier it screams. Tilt it away slightly and the screaming subsides. In between there’s a sweet spot. The same is true with cameras and screens. Feedback results when output is mixed with input.

Radio

https://vimeopro.com/cskonopka/analogvideo-december-2015/video/153306760

Analog shortwave radio signals are distorted by the atmosphere in a manner similar to video filtering.

A studio in Bethel, Maine.


An improvised collaboration between Chris and Tom Zicarelli using shortwave radio processed with audio effects.

Live Performance

Gem

https://www.instagram.com/p/BImQwOGBveV/?taken-by=cskonopka

A recent screen test at the Gem Theatre in Bethel, Maine. Source material is a time lapse film of a glacier installation – produced at the same theatre – by Wade Kavanaugh and Steven Nguyen. https://www.youtube.com/watch?v=6c36Y-Dcj30  The film was re-synthesized using analog video and feedback. Soundtrack by Tom Zicarelli.

https://www.instagram.com/p/BImRSzHBOLL/?taken-by=cskonopka

Big screen equals mind bending experience.

Note: previous clip excerpted from this 15 minute jam: https://vimeo.com/177843310

TAL

The patterns in this clip appear to be three dimensional. They are not.

From a show that happened somewhere in the known universe:

Alto

Improvised analog video with the band “Alto”. Patterns reminiscent of magical textiles.

More about analog video synthesis