Generic SDR realtime IQ converter

With CubicSDR, HAMLIB, and Max

Update (1/25/2021): Using this general setup with Airspy, CubicSDR, rigctld, and netcat (nc) to send IQ data into the basicSDR3.maxpat patch.


CubicSDR uses the SoapySDR library as a generic tool for extracting realtime IQ data streams from common SDR devices. It also provides external frequency control over TCP using Hamlib.

http://cubicsdr.com/

Although it's not the main purpose of CubicSDR, the IQ streaming capability will connect SDR devices to Max, Pd, and other DSP platforms for building experimental radios, all without writing external objects or hardware device drivers. The convenience of using CubicSDR for this purpose far outweighs the overhead.

A prototype with Max and rtl-sdr

How to use CubicSDR as a front-end for SDR experiments in Max.

The signal path for this test is:

  1. antenna
  2. NooElec HAM IT UP upconverter
  3. rtl-sdr dongle
  4. CubicSDR
  5. Soundflower (or an external audio device wired as a loopback)
  6. Max

Running in the other direction, the frequency control path is:

  1. netcat running in Mac OS X terminal (or a Max patch that sends TCP)
  2. rigctld (hamlib TCP server)
  3. CubicSDR
  4. rtl-sdr dongle

There's a lot going on here, so the choice to use hardware audio routing instead of Soundflower, and netcat instead of TCP in Max, is an effort toward simplicity.

CubicSDR settings:

  • Plug in the rtl-sdr before launching CubicSDR, so it will be discovered on the setup screen
  • On the main display, click just to the right of the mode buttons to bring up a drop-down menu of audio devices
  • Select I/Q mode
  • Select the audio device (or Soundflower) that you will use to route audio to Max
  • If using an upconverter, set the 'frequency offset' in the settings menu (e.g., -125000000)
  • Click on any of the frequency digits, press space, and enter the same frequency as the Center Frequency (e.g., 7000000)
  • Click the 'V' to the left of the frequency digits to select 'delta lock mode'. This keeps the frequency and center frequency in sync.
  • Be careful not to click anywhere in the waterfall window, or this will break the sync
  • Under the Rig Control menu:
    • Select “Hamlib NET rigctl” as the model
    • Enter localhost:4532 as the control port
    • Select 57600 as the serial rate
    • Make sure that “follow rig” and “floating center” are checked
    • Check 'Enable Rig'. If it doesn't stay checked, there is a problem with the connection.
  • Under the Audio sample rate menu, select the correct sample rate for your audio device (e.g., 96k)

TCP and rigctld settings

  • Open a terminal window
  • Type: rigctld -m 1 -t 4532 &
  • This starts the server running in the background using the HAMLIB test dummy rig
  • To set the frequency to 7.010 MHz, type:

    echo 'F 7010000' | nc -w 1 localhost 4532

  • This should change both the frequency and center frequency in CubicSDR (a scripted alternative to netcat is sketched below)
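As an alternative to typing netcat commands, the same rigctl protocol can be scripted. Here is a minimal Python sketch, assuming rigctld is listening on localhost:4532 as configured above; the helper names are made up for this example, but the 'F <hz>' (set) and 'f' (get) commands are standard rigctl.

    # Minimal rigctld TCP client sketch (hypothetical helper names)
    import socket

    def set_frequency(hz, host="localhost", port=4532):
        # rigctl 'F' command sets the frequency in Hz; rigctld replies "RPRT 0" on success
        with socket.create_connection((host, port), timeout=1) as s:
            s.sendall(f"F {hz}\n".encode())
            return s.recv(1024).decode().strip()

    def get_frequency(host="localhost", port=4532):
        # rigctl 'f' command reads back the current frequency in Hz
        with socket.create_connection((host, port), timeout=1) as s:
            s.sendall(b"f\n")
            return s.recv(1024).decode().strip()

    print(set_frequency(7010000))   # same effect as the netcat example above
    print(get_frequency())

This is essentially what the Max TCP patch mentioned in the Notes section does, one command per connection.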

Max settings

For this test you can use any of the MaxSDR tutorials available at https://github.com/tkzic/maxradio, but I chose the main program, currently maxsdr7a.maxpat. The key is to set Max's audio input device to the same device that CubicSDR is sending its I/Q output to. I used a stereo patch cord to connect the line output of my Apollo Twin interface to the input jacks, but you can also use Soundflower.

  • Set the audio input device to match CubicSDR, as described above. Also match the sample rate (e.g., 96k)
  • Set the audio output device to your internal soundcard/speakers
  • You may need to toggle the flip IQ button
  • Start audio and recall preset 1 or some normal settings for SSB
  • It should now be receiving I/Q data from CubicSDR (see the sketch below for what the patch does with the stereo pair)
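For clarity about what "I/Q data as stereo audio" means, here is a conceptual sketch in Python/NumPy, not the Max patch itself: the left and right channels are treated as the real and imaginary parts of a complex signal, which can then be shifted in frequency before demodulation. The 96 kHz rate matches the setup above; the 10 kHz test offset is just an assumption for the example.

    # Conceptual sketch: stereo channels as I/Q (test signal, not captured audio)
    import numpy as np

    fs = 96000                                  # audio sample rate (match CubicSDR)
    n = np.arange(fs)                           # one second of samples
    f_offset = 10000                            # pretend the signal sits 10 kHz above center

    i = np.cos(2 * np.pi * f_offset * n / fs)   # left channel  -> I (in-phase)
    q = np.sin(2 * np.pi * f_offset * n / fs)   # right channel -> Q (quadrature)

    iq = i + 1j * q                             # complex baseband signal
    lo = np.exp(-2j * np.pi * f_offset * n / fs)
    shifted = iq * lo                           # signal of interest moved to 0 Hz

    # Swapping the channels mirrors the spectrum; this is the kind of sideband
    # inversion that the 'flip IQ' button in the patch compensates for.
    flipped = q + 1j * i

The patch does the equivalent processing in MSP; the sketch only shows the arithmetic.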

Links:

Installing Hamlib: https://reactivemusic.net/?p=19402

Installing CubicSDR: https://github.com/cjcliffe/CubicSDR/releases

Supported SDR devices: https://reactivemusic.net/?p=19746

Notes:

I had some success using the Max TCP external described at the Installing Hamlib link above, but temporarily abandoned it due to some latency and dropouts.

Local version of this patch is: tcpClient-small2.maxpat

Next steps:

  • Hardware (i.e., MIDI controller) control of frequency, and refinement of the Max TCP patch. Can likely re-use the patch from the remote radio project (a rough sketch follows this list).
  • Convert to Pd: TCP/IP code is built in
  • Consider forking CubicSDR and adding direct MIDI/OSC control of the UI.
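As a rough sketch of the MIDI idea, assuming the python-mido package (with a MIDI backend installed) and the rigctld server from above, a controller knob could be mapped to frequency like this; the CC number, tuning range, and step size are arbitrary choices for the example:

    # Hypothetical sketch: map a MIDI controller knob to rigctld frequency
    import socket
    import mido   # assumes python-mido plus a backend such as python-rtmidi

    def set_frequency(hz, host="localhost", port=4532):
        # same rigctl 'F' command as the earlier sketch
        with socket.create_connection((host, port), timeout=1) as s:
            s.sendall(f"F {hz}\n".encode())
            return s.recv(1024).decode().strip()

    base = 7000000        # bottom of the tuning range (7.000 MHz)
    step = 1000           # 1 kHz per controller increment

    with mido.open_input() as midi_in:              # first available MIDI input
        for msg in midi_in:                         # blocks, one message at a time
            if msg.type == 'control_change' and msg.control == 1:
                set_frequency(base + msg.value * step)   # CC values 0-127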

Analog video synthesis

Generative art in motion.


https://vimeo.com/cskonopka

By Christopher Konopka

Background

In the past year, Chris has published nearly 2500 improvised video pieces.


You may be familiar with analog modular audio synthesis. The hardware to produce video looks nearly identical – a maze of patch cords and dials.

Television


Analog video is television. A CRT (cathode ray tube) resynthesizes video information by demodulating signals from a camera. Vintage televisions have dials to adjust color and vertical sync. When you turn the dials you are synthesizing analog video. Distortion, filtering, and feedback – either at the source (camera) or the destination (tv screen) – offer up an infinite variety of images.

Analog vs. Digital

Today all media is digital. Like the screen you are looking at. The difference with analog is in how it’s produced. Boundaries are less definite. Lines curve. Colors waver. Feedback looks like flames. Every frame is a painting.

https://vimeo.com/172035463

Patterns

Images can be generated electronically using modules – without a camera.

Filters

As with audio sampling, anything is a source. Movies, YouTube, live television, even Felix the Cat.

https://vimeopro.com/cskonopka/analogvideo-december-2015/video/153312961

Feedback

When you aim a guitar at an amplifier it screams. Tilt it away slightly and the screaming subsides. In between there's a sweet spot. The same is true with cameras and screens. Feedback results when output is mixed with input.

Radio

https://vimeopro.com/cskonopka/analogvideo-december-2015/video/153306760

Analog shortwave radio signals are distorted by the atmosphere in a manner similar to video filtering.

A studio in Bethel, Maine.


An improvised collaboration between Chris and Tom Zicarelli using shortwave radio processed with audio effects.

Live Performance

Gem

https://www.instagram.com/p/BImQwOGBveV/?taken-by=cskonopka

A recent screen test at the Gem Theatre in Bethel, Maine. Source material is a time-lapse film of a glacier installation by Wade Kavanaugh and Steven Nguyen, produced at the same theatre: https://www.youtube.com/watch?v=6c36Y-Dcj30 The film was re-synthesized using analog video and feedback. Soundtrack by Tom Zicarelli.

https://www.instagram.com/p/BImRSzHBOLL/?taken-by=cskonopka

Big screen equals mind-bending experience.

Note: previous clip excerpted from this 15 minute jam: https://vimeo.com/177843310

TAL

The patterns in this clip appear to be three-dimensional. They are not.

From a show that happened somewhere in the known universe:

Alto

Improvised analog video with the band “Alto”. Patterns reminiscent of magical textiles.

More about analog video synthesis