Category: Max/MSP

cv.jit revisited

Jitter computer vision library

By Jean-Marc Pelletier

http://jmpelletier.com/cvjit/


Below are several examples you can try by running the “help” files. All of the explanatory text below is by Jean-Marc Pelletier.

cv.jit.blobs.elongation

The utility abstraction cv.jit.blobs.elongation.draw superimposes elongation values on the image sent to its right inlet. You MUST also connect the output of cv.jit.blobs.moments to its middle inlet. You can use the attribute “frgb” to set the colour used.
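For reference, elongation can be computed from a blob’s second-order central moments. This is a hypothetical NumPy sketch using a common eigenvalue-ratio definition; cv.jit’s exact formula may differ, and the function name `blob_elongation` is my own:

```python
import numpy as np

def blob_elongation(xs, ys):
    """Elongation of a blob given the coordinates of its pixels.

    Uses the ratio of the larger to the smaller eigenvalue of the
    coordinate covariance (second-order central moments): 1.0 for a
    circle or square, growing without bound for thin shapes.
    """
    x = xs - xs.mean()
    y = ys - ys.mean()
    mu20 = np.sum(x * x)
    mu02 = np.sum(y * y)
    mu11 = np.sum(x * y)
    # Difference of the two eigenvalues of the covariance matrix.
    common = np.sqrt((mu20 - mu02) ** 2 + 4.0 * mu11 ** 2)
    return (mu20 + mu02 + common) / (mu20 + mu02 - common)
```

A 2 x 4 rectangle of pixels, for example, comes out with elongation 5.0 (its axis variances are in a 10:2 ratio).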


cv.jit.binedge

Marks as ON only pixels that are themselves ON and have at least one OFF neighbour. In other words, it returns only the edges in a binary image.
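The same idea in a NumPy sketch (my own `binedge` helper, assuming 4-connectivity; cv.jit.binedge’s exact neighbourhood may differ):

```python
import numpy as np

def binedge(img):
    """Keep only ON pixels that have at least one OFF 4-neighbour.

    img: 2-D binary array (0 = OFF, 1 = ON). Pixels on the image
    border count as having an OFF neighbour outside the frame.
    """
    padded = np.pad(img, 1, constant_values=0)
    up    = padded[:-2, 1:-1]
    down  = padded[2:, 1:-1]
    left  = padded[1:-1, :-2]
    right = padded[1:-1, 2:]
    has_off_neighbour = (up == 0) | (down == 0) | (left == 0) | (right == 0)
    return ((img == 1) & has_off_neighbour).astype(img.dtype)
```

Running this on a solid 3 x 3 block leaves the eight outline pixels ON and turns the centre pixel OFF.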


cv.jit.blobs.bounds

cv.jit.blobs.bounds offers similar functionality to jit.findbounds but finds the bounding box for every blob in a labeled image.

cv.jit.blobs.bounds outputs a 4-plane, one-dimensional float32 matrix whose number of cells equals the number of blobs in the input image.
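A NumPy sketch of the idea, one bounding box per blob (my own `blob_bounds` helper; the ordering of the four planes in cv.jit.blobs.bounds may differ from the min-x/min-y/max-x/max-y layout assumed here):

```python
import numpy as np

def blob_bounds(labels):
    """Bounding box (min_x, min_y, max_x, max_y) for each blob.

    labels: 2-D integer array as produced by a labeling step
    (0 = background, 1..n = blob indices).
    """
    n = int(labels.max())
    boxes = np.zeros((n, 4), dtype=np.float32)
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        boxes[i - 1] = (xs.min(), ys.min(), xs.max(), ys.max())
    return boxes
```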


cv.jit.blobs.centroids

cv.jit.blobs.centroids functions much like cv.jit.centroids except that it takes as input the output of cv.jit.label and calculates the center of mass and area of each connected component individually.

The output of cv.jit.label must be of type char.

cv.jit.blobs.centroids outputs a single-row, 3-plane char matrix where the number of cells is the same as the number of labeled components.
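Per-blob centroid and area are straightforward to compute from a labeled image. A hypothetical NumPy sketch (my own `blob_centroids` helper; it returns an (x, y, area) row per blob, which is the spirit of the object’s output, not its exact matrix layout):

```python
import numpy as np

def blob_centroids(labels):
    """Centre of mass and pixel count for each labeled blob.

    labels: 2-D integer array (0 = background, 1..n = blob indices).
    """
    n = int(labels.max())
    out = np.zeros((n, 3), dtype=np.float32)
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        # Unweighted centre of mass: mean pixel coordinate.
        out[i - 1] = (xs.mean(), ys.mean(), xs.size)
    return out
```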


cv.jit.blobs.direction

cv.jit.blobs.direction is almost identical to cv.jit.blobs.orientation. It also takes in the output of cv.jit.blobs.moments and calculates the orientation of each blob’s main axis. However, unlike cv.jit.blobs.orientation, it takes into account symmetry. This means that cv.jit.blobs.direction can tell which direction a connected component is pointing.

Like cv.jit.blobs.orientation, the output is in radians by default and can be changed to degrees with the “mode” attribute. The output is between 0 and 2Pi.
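One standard way to resolve the 180-degree ambiguity is to look at the sign of the third-order moment (skewness) along the main axis. A hypothetical NumPy sketch of that idea (my own `blob_direction` helper; cv.jit.blobs.direction’s actual algorithm and its convention for which end is “forward” may differ):

```python
import numpy as np

def blob_direction(xs, ys):
    """Pointing direction of a blob in [0, 2*pi).

    Starts from the main-axis orientation, then flips it by pi when
    the mass distribution is skewed toward the opposite end.
    """
    x = xs - xs.mean()
    y = ys - ys.mean()
    mu20, mu02, mu11 = np.sum(x * x), np.sum(y * y), np.sum(x * y)
    theta = (0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)) % np.pi
    # Project pixels onto the main axis; the sign of the projected
    # skewness says which way along the axis the mass trails off.
    p = x * np.cos(theta) + y * np.sin(theta)
    if np.sum(p ** 3) < 0:
        theta += np.pi
    return theta % (2.0 * np.pi)
```

Two mirror-image shapes come out exactly pi apart, which is the whole point of using direction instead of orientation.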


cv.jit.blobs.moments

cv.jit.blobs.moments functions much like cv.jit.moments but computes moments and invariants for every blob identified by cv.jit.label. See cv.jit.moments for a discussion on moments and invariants.

The output is a 17-plane, single-row float32 matrix. The number of cells is the same as the number of connected components.

The output of cv.jit.blobs.moments can be fed to other objects for further analysis. See cv.jit.blobs.orientation, cv.jit.blobs.direction, cv.jit.blobs.elongation, and cv.jit.blobs.recon.


cv.jit.blobs.orientation

cv.jit.blobs.orientation functions much like cv.jit.orientation except that it takes as input the output of cv.jit.blobs.moments and calculates the orientation of the main axis of each connected component individually.

cv.jit.blobs.orientation outputs a single-row, 1-plane char matrix where the number of cells is the same as the number of labeled components.

Orientation is measured in radians by default, but you can switch to degree output by specifying “mode 1”. The values are between 0 and Pi radians: 0 and Pi both correspond to horizontal, and Pi/2 to vertical.
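The main-axis angle follows from the second-order central moments with the standard formula theta = 0.5 * atan2(2*mu11, mu20 - mu02). A hypothetical NumPy sketch (my own `blob_orientation` helper, not cv.jit’s code):

```python
import numpy as np

def blob_orientation(xs, ys):
    """Main-axis orientation of a blob from its pixel coordinates,
    folded into [0, pi) to match the object's 0..Pi output range."""
    x = xs - xs.mean()
    y = ys - ys.mean()
    mu20 = np.sum(x * x)
    mu02 = np.sum(y * y)
    mu11 = np.sum(x * y)
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    return theta % np.pi
```

A row of pixels along x gives 0 (horizontal); the same pixels along y give Pi/2 (vertical).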


cv.jit.blobs.recon

cv.jit.blobs.recon calculates the statistical distance between blob shape descriptors and a pre-computed model. The model must be created using cv.jit.learn, and cv.jit.blobs.recon functions much like cv.jit.learn’s “compare” mode.

cv.jit.blobs.recon must be fed the output of cv.jit.blobs.moments. Use the “mode” attribute to set whether moments (0) or Hu invariants (1) are used. Make sure that this matches the data used to train the model.

The output is a 1-plane float32 matrix, in which each cell contains the statistical distance between the corresponding blob and the model. The lower the output value, the more similar the blob’s shape is to the model.
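“Statistical distance” here is in the spirit of a Mahalanobis distance: the feature vector’s distance from the model mean, scaled by the model’s covariance. cv.jit.learn’s model format isn’t shown in this post, so this hypothetical helper only illustrates the distance measure itself:

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of feature vector x from a model given
    by a mean vector and a covariance matrix. Zero means identical
    to the model mean; larger values mean less similar."""
    d = x - mean
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))
```

With an identity covariance this reduces to plain Euclidean distance, e.g. the vector (3, 4) is distance 5 from a zero-mean model.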


Basic synth in Max – part 2

Yet another Basic synthesizer design


See part 1 here: http://reactivemusic.net/?p=18511

New features

Drag to select buffer start/end points

waveform~ object


Sample recording

record~ object.


How to design voice-activated recording?

Time compress/stretch

groove~ (Max 7 only)

Presets


M4L preset management: http://reactivemusic.net/?p=18557

Polyphony

poly~ object

Polyphonic Midi synth in Max

http://reactivemusic.net/?p=11732

local: poly-generic-example1.maxpat (polyphonic)

Polyphonic instrument in Max for Live

Wave~ sample player: http://reactivemusic.net/?p=18354

local: m4l: poly-synth1.als (aaa-polysynth2.amxd)


Max For Live

automation and UI design (review)

Distributing M4L devices

How to create a Live ‘Pack’

by Winksound

  • save set
  • collect all and save
  • file manager
    • manage project
      • packing : create live pack

 

Presets in Max for Live

How to use the Max preset object inside of M4L.


There is some confusion about how to use Max presets in a M4L device. The method described here lets you save and recall presets with a device inside of a Live set, without additional files or dialog boxes. It uses pattrstorage. It works automatically with the Live UI objects.

It also works with other Max UI objects by connecting them to pattr objects.

It’s based on an article by Gregory Taylor: https://cycling74.com/2011/05/19/max-for-live-tutorial-adding-pattr-presets-to-your-live-session/

Download

https://github.com/tkzic/max-for-live-projects

Folder: presets

Patch: aaa-preset3.amxd

How it works:

Instructions are included inside the patch. You will need to add objects and then set attributes for those objects in the inspector. For best results, set the inspector values after adding each object.

Write the patch in this order:

A1. Add UI objects.

For each UI object:

  1. check link-to-scripting name
  2. set long and short names to actual name of param


A2. (optional) Add non-Live (i.e., Max) UI objects

For each object, connect the middle outlet of a pattr object (with a parameter name as an argument) to the left inlet of the UI object. For example:

Screen Shot 2015-03-22 at 8.30.24 PM

Then in inspector for each UI object:

  1. check parameter-mode-enable
  2. check initial-enable


B. Add a pattrstorage object.


Give the object a name argument, for example: pattrstorage zoo. The name can be anything; it’s not important. Then in the inspector for pattrstorage:

  1. check parameter-mode enable
  2. check Auto-update-parameter Initial-value
  3. check initial-value
  4. change short-name to match long name


C. Add an autopattr object


D. Add a preset object


In the inspector for the preset object:

  1. assign pattrstorage object name from step B. (zoo) to pattrstorage attribute


Notes

The preset numbers go from 1 to n. They can be fed directly into the pattrstorage object, for example if you want to use an external controller.

You can name the presets (slotnames). See the pattrstorage help file.

You can interpolate between presets. See the pattrstorage help file.

Adding new UI objects after presets have been stored

If you add a new UI object to the patch after pattrstorage is set up, you will need to re-save the presets with the correct setting of the new UI object. Or you can edit the pattrstorage data.

 

 

Portrait series

Optical flow, a depth camera, and edge detection.

By Matt Romein

https://cycling74.com/project/gif-portraits/


Portrait of Margo Cramer from http://mattromein.squarespace.com/#/portrait-series/

The programming uses the following external code:

jit.gl.hap – Rob Ramirez
ab.hsflow.jxs – Andrew Benson
jit.openni – DiabloDale
cv.jit – Jean-Marc Pelletier

 

Ableton Push as a low resolution video display

Is 8 x 8 enough?

Adapted from a tutorial by Darwin Grosse

This Max tutorial, from Cycling 74, connects the built-in camera to a Push display matrix using MIDI sysex codes. https://cycling74.com/wiki/index.php?title=Push_Programming_Oct13_03

If you set the frame rate high enough, you can clearly see motion.

I thought it would be interesting to display icons at this resolution, but it’s not very impressive. Here’s an example.


The 8 x 8 version is on the left. The original, on the right, is 57 x 57. Another problem is that the RGB quality of the Push is not very accurate for anything beyond primary colors. Here is the modified version of the patch.
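Conceptually, the reduction from 57 x 57 down to the 8 x 8 pad grid is just a resampling step. A hypothetical NumPy sketch of nearest-neighbour sampling at each grid cell’s centre (the actual patch does its resizing with Jitter objects before sending pixels to the Push over sysex):

```python
import numpy as np

def downsample(img, size=8):
    """Pick one representative pixel per pad: sample the centre of
    each cell in a size x size grid laid over the image."""
    h, w = img.shape[:2]
    rows = ((np.arange(size) + 0.5) * h / size).astype(int)
    cols = ((np.arange(size) + 0.5) * w / size).astype(int)
    return img[np.ix_(rows, cols)]
```

Averaging each cell instead of sampling its centre would reduce flicker, at the cost of softening edges even further.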

Download


https://github.com/tkzic/max-projects

folder: push

patches:

  • pushpix-tz.maxpat