A presentation for Berklee BTOT 2015 http://www.berklee.edu/faculty
Around the year 1700, several startup ventures developed prototypes of machines with thousands of moving parts. After 30 years of engineering, competition, and refinement, the result was a device remarkably similar to the modern piano.
What are the musical instruments of the future being designed right now?
- new composition tools
- reactive music
- connecting things
Ray Kurzweil’s future predictions on a timeline: http://imgur.com/quKXllo (The Singularity will happen in 2045)
In 1965 researcher Herbert Simon said: “Machines will be capable, within twenty years, of doing any work a man can do”. Marvin Minsky added his own prediction: “Within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved.” https://forums.opensuse.org/showthread.php/390217-Will-computers-or-machines-ever-become-self-aware-or-evolve/page2
Are there patterns in the ways that artists adapt technology?
For example, the Hammond organ borrowed ideas developed for radios. Recorded music is produced with computers that were originally designed as business machines.
Instead of looking forward to predict future music, let's look backwards and ask, “What technology needed to happen to make musical instruments possible?” The piano relies on the single escapement (1710) and, later, the double escapement (1821). Real-time pitch shifting depends on Fourier transforms (1822) and fast computers (~1980).
Artists often find new (unintended) uses for tools. Like the printing press.
The piano is still in development. In December 2014, Eren Başbuğ composed and performed music on the Roli Seaboard, a piano-style keyboard made of three-dimensional sensing foam:
Here is Keith McMillen’s QuNexus keyboard (with Polyphonic aftertouch):
Here are tools that might lead to new ways of making music. They won’t replace old ways. Singing has outlasted every other kind of music.
These ideas represent a combination of engineering and art. Engineers need artists. Artists need engineers. Interesting things happen at the confluence of streams.
Analysis, re-synthesis, transformation
Computers can analyze the audio spectrum in real time. Sounds can be transformed and re-synthesized with near zero latency.
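As a sketch of what that round trip involves, here is a minimal STFT analysis and overlap-add re-synthesis in Python with NumPy (not from the presentation; the window and hop sizes are arbitrary choices):

```python
import numpy as np

def stft(x, win_size=1024, hop=256):
    """Analyze a signal into overlapping spectral frames."""
    w = np.hanning(win_size)
    frames = [np.fft.rfft(w * x[i:i + win_size])
              for i in range(0, len(x) - win_size, hop)]
    return np.array(frames)

def istft(frames, win_size=1024, hop=256):
    """Overlap-add re-synthesis from spectral frames."""
    w = np.hanning(win_size)
    out = np.zeros(hop * (len(frames) - 1) + win_size)
    norm = np.zeros_like(out)
    for i, frame in enumerate(frames):
        out[i * hop:i * hop + win_size] += w * np.fft.irfft(frame)
        norm[i * hop:i * hop + win_size] += w ** 2
    # Normalize by the summed window energy so the interior reconstructs exactly.
    return out / np.maximum(norm, 1e-8)

# Round trip: one second of a 440 Hz tone survives analysis and re-synthesis.
sr = 44100
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t)
y = istft(stft(x))
```

Any transformation (pitch shift, time stretch, filtering) happens between the two calls, on the spectral frames.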
Finding alternate routes through a song.
by Paul Lamere at the Echonest
Echonest has compiled data on over 14 million songs. This is an example of machine learning and pattern matching applied to music.
Try examples: “Karma Police”, or search for “Albert Ayler”
- Analyze your own music: https://reactivemusic.net/?p=18026
Remixing a remix
“Mindblowing Six Song Country Mashup”: https://www.youtube.com/watch?v=FY8SwIvxj8o (start at 0:40)
Local file: Max teaching examples/new-country-mashup.mp3
More about Echonest
- Music Machinery by Paul Lamere: http://musicmachinery.com
- Echonest segment analysis player: https://reactivemusic.net/?p=6296
Looking at music under a microscope.
removing music from speech
First you have to separate them.
by Xavier Serra and UPF
Harmonic Model Plus Residual (HPR) – Build a spectrogram using STFT, then identify where there is strong correlation to a tonal harmonic structure (music). This is the harmonic model of the sound. Subtract it from the original spectrogram to get the residual (noise).
Settings for above example:
- Window size: 1800 (at least SR / f0 × lobe width: 44100 / 200 × 8 = 1764)
- FFT size: 2048
- Mag threshold: -90
- Max harmonics: 30
- f0 min: 150
- f0 max: 200
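A rough single-frame sketch of the HPR idea in Python with NumPy, using settings like the ones above (a simplification, not Serra's sms-tools implementation; it assumes a monophonic tone and picks f0 as the loudest bin in the allowed range):

```python
import numpy as np

def hpr_frame(x, sr=44100, f0_min=150, f0_max=200, max_harmonics=30,
              mag_threshold_db=-90):
    """Split one windowed frame into harmonic and residual spectra."""
    n = len(x)
    spectrum = np.fft.rfft(np.hanning(n) * x)
    freqs = np.fft.rfftfreq(n, 1 / sr)
    mags = np.abs(spectrum)

    # Estimate f0 as the loudest bin inside [f0_min, f0_max].
    band = (freqs >= f0_min) & (freqs <= f0_max)
    f0 = freqs[band][np.argmax(mags[band])]

    # Mark bins within one bin-width of each harmonic, above the threshold.
    harmonic = np.zeros_like(spectrum)
    floor = np.max(mags) * 10 ** (mag_threshold_db / 20)
    for k in range(1, max_harmonics + 1):
        near = np.abs(freqs - k * f0) < (sr / n)
        near &= mags > floor
        harmonic[near] = spectrum[near]

    # Whatever is not harmonic is the residual (noise).
    residual = spectrum - harmonic
    return f0, harmonic, residual

# Demo (assumed test tone, not from the lecture): 180 Hz tone plus noise.
sr, n = 44100, 2048
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 180 * np.arange(n) / sr) + 0.01 * rng.standard_normal(n)
f0, harmonic, residual = hpr_frame(x, sr=sr)
```

By construction, harmonic + residual recombine to the original spectrum, which is what makes transformation-and-resynthesis possible.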
Many kinds of features
- Low-level features: harmonicity, amplitude, fundamental frequency
- High-level features: mood, genre, danceability
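Two of the low-level features above can be computed in a few lines; this is an illustrative sketch (autocorrelation is one of several standard f0 methods, not a reference implementation):

```python
import numpy as np

def rms_amplitude(x):
    """Amplitude: root-mean-square level of the signal."""
    return float(np.sqrt(np.mean(np.square(x))))

def fundamental(x, sr=44100, f_min=80, f_max=1000):
    """Fundamental frequency from the autocorrelation peak."""
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = int(sr / f_max), int(sr / f_min)  # lag range for [f_min, f_max]
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

# Demo (assumed test tone): 0.1 s of a 220 Hz sine.
sr = 44100
x = np.sin(2 * np.pi * 220 * np.arange(int(0.1 * sr)) / sr)
```

High-level features like mood or danceability are built by feeding many such low-level measurements into trained classifiers.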
Examples of feature detection
- Acoustic Brainz: https://reactivemusic.net/?p=17641 (typical analysis page)
- Freesound (vast library of sounds): https://www.freesound.org – look at “similar sounds”
- Essentia (open source feature detection tools) https://github.com/MTG/essentia
- “What We Watch” – Ethan Zuckerman https://reactivemusic.net/?p=10987
Music information retrieval
Finding the drop
“Detecting Drops in EDM” – by Karthik Yadati, Martha Larson, Cynthia C. S. Liem, Alan Hanjalic at Delft University of Technology (2014) https://reactivemusic.net/?p=17711
Polyphonic audio editing
Blurring the distinction between recorded and written music.
A minor version of “Bohemian Rhapsody”: http://www.youtube.com/watch?v=voca1OyQdKk
“How Shazam Works” by Farhad Manjoo at Slate: https://reactivemusic.net/?p=12712, “About 3 datapoints per second, per song.”
- Music fingerprinting: https://musicbrainz.org/doc/Fingerprinting
- Humans being computers. Mystery sounds. (Local file: Desktop/mystery sounds)
- Is it more difficult to build a robot that plays or one that listens?
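As a toy illustration of fingerprinting, the sketch below hashes pairs of consecutive spectral peaks and matches by set overlap. Real systems like Shazam store peak pairs with their time offsets and search billions of hashes, so this is a simplification, not Shazam's actual algorithm:

```python
import numpy as np

def landmarks(x, win=1024, hop=512):
    """Toy landmark extraction: the loudest spectral bin in each frame."""
    w = np.hanning(win)
    return [int(np.argmax(np.abs(np.fft.rfft(w * x[i:i + win]))))
            for i in range(0, len(x) - win, hop)]

def fingerprint(x):
    """Hash consecutive peak pairs into a set of landmarks."""
    p = landmarks(x)
    return {(p[i], p[i + 1]) for i in range(len(p) - 1)}

def match_score(query, reference):
    """Fraction of query hashes found in the reference fingerprint."""
    return len(query & reference) / max(len(query), 1)

# Demo (assumed toy "song"): three steady tones.
sr = 8000
tone = lambda f, secs: np.sin(2 * np.pi * f * np.arange(int(sr * secs)) / sr)
song = np.concatenate([tone(440, 0.5), tone(660, 0.5), tone(880, 0.5)])
ref = fingerprint(song)
```

A short excerpt of the song scores high against `ref` even though its frames start at different sample offsets, while unrelated audio scores near zero.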
Sonographic sound processing
Transforming music through pictures.
by Tadej Droljc
(Example of 3d speech processing at 4:12)
local file: SSP-dissertation/4 – Max/MSP/Jitter Patch of PV With Spectrogram as a Spectral Data Storage and User Interface/basic_patch.maxpat
Try recording a short passage, then set bound mode to 4, and click autorotate
Spectral scanning in Ableton Live:
Web browser is the new black
by Joe Berkowitz
Can you jam over the internet?
What is the speed of electricity? About 70–80 ms is the best round-trip latency (via fiber) between the U.S. east and west coasts. If you were jamming over the internet with someone on the opposite coast, it might be like standing 100 feet away from them in a field (sound travels about 1100 feet per second in air).
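The arithmetic behind that comparison, with assumed round numbers:

```python
# Rough check of the field analogy (assumed round numbers).
speed_of_sound_ft_s = 1100        # in air, approximately
round_trip_latency_s = 0.080      # coast-to-coast fiber, best case
# Distance sound covers in one round-trip delay: about 88 ft,
# on the order of the "100 ft away in a field" comparison.
equivalent_distance_ft = speed_of_sound_ft_s * round_trip_latency_s
```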
Global communal experiences – Bill McKibben – 1990 “The Age of Missing Information”
More about Web Audio
- A quick Web Audio introduction: https://reactivemusic.net/?p=17600
- Gibber by Charlie Roberts http://gibber.mat.ucsb.edu/
Conversation with robots
Computers finding meaning
The Google speech API
The Google speech API uses neural networks, statistics, and large quantities of data.
Microsoft: real-time translation
- German/English http://digg.com/video/heres-microsoft-demoing-their-breakthrough-in-real-time-translated-conversation
- Skype translator – Spanish/English: http://www.skype.com/en/translator-preview/
Making music from sounds that are not music.
by Katja Vetter
(InstantDecomposer is an update of SliceJockey2): http://www.katjaas.nl/slicejockey/slicejockey.html
- local: InstantDecomposer version: tkzic/pdweekend2014/IDecTouch/IDecTouch.pd
- local: slicejockey2test2/slicejockey2test2.pd
More about reactive music
- RJDJ apps – create personal soundtracks from the environment
- “Lyrebirds” by Christopher Lopez https://www.youtube.com/watch?v=Ouws45R2iXg
Sensors and sonification
Transforming motion into music
- earcons (email notification sound)
- models (video game sounds)
- parameter mapping (Geiger counter)
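A minimal sketch of parameter mapping in Python with NumPy: a data series is scaled onto a pitch range and rendered as sine tones (the value-to-frequency mapping and the note length are arbitrary choices, like a Geiger counter's click rate mapping):

```python
import numpy as np

def map_to_freq(values, f_lo=220.0, f_hi=880.0):
    """Parameter mapping: scale data values linearly onto a pitch range."""
    v = np.asarray(values, dtype=float)
    norm = (v - v.min()) / (v.max() - v.min())
    return f_lo + norm * (f_hi - f_lo)

def sonify(values, sr=44100, note_dur=0.2):
    """Render each mapped frequency as a short sine tone."""
    t = np.arange(int(sr * note_dur)) / sr
    return np.concatenate([np.sin(2 * np.pi * f * t)
                           for f in map_to_freq(values)])

# e.g. a (hypothetical) five-point data series becomes a five-note melody
audio = sonify([1, 3, 2, 5, 8])
```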
camera based hand sensor
“Muse” (Boulanger Labs) with Paul Bachelor, Christopher Konopka, Tom Shani, and Chelsea Southard: https://reactivemusic.net/?p=16187
Max/MSP piano example: Leapfinger: https://reactivemusic.net/?p=11727
local file: max-projects/leap-motion/leapfinger2.maxpat
Internet sensors project
Detecting motion from the Internet
Twitter streaming example
MBTA bus data
Sonification of Mass Ave buses, from Harvard to Dudley
Stock market music
More sonification projects
Vine API mashup
By Steve Hensley
local file: tkzic/stevehensely/shensley_maxvine.maxpat
Audio sensing gloves for spacesuits
By Christopher Konopka at future, music, technology
Sensing motion with video using frame subtraction
by Adam Rokhsar
local file: max-projects/frame-subtraction
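A minimal frame-subtraction sketch in Python with NumPy (a simplification of the technique, not Rokhsar's patch): each pixel whose brightness changes by more than a threshold between consecutive frames counts as motion:

```python
import numpy as np

def motion_mask(prev, curr, threshold=30):
    """Frame subtraction: flag pixels whose brightness changed by more
    than `threshold` between consecutive video frames."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > threshold

def motion_amount(prev, curr, threshold=30):
    """Fraction of pixels in motion -- could be mapped to a music parameter."""
    return float(motion_mask(prev, curr, threshold).mean())

# Demo: an empty frame, then a frame with a bright 20x20 "object".
prev = np.zeros((120, 160), dtype=np.uint8)
curr = prev.copy()
curr[40:60, 40:60] = 255
```

The scalar from `motion_amount` can drive any synthesis parameter, which is how camera motion becomes music.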
Music is stored all across the brain.
Mouse brain wiring diagram
The Allen Institute
“Hacking the soul” by Christof Koch at the Allen Institute
(An explanation of the wiring diagram of the mouse brain – at 13:33) http://www.technologyreview.com/emtech/14/video/watch/christof-koch-hacking-the-soul/
A complete simulation of the nematode worm, in software, with a Lego body (320 neurons)
Harold Cohen’s algorithmic painting machine
Could we grow music producing organisms? https://reactivemusic.net/?p=18018
An optimistic future?
There is a quickening of discovery: internet collaboration, open source, Linux, GitHub, Raspberry Pi, Pd, SDR.
“Robots and AI will help us create more jobs for humans — if we want them. And one of those jobs for us will be to keep inventing new jobs for the AIs and robots to take from us. We think of a new job we want, we do it for a while, then we teach robots how to do it. Then we make up something else.”
“…We invented machines to take x-rays, then we invented x-ray diagnostic technicians which farmers 200 years ago would have not believed could be a job, and now we are giving those jobs to robot AIs.”
Kevin Kelly – January 7, 2015, reddit AMA http://www.reddit.com/r/Futurology/comments/2rohmk/i_am_kevin_kelly_radical_technooptimist_digital/
Will people be marrying robots in 2050? http://www.livescience.com/1951-forecast-sex-marriage-robots-2050.html
“What can you predict about the future of music” by Michael Gonchar at The New York Times https://reactivemusic.net/?p=17023
Jim Morrison predicts the future of music:
More areas to explore
- NIME (New interfaces for musical expression) http://en.wikipedia.org/wiki/New_Interfaces_for_Musical_Expression
- Immersive virtual musical instruments http://en.wikipedia.org/wiki/Immersive_virtual_musical_instrument
- I’m thinking of something: http://imthinkingofsomething.com