The Lorentz factor and its role in relativity
From “Sixty Symbols” By Brady Haran for the University of Nottingham
The Long Now 10,000 year clock, built into a mountain.
at Web Urbanist
http://weburbanist.com/2014/12/25/long-now-future-proof-10000-year-clock-built-into-mountain/
Analogue Warmth: The Sound Of Tubes, Tape & Transformers.
By Hugh Robjohns at Sound On Sound
http://www.soundonsound.com/sos/feb10/articles/analoguewarmth.htm
By Stefan Brunner at Cycling 74
https://cycling74.com/2014/12/19/music-hack-for-max-7-elevator-music-generator/
by soundwavescience
A presentation for Berklee BTOT 2015 http://www.berklee.edu/faculty
Around the year 1700, several startup ventures developed prototypes of machines with thousands of moving parts. After 30 years of engineering, competition, and refinement, the result was a device remarkably similar to the modern piano.
What are the musical instruments of the future being designed right now?
Ray Kurzweil’s future predictions on a timeline: http://imgur.com/quKXllo (The Singularity will happen in 2045)
In 1965 researcher Herbert Simon said: “Machines will be capable, within twenty years, of doing any work a man can do”. Marvin Minsky added his own prediction: “Within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved.” https://forums.opensuse.org/showthread.php/390217-Will-computers-or-machines-ever-become-self-aware-or-evolve/page2
Are there patterns in the ways that artists adapt technology?
For example, the Hammond organ borrowed ideas developed for radios. Recorded music is produced with computers that were originally designed as business machines.
Instead of looking forward to predict future music, let’s look backward and ask, “What technology needed to happen to make musical instruments possible?” The piano relies on a single-escapement action (1710) and later a double-escapement action (1821). Real-time pitch shifting depends on Fourier transforms (1822) and fast computers (~1980).
Artists often find new (unintended) uses for tools, like the printing press.
The piano is still in development. In December 2014, Eren Başbuğ composed and performed music on the Roli Seaboard, a piano-style keyboard made of three-dimensional sensing foam:
Here is Keith McMillen’s QuNexus keyboard (with Polyphonic aftertouch):
https://www.youtube.com/watch?v=bry_62fVB1E
Here are tools that might lead to new ways of making music. They won’t replace old ways. Singing has outlasted every other kind of music.
These ideas represent a combination of engineering and art. Engineers need artists. Artists need engineers. Interesting things happen at the confluence of streams.
Computers can analyze the audio spectrum in real time. Sounds can be transformed and re-synthesized with near zero latency.
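A minimal sketch of that chain in Python (my illustration, not part of the presentation), assuming NumPy and SciPy: take the short-time Fourier transform of a signal, modify the spectrum, and resynthesize. A real-time version would do the same thing block by block.

import numpy as np
from scipy.signal import stft, istft

fs = 44100
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 440 * t)              # one second of a 440 Hz tone

# analyze: the STFT gives a time/frequency grid
freqs, times, X = stft(x, fs=fs, nperseg=1024)

# transform: keep only the strongest bin in each frame (isolates the dominant partial)
mask = np.abs(X) == np.abs(X).max(axis=0, keepdims=True)
X_modified = X * mask

# resynthesize: inverse STFT back to a time-domain signal
_, y = istft(X_modified, fs=fs, nperseg=1024)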
Finding alternate routes through a song.
by Paul Lamere at The Echo Nest
The Echo Nest has compiled data on over 14 million songs. This is an example of machine learning and pattern matching applied to music (a toy sketch of the beat-matching idea follows below).
http://labs.echonest.com/Uploader/index.html
Try examples: “Karma Police”, or search for “Albert Ayler”.
“Mindblowing Six Song Country Mashup”: https://www.youtube.com/watch?v=FY8SwIvxj8o (start at 0:40)
Local file: Max teaching examples/new-country-mashup.mp3
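Not the Echo Nest’s code, but a toy version of the idea in Python using librosa (an assumption; “song.mp3” is a placeholder): describe each beat by its average timbre, then treat beats that sound alike as possible jump points.

import numpy as np
import librosa

y, sr = librosa.load("song.mp3")
tempo, beats = librosa.beat.beat_track(y=y, sr=sr)    # beat positions, in frames

# describe each beat by its average timbre (MFCCs) over that beat
mfcc = librosa.feature.mfcc(y=y, sr=sr)
feats = np.array([mfcc[:, b0:max(b1, b0 + 1)].mean(axis=1)
                  for b0, b1 in zip(beats[:-1], beats[1:])])

# beats that sound alike are candidate "alternate routes" through the song
dist = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=2)
np.fill_diagonal(dist, np.inf)
for i, j in enumerate(dist.argmin(axis=1)):
    if dist[i, j] < 50:                               # arbitrary similarity threshold
        print(f"could jump from beat {i} to beat {j}")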
Looking at music under a microscope.
First you have to separate the sounds.
by Xavier Serra and UPF
Harmonic Model Plus Residual (HPR) – Build a spectrogram using STFT, then identify where there is strong correlation to a tonal harmonic structure (music). This is the harmonic model of the sound. Subtract it from the original spectrogram to get the residual (noise).
Settings for above example:
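A rough sketch of the harmonic-plus-residual split in Python (a crude simplification, not the sms-tools course code): mark the strongest spectral peaks in each STFT frame as “harmonic”, and call everything left over the residual.

import numpy as np
from scipy.signal import stft, istft, find_peaks

def harmonic_plus_residual(x, fs, nperseg=2048, peaks_per_frame=20):
    freqs, times, X = stft(x, fs=fs, nperseg=nperseg)
    harmonic = np.zeros_like(X)
    for i in range(X.shape[1]):
        mag = np.abs(X[:, i])
        peaks, _ = find_peaks(mag)
        if peaks.size:
            # the strongest peaks stand in for the harmonic model of this frame
            strongest = peaks[np.argsort(mag[peaks])[-peaks_per_frame:]]
            harmonic[strongest, i] = X[strongest, i]
    residual = X - harmonic                            # whatever the model didn't capture
    _, h = istft(harmonic, fs=fs, nperseg=nperseg)
    _, r = istft(residual, fs=fs, nperseg=nperseg)
    return h, r

The real model fits harmonics of a detected fundamental rather than raw peaks, which is why its residual sounds much cleaner.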
Finding the drop
“Detecting Drops in EDM” – by Karthik Yadati, Martha Larson, Cynthia C. S. Liem, Alan Hanjalic at Delft University of Technology (2014) https://reactivemusic.net/?p=17711
Blurring the distinction between recorded and written music.
by Celemony
http://www.celemony.com/en/start
A minor-key version of “Bohemian Rhapsody”: http://www.youtube.com/watch?v=voca1OyQdKk
“How Shazam Works” by Farhad Manjoo at Slate: https://reactivemusic.net/?p=12712, “About 3 datapoints per second, per song.”
Transforming music through pictures.
by Tadej Droljc
https://reactivemusic.net/?p=16887
(Example of 3d speech processing at 4:12)
local file: SSP-dissertation/4 – Max/MSP/Jitter Patch of PV With Spectrogram as a Spectral Data Storage and User Interface/basic_patch.maxpat
Try recording a short passage, then set bound mode to 4, and click autorotate
Spectral scanning in Ableton Live:
http://youtu.be/r-ZpwGgkGFI
Web browser is the new black
by Joe Berkovitz
http://www.noteflight.com/login
by Dinahmoe
http://labs.dinahmoe.com/plink/
What is the speed of electricity? 70-80 ms is the best round trip latency (via fiber) from the U.S. east to west coast. If you were jamming over the internet with someone on the opposite coast it might be like being 100 ft away from them in a field. (sound travels 1100 feet/second in air).
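Rough arithmetic behind that comparison, using the numbers above: 0.08 s of round-trip latency × 1100 feet per second ≈ 88 feet, which is on the order of 100 ft.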
Global communal experiences – Bill McKibben – 1990 “The Age of Missing Information”
Computers finding meaning
https://reactivemusic.net/?p=9834
The Google speech API uses neural networks, statistics, and large quantities of data.
Making music from sounds that are not music.
by Katja Vetter
(InstantDecomposer is an update of SliceJockey2): http://www.katjaas.nl/slicejockey/slicejockey.html
Transforming motion into music
Camera-based hand sensor
“Muse” (Boulanger Labs) with Paul Bachelor, Christopher Konopka, Tom Shani, and Chelsea Southard: https://reactivemusic.net/?p=16187
Max/MSP piano example: Leapfinger: https://reactivemusic.net/?p=11727
local file: max-projects/leap-motion/leapfinger2.maxpat
Detecting motion from the Internet
https://reactivemusic.net/?p=5859
https://reactivemusic.net/?p=5786
MBTA bus data
Sonification of Mass Ave buses, from Harvard to Dudley
https://reactivemusic.net/?p=17524
https://reactivemusic.net/?p=12029
By Steve Hensley
Using Max/MSP/Jitter
local file: tkzic/stevehensely/shensley_maxvine.maxpat
By Christopher Konopka at future, music, technology
http://futuremusictechnology.com
Sensing motion with video using frame subtraction
by Adam Rokhsar
https://reactivemusic.net/?p=7005
local file: max-projects/frame-subtraction
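A minimal frame-subtraction sketch in Python with OpenCV (an assumption; the example above is a Max patch): motion shows up wherever consecutive frames differ.

import cv2

cap = cv2.VideoCapture(0)                      # default webcam
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)             # pixels that changed since the last frame
    print(float(diff.mean()))                  # one number per frame: map it to a musical parameter
    prev = gray
    cv2.imshow("difference", diff)
    if cv2.waitKey(1) == 27:                   # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()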
Music is stored all across the brain.
The Allen Institute
https://reactivemusic.net/?p=17758
“Hacking the soul” by Christof Koch at the Allen Institute
(An explanation of the wiring diagram of the mouse brain, at 13:33) http://www.technologyreview.com/emtech/14/video/watch/christof-koch-hacking-the-soul/
A complete simulation of the nematode worm, in software, with a Lego body (302 neurons): https://reactivemusic.net/?p=17744
Harold Cohen’s algorithmic painting machine
https://reactivemusic.net/?p=17778
A perfect pitch pill? http://www.theverge.com/2014/1/6/5279182/valproate-may-give-humans-perfect-pitch-by-resetting-critical-periods-in-brain
Could we grow music producing organisms? https://reactivemusic.net/?p=18018
There is a quickening of discovery: Internet collaboration, open source, Linux, GitHub, Raspberry Pi, Pd, SDR.
“Robots and AI will help us create more jobs for humans — if we want them. And one of those jobs for us will be to keep inventing new jobs for the AIs and robots to take from us. We think of a new job we want, we do it for a while, then we teach robots how to do it. Then we make up something else.”
“…We invented machines to take x-rays, then we invented x-ray diagnostic technicians which farmers 200 years ago would have not believed could be a job, and now we are giving those jobs to robot AIs.”
Kevin Kelly – January 7, 2015, reddit AMA http://www.reddit.com/r/Futurology/comments/2rohmk/i_am_kevin_kelly_radical_technooptimist_digital/
Will people be marrying robots in 2050? http://www.livescience.com/1951-forecast-sex-marriage-robots-2050.html
“What can you predict about the future of music” by Michael Gonchar at The New York Times https://reactivemusic.net/?p=17023
Jim Morrison predicts the future of music:
A presentation for Berklee BTOT 2015 http://www.berklee.edu/faculty
(KITT dashboard by Dave Metlesits)
The voice was the first musical instrument. Humans are not the only source of musical voices. Machines have voices. Animals too.
We instantly recognize people and animals by their voices. As artists, we work to develop our own voice. Voices contain information beyond words. Think of R2-D2 or Chewbacca.
There is also information between words: “Palin Biden Silences” David Tinapple, 2008: http://vimeo.com/38876967
What’s in a voice?
Humans acting like synthesizers.
Teaching machines to talk.
Try the ‘say’ command (in Mac OS terminal), for example: say hello
Combining the energy of voice with musical instruments (convolution)
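A quick illustration of that idea in Python (my own sketch; “voice.wav” and “instrument.wav” are placeholder files at the same sample rate): convolving the two signals multiplies their spectra, so the result carries the character of both.

import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

sr_v, voice = wavfile.read("voice.wav")
sr_i, inst = wavfile.read("instrument.wav")

voice = voice.astype(np.float64)
inst = inst.astype(np.float64)
if voice.ndim > 1:                                 # fold stereo files down to mono
    voice = voice.mean(axis=1)
if inst.ndim > 1:
    inst = inst.mean(axis=1)

out = fftconvolve(voice, inst)                     # convolution = multiplying the two spectra
out /= np.abs(out).max()                           # normalize to avoid clipping
wavfile.write("voice_times_instrument.wav", sr_v, (out * 32767).astype(np.int16))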
Vocaloid, by Yamaha
(text + notation = singing)
Demo tracks: https://www.youtube.com/watch?v=QWkHypp3kuQ
Vocaloop device http://vocaloop.jp/ demo: https://www.youtube.com/watch?v=xLpX2M7I6og#t=24
Transformation
Pitch transposing a baby https://reactivemusic.net/?p=2458
Autotune: the “T-Pain effect” (I Am T-Pain by Smule), “Lollipop” by Lil Wayne, “Woods” by Bon Iver https://www.youtube.com/watch?v=1_cePGP6lbU
by Matthew Davidson
Local file: max-teaching-examples/autotuna-test.maxpat
by Katja Vetter
http://www.katjaas.nl/slicejockey/slicejockey.html
Autocorrelation: (helmholtz~ Pd external) “Helmholtz finds the pitch” http://www.katjaas.nl/helmholtz/helmholtz.html
(^^ is input pitch, preset #9 is normal)
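A bare-bones autocorrelation pitch estimate in Python (not the helmholtz~ algorithm, which is more refined): the lag of the strongest autocorrelation peak within a plausible range gives the period.

import numpy as np

def pitch_autocorr(frame, fs, fmin=70.0, fmax=1000.0):
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]   # lags >= 0
    lo, hi = int(fs / fmax), int(fs / fmin)                         # plausible pitch periods
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag                                                 # period in samples -> Hz

fs = 44100
t = np.arange(2048) / fs
print(pitch_autocorr(np.sin(2 * np.pi * 220 * t), fs))              # prints roughly 220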
Disassembling time into very small pieces
Adapted from Andy Farnell, “Designing Sound”
https://reactivemusic.net/?p=11385 Download these patches from: https://github.com/tkzic/max-projects folder: granular-timestretch
…coming soon
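A minimal granular time-stretch sketch in Python (not the Max patches above, and the output level is not normalized): read short windowed grains from the input at one rate and overlap-add them into the output at another.

import numpy as np

def granular_stretch(x, stretch=2.0, grain=2048, hop=512):
    out_len = int(len(x) * stretch)
    out = np.zeros(out_len + grain)
    window = np.hanning(grain)
    write = 0
    while write + grain < out_len:
        read = int(write / stretch)              # where this grain comes from in the input
        g = x[read:read + grain]
        if len(g) < grain:
            break
        out[write:write + grain] += g * window   # overlap-add the windowed grain
        write += hop
    return out[:out_len]

fs = 44100
t = np.arange(fs) / fs
stretched = granular_stretch(np.sin(2 * np.pi * 440 * t), stretch=2.0)   # twice as long, same pitch (roughly)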
Changing sound into pictures and back into sound
by Tadej Droljc
https://reactivemusic.net/?p=16887
(Example of 3d speech processing at 4:12)
local file: SSP-dissertation/4 – Max/MSP/Jitter Patch of PV With Spectrogram as a Spectral Data Storage and User Interface/basic_patch.maxpat
Try recording a short passage, then set bound mode to 4, and click autorotate
Understanding the meaning of speech
A conversation with a robot in Max
https://reactivemusic.net/?p=9834
Google speech uses neural networks, statistics, and large quantities of data.
Changes in the environment reflected by sound
“You can talk to the animals…”
Pig creatures example: http://vimeo.com/64543087
What about Jar Jar Binks?
The sound changes but the words remain the same.
The Speech accent archive https://reactivemusic.net/?p=9436
We are always singing.
by Xavier Serra and UPF
Harmonic Model Plus Residual (HPR) – Build a spectrogram using STFT, then identify where there is strong correlation to a tonal harmonic structure (music). This is the harmonic model of the sound. Subtract it from the original spectrogram to get the residual (noise).
Settings for above example:
Acoustic Brainz: (typical analysis page) https://reactivemusic.net/?p=17641
Essentia (open source feature detection tools) https://github.com/MTG/essentia
Freesound (vast library of sounds): https://www.freesound.org – look at “similar sounds”
A sad thought
This method (phase cancellation) was used to send secret messages during World War II. It’s now used in cell phones to get rid of echo. It’s also used in noise-canceling headphones.
https://reactivemusic.net/?p=8879
max-projects/phase-cancellation/phase-cancellation-example.maxpat
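The core idea in a few lines of Python (my illustration, not the Max patch): a signal added to an inverted copy of itself cancels to silence.

import numpy as np

fs = 44100
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 440 * t)

cancelled = signal + (-signal)          # the inverted copy cancels the original
print(np.abs(cancelled).max())          # 0.0 – silence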
What is not left and not right?
Ableton Live – utility/difference device: https://reactivemusic.net/?p=1498 (Alison Krauss example)
Local file: Ableton-teaching-examples/vocal-eliminator
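A sketch of the same cancellation trick in Python (assuming a stereo file, here called “mix.wav”, with the vocal panned to the center): subtracting the right channel from the left removes whatever is identical in both channels, which is often the lead vocal.

import numpy as np
from scipy.io import wavfile

sr, stereo = wavfile.read("mix.wav")
left = stereo[:, 0].astype(np.float64)
right = stereo[:, 1].astype(np.float64)

side = (left - right) / 2                        # center-panned material cancels out
side /= np.abs(side).max()
wavfile.write("karaoke.wav", sr, (side * 32767).astype(np.int16))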
Questions