10048 Gestural Music Sequencer [Processing, Sound]
For those who may not know, I had the pleasure of being invited to speak at the Flashbelt conference, which has just ended. Over the next few weeks I will be writing about the great people I met there, one of whom is John Keston, founder of AudioCookbook.org. The site is a non-profit resource for music and sound enthusiasts, made possible by contributions from Unearthed Music.
One of the AC-org projects is the GMS, a Gestural Music Sequencer developed in Processing by John Keston. The application is essentially a performance tool that converts video input into musical phrases. It samples video and displays it either normally or mirrored, so it looks as though you’re looking into a mirror. Each frame is analyzed for brightness, and the X and Y coordinates of the brightest pixel are converted into a MIDI note: the X axis selects a pitch, while the Y axis determines the dynamics. As users move, dance, gesture, or draw in front of the capture device, notes are generated based on a predetermined scale. Currently the available scales are pentatonic minor, whole tone, major, minor, and chromatic, all of which can be selected dynamically during a performance.
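The core idea can be sketched in a few lines of Java (the language underlying Processing). This is not the GMS source, just a minimal illustration under assumed names: the frame is modeled as a 2-D brightness array, the brightest pixel's X picks a degree from a pentatonic minor scale, and its Y sets the MIDI velocity.

```java
// Hypothetical sketch of the GMS mapping: find the brightest pixel,
// map X to a scale pitch and Y to dynamics (MIDI velocity).
public class BrightestPixelNote {
    // Pentatonic minor intervals in semitones, one of the scales GMS offers.
    static final int[] PENTATONIC_MINOR = {0, 3, 5, 7, 10};

    // Returns {note, velocity} for the brightest pixel in the frame.
    // brightness[y][x] holds values 0-255; baseNote is the lowest MIDI note.
    static int[] frameToNote(int[][] brightness, int baseNote) {
        int h = brightness.length, w = brightness[0].length;
        int bestX = 0, bestY = 0, best = -1;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (brightness[y][x] > best) {
                    best = brightness[y][x];
                    bestX = x;
                    bestY = y;
                }
            }
        }
        // X axis selects a scale degree across two octaves.
        int degrees = PENTATONIC_MINOR.length * 2;
        int idx = bestX * degrees / w;
        int note = baseNote + 12 * (idx / PENTATONIC_MINOR.length)
                 + PENTATONIC_MINOR[idx % PENTATONIC_MINOR.length];
        // Y axis sets dynamics: top of frame = loud, bottom = soft.
        int velocity = 127 - (bestY * 127 / (h - 1));
        return new int[]{note, velocity};
    }

    public static void main(String[] args) {
        int[][] frame = new int[48][64];
        frame[12][32] = 255; // a single bright spot near the middle
        int[] nv = frameToNote(frame, 57); // 57 = A3, an assumed base note
        System.out.println("note=" + nv[0] + " velocity=" + nv[1]);
    }
}
```

In the real application the resulting note would be sent out over MIDI (via the RWMidi library) rather than printed; the scale table, base note, and octave range here are stand-ins for the configurable controls described below.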
Other dynamic controls include MIDI output channel, BPM, low and high octave, transposition, sustain, duration selection (manual or randomized with probability distributions), and note randomization. A “free” mode allows the durations to be manipulated by the mean brightness of the video input. Finally, four simple video filter presets were recently added; they can be applied by pressing shift + [1-4]. The application works especially well in dark lighting, using a light source to control the sequencer. More information on the controls and features can be found in the readme.txt file distributed with the download.
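Randomized duration selection with a probability distribution can be sketched as a weighted draw from a table of note lengths. The specific durations and weights below are assumptions for illustration, not values from GMS.

```java
import java.util.Random;

// Illustrative weighted-random duration selection, similar in spirit to
// the probability-distributed durations described above. The duration
// table and weights are invented for this example.
public class WeightedDuration {
    static final double[] BEATS   = {0.25, 0.5, 1.0, 2.0}; // 16th to half note
    static final double[] WEIGHTS = {0.4, 0.3, 0.2, 0.1};  // must sum to 1.0

    // Draws one duration according to the weight table.
    static double pickDuration(Random rng) {
        double r = rng.nextDouble(), acc = 0;
        for (int i = 0; i < WEIGHTS.length; i++) {
            acc += WEIGHTS[i];
            if (r < acc) return BEATS[i];
        }
        return BEATS[BEATS.length - 1]; // guard against rounding error
    }

    public static void main(String[] args) {
        Random rng = new Random();
        for (int i = 0; i < 8; i++) {
            System.out.print(pickDuration(rng) + " ");
        }
        System.out.println();
    }
}
```

Shorter durations dominate here (a 16th note is drawn 40% of the time), which tends to produce busy phrases; shifting weight toward longer values would slow the output down without touching the BPM.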
You can read more and download the beta version of the application by visiting audiocookbook.org/gms/
John Keston – Development
Grant Muller – Work on improving the RWMidi library for Processing
Ali Momeni – Inspiring John to develop the software as a potential tool for Minneapolis Art on Wheels
The audio for the lava lamp is generated in real time by analyzing the brightness of the image in every frame.