Gestural Music Sequencer [Processing, Sound] – Performance tool that converts video input into music, by @jkeston


For those that may not know, I had the pleasure of being invited to speak at the Flashbelt conference, which has just ended. Over the next few weeks I will be writing about the great people I got to meet there, and one of them is John Keston, founder of AudioCookbook.org. The site is a non-profit resource for music and sound enthusiasts, made possible by contributions from Unearthed Music.

One of the AC-org projects is the GMS, a Gestural Music Sequencer developed in Processing by John Keston. The application is essentially a performance tool that converts video input into musical phrases. It samples video and displays it either as-is or reversed, so it appears as though you’re looking into a mirror. Each frame is analyzed for brightness, then the X and Y coordinates of the brightest pixel are converted into a MIDI note. The X axis is used to select a pitch, while the Y axis determines the dynamics. As users move, dance, gesture, or draw in front of the capture device, notes are generated based on a predetermined scale. Currently the available scales are pentatonic minor, whole tone, major, minor, and chromatic, all of which can be dynamically selected during a performance.
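
To make the mapping concrete, here is a minimal sketch (in Python rather than the original Processing, and not taken from the GMS source) of the core idea: scan a frame for its brightest pixel, quantize its X position to a pitch from a chosen scale across an octave range, and map its Y position to a MIDI velocity. The scale interval tables, octave range, and function names are illustrative assumptions.

```python
# Illustrative sketch of the GMS brightest-pixel-to-MIDI idea.
# Scale definitions as semitone offsets within one octave.
SCALES = {
    "pentatonic_minor": [0, 3, 5, 7, 10],
    "whole_tone": [0, 2, 4, 6, 8, 10],
    "major": [0, 2, 4, 5, 7, 9, 11],
    "minor": [0, 2, 3, 5, 7, 8, 10],
    "chromatic": list(range(12)),
}

def brightest_pixel(frame):
    """frame: 2D list of brightness values (0-255). Returns (x, y)."""
    best, best_val = (0, 0), -1
    for y, row in enumerate(frame):
        for x, v in enumerate(row):
            if v > best_val:
                best_val, best = v, (x, y)
    return best

def pixel_to_note(x, y, width, height, scale="pentatonic_minor",
                  low_octave=3, high_octave=6):
    """Quantize X to a scale degree in the octave range; map Y to velocity."""
    degrees = SCALES[scale]
    steps = (high_octave - low_octave + 1) * len(degrees)
    i = min(int(x / width * steps), steps - 1)
    octave, degree = divmod(i, len(degrees))
    pitch = 12 * (low_octave + octave) + degrees[degree]
    # Top of the frame is loudest; bottom is quietest.
    velocity = 127 - min(int(y / height * 128), 127)
    return pitch, velocity
```

In a real sketch the resulting (pitch, velocity) pair would be sent as a MIDI note-on message each beat; here the mapping alone is shown.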

Other dynamic controls include the MIDI output channel, BPM, low and high octave limits, transposition, sustain, duration selection (manual or randomized with probability distributions), and note randomization. A “free” mode allows the durations to be driven by the mean brightness of the video input. Finally, four simple video filter presets were recently added; they can be applied by pressing Shift + [1-4]. The application works especially well in dark lighting, using a light source to control the sequencer. More information on the controls and features can be found in the readme.txt file distributed with the download.
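
The two duration behaviours described above can be sketched as follows. This is a hypothetical illustration, not the GMS source: the duration values, their probability weights, and the brightness scaling are assumptions chosen for demonstration.

```python
import random

# Randomized mode: draw note durations (in beats) from a weighted
# probability distribution. Values and weights are assumptions.
DURATIONS = [0.25, 0.5, 1.0, 2.0]
WEIGHTS = [0.4, 0.3, 0.2, 0.1]

def random_duration(rng=random):
    """Pick a duration according to the probability distribution."""
    return rng.choices(DURATIONS, weights=WEIGHTS, k=1)[0]

def free_mode_duration(frame, max_beats=2.0):
    """'Free' mode: scale duration by the mean brightness of the frame,
    so brighter frames hold notes longer."""
    flat = [v for row in frame for v in row]
    mean = sum(flat) / len(flat)
    return max_beats * mean / 255.0
```

A fully bright frame yields the maximum duration, a black frame yields zero, and anything in between scales linearly.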

You can read more and download the beta version of the application by visiting

John Keston – development
Grant Muller – work on improving the RWMidi library for Processing
Ali Momeni – for inspiring John to develop the software as a potential tool for Minneapolis Art on Wheels


Author: Filip

Architect, Lecturer, New Media Technologist. Based in London and Berlin.



The audio for the lava lamp is created by analyzing the level of brightness for each frame in realtime!

