
July 19, 2010

Gestural Music Sequencer [Processing, Sound]

For those who may not know, I had the pleasure of being invited to speak at the Flashbelt conference, which has just ended. Over the next few weeks I will be writing about the great people I got to meet there, and one of them is John Keston, founder of AudioCookbook.org. The site is a non-profit resource for music and sound enthusiasts, made possible by contributions from Unearthed Music.

One of the site's projects is the Gestural Music Sequencer (GMS), developed in Processing by John Keston. The application is essentially a performance tool that converts video input into musical phrases. It samples video and displays it either normally or inverted, so it looks as though you're looking into a mirror. Each frame is analyzed for brightness, and the X and Y coordinates of the brightest pixel are converted into a MIDI note: the X axis selects a pitch, while the Y axis determines the dynamics. As users move, dance, gesture, or draw in front of the capture device, notes are generated based on a predetermined scale. The available scales are currently pentatonic minor, whole tone, major, minor, and chromatic, all of which can be selected dynamically during a performance.
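To make the mapping concrete, here is a minimal sketch (my own illustration, not John Keston's actual code) of the core idea: scan a grayscale frame for its brightest pixel, map its X position onto a scale degree (pentatonic minor, one of the scales GMS offers) and its Y position onto a MIDI velocity.

```java
public class BrightestPixelMapper {
    // Pentatonic minor intervals in semitones above the root.
    static final int[] PENTATONIC_MINOR = {0, 3, 5, 7, 10};

    // Returns {note, velocity} for a grayscale frame (values 0-255),
    // stored row-major as frame[y * width + x].
    static int[] mapFrame(int[] frame, int width, int height, int rootNote) {
        // Find the index of the brightest pixel.
        int brightest = 0;
        for (int i = 1; i < frame.length; i++) {
            if (frame[i] > frame[brightest]) brightest = i;
        }
        int x = brightest % width;
        int y = brightest / width;

        // X axis selects a scale degree, here spread across two octaves.
        int degrees = PENTATONIC_MINOR.length * 2;
        int degree = x * degrees / width;
        int note = rootNote + 12 * (degree / PENTATONIC_MINOR.length)
                 + PENTATONIC_MINOR[degree % PENTATONIC_MINOR.length];

        // Y axis sets dynamics: nearer the top of the frame = louder.
        int velocity = 127 - (y * 127 / (height - 1));
        return new int[]{note, velocity};
    }

    public static void main(String[] args) {
        // 4x2 toy frame with the brightest pixel at x=3, y=0.
        int[] frame = {10, 20, 30, 250, 5, 5, 5, 5};
        int[] result = mapFrame(frame, 4, 2, 57); // root note A3
        System.out.println(result[0] + " " + result[1]); // prints "74 127"
    }
}
```

In a real sketch the note and velocity would be sent out as a MIDI note-on message each beat; the two-octave spread and root note here are arbitrary choices for illustration.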

Other dynamic controls include the MIDI out channel, BPM, low and high octave limits, transposition, sustain, duration selection (manual, or randomized with probability distributions), and note randomization. A “free” mode allows the durations to be modulated by the mean brightness of the video input. Finally, four simple video filter presets were recently added that can be applied by pressing shift + [1-4]. The application works especially well in dark lighting, using a light source to control the sequencer. More information on the controls and features can be found in the readme.txt file distributed with the download.
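The “free” mode idea can be sketched roughly like this (a hypothetical illustration; the direction of the mapping and the duration range are my assumptions, and the readme.txt distributed with GMS documents the actual behavior): average the brightness of a frame and scale the note duration between a shortest and longest value.

```java
public class FreeModeDuration {
    // Mean brightness of a grayscale frame, values 0-255.
    static double meanBrightness(int[] frame) {
        long sum = 0;
        for (int v : frame) sum += v;
        return (double) sum / frame.length;
    }

    // Assumed mapping: brighter frames produce longer notes, linearly
    // interpolated between minMs and maxMs.
    static int durationMs(int[] frame, int minMs, int maxMs) {
        double mean = meanBrightness(frame);
        return minMs + (int) Math.round((maxMs - minMs) * mean / 255.0);
    }

    public static void main(String[] args) {
        int[] dark = {0, 0, 0, 0};
        int[] bright = {255, 255, 255, 255};
        System.out.println(durationMs(dark, 50, 1000));   // prints 50
        System.out.println(durationMs(bright, 50, 1000)); // prints 1000
    }
}
```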

You can read more and download the beta version of the application by visiting

John Keston – development.
Grant Muller – work on improving the RWMidi library for Processing.
Ali Momeni – inspiring John to develop the software as a potential tool for Minneapolis Art on Wheels.


  • Author: Filip

    Architect, Lecturer, New Media Technologist. Based in London and Berlin. MacBook Pro 15″ + iPhone.







Nuit Blanche + Making of

July 15, 2010

A great example of compositing and effects.

My Secret Heart – Excerpts

June 25, 2010

Wonderful performance piece using a mixture of live and pre-recorded video.

Call for video works: YouTube – playbiennial’s Channel

June 15, 2010

Intro to Motion

April 30, 2010

Here are some handy links to get you started with Apple’s motion graphics package, Motion:

Storyboarding Universe

April 27, 2010
This sounds worth getting along to:
Friday on My Mind at AFTRS this week ‘Storyboarding Universe’ – David Russell is one of Hollywood’s top conceptual illustrators who cut his teeth on Star Wars: Return of the Jedi. Since then, he has created key sequences for over 100 productions including Who Framed Roger Rabbit, Batman, Terminator 2, The Thin Red Line, Moulin Rouge, Wolverine, Sanctum and The Chronicles of Narnia. He shares with us the secrets of capturing cinematic style and action at the crucial planning stage.

This Friday 5pm AFTRS Sydney – all welcome!

Top ten worst special effects in movies?

March 21, 2010

A list compiled in the SMH. I think the list could have been endless!