In May 2013 I attended the second Processing Code Jam, organized by some friends in collaboration with Open Tech School. It’s a similar concept to our Berlin Mini Game Jam, except that the focus is on creative coding rather than games: artistically interesting applications built with frameworks like Processing, Cinder, openFrameworks, etc.
One of the themes was “The color of sound”, if I remember correctly. My idea was a simple application that would record a short audio clip from the microphone and then procedurally generate visuals, seeded by features extracted from an analysis of the audio. In other words, you could say a word into the microphone, and the application would take a “photo” of it. Since we’re dealing with spoken words, a typical feature to use here is the mel cepstrum (mel-frequency cepstral coefficients, or MFCCs), which has traditionally been used in speech recognition.
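To give an idea of what “features extracted from the audio” means in practice, here is a minimal sketch of the classic MFCC pipeline in Python with numpy: window the clip, take the power spectrum, pool it through a triangular mel filterbank, take the log, and decorrelate with a DCT. This is a generic illustration of the technique, not the jam project’s actual code, and all function names here are my own.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale,
    # covering 0 Hz up to the Nyquist frequency.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mfcc(signal, sr, n_filters=26, n_coeffs=13, n_fft=512):
    # Power spectrum of the windowed clip.
    spectrum = np.abs(np.fft.rfft(signal * np.hamming(len(signal)), n_fft)) ** 2
    # Energy in each mel band, then log compression.
    log_energies = np.log(mel_filterbank(n_filters, n_fft, sr) @ spectrum + 1e-10)
    # DCT-II to decorrelate the log energies into cepstral coefficients.
    n = np.arange(n_filters)
    basis = np.cos(np.pi * np.outer(np.arange(n_coeffs), (2 * n + 1) / (2.0 * n_filters)))
    return basis @ log_energies

# Example: a 440 Hz tone stands in for a spoken word; the resulting
# coefficient vector could seed colors or shapes in the visuals.
sr = 16000
t = np.arange(512) / sr
coeffs = mfcc(np.sin(2 * np.pi * 440.0 * t), sr)
print(coeffs.shape)
```

The coefficients form a compact fingerprint of the clip’s spectral shape, which is exactly what makes them a convenient seed: similar-sounding words map to similar visuals.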
After a bit of testing, it became clear that the application was more fun if it continuously rendered visuals in real time instead of following the “photo” concept.
To try the application, follow the instructions in the README file in the GitHub repository.