For years I have been dreaming of combining my love for playing drums with my love for harmony, chords and tonality. Over the last months I have been working on a system that definitely brought me closer: a system that, in a way, reads how I play the drums and is interconnected with my electronics.
In this system, I’m using a combination of microphone signals and Sunhouse Sensory Percussion triggers to get the drum information into Max/MSP and Ableton Live. There, I not only detect the velocity, density and timbre of my drumming but also use some audio analysis to let synthesizers and effects interact with each other and add some unpredictable chaos.
In the end, there is a full sound just from playing a groove. Enjoy!
If you are interested in the signal path, here is more or less what it looks like:
The concept is that every signal is triggered either by an audio signal or by a trigger signal coming from an acoustic source. There are three main elements of audio and MIDI processing:
- Sensory Percussion Plugin
- BinaryPhraseRecognition
- Audio analysis
In the Sensory Percussion plugin, volume, density and sound/timbre are analyzed and forwarded to Max/MSP as MIDI CCs. There they get scaled according to the target parameter and sent to the synthesizer via MIDI.
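To give an idea of that scaling step, here is a minimal sketch in Python, assuming CC values of 0-127 coming in from the plugin; the parameter name and range are illustrative examples, not my actual mapping:

```python
# Minimal sketch of the CC scaling step, assuming incoming values of 0-127.
# The target parameter and its range are hypothetical examples.

def scale_cc(value, out_min, out_max):
    """Map an incoming MIDI CC value (0-127) to a target parameter range."""
    return out_min + (value / 127.0) * (out_max - out_min)

# e.g. map a timbre CC to a hypothetical filter cutoff range of 200-8000 Hz
cutoff = scale_cc(96, 200, 8000)
```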
The binary phrase recognition enables me to control different parameters using specific 16th-note patterns between the bass drum and the snare. This is clearly explained in the video BinaryPhraseRecognition Part 1. One of the challenges was to find a way of recognizing these patterns without restricting my drumming too much. My goal was to recognize a phrase played within a certain amount of time. This window can be set freely or synchronized to the tempo of Ableton Live.
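To make the idea more concrete, here is a minimal sketch in Python of how such a recognition could work, assuming each hit is read as one bit (bass drum = 0, snare = 1) inside a time window; the window length and the phrases are hypothetical, not the actual patch:

```python
import time

# Sketch of the binary phrase idea: hits inside a time window form a
# binary word, which is compared against stored phrases.

WINDOW = 2.0          # seconds; could also be derived from Ableton's tempo
PATTERNS = {          # hypothetical 8-step phrases mapped to actions
    (0, 0, 1, 0, 0, 0, 1, 1): "enable_delay",
    (1, 0, 0, 1, 0, 1, 0, 0): "switch_preset",
}

hits = []             # (timestamp, bit) pairs

def on_hit(bit):
    """Register a bass drum (0) or snare (1) hit and test for a phrase."""
    now = time.time()
    hits.append((now, bit))
    # keep only the hits that fall inside the recognition window
    recent = tuple(b for t, b in hits if now - t <= WINDOW)
    action = PATTERNS.get(recent)
    if action:
        hits.clear()
        print("recognized phrase ->", action)
```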
In the audio analysis in Max/MSP, a very closely miked cymbal is analyzed for its most prominent frequency, which is scaled to MIDI values and sent to an Eventide Space effect pedal. Different parameters can be selected; in the videos Parts 1-3 the “Size” parameter of the reverb is controlled. Due to the chaotic frequency spectrum of the cymbal, the parameter moves in a relatively unpredictable way, which gives me an interesting random element alongside the rather strictly programmed control.
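For those curious about this analysis step, here is a rough sketch in Python of how a most-prominent-frequency detection could be mapped to a CC value, assuming a mono buffer from the close microphone; the frequency range is an assumption, not the real setting:

```python
import numpy as np

# Sketch: find the strongest frequency in an audio buffer and map it
# to a MIDI CC value. The mapped frequency range is hypothetical.

SAMPLE_RATE = 44100

def prominent_freq_to_cc(buffer, f_min=500.0, f_max=12000.0):
    """Return a 0-127 CC value derived from the buffer's peak frequency."""
    spectrum = np.abs(np.fft.rfft(buffer * np.hanning(len(buffer))))
    freqs = np.fft.rfftfreq(len(buffer), 1.0 / SAMPLE_RATE)
    peak = freqs[np.argmax(spectrum)]
    # clamp to the chosen range and scale to 0-127
    peak = min(max(peak, f_min), f_max)
    return int(round((peak - f_min) / (f_max - f_min) * 127))
```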
On the other hand, the audio signal of the Prophet is analyzed and an average level is determined over a certain period of time. This is also scaled to MIDI values, sent to a Strymon Timeline effect pedal, and controls the delay time.
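Again just as a sketch, the averaging could look something like this in Python, assuming successive RMS readings of the synth signal normalized to 0.0-1.0; the window size is illustrative:

```python
from collections import deque
import numpy as np

# Sketch: average the recent signal level of the synth and map it to
# a 0-127 CC value for the delay time. Window size is hypothetical.

levels = deque(maxlen=50)   # roughly a few seconds, depending on block rate

def on_audio_block(block):
    """Update the running average with this block's RMS and return a CC."""
    levels.append(np.sqrt(np.mean(block ** 2)))
    avg = sum(levels) / len(levels)
    # assume levels are normalized to 0.0-1.0
    return int(min(avg, 1.0) * 127)
```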
Here is one Max patch that is central to all the scaling and distribution:
If you have any questions, feel free to leave me a message or a comment!