Cycle Three

For the third phase of my project, I refined the Max patch to improve its responsiveness and overall precision. This interactive setup enables the user to experiment with hand movements as a means of generating both real-time visual elements and synthesized audio. By integrating Mediapipe-based body tracking, the system captures detailed hand and body motion data, which are then used to drive three independent synthesizer voices and the visual components. The result is an integrated multimedia environment where subtle gestures directly influence pitch, timbre, rhythmic patterns, colors, and shapes, allowing for a fluid, intuitive exploration of sound and image.
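
As a rough illustration of this pipeline (a sketch, not the patch itself), the code below assumes MediaPipe's Python Hands solution and an OSC bridge into Max via the python-osc library; the port number and the /hand/* address names are placeholders rather than the ones used in the actual patch.

# Minimal sketch: MediaPipe hand tracking forwarded to Max over OSC.
# Port and OSC addresses are illustrative placeholders.
import cv2
import mediapipe as mp
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7400)   # hypothetical port for a Max udpreceive object
hands = mp.solutions.hands.Hands(max_num_hands=2)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for hand, handedness in zip(results.multi_hand_landmarks,
                                    results.multi_handedness):
            label = handedness.classification[0].label   # "Left" or "Right"
            wrist = hand.landmark[0]                      # normalized 0..1 coordinates
            # One OSC message per hand; Max can route it to the matching synth voice.
            client.send_message(f"/hand/{label.lower()}", [wrist.x, wrist.y, wrist.z])
cap.release()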

Visual component:

Adaptive Visual Feedback:
A reactive visual system has been incorporated, one that responds to the performer's hand movements. Rather than serving as mere decoration, these visuals translate the evolving soundscape into a synchronized visual narrative. The result is an immersive, unified audio-visual experience in which the musical and visual layers reinforce one another.
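
To illustrate the kind of gesture-to-visual mapping described here, the short sketch below assumes the patch exposes OSC-controllable color and scale parameters; the /visual/* addresses and the specific mapping choices are hypothetical.

# Minimal sketch of one possible gesture-to-visual mapping.
# The /visual/* addresses are illustrative placeholders, not the patch's actual ones.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7400)

def update_visuals(x, y, spread):
    """Map normalized hand position and finger spread to visual parameters."""
    hue = x                        # horizontal position sweeps the color wheel
    brightness = 1.0 - y           # raising the hand brightens the scene
    scale = 0.2 + 0.8 * spread     # wider finger spread enlarges the shapes
    client.send_message("/visual/color", [hue, 1.0, brightness])
    client.send_message("/visual/scale", scale)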

Sound component:

Left Hand – Harmonic Spectrum Shaping:
The left hand focuses on sculpting the harmonic spectrum. Through manipulation of partials and overtones, it introduces complexity and depth to the aural landscape. This control over the harmonic series allows for evolving textures that bring richness and variation to the overall sound.
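The idea can be sketched outside Max as a small additive-synthesis example, assuming the left hand's openness (0 = closed, 1 = open) scales the weight of the upper partials; the actual spectral shaping happens inside the patch, so this is only an illustration of the mapping.

# Minimal additive-synthesis sketch: hand openness brightens the harmonic spectrum.
import numpy as np

def harmonic_tone(f0, openness, sr=44100, dur=1.0, n_partials=16):
    t = np.linspace(0, dur, int(sr * dur), endpoint=False)
    tone = np.zeros_like(t)
    for k in range(1, n_partials + 1):
        # A closed hand leaves mostly the fundamental; an open hand
        # lets higher partials through, brightening the timbre.
        amp = (1.0 / k) * (openness ** (k - 1))
        tone += amp * np.sin(2 * np.pi * k * f0 * t)
    return tone / np.max(np.abs(tone))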

Right Hand – Synthesizer Control:
The right hand interfaces with a dedicated synthesizer module. In this role, it manages a range of real-time sound production parameters, including oscillator waveforms, filter cutoff points, modulation rates, and envelope characteristics. By manipulating these elements on the fly, the performer can craft evolving sound lines and dynamically shape the signal.
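
As an illustration of this parameter mapping, the sketch below assumes normalized right-hand features (position, openness, pinch distance) arriving from the tracker and sends hypothetical /synth/* OSC messages to the Max patch; the ranges and addresses are placeholders.

# Minimal sketch: map right-hand features to synth parameters over OSC.
# Addresses, ranges, and feature names are illustrative assumptions.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7400)
WAVEFORMS = ["sine", "saw", "square", "triangle"]

def update_synth(x, y, openness, pinch):
    cutoff = 80.0 * (2 ** (y * 7))       # ~80 Hz to ~10 kHz, exponential sweep
    mod_rate = 0.1 + 19.9 * openness     # LFO rate from 0.1 Hz to 20 Hz
    attack_ms = 5 + 495 * pinch          # pinch distance stretches the envelope attack
    waveform = WAVEFORMS[min(int(x * len(WAVEFORMS)), len(WAVEFORMS) - 1)]
    client.send_message("/synth/waveform", waveform)
    client.send_message("/synth/cutoff", cutoff)
    client.send_message("/synth/mod_rate", mod_rate)
    client.send_message("/synth/attack", attack_ms)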


