Creating audio-reactive visuals through real-time instrument input

Goal: 

As the working title suggests, my goal is to enhance live performances by musicians and bands by connecting sound, light and other visuals such as animation or perhaps even generative processes. This approach could also be described as audio-reactive visuals.

Possibilities:

By integrating different software and hardware into a live performance, it becomes possible to create a more immersive experience for the audience as well as for the musician(s) themselves. In the best case, visuals that react directly to instruments and voices (audio-reactive visuals) give the performed music a unique character that not even a pre-programmed live set or an experienced light tech can provide. It can also be used in a wide variety of genres. Styles with a very expressive character and lots of dynamic range can benefit from it, but I can also imagine singer/songwriters using it: sitting alone on stage with their guitar and microphone, and just one moody little light reacting to the words they sing.

Difficulties:

Currently there are a lot of plans and ideas, but just as much work ahead. I have barely any practical experience with the technical aspects necessary to realize a project like this. My first step is to get familiar with visual programming environments such as Pure Data and Max/MSP. Then I need to find a way to feed an analogue input into this software and to route an output signal to an interactive light, for instance. All of this has to happen in real time, with the biggest difficulty being the synchronization of light and sound; latency might be a problem.
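To make that signal chain a little more concrete for myself, here is a minimal sketch of how the pieces could connect. It is written in Python rather than Pd/Max and is only an assumption about the setup: it reads the live audio input, computes a rough loudness value per block, and sends it as an OSC message that a Pure Data or Max/MSP patch (or a light bridge) could map to light intensity. The OSC address /level, port 9000 and the block size are placeholders; it relies on the sounddevice, numpy and python-osc packages.

```python
# Sketch: live audio input -> loudness per block -> OSC message for a patch/light bridge.
import numpy as np
import sounddevice as sd
from pythonosc.udp_client import SimpleUDPClient

OSC_HOST, OSC_PORT = "127.0.0.1", 9000        # placeholder address of the receiving patch
client = SimpleUDPClient(OSC_HOST, OSC_PORT)

def audio_callback(indata, frames, time, status):
    if status:
        print(status)                          # report dropped blocks / overruns
    rms = float(np.sqrt(np.mean(indata ** 2)))  # rough loudness of this audio block
    client.send_message("/level", rms)         # placeholder OSC address

# A small block size keeps latency low, at the cost of more callbacks per second.
with sd.InputStream(channels=1, samplerate=44100, blocksize=256,
                    callback=audio_callback):
    print("Streaming... press Ctrl+C to stop")
    sd.sleep(60_000)                           # run for one minute in this sketch
```

The block size is the main latency trade-off here: smaller blocks mean the light can react sooner to the sound, but the computer has to keep up with more callbacks per second.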
