
Explore how to bridge the gap between sound and sight using the powerful connection of Ableton Live and TouchDesigner. From visual processing to MIDI mapping, we break down a custom performance setup where harmonised vocals drive immersive, responsive visuals.
Immersive and responsive visuals can add a real sense of intrigue and depth to a live performance. There are a couple of solutions you could explore, but few integrate with Ableton as well as TouchDesigner.
TouchDesigner is a visual, node-based development platform. Think of it a little like Max, where each node has a specific function, and data flows through connected nodes. Unlike Max, though, it handles a vast array of different data types, and you’ll most often find yourself using it to create something visual. You can download TouchDesigner right now, absolutely free, though you’ll be limited to a maximum output resolution of 1280x1280. Need more? You’ll need a license.
Let’s step through the entire process using one particular performance as our guide. We’ll start with the Ableton setup, move on to the TouchDesigner network, and finish by showing how the two connect and interact in a live context.
In this performance, the visuals are partly controlled by the audio, while the audio is also shaped in real time by the visuals. In this tutorial, we’ll break down exactly how the entire performance was put together.
Ableton
The piece in the video above features up to seven voices: the live vocal and six harmonised voices, each managed by a separate instance of Ableton’s Auto Shift. Though Auto Shift supports polyphony, using individual instances allows precise control over each voice.
MIDI from the live controller enters via a master MIDI track, is slightly transposed, the velocity flattened, and then routed to a custom Max for Live device, Voice Allocate. Voice Allocate splits all the notes played, assigns each a unique ID, and sends each note to a Voice Receive (a further custom Max for Live device designed to receive just one of those IDs).
Since Auto Shift is an audio effect, each instance is paired with a MIDI track, and each MIDI track receives its input from a single instance of Voice Receive. Finally, each Auto Shift is set to monophonic MIDI mode, receiving from its counterpart track. The result: each Auto Shift receives a single note, split out from the one controller’s input.
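The allocation step can be sketched in a few lines of Python. This is an illustrative reconstruction, not the actual Voice Allocate patch: each incoming note-on takes the lowest free voice ID, and the matching note-off frees that voice so its Auto Shift becomes available again.

```python
NUM_VOICES = 6  # one per Auto Shift instance

class VoiceAllocator:
    """Assigns each held note the lowest free voice ID (0-5)."""

    def __init__(self, num_voices=NUM_VOICES):
        self.free = set(range(num_voices))  # voice IDs not currently in use
        self.held = {}                      # MIDI pitch -> voice ID

    def note_on(self, pitch):
        """Route a note-on to a voice; returns None if all voices are busy."""
        if not self.free:
            return None  # a real device might steal a voice here instead
        voice = min(self.free)
        self.free.remove(voice)
        self.held[pitch] = voice
        return voice

    def note_off(self, pitch):
        """Free the voice that was holding this pitch."""
        voice = self.held.pop(pitch, None)
        if voice is not None:
            self.free.add(voice)
        return voice

alloc = VoiceAllocator()
alloc.note_on(60)   # routed to voice 0
alloc.note_on(64)   # routed to voice 1
alloc.note_off(60)  # voice 0 is free again
alloc.note_on(67)   # reuses voice 0
```

Each voice ID then maps to one Voice Receive instance, which passes the note on to its paired MIDI track.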
Audio from the live vocal is routed to a master audio track and then to all instances of Auto Shift.

The MIDI input channel is selected here. Pitch handles transposition, Velocity flattens the dynamics of all incoming notes, Voice Allocate splits the notes across voices, and the TDA MIDI device handles the TouchDesigner connection (explained shortly).
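The first two steps of that chain are simple per-note transforms. A hypothetical sketch follows; the actual transposition amount and fixed velocity are assumptions, since the setup is only described as "slightly transposed" with velocity "flattened":

```python
TRANSPOSE = 2        # semitones; assumed value for illustration
FLAT_VELOCITY = 100  # assumed fixed dynamic for all notes

def preprocess(pitch: int, velocity: int) -> tuple[int, int]:
    """Transpose the note and replace its velocity with a fixed value."""
    return pitch + TRANSPOSE, FLAT_VELOCITY

print(preprocess(60, 37))   # -> (62, 100)
print(preprocess(60, 127))  # -> (62, 100): dynamics are flattened
```

Flattening velocity ensures every harmony voice responds identically regardless of how hard the controller keys are played.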
What we end up with is a live-played vocal harmoniser – up to six harmony voices plus the live vocal, each sitting in its own distinct channel.
TDAbleton
TouchDesigner primarily uses OSC to communicate with Ableton, but its bundled scripts and Max for Live devices make the setup easy to get running.
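Under the hood, OSC messages are UDP packets with a simple binary layout: a null-padded address string, a null-padded type-tag string, then the arguments in big-endian form. A minimal, stdlib-only sketch of that wire format (the address here is made up for illustration; TDAbleton defines its own):

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC spec."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, value: float) -> bytes:
    """Encode a minimal OSC message carrying one float32 argument."""
    return (osc_pad(address.encode())  # padded address pattern
            + osc_pad(b",f")           # type tag: one float argument
            + struct.pack(">f", value))

msg = osc_message("/live/volume", 0.8)
# To send it over UDP, e.g. to TouchDesigner listening on port 7000:
# import socket
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, ("127.0.0.1", 7000))
```

In practice you never build these packets by hand: TDAbleton's devices handle the encoding, addressing and port configuration for you.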
We’re going to use TDAbleton – the bundled collection of Max for Live devices and acco