Sync

From Score to System

Composing with Distributed Networks in the Sync Platform

Rendering of the Sync spatial performance grid. The performers, represented as black figures, are placed among or around the white figures of the audience, creating an omni-directional sonic landscape.

Sync is a distributed performance system for composing music as a real-time, spatial experience. It enables composers to send evolving instructions to a network of performers, creating music that unfolds through motion, interaction, and spatial relationships. Built on principles from systems theory, generative design, and spatial computing, Sync opens up new ways of thinking about harmony, rhythm, and structure.


Traditional music composition is rooted in the centrality of the score: a fixed, linear framework where each performer’s part is predefined and synchronized in time. Whether guided by a conductor or a sequencer, ensemble performance typically operates within a hierarchical structure, oriented toward a front-facing audience. The composer, in this model, is the architect of a static plan: what happens, when, and to whom.

Sync offers an alternative: it frames composition as the design of a responsive system. In Sync, each performer becomes a node in a distributed network, receiving individualized instructions in real time. These instructions can be deterministic or rule-based, enabling both precision and variation. The audience is immersed inside the system, surrounded by sound that unfolds dynamically, shaped by proximity, spatial relations, and performer interaction.
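To make the idea of individualized, real-time instructions concrete, here is a minimal TypeScript sketch of what a per-performer message and its routing could look like. The field names, the rule text, and the delivery mechanism are illustrative assumptions, not Sync's actual data format.

```typescript
// Hypothetical sketch of a per-performer instruction and its dispatch.
// Field names (performerId, playAt, rule, etc.) are assumptions for
// illustration, not the Sync wire format.

interface PerformerInstruction {
  performerId: string;                       // e.g. "D7", a node in the grid
  gridPosition: { row: number; col: number };
  pitch?: string;                            // e.g. "F#4"; omitted for a rest
  dynamics?: "pp" | "p" | "mf" | "f" | "ff";
  playAt: number;                            // scheduled time (ms since performance start)
  rule?: string;                             // optional rule-based prompt
}

// A deterministic instruction fixes pitch and time; a rule-based one
// delegates part of the decision to the performer.
const fixed: PerformerInstruction = {
  performerId: "D7",
  gridPosition: { row: 0, col: 7 },
  pitch: "F#4",
  dynamics: "p",
  playAt: 12_800,
};

const ruleBased: PerformerInstruction = {
  performerId: "D23",
  gridPosition: { row: 2, col: 7 },
  playAt: 12_800,
  rule: "choose any note from the current pitch set and hold it",
};

// Each device would subscribe only to its own stream of instructions,
// e.g. over a channel keyed by performerId.
function routeInstruction(send: (id: string, msg: string) => void, i: PerformerInstruction) {
  send(i.performerId, JSON.stringify(i));
}
```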


In Sync, harmony is a spatial phenomenon. Each performer acts as a localized emitter of sound, and their arrangement in space forms harmonic relationships that change with proximity and distribution. This approach reframes composition as a form of structural thinking, where relationships between nodes determine how sound unfolds across space and time.

This spatial harmonic relationship is best understood via the metaphor of additive color mixing. Just as overlapping light sources blend into new colors depending on their hue and intensity, overlapping musical voices in Sync create perceptual mixtures. A major triad formed by three surrounding nodes might feel radiant and stable, while a suspended or dissonant set produces tension that varies depending on the listener’s position within the field.

Spatial harmony illustrated through additive color mixing. Each white circle represents a performer emitting sound.
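A rough sketch of the mixing idea, assuming a simple inverse-distance attenuation: every listener hears the same pitch set, but the weighting, and therefore the perceived color of the chord, depends on where they stand. The attenuation model and names are illustrative, not a description of Sync's audio behavior.

```typescript
// Spatial harmony as additive mixing (illustrative only). Each performer is
// treated as a point source whose contribution at the listener's position
// falls off with distance (here: simple 1/d attenuation).

interface Emitter {
  pitch: string;                  // e.g. "C4", "E4", "G4"
  position: { x: number; y: number };
}

function perceivedMix(listener: { x: number; y: number }, emitters: Emitter[]) {
  return emitters
    .map((e) => {
      const dx = e.position.x - listener.x;
      const dy = e.position.y - listener.y;
      const distance = Math.max(Math.hypot(dx, dy), 0.5); // clamp to avoid blow-up
      return { pitch: e.pitch, weight: 1 / distance };     // louder when closer
    })
    .sort((a, b) => b.weight - a.weight);
}

// A listener standing near the E performer hears the same C-major triad as
// everyone else, but with E dominating the blend: the "color" of the chord
// shifts with position even though the pitch set is fixed.
const triad: Emitter[] = [
  { pitch: "C4", position: { x: 0, y: 0 } },
  { pitch: "E4", position: { x: 4, y: 0 } },
  { pitch: "G4", position: { x: 2, y: 3 } },
];
console.log(perceivedMix({ x: 3.5, y: 0.5 }, triad));
```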

The music is distributed in space and capable of movement through it. The system treats space as an active dimension of composition, where motion is articulated through the sequence, transformation, or displacement of sound sources across the performer grid.

Movement can be expressed in many ways. A melodic figure can travel horizontally across a row of performers, rotate around the audience, or ripple diagonally through the grid. A harmonic field can shift as one node drops out and another takes its place. Even silence can move: a rest traveling through the grid like a shadow. This introduces a new kind of listening based on trajectory, perspective, and spatial memory. Because each node is independent, motion becomes a property of the system: a function of sequence, interaction, or displacement.
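The sketch below treats motion as a routing problem: a motif is handed from node to node along a path around the audience. The 5x5 grid, the row-major performer ids, and the step duration are assumptions for the example, not the system's actual layout.

```typescript
// Motion as a property of the system: a short motif is not played by one
// performer but handed across the grid, one node per step.
// The 5x5 layout, row-major ids, and step duration are example assumptions.

const ROWS = 5;
const COLS = 5;
const STEP_MS = 1000;

// Performer ids around the perimeter, in clockwise order, so a figure can
// "rotate" around an audience seated inside the grid.
function perimeterClockwise(): string[] {
  const ids: string[] = [];
  for (let c = 0; c < COLS; c++) ids.push(`D${c}`);                          // top row, left to right
  for (let r = 1; r < ROWS; r++) ids.push(`D${r * COLS + (COLS - 1)}`);      // right column, downward
  for (let c = COLS - 2; c >= 0; c--) ids.push(`D${(ROWS - 1) * COLS + c}`); // bottom row, right to left
  for (let r = ROWS - 2; r >= 1; r--) ids.push(`D${r * COLS}`);              // left column, upward
  return ids;
}

// Assign each note of a motif to the next performer along the path.
function rotateMotif(motif: string[], startAt: number) {
  const path = perimeterClockwise();
  return motif.map((pitch, i) => ({
    performerId: path[i % path.length],
    pitch,
    playAt: startAt + i * STEP_MS,
  }));
}

// The same idea covers horizontal travel (walk a row), diagonal ripples
// (walk cells where row + col is constant), or a moving rest (send silence along the path).
console.log(rotateMotif(["D4", "F4", "A4", "C5"], 0));
```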

Top Row: Performers execute directional movements (rotations, vertical shifts, and diagonal gestures) guided by real-time mobile prompts. Bottom Row: Highlighted circles indicate active roles or featured sonic gestures, shifting focal points across the group.

Watch an early walkthrough of Sync’s core components and real-time behavior.

This is an early demo of Sync, where I explain how the system works. Sync is built around three core components: Ableton Live (used to organize and sequence behavior via Max for Live), a central control dashboard, and a web-based performer interface accessible on mobile devices. Within Max for Live, composers can implement a variety of structural models such as fractal logic, graph networks, cellular automata, or matrix-based transformations to generate dynamic instructions in real time. A single instance of the system can support 25 or more performers simultaneously, and multiple instances can be networked for larger ensembles.
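As one illustration of these structural models, the sketch below uses a one-dimensional cellular automaton (elementary rule 90, chosen arbitrarily) to decide which of 25 performers receive a play instruction on each step. It stands in for the kind of logic the Max for Live layer could run, rather than reproducing Sync's actual implementation.

```typescript
// A 1-D cellular automaton deciding which performers are active on each step.
// The rule (elementary rule 90) and the mapping to performers are example
// assumptions, not Sync's Max for Live implementation.

const PERFORMERS = 25;

// One automaton step: a cell's next state depends on its two neighbors
// (rule 90 reduces to XOR of the left and right neighbors).
function step(cells: boolean[]): boolean[] {
  return cells.map((_, i) => {
    const left = cells[(i - 1 + cells.length) % cells.length];
    const right = cells[(i + 1) % cells.length];
    return left !== right;
  });
}

// Generate a sequence of activation patterns; each active cell becomes a
// "play" instruction for the corresponding performer on that step.
function generateActivations(steps: number): string[][] {
  let cells = Array.from({ length: PERFORMERS }, (_, i) => i === Math.floor(PERFORMERS / 2));
  const frames: string[][] = [];
  for (let s = 0; s < steps; s++) {
    frames.push(cells.flatMap((on, i) => (on ? [`D${i}`] : [])));
    cells = step(cells);
  }
  return frames;
}

console.log(generateActivations(8)); // e.g. [["D12"], ["D11","D13"], ["D10","D14"], ...]
```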

Currently, there is an 800ms delay between when a note is triggered in Ableton Live and when it appears on the performer’s device. This latency is intentional, giving performers time to prepare. However, it can be adjusted as needed—down to near-instant triggering—for different performance scenarios.
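A minimal sketch of how such a lookahead could work, assuming the sequencer timestamps each instruction and the performer client waits until the scheduled time before displaying it. Only the 800 ms default comes from the description above; the scheduling code itself is illustrative.

```typescript
// Configurable lookahead between the sequencer trigger and the moment an
// instruction appears on a performer's device. The 800 ms default comes from
// the current system; the rest is an illustrative sketch.

const DEFAULT_LOOKAHEAD_MS = 800;

interface ScheduledNote {
  performerId: string;
  pitch: string;
  showAt: number; // absolute wall-clock time (ms) when the note should appear
}

// Called when a note is triggered "now": push the display time into the
// future so the performer has time to prepare.
function scheduleNote(
  performerId: string,
  pitch: string,
  lookaheadMs: number = DEFAULT_LOOKAHEAD_MS
): ScheduledNote {
  return { performerId, pitch, showAt: Date.now() + lookaheadMs };
}

// On the performer's device: wait until the scheduled time, then display.
function displayWhenDue(note: ScheduledNote, render: (pitch: string) => void) {
  const wait = Math.max(note.showAt - Date.now(), 0); // near-instant if lookahead is 0
  setTimeout(() => render(note.pitch), wait);
}
```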

Watch the first improvisation using the Sync System.

This screen recording shows the first live improvisation with the Sync system. The video captures the Dashboard interface, where each highlighted circle represents a performer’s activity—not the notes themselves. A minimal pitch set was used, focusing on spatial patterns and the interplay between structure and improvisation.

Watch an improvisation inspired by Debussy’s Nuages.

This screen recording presents a live improvisation using the Sync system, inspired by Debussy’s Nuages. The performance blends free improvisation with looped patterns.

On the left side of the screen, the Sync dashboard displays real-time performer activity. Each circle represents a performer; when it turns yellow, the performer is active — offering a bird’s-eye view of the ensemble’s dynamics.

On the right, three mobile phone screens simulate the performer interfaces for positions D0, D7, and D23. Each interface shows a floating musical staff with personalized notes, a direct representation of what each performer sees on their device.
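For readers curious how a personalized note could be placed on such a staff, here is a hypothetical sketch of the standard mapping from pitch name to vertical position. It is not taken from Sync's actual web client; the pixel spacing and helper names are assumptions.

```typescript
// Hypothetical sketch: placing a received pitch on a five-line staff.
// The diatonic-step math is standard notation logic; the real Sync client's
// rendering details are not shown here.

const LETTER_INDEX: Record<string, number> = { C: 0, D: 1, E: 2, F: 3, G: 4, A: 5, B: 6 };

// Diatonic steps above middle C (C4); each step spans half the distance
// between two staff lines, so it maps directly to a vertical offset on screen.
function staffSteps(pitch: string): number {
  const letter = pitch[0].toUpperCase();
  const octave = parseInt(pitch.slice(-1), 10);
  return (octave - 4) * 7 + LETTER_INDEX[letter];
}

// E4 sits on the bottom line of the treble staff; offsets are measured from it.
function yOffsetPx(pitch: string, lineSpacingPx = 12): number {
  const stepsAboveBottomLine = staffSteps(pitch) - staffSteps("E4");
  return -(stepsAboveBottomLine * lineSpacingPx) / 2; // negative = higher on screen
}

console.log(yOffsetPx("F#4")); // F sits in the first space, half a line-spacing above E
```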

Keywords

Distributed Composition, Networked Performance, Spatial Sound, Generative Systems, Emergence, Real-time Music, Systemic Music, Algorithmic Composition