Sync
From Score to System
Composing with Distributed Networks in the Sync Platform

Sync is a distributed performance system for composing music as a real-time, spatial experience. It enables composers to send evolving instructions to a network of performers, creating music that unfolds through motion, interaction, and spatial relationships. Built on principles from systems theory, generative design, and spatial computing, Sync opens up new ways of thinking about harmony, rhythm, and structure.
- Mobile-based performer interface
- Scales from 5 to 100+ performers
- Supports real-time triggering and latency control
- Integrates with Max for Live & Ableton
- Spatial harmony & motion logic
- Supports generative, rule-based, and algorithmic composition
Traditional music composition is rooted in the centrality of the score: a fixed, linear framework where each performer’s part is predefined and synchronized in time. Whether guided by a conductor or a sequencer, ensemble performance typically operates within a hierarchical structure, oriented toward a front-facing audience. The composer, in this model, is the architect of a static plan: what happens, when, and to whom.
Sync offers an alternative: it frames composition as the design of a responsive system. In Sync, each performer becomes a node in a distributed network, receiving individualized instructions in real time. These instructions can be deterministic or rule-based, enabling both precision and variation. The audience is immersed inside the system, surrounded by sound that unfolds dynamically, shaped by proximity, spatial relations, and performer interaction.

Sync deploys many outputs simultaneously: performers are arranged in a grid or network, each receiving individualized instructions via a mobile interface. Notes, phrases, and behaviors can follow predefined sequences or be computed, distributed, and rendered dynamically.
The platform includes a mobile-first user interface that adapts to any screen size. It features a robust clef and note positioning engine supporting nine clef types (Treble, Alto, Bass, Soprano, Mezzo-soprano, Tenor, Baritone, Treble 8vb, Sub-bass), with accurate pitch placement across a 44-semitone range, covering the full range of instruments such as clarinet, flute, bassoon, and horn. Dynamics from pianissimo to fortissimo are also supported.
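To illustrate the idea behind a clef and note positioning engine, here is a minimal, hypothetical sketch (not Sync's actual code): each clef pins a reference pitch to a staff position, and a note's vertical slot is its diatonic distance from that reference. Positions count half-spaces from the bottom line (0 = bottom line, 1 = first space, and so on); all names and the four-clef subset are illustrative.

```python
# Hypothetical sketch of clef-relative pitch placement (not Sync's engine).

# Diatonic step for each pitch class (sharps collapse onto the natural below).
STEP_OF_PC = {0: 0, 1: 0, 2: 1, 3: 1, 4: 2, 5: 3, 6: 3, 7: 4, 8: 4, 9: 5, 10: 6, 11: 6}

# Clef -> (MIDI pitch of the clef's reference line, staff position of that line).
CLEFS = {
    "treble": (67, 2),  # G4 on the second line
    "alto":   (60, 4),  # C4 on the middle line
    "tenor":  (60, 6),  # C4 on the fourth line
    "bass":   (53, 6),  # F3 on the fourth line
}

def diatonic(midi):
    """Count diatonic steps from the lowest octave for a MIDI note number."""
    return (midi // 12) * 7 + STEP_OF_PC[midi % 12]

def staff_position(midi, clef):
    """Vertical slot of a note on the given clef's five-line staff."""
    ref_pitch, ref_pos = CLEFS[clef]
    return ref_pos + diatonic(midi) - diatonic(ref_pitch)

print(staff_position(64, "treble"))  # E4 -> 0, the bottom line
print(staff_position(60, "bass"))    # C4 -> 10, first ledger line above
```

The same offset arithmetic extends to the remaining clefs and octave-transposing variants by changing only the reference entry.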



Composers can design musical systems using various structural models. Fractal logic enables recursive development across multiple scales, graph networks define custom topologies and node relationships, cellular automata allow behaviors to evolve based on local conditions, and matrix operations shift musical values dynamically in response to systemic rules.
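One of the structural models named above, a cellular automaton, can be sketched in a few lines: a row of performers holds on/off cells, each generation is computed from local neighbors, and active cells are mapped to pitches. The rule number and pentatonic mapping are illustrative choices, not Sync's actual rules.

```python
# A minimal cellular-automaton sketch: note activity evolving across a row
# of performers. Rule and pitch mapping are illustrative, not Sync's own.

def ca_step(cells, rule=90):
    """Advance a row of 0/1 cells one generation under an elementary CA rule."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (center << 1) | right
        nxt.append((rule >> pattern) & 1)
    return nxt

def to_pitches(cells, scale=(60, 62, 64, 67, 69)):  # C major pentatonic
    """Map active cells to MIDI pitches; silent cells get None (a rest)."""
    return [scale[i % len(scale)] if on else None for i, on in enumerate(cells)]

row = [0, 0, 1, 0, 0]      # a single seeded performer in a row of five
row = ca_step(row)          # activity spreads to the neighbors
print(to_pitches(row))      # -> [None, 62, None, 67, None]
```

Each further `ca_step` call yields the next generation, so a single seed can grow into rippling activity across the grid.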
Through this lens, Sync becomes a tool for designing music as a living architecture, shaped by flow, transformation, and interaction.

The Sync Dashboard is the central control interface for managing, monitoring, and composing with the Sync platform. It offers composers a powerful real-time overview of a distributed performer network, essential for building spatial music systems that unfold across time and space.
Key features include:
- Performer Grid View: Displays all connected performers in a visual grid. Each tile reflects connection status, current note, timing accuracy, and response latency, helping identify issues quickly during rehearsals, tests, or live performance.
- Spatial Mapping Logic: Composers can view how instructions flow through the network, enabling the design of musical structures that rotate, ripple, or shift across the performer grid, techniques central to 3D musical composition.
- Latency Analysis & Sync Timing: Real-time sync diagnostics help composers understand performer responsiveness.
- System Stability & Logging: The dashboard includes debug tools and logging for performance analysis, especially important when running multi-instance Sync setups for large-scale ensembles.
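The latency analysis above amounts to per-performer bookkeeping over round-trip samples. A plausible sketch, with invented function names, field names, and threshold rather than the actual Sync API:

```python
# Illustrative latency summary per performer (names and threshold assumed).
from statistics import mean, median

def latency_report(samples_ms):
    """Summarize round-trip samples and flag laggy performers."""
    report = {}
    for performer, samples in samples_ms.items():
        report[performer] = {
            "mean": round(mean(samples), 1),
            "median": median(samples),
            "worst": max(samples),
            "flagged": median(samples) > 200,  # illustrative threshold
        }
    return report

pings = {"D0": [42, 38, 51], "D7": [230, 250, 210]}
print(latency_report(pings))  # D7 exceeds the threshold and gets flagged
```

A dashboard tile would then color itself from the `flagged` field while showing the median and worst-case figures.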
In Sync, harmony is a spatial phenomenon. Each performer acts as a localized emitter of sound, and their arrangement in space forms harmonic relationships that change with proximity and distribution. This approach reframes composition as a form of structural thinking, where relationships between nodes determine how sound unfolds across space and time.
This spatial harmonic relationship is best understood via the metaphor of additive color mixing. Just as overlapping light sources blend into new colors depending on their hue and intensity, overlapping musical voices in Sync create perceptual mixtures. A major triad formed by three surrounding nodes might feel radiant and stable, while a suspended or dissonant set produces tension that varies depending on the listener’s position within the field.
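The proximity idea can be made concrete with a toy model: treat each performer as a point source and weight each voice by inverse-square distance from the listener. Positions, pitches, and the weighting law here are invented for the example, not taken from Sync.

```python
# Toy model: relative weight of each voice at a listener's position,
# using inverse-square distance (an illustrative assumption).

def mix_at(listener, sources):
    """Return each voice's normalized weight as heard at `listener` (x, y)."""
    weights = {}
    for pitch, (x, y) in sources.items():
        d2 = (x - listener[0]) ** 2 + (y - listener[1]) ** 2
        weights[pitch] = 1.0 / max(d2, 1e-6)  # clamp to avoid division by zero
    total = sum(weights.values())
    return {p: w / total for p, w in weights.items()}

triad = {"C4": (0.0, 0.0), "E4": (4.0, 0.0), "G4": (2.0, 3.0)}
print(mix_at((0.5, 0.5), triad))  # C4 dominates near its node
```

Moving the listener coordinate through the triangle of nodes shifts the balance of the triad, which is exactly the position-dependent tension the color-mixing metaphor describes.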

The music is distributed in space and capable of movement through it. The system treats space as an active dimension of composition, where motion is articulated through the sequence, transformation, or displacement of sound sources across the performer grid.
Movement can be expressed in many ways. A melodic figure can travel horizontally across a row of performers, rotate around the audience, or ripple diagonally through the grid. A harmonic field can shift as one node drops out and another takes its place. Even silence can move: a rest can travel through the grid like a shadow. This introduces a new kind of listening based on trajectory, perspective, and spatial memory. Because each node is independent, motion becomes a property of the system: a function of sequence, interaction, or displacement.
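The horizontal-travel case can be sketched as a tiny scheduler: each column of the grid receives the figure offset by one beat, so the motif sweeps left to right. Grid width, figure, and timing are illustrative; Sync's real sequencing lives in Max for Live.

```python
# Sketch of grid motion: a figure sweeping across a performer row,
# one beat of offset per column (all parameters illustrative).

def travel(figure, width, beat_ms=500):
    """Return (time_ms, column, pitch) events for a figure sweeping a row."""
    events = []
    for col in range(width):
        for step, pitch in enumerate(figure):
            events.append((col * beat_ms + step * beat_ms, col, pitch))
    return sorted(events)

for t, col, pitch in travel(["D4", "F4"], width=3):
    print(f"{t:5d} ms -> performer {col}: {pitch}")
```

Rotation around the audience or a diagonal ripple would only change how the per-performer offset is computed from the grid coordinates.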

Watch an early walkthrough of Sync’s core components and real-time behavior.
This is an early demo of Sync, where I explain how the system works. Sync is built around three core components: Ableton Live (used to optimize and sequence behavior via Max for Live), a central control dashboard, and a web-based performer interface accessible on mobile devices. Within Max for Live, composers can implement a variety of structural models such as fractal logic, graph networks, cellular automata, or matrix-based transformations to generate dynamic instructions in real time. A single instance of the system can support 25 or more performers simultaneously, and multiple instances can be networked for larger ensembles.
Currently, there is an 800ms delay between when a note is triggered in Ableton Live and when it appears on the performer’s device. This latency is intentional, giving performers time to prepare. However, it can be adjusted as needed—down to near-instant triggering—for different performance scenarios.
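The adjustable lead time reduces to a simple mapping: every note triggered in Ableton is shown on the performer's device a fixed interval later. The function and event shapes below are assumptions for illustration, not the actual Sync implementation.

```python
# Minimal sketch of the adjustable display delay (names assumed).

def schedule_display(events, lead_ms=800):
    """Map (trigger_ms, performer, note) events to their on-screen times."""
    return [(t + lead_ms, performer, note) for t, performer, note in events]

shown = schedule_display([(0, "D0", 60), (500, "D7", 64)])
print(shown)  # -> [(800, 'D0', 60), (1300, 'D7', 64)]
```

Setting `lead_ms=0` models the near-instant triggering mentioned above.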
Watch the first improvisation using the Sync System.
This screen recording shows the first live improvisation with the Sync system. The video captures the Dashboard interface, where each highlighted circle represents a performer’s activity—not the notes themselves. A minimal pitch set was used, focusing on spatial patterns and the interplay between structure and improvisation.
Watch an improvisation inspired by Debussy’s Nuages.
This screen recording presents a live improvisation using the Sync system, inspired by Debussy’s Nuages. The performance blends free improvisation with looped patterns.
On the left side of the screen, the Sync dashboard displays real-time performer activity. Each circle represents a performer; when it turns yellow, the performer is active — offering a bird’s-eye view of the ensemble’s dynamics.
On the right, three mobile phone screens simulate the performer interfaces for positions D0, D7, and D23. Each interface shows a floating staff with personalized notes, a direct representation of what each performer sees on their device.
Keywords
Distributed Composition, Networked Performance, Spatial Sound, Generative Systems, Emergence, Real-time Music, Systemic Music, Algorithmic Composition
For more content visit: