All works listed engage with some combination of machine learning (supervised or unsupervised), machine listening, or AI models trained with RAVE. Some use A-life audio/visual programs such as Tölvera, a Python/Taichi-based environment. Some are formal works with scores; some are sound design without a score.

2025

Velocity Bounce (in progress, presentation at IRCAM Forum 2025, Latvia) [ca. 5-10'] An audiovisual acousmatic piece that highlights processes from A-life, using Tölvera particle simulations to create "beings". Since this will not involve a live performance, my goal is to pull out all the stops and seize every chance to create feedback systems that flow between the audio and the visuals. The scenario is four species whose velocities (reduced to 1-D vectors) determine various aspects of a single Somax2 instance and two RAVE models. As an A-life model with flocking, pulsating behaviors, it displays seemingly effortless motion. At times, however, the chaos of life surfaces, revealing the complexities of neuronal firing and metabolic burning that take place under the skin. View a rough draft (no video edits, original colors) here, and read about this in more detail here.
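For readers curious how a velocity-to-parameter mapping of this kind might look in code, here is a minimal Python sketch of the idea: each species' 2-D velocities are reduced to a single scalar, and the resulting scalars are normalized so that each can drive a control parameter (of, say, a Somax2 instance or a RAVE model). All names, shapes and ranges are illustrative assumptions, not the actual patch.

```python
import numpy as np

def species_velocity_scalars(velocities, species_ids, n_species=4):
    """Reduce each species' 2-D velocities to one scalar (mean speed).

    velocities: (N, 2) array; species_ids: (N,) ints in [0, n_species).
    """
    speeds = np.linalg.norm(velocities, axis=1)  # 2-D vector -> 1-D speed
    return np.array([speeds[species_ids == s].mean()
                     if np.any(species_ids == s) else 0.0
                     for s in range(n_species)])

def map_to_params(scalars, lo=0.0, hi=1.0):
    """Normalize the scalars into [lo, hi] so they can drive parameters."""
    span = scalars.max() - scalars.min()
    if span == 0:
        return np.full_like(scalars, lo)
    return lo + (scalars - scalars.min()) / span * (hi - lo)

# Synthetic example: 400 particles, 4 species, random velocities.
rng = np.random.default_rng(0)
vel = rng.normal(size=(400, 2))
ids = rng.integers(0, 4, size=400)
params = map_to_params(species_velocity_scalars(vel, ids))
print(params)  # four values in [0, 1], one per species
```

In a real setup the four values would be sent out per frame (e.g. via OSC) rather than printed.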

(A)live for trumpet, live electronics and live video [10'] (premiere 20 September 2025, Maastricht Musica Sacra, with Marco Blaauw) This will be a live implementation of Tölvera's species-interaction simulation, programmed so that the species coalesce into symbiotic "beings". These multicolored groups will coalesce and decohere according to the pitch and volume of the trumpet sounds. The sound processing will be a quite simple feedback system, primarily guided by the player. The average velocity of the species will modify their colors, which will in turn guide the player. There are two graphic scores in this piece. The written score gives the player a shape in which to sculpt the narrative, representing the long-scale time-frame that encapsulates past, present and future together at a glance. The video provides the second graphic score: it displays motion and color, to which the player responds spontaneously at discrete moments. The player may focus on either one at any given time. As living beings, are we not masters of time polyphony, living in the past (memories), present, and future (imagination) at once?

Poem for Ghidorina for alto flute and live electronics [6'] (premiere 20 September 2025, Maastricht Musica Sacra) Through the sound of the alto flute and a text based on a confessionalist poem ("O Ye Tongues" by Anne Sexton), Ghidorina, the daughter of King Ghidora, explores her 3-fold psyche. I have been using creative AI to explore this idea of a 3-fold psyche for some time now: 1) the Shadow, cast by you, never separable; 2) the Avatar, an embellishment of the self, costumed, but moving in sync and not a separate identity; 3) the Alter Ego, which may resemble you, matching your characteristics, but a decided foil. In this piece I represent these three categories through a pitch-following synthesizer, a RAVE model I trained myself on my own flute playing, and a Somax2 instance with my piccolo playing through phase vocoding. This imaginary, three-headed princess, despite having inherited her father's physiology, has renounced his propensity to eat planets and wage war on Godzilla and has chosen a life of non-violence. May all our offspring be better than us.

2024

lourd comme un oeuf for tuba or double-bell euphonium with live electronics [6'], for Maxime Morel. This explores the 3-fold psyche described above in Poem for Ghidorina, but this time she is still in her egg. (The title is a play on Erik Satie's final lyrics from Descriptions automatiques (1903), "léger comme un œuf".) In this piece there is more of a narrative, so the player has complete control over the stages of the life cycle. The three musical representations of the layers of the psyche are a pitch-following synthesizer, a transposed delay system with adjustable memory, and concatenative synthesis (using Data Knot [a Max package, still in beta test] tools, based on FluCoMa) with a corpus of tromba marina sounds. In the version for double-bell euphonium, the player acts as a mostly hidden agent and timekeeper, activating the electronics through the bell that is not amplified (and thus unheard), but whose signal is the basis for the myriad parameter changes of the layers.
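As a rough illustration of the "transposed delay" idea, here is a minimal Python sketch: a single pitch-shifted echo of the input, produced by reading the signal back at a different rate. The actual patch has feedback and an adjustable memory, which this sketch omits; all parameter values are illustrative.

```python
import numpy as np

def transposed_echo(x, sr=44100, delay_s=0.25, semitones=7, gain=0.5):
    """Add one echo of x, transposed by `semitones`, after `delay_s` seconds."""
    ratio = 2 ** (semitones / 12)          # playback-rate change = transposition
    idx = np.arange(0, len(x) - 1, ratio)  # read the input at `ratio` speed
    base = idx.astype(int)
    frac = idx - base
    shifted = x[base] * (1 - frac) + x[base + 1] * frac  # linear interpolation
    out = x.astype(float).copy()
    start = int(sr * delay_s)
    if start >= len(out):
        return out                          # delay longer than the signal
    end = min(len(out), start + len(shifted))
    out[start:end] += gain * shifted[:end - start]
    return out

# One second of a 220 Hz sine; the echo enters a perfect fifth higher.
sr = 8000
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 220 * t)
out = transposed_echo(sig, sr=sr, delay_s=0.25)
```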

Julia, the Egg, and a Big Wheel for tuba, live electronics and video. The title refers to the Julia fractal set, the instrumental material (tuba) from lourd comme un oeuf, and the well-known "wheel" RAVE voice model. I have been looking for ways to use AI in a non-generative way, to make a creative counterpart for live musicians. The visuals, made with Taichi Lang embedded in Python, provide not only eye candy but also a framework and sound modulation for compositions. This is a very simple example of how an algorithm can play out its movements while sending information to, and receiving it from, the audio. The latent spaces of AI models have always interested me: it is much more interesting to find weird and intriguing corners of their sounding matrix than to try to create a clone of the original sound. The moving graphics here provide a natural navigation through these sounds. It is primarily the tuba, however, whose sounds navigate the latent space of the wheel model.
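One simple way to turn a moving point (from the graphics, or from an audio feature) into a path through a latent space is to interpolate between a few stored "landmark" latent vectors. The sketch below shows the idea with an inverse-distance-weighted blend; the landmark positions, latent size and names are made up for illustration and are not the wheel model's actual latents.

```python
import numpy as np

LATENT_DIM = 8  # illustrative; real RAVE latents vary by model

# Hypothetical landmarks: screen position (x, y) -> latent vector.
landmarks = {
    (0.0, 0.0): np.zeros(LATENT_DIM),
    (1.0, 0.0): np.ones(LATENT_DIM),
    (0.0, 1.0): -np.ones(LATENT_DIM),
}

def latent_at(pos, temperature=1.0):
    """Inverse-distance-weighted blend of landmark latents at pos = (x, y)."""
    pts = np.array(list(landmarks.keys()))
    lats = np.array(list(landmarks.values()))
    d = np.linalg.norm(pts - np.array(pos), axis=1)
    w = 1.0 / (d + 1e-6) ** temperature   # closer landmarks weigh more
    w /= w.sum()
    return w @ lats                        # weighted sum -> one latent vector

z = latent_at((0.9, 0.1))                  # a point near the (1, 0) landmark
print(z.shape)
```

The resulting vector would then be fed to the model's decoder each frame; higher `temperature` makes the blend snap more sharply to the nearest landmark.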

Sweep Profound Shadow | Wolf from the Door two very short experiments for alto flute that use timbre matching, concatenative synthesis and RAVE models. As in many of my works, these seek to create new worlds by exploring the corpora of other beings; in this case, the "beings" are high-pitched tromba marina sounds, extended techniques on the glass harmonica, and RAVE models. In Sweep, timbre matching is done through unsupervised machine learning (Data Knot [a Max package, still in beta test] tools, based on FluCoMa). These shadow sounds match the real-time audio analysis of my playing, sweeping over manifold projections created through unsupervised machine learning, and shortening or lengthening their commentary according to my sounds. Wolf from the Door oscillates between two RAVE models according to my sounds. One model I trained myself on lupophone sounds (hence the "wolf") performed by Peter Veale; the other was trained at IRCAM on percussion sounds.
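For the oscillation between two models, one plausible realization (a sketch, not the actual patch) is to let an input feature such as RMS loudness drive an equal-power crossfade between the two model outputs. The thresholds and the placeholder model outputs below are illustrative assumptions.

```python
import numpy as np

def crossfade(frame, out_a, out_b, lo=0.01, hi=0.2):
    """Blend two model outputs according to the RMS loudness of `frame`."""
    rms = np.sqrt(np.mean(frame ** 2))
    x = np.clip((rms - lo) / (hi - lo), 0.0, 1.0)  # 0 -> model A, 1 -> model B
    ga = np.cos(x * np.pi / 2)                      # equal-power gains
    gb = np.sin(x * np.pi / 2)
    return ga * out_a + gb * out_b

# Placeholder "model outputs": constant frames instead of decoded audio.
quiet = np.full(512, 0.005)                 # below `lo` -> mostly model A
loud = np.full(512, 0.5)                    # above `hi` -> mostly model B
mix_quiet = crossfade(quiet, np.ones(512), -np.ones(512))
mix_loud = crossfade(loud, np.ones(512), -np.ones(512))
```

The equal-power curve keeps the perceived level roughly constant as the balance swings between the two models.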

Live-electronics with FluCoMa and SP-Tools (now Data Knot) a lively display of how the flute can be extended through creative machine learning. Here a neural network (MLP classifier, FluCoMa) is trained to recognize 5 distinct sounds, which are routed to five different tracks in Ableton Live. The Moog synthesizer makes a cameo appearance at the end. Thanks to the next-to-zero latency of these fine objects, there seems to be no separation between the human player and the extended sounds. This gives the feeling of playing an entire one-woman band just from the flute!
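The classify-and-route idea can be sketched outside Max as well. Below, a small scikit-learn MLP (standing in for the FluCoMa object, which this is not) learns 5 sound classes from feature vectors and its prediction selects one of five tracks; the synthetic "descriptors" are placeholders for real MFCC-style analyses.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic training data: 5 well-separated clusters of 13-D "descriptors".
rng = np.random.default_rng(1)
n_classes, n_feats = 5, 13
centers = rng.normal(scale=5.0, size=(n_classes, n_feats))
X = np.vstack([c + rng.normal(size=(40, n_feats)) for c in centers])
y = np.repeat(np.arange(n_classes), 40)

# A deliberately small network, in the spirit of low-latency classification.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)

def route(frame):
    """Return the track index (0-4) for one analysis frame."""
    return int(clf.predict(frame.reshape(1, -1))[0])

track = route(centers[2])  # a frame near one cluster centre picks its track
print(track)
```

In the live setting each prediction would gate or address the corresponding Ableton track rather than being printed.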

2023

Erin is for speaking voice, neurally-trained lupophone and sampled contrabass flute. Poem by Jo Richter, lupophone sounds trained with RAVE on samples by Peter Veale.

Peabrain consists of a small neural network (a peabrain!) performing on a corpus of tromba marina sounds. The corpus is analyzed and grouped according to k-nearest neighbors (a form of unsupervised learning). Classical regression techniques drive various parameters, such as the navigation through the corpus. Thanks to Taylor Brook for the inspiration for this patch and to Sara Cubarsi for the tromba marina sounds.
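As an illustrative sketch of this kind of corpus navigation (not the actual patch): grains are described by feature vectors, a nearest-neighbor index is built over them, and a slowly moving control signal (standing in for a regression output) steers which grain is chosen next. The feature values below are synthetic placeholders.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Synthetic corpus: 200 grains, each described by 3 hypothetical
# features, e.g. [pitch, loudness, brightness], scaled to [0, 1].
rng = np.random.default_rng(2)
corpus = rng.uniform(size=(200, 3))
nn = NearestNeighbors(n_neighbors=5).fit(corpus)

def pick_grain(t):
    """Map time t to a moving target point, return the closest grain index."""
    target = 0.5 + 0.25 * np.sin(np.array([t, t * 0.7, t * 1.3]))
    _, idx = nn.kneighbors(target.reshape(1, -1))
    return int(idx[0, 0])          # nearest of the 5 neighbours

path = [pick_grain(t) for t in np.linspace(0, 10, 8)]
print(path)  # a trajectory of grain indices through the corpus
```

Because neighboring targets select neighboring grains, the playback drifts smoothly through timbre space instead of jumping at random.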

2022

Horo horo to [9'] an homage to Makiko Gotō for bass koto, alto flute, oboe and electronics (live and samples), with Peter Veale, bass koto and oboe. This work is dedicated to the memory of koto player and teacher Makiko Gotō (1963-2021). The work draws its inspiration from a haiku of the same title by Matsuo Bashō (1644-1694).

quietly, quietly / yellow mountain roses fall-/ sound of the rapids
(tr. Makoto Ueda)

The bass koto presents the backdrop of the mountains: staid, in places impenetrable. The oboe can be seen/heard as the mortal petals of the mountain roses, which fall and seek their way floating in the unfamiliar medium of water. By extension, it is the mortal soul seeking and imagining life beyond the veil. This imagination is taken into "unreal" terrain imagined by a neural network (MLP classifier, FluCoMa) that recognizes different oboe techniques and replies in its own language.
In several traditions, it is the sound of the flute that can penetrate this veil. The minimal but pivotal appearances of the alto flute mark these transitions.
