All works listed engage with some combination of machine learning (supervised or unsupervised), machine listening, and AI models trained with RAVE. Some are formal works with scores; some are sound design without a score.
2025
Velocity Bouncer (in progress, presentation at the IRCAM Forum 2025 in Latvia) [ca. 15'] An audiovisual acousmatic piece that highlights processes from A-life using Tölvera particle simulations. Since this will not involve a live performance, my goal is to pull out all the stops and not skimp on any chance to create environmentally safe (but not CPU- or speaker-safe 😄) feedback systems that flow between the audio and the visuals. Each species will have its own RAVE model, and the sounds will in turn modify the species rules according to sound descriptors. (I know model training is not without environmental concerns, but I am using existing models, not expending energy training new ones.) The changed rules will modify the species' velocities, which will be mapped to the latent spaces of the RAVE models, which will modify the sound descriptors, and so on around the loop. The piece will reflect the chaos of life and its seemingly effortless motion, which hides the complexities of neuronal firing and metabolic burning that take place under the skin. A rough sketch of one tick of this loop follows.
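A minimal sketch of that feedback tick, assuming a per-species RAVE host listening over OSC. The port, the `/rave/<species>/latent` address pattern, and the velocity-to-latent mapping are all my placeholders, not the finished patch, and random velocities stand in for the Tölvera simulation state:

```python
# Hypothetical sketch: per-species velocities -> RAVE latent vectors over OSC.
import numpy as np
from pythonosc.udp_client import SimpleUDPClient

N_SPECIES = 4     # one RAVE model per species (assumption)
LATENT_DIMS = 8   # typical RAVE latent size; varies per model

client = SimpleUDPClient("127.0.0.1", 9000)  # port is an assumption

def species_to_latent(velocities: np.ndarray) -> np.ndarray:
    """Map one species' (n_particles, 2) velocities to a latent vector.

    Mean speed sets the first dimension; the spread of speeds fans out
    over the remaining dimensions. tanh keeps the latents in a tame range.
    """
    speed = np.linalg.norm(velocities, axis=1)
    z = np.zeros(LATENT_DIMS)
    z[0] = np.tanh(speed.mean())
    z[1:] = np.tanh(speed.std()) * np.sin(np.arange(1, LATENT_DIMS))
    return z

# One tick of the loop, with random velocities as a stand-in for Tölvera.
for s in range(N_SPECIES):
    vel = np.random.randn(128, 2)
    client.send_message(f"/rave/{s}/latent", species_to_latent(vel).tolist())
```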
(A)live for trumpet, live electronics and live video [10'] (premiere 20 September 2025, Maastricht Musica Sacra, with Marco Blaauw) This will be a live implementation of Tölvera's species-interaction simulation. Multicolored groups will coalesce and decohere according to the trumpet sounds. The sound processing will be a quite simple feedback system, primarily guided by the player. The average velocity of each species will guide its color variation, which will in turn guide the player. There are two graphic scores in this piece. The written score gives the player a shape in which to sculpt the narrative, representing the long-scale timeframe and containing the past, present and future. The video provides the second graphic score: it displays motion and color, to which the player responds spontaneously. The player may focus on either one in real time. As living beings, are we not masters of time polyphony, living in the past (memories), present, and future (imagination) at once?
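One plausible reading of that velocity-to-color mapping, as a toy Python function; the hue drift and its scaling are my assumptions, not the patch:

```python
# Hypothetical sketch: a species' mean speed nudges its display hue,
# so faster flocks visibly shift color for the player.
import colorsys
import numpy as np

def species_color(velocities: np.ndarray, base_hue: float) -> tuple:
    """velocities: (n_particles, 2). Returns an RGB triple in [0, 1]."""
    mean_speed = np.linalg.norm(velocities, axis=1).mean()
    hue = (base_hue + 0.1 * np.tanh(mean_speed)) % 1.0  # bounded hue drift
    return colorsys.hsv_to_rgb(hue, 0.8, 1.0)
```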
Poem for Ghidorina for alto flute and live electronics [6'] (premiere 20 September 2025, Maastricht Musica Sacra) Through the sound of the alto flute and the reading of a confessionalist poem (based on "O Ye Tongues" by Anne Sexton), Ghidorina, the daughter of King Ghidora, explores her 3-fold psyche. I have been using creative AI to explore this idea of a 3-fold psyche for some time now: 1) the Shadow, cast by you, never separable; 2) the Avatar, an embellishment of the self, costumed, but moving in sync and not a separate identity; 3) the Alter Ego, which may resemble you, matching your characteristics, but a decided foil. In this piece I render these three categories through a pitch-following synthesizer, a RAVE model of my own flute playing, and a SOMAX instance fed by my playing but following its own rules. This imaginary offspring, despite having inherited her father's 3-headed physiology, has renounced his propensity to eat planets and wage war on Godzilla and has chosen a life of non-violence. May all our offspring be better than us.
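The Shadow layer's pitch-following synthesizer can be approximated offline in a few lines. This sketch uses librosa's pyin tracker driving a sine oscillator; the file name is a placeholder, and the piece itself of course runs live:

```python
# Toy pitch-follower: track f0 from a flute recording, resynthesize as a sine.
import librosa
import numpy as np
import soundfile as sf

y, sr = librosa.load("flute_take.wav", sr=None, mono=True)  # placeholder file
f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C3"),
                             fmax=librosa.note_to_hz("C7"), sr=sr)

hop = 512                                # librosa's default pyin hop
f0 = np.where(voiced, f0, 0.0)           # silence unvoiced frames
f0 = np.repeat(f0, hop)[: len(y)]        # frame rate -> sample rate
phase = 2 * np.pi * np.cumsum(f0) / sr   # integrate frequency to phase
sf.write("shadow.wav", 0.3 * np.sin(phase), sr)
```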
2024
lourd comme un oeuf for tuba or double-bell euphonium with live electronics [6'] for Maxime Morel. This explores the 3-fold psyche described above in Poem for Ghidorina, but this time she is still in her egg. (The title is a play on "léger comme un œuf", Erik Satie's final lyric from Descriptions automatiques (1913).) In this piece there is more of a narrative, so the player has complete control over the stages of the life cycle. The three musical representations of the layers of the psyche are a pitch-following synthesizer, a transposed delay system with an adjustable memory, and concatenative synthesis (using Data Knot) with a corpus of tromba marina sounds. In the version for double-bell euphonium, the player acts as a mostly hidden agent and timekeeper, activating the electronics through the bell that is not amplified (and thus unheard), but whose sound is the basis for the myriad parameter changes of the layers.
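The concatenative layer can be caricatured as nearest-neighbor matching of descriptor frames against the corpus. This sketch uses MFCCs and scikit-learn where Data Knot has its own analysis chain; the file names are placeholders:

```python
# Toy concatenative synthesis: for each live frame, pick the corpus grain
# whose MFCC descriptor is nearest, then re-sequence those grains.
import librosa
import numpy as np
from sklearn.neighbors import NearestNeighbors

HOP = 1024

def mfcc_frames(y, sr):
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, hop_length=HOP).T

corpus, sr = librosa.load("tromba_marina.wav", sr=None)   # placeholder
target, _ = librosa.load("euphonium_take.wav", sr=sr)     # placeholder

index = NearestNeighbors(n_neighbors=1).fit(mfcc_frames(corpus, sr))
_, hits = index.kneighbors(mfcc_frames(target, sr))

# Re-sequence corpus grains in the order chosen by the matcher.
grains = [corpus[i * HOP : i * HOP + HOP] for i in hits[:, 0]]
out = np.concatenate([g for g in grains if len(g) == HOP])
```

In practice each grain would be windowed and overlap-added to avoid clicks; the hard cuts here are only for brevity.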
Julia, the Egg, and a Big Wheel for tuba, live electronics and video. The title refers to the Julia fractal set, the instrumental material (tuba) from lourd comme un oeuf, and the well-known "wheel" RAVE voice model. I have been looking for ways to use AI in a non-generative way, to make a creative counterpart for live musicians. The visuals, made with Taichi Lang embedded in Python, provide not only eye candy but also a framework and a source of sound modulation for compositions. This is a very simple example of how an algorithm can play out its movements while sending information to, and receiving it from, the audio. The latent spaces of AI models have always interested me: it is far more rewarding to find the weird corners of their sounding matrix than to try to clone the original sound. The moving graphics here provide a natural navigation through these sounds, though it is primarily the tuba whose sounds steer the path through the latent space of the wheel model.
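The visual side is close in spirit to Taichi's classic Julia-set demo. The sketch below animates the Julia constant over time; the commented line marks one plausible hook for steering a latent dimension, which is my assumption rather than the piece's actual patch:

```python
# Julia-set visual in Taichi Lang, adapted from Taichi's canonical example.
import taichi as ti

ti.init(arch=ti.cpu)
n = 320
pixels = ti.field(dtype=float, shape=(n * 2, n))

@ti.func
def complex_sqr(z):
    return ti.Vector([z[0] ** 2 - z[1] ** 2, 2 * z[0] * z[1]])

@ti.kernel
def paint(t: float):
    for i, j in pixels:                        # parallel over all pixels
        c = ti.Vector([-0.8, ti.cos(t) * 0.2]) # animate the Julia constant
        z = ti.Vector([i / n - 1, j / n - 0.5]) * 2
        it = 0
        while z.norm() < 20 and it < 50:       # escape-time iteration
            z = complex_sqr(z) + c
            it += 1
        pixels[i, j] = 1 - it * 0.02

gui = ti.GUI("Julia", res=(n * 2, n))
t = 0.0
while gui.running:
    paint(t)
    # e.g. send pixels.to_numpy().mean() over OSC as a latent offset here
    gui.set_image(pixels)
    gui.show()
    t += 0.03
```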
Sweep Profound Shadow | Wolf from the Door two very short experiments for alto flute that use timbre matching, concatenative synthesis and RAVE models. As in many of my works, these seek to create new worlds by exploring the corpora of other beings; in this case, the "beings" are high-pitched tromba marina sounds, extended techniques on the glass harmonica, and RAVE models. In Sweep, there is timbre matching through unsupervised machine learning (Data Knot tools, based on FluCoMa). These shadow sounds match the real-time audio analysis of my playing, sweeping over their manifold projections created through machine learning, and shortening or lengthening their commentary according to my sounds. Wolf from the Door oscillates between two RAVE models according to my sounds: one I trained myself on lupophone sounds (hence the "wolf"), the other trained at IRCAM on percussion sounds.
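The timbre matching in Sweep can be sketched as a manifold projection plus a nearest-neighbor lookup. Here UMAP stands in for Data Knot's actual analysis, and the corpus file is a placeholder:

```python
# Toy timbre matcher: embed corpus MFCC frames on a 2-D manifold,
# then return the corpus frame nearest a live frame's projection.
import librosa
import numpy as np
import umap
from sklearn.neighbors import NearestNeighbors

corpus, sr = librosa.load("glass_harmonica.wav", sr=None)  # placeholder
feats = librosa.feature.mfcc(y=corpus, sr=sr, n_mfcc=13).T

reducer = umap.UMAP(n_components=2)
embedding = reducer.fit_transform(feats)
index = NearestNeighbors(n_neighbors=1).fit(embedding)

def match(live_frame_mfcc: np.ndarray) -> int:
    """Return the corpus frame index nearest the live frame on the manifold."""
    point = reducer.transform(live_frame_mfcc.reshape(1, -1))
    return int(index.kneighbors(point)[1][0, 0])
```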
Live electronics with FluCoMa and SP-Tools (now Data Knot) a lively display of how the flute can be extended through creative machine learning. Here a neural network is trained to recognize 5 distinct sounds, which are routed to five different tracks. The Moog synthesizer makes a cameo appearance at the end. Thanks to the next-to-zero latency of these fine objects, there seems to be no separation between the human player and the extended sounds. It's as if I am playing an entire one-woman band just from the flute!
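A toy offline stand-in for that classifier-to-tracks routing, using a small MLP on MFCC statistics. SP-Tools/Data Knot do this in real time in Max; the file names, class labels and network size here are placeholders:

```python
# Toy sound classifier: 5 labelled flute sounds -> 5 output tracks.
import librosa
import numpy as np
from sklearn.neural_network import MLPClassifier

def descriptor(path: str) -> np.ndarray:
    """Summarize a sound file as MFCC means and standard deviations."""
    y, sr = librosa.load(path, sr=None)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([m.mean(axis=1), m.std(axis=1)])

# Placeholder training set: a few takes of each of the 5 sounds.
paths = {0: ["tongue_ram1.wav"], 1: ["jet1.wav"], 2: ["airtone1.wav"],
         3: ["keyclick1.wav"], 4: ["ordinario1.wav"]}
X = np.array([descriptor(p) for lbl in paths for p in paths[lbl]])
y = np.array([lbl for lbl in paths for _ in paths[lbl]])

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)
track = int(clf.predict(descriptor("live_frame.wav").reshape(1, -1))[0])
print(f"route to track {track}")
```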
2023
Erin is for speaking voice, neurally trained lupophone and sampled contrabass flute. Poem by Jo Richter; lupophone sounds trained with RAVE on samples by Peter Veale.
Peabrain A small neural network (a peabrain!) performing on a corpus of tromba marina sounds, alternately reacting and not reacting to my live flute playing. Thanks to Taylor Brook for the patching and Sara Cubarsi for the tromba marina sounds.
2022
Horo horo to [9'] an homage to Makiko Gotō for bass koto, alto flute, oboe and electronics (live and samples), with Peter Veale, bass koto and oboe. This work is dedicated to the memory of koto player and teacher Makiko Gotō (1963–2021). It draws its inspiration from a haiku of the same title by Matsuo Bashō (1644–1694).
quietly, quietly / yellow mountain roses fall-/ sound of the rapids
(tr. Makoto Ueda)
The bass koto presents the backdrop of the mountains, staid, in places impenetrable. The oboe can be seen/heard as the mortal petals of the mountain roses, which fall and seek their way floating in the unfamiliar medium of water. By extension, it is the mortal soul seeking and imagining life beyond the veil. This imagination is taken into "unreal" terrain conjured by a neural network that recognizes different oboe techniques and replies in its own language.
In several traditions, it is the sound of the flute that can penetrate this veil. The minimal but pivotal appearances of the alto flute mark these transitions.