(A)life and Monsters - beginning
This is my first attempt at documenting works in their very early stages. It feels strange, since I am used to writing text post facto, but this process will be necessary for future research and as a basis for presentations.
I have undertaken this year to create two solo works for performance this September, so taking time to write blog entries seems like a waste of time. Really, I should be getting to work on the creative side: recording sounds, creating preliminary patches, scoping out meta-structures, and learning the code to execute all of this.
Since reading a number of articles on Human-Machine-Interaction (HMI), I will borrow four questions cited in the following papers, and attempt to answer them at different stages of the work's development. It might be interesting to see how my answers change over time, and to develop my own questions.
Xambó, A., & Roma, G. (2024). Human–machine agencies in live coding for music performance. Journal of New Music Research, 53(1–2), 33–46. https://doi.org/10.1080/09298215.2024.2442355
Rutz, H. (2016). Agency and algorithms. Journal of Science and Technology of the Arts, 8(1), 73. https://doi.org/10.7559/citarj.v8i1.223
- What are the boundaries for HMI?
- What are the consequences for authorship and intention?
- What is the structure for decision-making processes?
- What are the mechanisms of control?
The two new pieces in question are:
A Poem for Ghidorina for alto flute and live electronics. Ghidorina, like her father, King Ghidorah, has three heads. However, she defies him by rejecting his lifestyle of constant combat with aliens and other monsters and devotes her life to introspection and the arts. This partially sung poem explores aspects of each of her three psyches: 1) The Shadow, always trailing, close, dark, immune to colors 2) The Avatar, a figurative, iconic representation or embellishment of oneself, perhaps autonomous in its own right, and 3) The Alter Ego, a foil, matching characteristics of oneself but manifesting through a separate entity.
(A)Live for trumpet, live electronics, and video projection. The concept of diverse intelligence is beginning to gain traction against the influx of AI and its corporate dominance. Embracing organic structures, Alife and the synchronous intelligence of swarming, growing, and flocking reveal a universal tendency to self-organize and lay the groundwork for cooperation, symbiosis, or more complex structures.
In this entry I will briefly give preliminary answers with regard to the latter piece, (A)Live. 1) Boundaries to HMI: there will be two "interfaces" at this boundary, sonic and visual. These will be the channels of two-way communication that exchange data between the player and the machine elements: dsp processes and visual production. Since the visuals will simulate their own "life" through separate (Python) code, that code has fairly firm boundaries, with only a few parameters open for communication through OSC. A simulated crossing (or blurring?) of these boundaries will be achieved by bringing the visuals into jitter, where the life of the dsp will make a further mark upon them. Exactly how, I am not sure yet.
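As a note to self on what crossing that boundary might look like in code: an OSC message is just a null-padded address string, a type-tag string, and big-endian values over UDP, so a parameter can be pushed from the Python simulation toward Max without any dependency at all. A minimal sketch, with the address `/alife/density` and port 7400 as placeholders for whatever actually gets exposed:

```python
import socket
import struct

def osc_message(address: str, *floats: float) -> bytes:
    """Encode a minimal OSC message with float arguments.

    OSC strings are null-terminated and padded to 4-byte boundaries;
    float arguments are big-endian 32-bit.
    """
    def pad(s: bytes) -> bytes:
        return s + b"\x00" * (4 - len(s) % 4)  # always at least one null

    msg = pad(address.encode())
    msg += pad(("," + "f" * len(floats)).encode())
    for f in floats:
        msg += struct.pack(">f", f)
    return msg

# Placeholder address and value for whichever simulation parameter
# ends up open for communication across the boundary.
packet = osc_message("/alife/density", 0.42)

# Fire-and-forget UDP send toward Max/jitter (assumed listening on 7400,
# e.g. via a [udpreceive 7400] object).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 7400))
```

In practice a library like python-osc would do this encoding, but seeing the packet spelled out makes clear how narrow the opening in the boundary really is: one address, a few floats.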
2) The consequences for authorship and intention. As far as HMI goes, I don't foresee significant consequences. Of course, shared authorship of the piece will go to the trumpet player (Marco Blaauw), and credit will go to some of the code I will use, and perhaps even the libraries (if I use Tölvera and Taichi, for example, and the DK package within Max). In that sense the work will certainly be a collective, though not collaborative, effort (apart from the collaboration with Marco).
3) The structure of decision-making processes. I am tempted to do as I have done in the past: leave the movement through the meta-structure up to the player via triggers, either acoustic or through hardware such as a foot pedal. However, since I am now venturing into the concept of life-forms, I need to start experimenting with simulations that can suggest structures. This means that instead of tacking visuals onto the end of musical production, I need to make this a parallel workflow from the beginning.
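To keep myself honest about what "simulations that can suggest structures" means, here is a toy boids-style flocking step in plain Python. This is only a sketch of the principle, not Tölvera's or Taichi's API, and the weights and radius are arbitrary placeholders rather than tuned values; the point is that grouping and dispersal emerge from three local rules, and those emergent states are the kind of thing that could be read off as formal sections.

```python
import random

def step(agents, cohesion=0.01, alignment=0.05, separation=0.05, radius=0.2):
    """Advance each agent, a ((x, y), (vx, vy)) pair, by one flocking step."""
    new_agents = []
    for (px, py), (vx, vy) in agents:
        # Neighbours within radius (naive O(n^2) search, fine for a sketch).
        near = [((qx, qy), (wx, wy)) for (qx, qy), (wx, wy) in agents
                if 0 < (qx - px) ** 2 + (qy - py) ** 2 < radius ** 2]
        if near:
            n = len(near)
            cx = sum(a[0][0] for a in near) / n - px  # cohesion: toward centre
            cy = sum(a[0][1] for a in near) / n - py
            ax = sum(a[1][0] for a in near) / n - vx  # alignment: match velocity
            ay = sum(a[1][1] for a in near) / n - vy
            sx = sum(px - a[0][0] for a in near)      # separation: push apart
            sy = sum(py - a[0][1] for a in near)
            vx += cohesion * cx + alignment * ax + separation * sx
            vy += cohesion * cy + alignment * ay + separation * sy
        new_agents.append(((px + vx, py + vy), (vx, vy)))
    return new_agents

# Twenty agents scattered in the unit square with small random velocities.
random.seed(1)
flock = [((random.random(), random.random()),
          (random.uniform(-0.01, 0.01), random.uniform(-0.01, 0.01)))
         for _ in range(20)]
for _ in range(100):
    flock = step(flock)
```

A measure as crude as the flock's spread over time already gives a curve that could trigger or pace sectional changes, which is exactly the parallel workflow question: the simulation would have to run alongside the musical material, not after it.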
4) Mechanisms of control. At the moment I see this question as related to the previous one, and don't have much to add, except that on a smaller level I may set up some neural networks or regression to control certain parameters of the dsp. I see this as a way to offload some of the small parameter modulations.
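The regression idea, at its simplest, is a learned mapping from a live input feature to several dsp parameters, trained on a handful of demonstrated pairs. A toy sketch with least squares, where the feature (input level) and the parameters (filter cutoff, reverb mix) are hypothetical placeholders for whatever I end up modulating:

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b to paired samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Demonstrated pairs: input level -> desired (filter cutoff in Hz, reverb mix).
# These numbers are invented for illustration only.
levels  = [0.1, 0.4, 0.7, 1.0]
cutoffs = [200.0, 800.0, 1500.0, 2200.0]
mixes   = [0.8, 0.5, 0.3, 0.1]

cut_a, cut_b = fit_line(levels, cutoffs)
mix_a, mix_b = fit_line(levels, mixes)

def dsp_params(level):
    """Map a live input level to the two modulated parameters."""
    return cut_a * level + cut_b, mix_a * level + mix_b
```

In practice this would be a small neural network or a tool in the Wekinator tradition rather than hand-rolled least squares, but the offloading is the same: demonstrate a few input/parameter pairs, then let the mapping handle the continuous modulation during performance.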
Other creative questions concern the choice of sound world altogether. With the trumpet, I wonder whether I should choose a world that emulates analog feedback and filter effects, and which sounds to play off the trumpet to create the synchronous effects that can represent flocking, swarming, and other movements. Something rhythmic? Percussive?
End of musings for now.