Less than three weeks to go, and I am so grateful for my previous entries, which brought me back up to speed after having to take a long break from this piece.
So, the specs: veras: (OSC) pulse with orbit, weight 0.33, rotate 4, species 3, particles 1000 (I may increase this slightly). I find this combination coheres into structures easily, sometimes too stable, but since this simulation will be projected onto geometry and subjected to other audio-reactive elements through Jitter, that's ok.
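For my own memory, a minimal sketch of how this configuration might look as a Tölvera script, following the pattern of the library's examples; the vera names (pulse, orbit) and the way weight/rotate would be set are shorthand from my notes, not confirmed against the current Tölvera API:

```python
# Sketch only: species/particle counts are my current specs; the vera names
# (pulse, orbit) and their parameters are assumptions from my notes.
from tolvera import Tolvera, run

def main(**kwargs):
    tv = Tolvera(species=3, particles=1000, **kwargs)

    @tv.render
    def _():
        tv.px.diffuse(0.99)      # slow fade so trails stay visible
        tv.v.pulse(tv.p)         # assumed vera name
        tv.v.orbit(tv.p)         # assumed vera name
        # weight 0.33 and rotate 4 would be set wherever those vera/OSC
        # parameters live; omitted here since I haven't rechecked the API
        tv.px.particles(tv.p, tv.s.species())
        return tv.px

if __name__ == "__main__":
    run(main)
```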
Definitely decided to keep the human in the feedback loop. ATM I am not taking any species velocities (my go-to specs) from Tölvera via OSC. Therefore the audio affects the visuals (mapping list below) via OSC and Jitter, but the visuals will only affect the audio insofar as the player will be watching and can react. In addition to this graphic input for the player, there will be a "paper" score with some notated music and graphic/textual indications. I am also keeping the human in complete control (via pedals) of the audio effects (reverb, feedback, ring modulation). My job will be to set and map parameters and scalings, and in the live performance to manipulate the geometries, brightness, background, and some motion elements (list below). This is so the visuals have some kind of form - a narrative of their own, and the viewers are not bombarded with just particles flying around, but are introduced slowly to the shapes and colors.
ATM my idea is to run Tölvera on the big laptop, through OBS/NDI into Max and Jitter on the small one, then HDMI out to the room screen and the player's monitor. I might in the end swap the roles of the big and small laptops. This works really well so far. Or I could do all the computing on the small laptop and send the final Jitter output to the big one to use as a monitor for the player, but the NDI transfer over wifi looks bad, too jittery. I need to find a set-up that can carry OSC and NDI without wifi. Will look into ethernet or some kind of screen sharing.
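For when I test the wired set-up: a minimal sketch of pushing OSC over a direct ethernet link between the laptops with python-osc. The IP, port, and OSC addresses are placeholders for the example, not my actual configuration (in practice the sending happens from Max via udpsend; this is just to verify that the link and port work):

```python
# Link test over a direct ethernet cable with static IPs.
# The address, port, and OSC paths below are placeholders.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.0.2", 7400)  # receiving laptop's static IP and port

# send a couple of the onset-based Tölvera parameters as a test
client.send_message("/tolvera/attract", 0.5)
client.send_message("/tolvera/radius", 120.0)
```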
Some notes on scaling the jit.window. I have to make sure it is in full screen, and that the matrices are the same dimensions as the Tölvera default window. The Tölvera simulation also needs to be full screen. Still, every start-up needs some tweaking of the videoplanes, and I am mystified by that.
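One thing I want to rule out is an aspect-ratio mismatch between the incoming matrix and the output window. A trivial check like this (plain Python, the dimensions are examples rather than my actual settings) would at least tell me whether the videoplane tweaking is compensating for a ratio difference:

```python
# Quick sanity check: do the matrix and the fullscreen window share an
# aspect ratio? Numbers are examples, not my actual settings.
def aspect(w, h):
    return w / h

matrix_w, matrix_h = 1920, 1080   # jit.matrix / Tölvera window dimensions
window_w, window_h = 1920, 1200   # fullscreen jit.window dimensions

if abs(aspect(matrix_w, matrix_h) - aspect(window_w, window_h)) > 1e-3:
    print("aspect mismatch: the videoplane will need non-uniform scaling")
else:
    print("aspect ratios match: videoplane scale should stay put between start-ups")
```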
For audio descriptors to drive the reactive video, I realized I needed two sets: those that send a live, constant stream, and those that only change when an onset is detected. The latter I am using for OSC parameters to Tölvera such as attract, radius, randomize, and the chaos trigger. The constant stream feeds parameters in Jitter such as the x/y values of sphere 1, the brightness of spheres 1 and 2, and the blue value of the background. It makes sense to have the visual parameters that initiate motion be based on onsets; this seems to give an impulse to the simulation and underscores random changes of mappings. There is enough fluid motion programmed into the behaviors (like orbit and pulse) to keep this from looking too block-like. For visual parameters that control color, brightness, and size, it is nice to have a stream of constant change that creates strong correspondences with small changes in the audio.
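The routing itself lives in the Max patch, but the logic boils down to something like this sketch (Python with python-osc; the descriptor names, OSC addresses, and ports are illustrative assumptions, not my actual patch):

```python
# Two descriptor sets: onset-gated values go to Tölvera, continuously
# streamed values go to the Jitter geometry. Addresses and fields are placeholders.
from pythonosc.udp_client import SimpleUDPClient

tolvera = SimpleUDPClient("127.0.0.1", 7400)   # onset-driven simulation params
jitter  = SimpleUDPClient("127.0.0.1", 7500)   # continuous geometry params

def route(frame):
    """frame: one vector of analysis results from the audio descriptors."""
    # constant stream: always forwarded
    jitter.send_message("/sphere1/xy", [frame["freq_x"], frame["freq_y"]])
    jitter.send_message("/spheres/brightness", frame["loudness"])
    jitter.send_message("/background/blue", frame["freq_norm"])

    # onset-gated: only forwarded when an onset was detected in this frame
    if frame["onset"]:
        tolvera.send_message("/attract", frame["loudness"])
        tolvera.send_message("/radius", frame["midi_pitch"])

route({"freq_x": 0.2, "freq_y": 0.7, "loudness": 0.4, "freq_norm": 0.3,
       "onset": True, "midi_pitch": 62})
```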
There are three basic Jitter geometries: the background plane, sphere 1, and sphere 2 (these are often contorted and don't resemble spheres at all). So far the mappings are like this (a small scaling sketch follows the list):
frequency (constant stream) -> x/y position of sphere 1
frequency (constant stream) -> background blue value
MIDI (onset pitch detection) -> radius of particle/species attraction
loudness (constant stream) -> speed of hue change
loudness (constant stream) -> brightness of spheres 1 & 2
loudness (onset) -> attraction of particle/species
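All of these come down to rescaling a descriptor range into a parameter range, which is what the scaling objects in the patch do. A generic helper (illustrative, with made-up input ranges since my actual scalings are calibrated in Max):

```python
# Generic linear rescaling with clamping; input ranges below are examples,
# not my calibrated values.
def scale(value, in_lo, in_hi, out_lo, out_hi, clamp=True):
    if clamp:
        value = max(in_lo, min(in_hi, value))
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

freq_hz, loudness_db = 440.0, -12.0   # example analysis values

# frequency (constant stream) -> x position of sphere 1
x_pos = scale(freq_hz, 80.0, 2000.0, -1.0, 1.0)

# loudness (constant stream) -> brightness of spheres 1 & 2
brightness = scale(loudness_db, -40.0, 0.0, 0.1, 1.0)
```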
MIDI mappings for live performance: chaos trigger, background on/off, spheres 1 & 2 on/off, rotate sphere 2 on/off, background brightness, sphere 1 brightness manual/descriptor switch, sphere 1 brightness, sphere 2 scale x/y, sphere 2 scale z, sphere 2 position x, sphere 2 position y.