︎︎︎

Project Blog


Welcome to my PROJECT BLOG! This is where I share the behind-the-scenes process for some of the projects I’m working on.

I’m sharing all this with you so that when you experience the final artworks, you can tell everyone around you all about this inside information you mysteriously know. This will definitely lead to an increase in overall coolness and score you trivia points.

You’re welcome!




3 September 2021


It has been a while since I launched this short blog with a single, perfect post. To my millions of devoted followers and fans that I definitely have, fear not! The reason for this prolonged and lonesome silence is not that I have nothing to share, but that I have been too busy creating wonderful things.

With this post, I present the making of Night Swim, a site-specific installation by Mia Thom in collaboration with Claire Patrick and Lucy Strauss... that’s me! The installation comprises three large sculptures for people to lie on or interact with as they please. Each sculpture contains a speakerboard, playing a generative soundscape that I composed and developed in Max MSP.

You can read more about the raw sounds that Mia and I created on the Night Swim project page of this website, but today I want to tell you about Latent Timbre Synthesis (LTS). LTS is a Deep Learning tool that I used to synthesize new sounds by interpolating between two selected audio samples. Back in 2020, I took part in the qualitative study for LTS as a composer, to investigate how I could use it in my artistic practice. I’m so happy that I could use it for Night Swim! LTS is open-source and available for anyone to use.
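If you’re curious what “interpolating between two audio samples” actually looks like, here is a rough Python sketch of the general idea. The encode and decode functions are hypothetical stand-ins for LTS’s trained model, and the real tool handles audio frames in its own way, so treat this as the gist rather than the recipe:

```python
import numpy as np

def interpolate_timbres(encode, decode, audio_a, audio_b, curve):
    """Blend two sounds by interpolating their latent representations.

    encode/decode are hypothetical stand-ins for a trained model like LTS;
    curve is a sequence of weights in [0, 1], one per latent frame
    (0 = all sample A, 1 = all sample B).
    """
    z_a = encode(audio_a)                    # latent frames for sample A: (frames, dims)
    z_b = encode(audio_b)                    # latent frames for sample B: same shape
    w = np.asarray(curve, dtype=float).reshape(-1, 1)
    z_mix = (1.0 - w) * z_a + w * z_b        # frame-wise linear interpolation
    return decode(z_mix)                     # synthesize new audio from the blended latents
```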

You might be thinking something like: “WHY did you use Deep Learning in this particular artwork, Lucy? Just to be extra?”

My answer is: “Well no! Although I do like to be extra on occasion, the use of LTS in Night Swim expresses themes of ambiguity and organic fluctuation. Sounds obscured and amalgamated into one by the forces of the ocean tides...”

The Latent Timbre Synthesis GUI:


The interpolation curve that you see in the image above was generated by fluctuations in geophysical ocean data. The synthesized sound resulting from this generation is made up of Mia’s voice singing a phrase that I composed, and me playing a different phrase that I composed on viola. According to the interpolation curve, the synthesized audio is pretty balanced throughout the whole generation, though there are some subtle fluctuations. The first half of the generated audio is closer to the timbre of Mia’s voice, but then there is a dip and the viola timbre takes over. At about two thirds of the way into the generation, the timbre pulls back towards Mia’s voice.
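For the curious, here is a little sketch of one simple way a geophysical time series could become an interpolation curve like the one above: normalise the fluctuations and resample them to one weight per frame of the generation. The function and example numbers are made up for illustration, not the actual Night Swim pipeline:

```python
import numpy as np

def ocean_data_to_curve(tide_series, n_frames):
    """Map a geophysical time series (e.g. tide heights) onto an
    interpolation curve of n_frames weights between 0 and 1."""
    x = np.asarray(tide_series, dtype=float)
    x = (x - x.min()) / (x.max() - x.min())            # normalize fluctuations to [0, 1]
    positions = np.linspace(0, len(x) - 1, n_frames)   # spread frames across the series
    return np.interp(positions, np.arange(len(x)), x)  # resample to one weight per frame

# e.g. ocean_data_to_curve([1.2, 1.5, 1.1, 0.9, 1.4], n_frames=400)
```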

Now that I’ve told you all about how I used Mia’s voice to make new sounds, let me tell you about how this collaboration came to be:

My artistic relationship with Mia goes way back to April 2017, when we were both in the final year of our undergraduate degrees. Mia posted an open call for a composer for her final BFA exhibition, I answered the call, and we made the first ever Darkroom Performance. Over the next four years, I worked with Mia on many different projects that were exhibited at galleries all around Cape Town. It has been so incredibly wonderful to be able to make a project together again for Night Swim, even though I’ve spent the last two years on literally the other side of the world!

Here Mia and I are lying on a sculpture together to test if it holds... a lot of kilograms (it does!) and here is some evidence that we did in fact get some work done that day:


I composed material for Night Swim on three different continents! When we first started working on this project, I had been in Canada for almost two years, without the possibility of travelling home to Cape Town during the pandemic. As luck would have it, I got vaccinated just in time to take the 33-hour flight from Vancouver to Cape Town, right before the opening of Night Swim on August 28th! That meant that I was composing in Canada, Germany and South Africa. Phew!


Here is a picture of my DIY workstation in the Frankfurt airport, during a 12-hour layover. Unfortunately, this session was short-lived because there were hardly any working plug points.


︎




11 June 2021


Welcome to my telematic music-dance project with Makhanda-based dance artist Julia de Rosenworth. Our project doesn’t have a title just yet, but you can watch our creative, soma-design process unfold here. Then, when you watch our first performance in a few months, you’ll feel so fancy because you will know what’s going on! Look at you, knowing all the inside jokes...

This morning, I got into the studio to do some work with the Kinect. I used my own body to refine the scaling of the gesture data, in preparation for the next tech rehearsal with Julia.
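If “refining the scaling” sounds mysterious, the idea is simple: move through the extremes of the capture space, note the minimum and maximum values per dimension, and squash the incoming joint data into a 0–1 range. The actual scaling happens inside my Max MSP patch; this little Python class is just a hypothetical stand-in to show the idea:

```python
import numpy as np

class GestureScaler:
    """Squash raw joint coordinates into a 0-1 range per dimension,
    based on the extremes observed while moving through the space."""

    def calibrate(self, recorded_frames):
        # recorded_frames: (n_frames, n_dims) array of raw tracking values
        data = np.asarray(recorded_frames, dtype=float)
        self.lo = data.min(axis=0)
        self.hi = data.max(axis=0)

    def scale(self, frame):
        # clip so poses outside the calibration range still stay within 0-1
        frame = np.asarray(frame, dtype=float)
        return np.clip((frame - self.lo) / (self.hi - self.lo), 0.0, 1.0)
```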

The defining features of today’s work were the balancing acts... an embodied interaction between human, laptop hardware and laptop software, if you will.



Once the data was workable, I made some good progress with the simple Machine Learning model we are using for this project in Max MSP. Here is a little tech rant. Skip over it for more fun jokes if it just sounds like “blah blah blah...”

First, I input gesture data into Max MSP, using the KiCASS system for gesture tracking. Then, the data is filtered using the [pipo mavrg] external.
Once the numbers are scaled and filtered, I record gesture data from a few specific poses into a MuBu track. At the same time, I record the corresponding parameter settings of a granular viola synth that I made. I train the [mubu.xmm] object on the data, so that it sets the synth parameters to specific settings, depending on the position of my body. Since [mubu.xmm] uses a regression Machine Learning model, it interpolates between the different poses, like it’s filling in the gaps. That way, it (hopefully!) always sounds like something!
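If the [mubu.xmm] step sounds abstract, here is a rough Python analogue of the idea. The poses and synth parameters below are made-up numbers, and [mubu.xmm] has its own regression model under the hood; I’m just using a simple distance-weighted blend to show how an in-between pose ends up with in-between synth settings:

```python
import numpy as np

# Hypothetical training examples: each scaled pose is paired with the
# granular-synth settings I want the patch to land on at that pose.
poses = np.array([
    [0.1, 0.2, 0.9],          # pose 1 (made-up scaled coordinates)
    [0.5, 0.8, 0.4],          # pose 2
    [0.9, 0.1, 0.2],          # pose 3
])
synth_params = np.array([
    [0.10, 220.0, 0.2],       # e.g. grain size, pitch, density at pose 1
    [0.50, 440.0, 0.8],       # ... at pose 2
    [0.90, 660.0, 0.5],       # ... at pose 3
])

def predict_params(pose, poses, synth_params, eps=1e-6):
    """Distance-weighted blend of the recorded parameter settings, so an
    in-between pose gets in-between settings (the 'filling in the gaps'
    behaviour described above)."""
    d = np.linalg.norm(poses - np.asarray(pose, dtype=float), axis=1)
    w = 1.0 / (d + eps)            # closer training poses get more weight
    w /= w.sum()
    return w @ synth_params        # weighted average of the parameter rows

# A pose roughly between pose 1 and pose 2 lands between their settings:
print(predict_params([0.3, 0.5, 0.65], poses, synth_params))
```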


Here are my makeshift moves, in preparation for Julia’s real moves.



These moves would actually have been impressive... if only I had been recording the sound on my headphones.

︎