Our thinking process for this assignment went through a few phases. We started with the idea of taking realtime weather data and creating instruments that would play according to the changing parameters of the data. Then we shifted to another idea: taking past weather data (10 years) from different cities in the US and doing the same thing, this time to represent climate change.
After hitting a conceptual wall, we realized that both of us were intrigued by Jason Levine’s talk and the t-SNE algorithm.
The concept –
Create an application that generates a t-SNE map of selected audio data, and have an “outside” agent trigger the samples in the browser.
The process –
Using Gene Kogan’s ML4A guides and t-SNE audio Python script, we analyzed and segmented large quantities of audio data.
Using principal component analysis, we ended up with a 2D vector map of sound similarities, represented as a JSON file.
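The shape of that step can be sketched roughly as follows: per-segment feature vectors go in, a normalized 2D map comes out, and each point is written next to its sample's path in JSON. This is a minimal sketch, not the actual ML4A script: the random features stand in for real per-segment audio analysis, scikit-learn's `TSNE` (with PCA initialization) stands in for the dimensionality reduction, and `segment_%d.wav` is a made-up naming scheme.

```python
import json
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Stand-in for per-segment audio features (e.g. MFCCs averaged over
# each segment); in the real pipeline these come from the audio
# analysis step of the t-SNE audio script.
features = rng.normal(size=(20, 13))

# Reduce the feature vectors to a 2D similarity map; PCA initializes
# the embedding before t-SNE refines it.
tsne = TSNE(n_components=2, perplexity=5, init="pca", random_state=0)
points = tsne.fit_transform(features)

# Normalize to [0, 1] so the browser can map points to screen space.
points = (points - points.min(0)) / (points.max(0) - points.min(0))

# One entry per audio segment: a (hypothetical) file path and its
# 2D position on the map.
mapping = [{"path": "segment_%d.wav" % i, "point": p.tolist()}
           for i, p in enumerate(points)]
print(json.dumps(mapping[0]))
```

The JSON is just a flat list of path/point pairs, which is all the browser side needs to place every sample on the canvas.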
We started to experiment with the performative aspect of the output. Initially we triggered the samples through pixel analysis of a video. The video triggering seemed less interesting, and we were looking for a more organic generation.
So, we took the p5 Flocking example, fit it into our application, and had the flocking objects trigger the samples.
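Each flocking object then acts as a cursor over the map: as it moves, the sample whose point it is closest to gets played. Our version ran in the browser with p5.js; the lookup each boid performs can be sketched in Python as a simple nearest-neighbor query (the point coordinates here are made up, and `nearest_sample` is a hypothetical helper, not a function from our sketch):

```python
import numpy as np

def nearest_sample(position, points):
    """Return the index of the map point closest to a 2D position."""
    distances = np.linalg.norm(points - position, axis=1)
    return int(np.argmin(distances))

# Hypothetical map of four samples in normalized [0, 1] space.
points = np.array([[0.1, 0.1], [0.9, 0.1], [0.1, 0.9], [0.9, 0.9]])

# A boid near (0.9, 0.9) triggers the sample at index 3.
print(nearest_sample(np.array([0.8, 0.85]), points))  # → 3
```

In practice you would also keep a small distance threshold and a cooldown per sample, so a boid hovering over a point doesn't retrigger it every frame.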
The biggest challenge, as always, is finding the right data (audio) that will create an interesting generation and also work well with the t-SNE app – meaning that the analysis produces organic segments that sound good.