We Need Us

by Thijs Koerselman

Introduction

Artist Julie Freeman has been working together with the Open Data Institute and SiLab to create an online data-driven audiovisual artwork, ‘We Need Us’ (www.weneedus.org).

The artwork consists of a number of abstract, dynamic animations rendered in a web browser. Each animation is driven by a live data feed from one of the projects on the zooniverse.org website. This online science platform allows users to classify image-based data for all sorts of research projects, ranging from galaxies to the deep sea to cancer research. The database has a public API that allows you to query the classification data over a period of time.

[Image: a typical still in We Need Us, with the user controls visible]

In the animations, basic geometric 2D shapes move around and rotate in different ways depending on a set of rules. These rules are handcrafted functions that differ for each project and define the visual behaviour of the elements in relation to the incoming data.

Julie approached the SiLab to implement a web-browser-based sound engine and to work on audio content to accompany the animations. She was looking for sound largely based on processed field recordings and drones. The composition would have to be as dynamic as the animations and the data that drives them, and be mapped in a way that makes it work together with the visuals and fit the context of the project.

[Image: We Need Us screenshot]

Our time schedule was very tight. Julie was about to present the installation during the global TEDx conference in Brazil, which left us with only a few weeks to go from concept to production, with the available man-hours being just a fraction of that. Because of my specific experience as both a programmer and a sound designer, we agreed that I would tackle this project on my own to cut overhead as much as possible.

Challenges

There were a few challenges in this project:

* The artwork is mainly aimed at online users, therefore the amount of audio data had to be limited so the artwork would be accessible to mobile clients over a reasonably slow connection.
* The artist wanted to be involved in the production process of the sound material and thus be able to alter and tweak the content of the engine.
* The time schedule was very tight. The sound engine plus its content would have to go from concept to prototype to production in just a few weeks.
* The mapping from the data to the animations was still a work in progress.

Overview

I will not go into detail about every design decision; instead I will largely just describe how the system works as a whole.

The data coming from Zooniverse is basically a single varying value describing the number of classifications over time, which is very limited in terms of input mapping. This classification “intensity”, together with the animation events derived from the set of rules, forms the input that drives the sound engine. We decided to use the varying intensity to change the overall mix of the composition, and to use the extracted animation events to add shorter sounds on top of that.
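As a rough sketch of that mapping, the code below shows the two inputs side by side. The method names (setIntensity, triggerSample) are purely illustrative, not the engine's actual API, which is documented on the wiki linked at the end.

```javascript
// Illustrative sketch only: hypothetical method names, not the real WNU API.
function onZooniverseUpdate(engine, classificationRate, maxRate) {
  // Normalise the classification rate to a 0-1 "intensity" value
  // and use it to set the overall mix of the current scene.
  const intensity = Math.min(classificationRate / maxRate, 1);
  engine.setIntensity(intensity);
}

function onAnimationEvent(engine, samplerId) {
  // Animation events trigger short one-shot sounds on top of the layers.
  engine.triggerSample(samplerId); // e.g. 'A' or 'B'
}
```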

Each of the projects has its own audio content, from here on referred to as a scene, and each scene is made up of two types of audio sources: layers and one-shot samples. At any time a scene can move to any other scene, and all of its layers will crossfade into the next scene's layers to create a smooth transition.
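A scene transition essentially comes down to ramping layer gains against each other. Here is a minimal Web Audio sketch, assuming each layer object holds a running GainNode and a target volume (both assumptions for the example, not the engine's actual data structures):

```javascript
// Crossfade the layers of the current scene into the layers of the next
// scene over a few seconds, using scheduled gain ramps.
function crossfadeScenes(ctx, fromLayers, toLayers, seconds = 4) {
  const now = ctx.currentTime;
  fromLayers.forEach((layer) => {
    layer.gain.gain.setValueAtTime(layer.gain.gain.value, now);
    layer.gain.gain.linearRampToValueAtTime(0, now + seconds);
  });
  toLayers.forEach((layer) => {
    layer.gain.gain.setValueAtTime(0, now);
    layer.gain.gain.linearRampToValueAtTime(layer.targetVolume, now + seconds);
  });
}
```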

Components

### Layers
Audio layers play constantly during a scene. They vary in both number and length and are played as loops. By combining layers of different lengths you can get away with shorter content before it starts sounding repetitive, which is very useful since download time is crucial.
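Looping a single layer with the Web Audio API looks roughly like this. It is a sketch rather than the engine's exact code:

```javascript
// Start one looping layer. `buffer` is a decoded AudioBuffer; the returned
// GainNode is what gets adjusted later for mixing and crossfading.
function startLayer(ctx, buffer, volume = 1) {
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.loop = true; // layers repeat for the whole scene

  const gain = ctx.createGain();
  gain.gain.value = volume;

  source.connect(gain);
  gain.connect(ctx.destination);
  source.start(0);

  return { source, gain };
}
```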

To simplify mapping, all layers are mixed using a single 0-1 input value. This value stacks the layers by fading them in one by one. For example, if you have four layers, the first fades in between 0.00 and 0.25, the second between 0.25 and 0.50, and so forth.
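That stacking rule is easy to express as a pure function. The sketch below is my own illustration of the rule, not the engine's exact implementation:

```javascript
// Map a single 0-1 mix value to per-layer gains, fading layers in one by one.
// With four layers, layer 0 fades in over 0.00-0.25, layer 1 over 0.25-0.50, etc.
function layerGains(value, layerCount) {
  const step = 1 / layerCount;
  return Array.from({ length: layerCount }, (_, i) => {
    const gain = (value - i * step) / step;
    return Math.max(0, Math.min(1, gain));
  });
}

// layerGains(0.375, 4) -> [1, 0.5, 0, 0]
```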

A special scene named “__base” can be used to play layers continuously across all scenes, creating a homogeneous base layer for the whole artwork.

### One-Shot Samples
The one-shot samples are triggered on top of the scene layers. They are short sounds designed to fit with the rest of the composition and to create a direct link with the events in the animation.

Each scene has two samplers, called A and B, and each sampler contains a variable number of samples. When triggered, a sampler picks a sample at random, and a pool is used to avoid repetition. Because you have no control over which sample will be triggered, we created two samplers so we can map two different categories of sounds.
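The pool idea can be sketched like this. It illustrates the approach, not the engine's actual sampler code:

```javascript
// A one-shot sampler that picks samples at random while avoiding
// repetition: the pool is only refilled once every sample has been played.
class SamplePool {
  constructor(buffers) {
    this.buffers = buffers;
    this.pool = [];
  }

  next() {
    if (this.pool.length === 0) {
      this.pool = this.buffers.slice(); // refill once all samples are used
    }
    const i = Math.floor(Math.random() * this.pool.length);
    return this.pool.splice(i, 1)[0];
  }
}

function triggerOneShot(ctx, pool, volume = 1) {
  const source = ctx.createBufferSource();
  source.buffer = pool.next();
  const gain = ctx.createGain();
  gain.gain.value = volume;
  source.connect(gain);
  gain.connect(ctx.destination);
  source.start(0);
}
```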

### File Structure
To make it easy to swap sounds and edit content for the sound engine, the system iterates over a predefined folder structure to generate the engine's configuration. Every scene has a folder matching its project ID, which also contains a small JSON file that lets you adjust the level of each layer, the volume of the two samplers, and the overall mix volume for the scene.
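As an illustration only: the folder names and JSON keys below are made up to match the description above, and the actual names are defined by the engine's configuration format (see the wiki linked below).

```javascript
// Hypothetical layout, not the engine's real folder or key names:
//
// sounds/
//   __base/            base layers shared by all scenes
//   some_project_id/   one folder per Zooniverse project ID
//     layer1.mp3
//     layer2.mp3
//     a/               samples for sampler A
//     b/               samples for sampler B
//     config.json
//
// A config.json along these lines adjusts the per-scene mix:
const exampleConfig = {
  volume: 0.8,          // overall mix volume for the scene
  layers: [1.0, 0.6],   // level per layer
  samplerA: 0.5,        // volume of sampler A
  samplerB: 0.7         // volume of sampler B
};
```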

Source Code

The engine uses the Web Audio API, which is supported by all modern browsers. It is designed as a standalone component so that it would be reusable and could be fully tested outside the scope of the existing WNU code.
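For reference, the Web Audio entry point the engine builds on can be created like this. The vendor-prefix fallback reflects browsers of the time, and the behaviour when audio is unsupported is my assumption rather than necessarily the engine's:

```javascript
// Create the AudioContext everything else renders into. Older WebKit-based
// browsers exposed the constructor under a vendor prefix.
const AudioContextClass = window.AudioContext || window.webkitAudioContext;

if (AudioContextClass) {
  const ctx = new AudioContextClass();
  // ...hand `ctx` to the sound engine here
} else {
  // Assumed fallback: the artwork simply runs without sound.
  console.warn('Web Audio API is not supported in this browser.');
}
```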

All code is open source and available here:
https://github.com/theodi/colleen

The documentation for the sound engine API can be found on the wiki:
https://github.com/theodi/colleen/wiki/Sound%20Engine