Echolocation (Dynamic Aural Fragmentation)
Echolocation (Dynamic Aural Fragmentation) is an interactive quadraphonic and panoramic audio/visual installation that exaggerates fundamental acoustic principles, allowing spatial perception through the observation of sound. Using the experience and perception of a bat as a foundation, the installation examines how different species share the same physical laws but utilize them in different ways – in this scenario, the sense of hearing. The installation explores how one might translate the dominant perceptual system of a bat (hearing) into the dominant perceptual system of most humans (sight). However, Echolocation (Dynamic Aural Fragmentation) is not meant to be a simulation, but a stylized, abstracted representation. The system is an environment that transforms how we interpret awareness of the self through the stimuli we generate through interaction.
Notes: Kyle Duffield's Thesis essay, Echolocation (Dynamic Aural Fragmentation) was nominated for OCADU's Fourth Year Liberal Studies Award.
Download OCADU Thesis Essay Here
During the development of this project, Kyle Duffield also received the Sumo Art and Technology Scholarship (2009) and the Integrated Media Faculty Award (2011), as well as an honorable mention for the 401 Richmond Career-Launcher Prize during the 96th annual OCADU Grad Show, The Show Off. Kyle also pitched this project at Artspin's crEATe and won the proceeds from the dinner, which will be put towards exhibiting in Artspin 2012.
Beta 1.0 of this project was successfully tested in OCAD's audio studio in April 2009. Version 1 consisted of a basic audio construction of the environment. I wish to note that I used Ville Pulkki's VBAP~ Max/MSP object within my patch.
Notes: Kyle Duffield received OCAD's Sumo Art and Technology Scholarship for the development of Echolocation Beta 1.0.
Beta 2.0 of this project was successfully tested in OCAD's audio studio in April 2010.
The completed project, Echolocation (Dynamic Aural Fragmentation), was developed as my 2011 OCAD undergraduate thesis project.
Notes: Echolocation (Dynamic Aural Fragmentation) received the Integrated Media Faculty Award and an honorable mention for the 401 Richmond Career-Launcher Prize during the 96th annual OCADU Grad Show, The Show Off.
The lights are turned off, creating a near-dark environment. The space is set up with four microphones and four speakers to create a quadraphonic surround sound environment and allow for real-time playback of the sounds emitted within the space (see "Audio Setup Diagram" below). All audio information (emitted or direct sound) is captured by the microphones and input into the software Max/MSP/Jitter, where it is processed. It is then output through the speakers with a delay/reverb effect that reflects a participant's spatial orientation. As a visual component, each wall of the structure (each of which contains a video camera) projects the captured event of emitted sound and plays the delayed, feedback-processed footage back in sync with the audio playback. Only the event of the participant emitting sound is displayed, but from multiple perspectives (one per camera), played back at distinct times.
The participant within the room emits a sound, which is then played back (after real-time processing) into the space with a delay/reverb effect to emulate the reverberation experienced by bats. The louder the participant's emitted sound (greater amplitude), the faster the delay rate (e.g., 200 milliseconds [ms]) of the played-back processed sound, and the shorter the delay feedback intervals (e.g., multiple, decaying repetitions of the emitted sound playing back every 200 ms). Conversely, quieter emitted sounds (lower amplitude) are output with a slower delay rate (e.g., 3000 ms) and longer delay feedback intervals (e.g., multiple, decaying repetitions of the emitted sound playing back every 3000 ms). Therefore, the rates of delay assigned to the spatially separated microphones correspond to the volume of the incoming signal, an emulation of how the speed of early reflections relates to the sound pressure level and distance of the sound source (the participant).
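The amplitude-to-delay mapping described above could be sketched as a simple inverse linear function. This is an illustrative Python sketch, not the actual Max/MSP patch: the 200 ms / 3000 ms endpoints come from the examples in the text, but the real patch's mapping curve and parameter names are not documented here.

```python
def amplitude_to_delay_ms(amplitude, min_delay=200.0, max_delay=3000.0):
    """Map an input amplitude (0.0-1.0) to a delay time in milliseconds.

    Louder input -> shorter delay time (faster delay rate); quieter
    input -> longer delay time, as described in the installation text.
    The linear curve and the endpoint values are assumptions.
    """
    amplitude = max(0.0, min(1.0, amplitude))  # clamp to the expected range
    # Inverse linear mapping: amplitude 1.0 -> min_delay, 0.0 -> max_delay
    return max_delay - amplitude * (max_delay - min_delay)
```

In the installation, each of the four microphones would drive its own instance of such a mapping, so the delay heard from each speaker tracks the level arriving at its paired microphone.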
In a placement similar to that of the microphones, the visual component consists of four infrared (night vision) cameras that capture the participant from four angles simultaneously. The room is lit with infrared light to maintain a dark ambience while allowing the cameras to detect the participant within the space. Each wall of the room projects the captured event of emitted sound and plays the delayed footage back in sync with the audio playback, including the delay feedback effect (visually represented as decaying trails of the participant's actions when they emit a sound). The video playback contains only the events of sound being emitted by the participant. Each of the four video channels has its own fluctuating luminosity, which is affected by the amplitude of the input audio signal (the participant's emitted sound). Because of the surround audio/visual design, one can imagine being in a mirrored room, except that the "reflection" of the participant's actions is temporal. Therefore, the "sound event" is thought of as being perceived by each input "simultaneously", but is actually received at different rates and represented as such.
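The decaying visual trails and amplitude-driven luminosity described above could be sketched as a per-frame video feedback loop. This is a hedged illustration in Python/NumPy, not the Jitter patch itself: the 0.6 feedback coefficient and the linear luminosity scaling are assumed values chosen only to show the technique.

```python
import numpy as np

def feedback_frame(event_frame, previous_output, amplitude, feedback=0.6):
    """Composite one projected video frame with decaying trails.

    event_frame: current camera frame (float array, 0.0-1.0), non-zero
        only while the participant is emitting sound.
    previous_output: the last projected frame, fed back to form the trail.
    amplitude: input audio level (0.0-1.0), scaling the channel's
        luminosity as the text describes.
    feedback: fraction of the previous frame retained each step; values
        below 1.0 make the trail decay over time (0.6 is an assumption).
    """
    lit = np.clip(event_frame * amplitude, 0.0, 1.0)   # amplitude drives brightness
    out = np.maximum(lit, previous_output * feedback)  # keep the brighter of new vs. fading trail
    return np.clip(out, 0.0, 1.0)
```

Running one such loop per camera/projector pair, each delayed by its channel's own delay time, would yield four temporally offset "reflections" of the same sound event.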
I would also like to give a special thanks to Bentley Jarvis for his help and support in countless ways throughout the development of this project. Many thanks also go out to the following: Artspin, MOCCA, Teresa Ascencao, Douglas Back, Christine Duffield, John Duffield, Steve Duffield, Mike Duffield, Renzi Guarin, Keith Hamilton, Dan Heinz, Daniele Hopkins, Jacob’s Hardware, Johanna Householder, Sam Pelletier, Nolan Ramseyer (Peau Productions), Ryan Randall, Jim Ruxton, John Scarpino, Elida Shogt, Mike Steventon and B.H. Yael.