The alteration of sound via a computer is an application of Digital Signal Processing (DSP), a discipline concerned with converting real-world data into digital representations, altering them according to user-defined parameters, and/or converting them back into analog signals. The subfield of DSP most relevant to AriVibes is Digital Audio Effects, viz. “boxes or software tools with input audio signals or sounds which are modified according to some sound control parameters and deliver output signal or sounds”.
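The basic shape of such a tool can be sketched in a few lines of Python (a toy illustration, not AriVibes' actual implementation): a digitized signal is simply a sequence of samples, and an effect is a function that maps input samples to output samples under some user-defined control parameter.

```python
import math

def sine(freq_hz, duration_s, sample_rate=44100):
    """Generate a digitized sine tone as a list of samples in [-1, 1]."""
    n = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate) for i in range(n)]

def overdrive(samples, gain=4.0):
    """A simple digital audio effect: boost each sample, then soft-clip it.

    The `gain` argument plays the role of a "sound control parameter".
    """
    return [math.tanh(gain * s) for s in samples]

tone = sine(440, 0.1)          # the "input audio signal"
processed = overdrive(tone)    # the "output signal", modified by the effect
```

Here the conversion to and from the analog domain (the job of an audio interface's converters) is elided; the effect itself is just arithmetic over the sample stream.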
Effects and the augmentations they seek to achieve are vast topics, for there are as many ways to potentially augment a sound (or any other analog signal) as there are ways to configure and combine the elements of a signal processing chain.
This section presents a survey and discussion of technological developments and research projects related to AriVibes, evaluating their financial and technical accessibility to a wide public.
Each one of these technical and creative efforts provided guidance and insights for the making of AriVibes.
One of the sensors best suited to the augmentation of everyday objects is the piezoelectric microphone, which produces an alternating voltage analogous to the vibration of the object it is affixed to. Nicolas Collins, a leading figure of the Do It Yourself musical electronics movement, recommends them as “great for greatly amplifying hidden sounds in everyday objects” in his Handmade Electronic Music: The Art of Hardware Hacking.
The proliferation of piezoelectric components in domestic appliances gave rise to a music genre aptly named “Piezo Music”. Music was made by affixing the microphones to everyday or purpose-made objects and amplifying the sounds thereby produced.
Hugh Davies, an eminent Piezo musician, built and performed instruments made with egg-slicers, springs, bread bins, and tailors’ dummies, among other things. Adachi Tomomi and Eric Leonardson amplified springs, wires and other bits of scrap metal, creating percussive and gamelan-like sounds.
For good or bad, their credo was “Make it louder, a lot” (Collins 2006, p. 38). The Piezo Musicians pioneered the music of everyday objects and brought us sonically closer to them. Yet musicians tend to favor dynamic or condenser microphones, regarding the sound of piezoelectric microphones as “unnatural and often unattractive”.
Systems for real-time audio effects processing are typically designed as self-contained embedded systems (or “effects units”) to be used in an “effects chain”.
Effects pedals are self-contained electronic devices specifically designed to alter the sound of the instrument connected to them. The musician’s instrument is connected directly to the unit, which outputs the altered sound in real time.
They are the augmentation tool of choice for the vast majority of performing musicians, for a number of reasons. Their self-explanatory mode of operation and perceptible effect on sound make them the simplest and easiest way to augment an instrument. Their modular nature (two or more pedals can be chained together) offers ample room for sound customization. And, designed with the performing musician in mind, they are robust and reliable.
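This modularity can be illustrated with a short sketch (hypothetical effects, not models of any real pedal): if each pedal is a function over samples, a pedalboard is simply their composition, with the signal flowing through each pedal in turn.

```python
def chain(*effects):
    """Compose effects like pedals on a board: the output of one
    feeds the input of the next."""
    def run(samples):
        for effect in effects:
            samples = effect(samples)
        return samples
    return run

# Hypothetical pedal stand-ins: a hard-clipping fuzz and a volume trim.
fuzz = lambda xs: [max(-0.5, min(0.5, 3.0 * x)) for x in xs]
volume = lambda xs: [0.8 * x for x in xs]

pedalboard = chain(fuzz, volume)
out = pedalboard([0.0, 0.1, 0.4, -0.4])
```

Reordering the arguments to `chain` reorders the pedals, just as re-patching cables on a physical board would, which is where much of the room for sound customization comes from.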
These pedals have been widely and consistently celebrated by musicians, with songs such as “Interstellar Overdrive” by Pink Floyd, “Wah-Wah” by George Harrison, or “Big Muff” by Depeche Mode, and even band names such as “We’ve Got a Fuzzbox and We’re Gonna Use It”.
Pedals, however, can process only a limited range of inputs (such as electric guitars), and they necessitate dedicated amplification.
Echoing many of the motivations of the present project, Delle Monache, Papetti, Polotti and Rocchesso (Delle Monache et al. 2008) fitted Nintendo Wii Remote controllers to cutlery and dressing bottles, sending gestural data derived from accelerometer readings to a Max/MSP patch that “sonified” the gestures.
Thus for a knife “the action of cutting is sonified as rubbing on a wrinkled plastic surface” and for a salad bowl “continuous dripping and boiling sounds are coupled with the action of stirring and mixing the salad” (Delle Monache et al. 2008, p. 3). Different “families of parameters configurations” were devised, allowing different gestures to be mapped to different sonifications.
While an interesting development, the SAFO project required the specialized (and expensive) software Max/MSP to run, along with knowledge of how to use it, a set of Wii Remote Controllers and a receiver for the accelerometers’ data.
The “MO” is a research project from the Real-Time Musical Interactions section of the Institut de Recherche et Coordination Acoustique/Musique (IRCAM) that developed a set of tangible objects and software modules designed to capture and musically interpret a wide range of gestures.
A MO object can be held in one hand and contains sensors such as accelerometers, gyroscopes, and buttons, transmitting gestural data over a wireless network. This gestural data is received and analyzed by a Max/MSP patch, which records and eventually recognizes the gestures, allowing users to adapt their gestures to play and manipulate pre-recorded sounds. Some MO objects are fitted with piezoelectric microphones and allow shaping the sound they pick up.
MO is a tabula rasa: “a central concept is to let users determine the final musical function of the working objects, favoring customization, assembling, repurposing.” (Rasamimanana et al. 2011, p. 2). The authors produced videos showing the MO’s potential for augmenting any object, in which music is played with common kitchen implements.
The researchers have tested several different configurations of the MO in a school, augmenting a wide variety of objects.
“In a particular music context, the teacher and students proposed to use a chess game as an interaction metaphor. The game was found to resonate with musical concepts about opposition and dialogue (e.g. canons or duets). A chess table was then augmented with a piezo sensor to register when a piece was put on the table. Players had also attached a MO module to their wrists to track the hand motion above the table. All the sensors were used to control the tempo of a musical piece. Using this setup, performance of a music duet was possible, where each player controls a different music line.” (Rasamimanana et al. 2011, pp. 3-4)
MO seemed like a promising augmentation device, but its accessibility leaves much to be desired. For although the project was “designed for musical interaction and performance” (Rasamimanana et al. 2011, p. 2) and won the first prize at the 2011 Margaret Guthman Musical Instrument Competition, it received more exposure at its exhibition at New York’s Museum of Modern Art than in a music shop.
One of the most innovative approaches to augmentation is Roberto Aimi’s Hybrid Percussion project, in which the vibrations of an object being hit are mixed with prerecorded samples in a way that generates an augmented timbre grounded in real acoustics.
Aimi proposed a system architecture in which the sound coming from a damped object is picked up by a piezoelectric microphone affixed to it and convolved with a sound sample, continuously and in real time. The sound sample is typically the pre-recorded sound (or “impulse response”) of a particular instrument the performer wants to use as a resonator.
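In signal-processing terms, convolving the live input with an impulse response means that every input sample excites the recorded resonance, scaled by that sample’s amplitude. A direct-form sketch in Python (a toy illustration; Aimi’s system performs this continuously and far more efficiently, e.g. via FFT-based partitioned convolution):

```python
def convolve(signal, impulse_response):
    """Direct-form convolution: each input sample triggers the whole
    impulse response, scaled by that sample's amplitude, and the
    overlapping contributions are summed."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, x in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += x * h
    return out

hit = [1.0, 0.0, 0.0]    # a single idealized 'hit' (unit impulse)
ir = [0.5, 0.3, 0.1]     # toy impulse response of some resonator
resonated = convolve(hit, ir)
```

A unit impulse reproduces the resonator’s recorded response exactly; softer or more complex excitations blend it proportionally, which is why the output tracks the nuances of the player’s touch.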
As the videos embedded in Aimi’s page show, a brush fitted with the system can be made to sound like a cymbal, or a cymbal in turn like a Tibetan bowl. The system attempts to endow objects with new resonances that respond with uncanny accuracy to the intensity of the user’s hits and, unlike most digital kits, to nuances such as scrapes.
Aimi’s system, however, requires custom-built piezoelectric microphones, a computer fitted with an external audio interface, and knowledge of graphical programming environments.
Newton and Marshall’s Augmentalist project was an effort “to allow musicians to become the developers of their own augmented instruments” (Newton and Marshall 2011, p. 251), motivated by the poor adoption of new interfaces for musical expression (NIMEs) among musicians’ communities. The researchers invited ten musicians to take part in consultations and tests, and their instruments were fitted with commercially available USB interfaces that transmit gestural and motion data. The researchers were able to use gestures such as strumming, or the instrument’s inclination, as real-time controls of audio effects.
“The design process for the Augmentalist took an iterative, user-centred approach. This involved numerous consultation, testing and design sessions with musicians. The overall goal of this process was to ensure that the system is designed in such a way as to be useful to the musicians themselves” (Newton and Marshall 2011, p. 249).
Evaluations of the Augmentalist conducted with musicians gave overwhelmingly positive results. But the system required an external audio interface, a powerful laptop, a set of Phidgets, a professional Digital Audio Workstation, and Max/MSP, totaling thousands of pounds in financial investment and a couple of technical training courses, enough to put most musicians off.
A computer-based Digital Audio Workstation (DAW), such as the one the Augmentalist sent the musician’s sound through, is an ensemble of audio processing hardware and software modules designed primarily for recording and manipulating digital audio. It consists of a computer acting as host for a sound card and for the specialized software needed to run it and process the audio.
Nearly all of today’s DAWs allow real-time alteration of samples or audio input, and some, such as Ableton Live, are specifically designed to handle real-time audio processing. Some effects, such as Ableton Live’s Overdrive or Logic Pro’s Pedalboard, emulate the guitar effects pedals mentioned earlier. In the screenshot below, a chain consisting of overdrive, fuzz, phaser and chorus pedals is emulated in Pedalboard’s “Shoegazer” preset.
A screenshot of a preset in Apple Logic Pro Pedalboard plug-in. The drop-down menu allows the user to select other different-sounding (and looking) Pedalboard presets, though the preset may also be reconfigured at will by swapping the virtual pedals or adding new ones (from the bank on the right).
These presets offer pre-made permutations of the parameters, and can be customized and saved into new ones by the user.
DAWs’ power and versatility make them ideal candidates for augmenting objects and instruments. Guillaume Zenses’s “Bureaucratic sounds” performance made heavy use of Ableton Live’s Collision, a technology that emulates real-world percussion instruments, to augment a table. Said table was fitted with contact microphones connected to the performer’s audio interface. Collision used the amplitude of the piezoelectric microphones’ input to drive banks of oscillators that modeled the sonic behavior of the performer’s own customized, xylophone-like instruments.
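The driving mechanism can be sketched as an envelope follower feeding a synthesizer’s loudness (a minimal Python illustration; Collision’s actual physical-modeling engine is far more sophisticated):

```python
import math

def envelope(samples, smoothing=0.5):
    """Track the amplitude of a (piezo) input: rectify each sample,
    then let the level decay smoothly between peaks."""
    env, level = [], 0.0
    for s in samples:
        level = max(abs(s), level * smoothing)
        env.append(level)
    return env

def excite(env, freq_hz, sample_rate=44100):
    """Drive a sine oscillator's loudness with the follower's output,
    so hits on the object become notes of matching intensity."""
    return [a * math.sin(2 * math.pi * freq_hz * i / sample_rate)
            for i, a in enumerate(env)]

hits = [0.0, 1.0, 0.0, 0.0, -0.2, 0.0]  # toy piezo input: a hard hit, then a soft one
note = excite(envelope(hits), 440)
```

The follower preserves the dynamics of the performer’s touch, which is what keeps the link between movement and sound that Zenses describes below.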
Zenses adds that the performance “is not a laptop performance, it is the performance of a desk. […] My main concept was ‘unexpected sounds from unexpected sources’. By playing with the furniture rather than with the laptop, I intended to keep the link between movement and sound.”
But this setup requires contact microphones, an audio interface, and an expensive DAW.
A paper cup fitted with the WaveDrum Mini’s sensor and augmented with the Triarimba preset
One of the first fully self-contained commercially available devices that could be used to augment objects is Korg’s WaveDrum Mini. A successor of the WaveDrum, this electronic drum processes the user’s hits through a series of filters, waveguides, and synthesizers to emulate a wide variety of different percussive sounds, and comes with an external piezoelectric sensor that can be clipped onto any other object.
The WaveDrum Mini comes with a wide range of presets and is battery-powered to maximize portability, thereby facilitating spontaneous creative impulses.
However, users have considered its £259 price off-putting.
One of the first apps to turn a mobile phone into an audio effects unit was RjDj. Audio captured through the phone’s or its headset’s microphone drives the real-time generation and modulation of complete musical pieces, immersing the user in a soundscape different from the one he or she is in. The underlying idea is to offer altogether “new music formats including augmented music and smart music.”
The RjDj sound engine is a purpose-made iOS port of a lightweight distribution of a digital signal processing environment (Pure Data), effectively allowing the phone to run processing chains (or “patches”) built in the Pure Data format. This allows for one of the app’s most innovative features: the possibility to build one’s own Pure Data patches on a desktop computer, to then be uploaded and shared with a community of RjDj users as scenes. Mainstream artists and producers have joined the RjDj music network and distributed millions of scenes offering copies of their music ready for augmentation.
But despite offering a copy of Farnell’s Designing Sound on their website, which explains the rudiments of building Pure Data patches, most iOS users are not willing to learn the fundamentals of signal processing and a graphical programming language to build their own patches.
With the simple credo of “creating social music-making experiences for everyone, no talent required”, Smule’s creations are among the App Store’s most successful music-making apps. The works have been shared and listened to more than 126 million times since 2008.
Smule’s breakthrough came with Ocarina, an app in which the user blew into the phone’s microphone to produce a synthesized ocarina sound that could be modulated by interacting with virtual sound holes appearing on the touch screen. Ocarina became the top paid iPhone app in 2008 and was nominated for several awards. The download counts of their subsequent apps, such as Magic Piano and Magic Fiddle, are in the hundreds of thousands (Wang et al. 2011).
Smule’s founders were quick to see the potential of the iPhone as a platform for making innovative music technology accessible to a wide range of musical and non-musical users. They showed the potential of mobile apps that combine ease of use with a visually and sonically appealing user experience, distributed through a store immediately available to millions of people.