Sound from physical matter: Hyperacoustic and Physiphonic user-interfaces

Swarm Modulation

Swarm Modulation: Hearing sounds beyond the range of human hearing, in real time.

Beyond enhancing the human senses, swarm modulation allows us to create expressive new user interfaces, in which a person directly touches physical matter and hears its subsonic sound.

Read the Swarm Modulation paper [PDF].

Ryan Janzen and Steve Mann (2015), "Swarm Modulation: An algorithm for real-time spectral transformation", Proc. IEEE GEM 2015, 8 pages, to appear. [PDF]

A novel class of modulation is introduced, for frequency transformation and spectral synthesis in real-time. Swarm modulation has potential applications to enhance human hearing with extended frequency ranges, in medical diagnostics for electrocardiogram (ECG), electroencephalogram (EEG) and other medical signals, for RADAR analysis, for user-interface sonification, and for sound synthesis or non-synthetic sound transformation. Swarm modulation is a new way to transform signals, and is demonstrated for transforming subsonic and ultrasonic sound into the audible range of human hearing.

Swarm modulation is based on the principle of phase-incoherent frequency-roaming oscillation. Features in the frequency-time plane are reconstructed via a time-varying process, controllable with instantaneous, zero-latency reaction to new information. Swarm modulation allows prioritization of salient output spectral features for efficient processing, and overcomes the cyclic beating patterns that arise when Fourier and wavelet-based methods are applied in a stationary manner.
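As an illustration only (not the published algorithm), a minimal sketch of a bank of phase-incoherent, frequency-roaming oscillators might look like the following: each oscillator keeps its own independent running phase and glides its instantaneous frequency toward a target (e.g. a spectral feature remapped into the audible range), so new targets take effect sample by sample. The function name and the first-order glide constant are our own assumptions:

```python
import numpy as np

def swarm_block(phase, freq, target_freq, amps, sr=48000, n=480, glide=0.01):
    """One processing block of a simplified oscillator swarm (illustrative).

    phase, freq : per-oscillator state (float arrays), updated in place
    target_freq : target frequency for each oscillator, in Hz; each
                  oscillator "roams" toward its target with a first-order
                  glide while keeping its phase continuous, so a new
                  target takes effect without waiting for a block boundary
    amps        : per-oscillator amplitudes
    """
    out = np.zeros(n)
    for i in range(n):
        freq += glide * (target_freq - freq)   # frequency roaming
        phase += 2 * np.pi * freq / sr         # phase-continuous advance
        out[i] = np.sin(phase) @ amps          # sum the (incoherent) swarm
    return out, phase, freq
```

Because each oscillator's phase is never reset, retargeting mid-stream produces a smooth glide rather than the cyclic beating of block-stationary Fourier resynthesis.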

Swarm modulation can flexibly re-map sound when a user expressively touches physical matter, creating vibration. By detecting subsonic, sonic and ultrasonic vibrations, we can give materials rich acoustic user feedback that can be adjusted, in real time, to sound like a bell, a xylophone, a dull piece of wood, or a variety of other objects. By dynamically controlling the output sound spectrum depending on the input spectrum, while maintaining a continuous, low-latency temporal response, the system imitates the physicality of touching a real object.
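For example, such re-mapping could be as simple as choosing a different set of target partials for the output spectrum depending on the desired timbre. The partial ratios below are purely illustrative placeholders (not measured values from the paper), as is the function name:

```python
# Hypothetical sketch: choosing target partials for different timbres.
# The ratios are illustrative stand-ins, not measured spectra.
TIMBRES = {
    "bell":      (1.0, 2.0, 2.4, 3.0, 4.5),   # inharmonic, bell-like
    "xylophone": (1.0, 3.0, 6.0, 9.2),        # bar-like overtones
    "wood":      (1.0, 1.6),                  # dull, few partials
}

def target_partials(detected_f0, timbre):
    """Map the fundamental detected in the touched material to the set of
    output frequencies the synthesis stage should produce."""
    return [detected_f0 * r for r in TIMBRES[timbre]]
```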

Applied in control panels and expressive control surfaces, swarm modulation can create realistic sonic feedback, for human head-up operation of controls in critical applications.

Slides from presentation
Videos of exotic musical instruments at RJ's site.
Other interesting research from our lab: Veillametrics - measuring "veillance flux" from cameras and the human eye.

Using Swarm Modulation and Physiphones to create DIY musical instruments

Hyperacoustic instruments:

With this invention, we can turn any object into a musical instrument! We can also outfit any smartphone, computer, car dashboard, or aircraft cockpit to make it sonically responsive, for hands-free operation: instead of looking at a button, you can hear the button exactly in the way you touch, press or twist it. This allows head-up operation of complex controls in critical operations. For musical applications, on the other hand, hyperacoustic instruments are a class of truly acoustic, yet computational, musical instruments. These interfaces and instruments are based on physiphones (where the initial sound production is physical rather than virtual), which have been outfitted with computation and tactuation, such that the final sound delivery is also physical. The result is a highly expressive musical instrument that gives customers a "sweet sound" and "sweet feel".


  1. Summary explanation -- how it works: see the summary above.
  2. Example commercial applications:
  3. Other hyperacoustic instruments, in which this invention could be applied: Hyperacoustic Instruments, The Xyolin, Physiphones, Making a badly tuned or unpitched instrument play in perfect harmony.

Applying Swarm Modulation to make Hyperacoustic Instruments

We have created several instances of a new kind of computer-based musical instrument in which the sound (a) originates acoustically, and (b) is conveyed to the audience acoustically, i.e. by acoustic vibrations in the physical body of the instrument.

Acoustic instruments can be made from everyday objects such as tables, chairs, shoes, hands, arms, legs, or even a fallen tree branch found in a forest (which we did recently), simply by fitting them with sound pickups.

The result is a highly expressive and sonorous instrument that can play intricate, recognizable songs and classical or jazz repertoire (including intricate Bach fugues, etc.) as well as new experimental music, owing to its microtonal character and high degree of timbral variability.

Ice was carved to form an acoustic instrument, exploring acoustic vibrations, in the opening keynote of the ACM (Association for Computing Machinery) TEI conference. We used four transmit transducers and 12 receive transducers, arranged on and in blocks of ice. Some of the transducers were frozen right into the ice blocks; others were coupled acoustically to the ice.

Other instruments using found materials include the "Xyolin", a xylophone that has infinitely many notes and covers the entire audio range of human hearing, in which sound originates as vibrations in wood and is conveyed to the audience by vibrations in wood, and the pagophone, in which sound originates as vibrations in ice and is conveyed to the audience by way of vibrations in ice.

Players strike the tree branches with mallets, and the actual sounds from the tree are picked up by listening devices attached to the tree. The natural sounds produced by tapping, scratching, or rubbing the tree are pitch-transposed to musical notes. The target pitch of the transposition depends on where the tree is struck. This is determined by using an array of listening devices with sound localization (time-of-flight), and/or a vision system (camera(s) and computer input frame grabber) that also "watches" to see where the tree is struck. Individual parts of the tree can then be labeled with chalk, e.g. A, B-flat, B, C, C-sharp, etc.
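The time-of-flight idea can be sketched in one dimension: with two pickups at the ends of a branch and a known speed of sound in the wood, the arrival-time difference gives the strike position, which then indexes a chalk-labeled note region. All function names, the example speed of sound, and the equal-region labeling are our own hypothetical assumptions, not the authors' implementation:

```python
def locate_strike(dt, length, speed):
    """1-D time-of-flight localization (illustrative sketch).

    Two pickups sit at x = 0 and x = length along the branch;
    dt = arrival time at pickup 0 minus arrival time at pickup 1.
    From x/speed - (length - x)/speed = dt:  x = (speed*dt + length)/2.
    """
    return (speed * dt + length) / 2.0

def note_for_position(x, length, labels):
    """Divide the branch into equal chalk-labeled regions (A, B-flat, ...)
    and return the label of the region containing position x."""
    idx = min(int(x / length * len(labels)), len(labels) - 1)
    return labels[idx]
```

A strike equidistant from both pickups (dt = 0) resolves to the middle of the branch; larger |dt| moves the estimate toward one end.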

Our work differs from computer-controlled musical instruments like player-pianos, solenoid-activated xylophones, and other computer-actuated musical instruments, in the sense that we are not trying to get the computer to play the instrument. In fact, quite the opposite: we're trying to get the computer to help us get "closer to nature".

Rather than computerized sound generators, we use the original sound itself, which enables more expressive and natural results. For example, an ordinary desk can be turned into a xylophone in which tapping on the desk makes sounds like a bell, whereas rubbing on it makes more sustained notes like those of a violin or cello. This sonic expressivity is due to the fact that the original sound, not a synthesized sound, is used.

Various found objects, such as a bathtub found in a dumpster, were turned into expressive musical instruments that could play any classical or jazz repertoire, intricate Bach fugues, etc., as well as newly composed music written specifically for the new instruments. A hyperacoustic instrument built into a SpaBerry hot tub was used as the main instrument for the main act in North America's largest winter festival, performing for Canada's Prime Minister and Governor General in front of an audience of more than 10,000 people. The resulting instrument was a variation of the hydraulophone known as the balnaphone.

We proposed "acoustic physiphones", which are natural user interfaces in which:

- the initial sound production (sound generation) is natural, i.e. acoustic, as with physiphones;
- the final sound delivery (sound reproduction) is by way of the natural material. Thus if the sound originated xylophonically (from vibrations in wood), the processed sound is also reproduced xylophonically (i.e. by way of vibrations in wood). Preferably the same wood that is used to generate the original sound is used to deliver the processed (e.g. pitch-transposed) sound.
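As a rough illustration of one step of such a pickup-process-playback loop (not the authors' actual implementation, which was written in C), the block below pitch-transposes a picked-up block by plain linear-interpolation resampling; a real system would use a duration-preserving shifter such as a phase vocoder, since resampling changes the block length:

```python
import numpy as np

def physiphone_block(pickup_block, ratio):
    """Pitch-transpose one picked-up block by resampling (sketch only).

    pickup_block : samples picked up from the material
    ratio        : pitch ratio (2.0 = up one octave); the output is
                   shorter or longer than the input by the same factor,
                   which is why real systems use duration-preserving
                   shifters instead of this naive approach.
    The result would then be played back through a transducer bonded
    to the same material that produced the original sound.
    """
    n = len(pickup_block)
    src = np.arange(0, n - 1, ratio)            # fractional read positions
    i = src.astype(int)
    frac = src - i
    # linear interpolation between neighbouring input samples
    return (1 - frac) * pickup_block[i] + frac * pickup_block[i + 1]
```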

The software used for the work described in this paper was written in the C programming language, on specialized embedded computers that we designed and built to be completely waterproof and environmentally sealed, so as to operate in a natural environment. We used GNU/Linux and wrote our own device drivers to extend the operating system to support the new hardware we built.


Hearing-Aid

A new hearing-aid using swarm modulation is under construction. Check back soon!

Multipurpose Swarm Processor

A new audio processor for sonifying everyday objects is under construction. This device will let you turn anything into a musical instrument! Check back soon.

Other Hyperacoustic User-Interfaces

Water-Hammer Piano

SqueaKEYS

"SqueaKEYS" is a musical instrument application that enhances touch screens on mobile devices. Sound is generated acoustically by one or more fingers rubbing on the glass surface of a liquid-crystal display screen, or the like. In this sense, the instrument is not an electronic instrument, but rather a friction idiophone (e.g. in the Hornbostel-Sachs musical instrument classification sense). Location sensing on the touch screen is used to frequency-shift the sound onto a musical scale, depending on where the screen is rubbed, struck, or touched. In other embodiments, the location of the touch is determined with sound localization by way of geophones bonded to a glass substrate, eliminating the need for a touch screen (e.g. to implement the instrument on any glass surface equipped with appropriate listening devices).
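A hedged sketch of the scale mapping: the touch position selects a note of a diatonic scale, and the ratio between that note and the detected rubbing frequency gives the frequency shift to apply. The choice of scale, 12-TET tuning, and all function names are our own assumptions, not taken from the paper:

```python
A4 = 440.0  # reference pitch, Hz

def scale_freq(key_index, scale=(0, 2, 4, 5, 7, 9, 11)):
    """Map a key index to a frequency on a C-major scale in 12-tone
    equal temperament (key 0 = C4, which sits 9 semitones below A4)."""
    octave, degree = divmod(key_index, len(scale))
    semis = scale[degree] + 12 * octave - 9
    return A4 * 2 ** (semis / 12)

def shift_ratio(touch_x, width, n_keys, detected_freq):
    """Touch x-position picks one of n_keys notes across the screen
    width; return the multiplier to frequency-shift the detected
    rubbing sound onto that note."""
    key = min(int(touch_x / width * n_keys), n_keys - 1)
    return scale_freq(key) / detected_freq
```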

As published in IEEE GEM 2015.

Please note: SqueaKEYS does NOT yet use swarm modulation, and therefore the sound quality is not as good as what swarm modulation can provide.