How can you use audio to improve the user experience of a website?

The underestimated sense

Well-chosen tones can have a positive effect on how users perceive websites and improve the user experience in the long term. In this article, you will read why sound matters, how tones can support the use of certain UI elements, and how audio can emotionally enrich storytelling.

What is User Experience (UX) and why is it important?

User Experience, UX for short, is a holistic approach to shaping the experience of a user on a website, web application, or app. It's not just about whether a user likes a site or whether they can quickly reach the desired goal, but most importantly about the emotions experienced on the user's journey. Fun, joy, and the perceived simplicity of a product are typically the goals of a successful user experience. Audio plays a crucial role in these experiences; it is effectively the soundtrack of a digital application. Sound evokes certain feelings and leaves a lasting impression that connects a user with a product or service.

What is sound synthesis?

Sound synthesis describes the artificial creation of sounds, whether harmonious tones or altered natural sounds. It is not just about the digital production of audio signals, but also about achieving very specific timbres and frequencies. A typical characteristic of sound synthesis is the digital generation of instrument sounds such as piano, trumpet, or drums without recording any tones in advance.



Improve user experience with the right tones

Sound has long been a feature embedded in websites. Audio, along with images, was among the first media formats on the Web back in the 1990s. Yet it wasn't until the introduction of the <audio> tag in HTML5 that streaming and playback of audio became easily accessible. Before that, web developers had to resort to plugins such as Flash or QuickTime, which often hurt both the UX and the loading speed of a website. Even with the <audio> tag, however, one eventually reaches a limit in how much creativity can be expressed in audio. To overcome this limitation, the Web Audio API was introduced in 2011, providing high-performance dynamic loading, creation, editing, and mixing of audio data. Tone.js, a JavaScript framework built on top of the Web Audio API, takes this foundation further and adds additional features for digital music creation. Not only does the library offer a variety of musical abstractions (e.g., labeling specific notes such as "C3" instead of the equivalent value in hertz), it also allows musical events to be scheduled in advance.
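To make the note-label abstraction concrete, here is a small, framework-free sketch of how a label such as "C3" maps to a frequency in hertz. Tone.js performs this conversion internally; this standalone function is only an illustration, using the common A4 = 440 Hz tuning and the MIDI convention where C4 is middle C:

```javascript
// Convert a note name like "C#3" to its frequency in hertz.
// Illustration only; Tone.js handles this conversion for you.
function noteToFreq(note) {
  const semitones = { C: 0, "C#": 1, D: 2, "D#": 3, E: 4, F: 5,
                      "F#": 6, G: 7, "G#": 8, A: 9, "A#": 10, B: 11 };
  const match = note.match(/^([A-G]#?)(-?\d+)$/);
  if (!match) throw new Error(`Invalid note: ${note}`);
  // MIDI note number: C4 = 60, A4 = 69
  const midi = semitones[match[1]] + (parseInt(match[2], 10) + 1) * 12;
  // Each semitone is a factor of 2^(1/12); A4 (MIDI 69) is 440 Hz
  return 440 * Math.pow(2, (midi - 69) / 12);
}
```

For example, noteToFreq("C3") yields roughly 130.81 Hz, the value you would otherwise have to pass to an oscillator by hand.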

In the following section, you can read how the principle of subtractive synthesis works within Tone.js. Continue on to find possible application areas in which Tone.js can be used in web development so that users receive real added value.

What is subtractive synthesis?

To understand the principle of subtractive synthesis, we first need to look at the basics of audio generation and processing. To create a sound, you need a source that produces periodic oscillations in the frequency range audible to humans. These oscillations can be generated electrically or by digital software. This source is called an oscillator. To transform this sound into playable tones, an input signal is needed that gives the oscillator its pitch in hertz. To ensure that the sound is not constant, but only audible in response to an event, such as a click on a UI element, we need a so-called amplifier. If the input signal is passed from the oscillator through an amplifier, a sound is only heard as long as the signal is in an active state.

For example, if a user drags a UI element from A to B, a sound can play from the moment the "onmousedown" event fires until "onmouseup" is triggered.
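This pattern can be sketched with Tone.js in a few lines. The element id "slider" is a hypothetical example, and the sketch assumes Tone.js is loaded in the page (no test is shown here since the code needs a browser audio context):

```javascript
// Sustain a tone while the mouse button is held on a UI element.
// Assumes Tone.js is loaded; "slider" is a hypothetical element id.
const synth = new Tone.Synth().toDestination();
const element = document.getElementById("slider");

element.addEventListener("mousedown", async () => {
  await Tone.start();          // browsers require a user gesture to start audio
  synth.triggerAttack("C3");   // the tone starts with the mousedown event
});

element.addEventListener("mouseup", () => {
  synth.triggerRelease();      // the tone stops when mouseup is triggered
});
```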

Audio Example Mouse-Event

In the above audio sample, the abrupt start and stop of the sound are very clear. To give the tone a progression after the event has been triggered, we need an envelope (German: Hüllkurve). This is a modulation, a means of shaping sound, which determines the shape of the triggered sound over time.

UX Sound Synthesis

In our example, we regulate the volume with the envelope. When the user clicks on the UI element, the volume of the sound now rises over a defined time and then flattens out again when the click is released. The modulation now makes for a gently, and quite literally, rounded sound experience.

Audio Example With Varying Volume
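The effect of such an envelope can also be made concrete in code. The following framework-free sketch computes the amplitude of a simple linear ADSR (attack, decay, sustain, release) envelope at a given time; it is an illustration of the concept, not Tone.js internals:

```javascript
// Amplitude (0..1) of a linear ADSR envelope at time t seconds after
// the note starts; releaseAt is when the click/key was released.
// Simplified: assumes releaseAt falls after the attack and decay phases.
function adsrGain(t, { attack, decay, sustain, release }, releaseAt) {
  if (t < 0) return 0;
  if (t < attack) return t / attack;               // ramp up to full volume
  if (t < attack + decay) {
    const d = (t - attack) / decay;                // fall toward sustain level
    return 1 - d * (1 - sustain);
  }
  if (t < releaseAt) return sustain;               // hold while event is active
  const r = (t - releaseAt) / release;             // fade out after release
  return r >= 1 ? 0 : sustain * (1 - r);
}
```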

In order to alter the timbre of the waveform created by the oscillator, the last part of our synthesizer is still missing: the filter. A filter makes it possible to reduce certain frequency ranges in order to shape the characteristics of the sound to fit the respective application. In the following example, you can hear the use of a lowpass filter, which attenuates all frequencies above 1000 Hz.

Audio Example with Lowpass Filter

Creating the synthesizer described above using Tone.js looks like this:

// An oscillator instance with a square wave at pitch C#3
let osc = new Tone.OmniOscillator('C#3', 'square4');
// A lowpass filter instance with a 48 dB rolloff at 1000 Hz
let filter = new Tone.Filter(1000, 'lowpass', -48);
// An amplitude envelope instance with four modulation values for attack, decay, sustain & release
let amplitudeEnvelope = new Tone.AmplitudeEnvelope(0.1, 0, 5, 0.5);
// The oscillator is connected to the filter
osc.connect(filter);
// The filter is connected to the amplifier, which also responds to the envelope modulation
filter.connect(amplitudeEnvelope);
// The envelope (a gain node) is routed to the speakers
amplitudeEnvelope.toDestination();
// The oscillator is turned on, then the envelope is triggered for half a second
osc.start();
amplitudeEnvelope.triggerAttackRelease(0.5);

The oscillator is connected to the envelope via a filter. A separate amplifier isn't necessary here, since the envelope is already a gain node that regulates the volume. With the last two lines of code, the oscillator is turned on and then triggered with the envelope, so that, in this example, a C#3 note with the given square wave sounds.

The Tone.js library already ships with some standard pre-made synthesizers for sound generation. Alternatively, existing audio files can be loaded and processed with Tone.js.
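For instance, Tone.Synth is one of those pre-made synthesizers (an oscillator paired with an amplitude envelope), and Tone.Player plays back an existing audio file. A brief sketch, assuming Tone.js is loaded in the browser; "click.mp3" is a placeholder file name:

```javascript
// Pre-made synthesizer: oscillator + amplitude envelope in one object
const synth = new Tone.Synth().toDestination();
synth.triggerAttackRelease("C#3", "8n"); // play C#3 for an eighth note

// Playing back an existing audio file ("click.mp3" is a placeholder name)
const player = new Tone.Player("click.mp3").toDestination();
player.autostart = true; // play as soon as the file has loaded
```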

Practical examples of audio use in user experience

Integrating sound effects offers an enormous advantage in enhancing the user experience, as it provides the user with another dimension of UI interaction. For example, if a user receives not only a confirmation text when a purchase is successfully completed but also a C-major chord, triggered for 0.5 seconds and combined with a slight reverb effect, the positive experience is enhanced not only visually but also auditorily. Such small auditory elements can promote emotional attachment to a website and/or service.
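A confirmation sound like this could be sketched with Tone.js as follows; the function name and reverb settings are illustrative assumptions, and the chord and duration mirror the example above:

```javascript
// A C-major confirmation chord with a slight reverb effect.
// Assumes Tone.js is loaded; call this from the checkout success handler.
const reverb = new Tone.Reverb({ decay: 1.5, wet: 0.2 }).toDestination();
const synth = new Tone.PolySynth(Tone.Synth).connect(reverb);

function playPurchaseConfirmation() {
  synth.triggerAttackRelease(["C4", "E4", "G4"], 0.5); // C major, 0.5 s
}
```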

Auditory storytelling in corporate design and marketing, for products or website navigation, not only improves the experience of a website visit but also lets users subconsciously identify with the image of the service as a whole.

This article aims to remind you not to neglect the sense of hearing, which is often forgotten when planning and designing websites. If the audio and visual components of a website are coordinated with each other, the user experience is noticeably enhanced.

Header image courtesy of Annie Spratt at Unsplash