
Learning by interaction: Implementing digital biofeedback interfaces for overtone singing

Brunetti, Riccardo
2004-01-01

Abstract

This work presents a live demonstration of an interactive digital interface for learning and artistic performance, applying the idea of digital biofeedback to a traditional Mongolian singing technique. The basic aim was to create visual and acoustic biofeedback that reacts to specific parameters of overtone singing, stimulating the singer's performance. The feedback provided to users should also help them gain control of complex motor abilities. The well-established Max/MSP+Jitter programming environment (by Cycling '74) was used to develop five patches following a pedagogical path inspired by the ethnomusicologist Tran Quang Hai. By means of an online FFT analysis, the resulting frequency spectrum is segmented in order to detect the most significant components of the vocal signal. The following stages are included in all the patches: 1. normalization; 2. FFT; 3. segmentation; 4. thresholding. Depending on the specific didactic step, further modules are included to extract the salient features under analysis. These parameters are sent to the visual environment (Jitter) for the final processing that produces the visual biofeedback. Applying Max/MSP+Jitter to overtone singing gave excellent results thanks to the integrated audio-visual capabilities of this software environment. Compared with classical didactic procedures, presenting the five pedagogical steps to inexperienced subjects allowed rapid learning of the techniques on which this kind of singing is based. Every subject was able to find their own specific singing modalities simply by observing a suitable graphical representation of the vocal sound they produced. The digital biofeedback applied here to overtone singing can be adapted to develop interactive environments for artistic, pedagogical, and rehabilitative purposes. The possibility of a cross-modal relationship between the user's input (e.g. acoustic) and the machine's output (e.g. visual) can be crucial in applications for people with sensory deficits (e.g. speech learning for the deaf).
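The abstract does not describe the internals of the Max/MSP patches. Purely as an illustration, the sketch below mirrors the four stages listed above (normalization, FFT, segmentation, threshold) and the extraction of overtone parameters that would be forwarded to the visual layer, using Python/NumPy in place of the original Max/MSP signal objects. The frame length, vocal-range limits, and threshold value are hypothetical choices, not the settings of the original patches.

```python
# Minimal sketch of the analysis chain: normalization -> FFT ->
# segmentation -> threshold. All constants are illustrative assumptions.
import numpy as np

SR = 44100          # sampling rate (Hz), assumed
FRAME = 4096        # analysis window length, assumed
THRESHOLD = 0.05    # relative magnitude threshold, assumed

def analyze_frame(frame, sr=SR, threshold=THRESHOLD):
    """Return (fundamental_hz, [(overtone_hz, relative_level), ...])."""
    # 1. normalization: scale the frame to unit peak amplitude
    peak = np.max(np.abs(frame))
    if peak == 0:
        return 0.0, []
    frame = frame / peak

    # 2. FFT: magnitude spectrum of the windowed frame
    windowed = frame * np.hanning(len(frame))
    mag = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)

    # crude fundamental estimate: strongest bin in a plausible vocal range
    voice = (freqs > 60) & (freqs < 500)
    f0 = freqs[voice][np.argmax(mag[voice])]

    # 3. segmentation: split the spectrum into bands centred on each
    #    harmonic of the estimated fundamental
    overtones = []
    ref = mag.max()
    for n in range(2, 16):
        lo, hi = (n - 0.5) * f0, (n + 0.5) * f0
        band = (freqs >= lo) & (freqs < hi)
        if not band.any():
            break
        level = mag[band].max() / ref
        # 4. threshold: keep only components strong enough to matter
        if level >= threshold:
            overtones.append((freqs[band][np.argmax(mag[band])], level))
    return f0, overtones

# Example: a synthetic tone with a reinforced 8th harmonic,
# loosely imitating a drone with an emphasized overtone
t = np.arange(FRAME) / SR
signal = np.sin(2 * np.pi * 150 * t) + 0.4 * np.sin(2 * np.pi * 1200 * t)
print(analyze_frame(signal))
```

In such a scheme, the returned fundamental and overtone parameters are the kind of data that would be mapped onto graphical attributes (position, size, colour) in the Jitter stage to close the biofeedback loop.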

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14092/1133