Papers & Talks

2020

  • //PAPER//
    L. Feugère, G. Gibson, N. C. Manoukis, O. Roux (2020)
    Mosquito ‘mate-seeking’ at long-range: are male swarms loud enough to be located by females?
    bioRxiv 2020.09.01.277202; doi: https://doi.org/10.1101/2020.09.01.277202
  • //TALK//
    Gibson, G., Feugère, L., Manoukis, N. and Roux, O. (2020)
    Do female mosquitoes detect and locate male swarms by attraction to the sound or smell (pheromones) of male aggregations?
    Fourth Research coordination meeting on “Mosquito Handling, Transport, Release and Male Trapping Methods”, 14-18 Sep 2020, Vienna, Austria (videoconference).
  • //TALK//
    Feugère, L., Gibson, G., Manoukis, N. and Roux, O. (2020)
    Les essaims de moustiques ont-ils une puissance sonore suffisante pour être localisables à distance par les femelles ?
    Behaviour research-group meeting (IRD), 18 Jun 2020, Montpellier, France.

2019

  • //PAPER//
    C. d’Alessandro, S. Delalez, B. Doval, L. Feugère, O. Perrotin
    Les instruments chanteurs
    Acoustique & Techniques, n°88, 36-43, 2019.

Abstract

Abstract: Singing instruments are the result of the encounter between voice synthesis and new interfaces for human-computer interaction. The voice is not a musical instrument, since it does not involve an external object driven by the limbs or the breath. Digital synthesis, in contrast, allows for the first time to separate the subject from their voice, by building singing instruments manipulated by the hands, the feet, or any human-computer interface. However, the possibilities for controlling singing instruments are still limited to certain aspects, as the transposition of internal singing gestures realised by the vocal apparatus into external gestures is not trivial. Some gestures are analogous, while others are transposed into perceptual spaces. Related work carried out on three singing instruments is introduced: the control of intonation with a stylus on a graphic tablet and writing gestures; the control of vowels and voice quality on a surface; the bi-manual control of consonantal articulation; the rhythmic control of syllables. The underlying voice production models use either the simulation of a source-filter model or the modification of pre-recorded and labelled samples. The control of singing instruments is multimodal, involving hearing, sight, touch and kinaesthesia. To some extent, this sensorimotor combination allows the singing instrument to be more accurate and precise than the natural voice, sight favouring melodic aspects while hearing relates more to rhythmic aspects. As a mirror of the voice, the singing instrument allows all kinds of speculation: indubitably musical with Chorus Digitalis, a choir of synthesised voices, but also for the analysis of vocal practices, and for education or re-education, by strengthening the learning of vocal gestures through visual traces and manual and corporal gestures. Finally, the symbolic status of the voice is also affected by the possibility of producing a vocal sound from outside the body: augmented body, staging of vocal expression, double of the voice, playing someone else’s voice.
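The abstract contrasts two synthesis back-ends, one of them a simulated source-filter model. For illustration only, and not the authors' actual engine, here is a minimal source-filter sketch in Python: a crude glottal-like pulse train filtered by second-order resonators at assumed formant frequencies for an /a/-like vowel.

```python
import numpy as np
from scipy.signal import lfilter

FS = 16000          # sample rate (Hz)
F0 = 220.0          # fundamental frequency (Hz)
DUR = 1.0           # duration (s)
# Assumed formant frequencies/bandwidths for an /a/-like vowel (illustrative only)
FORMANTS = [(700, 90), (1220, 110), (2600, 160)]

def glottal_pulse_train(f0, dur, fs):
    """Crude glottal source: an impulse train softened by a one-pole low-pass."""
    src = np.zeros(int(dur * fs))
    src[::int(fs / f0)] = 1.0
    return lfilter([1.0], [1.0, -0.98], src)

def formant_filter(x, freq, bw, fs):
    """Second-order resonator implementing one formant."""
    r = np.exp(-np.pi * bw / fs)
    theta = 2 * np.pi * freq / fs
    return lfilter([1.0 - r], [1.0, -2 * r * np.cos(theta), r * r], x)

vowel = glottal_pulse_train(F0, DUR, FS)
for freq, bw in FORMANTS:
    vowel = formant_filter(vowel, freq, bw, FS)
vowel /= np.max(np.abs(vowel))  # normalise

# `vowel` now holds one second of a static /a/-like sound at 220 Hz;
# write it out with scipy.io.wavfile.write("a.wav", FS, vowel) to listen.
```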



2018

  • //PAPER//
    C. d’Alessandro, L. Feugère, O. Perrotin, S. Delalez, B. Doval
    Le contrôle des instruments chanteurs (PDF)
    14ème Congrès Français d’Acoustique (CFA ’18), Le Havre, 23-27 April 2018.
  • //REPORT//
    Lionel Feugère (2018)
    Acoustical description of field An. coluzzii swarms in Bama, Burkina Faso
    Institut de Recherche pour le Développement, February 2018.

2017

  • //PAPER//
    E. Amy de la Bretèque, B. Doval, L. Feugère, L. Moreau-Gaudry
    Liminal Utterances and Shapes of Sadness: Local and Acoustic Perspectives on Vocal Production among the Yezidis of Armenia (PDF)
    Yearbook for Traditional Music, vol. 49, 2017, pp. 128–148

2016

  • //TALK//
    L. Feugère, C. d’Alessandro, B. Doval, O. Perrotin (2016)
    Cantor Digitalis: Interactive Voice Factory and Digital Instrument of Sung Vowels/Semi-Vowels
    Ecole d’été Sciences et Voix: expressions, usages et prises en charge de l’instrument vocal humain, Porquerolles, September 24-28, 2016.
  • //TALK//
    J. Simonnot, T. Fillon, G. Pellerin, J. Pinquier, L. Feugère, and E. Lechaux (2016)
    The web platform Telemeta: New tools and perspectives for use of ethnomusicological sources
    ICTM Studygroup Historical Sources Paris 2016, March 9-13, 2016, Paris.
  • //REPORT//
    L. Feugère, C. d’Alessandro (2015), Cantor Digitalis: Evaluation of Performative Singing Synthesis, LIMSI-CNRS, internal report, 2015


Abstract: The aim of this work is to assess the sound quality of Cantor Digitalis (CD), a performative singing synthesizer. CD is composed of two main parts: a chironomic control interface and a parametric voice synthesizer. The control interface is based on a pen/touch graphic tablet equipped with a template representing vowel and melodic spaces. The synthesis engine is based on a parametric synthesizer. The two songs provided by the Singing Synthesis Challenge “Fill-in the gap” serve as a basis for the experiments. In the first experiment, the songs are sung on the vowel /a/, comparing five conditions: the Yamaha CP4 keyboard with its “voice” sound, CD played on the same keyboard, CD played with the tablet, CD played using MIDI, and natural voice. The second experiment combines vowels and melody, with four conditions: CD-keyboard, CD-tablet, CD-MIDI, and natural voice. Mean Opinion Scores are obtained using an Absolute Category Rating test. These experiments indicate that: 1/ the tablet interface is better than the keyboard interface for synthetic vocal music; 2/ for vowels, the overall quality of CD is comparable to or slightly better than the best stage-synthesizer choir sound, while of course being much more flexible.
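The evaluation relies on Mean Opinion Scores gathered with an Absolute Category Rating test. As a minimal sketch of how such ratings can be aggregated per condition (assumed 1-5 scale and placeholder numbers, not the report's data):

```python
import numpy as np

# Hypothetical ACR ratings (1 = bad ... 5 = excellent), one list per condition.
# These numbers are placeholders, not the report's data.
ratings = {
    "CD-tablet":   [4, 4, 5, 3, 4, 4, 5, 3],
    "CD-keyboard": [3, 3, 4, 2, 3, 4, 3, 3],
    "CD-MIDI":     [3, 4, 3, 3, 4, 3, 3, 4],
    "natural":     [5, 5, 4, 5, 5, 4, 5, 5],
}

def mos(scores):
    """Mean Opinion Score with a normal-approximation 95% confidence interval."""
    x = np.asarray(scores, dtype=float)
    return x.mean(), 1.96 * x.std(ddof=1) / np.sqrt(len(x))

for condition, scores in ratings.items():
    m, ci = mos(scores)
    print(f"{condition:12s} MOS = {m:.2f} ± {ci:.2f}")
```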


  • //REPORT//
    Lionel Feugère (2016)
    Survey of rules between perceptive parameters and low-level synthesis parameters of LF glottal flow model, and application to Cantor Digitalis
    LIMSI-CNRS, Nov 2016

2015

  • //PAPER//
    L. Feugère, M.-F. Mifune, B. Doval (2015)
    The sound continuum between speech and song
    In L. Feugère, S. Fürniss, J. Lambert, E. Lechaux, M.-F. Mifune, G. Pellerin, J. Pinquier, J. Simonnot, V. Vapnarsky, A. Monod Becquelin, M. Chausson and C. Becquey. Round table on the Diadems program. 23rd International Council for Traditional Music (ICTM) Colloquium “Between Speech and Song: Liminal Utterances”, May 20-22, 2015, Nanterre.
  • //REPORT//
    L. Feugère, Analyse de productions vocales sur des documents sonores ethnomusicologiques : caractérisation de catégories vocales intermédiaires basée sur la distribution des notes selon leur durée et développement d’outils
    LAM-d’Alembert, end-of-contract report, DIADEMS project, July 2015.

2014

  • //PAPER//
    C. d’Alessandro, L. Feugère, S. Le Beux, O. Perrotin, A. Rilliard, Drawing melodies: Evaluation of Chironomic Singing Synthesis, J. Acoust. Soc. Am., 135(6), 3601-3612, 2014.
    (PDF Copyright (2014) Acoustical Society of America. This article may be downloaded for personal use only. Any other use requires prior permission of the author and the Acoustical Society of America.)
  • //PAPER//
    L. Feugère, C. d’Alessandro, Rule-based performative synthesis of sung syllables, Proceedings of the International Conference on New Interfaces for Musical Expression, London, United Kingdom, June 30 – July 03, 2014, 86-87. ISBN: 978-1-906897-29-1
  • //PAPER//
    L. Feugère, C. d’Alessandro (2014), Gesture analysis of voice synthesis chironomy, LIMSI-CNRS, internal report


    Abstract: Chorus Digitalis is a three-year-old musical ensemble of synthesized voices, each controlled by a musician using one or two graphic tablets. This paper studies the use of these instruments and how the musicians have developed their own playing to perform the ensemble’s repertoire. First, after describing the gestures of these instruments within the DMI typology, the analysis focuses on the musicians’ strategies for performing particular musical tasks such as portamento, vibrato or note attacks. Strategies specific to the control of pitch on a surface are discussed. We then qualitatively characterize the differences in personal styles using the gesture recordings of a baroque chorale performance and a 3D representation.

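The gesture analysis above looks at strategies such as vibrato and portamento on the pitch axis of the tablet. As a hedged illustration (not the report's actual pipeline), the sketch below estimates vibrato rate and extent from a pitch-control trajectory, here a synthetic one standing in for a tablet recording.

```python
import numpy as np

FS_CTRL = 200.0  # assumed control-signal sampling rate of the tablet (Hz)

# Synthetic pitch trajectory in semitones: a held note with 5.5 Hz vibrato,
# ±0.4 semitone deep (placeholder for a real tablet recording).
t = np.arange(0, 2.0, 1.0 / FS_CTRL)
pitch = 60.0 + 0.4 * np.sin(2 * np.pi * 5.5 * t)

def vibrato_rate_and_extent(pitch, fs):
    """Estimate vibrato rate (Hz) and extent (semitones) from a pitch track."""
    osc = pitch - np.mean(pitch)          # remove the held-note component
    spectrum = np.abs(np.fft.rfft(osc))
    freqs = np.fft.rfftfreq(len(osc), 1.0 / fs)
    peak = np.argmax(spectrum[1:]) + 1    # skip the DC bin
    return freqs[peak], (osc.max() - osc.min()) / 2.0

rate, extent = vibrato_rate_and_extent(pitch, FS_CTRL)
print(f"vibrato rate ≈ {rate:.1f} Hz, extent ≈ ±{extent:.2f} semitones")
```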

2013


Abstract: This thesis deals with the production and control modeling of a synthetic singing voice in the context of building a digital musical instrument. Two instruments are presented: Cantor Digitalis, focusing on sung-vowel control and voice individualization, and Digitartic, which aims at controlling the articulation of vowel-consonant-vowel syllables. Using an augmented graphic tablet, these instruments allow interactive musical applications with fine temporal control of voice production parameters. The relevance of these musical instruments was established through several public performances of the Chorus Digitalis ensemble. The gestures of the musicians were studied along with the musical tasks required for playing the selected repertoire, which was composed of traditional world music (baroque chorale, North Indian khayal singing) as well as more contemporary pieces. In particular, an experiment was conducted to analyze the ability to control the fundamental frequency of Cantor Digitalis. Participants were asked to imitate intervals and melodies at three tempos using three different modalities (one’s own voice, the tablet, and the tablet with audio feedback). Results showed that precision was better with the tablet modalities than with one’s own voice, while no significant difference was found between the tablet with and without audio feedback. Both instruments have been unified into one Max/MSP application, which provides an audio-visual and interactive educational tool for understanding voice production.
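The precision experiment described above compares how accurately intervals and melodies are reproduced across modalities. One standard way to quantify such precision, sketched below under the assumption that target and performed fundamental frequencies are available, is the error in cents, 1200·log2(f_performed / f_target); the numbers here are placeholders, not the thesis data.

```python
import numpy as np

def cents_error(f_performed, f_target):
    """Signed pitch error in cents between performed and target frequencies."""
    return 1200.0 * np.log2(np.asarray(f_performed) / np.asarray(f_target))

# Placeholder data: target notes of an ascending melody and the frequencies
# actually produced with one modality.
target    = [220.0, 246.94, 277.18, 293.66]
performed = [221.5, 246.20, 278.90, 292.80]

errors = cents_error(performed, target)
print("per-note error (cents):", np.round(errors, 1))
print(f"mean absolute error: {np.mean(np.abs(errors)):.1f} cents")
```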



Abstract: Digitartic, a system for bi-manual gestural control of vowel-consonant-vowel performative singing synthesis, is presented. This system is an extension of a real-time, gesture-controlled vowel singing instrument developed in the Max/MSP language. In addition to pitch, vowel and voice strength controls, Digitartic is designed for gestural control of articulation parameters, including various places and manners of articulation. The phases of articulation between two phonemes are continuously controlled and can be driven in real time, without noticeable delay, at any stage of synthetic phoneme production. Thus, as in natural singing, very accurate rhythmic patterns can be produced and adapted while playing with other musicians. The instrument features two (augmented) pen tablets for controlling voice production: one deals with the glottal source and vowels, the other with consonant/vowel articulation. The results show very natural consonant and vowel synthesis. Virtual choral practice confirms the effectiveness of Digitartic as an expressive musical instrument.
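Digitartic continuously drives the articulation phase between two phonemes from a hand gesture. A minimal sketch of that idea (an assumed, tiny parameter set, not the instrument's actual rules): interpolate between vowel and consonant synthesis targets as a function of an articulation coordinate in [0, 1].

```python
import numpy as np

# Illustrative synthesis targets (formant frequencies in Hz); a real system uses
# many more parameters (bandwidths, amplitudes, source settings).
VOWEL_A   = np.array([700.0, 1220.0, 2600.0])   # /a/-like targets (assumed)
CONSONANT = np.array([250.0, 1800.0, 2700.0])   # constriction-like targets (assumed)

def articulate(alpha):
    """Blend parameters for an articulation coordinate alpha in [0, 1].

    alpha = 0 -> fully vocalic, alpha = 1 -> fully consonantal constriction.
    Because alpha is updated continuously by the gesture, the articulation can
    be frozen, slowed down or reversed at any stage, as in the abstract.
    """
    alpha = np.clip(alpha, 0.0, 1.0)
    return (1.0 - alpha) * VOWEL_A + alpha * CONSONANT

# A vowel-consonant-vowel trajectory driven by a hypothetical pen gesture:
for alpha in [0.0, 0.5, 1.0, 0.5, 0.0]:
    print(alpha, np.round(articulate(alpha), 0))
```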


2012

  • //PAPER//
    S. De Laubier, G. Bertrand, H. Genevois, V. Goudard, B. Doval, L. Feugère, S. Le Beux, C. d’Alessandro, OrJo et la Méta-Mallette 4.0, Journées d’Informatique Musicale (JIM 2012), Mons, Belgium, 9-11 May 2012, 227-232. Unreferenced proceedings available online.
  • //REPORT//
    L. Feugère, S. Le Beux, C. d’Alessandro, OrJo : développement d’instruments virtuels sur la synthèse vocale (LIMSI-CNRS), end-of-project report, 2012.

2011

  • //PAPER//
    L. Feugère, S. Le Beux, C. d’Alessandro, Chorus digitalis: polyphonic gestural singing, 1st International Workshop on Performative Speech and Singing Synthesis (P3S 2011), Vancouver (Canada), March 14-15, 2011, 4 p. Unreferenced printed proceedings.


2009

  • //THESIS//
    L. Feugère (2009), Automatic segmentation and real-time voice recognition: the case of beatbox, Master 2 thesis supervised by Bruno Verbrugghe and Pedro Dias Miguel Cardoso, Voxler, Paris


Abstract: The aim of this study is to build a system that can be used to control an instrument with the voice. The reasons for choosing the voiced onomatopoeias —‘Poo’ /pu/, ‘Tsi’ /tsi/, and ‘Ka’ /ka/— are discussed. A corpus of these three sounds was recorded in musical contexts from twelve speakers, totalling 2850 instances. Onset detection based on energy variation in the Mel bands was implemented in Matlab, adapted from an Ircam Max/MSP patch. The onset detection and the features (LPC, ZCR, High Frequency Content and spectral rolloff for the reduced set) are computed on the same frame. Lastly, we present our choice of classifiers for the three vocal sounds. With a 32 ms analysis window starting at the onset, a multi-layer perceptron with a 10-fold cross-validation protocol correctly classifies 97% of the well-detected sounds. We also report good results with a smaller analysis window, so that the system can be used in a real-time context; a real-time implementation in C++ is in progress.
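The thesis pipeline couples onset detection with a short classification window right after each onset. The sketch below reproduces the overall shape of such a pipeline with off-the-shelf tools (librosa onsets, a few spectral features, scikit-learn's multi-layer perceptron); the thesis' actual detector and feature set differ, and the corpus paths are placeholders.

```python
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

SR = 16000
WIN = int(0.032 * SR)   # 32 ms analysis window after each onset, as in the thesis

def onset_windows(path):
    """Detect onsets and return one 32 ms waveform excerpt per onset."""
    y, sr = librosa.load(path, sr=SR)
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples")
    return [y[o:o + WIN] for o in onsets if o + WIN <= len(y)]

def features(frame):
    """Small feature vector per excerpt: LPC, ZCR, spectral rolloff, HFC."""
    lpc = librosa.lpc(frame, order=8)[1:]
    zcr = librosa.feature.zero_crossing_rate(frame, frame_length=WIN, hop_length=WIN).mean()
    rolloff = librosa.feature.spectral_rolloff(y=frame, sr=SR, n_fft=WIN).mean()
    spec = np.abs(np.fft.rfft(frame))
    hfc = np.sum(np.arange(len(spec)) * spec ** 2)   # high-frequency content
    return np.concatenate([lpc, [zcr, rolloff, hfc]])

# Hypothetical labelled corpus: onomatopoeia -> list of WAV paths (fill in real files).
corpus = {"poo": [], "tsi": [], "ka": []}

X, y = [], []
for label, paths in corpus.items():
    for path in paths:
        for frame in onset_windows(path):
            X.append(features(frame))
            y.append(label)

if X:
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
    scores = cross_val_score(clf, np.array(X), np.array(y), cv=10)
    print(f"10-fold cross-validation accuracy: {scores.mean():.2%}")
```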


2008

  • //THESIS//
    L. Feugère (2008), Binaural acoustic cues to perceive space dynamically, Magistère thesis, Université Paris-Sud, supervised by Lorenzo Picinali and Brian F.G. Katz, LIMSI-CNRS, Orsay, France


Abstract: How can a blind person perceive the shape of the space in which they are walking? We tried to partially answer this question by examining binaural acoustic cues available to a walker moving along a corridor of changing shape. In this study, related to the Wayfinding project, we analysed the similarities between parameters computed from the acoustic signals measured at the entrance of the two ear canals, the verbal description of the perceived space, and the physical shape of the corridor. These binaural parameters were compared with those obtained from an acoustic model of the corridor, which was tested by blind people through binaural rendering in a virtual navigation task.
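The abstract does not list the exact binaural parameters used, so the sketch below picks one plausible example purely for illustration: the interaural level difference over time, computed from a two-channel recording made at the ear canal entrances (hypothetical file name).

```python
import numpy as np
from scipy.io import wavfile

FRAME = 2048  # analysis frame (samples)

def interaural_level_difference(path):
    """Frame-by-frame level difference (dB) between left and right ear signals.

    Expects a two-channel WAV file recorded at the entrances of the ear canals;
    any binaural recording works.
    """
    sr, x = wavfile.read(path)
    left, right = x[:, 0].astype(float), x[:, 1].astype(float)
    ild = []
    for start in range(0, len(left) - FRAME, FRAME):
        l_rms = np.sqrt(np.mean(left[start:start + FRAME] ** 2)) + 1e-12
        r_rms = np.sqrt(np.mean(right[start:start + FRAME] ** 2)) + 1e-12
        ild.append(20 * np.log10(l_rms / r_rms))
    return sr, np.array(ild)

# sr, ild = interaural_level_difference("corridor_walk_binaural.wav")  # placeholder path
# Plotting ild against time shows how the level balance evolves as the
# corridor's shape changes along the walk.
```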


2007

  • //THESIS//
    L. Feugère (2007), Automatic labelling of tabla sounds, Master 1 thesis supervised by Perfecto Herrera, UPF, Music Technology Group, Barcelona


Abstract: This study deals with the automatic recognition of tabla sounds, an Indian percussion instrument played with single or double strokes. Alongside the choice of the database, its labelling, and the choice of descriptors, a script was written to compute the descriptor value matrix for the whole database. These values were trained and tested with several classifiers from the WEKA software. The overall quality of the database and descriptors is demonstrated by a score of 94% correctly classified instances. Taking a human hearing model into account (Meddis’ hair-cell model) and considering tabla music as close to language appear to be relevant choices: LPC coefficients, which are speech articulation descriptors, rank among the best descriptors of the classification. While most of the classification was done in a supervised way, the last part of the work gives some clues about a semi-unsupervised approach using a Cobweb algorithm recently developed by an MTG PhD student.
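The thesis evaluates several WEKA classifiers on a descriptor matrix extracted from labelled strokes. A rough Python counterpart of that workflow (scikit-learn instead of WEKA, an assumed MFCC+LPC descriptor set rather than the thesis' exact one, placeholder corpus paths) could look like this:

```python
import numpy as np
import librosa
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def stroke_descriptors(path, sr=22050):
    """One descriptor vector per labelled stroke recording (MFCC means + LPC)."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    lpc = librosa.lpc(y, order=8)[1:]
    return np.concatenate([mfcc, lpc])

# Hypothetical labelled corpus: stroke name -> list of WAV paths (fill in real files).
corpus = {"dha": [], "na": [], "tin": []}

X = [stroke_descriptors(p) for paths in corpus.values() for p in paths]
y = [name for name, paths in corpus.items() for _ in paths]

classifiers = {
    "SVM":           SVC(kernel="rbf"),
    "k-NN":          KNeighborsClassifier(n_neighbors=5),
    "decision tree": DecisionTreeClassifier(random_state=0),
}

if X:
    for name, clf in classifiers.items():
        scores = cross_val_score(clf, np.array(X), np.array(y), cv=10)
        print(f"{name:13s} 10-fold accuracy: {scores.mean():.1%}")
```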


  • //THESIS//
    L. Feugère (2006), Experimental study of nonlinear vibrations of drum cymbals, Licence 3 thesis supervised by Cyril Touzé, ENSTA ParisTech, Unité de Mécanique, Palaiseau, France


Abstract: The vibrations of nonlinear percussion instruments (a family that includes cymbals) exhibit complex behaviours and features characteristic of nonlinear systems under normal playing conditions, notably a continuous, broadband spectrum at large vibration amplitudes and sensitivity to initial conditions. The behaviour of a cymbal under normal playing conditions is the superposition of the vibrations produced by an infinity of single-frequency forced excitations. This experimental study, carried out on five performers’ cymbals, aims to characterise the transition to chaos of the cymbal vibration under a single-frequency forced excitation of increasing amplitude, and to briefly compare the results with the theoretical vibration model of “ideal” thin spherical shells. To this end, I first worked in the linear regime, i.e. at low vibration amplitude, for the modal analysis of each cymbal, then moved from low to high excitation amplitude to bring out the nonlinear effects of the cymbal vibration, which appear as two successive phases following the periodic phase of vibration: one quasi-periodic, the other chaotic. Without going deeply into the theory, I focused on describing the experimental set-up and protocol, and the problems associated with each type of experiment.
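The experiment drives each cymbal with a sinusoid of increasing amplitude and watches the response move from periodic to quasi-periodic to chaotic. A hedged sketch of how such recordings can be summarised from their spectra alone (assumed placeholder file names; no modal analysis):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

def regime_summary(path):
    """Summarise how broadband the response spectrum of one recording is.

    A periodic response concentrates energy at the driving frequency and its
    harmonics; the quasi-periodic regime adds combination tones; the chaotic
    regime shows a continuous, broadband spectrum.
    """
    sr, x = wavfile.read(path)
    freqs, psd = welch(x.astype(float), fs=sr, nperseg=8192)
    peak_hz = freqs[np.argmax(psd)]
    broadband = np.mean(psd > 1e-2 * psd.max())   # fraction of bins within 20 dB of the peak
    return peak_hz, broadband

# Hypothetical recordings of one cymbal driven at a fixed frequency with
# increasing excitation amplitude (placeholder file names):
# for path in ["cymbal_amp_low.wav", "cymbal_amp_mid.wav", "cymbal_amp_high.wav"]:
#     peak_hz, broadband = regime_summary(path)
#     print(f"{path}: peak at {peak_hz:.1f} Hz, broadband fraction {broadband:.2%}")
```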
