Audiovisual Speech Processing
Author: Gérard Bailly
Publisher: Cambridge University Press
Total Pages: 507
Release: 2012-04-26
Genre: Computers
ISBN: 1107006821

This book presents a complete overview of all aspects of audiovisual speech including perception, production, brain processing and technology.

Audiovisual Speech Recognition: Correspondence between Brain and Behavior
Author: Nicholas Altieri
Publisher: Frontiers E-books
Total Pages: 102
Release: 2014-07-09
Genre: Brain
ISBN: 2889192512

Perceptual processes mediating recognition, including the recognition of objects and spoken words, are inherently multisensory. This is true in spite of the fact that sensory inputs are segregated in the early stages of neuro-sensory encoding. In face-to-face communication, for example, auditory information is processed in the cochlea, encoded in the auditory sensory nerve, and processed in lower cortical areas. Eventually, these “sounds” are processed in higher cortical pathways, such as the auditory cortex, where they are perceived as speech. Likewise, visual information obtained from observing a talker’s articulators is encoded in lower visual pathways. Subsequently, this information undergoes processing in the visual cortex prior to the extraction of articulatory gestures in higher cortical areas associated with speech and language. As language perception unfolds, information garnered from the visual articulators interacts with language processing in multiple brain regions, via visual projections to auditory, language, and multisensory brain regions. The association of auditory and visual speech signals makes speech a highly “configural” percept.

An important direction for the field is thus to provide ways to measure the extent to which visual speech information influences auditory processing and, likewise, to assess how the unisensory components of the signal combine to form a configural, integrated percept. Numerous behavioral measures, such as accuracy (e.g., percent correct, susceptibility to the “McGurk effect”) and reaction time (RT), have been employed to assess multisensory integration ability in speech perception. Neural measures such as fMRI, EEG, and MEG, on the other hand, have been employed to examine the locus and/or time course of integration.

The purpose of this Research Topic is to find converging behavioral and neural assessments of audiovisual integration in speech perception. A further aim is to investigate speech recognition ability in normal-hearing, hearing-impaired, and aging populations. As such, the purpose is to obtain neural measures from EEG as well as fMRI that shed light on the neural bases of multisensory processes, while connecting them to model-based measures of reaction time and accuracy in the behavioral domain. In doing so, we endeavor to gain a more thorough description of the neural bases and mechanisms underlying integration in higher-order processes such as speech and language recognition.
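To make the RT-based measures mentioned above concrete, here is a minimal Python sketch (ours, with hypothetical data; not from this Research Topic) of one standard test, Miller's race-model inequality: if the audiovisual response-time CDF exceeds the sum of the unisensory CDFs at some time, responses are faster than any parallel race of independent channels would allow, which is commonly taken as behavioral evidence of integration.

```python
# Illustrative sketch (hypothetical data): Miller's race-model inequality,
# a classic RT-based measure of audiovisual integration. If the audiovisual
# CDF exceeds the sum of the unisensory CDFs at some time t, responses are
# faster than a race of independent channels would allow.
import numpy as np

def ecdf(rts, grid):
    """Empirical CDF of reaction times evaluated on a time grid (ms)."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, grid, side="right") / len(rts)

def race_model_violation(rt_av, rt_a, rt_v, grid=None):
    """Return time points where F_AV(t) > F_A(t) + F_V(t)."""
    if grid is None:
        grid = np.linspace(200, 1000, 161)  # 5 ms steps, hypothetical range
    f_av, f_a, f_v = ecdf(rt_av, grid), ecdf(rt_a, grid), ecdf(rt_v, grid)
    bound = np.minimum(f_a + f_v, 1.0)      # race-model (Boole) upper bound
    return grid[f_av > bound]

# Hypothetical data: audiovisual trials faster than either unisensory condition.
rng = np.random.default_rng(0)
rt_a = rng.normal(520, 60, 200)   # auditory-only RTs (ms)
rt_v = rng.normal(560, 70, 200)   # visual-only RTs (ms)
rt_av = rng.normal(450, 50, 200)  # audiovisual RTs (ms)
violations = race_model_violation(rt_av, rt_a, rt_v)
print(f"race-model violations at {len(violations)} time points")
```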

Real World Speech Processing
Author: Jhing-Fa Wang
Publisher: Springer
Total Pages: 0
Release: 2011-09-21
Genre: Technology & Engineering
ISBN: 9781441954398

Real World Speech Processing brings together in one place important contributions and up-to-date research results in this fast-moving area. The contributors to this work were selected from the leading researchers and practitioners in this field. The work, originally published as Volume 36, Numbers 2-3 of the Journal of VLSI Signal Processing Systems for Signal, Image, and Video Technology, will be valuable to anyone working or researching in the field of speech processing. It serves as an excellent reference, providing insight into some of the most challenging issues being examined today.

Cognitively Inspired Audiovisual Speech Filtering
Author: Andrew Abel
Publisher: Springer
Total Pages: 134
Release: 2015-08-07
Genre: Computers
ISBN: 3319135090

This book presents a summary of the cognitively inspired basis for multimodal speech enhancement, covering the relationship between the audio and visual modalities in speech as well as recent research into audiovisual speech correlation. A number of audiovisual speech filtering approaches that make use of this relationship are also discussed. A novel multimodal speech enhancement system, making use of both visual and audio information to filter speech, is presented, and the book explores the extension of this system with fuzzy logic to demonstrate an initial implementation of an autonomous, adaptive, and context-aware multimodal system. This work also discusses the challenges of testing such a system and the limitations of many current audiovisual speech corpora, and describes a suitable approach toward developing a corpus designed to test this novel, cognitively inspired speech filtering system.
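As a rough illustration of the general idea, and emphatically not of Abel's actual system, the following Python sketch uses a visual speech-activity score (a stand-in for lip-motion features) to softly gate a simple spectral-suppression filter; the function and variable names are ours.

```python
# Toy sketch (our illustration, not the system from the book): a visual
# speech-activity score adapts a simple spectral-suppression filter. The
# noise estimate is drawn mainly from frames the visual stream marks as
# non-speech, a crude analogue of the context-aware weighting described.
import numpy as np

def av_spectral_filter(noisy_stft, visual_activity, floor=0.1):
    """noisy_stft: (frames, bins) complex STFT; visual_activity: (frames,)
    score in [0, 1], high when the lips indicate active speech."""
    mag = np.abs(noisy_stft)
    # Estimate noise mostly from visually non-speech frames.
    weights = 1.0 - visual_activity
    noise = (weights[:, None] * mag).sum(0) / (weights.sum() + 1e-8)
    # Wiener-like gain, relaxed when visual evidence says speech is present.
    gain = np.clip(1.0 - noise / (mag + 1e-8), floor, 1.0)
    gain = visual_activity[:, None] + (1 - visual_activity[:, None]) * gain
    return gain * noisy_stft

# Usage with random placeholders standing in for a real STFT and lip tracker.
stft = np.random.randn(100, 257) + 1j * np.random.randn(100, 257)
lip_score = np.clip(np.random.rand(100), 0, 1)
enhanced = av_spectral_filter(stft, lip_score)
```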

Speech and Audio Processing
Author: Ian McLoughlin
Publisher: Cambridge University Press
Total Pages: 403
Release: 2016-07-21
Genre: Computers
ISBN: 1107085462

An accessible introduction to speech and audio processing with numerous practical illustrations, exercises, and hands-on MATLAB® examples.

Audio and Speech Processing with MATLAB
Author: Paul Hill
Publisher: CRC Press
Total Pages: 330
Release: 2018-12-07
Genre: Technology & Engineering
ISBN: 0429813953

Speech and audio processing has undergone a revolution in recent decades that has accelerated in the last few years, generating game-changing technologies such as truly successful speech recognition systems, a goal that had remained out of reach until very recently. This book gives the reader a comprehensive overview of such contemporary speech and audio processing techniques, with an emphasis on practical implementations and illustrations using MATLAB code. Core concepts are covered first, giving an introduction to the physics of audio and vibration together with their representations using complex numbers, Z transforms, and frequency-analysis transforms such as the FFT. Later chapters describe the human auditory system and the fundamentals of psychoacoustics. Insights, results, and analyses given in these chapters are subsequently used as the basis for understanding the middle section of the book, covering wideband audio compression (MP3 audio etc.), speech recognition, and speech coding. The final chapter covers musical synthesis and applications, describing methods such as AM, FM, and ring modulation techniques, with MATLAB examples of each (a brief illustrative sketch follows the feature list below). This chapter gives a final example of the use of time-frequency modification to implement a so-called phase vocoder for time stretching (in MATLAB).

Features:
- A comprehensive overview of contemporary speech and audio processing techniques, from perceptual and physical acoustic models to a thorough background in relevant digital signal processing techniques, together with an exploration of speech and audio applications.
- A carefully paced progression in the complexity of the described methods, building, in many cases, from first principles.
- Speech and wideband audio coding, together with a description of the associated standardised codecs (e.g., MP3, AAC, and GSM).
- Speech recognition: feature extraction (e.g., MFCC features), Hidden Markov Models (HMMs), and deep learning techniques such as Long Short-Term Memory (LSTM) methods.
- Book- and computer-based problems at the end of each chapter.
- Numerous real-world examples backed up by many MATLAB functions and code.
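For readers who want a quick feel for the synthesis methods named above, here is a minimal sketch of AM, FM, and ring modulation of a sinusoidal carrier; it is written in Python rather than the book's MATLAB, and the parameter values are arbitrary.

```python
# Minimal sketch (ours, not the book's MATLAB code) of the three
# modulation-based synthesis techniques covered in the final chapter.
import numpy as np

fs = 16000                      # sample rate (Hz)
t = np.arange(fs) / fs          # one second of time
fc, fm = 440.0, 110.0           # carrier and modulator frequencies (Hz)
carrier = np.sin(2 * np.pi * fc * t)
modulator = np.sin(2 * np.pi * fm * t)

am = (1.0 + 0.5 * modulator) * carrier   # amplitude modulation (carrier kept)
ring = modulator * carrier               # ring modulation (carrier suppressed)
beta = 5.0                               # FM modulation index
fm_sig = np.sin(2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fm * t))
```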

Toward a Unified Theory of Audiovisual Integration in Speech Perception
Author: Nicholas Altieri
Publisher: Universal-Publishers
Total Pages:
Release: 2010-09-09
Genre:
ISBN: 1599423618

Auditory and visual speech recognition unfolds in real time and occurs effortlessly for normal-hearing listeners. However, model-theoretic descriptions of the systems-level cognitive processes responsible for integrating auditory and visual speech information are currently lacking, primarily because they rely too heavily on accuracy rather than reaction-time predictions. Speech and language researchers have argued about whether audiovisual integration occurs in a parallel or a coactive fashion, and also about the extent to which audiovisual integration occurs efficiently. The Double Factorial Paradigm introduced in Section 1 is an experimental paradigm equipped to address dynamic processing issues related to architecture (parallel vs. coactive processing) as well as efficiency (capacity). Experiment 1 employed a simple word discrimination task to assess both architecture and capacity in high-accuracy settings. Experiments 2 and 3 assessed these same issues using auditory and visual distractors in Divided Attention and Focused Attention tasks, respectively. Experiment 4 investigated audiovisual integration efficiency across different auditory signal-to-noise ratios. The results can be summarized as follows: integration typically occurs in parallel with an efficient stopping rule; integration occurs automatically in both focused and divided attention versions of the task; and audiovisual integration is only efficient (in the time domain) when the clarity of the auditory signal is relatively poor, although considerable individual differences were observed. In Section 3, these results were captured within the milieu of parallel linear dynamic processing models with cross-channel interactions. Finally, in Section 4, I discussed broader implications of this research, including applications for clinical research and neural-biological models of audiovisual convergence.
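The capacity measure referenced above has a standard form in the Double Factorial Paradigm literature: the capacity coefficient C(t) = log S_AV(t) / [log S_A(t) + log S_V(t)], where S denotes the survivor function of the RT distribution, with C(t) > 1 indicating efficient (super-capacity) integration and C(t) < 1 indicating limited capacity. A minimal Python sketch with hypothetical data (ours, not the dissertation's code):

```python
# Illustrative sketch (hypothetical data): the capacity coefficient
# C(t) = log S_AV(t) / [log S_A(t) + log S_V(t)], where S is the survivor
# function of the RT distribution. C(t) > 1 marks efficient integration.
import numpy as np

def survivor(rts, grid):
    rts = np.sort(np.asarray(rts))
    return 1.0 - np.searchsorted(rts, grid, side="right") / len(rts)

def capacity_or(rt_av, rt_a, rt_v, grid):
    s_av = survivor(rt_av, grid)
    s_a, s_v = survivor(rt_a, grid), survivor(rt_v, grid)
    # Keep only times where the logs are finite and the denominator nonzero.
    ok = (s_av > 0) & (s_av < 1) & (s_a > 0) & (s_v > 0) & (s_a * s_v < 1)
    c = np.full(grid.shape, np.nan)
    c[ok] = np.log(s_av[ok]) / (np.log(s_a[ok]) + np.log(s_v[ok]))
    return c

rng = np.random.default_rng(1)
grid = np.linspace(250, 900, 131)
c_t = capacity_or(rng.normal(450, 50, 300),   # audiovisual RTs (ms)
                  rng.normal(520, 60, 300),   # auditory-only
                  rng.normal(560, 70, 300),   # visual-only
                  grid)
print(np.nanmean(c_t))
```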

Speech and Audio Processing in Adverse Environments
Author: Eberhard Hänsler
Publisher: Springer Science & Business Media
Total Pages: 740
Release: 2008-07-22
Genre: Technology & Engineering
ISBN: 354070602X

Users of signal processing systems are never satisfied with the system they currently use. They are constantly asking for higher quality, faster performance, more comfort, and lower prices. Researchers and developers should be appreciative of this attitude. It justifies their constant effort for improved systems. Better knowledge about biological and physical interrelations, coming along with more powerful technologies, are their engines on the endless road to perfect systems. This book is an impressive image of this process. After “Acoustic Echo and Noise Control”, published in 2004, many new results led to “Topics in Acoustic Echo and Noise Control”, edited in 2006. Today, in 2008, even more new findings and systems could be collected in this book. Comparing the contributions in both edited volumes, progress in knowledge and technology becomes clearly visible: blind methods and multi-input systems replace “humble” low-complexity systems. The functionality of new systems is less and less limited by the processing power available under economic constraints. The editors have to thank all the authors for their contributions. They cooperated readily in our effort to unify the layout of the chapters, the terminology, and the symbols used. It was a pleasure to work with all of them. Furthermore, it is the editors’ concern to thank Christoph Baumann and the Springer Publishing Company for the encouragement and help in publishing this book.

Advances in Audiovisual Speech Processing for Robust Voice Activity Detection and Automatic Speech Recognition
Author: Fei Tao (Electrical engineer)
Publisher:
Total Pages:
Release: 2018
Genre: Automatic speech recognition
ISBN:

Speech processing systems are widely used in existing commercial applications, including virtual assistants in smartphones and home assistant devices. Speech-based commands provide convenient hands-free functionality for users. Two key speech processing systems in practical applications are voice activity detection (VAD), which aims to detect when a user is speaking to a system, and automatic speech recognition (ASR), which aims to recognize what the user is saying. A limitation of these speech tasks is the drop in performance observed in noisy environments or when the speech mode differs from neutral speech (e.g., whisper speech). Emerging audiovisual solutions provide principled frameworks for increasing the robustness of these systems by incorporating features describing lip motion. This study proposes novel audiovisual solutions for the VAD and ASR tasks.

The dissertation introduces unsupervised and supervised audiovisual voice activity detection (AV-VAD). The unsupervised approach combines visual features that are characteristic of the semi-periodic nature of articulatory production around the orofacial area. The visual features are combined using principal component analysis (PCA) to obtain a single feature. The threshold between speech and non-speech activity is automatically estimated with the expectation-maximization (EM) algorithm. The decision boundary is improved using the Bayesian information criterion (BIC), resolving temporal ambiguities caused by different sampling rates and anticipatory movements. The supervised framework corresponds to the bimodal recurrent neural network (BRNN), which captures the task-related characteristics in the audio and visual inputs and models the temporal information within and across modalities. The approach relies on three subnetworks implemented with long short-term memory (LSTM) networks. This framework is implemented with either hand-crafted features or feature representations derived directly from the data (i.e., an end-to-end system). The study also extends this framework by increasing the temporal modeling with advanced LSTMs (A-LSTMs).

For audiovisual automatic speech recognition (AV-ASR), the study explores the use of visual features to compensate for the mismatch observed when the system is evaluated with whisper speech. We propose supervised adaptation schemes which significantly reduce the mismatch between normal and whisper speech across speakers. The study also introduces the gating neural network (GNN), which aims to attenuate the effect of unreliable features, creating AV-ASR systems that improve, or at least maintain, the performance of an ASR system implemented with speech alone. Finally, the dissertation introduces the front-end alignment neural network (AliNN) to address the temporal alignment problem between audio and visual features. This front-end system is important because lip motion often precedes speech (e.g., anticipatory movements). The framework relies on an RNN with an attention model. The resulting aligned features are concatenated and fed to conventional back-end ASR systems, obtaining performance improvements. The proposed approaches for the AV-VAD and AV-ASR systems are evaluated on large audiovisual corpora, achieving competitive performance under real-world scenarios and outperforming conventional audio-based VAD and ASR systems as well as alternative audiovisual systems proposed by previous studies.
Taken collectively, this dissertation has made algorithmic advancements for audiovisual systems, representing novel contributions to the field of multimodal processing.
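As a sketch of the unsupervised AV-VAD recipe described above, and not the dissertation's actual code, the following Python fragment projects lip-motion features to a single component with PCA and fits a two-component Gaussian mixture with EM to separate speech from non-speech frames; the BIC-based boundary refinement is omitted, and the input features are random placeholders.

```python
# Minimal sketch (ours): PCA to a single visual feature, then a
# two-component Gaussian mixture fit by EM so the component boundary
# acts as the speech/non-speech threshold. Requires scikit-learn.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def unsupervised_av_vad(visual_feats):
    """visual_feats: (frames, dims) lip-motion features; returns a boolean
    speech/non-speech decision per frame."""
    score = PCA(n_components=1).fit_transform(visual_feats).ravel()
    gmm = GaussianMixture(n_components=2, random_state=0).fit(score[:, None])
    labels = gmm.predict(score[:, None])
    # Call the component with the higher mean 'speech' (more motion energy).
    speech_component = int(np.argmax(gmm.means_.ravel()))
    return labels == speech_component

# Placeholder input: 500 frames of 20-dimensional lip descriptors.
feats = np.random.randn(500, 20)
decisions = unsupervised_av_vad(feats)
```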