Unconstrained Dysfluency Modeling for Dysfluent Speech Transcription and Detection

Abstract
Dysfluent speech modeling requires time-accurate and silence-aware transcription at both the word and phonetic levels. However, current research in dysfluency modeling primarily focuses on either transcription or detection, and the performance of each remains limited. In this work, we present an unconstrained dysfluency modeling (UDM) approach that addresses both transcription and detection in an automatic and hierarchical manner. UDM eliminates the need for extensive manual annotation by providing a comprehensive solution. Furthermore, we introduce a simulated dysfluent dataset called VCTK++ to enhance the capabilities of UDM in phonetic transcription. Our experimental results demonstrate the effectiveness and robustness of our proposed methods in both transcription and detection tasks.
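The abstract mentions a simulated dysfluent dataset (VCTK++) used to strengthen phonetic transcription, but does not describe the simulation procedure. The sketch below only illustrates the general idea of injecting common dysfluency types (phoneme repetitions, prolongations, and silent blocks) into a clean phoneme sequence; the function name, parameters, and probabilities are hypothetical and are not taken from the paper.

```python
import random

# Hypothetical sketch: inject simple dysfluencies (repetition, prolongation,
# block) into a clean phoneme sequence. This is NOT the VCTK++ pipeline from
# the paper, only an illustration of the general data-simulation idea.
def simulate_dysfluency(phonemes, p_repeat=0.1, p_prolong=0.05, p_block=0.05,
                        seed=None):
    rng = random.Random(seed)
    out = []
    for ph in phonemes:
        r = rng.random()
        if r < p_repeat:
            # Repetition: stutter the phoneme one extra time.
            out.extend([ph, ph])
        elif r < p_repeat + p_prolong:
            # Prolongation: mark the phoneme as lengthened.
            out.append(ph + ":")
        elif r < p_repeat + p_prolong + p_block:
            # Block: insert a silence token before the phoneme.
            out.extend(["<sil>", ph])
        else:
            out.append(ph)
    return out

# Example: a clean pronunciation of "please" (ARPAbet-style symbols).
print(simulate_dysfluency(["P", "L", "IY", "Z"], seed=0))
```

Such simulated sequences, paired with the corresponding time alignments, could serve as training targets for a time-accurate, silence-aware phonetic transcriber of the kind the abstract describes.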
Reference Key
anumanchipalli2023unconstrained
Authors Jiachen Lian; Carly Feng; Naasir Farooqi; Steve Li; Anshul Kashyap; Cheol Jun Cho; Peter Wu; Robbie Netzorg; Tingle Li; Gopala Krishna Anumanchipalli
Journal arXiv
Year 2023