Prosody-TTS: An End-to-End Speech Synthesis System with Prosody Control

Pamisetty, Giridhar and Kodukula, Sri Rama Murty (2022) Prosody-TTS: An End-to-End Speech Synthesis System with Prosody Control. Circuits, Systems, and Signal Processing. ISSN 0278-081X

Full text not available from this repository.

Abstract

End-to-end text-to-speech synthesis systems have achieved immense success in recent times, with improved naturalness and intelligibility. However, end-to-end models, which primarily depend on attention-based alignment, do not offer an explicit provision to modify or incorporate the desired prosody while synthesizing speech. Moreover, state-of-the-art end-to-end systems use autoregressive models for synthesis, making the prediction sequential; hence, the inference time and the computational complexity are quite high. This paper proposes Prosody-TTS, a data-efficient end-to-end speech synthesis model that combines the advantages of statistical parametric models and end-to-end neural network models. It also provides a way to modify or incorporate the desired prosody at a finer level by controlling the fundamental frequency (f0) and the phone duration. Generating speech utterances with appropriate prosody and rhythm improves the naturalness of the synthesized speech. We explicitly model the phoneme duration and f0 to have finer control over them during synthesis. The model is trained in an end-to-end fashion to generate the speech waveform directly from the input text, which in turn depends on the auxiliary subtasks of predicting the phoneme duration, f0, and Mel spectrogram. Experiments on the Telugu-language data of the IndicTTS database show that the proposed Prosody-TTS model achieves state-of-the-art performance with a mean opinion score of 4.08 and a very low inference time, using just 4 hours of training data. © 2022, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
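As a rough illustration of why explicit duration and f0 modelling enables prosody control (this is a hypothetical sketch, not the authors' implementation): a non-autoregressive synthesizer typically expands phoneme-level features to frame-level features by repeating each phoneme embedding for its predicted number of frames and attaching a pitch value to each frame. Because duration and f0 are explicit intermediate quantities, they can be rescaled before decoding. The names `length_regulate`, `duration_scale`, and `f0_scale` below are illustrative assumptions.

```python
# Hypothetical sketch of frame-level "length regulation" with prosody control.
# Each phoneme embedding is repeated for its predicted number of frames, and a
# frame-level f0 value is attached. Scaling the predicted durations changes the
# speaking rate; scaling f0 shifts the pitch -- the finer-level control that
# explicit duration/f0 modelling makes possible.

from typing import List, Sequence, Tuple

def length_regulate(
    phone_embeddings: Sequence[Sequence[float]],
    durations: Sequence[int],       # predicted frames per phoneme
    f0: Sequence[float],            # predicted f0 (Hz) per phoneme
    duration_scale: float = 1.0,    # >1.0 slows speech, <1.0 speeds it up
    f0_scale: float = 1.0,          # >1.0 raises pitch, <1.0 lowers it
) -> List[Tuple[Tuple[float, ...], float]]:
    """Expand phoneme-level features into frame-level (embedding, f0) pairs."""
    frames: List[Tuple[Tuple[float, ...], float]] = []
    for emb, dur, pitch in zip(phone_embeddings, durations, f0):
        n_frames = max(1, round(dur * duration_scale))  # at least one frame
        for _ in range(n_frames):
            frames.append((tuple(emb), pitch * f0_scale))
    return frames

# Example: two phonemes lasting 2 and 3 frames at 120 Hz and 140 Hz.
frames = length_regulate([[0.1], [0.2]], durations=[2, 3], f0=[120.0, 140.0])
```

The decoder (e.g. a Mel-spectrogram predictor feeding a neural vocoder) would then consume all frames in parallel, which is what removes the sequential bottleneck of autoregressive synthesis.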

IITH Creators: Kodukula, Sri Rama Murty (ORCiD: https://orcid.org/0000-0002-6355-5287)
Item Type: Article
Additional Information: The authors would like to thank the Ministry of Electronics and Information Technology (MeitY) for supporting this work under the project “Speech to Speech Translation for Tribal Languages using Deep Learning Framework”.
Uncontrolled Keywords: Data-efficient end-to-end models; Finer prosody control; Neural vocoder; Non autoregressive models; Text-to-speech synthesis system
Subjects: Electrical Engineering
Divisions: Department of Electrical Engineering
Depositing User: . LibTrainee 2021
Date Deposited: 19 Aug 2022 10:06
Last Modified: 19 Aug 2022 10:06
URI: http://raiithold.iith.ac.in/id/eprint/10231
Publisher URL: http://doi.org/10.1007/s00034-022-02126-z
OA policy: https://v2.sherpa.ac.uk/id/publication/15622
Related URLs:
