Please use this identifier to cite or link to this item: http://hdl.handle.net/10603/380777
Full metadata record
dc.coverage.spatial:
dc.date.accessioned: 2022-05-18T06:16:20Z
dc.date.available: 2022-05-18T06:16:20Z
dc.identifier.uri: http://hdl.handle.net/10603/380777
dc.description.abstract: This thesis focuses on exploring, investigating, and analysing perception-based speech features for emotion recognition in diverse and mixed-language environments, across both discrete and dimensional emotion spaces. The majority of existing approaches to multilingual speech emotion recognition (SER) in the literature have evolved around exploring new speech features and expanding existing speech feature vectors for effective emotion recognition. Subtle emotions such as disgust and boredom, whose sample sizes are small across most databases, are usually recognized less accurately. In addition, cross-corpus SER systems are usually associated with various preprocessing techniques, large speech feature vectors, and feature selection mechanisms. For these systems to be applicable in countries like India, whose population communicates in a mix of diverse languages, they must be enhanced further, as existing cross-corpus SER work has mostly dealt with samples from only two to three languages at a time during the training-testing process. Moreover, most emotion recognition work has targeted either discrete or dimensional emotion spaces, but not both. This thesis aims to address these shortcomings and limitations of the prevailing works. The main focus of the SER system design in this work is identifying a vital, compact set of features through speech analysis for efficient emotion recognition. From an exhaustive literature survey and the author's initial SER studies, it was found that human emotions are better perceived through cepstral feature analysis. The initial research therefore searched for an effective cepstral speech feature combination for a monolingual SER system. The experiments showed that cepstral features derived from the Mel and Bark scales were quite significant for emotion discrimination across both emotion spaces. Artificial Neural Networks (ANN) and Deep Neural Networks (DNN) were chosen for classification. Next, the proposed monolingual SER system...
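The abstract does not include implementation details; purely as an illustration of the Mel-scale cepstral analysis it mentions, the sketch below computes MFCC-style coefficients for a single frame using only NumPy (a triangular Mel filterbank followed by a DCT). All parameter values (16 kHz sample rate, 512-point frame, 26 Mel filters, 13 coefficients) are assumptions, not values taken from the thesis, and the Bark-scale variant is not shown.

```python
import numpy as np

def hz_to_mel(f):
    # Map frequency in Hz onto the perceptual Mel scale.
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    # Inverse of hz_to_mel.
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_cepstral_features(signal, sr=16000, n_fft=512, n_mels=26, n_ceps=13):
    """Illustrative MFCC-style features for one frame (parameters are assumed)."""
    # Power spectrum of a Hamming-windowed frame.
    frame = signal[:n_fft] * np.hamming(n_fft)
    power = np.abs(np.fft.rfft(frame)) ** 2
    # Triangular filterbank with centres equally spaced on the Mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising edge
        fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling edge
    # Log filterbank energies, then a type-II DCT to decorrelate -> cepstrum.
    energies = np.log(fbank @ power + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return dct @ energies

# Example: 13 cepstral coefficients for one frame of a 440 Hz tone.
t = np.arange(512) / 16000.0
feats = mel_cepstral_features(np.sin(2 * np.pi * 440 * t))
print(feats.shape)  # (13,)
```

Such a compact per-frame vector (optionally augmented with deltas) is the kind of perception-based feature the abstract describes feeding to ANN/DNN classifiers.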
dc.format.extent: xviii, 171
dc.language: English
dc.relation:
dc.rights: university
dc.title: Diverse Multilingual and Mixed lingual Emotion Recognition using Perception based Speech Analysis
dc.title.alternative:
dc.creator.researcher: Lalitha S
dc.subject.keyword: Electronics and Communication Engineering; Arousal; Valence; speech analysis; mixed-language; speech emotion recognition; SER; Artificial Neural Networks; Deep Neural Networks; Speech Technology; Cepstrum; Emotion detection; corpus
dc.subject.keyword: Engineering and Technology
dc.description.note:
dc.contributor.guide: Deepa Gupta
dc.publisher.place: Coimbatore
dc.publisher.university: Amrita Vishwa Vidyapeetham University
dc.publisher.institution: Dept. of Electronics and Communication Engineering
dc.date.registered: 2015
dc.date.completed: 2021
dc.date.awarded: 2021
dc.format.dimensions:
dc.format.accompanyingmaterial: None
dc.source.university: University
dc.type.degree: Ph.D.
Appears in Departments: Department of Electronics & Communication Engineering (Amrita School of Engineering)

Files in This Item:
File                      Size       Format
01_title.pdf              134.07 kB  Adobe PDF
02_certificate.pdf        239.59 kB  Adobe PDF
03_preliminary pages.pdf  199.66 kB  Adobe PDF
04_chapter 1.pdf          124.46 kB  Adobe PDF
05_chapter 2.pdf          212.45 kB  Adobe PDF
06_chapter 3.pdf          793.8 kB   Adobe PDF
07_chapter 4.pdf          370.92 kB  Adobe PDF
08_chapter 5.pdf          778.73 kB  Adobe PDF
09_chapter 6.pdf          141.17 kB  Adobe PDF
10_bibliography.pdf       115.47 kB  Adobe PDF
11_publications.pdf       50.81 kB   Adobe PDF
80_recommendation.pdf     274.8 kB   Adobe PDF


Items in Shodhganga are licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Licence (CC BY-NC-SA 4.0).
