Please use this identifier to cite or link to this item:
http://hdl.handle.net/10603/339409
Title: An enhanced framework for identifying human emotions from multimodal signals
Researcher: Allen Joseph, R
Guide(s): Geetha, P
Keywords: Engineering and Technology; Computer Science; Software Engineering
University: Anna University
Completed Date: 2020
Abstract: Emotions are vital when one person communicates with another. When a child is sorrowful, it tends to cry; its father or mother will try to make it happy by giving it what it needs, and when the child laughs, the parents feel a happiness beyond measure. Similarly, emotions play an important role when a human interacts with a machine. In an automated vehicle-assist system, for example, enabling the vehicle to understand the driver's emotions improves the driving experience; in automated telephony, the caller's emotions can be analyzed to improve the system's feedback. Understanding emotions from images, speech signals, and video is the central concern of this research work. This thesis first works on image processing for identifying emotions, then on emotion identification from speech signals, and finally on identifying emotions from video. In the first framework, the emotion of a person is identified from an image with the help of TensorFlow, a framework that has become prominent in the field of artificial intelligence and helps users address everyday needs. One of the most talked-about TensorFlow applications is RankBrain, developed by Google, which uses deep neural networks to help users find relevant pages. For our purpose, we use TensorFlow to identify emotions from an image using features extracted from it. The feature extraction identifies the geometry of the face after detecting landmarks on the eyes and mouth. The landmarks are constructed by applying the proposed modified eyemap-mouthmap algorithm to an enhanced image, where the enhancement uses the discrete wavelet transform and fuzzy logic. Classification results show that the proposed methodology performs better when TensorFlow is used. The second framework on em
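The abstract's landmarking step builds on eye-map/mouth-map analysis of skin chrominance. The thesis's modified variant is not reproduced here, but the classical chrominance maps it extends (commonly attributed to Hsu et al.) can be sketched in a few lines of NumPy; the function names and the normalization to [0, 1] are illustrative choices, not the author's code:

```python
import numpy as np

def eye_map(cb, cr, eps=1e-6):
    """Classical chrominance eye map: responds strongly around the eyes,
    where Cb is high and Cr is low. Inputs are 8-bit Cb/Cr channel arrays."""
    cb = cb.astype(np.float64) / 255.0
    cr = cr.astype(np.float64) / 255.0
    # Average of three cues: Cb^2, (1 - Cr)^2, and the Cb/Cr ratio.
    return (cb ** 2 + (1.0 - cr) ** 2 + cb / (cr + eps)) / 3.0

def mouth_map(cb, cr, eps=1e-6):
    """Classical chrominance mouth map: the lips have high Cr relative to Cb,
    so Cr^2 * (Cr^2 - eta * Cr/Cb)^2 peaks over the mouth region."""
    cb = cb.astype(np.float64) / 255.0
    cr = cr.astype(np.float64) / 255.0
    cr2 = cr ** 2
    ratio = cr / (cb + eps)
    # eta balances the two terms using image-wide means.
    eta = 0.95 * cr2.mean() / (ratio.mean() + eps)
    return cr2 * (cr2 - eta * ratio) ** 2
```

Thresholding and morphological clean-up of these maps yields candidate eye and mouth regions, from which geometric features (distances and angles between landmarks) can be derived for the downstream classifier.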
Pagination: xvii, 129 p.
URI: http://hdl.handle.net/10603/339409
Appears in Departments: Faculty of Information and Communication Engineering
Files in This Item:
File | Size | Format
---|---|---
01_title.pdf | 190.71 kB | Adobe PDF
02_certificates.pdf | 162.85 kB | Adobe PDF
03_vivaproceedings.pdf | 415.96 kB | Adobe PDF
04_bonafidecertificate.pdf | 263.19 kB | Adobe PDF
05_abstracts.pdf | 47.12 kB | Adobe PDF
06_acknowledgements.pdf | 305.03 kB | Adobe PDF
07_contents.pdf | 90.52 kB | Adobe PDF
08_listoftables.pdf | 54.66 kB | Adobe PDF
09_listoffigures.pdf | 83.39 kB | Adobe PDF
10_listofabbreviations.pdf | 49.15 kB | Adobe PDF
11_chapter1.pdf | 1.62 MB | Adobe PDF
12_chapter2.pdf | 133.29 kB | Adobe PDF
13_chapter3.pdf | 1.21 MB | Adobe PDF
14_chapter4.pdf | 4.12 MB | Adobe PDF
15_chapter5.pdf | 1.28 MB | Adobe PDF
16_chapter6.pdf | 91.92 kB | Adobe PDF
17_conclusion.pdf | 91.92 kB | Adobe PDF
18_references.pdf | 122.54 kB | Adobe PDF
19_listofpublications.pdf | 90.48 kB | Adobe PDF
80_recommendation.pdf | 90.5 kB | Adobe PDF
Items in Shodhganga are licensed under Creative Commons Licence Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).