Please use this identifier to cite or link to this item: http://hdl.handle.net/10603/339409
Title: An enhanced framework for identifying human emotions from multimodal signals
Researcher: Allen Joseph, R
Guide(s): Geetha, P
Keywords: Engineering and Technology
Computer Science
Computer Science Software Engineering
University: Anna University
Completed Date: 2020
Abstract: Emotions are vital when one person communicates with another. When a child is sorrowful, it cries; the father or mother tries to comfort the child by giving it what it needs, and when the child laughs, the parents' happiness is beyond measure. Emotions play a similarly important role when a human interacts with a machine. In an automated vehicle-assist system, for example, making the vehicle understand the driver's emotions improves the driving experience; in automated telephony, the caller's emotions can be analysed to improve the system's feedback. Understanding emotions from images, speech signals, and video is the central concern of this research work. This thesis first addresses emotion identification from images, then from speech signals, and finally from video. In the first framework, the emotion of a person is identified from an image with the help of TensorFlow, a machine-learning library that has become prominent in artificial intelligence; one of its most discussed applications is RankBrain, developed by Google, which helps users find relevant pages using deep neural networks. Here, TensorFlow is used to identify emotions from features extracted from the image. Feature extraction identifies the geometry of the face after detecting the landmarks of the eyes and mouth; the landmarks are constructed by applying the proposed modified eyemap-mouthmap algorithm to an image enhanced using the discrete wavelet transform and fuzzy logic.
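The abstract names the discrete wavelet transform and a fuzzy technique as the enhancement step but does not specify which wavelet or which fuzzy operator the thesis uses. As a minimal sketch, the following assumes a one-level 2-D Haar transform and the classical fuzzy intensification (INT) operator; both are common defaults, not necessarily the thesis's exact choices:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT. Returns the approximation (LL) and
    detail (LH, HL, HH) subbands. Assumes even image dimensions;
    the Haar wavelet is a stand-in for the thesis's unspecified wavelet."""
    a = (img[0::2] + img[1::2]) / 2.0      # row averages
    d = (img[0::2] - img[1::2]) / 2.0      # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0   # low-low: smoothed image
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0   # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0   # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def fuzzy_intensify(img, iterations=1):
    """Classical fuzzy contrast intensification (INT operator):
    fuzzify intensities to [0, 1], push memberships away from 0.5,
    then map back to the original intensity range."""
    x = img.astype(np.float64)
    lo, hi = x.min(), x.max()
    mu = (x - lo) / (hi - lo + 1e-12)          # fuzzification
    for _ in range(iterations):
        mu = np.where(mu <= 0.5, 2.0 * mu**2,  # darken low memberships
                      1.0 - 2.0 * (1.0 - mu)**2)  # brighten high ones
    return lo + mu * (hi - lo)                 # defuzzification
```

In a pipeline of this kind, the intensification would typically be applied to the LL subband before reconstruction, so that contrast is boosted without amplifying high-frequency noise.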
Classification results show that the proposed methodology performs better when TensorFlow is used. The second framework on em
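The record does not detail the "modified eyemap-mouthmap algorithm" itself, but the eyemap-mouthmap family of methods originates in chrominance-based face analysis (Hsu, Abdel-Mottaleb, and Jain). As an illustrative baseline only — not the thesis's modified version — a chrominance eyemap can be sketched as follows, with all formula details taken from that classical method:

```python
import numpy as np

def eye_map_chroma(cb, cr):
    """Classical chrominance eyemap from YCbCr face analysis:
    EyeMapC = (Cb^2 + (1 - Cr)^2 + Cb/Cr) / 3, with all terms
    normalised to [0, 1]. Eye regions typically have high Cb and
    low Cr, so they light up in this map. The thesis's modified
    algorithm presumably alters or extends this baseline.
    cb, cr: uint8 chrominance planes of a YCbCr image."""
    cb = cb.astype(np.float64) / 255.0
    cr = cr.astype(np.float64) / 255.0
    cr_neg = 1.0 - cr                       # negative of Cr
    ratio = cb / (cr + 1e-6)                # Cb/Cr, guarded against /0
    peak = ratio.max()
    if peak > 0:
        ratio = ratio / peak                # normalise ratio to [0, 1]
    return (cb**2 + cr_neg**2 + ratio) / 3.0
```

The companion mouthmap in the same method emphasises regions where Cr dominates Cb (mouths are redder than skin); thresholding the two maps yields candidate eye and mouth landmarks from which the facial geometry features can be measured.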
Pagination: xvii,129 p.
URI: http://hdl.handle.net/10603/339409
Appears in Departments: Faculty of Information and Communication Engineering

Files in This Item:
File (all Adobe PDF) — Size
01_title.pdf — 190.71 kB
02_certificates.pdf — 162.85 kB
03_vivaproceedings.pdf — 415.96 kB
04_bonafidecertificate.pdf — 263.19 kB
05_abstracts.pdf — 47.12 kB
06_acknowledgements.pdf — 305.03 kB
07_contents.pdf — 90.52 kB
08_listoftables.pdf — 54.66 kB
09_listoffigures.pdf — 83.39 kB
10_listofabbreviations.pdf — 49.15 kB
11_chapter1.pdf — 1.62 MB
12_chapter2.pdf — 133.29 kB
13_chapter3.pdf — 1.21 MB
14_chapter4.pdf — 4.12 MB
15_chapter5.pdf — 1.28 MB
16_chapter6.pdf — 91.92 kB
17_conclusion.pdf — 91.92 kB
18_references.pdf — 122.54 kB
19_listofpublications.pdf — 90.48 kB
80_recommendation.pdf — 90.5 kB


Items in Shodhganga are licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Licence (CC BY-NC-SA 4.0).