Please use this identifier to cite or link to this item: http://hdl.handle.net/10603/423737
Full metadata record
DC Field                         Value
dc.coverage.spatial
dc.date.accessioned              2022-12-09T10:31:17Z
dc.date.available                2022-12-09T10:31:17Z
dc.identifier.uri                http://hdl.handle.net/10603/423737
dc.description.abstract          Facial expressions play a crucial role in human social interaction, and they are a primary component that must be integrated into machines to make human-computer interaction more user-friendly. Although humans are very efficient at recognizing even minute changes in facial expression, for machines this is a very complex task. Recently, this area of research has attracted much-needed attention due to its broad spectrum of applications. However, expression analysis in an unconstrained environment remains difficult: variations in illumination, facial features, head pose, and background make it hard to recognize emotions correctly in an open setting for commercial applications. This thesis develops deep-learning-based representation learning methods for analyzing facial expressions, with multiple frameworks targeting different applications. The first proposed framework analyzes the emotional sentiment conveyed by an image based on its content. The system examines the faces and the background in the image and extracts facial and scene features from them, respectively, using two different convolutional neural networks; the conditional occurrence of these features is then modeled with long short-term memory (LSTM) networks to predict the sentiment conveyed by the image (an illustrative sketch of this kind of pipeline appears after the metadata record below). The second framework predicts the likability of multimedia content from the facial expressions of the viewer. A database with two sets of video samples was collected for this task in an unconstrained environment: the first set consists of videos, called stimulants, to be watched by recruited subjects; the second set contains recordings of the subjects' facial expressions while they watch the stimulants. The proposed framework is a multimodal system that learns spatio-temporal features from the subject videos to predict likability.
dc.format.extent                 117p.
dc.language                      English
dc.relation
dc.rights                        university
dc.title                         Development of Framework for Facial Expression Analysis Using Representation Learning
dc.title.alternative
dc.creator.researcher            Singh, Vivek
dc.subject.keyword               Engineering
dc.subject.keyword               Engineering and Technology
dc.subject.keyword               Engineering Electrical and Electronic
dc.description.note
dc.contributor.guide             Kumar, Vinay
dc.publisher.place               Patiala
dc.publisher.university          Thapar Institute of Engineering and Technology
dc.publisher.institution         Department of Electronics and Communication Engineering
dc.date.registered
dc.date.completed                2020
dc.date.awarded                  2020
dc.format.dimensions
dc.format.accompanyingmaterial   None
dc.source.university             University
dc.type.degree                   Ph.D.
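
Note on the first framework described in the abstract: two convolutional networks extract facial and scene features, and an LSTM models their conditional occurrence to predict image sentiment. The record does not include the author's code; the following is a minimal, hypothetical PyTorch sketch of that kind of two-branch CNN-LSTM pipeline. All module names, feature dimensions, and the class count are illustrative assumptions, not the thesis implementation.

    import torch
    import torch.nn as nn

    class TwoBranchSentimentNet(nn.Module):
        """Hypothetical sketch: two CNN branches (face crop, whole scene)
        feed an LSTM that treats the two feature vectors as a short
        sequence, as the abstract describes. Dimensions are assumed."""

        def __init__(self, feat_dim=256, hidden_dim=128, num_classes=3):
            super().__init__()
            # Small stand-in CNNs; the thesis would use deeper networks.
            self.face_cnn = self._make_cnn(feat_dim)
            self.scene_cnn = self._make_cnn(feat_dim)
            # LSTM consumes [face_features, scene_features] as a 2-step sequence.
            self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
            self.classifier = nn.Linear(hidden_dim, num_classes)

        @staticmethod
        def _make_cnn(out_dim):
            return nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),   # -> (B, 64, 1, 1)
                nn.Flatten(),              # -> (B, 64)
                nn.Linear(64, out_dim), nn.ReLU(),
            )

        def forward(self, face_img, scene_img):
            face_feat = self.face_cnn(face_img)     # (B, feat_dim)
            scene_feat = self.scene_cnn(scene_img)  # (B, feat_dim)
            # Stack the two feature vectors as a length-2 sequence for the LSTM.
            seq = torch.stack([face_feat, scene_feat], dim=1)  # (B, 2, feat_dim)
            _, (h_n, _) = self.lstm(seq)            # h_n: (1, B, hidden_dim)
            return self.classifier(h_n[-1])         # (B, num_classes) logits

    # Quick shape check with random inputs (assumed 224x224 RGB crops).
    model = TwoBranchSentimentNet()
    face = torch.randn(4, 3, 224, 224)
    scene = torch.randn(4, 3, 224, 224)
    print(model(face, scene).shape)  # torch.Size([4, 3])
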
Appears in Departments:Department of Electronics and Communication Engineering

Files in This Item:
File                    Description      Size        Format
01_title.pdf            Attached File    91.39 kB    Adobe PDF
02_prelim pages.pdf                      1.82 MB     Adobe PDF
03_content.pdf                           274.02 kB   Adobe PDF
04_abstract.pdf                          339.88 kB   Adobe PDF
05_chapter 1.pdf                         1.04 MB     Adobe PDF
06_chapter 2.pdf                         2.19 MB     Adobe PDF
07_chapter 3.pdf                         2.81 MB     Adobe PDF
08_chapter 4.pdf                         2.88 MB     Adobe PDF
09_chapter 5.pdf                         3.18 MB     Adobe PDF
10_chapter 6.pdf                         3.25 MB     Adobe PDF
11_chapter 7.pdf                         545.9 kB    Adobe PDF
12_annexures.pdf                         2.92 MB     Adobe PDF
80_recommendation.pdf                    539.08 kB   Adobe PDF


Items in Shodhganga are licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Licence (CC BY-NC-SA 4.0).
