Please use this identifier to cite or link to this item: http://hdl.handle.net/10603/11691
Title: Multi view based human action recognition and behavior understanding using shape features
Researcher: Gomathi V
Guide(s): Ramar, K
Keywords: Human action recognition, Triangulated shape orientation context, Centroid orientation context
Upload Date: 3-Oct-2013
University: Anna University
Completed Date: 
Abstract: Humans have the ability to recognize an event from a single still image, and they naturally pay more attention to dynamic objects than to static objects in a scene. Human motion analysis is currently one of the most active research topics in computer vision. It comprises three tasks: detection (low-level processing), tracking (mid-level processing) and recognition (high-level vision). There is a need for an automated human action recognition (HAR) system that can recognize human actions and subsequently analyze behavior in order to understand motion patterns. This thesis proposes an orientation-information-based shape representation scheme with reduced dimensionality and computational complexity. Two orientation context based shape features, namely the Triangulated Shape Orientation Context (TSOC) and the Centroid Orientation Context (COC), are proposed; both are robust to noise, occlusion and multiple viewpoints. The viewpoint invariance allows human shape to be learned and recognized using different camera configurations, and considering only the outer periphery boundary pixels during feature extraction provides invariance to different clothing styles. For the ViHASi test data set, with 20 actions and 72 samples per action, this work achieved an average recognition accuracy of 95.42%. The results demonstrate the suitability of the proposed work for real-world practice, and the reported comparative analysis confirms the reliability of multi-view human action recognition using the TSOC and COC shape features. Currently, vision-based analysis of human behavior is performed by assigning multiple analysts to watch the same video stream continuously. The proposed work achieved 86% specificity and 84% sensitivity on the training data set, and 85% specificity and 84% sensitivity on the test data set. The proposed model could be effectively utilized in real-world scenarios where behavior understanding is a complex task.
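Note: the abstract names the Centroid Orientation Context (COC) feature but does not reproduce its definition. As a rough illustration only, the sketch below (Python/NumPy; the function name, bin count and normalization are assumptions made here, not taken from the thesis) builds a normalized histogram of silhouette boundary-pixel orientations about the centroid, which is the general idea behind centroid-based orientation descriptors.

    import numpy as np

    def centroid_orientation_histogram(boundary_xy, n_bins=36):
        """Illustrative centroid-based orientation descriptor (not the thesis's COC).

        boundary_xy : (N, 2) array of (x, y) coordinates of silhouette boundary pixels.
        Returns a normalized n_bins-dimensional orientation histogram.
        """
        boundary_xy = np.asarray(boundary_xy, dtype=float)
        centroid = boundary_xy.mean(axis=0)              # silhouette centroid
        dx, dy = (boundary_xy - centroid).T              # offsets of boundary pixels
        angles = np.arctan2(dy, dx)                      # orientation of each pixel, in (-pi, pi]
        hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
        hist = hist.astype(float)
        return hist / (hist.sum() + 1e-12)               # normalize for scale invariance

Such a histogram could then be compared across camera views with any standard distance measure; the thesis's actual TSOC/COC construction, matching scheme and classifier are described in the full text.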
Pagination: xxiv, 160
URI: http://hdl.handle.net/10603/11691
Appears in Departments:Faculty of Information and Communication Engineering

Files in This Item:
File                      Size       Format
01_title.pdf              49.53 kB   Adobe PDF
02_certificates.pdf       750.5 kB   Adobe PDF
03_abstract.pdf           19.54 kB   Adobe PDF
04_acknowledgement.pdf    16.1 kB    Adobe PDF
05_contents.pdf           68.39 kB   Adobe PDF
06_chapter 1.pdf          176.51 kB  Adobe PDF
07_chapter 2.pdf          273.33 kB  Adobe PDF
08_chapter 3.pdf          20.01 kB   Adobe PDF
09_chapter 4.pdf          920.52 kB  Adobe PDF
10_chapter 5.pdf          1.34 MB    Adobe PDF
11_chapter 6.pdf          456.4 kB   Adobe PDF
12_chapter 7.pdf          16.48 kB   Adobe PDF
13_references.pdf         38.94 kB   Adobe PDF
14_publications.pdf       17.91 kB   Adobe PDF
15_vitae.pdf              13.17 kB   Adobe PDF

Items in Shodhganga are licensed under Creative Commons Licence Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).