Please use this identifier to cite or link to this item: http://hdl.handle.net/10603/362280
Title: Sign Language Recognition Resolution In Variable Background For Hearing Impaired People
Researcher: Kanauzia Rohit
Guide(s): Singh Mohan Brij
Keywords: Computer Science; Computer Science Artificial Intelligence; Engineering and Technology
University: Uttarakhand Technical University
Completed Date: 2021
Abstract: Several methods have been used to detect sign language. Researchers have captured hand gestures with two technologies: data gloves and computer-vision-based approaches. Glove-based recognition requires the user to wear a cumbersome data glove to capture hand and finger movement. Vision-based hand sign language recognition can be divided into two forms, static and dynamic. Vision-based approaches have made considerable progress, but they still face challenges and remain an open problem because of limited adaptability to varying hand size, lighting, background, and camera characteristics. This thesis evaluates the recognition rate of hand gestures for the alphabets A to Z using different feature extractors and classifiers. The feature extractors used are the Wavelet Transform, Curvelet Transform, and Contourlet Transform, and two classifiers, a Neural Network and K-Nearest Neighbor (k-NN), are used for classification. Experiments were carried out on three data sets of 1014 images each, with each data set split 80/20 into training and test portions. Across the three data sets, an overall accuracy of 95.38 percent is achieved with the combination of the Wavelet Transform and the k-NN classifier.
We also present a robust, position-invariant Sign Language Recognition framework in which a depth-sensing device (Kinect) is used to obtain the signer's information. Non-manual signs play an important role in Sign Language Recognition systems because they carry grammatical and prosodic information. We therefore propose a multimodal Sign Language Recognition framework that incorporates facial expression with the sign gesture using two different sensors; our analysis of the proposed work shows a promising recognition result of 96.05 percent.
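The following is a minimal sketch, not the thesis's exact pipeline, of the wavelet-feature plus k-NN approach summarised in the abstract. It assumes PyWavelets for the 2-D wavelet transform and scikit-learn for the 80/20 split and the classifier; the synthetic stand-in images, the db4 wavelet, the decomposition level, and k = 3 are illustrative assumptions, and the data loader is only a placeholder.

    # Sketch of wavelet features + k-NN classification with an 80/20 split.
    # Assumptions (not from the thesis): PyWavelets, scikit-learn, db4 wavelet,
    # level-2 decomposition, k = 3, and random stand-in data for one 1014-image set.
    import numpy as np
    import pywt
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import accuracy_score

    def wavelet_features(image, wavelet="db4", level=2):
        """Run a 2-D discrete wavelet decomposition and use the coarsest
        approximation sub-band as the feature vector."""
        coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
        return coeffs[0].ravel()  # low-frequency approximation coefficients

    # Hypothetical loader: images -> (N, H, W) grayscale array, labels -> (N,) of 'A'..'Z'
    # images, labels = load_sign_dataset(...)
    images = np.random.rand(1014, 64, 64)  # stand-in for one 1014-image data set
    labels = np.random.choice(list("ABCDEFGHIJKLMNOPQRSTUVWXYZ"), size=1014)

    X = np.array([wavelet_features(img) for img in images])

    # 80/20 train/test split, as described in the abstract
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.20, random_state=0, stratify=labels
    )

    knn = KNeighborsClassifier(n_neighbors=3)
    knn.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, knn.predict(X_test)))

The same skeleton would apply to the Curvelet and Contourlet features by swapping the feature-extraction step; the classifier and the evaluation protocol stay unchanged.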
Pagination: 153 pages
URI: http://hdl.handle.net/10603/362280
Appears in Departments: Department of Computer Science and Engineering

Files in This Item:
File (Size, Format)
01-tittle page.pdf (23.95 kB, Adobe PDF)
02-certificate page.pdf (121.13 kB, Adobe PDF)
03-contents.pdf (184.34 kB, Adobe PDF)
4 list of tables.pdf (176.56 kB, Adobe PDF)
5 list of figures.pdf (253.01 kB, Adobe PDF)
6 chapter 1.pdf (416.06 kB, Adobe PDF)
7 chapter 2.pdf (207.54 kB, Adobe PDF)
8 chapter 3.pdf (593.18 kB, Adobe PDF)
9 chapter 4.pdf (965.33 kB, Adobe PDF)
10 chapter 5.pdf (812.49 kB, Adobe PDF)
11 chapter 6.pdf (183.53 kB, Adobe PDF)
12 references.pdf (302.28 kB, Adobe PDF)
13 publications.pdf (352.41 kB, Adobe PDF)
80_recommendation.pdf (102.07 kB, Adobe PDF)

Items in Shodhganga are licensed under Creative Commons Licence Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).