Please use this identifier to cite or link to this item: http://hdl.handle.net/10603/362280
Full metadata record
DC Field: Value
dc.coverage.spatial: Effective communication between hearing-impaired people and normal people
dc.date.accessioned: 2022-02-15T06:28:09Z
dc.date.available: 2022-02-15T06:28:09Z
dc.identifier.uri: http://hdl.handle.net/10603/362280
dc.description.abstract: Several methods have been used to recognise sign language. Researchers have captured hand gestures with two technologies: data gloves and computer vision. Glove-based recognition requires the user to wear a cumbersome data glove to capture hand and finger movement. Vision-based hand sign language recognition can be divided into two forms, static and dynamic. The vision-based approach has made considerable progress, but it still faces challenges and remains an open problem because of its limited adaptability to varying hand size, lighting, background and camera characteristics. This thesis evaluates the recognition rate of hand gestures for the alphabets A to Z using different feature extractors and classifiers. The feature extractors used are the Wavelet Transform, the Curvelet Transform and the Contourlet Transform, and two classifiers, a Neural Network and k-Nearest Neighbour (k-NN), are used for classification. Experiments were performed on three data sets, each containing 1014 images, with each data set split 80:20 between training and testing. Across the three data sets, an overall accuracy of 95.38 percent is achieved with the combination of the Wavelet Transform and the k-NN classifier. (A minimal illustrative sketch of such a pipeline is given after the metadata record below.)

We also present a robust, position-invariant Sign Language Recognition framework in which a depth sensor (Kinect) is used to obtain the signer's information. Non-manual signs play an important role in Sign Language Recognition systems because they carry grammatical and prosodic information. We therefore propose a multimodal framework for Sign Language Recognition that combines expression with sign gesture using two different sensors; our analysis of the proposed work shows a promising recognition result of 96.05 percent.
dc.format.extent: 153 pages
dc.language: English
dc.rights: university
dc.title: Sign Language Recognition Resolution In Variable Background For Hearing Impaired People
dc.title.alternative:
dc.creator.researcher: Kanauzia Rohit
dc.subject.keyword: Computer Science
dc.subject.keyword: Computer Science Artificial Intelligence
dc.subject.keyword: Engineering and Technology
dc.description.note:
dc.contributor.guide: Singh Mohan Brij
dc.publisher.place: Dehradun
dc.publisher.university: Uttarakhand Technical University
dc.publisher.institution: Department of Computer Science and Engineering
dc.date.registered: 2015
dc.date.completed: 2021
dc.date.awarded: 2022
dc.format.dimensions: 21.2cm x 30.5cm x 2.3cm
dc.format.accompanyingmaterial: CD
dc.source.university: University
dc.type.degree: Ph.D.
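Illustrative sketch (not taken from the thesis): the abstract describes extracting wavelet-domain features from hand-gesture images and classifying them with a k-Nearest Neighbour classifier after an 80:20 train/test split. The Python sketch below shows one way such a pipeline could be assembled; the choice of the PyWavelets and scikit-learn libraries, the 'db4' wavelet, the decomposition level, k = 3 and the use of approximation coefficients as the feature vector are all assumptions made for this example, not details reported by the author.

    # Illustrative sketch only: wavelet features + k-NN on gesture images.
    # Assumes grayscale images of identical size; library and parameter
    # choices (PyWavelets, scikit-learn, 'db4', level=2, k=3) are example
    # assumptions, not values taken from the thesis.
    import numpy as np
    import pywt
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import accuracy_score

    def wavelet_features(image, wavelet="db4", level=2):
        # 2-D discrete wavelet decomposition; keep the low-frequency
        # approximation sub-band and flatten it into a feature vector.
        coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
        return coeffs[0].ravel()

    def evaluate(images, labels, k=3):
        # images: (n_samples, H, W) array; labels: letters 'A'..'Z'.
        X = np.array([wavelet_features(img) for img in images])
        X_train, X_test, y_train, y_test = train_test_split(
            X, labels, test_size=0.20, stratify=labels, random_state=0)
        clf = KNeighborsClassifier(n_neighbors=k)
        clf.fit(X_train, y_train)
        return accuracy_score(y_test, clf.predict(X_test))

Swapping wavelet_features for a Curvelet or Contourlet feature extractor, or the k-NN classifier for a neural network, would mirror the other feature-extractor/classifier combinations the abstract compares.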
Appears in Departments: Department of Computer Science and Engineering

Files in This Item:
File                     Description    Size       Format
01-tittle page.pdf       Attached File  23.95 kB   Adobe PDF
02-certificate page.pdf                 121.13 kB  Adobe PDF
03-contents.pdf                         184.34 kB  Adobe PDF
10 chapter 5.pdf                        812.49 kB  Adobe PDF
11 chapter 6.pdf                        183.53 kB  Adobe PDF
12 references.pdf                       302.28 kB  Adobe PDF
13 publications.pdf                     352.41 kB  Adobe PDF
4 list of tables.pdf                    176.56 kB  Adobe PDF
5 list of figures.pdf                   253.01 kB  Adobe PDF
6 chapter 1.pdf                         416.06 kB  Adobe PDF
7 chapter 2.pdf                         207.54 kB  Adobe PDF
80_recommendation.pdf                   102.07 kB  Adobe PDF
8 chapter 3.pdf                         593.18 kB  Adobe PDF
9 chapter 4.pdf                         965.33 kB  Adobe PDF


Items in Shodhganga are licensed under Creative Commons Licence Attribution-NonCommercial 4.0 International (CC BY-NC 4.0).