Please use this identifier to cite or link to this item: http://hdl.handle.net/10603/458777
Title: A deep neural network architecture for the recognition of American Sign Language from depth maps
Researcher: Beena, M V
Guide(s): Agnisarman Namboodiri, M N and Rani Thottungal
Keywords: Engineering and Technology
Computer Science
Telecommunications
Neural Network
Depth maps
Sign language
University: Anna University
Completed Date: 2020
Abstract: Human-computer interaction is a field of study focused on the design of technology for the interaction between humans and computers. This field aims to develop technologies that make interactions with computers as natural as interactions between humans, and gesture recognition is an integral part of this research area. Technology for the computer recognition of the sign language used by the deaf and hard of hearing community is of particular interest in making the world friendlier to disabled persons. American Sign Language (ASL) is a symbolic language for communication among deaf and hard of hearing people of North America. It is a complete natural language with its own grammar and linguistic properties, expressed through movements of the hands and face; the letters of the English alphabet and words are represented using hand signals. To help deaf and hard of hearing people communicate with a computer using sign language, the computer must be able to recognise the meanings of these signs and gestures, and an efficient system for doing so would be of immense benefit to the members of the deaf and hard of hearing community. This is the context of the present investigation. Many methods have been developed for the computer recognition of ASL symbols using classifier techniques with varying prediction accuracies. These accuracies need improvement, and the research reported in this thesis is an attempt to develop better strategies for more effective recognition of ASL symbols with higher prediction accuracies. The present study examines four techniques for the recognition of gestures in ASL. In every technique, three common functions, namely pre-processing, segmentation, and feature extraction, are used. After applying these functions, different classifiers are used for the prediction of the gestures in ASL.
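The pipeline described in the abstract (pre-processing, segmentation, feature extraction, then classification) can be sketched as follows. This is a minimal illustrative sketch only, not the thesis's actual methods: all function names, thresholds, the two toy features, and the nearest-centroid classifier are hypothetical stand-ins, assuming a depth map where the signing hand is the region closest to the sensor.

```python
# Hypothetical sketch of a depth-map gesture-recognition pipeline:
# pre-process -> segment -> extract features -> classify.
# Every name, threshold, and feature here is an illustrative assumption,
# not taken from the thesis.

def preprocess(depth_map, max_depth=1000):
    """Clip raw sensor depths and normalise them to [0, 1]."""
    return [[min(d, max_depth) / max_depth for d in row] for row in depth_map]

def segment(depth_map, threshold=0.5):
    """Keep only near pixels, assuming the hand is closest to the sensor."""
    return [[d if d < threshold else 0.0 for d in row] for row in depth_map]

def extract_features(segmented):
    """Two crude features: fraction of hand pixels, and their mean depth."""
    vals = [d for row in segmented for d in row if d > 0]
    n = sum(len(row) for row in segmented)
    if not vals:
        return (0.0, 0.0)
    return (len(vals) / n, sum(vals) / len(vals))

def classify(features, centroids):
    """Nearest-centroid prediction over per-label feature centroids."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sqdist(features, centroids[label]))

# Toy 4x4 depth map with a "hand" of close pixels in the top-left corner.
depth = [[200, 210, 900, 950],
         [205, 215, 920, 940],
         [880, 900, 930, 960],
         [890, 910, 950, 970]]

feats = extract_features(segment(preprocess(depth)))
centroids = {"A": (0.25, 0.21), "B": (0.80, 0.10)}  # hypothetical letter classes
prediction = classify(feats, centroids)
```

In the thesis's actual techniques, the classifier stage would be a deep neural network rather than this toy nearest-centroid rule; the sketch only shows how the three shared stages feed into whichever classifier is chosen.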
Pagination: xvi,240p.
URI: http://hdl.handle.net/10603/458777
Appears in Departments:Faculty of Information and Communication Engineering

Files in This Item:
File                     Size       Format
01_title.pdf             105.51 kB  Adobe PDF
02_prelim pages.pdf      604.03 kB  Adobe PDF
03_content.pdf           82.53 kB   Adobe PDF
04_abstract.pdf          72.16 kB   Adobe PDF
05_chapter 1.pdf         1.2 MB     Adobe PDF
06_chapter 2.pdf         908.48 kB  Adobe PDF
07_chapter 3.pdf         1.8 MB     Adobe PDF
08_chapter 4.pdf         1.07 MB    Adobe PDF
09_chapter 5.pdf         780.85 kB  Adobe PDF
10_chapter 6.pdf         1.23 MB    Adobe PDF
11_annexures.pdf         358.44 kB  Adobe PDF
80_recommendation.pdf    277.15 kB  Adobe PDF


Items in Shodhganga are licensed under Creative Commons Licence Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).
