Please use this identifier to cite or link to this item:
http://hdl.handle.net/10603/458777
Title: | A deep neural network architecture for the recognition of American Sign Language from depth maps |
Researcher: | Beena, M V |
Guide(s): | Agnisarman Namboodiri, M N and Rani Thottungal |
Keywords: | Engineering and Technology; Computer Science; Telecommunications; Neural Network; Depth Maps; Sign Language |
University: | Anna University |
Completed Date: | 2020 |
Abstract: | Human-computer interaction is a field of study focused on the design of technology for interaction between humans and computers. It aims to develop technologies that make interacting with a computer as natural as interacting with another person, and gesture recognition is an integral part of this research area. Computer recognition of the sign language used by the deaf and hard of hearing community is of particular interest in making the world more accessible to people with disabilities. American Sign Language (ASL) is a symbolic language used for communication among deaf and hard of hearing people in North America. It is a complete natural language with its own grammar and linguistic properties, expressed through movements of the hands and face; the letters of the English alphabet and words are represented using hand signals. For deaf and hard of hearing people to communicate with a computer using sign language, the computer must be able to recognise the meanings of these signs and gestures. An efficient system for doing this would be of immense benefit to members of the deaf and hard of hearing community, and this is the context of the present investigation. Many methods have been developed for computer recognition of ASL symbols using classifier techniques with varying prediction accuracies. These accuracies need improvement, and the research reported in this thesis attempts to develop more effective strategies for recognising ASL symbols with better prediction accuracy. The present study examines four techniques for the recognition of gestures in ASL. Every technique uses three common functions, namely pre-processing, segmentation, and feature extraction. After applying these functions, different classifiers are used to predict the ASL gestures. |
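The pipeline the abstract describes (pre-processing, segmentation, feature extraction, then a classifier operating on depth maps) can be sketched as follows. This is a minimal illustrative example, not the method from the thesis: the depth range, the hand-segmentation band, and the grid-pooling feature extractor are all assumptions chosen only to make the pipeline concrete.

```python
import numpy as np

def preprocess(depth_map, max_depth=2000.0):
    """Clip invalid sensor readings and normalise depth values to [0, 1]."""
    d = np.clip(depth_map.astype(np.float32), 0.0, max_depth)
    return d / max_depth

def segment_hand(depth, near=0.1, far=0.5):
    """Keep only pixels in the depth band where the signing hand is
    assumed to lie (hypothetical thresholds); zero out everything else."""
    mask = (depth > near) & (depth < far)
    return depth * mask

def extract_features(segmented, grid=(8, 8)):
    """Pool the segmented map into a coarse grid of mean depths,
    giving a fixed-length feature vector for a downstream classifier."""
    h, w = segmented.shape
    gh, gw = grid
    cropped = segmented[: h - h % gh, : w - w % gw]
    pooled = cropped.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    return pooled.ravel()

# Run the pipeline on a synthetic 64x64 depth frame.
frame = np.random.default_rng(0).uniform(0, 2000, size=(64, 64))
features = extract_features(segment_hand(preprocess(frame)))
print(features.shape)  # (64,) feature vector, ready for any classifier
```

In the thesis, the final stage would feed such features to one of the four classifiers under study; here the feature vector is simply printed.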
Pagination: | xvi,240p. |
URI: | http://hdl.handle.net/10603/458777 |
Appears in Departments: | Faculty of Information and Communication Engineering |
Files in This Item:
File | Description | Size | Format |
---|---|---|---|---
01_title.pdf | Attached File | 105.51 kB | Adobe PDF | View/Open
02_prelim pages.pdf | | 604.03 kB | Adobe PDF | View/Open
03_content.pdf | | 82.53 kB | Adobe PDF | View/Open
04_abstract.pdf | | 72.16 kB | Adobe PDF | View/Open
05_chapter 1.pdf | | 1.2 MB | Adobe PDF | View/Open
06_chapter 2.pdf | | 908.48 kB | Adobe PDF | View/Open
07_chapter 3.pdf | | 1.8 MB | Adobe PDF | View/Open
08_chapter 4.pdf | | 1.07 MB | Adobe PDF | View/Open
09_chapter 5.pdf | | 780.85 kB | Adobe PDF | View/Open
10_chapter 6.pdf | | 1.23 MB | Adobe PDF | View/Open
11_annexures.pdf | | 358.44 kB | Adobe PDF | View/Open
80_recommendation.pdf | | 277.15 kB | Adobe PDF | View/Open
Items in Shodhganga are licensed under Creative Commons Licence Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).