Please use this identifier to cite or link to this item: http://hdl.handle.net/10603/522355
Title: Design and implementation of sign language classification system using deep learning models
Researcher: Daniel Nareshkumar, M
Guide(s): Jaison, B.
Keywords: Computational powers
Computer Science
Computer Science Information Systems
Deep learning
Engineering and Technology
Sign language
University: Anna University
Completed Date: 2023
Abstract: Recognition of sign language has become more feasible due to advances both in generally available computational power and in architectures capable of processing sign language imagery. Together with the improving quality of images captured by everyday cameras, this has opened up a much wider range of application scenarios. It also has important implications for deaf and mute people, who gain the chance to communicate easily with many more people. More data covering the use of sign language in the real world is now available than ever before. Sign languages, and by extension the available datasets, take two forms: isolated sign language and continuous sign language. The main difference is that in isolated sign language the hand signs correspond to individual letters of the alphabet, while in continuous sign language the hand signs represent words. The key idea of this thesis is to implement a novel deep learning architecture that uses recently published large pre-trained image models to accurately recognize the letters of the American Sign Language (ASL) alphabet. The thesis works on isolated sign language to demonstrate that a high level of accuracy is achievable on the data, showing that such a system could support interpretation in the real world. The backbone of this work is the MobileNetV2 architecture, which can run inference on images in a very short time because it is designed for end devices such as mobile phones. With the architecture proposed in this thesis, a classification accuracy of 98.77% was achieved on the ASL sign language dataset, outperforming other state-of-the-art solutions.
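The approach described in the abstract, a pre-trained lightweight image backbone with a small classification head for the ASL alphabet, can be sketched in Keras roughly as follows. This is a minimal illustration, not the thesis's exact architecture: the 26-class head, 224x224 input size, and pooling/dropout choices are assumptions, and `weights=None` is used here only to keep the sketch offline (in practice one would load `weights="imagenet"` to use the pre-trained filters).

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26  # assumption: one class per ASL alphabet letter

# MobileNetV2 backbone without its ImageNet classifier head.
# weights=None keeps this sketch offline; use weights="imagenet"
# to start from the pre-trained image model.
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,
    weights=None,
)

# Small classification head on top of the backbone features.
model = models.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```

Because MobileNetV2 uses depthwise-separable convolutions and inverted residual blocks, the resulting model is small enough to run on mobile devices, which is the property the abstract highlights.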
Pagination: xiv,145p.
URI: http://hdl.handle.net/10603/522355
Appears in Departments:Faculty of Information and Communication Engineering

Files in This Item:
File | Size | Format
01_title.pdf | 152.18 kB | Adobe PDF
02_prelim pages.pdf | 4.34 MB | Adobe PDF
03_content.pdf | 184.24 kB | Adobe PDF
04_abstract.pdf | 141.64 kB | Adobe PDF
05_chapter 1.pdf | 2.15 MB | Adobe PDF
06_chapter 2.pdf | 2.45 MB | Adobe PDF
07_chapter 3.pdf | 2.15 MB | Adobe PDF
08_chapter 4.pdf | 1.35 MB | Adobe PDF
09_chapter 5.pdf | 1.13 MB | Adobe PDF
10_chapter 6.pdf | 623.93 kB | Adobe PDF
11_annexures.pdf | 1.52 MB | Adobe PDF
80_recommendation.pdf | 331.64 kB | Adobe PDF


Items in Shodhganga are licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Licence (CC BY-NC-SA 4.0).
