Please use this identifier to cite or link to this item:
http://hdl.handle.net/10603/484965
Title: Computer Vision based Approach for Indian Sign Language Recognition and Translation using Deep Learning
Researcher: Mistree, Kinjal Bhargavkumar
Guide(s): Thakor, Devendra V and Bhatt, Brijesh S
Keywords: Computer Engineering; Engineering and Technology; Machine Learning
University: Uka Tarsadia University
Completed Date: 2023
Abstract: Sign language, the language used by the Deaf community, is an entirely visual language with its own grammar and differs substantially from spoken language. Translation between two spoken languages is much smoother than translation between a spoken language and a sign language, for several reasons. Deaf people find it very difficult to express themselves to the rest of society because most hearing people do not know the sign language used by the Deaf community. A system that could translate sign language into text would make sign language understandable to the rest of society and remove the dependency on interpreters.

There is significant variation between the sign languages of different countries, although there are also many similarities. It may therefore not be appropriate for Indian Sign Language (ISL) recognition to simply adopt approaches developed for other sign languages. After nearly 30 years of research, ISL recognition is still in its infancy compared with other international sign languages. Many device-based and vision-based approaches have been applied to ISL recognition from images of signs, but most of them focus on regional variants or on the manual components of signs of one or two words. Continuous sign language sentence recognition and translation into text remains challenging because no large dataset or corpus of ISL sentence videos is available.

In our first proposed approach, we attempt to answer one of the research questions: how can deep learning be used with a very small number of input videos, while incorporating both left-handed and right-handed signs, without degrading ISL sentence recognition performance? Deep learning gives promising results in gesture recognition, but it requires large datasets to avoid overfitting. To address this, image augmentation and a pretrained model are used to increase the effective dataset size and reduce overfitting.
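As a rough illustration of the augmentation-plus-pretrained-model idea mentioned in the abstract, the sketch below combines random image augmentation with a frozen ImageNet backbone in Keras. It is not taken from the thesis: the backbone choice (MobileNetV2), input size, augmentation settings, and `NUM_CLASSES` are illustrative assumptions.

```python
# Minimal sketch (not the thesis implementation): image augmentation plus a
# frozen pretrained backbone to reduce overfitting on a small sign-image dataset.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 10  # hypothetical number of sign classes

# Augmentation layers enlarge the effective dataset with random transforms;
# a horizontal flip is one simple way to cover left- vs right-handed signs.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

# Pretrained ImageNet backbone, frozen so only the small classification head is trained.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = models.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    augment,
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```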
Pagination: xxv; 140p
URI: http://hdl.handle.net/10603/484965
Appears in Departments: Faculty of Engineering and Technology
Files in This Item:
File | Size | Format
---|---|---
01_title.pdf | 79.26 kB | Adobe PDF
02_certificates.pdf | 6.32 MB | Adobe PDF
03_contents.pdf | 55.37 kB | Adobe PDF
04_abstract.pdf | 77.41 kB | Adobe PDF
05__chapter 1.pdf | 206.68 kB | Adobe PDF
06_chapter 2.pdf | 347.13 kB | Adobe PDF
07_chapter 3.pdf | 93.58 kB | Adobe PDF
08_chapter 4.pdf | 1.75 MB | Adobe PDF
09_chapter 5.pdf | 125.08 kB | Adobe PDF
10_chapter 6.pdf | 163.5 kB | Adobe PDF
11_chapter 7.pdf | 807.98 kB | Adobe PDF
12_chapter 8.pdf | 333.97 kB | Adobe PDF
13_chapter 9.pdf | 1.61 MB | Adobe PDF
14_chapter 10.pdf | 97.54 kB | Adobe PDF
15_chapter 11.pdf | 52.08 kB | Adobe PDF
16_chapter 12.pdf | 49.36 kB | Adobe PDF
17_appendix.pdf | 3.51 MB | Adobe PDF
80_recommendation.pdf | 172.41 kB | Adobe PDF
Items in Shodhganga are licensed under Creative Commons Licence Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).