Please use this identifier to cite or link to this item: http://hdl.handle.net/10603/333286
Title: Speech enhancement based on noise diminution and formant extraction using deep learning algorithm for hearing aid applications
Researcher: Vanitha Lakshmi, M
Guide(s): Sudha, S
Keywords: Speech
Hearing aid
Convolutional neural network
University: Anna University
Completed Date: 2020
Abstract: Speech is one of the most efficient modes of communication among human beings. A person may lose hearing ability due to many factors, such as aging or exposure to abnormal loudness. A hearing aid plays a major role in helping a person meet the challenges of hearing impairment. Sound heard through a hearing aid may be affected by background noise, and the wearer cannot tune out or cancel external noise manually. The device therefore needs an effective and efficient algorithm that delivers quality speech in terms of intelligibility and filtering ability. Hence, the desired speech signal must first be segregated from the interfering noise sources to deliver enhanced speech quality. Several noise reduction algorithms exist, such as spectral subtraction, Wiener filtering, and the discrete wavelet transform (DWT). Noise does not necessarily interfere with sound in a uniform way; it can occur randomly. Both stationary and non-stationary noise exist in day-to-day life, but conventional noise reduction algorithms do not scale to handle them. The idea, therefore, is to incorporate a modified spectral subtraction algorithm with a time-variant filter (TVF), and DWT with a wavelet-independent interval method, to improve the quality and clarity of the speech sound in the hearing aid. These algorithms did not reduce distortion to an acceptable level at low signal-to-noise ratios (SNRs), so the problem is further addressed with a newly proposed convolutional neural network (CNN) based deep learning algorithm, which offers low-complexity noise reduction for enhanced speech. The voice dataset used to train and test the audio signals was created from 1000 speakers speaking short sentences.
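The thesis's modified spectral subtraction with a time-variant filter is not reproduced here; as a rough illustration of the baseline it builds on, a minimal sketch of classical magnitude spectral subtraction (frame lengths, hop size, and the noise-only lead-in assumption are all illustrative choices, not the author's settings) might look like:

```python
import numpy as np

def spectral_subtraction(noisy, frame_len=256, hop=128, noise_frames=10, floor=0.01):
    """Textbook magnitude spectral subtraction (not the thesis's modified
    TVF variant). Assumes the first `noise_frames` frames are noise-only
    and uses them to estimate the noise magnitude spectrum."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(noisy) - frame_len) // hop
    # Short-time analysis: windowed FFT of each frame
    spectra = np.array([np.fft.rfft(window * noisy[i * hop:i * hop + frame_len])
                        for i in range(n_frames)])
    mag, phase = np.abs(spectra), np.angle(spectra)
    noise_mag = mag[:noise_frames].mean(axis=0)           # noise estimate
    clean_mag = np.maximum(mag - noise_mag, floor * mag)  # subtract with spectral floor
    # Overlap-add resynthesis, reusing the noisy phase
    out = np.zeros(len(noisy))
    for i in range(n_frames):
        frame = np.fft.irfft(clean_mag[i] * np.exp(1j * phase[i]), frame_len)
        out[i * hop:i * hop + frame_len] += window * frame
    return out
```

The spectral floor (`floor * mag`) limits the "musical noise" artifacts that plain subtraction produces when the estimate overshoots; the distortion this introduces at low SNRs is precisely the limitation the CNN-based approach in the thesis aims to overcome.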
Pagination: xviii, 158 p.
URI: http://hdl.handle.net/10603/333286
Appears in Departments: Faculty of Information and Communication Engineering

Files in This Item:
File                          Size       Format
01_title.pdf                  181.13 kB  Adobe PDF
02_certificates.pdf           118.68 kB  Adobe PDF
03_vivaproceedings.pdf        2.1 MB     Adobe PDF
04_bonafidecertificate.pdf    191.14 kB  Adobe PDF
05_abstracts.pdf              365.58 kB  Adobe PDF
06_acknowledgements.pdf       215.18 kB  Adobe PDF
07_contents.pdf               377.06 kB  Adobe PDF
08_listoftables.pdf           350.02 kB  Adobe PDF
09_listoffigures.pdf          417.66 kB  Adobe PDF
10_listofabbreviations.pdf    351.81 kB  Adobe PDF
11_chapter1.pdf               537.96 kB  Adobe PDF
12_chapter2.pdf               639 kB     Adobe PDF
13_chapter3.pdf               1.2 MB     Adobe PDF
14_chapter4.pdf               1.38 MB    Adobe PDF
15_chapter5.pdf               1.6 MB     Adobe PDF
16_chapter6.pdf               2.45 MB    Adobe PDF
17_conclusion.pdf             221.98 kB  Adobe PDF
18_references.pdf             2.01 MB    Adobe PDF
19_listofpublications.pdf     375.21 kB  Adobe PDF
80_recommendation.pdf         197.51 kB  Adobe PDF


Items in Shodhganga are licensed under the Creative Commons Attribution-NonCommercial 4.0 International Licence (CC BY-NC 4.0).
