Please use this identifier to cite or link to this item: http://hdl.handle.net/10603/423833
Full metadata record
DC Field: Value
dc.coverage.spatial:
dc.date.accessioned: 2022-12-09T10:53:33Z
dc.date.available: 2022-12-09T10:53:33Z
dc.identifier.uri: http://hdl.handle.net/10603/423833
dc.description.abstract: Artificial Neural Networks (ANNs) are a mainstay of machine learning; it is hard to discuss pattern analysis in general without mentioning ANNs or their modern variants (e.g. deep networks). Although used prolifically as a tool for decades, the ANN as a computational block was until recently treated as a black box, with a grain of suspicion about its actual efficacy. Since the advent of deep learning, that black box has only grown larger and harder to fathom, propelling machine learning research down a path where nearly everyone treading it is armed with some variant of a deep learning tool. The time is now ripe to open up the black box of the ANN and experiment with its components, to see how it works and perhaps make it work better. This thesis compiles efforts to improve the performance of a typical ANN with a basic textbook architecture, aiming to extract efficient and versatile performance from a minimalist design that ensures a hardware-implementable, IC-compatible realization. In general, the following methodology was adopted for identifying an optimal architecture and measuring its performance. The data were first partitioned into training and test sets, and several candidate ANN architectures were initialized. Each architecture was trained repeatedly, and the Mean Squared Error (MSE) was recorded for each set of experiments. To avoid over-fitting, a K-fold cross-validation scheme was adopted, and the data in the training and test sets were exchanged after a fixed number of experiments. The architecture yielding the minimum average MSE was selected as optimal and was then evaluated on the test data to assess its generalization performance. This methodology was adopted in three of the four chapters describing backpropagation (BP) trained ANNs.
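The selection procedure described in the abstract (train several candidate architectures under K-fold cross-validation, record the MSE per fold, and keep the architecture with the lowest average MSE) can be sketched as follows. This is a minimal illustration using scikit-learn's MLPRegressor; the function name, candidate list, and all parameters are illustrative assumptions, not the thesis's actual code.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error


def select_architecture(X, y, candidate_layers, n_splits=5, seed=0):
    """Return the hidden-layer configuration with the lowest average
    cross-validated MSE, plus the per-candidate average MSEs.

    candidate_layers: list of hidden_layer_sizes tuples, e.g. [(4,), (8,)].
    """
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    results = {}
    for layers in candidate_layers:
        fold_mse = []
        # Train and score the same architecture on every fold split.
        for train_idx, test_idx in kf.split(X):
            net = MLPRegressor(hidden_layer_sizes=layers,
                               max_iter=500, random_state=seed)
            net.fit(X[train_idx], y[train_idx])
            fold_mse.append(
                mean_squared_error(y[test_idx], net.predict(X[test_idx])))
        results[layers] = float(np.mean(fold_mse))
    # The architecture with the minimum average MSE is deemed optimal.
    best = min(results, key=results.get)
    return best, results
```

After selection, the thesis methodology would retrain the winning architecture and evaluate it once on held-out test data to measure generalization; the sketch above covers only the selection step.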
dc.format.extent: 183p.
dc.language: English
dc.relation:
dc.rights: university
dc.title: Efficient Implementation of Adaptive Filters and Classifiers Using Multilayer Perceptron Feedforward Neural Network
dc.title.alternative:
dc.creator.researcher: Sakshi
dc.subject.keyword: Engineering
dc.subject.keyword: Engineering and Technology
dc.subject.keyword: Engineering Electrical and Electronic
dc.subject.keyword: Neural networks (Computer science)
dc.description.note:
dc.contributor.guide: Kumar, Ravi
dc.publisher.place: Patiala
dc.publisher.university: Thapar Institute of Engineering and Technology
dc.publisher.institution: Department of Electronics and Communication Engineering
dc.date.registered:
dc.date.completed: 2020
dc.date.awarded: 2020
dc.format.dimensions:
dc.format.accompanyingmaterial: None
dc.source.university: University
dc.type.degree: Ph.D.
Appears in Departments:Department of Electronics and Communication Engineering

Files in This Item:
File (Size, Format)
01_title.pdf (68.11 kB, Adobe PDF)
02_prelim pages.pdf (611.2 kB, Adobe PDF)
03_content.pdf (125.2 kB, Adobe PDF)
04_abstract.pdf (143.25 kB, Adobe PDF)
05_chapter 1.pdf (332.08 kB, Adobe PDF)
06_chapter 2.pdf (2.42 MB, Adobe PDF)
07_chapter 3.pdf (719.84 kB, Adobe PDF)
08_chapter 4.pdf (1.27 MB, Adobe PDF)
09_chapter 5.pdf (3.45 MB, Adobe PDF)
10_chapter 6.pdf (85.08 kB, Adobe PDF)
11_annexures.pdf (351.34 kB, Adobe PDF)
80_recommendation.pdf (89.68 kB, Adobe PDF)


Items in Shodhganga are licensed under the Creative Commons Attribution-NonCommercial 4.0 International Licence (CC BY-NC 4.0).
