Please use this identifier to cite or link to this item: http://hdl.handle.net/10603/423833
Title: Efficient Implementation of Adaptive Filters and Classifiers Using Multilayer Perceptron Feedforward Neural Network
Researcher: Sakshi
Guide(s): Kumar, Ravi
Keywords: Engineering
Engineering and Technology
Engineering Electrical and Electronic
Neural networks (Computer science)
University: Thapar Institute of Engineering and Technology
Completed Date: 2020
Abstract: Artificial Neural Networks (ANNs) are a mainstay of machine learning; it is hard to discuss pattern analysis in general without mentioning ANNs or their modern variants such as deep networks. Although used prolifically as a tool for decades, ANNs as a computational block were until recently treated as a black box, regarded with some suspicion about their actual efficacy. With the advent of deep learning, that black box has only grown larger and harder to fathom, propelling machine learning research along a revolutionary path where nearly everyone treading it is armed with some variant of a deep learning tool. The time is now ripe to open up the ANN black box and experiment with its components to see how they work, and perhaps make them work better. This thesis is a compilation of efforts to improve the performance of a typical ANN with a basic textbook architecture, aiming to extract efficient and versatile performance from a minimalist architecture so as to ensure a hardware-implementable, IC-compatible design. In general, the following methodology was adopted for identifying an optimal architecture and measuring its performance. The data were first partitioned into training and test sets, and several candidate ANN architectures were initialized. Each architecture was trained separately and repeatedly, and the Mean Squared Error (MSE) was recorded for each set of experiments. To avoid over-fitting, a K-fold cross-validation scheme was adopted, and the data in the training and test sets were exchanged after a fixed number of experiments. The architecture giving the minimum average MSE was selected as the optimal architecture and was then evaluated on the test data to assess its generalization performance. This methodology was adopted in three of the four chapters describing backpropagation (BP)-trained ANNs.
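The selection procedure described in the abstract (train several candidate architectures, record MSE per fold, pick the one with the minimum average MSE) can be sketched as follows. This is a minimal illustrative sketch, not the thesis's actual code: the one-hidden-layer MLP, tanh activation, learning rate, and epoch count are all assumptions chosen for brevity, and the hidden-layer sizes stand in for the "candidate architectures".

```python
import numpy as np

def train_mlp(X, y, hidden, epochs=200, lr=0.1, seed=0):
    """Train a one-hidden-layer MLP (tanh hidden units, linear output)
    by plain backpropagation / gradient descent on the MSE.
    Architecture and hyper-parameters are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0.0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)            # forward pass, hidden layer
        out = h @ W2 + b2                    # linear output layer
        err = (out - y) / n                  # gradient of MSE w.r.t. output (up to a constant)
        gW2 = h.T @ err; gb2 = err.sum(0)
        dh = (err @ W2.T) * (1.0 - h**2)     # backprop through tanh
        gW1 = X.T @ dh; gb1 = dh.sum(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return W1, b1, W2, b2

def predict(params, X):
    W1, b1, W2, b2 = params
    return np.tanh(X @ W1 + b1) @ W2 + b2

def kfold_select(X, y, candidate_hidden_sizes, k=5, seed=0):
    """K-fold cross-validation: for each candidate architecture, average the
    held-out-fold MSE, then select the architecture with the minimum average."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))            # shuffle before splitting into folds
    folds = np.array_split(idx, k)
    avg_mse = {}
    for hsize in candidate_hidden_sizes:
        fold_mses = []
        for i in range(k):
            test = folds[i]
            train = np.concatenate([folds[j] for j in range(k) if j != i])
            params = train_mlp(X[train], y[train], hsize)
            pred = predict(params, X[test])
            fold_mses.append(float(np.mean((pred - y[test]) ** 2)))
        avg_mse[hsize] = float(np.mean(fold_mses))
    best = min(avg_mse, key=avg_mse.get)     # minimum average MSE wins
    return best, avg_mse
```

A typical use would be `best, scores = kfold_select(X, y, [2, 4, 8, 16])`, after which the winning architecture is retrained on the full training set and evaluated once on the untouched test partition, mirroring the train/validate/test separation described above.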
Pagination: 183p.
URI: http://hdl.handle.net/10603/423833
Appears in Departments:Department of Electronics and Communication Engineering

Files in This Item:
File                     Size       Format
01_title.pdf             68.11 kB   Adobe PDF
02_prelim pages.pdf      611.2 kB   Adobe PDF
03_content.pdf           125.2 kB   Adobe PDF
04_abstract.pdf          143.25 kB  Adobe PDF
05_chapter 1.pdf         332.08 kB  Adobe PDF
06_chapter 2.pdf         2.42 MB    Adobe PDF
07_chapter 3.pdf         719.84 kB  Adobe PDF
08_chapter 4.pdf         1.27 MB    Adobe PDF
09_chapter 5.pdf         3.45 MB    Adobe PDF
10_chapter 6.pdf         85.08 kB   Adobe PDF
11_annexures.pdf         351.34 kB  Adobe PDF
80_recommendation.pdf    89.68 kB   Adobe PDF


Items in Shodhganga are licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Licence (CC BY-NC-SA 4.0).
