Please use this identifier to cite or link to this item: http://hdl.handle.net/10603/458415
Title: Improvements on support vector machine and single hidden layer feedforward network base algorithms for binary classification
Researcher: Borah, Parashjyoti
Guide(s): Gupta, Deepak
Keywords: Extreme Learning Machine
Random Vector Functional-link Networks
Support Vector Machine
University: National Institute of Technology Arunachal Pradesh
Completed Date: 2020
Abstract: The support vector machine (SVM) employs the linear hinge loss to penalize misclassification error, which makes it sensitive to noise and outliers; moreover, class imbalance learning is not inherently incorporated into its optimization problem. SVM guarantees a globally optimal solution, obtained by solving a quadratic programming problem (QPP), but this comes at a high computational cost. The twin support vector machine (TWSVM) is an SVM variant that, unlike SVM, finds two non-parallel proximal class hyperplanes, each closest to the samples of its own class and at least a unit relative distance away from the opposite class. Theoretically, TWSVM reduces the training cost to roughly a quarter of that of SVM while also improving its generalization performance. Single hidden layer feedforward networks (SLFNs) are another category of classification algorithms with efficient generalization that consist of only one hidden layer. Random vector functional-link networks (RVFL) and extreme learning machines (ELM) are two efficient classes of SLFN algorithms that are non-iterative, closed-form solution-based approaches. RVFL and ELM can handle some issues of traditional artificial neural networks (ANNs), such as slow convergence and the presence of local minima. Borrowing its underlying philosophy from TWSVM, the recently proposed twin extreme learning machine (TELM) for binary classification constructs two non-parallel hyperplanes in the ANN feature space. Like TWSVM, TELM requires solving two smaller QPPs, each of the size of one class. Keeping generalization efficiency as the primary criterion, this research suggests improvements to some SVM- and SLFN-based classification methods with respect to noise and outlier sensitivity, class imbalance learning, and/or training computational complexity. An extensive study of the loss functions of SVM-based classification methods is carried out, and SVM models with several robust loss functions are proposed.
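The non-iterative, closed-form training of ELM-style SLFNs mentioned in the abstract can be sketched as follows. This is a minimal illustration of the general ELM idea (random, untrained hidden layer plus a pseudoinverse solve for the output weights) on toy data, not the thesis's proposed algorithm; all variable names and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary data: two Gaussian blobs with labels in {-1, +1}.
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)),
               rng.normal(+1.0, 1.0, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])

L = 20                          # number of hidden neurons
W = rng.normal(size=(2, L))     # random input weights (never trained)
b = rng.normal(size=L)          # random hidden biases

# Hidden-layer output matrix: one nonlinear pass through the random layer.
H = np.tanh(X @ W + b)

# Closed-form output weights via the Moore-Penrose pseudoinverse,
# i.e. the least-squares solution of H @ beta = y. No iterative
# gradient descent is involved, which is the point of ELM/RVFL.
beta = np.linalg.pinv(H) @ y

pred = np.sign(H @ beta)
acc = (pred == y).mean()
```

In contrast, SVM and TWSVM obtain their solutions from QPPs; here the entire "training" step is a single linear-algebra solve, which is why these SLFN approaches avoid slow convergence and local minima.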
Pagination: xv, 197
URI: http://hdl.handle.net/10603/458415
Appears in Departments:Department of Computer Science and Engineering

Files in This Item:
File                      Size       Format
01_title.pdf              371.04 kB  Adobe PDF
02_prelim pages.pdf       810.56 kB  Adobe PDF
03_content.pdf            576.7 kB   Adobe PDF
04_abstract.pdf           468.98 kB  Adobe PDF
05_chapter 1.pdf          345.68 kB  Adobe PDF
06_chapter 2.pdf          609.28 kB  Adobe PDF
07_chapter 3.pdf          842.03 kB  Adobe PDF
08_chapter 4.pdf          1.11 MB    Adobe PDF
09_chapter 5.pdf          1.76 MB    Adobe PDF
10_chapter 6.pdf          1.77 MB    Adobe PDF
12_annexures.pdf          1.18 MB    Adobe PDF
80_recommendation.pdf     225.25 kB  Adobe PDF


Items in Shodhganga are licensed under Creative Commons Licence Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).