Title: Investigations into Learning Algorithms in Intelligent Machines
Researcher: Harikrishnan N. B.
Guide(s): Nagaraj, Nithin
Keywords: Causality
Chaos
Computer Science
Computer Science Artificial Intelligence
Engineering and Technology
Machine Learning
Neurochaos Learning
Stochastic Resonance
University: Institute of Trans-disciplinary Health Science and Technology
Completed Date: 2022
Abstract: In this thesis, we address existing research gaps by proposing a novel brain-inspired learning algorithm, Neurochaos Learning (NL). NL comprises an input layer of chaotic 1D Generalized Lüroth Series (GLS) neurons and fundamentally exploits the Topological Transitivity property of chaos and Stochastic Resonance to perform classification tasks. NL has two main architectures: (a) ChaosNet, an input layer of 1D GLS neurons followed by a cosine-similarity classifier, and (b) a hybrid architecture, chaos-based features + Machine Learning classifiers (features extracted from the chaotic neural trace, followed by classifiers such as Decision Tree, Random Forest, AdaBoost, Support Vector Machine, k-Nearest Neighbours, and Gaussian Naive Bayes). We demonstrate the following properties of NL in this thesis: (1) NL satisfies the Universal Approximation Theorem; (2) NL supports the incorporation of chaotic biological neuronal models such as the Hindmarsh-Rose model; (3) NL outperforms standard ML algorithms in the limited-training-sample regime (with just nine training samples per class, NL gives an F1-score in the range [0.6, 0.98]); (4) the flexibility of NL allows hybrid NL-ML algorithms that boost the performance of existing Machine Learning algorithms; (5) NL is robust to additive parametric noise; (6) NL exhibits stochastic resonance at the level of individual neurons as well as a layer of neurons; (7) in the context of continual learning, the rate of catastrophic forgetting in NL is much lower than in Deep Learning algorithms; and (8) the features extracted from the input layer of NL preserve the inherent causal structure of the time-series data.
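The core mechanism the abstract describes — a chaotic GLS neuron iterated until topological transitivity brings its trajectory close to the input stimulus, with the firing time serving as the extracted feature — can be sketched as below. This is a minimal illustration only; the map form (skew-tent) and the parameter values (initial neural activity q, skewness b, neighbourhood radius eps) are illustrative assumptions, not the thesis's tuned settings.

```python
def gls_map(x, b=0.499):
    """One iteration of a 1D GLS (skew-tent) map on [0, 1)."""
    return x / b if x < b else (1.0 - x) / (1.0 - b)

def firing_time(stimulus, q=0.34, b=0.499, eps=0.01, max_iter=10_000):
    """Iterate the chaotic neuron from initial activity q until the
    trajectory enters the eps-neighbourhood of the stimulus; by
    topological transitivity of the chaotic map this occurs for
    almost every stimulus. The iteration count is the feature."""
    x = q
    for n in range(max_iter):
        if abs(x - stimulus) < eps:
            return n
        x = gls_map(x, b)
    return max_iter  # neighbourhood not reached within the budget

# Each normalized input attribute yields one firing-time feature,
# which a downstream classifier (cosine similarity in ChaosNet,
# or a standard ML classifier in the hybrid architecture) consumes.
features = [firing_time(s) for s in (0.2, 0.55, 0.81)]
```

In the hybrid architecture, such firing-time (and related neural-trace) features replace the raw attributes before a conventional classifier is trained.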
Pagination: xxiii, 146
URI: http://hdl.handle.net/10603/578686
Appears in Departments: Centre for Traditional Knowledge, Data Sciences and Informatics

Files in This Item:
File                   Size       Format
01_title.pdf           266.24 kB  Adobe PDF
02_prelim_pages.pdf    257.62 kB  Adobe PDF
03_contents.pdf        112.4 kB   Adobe PDF
04_abstract.pdf        77.17 kB   Adobe PDF
05_chapter1.pdf        133.4 kB   Adobe PDF
06_chapter2.pdf        376.87 kB  Adobe PDF
07_chapter3.pdf        823.07 kB  Adobe PDF
08_chapter4.pdf        12.06 MB   Adobe PDF
09_chapter5.pdf        22.92 MB   Adobe PDF
10_chapter6.pdf        3.17 MB    Adobe PDF
11_chapter7.pdf        217.59 kB  Adobe PDF
12_annexures.pdf       226.65 kB  Adobe PDF
80_recommendation.pdf  436.18 kB  Adobe PDF


Items in Shodhganga are licensed under Creative Commons Licence Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).
