Please use this identifier to cite or link to this item:
http://hdl.handle.net/10603/578686
Title: Investigations into Learning Algorithms in Intelligent Machines
Researcher: Harikrishnan N. B.
Guide(s): Nagaraj, Nithin
Keywords: Causality; Chaos; Computer Science; Computer Science Artificial Intelligence; Engineering and Technology; Machine Learning; Neurochaos Learning; Stochastic Resonance
University: Institute of Trans-disciplinary Health Science and Technology
Completed Date: 2022
Abstract: In this thesis, we address these research gaps by proposing a novel brain-inspired learning algorithm, namely Neurochaos Learning (NL). NL comprises an input layer of chaotic 1D Generalized Lüroth Series (GLS) neurons, and it fundamentally uses the Topological Transitivity property of chaos and Stochastic Resonance to perform classification tasks. NL has two main architectures: (a) ChaosNet, an input layer of 1D GLS neurons followed by a cosine-similarity classifier; and (b) a hybrid architecture, chaos-based features + Machine Learning classifiers (features extracted from the chaotic neural trace, followed by Machine Learning classifiers such as Decision Tree, Random Forest, AdaBoost, Support Vector Machine, k-Nearest Neighbours, Gaussian Naive Bayes, etc.). We demonstrate the following rich properties of NL in this thesis: (1) NL satisfies the Universal Approximation Theorem; (2) NL supports the incorporation of chaotic biological neuronal models such as the Hindmarsh-Rose model; (3) NL outperforms ML algorithms in the limited-training-sample regime (trained with just nine samples per class, NL yields an F1-score in the range [0.6, 0.98]); (4) the flexibility of NL allows hybrid NL-ML algorithms that boost the performance of existing Machine Learning algorithms; (5) NL is robust to additive parametric noise; (6) NL exhibits stochastic resonance at the level of individual neurons as well as layers of neurons; (7) in the context of continual learning, the rate of catastrophic forgetting in NL is much lower than in Deep Learning algorithms; and (8) the features extracted from the input layer of NL preserve the inherent causal structure of time series data.
Pagination: xxiii, 146
URI: http://hdl.handle.net/10603/578686
Appears in Departments: Centre for Traditional Knowledge, Data Sciences and Informatics
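As a concrete illustration of the pipeline summarized in the abstract above, here is a minimal Python sketch of a ChaosNet-style classifier: a skew-tent GLS map is iterated until its trajectory enters the ε-neighbourhood of the (normalized) input stimulus, two commonly used trace features (firing time and firing rate) are extracted, and classification uses cosine similarity against mean class vectors. This is a sketch under stated assumptions, not the thesis code; the parameter values (`q`, `b`, `epsilon`, `threshold`) and the toy data are illustrative.

```python
import numpy as np

def gls_neuron_trace(stimulus, q=0.34, b=0.499, epsilon=0.01, max_iter=10000):
    """Iterate a 1D GLS (skew-tent) neuron from initial activity q until the
    orbit enters the epsilon-neighbourhood of the stimulus. Topological
    transitivity makes a chaotic orbit visit every interval, so for typical q
    the loop halts with a finite, stimulus-dependent trace."""
    trace = [q]
    x = q
    for _ in range(max_iter):
        if abs(x - stimulus) < epsilon:              # neuron "fires": halt
            break
        x = x / b if x < b else (1 - x) / (1 - b)    # skew-tent map update
        trace.append(x)
    return np.array(trace)

def firing_features(trace, threshold=0.499):
    """Two ChaosNet-style features of a neural trace:
    firing time (trace length) and firing rate (fraction above threshold)."""
    return np.array([len(trace), np.mean(trace > threshold)])

def extract(X):
    """Feature matrix: each input attribute drives one GLS neuron."""
    return np.array([np.concatenate([firing_features(gls_neuron_trace(v))
                                     for v in row]) for row in X])

# Toy usage in the low-training-sample regime: nine samples per class.
rng = np.random.default_rng(0)
X_train = np.vstack([rng.uniform(0.1, 0.4, (9, 3)),
                     rng.uniform(0.6, 0.9, (9, 3))])
y_train = np.array([0] * 9 + [1] * 9)
F = extract(X_train)
prototypes = np.array([F[y_train == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    """Cosine similarity of the extracted feature vector to each class mean."""
    f = extract(x[None, :])[0]
    sims = prototypes @ f / (np.linalg.norm(prototypes, axis=1)
                             * np.linalg.norm(f))
    return int(np.argmax(sims))

print(predict(np.array([0.2, 0.3, 0.25])))  # low-valued stimulus, typically 0
```

The cosine-similarity step here corresponds to the ChaosNet architecture (a); swapping `predict` for any scikit-learn classifier trained on `F` would correspond to the hybrid architecture (b).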
Files in This Item:
File | Description | Size | Format
---|---|---|---
01_title.pdf | Attached File | 266.24 kB | Adobe PDF
02_prelim_pages.pdf | | 257.62 kB | Adobe PDF
03_contents.pdf | | 112.4 kB | Adobe PDF
04_abstract.pdf | | 77.17 kB | Adobe PDF
05_chapter1.pdf | | 133.4 kB | Adobe PDF
06_chapter2.pdf | | 376.87 kB | Adobe PDF
07_chapter3.pdf | | 823.07 kB | Adobe PDF
08_chapter4.pdf | | 12.06 MB | Adobe PDF
09_chapter5.pdf | | 22.92 MB | Adobe PDF
10_chapter6.pdf | | 3.17 MB | Adobe PDF
11_chapter7.pdf | | 217.59 kB | Adobe PDF
12_annexures.pdf | | 226.65 kB | Adobe PDF
80_recommendation.pdf | | 436.18 kB | Adobe PDF
Items in Shodhganga are licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) licence.