Please use this identifier to cite or link to this item:
http://hdl.handle.net/10603/448111
Title: | Analysis and design of sigmoidal activation functions for feed forward artificial neural networks |
Researcher: | Ruchi Sehrawat |
Guide(s): | Pravin Chandra |
Keywords: | Computer Science; Computer Science Software Engineering; Engineering and Technology |
University: | Guru Gobind Singh Indraprastha University |
Completed Date: | 2020 |
Abstract: | The field of artificial neural networks with sigmoidal activation functions is briefly summarized within the framework of the system learning model, along with the role of the activation function in these networks' possession of the universal approximation property. The class of sigmoidal activation functions, represented by ten functions reported in the literature, is empirically examined over a set of thirty learning tasks: twelve function approximation tasks, twelve real-life regression problems, and six tasks of forecasting the evolution of dynamical systems. The aim is to identify the activation function that performs best during training and in generalization behaviour, and on the basis of the experiments conducted, one activation function is identified as the best performing. A conventional adage in the feed-forward artificial neural network literature is that an anti-symmetric activation function should be preferred to an asymmetric one. To verify this statement empirically, experiments were conducted with the backpropagation algorithm. The results obtained when the standard backpropagation algorithm is used to train feed-forward artificial neural networks suggest only weak support for preferring anti-symmetric functions as the activation function at the hidden-layer nodes of these networks. Since this conclusion applies only to networks trained with the simple backpropagation algorithm, the experiments were repeated using a variant of the resilient backpropagation algorithm: the improved resilient backpropagation algorithm with weight-update backtracking (iRPROP+). The study of the effect of activation function symmetry (or its absence) is reported for networks trained with the iRPROP+ algorithm. |
Pagination: | 311 |
URI: | http://hdl.handle.net/10603/448111 |
Appears in Departments: | University School of Information and Communication Technology |
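The abstract above contrasts anti-symmetric and asymmetric sigmoidal activation functions. As an illustrative sketch (not code from the thesis), the hyperbolic tangent is the standard example of an anti-symmetric sigmoid, while the logistic function is the standard asymmetric one:

```python
import math

def logistic(x):
    # Asymmetric sigmoid: output in (0, 1); logistic(-x) = 1 - logistic(x)
    return 1.0 / (1.0 + math.exp(-x))

def tanh_sigmoid(x):
    # Anti-symmetric sigmoid: output in (-1, 1); tanh(-x) = -tanh(x)
    return math.tanh(x)

# tanh is anti-symmetric about the origin; the logistic function is not,
# although the identity tanh(x) = 2*logistic(2x) - 1 relates the two.
x = 1.5
anti_symmetric = abs(tanh_sigmoid(-x) + tanh_sigmoid(x)) < 1e-12
asymmetric = abs(logistic(-x) + logistic(x)) > 1e-12
```

Anti-symmetry is the property tested in the thesis's symmetry experiments: f(-x) = -f(x), so the function's output is centred about zero.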
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
80_recommendation.pdf | Attached File | 2.53 MB | Adobe PDF | View/Open |
ruchi.pdf | | 5.72 MB | Adobe PDF | View/Open |
Items in Shodhganga are licensed under Creative Commons Licence Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).
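The abstract's second set of experiments uses the improved resilient backpropagation algorithm with weight-update backtracking (iRPROP+). A minimal per-weight sketch of the published iRPROP+ update rule (adapt each step size by the sign agreement of successive gradients, and undo the last update only when the error increased) is below; the function names, parameters, and the toy quadratic objective are illustrative assumptions, not the thesis's implementation:

```python
def sign(x):
    return (x > 0) - (x < 0)

def irprop_plus(grad_fn, loss_fn, w, steps=200,
                eta_plus=1.2, eta_minus=0.5,
                delta_min=1e-6, delta_max=50.0, delta0=0.1):
    """Sketch of iRPROP+ on a flat list of weights (hypothetical helper)."""
    n = len(w)
    delta = [delta0] * n      # per-weight step sizes
    prev_grad = [0.0] * n     # gradients from the previous iteration
    prev_dw = [0.0] * n       # last applied weight changes
    prev_loss = loss_fn(w)
    for _ in range(steps):
        g = grad_fn(w)
        loss = loss_fn(w)
        for i in range(n):
            s = prev_grad[i] * g[i]
            if s > 0:
                # Same gradient sign: accelerate the step
                delta[i] = min(delta[i] * eta_plus, delta_max)
                prev_dw[i] = -sign(g[i]) * delta[i]
                w[i] += prev_dw[i]
                prev_grad[i] = g[i]
            elif s < 0:
                # Sign flip: shrink the step; backtrack only if error rose
                delta[i] = max(delta[i] * eta_minus, delta_min)
                if loss > prev_loss:
                    w[i] -= prev_dw[i]   # undo the previous weight change
                prev_dw[i] = 0.0
                prev_grad[i] = 0.0       # skip sign adaptation next step
            else:
                prev_dw[i] = -sign(g[i]) * delta[i]
                w[i] += prev_dw[i]
                prev_grad[i] = g[i]
        prev_loss = loss
    return w

# Toy usage: minimize f(w) = (w0 - 3)^2 + (w1 + 1)^2
loss = lambda w: (w[0] - 3) ** 2 + (w[1] + 1) ** 2
grad = lambda w: [2 * (w[0] - 3), 2 * (w[1] + 1)]
w = irprop_plus(grad, loss, [0.0, 0.0])
```

Because only the sign of the gradient is used, the step sizes, not the gradient magnitudes, control convergence; the backtracking branch is what distinguishes iRPROP+ from plain RPROP.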