Please use this identifier to cite or link to this item: http://hdl.handle.net/10603/357595
Title: "Novel Reinforcement Learning and Meta Heuristic Approaches for Optimizing Connected Dominating Sets in MANET"
Researcher: John Deva Prasanna, D S
Guide(s): JOHN ARAVINDHAR, D
Keywords: Computer Science
Computer Science Artificial Intelligence
Engineering and Technology
University: Hindustan University
Completed Date: 2021
Abstract: Mobile Ad hoc Network (MANET) is an infrastructure-free wireless network that follows a multi-hop communication paradigm. The dynamic and unpredictable nature of the MANET demands routing solutions that are highly efficient. Message exchanges between nodes often result in broadcast storms and consume the residual energy of the nodes. Communication through a virtual backbone solves this problem by routing all transactions through backbone nodes. Virtual backbones can be constructed using the technique of Connected Dominating Sets (CDS) from graph theory. However, purely mathematical algorithms could not meet the design issues of the MANET due to its dynamic nature. Reinforcement Learning (RL) is suitable for partially observable conditions such as the MANET and also meets its design issues. Hence, this thesis proposes four different techniques for constructing and optimizing the CDS.

In the proposed QCDS approach, the CDS is computed using the Q-learning algorithm, which is a Reinforcement Learning technique. In this algorithm, the nodes learn about the residual energy and link quality of their neighbour nodes by interacting with them and estimate a Q value. The estimated Q values are then used for constructing the CDS through a greedy approach, and hence the algorithm prefers longer routes over unstable shorter routes.

The proposed extended QCDS algorithm endeavors to minimize the occurrence of weaker links in the CDS formed by the QCDS algorithm by extending the learning episode. In this approach, nodes with high-quality neighbour nodes receive a higher Q value than nodes with low-quality neighbour nodes. Through this, the visibility of a node is increased from one hop to two hops, and the obtained CDS has more stability than that of QCDS.

The third approach, Energy Efficient QACO (EEQ-ACO), is developed by hybridizing Q-learning and Ant Colony Optimization (ACO). In this technique, Q-learning is infused into ACO to update the pheromone value and to modify the state transition probability.
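The QCDS description above is high level; the following Python sketch gives one possible concrete reading of that idea — each node learns a Q value from the residual energy and link quality of its neighbours, and the backbone is then grown greedily by Q value. It is an illustrative sketch only, not the algorithm from the thesis: the graph model, reward definition, learning parameters (ALPHA, GAMMA, EPISODES), and all function names are assumptions introduced here.

    import random

    # Hypothetical network model (assumption): adjacency lists, per-node residual
    # energy and per-directed-link quality, all normalised to [0, 1].
    ALPHA, GAMMA, EPISODES = 0.5, 0.8, 50   # illustrative learning parameters

    def estimate_q(adj, energy, link_quality):
        """Each node repeatedly 'interacts' with its neighbours and updates a
        Q value that rewards neighbours with high residual energy and good
        link quality (sketch of the QCDS learning step, not the exact rule)."""
        q = {u: 0.0 for u in adj}
        for _ in range(EPISODES):
            for u in adj:
                for v in adj[u]:
                    reward = energy[v] * link_quality[(u, v)]
                    best_next = max((q[w] for w in adj[v]), default=0.0)
                    q[u] += ALPHA * (reward + GAMMA * best_next - q[u])
        return q

    def greedy_cds(adj, q):
        """Greedy, Q-value-driven CDS construction: start from the highest-Q
        node and keep adding the highest-Q neighbour of the current backbone
        that still covers at least one uncovered node."""
        start = max(adj, key=lambda n: q[n])
        cds = [start]
        covered = {start} | set(adj[start])
        while len(covered) < len(adj):
            frontier = {v for u in cds for v in adj[u] if v not in cds}
            candidates = [v for v in frontier if ({v} | set(adj[v])) - covered]
            if not candidates:              # graph is disconnected: no CDS exists
                break
            v = max(candidates, key=lambda n: q[n])
            cds.append(v)
            covered |= {v} | set(adj[v])
        return cds

    if __name__ == "__main__":
        # Tiny made-up topology just to exercise the sketch.
        adj = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2, 5], 4: [2], 5: [3]}
        energy = {n: random.uniform(0.5, 1.0) for n in adj}
        link_quality = {(u, v): random.uniform(0.5, 1.0)
                        for u in adj for v in adj[u]}
        q = estimate_q(adj, energy, link_quality)
        print("Backbone (CDS):", greedy_cds(adj, q))

Because each newly added node is always a neighbour of the existing backbone, the resulting set is connected whenever the underlying graph is; the extended QCDS and EEQ-ACO variants described in the abstract refine the learning step rather than this greedy growth.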
Pagination: 
URI: http://hdl.handle.net/10603/357595
Appears in Departments: Department of Computer Science and Engineering

Files in This Item:
File                       Size        Format
10 -chapter 3.pdf          956.27 kB   Adobe PDF
11 - chapter 4.pdf         155.32 kB   Adobe PDF
12-chapter 5.pdf           565.83 kB   Adobe PDF
13-chapter 6.pdf           238.58 kB   Adobe PDF
1-title.pdf                112.19 kB   Adobe PDF
2-certificates.pdf         844.97 kB   Adobe PDF
3-declaration.pdf          155.12 kB   Adobe PDF
4-ack.pdf                  525.95 kB   Adobe PDF
5-content.pdf              727.58 kB   Adobe PDF
7-tables.pdf               673.24 kB   Adobe PDF
80_recommendation.pdf      778.37 kB   Adobe PDF
8-chapter 1.pdf            6.25 MB     Adobe PDF
9- chapter 2.pdf           6.2 MB      Adobe PDF
abstract.pdf               24.94 kB    Adobe PDF


Items in Shodhganga are licensed under Creative Commons Licence Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).