Please use this identifier to cite or link to this item: http://hdl.handle.net/10603/547583
Title: An investigation on high utility itemset extraction using evolutionary approaches assimilated with off and on policy reinforcement learning algorithms
Researcher: Logeswaran, K
Guide(s): Suresh, P and Anandamurugan, S
Keywords: algorithms
Computer Science
Computer Science Information Systems
Engineering and Technology
itemset extraction
off and on
University: Anna University
Completed Date: 2024
Abstract: In the era of digitalization, a huge volume of data is generated every day, so it has become important to analyze this digital data effectively and to extract meaning from it. Utility mining is an active domain within data mining, used to extract consequential patterns from digital data efficiently.

Evolutionary computation is the area of Artificial Intelligence that mimics the biological evolution of living things and tackles complex optimization problems stochastically. Over the past years, evolutionary computation has been widely applied to utility mining problems to obtain near-optimal solutions through such stochastic search.

In the present research, evolutionary computation-based utility mining approaches are applied to benchmark datasets, and the patterns with high utility are extracted from them. The utility of a pattern is evaluated using a fitness formula. In evolutionary computation, the quality of a solution and the performance of an algorithm depend largely on the strategy parameters used during execution. In conventional evolutionary approaches, these strategy parameters are set arbitrarily, which leads to poor solution quality and weak performance.

The current research therefore focuses on setting the strategy parameters systematically using temporal difference approaches, which in turn improves the quality of the solution with optimal performance. The proposed approaches use Q-Learning (off-policy) and SARSA (on-policy), model-free temporal difference methods belonging to the machine learning category called Reinforcement Learning (RL).
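For readers unfamiliar with utility mining, the following minimal sketch illustrates the conventional utility computation behind a high-utility itemset. The abstract does not state the thesis's exact fitness formula, so the standard HUIM definition is assumed here (quantity-weighted profit summed over every transaction containing the whole itemset); the transaction data and profit table are hypothetical toy values.

# Standard HUIM utility, sketched under the assumptions stated above.
# transactions: item -> purchased quantity (internal utility)
# profits: item -> unit profit (external utility)
transactions = [
    {"a": 1, "b": 2, "c": 1},
    {"b": 4, "c": 3},
    {"a": 2, "c": 1, "d": 1},
]
profits = {"a": 5, "b": 2, "c": 1, "d": 4}

def utility(itemset, transaction):
    """Utility of `itemset` in one transaction: sum of quantity * profit,
    or 0 if the transaction does not contain the whole itemset."""
    if not all(item in transaction for item in itemset):
        return 0
    return sum(transaction[item] * profits[item] for item in itemset)

def total_utility(itemset):
    """Utility of `itemset` over the whole database."""
    return sum(utility(itemset, t) for t in transactions)

# {a, c} appears in the first and third transactions:
# (1*5 + 1*1) + (2*5 + 1*1) = 6 + 11 = 17
print(total_utility({"a", "c"}))  # -> 17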
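The parameter-control idea can likewise be sketched in miniature. The loop below is a hedged illustration, not the thesis's actual design: a Q-learning agent (off-policy temporal difference) treats a discrete set of candidate mutation rates as actions and is rewarded by the fitness improvement each choice yields. The candidate rates, the hyperparameters, the single-state formulation, and the run_generation hook are all assumptions introduced for illustration.

import random

MUTATION_RATES = [0.01, 0.05, 0.1, 0.2]   # candidate actions (assumed)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1      # assumed hyperparameters
q = [0.0] * len(MUTATION_RATES)            # single-state Q-table

def choose_action():
    """Epsilon-greedy selection over the candidate mutation rates."""
    if random.random() < EPSILON:
        return random.randrange(len(MUTATION_RATES))
    return max(range(len(MUTATION_RATES)), key=lambda a: q[a])

def q_update(action, reward):
    """Off-policy TD update: bootstrap from the greedy successor value.
    SARSA, the on-policy variant, would instead bootstrap from the
    action the policy actually takes next."""
    q[action] += ALPHA * (reward + GAMMA * max(q) - q[action])

# Inside a real evolutionary loop one would do (run_generation is a
# hypothetical EA step returning the improvement in best utility):
#   a = choose_action()
#   q_update(a, run_generation(mutation_rate=MUTATION_RATES[a]))
# Tiny synthetic demo: pretend the 0.1 rate reliably helps most.
for _ in range(200):
    a = choose_action()
    q_update(a, 1.0 if MUTATION_RATES[a] == 0.1 else 0.1)
print(MUTATION_RATES[max(range(len(q)), key=lambda a: q[a])])  # typically 0.1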
Pagination: xviii, 184 p.
URI: http://hdl.handle.net/10603/547583
Appears in Departments:Faculty of Information and Communication Engineering

Files in This Item:
File                    Size        Format
01_title.pdf            10.09 kB    Adobe PDF
02_prelim pages.pdf     4.1 MB      Adobe PDF
03_content.pdf          639.01 kB   Adobe PDF
04_abstract.pdf         715.35 kB   Adobe PDF
05_chapter 1.pdf        9.86 MB     Adobe PDF
06_chapter 2.pdf        8.55 MB     Adobe PDF
07_chapter 3.pdf        1.05 MB     Adobe PDF
08_chapter 4.pdf        9.48 MB     Adobe PDF
09_chapter 5.pdf        7.82 MB     Adobe PDF
10_chapter 6.pdf        10.61 MB    Adobe PDF
11_chapter 7.pdf        6.43 MB     Adobe PDF
12_annexures.pdf        3.7 MB      Adobe PDF
80_recommendation.pdf   653.72 kB   Adobe PDF


Items in Shodhganga are licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).
