Please use this identifier to cite or link to this item:
http://hdl.handle.net/10603/547583
Title: An investigation on high utility itemset extraction using evolutionary approaches assimilated with off and on policy reinforcement learning algorithms
Researcher: Logeswaran, K
Guide(s): Suresh, P and Anandamurugan, S
Keywords: algorithms; Computer Science; Computer Science Information Systems; Engineering and Technology; itemset extraction; off and on policy reinforcement learning
University: Anna University
Completed Date: 2024
Abstract: In the era of digitalization, a huge volume of data is generated every day, so it has become important to analyze this digital data effectively and to extract meaning from it. Utility mining is an active subfield of data mining that extracts consequential patterns from digital data in an efficient way.

Evolutionary computation is the area of Artificial Intelligence that mimics the biological evolution of living things and tackles complex optimization problems in a stochastic way. Over the past years, evolutionary computation has been widely applied to utility mining problems to obtain optimal solutions through this stochastic approach.

In the present research, evolutionary-computation-based utility mining approaches are applied to benchmark datasets, and patterns with high utility are extracted from them. The utility of a pattern is evaluated using a fitness formula. In evolutionary computation, the quality of a solution and the performance of the algorithm depend largely on the strategy parameters used during execution. In conventional evolutionary approaches these strategy parameters are set arbitrarily, which leads to poor solution quality and poor performance.

The current research therefore focuses on setting the strategy parameters consistently using temporal difference approaches, which in turn improves the quality of the solution with optimal performance. The proposed approaches use Q-Learning, an off-policy method, and SARSA, an on-policy method; both are model-free temporal difference approaches belonging to the machine learning category called Reinforcement Learning (RL). Illustrative sketches of the utility computation and of these temporal difference updates are given below the item metadata.
Pagination: xviii, 184 p.
URI: http://hdl.handle.net/10603/547583
Appears in Departments: Faculty of Information and Communication Engineering
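The abstract evaluates each pattern's utility with a fitness formula. As a hedged illustration of the standard high-utility-itemset definitions such a formula builds on (the thesis's exact fitness function is not given on this page), the Python sketch below computes an itemset's utility over a small transaction database; the data, names, and threshold are all illustrative assumptions.

```python
# Hedged sketch: standard high-utility itemset utility computation.
# The profit table, transactions, and threshold are illustrative
# assumptions, not values taken from the thesis.

# External utility: unit profit of each item.
PROFIT = {"a": 5, "b": 2, "c": 1}

# Transactions: item -> purchased quantity (internal utility).
TRANSACTIONS = [
    {"a": 2, "b": 1},
    {"a": 1, "c": 4},
    {"b": 3, "c": 2},
]

def utility(itemset, transaction):
    """Utility of `itemset` in one transaction: sum of quantity * profit,
    or 0 if the transaction does not contain the whole itemset."""
    if not all(item in transaction for item in itemset):
        return 0
    return sum(transaction[item] * PROFIT[item] for item in itemset)

def total_utility(itemset, transactions):
    """Utility of `itemset` over the whole transaction database."""
    return sum(utility(itemset, t) for t in transactions)

MIN_UTIL = 10  # assumed minimum-utility threshold

itemset = {"a", "b"}
u = total_utility(itemset, TRANSACTIONS)  # only the first transaction qualifies: 2*5 + 1*2 = 12
print(itemset, u, "high utility" if u >= MIN_UTIL else "low utility")
```

In evolutionary high-utility itemset mining, a fitness function of this kind (possibly normalized or penalized) scores each candidate itemset in the population.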
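The abstract's central idea is to set an evolutionary algorithm's strategy parameters with temporal difference learning rather than fixing them arbitrarily. The sketch below shows the textbook Q-Learning (off-policy) and SARSA (on-policy) update rules applied to choosing a mutation rate; the state, action, and reward design is an assumption for illustration, not the thesis's formulation.

```python
import random

# Hedged sketch: Q-Learning vs. SARSA updates for picking an
# evolutionary strategy parameter (here, a mutation rate).
# States, actions, and the reward signal are illustrative assumptions.

ACTIONS = [0.01, 0.05, 0.1]      # candidate mutation rates
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

def epsilon_greedy(Q, state):
    """Explore with probability EPSILON, otherwise pick the greedy action."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def q_learning_update(Q, s, a, r, s_next):
    """Off-policy TD update: bootstrap from the best next action."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in ACTIONS)
    Q[(s, a)] = Q.get((s, a), 0.0) + ALPHA * (
        r + GAMMA * best_next - Q.get((s, a), 0.0)
    )

def sarsa_update(Q, s, a, r, s_next, a_next):
    """On-policy TD update: bootstrap from the action actually taken next."""
    Q[(s, a)] = Q.get((s, a), 0.0) + ALPHA * (
        r + GAMMA * Q.get((s_next, a_next), 0.0) - Q.get((s, a), 0.0)
    )

# One illustrative interaction: in an assumed state "early", mutation
# rate 0.05 is tried, an assumed fitness-improvement reward of 1.0 is
# observed, and the search moves to state "mid".
Qq, Qs = {}, {}
q_learning_update(Qq, "early", 0.05, 1.0, "mid")
a_next = epsilon_greedy(Qs, "mid")
sarsa_update(Qs, "early", 0.05, 1.0, "mid", a_next)
print(Qq, Qs)
```

In a real pipeline, the reward would plausibly be the change in the population's best fitness after one generation run with the chosen mutation rate, and the state a coarse descriptor of search progress; both choices are assumptions here.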
Files in This Item:
File | Description | Size | Format
---|---|---|---
01_title.pdf | Attached File | 10.09 kB | Adobe PDF
02_prelim pages.pdf | | 4.1 MB | Adobe PDF
03_content.pdf | | 639.01 kB | Adobe PDF
04_abstract.pdf | | 715.35 kB | Adobe PDF
05_chapter 1.pdf | | 9.86 MB | Adobe PDF
06_chapter 2.pdf | | 8.55 MB | Adobe PDF
07_chapter 3.pdf | | 1.05 MB | Adobe PDF
08_chapter 4.pdf | | 9.48 MB | Adobe PDF
09_chapter 5.pdf | | 7.82 MB | Adobe PDF
10_chapter 6.pdf | | 10.61 MB | Adobe PDF
11_chapter 7.pdf | | 6.43 MB | Adobe PDF
12_annexures.pdf | | 3.7 MB | Adobe PDF
80_recommendation.pdf | | 653.72 kB | Adobe PDF
Items in Shodhganga are licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Licence (CC BY-NC-SA 4.0).