Please use this identifier to cite or link to this item: http://hdl.handle.net/10603/547583
Full metadata record
DC Field: Value
dc.coverage.spatial: An investigation on high utility itemset extraction using evolutionary approaches assimilated with off and on policy reinforcement learning algorithms
dc.date.accessioned: 2024-02-26T11:47:05Z
dc.date.available: 2024-02-26T11:47:05Z
dc.identifier.uri: http://hdl.handle.net/10603/547583
dc.description.abstract: In the era of digitalization, a huge volume of data is generated every day. It has therefore become important to analyze this digital data effectively and to extract meaning from it. Utility mining is an active domain within the field of data mining, used to extract significant patterns from digital data in an efficient way.
Evolutionary computation is the area of artificial intelligence that mimics the biological evolution of living things and deals with complex optimization problems in a stochastic way. Over the past years, evolutionary computation has been widely applied to utility mining problems to obtain optimal solutions through stochastic search.
In the present research, evolutionary-computation-based utility mining approaches are applied to benchmark datasets, and the patterns with high utility are extracted from them. The utility of a pattern is evaluated using a fitness formula. In evolutionary computation, the quality of a solution and the performance of an algorithm depend largely on the strategy parameters used during the execution of the evolutionary approach. In conventional evolutionary approaches, the strategy parameters are set arbitrarily, which leads to poor solution quality and weak performance.
The current research focuses on setting the values of the strategy parameters consistently by using temporal difference approaches, which in turn improves the quality of the solution with optimal performance. The proposed approaches use Q-Learning (off-policy) and SARSA (on-policy), temporal difference methods belonging to the machine learning category called Reinforcement Learning (RL). Minimal sketches of the utility measure and the temporal difference parameter update appear after this record.
dc.format.extent: xviii, 184p.
dc.language: English
dc.relation: p.177-184
dc.rights: university
dc.title: An investigation on high utility itemset extraction using evolutionary approaches assimilated with off and on policy reinforcement learning algorithms
dc.title.alternative:
dc.creator.researcher: Logeswaran, K
dc.subject.keyword: algorithms
dc.subject.keyword: Computer Science
dc.subject.keyword: Computer Science Information Systems
dc.subject.keyword: Engineering and Technology
dc.subject.keyword: itemset extraction
dc.subject.keyword: off and on
dc.description.note:
dc.contributor.guide: Suresh, P and Anandamurugan, S
dc.publisher.place: Chennai
dc.publisher.university: Anna University
dc.publisher.institution: Faculty of Information and Communication Engineering
dc.date.registered:
dc.date.completed: 2024
dc.date.awarded: 2024
dc.format.dimensions: 21cm
dc.format.accompanyingmaterial: None
dc.source.university: University
dc.type.degree: Ph.D.
Appears in Departments: Faculty of Information and Communication Engineering
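
The abstract states that pattern utility is evaluated with a fitness formula but does not reproduce the formula itself. The following is a minimal Python sketch of the standard high-utility itemset utility measure commonly used as that fitness value: the utility of an itemset in a transaction is the sum of quantity times unit profit over its items, summed across all transactions containing the itemset. The item profits, transactions, and minutil threshold below are illustrative assumptions, not the thesis's data.

# Minimal sketch of the standard high-utility itemset utility measure,
# commonly used as the fitness function in evolutionary HUIM.
# Profits, transactions, and minutil are illustrative assumptions.

from itertools import combinations

# External utility (e.g., unit profit) per item -- assumed values.
profit = {"a": 5, "b": 2, "c": 1, "d": 4}

# Each transaction maps an item to its internal utility (quantity).
transactions = [
    {"a": 2, "b": 3, "c": 1},
    {"b": 1, "d": 2},
    {"a": 1, "c": 4, "d": 1},
]

def utility(itemset, txn):
    """Utility of itemset in one transaction: sum of quantity * profit,
    or 0 if the transaction does not contain every item of the set."""
    if not itemset <= txn.keys():
        return 0
    return sum(txn[i] * profit[i] for i in itemset)

def total_utility(itemset):
    """Utility over the whole database: the fitness value of the pattern."""
    return sum(utility(itemset, t) for t in transactions)

minutil = 15  # assumed minimum-utility threshold
items = sorted(profit)
for r in range(1, len(items) + 1):
    for combo in combinations(items, r):
        u = total_utility(frozenset(combo))
        if u >= minutil:
            print(set(combo), "utility =", u)

With these assumed values the sketch reports, for example, {'a'} with utility 15 and {'a', 'c'} with utility 20; an evolutionary approach would search this same space stochastically instead of enumerating it.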
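The abstract also states that strategy parameters are set via Q-Learning and SARSA, but the record gives no state, action, or reward design. The sketch below assumes states are coarse bins of recent fitness improvement, actions are candidate mutation rates, and the reward is the observed fitness improvement; it shows both the off-policy (Q-Learning) and on-policy (SARSA) temporal difference updates in a toy loop standing in for an evolutionary run.

# Minimal sketch of temporal-difference control for tuning an
# evolutionary strategy parameter (here a mutation rate). The
# state/action/reward design below is an assumption.

import random

actions = [0.01, 0.05, 0.1, 0.3]        # candidate mutation rates
states = range(3)                        # bins of recent fitness improvement
alpha, gamma, epsilon = 0.1, 0.9, 0.2    # TD learning hyperparameters

Q = {(s, a): 0.0 for s in states for a in actions}

def choose(state):
    """Epsilon-greedy selection over the candidate rates."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_learning_update(s, a, reward, s_next):
    """Off-policy update: bootstrap on the greedy next action."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])

def sarsa_update(s, a, reward, s_next, a_next):
    """On-policy update: bootstrap on the action actually taken."""
    Q[(s, a)] += alpha * (reward + gamma * Q[(s_next, a_next)] - Q[(s, a)])

def bin_state(improvement):
    """Map a fitness improvement onto a coarse state bin (assumed)."""
    if improvement <= 0:
        return 0
    return 1 if improvement < 1.0 else 2

# Toy interaction loop: each step, pick a mutation rate, observe a
# simulated fitness improvement, and update Q with the off-policy rule
# (swap in sarsa_update for the on-policy variant).
s = 0
for step in range(100):
    a = choose(s)
    improvement = random.gauss(a * 2, 0.5)   # simulated reward signal
    s_next = bin_state(improvement)
    q_learning_update(s, a, improvement, s_next)
    s = s_next

print({a: round(Q[(2, a)], 2) for a in actions})

The two update rules differ only in the bootstrap target, which is exactly the off-policy versus on-policy distinction named in the title.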

Files in This Item:
File | Description | Size | Format
01_title.pdf | Attached File | 10.09 kB | Adobe PDF
02_prelim pages.pdf | | 4.1 MB | Adobe PDF
03_content.pdf | | 639.01 kB | Adobe PDF
04_abstract.pdf | | 715.35 kB | Adobe PDF
05_chapter 1.pdf | | 9.86 MB | Adobe PDF
06_chapter 2.pdf | | 8.55 MB | Adobe PDF
07_chapter 3.pdf | | 1.05 MB | Adobe PDF
08_chapter 4.pdf | | 9.48 MB | Adobe PDF
09_chapter 5.pdf | | 7.82 MB | Adobe PDF
10_chapter 6.pdf | | 10.61 MB | Adobe PDF
11_chapter 7.pdf | | 6.43 MB | Adobe PDF
12_annexures.pdf | | 3.7 MB | Adobe PDF
80_recommendation.pdf | | 653.72 kB | Adobe PDF


Items in Shodhganga are licensed under Creative Commons Licence Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).
