Please use this identifier to cite or link to this item:
http://hdl.handle.net/10603/591911
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.date.accessioned | 2024-09-26T12:38:11Z | - |
dc.date.available | 2024-09-26T12:38:11Z | - |
dc.identifier.uri | http://hdl.handle.net/10603/591911 | - |
dc.description.abstract | Recommendation systems help consumers identify items of interest by predicting their ratings and preferences for particular products. A Reinforcement Learning agent's capacity to learn from the environment and from rewards, without labelled training data, makes it well suited to such systems. Prior work has therefore explored Deep Reinforcement Learning (DRL) for recommendation, but existing studies face challenges such as poor scalability, overlapping values, information loss, and inappropriate model training, resulting in inaccurate recommendations. The purpose of this study is to identify and tackle these issues. The proposed work presents a DRR (DRL-based Recommendation) system built on actor-critic learning. In the actor network, DWL-FA (Deep Weighted Likelihood-Factor Analysis) is proposed to adapt an existing DNN (Deep Neural Network) to environmental shifts by removing undesirable regions from the network's outputs. An attention mechanism supplies the decoder with relevant information from the encoder's hidden states; together with the DWL-FA model, it focuses on useful sequences and learns their connections, helping the trained model learn more effectively. In the critic network, HMP-WU (Hidden Markov Probability-Weight Updation) is proposed to optimize the interactions between users, recommended items, and the recommender system (agent). The weight updation improves awareness of related sequences and reduces inaccurate predictions. The proposed techniques improved the system's outcomes by 5.74% in terms of the average p-value... | - |
dc.format.extent | xvii,154 p. | - |
dc.language | English | - |
dc.rights | university | - |
dc.title | A Novel Approach for Long Term Dynamic Recommendation System With Deep Reinforcement Learning Techniques | - |
dc.creator.researcher | S, Krishnamoorthi | - |
dc.subject.keyword | Computer Science | - |
dc.subject.keyword | Computer Science Software Engineering | - |
dc.subject.keyword | Deep Neural Networks | - |
dc.subject.keyword | Deep Reinforcement Learning | - |
dc.subject.keyword | Deep Reinforcement Learning Techniques | - |
dc.subject.keyword | Dynamic Recommendation System | - |
dc.subject.keyword | Engineering and Technology | - |
dc.subject.keyword | Long-Term User Engagement | - |
dc.contributor.guide | Shyam, Gopal K. | - |
dc.publisher.place | Ittagalpura | - |
dc.publisher.university | Presidency University, Karnataka | - |
dc.publisher.institution | School of Engineering | - |
dc.date.registered | 2019 | - |
dc.date.completed | 2024 | - |
dc.date.awarded | 2024 | - |
dc.format.accompanyingmaterial | DVD | - |
dc.source.university | University | - |
dc.type.degree | Ph.D. | - |
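The abstract describes an actor-critic recommendation loop: an actor network selects items for a user, a critic network evaluates the interaction, and both are updated from the observed reward. The thesis's specific DWL-FA and HMP-WU components are not reproduced here; the sketch below is only a minimal, generic illustration of that actor-critic recommendation pattern, with linear models and a simulated user whose feedback is a hypothetical dot-product preference score.

```python
import numpy as np

class ActorCriticRecommender:
    """Toy actor-critic loop for sequential recommendation.

    Illustrative only: a linear softmax actor scores candidate items
    from a user-state vector, and a linear critic estimates state value;
    both are updated from the TD error of simulated user feedback.
    This is NOT the thesis's DWL-FA/HMP-WU method, just the generic pattern.
    """

    def __init__(self, n_items=5, dim=4, seed=0):
        self.rng = np.random.default_rng(seed)
        self.items = self.rng.normal(size=(n_items, dim))          # item embeddings
        self.W = self.rng.normal(scale=0.1, size=(dim, n_items))   # actor weights
        self.v = np.zeros(dim)                                     # critic weights
        self.n_items = n_items

    def policy(self, state):
        """Softmax distribution over items, given the user-state vector."""
        z = state @ self.W
        z = z - z.max()                       # numerical stability
        p = np.exp(z)
        return p / p.sum()

    def step(self, state, pref, alpha=0.05, gamma=0.9):
        """Recommend one item, observe simulated reward, TD-update both networks."""
        p = self.policy(state)
        a = self.rng.choice(self.n_items, p=p)
        reward = float(pref @ self.items[a])              # simulated user feedback
        nxt = 0.9 * state + 0.1 * self.items[a]           # state absorbs the item
        td = reward + gamma * (nxt @ self.v) - state @ self.v
        self.v += alpha * td * state                      # critic: TD(0) update
        grad = -p
        grad[a] += 1.0                                    # d log pi(a|s) / d logits
        self.W += alpha * td * np.outer(state, grad)      # actor: policy gradient
        return nxt, reward

# Usage: a simulated user whose preference aligns with one item's embedding.
rec = ActorCriticRecommender()
state = np.zeros(4)
pref = rec.items[2]
for _ in range(200):
    state, r = rec.step(state, pref)
```

The separation shown here (actor proposes, critic scores the interaction via the TD error) is the design choice the abstract attributes to the DRR system; the thesis replaces the plain linear updates with DWL-FA in the actor and HMP-WU in the critic.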
Appears in Departments: School of Engineering
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
01_title.pdf | Attached File | 195.75 kB | Adobe PDF |
02_prelim pages.pdf | | 630.76 kB | Adobe PDF |
03_content.pdf | | 250.28 kB | Adobe PDF |
04_abstract.pdf | | 9.93 kB | Adobe PDF |
05_chapter 1.pdf | | 1.51 MB | Adobe PDF |
06_chapter 2.pdf | | 2.03 MB | Adobe PDF |
07_chapter 3.pdf | | 734.66 kB | Adobe PDF |
08_chapter 4.pdf | | 484.66 kB | Adobe PDF |
09_chapter 5.pdf | | 518.25 kB | Adobe PDF |
10_chapter 6.pdf | | 460.64 kB | Adobe PDF |
11_chapter 7.pdf | | 778.85 kB | Adobe PDF |
12_annexures.pdf | | 237.8 kB | Adobe PDF |
80_recommendation.pdf | | 114.17 kB | Adobe PDF |
Items in Shodhganga are licensed under Creative Commons Licence Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).