Please use this identifier to cite or link to this item: http://hdl.handle.net/10603/527575
Title: Deep Learning and Multi Attention based Transformer Language Models for Generative Knowledge Inference
Researcher: Shobhan Kumar
Guide(s): C, Pavan Kumar and Chauhan, Arun
Keywords: Computer Science
Computer Science Artificial Intelligence
Engineering and Technology
University: Indian Institute of Information Technology Dharwad
Completed Date: 2023
Abstract: Textbooks play a vital role in any educational system, as they serve as the primary source of information. Often, the prescribed textbooks are not adequate to satisfy students' curiosity; hence, students frequently use community Question-Answering (cQA) systems alongside textbooks to gain an adequate understanding of a concept. A large number of Question-Answer (QA) pairs are accessible in cQA forums. Due to this high volume, the quality of questions and answers varies widely, making it hard for students to go through all possible QA pairs to better understand a concept. To address this issue, this work presents a technological solution: "Text enrichment with cQA-QAs". The proposed model augments eBooks at the sentence level by recommending relevant QA pairs from cQA forums. In the next phase, the enrichment model selects the key topics of the eBook using topic modeling; these topics suffice to convey a concise summary of the eBook's contents. For each selected topic, it then recommends the relevant QA pairs from the Quora cQA forum. The probability of redundant questions is high due to the increasing influx of users on different cQA forums. Because of this redundancy, responses are scattered across variations of the same question, which results in unsatisfactory search results for a specific question. To address the redundancy issue, this work proposes a transformer-based Siamese network architecture to generate semantically meaningful sentence embeddings. Subsequently, Siamese Bidirectional Encoder Representations from Transformers (BERT) is applied to assess the similarity between questions. However, reading only the recommended e-content (cQA QA pairs) does not by itself make students' learning effective; posing appropriate questions during the reading process can also aid the learner's comprehension.
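The duplicate-question detection described in the abstract rests on comparing dense sentence embeddings by cosine similarity: two questions are treated as variations of the same question when their pooled embeddings are close. As a minimal illustrative sketch only (not the thesis's Siamese-BERT model), the following stands in for a trained encoder with deterministic hash-based word vectors and mean pooling; the pooling-and-cosine step mirrors the Siamese comparison, while a real system would use transformer-produced embeddings.

```python
import hashlib
import math
import random

DIM = 64  # embedding dimensionality (arbitrary for this sketch)

def word_vec(word):
    # Deterministic stand-in for a learned token embedding:
    # hash the word into a fixed pseudo-random Gaussian vector.
    seed = int.from_bytes(hashlib.sha256(word.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(DIM)]

def embed(sentence):
    # Mean pooling over token vectors, as in Sentence-BERT-style models.
    vecs = [word_vec(w) for w in sentence.lower().split()]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Two paraphrases of one question vs. an unrelated question.
q1 = "how do transformers learn attention weights"
q2 = "how are attention weights learned by transformers"
q3 = "best recipe for banana bread"

dup_score = cosine(embed(q1), embed(q2))
unrelated_score = cosine(embed(q1), embed(q3))
```

In the Siamese setup, the same encoder embeds both questions and a high cosine score flags them as duplicates, so their scattered answers can be merged before recommendation.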
Pagination: xxi, 305 p.
URI: http://hdl.handle.net/10603/527575
Appears in Departments:Computer Science and Engineering

Files in This Item:
File                     Size        Format
01_title.pdf             29.9 kB     Adobe PDF
02_prelim pages.pdf      335.09 kB   Adobe PDF
03_content.pdf           27.1 kB     Adobe PDF
04_abstract.pdf          26.44 kB    Adobe PDF
05_chapter 1.pdf         54.61 kB    Adobe PDF
06_chapter 2.pdf         357.17 kB   Adobe PDF
07_chapter 3.pdf         592.14 kB   Adobe PDF
08_chapter 4.pdf         1.35 MB     Adobe PDF
09_chapter 5.pdf         237.55 kB   Adobe PDF
10_annexures.pdf         3.19 MB     Adobe PDF
80_recommendation.pdf    45.23 kB    Adobe PDF


Items in Shodhganga are licensed under Creative Commons Licence Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).
