Please use this identifier to cite or link to this item:
http://hdl.handle.net/10603/425048
Title: Linguistically motivated deep learning models for measuring semantic textual similarity
Researcher: Kleenankandy, Jeena
Guide(s): Nazeer, K A Abdul
Keywords: Engineering and Technology; Computer Science; Software Engineering; Natural Language Processing; Semantic computing
University: National Institute of Technology Calicut
Completed Date: 2022
Abstract: Over the past few years, Natural Language Processing (NLP) has swiftly shifted from statistical feature-based methods to deep neural network-based models. These models rely solely on input words to learn abstract representations of sentence semantics, rendering linguistic features such as Parts-of-Speech (POS) tags and parse trees no longer a necessity. This research shows how deep learning models can still benefit from linguistic features by composing better sentence representations, particularly in semantic similarity-related tasks.

Semantic textual similarity refers to the degree of equivalence in the meaning of two text snippets, irrespective of their words and syntax. Its applications include, but are not limited to, semantic relatedness scoring, paraphrase identification, recognizing textual entailment, question answering, machine translation evaluation, and automatic text summarization. The Recurrent Neural Network (RNN) and its recursive variant, the Tree-RNN, are the state-of-the-art models used in language processing. They repeatedly apply the same neural network to each word to compose sentence vectors, irrespective of the semantic roles or syntactic functions of the words. We address this limitation of RNNs and Tree-RNNs by proposing three Deep Learning (DL) models that use grammar-based non-uniform neural networks for semantic composition. Experiments were conducted using two benchmark datasets: Sentences Involving Compositional Knowledge (SICK) and the Stanford Sentiment Treebank (SST).

The first contribution addresses the inability of Tree-RNN models to semantically differentiate sentences with identical parse trees. We show that grammatical relations, also known as typed dependencies, are essential to identify such differences. We propose a dependency tree-based RNN model that can efficiently learn
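The limitation described above can be illustrated with a minimal sketch: a toy recursive composition over a dependency tree in which each typed dependency ("nsubj", "dobj", "det") has its own weight matrix. This is purely illustrative, not the thesis's actual model; the relation set, dimensions, and parameterization here are assumptions, and real models would learn these weights from data. The point is that two trees with identical shape but swapped grammatical relations compose to different sentence vectors, which a uniform Tree-RNN (one shared matrix for all children) cannot distinguish.

```python
import numpy as np

DIM = 4
rng = np.random.default_rng(0)

# Relation-specific weights: one matrix per typed dependency.
# (Hypothetical relation inventory, chosen for the toy sentence below.)
RELS = ["nsubj", "dobj", "det"]
W_rel = {r: rng.standard_normal((DIM, DIM)) * 0.1 for r in RELS}
W_word = rng.standard_normal((DIM, DIM)) * 0.1  # transforms the head word itself


def compose(word_vec, children):
    """Compose a node vector from its word embedding and typed children.

    children: list of (relation, child_vector) pairs. The child's
    contribution depends on its grammatical relation to the head.
    """
    h = W_word @ word_vec
    for rel, child in children:
        h = h + W_rel[rel] @ child
    return np.tanh(h)


# Toy random embeddings standing in for "the cat chased a mouse".
emb = {w: rng.standard_normal(DIM) for w in ["the", "cat", "chased", "a", "mouse"]}

# Dependency tree: chased -nsubj-> cat -det-> the ; chased -dobj-> mouse -det-> a
cat = compose(emb["cat"], [("det", compose(emb["the"], []))])
mouse = compose(emb["mouse"], [("det", compose(emb["a"], []))])
root = compose(emb["chased"], [("nsubj", cat), ("dobj", mouse)])

# Swapping subject and object relations changes the sentence vector,
# even though the tree *shape* is identical.
root_swapped = compose(emb["chased"], [("dobj", cat), ("nsubj", mouse)])
print(np.allclose(root, root_swapped))  # → False
```

A shape-only Tree-RNN would apply the same matrix to both children, so `root` and `root_swapped` would coincide; the typed-dependency weights are what let the composition tell "cat chased mouse" apart from "mouse chased cat".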
URI: | http://hdl.handle.net/10603/425048 |
Appears in Departments: | COMPUTER SCIENCE AND ENGINEERING |
Files in This Item:
File | Size | Format
---|---|---
01_title.pdf | 62.69 kB | Adobe PDF
02_prelim pages.pdf | 1.08 MB | Adobe PDF
03_content.pdf | 41.78 kB | Adobe PDF
04_abstract.pdf | 40.9 kB | Adobe PDF
05_chapter 1.pdf | 588.69 kB | Adobe PDF
06_chapter 2.pdf | 312.58 kB | Adobe PDF
07_chapter 3.pdf | 350.06 kB | Adobe PDF
08_chapter 4.pdf | 596.17 kB | Adobe PDF
09_chapter 5.pdf | 321.93 kB | Adobe PDF
10_annexures.pdf | 85.43 kB | Adobe PDF
80_recommendation.pdf | 89.11 kB | Adobe PDF
Items in Shodhganga are licensed under Creative Commons Licence Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).