Please use this identifier to cite or link to this item:
http://hdl.handle.net/10603/310039
Title: Developing Metric for Automatic Evaluation of Machine Translation
Researcher: Samiksha Tripathi
Guide(s): Vineet Kansal
Keywords: Computer Science; Artificial Intelligence; Engineering and Technology
University: Dr. A.P.J. Abdul Kalam Technical University
Completed Date: 2019
Abstract: This thesis addresses the automatic evaluation of English-to-Hindi machine translation. Although both languages originate from the Indo-European family, each has undergone substantial change under regional and sub-regional influence. To meet the stated objectives, the author combines a Deep Neural Network approach with linguistic analysis to develop a metric for automated machine translation evaluation.

Machine translation evaluation (MTE) assigns scores to candidate translations. Despite considerable effort devoted to evaluating MT output, the research community still lacks a universally accepted metric. Existing metrics suffer from two major drawbacks: they neither weight words by relevance nor offer insight into error analysis, and certain MT strategies prove ill-suited to score generation. Moreover, the efficiency and accuracy of existing evaluation metrics vary with the language pair under consideration; because source and target languages can differ sharply, this effect is especially pronounced for languages of the Indian subcontinent.

Evaluating machine translation is a time-consuming but critical task. Prevalent evaluation metrics such as BLEU and METEOR have been criticised by the machine translation community for their handling of word order and of morphologically rich languages. Automatic evaluation metrics help determine the comprehensiveness and naturalness of a translated sentence and allow two translation systems to be compared directly; however, they offer no insight into the types of errors a translation system has committed.
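To make the criticism of word-order handling concrete, the following is a minimal from-scratch sketch of standard sentence-level BLEU (clipped n-gram precisions with a brevity penalty). This is not the metric proposed in the thesis, and the add-one smoothing used here is one simplification among several common variants; it only illustrates how an n-gram overlap score reacts to reordering.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(reference, hypothesis, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram
    precisions (n = 1..max_n) times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hypothesis, n))
        ref_counts = Counter(ngrams(reference, n))
        # Clip each hypothesis n-gram count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        # Add-one smoothing so one empty n-gram order does not zero the score.
        precisions.append((overlap + 1) / (total + 1))
    # Brevity penalty punishes hypotheses shorter than the reference.
    bp = min(1.0, math.exp(1 - len(reference) / max(len(hypothesis), 1)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

reference = "the cat sat on the mat".split()
scrambled = "mat the on sat cat the".split()
# Same unigrams, so unigram precision is perfect, yet the score collapses
# because the higher-order n-grams no longer match.
print(sentence_bleu(reference, reference))   # exact match scores 1.0
print(sentence_bleu(reference, scrambled))   # far lower despite identical words
```

The scrambled hypothesis keeps a perfect unigram precision but loses all bigram, trigram, and 4-gram matches, which is exactly why BLEU penalises reordering; conversely, for a morphologically rich language such as Hindi, a correct inflectional variant of a reference word counts as a complete miss.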
Pagination:
URI: http://hdl.handle.net/10603/310039
Appears in Departments: Dean, PG Studies and Research
Files in This Item:
File | Description | Size | Format |
---|---|---|---|---
80_recommendation.pdf | Attached File | 793.45 kB | Adobe PDF | View/Open
certificate.pdf | | 153.64 kB | Adobe PDF | View/Open
chapter_1.pdf | | 989.37 kB | Adobe PDF | View/Open
chapter_2.pdf | | 395.85 kB | Adobe PDF | View/Open
chapter_3.pdf | | 607.92 kB | Adobe PDF | View/Open
chapter_4.pdf | | 1.18 MB | Adobe PDF | View/Open
chapter_5.pdf | | 303.29 kB | Adobe PDF | View/Open
chapter_6.pdf | | 1.29 MB | Adobe PDF | View/Open
chapter_7.pdf | | 203.03 kB | Adobe PDF | View/Open
chapter_8.pdf | | 407.87 kB | Adobe PDF | View/Open
chapter_9.pdf | | 98.59 kB | Adobe PDF | View/Open
preliminary.pdf | | 197.71 kB | Adobe PDF | View/Open
title.pdf | | 22.38 kB | Adobe PDF | View/Open
Items in Shodhganga are licensed under Creative Commons Licence Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).