Please use this identifier to cite or link to this item: http://hdl.handle.net/10603/17502
Title: Implications of linguistic feature based evaluation in improving machine translation quality: a case of English to Hindi machine translation
Researcher: Joshi, Nisheeth
Guide(s): Darbari, Hemant
Keywords: MT Metrology
Human Evaluation
Automatic Evaluation
Statistical Significance Testing
Metric Combination based MT Evaluation
Supervised Machine Learning
Language Model based MT Evaluation
Upload Date: 25-Mar-2014
University: Banasthali University
Completed Date: 10/02/2013
Abstract: Research in machine translation (MT) evaluation has been a prominent area of study since the inception of MT, with early systems evaluated solely by human judges. This thesis studies the various methods used in MT evaluation. Many evaluation campaigns have been undertaken with English as the target language, but evaluation of English-Hindi MT has received far less attention; we therefore focus our evaluations on this language pair. The basic motivation for undertaking this study was to help managers of MT projects: they often must wait for human evaluations to complete their assessment of an MT system, a process that takes days and hinders development. Using the methods suggested in this thesis, a system manager can quickly analyse the performance of a system, which helps in keeping up with deadlines. We have developed a human evaluation metric that produces results on par with popular human adequacy and fluency metrics and, further, gives better answers as to why certain translations are better or worse than others. We have also studied various automatic evaluation metrics across linguistic levels and correlated their results with human evaluation, using both single and multiple reference translations. Further, we have used combinations of these metrics to obtain better correlations with human evaluations. We have also performed statistical significance testing on the results produced by the automatic evaluation metrics, to verify whether those results are genuinely valid or merely good by chance. At times it is very difficult to obtain reference translations, and without them no reference-based metric can produce evaluation results. We have therefore also looked at measures for evaluating MT systems without human references: we incorporated a trigram language model that evaluates translations by ranking the outputs of various MT engines.
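The significance testing the abstract mentions is commonly done in MT evaluation with paired bootstrap resampling; the abstract does not name the exact test used, so the sketch below is illustrative, with hypothetical per-sentence scores and function names:

```python
import random


def paired_bootstrap(scores_a, scores_b, samples=1000, seed=0):
    """Estimate how often system A outscores system B on resampled test sets.

    scores_a / scores_b: per-sentence metric scores for the same test set
    (hypothetical data). A high win rate (e.g. > 0.95) suggests the observed
    difference between the systems is unlikely to be due to chance.
    """
    rng = random.Random(seed)
    n, wins = len(scores_a), 0
    for _ in range(samples):
        # Resample sentence indices with replacement to form a pseudo test set.
        idx = [rng.randrange(n) for _ in range(n)]
        if sum(scores_a[i] for i in idx) > sum(scores_b[i] for i in idx):
            wins += 1
    return wins / samples
```

A win rate near 1.0 (or near 0.0) indicates a stable ranking between the two systems; values near 0.5 mean the metric difference could easily be chance.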
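The reference-free ranking idea described above can be sketched as an add-one-smoothed trigram language model that scores each engine's output and sorts engines by fluency; the function names, smoothing choice, and toy corpus here are illustrative assumptions, not the thesis's actual implementation:

```python
import math
from collections import Counter


def train_trigrams(corpus):
    """Count trigrams and their bigram contexts from tokenised sentences."""
    tri, bi = Counter(), Counter()
    for sent in corpus:
        toks = ["<s>", "<s>"] + sent + ["</s>"]
        for i in range(len(toks) - 2):
            tri[tuple(toks[i:i + 3])] += 1
            bi[tuple(toks[i:i + 2])] += 1
    return tri, bi


def logprob(sent, tri, bi, vocab_size):
    """Add-one smoothed trigram log-probability, normalised by length."""
    toks = ["<s>", "<s>"] + sent + ["</s>"]
    lp = 0.0
    for i in range(len(toks) - 2):
        t = tuple(toks[i:i + 3])
        lp += math.log((tri[t] + 1) / (bi[t[:2]] + vocab_size))
    return lp / (len(sent) + 1)


def rank_outputs(outputs, tri, bi, vocab_size):
    """Rank (engine, tokens) pairs best-first by LM score; no references needed."""
    return sorted(outputs, key=lambda o: -logprob(o[1], tri, bi, vocab_size))
```

The model never consults a reference translation: a more fluent output simply receives a higher language-model score, so engines can be ranked even when human references are unavailable.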
Pagination: xix, 211 p.
URI: http://hdl.handle.net/10603/17502
Appears in Departments:Department of Computer Science

Files in This Item:
File                              Size       Format
01_title.pdf                      30.84 kB   Adobe PDF
02_dedication.pdf                 23.88 kB   Adobe PDF
03_certificates.pdf               275.61 kB  Adobe PDF
04_acknowledgement.pdf            13.21 kB   Adobe PDF
05_abstract.pdf                   9.13 kB    Adobe PDF
06_contents.pdf                   38.31 kB   Adobe PDF
07_list of figures tables.pdf     85.63 kB   Adobe PDF
08_abbreviations.pdf              9.52 kB    Adobe PDF
09_chapter 1.pdf                  126.77 kB  Adobe PDF
10_chapter 2.pdf                  278.41 kB  Adobe PDF
11_chapter 3.pdf                  254.49 kB  Adobe PDF
12_chapter 4.pdf                  236.81 kB  Adobe PDF
13_chapter 5.pdf                  492.09 kB  Adobe PDF
14_chapter 6.pdf                  286.37 kB  Adobe PDF
15_chapter 7.pdf                  378.41 kB  Adobe PDF
16_chapter 8.pdf                  771.35 kB  Adobe PDF
17_chapter 9.pdf                  29.95 kB   Adobe PDF
18_references.pdf                 35.06 kB   Adobe PDF
19_appendix a.pdf                 56.61 kB   Adobe PDF
20_appendix b.pdf                 1.63 MB    Adobe PDF
Items in Shodhganga are protected by copyright, with all rights reserved, unless otherwise indicated.