Please use this identifier to cite or link to this item:
http://hdl.handle.net/10603/17502
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.coverage.spatial | Computer Science | en_US |
dc.date.accessioned | 2014-03-25T08:56:39Z | - |
dc.date.available | 2014-03-25T08:56:39Z | - |
dc.date.issued | 2014-03-25 | - |
dc.identifier.uri | http://hdl.handle.net/10603/17502 | - |
dc.description.abstract | Research in Machine Translation (MT) evaluation has been a prominent area of study since the beginning of the field, with early MT systems evaluated only by human judges. This thesis studies the various methods used in MT evaluation. Many evaluation campaigns have been undertaken for English as a target language, but evaluation of English-Hindi MT has received far less attention; we have therefore focused our evaluations on this language pair. The basic motivation for this study was to help managers of MT projects, who often must wait for human evaluations to complete before they can assess an MT system. This process takes days to finish, which hinders development. Using the methods suggested in this thesis, a system manager can easily analyze the performance of a system, which helps in keeping up with deadlines. We have developed a human evaluation metric that provides results on par with popular human adequacy and fluency metrics and, further, better explains why certain translations are better or worse. We have also studied various automatic evaluation metrics across linguistic levels and correlated their results with human evaluation, using both single and multiple reference translations. Further, we have used combinations of these metrics to obtain better correlations with human evaluations. We have also performed statistical significance testing on the results produced by the automatic evaluation metrics to verify whether the results are genuinely valid or merely good by chance. At times it is very difficult to obtain reference translations, and without reference translations no such metric can produce evaluation results; thus, we have also examined measures to evaluate MT systems without human references. For this we incorporated a trigram language model that evaluates translations by ranking the outputs of various MT engines. | en_US |
dc.format.extent | xix, 211 p. | en_US |
dc.language | English | en_US |
dc.relation | No. of references 95 | en_US |
dc.rights | self | en_US |
dc.title | Implications of linguistic feature based evaluation in improving machine translation quality: a case of English to Hindi machine translation | en_US |
dc.title.alternative | | en_US |
dc.creator.researcher | Joshi, Nisheeth | en_US |
dc.subject.keyword | MT Metrology | en_US |
dc.subject.keyword | Human Evaluation | en_US |
dc.subject.keyword | Automatic Evaluation | en_US |
dc.subject.keyword | Statistical Significance Testing | en_US |
dc.subject.keyword | Metric Combination based MT Evaluation | en_US |
dc.subject.keyword | Supervised Machine Learning | en_US |
dc.subject.keyword | Language Model based MT Evaluation | en_US |
dc.description.note | Abstract p. v, References p. 199-205, Appendices p. 206-211 | en_US |
dc.contributor.guide | Darbari, Hemant | en_US |
dc.publisher.place | Banasthali | en_US |
dc.publisher.university | Banasthali University | en_US |
dc.publisher.institution | Department of Computer Science | en_US |
dc.date.registered | 12/12/2007 | en_US |
dc.date.completed | 10/02/2013 | en_US |
dc.date.awarded | 21/09/2013 | en_US |
dc.format.dimensions | -- | en_US |
dc.format.accompanyingmaterial | None | en_US |
dc.type.degree | Ph.D. | en_US |
dc.source.selfsubmission | Self Submission | en_US |
Appears in Departments: | Department of Computer Science |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
01_title.pdf | Attached File | 30.84 kB | Adobe PDF | View/Open |
02_dedication.pdf | | 23.88 kB | Adobe PDF | View/Open |
03_certificates.pdf | | 275.61 kB | Adobe PDF | View/Open |
04_acknowledgement.pdf | | 13.21 kB | Adobe PDF | View/Open |
05_abstract.pdf | | 9.13 kB | Adobe PDF | View/Open |
06_contents.pdf | | 38.31 kB | Adobe PDF | View/Open |
07_list of figures tables.pdf | | 85.63 kB | Adobe PDF | View/Open |
08_abbreviations.pdf | | 9.52 kB | Adobe PDF | View/Open |
09_chapter 1.pdf | | 126.77 kB | Adobe PDF | View/Open |
10_chapter 2.pdf | | 278.41 kB | Adobe PDF | View/Open |
11_chapter 3.pdf | | 254.49 kB | Adobe PDF | View/Open |
12_chapter 4.pdf | | 236.81 kB | Adobe PDF | View/Open |
13_chapter 5.pdf | | 492.09 kB | Adobe PDF | View/Open |
14_chapter 6.pdf | | 286.37 kB | Adobe PDF | View/Open |
15_chapter 7.pdf | | 378.41 kB | Adobe PDF | View/Open |
16_chapter 8.pdf | | 771.35 kB | Adobe PDF | View/Open |
17_chapter 9.pdf | | 29.95 kB | Adobe PDF | View/Open |
18_references.pdf | | 35.06 kB | Adobe PDF | View/Open |
19_appendix a.pdf | | 56.61 kB | Adobe PDF | View/Open |
20_appendix b.pdf | | 1.63 MB | Adobe PDF | View/Open |
Items in Shodhganga are licensed under Creative Commons Licence Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).