Please use this identifier to cite or link to this item:
http://hdl.handle.net/10603/367053
Title: | Near Duplicate Image Retrieval using Features Extracted from Convolutional Neural Network at Multilevel Blocks |
Researcher: | MEHTA TEJAS BIPINBHAI |
Guide(s): | Bhensdadia C.K. |
Keywords: | Computer Science; Computer Science Interdisciplinary Applications; Engineering and Technology |
University: | Charotar University of Science and Technology |
Completed Date: | 2021 |
Abstract: | Near duplicate image retrieval aims to obtain identical or near-identical images/videos acquired from different cameras or viewpoints, under different lighting conditions, or subjected to various editing operations such as addition, deletion, content modification, or changes in foreground or background objects.

Matching only local features does not necessarily identify visually similar images, while global features are fast to match but may give less accurate results. Our retrieval task therefore matches image pairs at both the local and the global level; matching local image patches while considering neighbors at different levels adds robustness to the retrieval model. Two approaches are introduced to match features at the local and global level. An adaptive approach starts from local patch matching and then recursively enlarges the window of the neighboring region to perform matching. Alternatively, features from local patches at different levels, along with global features, are extracted and stored for matching at a later stage.

Traditional hand-crafted features have been widely used in many image retrieval techniques. With the advent of the Convolutional Neural Network (CNN), features extracted from such networks have proved robust in various computer vision tasks, including near duplicate image retrieval. Our approach combines traditional features, namely Speeded-Up Robust Features (SURF), with features extracted from a CNN.

Images are segmented into fixed-sized blocks, and neighboring regions are then extracted for matching. In our first approach, we match SURF features first and then match CNN features. In our second approach, we use SURF feature points to detect local regions and extract CNN features from those regions for matching. |
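The abstract only outlines the pipeline. As an illustration of how the multilevel-block and SURF-guided variants might be realised, a minimal Python sketch follows. It assumes OpenCV (a contrib build for SURF, falling back to ORB if the non-free module is unavailable; the fallback is our substitution, not the thesis's method) and torchvision ≥ 0.13. The ResNet-18 backbone, grid levels, patch size, Hessian threshold, and the file name `query.jpg` are illustrative assumptions, not the author's actual settings.

```python
import cv2
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T

# Assumption: a ResNet-18 truncated before its classifier stands in for the
# CNN feature extractor (the thesis does not name the backbone here).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])


def keypoint_detector():
    # SURF lives in OpenCV's non-free contrib module; fall back to ORB when
    # it is not compiled in. hessianThreshold=400 is an illustrative value.
    try:
        return cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    except (AttributeError, cv2.error):
        return cv2.ORB_create(nfeatures=500)


def multilevel_blocks(image, levels=(1, 2, 4)):
    """Split the image into a level x level grid of fixed-sized blocks.
    Level 1 yields the whole image (global features); deeper levels yield
    progressively smaller local blocks, mirroring the multilevel-block idea."""
    h, w = image.shape[:2]
    for level in levels:
        bh, bw = h // level, w // level
        for r in range(level):
            for c in range(level):
                yield level, (r, c), image[r * bh:(r + 1) * bh,
                                           c * bw:(c + 1) * bw]


@torch.no_grad()
def cnn_feature(patch_bgr):
    """L2-normalised 512-dim CNN descriptor for one block or patch."""
    rgb = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2RGB)
    x = preprocess(rgb).unsqueeze(0)              # 1 x 3 x 224 x 224
    f = backbone(x).squeeze(0)                    # 512-dim vector
    return torch.nn.functional.normalize(f, dim=0).numpy()


def surf_guided_regions(image_bgr, patch_size=64, max_regions=20):
    """Second approach: SURF/ORB keypoints locate local regions; a CNN
    descriptor is then extracted from a fixed-sized patch around each one."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    kps = keypoint_detector().detect(gray, None)
    kps = sorted(kps, key=lambda k: -k.response)[:max_regions]
    half = patch_size // 2
    feats = []
    for kp in kps:
        x, y = int(kp.pt[0]), int(kp.pt[1])
        x0, y0 = max(0, x - half), max(0, y - half)
        patch = image_bgr[y0:y0 + patch_size, x0:x0 + patch_size]
        if patch.shape[0] and patch.shape[1]:
            feats.append(cnn_feature(patch))
    return np.stack(feats) if feats else np.empty((0, 512))


if __name__ == "__main__":
    img = cv2.imread("query.jpg")                 # hypothetical input image
    if img is not None:
        block_feats = [cnn_feature(b) for _, _, b in multilevel_blocks(img)]
        region_feats = surf_guided_regions(img)
        print(len(block_feats), region_feats.shape)
```

The stored block-level and region-level descriptors would then be compared between a query and candidate images (e.g. by cosine similarity), which is where the adaptive neighbor-window matching described in the abstract would come in; that matching stage is not sketched here.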
Pagination: | |
URI: | http://hdl.handle.net/10603/367053 |
Appears in Departments: | Faculty of Technology and Engineering |
Files in This Item:
File | Size | Format
---|---|---
80_recommendation.pdf | 383.14 kB | Adobe PDF
chapter-1.pdf | 231.28 kB | Adobe PDF
chapter-2.pdf | 697.08 kB | Adobe PDF
chapter-3.pdf | 378.76 kB | Adobe PDF
chapter-4.pdf | 1.03 MB | Adobe PDF
chapter-5.pdf | 908.09 kB | Adobe PDF
chapter-6.pdf | 2.18 MB | Adobe PDF
file1 - title page.pdf | 84.8 kB | Adobe PDF
file2 - certificate.pdf | 83.49 kB | Adobe PDF
file3 - preliminary pages.pdf | 128.19 kB | Adobe PDF
full thesis.pdf | 5.1 MB | Adobe PDF
Items in Shodhganga are licensed under Creative Commons Licence Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).