Please use this identifier to cite or link to this item:
http://hdl.handle.net/10603/520161
Title: Generative and adversarial learning for object recognition
Researcher: Verma, Astha
Guide(s): Subramanyam, A V and Shah, Rajiv Ratn
Keywords: Engineering; Engineering and Technology; Engineering Electrical and Electronic
University: Indraprastha Institute of Information Technology, Delhi (IIIT-Delhi)
Completed Date: 2023
Abstract: Generative modeling and adversarial learning have significantly advanced the field of computer vision, particularly in object recognition and synthesis, unsupervised domain adaptation, and adversarial attacks and defenses. These techniques have enabled the creation of more accurate and robust models for critical applications. In particular, we develop algorithms for fine-grained object recognition (re-identification, Re-ID) and classification tasks. Re-ID involves matching objects across non-overlapping cameras, which is challenging due to visual recognition hurdles such as pose change, occlusion, illumination variation, low resolution, and modality differences. Object classification, on the other hand, aims to categorize input data into pre-defined classes using patterns learned from training data. In this context, our thesis is motivated by the potential of generative modeling to synthesize novel human views, which can be used for unsupervised learning of Re-ID models. Unsupervised Re-ID suffers from domain discrepancies between the labeled source and unlabeled target domains. Existing methods adapt the model using augmented samples, either by translating source samples or by assigning pseudo-labels to the target. However, translation methods may lose identity details, while label assignment may produce noisy labels. Our approach is distinct in that it decouples the ID and non-ID features in a cyclic manner, which promotes better adaptation to pose and background and thereby yields richer novel views. This approach can improve the accuracy of Re-ID models on the unlabeled target domain, enhancing their robustness in real-world settings. Furthermore, we analyze the robustness of Re-ID and classification models and propose adversarial attack and defense methods to enhance their reliability. Adversarial attacks are a malicious technique that manipulates input data to cause machine learning models to make incorrect predictions or classifications.
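The adversarial attacks the abstract refers to can be illustrated with a minimal sketch of the classic Fast Gradient Sign Method (FGSM) — this is a generic illustration, not the attack method proposed in the thesis. The model here is a hypothetical logistic-regression classifier with fixed weights, chosen only so the loss gradient has a closed form.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """FGSM on the binary cross-entropy loss of a logistic-regression model.
    For this model, grad_x L = (sigmoid(w.x + b) - y) * w, so the attack
    perturbs x by eps in the direction of the gradient's sign."""
    p = sigmoid(w @ x + b)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

# Hypothetical fixed model and a clean input with true label 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.5])
y = 1.0

p_clean = sigmoid(w @ x + b)          # model confidence on the clean input
x_adv = fgsm(x, y, w, b, eps=0.5)     # adversarially perturbed input
p_adv = sigmoid(w @ x_adv + b)        # confidence drops after the attack
```

A small perturbation aligned with the loss gradient is enough to reduce the model's confidence in the true class, which is the failure mode the thesis's defense methods are meant to guard against.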
Pagination: 165 p.
URI: http://hdl.handle.net/10603/520161
Appears in Departments: Electronics and Communication Engineering
Files in This Item:
File | Size | Format
---|---|---
01_title.pdf | 50.46 kB | Adobe PDF
02_prelim pages.pdf | 364.7 kB | Adobe PDF
03_content.pdf | 63.75 kB | Adobe PDF
04_abstract.pdf | 48.79 kB | Adobe PDF
05_chapter 1.pdf | 626.91 kB | Adobe PDF
06_chapter 2.pdf | 437.22 kB | Adobe PDF
07_chapter 3.pdf | 3.7 MB | Adobe PDF
08_chapter 4.pdf | 1.18 MB | Adobe PDF
09_chapter 5.pdf | 489.14 kB | Adobe PDF
10_annexures.pdf | 170.91 kB | Adobe PDF
11_chapter 6.pdf | 577.65 kB | Adobe PDF
12_chapter 7.pdf | 50.89 kB | Adobe PDF
80_recommendation.pdf | 140.99 kB | Adobe PDF
Items in Shodhganga are licensed under Creative Commons Licence Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).