Please use this identifier to cite or link to this item: http://hdl.handle.net/10603/520161
Title: Generative and adversarial learning for object recognition
Researcher: Verma, Astha
Guide(s): Subramanyam, A V and Shah, Rajiv Ratn
Keywords: Engineering
Engineering and Technology
Engineering Electrical and Electronic
University: Indraprastha Institute of Information Technology, Delhi (IIIT-Delhi)
Completed Date: 2023
Abstract: Generative modeling and adversarial learning have significantly advanced the field of computer vision, particularly in object recognition and synthesis, unsupervised domain adaptation, and adversarial attacks and defenses. These techniques have enabled the creation of more accurate and robust models for critical applications. In particular, we develop algorithms for fine-grained object recognition (re-identification, or Re-ID) and classification tasks. Re-ID involves matching objects across non-overlapping cameras, which is challenging due to visual recognition hurdles such as pose change, occlusion, illumination variation, low resolution, and modality differences. Object classification, on the other hand, aims to categorize input data into pre-defined classes using patterns learned from training data. In this context, our thesis is motivated by the potential of generative modeling to synthesize novel human views, which can be used for unsupervised learning of Re-ID models. Unsupervised Re-ID suffers from domain discrepancies between the labeled source and unlabeled target domains. Existing methods adapt the model using augmented samples, either by translating source samples or by assigning pseudo labels to the target. However, translation methods may lose identity details, while label assignment may produce noisy labels. Our approach is distinct in that it decouples the ID and non-ID features in a cyclic manner, which promotes better adaptation to pose and background and thereby yields richer novel views. This approach could improve the accuracy of Re-ID models on the unlabeled target domain, thus enhancing their robustness in real-world settings. Furthermore, we aim to analyze the robustness of Re-ID and classification models and propose adversarial attack and defense methods to enhance their reliability.
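The cross-camera matching step that Re-ID performs can be sketched minimally as a nearest-neighbor search over feature embeddings. The sketch below is purely illustrative and not the thesis's method: the hand-made 3-d vectors, identity names, and similarity threshold-free top-1 ranking stand in for learned CNN features and a real gallery.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Gallery: identity label -> embedding captured by camera B (toy values).
gallery = {
    "person_1": [0.9, 0.1, 0.2],
    "person_2": [0.1, 0.8, 0.3],
    "person_3": [0.2, 0.2, 0.9],
}

def reid_match(query, gallery):
    # Rank gallery identities by similarity to the query; top-1 is the match.
    ranked = sorted(gallery.items(),
                    key=lambda kv: cosine(query, kv[1]),
                    reverse=True)
    return ranked[0][0]

# Query: embedding of (presumably) the same person seen from camera A.
query = [0.85, 0.15, 0.25]
print(reid_match(query, gallery))  # → person_1
```

The visual hurdles listed above (pose change, occlusion, low resolution) matter precisely because they perturb the query embedding away from its gallery counterpart, degrading this ranking.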
An adversarial attack is a malicious technique that manipulates input data to cause a machine learning model to make incorrect predictions or classifications.
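A minimal sketch of such an attack, using the classic fast gradient sign method (FGSM) on a toy logistic-regression model rather than any method from this thesis; the weights, input, and epsilon are all illustrative:

```python
import math

# Toy logistic-regression "model" with fixed weights (illustrative values).
w = [2.0, -3.0, 1.5]
x = [0.5, 0.2, -0.1]   # clean input
y = 1                  # true label

def predict(x):
    # Sigmoid probability of the true class (class 1).
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

def input_gradient(x, y):
    # For logistic regression with cross-entropy loss,
    # d(loss)/dx = (p - y) * w.
    p = predict(x)
    return [(p - y) * wi for wi in w]

def fgsm(x, y, eps=0.3):
    # FGSM: perturb each input coordinate by eps in the direction
    # of the sign of the loss gradient.
    g = input_gradient(x, y)
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, g)]

x_adv = fgsm(x, y)
# The perturbed input lowers the model's confidence in the true class.
print(predict(x), predict(x_adv))
```

Attack methods proposed against Re-ID and classification models follow this same principle of gradient-guided perturbation, though with task-specific objectives; defenses aim to keep predictions stable under such perturbations.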
Pagination: 165 p.
URI: http://hdl.handle.net/10603/520161
Appears in Departments:Electronics and Communication Engineering

Files in This Item:
File                     Size       Format
01_title.pdf             50.46 kB   Adobe PDF
02_prelim pages.pdf      364.7 kB   Adobe PDF
03_content.pdf           63.75 kB   Adobe PDF
04_abstract.pdf          48.79 kB   Adobe PDF
05_chapter 1.pdf         626.91 kB  Adobe PDF
06_chapter 2.pdf         437.22 kB  Adobe PDF
07_chapter 3.pdf         3.7 MB     Adobe PDF
08_chapter 4.pdf         1.18 MB    Adobe PDF
09_chapter 5.pdf         489.14 kB  Adobe PDF
10_annexures.pdf         170.91 kB  Adobe PDF
11_chapter 6.pdf         577.65 kB  Adobe PDF
12_chapter 7.pdf         50.89 kB   Adobe PDF
80_recommendation.pdf    140.99 kB  Adobe PDF


Items in Shodhganga are licensed under Creative Commons Licence Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).
