Please use this identifier to cite or link to this item: http://hdl.handle.net/10603/566313
Full metadata record
DC Field: Value
dc.coverage.spatial:
dc.date.accessioned: 2024-05-24T05:36:04Z
dc.date.available: 2024-05-24T05:36:04Z
dc.identifier.uri: http://hdl.handle.net/10603/566313
dc.description.abstract: Deep Neural Networks (DNNs) have achieved remarkable success across various machine learning and computer vision tasks, especially when abundant training samples are available. In Convolutional Neural Network (CNN) research, it has been established that a model's generalization capability improves with the combination of complex architectures, strong regularization, domain-specific loss functions, and extensive databases. However, training DNNs in environments with limited data remains a significant challenge, calling for attention from the research community. Many applications lack the volume of data needed to train models effectively. Data constraints in this context arise from factors such as 1) a scarcity of domain experts, 2) long-tail distributions in large datasets, 3) insufficient domain-specific data, and 4) the challenge of mimicking human cognition and learning. These issues are common challenges encountered while designing deep models, underscoring the importance of addressing Data Constrained Learning (DCL). This thesis investigates the formulation of deep learning strategies explicitly tailored for DCL scenarios. The objective is to ensure that training a large number of parameters does not adversely affect the model's ability to learn meaningful patterns, as this could elevate the risk of overfitting and result in suboptimal generalization performance. To address the DCL challenge, we introduce a novel strength parameter in a deep learning framework named SSF-CNN, which concentrates on learning both the "structure" and "strength" of filters. The filter structure is initialized using a dictionary-based filter learning algorithm, while the strength is learned under data-constrained settings. This architecture demonstrates adaptability, delivering robust performance even with small databases and consistently attaining high accuracy. We validate the effectiveness of our algorithm on databases such as MNIST, CIFAR10, and NORB, with varying training sample sizes. The results indicate
dc.format.extent: 168 p.
dc.language: English
dc.relation:
dc.rights: university
dc.title: Data constrained deep learning
dc.title.alternative:
dc.creator.researcher: Keshari, Rohit
dc.subject.keyword: Computer Science
dc.subject.keyword: Computer Science Artificial Intelligence
dc.subject.keyword: Engineering and Technology
dc.description.note:
dc.contributor.guide: Singh, Richa and Vatsa, Mayank
dc.publisher.place: Delhi
dc.publisher.university: Indraprastha Institute of Information Technology, Delhi (IIIT-Delhi)
dc.publisher.institution: Computer Science and Engineering
dc.date.registered:
dc.date.completed: 2024
dc.date.awarded: 2024
dc.format.dimensions: 29 cm.
dc.format.accompanyingmaterial: None
dc.source.university: University
dc.type.degree: Ph.D.
Appears in Departments: Department of Computer Science and Engineering
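The abstract above describes SSF-CNN as decomposing each convolutional filter into a fixed "structure" (initialized by dictionary-based filter learning) and a learnable "strength" trained under data constraints. As a rough illustration only, here is a minimal PyTorch-style sketch of that decomposition; the class name, shapes, and initialization are assumptions made for this example and are not taken from the thesis.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StrengthConv2d(nn.Module):
    """Convolution whose filter "structure" is held fixed (e.g. initialized
    from dictionary-learned atoms) and whose per-filter "strength" is the
    only trainable parameter. Hypothetical sketch, not the thesis code."""

    def __init__(self, structure: torch.Tensor, stride: int = 1, padding: int = 1):
        super().__init__()
        # structure: (out_channels, in_channels, k, k) fixed filter shapes
        self.register_buffer("structure", structure)
        out_channels = structure.shape[0]
        # one learnable scalar strength per output filter
        self.strength = nn.Parameter(torch.ones(out_channels, 1, 1, 1))
        self.stride = stride
        self.padding = padding

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # effective filter = strength * structure; gradients reach only `strength`
        weight = self.strength * self.structure
        return F.conv2d(x, weight, stride=self.stride, padding=self.padding)

# Usage: in practice the atoms would come from a dictionary-learning step;
# random tensors stand in for them here.
atoms = torch.randn(16, 3, 3, 3)           # 16 filters over 3 input channels
layer = StrengthConv2d(atoms)
out = layer(torch.randn(8, 3, 32, 32))     # -> shape (8, 16, 32, 32)
```

Because the structure is frozen and only one scalar per filter is trained, the number of trainable parameters is drastically reduced, which reflects the abstract's motivation of avoiding overfitting when training data is scarce.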

Files in This Item:
File | Description | Size | Format
01-title.pdf | Attached File | 54.64 kB | Adobe PDF
02_prelim pages.pdf | | 424.52 kB | Adobe PDF
03_content.pdf | | 99.43 kB | Adobe PDF
04_abstract.pdf | | 67.6 kB | Adobe PDF
05_chapter 1.pdf | | 2.17 MB | Adobe PDF
06_chapter 2.pdf | | 1.85 MB | Adobe PDF
07_chapter 3.pdf | | 1.5 MB | Adobe PDF
08_chapter 4.pdf | | 1.64 MB | Adobe PDF
09_chapter 5.pdf | | 380.81 kB | Adobe PDF
10_annexures.pdf | | 555.26 kB | Adobe PDF
80_recommendation.pdf | | 560.84 kB | Adobe PDF


Items in Shodhganga are licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Licence (CC BY-NC-SA 4.0).
