Please use this identifier to cite or link to this item:
http://hdl.handle.net/10603/602993
Title: Multimodal Machine Learning for an Efficient Information Retrieval Step into Next Generation Computing
Researcher: Saklani, Avantika
Guide(s): Tiwari, Shailendra and Pannu, H S
Keywords: Computer Science; Computer Science Information Systems; Engineering and Technology; Information retrieval; Machine learning
University: Thapar Institute of Engineering and Technology
Completed Date: 2024
Abstract: Living creatures perceive the external environment, including their own bodies, through sensory information, or modalities, such as vision, touch and hearing. Because the environment is so rich, a single modality rarely provides complete knowledge about a phenomenon of interest; when several senses are engaged in processing information, we gain a better understanding. The increasing availability of multiple modalities for the same space provides new degrees of freedom for modality fusion. Fusion of modalities is the process of combining features from different sources to obtain complementary information from each. This dissertation focuses on information fusion of multimodal data to provide high accuracy, scalability and enhanced performance for various tasks. In this research work we integrate the visual and linguistic modalities to build machine learning models with improved decision making. To this end, we propose three different frameworks for multimodal classification. The primary focus is on developing robust frameworks that use deep learning architectures to improve multimodal classification accuracy and efficiency. In the first proposed work we address the challenge of effectively fusing features to improve food classification accuracy. The proposed model is evaluated on the UPMC Food 101 dataset and a newly created Bharatiya Food dataset. It involves feature extraction using a fine-tuned Inception-v4 for the visual component and RoBERTa for the associated text, followed by early-stage fusion to integrate these features effectively. The second proposed work introduces the Deep Attentive Multimodal Fusion Network (DAMFN), which improves on the previous multimodal food classification model. Two significant improvements are made: the feature extraction model for the visual component is updated, and the size of the newly developed dataset is increased. The model
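The early-stage fusion described in the abstract can be illustrated with a minimal sketch: image features from an Inception-v4 backbone and text features from RoBERTa are concatenated into a single vector before a shared classification head. This is an illustrative reconstruction, not the thesis code; it assumes PyTorch with the `timm` and HuggingFace `transformers` libraries, and the classifier head sizes, dropout rate and feature dimensions (1536 for Inception-v4, 768 for roberta-base) are assumptions for the sketch.

```python
# Illustrative sketch of early-stage fusion of visual (Inception-v4) and
# textual (RoBERTa) features; layer sizes and names are assumptions,
# not taken from the dissertation.
import torch
import torch.nn as nn
import timm
from transformers import RobertaModel, RobertaTokenizer


class EarlyFusionClassifier(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        # Inception-v4 backbone with the classification head removed
        # (num_classes=0 makes timm return pooled 1536-d features).
        self.vision = timm.create_model("inception_v4", pretrained=True, num_classes=0)
        # RoBERTa-base text encoder (768-d hidden states).
        self.text = RobertaModel.from_pretrained("roberta-base")
        fused_dim = 1536 + 768
        # Early fusion: concatenate the two feature vectors, then classify.
        self.classifier = nn.Sequential(
            nn.Linear(fused_dim, 512),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(512, num_classes),
        )

    def forward(self, images, input_ids, attention_mask):
        img_feat = self.vision(images)                        # (B, 1536)
        txt_out = self.text(input_ids=input_ids,
                            attention_mask=attention_mask)
        txt_feat = txt_out.last_hidden_state[:, 0]            # (B, 768), <s> token
        fused = torch.cat([img_feat, txt_feat], dim=1)        # early-stage fusion
        return self.classifier(fused)


if __name__ == "__main__":
    tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
    model = EarlyFusionClassifier(num_classes=101)            # e.g. UPMC Food-101
    images = torch.randn(2, 3, 299, 299)                      # Inception-v4 input size
    enc = tokenizer(["grilled cheese sandwich", "masala dosa recipe"],
                    padding=True, return_tensors="pt")
    logits = model(images, enc["input_ids"], enc["attention_mask"])
    print(logits.shape)                                       # torch.Size([2, 101])
```

The design choice shown here is the defining property of early fusion: modalities are combined at the feature level before any decision is made, in contrast to late fusion, which combines per-modality predictions afterwards.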
Pagination: xiv, 139p.
URI: http://hdl.handle.net/10603/602993
Appears in Departments: Department of Computer Science and Engineering
Files in This Item:
File | Size | Format
---|---|---
01_title.pdf | 125.48 kB | Adobe PDF
02_prelimpages.pdf | 592.41 kB | Adobe PDF
03_content.pdf | 63.67 kB | Adobe PDF
04_abstract.pdf | 75.86 kB | Adobe PDF
05_chapter 1.pdf | 2.71 MB | Adobe PDF
06_chapter 2.pdf | 95.27 kB | Adobe PDF
07_chapter 3.pdf | 258.72 kB | Adobe PDF
08_chapter 4.pdf | 1.08 MB | Adobe PDF
09_chapter 5.pdf | 8.16 MB | Adobe PDF
10_chapter 6.pdf | 5.46 MB | Adobe PDF
11_chapter 7.pdf | 52.52 kB | Adobe PDF
12_annexure.pdf | 134.64 kB | Adobe PDF
80_recommendation.pdf | 157.29 kB | Adobe PDF
Items in Shodhganga are licensed under Creative Commons Licence Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).