Please use this identifier to cite or link to this item: http://hdl.handle.net/10603/547764
Title: Optimized DNN Architecture for Vision Based Applications on Embedded Platforms
Researcher: Kulkarni, Uday N
Guide(s): Meena, S M
Keywords: Computer Science
Computer Science Artificial Intelligence
Engineering and Technology
University: KLE Technological University
Completed Date: 2023
Abstract: Deep Neural Networks (DNNs) have demonstrated exceptional performance across various domains, particularly in image and computer vision tasks. DNN algorithms rely on powerful multilevel feature extraction, which results in an extensive number of parameters and a large memory footprint. These demands pose challenges when deploying such models on resource-constrained embedded platforms for real-time applications. We present a comprehensive study of model optimization techniques aimed at addressing the memory bandwidth requirements, memory footprint, and power consumption of DNN deployment on embedded platforms.
We explore the key issues associated with DNN model deployment on resource-constrained devices, highlighting the importance of memory efficiency and energy consumption. We survey DNN models that can be ported to embedded platforms using the ImageNet dataset; among current DNN architectures, MobileNet proves to be the best suited for porting to edge devices. We also survey model optimization methods, including quantization and pruning, which significantly reduce memory requirements without compromising performance. In addition to model optimization, the dissertation delves into the design of efficient architectures specifically tailored for embedded platforms. Techniques such as lightweight network architectures, depth-wise separable convolutions, and model parallelism are investigated to reduce computation and memory demands. Moreover, hardware acceleration approaches are explored to leverage specialized hardware units for efficient DNN computations.
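As an illustrative sketch only (not code from the dissertation), the snippet below shows two of the post-training optimizations named in the abstract, pruning and quantization, applied to a MobileNet-style model in PyTorch. The use of torchvision's MobileNetV2, the 30% sparsity target, and the choice of dynamic INT8 quantization of linear layers are assumptions made here for illustration.

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision import models

# Illustrative model choice; the thesis evaluates several architectures.
model = models.mobilenet_v2(weights=None).eval()

# 1) Global unstructured magnitude pruning: zero out the 30% smallest
#    weights across all conv/linear layers, then make the masks permanent.
params_to_prune = [(m, "weight") for m in model.modules()
                   if isinstance(m, (nn.Conv2d, nn.Linear))]
prune.global_unstructured(params_to_prune,
                          pruning_method=prune.L1Unstructured,
                          amount=0.3)  # 30% sparsity is an assumed example value
for module, name in params_to_prune:
    prune.remove(module, name)

# 2) Dynamic quantization: weights of Linear layers are stored as INT8 and
#    dequantized on the fly, shrinking the classifier head's memory footprint.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)

print(quantized.classifier)  # the final Linear is now a dynamically quantized layer

In practice the pruned, quantized model would then be exported (e.g. via TorchScript or an embedded runtime) and profiled on the target device for memory and latency.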
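The depth-wise separable convolution mentioned in the abstract can likewise be sketched as a small PyTorch module: a per-channel (depth-wise) 3x3 convolution followed by a 1x1 point-wise convolution, as used in MobileNet. The channel sizes (32 to 64), kernel size, and input resolution below are illustrative assumptions, not values taken from the thesis.

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depth-wise 3x3 conv (one filter per channel) followed by a 1x1 point-wise conv."""
    def __init__(self, in_channels: int, out_channels: int, stride: int = 1):
        super().__init__()
        # Depth-wise: groups == in_channels, so each channel is filtered independently.
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   stride=stride, padding=1,
                                   groups=in_channels, bias=False)
        # Point-wise: 1x1 conv mixes channels and sets the output width.
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(in_channels)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.relu(self.bn1(self.depthwise(x)))
        return self.relu(self.bn2(self.pointwise(x)))

if __name__ == "__main__":
    block = DepthwiseSeparableConv(32, 64)
    y = block(torch.randn(1, 32, 56, 56))  # e.g. a 56x56 feature map
    print(y.shape)                         # torch.Size([1, 64, 56, 56])

Compared with a standard 3x3 convolution over the same channel widths, this factorization cuts multiply-accumulate operations and parameters by roughly an order of magnitude, which is why it suits embedded deployment.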
Pagination: xx,147
URI: http://hdl.handle.net/10603/547764
Appears in Departments:SCHOOL OF COMPUTER SCIENCE AND ENGINEERING

Files in This Item:
File                      Size        Format
01_title.pdf              17 kB       Adobe PDF
02_prelim pages.pdf       136.18 kB   Adobe PDF
03_content.pdf            80.25 kB    Adobe PDF
04_abstract.pdf           79.92 kB    Adobe PDF
06_chapter_2.pdf          2.33 MB     Adobe PDF
07_chapter_3.pdf          1.96 MB     Adobe PDF
08_chapter_4.pdf          12.67 MB    Adobe PDF
09_chapter_5.pdf          100.74 kB   Adobe PDF
10_annexure.pdf           427.9 kB    Adobe PDF
80_recommendation.pdf     117.35 kB   Adobe PDF


Items in Shodhganga are licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Licence (CC BY-NC-SA 4.0).
