Please use this identifier to cite or link to this item:
http://hdl.handle.net/10603/593015
Title: Underwater image quality enhancement using deep learning based adaptive GAN model
Researcher: Vijay Anandh R
Guide(s): Rukmani Devi S
Keywords: Color Correction Process; Generative Adversarial Networks; Underwater Image
University: Anna University
Completed Date: 2024
Abstract: Underwater image processing is a multidisciplinary field that involves various challenges and offers a wide scope for research and development. The difficulties arise from factors such as light attenuation, scattering, and color distortion caused by the water medium, which degrade image quality and limit accurate analysis and interpretation. The aim of underwater image processing research is to develop effective techniques to overcome these challenges and enhance the visual quality of underwater images. This research addresses these challenges and proposes novel solutions for underwater image processing.

In this work, a technique called Adaptive Weighted Saliency Color Correction (AWSCC) is proposed to enhance the visual quality of underwater images. AWSCC follows a comprehensive workflow that includes conversion to double precision, computation of saliency and weight maps, and adaptive color correction based on salient regions. The saliency map highlights visually significant areas and guides the color correction process. By applying adaptive weights, color correction is intensified in visually salient regions, resulting in improved clarity and a vibrant color appearance. Experimental results demonstrate that AWSCC significantly enhances underwater images, producing visually appealing, high-quality results.

A single-image enhancement model is also proposed that does not rely on external datasets. The method involves two main processes: color restoration and image fusion. The color restoration process corrects degraded colors using veiling-light and transmission-light estimation techniques, and the outputs are then applied to scene recovery in the fusion process.
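The AWSCC workflow outlined in the abstract (double-precision conversion, saliency and weight maps, saliency-guided color correction) can be illustrated with a minimal sketch. This is a hypothetical approximation, not the thesis implementation: the saliency estimate below is a simple frequency-tuned style measure and the color correction is a gray-world gain, both used as stand-ins; the function name `saliency_weighted_color_correction` and the `strength` parameter are illustrative assumptions.

```python
# Hypothetical sketch of a saliency-weighted color correction step, loosely
# following the AWSCC workflow described in the abstract. Not the thesis code.
import cv2
import numpy as np

def saliency_weighted_color_correction(bgr_uint8, strength=1.0):
    # 1. Convert to double precision in [0, 1].
    img = bgr_uint8.astype(np.float64) / 255.0

    # 2. Saliency proxy: distance of each blurred pixel from the mean Lab
    #    colour of the image (frequency-tuned style estimate).
    lab = cv2.cvtColor(bgr_uint8, cv2.COLOR_BGR2LAB).astype(np.float64)
    blurred = cv2.GaussianBlur(lab, (5, 5), 0)
    mean_lab = lab.reshape(-1, 3).mean(axis=0)
    saliency = np.linalg.norm(blurred - mean_lab, axis=2)
    saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)

    # 3. Weight map: emphasise salient regions, scaled by a user-set strength.
    weight = np.clip(strength * saliency, 0.0, 1.0)[..., None]   # H x W x 1

    # 4. Gray-world channel gains as a simple colour-correction stand-in.
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / (channel_means + 1e-8)
    corrected = np.clip(img * gains, 0.0, 1.0)

    # 5. Blend: correction is applied more strongly where saliency is high.
    out = (1.0 - weight) * img + weight * corrected
    return (np.clip(out, 0.0, 1.0) * 255.0).astype(np.uint8)
```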
Pagination: xiv, 132 p.
URI: http://hdl.handle.net/10603/593015
Appears in Departments: Faculty of Information and Communication Engineering
Files in This Item:
File | Description | Size | Format
---|---|---|---
01_title.pdf | Attached File | 50.6 kB | Adobe PDF
02_prelim_pages.pdf | | 2.61 MB | Adobe PDF
03_contents.pdf | | 17.27 kB | Adobe PDF
04_abstracts.pdf | | 15 kB | Adobe PDF
05_chapter1.pdf | | 676.2 kB | Adobe PDF
06_chapter2.pdf | | 210.18 kB | Adobe PDF
07_chapter3.pdf | | 262.42 kB | Adobe PDF
08_chapter4.pdf | | 756.41 kB | Adobe PDF
09_chapter5.pdf | | 434.56 kB | Adobe PDF
10_chapter6.pdf | | 400.63 kB | Adobe PDF
11_chapter7.pdf | | 47.86 kB | Adobe PDF
12_annexures.pdf | | 137.58 kB | Adobe PDF
80_recommendation.pdf | | 71.68 kB | Adobe PDF
Items in Shodhganga are licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Licence (CC BY-NC-SA 4.0).