Please use this identifier to cite or link to this item: http://hdl.handle.net/10603/458501
Title: Vision Based Detection for Driver Assistance System in Intelligent Vehicles
Researcher: Safwan Ghanem
Guide(s): Kanungo, Priyadarshi and Panda, Ganapati
Keywords: Computer Science
Computer Science Artificial Intelligence
Engineering and Technology
University: C.V. Raman Global University
Completed Date: 2022
Abstract: Driver-assistance systems are groups of automated technologies installed in a vehicle that assist the driver in driving and parking functions. The input data for these systems are acquired from sensors and cameras and are then used to detect obstacles or driver failure and take over control to prevent accidents and achieve higher road safety.
Safety features are installed in vehicles to reduce the possibility of accidents or collisions by providing alerts to the driver according to road conditions. Examples of such assistance systems include automated lighting, speed control and keeping, collision avoidance, lane departure warning, and lane centering.
Lane detection under different illumination conditions is a vital part of lane departure warning systems and vehicle localization, which are current trends in future smart cities. Recently, vision-based methods have been proposed to detect lane markers in different road situations, including abnormal marker cases. The majority of lane detection algorithms fail in tunnel scenarios because the artificial colored light makes it hard to binarize the lane markers apart from the other objects on the road.
In this work, a novel lane detection and tracking method is proposed for autonomous vehicles under artificial light in tunnels and on highways. An illumination-invariant method that fulfills real-time requirements is presented to detect lane markers under different light conditions.
The extraction and fitting of lane markers from road images have been addressed in recent research studies. However, existing approaches are still ineffective for curved lanes and colored-light conditions. Illumination changes and the road structure mainly affect the efficiency of lane detection, which may lead to traffic accidents, especially in the case of a curved road.
In this study, a novel method based on a low-complexity but efficient functional link artificial neural network (FLANN) model is proposed to estimate the entire lane by interpolating the lane markers under different road scenarios. The road image is divided into regions, and the lane markers extracted from each region are fed to the proposed trigonometric, polynomial, exponential, and Chebyshev functional expansion-based FLANN models for estimating the lane curvature.
The performance of each model is evaluated and tested on road images from three standard datasets. Among the four FLANN models, the Chebyshev FLANN (CFLNN) outperforms the other three proposed methods in terms of mean accuracy and computational time.
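A minimal Python sketch of how a Chebyshev functional-link network (CFLNN) can interpolate a lane boundary from sparse marker points is given below. The abstract does not specify the expansion order, input normalization, or training rule; the order-4 expansion and LMS update used here are illustrative assumptions, not the thesis's exact formulation.

    # Minimal CFLNN sketch: expand a normalized row coordinate into Chebyshev
    # polynomials and fit linear weights to the lane-marker column positions.
    # Expansion order, normalization, and LMS training are assumptions.
    import numpy as np

    def chebyshev_expand(y, order=4):
        """Expand inputs y in [-1, 1] into Chebyshev polynomials T_0..T_order."""
        t = [np.ones_like(y), y]
        for n in range(2, order + 1):
            t.append(2.0 * y * t[-1] - t[-2])   # recurrence T_n = 2y T_{n-1} - T_{n-2}
        return np.stack(t, axis=-1)             # shape (..., order + 1)

    class CFLNN:
        """Single-layer functional-link net: linear weights over the expanded input."""
        def __init__(self, order=4, lr=0.05):
            self.order, self.lr = order, lr
            self.w = np.zeros(order + 1)

        def fit(self, y_rows, x_cols, epochs=200):
            """LMS training on detected marker points (y_rows normalized to [-1, 1])."""
            phi = chebyshev_expand(y_rows, self.order)
            for _ in range(epochs):
                for p, x in zip(phi, x_cols):
                    err = x - p @ self.w
                    self.w += self.lr * err * p

        def predict(self, y_rows):
            return chebyshev_expand(y_rows, self.order) @ self.w

    # Usage: fit to sparse marker points from one image region, then interpolate
    # the full lane curve over all rows of that region.
    markers_y = np.linspace(-1.0, 1.0, 8)                 # normalized row positions
    markers_x = 0.3 * markers_y**2 + 0.1 * markers_y      # synthetic curved lane
    model = CFLNN(order=4)
    model.fit(markers_y, markers_x)
    lane = model.predict(np.linspace(-1.0, 1.0, 100))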
Deep learning algorithms have recently been used for lane detection. However, challenging conditions such as rain, shadow, and illumination changes reduce the overall performance of vision-based lane detection methods. Multi-task learning and contextual models have been employed to address this problem, but they require manual annotations and introduce extra inference overhead.
A day-to-night image style transfer approach is therefore proposed. This method uses generative adversarial networks (GANs) to render images in low-light conditions, which improves the lane detector's adaptation to different environments.
The proposed solution consists of two parts: data enhancement and a lane detector. Data enhancement is performed using GANs, whereas the You Only Look Once (YOLO) model is employed in the lane detector module. The dimensions of the anchor boxes in YOLO are fine-tuned to better detect lane markings at different scales.
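The abstract states only that the YOLO anchor-box dimensions were fine-tuned to suit lane-marking scales. One common way to do this, sketched below as an assumption rather than the thesis's documented procedure, is YOLOv2-style k-means clustering of ground-truth box sizes with a 1 - IoU distance.

    # Hedged sketch: cluster (width, height) pairs of annotated lane-marking
    # boxes into k anchors using IoU-based k-means. The clustering recipe is
    # an assumption; the thesis does not describe its tuning procedure here.
    import numpy as np

    def iou_wh(boxes, anchors):
        """IoU between boxes and anchors compared by width/height only."""
        w = np.minimum(boxes[:, None, 0], anchors[None, :, 0])
        h = np.minimum(boxes[:, None, 1], anchors[None, :, 1])
        inter = w * h
        union = (boxes[:, 0] * boxes[:, 1])[:, None] + (anchors[:, 0] * anchors[:, 1])[None, :] - inter
        return inter / union

    def kmeans_anchors(boxes, k=9, iters=100, seed=0):
        """Cluster ground-truth box sizes into k anchor dimensions."""
        rng = np.random.default_rng(seed)
        anchors = boxes[rng.choice(len(boxes), k, replace=False)]
        for _ in range(iters):
            assign = np.argmax(iou_wh(boxes, anchors), axis=1)   # nearest anchor by IoU
            for j in range(k):
                members = boxes[assign == j]
                if len(members):
                    anchors[j] = members.mean(axis=0)            # update cluster centre
        return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]

    # Usage: boxes holds (width, height) of annotated lane-marking boxes from
    # the training set, in pixels or normalized units (synthetic data here).
    boxes = np.abs(np.random.default_rng(1).normal([40, 12], [15, 4], size=(500, 2)))
    print(kmeans_anchors(boxes, k=9))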
Pagination: vii, 84
URI: http://hdl.handle.net/10603/458501
Appears in Departments: Department of Computer Science and Engineering

Files in This Item:
File                   Description    Size       Format
80_recommendation.pdf  Attached File  107.05 kB  Adobe PDF
abstract.pdf                          29.21 kB   Adobe PDF
ch-1.pdf                              112.24 kB  Adobe PDF
ch-2.pdf                              661.05 kB  Adobe PDF
ch-3.pdf                              3.19 MB    Adobe PDF
ch-4.pdf                              3.17 MB    Adobe PDF
ch-5.pdf                              3.77 MB    Adobe PDF
content.pdf                           57.82 kB   Adobe PDF
title all.pdf                         606.01 kB  Adobe PDF
title.pdf                             31.87 kB   Adobe PDF


Items in Shodhganga are licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Licence (CC BY-NC-SA 4.0).
