Please use this identifier to cite or link to this item: http://hdl.handle.net/10603/454370
Title: Development of Robust Defending Mechanism against Adversarial Attacks in Classification Models
Researcher: Meenakshi, K
Guide(s): Maragatham G
Keywords: Computer Science
Computer Science Information Systems
Engineering and Technology
University: SRM Institute of Science and Technology
Completed Date: 2022
Abstract: Machine learning plays an important role in various security-related applications such as spam filtering, malware detection, intrusion detection, and biometric authentication. Machine learning algorithms assume that the training and test data follow similar distributions. In practice, data naturally evolve over time, causing the test distribution to drift away from the training distribution, and malicious adversaries may deliberately alter the training data; both effects violate this assumption. Most real-time applications retrain the model on data that arrive dynamically, so an adversary can inject crafted, manipulated data points into the training set (a poisoning attack) that degrade the performance of the machine learning model.
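The poisoning attack described in the abstract can be illustrated with a toy sketch. The classifier, data, and attack below are entirely hypothetical (a minimal nearest-centroid model on synthetic 2-D points, with a simple label-flipping attack); they demonstrate the general idea only, not the thesis's actual models or defence mechanism.

```python
# Illustrative label-flipping poisoning attack on a minimal
# nearest-centroid classifier. Everything here is synthetic and
# hypothetical; it sketches the attack concept from the abstract.

def centroid(points):
    """Mean of a list of equal-length feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def train(data):
    """data: list of (features, label). Returns one centroid per class."""
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    """Assign x to the class whose centroid is nearest."""
    dist2 = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda y: dist2(model[y], x))

def accuracy(model, test):
    return sum(predict(model, x) == y for x, y in test) / len(test)

# Two well-separated classes: class 0 near (0, 0), class 1 near (5, 5).
clean = [([0.1 * i, 0.1 * i], 0) for i in range(10)] + \
        [([5 + 0.1 * i, 5 + 0.1 * i], 1) for i in range(10)]
test_set = [([0.5, 0.5], 0), ([3.2, 3.2], 1), ([5.5, 5.5], 1)]

# Poisoning: the adversary relabels half of the class-1 training points
# as class 0, dragging the class-0 centroid toward class 1 and shifting
# the decision boundary so borderline class-1 points are misclassified.
poisoned = [(x, 0) if y == 1 and i % 2 == 0 else (x, y)
            for i, (x, y) in enumerate(clean)]

print(accuracy(train(clean), test_set))     # 1.0 on this toy data
print(accuracy(train(poisoned), test_set))  # lower: boundary has shifted
```

Even this crude attack moves the learned decision boundary enough to misclassify points near the class margin, which is exactly why models retrained on dynamically arriving data need a defence against manipulated training points.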
Pagination: 
URI: http://hdl.handle.net/10603/454370
Appears in Departments: Department of Computer Science Engineering

Files in This Item:
File                       Size       Format
01_title.pdf               172.34 kB  Adobe PDF
02_preliminary pages.pdf   645.19 kB  Adobe PDF
03_content.pdf             387.03 kB  Adobe PDF
04_abstract.pdf            270.21 kB  Adobe PDF
05_chapter 1.pdf           711.15 kB  Adobe PDF
06_chapter 2.pdf           1.1 MB     Adobe PDF
07_chapter 3.pdf           1.54 MB    Adobe PDF
08_chapter 4.pdf           2.27 MB    Adobe PDF
09_chapter 5.pdf           2.42 MB    Adobe PDF
10_chapter 6.pdf           269.66 kB  Adobe PDF
11_annexures.pdf           948.75 kB  Adobe PDF
80_recommendation.pdf      308.39 kB  Adobe PDF


Items in Shodhganga are licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Licence (CC BY-NC-SA 4.0).
