Please use this identifier to cite or link to this item: http://hdl.handle.net/10603/565655
Title: A Real Time Balanced and Secured Cloud Storage Model for Improving Accessing Efficiency of Big Data
Researcher: Leekha, Alka
Guide(s): Shaikh, Alam and Varaprasad, G
Keywords: Computer Science
Computer Science Software Engineering
Engineering and Technology
University: Visvesvaraya Technological University, Belagavi
Completed Date: 2022
Abstract: In today's digital age, data storage and retrieval are crucial. The main goals of computer security are data availability, data integrity, and data secrecy. Cloud storage has become the primary means of meeting the constant demand for data from diverse devices worldwide. The term "Big Data" emerged from this onslaught of massive volumes and varied types of data flowing from numerous devices such as mobile phones, PDAs, IoT devices, and client machines, at any time of day and with varying velocities. Big data processing is complex because it depends on various tools and approaches, is expensive, and demands expertise. One issue with cloud computing, however, is the potential for massive data duplication and fraudulent information. The issues that large businesses and service providers confront today include security breaches, cost management, performance, migration, backups, segmented usage, and adaptation. Such repetition also places a significant demand on storage space.

This thesis proposes a balanced and secure cloud storage model for improving the access efficiency of Big Data so that these challenges can be minimised. A new SHA-256 variant based on a 64-bit architecture is proposed and used to calculate digital fingerprints of the chunks of data stored in the INS database. Existing algorithms such as MD5 and SHA-1 have been used by various researchers for the same purpose, but they suffer from collision issues in the case of big data. With existing algorithms, uniquely identifying each chunk across such a vast volume and variety of data, with extensive redundancy floating on the servers, is difficult.

Flooding servers with duplicate data decreases access efficiency and increases the cost of maintaining those servers. It also increases the number of servers required and the transfer of load between them, which leads to challenging migration and security issues. Cloud Service Providers (CSPs) frequently use data deduplication techniques to eliminate duplicate data and save storage. Depending on the cloud solution being
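The abstract describes chunk-level fingerprinting used to detect duplicate data before it reaches storage. The sketch below is only a minimal illustration of that general idea: it assumes fixed-size chunking and uses the standard library SHA-256 from Python's hashlib rather than the thesis's proposed 64-bit-oriented SHA-256 variant, and the in-memory index is a hypothetical stand-in for the INS fingerprint database.

    import hashlib

    CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB fixed-size chunks (illustrative choice)

    def chunk_fingerprints(path, chunk_size=CHUNK_SIZE):
        """Yield (offset, SHA-256 hex digest) for each fixed-size chunk of a file."""
        with open(path, "rb") as f:
            offset = 0
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                yield offset, hashlib.sha256(chunk).hexdigest()
                offset += len(chunk)

    class InMemoryFingerprintIndex:
        """Hypothetical stand-in for the INS fingerprint database named in the abstract."""

        def __init__(self):
            self._seen = {}  # fingerprint -> (path, offset) of the first stored copy

        def deduplicate(self, path):
            """Return (new_chunks, duplicate_chunks) counts for a file."""
            new, duplicate = 0, 0
            for offset, digest in chunk_fingerprints(path):
                if digest in self._seen:
                    duplicate += 1               # known fingerprint: store only a reference
                else:
                    self._seen[digest] = (path, offset)
                    new += 1                     # unseen fingerprint: store the chunk itself
            return new, duplicate

In a real deployment the index would live in a persistent store shared by the storage servers, and the fingerprint function would be the stronger hash the thesis proposes; the sketch only shows how per-chunk fingerprints turn duplicate detection into a single lookup per chunk.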
Pagination: 207
URI: http://hdl.handle.net/10603/565655
Appears in Departments:Department of Computer Science and Engineering

Files in This Item:
File                     Size        Format
01_title.pdf             291.09 kB   Adobe PDF
02_prelim pages.pdf      389.26 kB   Adobe PDF
03_content.pdf           41.41 kB    Adobe PDF
04_abstract.pdf          16.19 kB    Adobe PDF
05_chapter 1.pdf         463.23 kB   Adobe PDF
06_chapter 2.pdf         139.77 kB   Adobe PDF
07_chapter 3.pdf         268.79 kB   Adobe PDF
08_chapter 4.pdf         373.31 kB   Adobe PDF
09_chapter 5.pdf         978.95 kB   Adobe PDF
10_annexures.pdf         88.36 kB    Adobe PDF
80_recommendation.pdf    32.4 kB     Adobe PDF


Items in Shodhganga are licensed under Creative Commons Licence Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).
