Please use this identifier to cite or link to this item: http://hdl.handle.net/10603/522356
Title: Improve the efficiency usage of data center storage using fuzzy based deduplication techniques in cloud
Researcher: Rajkumar, K
Guide(s): Dhanakoti, V
Keywords: Center storage
Cloud computing
Computer Science
Computer Science Information Systems
Deduplication techniques
Engineering and Technology
University: Anna University
Completed Date: 2023
Abstract: Cloud computing is the pay-as-you-go delivery of IT services over the internet. Instead of purchasing, operating, and maintaining data centres and servers, organisations can rent computing power, storage, and databases from a cloud provider such as Amazon Web Services on a pay-per-use basis. Cloud computing is rapidly replacing on-premise solutions in practically every firm; whether public, private, or hybrid, it has become a critical component for businesses to stay competitive. Its advantages include cost efficiency, high speed, reliability, and ease of implementation. De-duplication is important in cloud computing because it allows duplicated data, even when encrypted, to be detected with little computation and expense. It helps identify the rightful owner of cloud content and frees unneeded capacity in the cloud data centre: while each file is kept as only one copy in the cloud, a large number of cloud users may possess that same data. To address the de-duplication challenge, an existing solution presented a convergent encryption method together with a mechanism that prevents duplicate data from being stored in the cloud; however, that strategy may not guarantee consistency, dependability, or anonymity, and duplicate files may still be saved on the cloud server by the same or different cloud users, consuming a large amount of storage. To address these issues, this thesis proposes an approach organised in four phases: the data is broken into fixed-length blocks, each block is immediately fingerprinted with a hashing algorithm to support data verification, the fingerprints are indexed using traditional B-tree indexing, and a commonality operation computes a similarity score between files. A fuzzy inference system is then built, with suitable criteria designed for the decision-making process that classifies documents as duplicate or non-duplicate, achieving a more efficient de-duplication ratio than current approaches.
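
The following is a minimal illustrative sketch (not the author's implementation) of the pipeline the abstract describes: fixed-length chunking, hash fingerprinting, an index of known block fingerprints, and a commonality score used to classify an incoming file. The 4 KB block size, the crisp thresholds standing in for the fuzzy inference rules, and the set-based index standing in for B-tree indexing are all assumptions made for illustration only.

    # Sketch of a block-level deduplication check, assuming SHA-256 fingerprints,
    # a 4 KB fixed block size, and crisp thresholds in place of the thesis's
    # fuzzy inference system and B-tree index.
    import hashlib

    CHUNK_SIZE = 4096          # assumed fixed block length
    DUPLICATE_THRESHOLD = 0.9  # assumed cut-offs standing in for fuzzy rules
    SIMILAR_THRESHOLD = 0.5

    def fingerprints(data: bytes, chunk_size: int = CHUNK_SIZE) -> list[str]:
        """Split data into fixed-length blocks and fingerprint each with SHA-256."""
        return [
            hashlib.sha256(data[i:i + chunk_size]).hexdigest()
            for i in range(0, len(data), chunk_size)
        ]

    def commonality(new_fps: list[str], index: set[str]) -> float:
        """Fraction of the file's block fingerprints already present in the index."""
        if not new_fps:
            return 0.0
        return sum(fp in index for fp in new_fps) / len(new_fps)

    def classify(score: float) -> str:
        """Crisp stand-in for the fuzzy decision between duplicate and non-duplicate."""
        if score >= DUPLICATE_THRESHOLD:
            return "duplicate"
        if score >= SIMILAR_THRESHOLD:
            return "near-duplicate"
        return "unique"

    def store(data: bytes, index: set[str]) -> str:
        """Fingerprint a file, score it against stored blocks, and update the index."""
        fps = fingerprints(data)
        verdict = classify(commonality(fps, index))
        index.update(fps)   # only previously unseen fingerprints enlarge the index
        return verdict

    if __name__ == "__main__":
        index: set[str] = set()
        original = b"example payload " * 1000
        print(store(original, index))                  # unique (first upload)
        print(store(original, index))                  # duplicate (all blocks known)
        print(store(original + b"small edit", index))  # near-duplicate

In the sketch, only blocks whose fingerprints are not yet in the index would need to be written to storage; the fuzzy inference step in the thesis replaces the fixed thresholds with rule-based membership criteria for the duplicate/non-duplicate decision.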
Pagination: xvi-160p.
URI: http://hdl.handle.net/10603/522356
Appears in Departments:Faculty of Information and Communication Engineering

Files in This Item:
File                     Size       Format
01_title.pdf             152.18 kB  Adobe PDF
02_prelim pages.pdf      4.34 MB    Adobe PDF
03_content.pdf           184.24 kB  Adobe PDF
04_abstract.pdf          141.64 kB  Adobe PDF
05_chapter 1.pdf         2.15 MB    Adobe PDF
06_chapter 2.pdf         2.45 MB    Adobe PDF
07_chapter 3.pdf         2.15 MB    Adobe PDF
08_chapter 4.pdf         1.35 MB    Adobe PDF
09_chapter 5.pdf         1.13 MB    Adobe PDF
10_chapter 6.pdf         623.93 kB  Adobe PDF
11_annexures.pdf         1.52 MB    Adobe PDF
80_recommendation.pdf    331.64 kB  Adobe PDF


Items in Shodhganga are licensed under Creative Commons Licence Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).
