Please use this identifier to cite or link to this item:
http://hdl.handle.net/10603/522356
Title: | Improve the efficiency usage of data center storage using fuzzy based deduplication techniques in cloud |
Researcher: | Rajkumar, K |
Guide(s): | Dhanakoti, V |
Keywords: | Center storage; Cloud computing; Computer Science; Computer Science Information Systems; Deduplication techniques; Engineering and Technology |
University: | Anna University |
Completed Date: | 2023 |
Abstract: | Cloud computing is the pay-as-you-go delivery of IT services over the internet. Instead of purchasing, operating, and maintaining data centres and servers, organisations can rent computing power, storage, and databases on demand from a cloud provider such as Amazon Web Services. Cloud computing is rapidly replacing on-premise solutions in practically every firm; whether public, private, or hybrid, it has become a critical component for businesses seeking to stay competitive. Its advantages include cost efficiency, high speed, reliability, and ease of implementation. De-duplication is critical in cloud computing because it allows duplication, even of encrypted data, to be detected with little computation and expense. By cleaning up the unneeded capacity in a cloud data centre, de-duplication also helps determine the rightful owners of cloud material: although each file in the cloud is stored as just one copy, a large number of cloud users may possess that data. To address the de-duplication challenge, an existing solution presented a convergent encryption method together with a mechanism that prevents duplicate data from being stored in the cloud. However, that strategy may not guarantee consistency, dependability, or anonymity, and duplicate files may still be saved on the cloud server by the same or different users, since cloud storage consumes a large amount of capacity. To address these issues, this work first proposes a four-phase approach: the data is broken down into fixed-length blocks, each block is immediately fingerprinted by a hashing algorithm to enable data verification, the index is initialised using conventional b-tree indexing, and a commonality operation computes a similarity score between files.
A fuzzy inference system is then built by designing suitable decision-making criteria for classifying documents as duplicate or non-duplicate, achieving a more efficient de-duplication ratio than current approaches. |
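The block-level pipeline the abstract outlines (fixed-length chunking, hash fingerprinting, an index lookup, and a commonality-based similarity score) can be sketched roughly as follows. This is an illustrative sketch only, not the thesis's implementation: the 4 KB block size, the SHA-256 hash, the `store` and `commonality` helpers, and the plain dictionary standing in for the b-tree index are all assumptions made here for clarity.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed fixed block length; the abstract does not specify one


def fingerprint_blocks(data: bytes, block_size: int = BLOCK_SIZE) -> list[str]:
    """Split data into fixed-length blocks and fingerprint each with SHA-256."""
    return [
        hashlib.sha256(data[i:i + block_size]).hexdigest()
        for i in range(0, len(data), block_size)
    ]


def commonality(fps_a: list[str], fps_b: list[str]) -> float:
    """Fraction of shared block fingerprints: a simple similarity score."""
    a, b = set(fps_a), set(fps_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)


# Plain dict used here as a stand-in for the b-tree index in the thesis.
# Maps block fingerprint -> list of file IDs that own that block.
index: dict[str, list[str]] = {}


def store(file_id: str, data: bytes) -> int:
    """Index a file's block fingerprints; return how many blocks were new.

    Blocks whose fingerprint is already indexed are not stored again,
    only attributed to the additional owner (the de-duplication step).
    """
    new_blocks = 0
    for fp in fingerprint_blocks(data):
        owners = index.setdefault(fp, [])
        if not owners:
            new_blocks += 1
        owners.append(file_id)
    return new_blocks
```

A duplicate upload then costs no new storage: after `store("f1", data)` indexes a file's blocks, a second `store("f2", data)` of the same bytes returns 0 new blocks, and `commonality` between the two fingerprint lists is 1.0. A fuzzy inference stage, as described in the abstract, would consume this similarity score rather than a hard threshold when deciding duplicate versus non-duplicate.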
Pagination: | xvi-160p. |
URI: | http://hdl.handle.net/10603/522356 |
Appears in Departments: | Faculty of Information and Communication Engineering |
Files in This Item:
File | Description | Size | Format
---|---|---|---
01_title.pdf | Attached File | 152.18 kB | Adobe PDF
02_prelim pages.pdf | | 4.34 MB | Adobe PDF
03_content.pdf | | 184.24 kB | Adobe PDF
04_abstract.pdf | | 141.64 kB | Adobe PDF
05_chapter 1.pdf | | 2.15 MB | Adobe PDF
06_chapter 2.pdf | | 2.45 MB | Adobe PDF
07_chapter 3.pdf | | 2.15 MB | Adobe PDF
08_chapter 4.pdf | | 1.35 MB | Adobe PDF
09_chapter 5.pdf | | 1.13 MB | Adobe PDF
10_chapter 6.pdf | | 623.93 kB | Adobe PDF
11_annexures.pdf | | 1.52 MB | Adobe PDF
80_recommendation.pdf | | 331.64 kB | Adobe PDF
Items in Shodhganga are licensed under Creative Commons Licence Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).