Please use this identifier to cite or link to this item:
http://hdl.handle.net/10603/594694
Title: | A Novel Approach To Design A Prototype Model For Efficient Scalability In Big Data |
Researcher: | AJEET KUMAR VISHWAKARMA |
Guide(s): | Parveen Kumar |
Keywords: | Computer Science; Computer Science Software Engineering; Engineering and Technology |
University: | Nims University Rajasthan |
Completed Date: | 2021 |
Abstract: | The explosive generation of unstructured and semi-structured data all over the world at an exponential rate is termed Big Data. An enormous volume of data is collected and studied in various domains, and data generated from a range of connected devices is growing at an exponential rate. By 2011, digital information had grown ninefold in volume in just five years [30]. This is the beginning of a revolution that will touch every business and every life on this planet. This humongous production of data brought to the fore the vulnerabilities of relational databases and age-old storage techniques. The most significant problem that cropped up was the storage of this voluminous amount of data, which could be structured, semi-structured, or unstructured in nature, and there was uncertainty in the data that was generated. Storing such characteristically disparate and enormous data was a challenge in itself. Various technical advances were made in the fields of distributed systems and database technologies to store Big Data efficiently.
Major data-engineering systems are designed to manage the ingestion, processing, and exploration of data that is too complex and large for conventional database frameworks. There was also a need to store this data in a distributed manner to keep the technology scalable, as data was being created at a rapid pace. Hadoop rose to the occasion and provided a scalable, fault-tolerant Big Data storage technology. It is an open-source distributed framework for storage, and its MapReduce framework came to light for the analysis of this massive amount of data. Its architecture includes DataNodes to store data and a NameNode to monitor the DataNodes.
This thesis throws light on Big Data and the different technologies that were created to support it. The thesis primarily focuses on Hadoop's current architecture and proposes a model to overcome existing drawbacks, making it more scalable and fault tolerant. The proposed model improves scalability and fault tolerance by introducing IntermediateN |
Pagination: | |
URI: | http://hdl.handle.net/10603/594694 |
Appears in Departments: | Department of Computer Science and Engineering |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
2 prelims.pdf | Attached File | 1.38 MB | Adobe PDF | View/Open |
3 contents.pdf | | 484.67 kB | Adobe PDF | View/Open |
4 abstract.pdf | | 374.88 kB | Adobe PDF | View/Open |
5 biblography.pdf | | 3.82 MB | Adobe PDF | View/Open |
80_recommendation.pdf | | 6.69 kB | Adobe PDF | View/Open |
chapter 1.pdf | | 692.54 kB | Adobe PDF | View/Open |
chapter 2.pdf | | 423.86 kB | Adobe PDF | View/Open |
chapter 3.pdf | | 756.48 kB | Adobe PDF | View/Open |
chapter 4.pdf | | 421.8 kB | Adobe PDF | View/Open |
chapter 5.pdf | | 711.33 kB | Adobe PDF | View/Open |
chapter 6.pdf | | 237.83 kB | Adobe PDF | View/Open |
Items in Shodhganga are licensed under Creative Commons Licence Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).