Please use this identifier to cite or link to this item:
http://hdl.handle.net/10603/2415
Title: Design of a novel incremental parallel webcrawler
Researcher: Yadav, Divakar
Guide(s): Gupta, J P; Sharma, A K
Keywords: Computer Science; Webcrawler; Information retrieval; World Wide Web; Information Technology
Upload Date: 25-Aug-2011
University: Jaypee Institute of Information Technology
Completed Date: 2010
Abstract: The World Wide Web (WWW) is a huge repository of interlinked hypertext documents known as web pages, which users access via the Internet. Since its inception in 1990, the WWW has grown many fold in size; it now contains more than 50 billion publicly accessible web documents distributed across thousands of web servers worldwide, and it continues to grow at an exponential rate. Searching for information in such a huge collection is difficult because web pages are neither organized like books on library shelves nor completely catalogued at any central location. The search engine is the basic information retrieval tool used to access information on the WWW. A user submits a query through the search engine's interface; the search engine then looks up relevant documents in its database and returns results ranked by relevance. The search engine builds this database with the help of web crawlers, where a web crawler is a program that traverses the Web and collects information about web documents. To maximize the download rate and to retrieve the whole, or at least a significant portion, of the Web, search engines run multiple crawlers in parallel. Overlap among downloaded web documents, document quality, network bandwidth, and refreshing of web documents are the major challenges faced by existing parallel web crawlers, and they are addressed in this work. A novel Multi Threaded (MT) server based architecture for an incremental parallel web crawler has been designed that helps reduce the overlap, quality, and network bandwidth problems. Additionally, web page change detection methods have been developed to refresh web documents by detecting structural, presentation, and content level changes. These methods detect whether the version of a web page held at the search engine side has changed at the web server end; if it has, the web crawler replaces the existing version in the search engine's database to keep the repository up to date.
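The change-detection idea summarized in the abstract can be illustrated with a minimal sketch. This is not the thesis's actual method; it simply assumes that structural change can be approximated by hashing a page's tag skeleton and content change by hashing its visible text. The class and function names (PageSignature, signatures, page_changed) are hypothetical and chosen for illustration, using only Python's standard library.

```python
import hashlib
from html.parser import HTMLParser


class PageSignature(HTMLParser):
    """Collects a tag skeleton (structure) and visible text (content)
    from an HTML document. Hypothetical helper for illustration only."""

    def __init__(self):
        super().__init__()
        self.tags = []   # sequence of tag names = structural signature
        self.text = []   # concatenated text nodes = content signature

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

    def handle_data(self, data):
        stripped = data.strip()
        if stripped:
            self.text.append(stripped)


def signatures(html: str) -> tuple:
    """Return (structure_hash, content_hash) for a page."""
    parser = PageSignature()
    parser.feed(html)
    structure = hashlib.md5(" ".join(parser.tags).encode()).hexdigest()
    content = hashlib.md5(" ".join(parser.text).encode()).hexdigest()
    return structure, content


def page_changed(old_html: str, new_html: str) -> dict:
    """Compare the stored and freshly crawled versions of a page and
    report which kind of change, if any, was detected."""
    old_struct, old_content = signatures(old_html)
    new_struct, new_content = signatures(new_html)
    return {
        "structural": old_struct != new_struct,
        "content": old_content != new_content,
    }


if __name__ == "__main__":
    old = "<html><body><h1>News</h1><p>Old story</p></body></html>"
    new = "<html><body><h1>News</h1><p>Updated story</p></body></html>"
    print(page_changed(old, new))  # {'structural': False, 'content': True}
```

Under these assumptions, a crawler that detects a structural or content change would re-download the page and replace the stored copy; a presentation-level check (for example, hashing style attributes and CSS references) could be added in the same way.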
Pagination: xvi, 160p.
URI: http://hdl.handle.net/10603/2415
Appears in Departments: Department of Computer Science Engineering and Information Technology
Files in This Item:
File | Size | Format
---|---|---
01_title.pdf | 21.65 kB | Adobe PDF
02_table of contents.pdf | 15.99 kB | Adobe PDF
03_declaration.pdf | 9.78 kB | Adobe PDF
04_certificate.pdf | 10.01 kB | Adobe PDF
05_acknowledgement.pdf | 10.39 kB | Adobe PDF
06_abstracts.pdf | 11.02 kB | Adobe PDF
07_list of acronyms & abbreviations.pdf | 10.13 kB | Adobe PDF
08_list of figures.pdf | 14.21 kB | Adobe PDF
09_list of tables.pdf | 10 kB | Adobe PDF
10_chapter 1.pdf | 119.84 kB | Adobe PDF
11_chapter 2.pdf | 271.94 kB | Adobe PDF
12_chapter 3.pdf | 158.14 kB | Adobe PDF
13_chapter 4.pdf | 155.62 kB | Adobe PDF
14_chapter 5.pdf | 499.26 kB | Adobe PDF
15_chapter 6.pdf | 109.77 kB | Adobe PDF
16_references.pdf | 119.94 kB | Adobe PDF
17_appendix.pdf | 690.53 kB | Adobe PDF
19_synopsis.pdf | 52.65 kB | Adobe PDF
Items in Shodhganga are licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Licence (CC BY-NC-SA 4.0).