Please use this identifier to cite or link to this item: http://hdl.handle.net/10603/92173
Full metadata record
DC Field: Value
dc.date.accessioned: 2016-05-25T07:21:20Z
dc.date.available: 2016-05-25T07:21:20Z
dc.identifier.uri: http://hdl.handle.net/10603/92173
dc.description.abstract: The World Wide Web (WWW) is the largest repository of information, covering data from almost every area known to mankind, and it is the most frequently accessed public source of information. The information on the WWW consists of hypertext markup language (HTML) documents interconnected through hyperlinks. The Surface Web, or Publicly Indexable Web (PIW), comprises the content that can be reached purely by following this hyperlink structure and can therefore be crawled and indexed by popular search engines. The Hidden Web, on the other hand, refers to content that is stored in Web databases and is delivered through dynamically generated web pages, which are produced from the results retrieved in response to queries specified at the search interface offered by the underlying web database.
Crawling the contents of the Hidden Web is a very challenging problem, chiefly because of its scale and the restricted search interfaces offered by the Web databases. To overcome the issue of scale, this work proposes a parallel architecture for a Hidden Web crawler, which appears to be a better option than a single-process crawler architecture. The proposed crawler is also designed to automatically extract and integrate the search environment by modelling search forms and filling them in, so as to retrieve Hidden Web content from databases in different domains such as Books, Travel and Auto. However, when multiple instances of the crawler run in parallel, the same web document may be downloaded several times, since one instance of the crawler may not be aware that another has already downloaded the page. Minimizing such duplicate downloads saves network bandwidth and increases the crawler's effectiveness, so the parallel crawling processes must be coordinated to minimize overlap. The coordination between individual crawling processes, however, itself requires communication, which also consumes network bandwidth. An important objective is therefore to coordinate the crawling processes so that both overlap and communication overhead are kept to a minimum (a minimal illustrative sketch of one such coordination strategy appears after this metadata record).
dc.language: English
dc.rights: university
dc.title: Design and Implementation of Parallel Hidden Web Crawler
dc.creator.researcher: Sonali Gupta
dc.subject.keyword: Web Crawler
dc.contributor.guide: Dr. Komal Kumar Bhatia
dc.publisher.place: Faridabad
dc.publisher.university: YMCA University of Science and Technology
dc.publisher.institution: Department of Computer Engineering
dc.date.registered: 05/05/2011
dc.date.completed: 2016
dc.date.awarded: 04/04/2016
dc.format.accompanyingmaterial: DVD
dc.type.degree: Ph.D.
dc.source.selfsubmission: Self Submission
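As a rough illustration of the coordination concern raised in the abstract (avoiding duplicate downloads across parallel crawler instances without heavy inter-process communication), the following Python sketch shows one common strategy: static hash-based partitioning of the URL space. It is not taken from the thesis; the instance count, URLs and function names are illustrative assumptions, and the proposed crawler may use a different coordination scheme.

import hashlib

NUM_CRAWLERS = 4  # assumed number of parallel crawler instances (illustrative)

def owner(url: str) -> int:
    """Map a URL to the crawler instance responsible for fetching it."""
    digest = hashlib.sha1(url.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_CRAWLERS

def should_fetch(url: str, crawler_id: int) -> bool:
    """Fetch only URLs this instance owns, so no page is downloaded twice
    and no coordination messages are needed at crawl time."""
    return owner(url) == crawler_id

# Hypothetical frontier of result-page URLs produced by filled-in search forms.
frontier = [
    "http://example.com/books/search?q=databases&page=1",
    "http://example.com/travel/search?dest=goa&page=2",
    "http://example.com/auto/search?make=ford&page=3",
]
for cid in range(NUM_CRAWLERS):
    assigned = [u for u in frontier if should_fetch(u, cid)]
    print(f"crawler {cid}: {assigned}")

With static hash partitioning, each instance can decide locally whether a URL belongs to it, so overlap is eliminated without exchanging messages; the trade-off is less flexibility in balancing load across instances.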
Appears in Departments: Department of Computer Engineering

Files in This Item:
File: table of contents.docx (Attached File, 11.39 kB, Microsoft Word XML)


Items in Shodhganga are licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Licence (CC BY-NC-SA 4.0).
