Conference Paper: Scalable, Generic, and Adaptive Systems for Focused Crawling

Title: Scalable, Generic, and Adaptive Systems for Focused Crawling
Authors: Gouriten, G; Maniu, S; Senellart, P
Issue Date: 2014
Publisher: ACM
Citation: The 25th ACM Conference on Hypertext and Social Media (HT'14), Santiago, Chile, 1-4 September 2014. In the Proceedings of the 25th ACM Conference on Hypertext and Social Media, 2014, p. 35-45
Abstract: Focused crawling is the process of exploring a graph iteratively, focusing on parts of the graph relevant to a given topic. It occurs in many situations such as a company collecting data on competition, a journalist surfing the Web to investigate a political scandal, or an archivist recording the activity of influential Twitter users during a presidential election. In all these applications, users explore a graph (e.g., the Web or a social network), nodes are discovered one by one, the total number of exploration steps is constrained, some nodes are more valuable than others, and the objective is to maximize the total value of the crawled subgraph. In this article, we introduce scalable, generic, and adaptive systems for focused crawling. Our first effort is to define an abstraction of focused crawling applicable to a large domain of real-world scenarios. We then propose a generic algorithm, which allows us to identify and optimize the relevant subsystems. We prove the intractability of finding an optimal exploration, even when all the information is available. Taking this intractability into account, we investigate how the crawler can be steered in several experimental graphs. We show the good performance of a greedy strategy and the importance of being able to run at each step a new estimation of the crawling frontier. We then discuss this estimation through heuristics, self-trained regression, and multi-armed bandits. Finally, we investigate their scalability and efficiency in different real-world scenarios and by comparing with state-of-the-art systems.
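The budgeted greedy exploration described in the abstract can be sketched in a few lines. This is a minimal, illustrative reading of the setting only, not the authors' system: the function name `greedy_focused_crawl` and the callbacks `neighbors`, `value` (true value observed once a node is crawled), and `estimate` (predicted value of an uncrawled frontier node, which in the paper's terms could come from heuristics, self-trained regression, or multi-armed bandits) are all hypothetical names introduced here.

```python
def greedy_focused_crawl(seeds, neighbors, value, estimate, budget):
    """Illustrative greedy focused crawler (not the paper's implementation).

    seeds: starting node ids; neighbors(n): nodes linked from n;
    value(n): true value, observed only when n is crawled;
    estimate(n): predicted value of an uncrawled frontier node;
    budget: maximum number of exploration steps.
    """
    crawled = set()
    frontier = set(seeds)
    total = 0.0
    for _ in range(budget):
        if not frontier:
            break
        # Greedy step: re-rank the whole frontier with fresh estimates and
        # crawl the node currently predicted to be most valuable.
        node = max(frontier, key=estimate)
        frontier.discard(node)
        crawled.add(node)
        total += value(node)  # true value revealed by crawling
        # Newly discovered neighbors join the frontier.
        for m in neighbors(node):
            if m not in crawled:
                frontier.add(m)
    return crawled, total


# Tiny example with a perfect estimator (estimate == true value):
graph = {1: [2, 3], 2: [4], 3: [], 4: []}
values = {1: 1.0, 2: 5.0, 3: 0.5, 4: 2.0}
crawled, total = greedy_focused_crawl(
    [1], lambda n: graph[n], lambda n: values[n], lambda n: values[n], budget=3
)
# crawled == {1, 2, 4}, total == 8.0
```

Re-scoring the entire frontier at every step is the point the abstract stresses; with a learned `estimate` that is updated as nodes are crawled, the same loop becomes adaptive.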
Persistent Identifier: http://hdl.handle.net/10722/201108
ISBN: 9781450329545

DC Field: Value (Language)
dc.contributor.author: Gouriten, G (en_US)
dc.contributor.author: Maniu, S (en_US)
dc.contributor.author: Senellart, P (en_US)
dc.date.accessioned: 2014-08-21T07:13:35Z
dc.date.available: 2014-08-21T07:13:35Z
dc.date.issued: 2014 (en_US)
dc.identifier.citation: The 25th ACM Conference on Hypertext and Social Media (HT'14), Santiago, Chile, 1-4 September 2014. In the Proceedings of the 25th ACM Conference on Hypertext and Social Media, 2014, p. 35-45 (en_US)
dc.identifier.isbn: 9781450329545
dc.identifier.uri: http://hdl.handle.net/10722/201108
dc.language: eng (en_US)
dc.publisher: ACM (en_US)
dc.relation.ispartof: Proceedings of the 25th ACM Conference on Hypertext and Social Media (en_US)
dc.title: Scalable, Generic, and Adaptive Systems for Focused Crawling (en_US)
dc.type: Conference_Paper (en_US)
dc.identifier.email: Maniu, S: smaniu@cs.hku.hk (en_US)
dc.identifier.doi: 10.1145/2631775.2631795
dc.identifier.hkuros: 232987 (en_US)
dc.identifier.spage: 35
dc.identifier.epage: 45
dc.publisher.place: New York
