
Web数据挖掘 (Web Data Mining), a Posts & Telecom Press book

《Web数据挖掘》 (Web Data Mining) is a book published by Posts & Telecom Press on February 1, 2009. Its author is Soumen Chakrabarti.

This book is a classic work in the field of Web mining and search engines. Since its publication it has been widely acclaimed and has been adopted as a textbook at leading universities such as Stanford, Princeton, and Carnegie Mellon. It first introduces fundamental problems such as Web crawling and search, and on that foundation gives an in-depth treatment of the machine learning techniques involved in solving the hard problems of Web mining, presents many applications of machine learning to acquiring, storing, and analyzing data, and discusses the strengths, weaknesses, and prospects of these applications. Thorough in its analysis and forward-looking, the book lays a theoretical and practical foundation for building innovative Web mining applications. It is well suited to researchers, teachers, and students in information retrieval and machine learning, and is also an excellent reference for Web developers.

"This book thoroughly reveals the technical inner workings of search engines! With it, you could even build a search engine of your own."

The searchenginewatch website

"This book is systematic, comprehensive, and deep, yet Web developers at large can readily understand and master its content. The author is one of the leading figures in this research field, with broad knowledge and unique insight into hypertext information mining and retrieval."

Joydeep Ghosh, professor at the University of Texas at Austin, IEEE Fellow

"The author brings together all the important work in this field in this masterful book, and presents otherwise highly abstruse material in an accessible way. With this book, Web mining can finally become a university course."

Jaideep Srivastava, professor at the University of Minnesota, IEEE Fellow

《Web数据挖掘》 is a reference book for professionals engaged in data mining research and development, and is also suitable as a textbook for graduate students in computer science and related disciplines. The book first discusses the foundations of the Web (including Web crawling mechanisms, Web indexing mechanisms, and keyword-based and similarity-based search), then systematically covers the fundamentals of Web mining, focusing on hypertext-oriented machine learning and data mining methods such as clustering, collaborative filtering, supervised learning, and semi-supervised learning, and finally shows how these principles are applied in Web mining. The book gives readers a solid technical background and up-to-date knowledge.

1 INTRODUCTION

1.1 Crawling and Indexing

1.2 Topic Directories

1.3 Clustering and Classification

1.4 Hyperlink Analysis

1.5 Resource Discovery and Vertical Portals

1.6 Structured vs. Unstructured Data Mining

1.7 Bibliographic Notes

PART Ⅰ INFRASTRUCTURE

2 CRAWLING THE WEB

2.1 HTML and HTTP Basics

2.2 Crawling Basics

2.3 Engineering Large-Scale Crawlers

2.3.1 DNS Caching, Prefetching, and Resolution

2.3.2 Multiple Concurrent Fetches

2.3.3 Link Extraction and Normalization

2.3.4 Robot Exclusion

2.3.5 Eliminating Already-Visited URLs

2.3.6 Spider Traps

2.3.7 Avoiding Repeated Expansion of Links on Duplicate Pages

2.3.8 Load Monitor and Manager

2.3.9 Per-Server Work-Queues

2.3.10 Text Repository

2.3.11 Refreshing Crawled Pages

2.4 Putting Together a Crawler

2.4.1 Design of the Core Components

2.4.2 Case Study: Using w3c-libwww

2.5 Bibliographic Notes

3 WEB SEARCH AND INFORMATION RETRIEVAL

3.1 Boolean Queries and the Inverted Index

3.1.1 Stopwords and Stemming

3.1.2 Batch Indexing and Updates

3.1.3 Index Compression Techniques

3.2 Relevance Ranking

3.2.1 Recall and Precision

3.2.2 The Vector-Space Model

3.2.3 Relevance Feedback and Rocchio's Method

3.2.4 Probabilistic Relevance Feedback Models

3.2.5 Advanced Issues

3.3 Similarity Search

3.3.1 Handling "Find-Similar" Queries

3.3.2 Eliminating Near Duplicates via Shingling

3.3.3 Detecting Locally Similar Subgraphs of the Web

3.4 Bibliographic Notes

PART Ⅱ LEARNING

4 SIMILARITY AND CLUSTERING

4.1 Formulations and Approaches

4.1.1 Partitioning Approaches

4.1.2 Geometric Embedding Approaches

4.1.3 Generative Models and Probabilistic Approaches

4.2 Bottom-Up and Top-Down Partitioning Paradigms

4.2.1 Agglomerative Clustering

4.2.2 The k-Means Algorithm

4.3 Clustering and Visualization via Embeddings

4.3.1 Self-Organizing Maps (SOMs)

4.3.2 Multidimensional Scaling (MDS) and FastMap

4.3.3 Projections and Subspaces

4.3.4 Latent Semantic Indexing (LSI)

4.4 Probabilistic Approaches to Clustering

4.4.1 Generative Distributions for Documents

4.4.2 Mixture Models and Expectation Maximization (EM)

4.4.3 Multiple Cause Mixture Model (MCMM)

4.4.4 Aspect Models and Probabilistic LSI

4.4.5 Model and Feature Selection

4.5 Collaborative Filtering

4.5.1 Probabilistic Models

4.5.2 Combining Content-Based and Collaborative Features

4.6 Bibliographic Notes

5 SUPERVISED LEARNING

5.1 The Supervised Learning Scenario

5.2 Overview of Classification Strategies

5.3 Evaluating Text Classifiers

5.3.1 Benchmarks

5.3.2 Measures of Accuracy

5.4 Nearest Neighbor Learners

5.4.1 Pros and Cons

5.4.2 Is TFIDF Appropriate?

5.5 Feature Selection

5.5.1 Greedy Inclusion Algorithms

5.5.2 Truncation Algorithms

5.5.3 Comparison and Discussion

5.6 Bayesian Learners

5.6.1 Naive Bayes Learners

5.6.2 Small-Degree Bayesian Networks

5.7 Exploiting Hierarchy among Topics

5.7.1 Feature Selection

5.7.2 Enhanced Parameter Estimation

5.7.3 Training and Search Strategies

5.8 Maximum Entropy Learners

5.9 Discriminative Classification

5.9.1 Linear Least-Square Regression

5.9.2 Support Vector Machines

5.10 Hypertext Classification

5.10.1 Representing Hypertext for Supervised Learning

5.10.2 Rule Induction

5.11 Bibliographic Notes

6 SEMISUPERVISED LEARNING

6.1 Expectation Maximization

6.1.1 Experimental Results

6.1.2 Reducing the Belief in Unlabeled Documents

6.1.3 Modeling Labels Using Many Mixture Components

……

PART Ⅲ APPLICATIONS

……

PREFACE

This book is about finding significant statistical patterns relating hypertext documents, topics, hyperlinks, and queries and using these patterns to connect users to information they seek. The Web has become a vast storehouse of knowledge, built in a decentralized yet collaborative manner. It is a living, growing, populist, and participatory medium of expression with no central editorship. This has positive and negative implications. On the positive side, there is widespread participation in authoring content. Compared to print or broadcast media, the ratio of content creators to the audience is more equitable. On the negative side, the heterogeneity and lack of structure make it hard to frame queries and satisfy information needs. For many queries posed with the help of words and phrases, there are thousands of apparently relevant responses, but on closer inspection these turn out to be disappointing for all but the simplest queries. Queries involving nouns and noun phrases, where the information need is to find out about the named entity, are the simplest sort of information-hunting tasks. Only sophisticated users succeed with more complex queries, for instance, those that involve articles and prepositions to relate named objects, actions, and agents. If you are a regular seeker and user of Web information, this state of affairs needs no further description.

Detecting and exploiting statistical dependencies between terms, Web pages, and hyperlinks will be the central theme in this book. Such dependencies are also called patterns, and the act of searching for such patterns is called machine learning, or data mining. Here are some examples of machine learning for Web applications. Given a crawl of a substantial portion of the Web, we may be interested in constructing a topic directory like Yahoo!, perhaps detecting the emergence and decline of prominent topics with passing time. Once a topic directory is available, we may wish to assign freshly crawled pages and sites to suitable positions in the directory.

Soumen Chakrabarti is a well-known expert in Web search and mining and an associate editor of ACM Transactions on the Web. He received his PhD from the University of California, Berkeley, and is an associate professor in the Department of Computer Science and Engineering at the Indian Institute of Technology. He previously worked at the IBM Almaden Research Center on hypertext databases and data mining. He has extensive hands-on project experience, has built several Web mining systems, and holds several US patents.

