Detecting Spam Pages Using Statistics

Machine-generated spam web pages tend to differ in a number of ways from ordinary web pages, and can often be identified through statistical analysis. A paper from Microsoft Research titled "Spam, Damn Spam, and Statistics: Using statistical analysis to locate spam web pages" (pdf), by Dennis Fetterly, Mark Manasse, and Marc Najork, looks at several ways of finding those pages. Spam web pages tend to have these characteristics:

* Host names with many characters, dots, dashes, and digits are likely to belong to spam web sites.
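The lexical signals above are cheap to compute. As a minimal sketch (the feature names and example host names are made up for illustration, not taken from the paper):

```python
def hostname_spam_features(host):
    """Count the simple lexical features the paper correlates with spam
    host names: overall length, and the number of dots, dashes, and digits."""
    return {
        "length": len(host),
        "dots": host.count("."),
        "dashes": host.count("-"),
        "digits": sum(c.isdigit() for c in host),
    }

# A long, dash- and digit-heavy host scores high on every feature:
print(hostname_spam_features("cheap-low-rate-mortgage-4u-2024.example.com"))
print(hostname_spam_features("www.example.com"))
```

In practice these counts would feed into a classifier or simple thresholds rather than being read off by eye.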

* "One piece of folklore among the SEO community is that search engines (and Google in particular), given a query q, will rank a result URL u higher if u’s host component contains q. SEOs try to exploit this by populating pages with URLs whose host components contain popular queries that are relevant to their business, and by setting up a DNS server that resolves those host names. The latter is quite easy, since DNS servers can be configured with wildcard records that will resolve any host name within a domain to the same IP address. For example, at the time of this writing, any host within the domain highriskmortgage.com resolves to the IP address 65.83.94.42."
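The wildcard setup the quote describes takes a single record. A hypothetical BIND-style zone-file fragment (the domain and IP here are documentation placeholders, not the ones from the paper):

```
; One wildcard A record makes every host name in the domain resolve,
; so any-popular-query.example-spam-domain.com works without further setup.
*.example-spam-domain.com.   IN  A   192.0.2.1
```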

* Linkage properties: comparing the number of links embedded in a page with the number of links pointing to the pages it links to. Are those numbers similar to what is seen on pages on other sites?
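On a link graph, this kind of check amounts to comparing out-degrees and in-degrees. A toy sketch (the graph and the "spamhub" page are invented; the paper's actual measures are more involved):

```python
from collections import Counter

# Tiny link graph as (source, target) edges. A page that links out to
# many pages which themselves attract few in-links is the pattern a
# degree comparison would surface.
edges = [
    ("spamhub", "p1"), ("spamhub", "p2"), ("spamhub", "p3"),
    ("spamhub", "p4"), ("blog", "p1"), ("news", "blog"),
]

out_degree = Counter(src for src, _ in edges)
in_degree = Counter(dst for _, dst in edges)

for page in sorted(out_degree):
    targets = [dst for src, dst in edges if src == page]
    avg_in = sum(in_degree[t] for t in targets) / len(targets)
    print(page, "out-degree:", out_degree[page],
          "avg in-degree of its targets:", round(avg_in, 2))
```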

* Content properties: A large number of automatically generated pages contain the exact same number of words, though individual words will differ from page to page.
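That word-count fingerprint makes for a cheap first-pass filter: group a crawl by exact word count and flag unusually large groups. A sketch with made-up pages (the threshold of 3 is an illustrative assumption):

```python
from collections import Counter

# Template-generated pages often share an exact word count even though
# the individual words differ from page to page.
pages = {
    "a.html": "buy cheap pills online now today",
    "b.html": "best loan rates offered here fast",
    "c.html": "great mortgage deals available right away",
    "d.html": "an ordinary hand written article of a different length entirely",
}

counts = Counter(len(text.split()) for text in pages.values())
suspicious = {n for n, freq in counts.items() if freq >= 3}
flagged = [name for name, text in pages.items() if len(text.split()) in suspicious]
print(flagged)  # the three six-word pages
```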

* "Overall, the web evolves slowly, 65% of all pages will not change at all from one week to the next, and only about 0.8% of all pages will change completely." Spam pages tend to fall into the last category, because many of them are generated anew at each request.
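Detecting the "changes completely" case can be as simple as fetching a URL twice and measuring overlap. A crude sketch using word-level Jaccard similarity (the paper uses more robust measures; the page snapshots here are made up):

```python
def word_overlap(a, b):
    """Jaccard similarity over word sets: 1.0 = identical vocabulary,
    0.0 = no words in common between the two fetches."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

stable_1 = "welcome to our site about model trains and their history"
stable_2 = "welcome to our site about model trains and their history"
spam_1 = "cheap pills fast loans casino bonus click now win big"
spam_2 = "instant credit free ringtones best rates act today limited offer"

print(word_overlap(stable_1, stable_2))  # 1.0 — unchanged between fetches
print(word_overlap(spam_1, spam_2))      # 0.0 — regenerated on every request
```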

Related:
Detecting near-duplicate documents