By Danny Roberts | 1/20/2015 | Search Engine Marketing

Web crawler

A web crawler, also known as a search engine robot, spider, Internet bot, or just crawler for short, is a software application that systematically browses and scans the Internet for the purpose of indexing pages. In general, a web crawler works by reading a page, identifying the hyperlinks it contains, and then visiting each linked page in turn, repeating the process recursively.
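The recursive follow-the-links process described above can be sketched in a few lines of Python. This is a simplified illustration, not a production crawler: the `fetch` callable, the example URLs, and the in-memory handling are assumptions made so the sketch stays self-contained (a real crawler would fetch over the network, rate-limit itself, and handle errors).

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag on a page, resolved
    against the page's own URL."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

def crawl(url, fetch, visited=None):
    """Visit `url`, extract its hyperlinks, and recurse into each one.
    `fetch` is a callable returning a page's HTML (network access is
    abstracted away here).  The `visited` set prevents re-crawling a
    page and stops infinite loops on circular links."""
    if visited is None:
        visited = set()
    if url in visited:
        return visited
    visited.add(url)
    parser = LinkExtractor(url)
    parser.feed(fetch(url))
    for link in parser.links:
        crawl(link, fetch, visited)
    return visited

# Usage with a tiny hypothetical in-memory "site":
pages = {
    "http://example.com/":  '<a href="/a">a</a> <a href="/b">b</a>',
    "http://example.com/a": '<a href="/">home</a>',
    "http://example.com/b": "no links here",
}
visited = crawl("http://example.com/", lambda u: pages.get(u, ""))
print(sorted(visited))
```

The `visited` set is the key design choice: because pages routinely link back to each other, a crawler that does not remember where it has been would loop forever.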

Internet search engines, along with a few other types of sites, use crawlers to refresh their index of other websites' content. Web crawlers record and store the pages they visit (or crawl), which are then processed by a search engine. The engine indexes the content, allowing for the fast and efficient searching that today's Internet users have become accustomed to.

Websites can include specific directives, such as rules in a robots.txt file or a robots meta tag, that tell crawlers they do not want to be indexed by search engines. This technique of staying invisible to web crawlers is often used for sites and applications that are still in development.
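Python's standard library can read these directives directly. The sketch below checks a hypothetical robots.txt (the `/staging/` path and example.com URLs are invented for illustration) using `urllib.robotparser`; a well-behaved crawler would run a check like this before fetching any page.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that hides an in-development area
# from all crawlers ("User-agent: *").
robots_txt = """\
User-agent: *
Disallow: /staging/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The staging area is off-limits; the public area is fine.
print(parser.can_fetch("*", "http://example.com/staging/page.html"))
print(parser.can_fetch("*", "http://example.com/public/page.html"))
```

Note that robots.txt is a convention, not an enforcement mechanism: reputable crawlers honor it, but nothing technically prevents a crawler from ignoring it.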
