A search engine spider is actually just a server-based software application designed to compile and maintain a search engine's database. Spiders are also known as robots. These applications were nicknamed spiders because they crawl the "web" looking for information to add to the search engine's database.
In theory a search engine spider does its job by following networks of links to find and then grab information from your web site pages. Each search engine (such as Google, Excite, AltaVista, Lycos, etc.) has its own criteria for how it spiders information. These criteria are put together into a specific algorithm that helps the search engine determine where your site fits in its ranking system. The problem with second-guessing search engine spiders is that the requirements of this algorithm are always changing, so webmasters are forever guessing at how to get their URLs listed in the top positions. Different search engines also have different ranking criteria, often driven by the needs of their own databases. One search engine may rank your site by its content, another may be more interested in the number of links you have, and yet another may care most about how many hits you are getting.
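To give a rough feel for what "different criteria" can mean, here is a toy sketch of a ranking score that weights a few invented signals differently. The signals, weights, and numbers are purely illustrative assumptions, not any real engine's algorithm.

```python
def rank_score(content_relevance, inbound_links, monthly_hits, weights):
    """Combine a few hypothetical ranking signals into a single score."""
    w_content, w_links, w_hits = weights
    return (w_content * content_relevance
            + w_links * inbound_links
            + w_hits * monthly_hits)

# One engine might weight content heavily; another might favour link counts.
content_focused = rank_score(0.9, 120, 5000, weights=(10.0, 0.01, 0.0001))
link_focused = rank_score(0.9, 120, 5000, weights=(1.0, 0.10, 0.0001))
print(content_focused, link_focused)
```

The same site ends up with two very different scores, which is exactly why a page that ranks well on one engine can sit much lower on another.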
In theory a search engine spider could do any one of several things when it visits your web page. Some spiders simply confirm the existence of the page without indexing it. Some index the page content. Yet others identify all of the hyperlinks on a page, both to other web resources and to other pages on your site. A spider may visit your pages once, or it may come back several times.
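To make that concrete, here is a minimal sketch of what a single spider visit might look like: fetch a page, keep its text as the "indexed" content, and collect the hyperlinks it finds. The starting URL and the libraries used (requests, BeautifulSoup) are illustrative assumptions; real spiders are far more sophisticated.

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def visit_page(url):
    """Fetch a page, return its visible text and the hyperlinks it contains."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()

    soup = BeautifulSoup(response.text, "html.parser")

    # "Index" the page: here we simply keep the visible text.
    page_text = soup.get_text(separator=" ", strip=True)

    # Collect every hyperlink, resolving relative URLs against the page URL.
    links = [urljoin(url, a["href"]) for a in soup.find_all("a", href=True)]

    return page_text, links

if __name__ == "__main__":
    # Hypothetical starting page for the crawl.
    text, links = visit_page("https://example.com")
    print(f"Indexed {len(text)} characters of text and found {len(links)} links.")
```

A real spider would then queue up those discovered links and repeat the process, which is how it works its way across the web from one page to the next.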
One of the latest trends is to keep your index pages lively by regularly adding 50 to 70 bytes of new information. This is because engines such as AltaVista love fresh content. This is what makes SEO techniques so interesting to follow: the rules change along with the changing needs of the spiders.