Saturday, November 5, 2011

SEO - What Do Search Engine Spiders Look For

Things have changed since the early days of search engine optimization, when all a webmaster had to do was submit the site along with meta tags listing its keywords and a well-worded, accurate description of each page's content. Years of abuse by greedy marketers left that system in a mess, as misleading information, popular keywords and other black hat SEO techniques were used to trick the search engines into ranking pages higher than they deserved. One of the biggest problems was web content providers manipulating attributes within the HTML code of a page, stuffing it with popularly searched keywords that had nothing to do with the site.
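As a rough illustration (the keywords and wording here are invented), such a page might carry meta tags like these, none of which describe what is actually on the site:

<!-- Misleading meta tags: the keywords have nothing to do with the page -->
<meta name="keywords" content="free music, celebrity photos, game cheats, lottery results">
<meta name="description" content="Everything you are searching for, all on one page!">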

Since those days the search engines have caught on and now weigh a variety of factors: the text within the title tag, the domain name, the URL directories and file names, the HTML tags, term frequency, keyword proximity, keyword adjacency, keyword sequence, photo captions, text within NOFRAMES tags and the page's actual content. All of this information is fed into a ranking algorithm that determines how highly your page appears in the search results.
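To make that list concrete, here is a minimal sketch of a page (the site, file name and wording are all invented for illustration) where each of those on-page signals honestly reflects the content:

<!-- Hypothetical URL carrying the topic: example.com/geography/kenya.html -->
<html>
<head>
  <title>Geography of Kenya - Rivers, Mountains and Climate</title>
  <meta name="description" content="An overview of Kenya's physical geography: major rivers, mountains and climate zones.">
  <meta name="keywords" content="Kenya geography, rivers, mountains, climate">
</head>
<body>
  <h1>Geography of Kenya</h1>
  <p>Kenya's geography ranges from coastal plains to the highlands around Mount Kenya...</p>
  <!-- Spiders also read photo captions and alt text -->
  <img src="mount-kenya.jpg" alt="Snow-capped peak of Mount Kenya">
  <!-- On a framed site, a NOFRAMES element would hold equivalent text for the spiders -->
</body>
</html>

Notice how the title, description, keywords, heading, body text and image caption all repeat and reinforce the same terms - exactly the kind of term frequency and keyword proximity the ranking factors above reward.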

Abuse of the search engine index system still exists, and in fact many SEO experts argue that manipulating code or using fake keywords that do not represent what is on your site is not abuse at all; many simply see it as normal practice. Of course, it does matter to the kid researching a homework paper on the geography of a country who is led instead to a page of travel auction deals. Ethically, it is up to each individual webmaster or site owner how far they want to contribute to the degradation of information on a medium that had the potential to become a resource greater than the great Library of Alexandria.
