Which websites do search engines dislike?
When we create a website, we hope it will be interesting to users, whatever their purpose. And if we want people to reach the new site, search engines must be able to find it first. The position of a newly created website, and hence the number of its visitors, depends largely on the search engine's verdict: whether it “likes” the site or not.
A search engine is just a robot: it builds a profile of each site and evaluates it against a great number of factors. Every search engine ranks highly the resources that, in its judgment, were made for people, and openly penalizes sites built on the opposite principle. There are many reasons why a search engine may “dislike” a website, but if you deal with the main ones, your site's life will become much easier.
First of all, pay attention to the technical aspects that webmasters face when creating or launching a project.
Start with choosing a hosting provider. Search engines strongly dislike sites that are unavailable: if a project is down, there is no point in showing it to users. One really useful piece of advice: when choosing hosting, pay attention to its reliability rather than its price.
Page loading speed partly depends on your choice of CMS (content management system). Some “heavy” CMSs generate bloated page code: building a page involves many scripts, loaded files, and other elements. Search engines try to offer their users only the best sites, ones that load quickly and do not force visitors to wait, so preference goes to exactly such sites. It is therefore better to choose an optimized CMS whose automatically generated code will not be an obstacle to indexing.
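As a rough, CMS-independent illustration of what “heavy” means here, one could measure a generated page's byte size and count the scripts and stylesheets it pulls in. The sketch below uses only Python's standard library; the thresholds you would compare these numbers against are up to you:

```python
from html.parser import HTMLParser


class PageWeightParser(HTMLParser):
    """Counts <script> tags and linked stylesheets in an HTML document."""

    def __init__(self):
        super().__init__()
        self.scripts = 0
        self.stylesheets = 0

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.scripts += 1
        elif tag == "link" and ("rel", "stylesheet") in attrs:
            self.stylesheets += 1


def page_weight(html: str) -> dict:
    """Return a rough 'weight' report for a page: size plus asset counts."""
    parser = PageWeightParser()
    parser.feed(html)
    return {
        "bytes": len(html.encode("utf-8")),
        "scripts": parser.scripts,
        "stylesheets": parser.stylesheets,
    }
```

Running this over the HTML your CMS emits for a few typical pages gives a quick sense of how much markup and how many extra requests each page forces on a visitor.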
Let's move on to the content of the documents. What should you pay attention to?
Content is the main part of a site, and quality content is half the battle. If your website contains unique content, it is on the right path. A search robot cannot judge the usefulness and informational value of your text on the first scan, but it will do so as soon as links, likes, and other citations appear.
From this we can draw a simple conclusion: search engines dislike plain copy-paste. For such borrowed text you can get a ranking penalty or even a ban.
It is no secret that if you want a page to be relevant for a certain key query, you have to use that query a certain number of times. Sometimes webmasters apply this factor so enthusiastically that the resulting text is stuffed with keywords. The principle “the more the better” does not work here: such text will be considered over-optimized and will not be ranked by the search engine. So when placing key queries in your text, always monitor the ratio between the number of characters and the number of query occurrences.
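That ratio is easy to track mechanically. The helper below is a minimal sketch for a single-word keyword (multi-word phrases would need a different approach, and any “safe” density threshold is your own judgment, not something search engines publish):

```python
import re


def keyword_density(text: str, keyword: str) -> float:
    """Fraction of words in `text` that match `keyword` (case-insensitive)."""
    words = re.findall(r"\w+", text.lower())
    if not words:
        return 0.0
    hits = words.count(keyword.lower())
    return hits / len(words)
```

For example, `keyword_density("buy cheap shoes buy shoes", "shoes")` reports that two of five words are the keyword, a density of 0.4, which would clearly read as stuffed.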
Search engines also treat attempts to hide text on a site as fraud. Content, and text in particular, must be placed openly: do not put it in hidden fields, make sure the font color differs from the background color, and do not move the text outside the user's screen.
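If you inherit a site and want to audit it for this, a crude heuristic is to scan inline style attributes for the classic hiding tricks. This is only a sketch; the patterns below are common examples, not an exhaustive or official list:

```python
import re

# Inline-CSS patterns that commonly indicate deliberately hidden text.
HIDDEN_PATTERNS = [
    r"display\s*:\s*none",
    r"visibility\s*:\s*hidden",
    r"text-indent\s*:\s*-\d{4,}px",  # text pushed far off-screen
]


def looks_hidden(style: str) -> bool:
    """Heuristically flag an inline style attribute that hides text."""
    style = style.lower()
    return any(re.search(p, style) for p in HIDDEN_PATTERNS)
```

Note that legitimate uses of `display: none` exist (menus, modals), so a hit here is a reason to look at the element, not proof of fraud.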
It should also be mentioned that a search robot will not visit a site very often if its content has not been updated for a long time. The more often you update your site's content, the better search engines will treat you. By adding thematic articles or news, you show that your project is developing and being renewed.
Avoid empty pages: they carry no information for the user and will be excluded by search engines. While a project is in its infancy and not all content has been added, access to those pages should be closed to search robots with the help of robots.txt; otherwise, as you fill the site and add more information, problems with re-indexing those pages can appear.
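For example, a robots.txt file in the site root could temporarily close off unfinished sections; the paths here are hypothetical placeholders for whatever sections of your own site are still empty:

```text
User-agent: *
Disallow: /drafts/
Disallow: /coming-soon/
```

Once the pages are filled with real content, remove the corresponding Disallow lines so robots can index them.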
Every page of the site should be unique; uniqueness is the sum of the content, the page's meta tags, titles, code, and other factors. Identical meta tags, such as the same title or description across pages, will have a negative impact on your project. Therefore, write unique meta tags for every page that reflect that page's main content.
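Duplicate titles are also easy to detect yourself once you have a mapping of URLs to titles (however you collect it, e.g. from your CMS database or a crawl). A minimal sketch:

```python
from collections import defaultdict


def find_duplicate_titles(pages: dict) -> dict:
    """Given {url: title}, return {title: [urls]} for titles used more than once.

    Titles are compared case-insensitively with surrounding whitespace ignored.
    """
    by_title = defaultdict(list)
    for url, title in pages.items():
        by_title[title.strip().lower()].append(url)
    return {t: urls for t, urls in by_title.items() if len(urls) > 1}
```

Any title that comes back with more than one URL is a candidate for rewriting.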
You can also check for duplicate meta tags in your webmaster account panel.
URLs of site pages
Page addresses can also become an obstacle for search robots when they index your content. Pages with dynamic URLs may rank poorly in the SERPs and may not even get into the search index at all.
If your CMS generates the URL scheme automatically, it is worth replacing it with human-readable (search-friendly) URLs. One can argue here, since this factor is gradually losing its weight, but there are several reasons why friendly URLs are still more effective. First, they are friendly to quick indexing. Second, they are readable and let users navigate the site easily. And third, a needed keyword present in the URL will be considered in SERP rankings.
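Many CMSs build such friendly URLs by “slugifying” the page title. If yours does not, the transformation itself is simple; this is a minimal sketch for Latin-script titles (non-Latin alphabets would need a transliteration step not shown here):

```python
import re
import unicodedata


def slugify(title: str) -> str:
    """Turn a page title into a human-readable URL segment."""
    # Strip accents, then keep only lowercase letters, digits and hyphens.
    title = unicodedata.normalize("NFKD", title).encode("ascii", "ignore").decode()
    title = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return title.strip("-")
```

So a title like “How to Choose a CMS!” becomes the URL segment `how-to-choose-a-cms`, which is both readable and carries the key phrase.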
Number of links on a page (external and internal)
A large number of external links on a page suggests it contains spam. In this case, all of the page's weight flows away to external resources. In general, external links to partners' and friends' sites do not harm a site if the subjects are similar, or if the links point to a fairly authoritative resource such as Wikipedia. But if you link out to dozens of different sites, you can be sure a search robot will not approve.
There is no single opinion on the acceptable number of external links per page; each search engine gives its own recommendation on this. The most important thing is not to overdo it and not to link every page to all the others. The more internal links a page carries, the less weight each of them passes on, and the longer the page takes to load.
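Counting the two kinds of links on a page is straightforward with the standard library. The sketch below classifies a link as internal when its host is empty (a relative link) or matches your own host; the hostname used is of course an example:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkCounter(HTMLParser):
    """Counts internal vs. external <a href> links relative to `own_host`."""

    def __init__(self, own_host: str):
        super().__init__()
        self.own_host = own_host
        self.internal = 0
        self.external = 0

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href", "")
        host = urlparse(href).netloc
        if host and host != self.own_host:
            self.external += 1
        else:
            self.internal += 1
```

Feeding a page's HTML into `LinkCounter("example.com")` gives you the two totals to keep an eye on.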
Young projects quite often face the problem of duplicate pages, and sometimes even duplicate sites. Search engines punish such mistakes, because you create two pages with different addresses but the same content, and neither of those pages will rank well.
Not only the user but also the content management system can be the cause of duplication. The most common duplication types are:
1) The main page
We can easily find many situations in which the main page is available at several addresses at the same time, for example with and without the “www.” prefix, or both as the bare domain and under an address like “/index.php”.
2) The inner pages
Webmasters usually pay no attention to the fact that pages can be available at two or more addresses. The most common mistake is having both versions of a page in the index, one ending with “/” and one without it.
Such problems are solved by setting up a permanent redirect to a single version of the page.
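As an illustration, a 301 redirect that strips the trailing slash could look like this in nginx (the server name is a placeholder; equivalent rules exist for Apache's .htaccess):

```nginx
server {
    server_name example.com;
    # Permanently redirect /page/ to /page for any non-root path.
    # `.+` (not `.*`) keeps the root URL "/" out of the rewrite.
    rewrite ^/(.+)/$ /$1 permanent;
}
```

Whichever form you choose as canonical, the point is that only one of the two addresses should ever answer with content; the other should redirect to it.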
You should pay attention to such details when creating a site in order to avoid penalties from search engines.
3) Creating mirrors of a site
A mirror site is an exact copy of another site. Mirrors are most commonly used to provide multiple sources of the same information; large or popular files are often placed on several mirrors to speed up downloads and balance load.
When building a site, webmasters quite often use this technique to add capacity, but at the same time forget to close the clone off from search robots. The search engine then finds two similar sites and may throw the main one out of the index or lower its rank, and recovery afterwards takes a lot of time and effort. Be wary of mirrors and avoid creating them carelessly.
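If a mirror genuinely has to exist, the simplest way to close it off is to serve a blocking robots.txt on the mirror host only (never on the main site):

```text
# robots.txt served on the mirror host only: forbid all crawling there
User-agent: *
Disallow: /
```

Another common approach is to have the mirror answer every request with a 301 redirect to the main domain, so the search engine never sees the duplicate content at all.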
What conclusion can we draw from all this? Live peacefully, meet the requirements of the search engines, keep up with the leaders, and search engines will love you. Any website that you create for people and fill with useful information will be treated well by the search engines, even if some errors slip into its optimization and development.