
Google: Indexing Search Result Pages Waters Down Your Indexed Content

Google’s John Mueller responded to a comment about Google’s guideline to “Use the robots.txt file on your web server to manage your crawling budget by preventing crawling of infinite spaces such as search result pages.” He said this is less about spam and more about “watering down your indexed content with useless pages that compete with each other.”

He posted this on Twitter, in reply to the original tweet from Lily Ray in that conversation.

In 2007, Google told webmasters to block internal search result pages from being crawled. The original guideline read “Use robots.txt to prevent crawling of search results pages or other auto-generated pages that don’t add much value for users coming from search engines.” Now it reads “Use the robots.txt file on your web server to manage your crawling budget by preventing crawling of infinite spaces such as search result pages.”
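For illustration only, here is a minimal robots.txt sketch of that guideline, assuming a site whose internal search results live under a /search path or use an ?s= query parameter (both are hypothetical placeholders; match them to how your own site builds search URLs):

    User-agent: *
    # Block crawling of internal search result pages (hypothetical paths)
    Disallow: /search
    Disallow: /*?s=

Keep in mind that robots.txt blocks crawling, not indexing: a page blocked this way can still appear in the index if other sites link to it, in which case a noindex directive on a crawlable page is the appropriate tool.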

Then ten years later, Google’s John Mueller explained why Google doesn’t want your search result pages in its index. He said “they make infinite spaces (crawling), they’re often low-quality pages, often lead to empty search results/soft-404s.”

So it was never really about spam; it was about keeping low-value pages that compete with each other out of Google’s index.

Forum discussion at Twitter.

