

  • Duplicate content issues – crawl budget optimization

    updated 2 weeks, 4 days ago 0 Member · 1 Post
  • Jacob

    Member
    October 30, 2019 at 4:03 am

    Hello, I have an ecommerce website and multiple physical stores in different cities. To cope with the stock differences, I have a general store that I index in Google, and multiple city stores that are blocked from crawling with robots.txt. Here is an example so that you can better understand the setup:

    https://www.example.com/countryabbreviation/extendible-sofas/c/12 (allowed in robots.txt, meta robots tag index/follow, canonical to itself)
    https://www.example.com/city1/extendible-sofas/c/12 (blocked by robots.txt, meta robots tag index/follow, canonical to number 1)
    https://www.example.com/city2/extendible-sofas/c/12 (blocked by robots.txt, meta robots tag index/follow, canonical to number 1)

    So, as you can see, all city pages are blocked by robots.txt and canonicalized to the country-abbreviation page (the one we want indexed). After reaching our site, users are asked to select a country in order to make a purchase. My questions are:

    What do you think about this strategy?

    What do you think about the crawl budget? We have nearly 50 cities, so every page on our site is duplicated 50 times: the original version of a page plus 50 store versions. Even though we are blocking the 50 stores through robots.txt, I believe we are still wasting crawl budget (Google still requests a page even if it is blocked by robots.txt).

    Would it be OK to “noindex/nofollow” the URLs that contain stores? Would this optimize crawl budget? Or is a better strategy needed?

    Thanks for your opinions.
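    To sanity-check which of the three URL patterns above Googlebot is allowed to fetch, you can evaluate them against your robots.txt with Python's standard-library parser. This is a minimal sketch: the `Disallow` rules shown are hypothetical stand-ins for the real file (which would need one rule, or a pattern, covering all ~50 city prefixes), and the URLs are the example URLs from the post.

    ```python
    from urllib.robotparser import RobotFileParser

    # Hypothetical robots.txt mirroring the setup described above:
    # the country-level catalogue is crawlable, the city copies are not.
    robots_txt = """\
    User-agent: *
    Disallow: /city1/
    Disallow: /city2/
    """

    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())

    urls = [
        "https://www.example.com/countryabbreviation/extendible-sofas/c/12",
        "https://www.example.com/city1/extendible-sofas/c/12",
        "https://www.example.com/city2/extendible-sofas/c/12",
    ]
    for url in urls:
        status = "crawlable" if parser.can_fetch("*", url) else "blocked"
        print(url, "->", status)
    ```

    One caveat worth keeping in mind when weighing the noindex option: a crawler can only see a page's meta robots tag or canonical link if it is allowed to fetch the page, so a URL blocked by robots.txt and carrying a noindex tag would have that tag go unread.
    
    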
