Frequently changing title tags
Q. The problem with changing title tags very often is that Google may not recrawl the pages that often.
- (01:02) The person asking the question talks about their website in the mutual fund industry, where the title tag needs to change every day depending on the number representing the stock prices. John says Google wouldn't give any special weight to a title tag just because it keeps changing. The practical issue is that if the website owner changes the titles on a daily basis, Google might not re-crawl that page on a daily basis. So it might be the case that the titles are changed every day, but the title that Google shows in the search results is a few days old, simply because that's the last version it picked up from that page. It's more of a practical effect than a strategic effect.
Reusing a domain
Q. There’s no strict time frame within which Google catches up with a domain being reused for different purposes.
- (04:04) Google catching up with the fact that a domain is being used for a different purpose is something that happens over time organically. If an existing domain is being reused, and there is new content that is different from the one before, then over time, Google will learn that it’s a new website and treat it accordingly. But there’s no specific time frame for that.
- There are two things to watch out for in situations like this. The first is whether the website was previously involved in some shady practices, for example around external links. That might be something that needs to be cleaned up.
- The other aspect is if there’s any webspam manual action, then, that’s something that needs to be cleaned up, so that the website starts from a clean slate.
- It's never going to be completely clean if something else was hosted on the domain before, but at least it puts the website in a reasonable state where it doesn't have to drag that baggage along.
Robots.txt and traffic
Q. A drop in traffic is not necessarily linked to technical failures, such as a robots.txt failure.
- (10:20) The person asking the question is concerned about lower traffic on their website and links it to several days of robots.txt failures and connectivity issues. John says there are two things to watch out for in situations like this. On the one hand, if there are server connectivity issues, Google wouldn't see them as a quality problem, so the ranking of the pages wouldn't drop because of them. That's the first step: if there's a drop in the ranking of the pages, it would not be from the technical issue.
- On the other hand, what does happen with these kinds of server connectivity problems is that if Google can't reach the robots.txt file for a while, it will assume that it can't crawl anything on the website, and that can result in some of the pages being dropped from the index. That's a simple way to figure out whether it's a technical problem: are the pages gone from the index? If so, it's probably a technical problem, and in that case Google will usually retry those missing pages after a couple of days and try to index them again (a basic availability check is sketched after this list).
- If the problem happened a while ago, steps were taken to fix it, and it keeps recurring, it is worth double-checking the Crawl Errors section in Search Console to see whether there is still a technical issue where Googlebot is sometimes blocked.
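Below is a minimal sketch of the kind of availability check described above: it verifies that robots.txt is reachable, since prolonged 5xx errors or connectivity failures on that file can cause Googlebot to pause crawling the whole site. The URL is a hypothetical example, and this is only a rough monitoring aid, not how Google itself evaluates the file.

```python
# Minimal sketch: verify that robots.txt is reachable. A 404 is fine (Google
# treats a missing robots.txt as "allow everything"), while 5xx responses or
# connectivity errors are the cases that can lead Google to pause crawling.
# The URL below is a hypothetical example.
import urllib.error
import urllib.request

ROBOTS_URL = "https://www.example.com/robots.txt"

def robots_txt_reachable(url: str = ROBOTS_URL) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as e:
        return e.code == 404  # missing file is fine; 5xx is not
    except (urllib.error.URLError, TimeoutError):
        return False  # connectivity problem: the case that can drop pages

if __name__ == "__main__":
    ok = robots_txt_reachable()
    print("robots.txt reachable" if ok else "robots.txt unreachable - check the server")
```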
Indexing the comment section
Q. It’s important to make sure that the way the comment section is technically handled on the page makes it easy for Google to index comments.
- (16:47) John says that it is up to the website owner whether they want the comments to show in SERPs, but comments are essentially a technical element on the page, so there is no setting in Search Console to turn them on or off. There are different ways of integrating comments on web pages; some of those ways are blocked from indexing and some are easy to index. So if the comments need to be indexed, it's important to implement them in a way that is easy to index. The Inspect URL tool in Search Console shows a little bit of what Google finds on the page, so it can be used to check whether Google can see the comments.
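As an illustration of the difference described above, here is a minimal sketch contrasting server-side and client-side comment integration. The function names, endpoint and markup are hypothetical; the point is only that comments embedded in the initial HTML are visible to Googlebot on the first fetch, while comments fetched by JavaScript from an endpoint that happens to be blocked would never be seen.

```python
# Minimal sketch: two ways of integrating comments. All names are hypothetical.
from html import escape

def page_with_server_rendered_comments(body_html: str, comments: list[str]) -> str:
    """Comments are part of the initial HTML, so they can be crawled and indexed."""
    items = "\n".join(f"<li>{escape(c)}</li>" for c in comments)
    return f"<html><body>{body_html}<ul id='comments'>{items}</ul></body></html>"

def page_with_client_rendered_comments(body_html: str) -> str:
    """Comments are loaded by JavaScript after the page loads. If /api/comments
    is disallowed in robots.txt (or rendering fails), Googlebot never sees them."""
    script = (
        "<script>fetch('/api/comments').then(r => r.json()).then(cs => {"
        "document.getElementById('comments').innerHTML ="
        " cs.map(c => '<li>' + c + '</li>').join('');});</script>"
    )
    return f"<html><body>{body_html}<ul id='comments'></ul>{script}</body></html>"

if __name__ == "__main__":
    print(page_with_server_rendered_comments("<h1>Post</h1>", ["Great article!", "Thanks"]))
```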
URL not indexed
Q. If Google crawls the URL, it doesn’t automatically mean it will index it.
- (21:10) The person asking the question is concerned that even though their URLs get crawled, they get the "URL discovered, not indexed" or "URL crawled, not indexed" messages, and thinks that maybe the content is not good enough for Google to index. John says it is an easy early assumption to make that "Google looked at it but decided not to index it". Just because Google crawls something doesn't mean it will automatically index it, and these two "not indexed" categories can be treated as a similar thing. It's tricky because Google doesn't index everything, so this can happen.
CDN
Q. Whenever a website moves to a CDN or changes its current one, it affects crawling but doesn't really affect rankings.
- (26:28) From a ranking point of view, moving to a CDN or changing the current one wouldn't change anything. If the hosting changes significantly, what happens on Google's side is that the crawl rate first moves into a more conservative area, where Google crawls a little bit less because it saw a bigger change in hosting. Then, over time, probably in a couple of weeks or maybe a month, Google will increase the crawl rate again and see where it settles down. Essentially, that drop in crawl rate after moving to or changing a CDN can be normal.
- The crawl rate itself doesn't necessarily mean that there is a problem, because if Google was crawling two million pages of the website before, it's unlikely that those two million pages were changing every day. So it's not necessarily the case that Google would miss all of the new content on the website; it would just try to prioritise again and figure out which of these pages it actually needs to re-crawl on a day-to-day basis. So just because the crawl rate drops, it's not necessarily a sign for concern.
- Other indicators, such as a change in the average response time, deserve more attention, because the crawl rate that Google chooses is based on the average response time and on server errors. If the average response time goes up significantly, Google will stick to a lower crawl rate (a rough way to monitor this is sketched below).
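As a rough way to keep an eye on the response-time signal mentioned above, the sketch below averages the request durations of Googlebot hits from an access log. The log path and the assumption that the request time is the last field on each line are hypothetical; adjust them to the actual log format.

```python
# Minimal sketch: average response time of Googlebot requests from an access
# log. Assumes the request duration in seconds is the last field of each line
# (as with a custom nginx log format); the path is a hypothetical example.
import re
from statistics import mean

LOG_PATH = "/var/log/nginx/access.log"

def googlebot_avg_response_ms(log_path: str = LOG_PATH) -> float | None:
    durations_ms = []
    with open(log_path, encoding="utf-8", errors="replace") as f:
        for line in f:
            if "Googlebot" not in line:
                continue
            match = re.search(r"(\d+\.\d+)\s*$", line)  # trailing request time
            if match:
                durations_ms.append(float(match.group(1)) * 1000.0)
    return mean(durations_ms) if durations_ms else None

if __name__ == "__main__":
    avg = googlebot_avg_response_ms()
    print(f"Average Googlebot response time: {avg:.1f} ms" if avg is not None
          else "No Googlebot requests found")
```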
Changing the rendering and ranking
Q. There might be a couple of reasons why a website doesn't recover from a drop in rankings after switching from client-side to server-side rendering.
- (36:22) John says there might be two things at play whenever a website sees a drop in rankings. On the one hand, it might be that with the change of infrastructure, the website's layout and structure changed as well. That could include things like internal linking, and maybe even which URLs are findable on the website; those kinds of things can affect ranking. The other possibility is that there were changes in ranking overall that just happened to coincide with when the technical changes were made.
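One practical way to check the first point, using hypothetical tooling rather than anything Google provides, is to list the internal links present in the server-rendered HTML of a page and compare that list before and after the infrastructure change. A minimal sketch, assuming a publicly fetchable example page:

```python
# Minimal sketch: collect the internal links visible in the raw server-rendered
# HTML of a page, so the set of findable URLs can be compared before and after
# a rendering change. The example URL is hypothetical.
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkCollector(HTMLParser):
    def __init__(self) -> None:
        super().__init__()
        self.hrefs: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.hrefs.append(href)

def internal_links(page_url: str) -> set[str]:
    with urllib.request.urlopen(page_url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = LinkCollector()
    parser.feed(html)
    host = urlparse(page_url).netloc
    return {urljoin(page_url, h) for h in parser.hrefs
            if urlparse(urljoin(page_url, h)).netloc == host}

if __name__ == "__main__":
    for link in sorted(internal_links("https://www.example.com/")):
        print(link)
```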
HREF tag
Q. The image itself doesn't carry much weight as an anchor; its Alt text is treated the same way as anchor text associated directly with the link.
- (40:33) With regards to the image itself, Google would probably not find a lot of value in it as an anchor text. If there is an Alt text associated with the image, Google treats it essentially the same as any anchor text associated with the link directly. From Google's point of view, the Alt text is essentially converted into text on the page and treated in the same way. It's not that one or the other has more value; they're basically equivalent from Google's side, and the order doesn't matter much, probably not at all, since both are essentially just text on the page. However, one thing John advises against is removing the visible text purely on the basis that the Alt text is treated the same way, because other search engines might not see it that way, and for accessibility reasons it can also make sense to keep the visible text (see the sketch after this list).
- So it's not about blindly trimming things down to a minimum, but rather knowing that there's no loss in having both of them there.
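To make the equivalence concrete, here is a minimal sketch that extracts the "effective anchor text" of a link that wraps an image, treating the image's Alt attribute the same way as the visible link text. The HTML snippet and class name are hypothetical examples, not a representation of Google's actual processing.

```python
# Minimal sketch: treat an image's alt attribute like the visible link text
# when collecting the anchor text of a link. The HTML is a hypothetical example.
from html.parser import HTMLParser

class AnchorTextExtractor(HTMLParser):
    def __init__(self) -> None:
        super().__init__()
        self.in_link = False
        self.parts: list[str] = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a":
            self.in_link = True
        elif tag == "img" and self.in_link and attrs.get("alt"):
            self.parts.append(attrs["alt"])  # alt text counts like anchor text

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False

    def handle_data(self, data):
        if self.in_link and data.strip():
            self.parts.append(data.strip())  # visible link text

html = '<a href="/red-shoes"><img src="shoes.jpg" alt="red running shoes"> Red shoes</a>'
extractor = AnchorTextExtractor()
extractor.feed(html)
print(" ".join(extractor.parts))  # -> red running shoes Red shoes
```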
Moving Domains
Q. There are a couple of things that can be done to ensure that moving from one domain to another takes the value of the old domain with it.
- (42:04) There are two things related to moving from one domain to another. On the one hand, if there's a move from one website to another, redirects are used to move things over, and tools such as the Change of Address tool in Search Console are used, then that helps Google understand that everything from the old domain should just be forwarded to the new one.
- The other aspect is on a per-page basis. Google also tries to look at canonicalisation, and for that it looks at a number of different factors. On the one hand, redirects play a role, things like internal linking play a role, and the rel="canonical" on the pages plays a role, but external links also play a role. So what could happen, probably in more edge cases, is that if Google sees a lot of external links going to the old URL, and maybe even some internal links going to the old URL, it may actually index the old URL instead of the new one, because from Google's point of view the old URL starts to look like the right one to show, and the new one looks more like a temporary thing. Because of this, what John recommends for a migration from one domain to another is not only to set up the redirects and use the Change of Address tool, but also to go and find the larger websites that were linking to the previous domain and see whether they can update those links to the new domain.
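A simple supporting check during such a migration, sketched below with hypothetical old and new domains and paths, is to confirm that each old URL returns a 301 pointing at the matching URL on the new domain.

```python
# Minimal sketch: confirm that old-domain URLs 301-redirect to the new domain.
# Domains and paths are hypothetical examples.
import http.client
from urllib.parse import urlparse

OLD = "https://old-domain.example"
NEW = "https://new-domain.example"
PATHS = ["/", "/about", "/products/widget"]

def first_hop(url: str) -> tuple[int, str]:
    """Return the status code and Location header of the first response only."""
    parts = urlparse(url)
    conn = http.client.HTTPSConnection(parts.netloc, timeout=10)
    conn.request("HEAD", parts.path or "/")
    resp = conn.getresponse()
    return resp.status, resp.getheader("Location") or ""

if __name__ == "__main__":
    for path in PATHS:
        status, location = first_hop(OLD + path)
        ok = status == 301 and location.startswith(NEW)
        print(f"{path}: {status} -> {location or '(no redirect)'} {'OK' if ok else 'CHECK'}")
```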
Robots.txt and indexing
Q. If pages blocked by robots.txt are still indexed, it is not necessary to put a noindex tag on them.
- (44:25) If a URL is blocked by robots.txt, Google doesn't see any of the meta tags on the page, and it doesn't see the rel="canonical" on the page, because it doesn't crawl the page at all. So for a rel="canonical" or a noindex on the page to be taken into account, the page needs to be crawlable (see the sketch after this list).
- The other aspect is that these pages may still get indexed while blocked by robots.txt, but they're indexed without any of the content because Google can't crawl them. Usually, that means these pages don't show up in the search results anyway. If someone searches for a product that is sold on the website, Google is not going to dig around to see whether there's also a page blocked by robots.txt that would be relevant, because there are already good pages on the website that can be crawled and indexed normally and that Google can show. On the other hand, if a site: query is done for that specific URL, the URL might be seen in the search results without any content. So a lot of the time this is more of a theoretical problem than a practical one: theoretically these URLs can get indexed without content, but in practice they're not going to cause any problems in search. And if they are showing up for practical queries on the website, then most of the time that's a sign that the rest of the website is really hard to understand.
- So if someone searches for one of the product types and Google shows one of these roboted category or facet pages, that would be a sign that the visible content on the website is not sufficient for Google to understand that the normal pages, which could have been indexed, are actually relevant here.
- So the first step is to figure out whether normal users see these pages when they search normally. If they don't, then that's fine and it can be ignored. If they do see these pages when searching normally, that's a sign that it may be worth focusing on other things, on the rest of the website.
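The dependency described in the first bullet can be checked locally: if a URL is disallowed for Googlebot in robots.txt, any noindex or rel="canonical" on that page cannot be seen. A minimal sketch, using hypothetical example URLs:

```python
# Minimal sketch: check whether a URL is crawlable for Googlebot at all, since
# a noindex or rel="canonical" only takes effect on pages Google can fetch.
# The URLs are hypothetical examples.
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()

url = "https://www.example.com/facets/shoes?colour=red"
if rp.can_fetch("Googlebot", url):
    print("Crawlable: a noindex meta tag on this page can be seen and applied")
else:
    print("Blocked by robots.txt: Google cannot see a noindex or canonical here")
```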
Google Favicon
Q. Google picks up redirects of the homepage or favicon file.
- (47:04) If the homepage is redirected, or if the favicon file is redirected to a different part of the website, Google should be able to pick that up. Practically, Google would follow that redirect but would probably still index it as the homepage anyway. So from a practical point of view, if someone searches for the name of the website, Google would probably show the root URL even though it redirects to a lower-level page.
Product Review Images
Q. To stand out in terms of images in product reviews, using original photos is ideal.
- (52:49) The person asking the question wonders whether, to stand out in terms of product review images, it is okay to use photoshopped versions of images found online, or whether it is better to upload original photos. John says that Google's guidelines for reviews recommend focusing on unique photos of the products rather than artificial review photos. He doesn't think Google's systems would automatically recognise that, but it's something Google would probably look at, at least manually, from time to time.
Sign up for our Webmaster Hangouts today!