Landing Page, Hreflang
Q. If you have a doorway page to different country options, set it up as the x-default (as part of the hreflang tags) so Google uses that URL where there isn’t a geo-targeted landing page
- (00:40) Some websites with multiple versions for different regions and languages can run into a problem where the selector page, from which users are supposed to be directed according to their geographic and language settings, gets picked up by Google as the main landing page. John suggests that hreflang is the right approach in these situations, making sure that the default “landing page” is annotated as the x-default. Google then understands that this page is part of the set of pages. If an x-default isn’t specified – the other pages having real content while this one is essentially a doorway – Google will treat it as a separate page. Google then effectively has a choice between showing a content page or the main/global home page, and it might show the latter. So the x-default annotation is placed on the directory page, i.e. the default page that applies when none of the specific country versions apply.
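To illustrate, here is a minimal sketch of such an hreflang set, assuming a hypothetical site example.com with a country-selector page at the root and two country versions under /us/ and /de/. These annotations go in the `<head>` of every page in the set, and each page must list the full set, including itself:

```html
<!-- Country/language versions of the page -->
<link rel="alternate" hreflang="en-us" href="https://example.com/us/" />
<link rel="alternate" hreflang="de-de" href="https://example.com/de/" />
<!-- The country-selector (doorway) page, used when no specific version matches -->
<link rel="alternate" hreflang="x-default" href="https://example.com/" />
```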
Noindex Pages, Google’s Evaluation
Q. Google doesn’t take noindexed pages into account when evaluating a website
- (04:39) Google doesn’t take noindexed pages into account. It focuses on the content it has indexed for a website, and that is the basis for all of its quality updates. Google doesn’t show noindexed pages in search and doesn’t use them to promise anything to users who are searching, so from Google’s point of view it’s up to the website’s owner what they do with those pages. The other point is that if Google doesn’t have these pages indexed and has no data for them, it can’t aggregate any of that data for its systems across the website. From that point of view, if pages are noindexed, Google doesn’t take them into account.
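For reference, a page is typically kept out of Google’s index with a robots meta tag in its `<head>` (the equivalent X-Robots-Tag HTTP header works as well):

```html
<meta name="robots" content="noindex">
```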
Country-Code Top-Level Domain
Q. A country-code top-level domain does play a role in rankings
- (09:19) The country-code top-level domain is used as a factor in geotargeting: if someone is looking for something local and Google knows the website is focused on that local market, Google will try to promote that website in the search results. It relies on the top-level domain when it’s a country-code one; if the domain is generic, it checks the International Targeting settings in Search Console to see whether a country is specified there. So for a website on a generic top-level domain, John advises focusing on a specific country by setting that up in Search Console. Google uses this for queries where it can tell the user is looking for something local. For example, someone searching for a washing machine repair manual probably isn’t looking for something local, whereas someone searching for washing machine repair probably is. So it makes sense to look at the website and consider whether it’s worth targeting these local queries, or instead covering the broader range of people searching globally.
Google Update on Titles
Q. Google changing titles happens on a per-page basis and is purely algorithmic, and rearranging things on the page can help
- (12:43) One of the big changes regarding titles is that they are no longer tied to the individual query – titles are now chosen on a per-page basis. On the one hand, this means titles don’t adapt dynamically, so they’re a little easier to test. On the other hand, it also means it’s easier for website owners to try things out: they can change something on a page, request reindexing in Search Console, and then see how it looks in the Google search results. Because of that, John suggests experimenting. Where the titles shown for a page look strange or messy, try a different approach and see what works for that type of content; based on that, it’s easier to expand the change to the rest of the website.
It’s not the case that Google has any manual list deciding how to display a title – it’s all algorithmic.
Title Tags
Q. Although titles are a minor ranking factor, it’s more about what’s on the page itself
- (15:37) Google uses titles as a tiny factor in rankings. That’s why John says that although it’s better not to write titles that are irrelevant to what’s on the page, it’s not a critical issue if the title Google shows in the search results doesn’t match the page. From Google’s perspective that’s perfectly fine, and it uses what is on the page when it comes to search. Other things, like the company name and the choice of separators, are more a matter of personal taste and decoration. The one consideration is that users like to understand the bigger picture of where a page fits, so it sometimes makes sense to include the company or brand name in the title (the titles shown in the search results are called title links now).
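As an illustration, a common pattern – only a stylistic convention, not a Google requirement – is a page-specific description followed by the brand name (the brand here is made up):

```html
<title>Washing Machine Repair Guide | Acme Appliances</title>
```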
Disavow file on backlinks
Q. The disavow tool is purely a technical tool – there isn’t any kind of penalty or black flag associated with it
- (17:59) The disavow tool can be used whenever there are links pointing at the website that the owner doesn’t want Google to take into account; it doesn’t signal to Google that the owner created those links. There isn’t any penalty, black flag, or mark associated with the disavow tool – it’s just a technical tool that helps manage the external associations of the website.
With regard to Google Search, in most cases there is no need to disavow random links coming to the website. But where the owner knows they definitely didn’t create certain links, and is concerned that someone from Google manually looking at the website might assume they did, it can make sense to use the disavow tool. Using it doesn’t mean the owner is admitting to playing link games in the past; again, for Google it’s purely technical.
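For context, a disavow file is a plain UTF-8 text file uploaded through the Disavow Links tool in Search Console, with one URL or domain per line; the domains below are placeholders:

```text
# Lines starting with # are comments.
# Disavow a single linking page:
http://spam.example.com/paid-links/page.html
# Disavow all links from an entire domain:
domain:link-farm.example.net
```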
Manual Action
Q. Once a manual action is resolved, the website goes back to being treated like any other website. Google doesn’t memorise past manual actions and judge websites on that basis.
- (19:57) John explains that, in general, once the manual action on a website is resolved and the issue is cleaned up, Google treats the website as it would treat any other website. There is no memory in the system that would recall the manual action having taken place and view the website as a shady one in the future.
For some kinds of issues it does take a little longer for things to settle down, simply because Google has to reprocess everything associated with the website, and that takes time. That doesn’t mean there is some kind of grudge in the algorithms holding everything back.
Same Content in Different Languages
Q. The same content in different languages isn’t perceived as duplicate content by Google, but there are still things to double-check for a website run in several languages
- (22:12) Anything that is translated is treated as completely different content; Google would definitely not call it duplicate just because it’s a translated version of a piece of content. From Google’s point of view, duplicate content is about the words and everything actually matching – in such cases it might pick one of the pages to show and leave the other out. Translated pages, however, are completely different pages. The ideal configuration is to use hreflang between these pages on a per-page basis, to make sure users don’t land on the version in the wrong language. Whether that’s needed can be checked in the Search Console Performance report: look at the queries that reach the website, especially the top queries, estimate which language each query is in, and check which pages were shown in the search results or visited from there. That shows whether Google is already surfacing the right pages. If it is, there is no need to set up hreflang; if it’s showing the wrong pages, hreflang annotations would definitely help.
This is usually an issue with generic queries, such as a company name, because Google may not be able to tell which language the user is searching in and may show the wrong page.
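Besides per-page link tags in the `<head>`, Google also documents a sitemap-based way to declare these pairs, which can be easier to maintain on large sites. A minimal sketch for two hypothetical language versions of one page (every URL must list the full set, including itself):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>https://example.com/en/about</loc>
    <xhtml:link rel="alternate" hreflang="en" href="https://example.com/en/about"/>
    <xhtml:link rel="alternate" hreflang="fr" href="https://example.com/fr/a-propos"/>
  </url>
  <url>
    <loc>https://example.com/fr/a-propos</loc>
    <xhtml:link rel="alternate" hreflang="en" href="https://example.com/en/about"/>
    <xhtml:link rel="alternate" hreflang="fr" href="https://example.com/fr/a-propos"/>
  </url>
</urlset>
```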
Copying Content
Q. Different factors come into play when deciding whether and how to take down content copied from another website
- (28:34) Some websites don’t care about things such as copyright; they take content from other people and republish it. Handling that is nuanced and involves several considerations.
The first thing for a site owner who sees their content has been copied is to consider whether this is a critical issue for the website right now. If it is, John advises looking into legal remedies, such as a DMCA takedown request.
Some other things also come into play when content gets copied. Sometimes copies are relevant in the sense that they’re not pure one-to-one copies: someone takes a section of a page and writes about that content, creating something bigger and newer. That can often be seen with Google’s own blog posts – other sites will republish the whole post or large sections of it, but add lots of commentary trying to explain what Google actually means or what’s being said between the lines. On the one hand they’re copying Google’s content; on the other, they’re creating something useful, and they can appear in the search results too, providing a slightly different value than the original.
The person asking the question wondered whether Google takes into account when content was indexed and can see that the original was there earlier. John points out that, in the past, spammers and scrapers were sometimes able to get content indexed almost faster than the original source. If Google focused purely on that factor, it could accidentally favour those who are technically better at pushing content into Google over those who simply publish naturally.
Therefore, Google tries to look at the bigger picture: if it sees that a website regularly copies content from other sources, it’s much easier to conclude that the site isn’t providing much unique value of its own, and Google will treat it accordingly.
Finally, if another website copying content really causes problems, spam reports can be submitted to let Google know about these kinds of issues.
Social Media Presence and SEO
Q. Social media presence doesn’t affect the SEO side of a website, except insofar as the social media page is itself a web page
- (34:54) For the most part, Google doesn’t take social media activity into account for rankings. The one exception is that Google often sees social media pages as normal web pages: if they have actual content on them, with links to other pages, Google treats them like any other web page. For example, if a social media profile links to individual pages on the website, and those links are normal HTML links that Google can follow, it will follow them like any other links. And if the profile page itself is a normal HTML page, it can be indexed and rank in the search results like anything else.
So it’s not that Google does anything special for social media sites or profiles; rather, in many cases these profiles and pages are normal HTML pages, and Google can process them like any other HTML page. Google wouldn’t look at a profile, see that it has many likes, and therefore rank the pages associated with that profile higher. It’s about the page being an HTML page with some content, perhaps linked together with other HTML pages; based on this, Google gets a better understanding of that group of pages. Those pages can rank individually, but not on the basis of social media metrics.
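The practical detail here is that links need to be plain HTML anchors for Google to follow them. A small sketch of the difference (the URL is hypothetical):

```html
<!-- Crawlable: a plain anchor with an href that Google can follow -->
<a href="https://example.com/products/widget">Our widget</a>

<!-- Not reliably crawlable: navigation happens only via JavaScript, no href -->
<span onclick="location.href='https://example.com/products/widget'">Our widget</span>
```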
Penguin Penalty
Q. For Google to lose trust in a website, it takes a strong pattern of spammy links rather than a few individual links
- (37:06) For the most part, Google can recognise and simply ignore or isolate problematic links. But if there is a strong pattern of spammy links across a website, the algorithms can lose trust in that website; based on the bigger picture on the web, Google then has to take an almost conservative stance when it comes to understanding the website’s content and ranking it in the search results, and there can be a drop in visibility. That said, the web is pretty messy, and Google recognises that it has to ignore a lot of the links out there.
Zero Good URLs
Q. If Google doesn’t have data on a website’s Core Web Vitals, it can’t take them into account for ranking
- (45:50) When the “0 good URLs” problem occurs, a couple of things can be at play. Google doesn’t have data for all websites – Core Web Vitals in particular rely on field data. Field data is what people actually experience when using the website, as reported back through Chrome (on mobile, for example). Google needs a certain amount of data before it can understand what the individual metrics mean for a particular website. When there is no data at all in Search Console for the individual Core Web Vitals metrics, that usually means there isn’t enough field data at the moment, and from a ranking point of view Google can’t take it into account. That could be the reason for the “0 good URLs” issue – Google simply has zero URLs that it’s tracking for Core Web Vitals on this particular website.
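Where field data is thin, site owners can at least collect their own with Google’s open-source web-vitals JavaScript library. A minimal sketch, assuming the v3 IIFE build, which exposes a webVitals global (check the library’s README for the current API):

```html
<script src="https://unpkg.com/web-vitals@3/dist/web-vitals.iife.js"></script>
<script>
  // Log each metric once it becomes available; in practice these values
  // would be sent to your own analytics endpoint instead.
  webVitals.onLCP(console.log);
  webVitals.onCLS(console.log);
  webVitals.onINP(console.log);
</script>
```

Note that this measures your own visitors; it doesn’t change what Google sees, which comes from Chrome’s aggregated field data.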
Web Stories
Q. For a page to appear in the Web Stories, it has to be integrated within the website as a normal HTML page and have some amount of textual information
- (47:54) When it comes to appearing as Web Stories, there are two aspects to consider. On the one hand, Web Stories are normal pages – they can appear in the normal search results. From a technical point of view they’re built on AMP, but they’re normal HTML pages. That also means they can be linked normally within the website, which is critical for Google to understand that they’re part of the website, and perhaps an important part of it. To show that they’re important, they need to be linked in an important way – for example, from the home page or from other pages that matter for the website.
The other aspect is that, since these are normal HTML pages, Google needs to find some text on them that it can use to rank them. That is tricky with Web Stories in particular, because they’re very visual in nature, and it’s tempting to fill them with a video or a large image. When that is done without also providing some textual content, there is very little Google can use to rank these pages.
So, the pages have to be integrated into the website like a normal HTML page and also carry some amount of textual content so that they can be ranked for queries.
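For orientation, here is a heavily trimmed sketch of a Web Story skeleton – real stories need the full AMP boilerplate and valid metadata, and all names and URLs below are made up:

```html
<!DOCTYPE html>
<html amp lang="en">
<head>
  <meta charset="utf-8">
  <title>How to descale a washing machine</title>
  <link rel="canonical" href="https://example.com/stories/descale-washing-machine/">
  <meta name="viewport" content="width=device-width,minimum-scale=1,initial-scale=1">
  <script async src="https://cdn.ampproject.org/v0.js"></script>
  <script async custom-element="amp-story"
          src="https://cdn.ampproject.org/v0/amp-story-1.0.js"></script>
  <!-- Required AMP boilerplate CSS omitted for brevity -->
</head>
<body>
  <amp-story standalone title="How to descale a washing machine"
             publisher="Acme Appliances"
             publisher-logo-src="https://example.com/logo.png"
             poster-portrait-src="https://example.com/poster.jpg">
    <amp-story-page id="cover">
      <amp-story-grid-layer template="vertical">
        <!-- Plain text Google can use to understand and rank the story -->
        <h1>How to descale a washing machine</h1>
        <p>Three steps to remove limescale and keep your machine running.</p>
      </amp-story-grid-layer>
    </amp-story-page>
  </amp-story>
</body>
</html>
```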
John suggests checking out the Google Creators channel and blog – there is a lot of content on Web Stories there, including guides for optimising Web Stories for SEO.
Sign up for our Webmaster Hangouts today!