Paid Links

Q. Google’s decision on whether a link is a paid link does not depend solely on someone reporting it as a paid link.

  • (00:42) Google takes a lot of different things into account when deciding whether a link is a paid link, and it does not give every link that it finds full weight. Even if it is not sure, a link can be weighted somewhere in between; a number of things are taken into account there. It’s not just someone reporting a link as paid, because random people on the internet report lots of things that aren’t necessarily true. At the same time, such reports are sometimes useful information. So a lot of things come together with regard to paid links.

Internal Link Placement

Q. The placement of an internal link on the page doesn’t really matter, but the placement of the content itself does.

  • (02:15) For internal links, on the one hand, Google uses them to understand context better, so things like anchor text help it. Another really important part is simply being able to crawl the website, and for that it doesn’t matter where on a page a link is placed. Sometimes links are in the footer, sometimes in the header, sometimes in a shared menu, a sidebar or within the body of content; all of those placements are fine from Google’s point of view. Where Google differentiates more by location on a page is with the content itself, to figure out what is really relevant for a particular page. For that, it sometimes makes sense to focus on the central part of the page, the primary piece of content that changes from page to page, and not so much on the headers, sidebars and footers. Those are part of the website, but they’re not the primary reason for the page to exist or the primary reason for Google to rank it. That’s the distinction Google makes between different parts of the page. Links, by contrast, are mostly used to understand the context of pages and to crawl the website, and for that Google doesn’t need to differentiate between different parts of the page.
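
To illustrate the crawling side of this, the minimal sketch below extracts links from a page the way a crawler discovering URLs might, without caring whether an anchor sits in the header, footer, sidebar or main content. The markup and URLs are invented for illustration.

```python
# Minimal sketch: a link extractor that, like a crawler discovering URLs,
# ignores where on the page a link appears. The page markup is hypothetical.
from html.parser import HTMLParser
from urllib.parse import urljoin


class LinkCollector(HTMLParser):
    """Collects every href it sees, whether in header, footer, sidebar or body."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = set()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.add(urljoin(self.base_url, value))


page_html = """
<header><a href="/">Home</a></header>
<main><p>See our <a href="/guide">guide</a>.</p></main>
<footer><a href="/contact">Contact</a></footer>
"""

collector = LinkCollector("https://example.com/")
collector.feed(page_html)
print(sorted(collector.links))
# ['https://example.com/', 'https://example.com/contact', 'https://example.com/guide']
```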

Product Review Websites

Q. There are no strict guidelines or a checklist that a website has to match to be classified as a product review website.

  • (04:34) The person asking the question has a fashion website on which he started linking to products in different stores after his readers began asking where they could buy the items featured in his articles; he is now unsure whether his website would be classified as a product review website. John says he doesn’t think Google would differentiate that much with these kinds of websites; there is no binary decision on which type of website something is. From his point of view, it sounds like the website has some review content and informational content, as well as some affiliate content, and all of these things are fine. It’s not the case that website owners have to pick one type of website and ensure that everything on their website fits some set of criteria exactly. In most cases on the web there is a lot of grey area between the different types of websites, and from Google’s point of view that’s fine and expected. John says it shouldn’t really be worrisome whether Google would consider the website a product review website. Essentially, it’s good to use the information Google gives for product review websites, but there is no checklist that has to be fulfilled for a website to be classified exactly as a product review website.

Local Directories

Q. Making sure that business information is correct and consistent across different websites and tools helps Google avoid getting confused.

  • (07:41) John is not sure how having the exact same business information across the web plays into Google Business Profile and local listings specifically. One area where he has seen something in that direction, which might not be perfectly relevant for local businesses, is Google recognising the entity behind a website or a business in general. For that, it sometimes does help to make sure Google gets consistent information, so that it can recognise the information as correct because it found it in multiple places on the web. Usually this plays more into the knowledge graph and knowledge panel side of things: if Google can understand that this is the entity behind the website, and there are mentions of that entity in different places with consistent information, then Google can trust that information. Whereas if Google finds conflicting information across the web, that is harder. For example, if local business structured data with opening hours or phone numbers is marked up on the website, and local profiles elsewhere list conflicting details, Google has to make a judgment call and doesn’t know what is correct. In those kinds of situations it’s easy for Google’s systems to get confused and use the wrong information. If website owners find a way to consistently provide the correct information everywhere, it’s a lot easier for Google to determine what the correct information is.
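
One practical way to keep things consistent is to maintain the business details in a single place and generate the on-site markup from it. The sketch below does that with schema.org LocalBusiness structured data; the business details are made up, and the exact set of fields a real site needs is an assumption.

```python
# Sketch: keep the business details in one place and generate the
# schema.org LocalBusiness markup from it, so the website never disagrees
# with what is published in local listings. The business data is invented.
import json

BUSINESS = {
    "name": "Example Bakery",
    "telephone": "+41 44 000 00 00",
    "streetAddress": "Bahnhofstrasse 1",
    "addressLocality": "Zurich",
    "postalCode": "8001",
    "addressCountry": "CH",
    "openingHours": "Mo-Fr 08:00-18:00",
}


def local_business_jsonld(b: dict) -> str:
    """Render the single source of truth as JSON-LD for the website."""
    data = {
        "@context": "https://schema.org",
        "@type": "LocalBusiness",
        "name": b["name"],
        "telephone": b["telephone"],
        "openingHours": b["openingHours"],
        "address": {
            "@type": "PostalAddress",
            "streetAddress": b["streetAddress"],
            "addressLocality": b["addressLocality"],
            "postalCode": b["postalCode"],
            "addressCountry": b["addressCountry"],
        },
    }
    return '<script type="application/ld+json">\n%s\n</script>' % json.dumps(data, indent=2)


print(local_business_jsonld(BUSINESS))
```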

Links

Q. Linking back to a website that has linked to you is not viewed as a link scheme, as long as all the rules are followed.

  • (11:24) The person asking the question has a few websites linking to his website. He doesn’t know whether he is getting any value from that, but assuming he is, he wonders whether linking back to those websites, following all the rules and not doing any link exchanges that violate the guidelines, would cause him to lose some of that value. John says that it is perfectly fine and natural, especially if this is a local business linking to its neighbours, or if the website is mentioned in the news somewhere and the person mentions that on his website. Essentially, it is linking back and forth: a reciprocal link, which is a natural kind of link. It’s not there because of some crazy link scheme. If it’s done naturally and there isn’t any weird deal behind the scenes, it should be fine.

Cleaning Up Website After Malware Attacks

Q. There are a few things that can be done to get unwanted pages to drop out of the index after a hacker attack on a website.

  • (24:10) The person asking the question has experienced a malware attack on his website that resulted in lots of pages being indexed that he doesn’t want indexed. He has cleaned up after the attack, but the results of the malware are still being shown, and he can’t use the temporary removal tool because there are too many URLs. John suggests first double-checking that the pages he removed were actually removed. Some types of website hacks are set up so that the pages look removed when checked manually but are actually still served to Google; this can be verified with the URL Inspection tool (a rough spot-check is also sketched after these notes). Then, for the rest, there are two approaches. On the one hand, the best approach is to make sure that the more visible pages are manually removed: searching for the company name, the website name, primary products and so on, seeing which pages show up in the search results, and making sure that anything that shouldn’t be shown is not shown. Usually that results in maybe up to 100 URLs, for which the website owner can say these are hacked and need to be removed as quickly as possible. The removal tool takes those URLs out within about a day.
  • The other part is the remaining URLs – they will be recrawled over time, but when it comes to lots of URLs on a website, that usually takes a couple of months. On the one hand, those could simply be left alone, as they are not visible to people unless they explicitly search for the hacked content or do a site: query on the website; they will drop out over time, within about half a year, and can be double-checked afterwards to see whether they’re actually completely cleaned up. If this needs to be resolved as quickly as possible, the removal tool with the prefix setting can be used: it essentially finds a common prefix for these pages, which might be a folder name or a filename at the beginning of the URL, and filters those out. The removal tool doesn’t take them out of Google’s index, so it doesn’t change anything for ranking, but it stops them from being shown in the search results.
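
As a rough spot-check after a cleanup, one might fetch a handful of formerly hacked URLs with both a normal and a Googlebot user agent and confirm they now return 404 or 410, since some hacks only serve the spam content to crawlers. This is only a sketch with invented URLs; Search Console’s URL Inspection tool remains the authoritative check.

```python
# Rough sketch: spot-check that cleaned-up URLs really return 404/410,
# both for a normal client and for a Googlebot user agent (some hacks
# only show the spam content to crawlers). URLs are placeholders.
import requests

GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")

urls_to_check = [
    "https://example.com/cheap-watches.html",
    "https://example.com/payday-loans.html",
]

for url in urls_to_check:
    for label, headers in (("browser", {}), ("googlebot-ua", {"User-Agent": GOOGLEBOT_UA})):
        resp = requests.get(url, headers=headers, allow_redirects=False, timeout=10)
        removed = resp.status_code in (404, 410)
        status = "looks removed" if removed else "still serving content?"
        print(f"{url} [{label}]: HTTP {resp.status_code} - {status}")
```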

Emojis

Q. Using emojis in title tags and meta descriptions doesn’t really affect anything.

  • (33:04) One can definitely use emojis in the titles and descriptions of pages. Google doesn’t show all of them in the search results, especially if it thinks they disrupt the search results, for example by looking misleading. But emojis don’t cause any problems, so it’s okay to keep them there. John doesn’t think they give any significant advantage either: at most, Google tries to figure out the textual equivalent of the emoji and may associate that word with the page as well, but the website won’t get an advantage from that. Emojis don’t harm or help SEO in any way.

API and Search Console

Q. The API and the Search Console UI take their data from the same source but present it a little bit differently.

  • (34:15) The data in the API and the data in the UI are built from the exact same database tables, so there is no more in-depth or more accurate data in the API than in the UI. The main difference between the API and the UI is that the API lets more rows of examples be retrieved when downloading things, which is sometimes useful. The other thing that is perhaps a little bit confusing is that a report in Search Console shows numbers on top giving the overall clicks and impressions, while the data provided in the API is essentially the individual rows visible in the table below that overall data. For privacy and various other reasons, Google filters out queries that have very few impressions. So the number on top in the Search Console UI includes the aggregate full count, but the rows shown there don’t include the filtered queries. What can happen is that the overall total in Search Console is a different number than the total obtained by adding up all of the rows from the API. That is a little bit confusing at first, but essentially it’s the same data presented in a slightly different way in the API.
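
A hedged sketch of the difference described above: pulling per-query rows from the Search Analytics API and summing them will typically undercount the aggregate totals shown in the Search Console UI chart, because rows for rare or anonymised queries are filtered out. The site URL and credentials file are placeholders, and the google-api-python-client and google-auth packages are assumed to be installed.

```python
# Sketch: sum the per-query rows returned by the Search Analytics API.
# Expect the sums to be lower than the totals chart in the Search Console UI,
# because filtered (rare/anonymised) queries have no rows in this data.
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Placeholder credentials file; any OAuth flow with the webmasters.readonly
# scope works just as well.
creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("searchconsole", "v1", credentials=creds)

response = service.searchanalytics().query(
    siteUrl="https://example.com/",  # placeholder property
    body={
        "startDate": "2022-05-01",
        "endDate": "2022-05-31",
        "dimensions": ["query"],
        "rowLimit": 25000,  # the API exposes more example rows than the UI table
    },
).execute()

rows = response.get("rows", [])
total_clicks = sum(row["clicks"] for row in rows)
total_impressions = sum(row["impressions"] for row in rows)
print(f"{len(rows)} query rows, {total_clicks} clicks, {total_impressions} impressions")
```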

FAQ In Rich Results

Q. There are three criteria that FAQ markup needs to meet to have a chance of being featured as a rich result.

  • (36:15) FAQ rich results are similar to other types of rich results in that there are several levels Google takes into account before it shows them in the search results. On the one hand, the markup needs to be technically correct. On the other hand, it needs to be compliant with Google’s policies; John doesn’t think there are any significant policies around FAQ rich results other than that the content should be visible on the page. The third thing that sometimes comes into play is that Google needs to be able to see the website as trustworthy, so that it can trust the data to be correct. From a quality point of view, Google may not be convinced about a website and then won’t show the rich results. Those are the three levels to look at for FAQ rich results.
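
For the “technically correct” and “content visible on the page” parts, a common approach is to generate the FAQPage markup from the same question/answer pairs that are rendered visibly on the page, so the structured data never drifts from the visible content. The sketch below does this; the FAQ content is invented for illustration.

```python
# Sketch: build schema.org FAQPage markup from the question/answer pairs
# that are also rendered visibly on the page. The FAQ content is made up.
import json

faqs = [
    ("Do you ship internationally?", "Yes, we ship to most countries in Europe."),
    ("What is the return window?", "You can return items within 30 days."),
]

faq_jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faqs
    ],
}

print('<script type="application/ld+json">')
print(json.dumps(faq_jsonld, indent=2))
print("</script>")
```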

Seasonal Pages

Q. Both removing seasonal pages when they are no longer relevant and leaving them in place are okay. The thing to remember is to use the same URL every year.

  • (37:38) On Google’s side, it’s totally up to the website owner how to deal with seasonal pages. Keeping the pages there is fine, and removing them after a while is fine if they’re no longer relevant. Essentially, traffic to these pages will go down when it’s not the season, but no one is missing out on any impressions there. If the pages are made noindex or return 404 for a while and are then brought back later, that’s perfectly fine. The one thing to watch out for is to reuse the same URL year after year: instead of having a page called Black Friday 2021 and then Black Friday 2022, it’s better to just have Black Friday. That way, if the page is reused, all of the signals associated with that page over the years will continue to work in the website’s favour. That’s the main recommendation there. It’s okay to delete these pages when they’re not needed and recreate the same URL later, and it’s okay to keep the pages for a longer period of time.
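
A minimal sketch of the “same URL every year” idea, assuming a Flask app: one stable /black-friday route that serves the landing page during the campaign window and returns 404 outside it, instead of a new year-specific URL each time. The route and dates are illustrative only.

```python
# Minimal sketch (assumes Flask is installed): one stable URL, /black-friday,
# reused every year. Off-season it returns 404 rather than moving to a new
# year-specific URL, so the URL's signals carry over to the next season.
from datetime import date

from flask import Flask, abort

app = Flask(__name__)

CAMPAIGN_START = date(2022, 11, 1)
CAMPAIGN_END = date(2022, 12, 1)


@app.route("/black-friday")
def black_friday():
    today = date.today()
    if not (CAMPAIGN_START <= today <= CAMPAIGN_END):
        # Off-season: a 404 is fine; the same URL comes back next year.
        abort(404)
    return "<h1>Black Friday deals</h1>"


if __name__ == "__main__":
    app.run()
```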

CLS Scores

Q. There are no fixed rankings for CLS scores.

  • (40:19) There’s no fixed number with regards to how strongly CLS scores count for websites. From Google’s point of view, these metrics are taken into account as part of the Core Web Vitals and the page experience ranking factor, and Google tries to look at them overall. Google focuses especially on whether the website is in a reasonable range for these scores: if a website is not in the “poor” or “bad” section, then Google decides it’s reasonable to take those scores into account. Google doesn’t have any fixed ranking or algorithmic function where it would take ½ of FCP and ½ of CLS and weight ⅓ of that.
  • It’s really something where Google needs to look at the bigger picture.
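
For orientation, the small helper below buckets a CLS value using the publicly documented Core Web Vitals thresholds (good at 0.1 or below, poor above 0.25). As the answer above stresses, there is no fixed ranking formula behind these buckets; this is just the assessment scale site owners usually aim for.

```python
# Small helper using the publicly documented CLS thresholds:
# 0.1 or below is "good", above 0.25 is "poor", in between "needs improvement".
def classify_cls(cls_value: float) -> str:
    if cls_value <= 0.1:
        return "good"
    if cls_value <= 0.25:
        return "needs improvement"
    return "poor"


for value in (0.05, 0.18, 0.4):
    print(f"CLS {value}: {classify_cls(value)}")
# CLS 0.05: good / CLS 0.18: needs improvement / CLS 0.4: poor
```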

Geo-redirects

Q. Geo-redirects make it hard for Google to find and index content.

  • (53:26) Geo-redirects have a negative impact on content being indexed, and that applies to all kinds of websites. From Google’s point of view, geo-redirects are usually more a matter of making it technically hard for Google to crawl the content. Googlebot usually crawls from one location, mostly the US, so if users from the US are redirected to a different version of the website, Googlebot will simply follow that redirect too. It’s less a matter of quality signals or anything like that; it’s more that if Google can’t see the web pages, it can’t index them. That’s essentially the primary reason why John says they don’t recommend doing these things. Maybe some big websites redirect some users and not others, and maybe Googlebot is not being redirected – that’s possible. From Google’s point of view, it doesn’t do them any favours, because it usually ends up in a situation where there are multiple URLs with exactly the same content in the search results, and the website is competing with itself. Google sees it as a website duplicating its content in multiple locations; it doesn’t know which version to rank best and makes a guess.
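
A sketch of the problematic pattern, assuming a hypothetical Flask app: every request geolocated to the US gets 302-redirected to a /us/ version. Because Googlebot crawls almost entirely from US IP addresses, it would be redirected as well and would rarely see the other versions. A gentler alternative is to serve the requested page and only suggest the local version (for example with a banner and hreflang annotations) rather than forcing a redirect.

```python
# Sketch of the problematic pattern: a forced geo-redirect for US visitors.
# Googlebot, crawling from US IPs, gets redirected too, so the non-US version
# of the page is effectively invisible to it.
from flask import Flask, redirect, request

app = Flask(__name__)


def country_for_ip(ip: str) -> str:
    """Placeholder geolocation lookup; always returns US for illustration.
    A real site would use a GeoIP database here."""
    return "US"


@app.route("/products")
def products():
    country = country_for_ip(request.remote_addr)
    if country == "US":
        # Forced geo-redirect: the crawler never gets to index this version.
        return redirect("/us/products", code=302)
    return "<h1>Products (international version)</h1>"


@app.route("/us/products")
def us_products():
    return "<h1>Products (US version)</h1>"


if __name__ == "__main__":
    app.run()
```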
