Search Console Says I’m Ranking #1, but I Don’t See It

Q. (00:56) The question is about average position in Google Search. A few days ago, I realised that over 60 queries show that I am in position #1. I still don’t understand how that works, because when I search for some of those keywords, my site does not appear first in the search results. I have seen some of them coming from the Google Business Profile, but some of them are not really appearing at all. And I don’t really understand why it says that the average position is #1. What does it really mean? Does it sometimes say I am in position #1 and it takes time to appear, or how does it really work?

  • (54:13) The average top position is based on what was actually shown to users. It is not a theoretical ranking where you would appear at #1 if you did the right kind of thing. It is really: “We showed it to users, and it was ranked #1 at that time”. Sometimes it is tricky to reproduce that, which I think makes it a little bit hard, in that sometimes it applies to users in a specific location, or to users searching in a specific way – maybe just on mobile or just on desktop. Sometimes it is something like an image one-box, a knowledge panel, or the local business entry that you mentioned. All of these are also places where a link to your website could appear. If you see something listed as ranking #1, usually what I recommend doing is trying to narrow down what exactly was searched by using the different filters in Search Console – figure out which country it was in and what type of query it was, mobile or desktop – and see if you can reproduce it that way. Another thing that I sometimes do is look at the graph over time for that specific query. If the average position is #1 but the total impressions and total clicks don’t seem to make much sense – you think a lot of people are searching, so you would expect to see a lot of traffic, or at least a lot of impressions – that can also be a sign that you were shown at position #1 in some form, but you aren’t always shown that way. It could be something very temporary or something that fluctuates a little bit, anything along those lines. But it is not the case that it is a theoretical position. You were shown in the search results, and when you were shown, you were shown like this.
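That narrowing down can also be done programmatically with the Search Console API, which exposes the same filters as the performance report. Below is a minimal, hypothetical Python sketch (the property URL, key file, and query string are placeholders; it assumes the google-api-python-client package and a service account with access to the property):

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES  # placeholder key file
)
service = build("searchconsole", "v1", credentials=creds)

# Break one query's data down by country and device to see where the
# position-#1 impressions actually came from.
body = {
    "startDate": "2022-01-01",
    "endDate": "2022-01-28",
    "dimensions": ["country", "device"],
    "dimensionFilterGroups": [{
        "filters": [{
            "dimension": "query",
            "operator": "equals",
            "expression": "your keyword here",  # placeholder query
        }]
    }],
}
resp = service.searchanalytics().query(
    siteUrl="https://www.example.com/", body=body
).execute()

for row in resp.get("rows", []):
    country, device = row["keys"]
    print(country, device, row["impressions"], row["clicks"], row["position"])
```

If one country/device combination shows position #1 with only a handful of impressions, that usually explains why you cannot reproduce the ranking yourself.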

Why Is My Homepage Ranking Instead of an Article?

Q. (05:55) When I check the pages in Search Console, the page that is actually shown is mostly the home page, even though some of the queries the home page is ranking for have their own dedicated pages.

  • (6:21) I guess it is just our algorithms that see your home page as the more important page overall. Sometimes, if you continue to work on your website and make it better and better, it becomes easier for our systems to recognise that actually there is a specific page that is better suited for this particular query. But it is a matter of working on your site. Working on your site shouldn’t just mean generating more and more content; it means actually creating something better. That could be by deleting a bunch of content and combining things together. So improving your website isn’t the same as adding more content.

Does Locating a Sitemap in a Folder Affect Indexing?

Q. (07:11) We have our sitemaps in a subfolder of our website. I have noticed recently that a lot more pages are flagged as ‘Indexed, not submitted in sitemap’. Do you think that might be due to moving the sitemaps into the subfolder? We used to have them in our root, but due to a technology change, we had to move them.

  • (7:30) The location of the sitemaps shouldn’t really matter. It is something you can put in a subdirectory or on a subdomain. It depends on how you submit the sitemap file – for example, you can list it in your robots.txt file. You can put it anywhere; it doesn’t matter.
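For reference, the sitemap location can be declared in robots.txt with a full URL, so a subfolder works fine. A minimal sketch (all URLs are placeholders):

```
# robots.txt at https://www.example.com/robots.txt
User-agent: *
Allow: /

# The Sitemap directive takes an absolute URL, so a subfolder
# (or even a different subdomain) is acceptable.
Sitemap: https://www.example.com/data/sitemaps/sitemap.xml
```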

I Have a New Blog, No Links. Should I Submit in Search Console?

Q. (07:59) I am new to blogging and starting a new blog. It has been an amazing experience to start from scratch – a brand new blog, with no links to it. Would you recommend submitting URLs as you publish them using Google Search Console, and then requesting indexing, for a new blog that has no links to it? Or is there no point, and is it not really helpful?

  • (8:30) It is not that there is any disadvantage to doing that. If it is a new site about which we have absolutely no signals, no information at all, then at least telling us about the URLs is a way of getting an initial foot in the door, but it is not a guarantee that we will pick them up. This is something where you probably know someone else who is blogging, and you can work together and maybe get a link to your site. Something along those lines would probably do a lot more than just going to Search Console and saying, ‘I want this URL indexed immediately’.

How Fast Does the Links Report Update?

Q. (09:22) How long does it typically take for the links to a brand new blog to show up in the Google Search Console ‘Links’ report?

  • (9:53) A lot of the reports in Search Console are recalculated every 3-4 days, so within about a week you should see some data there. The tricky part is that we show a sample of the links to your website, which doesn’t mean that we immediately populate it with every link we have found to your site. It is not a matter of months, and it is not a matter of hours as it is in the performance report. Anything from a day up to about a week is a reasonable time frame.

Should the Data in Search Console Align With Analytics?

Q. (10:56) Google’s documentation states that the data we get from Google Analytics and Google Search Console will not match exactly, but that it should make sense directionally. This means that, for organic search, your clicks should always be below the sessions that you get. Is that understanding correct?

  • (11:17) I guess it depends on what you have set up and what you are looking at specifically. If you are looking at it at the site level, that is probably about right. If you are looking at it at the per-URL level on a very large site, it could happen that individual URLs are not always tracked in Search Console, so you would see slightly different numbers, or you would see changes over time – some days a URL is tracked, and some days it is not, particularly on a very large website. But if you look at the website overall, the figures should be pretty close. Search Console measures what is shown in the search results – the clicks and impressions from there – while Analytics uses JavaScript to track what is happening on the website side. Those tracking methods are slightly different and probably have slightly different ways of deduplicating things, so I would never expect the two to line up completely. However, overall, they should be pretty close.

Why Are Disallowed Pages Getting Traffic?

Q. (16:58) My next question is about my robots.txt file: I have disallowed certain pages in it. But it is quite possible that Google had indexed those pages in the past, before I blocked them. Even though crawling is now disallowed, to this day I see those pages getting organic sessions. Why is that happening, and how can I fix it? I have also read there is something called a ‘noindex’ directive. Is that the right way to go about it, or how should I pursue this?

  • (17:30) If these are pages that you don’t want to have indexed, then using ‘noindex’ would be better than using a disallow in robots.txt. The ‘noindex’ would be a meta tag on the page, though – a robots meta tag with ‘noindex’ – and you would need to allow crawling for that to work.
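As a minimal sketch, the tag goes in the page’s head; there is also an equivalent HTTP header, which is useful for non-HTML files such as PDFs:

```html
<!-- In the <head> of the page you want dropped from the index. -->
<!-- Crawling must be allowed in robots.txt, or Googlebot never sees this tag. -->
<meta name="robots" content="noindex">
```

For non-HTML resources, the server can send the same signal as a response header, e.g. `X-Robots-Tag: noindex`.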

Does Google Use Different Algorithms per Niche?

Q. (23:47) Is it true that Google has different algorithms for the indexing and ranking of different niches? We have two websites of the same type, and we’ve built them with the same process. The only difference is that the two sites are in different niches. Currently, one is working well while the other has lost all its rankings.

  • (24:07) So I don’t think we have anything specific with regards to different niches. But obviously, we are more critical of some kinds of content in our search results than others. If you look at something like our quality rater guidelines, we talk about things like ‘Your Money or Your Life’ sites, where we do work to have somewhat more critical algorithms involved in the crawling, indexing, and ranking of those sites. But it’s not the case that, say, a bicycle shop has completely different algorithms than, I don’t know, a shoe store, for example. They’re essentially both e-commerce-type stores. But the thing that you also mentioned in the question is that these are content aggregator sites, and they’re built with the same process. And some of them work, and some of them don’t. That, to me – I don’t know your sites – feels a bit like low-effort affiliate sites, where you’re just taking content feeds and publishing them. And that’s the kind of thing where our algorithms tend not to be so invested in making sure that we can crawl and index all of that content, because essentially it’s the same content that we’ve already seen elsewhere. So from that point of view, if you think that might apply to your sites, I would recommend focusing on making fewer sites and making them significantly better – so that it’s not just aggregating content from other sources, but you’re actually providing something unique and valuable, in the sense that if we were to not index your website properly, then people on the internet would really miss a resource that provides them with value. Whereas if it’s really the case that, were we not to index your website, people would just go to one of the other affiliate aggregator sites, then there is no real reason for us to focus and invest in crawling and indexing your site. So again, I don’t know your websites, but that’s something I would look into a little bit more, rather than concluding, “Oh, Google doesn’t like bicycle stores; they like shoe stores instead”.

What Counts as a Click in FAQ Rich Snippets?

Q. (26:23) Two related questions. What counts as a click for an FAQ rich snippet? Does Google ever add links to FAQ answers, even if there isn’t one included in the text?

  • (26:29) There is a help centre article on that, which I think is pretty much the definitive source on click, impression, and position counting in Search Console. In general, we count it as a click if it’s really a link to your website and someone clicked on it. And with regards to rich results, I can’t say for sure that we would never add a link to a rich result that we show in the search results – sometimes I could imagine that we do. But it’s not the case that we would say, “Oh, there’s a rich result for this page, therefore we’ll count it as a click”, even though nobody clicked on it. It’s really: if there’s a link there and people click on it and go to your website, that’s what we count. Similarly, for impressions, we count an impression if one of those links to your site was visible in the search results. And it doesn’t matter where on the page it was visible, whether at the top or the bottom of the search results page. If it was theoretically visible to someone on that search results page, we count it as an impression.

Why Do Parameter URLs Get Indexed?

Q. (30:31) Why do parameter URLs end up in Google’s index even though we’ve excluded them from crawling with the robots.txt file and with the parameter settings in Search Console? How do we get parameter URLs out of the index again without endangering the canonical URLs?

  • (30:49) So, I think there’s a general assumption here that parameter URLs are bad for a website, and that’s not the case. It’s definitely not the case that you need to fix the indexed URLs of your website to get rid of all parameter URLs. From that point of view, I would see this as polishing the website a little bit to make it a little bit better, but it’s not something that I would consider critical. With regards to the robots.txt file and the parameter handling tool: usually, the parameter handling tool is the place where you could do these things. My feeling is that the parameter handling tool is a little bit hard for people to find and to use, so personally, I would try to avoid it and instead use the more scalable approach in the robots.txt file. But you’re welcome to use it in Search Console. With the robots.txt file, you essentially prevent the crawling of those URLs; you don’t prevent the indexing of those URLs. That means that if you do something like a site: query for those specific URLs, it’s very likely that you’ll still find those URLs in the index, even without the content itself being indexed. I took a look at the forum thread that you started there, which is great. But there, you also do this fancy site: query where you pull out these specific parameter URLs. And if you’re looking at URLs that you’re blocking by robots.txt, I feel that is a little bit misleading. Just because you can find them if you look for them doesn’t mean that they cause any problems, and it doesn’t mean that there is any kind of issue that a normal user would see in the search results. Just to elaborate a little bit: if there is some kind of term on those pages that you want to be found for, and you have one version of those pages that is indexable and crawlable, and another version of the page that is not crawlable, where we just have that URL indexed by itself, then if someone searches for that term, we would pretty much always show the page that we have actually crawled and indexed. The page that we theoretically also have indexed – because it’s blocked by robots.txt, and theoretically it could also contain that term – is something it wouldn’t really make sense to show in the search results, because we don’t have as much confirmation that it matches that specific query. So from that point of view, for normal queries, people are not going to see those robotted URLs. It’s more that if someone searches for that exact URL, or does a specific site: query for those parameters, then they could see those pages. If it’s a problem that these pages are findable in the search results, I would use the URL removal tool for that, if you can. Or you would need to allow crawling and then use a ‘noindex’ robots meta tag directive to tell us that you don’t want these pages indexed. But again, for the most part, I wouldn’t see this as a problem. It’s not something where you need to fix the indexing. It’s not that we have a cap on the number of pages that we index for a website. It’s essentially that we’ve seen a link to these URLs, we don’t know what is there, but we’ve indexed the URL should someone search specifically for it.
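As a minimal robots.txt sketch of the scalable approach mentioned above (the parameter name sessionid is a placeholder; Google supports * wildcards in robots.txt patterns):

```
# Prevent crawling of URLs carrying the sessionid parameter,
# whether it is the first or a subsequent query parameter.
User-agent: *
Disallow: /*?sessionid=
Disallow: /*&sessionid=
```

Note that this only stops crawling; to actually remove such URLs from the index, the pages would need to be crawlable and serve a ‘noindex’, as described above.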

Does Out-Of-Stock Affect the Ranking of a Product Page?

Q. (37:41) Let’s say my product page is ranking for a transactional keyword. Would it affect its ranking if the product is out of stock?

  • (37:50) Out of stock – it’s possible. Let me simplify it like that. I think there are multiple things that come into play with products, in that they can be shown as a normal search result, and they can also be shown as an organic shopping result. If something is out of stock, I believe the organic shopping result might not be shown – I’m not 100% sure. When it comes to the normal search results, it can happen that when we see that something is out of stock, we treat it more like a soft 404 error and drop that URL from the search results as well. So theoretically, it could affect the visibility in Search if something goes out of stock. It doesn’t have to be the case: in particular, if you have a lot of information about the product on those pages anyway, then the page can still be quite relevant for people who are searching for that specific product. So it’s not necessarily the case that something goes out of stock and the page disappears from Search. The other thing that is important to note here is that even if one product goes out of stock, the rest of the site’s rankings are not affected by that. Even if we were to drop that one specific product because we think it’s more like a soft 404 page, people searching for other products on the site would still see those normally. There wouldn’t be any kind of negative effect that spills over into the other parts of the site.
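One way to make stock status explicit, rather than leaving it to be inferred from the page text, is Product structured data. A minimal, hypothetical sketch (product name, price, and currency are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "offers": {
    "@type": "Offer",
    "price": "49.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/OutOfStock"
  }
}
</script>
```

Switching availability back to https://schema.org/InStock when the product returns keeps the signal accurate without removing the page.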

Could a Banner on My Page Affect Rankings?

Q. (39:30) Could my rankings be influenced by a banner popping up on my page?

  • (39:35) Yes, they could be. There are multiple things that come into play with regards to banners. On the one hand, within the page experience report, we have the aspect of intrusive interstitials. If this banner comes across as an intrusive interstitial, then that could negatively affect the site there. The other thing is that banners often have side effects on the cumulative layout shift – how the page renders when it’s loaded – or with regards to, I forget what the metric is called, when we show a page – the LCP, I think – also from the Core Web Vitals side. So those are different elements that could come into play here. It doesn’t mean it has to be that way, but depending on the type of banner that you’re popping up, it can happen.
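A common mitigation for the layout shift part (a minimal sketch; the class name and height are placeholders) is to reserve the banner’s space in the layout up front, so that nothing moves when the banner actually loads:

```html
<style>
  /* Space for the banner is allocated before it renders, so content
     below it does not shift when the banner appears (better CLS). */
  .promo-banner {
    min-height: 60px;
  }
</style>
<div class="promo-banner">
  <!-- banner content injected here, e.g. by a script -->
</div>
```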

Do Links on Unindexed Pages Count?

Q. (49:03) What about when you have been linked to from some pages, but those pages have not been indexed, even though the mentions or links are already present on them? Are those links not counted simply because the pages are not indexed? Or can a link still be counted even if the page it is on is not indexed?

  • (49:39) Usually, that wouldn’t count. Because for a link, in our systems at least, we always need a source and a destination. And both of those sides need to be canonical indexed URLs. And if we don’t have any source at all in our systems, then essentially, that link disappears because we don’t know what to do with it. So that means if the source page is completely dropped out of our search results, then we don’t really have any link that we can use there. Obviously, of course, if another page were to copy that original source and also show that link, and then we go off and index that other page, then that would be like a link from that other page to your site. But that original link, if that original page is no longer indexed, then that would not count as a normal link.

What Can We Do About Changed Image URLs?

Q. (50:50) My question is about the harvesting of images for the recipe gallery, because we have finally identified something which I think has affected some other bloggers, and has really badly affected us. If you have lots and lots of recipes indexed in the recipe gallery and you change the format of your images, then as the metadata is refreshed, you might have 50,000 recipes get new metadata. But there is a deferral process for actually fetching the new images, and it could be months before those new images are picked up. While they’re being harvested, you don’t see anything. Yet when you run a test in Google Search Console, it does it in real time and says everything’s fine, because the image is there – so there’s no warning about this. What it means is that you had better not make any changes or tweaks to slightly improve the formatting of your image URLs, because if you do, you disappear.

  • (51:39) Probably what is happening there is the general crawling and indexing of images, which is a lot slower than for normal web pages. If you remove one image URL and add a new one on a page, it does take a lot of time for that to be picked up again, and that is probably what you are seeing. What we would recommend in a case like that is to redirect your old URLs to the new ones, also for the images. So if you have, for example, an image URL with the file size attached to the URL, then that URL should redirect to the new one. In that case, we can keep the old one in our systems and just follow the redirect to the new one.
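As a minimal sketch of such a redirect in Apache mod_rewrite syntax (the size-suffix URL pattern is a made-up example; adapt it to your actual old format):

```apache
# .htaccess: 301-redirect old image URLs that carried a size suffix,
# e.g. /images/cake-800x600.jpg, to the new suffix-free URLs,
# so the image URLs Google already knows stay resolvable.
RewriteEngine On
RewriteRule ^images/(.+)-\d+x\d+\.(jpg|png|webp)$ /images/$1.$2 [R=301,L]
```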

Does Crawl Budget Affect Image Re-Indexing?

Q. (54:25) Does crawl budget affect image re-indexing?

  • (54:31) Yeah – what you can do is make sure that your site is really fast in that regard. That is something you should be able to see some of in the crawl stats report, where you can see the average response time. I’ve seen sites that are at around 50 milliseconds, and other sites that are at 600 or 700 milliseconds. Obviously, if the site is faster, it’s easier for us to request a lot of URLs; otherwise, we just get bogged down, because we send, I don’t know, 50 Googlebots to your site at one time, and we’re waiting for responses before we can move forward.
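To spot-check your own response times before comparing them against the crawl stats report, here is a minimal Python sketch (the URLs are placeholders; it assumes the requests library is installed):

```python
import requests

# Fetch a few representative URLs and print the server response time in ms.
urls = [
    "https://www.example.com/",
    "https://www.example.com/images/sample.jpg",
]

for url in urls:
    resp = requests.get(url, timeout=10)
    # .elapsed measures the time from sending the request until the
    # response headers arrived, roughly the latency a crawler experiences.
    print(f"{url}: {resp.elapsed.total_seconds() * 1000:.0f} ms")
```

Times consistently in the hundreds of milliseconds suggest there is room to speed up the server before crawl rate becomes a bottleneck.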

