
WebMaster Hangout – Live from June 03, 2022

Can I use two HTTP result codes on a page?

Q: (01:22) All right, so the first question I have on my list here is: it’s theoretically possible to have two different HTTP result codes on a page, but what will Google do with those two codes? Will Google even see them? And if yes, what will Google do? For example, a 503 plus a 302.

  • (01:41) So I wasn’t aware of this. But, of course, with the HTTP result codes, you can include lots of different things. Google will look at the first HTTP result code and essentially process that. And you can theoretically still have two HTTP result codes or more there if they are redirects leading to some final page. So, for example, you could have a redirect from one page to another page. That’s one result code. And then, on that other page, you could serve a different result code. So that could be a 301 redirect to a 404 page is kind of an example that happens every now and then. And from our point of view, in those chain situations where we can follow the redirect to get a final result, we will essentially just focus on that final result. And if that final result has content, then that’s something we might be able to use for canonicalization. If that final result is an error page, then it’s an error page. And that’s fine for us too.
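As a rough sketch of that kind of chain (the URLs here are hypothetical), a crawler requesting the old address gets the 301, follows it, and then treats the final 404 as the result for that content:

  GET /old-page HTTP/1.1
  Host: www.example.com

  HTTP/1.1 301 Moved Permanently
  Location: https://www.example.com/new-page

  GET /new-page HTTP/1.1
  Host: www.example.com

  HTTP/1.1 404 Not Found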

Does using a CDN improve rankings if my site is already fast in my main country?

Q: (02:50) Does putting a website behind a CDN improve ranking? We get the majority of our traffic from a specific country. We hosted our website on a server located in that country. Do you suggest putting our entire website behind a CDN to improve page speed for users globally, or is that not required in our case?

  • (03:12) So obviously, you can do a lot of these things. I don’t think it would have a big effect on Google at all with regards to SEO. The only effect where I could imagine that something might happen is what users end up seeing. And kind of what you mentioned, if the majority of your users are already seeing a very fast website because your server is located there, then you’re kind of doing the right thing. But of course, if users in other locations are seeing a very slow result because perhaps the connection to your country is not that great, then that’s something where you might have some opportunities to improve that. And you could see that as an opportunity in the sense that, of course, if your website is really slow for other users, then they’re less likely to keep coming back to it, because it’s really annoying to get there. Whereas, if your website is pretty fast for other users, then at least they have an opportunity to see a reasonably fast website, which could be your website. So from that point of view, if there’s something that you can do to improve things globally for your website, I think that’s a good idea. I don’t think it’s critical. It’s not something that matters in terms of SEO in that Google has to see it very quickly as well or anything like that. But it is something that you can do to kind of grow your website past just your current country. Maybe one thing I should clarify: if Google’s crawling is really, really slow, then, of course, that can affect how much we can crawl and index from the website. So that could be an aspect to look into. In the majority of websites that I’ve looked at, I haven’t really seen this as being a problem with regards to any website that isn’t millions and millions of pages large. So from that point of view, you can double-check how fast Google is crawling in Search Console and the crawl stats report. And if that looks reasonable, even if that’s not super fast, then I wouldn’t really worry about that.

Should I disallow API requests to reduce crawling?

Q: (05:20) Our site is a live stream shopping platform. Our site currently spends about 20% of the crawl budget on the API subdomain and another 20% on image thumbnails of videos. Neither of these subdomains has content which is part of our SEO strategy. Should we disallow these subdomains from crawling, or how are the API endpoints discovered or used?

  • (05:49) So maybe the last question there first. In many cases, API endpoints end up being used by JavaScript on your website, and we will render your pages. And if they access an API that is on your website, then we’ll try to load the content from that API and use that for the rendering of the page. And depending on how your API is set up and how your JavaScript is set up, it might be that it’s hard for us to cache those API results, which means that maybe we crawl a lot of these API requests to try to get a rendered version of your pages so that we can use those for indexing. So that’s usually the place where this is discovered. And that’s something you can help with by making sure that the API results can also be cached well, that you don’t inject any timestamps into URLs, for example, when you’re using JavaScript for the API, all of those things there. If you don’t care about the content that’s returned with these API endpoints, then, of course, you can block this whole subdomain from being crawled with the robots.txt file. And that will essentially block all of those API requests from happening. So that’s something where you first of all need to figure out: are these API results actually part of the primary content, or important, critical content, that you want to have indexed by Google?
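If you decide the API responses are not needed for rendering the primary content, a robots.txt file served on the API subdomain itself is enough to stop the crawling. This is a minimal sketch that assumes the API lives on a dedicated hostname such as api.example.com (robots.txt rules only apply to the hostname they are served from):

  # https://api.example.com/robots.txt
  User-agent: *
  Disallow: /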

Q: (08:05) Is it appropriate to use a nofollow attribute on internal links to avoid unnecessary crawler requests to URLs which we don’t wish to be crawled or indexed?

  • (08:18) So obviously, you can do this. It’s something where I think, for the most part, it makes very little sense to use nofollow on internal links. But if that’s something that you want to do, go for it. In most cases, I would try to do something like using the rel=canonical to point at URLs that you do want to have indexed or using the robots.txt for things that you really don’t want to have crawled. So try to figure out: is it more like a subtle thing, where you have something that you prefer to have indexed, and then use rel=canonical for that? Or is it something where you say, actually, when Googlebot accesses these URLs, it causes problems for my server. It causes a large load. It makes everything really slow. It’s expensive or what have you. And for those cases, I would just disallow the crawling of those URLs. And try to keep it kind of on a basic level there. And with the rel=canonical, obviously, we’ll first have to crawl that page to see the rel=canonical. But over time, we will focus on the canonical that you’ve defined. And we’ll use that one primarily for crawling and indexing.
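For the "subtle preference" case, the rel=canonical is just a link element in the head of the page you would rather not have indexed, pointing at the preferred address (the URL here is a placeholder):

  <link rel="canonical" href="https://www.example.com/widgets/blue-widget">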

Why don’t site:-query result counts match Search Console counts?

Q: (09:35) Why does the result count of a site: query, which returns such a giant number of results, not match what Search Console and the index data show for the same domain?

  • (09:55) Yeah, so this is a question that comes up every now and then. I think we’ve done a video on it separately as well. So I would check that out. I think we’ve talked about this for a long time already. Essentially, what happens there is that there are slightly different optimisations that we do for site queries, in that we just want to give you a number as quickly as possible. And that can be a very rough approximation. And that’s something where when you do a site query, that’s usually not something that the average user does. So we’ll try to give you a result as quickly as possible. And sometimes, that can be off. If you want a more exact number of the URLs that are actually indexed for your website, I would definitely use Search Console. That’s really the place where we give you the numbers as directly as possible, as clearly as possible. And those numbers will also fluctuate a little bit over time. They can fluctuate depending on the data centre sometimes. They go up and down a little bit as we crawl new things, and we kind of have to figure out which ones we keep, all of those things. But overall, the number in Search Console, in the indexing report, is really the number of URLs that we have indexed for your website. I would not use the “About X results” number in the search results for any diagnostics purposes. It’s really meant as a very, very rough approximation.

What’s the difference between JavaScript and HTTP redirects?

Q: (11:25) OK, now a question about redirects again: the differences between JavaScript redirects versus 301 HTTP status code redirects, and which one I would suggest for short links.

  • (11:43) So, in general, when it comes to redirects, if there’s a server-side redirect where you can give us a result code as quickly as possible, that is strongly preferred. The reason that it is strongly preferred is just that it can be processed immediately. So any request that goes to your server to one of those URLs, we’ll see that redirect URL. We will see the link to the new location. We can follow that right away. Whereas, if you use JavaScript to generate a redirect, then we first have to render the JavaScript and see what the JavaScript does. And then we’ll see, oh, there’s actually a redirect here. And then we’ll go off and follow that. So if at all possible, I would recommend using a server-side redirect for any kind of redirect that you’re doing on your website. If you can’t do a server-side redirect, then sometimes you have to make do. And a JavaScript redirect will also get processed. It just takes a little bit longer. The meta refresh type redirect is another option that you can use. It also takes a little bit longer because we have to figure that out on the page. But server-side redirects are great. And there are different server-side redirect types. So there’s 301 and 302, and there’s also 307 and 308. Essentially, the differences there are whether or not it’s a permanent redirect or a temporary redirect. A permanent redirect tells us that we should focus on the destination page. A temporary redirect tells us we should focus on the current page that is redirecting and kind of keep going back to that one. And the difference between 301/302 and 307/308 is more of a technical difference with regards to the different request types. So if you enter a URL in your browser, then you do what’s called a GET request for that URL, whereas if you submit something through a form or use specific types of API requests, then that can be a POST request. And the 301 and 302 type redirects would only redirect the normal browser requests and not the form and API requests. So if you have an API on your website that uses POST requests, or if you have forms where you suspect someone might be submitting something to a URL that you’re redirecting, then obviously, you would use the other types. But for the most part, it’s usually 301 or 302.
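As an illustration of the server-side option, here is a minimal Apache .htaccess sketch using mod_alias (the paths and domain are made up); 307 and 308 are the method-preserving equivalents, so a POST to the old URL stays a POST at the new one:

  # 301: permanent, Google focuses on the destination URL
  Redirect permanent /old-page https://www.example.com/new-page

  # 302: temporary, Google keeps focusing on the redirecting URL
  Redirect temp /summer-sale https://www.example.com/current-promotions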

Should I keep old, obsolete content on my site, or remove it?

Q: (14:25) I have a website for games. After a certain time, a game might shut down. Should we delete the pages for games that no longer exist or keep them in an archive? What’s the best option so that we don’t get any penalty? We want to keep information about the games through videos, screenshots, et cetera.

  • (14:42) So essentially, this is totally up to you. It’s something where you can remove the content of old things if you want to. You can move them to an archive section. You can make those old pages noindex so that people can still go there when they’re visiting your website. There are lots of different variations there. The main thing that probably you will want to do if you want to keep that content is moving it into an archive section, as you mentioned. The idea behind an archive section is that it tends to be less directly visible within your website. That means it’s easy for users and for us to recognise this is the primary content, like the current games or current content that you have. And over here is an archive section where you can go in, and you can dig for the old things. And the effect there is that it’s a lot easier for us to focus on your current live content and to recognise that this archive section, which is kind of separated out, is more something that we can go off and index. But it’s not really what you want to be found for. So that’s kind of the main thing I would focus on there. And then whether or not you make the archive contents noindex after a certain time, or for other reasons, that’s totally up to you.
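If you do decide to noindex the archived pages after a while, that is a single robots meta tag in the head of each archived page (visitors can still open the page; it just drops out of the index over time):

  <meta name="robots" content="noindex">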

Q: (16:02) Is there any strategy by which desired pages can appear as a site link in Google Search results?

  • (16:08) So site links are the additional results that are sometimes shown below a search result, where it’s usually just a one-line link to a different part of the website. And there is no meta tag or structured data that you can use to enforce a site link to be shown. Rather, it’s a lot more that our systems try to figure out what is actually related or relevant for users when they’re looking at this one web page. And for that, our recommendation is essentially to have a good website structure, to have clear internal links so that it’s easy for us to recognise which pages are related to those pages, and to have clear titles that we can use and kind of show as a site link. And with that, it’s not that there’s a guarantee that any of this will be shown like that. But it kind of helps us to figure out what is related. And if we do think it makes sense to show a site link, then it’ll be a lot easier for us to actually choose one based on that information.

Our site embeds PDFs with iframes, should we OCR the text?

Q: (17:12) More technical one here. Our website uses iframes and a script to embed PDF files onto our pages and our website. Is there any advantage to taking the OCR text of the PDF and pasting it somewhere into the document’s HTML for SEO purposes, or will Google simply parse the PDF contents with the same weight and relevance to index the content?

  • (17:40) Yeah. So I’m just momentarily thrown off because it sounds like you want to take the text of the PDF and just kind of hide it in the HTML for SEO purposes. And that’s something I would definitely not recommend doing. If you want to have the content indexable, then make it visible on the page. So that’s the first thing there that I would say. With regards to PDFs, we do try to take the text out of the PDFs and index that for the PDFs themselves. From a practical point of view, what happens with a PDF is that, as one of the first steps, we convert it into an HTML page, and we try to index that like an HTML page. So essentially, what you’re doing is kind of indirectly framing an HTML page. And when it comes to iframes, we can take that content into account for indexing within the primary page. But it can also happen that we index the PDF separately anyway. So from that point of view, it’s really hard to say exactly kind of what will happen. I would turn the question around and frame it as: what do you want to happen? And if you want your normal web pages to be indexed with the content of the PDF file, then make it so that that content is immediately visible on the HTML page. So instead of embedding the PDF as a primary piece of content, make the HTML content the primary piece and link to the PDF file. And then there is the question of whether you want those PDFs indexed separately or not. Sometimes you do want to have PDFs indexed separately. And if you do want to have them indexed separately, then linking to them is great. If you don’t want to have them indexed separately, then using robots.txt to block them from being crawled is also fine. You can also use the noindex X-Robots-Tag HTTP header. It’s a little bit more complicated because you have to serve that as a header for the PDF files if you want to have those PDF files available in the iframe but not actually indexed.
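For that last option, the noindex X-Robots-Tag is sent as an HTTP response header for the PDF files themselves. A minimal Apache sketch, assuming mod_headers is enabled, would look something like this:

  <FilesMatch "\.pdf$">
    Header set X-Robots-Tag "noindex"
  </FilesMatch>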

Q: (20:02) We want to mask links to external websites to prevent the passing of our link juice. We think the PRG approach is a possible solution. What do you think? Is the solution overkill, or is there a simpler solution out there?

  • (20:17) So the PRG pattern is a complicated way of essentially making a POST request to the server, which then redirects somewhere else to the external content so Google will never find that link. From my point of view, this is super overkill. There’s absolutely no reason to do this unless there’s really a technical reason that you absolutely need to block the crawling of those URLs. I would either just link to those pages normally or use rel=nofollow to link to those pages. There’s absolutely no reason to go through this weird POST redirect pattern there. It just causes a lot of server overhead. It makes it really hard to cache that request and take users to the right place. So I would just use a nofollow on those links if you don’t want to have them followed. The other thing is, of course, that just blocking all of your external links rarely makes any sense. Instead, I would make sure that you’re taking part in the web as it is, which means that you link to other sites naturally, they link to you naturally, and you take part in the normal web rather than trying to keep Googlebot locked into your specific website. Because I don’t think that really makes any sense.
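If you do want to keep a specific link from passing signals, the rel=nofollow attribute on the normal HTML link is all that is needed (the destination here is a placeholder):

  <a href="https://partner.example.org/" rel="nofollow">Partner site</a>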

Does it matter which server platform we use, for SEO?

Q: (21:47) For Google, does it matter if our website is powered by WordPress, WooCommerce, Shopify, or any other service? A lot of marketing agencies suggest using specific platforms because it helps with SEO. Is that true?

  • (22:02) That’s absolutely not true. So there is absolutely nothing in our systems, at least as far as I’m aware, that would give any kind of preferential treatment to any specific platform. And with pretty much all of these platforms, you can structure your pages and structure your website however you want. And with that, we will look at the website as we find it there. We will look at the content that you present, the way the content is presented, and the way things are linked internally. And we will process that like any HTML page. As far as I know, our systems don’t even react to the underlying structure of the back end of your website and do anything special with that. So from that point of view, it might be that certain agencies have a lot of experience with one of these platforms, and they can help you to make really good websites with that platform, which is perfectly legitimate and could be a good reason to say I will go with this platform or not. But it’s not the case that any particular platform has an inherent advantage when it comes to SEO. You can, with pretty much all of these platforms, make reasonable websites. They can all appear well in search as well.

Does Google crawl URLs in structured data markup?

Q: (23:24) Does Google crawl URLs located in structured data markup, or does Google just store the data?

  • (23:31) So, for the most part, when we look at HTML pages, if we see something that looks like a link, we might go off and try that URL out as well. That’s something where if we find a URL in JavaScript, we can try to pick that up and try to use it. If we find a link in a text file on a site, we can try to crawl that and use it. But it’s not really a normal link. So it’s something where I would recommend if you want Google to go off and crawl that URL, make sure that there’s a natural HTML link to that URL, with a clear anchor text as well, that you give some information about the destination page. If you don’t want Google to crawl that specific URL, then maybe block it with robots.txt or on that page, use a rel=canonical pointing to your preferred version, anything like that. So those are the directions I would go there. I would not blindly assume that just because it’s in structured data, it will not be found, nor would I blindly assume that just because it’s in structured data, it will be found. It might be found. It might not be found. I would instead focus on what you want to have to happen there. If you want to have it seen as a link, then make it a link. If you don’t want to have it crawled or indexed, then block crawling or indexing. That’s all totally up to you.
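To make the contrast concrete, here is a hypothetical snippet: the URL inside the JSON-LD may or may not be picked up, while the plain HTML link with descriptive anchor text is what reliably signals that the page should be crawled:

  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Co",
    "sameAs": ["https://www.example.com/about"]
  }
  </script>

  <a href="https://www.example.com/about">About Example Co</a>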

Sign up for our Webmaster Hangouts today!

GET IN CONTACT TODAY AND LET OUR TEAM OF ECOMMERCE SPECIALISTS SET YOU ON THE ROAD TO ACHIEVING ELITE DIGITAL EXPERIENCES AND GROWTH 

Are You Ready for Google Analytics 4?

With all the new changes in the past decade in the digital marketing landscape, a more sophisticated way to collect and organise user data was much needed. In the fall of 2020, Google introduced updated software called Google Analytics 4 (GA4), a version that, so far, has worked in parallel with its predecessor Google Universal Analytics (UA). However, Google recently announced that Universal Analytics will be sunset on July 1, 2023, and that its premium version, 360 Universal Analytics, will stop processing data in October 2023 as well. It is worth noting that the premium features from 360 Universal Analytics will be rolled into the new iteration of GA4.

Getting used to new software takes time, especially in this case, considering that Google Analytics 4 presents an entirely different interface and configuration from UA. This is most certainly why Google made this announcement in advance, to allow businesses still using the UA tool to migrate and get used to the latest version. It is also worth noting that GA4 doesn’t carry over any of the historical data you’ve tracked in Universal Analytics, which is another good reason to start migrating to GA4 early, since data continuity and reporting are paramount to your business.

Some of the main tools integrated with Google Analytics 4

Event-based data model

Probably the most significant change in the software, Google Analytics 4 introduces an event-based model that offers flexibility in the way we collect data while also providing a new set of reports based on the model.

With Universal Analytics, businesses relied on “sessions”, which made for a more fragmented model since data was only collected in limited slots. It also only worked with specific, predefined information categories, making custom data much more challenging to obtain. But now, since everything can be an event, there’s a broader opportunity to understand and compare client behaviour through custom data across various platforms.
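As a small illustration of the event model (assuming the standard gtag.js snippet and a placeholder measurement ID), an interaction such as adding a product to the cart is simply sent as an event with whatever parameters you care about:

  <script async src="https://www.googletagmanager.com/gtag/js?id=G-XXXXXXXXXX"></script>
  <script>
    window.dataLayer = window.dataLayer || [];
    function gtag(){dataLayer.push(arguments);}
    gtag('js', new Date());
    gtag('config', 'G-XXXXXXXXXX');

    // example event with custom parameters
    gtag('event', 'add_to_cart', {
      currency: 'AUD',
      value: 49.95,
      items: [{ item_name: 'Blue Widget' }]
    });
  </script>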

Operation across platforms

Prior to GA4, businesses required different tools to analyse website and app data separately; this made it difficult to obtain a global picture of their user traffic. But now, GA4 has added a new kind of property that merges app and web data for reporting and analysis.

Thanks to this update, if you have users coming to you through different platforms, you can now use a single set of data to know which marketing channels are acquiring more visitors and conversions.

No need to rely on cookies

As mentioned at the beginning of this article, a lot has changed in the last decade regarding digital marketing; this includes an ever-growing emphasis on user privacy.

Big tech companies, such as Apple, have started to adopt privacy-first policies, which is why Safari started blocking all third-party cookies in 2020. So it comes as no surprise that Google has also announced that it will do the same for Chrome in 2023.

With GA4, Google is moving away from a cookies-dependent model, no longer needing to store IP addresses for its functionality and looking to be more compliant with today’s international privacy standards.

Audience Triggers

This is a cool feature that lets brands set conditions for a user to move from one audience group to another (such as when they’ve bought from a specific product category). You can then better personalise the ad experience, offering complementary or similar products across display, video and discovery placements, and bring them back to shop more with you.

More Sophisticated insights

GA4 promised a more modernised way of collecting and organising data. Still, the most important thing for businesses is “how” to use this data. Advanced AI learning has been applied in Google Analytics 4 to generate sophisticated predictive insights about user behaviour and conversions, pivotal to improving your marketing.

Integrations

GA4 brings deeper integrations with other Google products, such as Google Ads, allowing you to optimise marketing campaigns by using data to build custom audiences that are more relevant to your marketing objectives, and to utilise Google Optimize for A/B testing.

In summary, Google Analytics 4 combines features designed to understand client behaviour in more detail than Universal Analytics previously allowed whilst prioritising user privacy. It also brings about a very friendly interface, with some drag-and-drop functionality to help build reports, reminiscent of Adobe Analytics Workspace.

Adobe Analytics Workspace

GA4 Drag and Drop

You can chat with the team at LION Digital and we can help you set up on GA4

We had a good chat with a colleague at our first Shopify Plus Partner meetup who was developing a Shopify Plus site for their client. They noted that the GA4 setup they had to do was quite complex and time-consuming, as event tracking needed to be configured, including eCommerce tracking, and Data Studio reports needed to be rebuilt. It took him a good three hours that he was keen not to repeat. Thankfully, we’ve got a bunch of skilled specialists to help you set up GA4, and we can connect this to our Digital ROI Dashboard to help you get the insights you need and look at your Channel Action Plans.

GET IN CONTACT TODAY AND LET OUR TEAM OF ECOMMERCE SPECIALISTS SET YOU ON THE ROAD TO ACHIEVING ELITE DIGITAL EXPERIENCES AND GROWTH

Contact Us

Article by

Dimas Ibarra –
Digital Marketing Executive

What to Expect When Performance Max Replaces Smart Shopping

ICYMI: Google is rolling out a new campaign type called Performance Max that replaces Smart Shopping campaigns. Smart Shopping has been great from a channel diversification perspective, expanding your real estate from the Shopping tab across Google’s Display Network, YouTube and Gmail without having to set up specific campaigns across these verticals.

Performance Max builds on Smart Shopping by making new inventory available, including Dynamic Search Ads, Discovery Ads and YouTube in-stream. Google likes PMAX so much that they’re forcing everyone to migrate over to these campaigns come July 1st – this means the time to test and learn is closing fast, and we have seen there is an element of learning on the machine’s side before ROAS returns to normal and starts to trend in the right direction.

Similar to Google Analytics 4’s event-based system, PMAX is touted as a goals-based, automated solution targeted at unlocking maximum performance, comprising the following three components:

  1. A single, goal-based campaign focused on achieving the performance objectives, leveraging automation and machine learning.
  2. Path-to-purchase aware, ensuring the right ad is served at the right time in line with your marketing objectives.
  3. Access to the best inventory across Google properties to reach customers where they are, efficiently and at scale.

Are you ready to move from Smart Shopping to Performance Max?

Performance Max is about to be adopted by all eCommerce spenders, and the window of first-mover advantage and test and learn is closing rapidly. Act now. LION Digital’s Search specialists can support you throughout your transition to PMAX and other adaptive ad technologies.

We recently achieved outstanding results for our client Helly Hansen!
To learn more, check out our case study.

Work with leaders in the eCommerce space to transform your channel strategy.

LION stands for Leaders In Our Niche. We pride ourselves on being true specialists in each eCommerce Marketing Channel. LION Digital has a team of certified experts led by a head of department with 10+ years of experience in eCommerce and SEM. We follow an ROI-focused approach in paid search, backed by seamless coordination and detailed reporting, helping our clients meet their goals.

GET IN CONTACT TODAY AND LET OUR TEAM OF ECOMMERCE SPECIALISTS SET YOU ON THE ROAD TO ACHIEVING ELITE DIGITAL EXPERIENCES AND GROWTH

Contact Us

Article by

Leonidas Comino – Founder & CEO

Leo is a Deloitte award-winning and Forbes-published digital business builder with over a decade of success in the industry, working with market-leading brands.

Like what we do? Come work with us

Thought Starters to help your business thrive in an economic downturn

No doubt you’ve read in the news that there are concerns about an economic downturn in Australia and abroad.

COVID tailwinds, the conflict in Ukraine, the US stock market declines, and the increased cost of shipping goods from China have both businesses and consumers concerned for their livelihoods.

We wanted to write a Thought Starter piece (the first of many to come) that summarises consumer and business reactions to past downturns, shifts in consumer behaviour observed over the past two years, and suggested actions you can take to prepare for the road ahead.

Consumer and Business Reactions to downturns

We’ve just come out of a pandemic, which typically results in a rapid V-shaped halt and subsequent snapback, as illustrated by the 2020-2021 consumer confidence swings. But we don’t see this in 2022; instead, there is a gradual decline in confidence as more news reaches consumers and their purchasing behaviour responds accordingly.

Source: ANZ Roy Morgan Consumer Confidence Index 2020-2022

Consumers respond in different ways to economic downturns; they are quick to act when they sense a tightening but less responsive when times are good. During trying times, they may buy fewer consumer durables, become more price-sensitive and seek cheaper alternatives, stockpile savings and shift spend away from status and lifestyle purchases to focus on items of necessity (non-perishables, essential items, healthcare, and apparently, toilet paper).

Businesses respond to downturns by reviewing and cutting discretionary spend – which often includes a round of redundancies and a decline in marketing spend. There have been many studies from academics like Peter Field, Byron Sharp and Mark Ritson advising against cutting marketing budgets during a recession, yet many advertisers cut anyway, believing that because fewer companies will be advertising, they can maintain their Share of Voice for a lower cost. This has been proven wrong time and again, with studies like the one below from Engagement Labs showing that without positive marketing messages, consumers focus on the negatives they may hear in the news or, worse, they may forget you completely, being wrapped up in competitor narratives.

The other side of this, which many brands are still grappling with from the pandemic, is that when consumers switch to alternatives, they are often slow to switch back when normality returns. Consider how many people now have a coffee machine instead of doing their morning coffee run – great for a coffee supplier, but not so great for the coffee shop owner.

However, there is only so far that prior studies can take us, as there are a number of unique factors that still linger from COVID that need to be considered when making judgements about the upcoming economic environment.

Macro factors of the 2020s

Ad spend reaches record high

First and foremost, the ad industry in Australia has surged to over $22.8 billion, with financial year-to-date ad spend reaching record levels, up 14.2 per cent. Digital, powered by search and social, grew 24.2 per cent (thanks to eCommerce, Government and Election campaigns), Outdoor bookings grew 11.9 per cent and TV ad spend is up 7.4 per cent. Ad execs don’t expect to see a spending decline until December or even into 2023.

Source: IAB, SMI, April 2022

It’s been reported in some industries that CPCs for search terms have risen as much as 800%; this coupled with Google’s rollout of Performance Max next month means brands are likely to be paying more to achieve the same results if they’re not careful.

Any eCommerce vendor who hasn’t tested Performance Max yet should look to expedite their migration as a priority, as PMAX campaigns tend to experience a learning curve for the first few weeks before ROI starts to trend in the right direction.

Shifting Consumer Behaviours

The second unique factor in the current environment is that it comes on the heels of sweeping changes in consumer buying behaviour – key takeaways from the AusPost 2022 eCommerce Report include:

  • 81% of households bought online in the last 12 months
  • Average 23% growth in spending on online physical goods
  • Consumers are shopping with more retailers – averaging 15/year vs 9 in 2019
  • Consumers are shopping more frequently than ever before – with almost 60% of Australians shopping online more than once a month (up from 42% in 2019)

As we noted earlier, consumers are slow to return to the norm after switching preferences, which they’ve barely had a chance to do before economic concerns arose. We can expect more people to be shopping online with greater price sensitivity, and Shopping, Marketplaces and offers from many vendors to influence consumer buying decisions in the months ahead.

Source: Australia Post: 2022 Inside Australian Online Shopping – Ecommerce Industry Report

Fading mental availability and attention

Attention has been cited as a challenge for many brands targeting Gen Z consumers. However, recent research by Karen Nelson-Field, backed by Peter Field and Orlando Wood, indicates this is a much broader issue, highlighting that left-brain targeted ads, particularly in the field of display, are not resonating with the type of right-hemisphere attention that parses the world and our place in it.

People, not products, they argue, should be front and centre:

That means advertising, by and large, that involves the living [i.e. people not products] doing interesting things in a definable place. Not always, but mostly those are the sorts of things that capture our attention, elicit an emotional response and put things into long-term memory.

Orlando Wood

They warn that left-brain-targeted advertising is eroding the tried and tested ESOV (excess share of voice) principles that have underpinned advertising for the past 30 years.

The trio will present their findings and advice for marketers at the Cannes Lions Festival of Creativity this week.

Source: Mi-3 – Why mental availability, ESOV are fading: Peter Field, Karen Nelson-Field, Orlando Wood warn ad industry faces triple jeopardy threat, effectiveness rulebook ripped up

So what can you do about all this?

Here are a few Thought Starters to help you plan and prepare for the uncertainty ahead.

Find your own alternatives

Price may be a conscious factor in your consumers’ decision to move away from you, so can you find alternatives that can reduce your overheads, like sharing/renting warehouse space, trialling new suppliers or looking at drop-shipping solutions, to pass savings onto your customers? This is a short-term solution but if your consumers are thinking this way, best be proactive and consider what you can do to keep them. 

Talk to your customers

Above all, don’t forget to talk to your customers. They are feeling the same way you are and a bit of reassurance and care can go a long way! You’ll likely find ideas from even the most casual consumer that can help you navigate this environment and retain your customers.

Review Marketing ROI and how this has changed

Take a good look at where you are directing your marketing dollars and the ROI this yields – look beyond ROAS to actual purchase outcomes, order value growth and expected lifetime value of your customers. The Assisted Conversions and Model Comparison tools in Google Analytics are a good way of measuring cross-channel interplay; we’ve seen time and again that consumers first touch and engage through Search and convert through Direct – you can’t neglect the source channel and expect Direct sales to continue growing.

Look at how cost and acquisition has changed since the pandemic – could you afford similar jumps as competition and CPCs rise? 

At what point do you face diminishing returns on your Marginal ROI? 

What are your contingency plans and channels you can shift to if this occurs?

These are critical questions to discuss with your team, and they will help you plan for the future.

What channels have you not considered?

We know Performance is the driving force for acquisition but what other channels are you not playing in that can yield new customers and returns? You would know best what you’re doing and what you’re not, but look to competitors and market leaders for ideas, or look even further afield to related industries for inspiration (I’ve seen some great Health Insurance providers leaning into the Healthcare space to become more involved with consumers with health concerns, and they are front of mind when premiums come up as a result).

Email remains the top converting channel – do you have segmented audiences in play with offers going out to your customers to bring them back to the store? Have you considered gamifying this channel with quizzes to better understand the products your customers like, which can in turn drive better segmentation, more compelling offers and engaging emails? Is your CRM up to the task, or is it time to enrich your audience list, bolstering it with earned activity or by buying third-party prospecting lists? This is particularly effective in the B2B space; it may be worth taking a page out of their book!

Loyalty – Do you have a loyalty program? How is it performing? How do you know what your customers want from you? Don’t be afraid to ask the questions – consumers love being engaged and you will likely get really valuable insights and excite your loyal customers with the offers to come!

SMS might surprise you – with the right offers and a focus on value, this can be a powerful channel! SMS works effectively when paired with Email offers, but consider the role this should play in the overall mix and be cautious of frequency, lest you lose subscribers.

Content – Yes, content is a channel! Going back to the Peter Field research mentioned earlier, performance is only so effective during the decision-making process – be there with content that helps your prospective customer understand the category, the questions they need to be asking and what’s really important when comparing like-for-like products – they will love you for it! Need help with ideas? Chat to the LION SEO team; we can research topics using SEO tools and pull out insights from your own data to help connect you to the questions consumers are asking, and you’ll have everything you need to get started.

Partnerships – you are not alone, many business owners are concerned about the near term. What contacts do you have (suppliers, friends/of friends, frenemies) that you can support and what can they do for you in return? Reciprocity is the key to good partnerships!

Search/Social – Yes, we’ve come back to Search and Social! But the power of these channels cannot be overstated and it’s all about finding the right levels of investment; investment of time on the organic side (focused on reach), and smart investment on the paid side (focused on revenue and ROI). There is always more you can do with Search, so look closely at what’s performing well and where you have untapped potential.  You should look to outsmart your competitors rather than outspend them – use your depth of knowledge in the industry to win customers over and leverage the wealth of data in your campaigns to leapfrog your sleeping competitors.

Consolidate channels – We see lots of business owners and marketing managers who work with specialist agencies for different channels. While there is merit to the argument that silos are great for specialism, the downside is that managing many vendors, having to switch hats each time and aligning the agencies yourself stretches you too thin, not to mention the higher costs and the holistic, cross-channel efficiencies you miss out on by working with disparate teams.

Pick a niche player – Another challenge we see eCommerce businesses dealing with is working with generalised agencies that work across local, service and eCommerce clients. You know what they say, Jack of all trades, master of none.

LION stands for Leaders In Our Niche. We pride ourselves on being among the top experts in each eCommerce Marketing Channel!

Thanks for reading! If any of what you’ve read resonates, call me for a chat.

Article by

Leonidas Comino – Founder & CEO

Leo is a Deloitte award-winning and Forbes-published digital business builder with over a decade of success in the industry, working with market-leading brands.

Like what we do? Come work with us

WebMaster Hangout – Live from May 06, 2022

Can I use Web Components for SEO?

Q. (03:04) Is there any problem with using web components for SEO, or is it OK to use them for my website?

  • (03:15) So web components are essentially a way of using predefined components within a website. Usually, they’re integrated with a kind of JavaScript framework. And the general idea there is that as a web designer, as a web developer, you work with these existing components, and you don’t have to reinvent the wheel every time something specific is supposed to happen. For example, if you need a calendar on one of your pages, then you could just use a web component for a calendar, and then you’re done. When it comes to SEO, these web components are implemented using various forms of JavaScript, and we can process pretty much most JavaScript when it comes to Google Search. And while I would like to say just kind of blindly everything will be supported, you can test this, and you should test this. And the best way to test this is in Google Search Console; there’s the Inspect URL tool. And there, you can insert your URL, and you will see what Google will render for that page, the HTML. You can see it on a screenshot essentially, first of all, and then also in the rendered HTML that you can look at as well. And you can double-check what Google is able to pick up from your web components. And if you think that the important information is there, then probably you’re all set. If you think that some of the important information is missing, then you can drill down and try to figure out what is getting stuck there? And we have a lot of documentation on JavaScript websites and web search nowadays. So I would double-check that. Also, Martin Splitt on my team has written a lot of these documents. He’s also findable on Twitter. He sometimes also does office hours like this, where he goes through some of the JavaScript SEO questions. So if you’re a web developer working on web components and you’re trying to make sure that things are being done well, I would also check that.
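As a tiny, hypothetical example of what such a component looks like under the hood, a custom element that writes its markup into the regular DOM leaves that content visible in the rendered HTML that the Inspect URL tool shows:

  <script>
    class SimpleCalendar extends HTMLElement {
      connectedCallback() {
        // rendered when the element is attached to the page
        this.innerHTML = '<p>June 2022</p>';
      }
    }
    customElements.define('simple-calendar', SimpleCalendar);
  </script>

  <simple-calendar></simple-calendar>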

Is it ok to use FAQ schema across different parts of a page?

Q. (05:18) Is it OK to use the FAQ schema to mark up questions and answers that appear in different sections of a blog post that aren’t formatted as a traditional FAQ list? For example, a post may have ten headings for different sections. A few of those are questions with answers.

  • (05:35) So I double-checked the official documentation. That’s where I would recommend you go for these kinds of questions as well. And it looks like it’s fine. The important part when it comes to FAQ snippets and structured data, in general, is that the content should be visible on the page. So it should really be the case that both the question and the answer are visible when someone visits that page, not that it’s kind of hidden away in a section of a page. But if the questions and the answers are visible on the page, even if they’re in different places on the page, that’s perfectly fine. The other thing to keep in mind is that, like all structured data, FAQ snippets are not guaranteed to be shown in the search results. Essentially, you make your pages eligible to have these FAQ snippets shown, but it doesn’t guarantee that they will be shown. So you can use the testing tool to make sure that everything is implemented properly. And if the testing tool says that’s OK, then probably you’re on the right track. But you will probably still have to kind of wait and see how Google actually interprets your pages and processes them to see what is actually shown in the search results. And for structured data, I think it’s the case for FAQs, but at least for some of the other types of schema, there are specific reports in Search Console as well that give you information on the structured data that was found and the structured data that was actually shown in the search results, so that you can roughly gauge, is it working the way that you want it to, or is it not working the way that you want it to? And for things like this, I would recommend trying them out and making a test page on your website, kind of seeing how things end up in the search results, double-checking if it’s really what you want to do, and then going off to actually implement it across the rest of your website.
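For reference, the markup itself is independent of where the questions appear visually on the page; a minimal FAQPage block (with a made-up question and answer, both of which would also need to be visible on the page) looks like this:

  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
      "@type": "Question",
      "name": "How long does delivery take?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Orders are usually delivered within 3 to 5 business days."
      }
    }]
  }
  </script>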

Our site is not user-friendly with JavaScript turned off. Is it a problem?

Q. (10:23) Our website is not very user-friendly if JavaScript is turned off. Most of the images are not loaded, and our flyout menu can’t be opened. However, in the Chrome ‘Inspect’ feature, all menu links are there in the source code. Might our dependence on JavaScript still be a problem for Googlebot?

  • (10:45) From my point of view, like with the first question that we had on web components, I would test it. So probably everything will be OK. And probably, I would assume if you’re using JavaScript reasonably, if you’re not doing anything special to block the JavaScript on your pages, it will probably just work. But you’re much better off not just believing me but rather using a testing tool to try it out. And the testing tools that we have available are quite well-documented. There are lots of variations on things that we recommend with regards to improving things if you run into problems. So I would double-check our guides on JavaScript and SEO and think about maybe, I don’t know, trying things out, making sure that they actually work the way that you want, and then taking that to improve your website overall. And you mentioned user-friendly with regards to JavaScript. So from our point of view, the guidance that we have is essentially very technical, in the sense that we need to make sure that Googlebot can see the content from a technical point of view and that it can see the links on your pages from a technical point of view. It doesn’t primarily care about user-friendliness. But, of course, your users care about user-friendliness. And that’s something where maybe it makes sense to do a little bit more so that your users are really for sure having a good experience on your pages. And this is often something that isn’t just a matter of a simple testing tool, but rather something where maybe you have to do a small user study, or kind of interview some users, or at least do a survey on your website to understand where they get stuck. What kind of problems are they facing? Is it because of, I don’t know, the fly-out menus you mentioned? Or is it something maybe completely different where they see problems, maybe that the text is too small, or they can’t click the buttons properly, those kinds of things which don’t really align with technical problems but are more user-side things. If you can improve those, and if you can make your users happier, they’ll stick around, and they’ll come back, and they’ll invite more people to visit your website as well.

We use some static HTML pages and some WordPress pages, does that matter?

Q. (13:07) Our static page is built with HTML, and our blog is built with WordPress. The majority of our blog posts are experiencing indexing issues in Google. How do I fix this?

  • (13:21) So I think, first of all, it’s important to know that these are essentially just different platforms. And essentially, with all of these platforms, you’re creating HTML pages. And the background or backend side of your website that ends up creating these HTML pages, that’s something that Googlebot doesn’t really look at. Or at least, that’s something that Googlebot doesn’t really try to evaluate. So if your pages are written in HTML, and you write them in an editor, and you load them on your server, and they serve like that, we can see that they’re HTML pages. If they’re created on the fly on your server based on a database in WordPress or some other kind of platform that you’re using, and then it creates HTML pages, we see those final HTML pages, and we essentially work with those. So if you’re seeing kind of issues with regards to your website overall when it comes to things like crawling, indexing, or ranking, and you can kind of exclude the technical elements there, and Googlebot is able to actually see the content, then usually what remains is kind of the quality side of things. And that’s something that definitely doesn’t rely on the infrastructure that you use to create these pages, but more it’s about the content that you’re providing there and the overall experience that you’re providing on the website. So if you’re seeing, for example, that your blog posts are not being picked up by Google or not ranking well at Google, and your static HTML pages are doing fine on Google, then it’s not because they’re static HTML pages that they’re doing well on Google, but rather because Google thinks that these are actually good pieces of content that it should recommend to other users. And on that level, that’s where I would take a look, and not focus so much on the infrastructure, but really focus on the actual content that you’re providing. And when it comes to content, it’s not just the text that’s like the primary part of the page. It’s like everything around the whole website that comes into play. So that’s something where I would really try to take a step back and look at the bigger picture. And if you don’t see kind of from a bigger picture point of view where maybe some quality issues might lie or where you could improve things, I would strongly recommend doing a user study. And for that, maybe invite, I don’t know, a handful of people who aren’t directly associated with your website and have them do some tasks on your website, and then ask them really tough questions about where they think maybe there are problems on this website, or if they would trust this website or any kind of other question around understanding the quality of the website. And we have a bunch of these questions in some of our blog posts that you can also use for inspiration. It’s not so much that I would say you have to ask the questions that we ask in our blog post. But sometimes, having some inspiration for these kinds of things is useful. In particular, we have a fairly old blog post on one of the early quality updates, and we have a newer blog post on core updates; I guess it’s like two years old now, so not super new. And both of these have a bunch of questions on them that you could ask yourself about your website. But especially if you have a group of users that are willing to give you some input, then that’s something that you could ask them, and really take their answers to heart and think about ways that you can improve your website overall.

I have some pages with rel=canonical tags, some without. Why are they both shown in the search?

Q. (17:10) I have set canonical tags, or canonical URLs, on five pages, but Google is showing other pages as well. Why is it not only showing the URLs which I’ve set a canonical on?

  • (17:30) So I’m not 100% sure I understand this question correctly. But kind of paraphrasing, it sounds like on five pages of your website, you set a rel=canonical. And there are other pages on your website where you haven’t set a rel=canonical. And Google is showing all of these pages kind of indexed essentially in various ways. And I think the thing to keep in mind is the rel=canonical is a way of specifying which of the pages within a set of duplicate pages you want to have indexed. Or essentially, which address do you want to have used. So, in particular, if you have one page, maybe with the file name in uppercase, and one page with the file name in lowercase, then in some situations, your server might show exactly the same content; technically, they are different addresses. Uppercase and lowercase are slightly different. But from a practical point of view, your server is showing exactly the same thing. And Google, when it looks at that, says, well, it’s not worthwhile to index two addresses with the same content. Instead, I will pick one of these addresses and use it kind of to index that piece of content. And with the rel=canonical, you give Google a signal and tell it, hey, Google, I really want you to use maybe the lowercase version of the address when you’re indexing this content. You might have seen the uppercase version, but I really want you to use the lowercase version. And that’s essentially what the rel=canonical does. It’s not a guarantee that we would use the version that you specify there, but it’s a signal for us. It helps us to figure out all things else being kind of equal; you really prefer this address, so we will try to use that address. And that’s kind of the preference part that comes into play here. And it comes into play when we’ve recognised there are multiple copies of the same piece of content on your website. And for everything else, we will just try to index it to the best of our abilities. And that also means that for the pages where you have a rel=canonical on it, sometimes it will follow that advice that you give us. Sometimes our systems might say, well, actually, I think maybe you have it wrong. You should have used the other address as the primary version. That can happen. It doesn’t mean it will rank differently, or it will be worse off in search. It’s just, well, Google systems are choosing a different one. And for other pages on your website, you might not have a rel=canonical set at all. And for those, we will just try to pick one ourselves. And that’s also perfectly fine. And in all of these cases, the ranking will be fine. The indexing will be fine. It’s really just the address that is shown in the search results that varies. So if you have the canonical set on some pages but not on others, we will still try to index those pages and find the right address to use for those pages when we show them in search. So it’s a good practice to have the rel=canonical on your pages because you’re trying to take control over this vague possibility that maybe a different address will show. But it’s not an absolute necessity to have a rel=canonical tag.

What can we do about spammy backlinks that we don’t like?

Q. (20:56) What can we do if we have thousands of spammy links that are continuously placed as backlinks on malicious domains? They contain spammy keywords and cause 404s on our domain. We see a strong correlation between these spammy links and a penalty that we got after a spam update in 2021. We disavowed all the spammy links, and we reported the domain which is listed as a source of the spam links. What else can we do?

  • (21:25) Yeah. I think this is always super frustrating as a site owner when you look at it and you’re like, someone else is ruining my chances in the search results. But there are two things I think that are important to mention in this particular case. On the one hand, if these links are pointing at pages on your website that are returning 404, so they’re essentially linking to pages that don’t exist, then we don’t take those links into account because there’s nothing to associate them with on your website. Essentially, people are linking to a missing location. And then we’ll say, well, what can we do with this link? We can’t connect it to anything. So we will drop it. So that’s kind of the first part. Like a lot of those are probably already dropped. The second part is you mentioned you disavowed those spammy backlinks. And especially if you mention that these are like from a handful of domains, then you can do that with the domain entry in the disavow backlinks tool. And that essentially takes them out of our system as well. So we will still list them in Search Console, and you might still find them there and kind of be a bit confused about that. But essentially, they don’t have any effect at all. If they’re being disavowed, then we tell our systems that these should be taken into account neither in a positive nor a negative way. So from a practical point of view, both from the 404 sides and from the disavow, probably those links are not doing anything negative to your website. And if you’re seeing kind of significant changes with regards to your website in Search, I would not focus on those links, but rather kind of look further. And that could be within your own website kind of to understand a little bit better what is actually the value that you’re providing there. What can you do to really stand up above all of the other websites with regards to kind of the awesome value that you’re providing users? How can you make that as clear as possible to search engines? That’s kind of the direction I would take there. So not lose too much time on those spammy backlinks. You can just disavow the whole domain that they’re coming from and then move on. There’s absolutely nothing that you need to do there. And especially if they’re already linking to 404 pages, they’re already kind of ignored.
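For reference, the disavow file itself is just a plain text file uploaded through the disavow links tool; whole domains are listed with a domain: prefix (the domains below are placeholders):

  # spammy domains seen in the links report
  domain:spammy-links-example.net
  domain:more-spam-example.biz

  # individual URLs can also be listed
  https://another-example.org/spammy-page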

What’s the stand on App Indexing?

Q. (26:51) What’s the latest stand on app indexing? Is the project shut? How to get your app ranked on Google if app indexing is no longer working?

  • (26:58) So app indexing was a very long time ago, I think a part of Search Console and some of the things that we talked about, where Google will be able to crawl and index parts of an app as it would appear, and try to show that in the search results. And I think that migrated a long time ago over to Firebase app indexing. And I double-checked this morning when I saw this question, and Firebase has also migrated to yet another thing. But it has a bunch of links there for kind of follow-up things that you can look at with regards to that. So I would double-check the official documentation there and not kind of listen to me talk about app indexing as much because I don’t really know the details around Android app indexing. The one thing that you can do with regards to any kind of an app, be it a mobile phone, smartphone app like these things, or a desktop app that you install, you can absolutely make a homepage for it. And that’s something that can be shown in Search like anything else. And for a lot of the kinds of smartphone apps, there will also be a page on the Play Store or the App Store. I don’t know what they’re all called. But usually, they’re like landing pages that also exist, which are normal web pages which can also appear in Search. And these things can appear in search when people search for something around your app. Your own website can appear in search when people are searching for something around the app. And especially when it comes to your own website, you can do all of the things that we talk about when it comes to SEO for your own website. So I would not like to say, oh, app indexing is no longer the same as it was 10 years ago. Therefore, I’m losing out. But rather, you have so many opportunities in different ways to be visible in Search. You don’t need to rely on just one particular aspect.

Our CDN blocks Translate. Is that bad for SEO?

Q. (29:08) The bot crawling is causing a real problem on our site. So we have our CDN block unwanted bots. However, this also blocks the Translate This Page feature. So my questions are, one, is it bad for Google SEO if the Translate This Page feature doesn’t work? Does it also mean that Googlebot is blocked? And second, is there a way to get rid of the ‘Translate This Page’ link for all of our users?

  • (29:37) So I think there are different ways or different infrastructures on our side to access your pages. On the one hand, there’s Googlebot and the associated infrastructure. And I believe the translate systems are slightly different because they don’t go through robots.txt, but rather they look at the page directly. And because of that, it can be that these are blocked in different ways. So, in particular, Googlebot is something you can block or allow on an IP level using a reverse DNS lookup. And the other kinds of elements are slightly different. And if you want to block everything or every bot other than Googlebot, or other than official search engine bots, that’s totally up to you. When it comes to SEO, Google just needs to be able to crawl with Googlebot. And you can test that in Search Console to see whether Googlebot has access, and through Search Console, you can get confirmation that it’s working OK. How it works for the Translate This Page backend systems, I don’t know. But it’s not critical for SEO. And the last question, how can you block that Translate This Page link? There is a “notranslate” meta tag that you can use on your pages that essentially tells Chrome and the systems around translation that this page does not need to be translated or shouldn’t be offered as a translation. And with that, I believe you can block the Translate This Page link in the search results as well. The “notranslate” meta tag is documented in our Search developer documentation. So I would double-check that.
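For reference, a minimal example of that tag in the page’s <head>:

    <!-- Tells Google (and Chrome) not to offer translation for this page -->
    <meta name="google" content="notranslate">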


WebMaster Hangout – Live from April 29, 2022

Should we make a UK version of our English blog?

Q. (03:15) The company I work at is working on expanding its market in the UK and recently launched a UK subsite. 70% of our US traffic comes from our blog, and we don’t currently have a blog on the UK subsite. Would translating our US blog post into Queen’s English be beneficial for UK exposure?

  • (03:42) I doubt that would make a big difference. So, in particular, when it comes to hreflang, which is the way that you would connect different language or country versions of your pages, we would not be ranking these pages better just because we have a local version. It’s more that if we understand that there is a local version, and we have that local version indexed, and it’s a unique page, then we could swap out that URL at the same position in the search results to the more local version. So it’s not that your site will rank better in the UK if you have a UK version. It’s just that we would potentially show the link to your UK page instead of the current one. For, essentially, informational blog posts, I suspect you don’t really need to do that. And one of the things I would also take into account with internationalisation is that fewer pages are almost always much better than having more pages. So, if you can limit the number of pages that you provide by not doing a UK version of the content that doesn’t necessarily need to have a UK version, then that’s almost always better for your site, and it’s easier for you to maintain.
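If you did decide to connect a US and a UK version of a post, the hreflang annotations would look roughly like this (the URLs are hypothetical, and both pages need to carry the reciprocal set of annotations):

    <link rel="alternate" hreflang="en-us" href="https://example.com/blog/post">
    <link rel="alternate" hreflang="en-gb" href="https://example.com/uk/blog/post">
    <link rel="alternate" hreflang="x-default" href="https://example.com/blog/post">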

How is a language mismatch between the main content & the rest of a page treated?

Q. (05:04) How might Google treat a collection of pages on a site that is in one language per URL structure, for example, example.com/de/blogarticle1, and the titles may be in German, but the descriptions are in English, and the main content is also in English?

  • (05:25) So the question goes on a little bit with more variations of that. In general, we do focus on the primary content on the page, and we try to use that, essentially, for ranking. So, if the primary content is in English, that’s a really strong sign for us that this page is actually in English. That said, if there’s a lot of additional content on the page that is in a different language, then we might either understand this page is in two languages, or we might be a little bit confused if most of the content is in another language and say, well, perhaps this page is actually in German, and there’s just a little bit of English on it as well. So that’s one thing to kind of watch out for. I would really make sure that, if the primary content is in one language, that it’s really a big chunk of primary content, and that it’s kind of useful when people go to that page who are searching in that language. The other thing to watch out for is the titles of the page and things like the primary headings on the page. They should also match the language of the primary content. So if the title of your page is in German, and the primary content of your page is in English, then it’s going to be really, really hard for us to determine what we should show in the search results because we try to match the primary content. That means we would try to figure out what would be a good title for this page. That also means that we would need to completely ignore the titles that you provide. So if you want to have your title shown, make sure that they also match the primary language of the page.

Does an age-interstitial block crawling or indexing?

Q.  (08:20)  If a website requires users to verify their age before showing any content by clicking a button to continue, is it possible that Google would have problems crawling the site? If so, are there any guidelines around how to best handle this?

  • (08:40) So, depending on the way that you configure this, yes, it is possible that there might be issues around crawling the site. In particular, Googlebot does not click any buttons on a page. So it’s not that Google would be able to navigate through an interstitial like that if you have something that is some kind of a legal interstitial. And especially if it’s something that requires verifying an age, then people have to enter something and then click Next. And Googlebot wouldn’t really know what to do with those kinds of form fields. So that means, if this interstitial is blocking the loading of any other content, then probably that would block indexing and crawling as well. A really simple way to test if this is the case is just to try to search for some of the content that’s behind that interstitial. If you can find that content on Google, then that probably means that we were able to actually find that content. From a technical point of view, what you need to watch out for is that Google is able to load the normal content of the page. And, if you want to show an interstitial on top of that, using JavaScript or HTML, that’s perfectly fine. But we need to be able to load the rest of the page as well. So that’s kind of the most important part there. And that also means that if you’re using some kind of a redirect to a temporary URL and then redirecting that to your page, that won’t work. But, if you’re using JavaScript/CSS to kind of display an interstitial on top of your existing content that’s already loaded, then that would work for Google Search. And, from a kind of a policy point of view, that’s fine. That’s not something that we would consider to be cloaking because the content is still being loaded there. And especially if people can get to that content after navigating through that interstitial, that’s perfectly fine.
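A minimal sketch of the overlay approach described above (the IDs, text, and inline styling are made up for the example): the article is present in the loaded HTML, and the age gate is simply layered on top and removed on click.

    <!-- Main content is in the HTML, so Googlebot can load and index it -->
    <main id="content">
      <h1>Article title</h1>
      <p>Full article text…</p>
    </main>

    <!-- Age gate rendered as an overlay on top of the already-loaded content -->
    <div id="age-gate" style="position:fixed; inset:0; background:#fff; z-index:999;">
      <p>Please confirm that you are over 18.</p>
      <button onclick="document.getElementById('age-gate').remove()">I am over 18</button>
    </div>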

Is using the Indexing API for a normal website good or bad?

Q. (15:35) Is using the Indexing API good or bad for a normal website?

  • (08:40) So the Indexing API is meant for very specific kinds of content, and using it for other kinds of content doesn’t really make sense. It’s similar to, I don’t know, using photos of construction vehicles on your website. Sure, you can put them on a medical website, but it doesn’t really make sense. And, if you have a website about building houses, then, sure, put construction vehicles on your website. It’s not that it’s illegal or that it will cause problems if you put construction vehicles on your medical website, but it doesn’t really make sense. It’s not really something that fits there. And that’s similar with the Indexing API. It’s really just for very specific use cases. And, for everything else, that’s not what it’s there for.

Does Googlebot read the htaccess file?

Q. (16:31) Does Googlebot read the htaccess file?

  • (16:36) The short answer is no because, usually, a server is configured in a way that we can’t access that file, or nobody can access that file externally. The kind of longer answer is that the htaccess file controls how your server responds to certain requests, assuming you’re using an Apache server, which uses this as a control file. And essentially, if your server is using this file to control how it responds to certain requests, then, of course, Google and anyone will see the effects of that. So it’s not– again, assuming this is a control file on your server, it’s not that Google would read that file and do something with it, but rather Google would see the effects of that file. So, if you need to use this file to control certain behavior on your website, then go for it.
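As a hypothetical example, assuming an Apache server: Googlebot never fetches the .htaccess file below, but it does see the 301 response that the rule produces.

    # .htaccess (Apache) – not readable by Googlebot, but its effect is visible
    Redirect 301 /old-page.html https://example.com/new-page.html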

How does Google Lens affect SEO?

Q. (17:39) How can Multisearch in Google Lens affect SEO?

  • (17:44) So this is something, I think, that is still fairly new. We recently did a blog post about this, and you can do it in Chrome, for example, and on different types of phones. Essentially, what happens is you can take a photo or an image from a website, and you can search using that image. For example, if you find a specific piece of clothing or, I don’t know, anything, basically, that you would like to find more information on, you can highlight that section of the image and then search for more things that are similar. And, from an SEO point of view, that’s not really something that you would do manually to make that work, but rather, if your images are indexed, then we can find your images, and we can highlight them to people when they’re searching in this way. So it’s not that there’s like a direct effect on SEO or anything like that. But it’s kind of like, if you’re doing everything right, if your content is findable in Search, if you have images on your content, and those images are relevant, then we can guide people to those images or to your content using multiple ways.

I’m unsure what to do to make my Blogger or Google Sites pages indexable.

Q. (21:24) I’m unsure what is needed to have my Blogger and Google Sites pages searchable. I assumed Google would crawl its own platforms.

  • (21:34) So I think the important part here is that we don’t have any special treatment for anything that is hosted on Google’s systems. And, in that regard, you should treat anything that you host on Blogger or on Google Sites or anywhere else essentially the same as any content that you would host anywhere on the web. And you have to assume that we need to be able to crawl it. We need to be able to, well, first, discover that it actually exists there. We need to be able to crawl it. We need to be able to index it, just like any other piece of content. So just because it’s hosted on Google’s systems doesn’t give it any kind of preferential treatment when it comes to the search results. It’s not that there is a magic way that all of this will suddenly get indexed just because it’s on Google, but rather, we have these platforms. You’re welcome to use them, but they don’t have any kind of special treatment when it comes to Search. And also, with these platforms, it’s definitely possible to set things up in a way that won’t work as well for Search as it could. So, depending on how you configure things with Blogger and Google Sites, how you set that up, and which kind of URL patterns you use, things can end up working less well than the basic setup would. So just because something is on Google’s systems doesn’t mean that there’s a preferential way of it being handled.

Why is a specific URL on my site not crawled?

Q. (25:29) Is there anything like a URL format penalty? I’m facing a weird problem where a particular URL doesn’t get crawled. It’s the most linked page on the website, and still, not even one internal link is found for this URL, and it looks like Google is simply ignoring this kind of URL. However, if we slightly change the URL, like adding a character or word, the URL gets crawled and indexed. The desired URL format is linked within the website, present in the rendered HTML, and submitted in a sitemap as well, but still not crawled or indexed.

  • (26:06) So it’s pretty much impossible to say without looking at some examples. So this is also the kind of thing where I would say, ideally, go to the Help forums and post some of those sample URLs and get some input from other folks there. And the thing to also keep in mind with the Help forums is that the product experts can escalate issues if they find something that looks like something is broken in Google. And sometimes things are broken in Google, so it’s kind of good to have that path to escalate things.

Does adding the location name to the description help ranking?

Q. (26:50) Does adding the location name in the meta description matter to Google in terms of a ranking if the content quality is maintained?

  • (26:59) So the meta description is primarily used as a snippet in the search results page, and that’s not something that we would use for ranking. But, obviously, having a good snippet on a search results page can make it more interesting for people to actually visit your page when they see your page ranking in the search results.

Which structured data from schema do I add to a medical site?

Q. (27:20) How does schema affect a medical niche’s website? What kind of structured data should be used there?

  • (27:27) So, when it comes to structured data, I would primarily focus on the things that we have documented in our developer documentation and the specific features that are tied to that. So, instead of saying, what kind of structured data should I use for this type of website, I would turn it around and say, what kind of visible attributes do I want to have shown in the search results? And then, from there, look at what the requirements for those visual attributes are, and whether I can implement the appropriate structured data to fulfill those requirements. So that’s kind of the direction I would head there.

Does every page need schema or structured data?

Q. (28:06) Does every page need schema or structured data?

  • (28:10) No, definitely not. As I mentioned, use the visual elements you want to have shown for your page as a guide, and then find the right structured data for that. It’s definitely not the case that you need to put structured data on every page.


WebMaster Hangout – Live from April 01, 2022

Crawling of Website.

Q. (00:55) My question is about the crawling of our website. We see different numbers for Google’s crawling in Search Console compared with our server logs. For instance, our server logs show three times as many crawls from Google as Search Console does. Could it be that something is wrong, since the numbers are different?

  • (1:29) I think the numbers will always be different, just because of the way that, in Search Console, we report on all crawls that go through the Googlebot infrastructure. But that also includes other types of requests. So, for example, I think AdsBot also uses the Googlebot infrastructure, those kinds of things. And they have different user agents. So if you look at your server logs and only look at the Googlebot user agent that we use for web search, those numbers will never match what we show in Search Console.

Why Did Discover Traffic Drop?

Q. (03:10) I have a question in mind. We have a website, and for the last three to four months, we have been working on Google Web Stories. It was going very well. Then on the 5th of March, we were having somewhere around 400 to 500 real-time visitors coming from Google Discover on our Web Stories. But suddenly, we saw an instant drop in our visitors from Google Discover, and our Search Console is not even showing any errors. So what could be a possible reason for that?

  • (3:48) I don’t think there needs to be any specific reason for that, because, especially with Discover, it’s something that we would consider to be additional traffic to a website. And it’s something that can change very quickly. And anecdotally, I’ve seen or heard that from sites in these office hours: sometimes they get a lot of traffic from Discover, and then suddenly it goes away. And then it comes back again. So it doesn’t necessarily mean that you’re doing anything technically wrong. It might just be that, in Discover, things have slightly changed, and then suddenly you get more traffic or suddenly you get less traffic. We do have a bunch of policies that apply to content that we show in Google Discover, and I would double-check those, just to make sure that you’re not accidentally touching on any of those areas. I don’t know the details offhand, but I think that includes something along the lines of clickbaity content, for example, those kinds of things. So I would double-check those guidelines to make sure that you’re all in line there. But even if everything is in line with the guidelines, it can be that suddenly you get a lot of traffic from Discover, and then suddenly, you get a lot less traffic.

Can We Still Fix the robots.txt for Redirects?

Q.  (14:55) We have had a content publishing website since 2009, and we experienced a bad migration in 2020, where we encountered a huge drop in organic traffic. We had a lot of broken links, so we used 301 redirects to point these broken links to the original articles. But in robots.txt, we disallowed these links so that the crawl budget wouldn’t be spent on crawling these 404 pages. So the main question is: now that we have fixed all these redirects and pointed them to the same articles with the proper names, can we remove these entries from the robots.txt, and how much time does it take for Google to actually take that into account?

  • (15:53) So if the page is blocked by the robots.txt, we wouldn’t be able to see the redirect. So if you set up a redirect, you would need to remove that block in the robots.txt. With regards to the time that takes, there is no specific time, because we don’t crawl all pages at the same speed. So some pages we may pick up within a few hours, and other pages might take several months to be recrawled. So that’s, I think, kind of tricky. The other thing, I think, is worth mentioning here is, that if this is from a migration that is two years back now, then I don’t think you would get much value out of just making those 404 links show content now. I can’t imagine that would be the reason why a website would be getting significantly less traffic. Mostly, because it’s– unless these pages are the most important pages of your website, but then you would have noticed that. But if these are just generic pages on a bigger website, then I can’t imagine that the overall traffic to a website would drop because they were no longer available.
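A small sketch of the change being discussed, assuming the redirected URLs sit under a hypothetical /old-articles/ path: the Disallow rule has to go (shown commented out here) before Googlebot can fetch those URLs and discover the 301s.

    User-agent: *
    # Removed so Googlebot can fetch the old URLs and see their redirects:
    # Disallow: /old-articles/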

Some of the Blogs Posts Aren’t Indexed, What Can We Do?

Q. (18:54) My question is a crawling question pertaining to discovered not indexed. We have run a two-sided marketplace since 2013 that’s fairly well established. We have about 70,000 pages, and about 70% of those are generally indexed. And then there’s kind of this budget that crawls the new pages that get created, and we see movement on that, so that old pages go out, new pages come in. At the same time, we’re also writing blog entries from our editorial team, and to get those to the top of the queue, we always use request indexing on those, so they’ll go quicker. We add them to the sitemap as well, but we find that we write them and then we want them to get into Google as quickly as possible. As we’ve been growing over the last year, and we have more content on our site, we’ve seen that this sometimes doesn’t work as well for the new blog entries. And they also sit in this discovered not indexed queue for a longer time. Is there anything we can do with internal links or something? Is it content-based, or do we just have to live with the fact that some of our blogs might not make it into the index?

  • (20:13) Now, I think overall, it’s kind of normal that we don’t index everything on a website. So that can happen to the entries you have on the site and also the blog post on the site. It’s not tied to a specific kind of content. I think using the Inspect URL tool to submit them to indexing is fine. It definitely doesn’t cause any problems. But I would also try to find ways to make those pages as clear as possible that you care about them. So essentially, internal linking is a good way to do that. To really make sure that, from your home page, you’re saying, here are the five new blog posts, and you link to them directly. So that it’s easy for Googlebot when we crawl and index your home page, to see, oh, there’s something new, and it’s linked from the home page. So maybe it’s important. Maybe we should go off and look at that.
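For illustration, a simple way to surface new posts prominently (the paths and titles are placeholders) is a small list of plain links on the home page:

    <section>
      <h2>Latest from the blog</h2>
      <ul>
        <li><a href="/blog/new-post-1">New post 1</a></li>
        <li><a href="/blog/new-post-2">New post 2</a></li>
      </ul>
    </section>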

Can a Low Page Speed Score Affect the Site’s Ranking?

Q. (27:28) Could low mobile ratings on Google PageSpeed metrics like LCP and FID have affected our website’s rank after the introduction of the new algorithm last summer? We were around fourth in my city for a “web agency” type keyword. After the introduction of this algorithm, if we go into Google Search Console, we find that these parameters, like LCP and FID for mobile, have a bad rating, around 48, whereas for desktop it is 90, so that’s OK. So could this be the problem?

  • (28:24) Could be. It’s hard to say just based on that. So I think there are maybe two things to watch out for. The number that you gave me sounds like the PageSpeed Insights score that is generated for desktop and mobile – that number from 0 to 100, I think. We don’t use that in Search for the rankings. We use the Core Web Vitals, where there are LCP, FID, and CLS. And the metrics that we use are based on what users actually see. So if you go into Search Console, there’s the Core Web Vitals report, and that should show you those numbers and whether they are within the good or bad ranges.

Can Google Crawl Pagination With “View More” Buttons?

Q. (39:51) I recently redesigned my website and changed the way I list my blog posts and other pages from pages one, two, three, four to a View More button. Can Google still crawl the ones that are not shown on the main blog page? What is the best practice? If not, let’s say those pages are not important when it comes to search and traffic, would the whole site as a whole be affected when it comes to how relevant it is for the topic for Google?

  • (40:16) So on the one hand, it depends a bit on how you have that implemented. A View More button could be implemented as a button that does something with JavaScript, and those kinds of buttons, we would not be able to crawl through and actually see more content there. On the other hand, you could also implement a View More button, essentially as a link to page two of those results, or from page two to page three. And if it’s implemented as a link, we would follow it as a link, even if it doesn’t have a label that says page two on it. So that’s, I think, the first thing to double-check. Is it actually something that can be crawled or not? And with regards to if it can’t be crawled, then usually, what would happen here is, we would focus primarily on the blog posts that would be linked directly from those pages. And it’s something where we probably would keep the old blog posts in our index because we’ve seen them and indexed them at some point. But we will probably focus on the ones that are currently there. One way you can help to mitigate this is if you cross-link your blog post as well. So sometimes that is done with category pages or these tag pages that people add. Sometimes, blogs have a mechanism for linking to related blog posts, and all of those kinds of mechanisms add more internal linking to a site and that makes it possible that even if we, initially, just see the first page of the results from your blog, we would still be able to crawl to the rest of your website. And one way you can double-check this is to use a local crawler. There are various third-party crawling tools available. And if you crawl your website, and you see that oh, it only picks up five blog posts, then probably, those are the five blog posts that are findable. On the other hand, if it goes through those five blog posts. And then finds a bunch more and a bunch more, then you can be pretty sure that Googlebot will be able to crawl the rest of the site, as well.
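The difference described above, sketched in HTML (the loadMorePosts handler and the paths are hypothetical):

    <!-- Not crawlable as a link: Googlebot doesn't click buttons or trigger this handler -->
    <button onclick="loadMorePosts()">View more</button>

    <!-- Crawlable: a real link to the next page of results, even without a "page 2" label -->
    <a href="/blog/page/2">View more</a>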

To What Degree Does Google Follow the robots.txt Directives?

Q. (42:34) To what degree does Google honour the robots.txt? I’m working on a new version of my website that’s currently blocked with a robots.txt file and I intend to use robots.txt to block the indexing of some URLs that are important for usability but not for search engines. So I want to understand if that’s OK.

  • (42:49) That’s perfectly fine. So when we recognise disallow entries in a robots.txt file, we will absolutely follow them. The only kind of situation I’ve seen where that did not work is where we were not able to process the robots.txt file properly. But if we can process the robots.txt file properly, if it’s properly formatted, then we will absolutely stick to that when it comes to crawling. Another caveat here is, that usually, we update the robots.txt files, maybe once a day, depending on the website. So if you change your robots.txt file now, it might take a day until it takes effect. With regards to blocking crawling– so you mentioned blocking indexing, but essentially, the robots.txt file would block crawling. So if you blocked crawling of pages that are important for usability but not for search engines, usually, that’s fine. What would happen, or could happen, is that we would index the URL without the content. So if you do a site query for those specific URLs, you would still see it. But if the content is on your crawlable pages, then for any normal query that people do when they search for a specific term on your pages, we will be able to focus on the pages that are actually indexed and crawled, and show those in the search results. So from that point of view, that’s all fine.
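A minimal robots.txt sketch for that setup (the paths are hypothetical examples of URLs that matter for usability but not for Search):

    User-agent: *
    Disallow: /cart/
    Disallow: /internal-search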

If 40% Of Content Is Affiliate, Will Google Consider Site a Deals Website?

Q. (53:27) So does the proportion of affiliate, or maybe even sponsored, content created by a publisher matter? The context is a Digiday newsletter that went out today, which mentioned that publishers were concerned that if, let’s say, 40% of your traffic or content is commerce or affiliate, your website will be considered by Google a deals website, and then your authority may be dinged a little bit. Is there such a thing happening in the ranking systems algorithmically?

  • (54:05) I don’t think we would have any threshold like that. Partially, because it’s really hard to determine a threshold like that. You can’t, for example, just take the number of pages and say, this is this type of website because it has 50% pages like that. Because the pages can be visible in very different ways. Sometimes, you have a lot of pages that nobody sees. And it wouldn’t make sense to judge a website based on something that, essentially, doesn’t get shown to users.


WebMaster Hangout – Live from March 25, 2022

Search Console Says I’m Ranking #1, but I Don’t See It.

Q. (00:56) The question is about the average position in Google Search. A few days ago, I realised that over 60 queries show that I am in position #1. I still don’t understand how that works, because when I search for some of these keywords, my site does not appear first in the search results. I have seen some of them coming from the Google Business Profile, but some of them are not really appearing. And I don’t really understand why it says that the average position is #1. What does it really mean? Does it sometimes say I am in position #1 and it takes time to appear, or how does it really work?

  • (54:13) The average top position is based on what was actually shown to users. It is not a theoretical ranking where you would appear at #1 if you did the right kind of thing. It is really: we showed it to users, and it was ranking at #1 at that time. Sometimes it is tricky to reproduce that, which makes it a little bit hard, because sometimes it applies to users in a specific location or searching in a specific way – maybe just on mobile or just on desktop. Sometimes it is something like an image OneBox, a knowledge panel, or the local business entry that you mentioned. All of these are also places where there could be a link to your website. And if you see something listed as ranking #1, usually what I recommend doing is trying to narrow down what exactly was searched by using the different filters in Search Console: figure out which country it was in, what type of query it was, whether it was on mobile or on desktop, and see if you can reproduce it that way. Another thing that I sometimes do is look at the graph over time. If, for a specific query, your keyword shows an average position of #1, but the total impressions and total clicks don’t seem to make much sense – you think a lot of people are searching, so you would expect to see a lot of traffic, or at least a lot of impressions – that could be a sign that you were shown at position #1 in whatever way, but you aren’t always shown that way. That could be something very temporary or something that fluctuates a little bit, anything along those lines. But it is not the case that it is a theoretical position. You were shown in the search results, and when you were shown, you were shown like this.

Why Is My Homepage Ranking Instead of an Article?

Q. (05:55) When I check the pages in Search Console, the only page that is actually shown is mostly the home page. Some of the queries the home page is ranking for have their own dedicated pages.

  • (6:21) I guess it is just our algorithms seeing your home page as the more important page overall. Sometimes, if you continue to work on your website and make it better and better, it becomes easier for our systems to recognise that there is actually a specific page that is better suited for a particular query. But it is a matter of working on your site. And working on your site shouldn’t just mean generating more and more content; it should mean actually creating something better. That could be by deleting a bunch of content and combining things together. So improving your website isn’t the same as adding more content.

Does Locating a Sitemap in a Folder Affect Indexing?

Q.  (07:11) We have our sitemaps in the subfolder of our website. I have noticed recently that a lot more pages say ‘indexed’ but not ‘submitted’. Do you think that might be due to moving the sitemaps into the subfolder? We used to have them in our ‘root’ but due to technology change, we had to move them.

  • (7:30) The location of the sitemap shouldn’t really matter. It is something you can put in a subdirectory or on a subdomain. It depends on how you submit the sitemap file; for example, you can list it in your robots.txt file. You can put it anywhere; it doesn’t matter.
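For example, a sitemap kept in a subdirectory can simply be referenced from robots.txt (the URL is a placeholder):

    # In https://example.com/robots.txt
    Sitemap: https://example.com/sitemaps/sitemap-index.xml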

I Have a New Blog, No Links. Should I Submit in Search Console?

Q. (07:59) I am new to blogging and starting a new blog. It has been an amazing experience to start from scratch. It is a brand new blog with no links to it. Would you recommend submitting URLs as you publish them using Google Search Console and then requesting indexing for new blogs that have no links to them, or is there no point and is it not really helpful?

  • (8:30) It is not that there is any disadvantage to doing that. If it is a new site that we have absolutely no signals or information about at all, at least telling us about the URLs is a way of getting an initial foot in the door, but it is not a guarantee that we will pick them up. You probably know someone else who is blogging, and you could work together and maybe get a link to your site. Something along those lines would probably do a lot more than just going to Search Console and saying, ‘I want this URL indexed immediately’.

How Fast Does the Links Report Update?

Q. (09:22) How long does it typically take for Google to show the links to a brand new blog in the Google Search Console ‘Links’ report?

  • (9:53) A lot of the reports in Search Console are recalculated every 3-4 days. So within about a week, you should probably see some data there. The tricky part is that we show a sample of the links to your website, and it doesn’t mean that we immediately populate that report with every link that we have found to your site. It is not a matter of months, and it is not a matter of hours as it is in the performance report. From a day up to a week is a reasonable time.

Should the Data in Search Console Align With Analytics?

Q. (10:56) Google and my website’s documentation state that the data we get from Google Analytics and Google Search Console will not match exactly, but that they should make sense directionally. Does this mean that, for organic search, your clicks should always be under the sessions that you get? Is that understanding correct?

  • (11:17) I guess it depends on what you have set up and what you are looking at specifically. If you are looking at it at the site level, that should be about correct. If you are looking at it at the per-URL level on a very large site, it could happen that individual URLs are not tracked in Search Console, and you would see slight differences, or you would see changes over time – some days a URL is tracked, and some days it is not – particularly if you have a very large website. But if you look at the website overall, that should be pretty close. Search Console measures what is shown in the search results – the clicks and impressions from there – and Analytics uses JavaScript to track what is happening on the website side. Those tracking methods are slightly different and probably have slightly different ways of deduplicating things. So I would never expect these two to line up completely. However, overall, they should be pretty close.

Why Are Disallowed Pages Getting Traffic?

Q. (16:58) My next question is: in my robots.txt file, I have disallowed certain pages. But it is quite possible that Google had indexed those pages in the past. And even though I have blocked them and disallowed crawling, to this date I still see them getting organic sessions. Why is that happening, and how can I fix it? I read there is something called a ‘noindex’ directive. Is that the right way to go about it, or how should I pursue this?

  • (17:30) If these are pages that you don’t want to have indexed, then using ‘noindex’ would be better than using the disallow in robots.txt. The ‘noindex’ would be a meta tag on the page, though. So it’s a robots meta tag with ‘noindex’. And you would need to allow crawling for that to take effect.
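The tag in question goes in the page’s <head>, and the page must remain crawlable (not disallowed in robots.txt) for Google to see it:

    <meta name="robots" content="noindex">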

Does Google Use Different Algorithms per Niche?

Q. (23:47) Is it true that Google has different algorithms for the indexing and ranking of different niches? We have two websites of the same type, and we’ve built them with the same process. The only difference is that the two sites are different niches. And currently, one is working while the other one has lost all ranking.

  • (24:07) So I don’t think we have anything specific with regards to different niches. But obviously, different kinds of content are differently critical to our search results. And if you look at something like our quality rater guidelines, we talk about things like Your Money or Your Life sites, where we do work to have a little bit more critical algorithms involved in the crawling, indexing, and ranking of those sites. But it’s not the case that a bicycle shop has completely different algorithms than, I don’t know, a shoe store, for example. They’re essentially both e-commerce type stores. But the thing that you also mentioned in the question is that these are content aggregator sites, and they’re built with the same process. And some of them do work, and some of them don’t. That, to me, feels like – I don’t know your sites – it feels a bit like low-effort affiliate sites, where you’re just taking content feeds and publishing them. And that’s the kind of thing where our algorithms tend not to be so invested in making sure that we can crawl and index all of that content, because essentially, it’s the same content that we’ve already seen elsewhere on the web. So from that point of view, if you think that might apply to your site, I would recommend focusing on making fewer sites and making them significantly better. So that it’s not just aggregating content from other sources, but actually that you’re providing something unique and valuable, in the sense that if we were not to index your website properly, then the people on the internet would really miss a resource that provides them with value. Whereas, if it’s really the case that if we didn’t index your website, then people would just go to one of the other affiliate aggregator sites, then there is no real reason for us to focus on and invest in crawling and indexing your site. So that’s something where, again, I don’t know your websites, but that’s something that I would look into a little bit more, rather than just, oh, “Google doesn’t like bicycle stores; they like shoe stores instead”.

What Counts as a Click in FAQ Rich Snippets?

Q.  (26:23) Two related questions. What counts as a click for an FAQ rich snippet? Does Google ever add links to FAQ answers, even if there isn’t one included in the text?

  • (26:29) You linked to the help centre article on that, which I think is pretty much the definitive source on how clicks, impressions, and position are counted in Search Console. In general, we count it as a click if it’s really a link to your website and someone clicked on it. And with regards to the rich results, I can’t say for sure that we would never add a link to a rich result that we show in the search results. Sometimes I could imagine that we do. But it’s not the case that we would say, oh, there’s a rich result on this page, therefore we’ll count it as a click, even though nobody clicked on it. It’s really, if there’s a link there and people click on it and go to your website, that’s what we would count. And similarly, for impressions, we would count it as an impression if one of those links to your site was visible in the search results. And it doesn’t matter where it was visible on the page, whether at the top or the bottom of the search results page. If it’s theoretically visible to someone on that search results page, we’ll count it as an impression.

Why Do Parameter Urls Get Indexed?

Q. (30:31) Why do parameter URLs end up in Google’s index even though we’ve excluded them from crawling with the robots.txt file and with the parameter settings in Search Console. How do we get parameter URLs out of the index again without endangering the canonical URLs?

  •  (30:49) So, I think there’s a general assumption here that parameter URLs are bad for a website. And that’s not the case. So it’s definitely not the case that you need to fix the indexed URLs of your website to get rid of all parameter URLs. So from that point of view, I would see this as something where you’re polishing the website a little bit to make it a little bit better, but it’s not something that I would consider to be critical. With regards to the robots.txt file and the parameter handling tool, usually, the parameter handling tool is the place where you could do these things. My feeling is the parameter handling tool is a little bit hard to find and hard to use for people. So personally, I would try to avoid that and instead use the more scalable approach in the robots.txt file. But you’re welcome to use it in Search Console. With the robots.txt file, you essentially prevent the crawling of those URLs. You don’t prevent indexing of those URLs. And that means that if you do something like a site query for those specific URLs, it’s very likely that you’ll still find those URLs in the index, even without the content itself being indexed. And I took a look at the forum thread that you started there, which is great. But there, you also do this fancy site query, where you pull out these specific parameter URLs. And that’s something where, if you’re looking at URLs that you’re blocking by robots.txt, I feel that is a little bit misleading. Just because you can find them if you look for them doesn’t mean that they cause any problems, and it doesn’t mean that there is any kind of issue that a normal user would see in the search results. So just to elaborate a little bit: if there is some kind of term on those pages that you want to be found for, and you have one version of those pages that is indexable and crawlable and another version of the page that is not crawlable, where we just have that URL indexed by itself, then if someone searches for that term, we would pretty much always show the page that we have actually crawled and indexed. And the page that we theoretically also have indexed, because it’s blocked by robots.txt and could theoretically also contain that term, that’s something where it wouldn’t really make sense to show it in the search results, because we don’t have as much confirmation that it matches that specific query. So from that point of view, for normal queries, people are not going to see those robots.txt-blocked URLs. It’s more that if someone searches for that exact URL, or does a specific site query for those parameters, then they could see those pages. If it is a problem that these pages are findable in the search results, then I would use the URL removal tool for that, if you can. Or you would need to allow crawling and then use a ‘noindex’ robots meta tag to tell us that you don’t want these pages indexed. But again, for the most part, I wouldn’t see that as a problem. It’s not something where you need to fix anything with regards to indexing. It’s not that we have a cap on the number of pages that we index for a website. It’s essentially that we’ve seen a link to these URLs. We don’t know what is there, but we’ve indexed the URL in case someone searches specifically for it.

Does Out-Of-Stock Affect the Ranking of a Product Page?

Q. (37:41) Let’s say my product page is ranking for a transactional keyword. Would it affect its ranking if the product is out of stock?

  • (37:50) Out of stock, it’s possible. Let’s kind of simplify like that. I think there are multiple things that come into play when it comes to products themselves, in that they can be shown as a normal search result. They can also be shown as an organic shopping result as well. And if something is out of stock, I believe the organic shopping result might not be shown. Not 100% sure. And when it comes to the normal search results, it can happen that when we see that something is out of stock, we will assume it’s more like a soft 404 error, where we will drop that URL from the search results as well. So theoretically, it could essentially affect the visibility in Search if something goes out of stock. It doesn’t have to be the case. In particular, if you have a lot of information about that product anyway on those pages, then that page can still be quite relevant for people who are searching for a specific product. So it’s not necessarily that something goes out of stock, and that page disappears from search. The other thing that is also important to note here is that even if one product goes out of stock, the rest of the site’s rankings are not affected by that. So even if we were to drop that one specific product, because we think it’s more like a soft 404 page, then people searching for other products on the site, we would still show those normally. It’s not that there would be any kind of a negative effect that swaps over into the other parts of the site.

Could a Banner on My Page Affect Rankings?

Q. (39:30) Could my rankings be influenced by a banner popping up on my page?

  • (39:35) Yes, they could be. There are multiple things that come into play with regards to banners. On the one hand, within the Page Experience report, we have the aspect of intrusive interstitials. And if this banner comes across as an intrusive interstitial, then that could negatively affect the site there. The other thing is that often with banners, you have side effects on the cumulative layout shift, how the page renders when it’s loaded, or with regards to the – I forgot what the metric is when we show a page, the LCP, I think – also from the Core Web Vitals side with regards to that page. So those are different elements that could come into play here. It doesn’t mean it has to be that way. But depending on the type of banner that you’re popping up, it can happen.

Do Links on Unindexed Pages Count?

Q. (49:03) What if you have been linked from some pages, but those pages have not been indexed? The mentions or links are already present on those pages. Are those links not counted just because the page is not indexed, or can they still be counted even if the page is not indexed?

  • (49:39) Usually, that wouldn’t count. Because for a link, in our systems at least, we always need a source and a destination. And both of those sides need to be canonical indexed URLs. And if we don’t have any source at all in our systems, then essentially, that link disappears because we don’t know what to do with it. So that means if the source page is completely dropped out of our search results, then we don’t really have any link that we can use there. Obviously, of course, if another page were to copy that original source and also show that link, and then we go off and index that other page, then that would be like a link from that other page to your site. But that original link, if that original page is no longer indexed, then that would not count as a normal link.

What Can We Do About Changed Image Urls?

Q. (50:50) My question is on the harvesting of images for the recipe gallery. Because we have finally identified something which I think has affected some other bloggers and it’s really badly affected us, which is that if you have lots and lots of recipes indexed in the recipe gallery, and you change the format of your images, as the metadata is refreshed, you might have 50,000 of the recipes get new metadata. But there is a deferral process for actually getting the new images. And it could be months before those new images have been picked up. And while they’re being harvested, you don’t see anything. But when you do a test on Google Search Console, it does it in real-time and says, yeah, everything’s right because the image is there. So there’s no warning about that. But what it means is you better not make any changes or tweaks to slightly improve the formatting of your image URL. Because if you do, you disappear.

  • (51:39) Probably what is happening there is the general crawling and indexing of images, which is a lot slower than normal web pages. And if you remove one image URL and you add a new one on a page, then it does take a lot of time to be picked up again. And probably that’s what you see there. What we would recommend in a case like that is to redirect your old URLs to new ones, also for the images. So if you do something, like you have an image URL which has the file size attached to the URL, for example, then that URL should redirect to a new one. And in that case, it’s like we can keep the old one in our systems, and we just follow the redirect to the new one.
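A hypothetical example of that kind of image redirect, assuming an Apache server and old image URLs that carried the file size in the name:

    # Old image URLs with the size suffix redirect permanently to the new URLs
    RedirectMatch 301 "^/images/(.*)-1200x800\.jpg$" "https://example.com/images/$1.jpg"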

Does Crawl Budget Affect Image Re-Indexing?

Q. (54:25) Does crawl budget affect image re-indexing?

  • (54:31) Yeah, I mean, what you can do is make sure that your site is really fast in that regard. And that’s something in the crawl stats report; you should be able to see some of that, where you see the average response time. And I’ve seen sites that have around 50 milliseconds. And other sites that have 600 and 700 milliseconds. And obviously, if it’s faster, it’s easier for us to request a lot of URLs. Because otherwise, we just get bogged down because we send, I don’t know, 50 Google bots to your site at one time. And we’re waiting for responses before we can move forward.


WebMaster Hangout – Live from March 18, 2022

Changing Page Title and Descriptions

Q.  (1:01) A few days ago, we optimised a page title and description. After that, we saw that the title and description had changed when using a site: search in Google. After a while, the title and description went back to the original ones. In this case, does Google think the former title and description are better than the ones we optimised, and could there be any other possible reasons that may cause this?

  • (1:36) I would not necessarily assume that if Google changes it to something, Google thinks that title is better and you should use it too. It is more that our systems have selected a different title. And usually, the title is picked on a per-page basis.

Best Practice to Have Content Display

Q. (4:29) Every franchisee has a microsite after the domain, like domain/branch1 and their corresponding URLs. What would be the best practice to have content displayed well? It is really hard to have 100 branches and 100 different contents. Is there a problem from the perspective of Google? What does Google think about it?

  • (05:48) In our guidelines, doorway pages are usually more about essentially swapping out one word on a page, where the rest of the content is the same – for example, if you have a specific service and you create pages for every city, street, or region nearby, and you create hundreds of these pages that are essentially just driving traffic to the same business. With franchises, I think that’s probably less of an issue, because these are essentially separate businesses. The important part is that the businesses really are separate.

Decrease in Getting Articles Indexed

Q. (09:25) I work in a content creation agency, and we work with different blogs. The agency has been running for more than 10 years, and we’ve never had problems with getting our articles indexed. But for the last six months, the new blogs especially are having big problems getting their articles indexed. We’ve recently been working on organic traffic, and if we’re not getting our articles indexed, then we can’t work on that or optimise any content. I was just wondering if there’s any specific reason, maybe there’s been some sort of change, or if we just have to wait and see what happens? Because we have done a lot of technical revisions of the sitemaps and everything, but we have just noticed a decrease in articles getting indexed.

  • (10:14) It is hard to say in general without being able to look at some of these example sites. If you have examples of pages that are not super fresh – a couple of weeks old – that are still not getting indexed, I would love to get some of those examples. In general, though, what I see with a lot of these questions that come up around content not being indexed is that, from a technical point of view, a lot of these sites are really good: they are doing the right things with sitemaps, and the internal linking is set up well. It is more that, on our side, from a quality point of view, it feels like the bar is slowly going up, and more people are creating content that is technically okay but not great from a quality point of view.

SEO Tool in Duplicate Content

Q. (15:40) We have had a news publishing website since 2009. We post articles related to recipes, health, fitness and stuff like that. We have articles that our SEO tool flags as duplicate content, because we tend to recreate another version of a recipe or tweak it around – maybe sugar-free or salt-free and everything related to that. What the SEO tool suggests is to remove them, because none of the duplicate content is being ranked or indexed by Google. What is the solution for this?

  • (16:53) SEO tools have to make assumptions with regards to what Google will do and what will happen. Sometimes those assumptions are okay, and sometimes they are not correct. This kind of feedback from SEO tools is useful because it is still something that you can take a look at and make a judgment call on. You might choose to say, I’m ignoring the tool in this case, and I’m following its guidance in a different case. If you are seeing something, even from a very popular SEO tool, that tells you to disavow these links or delete this content, always use your own judgment first before blindly following it.

Ranking Service Pages to Get More Leads

Q.  (23:16) I’m currently working on a website that is based in India and we get leads from all over India. We can provide services all over the world, but first I want to rank my service pages to get more leads from the USA. Can you help me know what things I can do so that I can rank top of my competitors?

  • (24:16) If you’re going from a country-specific website to something more global then it helps to make sure that from a technical point of view, your website is available for that. Using a generic top-level domain instead of a country-specific top-level domain can help. Any time when you go from a country-level website to a global-level website, the competition changes completely.

Getting Credit for a Blog on a Subdomain

Q. (27:18) We are working with an eCommerce client, and it is an open-source online store management system. Their blog is WordPress. The main URL is example.com, whereas the blog is blog.example.com. What would be the best approach for this client to get credit from the blogs?

  • (28:06) Some SEOs have very strong opinions about subdomains and subdirectories and would probably want to put all of this on the same domain. From our point of view you could do it like this as well; this setup would be fine. If you did want to move the blog onto the main domain, then practically speaking that usually means doing some technical tricks, where you essentially proxy one subdomain as a subdirectory somewhere else, and you have to make sure that all of the links still work, along the lines of the sketch below.
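
As a rough illustration (not from the hangout) of that proxying idea, a Node/Express front end might serve the WordPress blog under a /blog subdirectory like this; the package choice, hostnames and port are placeholders:

```typescript
// Minimal sketch: example.com/blog/* is fetched from blog.example.com/*
// behind the scenes, so users and crawlers only ever see one host.
import express from "express";
import { createProxyMiddleware } from "http-proxy-middleware";

const app = express();

app.use(
  "/blog",
  createProxyMiddleware({
    target: "https://blog.example.com",
    changeOrigin: true,
    pathRewrite: { "^/blog": "" }, // /blog/post-1 -> /post-1 upstream
  })
);

// The rest of the shop keeps being served as before.
app.listen(3000);
```

Internal links inside the blog theme would still need to point at /blog/... paths, which is the "make sure all of the links work" part.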

Describing Products Using Alt Text

Q. (37:22) Should I write alt text for products for an e-Commerce site since there is already text beneath that describes the product?

  • (37:35) The alt text is meant as a replacement for, or description of, the image. That is particularly useful for people who cannot look at the individual images and who use things like screen readers. It also helps search engines understand what the image is about.
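
A small sketch of the idea: the alt attribute describes the picture itself, while the visible product copy stays in the page text. The product data and selector below are made up for illustration.

```typescript
// Hypothetical product; describe what is shown in the image rather than
// repeating the product description that already sits beneath it.
const product = {
  name: "Stainless steel water bottle, 750 ml",
  image: "/img/bottle-750.jpg",
};

const img = document.createElement("img");
img.src = product.image;
img.alt = `Front view of the ${product.name}`; // what the picture shows
document.querySelector(".product-card")?.append(img);
```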

Using Alt Tags for Decorative Images Within Longer Text

Q. (40:07) Would you use alt tags for images that are only decoration within a longer text? How would you treat those mostly stock images?

  • (40:28) From an SEO point of view, the alt text helps us to understand the image better for image search. If you do not care about an image for image search, then that is fine; you would focus more on the accessibility aspect there rather than the pure SEO aspect. It is not the case that we would say the page as a whole has more value because of it; it is just that we see the alt text and we apply it to the image.
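
For the decorative case, the usual accessibility pattern is an explicitly empty alt attribute, sketched below with an illustrative file path (this is general practice, not something prescribed in the hangout):

```typescript
// An empty alt (plus role="presentation") signals that there is nothing
// meaningful to describe for screen readers or image search.
const decorative = document.createElement("img");
decorative.src = "/img/stock-divider.jpg"; // illustrative stock image
decorative.alt = "";                        // empty on purpose, not omitted
decorative.setAttribute("role", "presentation");
document.querySelector("article")?.append(decorative);
```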

Links Added With target="_blank"

Q. (44:09) One of my technical team members has added links with a target="_blank" attribute. Are Google's bots able to crawl these links? Do they understand that there are links in that particular node?

  • (46:15) I think we just ignore it, because the target attribute is more about what happens on the browser side. The target attribute refers to how that link should be opened; for example, if you have a frame on a page, it can open the link in that frame. What we focus on is the href value: if there is a href value given there, then essentially that link points to that page, and we ignore the target part.
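
A quick sketch of that distinction, with a placeholder URL and link text:

```typescript
// The href is what a crawler follows; target only tells the browser how
// to open the link (e.g. in a new tab).
const link = document.createElement("a");
link.href = "https://example.com/guides/setup"; // this is what gets crawled
link.target = "_blank";                          // ignored for crawling
link.rel = "noopener";                           // good practice with _blank
link.textContent = "Read the setup guide";
document.body.append(link);
```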

Home Page Disappearance and Ranking

Q. (50:33) For every query, only my home page is getting ranked, and all of the other pages are ignored; they have suddenly disappeared from the results. How is Google treating this?

  • (51:19) Sometimes we think the home page is a better match for that specific query, and it could be that some of the information is on the home page itself, so the more detailed page is not seen as such a good match from our point of view. It is something where you can experiment with removing some of that information from the home page.


WebMaster Hangout – Live from March 11, 2022

Page Disappearing From Top Results

Q. (0:35) This is about pages that rank at around position 8, 10 or 12 and then suddenly disappear from the search results. Is it that Google is still trying to understand the overall website quality, or could there be some other technical issue? When the pages go live and get indexed, they are placed at around the 8th, 10th, 12th or 15th position, so very close to the first page.

  • (01:12) If you're creating something new, or if you are just updating something existing, like a page where you're just updating the prices, then you don't need to make a new page for every price change; you just update those prices. On the other hand, if you're providing new information, then that feels like something that should live on a separate URL, where people can go directly to that specific piece of information. The advantage of keeping the same URL is that over time it builds a little bit more value, and people understand that it's actually the place to go for this piece of information. For example, if you created a new page every day for a new price, then when people search for the current price of this product, they're going to find some of these pages, but it's going to be unclear to us which one of them we should show. On the other hand, if you have one page where you just update the current price, then we know: for the price, this is the page.

Traffic and Engagement and Their Impact on Rankings

Q. (6:57) We recently added a page for our site that is consistently driving significant traffic at levels we’ve never seen before. So it’s really through the roof. Can a single page with extremely high engagement and traffic have an influence on the domains as a whole? Do these signals trickle to other pages on the site and play a positive role at that domain level?

  • (07:23) I don't think we would use engagement as a factor. But it is the case that, usually, pages within a website are interlinked with the rest of the website, and through those internal links we do forward some of the signals across the website. So if we see that a page is a really good page that we would like to show in search a lot, and maybe it also has various external links pointing to it, then that gives us a lot of additional context about that page, and we can forward some of that to the rest of the website. So usually that's a good thing.

Do Core Web Vitals for Lower-Value Pages Drag the Site Down?

Q. (8:37) We prioritise our high-search pages for product improvements, like anyone else would do. Can a subset of pages with poor LCP or CLS, say only the video pages on the site, which aren't the main, secondary or even tertiary search-traffic-driving pages, impact the rest of the site's overall Core Web Vitals score? What I mean is: can a group of bad pages with little search traffic in the grand scheme of things actually drag the overall score of the site down? And do we need to prioritise those bad pages even though they aren't high-traffic pages?

  • (09:14) Usually that wouldn't be a problem. There are two aspects there. On the one hand, for the Core Web Vitals we look at a sample of the traffic to those pages, which is done through the Chrome User Experience Report functionality; I believe that's documented on the Chrome side somewhere. It's essentially a portion of the traffic to your website. That means that, for the most part, the things we will look at the most are the pages that get the most visits. So if you have random pages on the site that nobody ever looks at and they are slow, those wouldn't be dragging your site down.
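
If you want to see which pages actually have enough field data, a hedged sketch of querying the public Chrome UX Report API follows; the API key is a placeholder, and the 404 handling reflects the documented "no record found" response for pages with too little sampled traffic.

```typescript
// Pull sampled field data (LCP/CLS) for a single URL from the CrUX API.
const CRUX_ENDPOINT =
  "https://chromeuserexperiencereport.googleapis.com/v1/records:queryRecord";
const API_KEY = "YOUR_API_KEY"; // placeholder

async function cruxForUrl(url: string) {
  const res = await fetch(`${CRUX_ENDPOINT}?key=${API_KEY}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      url,
      metrics: ["largest_contentful_paint", "cumulative_layout_shift"],
    }),
  });
  if (res.status === 404) return null; // not enough field data was sampled
  return res.json();
}

cruxForUrl("https://example.com/some-rarely-visited-page").then(console.log);
```

A null result here is itself informative: pages without enough sampled visits are also the ones least likely to drag the site-wide picture down.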

Internal Links Will Play a Role in Indexing Priority

Q. (16:39) We found that many category pages didn't get indexed faster than other, more specific pages like product pages, even though these category pages are in a prominent place, close to the home page. Wondering if the theory is correct?

  • (17:09) I think the difficult part there is that "linked closer to the home page" is a general rule of thumb, but it doesn't have to be the case, because we have a lot of systems in play to try to figure out how often we should recrawl a page. That depends on various factors: it depends on how well the page is linked within the website, but also on what we expect will happen with this page, how often we think it will change, or how often we think it will change significantly enough that it's worthwhile to recrawl and re-index it.

Flat Hierarchy vs. URL Hierarchy

Q. (18:13) Can you comment on a flat hierarchy versus a strict URL hierarchy? Because with a flat structure there is no such thing as being "closer" to the home page.

  • (18:58) We don't care so much about the folder structure. We essentially focus on the internal linking, and kind of on the links from the home page rather than the links to the home page. So from that point of view, if you have a URL structure that doesn't have any subdirectories at all, we still see the structure based on the internal linking. A lot of the time the architecture of the website is visible in the URL structure, but it doesn't have to be the case.

Website Quality

Q. (22:28) How do you improve the perceived quality of a whole website from Google's side?

  • (22:53) I think we've given some examples of things that you can focus on with the reviews updates that we've done for product reviews, and some of that might apply. But I don't think there is one solution to improving the overall quality of any larger website, and especially on an eCommerce site I imagine that's quite tricky. There are sometimes things like working to improve the quality of the reviews that people leave, if they are user-generated reviews, for example, or making sure that you're highlighting those user reviews.

Crawl Statistics Report

Q. (25:47) We have looked at the crawl stats reports on the Search Console and have been trying to identify if there might be some issue on the technical side with Google crawling our website. What are some of the signals or things to identify that will point us to if Google is struggling to crawl something or if Googlebot is distracted by irrelevant files and things that it’s trying to index that are irrelevant to us?

  • (26:34) Crawl reports will not be useful in that case. You are looking at an aggregate view of the crawling of your website. And usually, that makes more sense if you have something like, I don’t know, a couple hundred thousand pages. Then you can look at that and say, on average, the crawling is slow. Whereas if you have a website that’s maybe around 100 pages or so, then essentially, even if the crawling is slow, then those 100 pages, we can still get that, like once a day, worst case, maybe once a week. It’s not going to be a technical issue with regards to crawling. It’s essentially more a matter of understanding that the website offers something unique and valuable that we need to have indexed. So less of an issue about the crawling side, and more about the indexing side.

Google Search Console Not Matching Up to Real Time Search

Q. (30:09) On 1st March, my website's home page was gone from the search results, completely gone. But the interesting thing is that in Google Search Console, for every keyword that was ranking before 1st March, Search Console says I'm still ranking in the first position. Yet a significant amount of the impressions and clicks, about 90%, are gone, while the rankings and the CTR look the same. For about one week I tried everything, but nothing worked out for me. Google Search Console still says I am ranking in the first position.

  • (32:06) Try to figure out whether it's a technical issue or not. One way you could find out more is to use the URL Inspection tool to check whether the page is still indexed; it might be indexed but simply not ranking when you search. And the thing with the performance report in Search Console, especially the position number there, is that it is based on what people actually saw, so if the page is shown far less often, the average position can stay the same while impressions and clicks drop.
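
If you prefer to script that indexing check rather than use the tool in the UI, a hedged sketch using the Search Console URL Inspection API is below; it assumes an OAuth access token with Search Console access, and the token and property values are placeholders.

```typescript
// Ask Search Console whether a URL is still indexed for a verified property.
const ACCESS_TOKEN = "YOUR_OAUTH_ACCESS_TOKEN"; // placeholder

async function inspectUrl(inspectionUrl: string, siteUrl: string) {
  const res = await fetch(
    "https://searchconsole.googleapis.com/v1/urlInspection/index:inspect",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${ACCESS_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ inspectionUrl, siteUrl }),
    }
  );
  const data = await res.json();
  // If the page is still indexed but traffic has collapsed, the issue is
  // more likely a ranking change than a technical or indexing problem.
  return data.inspectionResult?.indexStatusResult;
}

inspectUrl("https://example.com/", "https://example.com/").then(console.log);
```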

Service and Recipe Website

Q. (41:12) On a service website, I have different FAQs on different city pages for my services. Do I have to create separate pages for the FAQs, or can I just add them to the same city pages?

  • (41:35) From our point of view, you can do whatever you want there. If you make a separate page with the FAQ markup on it and that separate page is shown in the search results, then we can show the FAQ rich result for it; if that separate page is not shown, then we wouldn't be able to show it. I think the same applies to the recipe website example, where the site owner felt that most recipe websites in their country are not providing very useful information and wanted to change that by providing more useful information, to the point of adding FAQs to every recipe.
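
As a minimal sketch of the "add them to the same city page" option, FAQPage structured data can be injected on the page that already ranks; the questions and answers below are placeholders, not from the hangout.

```typescript
// FAQPage JSON-LD added to an existing city page rather than a separate URL,
// so the rich result can appear for the page that actually shows in search.
const faqJsonLd = {
  "@context": "https://schema.org",
  "@type": "FAQPage",
  mainEntity: [
    {
      "@type": "Question",
      name: "Do you offer same-day service in this city?",
      acceptedAnswer: {
        "@type": "Answer",
        text: "Yes, bookings made before noon are usually handled the same day.",
      },
    },
  ],
};

const script = document.createElement("script");
script.type = "application/ld+json";
script.textContent = JSON.stringify(faqJsonLd);
document.head.append(script);
```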

Handling Shortcuts and Abbreviations

Q. (48:42) On our website we use a lot of shortcuts or abbreviations, for example "EG" for the English version. How does Google handle that?

  • (48:58) We don't do anything special with those kinds of things. We essentially treat them as tokens on a page, and a token is essentially a word or a phrase on a page. We would probably recognise that there are known synonyms for some of these and understand that a little bit, but we wouldn't do anything specific there in the sense of having a glossary of what an abbreviation means and handling it in a specific way. Sometimes it plays a role with regards to elements that do have a visible effect. On the structured data side, schema.org's requirements are sometimes not the same as Google Search's: schema.org has some required properties and some optional properties and validates based on those, whereas Google Search sometimes has a stricter set of requirements, which we have documented in our Help Center as well. So from our point of view, if Google wouldn't show something in the search results, then we would not show it in the testing tool, and if the requirements differ, Google's requirements are stricter, and you don't follow those guidelines, then we would also flag that.
