
The Future Trends of the eCommerce Industry in 2023 and Beyond

At first glance, the playing field hasn’t changed much since the world started leveraging eCommerce and retail technologies. Yet, as eStar Chief Technology Officer Matt Neale outlines, the eCommerce ecosystem is now moving towards a sophistication that was once inaccessible to many retailers. The infrastructure has been upgraded, becoming highly dynamic: it reacts to customer and competitor behaviour in real time, targeting views and offers to provide maximum value.

In this article, we will cover the online shopping trends that will shape the eCommerce industry in 2023 and, in some cases, the even further future of online businesses, including but not limited to the technical aspects.

Online Shopping Market Landscape Trends

In 2022, the global retail market saw a return to offline sales after the pandemic, which slowed the pace of eCommerce growth. Nevertheless, eCommerce revenue growth is forecast to bounce back in 2023 and keep growing until 2025.

Worldwide eCommerce revenue and growth rates from 2017 to 2025, in billion US$ (Source: Statista)

While last year only essential categories like Food & Beverages performed well, this year the growth rate for all major eCommerce categories will increase, and several of them may even outperform their 2021 figures.

eCommerce categories’ year-on-year growth rates, in % (Source: Statista)

Social media as an eCommerce trend

The days when marketing practitioners pouring budgets into social media were asked “Does social media sell?” have passed. 94% of all internet users worldwide are now also social media users, and this directly affects the development of eCommerce. The significance of social media, which has grown from being merely one communication channel into an essential sales tool, is beyond dispute, and it shows in generated revenue that increases exponentially year on year.

Global social commerce revenue, in billion US$ (Source: Statista)

Preference prediction in the eCommerce industry

An intimate human experience, help with browsing, personalised advice and product recommendations are the areas where traditional retail still holds advantages over eCommerce. Online shopping trends are pushing eCommerce businesses to imitate face-to-face shopping assistance ever more closely by attentively tracking customer preferences and behaviour, retrieving deeper insights and optimising the shopping process with intelligent solutions. As a result, the “recommended products” block on websites and other similar features are expected to become even more advanced.

Voice search impact on eCommerce

The way people search for services and products is also evolving. Data suggests that by 2024, over 8.4 billion digital voice assistants will be in use worldwide. A typed consumer query focuses on specific keywords, whereas a voice search is more likely to take the form of a question: for instance, “black shoes” vs. “Where can I buy black shoes?” Given the significant differences between typed and voice search, eCommerce business owners should watch this online shopping trend closely, since it might seriously change the best practices of organic and paid search engine marketing channels.

Cryptocurrency for e-business growth

When integrating cryptocurrency into a business, owners’ main anxiety has been consumers’ lack of understanding of the crypto concept. However, studies provide solid evidence against this doubt: 43% of respondents claim to understand the concept, 35% consider it a legitimate form of currency, and 28% view it as the future of currency. Moreover, surveys have shown that over 1 in 5 Gen Z, Millennial, or Gen X respondents invest in crypto, and 34% of crypto owners have already used it to make purchases other than buying more crypto.

Thus, cryptocurrency could become one of the main tendencies shaping the future of eCommerce, and brands that integrate cryptocurrency features into their eCommerce platforms can facilitate faster payments and reach an expansive multinational customer base.

Chatbots and conversational marketing development in eCommerce

Real-time one-on-one interactions in a customer’s preferred channel offer opportunities to build close personal relationships with customers and add value to their experience with the company. Additionally, conversational marketing helps gather more data and information from your customers, gently nudges them further along the funnel, and enriches the customer experience by making them feel more connected to the organisation.

Customisable conversational marketing interactions can be tailored to the customer’s needs and could therefore replace long lead forms and complement each touchpoint at every stage of the customer journey. However, keeping up with the “always-online” customer is not feasible for most businesses. If their questions are simple or they don’t want to wait for a human, most consumers who ask for a live chat while shopping online are happy to interact with chatbots, as long as they know it is a chatbot. The latest generation of live chat tools lets brands keep interactions consistent with their brand voice. Moreover, recent advances in AI-guided chatbots like ChatGPT, and the competition around them, will further fuel the conversational marketing trend in eCommerce.

Virtual reality and metaverse trends that will affect the future of marketing

Brands’ interaction with consumers is developing towards a new level in a virtual environment known as “the metaverse”. 40% of respondents already claim to understand the metaverse concept. Of the over 80% who reported shopping across at least three channels over the last six months, one in three say they used a virtual reality (VR) channel, and a significant number of them used it to buy retail products and luxury goods.

Share of consumers who have visited the metaverse (Source: HubSpot)

Metaverses are built as elements of the web3 concept. As one way for brands to improve customer engagement, virtual stores in the metaverse let retailers provide their customers with an immersive experience, already promising to become a game-changer in the future of online business. Although open questions remain, the milestones of consumer interaction, as for eCommerce as a whole, will still be smooth transitions and seamless connections, and the importance of an interconnected, trustworthy omnichannel brand experience will only increase.

Redefined retail experience for the future of online businesses

Although the scope of tools hasn’t changed much since the world started leveraging eCommerce and retail technologies, as a specialist eCommerce marketing agency, LION Digital closely observes a new way people think about e-shopping. We understand the constant need for redefinition and choose to partner with companies that strive to upgrade online retail for the healthy future development of eCommerce.

eStar is a proven enterprise-level total eCommerce solutions platform that works directly with brands and businesses to deliver ongoing growth. eStar’s mission is to “empower client success by redefining the retail experience”. Its client portfolio includes companies like David Jones, Country Road Group, Briscoe Group, Air New Zealand, Bed Bath & Beyond, Stirling Sports and many more.

To achieve outstanding outcomes, eStar works collaboratively and passionately with its clients, which makes it a perfect partner:

  • For CEOs and owners who are concerned and frustrated by a lack of sales and online growth
  • For digital and marketing executives who are struggling with low conversion rates
  • For previously successful retailers who are now experiencing anxiety due to stagnant and lacklustre results.

GET IN CONTACT TODAY AND LET OUR TEAM OF ECOMMERCE SPECIALISTS SET YOU ON THE ROAD TO ACHIEVING ELITE DIGITAL EXPERIENCES AND GROWTH

Contact Us

Article by

Asselya Sekerova –
Marketing & Project Director

6 Tips for optimising your website with JavaScript

Introduction

ALAN KENT: (00:00) JavaScript is commonly used to build modern websites to create richer, more engaging experiences for users. JavaScript is also a common source of performance problems on websites. My name is Alan Kent, and I’m a developer advocate at Google, specialising in e-commerce. In this episode, I run through six tips related to JavaScript libraries and frameworks that can help improve your e-commerce site. First, what is JavaScript? JavaScript is a programming language that has become popular as it is supported by web browsers. JavaScript allows web developers to write code that reacts to user interactions, manipulating the HTML markup on a page to change what the user sees. What JavaScript has made possible are richer and more sophisticated user interactions than are supported by native HTML markup alone. For example, a mini cart on an e-commerce site is typically implemented using JavaScript. The cart icon often shows the dynamically updated number of items in the cart and, when clicked on, displays the current cart contents, allowing users to view and adjust them. Advanced site navigation menus are also frequently implemented using JavaScript. JavaScript can also be used to collect site analytics to give you greater insights into how your site is performing. These days there are many JavaScript frameworks, libraries, and components available that you can use on your own site. One reason for the development of JavaScript libraries is that not all browsers are consistent in their JavaScript and CSS support. Sophisticated components can require substantial development to be reliable across a range of browsers, so it is natural to want to reuse them across multiple projects. While improving user experience and saving development time, watch out for the following problems.
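For illustration only (this snippet is not from the video), here is a minimal sketch of the kind of DOM manipulation Alan describes for a mini cart; the element IDs and cart structure are assumptions:

    // Minimal mini-cart sketch; element IDs are illustrative assumptions.
    const cart = [];

    function addToCart(item) {
      cart.push(item);
      // Update the dynamically displayed item count on the cart icon.
      document.querySelector('#cart-count').textContent = cart.length;
    }

    document.querySelector('#add-to-cart').addEventListener('click', () => {
      addToCart({ name: 'Black shoes', price: 79.95 });
    });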

Tip #1: Avoid JavaScript file proliferation

ALAN KENT: (01:58) Tip number one is to avoid proliferation in the number of JavaScript files on your site. The number of JavaScript files may rise if care is not taken, especially if each UI component is kept in a separate file. There are overheads per downloaded file, especially for websites that only support HTTP/1. There are a number of free tools available to work out if your site has too many JavaScript files. One tool that combines both data from real users and lab testing is PageSpeed Insights. To use PageSpeed Insights, simply enter the URL of a page on your public site. The Opportunities section of the PageSpeed Insights report lists recommendations specific to your site. For example, the recommendation to keep request counts low and transfer sizes small, when expanded, summarises the number and sizes of resource types requested, including JavaScript files. There are a number of techniques that can be used to reduce the number of files to download, but solving the problem also depends on the flexibility of the platform or tools you are using. For example, many content management systems restrict access to JavaScript to simplify the job for content creators and reduce the risk of mistakes. This, however, can also make it harder to address issues that the platform does not solve. If you have a large number of small JavaScript files, it may be more efficient to join those files together into a single larger file to download. In practice, it is common to bundle files into a few larger files that can be downloaded in parallel for greater efficiency. If you have control over the JavaScript files on your site, you may find JavaScript bundling tools, such as Webpack, useful to simplify this process. Note that supporting HTTP/2 on your site can improve performance without joining files, as HTTP/2 improves the efficiency of downloading multiple small files.
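As a hedged illustration of the bundling Alan mentions, a minimal webpack configuration might look like the sketch below; the entry and output paths are assumptions, not part of the video:

    // webpack.config.js — a minimal sketch; paths are illustrative.
    const path = require('path');

    module.exports = {
      mode: 'production',
      // A single entry point that imports the many small files.
      entry: './src/index.js',
      output: {
        path: path.resolve(__dirname, 'dist'),
        // The content hash gives each new build a new URL, which also
        // pairs well with the long cache lifetimes discussed in Tip #6.
        filename: 'bundle.[contenthash].js',
      },
    };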

Tip #2: Avoid excessive DNS lookups

ALAN KENT: (03:55) The second tip is to avoid an excessive number of DNS lookups for the referenced JavaScript files. If JavaScript files are loaded from different domain names, there may be a DNS lookup overhead per domain name referenced. If excessive, this can slow down a user’s first visit to your site. Reports such as PageSpeed Insights may show you a list of domain names used in URLs in sections such as Reduce JavaScript execution time, but you may find using the Network tab inside Chrome Developer Tools a more reliable way to see all the domain names referenced. Note that, unlike cookies, you cannot easily request the DNS cache to be cleared, making DNS issues harder to detect. To reduce the number of DNS lookups, consider whether to host a copy of externally referenced JavaScript files on your own site. This is not always a clear-cut decision, as if you download a popular JavaScript library from a central site, it may already be in the browser cache due to the user visiting some other site that uses the same library. Putting a copy on your own site may save you the DNS lookup, but at the higher cost of downloading the file a second time.
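If you do keep JavaScript on a third-party domain, browser resource hints can reduce the per-domain lookup cost. A minimal sketch, with an illustrative CDN host name:

    <!-- Resolve the third-party DNS name early; the host is an assumption. -->
    <link rel="dns-prefetch" href="https://cdn.example.com">
    <!-- preconnect goes further, performing the TCP and TLS handshakes too. -->
    <link rel="preconnect" href="https://cdn.example.com" crossorigin>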

Tip #3: Eliminate inefficient JavaScript

ALAN KENT: (05:11) The third tip is to eliminate inefficient JavaScript from your site. Poor quality JavaScript can slow down web pages, leading to bad user experiences. There are multiple opportunities reported by PageSpeed Insights that can be hints of inefficient JavaScript on your site. Reduce JavaScript execution time reports scripts where a relatively large amount of CPU time was spent parsing or executing JavaScript code. Eliminate render-blocking resources includes JavaScript that may be executed before the page can be rendered, making the user wait longer to see any content. The JavaScript function document.write(), if misused, can cause significant performance issues on a page, as it blocks other operations from occurring. For example, performance testing has shown that adding a script inclusion via document.write() can double the length of time it takes to load a webpage, especially on slow networks. Not using passive listeners can also slow down a site. A passive listener is a hint to the browser that JavaScript code will not call a function that prevents scrolling, allowing the browser to scroll the page even while the JavaScript is still executing. These were a few common examples, but there are many other causes of performance issues. Making JavaScript more efficient is a large topic and beyond the scope of this video. The solutions generally involve writing the JavaScript code differently. There are many good resources available on the web describing various techniques, from profiling existing code to running your own cut-down versions of more powerful components.
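To make the passive-listener point concrete, a small sketch (the handler name and body are assumptions):

    // Without { passive: true }, the browser must wait for the handler in
    // case it calls event.preventDefault() to block scrolling.
    document.addEventListener('touchstart', onTouchStart, { passive: true });

    function onTouchStart(event) {
      // Keep this lightweight and never call event.preventDefault() here;
      // the passive flag promises the browser that scrolling won't be blocked.
      console.log('touch at', event.touches[0].clientX, event.touches[0].clientY);
    }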

Tip #4: Eliminate unused JavaScript

ALAN KENT: (06:48) Unused JavaScript is another form of inefficiency, but it is common enough to be called out as its own tip. Reuse of code across sites can lead to sites including JavaScript that is not needed. For example, most websites do not use all of the functionality provided by a library or framework, or a component may be used that has more features than are needed. JavaScript code that is never called still needs to be downloaded and parsed by the web browser, wasting resources. To see if your site has potentially unused JavaScript, the PageSpeed Insights report has a Reduce unused JavaScript section. This includes JavaScript that was not executed as part of loading a page. The PageSpeed Insights Avoid enormous network payloads opportunity, which can be the result of downloading large JavaScript libraries, may also identify potential areas for improvement. In addition, Minimize main-thread work includes time spent parsing, compiling, and executing JavaScript. Eliminating unused JavaScript can reduce these overheads. There is a range of tools to identify JavaScript that is not used. Techniques such as tree shaking can be used to identify JavaScript that is never called on a site, so it can be deleted from downloads. Care must be taken, as the execution of JavaScript may depend on environmental factors. For example, with A/B testing, a section of JavaScript may only be run for some users. That code must stay on the site, even if a profiler reports it is not executed.
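As a hedged sketch of the import style that makes tree shaking possible (lodash-es is just a common example of an ES-module build):

    // Importing a whole library defeats tree shaking: the bundler must
    // keep everything, used or not.
    // import _ from 'lodash';

    // A named import from an ES-module build lets bundlers such as
    // webpack or Rollup drop the functions that are never called.
    import { debounce } from 'lodash-es';

    const onResize = debounce(() => console.log('resized'), 200);
    window.addEventListener('resize', onResize);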

Tip #5: Compress JavaScript files

ALAN KENT: (08:18) Tip number five is to make sure your JavaScript files are compressed when downloaded, especially the larger files. JavaScript files generally compress well, reducing the number of bytes to be downloaded by the web browser. While the web browser does have to spend more CPU time to decompress the file contents, compression is normally an overall win. Again, the PageSpeed Insights report has a section describing JavaScript files that may benefit from being compressed. Expand the Enable text compression opportunity to see which files are recommended to be compressed. Uncompressed JavaScript downloads are usually relatively straightforward to fix once identified. Most web servers and content management systems have built-in support to compress downloads if appropriately configured.
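As one hedged example of enabling compression, a small Node server using Express and the widely used compression middleware might look like this; in practice this is often configured on the web server or CDN instead:

    // Minimal sketch; directory name and port are illustrative.
    const express = require('express');
    const compression = require('compression');

    const app = express();
    app.use(compression());          // gzip responses, including .js files
    app.use(express.static('dist')); // serve the bundled JavaScript
    app.listen(3000);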

Tip #6: Set appropriate cache durations for JavaScript code

ALAN KENT: (09:06) Another worthwhile tip is to check that your JavaScript files are returned with appropriate cache expiry time headers. This helps browsers avoid the overhead of checking whether JavaScript files in their cache are out of date, improving performance. To check if your site is set up appropriately, the Network tab of Chrome Developer Tools can be used to inspect the HTTP response headers for JavaScript files that are downloaded. Look for headers such as Cache-Control. Also, the Serve static assets with an efficient cache policy opportunity in the PageSpeed Insights report lists resources, including JavaScript files, that may benefit from appropriately set cache headers. The first step to fixing any issues on your site is to make sure the website is returning appropriate cache lifetime headers to help browsers cache JavaScript files correctly. However, care must be taken to make sure that JavaScript files can be updated when required to correct site defects or introduce new functionality. One strategy is to include a version number or a hash of the file contents as part of the URL of the downloaded file. That way, a new URL is used for each variation of the file. Another approach to enhance the caching of commonly used JavaScript files is to reference files from a shared public location. If a user visits sites that reuse the same JavaScript file, the browser can use the previously downloaded copy of the file, improving performance.
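Building on the same hypothetical Express setup, a hedged sketch of long cache lifetimes paired with the hashed file names from Tip #1:

    const express = require('express');
    const app = express();

    // Caching for a year is safe only because bundle.[contenthash].js gets
    // a new URL whenever its contents change, so stale copies are never used.
    app.use(express.static('dist', {
      maxAge: '365d',   // emits Cache-Control: public, max-age=31536000
      immutable: true,  // adds the immutable directive where supported
    }));
    app.listen(3000);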

Conclusion

ALAN KENT: (10:35) To wrap up, JavaScript has made it possible to significantly improve the experience of users on your website. Care must, however, be taken to avoid common performance problems that can arise when using JavaScript. There are many great resources on the web to help with these different issues. My colleague, Martin Splitt, also has some great content focusing on JavaScript and websites. Thanks for watching. If you enjoyed this video, make sure to click subscribe to keep up with the latest videos from Google Search Central.

Sign up for eCommerce Essentials today!


WebMaster Hangout – Live from January 31, 2023


Introduction

Lizzi: (00:00) It’s January, it’s 2023, it’s the new year. And what do we have for you today?

It’s office hours, and it’s me. And some questions and some answers, and let’s get into it. I’m Lizzi from the Search Relations team.

Do meta keywords matter? 

Lizzi: (00:15) Do meta keywords still help with SEO?

A: (00:20) Nope, it doesn’t help. If you’re curious about why, there’s a blog post from 2009 that goes into more detail about why Google doesn’t use meta keywords.

Why is my brand name not shown as-is?

Gary: (00:31) Hi, this is Gary from the Search team. Kajal is asking: my brand name is Quoality, that is, Quebec Uniform Oscar Alpha Lima India Tango Yankee, and when someone searches for our brand name, Google shows the results for “quality”, the standard spelling. Why is Google doing this?

A: (00:53) Great question. When you search for something that we often see as a misspelling of a common word, our algorithms learn that and will attempt to suggest a correct spelling, or even just do a search for the correct spelling altogether. As your brand grows, eventually, our algorithms learn your brand name and perhaps stop showing results for what they initially detected as the correct spelling. It will take time, though.

Which date should I use as lastmod in sitemaps?

John: (01:20) Hi, this is John from the Search Relations team in Switzerland. Michael asks about the lastmod in a sitemap XML file for a news article: should that be the date of the last article update or the last comment?

A: (01:36) Well, since the sitemap file is all about finding the right moment to crawl a page based on its changes, the lastmod date should reflect the date when the content has changed significantly enough to merit being re-crawled. If comments are a critical part of your page, then using that date is fine. Ultimately, this is a decision that you can make. For the date of the article itself, I’d recommend looking at our guidelines on using dates on a page. In particular, make sure that you use the dates on a page consistently and that you use structured data, including the time zone, within the markup.
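To make this concrete, a hedged sketch of a sitemap entry with a timezone-aware lastmod; the URL and date are illustrative:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://example.com/news/article-slug</loc>
        <!-- W3C datetime format with an explicit time zone offset -->
        <lastmod>2023-01-31T16:45:00+01:00</lastmod>
      </url>
    </urlset>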

Can I have both a news and a general sitemap?

Gary: (02:14) Helen is asking, do you recommend having a news sitemap and a general sitemap on the same website? Any issue if the news sitemap and general sitemap contain the same URL?

A: (02:24) You can have just one sitemap, a traditional web sitemap as defined by sitemaps.org, and then add the news extension to the URLs that need it. Just keep in mind that you’ll need to remove the news extension from URLs that are older than 30 days. For this reason, it’s usually simpler to have separate sitemaps for news and for the web: just remove the URLs altogether from the news sitemap when they become too old for news. Including the URLs in both sitemaps, while not very nice, will not cause any issues for you.
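For reference, a hedged sketch of a sitemap URL carrying the news extension Gary mentions; the publication details are illustrative:

    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
            xmlns:news="http://www.google.com/schemas/sitemap-news/0.9">
      <url>
        <loc>https://example.com/news/article-slug</loc>
        <news:news>
          <news:publication>
            <news:name>Example News</news:name>
            <news:language>en</news:language>
          </news:publication>
          <!-- Drop the news extension (or the URL) once older than 30 days -->
          <news:publication_date>2023-01-31</news:publication_date>
          <news:title>Example article title</news:title>
        </news:news>
      </url>
    </urlset>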

What can I do about irrelevant search entries?

John: (03:02) Jessica asks: in the suggested searches at the bottom of the Google results page, there’s one suggestion that is not related to our website, and after looking at the results, our website is nowhere to be found for that topic. It’s kind of hard for me to determine exactly what you mean, but it sounds like you feel that something showing up in Search isn’t quite the way that you’d expect, perhaps in one of the elements of the search results page.

A: (03:29) For these situations, we have a feedback link at the bottom of the whole search results page, as well as for many of the individual features. If you submit your feedback there, it’ll be sent through to the appropriate teams. They tend to look for ways to improve these systems for the long run for everyone. This is more about feedback for a feature overall, and less about someone explicitly looking at your site and trying to figure out why this one page is not showing up there.

Why is my site’s description not shown?

Lizzi: (03:57) Claire is asking, I have a site description on my Squarespace website, but the Google description is different. I have reindexed it. How do I change it?

A: (04:08) Something to keep in mind here is that it’s not guaranteed that Google will use a particular meta description that you write for a given page. Snippets are actually auto-generated and can vary based on what the user was searching for. Sometimes different parts of the page are more relevant for a particular search query. We’re more likely to use the description that you write if it more accurately describes the page than what Google can pull from the page itself. We have some best practices about how to write meta descriptions in our documentation, so I recommend checking that out.

How can I fix the spam score for a used domain?

John: (04:40) Mohamed asks, I bought this domain, and I found out it got banned or has a spam score, so what do I need to do?

A: (04:49) Well, like many things, if you want to invest in a domain name, it’s critical that you do your due diligence ahead of time or that you get help from an expert. While for many things, a domain name can be reused, sometimes it comes with a little bit of extra ballast that you have to first clean up. This is not something that Google can do for you. That said, I couldn’t load your website at all when I tried it here. So perhaps the primary issue might be a technical one instead.

Are spammy links from porn sites bad for ranking?

Lizzi: (05:20) Anonymous is asking: I’ve seen a lot of spammy backlinks from porn websites linking to our site over the past month, using the Google Search Console link tool. We do not want these. Is this bad for ranking, and what can I do about it?

A: (05:35) This is not something that you need to prioritise too much, since Google’s systems are getting better at figuring out if a link is spammy. But if you’re concerned or you’ve received a manual action, you can use the disavow tool in Search Console. You’ll need to create a list of the spammy links and then upload it to the tool. Do a search for disavow in Search Console for more steps on how to do this.
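For reference, the disavow file is a plain text list; a hedged sketch with illustrative domains:

    # Individual spammy pages
    http://spam.example.com/bad-link-page.html
    # Entire domains, one per line
    domain:spammy-links.example.com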

Does Google use keyword density?

John: (05:59) The next question I have here is, does Google consider keyword density for the content?

A: (06:05) Well, no, Google does not have a notion of optimal keyword density. Over the years, our systems have gotten quite good at recognising what a page is about, even if the keywords are not mentioned at all. That said, it is definitely best to be explicit. Don’t rely on search engines guessing what your page is about and for which queries it should be shown. If your homepage only mentions that you “add pizazz to places” and shows some beautiful houses, neither users nor search engines will know what you’re trying to offer. If your business paints houses, then just say that. If your business sells paints, then say that. Think about what users might be searching for and use the same terminology. It makes it easier to find your pages, and it makes it easier for users to recognise that they have found what they want. Keyword density does not matter, but being explicit does matter, and contrary to the old SEO myth, story, joke, and commentary, you don’t need to mention all possible variations either.

Why is our title mixed up with the meta description?

Lizzi: (07:12) Michael is asking what we should do if we see that certain pages have meta descriptions in SERPs displaying the exact same text as the title tag, not our custom descriptions or snippets from the page.

A: (07:26) Hey, Michael. Well, first, I’d check that the HTML is valid and that there’s not any issue with how it’s being rendered with the URL inspection tool. It’s hard to give any more advice without seeing more context, so I’d head to the Search Central forums, where you can post some examples of the page and the search results you’re seeing for it. The folks there can take a look and give some more specific advice on how to debug the issue further.

How can I remove my staging sub-domain?

Gary: (07:50) Anonymous is asking, I have a staging site which is on a subdomain, and unfortunately, it got indexed. How can I remove it from search results?

A: (07:59) Well, these things happen, and it’s not a reason to be worried. First, ensure that your staging site is actually returning a 404 or 410 status code, so Googlebot can update our records about that site. And then, if it’s a bother that the staging site appears in Search, submit a site removal request in Search Console. Just keep in mind that you are going to need to verify the staging site in Search Console first.
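One hedged way to return a 410 for a staging subdomain, assuming a Node/Express app fronts it (the host name is illustrative):

    const express = require('express');
    const app = express();

    // Answer 410 Gone for anything requested on the staging host, so
    // Googlebot can update its records and drop those URLs.
    app.use((req, res, next) => {
      if (req.hostname === 'staging.example.com') {
        return res.status(410).send('Gone');
      }
      next();
    });
    app.listen(3000);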

Will disavowing links make my site rank better?

John: (08:25) Jimmy asks, will disavowing spammy links linking to my website help recover from an algorithmic penalty?

A: (08:33) So first off, I’d try to evaluate whether your site really created those spammy links. It’s common for sites to have random, weird links, and Google has a lot of practice ignoring those. On the other hand, if you actively built significant spammy links yourself, then yes, cleaning those up would make sense. The disavow tool can help if you can’t remove the links at the source. That said, this will not position your site as it was before, but it can help our algorithms to recognise that they can trust your site again, giving you a chance to work up from there. There’s no low-effort, magic trick that makes a site pop back up afterwards. You really have to put in the work, just as if you did it from the start.

How can I best move my site?

Gary: (09:21) Clara Diepenhorst is asking, I want to implement a new name for my company while the product and site stay mostly the same. This new name changes my URLs. How do I keep my credits of the old name?

A: (09:36) Great question. And this is, again, a site move question. Site moves are always fun and scary. The most important thing you need to do is to ensure that the old URLs are redirecting to the new URLs. This is the most important thing. Once you have your new domain, make sure that you verify it in Search Console. See if you get any red flags in the security section and other reports. And once you are ready with the redirections, you can also submit a site move request in Search Console. Since a site move is a really big undertaking, we have very detailed documentation about this topic. Try searching for something like “Google site move” on your favourite search engine and have a read to prepare yourself.
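As a hedged sketch of the redirect step Gary describes, an Apache .htaccess file on the old domain might contain the following; the domain names are illustrative:

    # Permanently redirect every old URL to the same path on the new domain.
    RewriteEngine On
    RewriteRule ^(.*)$ https://new-brand.example/$1 [R=301,L]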

Why doesn’t my site show up in Google?

John: (10:22) Rob asks, my site does not show up on Google searches. I can’t get it indexed.

A: (10:28) So Rob mentioned the URL, and I took a quick look, and it turns out that the homepage returns a 404 status code to us. Essentially, for Google, the page does not exist at all. Trying it out a bit more, it looks like it returns a 404 status code to all Googlebot user agents, while users can see it normally. You can test that using a user agent switcher in the developer tools in Chrome. This seems to be a misconfiguration of your server, so you might need help from your hosting provider to resolve it. Google will keep retrying the page, and once it’s resolved, it should be visible in the search results again within, maybe, a week or so.
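To reproduce John’s check yourself, a hedged sketch using Node 18+ (run as an ES module for top-level await; the URL is illustrative):

    const googlebotUA =
      'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)';

    for (const [label, headers] of [
      ['Googlebot UA', { 'User-Agent': googlebotUA }],
      ['Default UA', {}],
    ]) {
      const res = await fetch('https://example.com/', { headers });
      // A 404 for the Googlebot UA but 200 for the default one points to
      // the kind of server misconfiguration described above.
      console.log(label, '->', res.status);
    }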

How can I get my mobile version into Google?

Lizzi: (11:11) Matheus is asking: Google Search Console looks at the desktop version of some, but not all, articles on my website, even though it has a mobile version. How can I tell Google to look at the mobile version?

A: (11:24) Well, we’ve got a list of things that you can check in our documentation on mobile-first indexing, so I’d recommend going through that checklist and the troubleshooting section. Most of it boils down to this. Make sure that you’re providing the same content on both versions of your site and that both your users and Google can access both versions. If you’re still having issues, we recommend posting in the forum so folks there can take a look at those specific pages that are not showing up as mobile-friendly.

Why does a competitor’s social account with the same name show up?

John: (11:54) Anthony asks, my company’s social media account is no longer appearing in the search results. Only my competitors are appearing now, and we have the same name.

A: (12:06) It looks like there are more than just two sites using the particular name that you mentioned, and in this kind of situation it will always be hard for people to find your site; it won’t be clear to us or to users which one is the so-called right one. I mean, they’re all called the same; they’re all essentially legitimate results. If you want people to find your site by name, then you should make sure that the name is a clear identifier and not a term that many others also use.

What could be a reason for a URL removal not to work?

Gary: (12:36) Lou is asking: why is my link still showing up on Google after I used the content removal tool and it got approved? Please help me understand this phenomenon.

A: (12:47) Using the URL removal tool is very fast. Usually, it removes the specified URL from search results within a few hours. If it didn’t remove a URL that was approved for removal by the tool, that usually means that you specified the wrong URL. Try to click the actual result and see where you land. Is it the same URL that’s shown in the search? If not, submit another removal request for that particular URL.

Which structured data should I use on a service-website?

John: (13:13) The next question I have here is: our website is a service, not a product, and the price will vary based on the estimate. How do I fix the invalid items for a service like ours when I use product structured data?

A: (13:28) For a local business, like the one that you mentioned, I’d recommend looking at the local business structured data. This also lets you specify a price range for your services. We have more information about this markup in the Search developer documentation.
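As a hedged sketch of the markup John points to, JSON-LD for a local business with a price range; all business details are illustrative:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "LocalBusiness",
      "name": "Example Home Services",
      "url": "https://example.com/",
      "priceRange": "$$",
      "address": {
        "@type": "PostalAddress",
        "streetAddress": "1 Example St",
        "addressLocality": "Sydney",
        "addressRegion": "NSW",
        "postalCode": "2000",
        "addressCountry": "AU"
      }
    }
    </script>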

Why might my content not be indexed?

Gary: (13:43) Anonymous is asking what could be the reason for our relatively healthy and content-rich country site to repeatedly be de-indexed, and for our old 404 subdomains and subfolders to be reindexed instead.

A: (13:55) Well, without the site URL, it’s nearly impossible to give an exact answer, but it sounds like we just haven’t visited all the URLs on those old subdomains and in subfolders, and that’s why those URLs are still surfacing in Search. If you are certain that the country site keeps falling out of Google’s index, and is not just, for example, not appearing for the keywords you’d like, that could be a sign of both technical and quality issues. I suggest you drop by the Google Search Central forums and see if the community can identify what’s up with your site.

Can I get old, moved URLs ignored by Search?

John: (14:29) Alex asks, if you move a ton of content with 301 redirects, do you need to request the removal of the old URLs from the index? Because even a decade later, Google still crawls the old URLs. What’s up? Thank you.

A: (14:44) No, you do not need to request re-indexing of moved URLs or request them to be removed. This happens automatically over time. The effect that you’re seeing is that our systems are aware that your content has been on other URLs. So if a user explicitly looks for those old URLs, we’ll try to show them, and this can happen for many years. It’s not a sign of an issue. There is nothing that you need to fix in a case like this. If you check the URLs in Search Console, you’ll generally see that the canonical URL has shifted when the redirect is being used. In short, don’t worry about these old URLs showing up when you specifically search for them.

Does Search Console verification affect Search?

Gary: (15:27) Avani is asking, changing Search Console ownership or verification code – does it affect website indexing?

A: (15:35) Having your site verified in Search Console, or changing the verification code and method, has no effect on indexing or ranking whatsoever. You can use the data that Search Console gives you to improve your site and thus potentially do better in Search, but otherwise it has no effect on Search.

Why might my translated content not appear in Google?

John: (15:54) Now, a question from Allan: about two months ago, I added another language to my website. I can’t find the translated version through Google Search. What could be the reason for that?

A: (16:07) When adding another language to a website, there are things that you need to do and things you could additionally do. In particular, you need to have separate URLs for each language version. This can be as little as adding a parameter to the URL, like question mark language equals German, but you must have separate URLs that specifically lead to that language version. Some systems automatically swap out the content on the same URL. This does not work for search engines; you must have separate URLs. The other important thing is that you should have links to the language versions. Ideally, you’d link from each language version to all other versions of that page. This makes it easy for users and search engines to find each language version. Without internal links to those pages, Google might not know that they exist. And finally, using hreflang annotations is a great way to tell us about connections between pages. I’d see this more as an extra; it’s not required. You can find out more about sites that use multiple language versions in our developer documentation.
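A hedged sketch of the separate-URLs-plus-hreflang pattern John describes, using the ?language= parameter from his example; the URLs are illustrative:

    <!-- Placed in the <head> of every language version, each listing all versions. -->
    <link rel="alternate" hreflang="en" href="https://example.com/page" />
    <link rel="alternate" hreflang="de" href="https://example.com/page?language=de" />
    <!-- Optional fallback for users whose language matches no version. -->
    <link rel="alternate" hreflang="x-default" href="https://example.com/page" />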

Does the URL depth of an image affect ranking?

Lizzi: (17:21) Sally is asking whether the URL depth of an image affects image ranking, and whether adding the srcset and sizes attributes for an image in the HTML is good for image ranking.

A: (17:33) Whether an image is three levels deep or five levels deep isn’t really going to matter. What’s more important is using a structure that makes sense for your site and makes it easy for you to organise your images in some kind of logical pattern, while still making sure that the file names are descriptive. For example, it might make sense to have a path like photos/dogs/havanese/molly.png, but if you don’t have a ton of Havanese photos, then maybe just photos/molly-havanese-dog.png makes sense. As far as srcset and sizes go, add those if it makes sense for your images. We recommend these for responsive images in particular, so we can understand more about the different versions of a given image. Hope that helps.
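To make the srcset and sizes advice concrete, a hedged sketch reusing Lizzi’s example file names (the widths are assumptions):

    <img src="photos/molly-havanese-dog.png"
         srcset="photos/molly-havanese-dog-480.png 480w,
                 photos/molly-havanese-dog-1024.png 1024w"
         sizes="(max-width: 600px) 480px, 1024px"
         alt="Molly the Havanese dog">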

What happens when a part of an hreflang cluster is bad?

Gary: (18:20) Anonymous is asking, is there a difference in how hreflang clusters are treated, depending on if the hreflang tag is broken or includes a noindex or a different canonical in the clusters?

A: (18:34) Complicated topic. Hreflang clusters are formed from the hreflang links that we could validate; validate in this context means checking the backlinks between the hreflang tags. If an hreflang link couldn’t be validated, that link will simply not appear in the cluster; the cluster will be created regardless from the other valid links. If one of the links is noindex, then that one won’t be eligible for getting into the cluster.

Are sitewide footer links bad?

Lizzi: (19:05) Nazim is asking, are sitewide footer links that refer to the designer companies or the CMS harmful for SEO?

A: (19:54) In general, if the links are boilerplate stuff like “made by Squarespace” that comes with the website theme, this is not something that you need to worry about. If you have control over the link, we recommend that you add nofollow to these types of links. Also, check to make sure that the anchor text is something reasonable. For example, make sure that the link isn’t gratuitously keyword rich, for example, “made by the best Florida SEO.”
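A hedged one-line sketch of the treatment Lizzi recommends; the studio name is illustrative:

    <!-- Boilerplate credit link with nofollow and plain anchor text. -->
    <a href="https://example-studio.example" rel="nofollow">Made by Example Studio</a>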

How can I speed up a site move?

Gary: (19:39) Mohamed is asking: I made a transfer request because I changed the domain name of our website in Search Console. What can I do to speed up this process? This is very, very important for me.

A: (19:48) This is a good question. The most important thing you need to do is to ensure that your old URLs are redirecting to your new site. This will have the largest positive impact on your site move. The site move request in Search Console is a nice thing to submit, but even without it, site moves should do just fine, if you redirect the old URLs to the new ones and they are working properly. Search for something like “Google Site move” on your favourite search engine and check out our documentation about site moves, if you want to learn more.

How do I link desktop and mobile versions for m-dot sites?

Lizzi: (20:24) Nilton is asking: at the moment, my site is not responsive. It has a desktop version and an m-dot site. The documentation says the treatment we need to do is something in relation to canonical and alternate. My question is, do I need to put the canonical on the desktop version? The documentation doesn’t make it very clear.

A: (20:42) Thank you for your feedback; I will definitely try to make this clearer in the docs. The desktop URL is always the canonical URL, and the m-dot is the alternate version of that URL. So on the desktop version, you’ll need a rel canonical that points to itself and a rel alternate that points to the m-dot version. And then, on your m-dot page, you’ll have only a rel canonical that points to the desktop version of that page. Hope that helps.
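A hedged sketch of the annotations Lizzi describes; the URLs and the media query breakpoint are illustrative:

    <!-- On the desktop page, https://example.com/page -->
    <link rel="canonical" href="https://example.com/page">
    <link rel="alternate" media="only screen and (max-width: 640px)"
          href="https://m.example.com/page">

    <!-- On the m-dot page, https://m.example.com/page -->
    <link rel="canonical" href="https://example.com/page">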

How important is EXIF data?

Gary: (21:14) Sagar is asking, how important is EXIF data from an SEO perspective for an e-commerce site or sites where images play key roles?

A: (21:25) Well, this is an easy question. I really like easy questions. The answer is that Google doesn’t use EXIF data for anything at the moment. The only image data, or metadata, that we currently use is IPTC.

Conclusion

John: (21:41) And that was it for this episode. I hope you found the questions and the answers useful. If there’s anything you submitted, which didn’t get covered here, I’d recommend posting in the Search Central help community. There are lots of passionate experts active there who can help you to narrow things down. And of course, if there’s more on your mind, please submit those questions with a form linked below. Your questions here are useful to us and to those who catch up on recordings, so please keep them coming. If you have general feedback about these episodes, let us know in the comments or ping us on social media. 

I hope the year has started off well for you. For us, well, it’s been a mixed bag, as you’ve probably seen in the news, things are a bit in flux over here. You can imagine that it’s been challenging for the team, those we interact with internally, and also me. In any case, the questions you submit give us a chance to do something small and useful, hopefully, so please keep them coming. In the meantime, may your site’s traffic go up and your crawl errors go down. Have a great new year and see you soon. Bye.

Sign up for our Webmaster Hangouts today!


Why is SEO documentation so confusing?

Intro

MARTIN SPLITT: Why do SEOs give strange recommendations to us developers sometimes?

MICHAEL KING: Why can’t the Google documentation be up to date?

MARTIN SPLITT: Why won’t all SEOs use their tools and documentation properly?

MICHAEL KING: Why is the Google documentation written so strangely?

MARTIN SPLITT: Hello, and welcome back to another episode of SEOs and Developers. Today, my guest is Michael King, who is not only running an SEO agency and has a technical background, but is also a rapper. So I’m really looking forward to seeing what we’ll be getting into today.

MICHAEL KING: And I’m here with Googler Martin Splitt, who’s a diver, magician, and an individual who is as dynamic as his hair. Very excited to speak with him today.

Checklists, beginner SEOs, and tools

MARTIN SPLITT: (01:00) So I’m really, really excited to be joined in this episode by Mike King. And, Mike, I actually have a really good question for you. So you’re running your SEO agency. And I know that you have a technical background, so you maybe understand where I’m coming from when I say, as a developer, I have met so many SEOs who are basically barging into the development room and going to the team, like standing in front of the entire team, going, oh my god, stop everything that you’re doing. We have a massive issue. And then you’re like, so what’s the issue? We need to fix our canonicals. And you’re like, but why? Oh, you know, it’s going to break everything, and everything is going to be like we’re going to get penalties, and everything is going to shit, and we have to really get everything back up to speed. And, oh my god, this has to happen now, now, now. And I’m like, why is it that a bunch of people are operating like this? Where does that come from?

MICHAEL KING: (01:55) Well, it comes from the fact that everyone in SEO comes from a different background. Like, not too many people are as technical as someone like me or some of the other great people in the space. And so a lot of it is like, OK, I read this checklist. It tells me, OK, I need to do these things. I have a tool that tells me that everything is on fire. So I’m going to focus on the thing it says is most on fire. So what it really comes down to is differing levels of education. I think that there’s some difficulty with people understanding what priorities are, or how they align with priorities in an actual business, and also what the impact of doing any of these things is going to be. So it’s very easy for an SEO who is inexperienced to put together some sort of PDF report from a tool that they’re using that says, OK, these are the 10 things that must happen right now. But it doesn’t necessarily reflect what the impact of those things is going to be.

MARTIN SPLITT: (02:59) Right. Yeah, I’ve seen these PDF reports, and that has me wondering: why can’t tools just do the right things? Like, why are these tools generating these 30-page reports with all this stuff in them? How did we end up here?

MICHAEL KING: (03:18) Yeah, I mean, that’s the thing like, when you build a generic tool around a series of SEO best practices, it’s not going to take into account the context of the actual website, right? So in the example that you gave with canonical tags, there may be a reason that you have so many duplicates. There may be a reason that the site needs that, right? Like, if you think about a site that has a bunch of franchises, and that content isn’t any different per franchise, it makes sense that you’re not canonicalizing those to one version of the page. Like, the business wants to have these different permutations of pages for different service areas. And there are any number of reasons why this may be of value to the actual business. So a tool is just going to say, well, the best practice is for every URL to be canonicalized to the version of it that’s very similar or is an exact duplicate. But it doesn’t know that that actually makes sense for that business. So I think that there’s an opportunity there. I think it’s generally true that technology for SEO is very much behind, of course, what Google is doing, but it’s also behind what it could actually do, right? I think that there needs to be some sort of layer added to these SEO tools, where it’s like, I’m this type of business, we have this type of concern, these are our priorities. All right, now spit out something that is prioritized or takes into account what makes sense for this business. And so when you don’t have that, you need an expert that’s able to interpret it from a business-use-case perspective, from a perspective of, what can we technically do? And again, because you don’t have enough people in SEO who are able to interpret things that way, you get these reports straight out of the tool that are like, again, everything is on fire. And so that’s what our job is: to interpret that to the frame of what the business actually needs to do.

Why does context matter for automation? 

MARTIN SPLITT: (05:18) All right, OK, I get that. So one takeaway from me, don’t be mad at me here is that from a developer’s perspective, I always thought this stuff can probably be automated away, right? We don’t really need that many actual people doing it. But it sounds like that’s not the case, right? There’s a bunch of stuff that depends on the context, and the tools can’t capture this, right?

MICHAEL KING: (05:43) Well, I’ll put that back on you. Like, at this point, we’ve also got tools that can automatically write code. We don’t need developers, right? It’s the same sort of thing. You know what I mean? Like, of course, we need developers. Like, there still needs to be that interpretation of, what are we trying to do here, and how do we account for the nuances of what we’re trying to do? So to your point, yes, I agree that a lot of SEO can be automated, but there are things that, let’s say, for instance, we’re talking about an internal linking structure. That could be entirely automated. But if you don’t have the right rules in place, it could go crazy really quickly, right? Like, let’s say you even got it to the point where you’ve identified all the pages that own individual keywords. So let’s say you’ve got your whole keyword list, and you’re like, OK, there’s a mapping of keyword to URL. And then, you have something that crawls the site and looks for the instances of those keywords so that you can automatically build a keyword-relevant internal linking structure. But that could easily go crazy, where every word on the page has internal links on it now. And now it’s a completely bad user experience, and there’s any number of filters that could be tripped as a result of that. And so you still always need that human interpretation so that we’re doing things right, and it just doesn’t go haywire.

MARTIN SPLITT: (07:08) Yeah, yeah, I see that. No, that makes perfect sense. 

Do tools give outdated recommendations? 

MARTIN SPLITT: (07:11) And another thing that I’m surprised by, let’s put it that way, is that a lot of the guidelines are there, like you said, the best practices and the guidelines, and the tools are going along with them. But a bunch of the tools seem to be making proposals or suggestions that I think are either not quite there or actually completely wrong, outdated, and no longer a thing. How does that happen?

MICHAEL KING: (07:46) Yeah, you’ve got things like text-to-code ratio, or W3C compliance, that are still around. I mean, I’m embarrassed to see that type of stuff still, because it’s like, was that ever really a concern? Or was it just something that some SEO at some point was like, hey, I think this is a thing, and then every tool just has it as a result? But, yeah, I think no one’s really gone back and taken a critical look at things and said, hey, what do we actually need to be looking at? What do we actually need to have rules around? I think it’s largely been like, you have to have feature parity with other tools. And so they’re just copying what they saw Moz do or what they saw SEMrush do, or whoever, and this just continued to persist. But I think SEO as an industry probably just needs a product manager to stand up and be like, yo, let’s stop doing these dumb things.

MARTIN SPLITT: (08:48) Oh, man. I mean, I understand that that kind of cruft accumulates over time, but we have so much in terms of updates and documentation and reading material and guidance out there that we are trying to update whenever something changes. But yet, the tools are still spouting these things. And for instance, the W3C thing, that has been a tricky one because, obviously, writing compliant, semantic, and correct HTML is great, because that makes it easier for us to understand what you’re trying to accomplish in terms of the semantics of the content. But if you make mistakes, it’s not that we stop understanding the page and go like, oh, we don’t know what that is, we are out of here. We are still trying to make sense of it. We just might, like, lose confidence on the way, right? It’s like, this might be a headline, but we’re not sure.

MICHAEL KING: (09:40) Right, but realistically, if that was actually a requirement, I’m going to guess that over 90% of the web just wouldn’t load, you know? Because what is truly compliant across the web? And so to that end, obviously, you guys’ crawling capability is fantastic. And you’re rendering the full page anyway, so if my browser works, it’s likely that your crawler will work. And so just the fact that we’re still, like, even considering that is difficult. But at the same time, there are things that you do to achieve compliance that do help. So I agree with what you’re saying, but it’s not the metric that we should be looking at to determine it.

Documentation drift and doing your own research 

MICHAEL KING: (10:27) I think that there’s a lot of instances where, if we’re talking about documentation, where the documentation may be out of phase with where you are as an organisation. And I think you can say that not just from what’s public facing, I’m sure that’s happening internally as well. And so the reality of it is that it’s very difficult to look at that documentation as the single source of truth because things are changing so quickly. And so even if all the SEO tools were like, OK, let’s follow Google’s documentation perfectly, it still wouldn’t necessarily be the ideal state for how these tools would tell you things.

MARTIN SPLITT: (11:08) OK, I’m guessing this also aims a little bit in the direction of previous/next links, where we had this thing. OK, yeah. So what happened there was unfortunate. And you’re right; the docs are not always in phase. We are doing our best to work with the teams and help them keep their documentation updated, but it does happen every now and then. In this case, a bunch of engineers in Search quality figured out, hey, hold on, we actually don’t really need the rel=”next” and rel=”prev” links anymore to figure out that there is pagination going on; we can figure that out from other things on the page by themselves. So they just removed the code. And then we were in this position, and, come to our side of the force: what do you do? Do you update the docs to just quietly remove that part because it is no longer relevant? Or do you go like, hey, by the way, this is no longer necessary, and, truthfully speaking, it hasn’t been necessary for the last six months, knowing very well that people are reading the documentation, making recommendations based on it to other people, and these people then invest work and time and money into making that happen? The alternative would just be to let it live there in the documentation; even though it’s wrong, it doesn’t hurt. So we went with the full frontal way of going like, OK, here’s the thing, this has been removed a while ago, we are sorry about that, but now our docs are updated. And I think none of the choices are easy or necessarily perfectly good, but it’s just what happened. So we are trying to keep the docs updated as much as possible.

MICHAEL KING: (12:50) Yeah and I get that it’s hard. Again, you have a large team, which is like an offshoot of another even larger team. You’re both moving quite quickly. I get it. It’s just very difficult from this side, where you’re making recommendations to people. And then you’ve got an engineer who’s second-guessing you. And then they find something in the documentation that’s out of date, and they’re like, no, you’re wrong. You don’t know what you’re talking about. Google says this right here. So it just makes our job a lot more difficult. So I get what you’re saying, but you also got to come to our side and see what we’re dealing with because I’m just Mike King. You guys are Google, right?

MARTIN SPLITT: (13:31) Yeah, no, no worries. That’s exactly why we are so transparent and say like, hey, by the way, this has been removed. We’re sorry, but the docs are now updated, because we understand that we have to make sure, to our best knowledge and to our best ability, that the docs are a source of truth. 

Who is the documentation written for?

MARTIN SPLITT: (13:37) Nonetheless, it is tricky, because of what you just said, like, oh, the engineer finds this piece of documentation and argues their way around it. It’s so hard to write these docs for the right audience.

MICHAEL KING: (14:03) Right. Yeah, and that’s the thing, it seems like from what I’ve read, and I’ve read most of it from what I can tell, the writer’s aiming for the middle, but that doesn’t really support the use case, right? Like, it doesn’t necessarily align with the people that are actually going to use this. At the same time, I get that there’s a very wide audience of “webmasters”, but how many small businesspeople are really digging into documentation like this? So why are we necessarily writing for them? Or should it be the type of thing where we have a section that’s more for the advanced user or more for the enterprise user, where you’re able to speak to these use cases in more specific ways? There are a lot of situations in the documentation where there are just too many edge cases for you to say the things that are being said. Like, there’s something that came out more recently where it’s like, hey, if you see a traffic trend or a click trend that looks like this, that means this is what happened. I’ve seen plenty of trends that looked like all four of those things shown in the documentation that weren’t the thing the documentation says they are. So now, I’ll have a client that’ll come to me and say, well, we saw this, and the documentation showed us a screenshot of this, so this must be why. And then they may not be so receptive to what’s actually going to need to happen in order to recover. So that’s the thing, it just doesn’t really solve the problem in the way that we would hope it does.

MARTIN SPLITT: (15:38) And I understand that, and I understand that it’s tricky. And we learned that, and that’s why we relaunched our documentation at some point in the past. I think we relaunched in November 2020, or February 2021; I can’t remember exactly when we launched the new Dev site. But we are trying to group it differently, because we realised that even with the new grouping, we’re still seeing feedback on very technical things, like robots.txt, coming from small business owners being like, I don’t even know what any of this means. Ahh! And we’re like, OK, but this is a very technical thing. Like, how did you end up here? Why did you end up here? And what are you trying to accomplish? So it’s really, really tricky, and we have to try to write for the broad use case and then the edge cases. That’s a tricky place. Where do they live?

We read documentation feedback. Give us feedback! 

MARTIN SPLITT: (16:27) I did that for JavaScript. I have this extra page on fixing JavaScript issues with all the edge cases where things might go wrong. But it’s tricky. It’s not easy to catch all these things. And that’s why giving us feedback on the docs matters, and in the docs there is an opportunity to give us feedback right there. We read this. It’s really weird because we can’t give a response back as easily. Like, you don’t know that we read it, but we do read it. And it’s quite interesting. A lot of it is really useful, constructive, helpful feedback that allows us to improve our documentation. A bunch of it is people saying, aw, you guys suck. And I guess that’s the reality of the internet: wherever you give people the opportunity to say things, they might say things that you might not want to hear, but that’s OK. If they think we suck, that’s, you know, I’m sorry.

MICHAEL KING: (17:15) Well, I do want to give some props because, especially around the JavaScript thing, I really appreciate what you did with that because that was very much something that we were very early on discovering at my agency. Like, we wrote a blog post a number of years ago, probably, like, 10 years at this point, called “Googlebot is Chrome,” where we introduced the SEO world to the idea of headless browsing. And I also wrote some subsequent stuff around that about how this could be done. And I appreciated that you especially came out and were like, no, here’s how we do it. Here are some specific things to know because it was a lot of speculation on our part and a lot of testing. But I appreciate that you were able to say like, no, no, here’s the way that we do it, and then we can refine it across the industry. So that’s what I mean. Like, there are definitely instances where the documentation has been incredibly valuable in shedding light on how we need to be thinking about things because, for a long time, we may have been going in the wrong direction entirely. So yeah, there’s definitely some value to it. I just think that there are instances where things are very vague or don’t really speak to the problem and create more problems for SEOs.

Why documentation causes misunderstanding

MARTIN SPLITT: (18:35) So with the “creates more problems” part, that’s exactly what we want to improve on and what we want to constantly get better at. And also, thank you very much for the positive feedback. And for the other bit, the “very generic or very strangely written” part, that one is tricky, because of what you said yourself: SEOs come from a wide variety of backgrounds. And they have a wide variety of different focuses, and they look at the same thing from very different angles, like the W3C validator thing. If you ask me, does our HTML need to be written well? My answer would be, yes, it has to be written well. But I don’t mean specifically compliant with W3C specs, which is what some people might hear who are coming from that angle. Someone else might be like, oh, so it doesn’t have to be compliant, but it needs to be well done? OK, fair enough. And it’s not just documentation where this is hard. I find it incredibly hard with the tooling that we provide, too. PageSpeed Insights, for instance, or Lighthouse, gives you a score. That’s so dangerous, I know, but some people need that score.

MICHAEL KING: (19:45) But let’s dig into that a little bit. So one of the common things that I hear, and I heard it at a conference the other day, is, oh, I ran it three times. Why is it different? People don’t understand that network conditions impact all of these scores. And so if there was some sort of callout, and maybe there is, maybe it’s in a tooltip that no one clicks on, I think there’s some value in helping them understand that, because you’ll see your score is, like, 45 right now, and suddenly it’s 52, and you’re like, these tools don’t work. I don’t trust these tools. And then also, let’s talk a little bit about the difference between a click in GSC versus a session in GA. Most people don’t know. It was very widely misunderstood that those are not the same things. And so I ended up writing a blog post going into very great detail. I did some patent diving and looked at some of your documentation and showed people, no, here’s why they are measured differently. One of these comes from logs. One of these comes from clickstream, and so on. And so that information could be surfaced a bit better. And again, I’m not saying you don’t have it. There was a section in there that talks about the differences to some degree, like, what is average position versus what is a ranking, things like that. These are things that are just not obvious to people that I think could be surfaced a bit better, so these questions don’t come up as much.

Challenges with trust and supporting each other

MARTIN SPLITT: (21:09) That’s very, very good feedback. That’s exactly what we’re trying to do. And I know that especially the Lighthouse team, for instance, tries to be ridiculously transparent. Like, you can figure out how the score is being calculated and evaluated, as well as how that changes over time, because the same page might actually get a different score even if everything else stays the same: over time, the way the different metrics are weighted changes. It’s challenging, though.

MICHAEL KING: (21:39) Of course, of course. I think the bigger challenge, though, is that, again, sometimes a developer will come into the project. They’ll look into the documentation, and they’re like, this doesn’t match up with what the SEO has told me. And then they just don’t trust them. And then there’s also some messaging that I recall from a year or two ago, where there was a video talking about how you should choose an SEO. Obviously, that created a bit of a firestorm in our space because it felt like Google saying, this is the way that we need to be. Here are the expectations you should have. I wish there was one where y’all were like, hey, here are the expectations you should have of our documentation.

MARTIN SPLITT: (22:25) Yeah, I understand. I understand. Yeah, see, and this is exactly what happens, because there are two things in particular in what you just said. Number one, this video created a firestorm amongst SEOs. It was not even meant for them. It was not even meant for their customers, necessarily. It was meant for small businesses that are like, I am running a kids’ clothing store in a back street in Zurich, and I have zero budget. I’m basically running it out of my basement, and people WhatsApp me or Facebook message me or FaceTime me or Hangout me or write me a text message or whatever to pick up their order or something like that. But I want to take part in the online world of e-commerce. How can I do that? And it was meant for this audience: if you get an SEO, look for these very obvious red flags and things that might not line up with what you are trying to accomplish. And because of the wide variety of people, that is what happened. It was misunderstood and misrepresented, and it wasn’t necessarily presenting itself very well. And the other thing was trust. You said a developer comes in and doesn’t trust what the SEO says based on what is in the documentation. And that seems to be the core of all this, I just realised, thanks to you. SEOs, and probably developers as well, come from so many different backgrounds. And it’s unfortunate that we cling to the trajectory that we come from, instead of looking at the entire spectrum in between and learning from each other’s perspective. It seems to be more like, I come from this angle, and I see this. You come from that angle, and you see that. You must be wrong.

MICHAEL KING: (24:33) Right. Yeah, I don’t think it should be that us-versus-them dynamic that we currently have. And I think that there’s so much beauty in the fact that SEOs come from so many different backgrounds. Like we said in our introduction, I rap. That’s what I did before I did SEO full-time. And there are so many people that you meet with so many interesting perspectives on so many things, and I wish, when we work with engineers, we were able to come to the table better and be like, look, I want to know everything about what it is that you’re doing. What can I learn here? How can we work together more effectively? And that’s very much my approach. I don’t try to come into it like, this is the way we must do it because it’s the way I think. I try to come at it from the perspective of, hey, I know a little bit about what it is that you do, probably more than you might expect, but I want to learn this from your perspective, and I want to help support what you’re trying to do. One of the things that we do at the beginning of any of our audits, there’s this thing, like the prime directive from some agile methodology that we read, where it’s something like, we assume that everyone did their best with what they knew at the time. So we’re coming from the perspective of a level playing field, where it’s like, I’m not just hating on your work. I’m trying to say, hey, from where I sit, these are the things that can be improved so that we can make things perform better. And at the same time, I need to understand your limitations. I need to understand what you need in a user story for this to work for you. I need to know what sort of supporting data and documentation you need to see in order to get this on your prioritisation schedule. And I think a lot of SEOs aren’t really thinking that way, because a lot of how things are presented, both in our space and from that us-versus-them dynamic, is like, all right, well, you’re going to bring all this great stuff to the table, and no one’s going to do anything with it because they don’t trust you. And it’s all just humans working together, so what do we need to do so that we can all come to a space of trust and make this work, and make it so everyone gets bonuses at the end of the year?

MARTIN SPLITT: (26:54) That’s a nice way of looking at it. From a developer’s perspective, what I can say, and what I have observed myself as well, is that, as developers, we are used to knowing that we don’t know something, and then we have to get better at figuring it out. There are still a lot of developers who are like, I know everything, which is unfortunate. But usually, developers are curious creatures, and we are like, oh, I don’t know how that works or how that is supposed to be. And then we research, and then we prototype, and then we have something that solves the problem. We are problem solvers. So when someone comes in, and often SEOs come in, and I think it’s also because there is such a wide variety of backgrounds in SEO, they might feel inclined to say, oh, it’s like this, even though they are not sure about it, or they don’t know it. So they cling to their PDF reports. They’re like, this report says this is a problem, without necessarily thinking about it. And I would love to see more SEOs admitting, hm, I actually don’t know. Let’s find out together, or let’s test this together. Let’s do some research together.

Knowing, not knowing, and doing research

MICHAEL KING: (28:01) Well, that is easier to do when you’re in-house. It’s harder to do when you’re a consultant, because they don’t expect you to know everything when you’re in-house. They expect you to know enough and work through it. Whereas when you’re a consultant, they want you to have the answer, even if there isn’t a definitive answer. But as far as developers, I like to think of them on a spectrum. And I think I’ve mentioned this to you via email before. I think of it as the Anderson-Alderson spectrum, where Anderson is Thomas Anderson, Neo from “The Matrix,” who hated his job and didn’t want anything to do with anybody. And then you’ve got Elliot Alderson, “Mr Robot,” who was the guy working all hours of the night, just doing work without anyone telling him, being super proactive. And so there are those developers on the Anderson side of the scale who are like, I know everything. You don’t know anything. And they present that way very much, even when they are typically more intellectually curious amongst their peers. And obviously, those people are very difficult to work with, and you’ve got to have documentation for everything. And again, that’s the person that’s going to find the Google documentation that says that you’re wrong, and that’s that. Whereas on the Alderson side, I had a developer working with a client at one point. We were presenting the site audit on-site with them, walking through everything, and he was committing code, fixing things, as we were talking and asking questions. And probably, that’s not the best way to do development work. Of course, you need to run it through your whole process. But it was really good to see how receptive he was to what we were doing, and he was very collaborative. That’s the ideal situation: someone who’s going to ask questions, someone who’s going to help you clarify things so you can make the recommendation far more pointed and actually make it happen. And those are the extremes, but something in the middle is where things work best, where it’s like, hey, SEO, you understand that you don’t know everything about this website because you didn’t build it. And, hey, engineer, you understand you don’t know everything about SEO because those requirements are not in your day-to-day. So let’s all get together and figure out how to make this work and trust each other, and that’s it.

MARTIN SPLITT: (30:37) Sounds easier said than done, but I think, yeah, to sum it up, SEOs should value the difference of perspective from different people and be receptive to doing their own research. And also, developers need to be more trusting towards SEOs and take them on the journey with them and work together to figure things out together. And I think that would probably make things easier for everyone.

MICHAEL KING: (31:05) Yeah, I’ll tell you that no SEO is trying to ruin your website. They’re not actively trying to mess things up for you. Their remit is helping improve its visibility and drive traffic, ultimately driving conversions. The reality of the situation is that it can be an antagonistic relationship, because a lot of the job is telling an engineer that they’ve done something wrong. And again, we need to reframe that so it’s a better working relationship.

MARTIN SPLITT: (31:39) Yeah, interesting. So let’s hope that people out there watching this conversation find their way of reframing it and actually get around a table and collaborate with each other rather than trying to prove each other wrong. Because if we just try to prove each other wrong, we’re not getting anywhere, and we have the same goal. We want better websites.

Conclusion

MARTIN SPLITT: (32:02) Awesome. Mike, thank you so much for being here with me and taking the time to talk this out. And I think I am full of new ideas, and I will definitely try to make our documentation better together with our lovely team and our fantastic tech writer, Lizzie Harvey, and all the others. And yeah, thanks a lot.

MICHAEL KING: (32:20) Thanks for having me, Martin. This has been great. And I, again, just want to really thank you guys for all the things you’ve been doing. I’ve been doing SEO for 15 years, and it’s been a dramatic improvement in how you’ve engaged with the community, the documentation you’re doing, the tools, and so on. So I definitely appreciate the progress and look forward to where this continues to go.

MARTIN SPLITT: (32:43) Thank you. Thanks a lot for the nice feedback. And thanks to everyone watching this. I hope that you had a good time. Stay safe and healthy, and bye-bye.

Sign up for SEO & Web Developers today!

GET IN CONTACT TODAY AND LET OUR TEAM OF ECOMMERCE SPECIALISTS SET YOU ON THE ROAD TO ACHIEVING ELITE DIGITAL EXPERIENCES AND GROWTH

WebMaster Hangout – Live from DECEMBER 29, 2022


Introduction

Lizzi: (00:00) Hello, hello, and welcome to the December edition of the Google SEO Office Hours, a monthly audio recording coming to you from the Google Search team, answering questions about Search submitted by you. Today, you’ll be hearing from Alan, Gary, John, Duy, and me, Lizzi. All right, let’s get started.

How to reduce my site from 30,000 products to 2,500?

Alan: (00:22) Vertical Web asks, my old site is going from 30,000 products down to 2,500, and I will generate 400,000 301 redirects. Is it better to start on a clean URL and redirect what needs to be redirected to the new site, or do it on the old URL?

  • A: (00:44) We generally recommend keeping your existing domain name where possible. We support redirecting to a new domain name, as Google will recognise the 301 permanent redirects and so understand your content has moved. However, there’s a greater risk of losing traffic if a mistake is made in the migration project. It is fine to clean up old pages and either have them return a 404 or redirect to new versions, even if this affects lots of pages on your site.
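
For a migration like this, it can help to spot-check the redirect map before and after launch. Below is a minimal Python sketch, not an official Google tool: it assumes a hypothetical redirects.csv of "old_url,new_url" rows and the third-party requests library, and verifies that each old URL answers with a single 301 hop to the intended destination.

    # Sketch: verify that each old URL 301-redirects straight to its new URL.
    # Assumptions: a hypothetical redirects.csv ("old_url,new_url" per row)
    # and the third-party "requests" library (pip install requests).
    import csv
    import requests

    def check_redirects(path="redirects.csv"):
        with open(path, newline="") as f:
            for old_url, new_url in csv.reader(f):
                r = requests.get(old_url, allow_redirects=False, timeout=10)
                location = r.headers.get("Location")
                ok = r.status_code == 301 and location == new_url
                print("OK  " if ok else "FAIL", old_url, "->", r.status_code, location)

    if __name__ == "__main__":
        check_redirects()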

Does Google ignore links to a page that was a 404?

Gary: (01:09) Sina is asking: it’s been formally asserted that Google ignores links to a 404 page. I want to know whether links to that page will still be ignored when it is no longer a 404.

  • A: (01:22) Well, as soon as a page comes back online, the links to that page will be counted again, after the linking pages have been recrawled and the links have been deemed still relevant by our systems.

Do speed metrics other than Core Web Vitals affect my site’s rankings?

John: (01:37) If my website is failing on the Core Web Vitals but performs excellently on the GTmetrix speed test, does that affect my search rankings?

  • A: (01:47) Well, maybe. There are different ways to test speed and different metrics, and there’s testing either on the user side or in a lab. My recommendation is to read up on the different approaches and work out which one is appropriate for you and your website.

Why doesn’t Google remove all spam?

Duy: (02:06) Somebody asked, why does Google not remove spam webpages? 

  • A: (02:11) Well, over the years we have blogged about several spam-specific algorithms that either demote or remove spam results completely. One such example is SpamBrain, our artificial intelligence system that’s very good at catching spam. Sometimes, for some queries where we don’t have any good results to show, you might still see low-quality results. If you see spam sites still ranking, please continue to send them to us using the spam report form. We don’t take immediate manual action on user spam reports, but we do use the spam reports to monitor and improve our coverage in future spam updates. Thank you so much.

Do too many 301 redirects have a negative effect?

John: (02:55) Lisa asked, I create 301 redirects for every 404 error that gets discovered on my website. Do too many 301 redirects have a negative effect on search ranking for a website? And if so, how many are too many?

  • A: (03:13) You can have as many redirecting pages as you want. Millions are fine if that’s what you need or want. That said, focus on what’s actually a problem so that you don’t create more unnecessary work for yourself. It’s fine to have 404 pages and to let them drop out of the search. You don’t need to redirect. Having 404 errors listed in Search Console is not an issue if you know that those pages should be returning 404.

How does Google determine what a product review is?

Alan: (03:42) John asks, how does Google determine what a product review is for the purposes of product review updates? If it’s affecting non-product pages, how can site owners prevent that?

  • A: (03:54) Check out our Search Central documentation on best practices for product reviews for examples of what we recommend including in product reviews. It is unlikely that a non-product page would be mischaracterised as a product review, and it is unlikely that this would have a significant effect on ranking even if it were; it’s more likely that other ranking factors or algorithm changes have impacted the ranking of your page.

Should I delete my old website when I make a new one?

John: (04:23) I bought a Google domain that came with a free webpage. I have now decided to self-host my domain, and I wanted to know if I should delete my free Google page. I don’t want to have two web pages.

  • A: (04:37) If you set up a domain name for your business and have since moved on to a new domain, you should ideally redirect the old one to the new domain, or at least delete the old domain. Keeping an old website online when you know that it’s obsolete is a bad practice and can confuse both search engines and users.

Should paginated pages be included in an XML sitemap?

Alan: (04:59) Should paginated pages such as /category?page=2 be included in an XML sitemap? It makes sense to me, but I almost never see it.

  • A: (05:12) You can include them, but assuming each category page has a link to the next category page, there may not be much benefit: we will discover the subsequent pages automatically. Also, since subsequent pages are for the same category, we may decide to only index the first category page, on the assumption that the subsequent pages are not different enough to return separately in search results.

My site used to be hacked, do I have to do something with the hacked pages?

John: (05:37) Nacho asks, we were hacked early in 2022 and still see 404 errors in Search Console from spammy pages created by the hacker. These pages were deleted from our database. Is there anything else that I should do?

  • A: (05:55) Well, if the hack is removed, if the security issue is resolved, and if the pages are removed, then you’re essentially all set. These things can take a while to disappear completely from all reports, but if they’re returning 404, that’s fine. 

Does Google care about fast sites?

Alan: (06:11) Tarek asks, does Google care about fast sites?

  • A: (06:15) Yes. Google measures Core Web Vitals for most sites, which includes factors such as site speed, and Core Web Vitals is used as part of the page experience ranking factor. While it’s not something that overrides other factors like relevance, it is something that Google cares about, and, equally important, users care about it too.

Can Google follow links inside a menu that shows on mouseover?

Lizzi: (06:38) Abraham asks, can Google follow links inside a menu that appears after a mouseover on an item?

  • A: (06:45) Hey, Abraham. Great question. And yes, Google can do this. The menu still needs to be visible in the HTML, and the links need to be crawlable, which means they need to be proper A tags with an href attribute. You can use the URL inspection tool in Google Search Console to see how Google sees the HTML on your site, and check to see if the menu links are there. Hope that helps. 
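
As a rough illustration of what “crawlable” means here, the sketch below (an informal example, not a Google tool) uses Python’s standard html.parser to pull out only the <a href> links present in the server-rendered HTML, which is essentially what a crawler that does not execute JavaScript can follow; a menu injected purely by JavaScript would yield an empty list.

    # Sketch: extract crawlable links the way a non-JavaScript crawler would.
    from html.parser import HTMLParser

    class LinkExtractor(HTMLParser):
        """Collects href values from <a> tags in the raw HTML."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                href = dict(attrs).get("href")
                if href:
                    self.links.append(href)

    # A hover menu is fine as long as its links are real <a> tags in the HTML,
    # even if CSS hides the menu until mouseover. (The markup below is made up.)
    html = '<nav><a href="/shoes">Shoes</a><a href="/hats">Hats</a></nav>'
    parser = LinkExtractor()
    parser.feed(html)
    print(parser.links)  # ['/shoes', '/hats']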

Why did the reporting shift between my mobile and desktop URLs?

John: (07:10) Luki asked, we use subdomains for desktop and mobile users. We found a strange report in Search Console in early August, where desktop performance changed inversely with mobile performance, and the result is that our traffic has decreased.

  • A: (07:30) On the technical side of indexing and reporting, shifting to the mobile version of a site is normal and expected. This happens with mobile-first indexing and can be visible in reports if you look at the hostnames individually. However, assuming you have the same content on mobile and desktop, that wouldn’t affect ranking noticeably. If you see ranking or traffic changes, they would be due to other reasons.

Does having many redirects affect crawling or ranking?

Gary: (07:56) Marc is asking, do many redirects, let’s say twice as many as actual URLs, affect crawling or ranking in any way?

  • A: (08:05) Well, you can have as many redirects as you like on your site overall; there shouldn’t be any problem there. Just make sure that individual URLs don’t have too many hops in their redirect chains if you are chaining redirects; otherwise, you should be fine.

Can I use an organization name instead of an author’s name?

Lizzi: (08:21) Anonymous is asking, when an article has no author, should you just use an organization instead of a person in the author markup? Will this have a lesser impact on results?

  • A: (08:36) It’s perfectly fine to list an organization as the author of an article. We say this in our Article structured data documentation: you can specify an organization or a person as an author; both are fine. Add whichever one is accurate for your content.

What can we do if someone copies our content?

Duy: (08:53) Somebody asked, a competitor is copying all of our articles with small changes, and in time it ranks higher than us. DMCA doesn’t stop them or seem to lower their ranking. What else can we do if their site has more authority?

  • A: (09:09) If the site simply scrapes content without creating anything of original value, that’s clearly a violation of our spam policies, and you can report them to us using our spam report form so that we can improve our algorithms to catch similar sites. Otherwise, you can start a thread on our Search Central Help community, so product experts can advise on possible solutions. They would also be able to escalate to us for further assessment.

Do URL, page title, and H1 tag have to be the same?

Lizzi: (09:35) Anonymous is asking: the URL, page title, and H1 tag, do they have to be the same?

  • A: (09:44) Great question, and no, they don’t need to be exactly the same. There’s probably going to be some overlap in the words you’re using. For example, if you have a page that’s titled “How to Knit a Scarf”, then it probably makes sense to use some of those words in the URL too, like /how-to-knit-a-scarf or /scarf-knitting-pattern, but it doesn’t need to be a word for word match. Use descriptive words that make sense for your readers and for you when you’re maintaining your site structure and organization. And that’ll work out for search engines as well.
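
As an informal illustration of deriving a slug from a title without a word-for-word match, here is a tiny sketch (my own example, not a Google recommendation) that normalises a title into a URL slug:

    # Sketch: turn a page title into a descriptive URL slug.
    import re

    def slugify(title: str) -> str:
        """Lowercase the title, keep letters and digits, join words with hyphens."""
        words = re.findall(r"[a-z0-9]+", title.lower())
        return "-".join(words)

    print(slugify("How to Knit a Scarf"))  # how-to-knit-a-scarf

A slug like /scarf-knitting-pattern would be just as valid; the point is descriptive words, not an exact match with the title.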

Is redirecting through a page blocked by robots.txt a valid way to prevent passing PageRank?

John: (10:17) Sha asks, is redirecting through a page blocked by robots.txt still a valid way of preventing links from passing PageRank?

  • A: (10:28) Yes, if the goal is to prevent signals from passing through a link, it’s fine to use a redirecting page that’s blocked by robots.txt.
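
Mechanically, this works because a compliant crawler checks robots.txt before fetching, so a blocked redirecting page is never crawled and the link through it passes no signals. The sketch below uses Python’s standard urllib.robotparser (nothing Google-specific), with a made-up /goto/ interstitial directory:

    # Sketch: a robots.txt-blocked interstitial is never fetched by a
    # compliant crawler, so links routed through it pass no signals.
    from urllib.robotparser import RobotFileParser

    robots_txt = """
    User-agent: *
    Disallow: /goto/
    """

    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())

    print(rp.can_fetch("Googlebot", "https://example.com/goto/partner"))  # False
    print(rp.can_fetch("Googlebot", "https://example.com/products"))      # True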

Why is my site flagged as having a virus?

Alan: (10:37) Some pages on my website collect customer information, but my site keeps being reported by Google as infected with a virus or deceptive. How can I avoid this happening again without removing those pages?

  • A: (10:53) Your site might have been infected by a virus without you knowing it. Check out https://web.dev/request-a-review for instructions on how to register your site in Search Console, check for security alerts, and then request that Google review your site again after removing any malicious files. Some break-ins hide themselves from the site owner, so they can be hard to track down.

Is there any way to get sitelinks on search results?

Lizzi: (11:20) Rajath is asking, is there any way to get sitelinks on SERPs?

  • A: (11:25) Good question. One thing to keep in mind is that there’s no guarantee that sitelinks, or any search feature, will show up. Sitelinks specifically only appear if they’re relevant to what the user was looking for and if it would be useful to the user to have those links. There are some things you can do to make it easier for Google to show sitelinks, however, like making sure you have a logical site structure and that your titles, headings, and link text are descriptive and relevant. There’s more on that in our documentation on sitelinks, so I recommend checking that out.

Does having two hyphens in a domain name have a negative effect?

John: (11:59) My site’s domain name has two hyphens. Does that have any negative effect on its rankings? 

  • A: (12:06) There’s no negative effect from having multiple dashes in a domain.

How important are titles for e-commerce category page pagination?

Alan: (12:12) Bill asks, how important are unique page titles for e-commerce category product listing page pagination? Would it be helpful to include the page number in the title?

  • A: (12:25) There is a good chance that including the page number in your page information will have little effect. I would include the page number if you think it’s going to help users understand the context of a page. I would not include it on the assumption that it’ll help with ranking or increase the likelihood of the page being indexed.

Is it better to post one article a day or many a day?

John: (12:44) Is it better for domain ranking to regularly post one article every day or to post many articles every day?

  • A: (12:53) So here’s my chance to give the SEO answer: it depends. You can decide how you want to engage with your users. On the downside, that means there’s no absolute answer for how often you should publish; on the upside, it means that you can decide for yourself.

What is the main reason for de-indexing a site after a spam update?

Gary: (13:12) Faiz Ul Ameen is asking, what is the main reason for the de-indexing of sites after the Google spam update?

  • A: (13:19) Well, glad you asked. If you believe you were affected by the Google spam update, you have to take a really, really deep look at your content and considerably improve it. Check out our spam policies, and read more about the Google spam update on Search Central.

Can Google read infographic images?

John: (13:38) Zaid asks, can Google read infographic images? What’s the best recommendation there?

  • A: (13:45) While it’s theoretically possible to scan images for text, I wouldn’t count on it when it comes to web search. If there’s text that you want your pages to be recognised for, then place it as text on your pages. For infographics, that can be in the form of captions and alt text, or just, generally, well, you know, text on the page.

Is it possible to remove my site completely if it was hacked?

Gary: (14:08) Anonymous is asking whether it’s possible to completely remove a site from Google Search because it has been hacked and leads to thousands of invalid links.

  • A: (14:20) Well, first and foremost, sorry to hear that your site was hacked. Our friends at Web.dev have great documentation about how to prevent this from happening in the future, but they also have documentation about how to clean up after a hack. To answer your specific question, you can remove your site from search by serving a 404 or similar status code, or by adding noindex rules to your pages. We will need to recrawl your site to see the status codes and noindex rules. But that’s really the best way to do it.

Why does my Search Console miss a period of data?

John: (14:54) I’m missing months of data from my domain property in Search Console: from April 2022, it connects directly to August 2022. What happened?

  • A: (15:07) This can happen if a website loses verification in Search Console for a longer period of time. Unfortunately, there is no way to get this data back. One thing you could try, however, is to verify a different part of your website and see if it shows some of the data there. 

How can I deindex some bogus URLs?

Gary: (15:25) Anonymous is asking, I want to deindex some bogus URLs. 

  • A: (15:30) There’s really only a handful of ways to deindex URLs: removing the page and serving a 404, 410, or similar status code, or adding a noindex rule to the pages and allowing Googlebot to crawl them. These you can all do on your own site; you don’t need any specific tool. But Googlebot will need to recrawl those pages to see the new statuses and rules. If we are talking about only a couple of pages, then you can request indexing of those pages in Search Console.
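
To make the two mechanisms concrete, here is a minimal sketch using Python’s built-in http.server, with made-up paths; it is purely illustrative, not an official recommendation. One set of URLs answers 410 Gone, another serves a noindex header, and in both cases Googlebot must be able to recrawl the URL to see the change.

    # Sketch: deindex URLs via a 410 status or an X-Robots-Tag noindex header.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    GONE = {"/bogus-1", "/bogus-2"}     # hypothetical URLs to drop via 410
    NOINDEX = {"/internal-search"}      # hypothetical URL to drop via noindex

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path in GONE:
                self.send_response(410)  # "Gone" tells crawlers the page is removed
                self.end_headers()
            elif self.path in NOINDEX:
                self.send_response(200)
                # Header equivalent of <meta name="robots" content="noindex">.
                # The URL must not be blocked in robots.txt, or the rule is never seen.
                self.send_header("X-Robots-Tag", "noindex")
                self.end_headers()
                self.wfile.write(b"Kept out of the index")
            else:
                self.send_response(200)
                self.end_headers()
                self.wfile.write(b"OK")

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), Handler).serve_forever()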

Why is some structured data detected only in the schema validator?

Lizzi: (16:04) Frank asks, why is some structured data markup detected in the schema validator but not in Google’s rich result test?

  • A: (16:14) Hey, Frank. This is a really common question. These tools are actually measuring different things. I think you’re referencing the Schema.org markup validator, which checks if your syntax, in general, is correct, whereas the rich result test checks if you have markup that may enable you to get a rich result in Google Search. It doesn’t actually check every type that’s on Schema.org; it only checks those that are listed in the list of structured data markup that Google supports, which is about 25 to 30 features. So it’s not fully comprehensive of everything that you’d see on Schema.org, for example.

Do you have people who can make a website for me?

John: (16:47) Do you have people that I can work with to create a functioning site?

  • A: (16:52) Unfortunately, no. We don’t have a team that can create a website for you. If you need technical help, my recommendation would be to use a hosted platform that handles all of the technical details for you. There are many fantastic platforms out there now, everything from Google’s Blogger to Wix, Squarespace, Shopify, and many more. They can all work very well with Search, and usually they can help you get your site off the ground.

Why are some sites crawled and indexed faster?

Gary: (17:21) Ibrahim is asking why are some websites crawled and indexed faster than others?

  • A: (17:25) This is a great question. Much of how fast a site is crawled and indexed depends on how the site is perceived on the internet. For example, if there are many people talking about the site, it’s likely the site’s gonna be crawled and indexed faster. However, the quality of the content also matters a great deal. A site that’s consistently publishing high-quality content is going to be crawled and indexed faster. 

Why do Google crawlers get stuck with a pop-up store selector?

Alan: (17:51) Why do Google crawlers get stuck with a pop-up store selector? 

  • A: (17:56) It can depend on how the store selector is implemented in HTML. Google follows <a> href links on a page. If the selector is implemented in JavaScript, Google might not see that the other stores exist, and so might not find the product pages for those stores.

How can I verify my staging site in Search Console?

Gary: (18:13) Anonymous is asking, if we have a staging site that is allow-listing only specific developers’ IP addresses, and we upload a Search Console HTML file (which I suppose is the verification file), will Search Console be able to verify that site?

  • A: (18:30) Well, the short answer is no. To remove your staging site from Search using the removal tool for site owners, you first need to ensure that Googlebot can actually access the site, so you can verify it in Search Console. We publish our list of IP addresses on Search Central, so you can use that list to allow-list the IPs that belong to Googlebot so it can access the verification file. Then you can use the removal tool to remove the staging site. Just make sure that the staging site, in general, is serving a status code that suggests it cannot be indexed, such as 404 or 410.
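
As a rough sketch of how that allow-listing could work, the Python below fetches the published Googlebot ranges and checks an address against them. Treat the JSON URL as an assumption based on the Search Central docs at the time of writing and verify it against the current documentation; the example IP is likewise only illustrative.

    # Sketch: check whether an IP falls inside Google's published Googlebot ranges.
    import ipaddress
    import json
    import urllib.request

    # Assumed location of the ranges file per Search Central docs; confirm before use.
    GOOGLEBOT_RANGES = "https://developers.google.com/search/apis/ipranges/googlebot.json"

    def googlebot_networks():
        with urllib.request.urlopen(GOOGLEBOT_RANGES) as resp:
            data = json.load(resp)
        for prefix in data["prefixes"]:
            cidr = prefix.get("ipv4Prefix") or prefix.get("ipv6Prefix")
            yield ipaddress.ip_network(cidr)

    def is_googlebot_ip(ip: str) -> bool:
        addr = ipaddress.ip_address(ip)
        return any(addr in net for net in googlebot_networks())

    print(is_googlebot_ip("66.249.66.1"))  # illustrative address; expect True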

How can I get a desktop URL indexed?

John: (19:08) How can we get a desktop URL indexed? The message in Search Console says the page is not indexed because it’s a page with a redirect. We have two separate URLs for our brand, desktop and mobile.

  • A: (19:21) With mobile-first indexing, that’s normal. Google will focus on the mobile version of a page. There’s nothing special that you need to do about that, and there’s no specific trick to index just the desktop version…

Is it possible to report sites for stolen content?

Lizzi: (19:36) Christian is asking, is it possible to report sites for stolen content, such as text, original images, that kind of thing?

  • A: (19:46) Yes, you can report a site. Do a search for “DMCA request Google”, and use the “report content on Google” troubleshooter to file a report. 

Is adding Wikipedia links a bad practice?

John: (19:57) Is adding Wikipedia links to justify the content bad practice?

  • A: (20:03) Well, I’d recommend adding links to things that add value to your pages. Blindly adding Wikipedia links to your pages doesn’t add value.

Is there any difference if an internal link is under the word “here”?

Lizzi: (20:14) Gabriel is asking, is there any difference if an internal link is under the word “here” or if it is linked in a keyword?

  • A: (20:23) Hey Gabriel, good question. It doesn’t matter if it’s an internal link to something on your site or if it’s an external link pointing to something else, “here” is still bad link text. It could be pointing to any page, and it doesn’t tell us what the page is about. It’s much better to use words that are related to that topic so that users and search engines know what to expect from that link.

Why does my news site’s traffic go up and down?

Gary: (20:46) Niraj is asking, I follow the same pattern of optimisation, but my news website’s traffic is up and down.

  • A: (20:53) Well, for most sites, it’s actually normal to have periodic traffic fluctuations. For example, seasonality affects e-commerce sites quite a bit. For news sites specifically, user interest in the topics you cover can cause fluctuations. But all in all, it is normal, and not usually something that you have to worry about.

Is changing the URL often impacting my SEO performance?

John: (21:16) Is changing the URL often impacting my SEO performance? For example, a grocery site might change a URL from /christmas/turkey-meat to /easter/turkey-meat. The page is the same, and the URL is just changed with a redirect. 

  • A: (21:35) I wouldn’t recommend constantly changing URLs. At the same time, if you must change your URLs, then definitely make sure to redirect appropriately. 

How does freshness play a role in ranking seasonal queries like Black Friday deals?

Alan: (21:45) How does freshness play a role in the ranking? For seasonal queries like Black Friday deals, it makes sense to update frequently as news or deals are released, but what about something less seasonal?

  • A: (21:58) You may decide to update a Black Friday deals page frequently to reflect the latest offers as they come out. Remember, however, that Google does not guarantee how frequently a page will be reindexed, so not all of the updates are guaranteed to be indexed. Also, a good quality page that does not change much may still be returned in search results if we think its content is still relevant. I would recommend focusing on creating useful content and not spending too much time thinking about how to make static pages more dynamic.

Is there a way to appeal Safe Search results?

John: (22:33) Adam asks, is there a way to appeal Safe Search results? I work with a client that has been blocked from their own brand term while resellers and affiliates are still appearing. 

  • A: (22:44) So first off, I think it’s important to realize that Safe Search is not just about adult content. There’s a bit of nuance involved there, so it’s good to review the documentation. Should you feel that your website is ultimately incorrectly classified, there’s a review request link in an article called “SafeSearch and your website” in the Search developer documentation. 

How can I update my site’s brand name?

Lizzi: (23:08) Danny is asking, my site name in Search reflects the old domain’s brand name, even with structured data and meta tags. What else can I do to update this information?

  • A: (23:22) Hello, Danny. The site name documentation has a troubleshooting section with a list of things to check that’s more detailed than what I can cover here. You want to make sure that your site name is consistent across the entire site, not just in the markup. And also, check any other versions of your site and make sure that those are updated too. For example, HTTP and HTTPS. If you’re still not having any luck, go to the Search Console help forum and make posts there. The folks there can help.

When migrating platforms, do URLs need to remain the same?

John: (23:51) Aamir asks while migrating a website from Blogger to WordPress, do the URLs need to be the same, or can I do a bulk 301 redirect?

  • A: (24:02) You don’t need to keep the URLs the same. With many platform migrations, that’s almost impossible to do. The important part is that all old URLs redirect to whatever specific new URLs are relevant. Don’t completely redirect from one domain to another. Instead, redirect on a per URL basis.

How much do I have to update to lift an algorithmic penalty?

Duy: (24:24) Johan asked if a website gets algorithmically penalized for thin content, how much of the website’s content do you have to update before the penalty is lifted? 

  • A: (24:34) Well, it’s generally a good idea to clean up low-quality or spammy content that you may have created in the past. For algorithmic actions, it can take us several months to reevaluate your site and determine that it’s no longer spammy.

How can I fix long indexing lead times for my Google-owned site?

John: (24:49) Vinay asks, we’ve set up Google Search Console for a Google-owned website where the pages are dynamically generated. We’d like to get insights into what we should do to fix long indexing lead times.

  • A: (25:05) Well, it’s interesting to see someone from Google posting here. As you listeners might know, my team is not able to give any Google sites SEO advice internally, so they have to pop in here like anyone else. First off, as with any bigger website, I’d recommend finding an SEO agency to help with this holistically. Within Google, in the marketing organisation, there are folks who work with external SEO companies, for example. Offhand, one big issue I noticed was that the website doesn’t use normal HTML links, which basically makes crawling it a matter of chance. For JavaScript sites, I’d recommend checking out the guidance in our documentation and our videos.

How does the helpful content system determine that visitors are satisfied?

Duy: (25:49) Joshua asked, how exactly does the helpful content system determine whether visitors feel they’ve had a satisfying experience?

  • A: (25:58) We published a pretty comprehensive article called “What creators should know about Google’s August 2022 helpful content update” where we outline the type of questions you can ask yourself to determine whether or not you’re creating helpful content for users. Such as, are you focusing enough on people-first content? Are you creating content to attract search users using lots of automation tools? Did you become an expert on a topic overnight and create many articles seemingly out of nowhere? Personally, I think not just SEOs, but digital marketers, content writers, and site owners should be familiar with these concepts in order to create the best content and experience for users. 

Should we have 404 or noindex pages created by bots on our website?

John: (26:40) Ryan asks, bots have swarmed our website and caused millions of real URLs with code tacked on to be indexed on our website through a vulnerability in our platform. Should we 404 these pages or noindex them?

  • A: (26:56) Either using a 404 HTTP result code or a noindex robots meta tag is fine. Having these on millions of pages doesn’t cause problems. Depending on your setup, you could also use robots.txt to disallow crawling of those URLs. The effects will linger in Search Console’s reporting for a longer time, but if you’re sure that it’s fixed, you should be all set.

Will adding a single post in Spanish to my English site affect my search rankings?

Lizzi: (27:20) Bryan asks if my site is all in English and I add a single post in Spanish, will that affect search rankings? 

  • A: (27:29) Hey, Bryan. Sure. That’s totally fine. It’s not going to harm your search rankings. I also recommend checking out our guide to managing multilingual websites, as there’s a lot more to cover when you’re thinking about publishing content in multiple languages.

Do all penalties show up in Search Console?

Duy: (27:44) Stepan asked, in Google Search Console there is a section called Manual Actions. Does Google show all penalties there, and does it always notify domain owners when a domain is hit with a penalty?

  • A: (27:58) We have manual actions, which are issued by human reviewers, and algorithmic actions, which are driven entirely by our spam algorithms, such as SpamBrain. We only communicate manual actions to site owners through Search Console. You can search for the manual actions report; there’s a page there that lists a lot of information to help you understand our different types of manual actions, as well as how to file a reconsideration request once you have received and addressed a manual action.

Will SEO decline? Should I study something different?

John: (28:33) Caroline asks, will SEO decline in favour of SEA and SMA? I’m starting my internship and need to know whether I should redirect my path, or continue on my way and specialise in accessibility.

  • A: (28:49) I’m not quite sure what SMA is, but regardless, there are many critical parts that lead to a website’s and a business’s success. I definitely wouldn’t say that you shouldn’t focus on SEO, but at the same time, it’s not, well, the answer to everything. My recommendation would be to try things out. Find where your passions and your talents lie, and then try more of that. Over the years, things will definitely change, as will your interests. In my opinion, it’s better to try and evolve than to wait for the ultimate answer.

Does the number of outgoing links affect my rankings?

Duy: (29:24) Jemmy asked, does the number of outgoing links, both internal and external, dilute PageRank, or is PageRank distributed differently for each type of link?

  • A: (29:35) I think you might be overthinking several things. First of all, focusing too much on PageRank, through building unnatural links, whether it violates a policy or not, takes time and effort away from other, more important factors on your site, such as helpful content and a great user experience. Second, internal links allow us not only to discover new pages but also to understand your site better. Limiting them explicitly would likely do more harm than good.

Conclusion

John: (30:07) And that was it for this episode. I hope you found the questions and answers useful. If there’s anything you submitted that didn’t get covered here, I’d recommend posting in the Search Central Help community. There are lots of passionate experts there who can help you narrow things down. And of course, if there’s more on your mind, please submit those questions with the form linked below. Your questions here are useful to us and to those who catch up on these recordings, so please keep them coming. If you have general feedback about these episodes, let us know in the comments or ping us on social media. I hope the year has gone well. For us, things have certainly evolved over the course of the year, with, well, ups and downs and a bunch of new launches. I’m looking forward to catching up with you again next year, perhaps in another episode of these office hours. In the meantime, may your site’s traffic go up and your crawl errors go down.
Have a great new year and see you soon. Bye!

Sign up for our Webmaster Hangouts today!

GET IN CONTACT TODAY AND LET OUR TEAM OF ECOMMERCE SPECIALISTS SET YOU ON THE ROAD TO ACHIEVING ELITE DIGITAL EXPERIENCES AND GROWTH 

Australia Post. Inside Australian Online Shopping eCommerce update – November/2022


Black Friday and Cyber sales drove Australian online shopping in November 2022 to a historic record, with a 3% year-on-year (YoY) increase compared to November 2021 and 38% month-on-month growth compared to October 2022.

Although the forecast of a global economic recession directly affects consumer behaviour in the Australian market, and a tendency to save money over the festive season is observable, more than half a million (570k) additional households shopped online in November on top of October’s results, bringing the total to a record of more than 6 million households. eCommerce purchases in Queensland grew the most, at 11.2% YoY, whereas online shopping decreased by a record 6.4% YoY in the Australian Capital Territory. Purchases being down by 3.8% over the last 12 months compared to the previous year is to be expected, considering the lockdown-driven spending that took place in 2021.

Preceding the Cyber Weekend period, Click Frenzy’s sales run between the 6th and 19th of November lifted online purchases by 20% compared with the previous two weeks, with 4.2 million households shopping online over this period. General clothing and Beauty were the top categories, at 36% and 35% respectively.

Despite increased foot traffic back into retail stores, the Black Friday and Cyber Monday online sales events were almost 7% higher than last year’s record Cyber Weekend period. To capture a larger share of wallets, retailers started sales earlier in 2022, which explains the immediate increase a few days before the official start on November 25th. Overall, during the Black Friday and Cyber Monday 2022 events, between the 20th of November and the 3rd of December, online purchases surpassed last year’s by 6.6% and increased by an additional 42% compared to the Click Frenzy period, and 4.9 million households shopped online, 700k more than during the Click Frenzy period. The most popular categories during the Cyber sales were Athleisure, Sporting & Outdoor Goods, and Fashion Accessories.

Here we share with you data from Australia Post. The results summarise the data and give insights into Australian Online Shopping trends for November 2022.

The Benefits of Having One Agency in a Soft Economy

A competitive market intensifies online retailers’ search for additional ways to acquire new customers and to upgrade service for existing ones in order to retain them. In a softening economy, it’s sensible for businesses to shop around for the best deals – better efficiency and results for the same price.

As consumers become more advanced in their eCommerce habits, they are looking for goods and services in whatever digital channel they’ve grown accustomed to. Moreover, depending on different aspects, the same potential customer could use more than one channel during their journey towards the actual purchase. When several parties with different approaches and levels of transparency are involved, managing these eCommerce marketing channels can become challenging.

In this article, we focus on the benefits of working with a single eCommerce specialist agency for all digital marketing channels as one of the ways to optimise marketing expenditures in a soft economy. Spoiler – it is all about more efficiency.

360-degree observation

Once a reliable eCommerce marketing agency that manages all channels is found, the business will be able to observe the complete picture of business indicators across all marketing activities. Moreover, having a full view allows more appropriate budgeting for the most effective channels while keeping the rest at the minimum necessary activity level.

This is different from when several unsynchronised agencies manage the channels; in that case, the risk of misjudging overall results and drawing doubtful strategic insights from fragmented information is high.

Scalability, flexibility and timely results

Data is the lifeblood of any organisation. Without data, decisions are based on guesswork, not objective analysis. When you can’t see the full picture, data is scattered and hard to access.

When the full picture is observable and accessible in one place, it’s much easier to make changes and scale in whatever direction with less loss in terms of always-scarce resources like time, money and effort.

Accumulate data and avoid ineffective overlaps

In marketing, the rule of 7 is a guideline that states you need seven interactions with a brand before a prospect decides to buy. However, this doesn’t mean that you must strictly limit the number of interactions to this specific “magic” number in all channels.

Marketing practitioners will tell you the rule is worth following where it’s technically attainable, but they also raise an important issue: resources and effort are sometimes wasted on repeated contact through an inappropriate channel or with an unattractive message.

Exclude cannibalisation

For instance, if you’re working in a performance-based partnership with a CPL (cost-per-lead) model, where the uniqueness of the attracted clients and transactions is your most significant indicator, data accumulated from different sources in one place and analysed with a uniform approach allows for more targeted exposure to your audience and avoids ineffective expenditure.

One of the biggest challenges that agencies face is channel cannibalisation. When two or more parties manage channels separately, it is likely that they will compete to achieve their own KPIs and eventually dilute the benefits for the business. For instance, the SEO and SEM channels are closely interconnected. The crucial part of agency work here is to ensure that paid traffic growth does not negatively affect organic traffic through keyword cannibalisation, and instead fills the gaps and increases the value of both channels through a Blended Search Marketing approach.

GET IN CONTACT TODAY AND LET OUR TEAM OF ECOMMERCE SPECIALISTS SET YOU ON THE ROAD TO ACHIEVING ELITE DIGITAL EXPERIENCES AND GROWTH

Contact Us

Article by

ASSELYA Sekerova –
MARKETING & PROJECT DIRECTOR

Guide To Current & Retired Google Ranking Systems

The new Google ranking systems guide defines the systems Google currently uses to rank search results, as well as the old ones that are retired and no longer in use.

What is the difference between systems and updates?

Systems are always running in the background. Updates, however, refer to one-time changes to the ranking systems. For example, the helpful content system runs in the background whenever Google produces search results, but it may receive updates to improve its performance. Other examples of one-time changes to ranking systems are core algorithm updates and spam updates.

Let’s look at the highlights from the Google Ranking System guide.

Current Google Ranking Systems

Here is a list of the Google ranking systems currently in operation.

  • BERT
    Stands for Bidirectional Encoder Representations from Transformers and allows Google to understand how word combinations can express different meanings and intents.
  • Crisis information system
    Google has procedures to provide specific sets of information in times of crisis. For instance, SOS alerts when searching for natural disasters.
  • Deduplication system
    The Google search engine tries to avoid duplicate or near-duplicate web pages.
  • The exact match domain system
    This system prevents Google from overly trusting websites with domain names that match search queries.
  • Freshness system
    Designed to display up-to-date content where it’s needed and where it’s expected.
  • Helpful content system
    Makes it easy for people to see original, useful content rather than content created primarily to drive traffic from search engines.
  • Link analysis systems and PageRank
    Determines which pages are the most useful in response to a query based on how the pages are linked.
  • Local news systems
    Displays local news sources relevant to your search query.
  • MUM or Multitask Unified Model
    An artificial intelligence system capable of both understanding and generating language. It powers featured snippet callouts and is not used for overall ranking.
  • Neural matching
    Helps Google understand and match conceptual expressions for queries and pages.
  • Original content systems
    Helps Google display original content, including original reporting, in search results.
  • Removal-based demotion systems
    Demotes websites that are the subject of a high volume of content removal requests.
  • Page experience system
    Evaluates various criteria to determine if a website provides a good user experience.
  • Passage ranking system
    An artificial intelligence system that Google uses to identify individual sections or “snippets” of web pages to understand better how relevant the page is to searchers.
  • Product reviews system
    Rewards quality product reviews by experienced writers with insightful analysis and original research.
  • RankBrain
    An artificial intelligence system that helps Google understand the relationship between words and concepts, allowing Google to return results that contain different terms than the exact words used in the query.
  • Reliable information systems
    Google has several techniques for displaying reliable information: promoting authoritative pages, demoting low-quality content, and rewarding high-quality journalism.
  • Site diversity system
    Prevents Google from showing more than two listings from the same website in the top search results.
  • Spam detection system
    Processes content or activity that violates Google’s spam policy.

Outdated Google Ranking Systems

The following systems are listed for historical purposes. They have since been integrated into other systems or absorbed into Google’s core ranking system.

  • Hummingbird
    A major improvement to Google’s ranking systems, introduced in 2013.
  • Mobile-friendly ranking system
    Prioritised content that renders better on mobile devices. It has been incorporated into the page experience system.
  • Page speed system
    Prioritised content that loads quickly on mobile devices, introduced in 2018. It has since been incorporated into the page experience system.
  • Panda system
    Prioritised quality and original content, introduced in 2011. In 2015, it became part of Google’s primary ranking system.
  • Penguin system
    Downgraded websites that used spammy link building, introduced in 2012.
  • Secure site system
    Prioritised HTTPS-protected websites, introduced in 2014. It has since become part of the page experience system.

GET IN CONTACT TODAY AND LET OUR TEAM OF ECOMMERCE SPECIALISTS SET YOU ON THE ROAD TO ACHIEVING ELITE DIGITAL EXPERIENCES AND GROWTH

Contact Us

Article by

Leonidas Comino – Founder & CEO

Leo is a Deloitte award-winning and Forbes-published digital business builder with over a decade of success in the industry, working with market-leading brands.

Like what we do? Come work with us

How to optimise images for your eCommerce website

Introduction

ALAN KENT: (00:07) They say a picture is worth a thousand words. And there is no field where that is more true than eCommerce. My name is Alan Kent, and I’m a developer advocate at Google. In this episode, I’ll explore six tips to optimise images on your eCommerce website. It is not uncommon for an eCommerce page to reference hundreds of images. These images are everything from full-sized product images to smaller product thumbnails, category images, banners, page decorations, and button icons. Given their abundance, how can you make sure that they are fast and efficient?

Tip #1: Eliminate image Cumulative Layout Shift

(00:46) The first tip for optimising the images used on your site is to eliminate cumulative layout shift. Cumulative Layout Shift, or CLS for short, is where the contents of a page visibly move around on the screen. You know those sites where you stop reading, or you try to click on a link, and suddenly the page content moves? It’s really annoying. Images can contribute to this problem if used incorrectly.

Because CLS is so impactful to a user’s experience, Google has defined it as one of three Core Web Vitals. These are factors that Google considers important for user experience on all web pages. So why can images cause CLS? To load a page, your web browser starts by downloading the HTML markup of the page. Most browsers will start displaying the top of the page before the whole page has been downloaded. To reduce your wait time, any references to images encountered are added to a queue of resources to download. JavaScript and CSS files are also added to the queue. These files are then downloaded in parallel to the main page, a few at a time. The problem arises when the browser does not know the image dimensions before rendering the page content: layout shift occurs if the browser discovers it did not leave the right amount of space for an image. CLS is often easy to spot by manually watching a page load, but there are also automated tools that can measure it. First, though, let’s take a slight detour and talk about lab versus field data.

Lab data is collected by testing tools that you point at your web page, such as Google’s Lighthouse. You can perform a lab test at any time and have complete control over the process. Field data is collected by measuring what happens to real users on your site in production; it can be collected using JavaScript you embed in your own web pages or via anonymised data collected by Chrome. Chrome makes data for popular sites available in the Chrome User Experience Report, or CrUX for short. Lab data can be easier for developers to collect and analyse, but it has limitations. For example, it can miss shifts that occur after a page finishes loading. Ultimately, it is field data that demonstrates whether you’ve really solved a problem for your users. PageSpeed Insights is a useful tool, as it presents both lab and field data in one report. For CLS, look for warnings such as “avoid large layout shifts” and “image elements do not have explicit width and height”. Just be aware that layout shifts in the report can be caused by things other than images, such as JavaScript. Fixing image CLS issues can be as simple as including image dimensions in the HTML markup: that way, the browser immediately knows exactly how much space to reserve for the image. There are other CSS tricks that can be used as well, if the CSS is loaded properly.
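As a minimal sketch of that fix (the file name and dimensions are placeholders, not from the video), reserving space is as simple as declaring the intrinsic size on the tag:

    <!-- width and height let the browser compute the aspect ratio
         and reserve space before the file arrives, avoiding CLS -->
    <img src="/images/product-photo.jpg" alt="Product photo"
         width="800" height="600">

Modern browsers derive an aspect ratio from these attributes, so the reserved space stays correct even if CSS later scales the image to the container width.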

Tip #2: Correctly size your images

(03:56) The second tip is to pick the right width and height for your images. Larger files take longer to download, particularly on mobile phones with slower network connections. Larger files also require more processing time, especially on mobile phones with less powerful CPUs. Sizing images correctly can be complicated by the range of device sizes and resolutions that access your site. If the browser has to shrink or crop the image, the downloaded file is larger than needed, which is wasteful. One easy way to detect incorrectly sized images is the “properly size images” section under Opportunities in the PageSpeed Insights report. PageSpeed Insights identifies images on a page that have larger dimensions than needed, listing their URLs. Once you have detected a problem, how do you fix it? Responsive images refer to techniques for making images behave well on different-sized devices. For example, in HTML there is a srcset attribute that allows you to list URLs for different sizes and formats of an image so the browser can pick the best one to download. This requires you to resize the images in advance or perform image resizing on demand. If resizing images is too much work for your own site, consider using a Content Delivery Network, or CDN. Many such services can resize images and convert them to more efficient formats on your behalf.
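As an illustrative sketch of the srcset technique (the URLs and breakpoints here are hypothetical), you list the pre-rendered widths and let the browser choose:

    <!-- The browser picks the smallest candidate that covers the
         layout width given in "sizes" at the device's pixel density -->
    <img src="/images/hero-800.jpg"
         srcset="/images/hero-400.jpg 400w,
                 /images/hero-800.jpg 800w,
                 /images/hero-1600.jpg 1600w"
         sizes="(max-width: 600px) 100vw, 800px"
         alt="Hero banner" width="800" height="400">

A phone rendering the image at full viewport width can then fetch the 400- or 800-pixel file instead of the largest one.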

Tip #3: Use the best image file format

(05:28) The next tip is to think about the file format of your images, such as whether to use PNG, JPEG, or WebP files. The file format affects the file size. Care should be taken, however, as formats such as JPEG and WebP can reduce files using lossy compression algorithms. Lossy means image quality may be reduced as a trade-off for reducing the file size. If pixel-perfect images are required, such as button icons, less efficient but pixel-perfect formats should be used. While lower-quality images may sound like a bad idea, remember that the degradation in quality may not be noticeable to shoppers, and the speed benefit can be substantial. Shoppers may abandon your page if it takes too long to load. To detect whether your site can benefit from a different image format, look at the “serve images in next-gen formats” section of the PageSpeed Insights report. This report lists images on a page that are candidates for conversion to a more efficient file format. So is there a single best image format to use? One complication is that not all image formats work in all browsers. The caniuse.com site can be used to check which browsers support which image file formats. For example, WebP is now supported by almost all browsers in use, so it offers a good combination of efficiency and adoption. Alternatively, rather than picking a single format, you can have your website return the most efficient format that the browser says it supports. Again, this is a service offered by CDNs.
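One way to serve a modern format with a safety net, sketched here with placeholder file names, is the HTML picture element: the browser takes the first source it supports and otherwise falls back to the plain img:

    <picture>
      <!-- served to browsers that support WebP -->
      <source srcset="/images/shoe.webp" type="image/webp">
      <!-- fallback for browsers without WebP support -->
      <img src="/images/shoe.jpg" alt="Black leather shoe"
           width="640" height="480">
    </picture>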

Tip #4: Compress images appropriately

(07:17) Tip number four is to use the right quality factor for your images, encoding them efficiently while retaining the desired image quality. The “encode images efficiently” section of the PageSpeed Insights report can be used to identify candidate images for compression optimisation. The report also shows potential file size savings. Be aware, however, that the report does not perform a visual check on your compressed images; it is based on commonly used compression factors. To find a quality factor you are happy with, run your favourite image conversion tool on several images with different quality values. A common default value for WebP is 75. The Squoosh.app site can be useful for this purpose, as it makes it easy to compare the before and after versions of images. Remember also that there are times when you want higher-resolution images, such as when you want to allow the shopper to zoom in on a product image. Want to go deeper? Jake and Surma gave a great session on image compression at web.dev Live.
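As a hypothetical command-line alternative to a visual tool, Google’s cwebp encoder exposes the quality factor directly, which makes it easy to generate variants to compare (the file names are placeholders):

    # encode the same source at two quality factors, then compare
    # file sizes and inspect both images side by side
    cwebp -q 75 product.png -o product-q75.webp
    cwebp -q 60 product.png -o product-q60.webp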

Tip #5: Cache images in the browser

(08:22) Tip number five is to tell the browser how long it can safely cache images. When you return an image from your site, you can include an HTTP response header with caching guidance, such as how long a browser is recommended to cache the image. One way to detect whether the HTTP cache response headers have been set appropriately on your site is, again, the PageSpeed Insights report: the “serve static assets with an efficient cache policy” section identifies images that may benefit from caching improvements. Another approach is to use the Network tab in Chrome’s developer tools to examine the HTTP cache response headers. To fix issues on your site, check whether you have platform or web server settings you can change to adjust the cache lifetime for images. If you do not change images frequently, or if you always give changed images a new URL, then you can set a very long cache lifetime. In addition to improving cache durations, using a CDN frequently makes downloads faster by caching copies of your images in multiple locations around the world, closer to where users connect from.
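As a sketch of what such guidance looks like, assuming your image URLs change whenever the file content changes, a long-lived response header could be:

    Cache-Control: public, max-age=31536000, immutable

Here max-age=31536000 allows any cache to hold the image for a year, and immutable tells the browser not to revalidate it; use a much shorter lifetime if the same URL can serve updated images.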

Tip #6: Correctly sequence your image downloads

(09:37) The final tip is a more advanced one: correctly sequencing the order in which resources, including images, are downloaded can significantly improve page performance. Because downloading images one by one can be slow, browsers using HTTP/1.1 typically download several images in parallel over independent network connections to the website. If the website supports HTTP/2, most browsers instead multiplex downloads over a single network connection. This is generally faster and avoids problems such as large files blocking the downloads of smaller files. Whichever approach is used, there is still a network bandwidth bottleneck. In general, you want images to be downloaded in the following order. First, download large hero images at the top of the page, as they can affect the Largest Contentful Paint score for the page. Largest Contentful Paint, or LCP for short, is the time it takes to show the user the main content of the screen; like Cumulative Layout Shift, it is a Core Web Vitals metric. Next, you want the other images that will be visible without scrolling to be downloaded. Images visible without the user scrolling are referred to as “above the fold”; the rest are “below the fold”. As a web page may be viewed on devices with different screen sizes, it is common to estimate which images are above and below the fold by checking your site on multiple devices. Then you want to download images that are just off screen, so that they are ready for display when a user starts scrolling. Other images that are not likely to be displayed soon are often best loaded lazily: if the user never scrolls the page, fetching them would be a waste of resources. To detect whether your site is loading images efficiently, the PageSpeed Insights report can again help. For example, the “defer offscreen images” section identifies images that could be loaded after other images. Other sections can be useful too, such as “avoid chaining critical requests”, although these chains typically involve JavaScript and CSS files. A common technique for improving the order of image loading is lazy loading, where images are not downloaded until the user scrolls to that portion of the page.
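Where supported, you can also hint this ordering in the markup itself. A minimal sketch (the file name is a placeholder) using the fetchpriority attribute, which Chromium-based browsers and an increasing number of others honour:

    <!-- ask the browser to fetch the hero image early, since it
         usually determines the LCP score for the page -->
    <img src="/images/hero.jpg" alt="Seasonal sale hero"
         width="1200" height="600" fetchpriority="high">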

Lazy loading was originally implemented with JavaScript, but most browsers now support the loading="lazy" attribute in HTML. Care should be taken, as performance can degrade if lazy loading is used for images above the fold. Recent versions of Lighthouse will highlight when a lazily loaded image impacts LCP. With the advent of HTTP/2, additional optimisations are possible if the browser and website both support it. An HTTP/2 website that knows certain images are going to be needed can start pushing them to the browser without waiting for the browser to request them. HTTP/2 also allows browsers to download multiple images in parallel over a single network connection. To take advantage of this, either configure your web server so it knows which resources to push, or use a CDN with HTTP/2 support and configure it to push resources as required.
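A minimal sketch of native lazy loading for a below-the-fold image (the attribute values are placeholders):

    <!-- loading="lazy" defers the download until the image nears
         the viewport; keep above-the-fold images on the default
         eager behaviour so LCP is not delayed -->
    <img src="/images/related-product.jpg" alt="Related product"
         width="400" height="300" loading="lazy">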

Conclusion

(12:59) To wrap up, I’ve shown common problems that can occur on eCommerce sites with static images. Some have easy fixes, such as ensuring that image tags in HTML always specify the width and height attributes, or using the loading="lazy" image attribute.

There are more advanced techniques that you can implement directly on your website, but it may be easiest to use a third-party CDN with suitable support. Such services can: 

  • Serve images in the best format supported by the browser; 
  • Convert images from a single source image to more efficient formats as needed; 
  • Pre-scale images to multiple sizes for efficient download and display across a range of devices;
  • Compress images to reduce download sizes.

Thanks for watching. If you enjoyed this video, make sure to click Subscribe. Google Search Central has new videos posted weekly.

Sign up for eCommerce Essentials today!

GET IN CONTACT TODAY AND LET OUR TEAM OF ECOMMERCE SPECIALISTS SET YOU ON THE ROAD TO ACHIEVING ELITE DIGITAL EXPERIENCES AND GROWTH 

Customer retention strategy for eCommerce before, during and after the holiday season

Succeeding with engaging marketing activities in the holiday season requires thoughtful planning of customer acquisition and retention strategies, plus their creative realisation. To help you get started, LION compiled a list of initiatives grouped into:

  • Tactical – organised by readjusting standard channels and tools that most eCommerce companies already have in place; the outcomes will be seen immediately after application;
  • Strategic – tools for customer retention that need more time and effort to plan, launch and manage, but whose results form a solid base for the entire business’s longevity and subsequent scaling.

Tactical initiatives

Customer research and feedback

Generally, understanding the customer starts with target audience research and continues across the various marketing channel touchpoints. Capture the intention and attention of holiday shoppers by:

  • Surveying customers to collect information about their holiday shopping plans
  • Researching general shopping trends in the market
  • Sharing gift lists with popular items
  • Introducing holiday trends to inspire
  • Running teaser campaigns
  • Launching early access for loyal customers
  • Optimising the purchase process.

However, the customer journey doesn’t end with the purchase! 37% of respondents claim that more than five purchases are needed to create solid brand loyalty, and 33% say more than three. Therefore, post-purchase communications are invaluable contributors to the overall business management process. Why did customers buy this particular product? What is the level of their satisfaction? How do they use the product or service? If they don’t use it, then why? What is the definition of customer loyalty specifically for your company? Moreover, remember that enriched data and an extra occasion to be in touch with the customer are another perfect way to cross-sell.

Encourage users to create an account at the right moment

The importance of account creation for repurchasing and increasing the customer retention rate is apparent. The problem is that mandatory account creation can be an obstacle: many online shoppers prefer to purchase as guests, so asking them to create an account up front could prevent them from placing their first orders. However, you can suggest account creation immediately after the first purchase is completed, and even simplify the process by pre-filling it with the details of that order.

Send only value-adding emails

There is a list of must-have emails to start:

  • Welcome email
    Don’t miss the opportunity to use the email with the highest open rate of all (50-60%) at its maximum capacity. Personalise it beyond just using the customer’s name: apply the details of the purchase and incorporate personalised blocks suggesting similar products and services.
  • Content email
    Send a selection of relevant content in different formats to maintain customer engagement even after the holiday sales season: promote new offers, bundles and special gifts to a specific segment of the target audience; share relevant content from the company’s blog to establish the brand as a thought leader in the industry; interact with the audience by surveying and asking questions about their experience with the brand; and send updates about new products and information your audience will find valuable.
  • Upsell email
    Existing customers already have a history of successful experiences. Therefore, they trust the company and are more eager to purchase again. At the same time, the data collected during the previous order makes it easy to personalise subsequent communications.
  • Abandoned cart email
    The goal is to suggest completing the purchase, provide a promo code that unlocks additional perks, or simply show that the company is ready to receive feedback.

Emails can help to build customer relationships before and after purchases, but only if they add benefits to the customer experience that holiday shoppers wouldn’t want to miss. It’s worth putting yourself in the client’s shoes and asking, “So what?” – the question a client asks when reading the email, critically assessing the information and the relevance of the prompt to act.

Retarget ads on social media

Organic coverage can be gained through appealing social media posts and by encouraging clients across communication channels to follow and engage with the brand. Beyond that, one of the best customer retention strategies is to plug in social media’s retargeting power, which lets you show ads to people who have already engaged with the website in some way, from those who visited it once to those who abandoned a cart.

Discount or credit for those who return

When margins are low, applying discount or credit strategies can hurt the bottom line. However, offering them for existing clients’ next purchases, or to win back those who haven’t purchased for a long time, can be a winning strategy for increasing the customer retention rate. If you treat discount and credit amounts as a way to cut customer acquisition costs, increasing a standard 10% discount to 20% or even more doesn’t seem excessive.

Strategic initiatives

Boost your customer support to the next level

A proper level of customer support has become an unspoken gold standard in the highly competitive eCommerce field, and online shoppers are unpleasantly surprised when a company doesn’t meet it. Creating additional value, however, can add an element of surprise and delight to client communications that gives the business a special place in the eyes of that particular customer, spotlighting it among competitors:

  • Sustainable 24-hour service
    Recruit customer support agents across different time zones to provide outstanding 24-hour service alongside sustainable working conditions.
  • Live chat
    The flexibility in when to send a request and receive a response makes live chat the channel eCommerce customers prefer over phone and email.
  • Omnichannel customer service options
    An omnichannel customer support strategy guarantees that you have agents spread across multiple channels, ready to provide timely, qualified support to customers wherever they are.
  • FAQ page and store policies
    Big holiday sales seasons generate a large amount of data, and it would be an omission not to use those insights to make next year’s customer experience more advanced. Collect, analyse and systemise the most frequent queries related to the business into FAQ pages and predefined store policies, and the next holiday sales season may go much more smoothly.

Own the responsibility even if others are to blame

The customer experience consists of different stages, elements and actors. Customers cannot, and shouldn’t have to, separate these elements: they perceive the experience as a whole, and if some small thing goes wrong, the overall impression can be damaged. Owning the responsibility for clients’ difficulties and turning negativity into positivity should be one of the key methods in a customer retention and loyalty management strategy.

Personalise the customer journey

The main point of difference for an eCommerce business can be a customer journey that feels as if it were explicitly designed with the customer in mind and tailored to their specific needs. Improve the customer retention rate by differentiating the company from its competitors and making it harder for loyal customers to drift away:

  • Positive emotions and entertaining experiences
    Gamification on the website, reusable packaging and other ideas at each stage of the customer journey – positive associations make the company’s product or service much more memorable in the customer’s mind.
  • Unexpected gifts and thank you notes
    Miniature product samples or personalised, branded thank-you notes with a handwritten signature included in the order are a striking touch, perfectly appropriate for holiday seasons and birthdays.

Return policy as a security guarantee to clients

92% of consumers polled claim they would buy from an online store again if the product return process were easy. This comes down to feeling secure against the risk of wasting money, and time beyond what was already invested, on an item that may or may not match the requirements or the initially stated product description. Remind online shoppers that they can trust you if they’re not satisfied with the product, and they will be more likely to buy from your business, even when they’re unsure about a product.

A good return policy spells out all the conditions under which a request qualifies as a return, and specifically underlines the situations in which it does not. Ensure in advance that the return policy is clear, reasonable and fair.

Customer loyalty program – a never-dying classic

A sustainable customer portfolio contains a balanced mix of new and retained clients, and strategies for retained clients treat purchase frequency as the key indicator to focus on. The importance of customer loyalty and engagement cannot be overestimated. The essence of a customer loyalty program is to reward the customer for various actions, from authorising a credit card and making purchases to leaving a review and inviting a friend. Which efforts should be rewarded, and how, depends on the specifics of the business model and strategy. Regardless, customer loyalty programs are proven to be among the most efficient customer retention tools.

Subscription service

Subscriptions provide regular revenue for the business by locking people into monthly purchases. Moreover, they keep existing customers constantly engaged through personalised experiences, and a subscription doesn’t have to be the online shop’s entire business model. For example, an eCommerce cosmetics retailer’s monthly subscription box could include miniature versions of its best-selling products.

Conclusion

We hope these LION tips will help you succeed in highly competitive holiday season markets!

Whatever initiatives you choose to retain customers, uphold a data-driven and creative approach. Don’t forget to measure the efficiency of your efforts and extract valuable insights so you can adjust and change direction if needed. Last but not least, select dedicated partners for your technology stack.

At LION Digital, we value relevancy the most in complex customer retention strategies. Although we make agnostic recommendations based on customer needs, we recommend Yotpo, a quality service partner, to help accelerate our clients’ growth by enabling advocacy and maximising customer lifetime value. Yotpo includes advanced solutions for SMS marketing, loyalty and referrals, subscriptions, reviews, and visual user-generated content – you can choose depending on which customer retention strategies you want to apply.

Given the influx of traffic to the website, Yotpo heavily emphasises leveraging all the first-party data collected during holiday promotions in post-holiday communications to build smarter, segmented audiences for email and SMS flows. Make this data work for you and deliver hyper-personalised marketing to your subscribers: for example, bring them back on site with a new product that complements one they have already purchased.

GET IN CONTACT TODAY AND LET OUR TEAM OF ECOMMERCE SPECIALISTS SET YOU ON THE ROAD TO ACHIEVING ELITE DIGITAL EXPERIENCES AND GROWTH

Contact Us

Article by

Asselya Sekerova –
MARKETING & PROJECT MANAGER