
Five Edge SEO Tasks to Improve Your SEO


August 9, 2022   |  
Posted by
The In Search SEO Podcast

Today we’re going to be having a look at how you can improve the quality of your SEO lives by conducting more of your business on the edge with a jump roper, who enjoys scratching his beard thoughtfully while sipping on coffee and whiskey, or maybe ideally, an Irish Coffee. He is the trainer, speaker, and solver of search problems. A warm welcome to the In Search SEO podcast, senior SEO consultant, Chris Green.

The five tasks are:

  • Split testing   
  • Redirect management   
  • Bot access logging   
  • Sitemap building/management   
  • Injecting content

Chris: Thank you for having me, David.

D: You can find Chris over at chris-green.net. So, Chris, you don’t sound it, but are you always on the edge?

C: I don’t know. I think anyone who’s been doing SEO for decades would be a little on edge. The short answer is yes. The long answer, it depends. Wait, that’s another short answer. I got into SEO around the time when Penguins and Pandas started kicking off so, yes, I believe I’m sufficiently on the knife edge. I don’t think I’ve ever recovered from that if I’m being completely honest.

D: I remember you chatting about the edge quite a while ago. You’re certainly one of the prominent thinkers on the topic. So it’s great to have you on to discuss this. And today, you’re gonna be sharing five SEO tasks that are better handled on the edge. Starting off with number one, SEO split testing.

1. SEO Split Testing

C: So testing within SEO has finally picked up a bit more. And there are multiple ways of testing SEO, from the really simple approach of deploying a change and checking with Analytics whether it worked or not. This is the easiest way, which in theory all of us in SEO should be doing. But the way it works on the edge is to effectively deploy a change to 50% of pages within a group and let Google visit both the test pages and the control/unchanged pages. This helps you make changes to page groups on your website without actually changing the code base or adding any additional requirements on the server or the CMS. It’s like adding an extra layer that says that on these pages we’re going to show people different versions, which you can do at various points in the process.

So the edge makes it look as if it’s coming from the server, which is great for indexing because Google picks it up as if it’s just code. You can also do this testing in the client. So by using JavaScript, which is fundamentally less reliable, it can work but it’s putting a lot more emphasis on Google. So the edge makes it quicker and the results you get, you can trust more. It’s not perfect, but it’s a lot better and a lot more robust.

D: So do most SEOs come up with their own scripts or use simple scripts to run these split tests? Or is there a specialist split testing software that you would recommend to use in conjunction with the edge?

C: You can go from the sublime to the ridiculous. If you’re just talking about the edge, I’d say there’s probably a handful of established players in the space. Search Pilot, formerly ODN, is literally built on edge infrastructure. They’ve built a meta CMS that lets you control all that. And then they pulled in all the really smart analytics and analysis methodology on top of that. I can take absolutely no claim to owning or starting this; far from it. They’re some of the biggest pioneers. But what you can do with the edge on all of the different kinds of edge infrastructure, Akamai, Cloudflare, and Fastly, is write the scripts to do this yourself. And when you’re talking about the edge, what you need to run these tests is the pages that are going to be the control, the pages that are going to be the test, and then the script that effectively makes the changes to the test version. And the complexities around that depend on how complex the test is. If you’re just rewriting page titles, for example, this becomes quite a straightforward thing to do. I’m not an engineer. I’m an SEO that’s too nosy for my own good sometimes, but these things, especially on Cloudflare, are probably among the most accessible elements out there. Myself and Simon Thompson, years ago, back when we were both in an agency, built a tool called Tool Spark that turned out to be more of a beta and a proof of concept. But that was on top of Cloudflare’s infrastructure. And that, again, let you deploy split tests on the edge, essentially for free at that point, but it ended up being more of a sandbox. So you can go right the way through from enterprise-level software to building your own. And then there are some more emergent platforms that you can run this on. But I think as an SEO, you need to think about the stack that you’re building in. Who else do you need to get on board? If you need to mitigate risk and have publishing rights and change histories, then you go for the enterprise option. If you’ve just got someone who’s bootstrapped but really wants to test, build straight on the edge. Find someone who can write code for workers, and you can test stuff.
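To make the mechanics concrete, here is a minimal sketch of how an edge split test could be written as a Cloudflare Worker. The page list, the bucketing rule, and the rewritten title are illustrative assumptions only, not Search Pilot’s or Tool Spark’s actual implementation.

```typescript
// Hypothetical edge split test: half of a page group gets a rewritten <title>,
// the other half is served unchanged as the control.

const TEST_PAGES = new Set(["/category/widgets/", "/category/gadgets/"]);

// Deterministic bucketing: hash the path so a URL always lands in the same
// group, for users and for Googlebot alike.
function isVariant(pathname: string): boolean {
  let hash = 0;
  for (const ch of pathname) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return hash % 2 === 0;
}

export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    const response = await fetch(request); // fetch the page from the origin

    // Control pages and pages outside the test group pass through untouched.
    if (!TEST_PAGES.has(url.pathname) || !isVariant(url.pathname)) {
      return response;
    }

    // Variant pages get the test title, rewritten as the HTML streams through.
    return new HTMLRewriter()
      .on("title", {
        element(el) {
          el.setInnerContent("Buy Widgets Online | Example Store");
        },
      })
      .transform(response);
  },
};
```

Because the rewrite happens before the HTML leaves the edge, Google sees the variant as if it came straight from the origin, which is exactly the property Chris highlights over client-side JavaScript tests.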

D: I sense that we could talk about split testing on the edge for about three hours. But let’s move on to the second area that you would recommend as being better and more effective on the edge, redirect management.

2. Redirect Management

C: Yeah, managing redirects is a pain, usually because if you’ve got large websites or lots of different infrastructures, knowing where different redirects are controlled and managed, what order they fire in, whether they’re complex, etc., that’s a nightmare. And virtually every big organization has that problem. And one of the big problems you get is you end up passing people between different servers or different CDN layers in one redirect action, which is inefficient. So if you go through the CDN, go to the server, the server then says that you need to go somewhere else, and then you go somewhere else, and then you get redirected somewhere else, it’s really inefficient, kind of costly, and a nightmare to manage.

Now, because of where the CDN or the edge sits, it’s the first thing the user will encounter. If you manage all your redirects there and ensure that you have flattened any chains at that point, which is relatively simple to do… Firstly, you reduce the number of redirects. Secondly, the request doesn’t make it to the origin server before it redirects you, so you actually reduce the level of traffic to the origin and the redirect happens a lot quicker, straight from that server. And finally, if you’ve got discipline and you’ve implemented it correctly, then you have just one place to look for all the redirects, irrespective of all the different platforms. And that simplicity, when you instill discipline in the team, makes it a bit of a no-brainer to be honest.
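As a rough illustration of redirects handled at that layer, a worker along these lines can answer with a single 301 before the request ever reaches the origin. The paths are made up, and a real map would more likely live in Workers KV or an edge dictionary than in the script itself.

```typescript
// Hypothetical flattened redirect map, resolved entirely at the edge.
const REDIRECTS: Record<string, string> = {
  "/old-page/": "/new-page/",
  "/2019/summer-sale/": "/sale/",
};

export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    const target = REDIRECTS[url.pathname];

    if (target) {
      // One hop, served straight from the edge: no chain, no origin traffic.
      return Response.redirect(`${url.origin}${target}`, 301);
    }

    // Everything else continues to the origin as normal.
    return fetch(request);
  },
};
```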

D: And number three, bot access logging.   

3. Bot Access Logging

C: Bot access logging is an interesting one. If you’ve ever tried to do a log file audit and you’ve said “I need my access logs to do the analysis,” you go to DevOps or whoever, and they’ll either give you a puzzled look, or they’ll say no, that’s too big, we don’t store it, or we store a day’s worth, or you can have it but please join a long queue. That’s really challenging. And what’s more, if you are running CDN caching, your server access logs may not receive all of the bot traffic anyway, so your logs won’t be complete. The CDN, on the other hand, sees all traffic that passes through it, whether it’s cached or not. And if you’re using the edge to effectively store this log data and stream it to a service like Sumo Logic or another kind of storage, you’ve got the opportunity of siphoning all of that data off at the edge rather than trying to find it on your servers. But also, if you’re writing workers with the correct kind of rationale, or logic, at that point, you can set it to only capture the bot traffic that you want. So usually Googlebot or search engine bots, but you can do things like validating IP addresses to make sure it’s not people spoofing, and only collect the access data you need, which greatly reduces the storage space. And some tools out there like ContentKing, for example, can interface with some CDNs directly to collect data straight from that level. So assuming you’ve got the right level of access, and DevOps have said yes, you can start collecting those logs directly, which means you can do some tech SEO analysis with relatively little lifting.
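A sketch of that selective capture might look like the worker below: it only records search engine bot hits and ships them to an external collector. The collector URL is a placeholder, the user-agent check is deliberately naive, and production code should also verify the crawler’s IP before trusting it.

```typescript
// Hypothetical bot-only access logging at the edge.
const LOG_ENDPOINT = "https://collector.example.com/edge-logs"; // placeholder

function isSearchBot(userAgent: string): boolean {
  // User-agent match only -- real deployments should also validate IPs
  // (e.g. via reverse DNS) to filter out spoofed bots.
  return /Googlebot|bingbot/i.test(userAgent);
}

export default {
  async fetch(request: Request, env: unknown, ctx: ExecutionContext): Promise<Response> {
    const response = await fetch(request);
    const userAgent = request.headers.get("User-Agent") ?? "";

    if (isSearchBot(userAgent)) {
      const entry = {
        url: request.url,
        userAgent,
        status: response.status,
        timestamp: new Date().toISOString(),
      };
      // waitUntil lets the log call finish after the response is sent,
      // so logging never slows the page down.
      ctx.waitUntil(
        fetch(LOG_ENDPOINT, {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify(entry),
        })
      );
    }

    return response;
  },
};
```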

D: Is there a certain size website in terms of pages where it only becomes worthwhile to look at log files or s،uld every SEO be looking at log files?

C: As a rule of thumb, if your website is under 10,000 pages, I tend not to rely on or go for logs straight away, mainly because gaining access to them is a nightmare. If I can access that data easily and I can analyze it easily… any of the big SaaS crawlers like DeepCrawl have log file analysis built in. If I can get that data and analyze it, then let’s do it. But if I’m under 10,000 pages and getting that data is a pain, then I won’t get too upset. Now that page count is kind of arbitrary, but if you are over a million-plus pages, then log files will have a lot of information and insight that will give you some nice incremental wins. Under that, it’s probably not worth it.

D: And number four of the tasks that are more effective to do on the edge, sitemap building/management.

4. Sitemap Building/Management

C: This is a unique one. I’ve had a few projects recently where sitemap generation needs to pull pages from different services and different systems, it’s out of date, it’s not working, the engineering to rebuild all that is incredibly challenging, etc. So what we’ve done is build a service that pulls API data from a SaaS crawler. It pulls in indexable pages, then builds an XML sitemap on the edge and hosts it at that edge point. We’re effectively using the crawler to crawl the site every day; it builds and regenerates a fresh sitemap every day and publishes it to the edge. Some may say that’s an over-engineered solution that puts an additional requirement on a third party, and I would agree, but in some situations it made so much sense to create your single point of truth, the sitemaps, in one place, without requesting other content APIs and other services where often that data is not clean and needs filtering. Writing what are effectively microservices that are then hosted on the edge was just far cheaper, far quicker, and more robust. Obviously, the right answer is to build it right the first time, but that simply wasn’t an option.
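A stripped-down version of that pattern could look like the worker below: a scheduled handler pulls indexable URLs from the crawler’s API and stores a rebuilt sitemap in Workers KV, and the fetch handler serves it from the edge. The API URL, response shape, and KV binding name are all assumptions for illustration.

```typescript
// Hypothetical edge-hosted sitemap, rebuilt daily from a SaaS crawler's API.

interface Env {
  SITEMAP_KV: KVNamespace; // KV binding name is an assumption
}

const CRAWLER_API = "https://api.example-crawler.com/v1/indexable-urls"; // placeholder

function buildSitemap(urls: string[]): string {
  const entries = urls.map((u) => `  <url><loc>${u}</loc></url>`).join("\n");
  return `<?xml version="1.0" encoding="UTF-8"?>\n` +
    `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n${entries}\n</urlset>`;
}

export default {
  // Cron trigger (e.g. once a day): regenerate the sitemap from fresh crawl data.
  async scheduled(event: ScheduledEvent, env: Env): Promise<void> {
    const res = await fetch(CRAWLER_API);
    const urls = (await res.json()) as string[];
    await env.SITEMAP_KV.put("sitemap.xml", buildSitemap(urls));
  },

  // Serve the stored sitemap straight from the edge.
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === "/sitemap.xml") {
      const xml = await env.SITEMAP_KV.get("sitemap.xml");
      return new Response(xml ?? "", { headers: { "Content-Type": "application/xml" } });
    }
    return fetch(request); // everything else goes to the origin
  },
};
```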

D: Talking about building it right the first time, is there a danger with automating the building of XML sitemaps? For it to include too much junk?

C: Yes. Actually, I’ve found that happens everywhere. If you’ve ever worked in a CMS, you may have crawled a sitemap and seen test pages, where someone created some pages, didn’t put them in the site structure, and just left them there. And if the logic that builds the sitemap isn’t checking whether a page is indexable, along with all these other elements and filters, it could still get published anyway. So I know if you’re on WordPress, Yoast does a lot of that heavy lifting for you. I think WordPress does a lot more in its core than it used to. But obviously, a CMS like Drupal doesn’t take care of it. And very often there will be pages that you don’t want to make it into the sitemap for various other reasons. Again, it’s just making sure you’ve gotten on top of that and you’re building those filters in, which I think is important, whether it’s on the edge or not. Otherwise, you can still be feeding data to Google that you just don’t want it to see. But again, doing it on the edge is a very quick and lightweight solution for that.
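As an illustration of the kind of filtering Chris is describing, a small pre-filter like the sketch below could sit in front of the sitemap builder. The record shape and field names are hypothetical; a real implementation would match whatever the crawler or CMS actually exposes.

```typescript
// Hypothetical pre-filter for sitemap candidates: only live, indexable,
// self-canonical pages get through.

interface PageRecord {
  url: string;
  statusCode: number;
  noindex: boolean;
  canonicalUrl: string;
}

function sitemapEligible(pages: PageRecord[]): string[] {
  return pages
    .filter(
      (p) =>
        p.statusCode === 200 &&   // live pages only
        !p.noindex &&             // respect meta robots noindex
        p.canonicalUrl === p.url  // skip canonicalised duplicates
    )
    .map((p) => p.url);
}
```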

D: And number five is injecting content. What kind of content are you talking about there?   

5. Injecting Content

C: Anything web and digital orientated. This one overlaps a little bit with the split testing, in the sense that you’re using the edge to add more content in, and that content looks as if it’s from the server rather than being added in the client. If you’ve ever been involved in a subdomain versus subdirectory argument about blogs, for example, and you can’t pull the blog through the right infrastructure, well, you can use CDNs to effectively stitch content in. You can say that you want to pull the header from this system but pull the blog content from that system. And on the edge that can be done very quickly and efficiently. A lot of it gets cached and stitched together on the edge. And by the time it is displayed to the user, you’ve effectively got this hybrid content from two different systems. And to be fair, that is something you can do on the origin, with the right inclination and buildability. But doing it on the edge, the different systems you’re pulling from almost don’t matter, as long as you can clearly identify what it is you need to be pulling in and you can write the code to effectively do that. It’s very performant, it happens very quickly, and it gets you what you need.
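For a concrete flavour of that stitching, the sketch below serves /blog/ paths from a separate blog platform while keeping the main site’s hostname, with HTMLRewriter standing in for the step that swaps shared fragments such as the header. The hostnames and selectors are invented for illustration.

```typescript
// Hypothetical content stitching at the edge: /blog/ is fetched from another
// system but served on the main domain, so users and crawlers see one host.

const BLOG_ORIGIN = "https://blog-platform.example.net"; // placeholder

export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);

    if (url.pathname.startsWith("/blog/")) {
      const upstream = new URL(url.pathname + url.search, BLOG_ORIGIN);
      const blogResponse = await fetch(upstream.toString(), { headers: request.headers });

      // Placeholder for swapping the blog platform's header for the main
      // site's header before the stitched page leaves the edge.
      return new HTMLRewriter()
        .on("header", {
          element(el) {
            el.setAttribute("data-source", "main-site");
          },
        })
        .transform(blogResponse);
    }

    return fetch(request); // everything else goes to the main origin
  },
};
```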

D: I remember, a long time ago, incorporating content using frames and PHP includes. And both of those are very old-fashioned ways of doing this. Are there any downsides to injecting content from other sources, or other web servers? Will there be any potential SEO downsides to doing that?

C: The key ones are that if these assets are available on other URLs and can be indexed on them, then there is an inherent risk. That’s also equally easy to prevent if you’re aware of it. In some instances, you could be using data feeds from other services and stitching them together, rather than the old frameset method of having the header on one page, the body on another page, and showing them on the same page. You can build protection in quite easily to stop that from happening. I think the key one is that you need to be receiving content from those two sources reliably, and it needs to be cached reliably. A big question with the edge and the more complicated engineering tasks is what happens if the CDN falls over. What is the fallback? And that can vary in complexity. I think if you’re a big organization and you want significant uptime, like 99.99%, then you can build other CDNs to fall back on. But if, for example, you’re relying on your CDN to do the stitching together and there are some CDN issues, you may find that some of those pages just don’t work. Then again, if Cloudflare goes down, half the internet goes down. In those instances, the question is: are we serving the appropriate response to Google to get them to come and check back again later once the disruption is gone?

I think with anything edge related, that’s where the biggest anxiety comes from: what happens if this third-party service falls over? But that’s the nightmare of any web infrastructure. You can never fully safeguard against that, even if you’ve got the server in your own office and you feel happy about that. That’s quite an old-fashioned take on it anyway. There is no zero-risk method of hosting. You can fail over to others. So you can have a dual CDN strategy. You could have Akamai on one layer and Fastly on another. If Akamai fails, it passes to Fastly, or vice versa. That’s incredibly sophisticated, and it’s an edge case of an edge case. But it’s possible to protect against most of this if you know what you’re doing and you spec it right.

D: I expect a webinar discussion panel on how to actually guarantee 100% uptime. That would be interesting.

C: It’s possible, more possible than it ever has been, I think if you combine Cloudflare and Akamai or Cloudflare and Fastly or Similarweb, you could get pretty close which would be very interesting.

D: Well, let’s finish off with Pareto Pickle. Pareto says that you can get 80% of your results from 20% of your efforts. What’s one SEO activity that you would recommend that provides incredible results for modest levels of effort?     

The Pareto Pickle – Publish Changes

C: This nearly made it onto my edge list, but it’s not quite there and it’s a little bit hacky, so some people will inherently not like this: using the edge to get something done. We talked about meta CMS briefly. It’s something the Search Pilot team and John Avildsen between them helped show the world, but you can use the edge to publish changes that otherwise would be stuck in dev queues. The idea is getting it done, getting it live, proving the concept, and ignoring the tech debt risk and annoying DevOps for a minute, because they’re both factors. But all of the value in SEO comes from it being live, that content being actioned, and the edge can shortcut that. It’s not pretty, and it’s not the right way. But pushing some content changes live and circumventing queues has great results if the alternative is waiting six months and it not happening.

D: I’ve been your ،st David Bain. Chris, thanks so much for being on the In Search SEO podcast.

C: Thanks for having me, David.

D: And thank you for listening. Check out all the previous episodes and sign up for a free trial of the Rank Ranger platform over at rankranger.com.

About The Author

The In Search SEO Podcast

In Search is a weekly SEO podcast featuring some of the biggest names in the search marketing industry.

Tune in to hear pure SEO insights with a ton of personality!

New episodes are released each Tuesday!


Source: https://www.rankranger.com/blog/seo-on-the-edge