More businesses are also able to use existing technologies such as Cloudflare Workers and Distilled ODN to get around platform restrictions and congested development queues.
Cloudflare has helped start the discussions in marketing departments around the world and helped early adopters advance their SEO campaigns. But the next phase of adoption will be spurred on by two key elements:
- The advancement and introduction of alternatives such as Akamai Edge Workers and Fastly’s unnamed WASM solution.
- How we, as marketing and SEO professionals, develop processes, business cases, and deployment protocols around these new technologies.
Edge SEO Use Cases
The implementation possibilities of “edge SEO” are almost limitless. Use cases include:
- Implementing hreflang.
- Collecting a form of “logfile” based on server requests.
Other use cases are much more situation-dependent, but can:
- Help relieve development queues.
- Make use of the latest technologies, such as browser-level lazy loading.
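To make the first use case concrete, the sketch below shows how an edge worker might inject hreflang `<link>` tags into a page before it reaches the browser. This is a simplified, hypothetical helper: the function name, the locale map, and the example URLs are all illustrative, and a production Cloudflare Worker would more likely stream the transform with the HTMLRewriter API rather than rewrite the full HTML string.

```javascript
// Hypothetical sketch: build hreflang <link> tags from a locale → URL map
// and inject them into an HTML document before </head>. In a real worker
// this would run inside a fetch handler on the response body.
function injectHreflang(html, alternates) {
  const tags = Object.entries(alternates)
    .map(([lang, href]) => `<link rel="alternate" hreflang="${lang}" href="${href}" />`)
    .join('\n');
  // Insert the tags immediately before the closing </head> tag.
  return html.replace('</head>', `${tags}\n</head>`);
}

// Example usage with made-up URLs:
const page = '<html><head><title>Demo</title></head><body></body></html>';
const result = injectHreflang(page, {
  'en-gb': 'https://example.com/en-gb/',
  'de-de': 'https://example.com/de-de/',
});
```

Because the tags are added at the edge, the origin templates never need to change – useful when the CMS can't be touched.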
Akamai Edge Workers are set for beta in October, and Fastly, from what we can tell, are working on a solution using WASM.
Browser Level Lazy Loading via Edge
Enabling lazy-loading can be useful in achieving faster page load times, and therefore better performance for users and within organic search results.
Given that a lot of websites could benefit from the new Chrome 76 functionality, but may struggle to expedite implementation through traditional development methods as quickly as they would like, edge SEO offers an alternative implementation method.
Simon Cox has documented this process, using Cloudflare Workers and Spark, on his blog.
While the solution may not be “best practice”, it can provide a valuable stop-gap before “correct” implementation.
For smaller websites and businesses, it can provide the end solution where the development costs may be prohibitive versus the relatively low costs of Cloudflare Workers.
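As a rough illustration of the kind of transform involved, the sketch below adds the Chrome 76 `loading="lazy"` attribute to `<img>` tags that don't already declare one. This is not Simon Cox's implementation – a production Cloudflare Worker would typically use the HTMLRewriter API on the response stream – just a plain-string version of the idea for clarity.

```javascript
// Hypothetical sketch: add loading="lazy" to <img> tags at the edge,
// skipping any tag that already declares a loading attribute.
function addLazyLoading(html) {
  return html.replace(/<img\b(?![^>]*\bloading=)([^>]*)>/gi, '<img loading="lazy"$1>');
}

const before = '<p><img src="/hero.jpg" alt="Hero"><img loading="eager" src="/logo.png"></p>';
const after = addLazyLoading(before);
// The first image gains loading="lazy"; the second keeps its loading="eager".
```

Since Chrome simply ignores unknown attribute values and other browsers ignore the attribute entirely, this degrades safely where native lazy loading isn't supported.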
Dynamic Rendering via Edge

The rise of JavaScript-heavy websites led to Google releasing a series of articles and videos to help SEO pros get to grips with the technology, with dynamic rendering being an important recommendation.
“Using dynamic rendering to serve completely different content to users and crawlers can be considered cloaking. For example, a website that serves a page about cats to users and a page about dogs to crawlers can be considered cloaking.”
The costs of prerendering/dynamic rendering can vary depending on the chosen provider, as well as the number of pages your website has, and how often you want/need the cache to be refreshed.
Using workers, it’s possible to reduce the number of cached page requests, and therefore the costs, of using a third-party rendering integrator. This is done by setting up a worker to follow the below process:
- Identify who the request is coming from: a search engine, or a general user?
- If it’s a search engine, check storage for a pre-rendered cached version of the page and return it, or fall back.
- If it’s a general user, return a version of the page for client-side rendering.
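The routing step can be sketched roughly as below. The bot pattern and the cache object are simplified stand-ins – a real worker would match a fuller set of crawler user agents and read from edge storage such as Workers KV.

```javascript
// Hypothetical sketch of the routing decision: classify the requester by
// User-Agent, then decide whether to serve a pre-rendered copy.
const BOT_PATTERN = /Googlebot|bingbot|DuckDuckBot|YandexBot/i;

function routeRequest(userAgent, cache) {
  if (BOT_PATTERN.test(userAgent)) {
    // Search engine: serve the cached pre-rendered HTML, or fall back.
    return cache.prerenderedHtml
      ? { action: 'serve-prerendered' }
      : { action: 'fallback' };
  }
  // General user: let the browser render the page client-side.
  return { action: 'client-side-render' };
}

const botRoute = routeRequest('Mozilla/5.0 (compatible; Googlebot/2.1)', {});
const userRoute = routeRequest('Mozilla/5.0 (Windows NT 10.0)', {});
```

Note that serving pre-rendered content to crawlers in this way must return equivalent content to what users see, per Google's cloaking guidance quoted above.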
There is a secondary function to this: what happens in the “fallback” when no cached version of the page exists in storage.
This can happen either because the page is brand new and not yet cached by the worker, or because the page isn’t included in the XML sitemaps or URL lists input into the Google Cloud Functions (GCF).
During the “fallback” phase, we can trigger pre-rendering of the page. But this then leads to further decisions to make. We can either:
- Wait and return the pre-rendered version of the page. But if this takes longer than 3000ms, it could negatively impact performance.
- Return the page after a prescribed time limit, such as 2000ms. But depending on the page itself, the version rendered after 2000ms might not be usable and could lack key content, links, etc. Again, this could negatively impact performance.
Alternatively, rather than trigger pre-rendering of the page, we can return a 503 status code, and trigger the prerendering of a version for caching. From experience, this is the preferred option.
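The preferred fallback can be sketched as below. `triggerPrerender` is a made-up stand-in for a call out to a rendering service (e.g. a Google Cloud Function); the cache is a plain object for illustration.

```javascript
// Hypothetical sketch of the preferred fallback: when no cached copy exists,
// kick off prerendering for next time and answer 503 now, rather than making
// the crawler wait. Crawlers treat 503 as temporary and retry the URL later.
function handleFallback(url, cache, triggerPrerender) {
  const cached = cache[url];
  if (cached) {
    // A cached pre-rendered copy exists: serve it.
    return { status: 200, body: cached };
  }
  // No cached copy: trigger prerendering in the background, return 503.
  triggerPrerender(url);
  return { status: 503, body: '' };
}

const prerenderQueue = [];
const miss = handleFallback('/new-page', {}, (u) => prerenderQueue.push(u));
const hit = handleFallback('/old-page', { '/old-page': '<html>…</html>' }, () => {});
```

The design choice here is to trade one delayed crawl of a brand-new URL for consistent response times on every other request.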
Collecting ‘Log Files’
Collecting log files can sometimes be an issue due to a number of factors such as:
- Internal security and infosec gatekeepers.
- Platform limitations.
- Some development teams just not collecting or storing logs at all.
Using edge workers, we can collect a form of server logfile by registering and logging the request.
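A minimal sketch of what “registering and logging the request” might look like is below. The field names and the combined-log-style output format are illustrative assumptions; a real worker would pull these values from the incoming `Request` object and ship the line to storage (e.g. an S3 bucket or LogDNA, as tools like Sloth do).

```javascript
// Hypothetical sketch: build a combined-log-style line from request details,
// as an edge worker might before shipping it to external storage.
function buildLogLine(req) {
  const ts = new Date(req.timestamp).toISOString();
  return `${req.ip} - - [${ts}] "${req.method} ${req.path} HTTP/1.1" ${req.status} "${req.userAgent}"`;
}

// Example usage with made-up request details:
const line = buildLogLine({
  ip: '203.0.113.7',
  timestamp: 0,
  method: 'GET',
  path: '/category/widgets',
  status: 200,
  userAgent: 'Mozilla/5.0 (compatible; Googlebot/2.1)',
});
```

Because the worker sits in front of the origin, this captures requests even on platforms whose hosting exposes no log access at all.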
Tools such as Sloth make log file collection into an Amazon S3 bucket simpler, and also allow for collection in tools such as LogDNA, for export and analysis in other third-party tools.
Further, this solution allows for log collection on Salesforce CommerceCloud/Demandware websites – unfortunately not for Shopify websites.
In theory, it should work. However, the relationship between Shopify and Cloudflare went “grey cloud” in early 2019. This means it’s not possible to use Cloudflare Workers for any edge SEO purpose on the platform.
The advent of Akamai Edge Workers might, however, provide an alternate route.
Getting Started with Edge SEO
CDNs such as Cloudflare, Akamai, AWS, and Fastly are already in wide use.
Being able to unlock and use these additional features can be vital in implementing essential and critical fixes.
Implementing things through these methods can be:
- Cost-effective in comparison to other tools such as Distilled ODN, which has the capability to perform edge SEO tasks but is designed as a testing tool rather than an implementation tool.
- Cheaper than using a service such as DeepCrawl for logfile collection.
BrightonSEO Slides: Produced by author, September 2019
Spark lazy-loading image on Spark: SimonCox.com
Sloth.cloud Dashboard: Screenshots taken by author, September 2019