Website Performance Solutions for WordPress

PageSpeed website performance results graph
Your mileage may vary…

Making Google happy is… difficult. This is especially true when it comes to website performance, which Google has confirmed can affect SEO. A slow site has, of course, all the pitfalls you’d expect: higher bounce rates, lower visitor retention, worse search engine rankings, and even more expensive ads. And even if your site feels very fast, Google still might be very unhappy. This is because we tend to view our own sites in ideal settings. A fast, private network. A capable device with processing power to spare. And, not to mention, a very full browser cache. But when Google tests page speed, its Lighthouse tool simulates a page load on a mid-tier device (a Moto G4) over a throttled mobile network.

And this is a good thing. If sites are only tailored for those with speed and RAM in spades, then we’d be missing out on a huge swath of potential visitors. But even after you’ve followed all of the standard online tips (compress your images! smaller scripts! CDN all the things!), the needle probably hasn’t moved all that much. This is because website performance must be considered from the ground up.

And it isn’t just about your CMS. Marketing needs to be on board, so does design, so does tech. Website performance often requires difficult strategic and architectural considerations, especially for large sites with heavy integrations. So in this article we’ll dig into what it really takes to build and maintain a truly fast site. WordPress will be our anchor to real-world scenarios and solutions (hey, we like WordPress), but most of what’s below can be applied to any web stack.

The basics – website hosting and asset management

Let’s start with what you absolutely cannot live without. Fortunately, these are also the things you don’t have to worry about much once they are set up.

Hosting and Server Setup

There’s just no replacement for an excellent host. The tiniest files and the leanest scripts won’t mean a thing if a site’s initial response time is slow. This is why server response time (also known as Time to First Byte) is always the first metric we check. This is the point at which your site starts sending data to the browser. Before that, it’s just a white screen. Google wants this metric to stay below 600ms (three-fifths of a second), but ideally it doesn’t creep above 100ms for most page hits. Here are a few things we always check:

Server response time graph
  • For WordPress sites, are you on a WordPress-focused host? And I don’t mean a host that merely supports WordPress; I mean a host that is built around WordPress, knows it inside and out, and has features, support, and a knowledge base focused on WordPress. If not, changing hosts is probably the first big step. Our unabashed favorite is Pantheon, but other excellent options include WPEngine and Flywheel.
  • Is your host a cloud-based platform that supports built-in caching functionality like Redis and reporting like New Relic? Is all incoming traffic served through a cache layer like Varnish? If not, then you’re likely serving uncached pages to visitors, and this is a big no-no. The standard visitor will not need a totally fresh version of your site. Even daily content updates should be cached.
  • When a page isn’t cached (you can force this on most sites by adding a unique parameter to your URL, like ?cache-bust=12345), how process-intensive is it? If the server response time is especially high (several seconds), you may have structural problems that need to be addressed. For WordPress sites, the culprit is often poorly-written third-party plugins.
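That cache-busting check is easy to script. Here’s a minimal sketch; the parameter name cache-bust is arbitrary, since any query string the cache layer doesn’t recognize will typically force a miss:

```python
import time
from urllib.parse import urlencode, urlparse, urlunparse, parse_qsl

def cache_bust(url: str) -> str:
    """Append a unique query parameter so the cache layer treats the request as a miss."""
    parts = urlparse(url)
    query = parse_qsl(parts.query)
    # A millisecond timestamp is unique enough to never hit a cached copy.
    query.append(("cache-bust", str(int(time.time() * 1000))))
    return urlunparse(parts._replace(query=urlencode(query)))
```

You can then compare cached vs. uncached response times with something like `curl -o /dev/null -s -w '%{time_starttransfer}\n' <url>` against the original and busted URLs.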

Asset Management

Most website performance tip lists start and end here. And it can help quite a lot, but we consider it a baseline: a foregone conclusion that all sites should adhere to.

Properly size images warning
Next-gen image format warning
  • Images should be as small as possible. This means limiting their file size with optimization, and limiting their output size to ensure a user isn’t served an image that is far larger than necessary. Google recommends that you go one step further and convert your site images to WebP. This definitely improves file size, but there is a notable difference in quality that needs to be considered. To implement these updates automatically in WordPress, we recommend Imagify. It’s inexpensive, has lots of configuration options, and will handle the WebP file conversion and output.
  • Asset files, Javascript and CSS, need to be minified and combined. But! That doesn’t mean combining every single CSS and JS file into huge files that are loaded on every page of your site. Minification is an excellent first step to get the ball rolling, but also consider what is being loaded on each page. Does your site include complex forms on a few pages with their own styles and scripts? Don’t load this everywhere!
  • Finally, use a CDN (Content Delivery Network). Many hosts offer CDNs out of the box (ahem, Pantheon), and they’re easier than ever to take advantage of.
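To make the “properly size images” idea concrete, here’s a rough sketch of the arithmetic behind that PageSpeed warning. The slack factor and default device pixel ratio below are our own assumptions, not Google’s exact thresholds:

```python
def is_oversized(intrinsic_w: int, rendered_w: int, dpr: float = 2.0, slack: float = 1.25) -> bool:
    """Flag an image whose file is meaningfully wider than the screen can use.

    intrinsic_w: actual pixel width of the image file being served.
    rendered_w:  CSS pixels the layout actually displays.
    dpr:         device pixel ratio (2.0 covers most modern phones).
    slack:       tolerance before we call the delivery wasteful.
    """
    needed = rendered_w * dpr
    return intrinsic_w > needed * slack

# A 2400px-wide hero squeezed into a 360px slot on a 2x phone only needs
# about 720 device pixels, so serving 2400px wastes most of the download.
```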

Getting fancy – eliminating render-blocking resources

Rendering is the process your browser goes through to make a webpage load and become functional. This includes loading all HTML and images, parsing all scripts, applying all styles, etc. And there are quite a lot of ways this process can get bogged down. When the browser can’t complete the load of a page because something else is in the way, this is called render-blocking. In some cases render-blocking can be fixed simply by reordering or deferring resources. But as your site gets larger, this process will get more complicated, and require more fine-tuned solutions.

Reordering and Deferring Resources

The later you load a resource, the less impact it has on rendering. For javascript, this means either loading non-critical scripts in the footer or loading them asynchronously. For a script to be critical, it must be required for your page to render correctly at all. Very few scripts meet that bar, but a common offender is a script that changes an element’s layout to prepare it for custom behavior.

For example, let’s say your developer added a big ole slider to the top of the page. Without the javascript that runs the slider, it’s a linear list of full-width backgrounds. Once the script kicks in, it adds the functionality and styles necessary to turn those long, chunky elements into slides. But this means that in order for our page to render properly to the user, our slider script must load early – and in doing so, it blocks the render of the rest of the page. Instead, the developer needs to ensure that the slider looks correct without javascript, and that the slider script only handles the slidey bits. That way the slider javascript can be deferred until after the page has rendered.

As a rule, all or most of your site’s javascript should be loaded at the very bottom of the page. If it can’t be, it’s time to work with a developer to figure out why.
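To illustrate the idea, here’s a sketch of a build step that adds a defer attribute to every external script that isn’t on a critical allowlist. It’s illustrative only: real markup deserves a real HTML parser, and the slider-layout.js filename below is hypothetical.

```python
import re

# Hypothetical allowlist: the rare scripts that must run before first paint.
CRITICAL = {"slider-layout.js"}

def defer_scripts(html: str) -> str:
    """Add defer to external <script> tags that aren't critical (and aren't already deferred)."""
    def patch(match: re.Match) -> str:
        tag = match.group(0)
        src = match.group(1)
        if any(name in src for name in CRITICAL) or "defer" in tag or "async" in tag:
            return tag  # leave critical or already-deferred scripts alone
        return tag.replace("<script", "<script defer", 1)
    return re.sub(r'<script[^>]*\bsrc="([^"]+)"[^>]*>', patch, html)
```

In WordPress specifically, the same effect is usually achieved by enqueueing scripts in the footer or filtering the script tag output, rather than rewriting HTML after the fact.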

Google Tag Manager

As your site gets bigger, your marketing team has likely packed the site with more and more tracking tags. These often get loaded in via Google Tag Manager, and so they easily go unnoticed by the web team. However, their impact can be severe. Check out the example below: the same page is checked against Google’s PageSpeed tool for mobile devices. On the left, no change. On the right, Google Tag Manager has been removed.

Website performance with Google Tag Manager
With Google Tag Manager scripts firing
Website performance without Google Tag Manager
Without GTM scripts

Now, the problem isn’t Google Tag Manager. It’s an excellent tool for managing third-party scripts and tags. The problem is the amount of third-party javascript that is being added to the page. In this case our only recourse is to work with the marketing team to remove as many scripts as we can, and to use tools within GTM to defer the rest. Even then, it’s clear that hard-won performance gains on the website can be easily lost with the reckless addition of scripts over time.

Google understands that this is a problem and is working on a solution by moving to a server-side approach. It’s still very early, but the results look promising.


To recap solutions for render-blocking resources:

  • Remove any script that isn’t being used. This might be an old tracking tag for a service you no longer use, or a script for page elements that no longer exist.
  • Defer any script that isn’t critical to page load. If the site is built correctly, this should be most or all of them.
  • Work with the marketing team to ensure tracking tags are reasonable and do not negatively affect performance.

And finally – get a site audit

The best thing you can do for a growing site is sit down with an expert to discuss it. Performance is extremely important, but it isn’t the only goal of a website. It’s a balancing act to ensure your site is fast and accessible without sacrificing the functionality and design necessary to support your brand and your online goals. There are plenty of online tools for running site reports that will give you a nice, long list of problems to fix. Others will promise an automated solution. But a performant site takes conversation and compromise, as well as ongoing support from a critical eye.

If that happens to be something you need, let us know, and we can help.
