How to Optimize the WordPress Front-End for Speed

Let’s set a few ground rules. The speed of your website (also known as its load time) depends on four things.

  1. The server hosting the website
  2. The visitor’s internet connection
  3. The DNS configuration of the domain for the website
  4. The file size and file structure of the website

We can assume the first three items on this list are already optimized, or at least not within your control. That leaves optimizing the website itself, which nine times out of ten is what actually needs to be “fixed”. Please also note that each page is unique and must be diagnosed separately. So how do we diagnose a page?

Head on over to http://tools.pingdom.com and enter the URL of any page on your website. At a glance, this tool will calculate the total file size of all the images, scripts and other downloads required to bring up your page. This number should never exceed 2 MB if you expect the page to load within 2 seconds on a high-speed internet connection.
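If you’d rather script this check yourself, here is a rough sketch in Python. It assumes the third-party requests and beautifulsoup4 packages are installed, and the URL is just a placeholder; it won’t be as thorough as Pingdom, but it gives you the same two numbers: request count and total weight.

```python
# page_weight.py - rough total of a page's download size (a sketch, not a Pingdom replacement)
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

PAGE_URL = "https://example.com/"  # placeholder: any page on your site

page = requests.get(PAGE_URL, timeout=10)
soup = BeautifulSoup(page.text, "html.parser")

# Collect the URLs of images, scripts and stylesheets referenced by the page.
assets = [tag.get("src") or tag.get("href")
          for tag in soup.find_all(["img", "script", "link"])
          if tag.get("src") or tag.get("href")]

total_bytes = len(page.content)
for asset in assets:
    resp = requests.get(urljoin(PAGE_URL, asset), timeout=10)
    total_bytes += len(resp.content)

print(f"{len(assets)} requests, {total_bytes / 1_000_000:.2f} MB total")
```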

I have seen e-commerce pages with dozens of products total 18 MB and take more than 20 seconds to load. Gruesome, I know, but sadly it’s true. It can be as simple as uploading a couple of high-res photographs that are set to display at 90 px wide when the actual file is 9,000 px wide. The browser squeezes it down to 90 px for display, but you are still downloading the full 3 MB of data for that 9,000 px photo (the one that looks like it’s only 90 px wide on your web page).
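The fix is to resize the photo to roughly its display width before uploading it. Here is a minimal sketch using the Pillow library; the filenames and target width are placeholders.

```python
# resize_image.py - shrink an oversized photo to its display width before upload
from PIL import Image

SOURCE = "photo-9000px.jpg"   # placeholder: the original high-res file
OUTPUT = "photo-90px.jpg"     # placeholder: the web-ready copy
TARGET_WIDTH = 90             # the width it will actually be displayed at

img = Image.open(SOURCE)
ratio = TARGET_WIDTH / img.width
resized = img.resize((TARGET_WIDTH, int(img.height * ratio)))
resized.save(OUTPUT, quality=80, optimize=True)  # JPEG quality 80 is usually plenty for the web

print(f"{SOURCE}: {img.width}x{img.height} -> {resized.width}x{resized.height}")
```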

Another common issue I see is too many HTTP requests. An HTTP request is counted every time a file loads: that could be your CSS file, your image file, your HTML file, your CGI file, your htaccess file, your XML file, and the list goes on and on. Every single file counts: every image, every script, everything. Even if they are all tiny file sizes, a new connection needs to be made for each one of them. There are ways of merging files so that fewer connections, i.e. fewer HTTP requests, are required.
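Merging is simpler than it sounds; conceptually it is just concatenation, and WordPress plugins such as Autoptimize will do it for you. A minimal sketch, with a hypothetical list of stylesheets:

```python
# merge_css.py - concatenate several stylesheets into one file (one HTTP request instead of many)
CSS_FILES = ["reset.css", "layout.css", "theme.css"]   # placeholders: your theme's stylesheets
MERGED = "site.min.css"

with open(MERGED, "w") as out:
    for path in CSS_FILES:
        with open(path) as f:
            out.write(f"/* {path} */\n")   # keep a marker so you can trace rules back later
            out.write(f.read() + "\n")

print(f"Merged {len(CSS_FILES)} files into {MERGED}")
```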

That’s your meat and potatoes on optimizing speed performance for a web page. Each one has its own set of HTTP requests (connections) and its own set of images, videos, etc. (file size). The recommended limit for a web page (a unique URL) is 2 MB and fewer than 100 HTTP requests in order for it to be “lightning fast”. Otherwise, even the fastest server with the fastest internet connection is going to be slow to load your website. That being said, some of the first things I would look at are:

  1. Compressing Image Files
  2. Disabling unused plugins
  3. Merging CSS files

If you ever notice your website is dreadfully slow, retrace your steps; more often than not something has been deleted but is still being requested. Perhaps an image was deleted via FTP but not removed from the HTML code, so the browser that tries to load the website gets stuck on that missing image that is no longer on the server. A missing file like that can stall the page by a good 2 to 4 seconds: the browser tries several times to download the file before it gives up and moves on to the next one. Once the code is cleaned up and the image reference properly removed, the request for the missing file never happens, and you’ve just shaved 4 seconds off your load time and are back down to 1.64 seconds ;)
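If you want to hunt down those dead requests without combing through a waterfall chart, here is a minimal sketch that walks a page’s assets and flags anything that no longer responds. Again it assumes the requests and beautifulsoup4 packages, and the URL is a placeholder.

```python
# find_missing_assets.py - flag page resources that 404 or time out
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

PAGE_URL = "https://example.com/"  # placeholder: the slow page

soup = BeautifulSoup(requests.get(PAGE_URL, timeout=10).text, "html.parser")
assets = {tag.get("src") or tag.get("href")
          for tag in soup.find_all(["img", "script", "link"])
          if tag.get("src") or tag.get("href")}

for asset in sorted(assets):
    url = urljoin(PAGE_URL, asset)
    try:
        status = requests.head(url, timeout=5, allow_redirects=True).status_code
    except requests.RequestException:
        status = "timeout/error"
    if status != 200:
        print(f"{status}  {url}")   # anything listed here should be fixed or removed from the HTML
```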

Comments (6)

  1. Hi Christien,

    thanks for sharing your tips! We also work a lot on optimizing the speed of WordPress and came up with a solution [1] that creates a static and optimized version of a WordPress site after each change. We use this to manage hundreds of WordPress sites. Do you think this would also be interesting for individuals?

    Best Regards,
    Paul
    [1] http://oneclickwp.com/

        • The idea is to have minimal redirects. I use Enom to register my domains and manage DNS with them. I use A records to point directly to the server, and I use MX records and a CNAME for the mail server (Google Apps). Originally I used my own custom name servers, which effectively routed the visitor to Enom (where the domain was registered); Enom would then route the request back to my name servers (on my server), which would finally point the visitor to the mail server and the web server. By managing DNS directly with Enom, I have shaved one step off the lookup: I am no longer sending the visitor to Enom, then to my server, and then back out to Google Apps… it goes straight from Enom to Google Apps. That’s what you need to think about: making the resolution of your domain as short as possible by not bloating your DNS settings with too many servers. Make sense?
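    If you want to see what your domain actually resolves to at each step, here is a minimal sketch using the third-party dnspython package; the domain is a placeholder.

    ```python
    # dns_check.py - list the records your domain currently returns
    import dns.resolver

    DOMAIN = "example.com"  # placeholder: your domain

    for record_type in ("A", "CNAME", "MX", "NS"):
        try:
            answers = dns.resolver.resolve(DOMAIN, record_type)
        except dns.resolver.NoAnswer:
            continue  # no records of this type
        for rdata in answers:
            print(f"{record_type:5} {rdata}")
    ```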

  2. 1) htaccess files aren’t accessed (or downloaded) directly, but are processed server-side for each request.

    2) Unless your server is slow, there is rarely a time when a DNS fetch would cause any delay whatsoever. In fact, almost every ISP caches DNS data, so a lookup doesn’t go beyond their server unless you have ridiculously low TTLs in your SOA settings.

    3) The most effective way of ensuring multiple HTTP connections are processed quickly is to offload some of them to other domains. Even if you’re using caching or compression, you’ll realize huge gains by using subdomains or domain aliases to load static content such as images and scripts. If you alternate between a few subdomains/domain aliases, you can increase the browser fetch speed significantly (each new alias roughly doubles the potential number of parallel downloads). Browsers limit the number of active connections they open to any given domain (usually 2 to 4 simultaneous connections), so this can have a significant impact.
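    For illustration, a minimal sketch of round-robin asset aliasing in Python; the subdomains are placeholders that would all need to point at the same document root.

    ```python
    # shard_assets.py - distribute static asset URLs across a few aliases (hypothetical subdomains)
    from itertools import cycle

    ALIASES = cycle([
        "https://static1.example.com",   # placeholder subdomains serving the same files
        "https://static2.example.com",
    ])

    def shard(path: str) -> str:
        """Return the asset path prefixed with the next alias in round-robin order."""
        return f"{next(ALIASES)}{path}"

    print(shard("/wp-content/uploads/logo.png"))
    print(shard("/wp-content/themes/site/style.css"))
    ```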
