How to Set Up WordPress Multisite with Nginx

Recently, due to the many attacks launched across the Internet, especially the botnet-powered DDoS attacks on WordPress hosts, I was forced to move some of my sites to a shared server — a virtual machine — with little memory and just a slice of one CPU core.

This required rethinking the whole strategy of hosting them: instead of having a huge server with almost unlimited memory, disk space, many CPU cores, and seemingly infinite resources, I had to somehow extract the same amount of performance out of this tiny virtual server. How?

After a whole week of reading mostly outdated tutorials, and evaluating many different approaches and strategies, it was clear that my beloved LAMP environment, favored by so many for running WordPress, had to go. Apache simply refused to fit into such strict limits.

The alternative seemed to be Nginx, which I had absolutely no experience with, so I was not expecting miracles — and I was aware that WordPress has been designed to take good advantage of Apache’s tricks, like mod_rewrite, not only to get pretty URLs but also to work tightly with disk caches.

But to my utter surprise, not only does Nginx play nicely with WordPress, but the result was a level of performance I never thought possible! Small is beautiful, but it can be ultrafast too. Here’s how!


For this tutorial you will need three things:

  1. A virtual machine (also known as virtual private server), either created on your own computer (using VMWare or similar software), or, more likely, leased from a commercial provider. You can get a good overview of pricing and features at CompareVPS. I’m using a VPS with 512 MB of RAM, 40 GB of disk, and 500 GB of monthly traffic for a bit less than US$10/month.
  2. A pre-installation of Ubuntu. For this tutorial we will use Ubuntu 12.04; there are more recent versions, and many of the commands and configurations will probably work under Debian Linux as well. Commercial providers will usually pre-install the operating system when you sign up with them.
  3. Some familiarity with Unix console commands. At least you should not be afraid to experiment with them!

Quick overview

So here is what we’re going to install. First, we’ll begin with MySQL, and tweak it a bit to get it to fit into our limited-memory environment.

Then comes Nginx with a basic configuration. Nginx requires an external way to communicate with PHP, so we will need to install PHP-FPM — a way of managing PHP FastCGI processes that performs well on benchmarks — and fine-tune it all.

We’ll be using PHP with the Alternative PHP Cache (APC) — a way to speed up PHP processing — which plays nicely with the W3 Total Cache plugin (which, in turn, is fully Nginx-aware).

And finally we’ll explain how you can host multiple sites with completely different domain names using a single WordPress multisite installation.

Caveats and disclaimers

Before you start following this tutorial, you should keep a few things in mind. Choosing the “best” setup for WordPress is a tricky business because, on one hand, it depends on the definition of “best”; on the other hand, it depends on your WordPress setup (and the hardware it’s running on), what it’s being used for, the kind of data (like images and multimedia files) you’re hosting, and, most importantly, your visitors and what they’re doing.

Benchmark results are helpful, but consider your own environment.

There are plenty of benchmarks on the Web attempting to “prove” that one solution is “better” than others. I did the same for my own particular setup, and what will be described below is the result of my own tests. But you might have a different environment and not be able to reproduce the same results.

For instance, some people argue quite seriously that Nginx + PHP-FPM is actually slightly slower than Apache + mod_php unless you have a lot of static content (because Nginx will serve that directly, without needing to contact the PHP-FPM backend). If you have plenty of memory to spare, a solution using Varnish + Apache + mod_php might beat a very fine-tuned Nginx + PHP-FPM solution. Just because Nginx + PHP-FPM might work best for one particular kind of setup, it doesn’t mean it’s the best for you.

But if you have a very tight environment with few resources — or, instead of opting for a huge server with lots of memory and CPU, you prefer to distribute your load among several small cloud instances — then this tutorial might help you out with extracting the most performance out of your tiny virtual private server.

Installing MySQL

So, your virtual server provider has just sent you the access password to your own slice! It’s time to log in via SSH and start installing things. We’ll begin with MySQL.

Some pre-installed versions of Ubuntu might already have MySQL 5.5 as part of the package list. If not, run:

sudo apt-get install mysql-server

Now it’s time to tweak MySQL to make it fit into as little memory as possible while still performing adequately. The first choice is between MyISAM and InnoDB, the two most popular table engines. MyISAM is the older one; InnoDB comes as the default with MySQL 5.5.

Discussions have been raging on the Internet about which solution is best for WordPress, and, again, it might be a matter of personal taste and specific environment. What is important here is that you use just one of them: it’s pointless to let MySQL run both, and disabling one will save you some memory.

After some reflection, especially after reading Mark Maunder’s article (he benchmarked MySQL using both approaches), it seems that MyISAM might be the preferred choice on single-CPU environments. Since for this tutorial we’re using a tiny virtual server, which might have just one CPU core, we’ll stick with MyISAM.

Open /etc/mysql/my.cnf (you will need a text editor; nano is a popular one and should be installed on most systems; if not, sudo apt-get install nano should get you that) and change/add the following:

# * MySQL configuration for tiny memory footprint

[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock

[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0

[mysqld]
# * Basic Settings
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-networking
skip-name-resolve
key_buffer = 24M
sort_buffer_size = 4M
read_buffer_size = 4M
#binlog_cache_size = 2M
max_allowed_packet = 12M
thread_stack = 128K
thread_cache_size = 8

# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover = BACKUP

#max_connections = 200
#table_cache = 64
table_cache = 128
thread_cache = 256
#thread_concurrency = 10
thread_concurrency = 4
myisam_sort_buffer_size = 1M
tmp_table_size = 12M
max_heap_table_size = 12M
wait_timeout = 200
interactive_timeout = 300
max_connect_errors = 10000

# * Query Cache Configuration
query_cache_type = 1
query_cache_limit = 1M
query_cache_size = 16M

# * InnoDB
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!
skip-innodb
default-storage-engine = myisam

[mysqldump]
max_allowed_packet = 16M

[mysql]
#no-auto-rehash # faster start of mysql but no tab completion

[isamchk]
key_buffer = 16M

# * IMPORTANT: Additional settings that can override those from this file!
# The files must end with '.cnf', otherwise they'll be ignored.
!includedir /etc/mysql/conf.d/

Some short explanations on the above configuration: you might have seen tutorials for improving MySQL performance, even from Matt Mullenweg himself, where the suggested settings are a bit higher. Here we’re looking at a compromise: we don’t want MySQL to have terrible performance, but we also don’t want it consuming too much memory.

The more surprising aspects might be “no networking” (and the related settings, e.g. skipping name resolution, and so forth) and getting rid of InnoDB completely. This saves us some networking buffers, but, of course, it means that WordPress will need to contact MySQL over a Unix socket and be installed on the same machine; we’ll see how this works later on.

You might prefer to run two servers.

If you prefer to run two servers, side-by-side, one with MySQL, the other with Nginx/WordPress, then of course you will need to turn networking on. This might be a more suitable environment for cloud-based networks — some providers allow you to allocate a set amount of CPUs, memory and disk, but you can launch as many instances as you wish.

Usually, only some of those will be accessible by the outside world, and the rest is inside a “private” network, with no routing to the exterior. Cloud providers usually do not charge anything for traffic among your virtual instances — only for traffic that crosses the boundary to the “real world”.

This will mean that although you have open network connections in this case, they’re completely shielded from the outside world, and, as such, are secured. And, of course, you can later duplicate the MySQL instances (or the front-end instances) if you need.

But for this tutorial, we’re keeping it simple: everything is inside the same virtual private server, and, as such, networking is not necessary — we can communicate via Unix sockets instead.
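If you do opt for the two-server layout instead, a minimal sketch of the MySQL side might look like this (10.0.0.2 is a made-up private address; use the one your provider assigns):

```ini
# Hypothetical two-server variant: instead of skipping networking entirely,
# bind MySQL to the private interface only (10.0.0.2 is a made-up address).
[mysqld]
bind-address = 10.0.0.2
# ... and remove or comment out skip-networking, if you had set it.
```

On the WordPress side, you would then point DB_HOST at that private address instead of a socket.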

Start MySQL with:

service mysql start

At the end, you should most definitely set a root password (also known as “administrative account”) for MySQL, since by default it’s empty. There are many ways to do that, but Ubuntu 12.04, for MySQL 5.5, has a neat command:

sudo dpkg-reconfigure mysql-server-5.5

Under other distributions, you will need to use the following commands:

sudo mysqladmin -u root -h localhost password 'mypassword'
sudo mysqladmin -u root -h myhostname password 'mypassword'

Remember to pick a very hard-to-figure-out password, preferably randomly generated.
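One quick way to generate such a password, assuming the openssl command-line tool is installed (it almost always is on Ubuntu):

```shell
# Generate 18 random bytes and print them as a 24-character base64 password
openssl rand -base64 18
```

Store it somewhere safe; you will need it again when creating the WordPress database user later on.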

Installing Nginx

The next step is to install Nginx. This is a software application under constant development, and it pays off to get the latest batch of security enhancements. Unfortunately, the Ubuntu core developers are not always up-to-date with Nginx, so the recommended choice, as per the Nginx Wiki, is to add it from a third-party repository (or, as the Ubuntu crowd calls them, a Personal Package Archive [PPA]), which is maintained by volunteers rather than being part of the official distribution. It has some additional compiled-in modules and may be more fitting for your environment.

sudo -s
nginx=stable # use nginx=development for latest development version
add-apt-repository ppa:nginx/$nginx
apt-get update
apt-get install nginx

If you get an error about add-apt-repository not existing, you will want to install python-software-properties:

sudo apt-get install python-software-properties

and then just run the above commands again.

Overview of the configuration for Nginx

You can run Nginx and Apache side-by-side on the same server (for instance, letting Nginx deal with static content and having Apache handling PHP), but for this tutorial, we’re going to assume that only Nginx will be running, and we will use the same data directory structure (the one where the actual files for the websites are going to be) used by Apache.

Why? It will make switching back to Apache easy, if you decide to drop Nginx; and if you’re following other tutorials on the Web, which assume the “standard” structure of a Linux distribution with Apache, you won’t be confused about the right directory in which to place your files.

What this means is that all data will be under the /var/www directory. Nginx itself follows a configuration style which is similar to all applications under Debian/Ubuntu. The main configuration directory is /etc/nginx. The main configuration file is /etc/nginx/nginx.conf. Additional configuration files (we will use that for adding WordPress-specific configurations) are under /etc/nginx/conf.d; they will be automatically loaded when Nginx restarts/reloads.

And finally, all website-specific configurations (for each virtual host) will be under /etc/nginx/sites-available. Each time you create a new virtual host, that configuration file will be symbolically linked to /etc/nginx/sites-enabled.

Some Nginx configurations that you might find out there will probably just use one single file for everything (Nginx usually doesn’t have very long configuration files anyway). Here, however, we will split everything according to the usual Debian/Ubuntu tradition. The idea is that each virtual host carries as little site-specific information as possible and draws from common rules for everything else.

Installing PHP5, PHP5 Extensions and PHP-FPM

Nginx, as you might remember, only handles static files by itself — everything else needs to be passed to an external service. In our case, we’ll use PHP-FPM to handle PHP5 on behalf of Nginx. PHP-FPM is like a mini-webserver, with its own options, but one that will only process PHP — we’ll get back to it later.

Figuring out what PHP5 extensions you really, really need to have is not always easy! For this tutorial, we want to have PHP5 with as few extensions as possible (to make sure it consumes little memory!), but we need at least a few, since WordPress (or some of the plugins) will depend on them.

I have mostly followed Rahul Bansal‘s suggestions. The first thing is to make sure we get PHP 5.4 (instead of the default PHP 5.3 which comes with Ubuntu 12.04 LTS), and that means adding another repository to get the latest version. Newer versions of Ubuntu might already have PHP 5.4 as the default, so you might wish to skip this step.

sudo add-apt-repository ppa:ondrej/php5
sudo apt-get update

Now we need to install PHP5 and all the necessary modules:

sudo apt-get install php5-common php5-mysql php5-xmlrpc php5-cgi php5-curl php5-gd php5-cli php5-fpm php-apc php5-dev php5-mcrypt

For some WordPress plugins you might need to add php5-pear to that list, as well as a few others (like php5-imap if you are using some sort of newsletter which gets mailed out to your users). International users will probably add php5-intl. I normally add php5-tidy which gets used by W3 Total Cache, but it is not strictly necessary.

Configuring Nginx

To give you a rough overview of what the Nginx configuration below does, it helps to understand that Nginx is configured with rules: as it receives a URL, Nginx needs to decide what to do with it — look inside a certain directory for a static file (for images, CSS, and so forth), pass PHP scripts to PHP-FPM, or block access (for security reasons).

Nginx can obviously do quite a lot more processing, like adding no-expiry headers and removing cookie requests for static files (for better caching), or gzip‘ing everything on the fly (for saving bandwidth).
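For instance, the stock configuration below only gzips text/html; a sketch of a slightly broader compression setup for the http {} block (the values are reasonable guesses, not benchmarked numbers):

```nginx
# Compress the common text-based assets as well, not just text/html
gzip_types text/css application/javascript application/json application/xml image/svg+xml;
gzip_comp_level 5;    # a middle ground between CPU cost and compression ratio
gzip_min_length 256;  # don't bother compressing tiny responses
```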

Here is the /etc/nginx/nginx.conf file which handles most of the common features:

user www-data;
worker_processes 1;
pid /var/run/nginx.pid;

events {
    worker_connections 256;
    # multi_accept on;
}

http {
    # Basic Settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    server_tokens off;
    client_max_body_size 8m;
    reset_timedout_connection on;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    index index.php index.html index.htm;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Logging Settings
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    # Gzip Settings
    gzip on;
    gzip_disable "msie6";

    upstream php5-fpm {
        keepalive 8;
        server unix:/var/run/php5-fpm.sock;
    }

    # include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

Notice a few things. First, worker_processes should be set to one per CPU core (my own VPS has just one). client_max_body_size limits the size of files uploaded through POST; it defaults to 1m (one megabyte), but 8m matches PHP’s default post_max_size for uploads, so I suggest keeping the two settings aligned.
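If you are not sure how many cores your VPS really has, a quick check (assuming the coreutils nproc command, present on any modern Ubuntu):

```shell
# Print the number of CPU cores available; use this value for worker_processes
nproc
```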

The upstream command sets up the channel to PHP-FPM: as you can see, just as with MySQL, we’re using Unix sockets to communicate with PHP-FPM. If you had Nginx on one VPS and PHP-FPM on another (using Nginx, say, as a front-end reverse proxy/caching server), you would use server my.ip.address:portnumber instead.
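As a sketch, the TCP variant of that upstream block might look like this (10.0.0.3:9000 is a hypothetical private address and port for the PHP-FPM machine):

```nginx
upstream php5-fpm {
    keepalive 8;
    server 10.0.0.3:9000;  # private IP and port of the remote PHP-FPM pool
}
```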

In this tutorial, we’ll show you both the single-site and multisite configuration for WordPress. The configuration files for those two choices will be stored under /etc/nginx/conf.d/, so we will activate the appropriate ones on demand. That’s why this line is commented out — we don’t want to load both configurations, since WordPress works rather differently under multisite mode!

The configuration for single-site WordPress (save it under /etc/nginx/conf.d/wordpress.conf) is as follows, inspired by the recommendations in the WordPress entry on the Nginx Wiki, which describes best practices as well as how to avoid some common pitfalls.

# WordPress single blog rules.
# Designed to be included in any server {} block.

# This order might seem weird - this is attempted to match last if rules below fail.
location / {
    try_files $uri $uri/ /index.php?$args;
}

# Add trailing slash to */wp-admin requests.
rewrite /wp-admin$ $scheme://$host$uri/ permanent;

# Directives to send expires headers and turn off 404 error logging.
location ~* ^.+\.(xml|ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
    access_log off; log_not_found off; expires max;
}

# Uncomment one of the lines below for the appropriate caching plugin (if used).
#include global/wordpress-wp-super-cache.conf;
#include global/wordpress-w3-total-cache.conf;

# Pass all .php files onto a php-fpm/php-fcgi server.
location ~ \.php$ {
    # Zero-day exploit defense.
    # Won't work properly (404 error) if the file is not stored on this server,
    # which is entirely possible with php-fpm/php-fcgi.
    # Comment the 'try_files' line out if you set up php-fpm/php-fcgi on another
    # machine. And then cross your fingers that you won't get hacked.
    try_files $uri =404;
    #fastcgi_split_path_info ^(.+\.php)(/.+)$;
    #NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
    include fastcgi_params;
    fastcgi_index index.php;
    #fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    #fastcgi_intercept_errors on;
    fastcgi_keep_conn on;
    fastcgi_pass php5-fpm;
}

And now the rules for WordPress running in multisite mode (save them under /etc/nginx/conf.d/wordpress-mu.conf):

# WordPress multisite subdomain rules.
# Designed to be included in any server {} block.
index index.php;

# This order might seem weird - this is attempted to match last if rules below fail.
location / {
    try_files $uri $uri/ /index.php?$args;
}

# Add trailing slash to */wp-admin requests.
rewrite /wp-admin$ $scheme://$host$uri/ permanent;

# Pass all .php files onto a php-fpm/php-fcgi server.
location ~ \.php$ {
    # Zero-day exploit defense.
    # Won't work properly (404 error) if the file is not stored on this server,
    # which is entirely possible with php-fpm/php-fcgi.
    # Comment the 'try_files' line out if you set up php-fpm/php-fcgi on another
    # machine. And then cross your fingers that you won't get hacked.
    try_files $uri =404;
    #fastcgi_split_path_info ^(.+\.php)(/.+)$;
    #NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
    include fastcgi_params;
    fastcgi_index index.php;
    #fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    #fastcgi_intercept_errors on;
    fastcgi_pass php5-fpm;
}

location ~ ^/files/(.*)$ {
    try_files /wp-content/blogs.dir/$blogid/$uri /wp-includes/ms-files.php?file=$1;
    # access_log on; log_not_found on; expires max;
}

# Avoid PHP readfile() for uploaded files.
location ^~ /blogs.dir {
    alias /var/www/wordpress/wp-content/blogs.dir;
    access_log off; log_not_found off; expires max;
}

# Directives to send expires headers and turn off 404 error logging.
location ~* ^.+\.(xml|ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
    access_log off; log_not_found off; expires max;
}

The difference is mostly dealing with file uploads, since each WordPress multisite installation will have a “common” area, but separate areas for the uploads. We will see later how this magic happens (hint: we will need to map each subdomain to the correct $blogid).

For now, notice that this configuration is not perfect: I had to explicitly add alias /var/www/wordpress/wp-content/blogs.dir; — ideally, this should be set from each virtual server’s configuration; otherwise this will only work with a single multisite installation…

Beyond these rules, we’ll also add a common set of restrictions, in an attempt to make Nginx more secure. Place them under /etc/nginx/conf.d/restrictions.conf:

# Global restrictions configuration file.
# Designed to be included in any server {} block.

location = /favicon.ico {
    log_not_found off;
    access_log off;
}

location = /robots.txt {
    allow all;
    log_not_found off;
    access_log off;
}

# Deny all attempts to access hidden files such as .htaccess, .htpasswd, .DS_Store (Mac).
# Keep logging the requests to parse later (or to pass to firewall utilities such as fail2ban)
location ~ /\. {
    deny all;
}

# Deny access to any files with a .php extension in the uploads directory
# Works in sub-directory installs and also in multisite network
# Keep logging the requests to parse later (or to pass to firewall utilities such as fail2ban)
location ~* /(?:uploads|files)/.*\.php$ {
    deny all;
}

All that is left are the configuration files for the individual websites! But first we need to set up PHP-FPM; then, after we install WordPress, we will see all of this working together. At this stage, all you can do is check whether there are any configuration errors:

sudo service nginx configtest

If all’s well, you should just see:

Testing nginx configuration: nginx.

Configuring PHP-FPM

PHP5 itself is configured from /etc/php5. Under Debian/Ubuntu, each different way of launching PHP5 has its own, separate configuration — e.g. apache2 for the Apache configuration, cli for the command-line version of PHP5, and, naturally, fpm for PHP-FPM. They are all set independently, which sometimes might be confusing, as you can load different modules and have different settings for each configuration.

We can start with /etc/php5/fpm/php.ini, since it doesn’t need many changes. Just check that memory_limit = 128M (you can tweak this to consume less memory, but keep in mind that W3 Total Cache will consume a fair amount of memory in exchange for superfast performance). You might have noticed from the Nginx configuration that we ought to set cgi.fix_pathinfo = 0. Also remember to set date.timezone to your own timezone (it’s mandatory for PHP 5.4). The rest should be pretty much standard.
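Summing up, the relevant lines in /etc/php5/fpm/php.ini would end up looking roughly like this (Europe/Lisbon is just an example timezone; substitute your own):

```ini
; The handful of php.ini settings discussed above
memory_limit = 128M
cgi.fix_pathinfo = 0
date.timezone = Europe/Lisbon
```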

Check under /etc/php5/fpm/conf.d which modules PHP5 should load. In my case, I had to delete 20-snmp.ini, which I didn’t need. Each of those files calls the appropriate extension and allows you to set extra parameters. You should have something like this list:


Now we need to configure the service that provides PHP to Nginx. PHP-FPM has an extra configuration file for that, stored under /etc/php5/fpm/php-fpm.conf. We will not need to change this file: it holds the global configuration for PHP-FPM, which in turn launches several pools.

The analogy is that PHP-FPM works a bit like a webserver with virtual hosts: php-fpm.conf has the overall configuration, and then we have separate configurations for the pools under /etc/php5/fpm/pool.d/. By default there is just one pool, www.conf, and this is all we need to edit. It’s a long file, so I’m only showing some of the changes and checks you should make:

listen = /var/run/php5-fpm.sock
pm = dynamic
pm.max_children = 20
pm.start_servers = 3
pm.min_spare_servers = 1
pm.max_spare_servers = 5
pm.max_requests = 500

This looks familiar, right? First, we must make sure that PHP-FPM is using the same Unix socket as Nginx. Then we use dynamic process management — in this case, we tell PHP-FPM to start with 3 servers (that means three processes ready to listen for PHP requests from Nginx), limit them to a maximum of 20, not keep too many spare servers around, and kill each child process after it has served 500 requests.

This naturally reflects my own setup — 512 MB of RAM, some of which is naturally also needed for MySQL and Nginx itself (Nginx doesn’t eat much memory, though). After some careful tuning of the parameters, this allows me to serve something like 10 concurrent requests and have all processes in memory — no swapping!

But, of course, this depends on how many extensions you have loaded on PHP5, how much concurrency you really need (10 simultaneous connections is not that much), how quickly your webpage loads (which, in turn, depends on the plugins, widgets, external calls, and so forth)…

The whole trick is to twiddle with these numbers until you get acceptable performance, avoid disk swapping, and don’t leave your users endlessly waiting for pages!
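As a back-of-the-envelope sketch of that tuning (the 20 MB per child and 160 MB of overhead are assumptions for illustration, not measurements; check your own processes with ps):

```shell
ram_mb=512        # total RAM of the VPS
reserved_mb=160   # rough allowance for MySQL, Nginx and the OS itself
per_child_mb=20   # assumed peak memory of one PHP-FPM child
# Integer division gives an upper bound for pm.max_children that avoids swapping
echo $(( (ram_mb - reserved_mb) / per_child_mb ))   # prints 17
```

With these (made-up) numbers, pm.max_children around 17–20 is in the right ballpark for a 512 MB machine.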

If you want to have separate logs, which will report things like processes dying too early from lack of resources or connectivity issues with Nginx and similar errors, add the following to the www.conf pool configuration file:

php_admin_value[error_log] = /var/log/fpm-php.www.log

If not, PHP-FPM will just pipe most errors via Nginx (but not all), and they will appear on the webserver’s log.

You start PHP-FPM with:

sudo service php5-fpm start

Double-check that Ubuntu launches MySQL, Nginx, and PHP-FPM when it boots (this is the default behaviour for newly installed packages providing network services); if not, use update-rc.d to enable them.

Installing WordPress


Most of you will install WordPress directly from the sources; a good reference for the alternative, package-based approach is the Ubuntu Server Guide website. I personally dislike the way Ubuntu handles WordPress as an application: it is better integrated into the overall system, but you only get upgrades when the Ubuntu team feels they should upgrade it, and, looking at the date of the last update, that was quite a while back. In the case of WordPress, the latest and greatest is also the safest (security-wise) choice, so I recommend installing it manually.

Although Nginx can read pretty much anything from any place in the directory structure, as mentioned before, I’m staying true to the “Apache/Ubuntu” way of organizing things, and that means placing everything under /var/www — including, in this case, the virtual host that will hold our WordPress installation.

sudo -i
cd /var/www
wget -O wordpress.tar.gz https://wordpress.org/latest.tar.gz
tar -zxvf wordpress.tar.gz
chown -R www-data:www-data /var/www/wordpress
rm wordpress.tar.gz

Now we need to handle the database. WordPress needs a “clean” database (freshly created). In this example, we’ll also add a user just for that database.

If you prefer to use a Web-based database configuration tool, just follow the instructions on this tutorial about installing phpMyAdmin. If you’re fine with using the command prompt to make database changes, log in to the MySQL server as the root user:

mysql -u root -p

Create a database with the name wordpress:

CREATE DATABASE wordpress;

Create a new user, which will have access to this database only; its username will also be wordpress:

CREATE USER wordpress;

Set the password for the user wordpress to VeryHardToFigureOut2013! (use your own, but make it hard to guess, or just generate it randomly as suggested before):

SET PASSWORD FOR wordpress = PASSWORD("VeryHardToFigureOut2013!");

Grant user wordpress all permissions on its database of the same name:

GRANT ALL PRIVILEGES ON wordpress.* TO wordpress@localhost IDENTIFIED BY 'VeryHardToFigureOut2013!';

And now you can log out from the session by typing:

quit

I usually try to log in immediately afterwards with the username/password just created, to be sure everything is fine.

Next, it’s configuration time! WordPress will do pretty much everything on its own, but first we need to let Nginx become aware of our new site.

Open /etc/nginx/sites-available/mydomain.conf and type the following:

map $http_host $blogid {
    default 0;
    mydomain.com 1;
}

server {
    server_name mydomain.com *.mydomain.com;
    root /var/www/wordpress;
    access_log /var/log/nginx/mydomain.access.log;
    error_log /var/log/nginx/mydomain.error.log;
    include conf.d/restrictions.conf;
    include /var/www/wordpress/nginx.conf;
    include conf.d/wordpress-mu.conf;
}

We’ll get into this later.

cd /etc/nginx/sites-enabled
ln -s /etc/nginx/sites-available/mydomain.conf
touch /var/www/wordpress/nginx.conf

Note that the last command is a requirement for W3 Total Cache (that file has to exist and be readable by the webserver’s user).

Finally, to make sure all this is readable by the webserver, do

chown -R www-data:www-data /var/www/wordpress

Running the WordPress self-installer

Now point your browser at your new site. Since there is no wp-config.php yet, WordPress will offer to create one; go ahead and let WordPress create it.

Click on Let’s Go:

Note that the database host is localhost:/var/run/mysqld/mysqld.sock. This gets WP to talk to the database via the Unix socket which MySQL sets up by default, and, as said, avoids any TCP-based network connections entirely.
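For reference, the database section of the resulting wp-config.php should end up roughly like this (a sketch, using the example database name, user, and password from above):

```php
define('DB_NAME', 'wordpress');
define('DB_USER', 'wordpress');
define('DB_PASSWORD', 'VeryHardToFigureOut2013!');
// 'localhost' plus the socket path makes WP use the Unix socket, not TCP:
define('DB_HOST', 'localhost:/var/run/mysqld/mysqld.sock');
```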

If all goes well, you should be able to get to the familiar steps below:

If not, two things might be wrong. The first is that our “unusual” MySQL setup is not properly configured. The second is that a password or setting was typed incorrectly somewhere; just go back and fix it. Remember not to use the “admin” name for the Super Administrator: as said, the latest botnet attack on WordPress sites looked specifically for “admin” and tried to crack its password.

After that, you should be able to log in; WordPress is still running in “single site” mode. Now on to the next step!

Defining Multisite

To enable WP in Multisite mode, you need to open /var/www/wordpress/wp-config.php with your favourite text editor. If you have done this before, it should be easy. Above the bit that says:

/* That's all, stop editing! Happy blogging. */

Add the following lines:

/* Multisite */
define('WP_ALLOW_MULTISITE', true);

Refresh your browser, and you should have a new option under Tools > Network Setup:

For this tutorial, I will be setting it up with separate sub-domains. There is a reason for that: the Nginx rules later will be a bit easier. At this point, if you press Install, WP will do some validations and probably return an error about missing “wildcard domains”. Don’t worry. What matters is that wp-config.php needs another change. As you can see, WP “assumes” you’re running under Apache, so we will pretty much ignore step 2 (the .htaccess rules) and just add the lines prompted in step 1 to wp-config.php:

define('MULTISITE', true);
define('SUBDOMAIN_INSTALL', true);
define('DOMAIN_CURRENT_SITE', 'mydomain.com');
define('PATH_CURRENT_SITE', '/');
define('SITE_ID_CURRENT_SITE', 1);
define('BLOG_ID_CURRENT_SITE', 1);

Remember, this will come below the line saying define('WP_ALLOW_MULTISITE', true); but before the line saying

/* That's all, stop editing! Happy blogging. */

You will need to log in again, but that’s it: your WP Multisite install is pretty much finished.

Adding plugins

Add your favorite plugins.

We will want at least W3 Total Cache. I will assume you’re familiar with the plugin installation procedures, so I won’t go into much detail here. Go to My Sites > Network Admin > Dashboard and then choose Plugins > Add New, search for W3 Total Cache, install it, and set it to Network Activate. One of the great features of W3TC is that you can configure it for all sites in a multisite environment at once, and that’s exactly what we want to do here.

At this stage, you’ll probably be adding all your favourite plugins. I’m personally a fan of Jetpack, since it includes so many useful things: Akismet anti-spam measures, statistics, Photon to cache your images on WordPress.com’s cloud for free (very useful to keep traffic off your website!), and a reasonably good system for managing all your social networking integration. And due to the many security incidents with WP, I tend to install at least Limit Login Attempts.

I’m also very fond of Human Made Limited’s pair of plugins, WP Remote and BackUpWordPress. The first will allow you to centrally manage all your WP blogs (even if hosted on different servers!) for free, making sure you keep them always up to date (core, plugins, and themes), and never forget to upgrade them all. BackUpWordPress is probably one of the simplest free backup plugins; it will back up both the content and the database, and, in my experience, it’s one of the easiest to use if you wish to migrate from one server to another — which is always a mess under WordPress.

As a bonus, when both are installed, you can easily retrieve your backups from WP Remote’s backoffice, from any site. So if you’re administering a lot of WP blogs on different servers, both are a must. Since they’re both free, simple to use, and do their job right, there is little reason not to install them, even though there are better (paid) alternatives around. But, of course, this is all up to you!

You’ll definitely want domain mapping.

What we’ll definitely add is WordPress MU Domain Mapping. There are deep theological discussions about why this isn’t part of the WordPress core. Basically, you have two options for having a network of sites: either they’re all under the same domain but in different directories (e.g. mydomain.com/site1, mydomain.com/site2, and so forth), or under different subdomains (site1.mydomain.com, site2.mydomain.com). But in most real scenarios, what you have is totally different domains for each site, and you want the ability to manage them all together.

This is the job for WordPress MU Domain Mapping: you will tell it to point a certain domain — say, www.myotherdomain.com and www.anotherdomain.com — to specific sites on your install. Obviously you will need a little help from Nginx. The purpose of the next steps is to add these two sites, make sure they’re properly pointed to the right place, and ensure that Nginx can correctly forward the requests to the right place. All of that while still making sure that W3 Total Cache is working!

First, let’s confirm that W3 Total Cache likes the configuration so far. While still on the Network Administration panel, follow the link on Performance > Dashboard and click on Check Configuration. If all goes well, you should have something like this:

The important thing here is that Nginx should have been detected. You will also see that we have PHP with the Alternative PHP Cache (APC) module enabled.

Now go to Performance > General Settings and enable at least Page Cache, Minify, Database Cache, and Object Cache (Browser Cache should be on by default). For the method use Opcache: Alternative PHP Cache (APC). Save the configuration by clicking on Save All Settings. W3TC should tell you to Empty the Page Cache, so go ahead and do that.

If all went well, W3TC has done some under-the-hood magic for you. If you now open the /var/www/wordpress/nginx.conf file, you should have a surprise: W3TC will already have filled it in on your behalf! (If you have an empty file, or got an error, it means that you either forgot to touch this file before, or it doesn’t have the right user/group ownership or permissions; just check again that it’s set to www-data:www-data and is writable.)
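If the check fails, the fix from the paragraph above can be scripted. This is only a sketch against a scratch directory; on the real server you would set WP_ROOT=/var/www/wordpress and run the chown as root:

```shell
# Recreate an nginx.conf that W3TC can write to.
# Scratch directory here; on the server, use WP_ROOT=/var/www/wordpress.
WP_ROOT=$(mktemp -d)
touch "$WP_ROOT/nginx.conf"
chmod 664 "$WP_ROOT/nginx.conf"
# On the real server, additionally (as root):
#   chown www-data:www-data "$WP_ROOT/nginx.conf"
ls -l "$WP_ROOT/nginx.conf"
```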

Fine-tuning W3TC is more an art than a science, although it has become quite a bit easier in recent versions. Page Cache should be fine by default. Minify depends on a lot of things and is probably the trickiest bit. If you’re using CloudFlare, and are as lazy as I am, just let CloudFlare handle minification for you. If you add the login data for CloudFlare on W3TC, the latest versions will communicate with your account, correctly identify that it’s set to auto-minify things, and disable those options on W3TC.

It’s always better to allow CloudFlare to waste CPU cycles on minification, instead of spending your precious resources on that. However, I have a particular instance of a website where CloudFlare’s minification does not work well, but W3TC’s does. This will depend a lot on the theme and the plugins you’re using, and it’s great to know that you have this option.

If you’re not using CloudFlare, try the automatic settings, and on Performance > Minify enable at least HTML & XML (with Inline CSS minification and Inline JS minification), JS and CSS. This should also combine all JS and CSS automatically, which will give you extra points on Google PageSpeed.

I usually don’t mess around with Database Cache and Object Cache, but I go wild on Browser Cache and turn everything on except 

Note that W3TC is a plugin with a very active development cycle. This means that many options are constantly being added and/or removed, especially if they’re a bit obscure or it’s hard to understand what they’re doing. If you’re reading this tutorial many years after it was written, I recommend checking a recent W3TC-specific tutorial to see what options have changed and what they do.

Now let’s install the WordPress MU Domain Mapping plugin. Remember, this has to be network activated to work. Then go to Domains > Domain Mapping and set the checkboxes under Domain Options like this:

Adding two new sites

So when this tutorial is finished, you should have a network of three sites: www.mydomain.com, site1.mydomain.com, and site2.mydomain.com. What we’re doing is the following mapping:

  • www.mydomain.com points to the overall installation, i.e. the default site
  • www.myotherdomain.com points to site1.mydomain.com
  • www.anotherdomain.com points to site2.mydomain.com

Firstly, you need to go to your DNS provider and add records for all that. We’ve seen how mydomain.com was already configured to point to your IP address. Now you will need to point site1.mydomain.com, site2.mydomain.com, www.myotherdomain.com, and www.anotherdomain.com all to the same IP address (Nginx will handle the rest).

Once DNS has refreshed (and you can ping those domains and make sure they’re all pointing to the correct IP address — always the same one!) we can start adding the two sites. This, of course, is what you can do from Network Admin > Sites > Add Site. On Site Address put site1; under Site Title use myotherdomain; and the admin email could be the same as for the main site ([email protected] in this tutorial). Similarly, for site2, use anotherdomain for the title, and the same email address once more.

Let’s get it all properly mapped. Go to Settings > Domains. You will see the following message popping up:

Please copy sunrise.php to /var/www/wordpress/wp-content/sunrise.php and ensure the SUNRISE definition is in /var/www/wordpress/wp-config.php

First, let’s copy that file (this is the handler for domain mapping):

cp /var/www/wordpress/wp-content/plugins/wordpress-mu-domain-mapping/sunrise.php /var/www/wordpress/wp-content/

Now edit /var/www/wordpress/wp-config.php and add

define( 'SUNRISE', 'on' );

just above:

/* That's all, stop editing! Happy blogging. */
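If you prefer doing this from the shell, a sed one-liner can drop the define in just above the marker. This is a sketch against a scratch copy; on the real server you would point it at /var/www/wordpress/wp-config.php instead:

```shell
# Insert the SUNRISE define just above the "stop editing" marker.
WP_CONFIG=$(mktemp)
printf "%s\n" "define( 'WP_ALLOW_MULTISITE', true );" \
  "/* That's all, stop editing! Happy blogging. */" > "$WP_CONFIG"
sed -i "/stop editing/i define( 'SUNRISE', 'on' );" "$WP_CONFIG"
cat "$WP_CONFIG"
```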

Go to Settings > Domains again, now it should show:

We’re ready to add our own mappings. Sadly, the panels for WordPress MU Domain Mapping are not very user-friendly — we need to figure out the Site IDs on our own.

Fortunately, this is not too hard, since they’re listed on the Sites > All Sites panel:

Now, when you hover the mouse over the Domain name, it should give you its ID, with a URL like this:

Notice that there is an extra column, called Mapping, which was added by WordPress MU Domain Mapping. It starts out blank. If you have followed this tutorial, and haven’t added and then deleted any domains, the logic is simple: the first site is ID 1, the second 2, and so forth (but as soon as you add and delete domains, this can quickly get out of order).

So these are the assignments we wish to do:

  • www.mydomain.com uses ID 1 (the default; no need to add it)
  • www.myotherdomain.com uses ID 2
  • www.anotherdomain.com uses ID 3

If the options have been correctly set, it should now look like this:

And, under Sites > All Sites, you should have:

WordPress is now configured to handle the domain mapping, but we have to let Nginx know about it too!

So let’s get back to /etc/nginx/sites-available/mydomain.conf and open it. You will have noticed the map directive at the top. What we’re going to do is pretty much replicate here what we have set up via WordPress:

map $http_host $blogid {
    default 0;
    www.mydomain.com 1;
    www.myotherdomain.com 2;
    www.anotherdomain.com 3;
}

server {
    # (keep your existing listen and server_name lines here)
    root /var/www/wordpress;
    access_log /var/log/nginx/mydomain.com.access.log;
    error_log /var/log/nginx/mydomain.com.error.log;
    include conf.d/restrictions.conf;
    include /var/www/wordpress/nginx.conf;
    include conf.d/wordpress-mu.conf;
}

Reload Nginx (checking the configuration first, since a typo in the map block takes every site down) with:

sudo nginx -t && sudo service nginx reload

And now it’s testing time! If all went well, you should be able to view www.myotherdomain.com and www.anotherdomain.com in your browser, and they will be properly redirected.

Final note: How to test all the above before going into production?

Time to do some testing.

Following the above tutorial requires owning at least three domain names which you’re not using for any purpose, and, of course, adapting every line of code to reflect your real domain names. But you might wish to do some testing first to be sure that you have the configuration right, before you move to a production environment.

Here is a neat little trick: use the HOSTS file to create “fake” domains. Most computers are pre-configured to read static IP address assignments from their HOSTS file first, and only then hit the DNS nameservers. All you need to know is your server’s IP address.

You should do this in two places: on the server where you’re running your WordPress installation, and on your desktop computer. Under Linux/Mac OS X, the file is /etc/hosts, so with sudo nano /etc/hosts you should be able to edit it and add a line at the bottom listing your server’s real IP address followed by all of your domains. Under Windows, it depends a bit on what version you’ve got; it’s usually under C:\WINDOWS\system32\drivers\etc\hosts. Use something like Notepad to edit it (don’t use Write or Word, since they will add lots of useless formatting and break everything!).

After you’ve done the changes, you will very likely need to exit your browser and launch it again (because most browsers will cache DNS).
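As a sketch of what the added line looks like (using a scratch copy of the file, a placeholder IP of, and the example domains from this tutorial; substitute your own):

```shell
# Append a fake-domain line to a scratch copy of the HOSTS file.
# is a placeholder: use your server's real IP and your real domains.
HOSTS_FILE=$(mktemp)
cp /etc/hosts "$HOSTS_FILE" 2>/dev/null || true
echo " www.mydomain.com site1.mydomain.com site2.mydomain.com www.myotherdomain.com www.anotherdomain.com" >> "$HOSTS_FILE"
tail -n 1 "$HOSTS_FILE"
```

On the real machine you would of course edit /etc/hosts itself (with sudo), not a copy.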

There are a few caveats, though: since you’re not using “real” DNS, your WordPress installation will not be able to use any plugins that require an XML-RPC call to your server. A typical example is Jetpack, which really requires “real” addresses, because it contacts your server directly to do its magic.

However, almost all other plugins — even the core auto-update feature, which needs to contact the “outside” world — don’t have that restriction. CloudFlare may also have a few issues: you have to be careful not to let CloudFlare cache your development site, or it will be caching the wrong site instead! The best approach is to turn it off on your development environment and only activate CloudFlare again when you move the site to the production environment.

photo credits: aussiegal, sachinpurohit, rdecom,
world map from BigStockPhoto

109 Responses

    • Well, to be honest, with Nginx you can replicate what .htaccess does and go far, far further… it’s a whole new world of regular expressions :) The most interesting things are doing a bunch of regexp’s to avoid the dreaded conditionals. It’s certainly a different approach, but it easily accomplishes the same goal.

      Apache also has some things straight out of the box, like WebDAV. You can certainly do the same with Nginx, but all these things require more rules, and a different approach to things; it’s just different.

      Varnish + Apache are probably “not just as fast” as you say, but very likely, they are “quite faster”. I sort of mentioned that in that article: if you have plenty of memory to spare, your best option is to go with Varnish + Apache. Some people have benchmarked that: Apache with mod_php has an edge on Nginx, on servers with plenty of resources. Put Varnish on top of that (Varnish, btw, has the same rule-based approach as Nginx; both are closely inspired by each other), assign a few GBytes of cache to Varnish, and it’s hard to beat the performance!

      Nginx, however, seems to be much better on environments where memory and CPU are at a premium. While I haven’t asked this question to Matt, I believe that Automattic might be launching hundreds of thousands of very small virtual instances on the cloud, to handle the load better. And for that kind of strategy, Nginx works best. Other top websites might prefer slightly larger virtual instances running Apache + mod_php, and having a few Varnish servers on top of everything, sharing a common cache. Personally I think that the latter approach might lead to slightly better results, based on a few benchmarks I saw (sadly, these benchmarks are confidential and I cannot show them…).

      But if all you have is a tiny slice of a virtual server, and still expect some performance out of it, well, then, Nginx might be the right choice, the best choice, or, in some cases, even the only choice. Similarly, if you have an already-overloaded server with some heavy-duty application, but need some Web access to it, then easing the load by moving the Web infrastructure to Nginx might help the application to run better — they will have more CPU cycles and far more memory to spare. I have done that with one server, with very interesting results, and I have also noticed that several Internet-based games do the same thing: most of them use Nginx as the front-end, while running the “game server” and letting it consume all possible resources.

      But, at the end of the day, I have to concur that it’s hard to drop Apache :) After two decades of tinkering with Apache and its predecessor, let me tell you that it was just with the utmost reluctance that I even considered testing anything else. It was just because I was in a tight spot — I couldn’t fit Apache into the small virtual server I had — that I gave Nginx a try. I’m still running Apache on plenty of other servers, which have so much memory and CPU that it’s highly unlikely they will ever switch :)

    • Benchmarks are always a problem (as pointed out in the article). Most of them, to be honest, are done by enthusiastic Nginx fans that want to “prove” that Nginx is “always fastest and better” than Apache, when this is clearly not the case.

      Here is an example of a relatively fair comparison:

      As you can see, what they conclude (in a more “real” test, which includes WordPress 3.5 as the tested application) is that Apache clearly beats Nginx on servers with more resources and at higher concurrencies; while Nginx always beats Apache on small, resource-starved instances. Also note that on pure PHP benchmarking, Apache will always have an edge: mod_php running inside Apache is always slightly better than PHP-FPM. But a real website will have a mix of static elements (images, CSS, JS…) and PHP calls. On those, Nginx will serve the static elements instantly, while Apache will need to deal with all the overhead. You can see how sometimes the results are different for the phpinfo() test (pure PHP), compared to WordPress (a mix of static and dynamic content).

      There are more benchmarks that reach the same conclusion. I’m not on my “main” computer right now, but I’m sure I have a list somewhere on the other one :)

  • Great article Gwyn

I’d like to understand, from a capacity point of view (I’m thinking in terms of page views/hour), how does this setup perform?

    I have a site on a traditional LAMP stack that struggles on a 2Gig Cloud server instance with circa 260K page views per month at certain times.

    Upping the instance to 4Gig resolves the issue but of course if we can do the same with less using nginx then why wouldn’t we be interested right?

    Look forward to getting your opinions.

    • Hi @mark_skeet,

      To be honest, my most sincere opinion is “I don’t know”. The feeling I have is that putting Varnish with 1 GB of cache in front of Apache, signing up with CloudFlare (it’s free) and activating Jetpack’s Photon (it’s also free) will probably get rid of almost all the traffic coming from the images — Photon and CloudFlare will deal with that pretty easily, and Varnish will make sure that those requests will never hit Apache. With some tweaking of Apache’s parameters, it should most definitely be able to handle all that load — if it’s stuck doing what it does best, namely, running PHP (which you can even improve using an accelerator like APC or Xcache).

But if you have good reasons not to use all of that (for instance, if you want to make sure that all images are really being served out of your server), Nginx might be a good choice. With Nginx you can actually do without Varnish; it does everything that Varnish does, thanks to its own fast caching module (I haven’t shown that configuration in this article), and will certainly serve all those images at the same speed as a Varnish + Apache configuration with far, far less overhead…

You’d certainly want to make a lot of changes to the above configuration, though. With four times as much memory as I’ve used for the tutorial, there are tons of extra tweaks you can do: forking way more processes, both on the Nginx side and on the PHP-FPM side, for example. And increasing the table and key caches for MySQL.

      Since you have a cloud-based server, your cloud provider might allow you to deploy several instances, as long as they globally don’t use more than the allocated memory. So, a more drastic approach would be to split things up — running MySQL on a separate instance and nothing else; placing all images on a separate domain, served by Nginx statically, without running PHP-FPM on it; and duplicate your website among three or four instances with Nginx — Nginx can do load-balancing as well. I think this is the kind of approach used by as well.
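That split-instance idea can be sketched in Nginx configuration. This is a hypothetical fragment, not from the tutorial: the upstream name, the 10.0.0.x addresses, and the domain are placeholder assumptions for instances running identical copies of the site.

```nginx
# Load-balance across several identical WordPress instances.
upstream wp_backends {
    server;   # placeholder instance 1
    server;   # placeholder instance 2

server {
    listen 80;
    server_name www.mydomain.com;

    location / {
        proxy_pass http://wp_backends;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
```

By default Nginx round-robins between the upstream servers; weights and health-check-style parameters (max_fails, fail_timeout) can be added per server line.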

    • Go for it :) Look at the CompareVPS list, pick the cheapest VPS provider on that list which gives you a slice with the same characteristics you already have now and which doesn’t require a long-term plan, and replicate your environment. For around $10 you can do all the testing for a month, compare the results, and then just stop paying.

      In fact, that was my intention when I first wished to try Nginx out; I was thinking of a “temporary” solution while my main hosting provider was fighting the DDoS botnet plague.

      Then, as the results (in my case) were so extraordinary, the “temporary” solution became “permanent”…

  • First, I have to say that this is by far an excellent tutorial.

    One thing, though:
    “a solution using Varnish + Apache + mod_php might beat a very fine-tuned Nginx + PHP-FPM solution.”

I just cannot agree that adding Varnish to Apache and comparing it to Nginx without Varnish proves anything. Add Varnish to Nginx and compare.

    There is also a nice way to speed up your WP sites by using Nginx as reverse proxy in front of your LAMP. :) This works great, too.

    Anyway, thanks for the great article.

@codeforest, there is a heated discussion about this very subject at rtCamp; my point is that the real benefit of both Nginx and Varnish is serving static files blindingly fast, an area where Apache is not so strong.

      Nginx + PHP-FPM are “similar” to Varnish + Apache, with a difference: Nginx “knows” how to fetch web pages directly and serve them (caching is handled by extra rules) — while Varnish does not — while Nginx “does not know” about PHP and needs to contact an extra server, PHP-FPM in this case, to deal with PHP processing.

Why is Nginx so much better than Apache in this case? Consider how it handles, say, WordPress. We all know where the static files are: the wp-content/uploads directory, for example. So we can write one-line rules in Nginx to check for these directories, serve the static content immediately, and cache it in memory. If it’s a PHP script, well, then it has to be passed for processing to PHP-FPM.
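The “one-line rules” mentioned above might look something like this; the exact pattern and expiry are illustrative assumptions, not the tutorial’s tested configuration:

```nginx
# Serve known static WordPress assets directly, with long browser caching.
location ~* ^/wp-content/uploads/.*\.(jpe?g|png|gif|css|js|ico)$ {
    expires max;
    access_log off;
    log_not_found off;
```

Anything not matching such a rule falls through to the PHP handling block and gets passed to PHP-FPM.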

Apache, however, knows “nothing”. It gets a URL. Now the URL gets decoded into its component parts, and Apache will call mod_rewrite to see if there is a rule to handle that content. Sometimes there is, meaning that at least it doesn’t need to load mod_php as well, and just needs to call the module that serves static content back to the user. But sometimes there isn’t, and Apache has to call mod_php first, which will evaluate the URL and figure out it’s static content, but by then it’s too late: Apache has already done a lot of processing in order to figure that out…

      Varnish, as said, is “blind”. It either has things on its cache or not; if not, it will always contact the backend server. It has no concept of the directory structures of the backend webserver, so we cannot tell Varnish to do the trick of serving content directly, bypassing Nginx or Apache. What this means is that on a Varnish + Nginx environment, Varnish will always need to ask Nginx first, even for content that is static. Of course, afterwards, it doesn’t need to ask Nginx anything — it serves from its own cache. But Nginx can do precisely the same. So you basically have two layers of caching doing precisely the same thing, but one — Varnish — is “blind and dumb” and has constantly to open requests to the webserver for serving static content, since it doesn’t know anything about “content”. Nginx, by contrast, is very well aware of the directory structure, and knows when it can handle the content immediately, and when it has to pass to PHP-FPM for further processing.

      Varnish + Apache is another story! Obviously Varnish doesn’t know what content is going to be fetched, and, the first time it asks for static content, Apache will have to do all its complex processing until it figures out it is, indeed, just static content, and pass it along to Varnish. This requires a lot of time — but just once. Afterwards, Varnish will never need to ask Apache again for any static content. This is why Varnish + Apache solutions show such a huge boost in performance. In fact, they work a bit like Nginx + PHP-FPM, where the front-end (Varnish) serves all static content and the back-end (Apache) only deals with PHP.

It also makes some sense to have Nginx + Apache, for the same reason. In theory, since Nginx is “aware” of directories with static content, it might have a slight edge over Varnish the first time the content is asked for — Nginx will be able to serve it directly, while Apache will need to do all its processing until it can hand Varnish a static copy. But in practice this is hardly noticeable — you can always pre-populate the Varnish cache (including “purging” it) as soon as you create new static content, so that there is no “first request penalty”. But Nginx + Apache is a popular solution, too. In fact, from what I read, I believe that the first implementations of Nginx were employed mostly as a static file redirector for Apache, and this was a very popular usage for Nginx, before it acquired the ability to also serve dynamic content via PHP-FPM or similar mechanisms (for Ruby on Rails, for example).

There are intriguingly clever uses of Nginx’s caching, in particular its fastcgi-cache module, which are able to get rid of any cache plugins at the WordPress level. This is rather a curious way of looking at things. There are always two levels of caching at play. One is merely “web caching”, i.e. serving static content and avoiding having the web server and PHP processor do any work. The other level (which is what WP Super Cache, W3 Total Cache, etc. all provide) is “application-level caching”. This is a way to pre-generate whole pages from the existing content, mostly to avoid MySQL hits and complex processing at the application level. W3 Total Cache, for example, stores “.php” files in one of its directories; they are not “static” in the sense that an image is. They are, however, the result of calling a lot of PHP scripts with many database calls to render a whole page, so the PHP processor will not need to do all that work again, but just grab the result, process it, and send the rendered HTML back to Nginx. It can’t be easily cached because it’s still dynamic content.

      Of course, figuring out how exactly to create those pre-rendered pages is the task of application caches, which work “inside” WordPress (as plugins) and have an understanding of what the application is supposed to be doing.

      What this approach with FastCGI cache describes is an alternative method of handling caching. Basically, as far as I understand it, it asks for the PHP-FPM service to run the whole lot of things at WordPress to generate a page, and then caches the dynamically-generated page. You will see the problem here: what about things like cookies, Javascript for counters and analytics, RSS tickers, dynamic slideshows, etc., and all those kinds of things that we expect to be dynamic? Well, the trick is to know how to purge the cache. Let’s imagine you have a website getting hit by a thousand requests per minute. How likely is the dynamic content to change in a single minute? Probably not much. So this approach would do all the work for the first request, but the remaining 999 would all come from Nginx’s cache. The backend — even for PHP content! — will not be questioned again. More than that: even if the backend dies, Nginx will continue to serve the fully-rendered page without giving errors, a feature that no WP-based application cache can offer (CloudFlare provides this service too, btw. No wonder — CloudFlare uses Nginx, too).

      Of course, the problem is that after this minute — and one thousand cached requests later — you need to invalidate the cache, so that dynamic content remains, well, dynamic. There are lots of mechanisms to deal with that, and, on the WordPress side, there is the Nginx Helper plugin which will handle it automatically.
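The mechanics described in the last few paragraphs can be sketched in configuration. This is a rough, untested illustration: the zone name, cache path, socket path, and the one-minute validity are assumptions chosen to match the “thousand requests per minute” scenario above, not settings from the tutorial.

```nginx
# Cache whole PHP-FPM responses inside Nginx (fastcgi-cache sketch).
fastcgi_cache_path /var/run/nginx-cache levels=1:2 keys_zone=WPCACHE:64m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

server {
    listen 80;
    server_name www.mydomain.com;
    root /var/www/wordpress;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_cache WPCACHE;
        fastcgi_cache_valid 200 1m;                      # only the first request per minute does real work
        fastcgi_cache_use_stale error timeout updating;  # keep serving even if the backend dies
```

The fastcgi_cache_use_stale line is what gives the “backend dies, site stays up” behaviour; purging on new content is what a helper plugin such as Nginx Helper automates.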

      So what this means is that you would get far better performance without running a WP cache plugin — because, most of the time, even for dynamic content, PHP-FPM would never be called at all.

      Weird, huh? But I see your objection: WP cache plugins are not only about caching; they’re also about optimisation. W3 Total Cache, for instance, will deal with compression, whitespace removal, minifying and combining JS and CSS, and handling all proper headers to make sure that the browser caches as much as it can. These tricks are fundamental these days, and one cannot live without them. Nginx FastCGI caching would avoid all that.

      Enter Google PageSpeed. A module for Nginx has been released just a month ago. PageSpeed does all that, and much more. If it is installed and configured inside Nginx, it can do all that processing on top of the cached pages, and just deliver the resulting HTML to the client’s browser. No need for any processing at the WordPress backend. Ever.

Why would anyone prefer that approach? After all, we would be doing more effort on Nginx in order to have one plugin less running under WordPress, right? Well, the problem is that the better the WordPress plugin is at optimising resources, the more bloated it is. While I adore W3 Total Cache, I’m pretty well aware that it’s by far the plugin consuming the most resources — and in certain environments, I had to spend a lot of time tweaking things to make sure that at least W3TC was doing its job, because it would immediately pay off afterwards.

Pushing all that work to the Nginx front-end where there is no PHP processing, no database access, and all that for just a few extra lines on the superfast Nginx rule system… well, you can imagine the performance increase you can get: it will easily outperform any other technology you throw at the problem (even taking into account that Google’s PageSpeed has been available for Apache for some time now), and make it impossible to compare with any other benchmarks — it’s simply a completely new way to address an old problem, bearing little resemblance to what we usually see (but not conceptually new; I have worked for several years with commercial CMSes that were doing something very similar as early as 1999. They used Apache on the backend only, to pre-generate everything as static files and deal with purging them at the appropriate times, but the front-end could just be a “dumb” Web server with a tiny footprint serving static content ultra-quickly — after all, the first Web server ever written was just three lines of Perl. Technically, you don’t need more than that, if all you have is static content!).

Still, this area is rather new and has a lot of pitfalls, like making sure that all other plugins that require retrieving remote content — like WordPress stats and Google Analytics — still work as they should. And think about comments — we want them to appear dynamically, especially if they are called via AJAX, as soon as someone presses the “submit” button. Dealing with all those cases is hard.

      The fun bit is to know that it’s possible to have a fully-cached environment just with Nginx + PHP-FPM without using application-level caching plugins — thus shrinking memory requirements even more and reducing CPU usage each time a new page/article is generated — and still dramatically outperform a “classical” Varnish + Apache (with PageSpeed) + W3 Total Cache solution. But would I recommend this approach to potential customers? Not yet. It’s still “too new”. There are possibly too many still unknown pitfalls. A few articles I’ve read from enthusiasts of this kind of solution have reverted back to the “old” way of caching because they couldn’t figure out how to deal with some annoying limitations. So my recommendation would be to take it easy and wait a few more months until things are more researched and put in production.

      But I’m itching to try this combination out on some of my personal websites :) Tee hee :)

      • I can’t agree more about PageSpeed. I read all about it and WOW, when the kinks are finally worked out, it is a complete game changer. I’d even give up a number of plugins that I use on most sites just to have that type of optimization. So far though, I have yet to find a really great how-to for PageSpeed + WordPress, otherwise I’d be running it already.

I’m really hoping that, sooner rather than later, enough devs jump on the PageSpeed bandwagon and get the setup working correctly, to where the average coder like myself will feel comfortable using it.

        *If you are aware of any good PageSpeed ‘how-to’ articles, please do share.

  • Thanks for this great tutorial. I managed to follow it and move my personal blog ( as well as few other sites from my shared web hosting to a Virtual Machine on Windows Azure.

    Had very few hiccups one of them being the following:

    The following rule in /etc/nginx/conf.d/restrictions.conf was actually blocking all the requests.

location ~ /. {
    deny all;
}

Changed the above to the following to make it work:

location ~ /\. {
    deny all;
}

    Otherwise the tutorial is pretty spot on and am very happy with the performance of my site on this new configuration. Thanks once again!

  • This article came at a great time. I have a fairly feature heavy buddypress setup and without being live it’s already dragging and I’m getting harassed by my host. I’m expecting to get a lot of traffic and want to be prepared.

    Should just add nginx to my Apache install, to be run at the same time, or should I choose one? I’m looking at some of your comments, and you’re saying Apache works better with Varnish than nginx so now i’m wondering if there’s a point.

    • This depends mostly on how much memory you have. If you have 4 or more GBytes, you might be better off just placing Varnish in front of your current Apache install, and give Varnish 1 GB (default) of memory for caching. If you cannot afford so much memory, and really need the extra features from Apache, you might be able to use Nginx in front of Apache, and at least benefit from the fast serving of static content (assuming you cannot allocate much memory for Nginx to do memory caching as well). If your host only has little memory for you, then the best option is to get rid of Apache completely and just use Nginx, which has a tiny memory footprint.

  • I’m adding just a note to whoever might be reading this.

    The configuration here uses “minimal” gzip compression. Basically it’s turned on and avoids the bugs with IE6, but… Nginx can do better:

    gzip_vary on;

    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    This should add extra tags for gzip’ed files as well as gzipping included files like CSS, JS, etc.

    There is a reason for not having included this on the tutorial: as mentioned, I use CloudFlare on top of most of my sites — which does gzipping on its own as well as CSS/JS/HTML minifying. It’s pointless to waste precious CPU cycles and extra memory to gzip things to CloudFlare, since they will ask for those only once.

    But if you’re not using CloudFlare, it might be better to add more gzipping to pretty much everything…
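    As a sketch of what a fuller gzip block could look like without CloudFlare in front (the compression level and size threshold here are just reasonable starting values, not anything from the tutorial):

    ```nginx
    gzip on;
    gzip_disable "msie6";    # keep the IE6 workaround mentioned above
    gzip_vary on;            # emit "Vary: Accept-Encoding" for intermediate caches
    gzip_comp_level 5;       # 1-9; higher levels cost more CPU for little gain
    gzip_min_length 256;     # don't bother compressing tiny responses
    gzip_types text/plain text/css application/json application/x-javascript
               text/xml application/xml application/xml+rss text/javascript;
    ```

    (text/html is always compressed by nginx when gzip is on, so it does not need to be listed.)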

  • Fixed ;)

    Took a dig through /var/log/nginx/error.log and found the following:
    > 2013/05/24 14:57:52 [emerg] 8966#0: could not build the map_hash, you should increase map_hash_bucket_size: 32

    I then adjusted and added the following in /etc/nginx/nginx.conf:

    > server_names_hash_bucket_size 128;
    > map_hash_bucket_size 128;

    Everything is now working. Thanks again for a fantastic tutorial with a step by step walk through. Other than this small issue it all seems to work now.

  • Hi Gwyneth,

    Great tutorial! Just one problem for me with the virtual host: before following your tutorial, my WordPress site had an SSL certificate. This is my previous virtual host config:

    server {
        listen 80;
        root /home/fred/web/site-web01;
        index index.html index.htm index.php;
        # Make site accessible from http://localhost/
        location / {
            try_files $uri $uri/ /index.html;
        }
        # PHP support
        location ~ \.php$ {
            include /etc/nginx/fastcgi_params;
            fastcgi_index index.php;
        }
    }

    server {
        listen 443;

        ssl on;
        ssl_certificate /etc/ssl/certs/;
        ssl_certificate_key /etc/ssl/private/;
        root /home/fred/web/site-web01;
        index index.html index.htm index.php;
        # Make site accessible from http://localhost/
        location / {
            try_files $uri $uri/ /index.html;
        }
        # PHP support
        location ~ \.php$ {
            include /etc/nginx/fastcgi_params;
            fastcgi_index index.php;
        }
    }

    But now, with your virtual host:

    map $http_host $blogid {
        default 0; 1;
    }

    server {
        root /var/www/wordpress;
        access_log /var/log/nginx/;
        error_log /var/log/nginx/;
        include conf.d/restrictions.conf;
        include /var/www/wordpress/nginx.conf;
        include conf.d/wordpress-mu.conf;
    }

    I don’t know how I can restore the SSL with your config.
    Thanks for your help!
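    One possible way to restore SSL while keeping the tutorial's layout is a second server block that reuses the same includes; this is only a sketch, and the certificate file names are placeholders (the real paths were truncated in the config above):

    ```nginx
    server {
        listen 443 ssl;
        ssl_certificate /etc/ssl/certs/your-cert.pem;       # placeholder
        ssl_certificate_key /etc/ssl/private/your-key.pem;  # placeholder

        root /var/www/wordpress;
        access_log /var/log/nginx/ssl-access.log;           # placeholder name
        error_log /var/log/nginx/ssl-error.log;             # placeholder name

        include conf.d/restrictions.conf;
        include /var/www/wordpress/nginx.conf;
        include conf.d/wordpress-mu.conf;
    }
    ```

    That way the plain-HTTP and HTTPS blocks share the restrictions and WordPress rules, and only the listen/ssl lines differ.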

  • Hi, this is my second problem: when I try to migrate my previous site with importbuddy.php, I get the error below. I modified the value in php.ini, but it seems to have no effect. Why?

    PHP Timeout or Fatal Error Occurred The page did not finish loading as expected. The most common cause for this is the PHP process taking more time than it has been allowed by your host (php.ini setting max_execution_time). If a PHP error is displayed above this can also cause this error.ImportBuddy Error Code 9021 – Click for more details.


    • Tricky, tricky! I did actually see your comment, and I was having a similar problem — in my case, my backup process took longer than 60 seconds to run, and I was wondering why it was ignoring the max_execution_time on php.ini.

      Apparently there are a few more spots where you need to change the timeouts. What I did (which seemed to fix things in most cases, but sometimes not all…) was the following:

      On /etc/nginx/conf.d/wordpress-mu.conf add the following after fastcgi_pass php5-fpm;:

      fastcgi_connect_timeout 600;
      fastcgi_send_timeout 500;
      fastcgi_read_timeout 500;

      And on /etc/php5/fpm/pool.d/www.conf:

      pm.process_idle_timeout = 600s

      As said, this doesn’t work 100% of the time (especially the config above, which might be ignored by PHP-FPM). Also, once you’ve done all the imports, you should go back to the default settings (basically deleting those lines again): on a VPS with little memory and CPU, it’s a bad idea to allow scripts to run for a long time, because those processes will be “hung” and cannot be reused while they’re waiting for timeouts. Thus the recommendation to allow them to run for just one minute.

      Of course, things like importing huge files or processing backups might need much larger timeouts than the default (I also use a plugin called WP-Filebase, which allows setting up a download service for large files, and that one also requires a lot of processing time). Those are valid reasons for increasing the timeout, especially if it’s done just temporarily or occasionally; afterwards, the values should be reverted back to shorter timeouts.

  • Greetings Gwyneth:

    Great article! Very clear and easy to follow. I, unfortunately, messed something up somewhere along the process.

    Question: What ownership (both user:group) and permissions should be assigned to the various nginx directories and nginx itself?

    I ask, because when I run “nginx -t” I receive the following error:

    nginx: [alert] could not open error log file: open() “/var/log/nginx/error.log” failed (13: Permission denied)
    2013/06/08 19:25:18 [warn] 1516#0: the “user” directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:1

    Despite having “user www-data;” on the first line of /etc/nginx/nginx.conf
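    As a sketch of what to check here (paths follow the tutorial; www-data is the webserver user assumed throughout), the “permission denied” on the log file usually just means nginx -t was run as an unprivileged user:

    ```shell
    # The config test opens /var/log/nginx/error.log, which is root-owned,
    # so it needs to run with sudo; this also silences the "user" warning.
    sudo nginx -t

    # If the log directory ownership really is wrong, this is the usual fix:
    sudo chown -R www-data:www-data /var/log/nginx
    sudo chmod 755 /var/log/nginx
    ```

    (These commands need root on a real server, so they are shown as an ops sketch rather than something to copy blindly.)
    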

  • Hi Gwyneth,

    Awesome article! Trying to migrate my 1000+ blog setup from Apache to Nginx and I almost have it working but my setup is slightly different in that the blog is served from /blog for the sub-domains i.e. ‘’.
    I am guessing that I would need to change the ‘ location / { ‘ directive block to ‘ location /blog { ‘ but would there need to be any other changes made to any of the other elements?
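    Untested, but as a starting sketch for serving WordPress from a /blog subdirectory, the location block would look roughly like this (the /blog prefix is the only assumption here):

    ```nginx
    # WordPress lives under /blog instead of the site root
    location /blog {
        try_files $uri $uri/ /blog/index.php?$args;
    }
    ```

    Note that the PHP location and the /files/ rewrites in wordpress-mu.conf would likewise need the /blog prefix, so there are indeed a few more elements to touch than just this one block.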



  • Fantastic article, very clear and comprehensive, thank you. I just used it to set up nginx/php5/wordpress on a clean install of Debian Wheezy; I am not familiar with any of these technologies, and this was just what I needed.

    One small problem: this was my first time installing WordPress, and your article suddenly jumps to “Running the WordPress self-installer” without explaining how to do that (open …/wp-admin/index.php in a web browser). This held me up for a few minutes.

    One large problem: after following your instructions, php did not work at all. Every php script returned an empty response (blank white page), although html pages were served correctly. No errors were reported in the logs.

    The solution was to uncomment the line
    # fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    in /etc/nginx/conf.d

    This held me up for a couple of hours. Could you please mention that some installations need this change?

    OS: Debian Wheezy
    Nginx: 1.4.1
    Php5: 5.4.4
    WordPress: 3.5.1

  • Great stuff, but just a couple of questions:

    1. I get the dreaded index.php in the middle of my clean permalinks. I can’t seem to find a way to correct this in the nginx config files.

    2. Do the single and multi conf files stored in conf.d only play nice with the wpmu plugin? In other words, are they getting loaded dynamically by nginx, or are we just keeping them there for safekeeping?

  • Just thought I’d share this link to

    I’ve been talking to Branden Lawe (the developer) who has been very supportive during the beta testing of this server setup script.

    For those who might prefer not to dive too deep with server setups (like me) this is really worth checking out.

    I had a server running nginx up and running on Amazon EC2 within a few minutes.

    It’s also free for a year!

  • Hi Gwen, this has been an amazing tutorial that has been razor fast so far, except I have one issue.

    When I set up multisite and a 2nd domain, I followed your instructions with so that it re-maps to using the WordPress plugin. However, whenever I upload any media in the WordPress admin panel from, WordPress creates a permalink that goes to…/file.png instead of…/file.png. Why is this happening, and how can I fix this?

    If it helps, I also cannot access the WordPress admin panel via All it shows is a Nginx splash page.

    • That’s a tricky one (meaning I might not have an answer!). I had the problem you mentioned in the last paragraph happening on a Nginx-WP-multisite install I’m working on, where some of the domains are live, and others not yet (so I rely on /etc/hosts to test the domains that aren’t live yet). Clearly I wished to make sure that all backoffices work, and, to do that, I went to the Options > Domain Mapping panel (from the Network Administration panel), and clicked on the many checkboxes to give me what I wanted :) The “solution” seemed to be having just “Permanent redirect (better for your blogger’s pagerank)” and “User domain mapping page” checked, and the others off.

      But I think that this might not be enough, and that you might need to manually change the domain names on Sites > All Sites > [Edit site] > Options. Search for all URLs there and see if they match what you expect.

      Hope this helps. As said, I’m not quite sure if this is enough.

      • I had the same problem as @aikitect (visiting 2ndarysite/wp-admin was failing to an nginx splash screen), and yes your solution to turn off the ‘redirect administration pages’ solved it.

        but I’m wondering.. if I leave the ‘redirect admin pages’ on (in DomainMappingConfiguration for those following along) can’t I just add to the /etc/nginx/sites-enabled/ server line?

        or,.. alternately posed-question:
        How can I get working without disabling the redirectAdminPages option in DomainMappingConfiguration?

        ps. Your article was the key to me getting as far as I have with this Nginx/WP-mu setup. Thanks so much, and even more so for your answers concerning performance and other tips. I also enjoyed your mentioning CloudFlare use, which I was already doing (but only because I’ve put off setting up my own BIND install).

    • Heh. The first option, of course, would be to run everything in memory — namely, creating a virtual memory filesystem to store MySQL and the WP install, and just leaving the media files on disk.

      Secondly, and perhaps even better, one could activate nginx’s own caching system (yes, it will cache PHP output too, not only static files). There are a few tutorials out there explaining how it’s done. It should also be easier than pushing MySQL/WordPress into a memory-based filesystem, and it avoids the obvious issue with that approach: when the server crashes, you won’t lose everything. In fact, nginx caching is often used for “always on” solutions — nginx will continue to serve your WP site even if MySQL fails (or, to a degree, even if there is a corrupted filesystem on the WP install/media directory).

      Both require a lot more memory than I have available, but the results are impressive.
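      For reference, a minimal fastcgi_cache sketch looks like this; the zone name, sizes, and validity times here are arbitrary choices, not tuned values:

      ```nginx
      # In the http {} block: key index in shared memory, bodies on disk
      fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=wpcache:10m
                         max_size=256m inactive=60m;
      fastcgi_cache_key "$scheme$request_method$host$request_uri";

      # Inside the location that does fastcgi_pass to PHP-FPM:
      fastcgi_cache wpcache;
      fastcgi_cache_valid 200 301 302 10m;
      # Serve stale pages when the backend errors out - the "always on" effect
      fastcgi_cache_use_stale error timeout updating;
      ```

      Logged-in users and POST requests would normally be excluded with a fastcgi_cache_bypass rule on the WordPress login cookies, which most of those tutorials cover.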

  • If anyone’s interested, I’m building a script to automate the installation of MariaDB (the open-source alternative to MySQL… it’s better in many ways and completely compatible with WP), PHP-FPM and nginx.
    On top of that, the script downloads and installs WordPress.
    It’s still a work in progress, and I’d really like your input and advice.
    You can get it here (includes instructions). If you have any recommendations or tweaks that you’d like to see, feel free to fork it, edit it and submit a pull request on GitHub.


  • Hello and thank you so much for the tutorial! I am a little confused by this section of wordpress-mu.conf. Can you elaborate what it’s for?

    location ~ ^/files/(.*)$ {
        try_files /wp-content/blogs.dir/$blogid/$uri /wp-includes/ms-files.php?file=$1;
        # access_log on; log_not_found on; expires max;
    }

    # avoid php readfile()
    location ^~ /blogs.dir {
        alias /var/www/wordpress/wp-content/blogs.dir;
        access_log off; log_not_found off; expires max;
    }

    • When using fastcgi_pass php5-fpm ; you’re actually referring to a block named php5-fpm; if you look closely, that’s what we have before:

      upstream php5-fpm {
          keepalive 8;
          server unix:/var/run/php5-fpm.sock;
      }

      Basically, you’re side-stepping this block and forcing the connection to go directly to the Unix socket for the php5-fpm server. That’s fine! The only reason for having that block is to be able to add further parameters to the upstream server (in the example, changing the keepalive setting). If you have no reason to ever change any of the parameters, you can skip that block and make the upstream call directly.

      If you got an error, I suppose that something is not quite right on the upstream block… maybe something was mistyped there?

  • What’s up with line 13 in your virtual host file?

    include /var/www/wordpress/nginx.conf;

    Looks like you’re keeping a site-specific configuration file in your root directory, which seems fine, but you never mentioned it in the tutorial.

    What’s the purpose of line 33 in multi-site configuration?

    You mention that its not perfect but don’t go into any details about its role in the configuration.

    alias /var/www/wordpress/wp-content/blogs.dir ;

    • Hi @kj_prince, sorry for the late answer. nginx.conf contains specific rules written by the W3 Total Cache plugin. I just mentioned it on the article by saying:

      “Note that the last command is a requirement for W3 Total Cache (that file has to exist and be readable by the webserver’s user).”

      If you’re not using W3 Total Cache, and not planning ever to use it, you can skip that line.

      As to your other question, what does line 33 do: alias /var/www/wordpress/wp-content/blogs.dir;

      … I have to humbly admit I’m not proficient enough with nginx configuration to explain it. The point of that section is to make sure that the blogs.dir directory is not browsable (neither by hackers looking for files they shouldn’t have access to, nor by search ’bots), but that existing files in the media directories which are referred to with a complete path (this will happen with all media embedded in a WP post/page) will be retrieved by nginx directly.

  • I follow these steps to the letter… and for some reason, when I get to the part about installing WordPress, right after I enter sudo -i (and therefore get a root prompt), I’m not able to go to the next directory with cd /var/www/. Terminal kicks the following back at me, telling me there is no such directory:

    -bash: cd: /var/www: No such file or directory

    I’m assuming this should have been created already, seeing as though you keep saying it’s defaulting stuff to this place. However, I do not have this directory yet. Am I supposed to make it now? If I do that, will it mess anything up?

    • You’re quite right and I do apologise for that, my text wasn’t clear.

      Apache does indeed keep all its websites under /var/www. I mention in the text that I’m following a similar organisation, because, if at some point, you wish to move back to Apache, it will be good to have things already in the right place. Nginx does, indeed, create different directories for storing the website, but for this tutorial, I’m assuming that you’ll do a directory structure similar to Apache’s.

      This means that it should have been more clear that you need, indeed, to create /var/www (e.g. mkdir /var/www)

      • How would I enable wildcard subdomains here? Your tutorial involves a process of manually adding site IDs into a configuration file. I want these things created on the fly, without having to update files like that for a site to be created — a way of automating the process, like all of my previous shared hosts have done for me.

        When I make a multisite with this configuration above, I’m not able to simply visit the dashboard right away. How would I get that set up? I’m assuming it’s easy, but that could just be my ignorance talking.

      • Actually… I think I may have figured out what went wrong there so far. My A record was pointed at my server… but not my name servers. As a result, the *.domain CNAME record I have with the VPS wasn’t even working.

        I’ll wait for that to change over, and it’ll probably fix that.

        One question for you though: I’ve activated the network with a address. I’d like this to be instead. The reason I didn’t set it up with that domain is because it’s a live site already, and the primary business site for my company. I didn’t want it to go down while I experimented with servers and stuff.

        How easy is it, now that I’ve made it all the way through your tutorial (the only thing I can’t get working is phpmyadmin)… how would I go about changing the domain name of the existing network, if there are no subsites activated yet (I’d delete them).

        Appreciate your valuable time.

        – Charlie

        • The easy way: point both and to the same address of your server, and just change the mydomain.conf line to something like:

          server {
          }

          or even

          server {
          server_name * *;
          }

          since I’m guessing you’re going to use multisite subdomains.

          This, of course, will ‘redirect’ everything that goes to to instead.

          However, I think that you wish precisely the reverse, i.e. have all sites under instead.

          Do the above changes on the nginx configuration and make sure you have backups of the database first, and try the following instructions out with the /etc/hosts ‘trick’ before you make any final changes on the DNS!

          If you’re comfortable with using phpmyadmin (or a similar MySQL editing tool), look for all entries on the database that have and manually change them to instead. These are usually two, on the table wp_options, and they’re named siteurl and home. Save the changes and try to access the sites now with instead.

          You can also follow the tutorial on the WordPress Codex. As you can see, there are plenty of options to change the URL of your websites.
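          For the phpmyadmin-averse, the same two changes can be made from the MySQL command line. This is a sketch only: “wordpress” is the tutorial’s database name, and example.org/example.net stand in for your real old and new domains:

          ```shell
          mysql -u root -p wordpress <<'SQL'
          -- Rewrite the siteurl and home options to point at the new domain
          UPDATE wp_options
             SET option_value = REPLACE(option_value, 'example.org', 'example.net')
           WHERE option_name IN ('siteurl', 'home');
          SQL
          ```

          On a multisite install the domain also lives in the wp_site and wp_blogs tables, so those would need the same treatment — and, as said above, keep a database backup before touching anything.
          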

          • Yes, you are correct. I’d like to do the opposite. Redirect all traffic to .net when it comes to .org. Is it a matter of the order I place these things?

            When I tried the switch, both the .net and .org domain get “Welcome to Nginx!” again.

            My main site isn’t down. I adjusted my computer’s host files. Perhaps I’ve missed something here. I’m basically configured exactly as you have it above in the tutorial, except I now want to change the domain of the multisite installation, and import the database from the existing install on the other host, into my new install at the Digital Ocean VPS

            I figured my workflow would be:
            1) Create snapshot of droplet so mistakes can be undone quickly
            2) Export existing database from Shared host
            3) Change host files on computer so that I see where .org is already
            4) Dump database tables of existing multisite install at Digital Ocean
            5) Import database tables for existing install at shared host.
            6) Change wp-config on existing multisite files to match database and url information for on the droplet
            7) Change .conf file in nginx for the site
            8) Restart nginx
            9) Try to access the site.

            Am I missing any steps there? I shouldn’t have to run through phpmyadmin for tables, because I’m loading the tables that already existed with the proper domain name.

          • I guess what I’m ultimately wondering here is: since my “original domain” was when setting up the server from scratch, are there any other files which need to have this updated, beyond what WordPress files need to know? All my WordPress files are now updated for on the new server, and it’s now pointing to a new database with all the previous tables imported into it. Yet I’m still getting the “Welcome to Nginx” screen, so the domain clearly isn’t reaching its ultimate destination yet. It’s still just pointing at the server only.

          • Also, I don’t know if this really matters all that much (since I’m not moving WordPress core files from one server to another – only the database and wp-content files), but I’m moving from a shared host with Apache2 installed, to a VPS with Nginx. I know things are handled differently there, but I’m wondering if that might have something to do with things.

            I’ve changed hostnames, wp-config, mysite.conf, and another instance on the server as well which had .org instead of .net. I simply replaced everything on the server I could find that has .org, replacing it with .net.

  • If you have had the patience to read all the comments so far, here is a small bonus for you. Recently I saw another tutorial on ‘best practices’ in nginx configuration with WordPress, and someone mentioned that WordPress, although fully nginx-compatible, somehow thinks that nginx cannot do pretty URLs (read: URL rewrites) and, as such, does them in PHP.

    Needless to say, one of the areas where nginx truly shines and is blazingly fast is parsing URLs and accessing files directly. ‘Forcing’ all URL prettifying (especially for requests referring to static media) to go through the PHP processor, when you have an ultra-fast webserver on top of it, is really a waste of resources!

    In fact, a new section appeared on the entry for nginx on the WordPress Codex. I’m pasting it here for reference, because this is something you really should do:

    URL Rewrites / Permalinks

    WordPress includes checks for Apache mod_rewrite before enabling permalinks. This check will fail on nginx, which can leave ‘index.php’ in the permalink structure.

    To force WordPress to enable permalinks completely, add the following to a plugin or use Nginx Helper plugin. Nginx Helper also provides support for Nginx Map

    add_filter( 'got_rewrite', '__return_true' );

    If placed in an MU plugin, like ‘/wp-content/mu-plugins/nginx.php’, this code will not be accidentally disabled. Also, WordPress 3.0 or higher is required to have a filter ‘__return_true’.

    So, what I do now for every site is:

    cd /var/www/[your website directory]
    mkdir wp-content/mu-plugins
    echo "<?php add_filter( 'got_rewrite', '__return_true' ); ?>" > wp-content/mu-plugins/nginx.php

    But I haven’t yet tried this out on multisite installs. As soon as I do that, I’ll add that as an additional step on the tutorial. For now, I’d be happy if you could test it out and see if you notice an improvement. I can report that the improvement I saw on single-site installs seemed to be mostly a decrease in CPU usage (I have so many caching levels that it seems to be hard to figure out the difference).

  • First of all: great tutorial, kudos on this!
    I could use a bit of help here, please :) I keep getting the “Welcome to nginx” page.
    I can’t seem to figure this out… thanks!

    1. /etc/nginx/nginx.conf

    #Generic startup file;
    #user {user} {group};

    user www-data;

    #usually equal to number of CPUs you have. run command “grep processor /proc/cpuinfo | wc -l” to find it
    worker_processes 2;

    error_log /var/log/nginx/error.log;
    pid /var/run/;

    # Keeps the logs free of messages about not being able to bind().
    daemon off;

    events {
        worker_connections 1024;
    }

    http {
    # rewrite_log on;

    include mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log;
    sendfile on;
    keepalive_timeout 3;
    # tcp_nodelay on;
    # tcp_nopush on;
    # gzip on;

    #php max upload limit cannot be larger than this
    client_max_body_size 13m;
    index index.php index.html index.htm;

    # Upstream to abstract backend connection(s) for PHP.
    # this should match value of “listen” directive in php-fpm pool
    upstream php {
        server unix:/var/run/php-fpm.sock;
    }

    include /etc/nginx/sites-enabled/*;
    } # end of the http block

    2. /etc/nginx/sites-available/mydomain.conf

    #Redirect everything to the main site. We use a separate server statement and NOT an if statement – see

    server {
        server_name _;
        rewrite ^ $scheme://$request_uri redirect;
    }

    server {
        root var/www/wordpress;

        access_log /var/log/nginx/;
        error_log /var/log/nginx/;

        include conf.d/restrictions.conf;
        include /var/www/wordpress/nginx.conf;
        include conf.d/wordpress-mu.conf;
    }


  • thanks for the response

    I have checked my location and it’s OK; the domain is fictitious, and I have tried to use “localhost” in its place, but still nothing.

    sudo nginx -t
    nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
    nginx: configuration file /etc/nginx/nginx.conf test is successful

    and no its not just a copy paste

    still have that Welcome to nginx staring back at me

    • Hm. I’m a little bit stumped with your question, but I’m pretty sure I have seen this before. Basically, nginx is not recognizing as a virtual host. So when DNS resolves, it correctly points to the IP address of your server, but nginx ‘thinks’ it should return the default website for that server instead of the virtual host for that domain.

      Try to delete the lines saying:

      server {
      server_name _;
      rewrite ^ $scheme://$request_uri redirect;
      }

      and see what happens! Do that on every virtual host you’re using on your server, if you have more than one.

    • You most certainly can do that :) This tutorial is more focused on Multisite WordPress, because, well, you know, it’s called WPMU… lol

      Single-site WordPress is actually a bit simpler to configure. If you followed the tutorial, you will have also configured /etc/nginx/conf.d/wordpress.conf, which is used for the single-site WP configuration.

      Do all the above for the multisite configuration as before.

      Unarchive WordPress as before on the directories /var/www/mysinglesite1 and /var/www/mysinglesite2.

      Now create two files under the nginx configuration tree.

      First one will be named /etc/nginx/sites-available/mysinglesite1.conf (change server_name for the domain you’re adding):

      server {
          root /var/www/mysinglesite1;

          access_log /var/log/nginx/mysinglesite1-access.log;
          error_log /var/log/nginx/mysinglesite1-error.log;

          include conf.d/restrictions.conf;
          include /var/www/mysinglesite1/nginx.conf;
          include conf.d/wordpress.conf;
      }

      and of course for the other site (save the file as /etc/nginx/sites-available/mysinglesite2.conf):

      server {
          root /var/www/mysinglesite2;

          access_log /var/log/nginx/mysinglesite2-access.log;
          error_log /var/log/nginx/mysinglesite2-error.log;

          include conf.d/restrictions.conf;
          include /var/www/mysinglesite2/nginx.conf;
          include conf.d/wordpress.conf;
      }

      Now link them under sites-enabled:

      ln -s /etc/nginx/sites-available/mysinglesite1.conf /etc/nginx/sites-enabled/mysinglesite1.conf
      ln -s /etc/nginx/sites-available/mysinglesite2.conf /etc/nginx/sites-enabled/mysinglesite2.conf

      That should be all. This, again, assumes that you might use W3 Total Cache in the future, so make sure you do on the terminal (running as the webserver user):

      touch /var/www/mysinglesite1/nginx.conf
      touch /var/www/mysinglesite2/nginx.conf

      to create those files (initially empty).

      Then reload nginx. It ‘knows’ that it should launch each and every site configured under sites-enabled as separate virtual hosts.

      Of course, this simple configuration assumes that every web domain in your server is under your full control. If you need to give separate permissions for different users to log in via SFTP to add/remove files, etc., and wish to make sure that nginx runs under the proper permissions for each user, then you’re looking at a much more complex configuration!

      In fact, for those complex cases, I’m lazy and just use ISPConfig3 :-)

  • Yay+ for the ISPConfig3 mention :) Absolutely wonderful tutorial and follow-up thread. I only just discovered it, intend to test out your instructions, and would be interested in scripting this up for Ubuntu 14.04 on 256/512 MB DO droplets and similar small VPSs or local docker/lxc containers.

    However, I always use something like the $domain regex below to handle vhosts without having to use separate conf files in sites-enabled/. It generally works well (except that $domain will not work in an ssl_certificate line) and I’d like to explore adapting WPMU to this vhost-conf-less approach. I add an /etc/passwd entry for a different user per domain and use that UID:GID in the fpm pools config (idea from ISPConfig3) to provide user isolation, plus a simple tweak to sshd_config to force both ssh and sftp chroots per user/domain (rather than the cumbersome Jailkit).

    If this $domain regex approach could work with WPMU, it may go quite some way towards automating the setup procedure and minimising config steps. I also like the idea of using the native nginx cache system as much as possible to reduce WP plugins, most likely based on ppa:sandyd/nginx-current-pagespeed.

    Just wondering if your good self or anyone else might be interested in coop’ing a small github project, or just a gist, for this script as my WP-fu is limited?

    server {
        listen 443 ssl spdy;
        server_name ~^(?<domain>.+)$;
        root /home/ns/$domain/var/www;
        location ~ \.php$ {
            try_files $uri /index.php =404;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass unix:/run/fpm-$domain.sock;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param SERVER_NAME $domain;
            fastcgi_intercept_errors on;
        }
    }

    • There are a few things you can certainly do!

      On /etc/nginx/nginx.conf,

      Increase worker_processes to 4 (you have 4 cores)
      Increase worker_connections to 768 or even 1024 (your system should be able to handle that many)
      Increase client_max_body_size to 64M (allows for larger file uploads)

      If you expect a lot of traffic, it’s better to use TCP/IP sockets than Unix sockets. That requires replacing the references to ‘unix:/var/run/php5-fpm.sock’ with something like ‘’

      On /etc/php5/fpm/pool.d/www.conf change the following:

      listen =
      listen.allowed_clients =

      pm = dynamic
      pm.max_children = 10
      pm.start_servers = 2
      pm.min_spare_servers = 1
      pm.max_spare_servers = 5
      pm.max_requests = 0

      On /etc/php5/fpm/php.ini you can change:
      memory_limit = 256M
      upload_max_filesize = 64M

      That should give you a little more breathing space. Of course, the more powerful your hardware is, the more you can tweak those parameters!

  • Thanks for the great tutorial. Just one problem, related to SEO: I can’t get an XML sitemap running with this setup. No plugin can write it to the public directory of the site (…/wordpress/). I suspect it has something to do with the Nginx configuration preventing the sitemap plugins from writing a file like “sitemap.xml”. Maybe I have to adjust the OS’s read/write permissions?
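    If it is indeed a permissions problem, something along these lines would usually show and fix it; this is a sketch using the tutorial’s paths and its www-data webserver user:

    ```shell
    # Check who owns the web root; PHP-FPM runs as www-data in this tutorial,
    # so plugins can only create sitemap.xml if www-data can write here.
    ls -ld /var/www/wordpress

    # Give the webserver user ownership of the install so plugins can write
    sudo chown -R www-data:www-data /var/www/wordpress
    ```

    (Sitemaps are usually a WordPress-side issue rather than an nginx one; nginx only reads files, it doesn’t stop PHP from writing them.)
    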

Comments are closed.