Month: May 2017

Machine Learning – Basic Implementation (Part I – Linear Regression)



This post is part of a series on implementing specific machine learning algorithms to solve a problem. This first part walks you through a basic linear regression implementation, step by step, and shows how to tune each stage of the process to optimise the model.


You should already have an overview of the mathematics behind linear regression and be familiar with terms such as features, gradient descent, and cost function.
We also assume that all data has been collected and cleaned before any implementation starts.


1. Normalize data
Because features come in a variety of scales, we should normalize the data before doing any visualisation. This is easy to do with the Python numpy and pandas libraries:

import numpy as np

def normalize_features(df):
    """Normalize the features in the data set to zero mean and unit variance."""
    mu = df.mean()
    sigma = df.std()
    if (sigma == 0).any():
        raise Exception("One or more features had the same value for all samples, and thus could " + \
                        "not be normalized. Please do not include features with only a single value " + \
                        "in your model.")
    df_normalized = (df - mu) / sigma

    return df_normalized, mu, sigma
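As a quick sanity check, here is the same helper exercised on a tiny hand-made frame (the column names below are only for illustration):

```python
import numpy as np
import pandas as pd

def normalize_features(df):
    """Scale every column to zero mean and unit standard deviation."""
    mu = df.mean()
    sigma = df.std()
    if (sigma == 0).any():
        raise Exception("One or more features had the same value for all samples.")
    return (df - mu) / sigma, mu, sigma

df = pd.DataFrame({'Hour': [0, 6, 12, 18], 'precipi': [0.0, 0.1, 0.0, 0.3]})
df_normalized, mu, sigma = normalize_features(df)
print(df_normalized['Hour'].mean())  # ~0.0
print(df_normalized['Hour'].std())   # ~1.0
```

Keep mu and sigma around: new data must be scaled with the training set's statistics, not its own.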

2. Visualize data
Of course, the first step before choosing any model to train is to see how the data (x, y) interact in a graph. Linear regression is likely the correct algorithm if your data (or the graph of your data) satisfies all the criteria below:

  • The scatter of points should lie around the best-fit line with the same standard deviation all along the line. If many points fall too far (high or low) from the best-fit line, linear regression is probably not appropriate.
  • The measurement of X (the features) should be exact. Any imprecision in measuring X should be very small compared to the biological variability of Y.
  • The data points should be independent of each other: a change in one observation should not change the others.
  • There should be a genuine correlation between X and Y, not a definitional one. For example, midterm score vs. total score: since the midterm score is a parameter (or component) used to calculate the total score, linear regression is not valid for that data.
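The first criterion can even be checked numerically: fit a line, then look at how the residuals spread. A sketch on synthetic data (made-up coefficients and noise level):

```python
import numpy as np

rng = np.random.RandomState(42)
x = np.linspace(0, 10, 200)
y = 3.0 * x + 1.0 + rng.normal(0, 2.0, 200)  # linear trend + constant-spread noise

# best-fit line via least squares
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)

# constant spread: residual std on the left half vs the right half should be similar
left_std = residuals[:100].std()
right_std = residuals[100:].std()
print(slope, intercept)
print(left_std, right_std)  # both close to the true noise level of 2.0
```

If the two residual spreads differ a lot, or the residuals show a visible curve, one of the criteria above is violated.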

3. Gradient descent vs statsmodels OLS
Before talking about tuning the model, we get started with a basic step: using gradient descent and statsmodels OLS to find a first set of parameters theta. This set of theta might not be the best, but it provides an overview of how to tune the model later.

All the cleaned data can be downloaded from here: Turnstile Data of New York Subway
By gradient descent

def compute_cost(features, values, theta):

    m = len(values)
    sum_of_square_errors = np.square(, theta) - values).sum()
    cost = sum_of_square_errors / (2 * m)
    return cost

def gradient_descent(features, values, theta, alpha, num_iterations):

    m = len(values)
    cost_history = []

    for i in range(num_iterations):
        predicted_values =, theta)
        theta = theta - alpha / m * - values, features)
        cost = compute_cost(features, values, theta)
        cost_history.append(cost)
    return theta, pandas.Series(cost_history)

def predictions(dataframe):

    # Select features (try different features!) - here, all weather features to predict ENTRIESn_hourly
    features = dataframe[['Hour', 'maxpressurei','maxdewpti','mindewpti', 'minpressurei','meandewpti','meanpressurei', 'meanwindspdi','mintempi','meantempi', 'maxtempi','precipi']]
    dummy_units = pandas.get_dummies(dataframe['UNIT'], prefix='unit')
    features = features.join(dummy_units)

    # Values - or y in the model
    values = dataframe['ENTRIESn_hourly']
    m = len(values)

    features, mu, sigma = normalize_features(features)
    # Add a column of 1s (for theta0)
    features['ones'] = np.ones(m)

    # Convert features and values to numpy arrays
    features_array = np.array(features)
    values_array = np.array(values)

    # Learning rate
    alpha = 0.1
    # Number of gradient descent iterations to run
    num_iterations = 15000

    # Initialize theta, perform gradient descent
    theta_gradient_descent = np.zeros(len(features.columns))
    theta_gradient_descent, cost_history = gradient_descent(features_array, values_array,
                                                            theta_gradient_descent,
                                                            alpha, num_iterations)

    predictions =, theta_gradient_descent)
    # Coefficient of determination
    r = math.sqrt(compute_r_squared(values_array, predictions))
    return predictions
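The compute_r_squared helper called above is not shown in this post; a standard definition of the coefficient of determination (my reconstruction, not necessarily the exact original code) is:

```python
import numpy as np

def compute_r_squared(values, predictions):
    # R^2 = 1 - SS_res / SS_tot
    ss_res = np.square(values - predictions).sum()
    ss_tot = np.square(values - values.mean()).sum()
    return 1 - ss_res / ss_tot

values = np.array([1.0, 2.0, 3.0, 4.0])
print(compute_r_squared(values, values))                     # 1.0 for perfect predictions
print(compute_r_squared(values, np.full(4, values.mean())))  # 0.0 for always predicting the mean
```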

By statsmodels OLS

def predictions(df_in):
    # select the features to use
    #feature_names = ['meantempi', 'Hour']
    feature_names = ['Hour', 'maxpressurei','maxdewpti','mindewpti', 'minpressurei','meandewpti','meanpressurei', 'meanwindspdi','mintempi','meantempi', 'maxtempi','precipi']

    # initialize the Y values
    Y = df_in['ENTRIESn_hourly']

    # initialize the X features by adding dummy units, standardizing, and adding a constant
    dummy_units = pd.get_dummies(df_in['UNIT'], prefix='unit')
    X = df_in[feature_names].join(dummy_units)
    X, mu, sigma = normalize_features(X)

    # adding a constant to the model improves it a little bit
    X = sm.add_constant(X)

    # ordinary least squares model
    model = sm.OLS(Y, X)

    # fit the model
    results =
    prediction = results.predict(X)

    return prediction
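statsmodels does the heavy lifting here, but the same fit can be sketched with plain numpy to see what OLS actually solves (synthetic data; the coefficients are made up for the example):

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.uniform(0, 10, 200)
y = 3.0 * x + 5.0 + rng.normal(0, 0.5, 200)

# design matrix with a constant column (this is what sm.add_constant does)
X = np.column_stack([np.ones_like(x), x])

# ordinary least squares: minimize ||X @ theta - y||^2
theta, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
print(theta)  # roughly [5.0, 3.0]
```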

  • Both techniques require adding dummy variables to isolate categorical features. This improves the model a lot.
  • Adding a constant in statsmodels (a mean shift of Y by a constant value) is not meaningless, but it only improves the model a little bit.
  • Both approaches use the coefficient of determination to validate the model; a value close to 1 means a better model.

(to be continued)

How to Set Up a shared folder using Samba on Ubuntu with auto mount



One of the most popular questions long-time Linux users have been asked is, “How do I create a network share that Windows can see?”

The best solution for sharing Linux folders across a network is a piece of software that’s about as old as Linux itself: Samba.

In this guide, we’ll cover how to configure Samba mounts on an Ubuntu 14.04 server.


For the purposes of this guide, we are going to refer to the server that is going to be sharing its directories as the host and the server that will mount these directories as the client.

In order to keep these straight throughout the guide, I will be using the following IP addresses as stand-ins for the host and server values:

  • Host: <host_ip>
  • Client: <client_ip>

You should substitute the values above with your own host and client values.

Download and Install the Components

Samba is a popular Linux tool, and chances are you already have it installed. To see if that’s the case, open up a terminal and test to see if its configuration folder exists:


$ ls -l /etc/samba

If Samba hasn’t been installed, let’s install it first.


$ sudo apt-get update
$ sudo apt-get install samba

Configuring network share on host

Set a password for your user in Samba


$ sudo smbpasswd -a <user_name>

Note: Samba uses a separate set of passwords from the standard Linux system accounts (stored in /etc/samba/smbpasswd), so you’ll need to create a Samba password for yourself. This tutorial assumes that you will use your own user; it does not cover situations involving other users’ passwords, groups, etc…

Tip1: Use the same password as your own user to make things easier.

Tip2: Remember that your user must have permission to write and edit the folder you want to share. Eg.:

$ sudo chown <user_name> /var/opt/blah/blahblah
$ sudo chown :<user_name> /var/opt/blah/blahblah

Tip3: If you’re using a user other than your own, it needs to exist in your system beforehand; you can create it without shell access using the following command:

$ sudo useradd USERNAME --shell /bin/false

You can also hide the user on the login screen by adjusting lightdm’s configuration: in /etc/lightdm/users.conf, add the newly created user to the hidden-users line.


Create a directory to be shared


$ mkdir /home/<user_name>/<folder_name>

After Samba is installed, a default configuration file called smb.conf.default can be found in /etc/samba. This file needs to be copied to the same folder under the name smb.conf. Before doing this, it’s worth running the same ls -l /etc/samba command as above to see whether your distro already ships that file. If smb.conf doesn’t exist, become root (sudo -s to retain escalated privileges for the time being, or su on systems without sudo) and copy the default file:


$ cp /etc/samba/smb.conf.default /etc/samba/smb.conf

Make a safe backup copy of the original smb.conf file to your home folder, in case you make an error


$ sudo cp /etc/samba/smb.conf ~

Edit file /etc/samba/smb.conf


$ sudo nano /etc/samba/smb.conf

Once smb.conf has loaded, add this to the very end of the file:


[<folder_name>]
path = /home/<user_name>/<folder_name>
valid users = <user_name>
read only = no

Tip: There should be no blank lines between these lines, and note also that there should be a single space both before and after each of the equals signs.

The [Share Name] is the name of the folder that will be viewable after entering the network hostname (eg: \\LINUXPC\Share Name). The path will be the Linux folder that will be accessible after entering this share. As for the options, there are many. As I mentioned above, the smb.conf file itself contains a number of examples; for all others, there’s a huge page over at the official website to take care of the rest. Let’s cover a couple of the more common ones, though.

guest ok = yes
— Guest accounts are OK to use the share; aka: no passwords.
guest only = yes
— Only guests may use the share.
writable = yes
— The share will allow files to be written to it.
read only = yes
— Files cannot be written to the share, just read.
force user = username
— Act as this user when accessing the share, even if a different user/pass is provided.
force group = groupname
— Act as this usergroup when accessing the share. username = username, username2, @groupname
— If the password matches one of these users, the share can be accessed.
valid users = username, username2, @groupname
— Like above, but requires users to enter their username.

Here are a couple of example shares I use:
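For illustration, a share combining a few of the options above might look like this (the share name, path, and user are made up for the example):

```
[Media]
path = /home/<user_name>/media
writable = yes
guest ok = no
valid users = <user_name>
```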

The force user and force group options are not go-to options, so I’d recommend trying to create a share without them first. In some cases, permission issues will prevent you from writing to certain folders, a situation I found myself in with my NAS mounts and desktop folder. If worst comes to worst, simply add these force options and retest.

Each time the smb.conf file is edited, Samba should be restarted to reflect the changes. On most distros, running this command as sudo (or su) should take care of it:


$ /etc/init.d/samba restart

For Ubuntu-based distros, the service command might need to be used. As sudo:


$ sudo service smbd restart

If neither of these commands work, refer to your distro’s documentation.

Once Samba has restarted, use this command to check your smb.conf for any syntax errors


$ testparm

With Samba all configured, let’s connect to our shares!

Connect to Samba Network Share from clients

Once a Samba share is created, it can be accessed through applications that support the SMB protocol, as one would expect. While Samba’s big focus is Windows, it’s not the only piece of software that can take advantage of it. Many file managers for mobile platforms are designed to support the protocol, and other Linux PCs can mount them as well.

To access your network share from your desktop client, use your username (<user_name>) and password with the path smb://<HOST_IP_OR_NAME>/<folder_name>/ (Linux users) or \\<HOST_IP_OR_NAME>\<folder_name>\ (Windows users). Note that <folder_name> is the share name you entered between brackets ([<folder_name>]) in /etc/samba/smb.conf.

We’ll take a more detailed look at a couple of different solutions for accessing our Samba shares.

Connect to Samba Network Share from Windows client

First things first: The Microsoft side. In Windows, browsing to “Network” should reveal your Linux PC’s hostname; in my particular case, it’s TGGENTOO. After double-clicking on it, the entire list of shares will be revealed.

Network shares are all fine and good, but it’s a hassle to get to one: it requires a bunch of clicking and waiting. Depending on Windows’ mood, the Linux hostname might not even appear on the network. To ease the pain of having to go through all that again, mounting a share from within Windows is a great option. For simple needs, a desktop shortcut might provide quick enough access, but shortcuts are by design quite limited. When a network share is mounted as a Windows network drive, however, it acts just like a normal hard drive.

Mounting a Samba share as a network drive is simple:


$ net use X: \\HOSTNAME\Share
$ net use X: “\\HOSTNAME\Share Name”

Quotation marks surrounding the entire share path are required if there’s a space in its name.

A real-world example:

Connect to Samba Network Share from Linux client

Install smbclient


$ sudo apt-get install smbclient

List all shares on the host:


$ smbclient -L //<HOST_IP_OR_NAME> -U <user>

Connect to a share:


$ smbclient //<HOST_IP_OR_NAME>/<folder_name> -U <user>

Mounting a Samba Network Share from Linux client

You’ll need the cifs-utils package in order to mount SMB shares:


$ sudo apt-get install cifs-utils

After that, just make a directory…


$ mkdir ~/Desktop/Windows-Share

… and mount the share to it.


$ sudo mount -t cifs -o username=username //HOSTNAME/Share /mnt/location

For example


$ sudo mount.cifs //WindowsPC/Share /home/geek/Desktop/Windows-Share -o user=geek

Using this command will provide a prompt to enter a password; this can be avoided by modifying the command to username=username,password=password (not great for security reasons).

In case you need help understanding the mount command, here’s a breakdown:

$ sudo mount.cifs – This is just the mount command, set to mount a CIFS (SMB) share.

WindowsPC – This is the name of the Windows computer.  Type “This PC” into the Start menu on Windows, right click it, and go to Properties to see your computer name.

//WindowsPC/Share – This is the full path to the shared folder.

/home/geek/Desktop/Windows-Share – This is where we’d like the share to be mounted.

-o user=geek – This is the Windows username that we are using to access the shared folder.

I recommend mounting the network share this way before putting it into /etc/fstab, just to make sure all is well. Afterwards, your system’s fstab file can be edited with root privileges (sudo or su) to automatically mount the share at boot.

Adding values for a network share to fstab isn’t complicated, but making sure they’re the correct values can be. Depending on the network, the required hostname might have to be the text value (TGGENTOO) or an IP address. Some trial and error can be expected, and at worst, a trip to your distro’s website might have to be taken.

Here are two examples of what could go into an fstab:

//<host_ip>/Storage /mnt/nas-storage cifs username=guest,uid=1000,iocharset=utf8 0 0
//TGGENTOO/Storage /mnt/nasstore cifs username=guest,uid=1000,iocharset=utf8 0 0

If a given configuration proves problematic, make sure that the Samba share can be mounted outside of fstab, with the mount command (as seen above). Once a share is properly mounted that way, it should be a breeze translating it into fstab speak.


Samba provides a quick and easy way to share files and folders between Windows and Linux machines.

If you only want a network share between Linux machines, you can also Set Up a Network Share using NFS.

If you want a distributed network share with high availability and automatic replication, you can Set Up a Network Share using GlusterFS.

Samba is said to provide the best performance for smaller files, but I haven’t done a benchmark myself so I can’t guarantee anything.

Hope you find this post helpful and get your network share running. Until next time!

How To Set Up a shared folder using NFS on Ubuntu 14.04 with auto mount



NFS, or Network File System, is a distributed filesystem protocol that allows you to mount remote directories on your server. This allows you to leverage storage space in a different location and to write to the same space from multiple servers easily. NFS works well for directories that will have to be accessed regularly.

In this guide, we’ll cover how to configure NFS mounts on an Ubuntu 14.04 server.


In this guide, we will be configuring directory sharing between two Ubuntu 14.04 servers. These can be of any size. For each of these servers, you will have to have an account set up with sudo privileges. You can learn how to configure such an account by following steps 1-4 in our initial setup guide for Ubuntu 14.04 servers.

For the purposes of this guide, we are going to refer to the server that is going to be sharing its directories as the host and the server that will mount these directories as the client.

In order to keep these straight throughout the guide, I will be using the following IP addresses as stand-ins for the host and server values:

  • Host: <host_ip>
  • Client: <client_ip>

You should substitute the values above with your own host and client values.

Download and Install the Components

Before we can begin, we need to install the necessary components on both our host and client servers.

On the host server, we need to install the nfs-kernel-server package, which will allow us to share our directories. Since this is the first operation that we’re performing with apt in this session, we’ll refresh our local package index before the installation:


$ sudo apt-get update
$ sudo apt-get install nfs-kernel-server

Once these packages are installed, you can switch over to the client computer.

On the client computer, we’re going to have to install a package called nfs-common, which provides NFS functionality without having to include the server components. Again, we will refresh the local package index prior to installation to ensure that we have up-to-date information:


$ sudo apt-get update
$ sudo apt-get install nfs-common

Create the Share Directory on the Host Server

We’re going to experiment with sharing two separate directories during this guide. The first directory we’re going to share is the /home directory that contains user data.

The second is a general purpose directory that we’re going to create specifically for NFS so that we can demonstrate the proper procedures and settings. This will be located at /var/nfs.

Since the /home directory already exists, go ahead and start by creating the /var/nfs directory:


$ sudo mkdir /var/nfs

Now, we have a new directory designated specifically for sharing with remote hosts. However, the directory ownership is not ideal. We should give the user ownership to a user on our system named nobody. We should give the group ownership to a group on our system named nogroup as well.

We can do that by typing this command:


$ sudo chown nobody:nogroup /var/nfs

We only need to change the ownership on our directories that are used specifically for sharing. We wouldn’t want to change the ownership of our /home directory, for instance, because it would cause a great number of problems for any users we have on our host server.

Configure the NFS Exports on the Host Server

Now that we have our directories created and assigned, we can dive into the NFS configuration file to set up the sharing of these resources.

Open the /etc/exports file in your text editor with root privileges:


$ sudo nano /etc/exports

The files that you see will have some comments that will show you the general structure of each configuration line. Basically, the syntax is something like:

directory_to_share     client(share_option1, ... , share_optionN)

So we want to create a line for each of the directories that we wish to share. Since in this example our client has an IP of <client_ip>, our lines will look like this:


/home           <client_ip>(rw,sync,no_root_squash,no_subtree_check)
/var/nfs        <client_ip>(rw,sync,no_subtree_check)

We’ve explained everything here but the specific options we’ve enabled. Let’s go over those now.

  • rw: This option gives the client computer both read and write access to the volume.
  • sync: This option forces NFS to write changes to disk before replying. This results in a more stable and consistent environment, since the reply reflects the actual state of the remote volume.
  • no_subtree_check: This option prevents subtree checking, which is a process where the host must check whether the file is actually still available in the exported tree for every request. This can cause many problems when a file is renamed while the client has it opened. In almost all cases, it is better to disable subtree checking.
  • no_root_squash: By default, NFS translates requests from a root user remotely into a non-privileged user on the server. This was supposed to be a security feature by not allowing a root account on the client to use the filesystem of the host as root. This directive disables this for certain shares.

When you finish making your changes, save and close the file.

Next, you should create the NFS table that holds the exports of your shares by typing:


$ sudo exportfs -a

However, the NFS service is not actually running yet. You can start it by typing:


$ sudo service nfs-kernel-server start

This will make your shares available to the clients that you configured.

Create the Mount Points and Mount Remote Shares on the Client Server

Now that your host server is configured and making its directory shares available, we need to prep our client.

We’re going to have to mount the remote shares, so let’s create some mount points. We’ll use the traditional /mnt as a starting point and create a directory called nfs under it to keep our shares consolidated.

The actual directories will correspond with their location on the host server. We can create each directory, and the necessary parent directories, by typing this:


$ sudo mkdir -p /mnt/nfs/home
$ sudo mkdir -p /mnt/nfs/var/nfs

Now that we have some place to put our remote shares, we can mount them by addressing our host server, which in this guide is <host_ip>, like this:


$ sudo mount <host_ip>:/home /mnt/nfs/home
$ sudo mount <host_ip>:/var/nfs /mnt/nfs/var/nfs

These should mount the shares from our host computer onto our client machine. We can double check this by looking at the available disk space on our client server:


$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/vda               59G  1.3G   55G   3% /
none                  4.0K     0  4.0K   0% /sys/fs/cgroup
udev                  2.0G   12K  2.0G   1% /dev
tmpfs                 396M  324K  396M   1% /run
none                  5.0M     0  5.0M   0% /run/lock
none                  2.0G     0  2.0G   0% /run/shm
none                  100M     0  100M   0% /run/user
<host_ip>:/home        59G  1.3G   55G   3% /mnt/nfs/home

As you can see at the bottom, only one of our shares has shown up. This is because both of the shares that we exported are on the same filesystem on the remote server, meaning that they share the same pool of storage. In order for the Avail and Use% columns to remain accurate, only one share may be added into the calculations.

If you want to see all of the NFS shares that you have mounted, you can type:


$ mount -t nfs
<host_ip>:/home on /mnt/nfs/home type nfs (rw,vers=4,addr=<host_ip>,clientaddr=<client_ip>)
<host_ip>:/var/nfs on /mnt/nfs/var/nfs type nfs (rw,vers=4,addr=<host_ip>,clientaddr=<client_ip>)

This will show all of the NFS mounts that are currently accessible on your client machine.

Test NFS Access

You can test the access to your shares by writing something to your shares. You can write a test file to one of your shares like this:


$ sudo touch /mnt/nfs/home/test_home

Let’s write a test file to the other share as well to demonstrate an important difference:


$ sudo touch /mnt/nfs/var/nfs/test_var_nfs

Look at the ownership of the file in the mounted home directory:


$ ls -l /mnt/nfs/home/test_home
-rw-r--r-- 1 root   root      0 Apr 30 14:43 test_home

As you can see, the file is owned by root. This is because we disabled the root_squash option on this mount, which would otherwise have written the file as an anonymous, non-root user.

On our other test file, which was mounted with the root_squash enabled, we will see something different:


$ ls -l /mnt/nfs/var/nfs/test_var_nfs
-rw-r--r-- 1 nobody nogroup 0 Apr 30 14:44 test_var_nfs

As you can see, this file was assigned to the “nobody” user and the “nogroup” group. This follows our configuration.

Make Remote NFS Directory Mounting Automatic

We can make the mounting of our remote NFS shares automatic by adding it to our fstab file on the client.

Open this file with root privileges in your text editor:


$ sudo nano /etc/fstab

At the bottom of the file, we’re going to add a line for each of our shares. They will look like this:

<host_ip>:/home      /mnt/nfs/home      nfs auto,noatime,nolock,bg,nfsvers=4,intr,tcp,actimeo=1800 0 0
<host_ip>:/var/nfs   /mnt/nfs/var/nfs   nfs auto,noatime,nolock,bg,nfsvers=4,sec=krb5p,intr,tcp,actimeo=1800 0 0

The options that we are specifying here can be found in the man page that describes NFS mounting in the fstab file:


$ man nfs

This will automatically mount the remote partitions at boot (it may take a few moments for the connection to be made and the shares to be available).

Unmount an NFS Remote Share

If you no longer want the remote directory to be mounted on your system, you can unmount it easily by moving out of the share’s directory structure and unmounting, like this:


$ cd ~
$ sudo umount /mnt/nfs/home
$ sudo umount /mnt/nfs/var/nfs

This will remove the remote shares, leaving only your local storage accessible:


$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda         59G  1.3G   55G   3% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev            2.0G   12K  2.0G   1% /dev
tmpfs           396M  320K  396M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            2.0G     0  2.0G   0% /run/shm
none            100M     0  100M   0% /run/user

As you can see, our NFS shares are no longer available as storage space.


NFS provides a quick and easy way to access remote systems over a network. However, the protocol itself is not encrypted. If you are using this in a production environment, consider routing NFS over SSH or a VPN connection to create a more secure experience.

How to add custom css to WordPress


While managing your WordPress, you may sometimes want to edit the styling of some parts of your post or your site. Moreover, you want to edit the CSS or add custom CSS without modifying the theme’s source code. Let’s look at some ways to do that.

Use JetPack’s Custom CSS Editor

If you already have the Jetpack plugin installed, go to your Jetpack plugin settings -> Appearance -> Custom CSS -> Configure your Custom CSS settings, click to open the Custom CSS Editor, then append your custom CSS there and click save.

The custom CSS will be appended to all the pages of your WordPress website.

Use WP Add Custom CSS plugin

WP Add Custom CSS plugin not only lets you add custom CSS to your whole website, but also to any specific posts or pages that you want.

Once the plugin is installed, it will give you an extra edit box in the post editor and page editor to add your custom css.

  • Add custom CSS to the whole website.
  • Add custom CSS to specific posts.
  • Add custom CSS to specific pages.
  • Add custom CSS to specific custom post types (such as WooCommerce products).

Use Simple Custom CSS and JS plugin

Simple Custom CSS and JS plugin not only lets you add custom CSS, but also custom JavaScript to your website.

At the time this post is written, this plugin doesn’t let us add custom CSS and JavaScript to specific pages or posts. However, it lets us manage multiple custom CSS and JavaScript snippets without having to put all of them in the same spot, as with the plugins mentioned above.

  • Manage custom codes
  • Add/edit CSS
  • Add/edit JavaScript

Final words

These plugins let us modify the CSS without touching the theme’s source code.

Which of these options suits you best? Let me know in the comment! 😀

How to add custom slots to your WordPress theme


This tutorial will show you how to add custom slots to your WordPress theme so that you can drag and drop widgets to a location that is not yet defined by your theme.

Custom slots… what?

By custom slots I mean the pre-defined locations on your theme where you can drag and drop widgets to make them appear on your website. As you can see in the screenshot below, the custom slots are the elements wrapped in the red square.

The official term for custom slots in WordPress is sidebar. In the early days of WordPress, you could only drag widgets to the side columns of the website, hence the term sidebar. Then, as WordPress grew, the original so-called sidebar could be placed anywhere on the website: at the top, in the footer, in between posts, etc. However, the name stayed unchanged, and WordPress developers still use that term to refer to the custom slots.

In this tutorial, just to clear the confusion, I will use the term custom slots.

And to make this tutorial more understandable, I will use the Hemingway theme.

After this tutorial, I will have an extra slot as below:

The widgets dragged to that new slot will appear on top of the post column of my Hemingway theme.

Step by step tutorial

Open your theme editor

To add the extra slot to your theme, first you have to open your theme editor.

On your wp-admin side menu, click on Appearance > Editor. Your theme editor will open.

Select the theme you want to edit.

Register your widget slot

Find the code where your theme already registers its slots. Normally it would be in the functions.php file.

After you find the code, copy and insert another block of code for your new custom slot.

As you can see in the screenshot, the theme registers its default slots by adding an action to widgets_init.

add_action( 'widgets_init', 'hemingway_sidebar_reg' );

You can also see that I have added my new custom slot to the hemingway_sidebar_reg function (the block of code in red square).

register_sidebar( array(
    'name' => __( 'Content Column Top', 'hemingway' ),
    'id' => 'content-column-top',
    'description' => __( 'Widgets in this area will be shown on top of content column.', 'hemingway' ),
    'before_title' => '<h3 class="widget-title">',
    'after_title' => '</h3>',
    'before_widget' => '<div class="widget %2$s"><div class="widget-content">',
    'after_widget' => '</div><div class="clear"></div></div>'
) );

The rest of the code in the hemingway_sidebar_reg function is default code that the theme uses to register its original slots: Footer A, Footer B, Footer C, and Sidebar.

Now if you go to your widgets drag and drop screen, the new extra slot should appear there for you to insert widgets.

For the demo purpose, I dragged a Search widget there.

Make the new slot appear on front end

Now that you can configure your new slot, let’s move on to displaying it on the front end.

To do this, open the file that renders the HTML where you want the new slot to appear, and make WordPress render whatever is dragged into the new slot using the function:

<?php dynamic_sidebar( 'content-column-top' ); ?>

To better understand this, look at the screenshot from my website

As you can see, I have inserted some code in the red square, telling WordPress to render the new slot content in between two layers of div.

Now if we go to our front end, the new slot content should appear there. However, the styling may be a little bit messed up, since we have added new elements to the website.

Adding new CSS to your theme

To make the website beautiful again, I added new CSS to the theme’s CSS file to format the new slot.

You can add custom CSS by editing your theme’s CSS file or by using a plugin.

In my case, I edited my theme’s CSS file as below:

Now the new custom slots appears correctly on my front page.

Final words

Adding new custom slots (sidebars) to your own theme may look a little bit different, but you get the idea.

Hope this tutorial can help you customize your theme more conveniently.

Have better ideas? Share with me in the comment!

How to setup a network shared folder using GlusterFS on Ubuntu Servers with backup server array and auto mount



GlusterFS, NFS and Samba are the 3 most popular ways to set up a network shared folder on Linux.

In this tutorial, we will learn how to setup a network shared folder using GlusterFS on Ubuntu Servers.

For demonstration purposes, we will set up the shared folder on two server machines, and mount that shared folder on one client machine.

Each machine has its own IP address; we will map those IPs to the hostnames server1, server2 and client1 in the next step.

Prepare the environment

Configure DNS resolution

In order for our different components to be able to communicate with each other easily, it is best to set up some kind of hostname resolution between each computer.

The easiest way to do this is to edit the hosts file on each computer.

Open this file with root privileges on your first computer:

$ sudo nano /etc/hosts

You should see something that looks like this:

127.0.0.1       localhost client1

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Below the local host definition, you should add each VPS’s IP address followed by the long and short names you wish to use to reference it.

It should look something like this when you are finished (replace each placeholder IP with the machine’s real address):

127.0.0.1       localhost hostname

# add our machines here
server1_ip server1
server2_ip server2
client1_ip client1
# end add our machines

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

The server1/server2/client1 portions of the lines can be changed to whatever names you would like to use to access each computer. We will be using these settings for this guide.

When you are finished, copy the lines you added and add them to the /etc/hosts files on your other computers. Each /etc/hosts file should contain the lines that link your IPs to the names you’ve selected.

Save and close each file when you are finished.
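The same three entries go into every machine’s /etc/hosts. As a sketch, they could be appended with a heredoc; the 10.0.0.x addresses below are placeholders for your real IPs, and hosts.demo stands in for /etc/hosts so the sketch can run anywhere:

```shell
# Append the GlusterFS host entries; hosts.demo stands in for /etc/hosts,
# and the 10.0.0.x addresses are placeholders, not real machines.
cat >> hosts.demo <<'EOF'
# add our machines here
10.0.0.1 server1
10.0.0.2 server2
10.0.0.3 client1
# end add our machines
EOF
# Count the entries we just added
grep -c '^10\.0\.0\.' hosts.demo
```

On the real machines you would target /etc/hosts (with sudo) instead of the demo file.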

Set Up Software Sources

Although Ubuntu 12.04 contains GlusterFS packages, they are fairly out-of-date, so we will be using the latest stable version as of the time of this writing (version 3.4) from the GlusterFS project.

We will be setting up the software sources on all of the computers that will function as nodes within our cluster, as well as on the client computer.

We will actually be adding a PPA (personal package archive) that the project recommends for Ubuntu users. This will allow us to manage our packages with the same tools as other system software.

First, we need to install the python-software-properties package, which will allow us to manage PPAs easily with apt:


$ sudo apt-get update
$ sudo apt-get install python-software-properties

Once the PPA tools are installed, we can add the PPA for the GlusterFS packages by typing:


$ sudo add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.4

With the PPA added, we need to refresh our local package database so that our system knows about the new packages available from the PPA:


$ sudo apt-get update

Repeat these steps on all of the VPS instances that you are using for this guide.

Install Server Components

On our cluster member machines (server1 and server2), we can install the GlusterFS server package by typing:


$ sudo apt-get install glusterfs-server

Once this is installed on both nodes, we can begin to set up our storage volume.

On one of the hosts, we need to peer with the second host. It doesn’t matter which server you use, but we will be performing these commands from our server1 server for simplicity:


$ sudo gluster peer probe server2

Console should output:

peer probe: success

This means that the peering was successful. We can check that the nodes are communicating at any time by typing:


$ sudo gluster peer status

Console should output:

Number of Peers: 1

Hostname: server2
Port: 24007
Uuid: 7bcba506-3a7a-4c5e-94fa-1aaf83f5729b
State: Peer in Cluster (Connected)

At this point, our two servers are communicating and they can set up storage volumes together.

Create a Storage Volume

Now that we have our pool of servers available, we can make our first volume.

This step only needs to be run on one of the two servers. In this guide we will be running it from server1.

Because we are interested in redundancy, we will set up a volume that has replica functionality. This will allow us to keep multiple copies of our data, saving us from a single point-of-failure.

Since we want one copy of data on each of our servers, we will set the replica option to “2”, which is the number of servers we have. The general syntax we will be using to create the volume is this:

$ sudo gluster volume create volume_name replica num_of_servers transport tcp host1:/path/to/data host2:/path/to/data force

The exact command we will run is this:


$ sudo gluster volume create volume1 replica 2 transport tcp server1:/gluster-storage server2:/gluster-storage force

The console should output something like this:

volume create: volume1: success: please start the volume to access data

This will create a volume called volume1. It will store the data from this volume in directories on each host at /gluster-storage. If this directory does not exist, it will be created.

At this point, our volume is created, but inactive. We can start the volume and make it available for use by typing:


$ sudo gluster volume start volume1

Console should output:

volume start: volume1: success

Our volume should now be online.

Install and Configure the Client Components

Now that we have our volume configured, it is available for use by our client machine.

Before we begin though, we need to actually install the relevant packages from the PPA we set up earlier.

On your client machine (client1 in this example), type:


$ sudo apt-get install glusterfs-client

This will install the client application, along with the FUSE filesystem tools needed to provide filesystem functionality outside of the kernel.

We are going to mount our remote storage volume on our client computer. In order to do that, we need to create a mount point. Traditionally, this is in the /mnt directory, but anywhere convenient can be used.

We will create a directory at /storage-pool:


$ sudo mkdir /storage-pool

With that step out of the way, we can mount the remote volume. To do this, we just need to use the following syntax:

$ sudo mount -t glusterfs server_name:/volume_name path_to_mount_point

Notice that we are using the volume name in the mount command. GlusterFS abstracts the actual storage directories on each host. We are not looking to mount the /gluster-storage directory, but the volume1 volume.

Also notice that we only have to specify one member of the storage cluster.

The actual command that we are going to run is this:


$ sudo mount -t glusterfs server1:/volume1 /storage-pool

This should mount our volume. If you use the df command, you will see that the GlusterFS volume is mounted at the correct location.

Testing the Redundancy Features

Now that we have set up our client to use our pool of storage, let’s test the functionality.

On our client machine (client1), we can type this to add some files into our storage-pool directory:


$ cd /storage-pool
$ sudo touch file{1..20}

This will create 20 files in our storage pool.
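The file{1..20} argument relies on bash brace expansion, which generates all twenty names before touch even runs; you can see the effect locally in any directory:

```shell
# Brace expansion produces file1 ... file20 before touch runs;
# pool-demo is a throwaway local directory, not the real storage pool.
mkdir -p pool-demo
touch pool-demo/file{1..20}
ls pool-demo | wc -l
```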

If we look at our /gluster-storage directories on each storage host, we will see that all of these files are present on each system:


# on server1 and server2
$ cd /gluster-storage
$ ls

Console should output:

file1  file10  file11  file12  file13  file14  file15  file16  file17  file18  file19  file2  file20  file3  file4  file5  file6  file7  file8  file9

As you can see, this has written the data from our client to both of our nodes.

If one of the nodes in your storage cluster goes down while changes are made to the filesystem, doing a read operation on the client mount point after the node comes back online should alert it to fetch any missing files:


$ ls /storage-pool
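A common way to force that read across every file is a find/stat sweep over the client mount. The sketch below runs against a local demo directory so it is safe to try anywhere; on a real client you would point MOUNT at /storage-pool:

```shell
# Stat every file through the mount so GlusterFS re-checks each replica.
# ./storage-pool-demo is a stand-in for the real /storage-pool mount point.
MOUNT=./storage-pool-demo
mkdir -p "$MOUNT"
touch "$MOUNT"/file1 "$MOUNT"/file2 "$MOUNT"/file3
find "$MOUNT" -type f -exec stat {} + > /dev/null
find "$MOUNT" -type f | wc -l
```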

Set Up Backup Server(s) for the Client

Normally, once client1 has connected to server1, server1 will send all the nodes’ information to client1, so that client1 can connect to any node in the pool to get the data afterwards.

However, in our set up so far, if server1 is not available before client1 first connects to server1, client1 will not know about server2 and therefore can’t connect to our gluster volume.

To enable client1 to connect to server2 when server1 is not available, we can use the backupvolfile-server option as follows:


$ sudo mount -t glusterfs server1:/volume1 /storage-pool -o backupvolfile-server=server2

If our gluster pool has more than one backup server, we can list all of them using the backupvolfile-servers option as follows (notice the plural s at the end of the param):


$ sudo mount -t glusterfs server1:/volume1 /storage-pool -o backupvolfile-servers=server2:server3

Set Up Auto Mounting on the Client

In theory adding the following line to the client’s fstab file should make the client mount the GlusterFS share at boot:

server1:/volume1 /storage-pool glusterfs defaults,_netdev 0 0

Normally this should work since the _netdev param should force the filesystem to wait for a network connection.

If this didn’t work for you because the GlusterFS client wasn’t running when the fstab file was processed, try opening root’s crontab file and add a command to mount the share at reboot. This command opens the crontab file:


$ sudo crontab -u root -e

Add this line, and press control-o and return to save changes, and control-x to quit from nano:


@reboot sleep 10; mount -t glusterfs server1:/volume1 /storage-pool -o backupvolfile-server=server2

This will execute two commands when the server boots up: the first is just a 10 second delay to allow the GlusterFS daemon to boot, and the second command mounts the volume.

You may need to make your client wait longer before running mount. If your client doesn’t mount the volume when it boots, try using ‘sleep 15’ instead.  This isn’t an ideal way to fix this problem, but it’s ok for most uses.

Another way to set up auto mounting is to add the mount command to the /etc/rc.local file instead of mounting the GlusterFS share manually on the client. We do not use /etc/fstab here because rc.local is always executed after the network is up, which a network file system requires.

Open /etc/rc.local


$ nano /etc/rc.local

Append the following line:


/usr/sbin/mount.glusterfs server1:/volume1 /storage-pool

To test if your modified /etc/rc.local is working, reboot the client:

$ reboot

After the reboot, you should find the share in the outputs of…

$ df -h

… and…

$ mount

Restrict Access to the Volume

Now that we have verified that our storage pool can be mounted and replicate data to both of the machines in the cluster, we should lock down our pool.

Currently, any computer can connect to our storage volume without any restrictions. We can change this by setting an option on our volume.

On one of your storage nodes, type:


$ sudo gluster volume set volume1 auth.allow gluster_client_IP_addr

You will have to substitute the IP address of your cluster client (client1) in this command. Currently, at least with /etc/hosts configuration, domain name restrictions do not work correctly. If you set a restriction this way, it will block all traffic. You must use IP addresses instead.

If you need to remove the restriction at any point, you can type:


$ sudo gluster volume set volume1 auth.allow *

This will allow connections from any machine again. This is insecure, but may be useful for debugging issues.

If you have multiple clients, you can specify their IP addresses at the same time, separated by commas:


$ sudo gluster volume set volume1 auth.allow gluster_client1_ip,gluster_client2_ip

Getting Info with GlusterFS Commands

When you begin changing some of the settings for your GlusterFS storage, you might get confused about what options you have available, which volumes are live, and which nodes are associated with each volume.

There are a number of different commands that are available on your nodes to retrieve this data and interact with your storage pool.

If you want information about each of your volumes, type:


$ sudo gluster volume info

Console output:

Volume Name: volume1
Type: Replicate
Volume ID: 3634df4a-90cd-4ef8-9179-3bfa43cca867
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Options Reconfigured:

Similarly, to get information about the peers that this node is connected to, you can type:


$ sudo gluster peer status

Console output:

Number of Peers: 1

Hostname: server2
Port: 24007
Uuid: 6f30f38e-b47d-4df1-b106-f33dfd18b265
State: Peer in Cluster (Connected)

If you want detailed information about how each node is performing, you can profile a volume by typing:


$ sudo gluster volume profile volume_name start

When this command is complete, you can obtain the information that was gathered by typing:


$ sudo gluster volume profile volume_name info

Console output:

Cumulative Stats:
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us             20     RELEASE
      0.00       0.00 us       0.00 us       0.00 us              6  RELEASEDIR
     10.80     113.00 us     113.00 us     113.00 us              1    GETXATTR
     28.68     150.00 us     139.00 us     161.00 us              2      STATFS
     60.52     158.25 us     117.00 us     226.00 us              4      LOOKUP
    Duration: 8629 seconds
   Data Read: 0 bytes
Data Written: 0 bytes
. . .

You will receive a lot of information about each node with this command.

For a list of all of the GlusterFS associated components running on each of your nodes, you can type:


$ sudo gluster volume status

Console output:

Status of volume: volume1
Gluster process                                         Port    Online  Pid
Brick server1:/gluster-storage                          49152   Y       2808
Brick server2:/gluster-storage                          49152   Y       2741
NFS Server on localhost                                 2049    Y       3271
Self-heal Daemon on localhost                           N/A     Y       2758
NFS Server on server2                                   2049    Y       3211
Self-heal Daemon on server2                             N/A     Y       2825

There are no active volume tasks

If you are going to be administering your GlusterFS storage volumes, it may be a good idea to drop into the GlusterFS console. This will allow you to interact with your GlusterFS environment without needing to type sudo gluster before everything:


$ sudo gluster

This will give you a prompt where you can type your commands. This is a good one to get yourself oriented:

> help

When you are finished, exit like this:

> exit


At this point, you should have a redundant storage system that allows you to write to two separate servers simultaneously. This can be useful for a great number of applications and ensures that your data is available even when one server goes down.

4 ways to make your WordPress server run faster and more resource efficient


If you find your WordPress server running slow, before considering a hardware upgrade, try the following tweaks first.

1. Enable caching

If your website doesn’t need to be real-time all the time, you should enable content caching on your WordPress.

You can enable caching just by installing and activating the well-known WP Super Cache plugin.

The default settings should be good enough so after you install the plugin, just go to the Settings page of the WP Super Cache plugin and make sure Caching On is checked.

This plugin caches your pages and posts as html files and serves those files to users instead of calculating and rendering the posts or pages content every time.

If you must clear the cache, you can always go to the plugin’s Content tab to delete any cached page you want or all of the cache with just one click. The new version will be generated and cached after that.

2. Use a CDN

Using a CDN will free your server from serving static files, especially images. Serving these static files uses a lot of CPU power and disk I/O and stresses your server a lot.

Enabling a CDN on WordPress is as easy as one two three and the best part about it is that you can get an unlimited CDN service for FREE as stated in this post.

Applying a CDN for your WordPress not only makes your server run a lot lighter, but also saves you a lot of bandwidth and traffic cost.

Learn how the 3 best free CDN services can speed up your website and optimize cost.

3. Enable gzip

Enabling gzip compression will make each of your web responses 2 to 3 times smaller, which results in less network I/O and therefore makes your web server run faster.

The cost of compressing the content is a lot less than the cost of transferring the extra bytes, so after the math, your server runs more resource efficiently.

To learn more about gzip and how to enable it for your WordPress, check this post.

4. Enable lazy load

If lazy load is enabled for your website, each image in your web content will not load until the user scrolls to the part where it is located and it becomes visible on the user’s screen. Practically, this saves a lot of requests and responses between your server and the user’s browser.

If you have a WordPress website, you can enable lazy load by installing the Lazy Load plugin.

Lazy load not only makes your website run a lot more resource efficient but also saves you a lot of network I/O and traffic cost as stated in this post.


How do you tweak your WordPress server performance? Let me know in the comment!

5 tips to optimize traffic cost of your WordPress website



If you host your website on a cloud service, you may find out that your traffic costs even more than the hardware (CPU, RAM, HDD). If that’s the case, check the following tips on how to optimize your traffic cost.

Monitoring your traffic throughput

Before you optimize your traffic, you should have a monitoring tool for your network, so that you know how much traffic was optimized every time you do an experiment.

If you don’t know any traffic monitoring tool yet, try iftop. This tool can show you the real-time traffic throughput for every IP that is connected to your server. iftop is just like the top command in linux, but instead of monitoring the CPU or RAM like the top command, iftop monitors your network.

Install iftop

# fedora or centos
$ sudo yum install iftop -y

# ubuntu or debian
$ sudo apt-get install iftop

Run iftop

$ sudo iftop -n -B

The -n flag skips reverse DNS lookups, and -B reports bandwidth in bytes instead of bits.

iftop console

After you run iftop, the iftop screen will appear like the one below.

The first column is the IP of your current machine.

The second column shows the IPs of the remote hosts connected to your server. For each IP, the first row is the traffic sent to the remote host (notice the => icon) and the second row is the traffic received from that remote host (hence the <= icon).

The third column is the real-time traffic amount that is sent or received, where the first sub-column is the average bandwidth per second in the last 2 seconds, the second sub-column is the average bandwidth per second in the last 10 seconds, and the third one is the average bandwidth per second in the last 40 seconds. Most of the time, I look at this third column.

Decide what to optimize

Deciding which cost to optimize often depends on your hosting plan.

Some hosting plans do not charge for traffic. In that case, optimizing the network is more about optimizing resources: the lower the network traffic, the more requests your server can handle with the same CPU, RAM and network interface.

Some hosting plans do not charge for incoming traffic and only charge for outgoing traffic. For example, Google Cloud only charges for outgoing traffic, and the rates differ by destination zone. Traffic between internal IP’s is a lot cheaper than traffic between external IP’s across zones, regions, or continents.

By looking at the network traffic between your server and the remote IP’s and the traffic charging plan of your hosting service, you can decide which traffic to optimize first, and what can be left to be optimized later.

Tip #1. Changing to internal IP where possible

If your hosting plan’s rates are a lot more expensive for external traffic than for internal traffic, changing the external IP’s of your applications to internal IP’s can save you a lot of money.

For example, Google Cloud does not charge for traffic between internal IP’s within the same zone. Therefore, moving your servers to the same zone and configuring them to talk to one another using the internal IP’s can save you a lot. Check your redis, memcached, kafka, rabbitmq, mysql or whatever services can run internally, and make sure their configurations are optimized.
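As a hypothetical example (the file name, the redis_host key and both IPs below are made up for illustration), switching a service from an external to an internal address is often a one-line config change:

```shell
# app.conf is a made-up config file; 203.0.113.10 (external) and
# 10.128.0.5 (internal) are documentation-style example addresses.
printf 'redis_host=203.0.113.10\n' > app.conf
# Point the app at the internal IP instead of the external one
sed -i 's/^redis_host=.*/redis_host=10.128.0.5/' app.conf
cat app.conf
```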

This action can reduce your traffic cost by 3-10 times.

Tip #2. Enable gzip

If you want to know more about gzip, check this post.

By enabling gzip, the traffic sent out will be compressed and therefore you can save a lot of network bandwidth.

This action can reduce your traffic cost by 3-5 times.

Tip #3. Using a free CDN service

If your website has a lot of images, try using a free CDN service to reduce the cost of serving images.

Believe it or not, the free CDN setup will take only 5 minutes and your traffic can be reduced by 5-10 times.

If you have a WordPress website, just install the Jetpack plugin by Automattic and then turn on its Photon feature, and your website is now powered with Jetpack’s free CDN service.

If you don’t want to use Jetpack Photon, you can always use CloudFlare or Incapsula CDN services, which are also free without any limitation on bandwidth or anything.

If your website has a lot of visitors in real-time, you can easily test the effects of the CDN by looking at the iftop console when Jetpack Photon is enabled and when it is disabled.

To read more on how to use a free CDN service on your website, click here.

Tip #4. Enable lazyload

When lazyload is enabled, the images on your website won’t be loaded until they are visible in the browser. This means the images near the end of your web pages won’t load at first; when the user scrolls to where an image is located, it will be loaded and shown.

If you have a WordPress website, you can enable lazyload by installing the Lazy Load plugin.

Tip #5. Change your hosting service

I don’t know if this should be counted as a tip. Anyway, if your current hosting service is charging too much for your traffic, consider changing to another hosting plan or service. Some hosting services, such as BlueHost, GoDaddy and OVH, do not charge for traffic.

However, even if you switch to a hosting service with free traffic, you can still consider applying the above tips as they can make your website perform better with less hardware resources.


How do these tips work for you? Let me know in the comment! 😀

3 best free CDN services to speed up your website and optimize cost



If you own a website and your traffic is growing, it’s time to look into using a CDN for your website. Here are 3 best free CDN services that you can use.

Why should I use a CDN

As your website’s traffic grows, your web server may spend a lot of resources serving static files, especially media files like images. Your traffic cost may become a pain in the ass because on an average website, the traffic sent for images is usually 4-10 times the traffic sent for the html, css, and javascript content. On the other hand, CDNs are very good at caching images and optimizing resources for static file serving, as well as choosing the best server in their network to serve the images to the end user (usually the nearest server to the user).

So, it’s time for your server to do what it’s best at, which is generating web content, and let the CDNs do what they are best at. That also saves you a bunch, since you will pay a lot less for your web traffic. Moreover, the setup is unbelievably easy.

1. Jetpack’s Photon

If you have a WordPress website, the fastest and easiest way to give your website the CDN power is to use the Photon feature of the Jetpack plugin by Automattic.

First, you will have to install and activate the plugin.

The plugin will ask you to log in using a WordPress.com account. Don’t worry. Just create a free account and log in.

In the Jetpack settings page, switch to Appearance tab and enable Photon option.

That’s it. Now your images will be served from WordPress.com’s CDN, and that doesn’t cost you a cent.

How Jetpack’s Photon works

Jetpack’s Photon hooks into the rendering process of your web pages and changes your image urls to ones cached on the CDN. Therefore, every time users open your web pages, they download the cached images from the CDN instead of from your web server.

Known issues

Jetpack’s Photon uses its own algorithm to decide which image resolution to serve to users. In some rare cases, the algorithm doesn’t work so well and the image gets stretched out of its original width-to-height ratio. For example, if your image is actually 720×480 but its file name is my_image_720x720.jpg, Photon will guess that the ratio is 720×720 and set the width and height of the img tag to a 1:1 ratio, while the cached image is still at a 720:480 ratio, which makes the image look stretched.
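To illustrate the kind of guess being described (this is a toy snippet for illustration, not Photon’s actual code), a WxH-looking token in the file name can be mistaken for the image’s real dimensions:

```shell
# Pull a "WxH"-looking token out of a file name, the way a naive
# dimension guesser might (toy example, not Photon's real logic).
name="my_image_720x720.jpg"
echo "$name" | grep -oE '[0-9]+x[0-9]+'
```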

Except for that, everything works perfectly for me.

If you ask if I would recommend using Jetpack’s Photon CDN, the answer is definitely yes.

2. Cloudflare

Cloudflare offers a free plan with no limit on bandwidth or traffic. The free plan just lacks some advanced functions like DDoS protection and the web application firewall, which most of us may not need.

Cloudflare requires you to change the NS records of your domain to their servers, and that’s it. Cloudflare will take care of the rest. You don’t have to do anything else.

How Cloudflare works

After replacing your domain’s NS records with Cloudflare’s, all your users will be directed to Cloudflare’s servers. When a user requests a resource on your website, whether an html page, an image, or anything else, Cloudflare will serve the cached version on their CDN network to the user without accessing your web server. If a cached version does not exist or has expired, Cloudflare will ask your web server for the resource, then send it to the user and cache it on their CDN if that resource is suitable for caching.

I find Cloudflare doesn’t have the image-ratio problem that Photon has, since Cloudflare doesn’t try to change the html tags but instead serves the original html content. The CDN works without changing the image urls, because Cloudflare points your domain records at its servers, taking advantage of the NS records we delegated to its name servers earlier.

3. Incapsula

Like Cloudflare, Incapsula offers the same thing. You will have to edit your domain records to point to their servers. However, with Incapsula, you don’t have to change your NS records; you only change the A record and the www record, which may sound somewhat less scary than giving the service full control of your domain as in the case of Cloudflare.

Incapsula works the same way as Cloudflare. It redirects all the requests to its servers and serves the cached version first if available.

Final words

Trying these CDN services does not cost you anything, and on the other hand may save you a lot of traffic costs as well as make your website more scalable. I would recommend that you try at least one of these services. If you don’t like a CDN after using it, you can always disable it and everything will be back to normal. In my case, the CDN saves me 80 percent of my traffic cost, even though my website does not have a lot of images.


Did you find this post helpful? What CDN do you use? Tell me in the comment! 😀

How to enable gzip compression for your website


Most of the time, gzip compression will make your server perform better and more resource efficient. This tutorial will show you how to enable gzip compression for your nginx server.

What is gzip compression

Gzip is a method of compressing files (making them smaller) for faster network transfers. It is also a file format but that’s out of the scope of this post.

Compression allows your web server to provide smaller file sizes which load faster for your website users.

Enabling gzip compression is a standard practice. If you are not using it for some reason, your webpages are likely slower than your competitors.

Enabling gzip also makes your website score better on Search Engines.

How compressed files work on the web

When a request is made by a browser for a page from your site, your webserver returns the smaller compressed file if the browser indicates that it understands the compression. All modern browsers understand and accept compressed files.

How to enable gzip on Apache web server

To enable compression in Apache, add the following code to your config file (or .htaccess); these directives are provided by the mod_deflate module:

AddOutputFilterByType DEFLATE text/plain
AddOutputFilterByType DEFLATE text/html
AddOutputFilterByType DEFLATE text/xml
AddOutputFilterByType DEFLATE text/css
AddOutputFilterByType DEFLATE application/xml
AddOutputFilterByType DEFLATE application/xhtml+xml
AddOutputFilterByType DEFLATE application/rss+xml
AddOutputFilterByType DEFLATE application/javascript
AddOutputFilterByType DEFLATE application/x-javascript

How to enable gzip on your Nginx web server

To enable compression in Nginx, you will need to add the following code to your config file

gzip on;
gzip_comp_level 2;
gzip_http_version 1.0;
gzip_proxied any;
gzip_min_length 1100;
gzip_buffers 16 8k;
gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;

# Disable for IE < 6 because there are some known problems
gzip_disable "MSIE [1-6].(?!.*SV1)";

# Add a vary header for downstream proxies to avoid sending cached gzipped files to IE6
gzip_vary on;

As with most other directives, the directives that configure compression can be included in the http context or in a server or location configuration block.

The overall configuration of gzip compression might look like this.

server {
    gzip on;
    gzip_types      text/plain application/xml;
    gzip_proxied    no-cache no-store private expired auth;
    gzip_min_length 1000;
}

How to enable gzip on your WordPress

If you have a WordPress website and you can’t edit the Apache or Nginx config file, you can still enable gzip using a plugin like WP Super Cache by Automattic.

After installing the plugin, go to its Advanced Settings tab and check the setting “Compress pages so they’re served more quickly to visitors. (Recommended)” to enable gzip compression.

However, keep in mind that this plugin comes with a lot more features, some of which you may not want. So if you don’t like the extra features, you can just use a simpler plugin like Gzip Ninja Speed Compression or Check and Enable Gzip Compression.

How to check if gzip is successfully enabled and working

Using Firefox to check gzip compression

If you are using Firefox, do the following steps:

  • Open Developer Tools by one of these methods:
    • Menu > Developer > Toggle Tools
    • Ctrl + Shift + I
    • F12
  • Switch to Network Tab in the Developer Tools.
  • Launch the website that you want to check.
  • If gzip is working, the request to html, css, javascript and text files will have the Transferred column smaller than the Size column, where Transferred column displays the size of the compressed content that was transferred, and the Size column shows the size of the original content before compression.

Using Chrome to check gzip compression

If you are using Chrome, do the following:

  • Open Developer Tools by one of these methods:
    • Menu > More tools > Developer Tools
    • Ctrl + Shift + I
    • F12
  • Switch to Network Tab in the Developer Tools.
  • Launch the website that you want to check.
  • Click on the request you want to check (html, css, javascript or text files), the request detail will be displayed.
  • Toggle Response Headers of that request.
  • Check for Content-Encoding: gzip.
  • If gzip is working, the Content-Encoding: gzip will be there.
  • Make sure you check the Response Headers and not the Request Headers.

Frequently Asked Questions

How efficient is gzip?

As you can see in the Firefox Developer Tools Network Tab, the compressed size is normally one third or one fourth of the original size. This ratio differs from request to request, but that’s usually the ratio for html, css, javascript and text files.
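You can reproduce that kind of ratio locally with the gzip command line tool; highly repetitive html compresses especially well, so this demo is a best case:

```shell
# Build a repetitive html-ish file and compare raw vs gzipped size
yes '<p>some repetitive html content</p>' | head -n 1000 > page.html
gzip -c -9 page.html > page.html.gz
wc -c < page.html      # original size in bytes
wc -c < page.html.gz   # compressed size in bytes
```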

Will gzip make my server slower?

OK, that’s a smart question. Since the server has to do extra work to compress the response, it does need some more CPU power. However, the CPU power saved by transferring fewer bytes usually makes up for that, and then some. At the end of the day, your server normally ends up more CPU efficient.

Should I enable gzip for image files (and media files in general)?

Image files are usually already compressed, so gzip compressing an image will not save you many bytes (normally less than 5%) but requires a lot of processing resources. Therefore, you shouldn’t enable gzip for your images; enable it only for html, css, javascript and text files.