Author: Tan Nguyen

How to set up a network shared folder using GlusterFS on Ubuntu servers with a backup server array and auto mount

Introduction

GlusterFS, NFS and Samba are the three most popular ways to set up a network shared folder on Linux.

In this tutorial, we will learn how to set up a network shared folder using GlusterFS on Ubuntu servers.

For demonstration purposes, we will set up the shared folder on two server machines, server1.example.com and server2.example.com, and mount that shared folder on one client machine, client1.example.com.

Let’s say the servers’ IPs are 10.128.0.1 and 10.128.0.2, and the client’s IP is 10.128.0.3.

Prepare the environment

Configure DNS resolution

In order for our different components to be able to communicate with each other easily, it is best to set up some kind of hostname resolution between each computer.

The easiest way to do this is editing the hosts file on each computer.

Open this file with root privileges on your first computer:

$ sudo nano /etc/hosts

You should see something that looks like this:

127.0.0.1       localhost client1

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Below the local host definition, you should add each VPS’s IP address followed by the long and short names you wish to use to reference it.

It should look something like this when you are finished:

server1/server2/client1

127.0.0.1       localhost hostname

# add our machines here
10.128.0.1 server1.example.com server1
10.128.0.2 server2.example.com server2
10.128.0.3 client1.example.com client1
# end add our machines

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

The server1.example.com and server1 portions of the lines can be changed to whatever name you would like to use to access each computer. We will be using these settings for this guide.

When you are finished, copy the lines you added and add them to the /etc/hosts files on your other computers. Each /etc/hosts file should contain the lines that link your IPs to the names you’ve selected.

Save and close each file when you are finished.

Set Up Software Sources

Although Ubuntu 12.04 contains GlusterFS packages, they are fairly out-of-date, so we will be using the latest stable version as of the time of this writing (version 3.4) from the GlusterFS project.

We will be setting up the software sources on all of the computers that will function as nodes within our cluster, as well as on the client computer.

We will actually be adding a PPA (personal package archive) that the project recommends for Ubuntu users. This will allow us to manage our packages with the same tools as other system software.

First, we need to install the python-software-properties package, which will allow us to manage PPAs easily with apt:

server1/server2/client1

$ sudo apt-get update
$ sudo apt-get install python-software-properties

Once the PPA tools are installed, we can add the PPA for the GlusterFS packages by typing:

server1/server2/client1

$ sudo add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.4

With the PPA added, we need to refresh our local package database so that our system knows about the new packages available from the PPA:

server1/server2/client1

$ sudo apt-get update

Repeat these steps on all of the VPS instances that you are using for this guide.

Install Server Components

On our cluster member machines (server1 and server2), we can install the GlusterFS server package by typing:

server1/server2

$ sudo apt-get install glusterfs-server

Once this is installed on both nodes, we can begin to set up our storage volume.

On one of the hosts, we need to peer with the second host. It doesn’t matter which server you use, but we will be performing these commands from server1 for simplicity:

server1

$ sudo gluster peer probe server2.example.com

Console should output:

peer probe: success

This means that the peering was successful. We can check that the nodes are communicating at any time by typing:

server1

$ sudo gluster peer status

Console should output:

Number of Peers: 1

Hostname: server2.example.com
Port: 24007
Uuid: 7bcba506-3a7a-4c5e-94fa-1aaf83f5729b
State: Peer in Cluster (Connected)

At this point, our two servers are communicating and they can set up storage volumes together.

Create a Storage Volume

Now that we have our pool of servers available, we can make our first volume.

This step only needs to be run on one of the two servers. In this guide we will be running it from server1.

Because we are interested in redundancy, we will set up a volume that has replica functionality. This will allow us to keep multiple copies of our data, saving us from a single point-of-failure.

Since we want one copy of data on each of our servers, we will set the replica option to “2”, which is the number of servers we have. The general syntax we will be using to create the volume is this:

$ sudo gluster volume create volume_name replica num_of_servers transport tcp domain1.com:/path/to/data/directory domain2.com:/path/to/data/directory force

The exact command we will run is this:

server1

$ sudo gluster volume create volume1 replica 2 transport tcp server1.example.com:/gluster-storage server2.example.com:/gluster-storage force

Console should output:

volume create: volume1: success: please start the volume to access data

This will create a volume called volume1. It will store the data from this volume in directories on each host at /gluster-storage. If this directory does not exist, it will be created.

At this point, our volume is created, but inactive. We can start the volume and make it available for use by typing:

server1

$ sudo gluster volume start volume1

Console should output:

volume start: volume1: success

Our volume should now be online.

Install and Configure the Client Components

Now that we have our volume configured, it is available for use by our client machine.

Before we begin though, we need to actually install the relevant packages from the PPA we set up earlier.

On your client machine (client1 in this example), type:

client1

$ sudo apt-get install glusterfs-client

This will install the client application, as well as the FUSE filesystem tools necessary to provide filesystem functionality outside of the kernel.

We are going to mount our remote storage volume on our client computer. In order to do that, we need to create a mount point. Traditionally, this is in the /mnt directory, but anywhere convenient can be used.

We will create a directory at /storage-pool:

client1

$ sudo mkdir /storage-pool

With that step out of the way, we can mount the remote volume. To do this, we just need to use the following syntax:

$ sudo mount -t glusterfs domain1.com:volume_name path_to_mount_point

Notice that we are using the volume name in the mount command. GlusterFS abstracts the actual storage directories on each host. We are not looking to mount the /gluster-storage directory, but the volume1 volume.

Also notice that we only have to specify one member of the storage cluster.

The actual command that we are going to run is this:

client1

$ sudo mount -t glusterfs server1.example.com:/volume1 /storage-pool

This should mount our volume. If you use the df command, you will see that the GlusterFS volume is mounted at the correct location.

Testing the Redundancy Features

Now that we have set up our client to use our pool of storage, let’s test the functionality.

On our client machine (client1), we can type this to add some files into our storage-pool directory:

client1

$ cd /storage-pool
$ sudo touch file{1..20}

This will create 20 files in our storage pool.
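The file{1..20} argument relies on shell brace expansion to generate the twenty names. If you want to see the expansion at work without touching the cluster, you can try it in a scratch directory (a local illustration only, not part of the setup):

```shell
# Demonstrate brace expansion in a throwaway directory
tmp=$(mktemp -d)
cd "$tmp"
touch file{1..20}          # expands to file1 file2 ... file20
ls file* | wc -l           # counts the created files: 20
```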

If we look at our /gluster-storage directories on each storage host, we will see that all of these files are present on each system:

server1/server2

# on server1.example.com and server2.example.com
$ cd /gluster-storage
$ ls

Console should output:

file1  file10  file11  file12  file13  file14  file15  file16  file17  file18  file19  file2  file20  file3  file4  file5  file6  file7  file8  file9

As you can see, this has written the data from our client to both of our nodes.

If one of the nodes in your storage cluster goes down while changes are being made to the filesystem, doing a read operation on the client mount point after the node comes back online should alert it to fetch any missing files:

client1

$ ls /storage-pool

Set Up Backup Server(s) for the Client

Normally, once client1 has connected to server1, server1 will send all the nodes’ information to client1, so that client1 can connect to any node in the pool to get the data afterwards.

However, in our setup so far, if server1 is not available before client1 first connects to it, client1 will not know about server2 and therefore can’t connect to our gluster volume.

To enable client1 to connect to server2 when server1 is not available, we can use the backupvolfile-server option as follows:

client1

$ sudo mount -t glusterfs server1.example.com:/volume1 /storage-pool -o backupvolfile-server=server2.example.com

If our gluster pool has more than one backup server, we can list all the servers using the backupvolfile-servers option as follows (notice the plural “s” at the end of the parameter name):

client1

$ sudo mount -t glusterfs server1.example.com:/volume1 /storage-pool -o backupvolfile-servers=server2.example.com:server3.example.com:server4.example.com

Set Up Auto Mounting on the Client

In theory, adding the following line to the client’s fstab file should make the client mount the GlusterFS share at boot:

client1

server1.example.com:/volume1 /storage-pool glusterfs defaults,_netdev 0 0

Normally this should work, since the _netdev option forces the filesystem to wait for a network connection.
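If you also want the backup-server fallback from the previous section to apply at boot time, the same mount option can be added to the fstab entry. This is a sketch; confirm the option name against your GlusterFS version:

```
server1.example.com:/volume1 /storage-pool glusterfs defaults,_netdev,backupvolfile-server=server2.example.com 0 0
```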

If this didn’t work for you because the GlusterFS client wasn’t running when the fstab file was processed, try opening root’s crontab file and adding a command to mount the share at reboot. This command opens the crontab file:

client1

$ sudo crontab -u root -e

Add this line, and press control-o and return to save changes, and control-x to quit from nano:

client1

@reboot sleep 10;mount -t glusterfs server1.example.com:/volume1 /storage-pool -o backupvolfile-server=server2.example.com

This will execute two commands when the server boots up: the first is just a 10 second delay to allow the GlusterFS daemon to boot, and the second command mounts the volume.

You may need to make your client wait longer before running mount. If your client doesn’t mount the volume when it boots, try ‘sleep 15’ instead. This isn’t an ideal way to fix the problem, but it’s OK for most uses.
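Instead of guessing at a delay, the crontab entry can retry the mount a few times until it succeeds. This is a sketch that assumes the mountpoint utility (part of util-linux, present on Ubuntu) is available:

```
@reboot for i in 1 2 3 4 5 6; do mountpoint -q /storage-pool && break; mount -t glusterfs server1.example.com:/volume1 /storage-pool -o backupvolfile-server=server2.example.com; sleep 10; done
```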

Another appropriate way to set up auto mounting is to add the mount command to the /etc/rc.local file instead of mounting the GlusterFS share manually on the client. We do not use /etc/fstab here because rc.local is always executed after the network is up, which a network file system requires.

Open /etc/rc.local

client1

$ sudo nano /etc/rc.local

Append the following line (above the final exit 0 line, if your rc.local has one):

client1

[...]
/usr/sbin/mount.glusterfs server1.example.com:/volume1 /storage-pool

To test if your modified /etc/rc.local is working, reboot the client:

$ reboot

After the reboot, you should find the share in the outputs of…

$ df -h

… and…

$ mount

Restrict Access to the Volume

Now that we have verified that our storage pool can be mounted and replicate data to both of the machines in the cluster, we should lock down our pool.

Currently, any computer can connect to our storage volume without any restrictions. We can change this by setting an option on our volume.

On one of your storage nodes, type:

server1

$ sudo gluster volume set volume1 auth.allow gluster_client_IP_addr

You will have to substitute the IP address of your cluster client (client1) in this command. Currently, at least with /etc/hosts configuration, domain name restrictions do not work correctly. If you set a restriction this way, it will block all traffic. You must use IP addresses instead.

If you need to remove the restriction at any point, you can type:

server1

$ sudo gluster volume set volume1 auth.allow *

This will allow connections from any machine again. This is insecure, but may be useful for debugging issues.

If you have multiple clients, you can specify their IP addresses at the same time, separated by commas:

server1

$ sudo gluster volume set volume1 auth.allow gluster_client1_ip,gluster_client2_ip

Getting Info with GlusterFS Commands

When you begin changing some of the settings for your GlusterFS storage, you might get confused about what options you have available, which volumes are live, and which nodes are associated with each volume.

There are a number of different commands that are available on your nodes to retrieve this data and interact with your storage pool.

If you want information about each of your volumes, type:

server1/server2

$ sudo gluster volume info

Console output:

Volume Name: volume1
Type: Replicate
Volume ID: 3634df4a-90cd-4ef8-9179-3bfa43cca867
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: server1.example.com:/gluster-storage
Brick2: server2.example.com:/gluster-storage
Options Reconfigured:
auth.allow: 111.111.1.11

Similarly, to get information about the peers that this node is connected to, you can type:

server1/server2

$ sudo gluster peer status

Console output:

Number of Peers: 1

Hostname: server1.example.com
Port: 24007
Uuid: 6f30f38e-b47d-4df1-b106-f33dfd18b265
State: Peer in Cluster (Connected)

If you want detailed information about how each node is performing, you can profile a volume by typing:

server1/server2

$ sudo gluster volume profile volume_name start

When this command is complete, you can obtain the information that was gathered by typing:

server1/server2

$ sudo gluster volume profile volume_name info

Console output:

Brick: server2.example.com:/gluster-storage
--------------------------------------------
Cumulative Stats:
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.00       0.00 us       0.00 us       0.00 us             20     RELEASE
      0.00       0.00 us       0.00 us       0.00 us              6  RELEASEDIR
     10.80     113.00 us     113.00 us     113.00 us              1    GETXATTR
     28.68     150.00 us     139.00 us     161.00 us              2      STATFS
     60.52     158.25 us     117.00 us     226.00 us              4      LOOKUP
 
    Duration: 8629 seconds
   Data Read: 0 bytes
Data Written: 0 bytes
. . .

You will receive a lot of information about each node with this command.

For a list of all of the GlusterFS associated components running on each of your nodes, you can type:

server1/server2

$ sudo gluster volume status

Console output:

Status of volume: volume1
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick server1.example.com:/gluster-storage             49152   Y       2808
Brick server2.example.com:/gluster-storage             49152   Y       2741
NFS Server on localhost                                 2049    Y       3271
Self-heal Daemon on localhost                           N/A     Y       2758
NFS Server on server1.example.com                      2049    Y       3211
Self-heal Daemon on server1.example.com                N/A     Y       2825

There are no active volume tasks

If you are going to be administering your GlusterFS storage volumes, it may be a good idea to drop into the GlusterFS console. This will allow you to interact with your GlusterFS environment without needing to type sudo gluster before everything:

server1/server2

$ sudo gluster

This will give you a prompt where you can type your commands. This is a good one to get yourself oriented:

> help

When you are finished, exit like this:

> exit

Conclusion

At this point, you should have a redundant storage system that allows writing to two separate servers simultaneously. This can be useful for a great number of applications and ensures that your data is available even when one server goes down.

4 ways to make your WordPress server run faster and more resource efficient

If you find your WordPress server running slow, before considering a hardware upgrade, try the following tweaks first.

1. Enable caching

If your website doesn’t need to be real-time all the time, you should enable content caching on your WordPress.

You can enable caching just by installing and activating the well-known WP Super Cache plugin.

The default settings should be good enough so after you install the plugin, just go to the Settings page of the WP Super Cache plugin and make sure Caching On is checked.

This plugin caches your pages and posts as HTML files and serves those files to users instead of recalculating and re-rendering the post or page content every time.

If you must clear the cache, you can always go to the plugin’s Content tab to delete any cached page you want or all of the cache with just one click. The new version will be generated and cached after that.

2. Use a CDN

Using a CDN will free your server from serving static files, especially images. Serving these static files uses a lot of CPU power and disk I/O and stresses your server a lot.

Enabling a CDN on WordPress is as easy as one two three and the best part about it is that you can get an unlimited CDN service for FREE as stated in this post.

Applying a CDN for your WordPress not only makes your server run a lot lighter, but also saves you a lot of bandwidth and traffic cost.

Learn how the 3 best free CDN services can speed up your website and optimize cost.

3. Enable gzip

Enabling gzip compression will make each of your web responses smaller by 2 to 3 times, which results in less network I/O and therefore makes your web server run faster.

The cost of compressing the content is a lot less than the cost of transferring the extra bytes, so on balance, your server runs more resource efficiently.

To learn more about gzip and how to enable it for your WordPress, check this post.

4. Enable lazy load

If lazy load is enabled for your website, each image in your web content will not load until the user scrolls to where it is located and it becomes visible on the user’s screen. In practice, this saves a lot of requests and responses between your server and the user’s browser.

If you have a WordPress website, you can enable lazy load by installing the Lazy Load plugin.

Lazy load not only makes your website run a lot more resource efficient but also saves you a lot of network I/O and traffic cost as stated in this post.

 

How do you tweak your WordPress server performance? Let me know in the comment!

5 tips to optimize traffic cost of your WordPress website

Introduction

If you host your website on a cloud service, you may find that your traffic costs even more than the hardware (CPU, RAM, HDD). If that’s the case, check the following tips on how to optimize your traffic cost.

Monitoring your traffic throughput

Before you optimize your traffic, you should have a monitoring tool for your network, so that you know how much traffic was optimized every time you do an experiment.

If you don’t know a traffic monitoring tool yet, try iftop. This tool shows you the real-time traffic throughput for every IP that is connected to your server. iftop is just like the top command in Linux, but instead of monitoring CPU or RAM like top does, iftop monitors your network.

Install iftop

# fedora or centos
$ sudo yum install iftop -y

# ubuntu or debian
$ sudo apt-get install iftop

Run iftop

$ sudo iftop -n -B

iftop console

After you run iftop, the console displays three columns of information.

The first column is the IP of your current machine.

The second column shows the IPs of the remote hosts that are connected to your server. For each IP, the first row is the traffic sent to the remote host (notice the => icon) and the second row is the traffic received from that remote host (hence the <= icon).

The third column shows the real-time traffic rates for each direction: the first sub-column is the average bandwidth per second over the last 2 seconds, the second over the last 10 seconds, and the third over the last 40 seconds. Most of the time, I look at this third sub-column.
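Putting the columns together, a simplified sketch of the iftop display looks like this (your.server.ip and remote.host.ip are placeholders, and the numbers are made up for illustration):

```
your.server.ip       =>  remote.host.ip       2.5KB   1.9KB   1.4KB
                     <=                        45KB    38KB    31KB
```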

Decide what to optimize

Deciding what cost to optimize often depends on your hosting plan.

Some hosting plans do not charge for traffic. In that case, network optimization is more about resource optimization: the lower the network traffic, the more requests your server can handle with the same CPU, RAM and network interface.

Some hosting plans do not charge for incoming traffic and only charge for outgoing traffic. For example, Google Cloud only charges for outgoing traffic, and the rates differ by destination zone. Traffic between internal IPs is a lot cheaper than traffic between external IPs across zones, regions, or continents.

By looking at the network traffic between your server and the remote IPs, and at the traffic charging plan of your hosting service, you can decide which traffic to optimize first, and what can be left to be optimized later.

Tip #1. Change to internal IPs where possible

If your hosting plan’s rates are a lot more expensive for external traffic than for internal traffic, changing your applications’ external IPs to internal IPs can save you a lot of money.

For example, Google Cloud does not charge for traffic between internal IPs within the same zone. Therefore, moving your servers to the same zone and configuring them to talk to one another over internal IPs can save you a lot. Check your redis, memcached, kafka, rabbitmq, mysql or whatever other services can run internally, and make sure their configurations are optimized.

This action can reduce your traffic cost by 3-10 times.

Tip #2. Enable gzip

If you want to know more about gzip, check this post.

By enabling gzip, the traffic sent out will be compressed and therefore you can save a lot of network bandwidth.

This action can reduce your traffic cost by 3-5 times.

Tip #3. Using a free CDN service

If your website has a lot of images, try using a free CDN service to reduce the cost of serving images.

Believe it or not, the free CDN setup will take only 5 minutes and your traffic can be reduced by 5-10 times.

If you have a WordPress website, just install the Jetpack plugin by WordPress.com, then turn on its Photon feature, and your website is now powered by Jetpack’s free CDN service.

If you don’t want to use Jetpack Photon, you can always use CloudFlare or Incapsula CDN services, which are also free without any limitation on bandwidth or anything.

If your website has a lot of visitors in real-time, you can easily test the effects of the CDN by looking at the iftop console when Jetpack Photon is enabled and when it is disabled.

To read more on how to use a free CDN service on your website, click here.

Tip #4. Enable lazyload

When lazyload is enabled, the images on your website won’t be loaded until they are visible in the browser. This means the images toward the end of your web pages won’t load at first; when the user scrolls to where the images are located, they will be loaded and shown.

If you have a WordPress website, you can enable lazyload by installing the Lazy Load plugin.

Tip #5. Change your hosting service

I don’t know if this should be counted as a tip. Anyway, if your current hosting service charges too much for your traffic, consider changing to another hosting plan or service. Some hosting services, such as BlueHost, GoDaddy and OVH, do not charge for traffic.

However, even if you switch to a hosting service with free traffic, you can still consider applying the above tips as they can make your website perform better with less hardware resources.

 

How do these tips work for you? Let me know in the comment! 😀

3 best free CDN services to speed up your website and optimize cost

Introduction

If you own a website and your traffic is growing, it’s time to look into using a CDN for your website. Here are the 3 best free CDN services that you can use.

Why should I use a CDN

As your website’s traffic grows, your web server may spend a lot of resources serving static files, especially media files like images. Your traffic cost may become a pain in the ass, because on an average website the traffic sent for images is usually 4-10 times the traffic sent for the HTML, CSS, and JavaScript content. CDNs, on the other hand, are very good at caching images and optimizing static file serving, as well as choosing the best server in their network to serve the images to the end user (usually the server nearest to the user).

So, it’s time for your server to do what it’s best at, which is generating web content, and let the CDNs do what they are best at. That also saves you a bunch, since you will pay a lot less for your web traffic. Moreover, the setup is unbelievably easy.

1. Jetpack’s Photon

If you have a WordPress website, the fastest and easiest way to give your website the CDN power is to use the Photon feature of Jetpack by WordPress.com plugin.

First, you will have to install and activate the plugin.

The plugin will ask you to log in using a WordPress.com account. Don’t worry, just create a free WordPress.com account and log in.

In the Jetpack settings page, switch to the Appearance tab and enable the Photon option.

That’s it. Now your images will be served from WordPress.com’s CDN, and that doesn’t cost you a cent.

How Jetpack’s Photon works

Jetpack’s Photon hooks into the rendering process of your web pages and changes your image URLs to ones cached on the WordPress.com CDN. Therefore, every time your users open your web pages, they download the cached images from the WordPress.com CDN instead of from your web server.
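For example, an image URL on your site ends up rewritten to point at the WordPress.com CDN, roughly like this (i0.wp.com is one of several Photon endpoints, and example.com stands in for your domain):

```
Before: https://example.com/wp-content/uploads/2015/01/photo.jpg
After:  https://i0.wp.com/example.com/wp-content/uploads/2015/01/photo.jpg
```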

Known issues

Jetpack’s Photon uses an algorithm to decide the image resolution served to users. In some rare cases the algorithm doesn’t work so well and the image is stretched out of its original width-to-height ratio. For example, if your image is actually 720×480 but its file name is my_image_720x720.jpg, Photon will guess that the ratio is 720×720 and set the width and height of the img tag to a 1:1 ratio, while the cached image is still at a 720:480 ratio, which stretches the image out of its correct proportions.

Except for that, everything works perfectly for me.

If you ask if I would recommend using Jetpack’s Photon CDN, the answer is definitely yes.

2. Cloudflare

Cloudflare offers a free plan with no limit on bandwidth or traffic. The free plan just lacks some advanced functions like DDoS protection and a web application firewall, which most of us may not need.

Cloudflare requires you to change the NS records of your domain to their servers, and that’s it. Cloudflare will take care of the rest. You don’t have to do anything else.

How Cloudflare works

After you replace your domain’s NS records with Cloudflare’s, all your users will be directed to Cloudflare’s servers. When a user requests a resource on your website, whether an HTML page, an image, or anything else, Cloudflare serves the cached version from their CDN network without accessing your web server. If a cached version does not exist or has expired, Cloudflare asks your web server for the resource, sends it to the user, and caches it on their CDN if the resource is suitable for caching.

I find that Cloudflare doesn’t have the image ratio problem that Photon has, since Cloudflare doesn’t try to change the HTML tags and instead serves the original HTML content. The CDN works without changing the image URLs, because Cloudflare has pointed your domain records at its servers, taking advantage of the NS delegation we set up earlier.

3. Incapsula

Incapsula offers much the same thing as Cloudflare. You will have to edit your domain records to point to their servers. However, with Incapsula you don’t have to change your NS records; you only have to change the A record and the www record, which may sound less scary than giving the service full control of your domain, as in the case of Cloudflare.

Incapsula works the same way as Cloudflare. It redirects all the requests to its servers and serves the cached version first if available.

Final words

Trying these CDN services does not cost you anything, and it may save you a lot in traffic costs as well as make your website more scalable. I would recommend that you try at least one of these services. If you don’t like a CDN after using it, you can always disable it and everything will be back to normal. In my case, the CDN saves me 80 percent of my traffic cost, even though my website does not have a lot of images.

 

Did you find this post helpful? What CDN do you use? Tell me in the comment! 😀

How to enable gzip compression for your website

Most of the time, gzip compression will make your server perform better and more resource efficient. This tutorial will show you how to enable gzip compression for your nginx server.

What is gzip compression

Gzip is a method of compressing files (making them smaller) for faster network transfers. It is also a file format, but that’s outside the scope of this post.

Compression allows your web server to provide smaller file sizes which load faster for your website users.
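As a quick local illustration of how much text can shrink (real-world HTML typically compresses 2-5x; this toy example of highly repetitive text compresses far more):

```shell
# Compress 100 KB of repetitive text and compare the sizes
plain=$(head -c 100000 /dev/zero | tr '\0' 'a' | wc -c)
gzipped=$(head -c 100000 /dev/zero | tr '\0' 'a' | gzip -c | wc -c)
echo "plain: ${plain} bytes, gzipped: ${gzipped} bytes"
```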

Enabling gzip compression is a standard practice. If you are not using it for some reason, your webpages are likely slower than your competitors.

Enabling gzip also makes your website score better on Search Engines.

How compressed files work on the web

When a request is made by a browser for a page from your site, your webserver returns the smaller compressed file if the browser indicates that it understands the compression. All modern browsers understand and accept compressed files.

How to enable gzip on Apache web server

To enable compression in Apache, add the following code to your config file

AddOutputFilterByType DEFLATE text/plain
AddOutputFilterByType DEFLATE text/html
AddOutputFilterByType DEFLATE text/xml
AddOutputFilterByType DEFLATE text/css
AddOutputFilterByType DEFLATE application/xml
AddOutputFilterByType DEFLATE application/xhtml+xml
AddOutputFilterByType DEFLATE application/rss+xml
AddOutputFilterByType DEFLATE application/javascript
AddOutputFilterByType DEFLATE application/x-javascript
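These directives depend on the mod_deflate module. If you are not certain the module is loaded everywhere this config is used, a common pattern is to guard the block so it only applies when the module is present (a sketch of the same directives, condensed):

```
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/plain text/html text/xml text/css
    AddOutputFilterByType DEFLATE application/xml application/xhtml+xml application/rss+xml
    AddOutputFilterByType DEFLATE application/javascript application/x-javascript
</IfModule>
```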

How to enable gzip on your Nginx web server

To enable compression in Nginx, you will need to add the following code to your config file

gzip on;
gzip_comp_level 2;
gzip_http_version 1.0;
gzip_proxied any;
gzip_min_length 1100;
gzip_buffers 16 8k;
gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;

# Disable for IE < 6 because there are some known problems
gzip_disable "MSIE [1-6].(?!.*SV1)";

# Add a vary header for downstream proxies to avoid sending cached gzipped files to IE6
gzip_vary on;

As with most other directives, the directives that configure compression can be included in the http context or in a server or location configuration block.

The overall configuration of gzip compression might look like this.

server {
    gzip on;
    gzip_types      text/plain application/xml;
    gzip_proxied    no-cache no-store private expired auth;
    gzip_min_length 1000;
    ...
}

How to enable gzip on your WordPress site

If you have a WordPress website and you can’t edit the Apache or Nginx config file, you can still enable gzip using a plugin like WP Super Cache by Automattic.

After installing the plugin, go to its Advanced Settings tab and check the setting “Compress pages so they’re served more quickly to visitors. (Recommended)” to enable gzip compression.

However, keep in mind that this plugin comes with a lot more features, some of which you may not want. If you don’t need the extras, you can use a simpler plugin like Gzip Ninja Speed Compression or Check and Enable Gzip Compression.

How to check if gzip is successfully enabled and working

Using Firefox to check gzip compression

If you are using Firefox, do the following steps:

  • Open Developer Tools by one of these methods:
    • Menu > Developer > Toggle Tools
    • Ctrl + Shift + I
    • F12
  • Switch to Network Tab in the Developer Tools.
  • Launch the website that you want to check.
  • If gzip is working, requests for HTML, CSS, JavaScript and text files will show a Transferred column that is smaller than the Size column: Transferred displays the compressed size that actually went over the network, while Size shows the original size before compression.

Using Chrome to check gzip compression

If you are using Chrome, do the following:

  • Open Developer Tools by one of these methods:
    • Menu > More tools > Developer Tools
    • Ctrl + Shift + I
    • F12
  • Switch to Network Tab in the Developer Tools.
  • Launch the website that you want to check.
  • Click on the request you want to check (an HTML, CSS, JavaScript or text file) to display its details.
  • Toggle Response Headers of that request.
  • Check for Content-Encoding: gzip.
  • If gzip is working, the Content-Encoding: gzip will be there.
  • Make sure you check the Response Headers and not the Request Headers.

Frequently Asked Questions

How efficient is gzip?

As you can see in the Firefox Developer Tools Network tab, the compressed size is normally one third to one fourth of the original size. The ratio varies from request to request, but that is typical for HTML, CSS, JavaScript and text files.
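You can get a feel for this ratio locally with the gzip command line tool. This is a quick sketch; page.html is a made-up, highly repetitive sample, so it compresses unusually well:

```shell
# Build a repetitive, HTML-like sample and compare sizes before and after gzip.
yes '<p>This paragraph repeats, as markup often does.</p>' | head -n 200 > page.html
gzip -c page.html > page.html.gz
echo "before: $(wc -c < page.html) bytes, after: $(wc -c < page.html.gz) bytes"
```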

Will gzip make my server slower?

OK, that’s a smart question. Compressing responses does require extra CPU work on the server. However, the resources saved by transferring much smaller responses usually more than make up for it, so at the end of the day your server is normally more efficient overall.
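The gzip_comp_level 2 directive in the Nginx snippet above controls exactly this trade-off; it corresponds to the -1 through -9 levels of the gzip tool. A rough local sketch (input.txt is an artificial sample):

```shell
# Higher levels spend more CPU for (usually) slightly smaller output.
yes 'another repetitive line of response body text' | head -n 5000 > input.txt
for level in 1 6 9; do
  gzip -c -"$level" input.txt > "out.$level.gz"
  echo "level $level: $(wc -c < out.$level.gz) bytes"
done
```

Low levels like 2 are a common choice for on-the-fly web compression because most of the size win comes cheaply.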

Should I enable gzip for image files (and media files in general)?

Image files are usually already compressed, so gzipping them saves very little (normally less than 5%) while still costing processing resources. Therefore, you shouldn’t enable gzip for your images and should only enable it for HTML, CSS, JavaScript and text files.
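You can simulate this yourself: random bytes behave much like already-compressed image data (photo.jpg here is just a stand-in, not a real image):

```shell
# Already-compressed data looks statistically random, so gzip can barely shrink it.
head -c 100000 /dev/urandom > photo.jpg
gzip -c photo.jpg > photo.jpg.gz
echo "before: $(wc -c < photo.jpg) bytes, after: $(wc -c < photo.jpg.gz) bytes"
```

The gzipped copy often ends up slightly larger than the original because of gzip’s own header and framing.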

How to add featured image column to all posts listing page in WordPress

How to add featured image column to all posts listing page in WordPress

If you want to check whether any of your posts are missing a featured image, the following plugin adds a column to your All Posts listing page (wp-admin/edit.php) showing each post’s featured image for your convenience.

The plugin is Featured Image Column by Austin Passy. After you install and activate it, your All Posts page gets an extra column showing the featured image of every post.

Do you know that you can also automatically set the featured image for your post using the first image in that post?

What are your favorite WordPress plugins? Share with me in the comment! 😀

WordPress: How to auto set featured image using the first image in post

WordPress: How to auto set featured image using the first image in post

Introduction

If you edit a lot in WordPress, you may notice that most of the time you just set the first image of your post as the featured image. Here’s a quick and handy way to skip that tedious work and save some labor.

Use Easy Add Thumbnail plugin

The easiest way to automate this is to install the Easy Add Thumbnail plugin by Samuel Aguilera.

Just install and activate it, and you’re good to go.

Once activated, this plugin will automatically set the post’s featured image to be the first image in your post content if your post doesn’t have a featured image yet.

If you want to select your own featured image, it will leave the featured image untouched and display your selection instead.

How this works

Let’s take a look at the plugin’s code, which is very simple.

if ( function_exists( 'add_theme_support' ) ) {

    add_theme_support( 'post-thumbnails' ); // This should be in your theme, but adding it here lets us have featured images before switching to a theme that supports them.

    function easy_add_thumbnail($post) {

        $already_has_thumb = has_post_thumbnail();
        $post_type = get_post_type( $post->ID );
        $exclude_types = array('');
        $exclude_types = apply_filters( 'eat_exclude_types', $exclude_types );

        // do nothing if the post has already a featured image set
        if ( $already_has_thumb ) {
            return;
        }

        // do the job if the post is not from an excluded type
        if ( ! in_array( $post_type, $exclude_types ) ) {
            // get first attached image
            $attached_image = get_children( "order=ASC&post_parent=$post->ID&post_type=attachment&post_mime_type=image&numberposts=1" );

            if ( $attached_image ) {
                $attachment_values = array_values( $attached_image );
                // add attachment ID
                add_post_meta( $post->ID, '_thumbnail_id', $attachment_values[0]->ID, true );
            }
        }
    }

    // set featured image before post is displayed (for old posts)
    add_action('the_post', 'easy_add_thumbnail');

    // hooks added to set the thumbnail when publishing too
    add_action('new_to_publish', 'easy_add_thumbnail');
    add_action('draft_to_publish', 'easy_add_thumbnail');
    add_action('pending_to_publish', 'easy_add_thumbnail');
    add_action('future_to_publish', 'easy_add_thumbnail');
}

This code defines a function called easy_add_thumbnail, which checks whether a featured image has been set for the post; if not, it fetches the first image attached to the post and assigns it as the featured image.

Next, the code hooks easy_add_thumbnail into the publishing and display events, so the function runs every time a post is published or displayed (for old posts).

If you don’t want to install the plugin, you can just add this code snippet to your theme’s functions.php file or create your own plugin, whichever you prefer.

Does Easy Add Thumbnail plugin work well with Image Teleporter plugin?

If you don’t know it yet, Image Teleporter plugin helps you upload all the external hosted images in your post to your WordPress hosting and then change the images’ url correspondingly.

Fortunately, Easy Add Thumbnail works well with Image Teleporter. This means that even if the images in your post are hosted externally, when you update or publish your post, the Image Teleporter will first upload them to your hosting and attach the new internally hosted images to your post; then the Easy Add Thumbnail plugin will pick the first among those internal hosted images and set it as the featured image (if you haven’t specified a featured image by that time).

Bonus tip

If you want to check if your posts have featured image or not right in the “All posts” page, try the plugin in this post. Once installed and activated, this plugin adds a column to the posts listing page with the featured image if it exists.

 

Did you find this post helpful?

What are your favorite WordPress tips? Let me know in the comment! 😀

How to schedule tasks in Linux using crontab

How to schedule tasks in Linux using crontab

Introduction

Crontab is a utility in Linux that allows us to run jobs or commands on a specific schedule, much like Task Scheduler on Windows.

What is crontab

The crontab (cron derives from chronos, Greek for time; tab stands for table) command, found in Unix and Unix-like operating systems, is used to schedule commands to be executed periodically. To see what crontabs are currently running on your system, you can open a terminal and run:

sudo crontab -l

If you have never used it before, the system may tell you that there’s no crontab for your current user.

To edit the cronjobs, you can run

sudo crontab -e

How crontab works

Each user on a Linux system can have a crontab file, stored under

/var/spool/cron/

(the exact location varies by distribution), containing a list of commands that will be executed on a schedule.

The content of a crontab file looks like this:

# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command
0 */3 * * * /home/myname/scripts/my-script.sh

Notice the last line

0 */3 * * * /home/myname/scripts/my-script.sh

That line tells the system to run /home/myname/scripts/my-script.sh every 3 hours.

A crontab file can have multiple entries, each of which represents a cron job.

Note that this file is not intended to be edited directly, but rather by using the crontab command.

Don’t worry, we will go into detail how that works in a couple of minutes.

Structure of a crontab entry

As you can see in the last section, a crontab entry has a syntax like this:

# m h dom mon dow command
0 */3 * * * /home/myname/scripts/my-script.sh

For each entry, the following parameters must be included:

  1. m, a number (or list of numbers, or range of numbers), representing the minute of the hour (0-59);
  2. h, a number (or list of numbers, or range of numbers), representing the hour of the day (0-23);
  3. dom, a number (or list of numbers, or range of numbers), representing the day of the month (1-31);
  4. mon, a number (or list, or range), or name (or list of names), representing the month of the year (1-12);
  5. dow, a number (or list, or range), or name (or list of names), representing the day of the week (0-7, 0 or 7 is Sunday); and
  6. command, which is the command to be run, exactly as it would appear on the command line.

A field may be an asterisk (*), which always stands for “first through last”.

Ranges of numbers are allowed. Ranges are two numbers separated with a hyphen. The specified range is inclusive; for example, 8-11 for an “hours” entry specifies execution at hours 8, 9, 10 and 11.

Lists are allowed. A list is a set of numbers (or ranges) separated by commas. Examples: “1,2,5,9“, “0-4,8-12“.

Step values can be used in conjunction with ranges. For example, “0-23/2” can be used in the hours field to specify command execution every other hour. Steps are also permitted after an asterisk, so if you want to say “every two hours”, you can use “*/2“.

Names can also be used for the “month” and “day of week” fields. Use the first three letters of the particular day or month (case doesn’t matter). Ranges or lists of names are not allowed.

The “sixth” field (the rest of the line) specifies the command to be run. The entire command portion of the line, up to a newline or % character, will be executed by /bin/sh or by the shell specified in the SHELL variable of the cronfile. Percent-signs (%) in the command, unless escaped with backslash (\), will be changed into newline characters, and all data after the first % will be sent to the command as standard input.

Note that the day of a command’s execution can be specified by two fields: day of month, and day of week. If both fields are restricted (in other words, they aren’t *), the command will be run when either field matches the current time. For example, “30 4 1,15 * 5” would cause a command to be run at 4:30 am on the 1st and 15th of each month, plus every Friday.

How to use crontab examples

List current crontab

crontab -l

Edit your crontab

crontab -e

This command opens your crontab in an editor (usually vi or vim). Edit and save your cron jobs as you want.

Remove your crontab (un-scheduling all crontab jobs)

crontab -r

Back up your crontab to my-crontab file

crontab -l > my-crontab

Load your crontab from my-crontab file

crontab my-crontab

This does the same syntax checking as crontab -e.

Edit the crontab of the user named “charles”

sudo crontab -u charles -e

Note that the -u option requires administrator privileges, so that command is executed using sudo.

View the crontab of user “jeff”

sudo crontab -u jeff -l

Remove the crontab of user “sandy”

sudo crontab -u sandy -r

Check if your crontab is working

If you want to check whether your cron job is executed as expected, or when it was last run, check the cron log as follows.

On a default installation the cron jobs get logged to

/var/log/syslog

You can see just cron jobs in that logfile by running

grep CRON /var/log/syslog

If you haven’t reconfigured anything, the entries will be in there.

Examples of crontab entries

Run the shell script /home/melissa/backup.sh on January 2 at 6:15 A.M.

15 6 2 1 * /home/melissa/backup.sh

Same as the above entry. Leading zeroes can be added for legibility without changing the value, and month names (like Jan) can be used instead of numbers.

15 06 02 Jan * /home/melissa/backup.sh

Run /home/carl/hourly-archive.sh every hour, on the hour, from 9 A.M. through 6 P.M., every day.

0 9-18 * * * /home/carl/hourly-archive.sh

Run /home/wendy/script.sh every Monday, at 9 A.M. and 6 P.M.

0 9,18 * * Mon /home/wendy/script.sh

Run /usr/local/bin/backup at 10:30 P.M., every weekday.

30 22 * * Mon,Tue,Wed,Thu,Fri /usr/local/bin/backup

Run /home/bob/script.sh every 3 hours.

0 */3 * * * /home/bob/script.sh

Run /home/mirnu/script.sh every minute (of every hour, of every day of the month, of every month and every day in the week).

* * * * * /home/mirnu/script.sh

Run /home/mirnu/script.sh 10 minutes past every hour on the 1st of every month.

10 * 1 * * /home/mirnu/script.sh

Special keywords

Instead of the five time fields, you can use one of these special keywords:

@reboot Run once, at startup
@yearly Run once a year "0 0 1 1 *"
@annually (same as @yearly)
@monthly Run once a month "0 0 1 * *"
@weekly Run once a week "0 0 * * 0"
@daily Run once a day "0 0 * * *"
@midnight (same as @daily)
@hourly Run once an hour "0 * * * *"

Since the keyword replaces all five time fields, the following is a valid entry:

@daily /bin/execute/this/script.sh

Storing the crontab output

By default cron mails the output of /bin/execute/this/script.sh to the user’s mailbox (root in this case). But it is often more convenient to save the output in a separate logfile. Here’s how:

*/10 * * * * /bin/execute/this/script.sh >> /var/log/script_output.log 2>&1

Explanation

Linux processes have two output streams: standard output (STDOUT) and standard error (STDERR). STDOUT is file descriptor 1, STDERR is file descriptor 2. The following statement redirects STDERR to wherever STDOUT points, creating one data stream for messages and errors:

2>&1

Now that we have one output stream, we can pour it into a file. Where > overwrites the file, >> appends to it. In this case we’d like to append:

>> /var/log/script_output.log
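Here is a small runnable sketch of the same redirection, outside of cron (myscript.sh and the log file name are illustrative):

```shell
# A script that writes to both streams, logged the same way as the cron entry above.
cat > myscript.sh <<'EOF'
#!/bin/sh
echo "a normal message"        # STDOUT (stream 1)
echo "an error message" 1>&2   # STDERR (stream 2)
EOF
chmod +x myscript.sh

./myscript.sh >> script_output.log 2>&1   # both streams appended to one file
cat script_output.log
```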

Mailing the crontab output

By default cron saves the output in the user’s mailbox (root in this case) on the local system. But you can also configure crontab to forward all output to a real email address by starting your crontab with the following line:

MAILTO="yourname@yourdomain.com"

Mailing the crontab output of just one cronjob

If you’d rather receive only one cronjob’s output in your mail, make sure this package is installed:

$ sudo aptitude install mailx

And change the cronjob like this:

*/10 * * * * /bin/execute/this/script.sh 2>&1 | mail -s "Cronjob output" yourname@yourdomain.com

Trashing the crontab output

Now that’s easy:

*/10 * * * * /bin/execute/this/script.sh > /dev/null 2>&1

Just pipe all the output to the null device, also known as the black hole. On Unix-like operating systems, /dev/null is a special file that discards all data written to it.
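A quick demonstration of how the null device behaves:

```shell
echo "this output is discarded" > /dev/null 2>&1   # nothing is stored anywhere
wc -c < /dev/null                                  # reading it back yields 0 bytes
```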

Caveats

Many scripts are tested in a Bash environment with the PATH variable set, so they work in your shell; but when run from cron (where PATH is different), the script may not find its referenced executables and fails.

It’s not the job of the script to set PATH, it’s the responsibility of the caller, so it can help to echo $PATH, and put PATH=<the result> at the top of your cron files (right below MAILTO).
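You can approximate cron’s stripped-down environment with env -i, which clears all environment variables before running a command (the exact PATH cron uses varies by system):

```shell
echo "login shell PATH: $PATH"
# An empty environment, similar in spirit to what your script sees under cron:
env -i /bin/sh -c 'echo "bare shell PATH: $PATH"'
```

If the two values differ, that difference is usually why a script that works interactively fails under cron.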

How to add a blank line at the end of the post while editing in WordPress

How to add a blank line at the end of the post while editing in WordPress

This tutorial will show you how to add a blank line at the end of a WordPress post.

While editing in WordPress, sometimes after adding an image (especially one with a caption) or a blockquote at the end of the post, you may find it difficult to add another “normal” line after it. If you hit ENTER, the editor just adds another line to the current image caption or blockquote, which is definitely not what you want.

If you face the same problem, this simple trick may help you:

  • Switch to Text Editor mode.
  • Scroll to the end of the post in Text Editor mode (you should see your image or your blockquote there).
  • Make a new line and type or paste this code at the end of your post (should be after the image or blockquote mentioned above): &nbsp; (including the & in the front and ; at the end)
  • Switch back to Visual Editor mode, you should see a new line as you wanted.

How it works

&nbsp; stands for non-breaking space, a common character entity used in HTML.

Non-breaking Space

A non-breaking space is a space that will not break into a new line.

Two words separated by a non-breaking space will stick together (not break into a new line). This is handy when breaking the words might be disruptive.

Examples:

  • § 10
  • 10 km/h
  • 10 PM

Another common use of the non-breaking space is to prevent browsers from collapsing multiple spaces in HTML pages.

If you write 10 spaces in your text, the browser will remove 9 of them. To add real spaces to your text, you can use the &nbsp; character entity.

In our case, &nbsp; tells WordPress to insert a blank space at the end of the post. And because &nbsp; sits outside the preceding image or blockquote tag, WordPress puts it on a new line. That’s how the trick works.

Example

For example, if you have a blockquote at the end of your post, when switching to Text Editor, you’ll find something like this

<blockquote>This is the quoted content</blockquote>

Then after you insert &nbsp;, the editor should look something like

<blockquote>This is the quoted content</blockquote>

&nbsp;

Then if you switch back to Visual Editor, a new “normal” line should appear after your blockquote.

 

What are your favorite WordPress tricks? Tell us in the comment! 😀

How to insert a responsive iframe into your WordPress post

How to insert a responsive iframe into your WordPress post

Inserting an iframe into your WordPress post can be somewhat troublesome, but it can be done if you know a few tricks. In this post, I’m going to show you how to embed a responsive iframe that works beautifully with your responsive WordPress website.

To make things more understandable along the post, let’s try to embed the famous slither.io game to our post.

slither.io is a multiplayer game written in HTML5. If you visit their website, you’ll find that the game is responsive on most current devices and browsers.

Now let’s say we want to put that game into our post as an iframe.

First try

Switch to Text Editor and paste in the iframe linking to the original website.

<iframe src="http://slither.io/"></iframe>

When I switch to Visual Editor, the iframe is there but is very small.

Moreover, the game doesn’t seem to be able to respond to a resolution that small, and the UI starts to break.

Second try

Switching back to Text Editor, I find that WordPress has kindly added a width and height to my iframe.

I changed the width and height to 600 and 400 respectively, and the iframe looks better.

<iframe src="http://slither.io/" width="600" height="400"></iframe>

Third try

Now to make the iframe centered in the post, I apply the same trick I used to insert and center a YouTube video in WordPress in a previous post, which adds an extra wrapping div as below.

<div style="text-align: center;">
    <iframe src="http://slither.io/" width="600" height="400"></iframe>
</div>

Everything works perfectly until now.

However, this set-up does not respond well to every screen resolution. I want the iframe to always take up 100% of the width of its parent container. The problem is that while I can set the width to 100%, I can’t specify an appropriate height for the iframe.

  • If I set the height to a percentage like 50%, I can’t tell how tall the game will appear: it could be half the screen, larger than the screen, or even 0, depending on the parent container.
  • If I set the height to a fixed value like 400px, the iframe won’t respond well on some devices, especially mobile devices, where the user experience in portrait and landscape modes differs a lot.

The working solution

After some research, I came across this article and ended up with the solution below.

<div style="position: relative; padding-bottom: 56.25%; padding-top: 25px; height: 0;">
    <iframe style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;" 
        src="http://slither.io/" width="100%" height="100%" frameborder="0" 
        scrolling="no" seamless="seamless" allowfullscreen="allowfullscreen">
    </iframe>
</div>

This code sets the div’s width to 100% and keeps the div at a fixed 16:9 aspect ratio by setting

padding-bottom: 56.25%;

Setting the wrapping div’s position to be relative and the iframe’s position to be absolute also plays an important role here.

Now the div always adapts to its parent container and always keeps the 16:9 aspect ratio.

You can change the ratio by changing the "padding-bottom: 56.25%" to the ratio you want (notice that 56.25/100 == 9/16).

If you want cleaner markup, you can add custom CSS to your site (by editing the theme’s files or adding a custom Text Widget) as follows

.responsiveWrapper {
    position: relative;
    padding-bottom: 56.25%; /* 16:9 */
    padding-top: 25px;
    height: 0;
}
.responsiveWrapper iframe {
    position: absolute;
    top: 0;
    left: 0;
    width: 100%;
    height: 100%;
}

And then, just add your iframe like this

<div class="responsiveWrapper">
    <iframe src="http://slither.io/" frameborder="0" 
        scrolling="no" seamless="seamless" allowfullscreen="allowfullscreen">
    </iframe>
</div>

It should work now:

 

Side note: the iframe does not appear on my website because of the mixed content policy; my website uses https while the iframe source only supports http. Other than that, everything should work fine.

That’s it. Does this solution work for you? Do you have any tips on this? Let me know in the comment! 😀