Category: Sysadmin

5 tips to optimize traffic cost of your WordPress website

Introduction

If you host your website on a cloud service, you may find that your traffic costs more than the hardware itself (CPU, RAM, HDD). If that’s the case, check the following tips on how to optimize your traffic cost.

Monitoring your traffic throughput

Before you optimize your traffic, you should have a monitoring tool for your network, so that you can measure how much traffic each experiment actually saves.

If you don’t know any traffic monitoring tool yet, try iftop. This tool shows you the real-time traffic throughput for every IP connected to your server. iftop is just like the top command in Linux, but instead of monitoring CPU or RAM, it monitors your network.

Install iftop

# Fedora or CentOS
$ sudo yum install iftop -y

# Ubuntu or Debian
$ sudo apt-get install iftop

Run iftop

$ sudo iftop -n -B

Here, -n disables DNS resolution so hosts are shown as raw IPs, and -B displays throughput in bytes per second instead of bits.

iftop console

After you run iftop, the iftop console will appear, laid out as described below.

The first column is the IP of your current machine.

The second column lists the IPs of the remote hosts connected to your server. For each IP, the first row is the traffic sent to the remote host (notice the => arrow) and the second row is the traffic received from that remote host (hence the <= arrow).

The third column shows the real-time traffic rates for each connection: the first sub-column is the average bandwidth over the last 2 seconds, the second over the last 10 seconds, and the third over the last 40 seconds. Most of the time, I look at this third column.
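
If your server has more than one network interface, you can also point iftop at a specific interface (the interface name here is just an example):

$ sudo iftop -n -B -i eth0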

Decide what to optimize

Deciding which cost to optimize often depends on your hosting plan.

Some hosting plans do not charge for traffic. If that’s the case, optimizing network traffic is more about resource optimization: the lower the network traffic, the more requests your server can handle with the same CPU, RAM and network interface.

Some hosting plans do not charge for incoming traffic and only charge for outgoing traffic. For example, Google Cloud only charges for outgoing traffic, and the rates differ by destination zone. Traffic between internal IPs is a lot cheaper than traffic between external IPs across zones, regions, or continents.

By looking at the network traffic between your server and the remote IPs, together with the traffic pricing of your hosting service, you can decide which traffic to optimize first and what can be left for later.

Tip #1. Change to internal IPs where possible

If your hosting plan charges a lot more for external traffic than for internal traffic, changing your applications to talk to one another over internal IPs can save you a lot of money.

For example, Google Cloud does not charge for traffic between internal IPs within the same zone. Therefore, moving your servers into the same zone and configuring them to talk to one another over internal IPs can save you a lot. Check your redis, memcached, kafka, rabbitmq, mysql or whatever services can run internally, and make sure their configurations use the internal addresses.
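
For example, a quick way to spot services that are still configured with external addresses is to grep their config files. The paths below are common defaults and only examples; adjust them to your own setup:

$ grep -n "DB_HOST" /var/www/html/wp-config.php                # where WordPress connects to MySQL
$ grep -n "^bind" /etc/redis/redis.conf                        # the address redis listens on
$ grep -n "bind-address" /etc/mysql/mysql.conf.d/mysqld.cnf    # the address MySQL listens on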

This action can reduce your traffic cost by 3-10 times.

Tip #2. Enable gzip

If you want to know more about gzip, check this post.

By enabling gzip, the traffic sent out will be compressed and therefore you can save a lot of network bandwidth.

This action can reduce your traffic cost by 3-5 times.

Tip #3. Using a free CDN service

If your website has a lot of images, try using a free CDN service to reduce the cost of serving images.

Believe it or not, the free CDN setup will take only 5 minutes and your traffic can be reduced by 5-10 times.

If you have a WordPress website, just install the Jetpack plugin by WordPress.com, turn on its Photon feature, and your website is now powered by Jetpack’s free CDN service.

If you don’t want to use Jetpack Photon, you can always use the Cloudflare or Incapsula CDN services, which are also free with no limitation on bandwidth.

If your website has a lot of real-time visitors, you can easily see the effect of the CDN by comparing the iftop console with Jetpack Photon enabled and disabled.

To read more on how to use a free CDN service on your website, click here.

Tip #4. Enable lazyload

When lazyload is enabled, the images on your website won’t be loaded until they become visible in the browser. This means the images near the end of your web pages won’t load at first; when the user scrolls down to where the images are located, they are loaded and displayed.

If you have a WordPress website, you can enable lazyload by installing the Lazy Load plugin.

Tip #5. Change your hosting service

I don’t know if this should be counted as a tip. Anyway, if your current hosting service is charging too much for your traffic, consider changing to another hosting plan or service. Some hosting services, such as BlueHost, GoDaddy and OVH, do not charge for traffic.

However, even if you switch to a hosting service with free traffic, you can still consider applying the above tips as they can make your website perform better with less hardware resources.

 

How do these tips work for you? Let me know in the comment! 😀

3 best free CDN services to speed up your website and optimize cost

Introduction

If you own a website and your traffic is growing, it’s time to look into using a CDN for your website. Here are the 3 best free CDN services that you can use.

Why should I use a CDN

As your website’s traffic grows, your web server may spend a lot of resources serving static files, especially media files like images. Your traffic cost may become a pain in the ass because, on an average website, the traffic sent for images is usually 4-10 times the traffic sent for the html, css, and javascript content. CDNs, on the other hand, are very good at caching images and optimizing static file serving, as well as choosing the best server in their network to deliver the images to the end user (usually the one nearest to the user).

So, it’s time for your server to do what it’s best at, which is generating web content, and let the CDNs do what they are best at. That also saves you a bunch, since you will pay a lot less for your web traffic. Moreover, the setup is unbelievably easy.

1. Jetpack’s Photon

If you have a WordPress website, the fastest and easiest way to give your website CDN power is to use the Photon feature of the Jetpack by WordPress.com plugin.

First, you will have to install and activate the plugin.

The plugin will ask you to log in with a WordPress.com account. Don’t worry. Just create a free WordPress.com account and log in.

In the Jetpack settings page, switch to the Appearance tab and enable the Photon option.

That’s it. Now your images will be served from WordPress.com’s CDN, and that doesn’t cost you a cent.

How Jetpack’s Photon works

Jetpack’s Photon hooks into the rendering process of your web pages and changes your image urls to the ones cached on WordPress.com CDN. Therefore, every time your users open your web pages, they will be downloading cached images from WordPress.com CDN instead of your web server.

Known issues

Jetpack’s Photon uses its own algorithm to decide the image resolution served to users. In some rare cases, the algorithm doesn’t work so well and the image gets stretched out of its original aspect ratio. For example, if your image is actually 720×480 but its file name is my_image_720x720.jpg, Photon will guess that the ratio is 720×720 and set the width and height of the img tag to a 1:1 ratio, while the cached image is still 720×480, so the image appears stretched.

Except for that, everything works perfectly for me.

If you ask if I would recommend using Jetpack’s Photon CDN, the answer is definitely yes.

2. Cloudflare

Cloudflare offers a free plan with no limit on bandwidth or traffic. The free plan just lacks some advanced features like DDoS protection and a web application firewall, which most of us may not need.

Cloudflare requires you to change the NS records of your domain to their servers, and that’s it. Cloudflare will take care of the rest. You don’t have to do anything else.

How Cloudflare works

After replacing your domain’s NS records with Cloudflare’s, all your users are directed to Cloudflare’s servers. When a user requests a resource on your website, whether an html page, an image, or anything else, Cloudflare serves the cached version from their CDN network without touching your web server. If a cached version does not exist or has expired, Cloudflare asks your web server for the resource, sends it to the user, and caches it on their CDN if the resource is suitable for caching.

I find Cloudflare doesn’t have the image ratio problem that Photon has, since Cloudflare doesn’t change the html tags but serves the original html content. The CDN works without changing the image URLs, because your domain records already point to Cloudflare’s servers, thanks to the NS records we delegated to their name servers earlier.

3. Incapsula

Incapsula offers much the same thing as Cloudflare. You will have to edit your domain records to point to their servers. However, with Incapsula, you don’t have to change your NS records: you just change the A record and the www record, which may sound somewhat less scary than giving the service full control of your domain as in the case of Cloudflare.

Incapsula works the same way as Cloudflare. It redirects all the requests to its servers and serves the cached version first if available.

Final words

Trying these CDN services does not cost you anything, and on the other hand may save you a lot of traffic costs as well as make your website more scalable. I would recommend that you try at least one of these services. If you don’t like a CDN after using it, you can always disable it and everything will be back to normal. In my case, the CDN saves me 80 percent of my traffic cost, even though my website does not have a lot of images.

 

Did you find this post helpful? What CDN do you use? Tell me in the comment! 😀

How to enable gzip compression for your website

Most of the time, gzip compression will make your server perform better and be more resource efficient. This tutorial will show you how to enable gzip compression for your web server.

What is gzip compression

Gzip is a method of compressing files (making them smaller) for faster network transfers. It is also a file format but that’s out of the scope of this post.

Compression allows your web server to provide smaller file sizes which load faster for your website users.

Enabling gzip compression is a standard practice. If you are not using it for some reason, your webpages are likely slower than your competitors.

Enabling gzip also makes your website score better on Search Engines.

How compressed files work on the web

When a request is made by a browser for a page from your site, your webserver returns the smaller compressed file if the browser indicates that it understands the compression. All modern browsers understand and accept compressed files.

How to enable gzip on Apache web server

To enable compression in Apache, add the following directives to your config file (or .htaccess file):

AddOutputFilterByType DEFLATE text/plain
AddOutputFilterByType DEFLATE text/html
AddOutputFilterByType DEFLATE text/xml
AddOutputFilterByType DEFLATE text/css
AddOutputFilterByType DEFLATE application/xml
AddOutputFilterByType DEFLATE application/xhtml+xml
AddOutputFilterByType DEFLATE application/rss+xml
AddOutputFilterByType DEFLATE application/javascript
AddOutputFilterByType DEFLATE application/x-javascript
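
These directives are provided by Apache’s mod_deflate module, so make sure that module is enabled. On Debian/Ubuntu, for example, you can enable it and reload Apache like this (assuming the standard apache2 packaging):

$ sudo a2enmod deflate
$ sudo systemctl reload apache2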

How to enable gzip on your Nginx web server

To enable compression in Nginx, you will need to add the following code to your config file

gzip on;
gzip_comp_level 2;
gzip_http_version 1.0;
gzip_proxied any;
gzip_min_length 1100;
gzip_buffers 16 8k;
gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;

# Disable for IE < 6 because there are some known problems
gzip_disable "MSIE [1-6].(?!.*SV1)";

# Add a vary header for downstream proxies to avoid sending cached gzipped files to IE6
gzip_vary on;

As with most other directives, the directives that configure compression can be included in the http context or in a server or location configuration block.

The overall configuration of gzip compression might look like this.

server {
    gzip on;
    gzip_types      text/plain application/xml;
    gzip_proxied    no-cache no-store private expired auth;
    gzip_min_length 1000;
    ...
}
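
After saving the config, check the syntax and reload nginx so the changes take effect (shown here for a systemd-based system):

$ sudo nginx -t
$ sudo systemctl reload nginx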

How to enable gzip on your WordPress

If you have a WordPress website and you can’t edit the Apache or Nginx config file, you can still enable gzip using a plugin like WP Super Cache by Automattic.

After installing the plugin, go to its Advanced Settings tab and check the setting “Compress pages so they’re served more quickly to visitors. (Recommended)” to enable gzip compression.

However, keep in mind that this plugin comes with a lot more features, some of which you may not want. So if you don’t like the extra features, you can use a simpler plugin like Gzip Ninja Speed Compression or Check and Enable Gzip Compression.

How to check if gzip is successfully enabled and working

Using Firefox to check gzip compression

If you are using Firefox, do the following steps:

  • Open Developer Tools by one of these methods:
    • Menu > Developer > Toggle Tools
    • Ctrl + Shift + I
    • F12
  • Switch to Network Tab in the Developer Tools.
  • Launch the website that you want to check.
  • If gzip is working, the request to html, css, javascript and text files will have the Transferred column smaller than the Size column, where Transferred column displays the size of the compressed content that was transferred, and the Size column shows the size of the original content before compression.

Using Chrome to check gzip compression

If you are using Chrome, do the following:

  • Open Developer Tools by one of these methods:
    • Menu > More tools > Developer Tools
    • Ctrl + Shift + I
    • F12
  • Switch to Network Tab in the Developer Tools.
  • Launch the website that you want to check.
  • Click on the request you want to check (html, css, javascript or text files), the request detail will be displayed.
  • Toggle Response Headers of that request.
  • Check for Content-Encoding: gzip.
  • If gzip is working, the Content-Encoding: gzip will be there.
  • Make sure you check the Response Headers and not the Request Headers.
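
You can also check from the command line with curl, by sending an Accept-Encoding header and looking for Content-Encoding in the response headers (replace example.com with your own domain):

$ curl -s -H "Accept-Encoding: gzip" -D - -o /dev/null https://example.com/ | grep -i content-encoding

If gzip is working, the command prints Content-Encoding: gzip.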

Frequently Asked Questions

How efficient is gzip?

As you can see in the Firefox Developer Tools Network Tab, the compressed size is normally one third or one fourth of the original size. This ratio differs from request to request, but that’s usually the ratio for html, css, javascript and text files.

Will gzip make my server slower?

OK, that’s a smart question. Since the server has to do extra work to compress the response, it needs some more CPU power. However, the CPU power saved while transferring the smaller response usually makes up for that, and often more than makes up for it. Therefore, at the end of the day, your server normally ends up more CPU efficient overall.

Should I enable gzip for image files (and media files in general)?

Image files are usually already compressed, so gzip-compressing an image will not save you many bytes (normally less than 5%), but it requires a lot of processing resources. Therefore, you shouldn’t enable gzip for your images and should only enable gzip for html, css, javascript and text files.

How to schedule tasks in Linux using crontab

Introduction

Crontab is a utility in Linux that allows us to run jobs or commands on a specific schedule, just like Task Scheduler in Windows.

What is crontab

The crontab (cron derives from chronos, Greek for time; tab stands for table) command, found in Unix and Unix-like operating systems, is used to schedule commands to be executed periodically. To see what crontabs are currently running on your system, you can open a terminal and run:

sudo crontab -l

If you have never used it before, the system may tell you that there’s no crontab for your current user.

To edit the cronjobs, you can run

sudo crontab -e

How crontab works

Each user in a Linux system has a file located under

/var/spool/cron/

(the exact path varies by distribution) containing a list of commands that will be executed on a schedule.

The content of a crontab file looks like this:

# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command
0 */3 * * * /home/myname/scripts/my-script.sh

Notice the last line

0 */3 * * * /home/myname/scripts/my-script.sh

That line tells the system to run /home/myname/scripts/my-script.sh every 3 hours.

A crontab file can have multiple entries, each of which represents a cron job.

Note that this file is not intended to be edited directly, but rather by using the crontab command.

Don’t worry, we will go into detail how that works in a couple of minutes.

Structure of a crontab entry

As you can see in the last section, a crontab entry has a syntax like this:

# m h dom mon dow command
0 */3 * * * /home/myname/scripts/my-script.sh

For each entry, the following parameters must be included:

  1. m, a number (or list of numbers, or range of numbers), representing the minute of the hour (0-59);
  2. h, a number (or list of numbers, or range of numbers), representing the hour of the day (0-23);
  3. dom, a number (or list of numbers, or range of numbers), representing the day of the month (1-31);
  4. mon, a number (or list, or range), or name (or list of names), representing the month of the year (1-12);
  5. dow, a number (or list, or range), or name (or list of names), representing the day of the week (0-7, 0 or 7 is Sunday); and
  6. command, which is the command to be run, exactly as it would appear on the command line.

A field may be an asterisk (*), which always stands for “first through last”.

Ranges of numbers are allowed. Ranges are two numbers separated with a hyphen. The specified range is inclusive; for example, 8-11 for an “hours” entry specifies execution at hours 8, 9, 10 and 11.

Lists are allowed. A list is a set of numbers (or ranges) separated by commas. Examples: “1,2,5,9“, “0-4,8-12“.

Step values can be used in conjunction with ranges. For example, “0-23/2” can be used in the hours field to specify command execution every other hour. Steps are also permitted after an asterisk, so if you want to say “every two hours”, you can use “*/2“.

Names can also be used for the “month” and “day of week” fields. Use the first three letters of the particular day or month (case doesn’t matter). Ranges or lists of names are not allowed.

The “sixth” field (the rest of the line) specifies the command to be run. The entire command portion of the line, up to a newline or % character, will be executed by /bin/sh or by the shell specified in the SHELL variable of the cronfile. Percent-signs (%) in the command, unless escaped with backslash (\), will be changed into newline characters, and all data after the first % will be sent to the command as standard input.
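
For example, this hypothetical entry uses % signs to pass a two-line body on standard input to the mail command (cron turns each unescaped % into a newline):

0 5 * * * mail -s "Morning report" yourname@yourdomain.com%Hello,%The nightly backup has finished.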

Note that the day of a command’s execution can be specified by two fields: day of month, and day of week. If both fields are restricted (in other words, they aren’t *), the command will be run when either field matches the current time. For example, “30 4 1,15 * 5” would cause a command to be run at 4:30 am on the 1st and 15th of each month, plus every Friday.

How to use crontab examples

List current crontab

crontab -l

Edit your crontab

crontab -e

This command will open up the crontab in an editor (usually vi or vim). Edit and save your cron jobs as you want.

Remove your crontab (un-scheduling all crontab jobs)

crontab -r

Back up your crontab to my-crontab file

crontab -l > my-crontab

Load your crontab from my-crontab file

crontab my-crontab

This does the same syntax checking as crontab -e.

Edit the crontab of the user named “charles”

sudo crontab -u charles -e

Note that the -u option requires administrator privileges, so that command is executed using sudo.

View the crontab of user “jeff”

sudo crontab -u jeff -l

Remove the crontab of user “sandy”

sudo crontab -u sandy -r

Check if your crontab is working

If you want to check whether your cron job is executed as expected, or when it was last run, check the cron log as follows.

On a default installation the cron jobs get logged to

/var/log/syslog

You can see just cron jobs in that logfile by running

grep CRON /var/log/syslog

If you haven’t reconfigured anything, the entries will be in there.
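
On systemd-based distributions you can also read the cron daemon’s log from the journal (the service is usually named cron on Debian/Ubuntu and crond on RHEL/CentOS):

$ journalctl -u cron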

Examples of crontab entries

Run the shell script /home/melissa/backup.sh on January 2 at 6:15 A.M.

15 6 2 1 * /home/melissa/backup.sh

Same as the above entry. Zeroes can be added at the beginning of a number for legibility, without changing their value.

15 06 02 Jan * /home/melissa/backup.sh

Run /home/carl/hourly-archive.sh every hour, on the hour, from 9 A.M. through 6 P.M., every day.

0 9-18 * * * /home/carl/hourly-archive.sh

Run /home/wendy/script.sh every Monday, at 9 A.M. and 6 P.M.

0 9,18 * * Mon /home/wendy/script.sh

Run /usr/local/bin/backup at 10:30 P.M., every weekday.

30 22 * * Mon,Tue,Wed,Thu,Fri /usr/local/bin/backup

Run /home/bob/script.sh every 3 hours.

0 */3 * * * /home/bob/script.sh

Run /home/mirnu/script.sh every minute (of every hour, of every day of the month, of every month and every day in the week).

* * * * * /home/mirnu/script.sh

Run /home/mirnu/script.sh 10 minutes past every hour on the 1st of every month.

10 * 1 * * /home/mirnu/script.sh

Special keywords

Instead of the five time fields, you can also put in one of these special keywords:

@reboot Run once, at startup
@yearly Run once a year "0 0 1 1 *"
@annually (same as @yearly)
@monthly Run once a month "0 0 1 * *"
@weekly Run once a week "0 0 * * 0"
@daily Run once a day "0 0 * * *"
@midnight (same as @daily)
@hourly Run once an hour "0 * * * *"

The keyword replaces all five time fields, so this would be a valid entry:

@daily /bin/execute/this/script.sh

Storing the crontab output

By default cron saves the output of /bin/execute/this/script.sh in the user’s mailbox (root in this case). But it’s prettier if the output is saved in a separate logfile. Here’s how:

*/10 * * * * /bin/execute/this/script.sh >> /var/log/script_output.log 2>&1

Explanation

Linux processes report on two levels: standard output (STDOUT) and standard error (STDERR). STDOUT is file descriptor 1, STDERR is file descriptor 2. So the following statement tells Linux to redirect STDERR into STDOUT as well, creating one data stream for messages and errors:

2>&1

Now that we have one output stream, we can pour it into a file. Where > will overwrite the file, >> will append to it. In this case we’d like to append:

>> /var/log/script_output.log

Mailing the crontab output

By default cron saves the output in the user’s mailbox (root in this case) on the local system. But you can also configure crontab to forward all output to a real email address by starting your crontab with the following line:

MAILTO="yourname@yourdomain.com"

Mailing the crontab output of just one cronjob

If you’d rather receive only one cronjob’s output in your mail, make sure this package is installed:

$ sudo aptitude install mailx

And change the cronjob like this:

*/10 * * * * /bin/execute/this/script.sh 2>&1 | mail -s "Cronjob output" yourname@yourdomain.com

Trashing the crontab output

Now that’s easy:

*/10 * * * * /bin/execute/this/script.sh > /dev/null 2>&1

Just pipe all the output to the null device, also known as the black hole. On Unix-like operating systems, /dev/null is a special file that discards all data written to it.

Caveats

Many scripts are tested in a Bash environment with a full PATH variable set. As a result, a script may work in your shell but fail when run from cron (where PATH is much more limited) because it cannot find the executables it references.

It’s not the job of the script to set PATH, it’s the responsibility of the caller, so it can help to echo $PATH, and put PATH=<the result> at the top of your cron files (right below MAILTO).
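
As a minimal sketch, the top of such a crontab might look like this; the PATH value is just a typical example, so use the output of echo $PATH from your own shell instead:

MAILTO="yourname@yourdomain.com"
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

0 */3 * * * /home/myname/scripts/my-script.sh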

[Solved] Dell Precision M3800 sleep issue: does not wake up after sleep

Introduction

If you own a Dell Precision M3800, you may have faced the common Dell Precision M3800 sleep issue, where your laptop freezes at a black screen when you try to wake it from sleep.

The symptoms

I bought my M3800 and upgraded it to Windows 10. Everything worked fine until one day. I believe the problem began after some Windows automatic updates.

At first, I noticed that if I left the laptop sleeping through the night, sometimes it froze at a black screen when I opened the lid the next day. This happened more and more frequently over time.

Then, after some other Windows updates I believe, the problem became more serious. Whenever the laptop went to sleep, it wouldn’t wake up even if I opened it only several minutes later. If I held the power button for 10-20 seconds, it might wake up if it wanted to, or just stayed dead if it decided it needed some more sleep. Then I had to hold the power button for 10-20 seconds several times to do a hard shutdown and start it up again.

Some approaches

Obviously, I went to ask Dr. Google for the solution. The good thing was that a lot of other people had the same issue as mine (am I a little bit mean to think it’s good that other people have problems? :D).

Here are some of the solutions:

  • Some said that turning “Allow hybrid sleep” in “Edit power plan” to either On or Off worked for them.
  • Some said that upgrading the BIOS solved their problem.
  • Some said that after upgrading the Display Driver (both Intel and Nvidia), the problem disappeared.
  • Some said that rolling back the Nvidia Driver to a previous stable version fixed their problem.
  • Some said that disabling the Nvidia Display Adapter (in Device Manager console) fixed their problem.
  • Some said that Dell told them the problem was because of the motherboard, and they sent their laptop to Dell for motherboard replacement. After that, the problem was fixed.

Unfortunately, none of the solutions above worked for me. However, they may work for you, so you may want to give them a try.

What worked for me

Desperate, I went to the BIOS settings to see if there was anything I could do. Surprisingly, I came up with this solution, which worked for me:

  • Go to BIOS settings (press F2 at the boot screen).
  • Find the Option “USB Wake Support” and change it to “Enabled” (from the manual: this option allows you to enable USB devices to wake the system from Standby).
  • Save and quit (press F10).

Then, when I restarted the laptop, waking from sleep was back to normal.

I did try disabling that option later on just to be sure, and the problem did come back.

So, I re-applied the solution and the laptop woke up perfectly again.

Now when I restart the laptop, everything works beautifully, except for one thing: the laptop wakes up whenever I move my mouse.

The laptop wakes up every time I move my mouse?

Now that I have enabled “USB Wake Support”, another headache appears: whenever I move my wireless mouse, or click on it, the laptop wakes up.

Fortunately, I can turn that feature off for my wireless mouse and still keep the BIOS setting enabled.

  • Go to Device Manager
  • Expand Mice and other pointing devices
  • Find your wireless mouse and right click, choose Properties
  • In the mouse properties window, click the Power Management tab
  • Uncheck “Allow this device to wake the computer”

That’s it. Now my laptop works normally again.

If your mouse does not have the Power Management tab, you may have to turn off your mouse every time you put your computer to sleep.

Do these tips work for you? Let me know in the comment! 😀

Most useful WordPress plugins

Introduction

WordPress is no doubt the most popular framework for building a website. With WordPress, you can build a website from scratch within 5 minutes. If you are a WordPress beginner, the following plugins may help you get started even faster.

1. Insert Headers and Footers

Developed by WPBeginner

Insert headers and footers configuration screenshot

This plugin lets you insert JavaScript, meta tags, html tags, etc. in the header or footer of all pages in your website. While the concept is simple, this plugin becomes super handy when you want to add 3rd party services to your website via html or JavaScript.

Adding Google Analytics tracking script

If you don’t know it yet, Google Analytics is the best analytics tool out there for your website and what’s even better is it’s totally free. I believe almost every website in the world is integrated with Google Analytics. Therefore, I would highly recommend that you have your website integrated with this amazing tool.

When you want to integrate Google Analytics tracking script into your WordPress website, just copy the script provided by Google and paste it into the footer slot in the plugin configuration page. That’s it! Done! No additional plugin needed. No HTML editing needed. Your site is now fully integrated with Google Analytics.

Adding Webmasters Tools (Search Console) verification meta tag

While Webmasters Tools provide a lot of ways to verify your ownership of your website, the easiest way would be adding the verification meta tag to your website’s header. With Insert Headers and Footers plugin, it is as easy as copying and pasting the verification meta tag into the header slot in the plugin configuration page. Save the configuration and now you can go to Webmasters Tools and ask Google to verify your website.

Adding other 3rd party JavaScript

As your web building journey goes on, you will find yourself integrating a lot more 3rd party services to your website via JavaScript. In most of the cases, the 3rd party tells you to add a tracking script to the header or footer of every page in your website.

With this plugin installed, you just have to go to the plugin configuration page and add the new script to the existing other scripts that are already there. Oh! Did I forget to mention that you can add multiple scripts to the same slot (header or footer)? You can do that just by pasting the scripts one after another in the slot you want (header or footer). The whole slot’s content will be rendered into the html of every page in your website.

Header or footer?

In most of the cases, 3rd party scripts can be inserted into either header or footer and still work perfectly. However, whenever it’s possible, you should put the scripts in the footer because that will make your website load faster as your website will have a chance to render the content before loading the JavaScript.

2. Yoast SEO

Developed by Team Yoast

This plugin makes your SEO life a lot easier, especially if you are a beginner to SEO.

After this plugin is installed and activated, it will give you a configuration page with several tabs. The configurations are very easy to understand and the default settings are good enough, so you can just leave the default there and you’re good to go.

Now your website front-end already has the following features:

  • Auto generated meta description, social sharing meta tag (based on configured template),
  • Auto generated meta title (based on configured template),
  • Auto generated sitemap xml (default to yoursite/sitemap_index.xml).

And your back-end is provided with the following tools:

  • A dashboard on SEO performance,
  • Titles and Metas configurations,
  • Social platform integrations,
  • XML sitemaps configurations,
  • Robots.txt editor.

Yoast SEO also adds a form in your regular post editor, where it allows you to specify a keyword that you want your post to focus on, and then it uses some best practices’ rules to score the SEO performance of your post versus that focused keyword and also score the readability of the post. It also tells you what you did well and what you can improve to make your score better.

Yoast SEO has more than 1 million active installs (as of 2016). The number says it all. I would definitely recommend this plugin, especially if you are new to SEO.

3. Image Teleporter

Developed by Blue Medicine Labs

When you copy an image from another web page into your post editor, the image URL still points to the original host, not your WordPress website’s host. This plugin turns images in your posts that are hosted externally into images that are uploaded to your Media Library.

When you finish editing your post and hit Publish or Update, this plugin finds images that are still hosted externally in your post, downloads them, uploads them to your Media Library, changes the image links of your post to the Media Library version and saves the post again.

By moving externally linked images into your Media Library, your website’s performance will not be affected by the external host’s performance. Moreover, if the external host goes down, the images in your posts will still load beautifully.

Imagine the workload you would have to do without this plugin: downloading, uploading, inserting the image for every image in your post, multiplied by the number of posts you have; not to mention the troubles when the external host goes down several years later. Now, while this brilliant plugin is doing all the hard work, you can go out and drink beers with your buddies!

4. Jetpack by WordPress.com

Developed by Automattic

Jetpack provides us with a lot of cool features out of the box. Below are some of those:

  • Photon – a free CDN service by Jetpack. Jetpack automatically caches the images on our website on their CDN and serves static content like images to our visitors from their CDN. This feature helps take load off our server, with no cost at all.
  • Site stats: traffic, visitors
  • Infinite scroll: Load more posts as the reader scrolls down
  • Social: Add social sharing buttons to your posts, automatically share your posts to your fan page

There are a lot more waiting to be discovered. I would recommend that you give this awesome plugin a try.

5. Page Builder by SiteOrigin

Developed by SiteOrigin
Also install SiteOrigin Widgets Bundle by the same developer.

SiteOrigin Page Builder is the most popular page creation plugin for WordPress. It makes it easy to create responsive column based content, using the widgets you know and love. Your content will accurately adapt to all mobile devices, ensuring your site is mobile-ready. Read more on SiteOrigin.

With SiteOrigin, your editors immediately become professional html designers. All the layouts and components you can think of (Google Maps, carousel sliders, etc.) can be built with SiteOrigin using only drag-and-drop. Not a single line of code or html editing is necessary.

I would strongly recommend that you install this plugin and give it a try.

6. WP Super Cache

Developed by Automattic

If your website content does not need to be 100% real-time, this plugin can help a lot with your hosting cost, because you can now use only one tenth of your hardware resources to serve the same amount of traffic. Moreover, the settings are pre-configured out of the box. You can install the plugin and just use the default options, yet everything still works perfectly.

This plugin generates static html files from your dynamic WordPress blog. After an html file is generated, your webserver will serve that file instead of processing the comparatively heavier and more expensive WordPress PHP scripts.

The static html files will be served to the vast majority of your users. However, because a user’s details are displayed in the comment form after they leave a comment, those requests are handled by the legacy caching engine instead. Static files are served to:

  • Users who are not logged in.
  • Users who have not left a comment on your blog.
  • Or users who have not viewed a password protected post.

99% of your visitors will be served static html files. Those users who don’t see the static files will still benefit, because they will see different cached files that aren’t quite as efficient but are still better than uncached pages. This plugin will help your server cope with a front page appearance on digg.com or another social networking site.

7. WP Mail SMTP

Developed by Callum Macdonald

Some cloud hosting providers, including Google Cloud, Amazon Web Services and Microsoft Azure, won’t let us send emails via standard SMTP from within the web server. To send emails, we should use a 3rd party email service like SendGrid, Mailgun, etc. that allows us to use a custom SMTP port that can get through the cloud firewall policy.

This plugin helps us configure SMTP and also test the configuration easily with only one page of configuration. Without this plugin, configuring SMTP settings and testing them would be a pain, especially if you don’t have ssh access to the hosting server.

The plugin configuration page is so simple that an average user can do it by himself.

8. Contact Form 7

Developed by Takayuki Miyoshi

If you are looking for a contact form for your website, look no more. The Contact Form 7 plugin provides a contact form out of the box. It also integrates well with Google’s reCAPTCHA service to protect your website from spam. The configuration is simple yet very flexible.

If you already have the SiteOrigin plugin installed, SiteOrigin also comes with a very powerful contact form in its Page Builder feature. In my opinion, SiteOrigin’s contact form and Contact Form 7 are equally good. Choosing between them is just a matter of personal preference.

 

Conclusion

In this topic, I have mentioned the plugins that I find useful and install on all of my WordPress websites. Hope they can help you, too!

What are your favorite WordPress plugins? Let me know in the comments! 😀

Nginx security vulnerabilities and hardening best practices – part II: SSL

Have you read part I? Nginx security vulnerabilities and hardening best practices – part I

Introduction

HTTP is a plain text protocol, open to man-in-the-middle attacks and passive monitoring. If our website allows users to authenticate, we should use SSL to encrypt the content sent and received between users and our web server.

Google has already considered HTTPS one of their ranking factors:

Security is a top priority for Google. We invest a lot in making sure that our services use industry-leading security, like strong HTTPS encryption by default. That means that people using Search, Gmail and Google Drive, for example, automatically have a secure connection to Google.

Beyond our own stuff, we’re also working to make the Internet safer more broadly. A big part of that is making sure that websites people access from Google are secure. For instance, we have created resources to help webmasters prevent and fix security breaches on their sites.

We want to go even further. At Google I/O a few months ago, we called for “HTTPS everywhere” on the web.

HTTPS is becoming the standard on the web. To help secure it, organizations such as Let’s Encrypt issue free SSL certificates that we can use on our websites. So there’s no excuse for not applying HTTPS on our website.

The default SSL configuration on nginx is vulnerable to some common attacks. SSL Labs provides a free scan to check whether our SSL configuration is secure enough. If our website gets anything below an A, we should review our configuration.

In this tutorial, we are going to look at some common SSL vulnerabilities and how to harden nginx’s SSL configuration against them. At the end of this tutorial, hopefully we can get an A+ in SSL Labs’s test.

Prerequisites

In this tutorial, let’s assume that we already have a website hosted on an nginx server. We have also bought an SSL certificate for our domain from a Certificate Authority or got a free one from Let’s Encrypt.

If you need more information on SSL vulnerabilities, see the references at the end of this post.

We are going to edit the nginx settings in the file /etc/nginx/sites-enabled/yoursite.com (on Ubuntu/Debian) or in /etc/nginx/conf.d/nginx.conf (on RHEL/CentOS).

Throughout the tutorial, you will edit the server block that listens on port 443 (the SSL config). At the end of the tutorial you can find a complete config example.

Make sure you back up the files before editing them!

The BEAST attack and RC4

In short, by tampering with an encryption algorithm’s CBC (cipher block chaining) mode, portions of the encrypted traffic can be secretly decrypted.

Recent browser versions have enabled client-side mitigation for the BEAST attack. The old recommendation was to disable all TLS 1.0 CBC ciphers and only offer RC4. However, RC4 has a growing list of attacks against it, many of which have crossed the line from theoretical to practical. Moreover, there is reason to believe that the NSA has broken RC4, their so-called “big breakthrough.”

Disabling RC4 has several ramifications. One, users with old browsers such as Internet Explorer on Windows XP will use 3DES instead. Triple-DES is more secure than RC4, but it is significantly more expensive. Your server will pay the cost for these users. Two, RC4 mitigates BEAST. Thus, disabling RC4 makes TLS 1.0 users susceptible to that attack, by moving them to AES-CBC (the usual server-side BEAST “fix” is to prioritize RC4 above all else). I am confident that the flaws in RC4 significantly outweigh the risks from BEAST. Indeed, with client-side mitigation (which Chrome and Firefox both provide), BEAST is a nonissue. But the risk from RC4 only grows: more cryptanalysis will surface over time.

Factoring RSA-EXPORT Keys (FREAK)

FREAK is a man-in-the-middle (MITM) vulnerability discovered by a group of cryptographers at INRIA, Microsoft Research and IMDEA. FREAK stands for “Factoring RSA-EXPORT Keys.”

The vulnerability dates back to the 1990s, when the US government banned selling crypto software overseas, unless it used export cipher suites which involved encryption keys no longer than 512-bits.

It turns out that some modern TLS clients – including Apple’s SecureTransport and OpenSSL – have a bug in them. This bug causes them to accept RSA export-grade keys even when the client didn’t ask for export-grade RSA. The impact of this bug can be quite nasty: it admits a ‘man in the middle’ attack whereby an active attacker can force down the quality of a connection, provided that the client is vulnerable and the server supports export RSA.

There are two parts to the attack, as the server must also accept “export-grade RSA.”

The MITM attack works as follows:

  • In the client’s Hello message, it asks for a standard ‘RSA’ ciphersuite.
  • The MITM attacker changes this message to ask for ‘export RSA’.
  • The server responds with a 512-bit export RSA key, signed with its long-term key.
  • The client accepts this weak key due to the OpenSSL/SecureTransport bug.
  • The attacker factors the RSA modulus to recover the corresponding RSA decryption key.
  • When the client encrypts the ‘pre-master secret’ to the server, the attacker can now decrypt it to recover the TLS ‘master secret’.
  • From here on out, the attacker sees plaintext and can inject anything it wants.

Logjam (DH EXPORT)

Researchers from several universities and institutions conducted a study that found an issue in the TLS protocol. In their report, they describe two attack methods.

Diffie-Hellman key exchange allows protocols that depend on TLS to agree on a shared key and negotiate a secure session over a plain-text connection.

With the first attack, a man-in-the-middle can downgrade a vulnerable TLS connection to 512-bit export-grade cryptography, which would allow the attacker to read and change the data. The second threat is that many servers use the same prime numbers for Diffie-Hellman key exchange instead of generating their own unique DH parameters.

The team estimates that an academic team can break 768-bit primes and that a nation-state could break a 1024-bit prime. By breaking one 1024-bit prime, one could eavesdrop on 18 percent of the top one million HTTPS domains. Breaking a second prime would open up 66 percent of VPNs and 26 percent of SSH servers.

Later on in this guide we generate our own unique DH parameters and we use a ciphersuite that does not enable EXPORT grade ciphers. Make sure your OpenSSL is updated to the latest available version and urge your clients to also use upgraded software. Updated browsers refuse DH parameters lower than 768/1024 bit as a fix to this.

Cloudflare has a detailed guide on logjam.

Heartbleed

Heartbleed is a security bug disclosed in April 2014 in the OpenSSL cryptography library, which is a widely used implementation of the Transport Layer Security (TLS) protocol. Heartbleed may be exploited regardless of whether the party using a vulnerable OpenSSL instance for TLS is a server or a client. It results from improper input validation (due to a missing bounds check) in the implementation of the TLS heartbeat extension (RFC 6520); the bug’s name derives from “heartbeat”. The vulnerability is classified as a buffer over-read, a situation where more data can be read than should be allowed.

What versions of the OpenSSL are affected by Heartbleed?

Status of different versions:

  • OpenSSL 1.0.1 through 1.0.1f (inclusive) are vulnerable
  • OpenSSL 1.0.1g is NOT vulnerable
  • OpenSSL 1.0.0 branch is NOT vulnerable
  • OpenSSL 0.9.8 branch is NOT vulnerable

The bug was introduced to OpenSSL in December 2011 and has been out in the wild since OpenSSL release 1.0.1 on 14th of March 2012. OpenSSL 1.0.1g released on 7th of April 2014 fixes the bug.

By updating OpenSSL you are no longer vulnerable to this bug.

SSL Compression (CRIME attack)

The CRIME attack uses SSL Compression to do its magic. SSL compression is turned off by default in nginx 1.1.6+/1.0.9+ (if OpenSSL 1.0.0+ used) and nginx 1.3.2+/1.2.2+ (if older versions of OpenSSL are used).

If you are using an earlier version of nginx or OpenSSL and your distro has not backported this option, then you need to recompile OpenSSL without ZLIB support. This will disable OpenSSL’s DEFLATE compression method. If you do this, you can still use regular DEFLATE compression of your HTML responses.

SSLv2 and SSLv3

SSL v2 is insecure, so we need to disable it. We also disable SSLv3, as TLS 1.0 suffers a downgrade attack, allowing an attacker to force a connection to use SSLv3 and therefore disable forward secrecy.

Again edit the config file:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

Poodle and TLS-FALLBACK-SCSV

SSLv3 is exploitable via the POODLE bug. This is one more major reason to disable it.

Google has proposed an extension to SSL/TLS named TLS_FALLBACK_SCSV that seeks to prevent forced SSL downgrades. It is automatically enabled if you upgrade OpenSSL to the following versions:

  • OpenSSL 1.0.1 has TLS_FALLBACK_SCSV in 1.0.1j and higher.
  • OpenSSL 1.0.0 has TLS_FALLBACK_SCSV in 1.0.0o and higher.
  • OpenSSL 0.9.8 has TLS_FALLBACK_SCSV in 0.9.8zc and higher.

More info on the NGINX documentation

The Cipher Suite

Forward Secrecy ensures the integrity of a session key in the event that a long-term key is compromised. PFS accomplishes this by enforcing the derivation of a new key for each and every session.

This means that when the private key gets compromised it cannot be used to decrypt recorded SSL traffic.

The cipher suites that provide Perfect Forward Secrecy are those that use an ephemeral form of the Diffie-Hellman key exchange. Their disadvantage is their overhead, which can be improved by using the elliptic curve variants.

Of the following two ciphersuites, the first is my recommendation and the second comes from the Mozilla Foundation.

The recommended cipher suite:

ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';

The recommended cipher suite for backwards compatibility (IE6/WinXP):

ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH:DHE-RSA-AES128-GCM-SHA256:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";

If your version of OpenSSL is old, unavailable ciphers will be discarded automatically. Always use the full ciphersuite above and let OpenSSL pick the ones it supports.

The ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect forward secrecy.

Older versions of OpenSSL may not return the full list of algorithms. AES-GCM and some ECDHE are fairly recent, and not present on most versions of OpenSSL shipped with Ubuntu or RHEL.

Prioritization logic

  • ECDHE+AESGCM ciphers are selected first. These are TLS 1.2 ciphers. No known attack currently targets these ciphers.
  • PFS ciphersuites are preferred, with ECDHE first, then DHE.
  • AES 128 is preferred to AES 256. There have been discussions on whether AES256’s extra security is worth the cost, and the result is far from obvious. At the moment, AES128 is preferred, because it provides good security, is really fast, and seems to be more resistant to timing attacks.
  • In the backward compatible ciphersuite, AES is preferred to 3DES. BEAST attacks on AES are mitigated in TLS 1.1 and above, and difficult to achieve in TLS 1.0. In the non-backward compatible ciphersuite, 3DES is not present.
  • RC4 is removed entirely. 3DES is used for backward compatibility. See discussion in #RC4_weaknesses

Mandatory discards

  • aNULL contains non-authenticated Diffie-Hellman key exchanges, that are subject to Man-In-The-Middle (MITM) attacks
  • eNULL contains null-encryption ciphers (cleartext)
  • EXPORT are legacy weak ciphers that were marked as exportable by US law
  • RC4 contains ciphers that use the deprecated ARCFOUR algorithm
  • DES contains ciphers that use the deprecated Data Encryption Standard
  • SSLv2 contains all ciphers that were defined in the old version of the SSL standard, now deprecated
  • MD5 contains all the ciphers that use the deprecated message digest 5 as the hashing algorithm

Extra settings

Make sure you also add these lines:

ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;

When choosing a cipher during an SSLv3 or TLSv1 handshake, normally the client’s preference is used. If this directive is enabled, the server’s preference will be used instead.

More info on ssl_prefer_server_ciphers.

More info on ssl_ciphers.

Forward Secrecy & Diffie Hellman Ephemeral Parameters

The concept of forward secrecy is simple: the client and server negotiate a key that never hits the wire and is destroyed at the end of the session. The server’s RSA private key is used to sign a Diffie-Hellman key exchange between the client and the server. The pre-master key obtained from the Diffie-Hellman handshake is then used for encryption. Since the pre-master key is specific to a connection between a client and a server, and used only for a limited amount of time, it is called ephemeral.

With Forward Secrecy, if an attacker gets a hold of the server’s private key, it will not be able to decrypt past communications. The private key is only used to sign the DH handshake, which does not reveal the pre-master key. Diffie-Hellman ensures that the pre-master keys never leave the client and the server, and cannot be intercepted by a MITM.

All versions of nginx as of 1.4.4 rely on OpenSSL for input parameters to Diffie-Hellman (DH). Unfortunately, this means that Ephemeral Diffie-Hellman (DHE) will use OpenSSL’s defaults, which include a 1024-bit key for the key-exchange. Since we’re using a 2048-bit certificate, DHE clients will use a weaker key-exchange than non-ephemeral DH clients.

We need to generate a stronger DHE parameter:

cd /etc/ssl/certs
openssl dhparam -out dhparam.pem 4096

And then tell nginx to use it for DHE key-exchange:

ssl_dhparam /etc/ssl/certs/dhparam.pem;

Note that generating a 4096-bit key will take a long time (from 30 minutes to several hours). Although a 4096-bit key is recommended, you can use a 2048-bit one for now. However, a 1024-bit key is NOT acceptable.

OCSP Stapling

When connecting to a server, clients should verify the validity of the server certificate using either a Certificate Revocation List (CRL) or an Online Certificate Status Protocol (OCSP) record. The problem with CRLs is that the lists have grown huge and take forever to download.

OCSP is much more lightweight, as only one record is retrieved at a time. But the side effect is that OCSP requests must be made to a 3rd party OCSP responder when connecting to a server, which adds latency and potential failures. In fact, the OCSP responders operated by CAs are often so unreliable that browsers will fail silently if no response is received in a timely manner. This reduces security, by allowing an attacker to DoS an OCSP responder to disable the validation.

The solution is to allow the server to send its cached OCSP record during the TLS handshake, therefore bypassing the OCSP responder. This mechanism saves a roundtrip between the client and the OCSP responder, and is called OCSP Stapling.

The server will send a cached OCSP response only if the client requests it, by announcing support for the status_request TLS extension in its CLIENT HELLO.

Most servers will cache OCSP response for up to 48 hours. At regular intervals, the server will connect to the OCSP responder of the CA to retrieve a fresh OCSP record. The location of the OCSP responder is taken from the Authority Information Access field of the signed certificate.

See tutorial on enabling OCSP stapling on NGINX.
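
As a sketch, the relevant nginx directives look like the following; the certificate path is only an example, and the resolver can be any DNS server reachable from your machine:

ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/letsencrypt/live/hexadix.com/chain.pem;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;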

HTTP Strict Transport Security

When possible, you should enable HTTP Strict Transport Security (HSTS), which instructs browsers to communicate with your site only over HTTPS.

Add this to nginx.conf:

add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";

HTTP Public Key Pinning Extension

You should also enable the HTTP Public Key Pinning Extension.

Public Key Pinning means that a certificate chain must include a whitelisted public key. It ensures only whitelisted Certificate Authorities (CA) can sign certificates for *.example.com, and not any CA in your browser store.

Read how to set it up here.
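
For reference, the header itself looks roughly like this in nginx; the pin-sha256 values are placeholders that must be replaced with base64 hashes of your own certificate’s public key and a backup key:

add_header Public-Key-Pins 'pin-sha256="base64PrimaryKeyHash="; pin-sha256="base64BackupKeyHash="; max-age=5184000; includeSubDomains';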

Conclusion

The final SSL config may look something like this:

# HTTP will be redirected to HTTPS
server {
  listen 80;
  server_name hexadix.com www.hexadix.com;
  rewrite ^ https://$server_name$request_uri permanent;
}

# HTTPS server configuration
server {
  listen       443 ssl;
  server_name hexadix.com;

  add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;

  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_prefer_server_ciphers On;
  ssl_session_cache shared:SSL:10m;
  ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
  # ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;
  ssl_certificate /etc/letsencrypt/live/hexadix.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/hexadix.com/privkey.pem;
  ssl_dhparam /etc/ssl/certs/dhparam.pem;
}

After applying the above config lines you need to restart nginx:

# Check the config first:
/etc/init.d/nginx configtest
# Then restart:
/etc/init.d/nginx restart
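Alternatively, on systemd-based systems (like the Ubuntu servers later in this document), the equivalent commands are:

# Check the config first:
nginx -t
# Then restart:
systemctl restart nginx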

Now use the SSL Labs test to see if you get a nice A. And, of course, have a safe, strong and future-proof SSL configuration!

Also read the Mozilla page on the subject.

References

http://www.acunetix.com/blog/articles/nginx-server-security-hardening-configuration-1/
https://geekflare.com/nginx-webserver-security-hardening-guide/
https://nealpoole.com/blog/2011/04/setting-up-php-fastcgi-and-nginx-dont-trust-the-tutorials-check-your-configuration/
http://www.softwareprojects.com/resources/programming/t-optimizing-nginx-and-php-fpm-for-high-traffic-sites-2081.html
https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
https://weakdh.org/sysadmin.html
https://www.cyberciti.biz/tips/linux-unix-bsd-nginx-webserver-security.html

How to setup multiple WordPress on Ubuntu 16.04

How to setup multiple WordPress on Ubuntu 16.04

Introduction

WordPress is the most popular website framework in the world, powering about 30% of the web. Over 60 million people have chosen WordPress to power the place on the web they call “home”.

In this tutorial, we will demonstrate how to set up multiple WordPress websites on the same Ubuntu 16.04 server. The setup includes nginx, MySQL, PHP, and WordPress itself.

Prerequisites

Before you complete this tutorial, you should have a regular, non-root user account on your server with sudo privileges. You can learn how to set up this type of account by completing DigitalOcean’s Ubuntu 16.04 initial server setup.

Once you have your user available, sign into your server with that username. You are now ready to begin the steps outlined in this guide.

Step 1. Install the Nginx Web Server

Install nginx

$ sudo apt-get update
$ sudo apt-get install nginx

Enable ufw to allow HTTP

$ sudo ufw allow 'Nginx HTTP'

Verify the change

$ sudo ufw status

You should see HTTP traffic allowed in the displayed output:

Output
Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere                  
Nginx HTTP                 ALLOW       Anywhere                  
OpenSSH (v6)               ALLOW       Anywhere (v6)             
Nginx HTTP (v6)            ALLOW       Anywhere (v6)

With the new firewall rule added, you can test if the server is up and running by accessing your server’s domain name or public IP address in your web browser.

Navigate your browser to test if nginx is working:

Test Nginx’s default landing page in browser

http://server_domain_or_IP

If you see the default Nginx landing page, you have successfully installed Nginx.

Step 2. Install MySQL to Manage Site Data

Now that we have a web server, we need to install MySQL, a database management system, to store and manage the data for our site.

Install MySQL server

$ sudo apt-get install mysql-server

You will be asked to supply a root (administrative) password for use within the MySQL system.

The MySQL database software is now installed, but its configuration is not exactly complete yet.

To secure the installation, we can run a simple security script that will ask whether we want to modify some insecure defaults. Begin the script by typing:

Secure MySQL installation

$ sudo mysql_secure_installation

You will be asked to enter the password you set for the MySQL root account. Next, you will be asked if you want to configure the VALIDATE PASSWORD PLUGIN.

Warning: Enabling this feature is something of a judgment call. If enabled, passwords which don’t match the specified criteria will be rejected by MySQL with an error. This will cause issues if you use a weak password in conjunction with software which automatically configures MySQL user credentials, such as the Ubuntu packages for phpMyAdmin. It is safe to leave validation disabled, but you should always use strong, unique passwords for database credentials.

Answer y for yes, or anything else to continue without enabling.

VALIDATE PASSWORD PLUGIN can be used to test passwords
and improve security. It checks the strength of password
and allows the users to set only those passwords which are
secure enough. Would you like to setup VALIDATE PASSWORD plugin?

Press y|Y for Yes, any other key for No:

If you’ve enabled validation, you’ll be asked to select a level of password validation. Keep in mind that if you enter 2, for the strongest level, you will receive errors when attempting to set any password which does not contain numbers, upper and lowercase letters, and special characters, or which is based on common dictionary words.

There are three levels of password validation policy:

LOW    Length >= 8
MEDIUM Length >= 8, numeric, mixed case, and special characters
STRONG Length >= 8, numeric, mixed case, special characters and dictionary file

Please enter 0 = LOW, 1 = MEDIUM and 2 = STRONG: 1

If you enabled password validation, you’ll be shown a password strength for the existing root password, and asked if you want to change that password. If you are happy with your current password, enter n for “no” at the prompt:

Using existing password for root.

Estimated strength of the password: 100
Change the password for root ? ((Press y|Y for Yes, any other key for No) : n

For the rest of the questions, you should press Y and hit the Enter key at each prompt. This will remove some anonymous users and the test database, disable remote root logins, and load these new rules so that MySQL immediately respects the changes we have made.

At this point, your database system is now set up and we can move on.

Step 3. Install PHP for Processing

We now have Nginx installed to serve our pages and MySQL installed to store and manage our data. However, we still don’t have anything that can generate dynamic content. We can use PHP for this.

Since Nginx does not contain native PHP processing like some other web servers, we will need to install php-fpm, which stands for “fastCGI process manager”. We will tell Nginx to pass PHP requests to this software for processing.

We can install this module and will also grab an additional helper package that will allow PHP to communicate with our database backend. The installation will pull in the necessary PHP core files. Do this by typing:

Install php-fpm and php-mysql

$ sudo apt-get install php-fpm php-mysql

Configure the PHP Processor

We now have our PHP components installed, but we need to make a slight configuration change to make our setup more secure.

Open the main php-fpm configuration file with root privileges:

$ sudo nano /etc/php/7.0/fpm/php.ini

What we are looking for in this file is the parameter that sets cgi.fix_pathinfo. This will be commented out with a semi-colon (;) and set to “1” by default.

This is an extremely insecure setting because it tells PHP to attempt to execute the closest file it can find if the requested PHP file cannot be found. This basically would allow users to craft PHP requests in a way that would allow them to execute scripts that they shouldn’t be allowed to execute.

We will change both of these conditions by uncommenting the line and setting it to “0” like this:

Change cgi.fix_pathinfo

# /etc/php/7.0/fpm/php.ini

cgi.fix_pathinfo=0

Save and close the file when you are finished.

Now, we just need to restart our PHP processor by typing:

Restart PHP processor

$ sudo systemctl restart php7.0-fpm

This will implement the change that we made.

Step 4. Install WordPress

The plan

To make the best use of the server, we will setup the server so that we can also host other websites in PHP or Python in the future. To do that, we will have the following arrangement:

  • Websites’ source code will stay in
    /www/hexadix.com/
    /www/example.com/
  • Each website configuration will be saved in
    /etc/nginx/sites-enabled/hexadix.com.conf
    /etc/nginx/sites-enabled/example.com.conf
  • The full configuration for each website will be in the above mentioned file. The default nginx configuration file will be removed.
  • The /www folder will be chowned to the nginx user

    By default, nginx will run as www-data or nginx; you can find this out by running ps aux | grep nginx in your terminal.

Using this arrangement, we can freely add future websites in such manner so everything is kept organized.
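To prepare this layout, you can create the web root folders up front (folder names follow the plan above; adjust them to your domains):

$ sudo mkdir -p /www/hexadix.com /www/example.com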

Step 4.1. Create database for your WordPress

Login to MySQL shell

$ mysql -u root -p

Create database and mysql user for our WordPress

mysql> CREATE DATABASE wordpress CHARACTER SET utf8 COLLATE utf8_unicode_ci;
mysql> CREATE USER 'wpuser'@'localhost' IDENTIFIED BY 'wppassword';
mysql> GRANT ALL PRIVILEGES ON wordpress.* TO 'wpuser'@'localhost';
mysql> FLUSH PRIVILEGES;
mysql> SHOW GRANTS FOR 'wpuser'@'localhost';

A common practice here is to use website name as database name, and also as database user name. This way, each database user will have access to only its database, and the naming convention is easy to remember.
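For example, for a second site such as example.com, a sketch following that convention could look like this (the database name, user, and password are illustrative):

mysql> CREATE DATABASE example_com CHARACTER SET utf8 COLLATE utf8_unicode_ci;
mysql> CREATE USER 'example_com'@'localhost' IDENTIFIED BY 'a_strong_password';
mysql> GRANT ALL PRIVILEGES ON example_com.* TO 'example_com'@'localhost';
mysql> FLUSH PRIVILEGES;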

Step 4.2. Setup WordPress folder

Change directory to /www

$ cd /www

Download the latest WordPress.

$ wget http://wordpress.org/latest.tar.gz

Extract it.

$ tar -zxvf latest.tar.gz

Move it to our document root.

$ mv wordpress/* /www/hexadix.com

Copy the wp-config-sample.php file to wp-config.php.

$ cp /www/hexadix.com/wp-config-sample.php /www/hexadix.com/wp-config.php

Edit the config file and fill in the database information.

$ vi /www/hexadix.com/wp-config.php

By default, it will look like below.

# /www/hexadix.com/wp-config.php

// ** MySQL settings – You can get this info from your web host ** //
/** The name of the database for WordPress */
define('DB_NAME', 'database_name_here');
/** MySQL database username */
define('DB_USER', 'username_here');
/** MySQL database password */
define('DB_PASSWORD', 'password_here');
/** MySQL hostname */
define('DB_HOST', 'localhost');

After modifying the entries to match the database and user created above, it will look like this.

# /www/hexadix.com/wp-config.php

// ** MySQL settings – You can get this info from your web host ** //
/** The name of the database for WordPress */
define('DB_NAME', 'wordpress');
/** MySQL database username */
define('DB_USER', 'wpuser');
/** MySQL database password */
define('DB_PASSWORD', 'wppassword');
/** MySQL hostname */
define('DB_HOST', 'localhost');

Make the nginx user the owner of the WordPress directory.

$ sudo chown -R www-data:www-data /www/hexadix.com/

Step 4.3. Configure nginx

Open nginx main configuration file

$ sudo vi /etc/nginx/nginx.conf

Look for the line below.

include /etc/nginx/sites-enabled/*;

Change it to

include /etc/nginx/sites-enabled/*.conf;

The purpose of this step is to tell nginx to only load files with .conf extension. This way, whenever we want to disable a website, we only need to change the extension of that website’s configuration file to something else, which is more convenient.
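For example, to temporarily disable a site you could rename its configuration file (name is illustrative) and reload nginx:

$ sudo mv /etc/nginx/sites-enabled/example.com.conf /etc/nginx/sites-enabled/example.com.conf.disabled
$ sudo nginx -t && sudo systemctl reload nginx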

Test new nginx configuration

$ sudo nginx -t

Reload nginx

$ sudo systemctl reload nginx

If everything works correctly, when you navigate your browser to your website now, the default Welcome to nginx page should no longer be served.

Create nginx configuration file for our WordPress

$ sudo vi /etc/nginx/sites-enabled/hexadix.com.conf

Put the following content in the file

server {
  listen 80;
  server_name hexadix.com www.hexadix.com;
  root /www/hexadix.com;
  index index.php index.html index.htm;

  if (!-e $request_filename) {
    rewrite /wp-admin$ $scheme://$host$uri/ permanent;
    rewrite ^/[_0-9a-zA-Z-]+(/wp-.*) $1 last;
    rewrite ^/[_0-9a-zA-Z-]+(/.*\.php)$ $1 last;
  }
  location / {
    try_files $uri $uri/ /index.php?$args;
  }
  location ~ \.php$ {
    try_files $uri $uri/ /index.php?q=$uri&$args;
    fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
  }
}

Remember to change the server_name and root to match your website. Also, the fastcgi_pass path may differ in your server installation.
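If you are not sure which socket your php-fpm listens on, you can check the pool configuration (paths below assume the PHP 7.0 packages used in this guide):

$ grep "^listen" /etc/php/7.0/fpm/pool.d/www.conf
$ ls /var/run/php/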

Test new nginx configuration

$ sudo nginx -t

Reload nginx

$ sudo systemctl reload nginx

If everything works correctly, by this time you can navigate to your domain (in this case http://hexadix.com) and see your WordPress Installation Wizard page. Follow the instructions in the Wizard to set up an admin account and you’re done.

You can now login to your WordPress administrator site by navigating to:

http://yourdomain.com/wp-admin/

Your front page is at:

http://yourdomain.com

Setup multiple WordPress

To setup another WordPress on the same server, repeat the whole Step 4 in the same manner with the corresponding configurations.
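As a sketch, the nginx part of a second site (using the example.com paths from the plan above; adjust the domain, root, and database details to your site) could look like this:

server {
  listen 80;
  server_name example.com www.example.com;
  root /www/example.com;
  index index.php index.html index.htm;

  location / {
    try_files $uri $uri/ /index.php?$args;
  }
  location ~ \.php$ {
    fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
  }
}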

Conclusion

You should now have multiple WordPress websites running on your Ubuntu 16.04 server. You can set up as many additional websites as you need by following the same steps in this tutorial.

How to setup Linux, Nginx, MySQL, PHP (LEMP stack) on Ubuntu 16.04

How to setup Linux, Nginx, MySQL, PHP (LEMP stack) on Ubuntu 16.04

Introduction

The LEMP software stack is a group of software that can be used to serve dynamic web pages and web applications. This is an acronym that describes a Linux operating system, with an Nginx web server. The backend data is stored in the MySQL database and the dynamic processing is handled by PHP.

In this guide, we will demonstrate how to install a LEMP stack on an Ubuntu 16.04 server. The Ubuntu operating system takes care of the first requirement. We will describe how to get the rest of the components up and running.

Prerequisites

Before you complete this tutorial, you should have a regular, non-root user account on your server with sudo privileges. You can learn how to set up this type of account by completing DigitalOcean’s Ubuntu 16.04 initial server setup.

Once you have your user available, sign into your server with that username. You are now ready to begin the steps outlined in this guide.

Step 1: Install the Nginx Web Server

In order to display web pages to our site visitors, we are going to employ Nginx, a modern, efficient web server.

All of the software we will be using for this procedure will come directly from Ubuntu’s default package repositories. This means we can use the apt package management suite to complete the installation.

Since this is our first time using apt for this session, we should start off by updating our local package index. We can then install the server.

Install nginx

sudo apt-get update
sudo apt-get install nginx

On Ubuntu 16.04, Nginx is configured to start running upon installation.

If you have the ufw firewall running, as outlined in our initial setup guide, you will need to allow connections to Nginx. Nginx registers itself with ufw upon installation, so the procedure is rather straight forward.

It is recommended that you enable the most restrictive profile that will still allow the traffic you want. Since we haven’t configured SSL for our server yet, in this guide, we will only need to allow traffic on port 80.
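You can list the application profiles that Nginx registered with ufw before choosing one:

sudo ufw app list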

Enable ufw to allow HTTP

sudo ufw allow 'Nginx HTTP'

Verify the change

sudo ufw status

You should see HTTP traffic allowed in the displayed output:

Output
Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere                  
Nginx HTTP                 ALLOW       Anywhere                  
OpenSSH (v6)               ALLOW       Anywhere (v6)             
Nginx HTTP (v6)            ALLOW       Anywhere (v6)

With the new firewall rule added, you can test if the server is up and running by accessing your server’s domain name or public IP address in your web browser.

If you do not have a domain name pointed at your server and you do not know your server’s public IP address, you can find it by typing one of the following into your terminal:

Find server’s public IP address

ip addr show eth0 | grep inet | awk '{ print $2; }' | sed 's/\/.*$//'

This will print out a few IP addresses. You can try each of them in turn in your web browser.

As an alternative, you can check which IP address is accessible as viewed from other locations on the internet:

Alternative way to find server’s public IP address

curl -4 icanhazip.com

Type one of the addresses that you receive in your web browser. It should take you to Nginx’s default landing page:

Test Nginx’s default landing page in browser

http://server_domain_or_IP


If you see the default Nginx landing page, you have successfully installed Nginx.

Step 2: Install MySQL to Manage Site Data

Now that we have a web server, we need to install MySQL, a database management system, to store and manage the data for our site.

Install MySQL server

sudo apt-get install mysql-server

You will be asked to supply a root (administrative) password for use within the MySQL system.

The MySQL database software is now installed, but its configuration is not exactly complete yet.

To secure the installation, we can run a simple security script that will ask whether we want to modify some insecure defaults. Begin the script by typing:

Secure MySQL installation

sudo mysql_secure_installation

You will be asked to enter the password you set for the MySQL root account. Next, you will be asked if you want to configure the VALIDATE PASSWORD PLUGIN.

Warning: Enabling this feature is something of a judgment call. If enabled, passwords which don’t match the specified criteria will be rejected by MySQL with an error. This will cause issues if you use a weak password in conjunction with software which automatically configures MySQL user credentials, such as the Ubuntu packages for phpMyAdmin. It is safe to leave validation disabled, but you should always use strong, unique passwords for database credentials.

Answer y for yes, or anything else to continue without enabling.

VALIDATE PASSWORD PLUGIN can be used to test passwords
and improve security. It checks the strength of password
and allows the users to set only those passwords which are
secure enough. Would you like to setup VALIDATE PASSWORD plugin?

Press y|Y for Yes, any other key for No:

If you’ve enabled validation, you’ll be asked to select a level of password validation. Keep in mind that if you enter 2, for the strongest level, you will receive errors when attempting to set any password which does not contain numbers, upper and lowercase letters, and special characters, or which is based on common dictionary words.

There are three levels of password validation policy:

LOW    Length >= 8
MEDIUM Length >= 8, numeric, mixed case, and special characters
STRONG Length >= 8, numeric, mixed case, special characters and dictionary file

Please enter 0 = LOW, 1 = MEDIUM and 2 = STRONG: 1

If you enabled password validation, you’ll be shown a password strength for the existing root password, and asked if you want to change that password. If you are happy with your current password, enter n for “no” at the prompt:

Using existing password for root.

Estimated strength of the password: 100
Change the password for root ? ((Press y|Y for Yes, any other key for No) : n

For the rest of the questions, you should press Y and hit the Enter key at each prompt. This will remove some anonymous users and the test database, disable remote root logins, and load these new rules so that MySQL immediately respects the changes we have made.

At this point, your database system is now set up and we can move on.

Step 3: Install PHP for Processing

We now have Nginx installed to serve our pages and MySQL installed to store and manage our data. However, we still don’t have anything that can generate dynamic content. We can use PHP for this.

Since Nginx does not contain native PHP processing like some other web servers, we will need to install php-fpm, which stands for “fastCGI process manager”. We will tell Nginx to pass PHP requests to this software for processing.

We can install this module and will also grab an additional helper package that will allow PHP to communicate with our database backend. The installation will pull in the necessary PHP core files. Do this by typing:

Install php-fpm and php-mysql

sudo apt-get install php-fpm php-mysql

Configure the PHP Processor

We now have our PHP components installed, but we need to make a slight configuration change to make our setup more secure.

Open the main php-fpm configuration file with root privileges:

sudo nano /etc/php/7.0/fpm/php.ini

What we are looking for in this file is the parameter that sets cgi.fix_pathinfo. This will be commented out with a semi-colon (;) and set to “1” by default.

This is an extremely insecure setting because it tells PHP to attempt to execute the closest file it can find if the requested PHP file cannot be found. This basically would allow users to craft PHP requests in a way that would allow them to execute scripts that they shouldn’t be allowed to execute.

We will change both of these conditions by uncommenting the line and setting it to “0” like this:

Change cgi.fix_pathinfo

# /etc/php/7.0/fpm/php.ini

cgi.fix_pathinfo=0

Save and close the file when you are finished.

Now, we just need to restart our PHP processor by typing:

Restart PHP processor

sudo systemctl restart php7.0-fpm

This will implement the change that we made.

Step 4: Configure Nginx to Use the PHP Processor

Now, we have all of the required components installed. The only configuration change we still need is to tell Nginx to use our PHP processor for dynamic content.

We do this on the server block level (server blocks are similar to Apache’s virtual hosts). Open the default Nginx server block configuration file by typing:

Open nginx default configuration file

sudo nano /etc/nginx/sites-available/default

Currently, with the comments removed, the Nginx default server block file looks like this:

# /etc/nginx/sites-available/default

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;

    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }
}

We need to make some changes to this file for our site.

  • First, we need to add index.php as the first value of our index directive so that files named index.php are served, if available, when a directory is requested.
  • We can modify the server_name directive to point to our server’s domain name or public IP address.
  • For the actual PHP processing, we just need to uncomment a segment of the file that handles PHP requests by removing the pound symbols (#) from in front of each line. This will be the location ~\.php$ location block, the included fastcgi-php.conf snippet, and the socket associated with php-fpm.
  • We will also uncomment the location block dealing with .htaccess files using the same method. Nginx doesn’t process these files. If any of these files happen to find their way into the document root, they should not be served to visitors.

The changes that you need to make are shown in the file below:

# /etc/nginx/sites-available/default

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;
    index index.php index.html index.htm index.nginx-debian.html;

    server_name server_domain_or_IP;

    location / {
        try_files $uri $uri/ =404;
    }
    
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }

    location ~ /\.ht {
        deny all;
    }
}

When you’ve made the above changes, you can save and close the file.

Test your configuration file for syntax errors by typing:

Test nginx configurations

sudo nginx -t

If any errors are reported, go back and recheck your file before continuing.

When you are ready, reload Nginx to make the necessary changes:

Reload nginx

sudo systemctl reload nginx

Step 5: Create a PHP File to Test Configuration

Your LEMP stack should now be completely set up. We can test it to validate that Nginx can correctly hand .php files off to our PHP processor.

We can do this by creating a test PHP file in our document root. Open a new file called info.php within your document root in your text editor:

Create a new PHP file to test PHP installation

sudo nano /var/www/html/info.php

Type or paste the following lines into the new file. This is valid PHP code that will return information about our server:

# /var/www/html/info.php

<?php
phpinfo();

When you are finished, save and close the file.

Now, you can visit this page in your web browser by visiting your server’s domain name or public IP address followed by /info.php:

http://server_domain_or_IP/info.php

You should see a web page that has been generated by PHP with information about your server:

If you see such a page, you’ve set up PHP processing with Nginx successfully.

After verifying that Nginx renders the page correctly, it’s best to remove the file you created as it can actually give unauthorized users some hints about your configuration that may help them try to break in. You can always regenerate this file if you need it later.

For now, remove the file by typing:

sudo rm /var/www/html/info.php

Conclusion

You should now have a LEMP stack configured on your Ubuntu 16.04 server. This gives you a very flexible foundation for serving web content to your visitors.

HA Proxy using VIP and keepalived

HA Proxy using VIP and keepalived

 

Abstract

This post discusses how to leverage keepalived to proxy requests (both internal and external) with only two proxy servers, without forfeiting high availability.

Prerequisites

Two CentOS servers with nginx installed, a spare LAN IP, and a spare WAN IP from your cloud service.

Deployment

1. Install keepalived (if not already present):

yum install keepalived

2. Allow binding to an IP that is not defined on the system (kernel level)
This step lets the kernel accept a bind to an IP address that is not currently assigned to any local interface, which both proxies need for the virtual IPs.
Add this to /etc/sysctl.conf:

net.ipv4.ip_nonlocal_bind = 1

Force sysctl to apply the new setting:

sysctl -p
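You can verify that the new setting is active:

sysctl net.ipv4.ip_nonlocal_bind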

3. Configure keepalived on BOTH proxies
Edit /etc/keepalived/keepalived.conf with the content below (a default config file is created on installation; replace its content):

vrrp_sync_group VG_1 {
    group {
        WAN_1
        LAN_1
    }
}

vrrp_instance WAN_1 { #master WAN
    #just a name
    state MASTER # BACKUP in other proxy 
    interface eth0
    virtual_router_id 3
    dont_track_primary

    #LOWER is SLAVE
    priority 90 # should be <90 in other proxy

    preempt_delay 30
    garp_master_delay 1
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass yourpass
    }
    track_interface {
        eth2
    }
    virtual_ipaddress {
        130.65.156.140/24 dev eth2
    }
}

vrrp_instance LAN_1 { #backup LAN
    #just a name
    state BACKUP #MASTER in other proxy

    interface eth0
    virtual_router_id 4
    dont_track_primary

    #LOWER is SLAVE
    priority 80 # should be >90 in other proxy

    preempt_delay 30
    garp_master_delay 1
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass yourpass
    }
    track_interface {
        eth1
    }
    virtual_ipaddress {
        10.5.247.10/24 dev eth1
    }
}

4. Set BOOTPROTO="static" on all 4 interfaces (2 on each proxy).
In some cloud environments, interfaces are periodically restarted (they keep the same IP address), but keepalived will not start properly if the WAN interface uses a dynamic configuration.
5. Run chkconfig keepalived on so the service starts automatically at boot.
6. Use ip addr show <interface> (with eth1 or eth2 on both proxies) to check which node currently holds each VIP.
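For example, on the node that currently holds each VIP you should see the virtual addresses from the configuration above:

ip addr show eth2 | grep 130.65.156.140
ip addr show eth1 | grep 10.5.247.10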

Explanation

  • There are a maximum of 255 allowed VRRP instances (virtual router IDs) on a proxy.
  • Instance WAN_1 makes proxy 1 the master for WAN traffic: any request to 130.65.156.140 is proxied to web1 and web2 according to the nginx upstream configuration (several load-balancing mechanisms are available: ip_hash, round-robin, least_conn; a sketch of such an upstream appears at the end of this post). If proxy 1 fails over, proxy 2 takes the master role for WAN traffic and keeps proxying.
  • Instance LAN_1 makes proxy 2 the master for LAN traffic (DB requests/responses in this scenario, addressed to 10.5.247.10).
  • If either proxy fails completely (all interfaces down), the other takes the master role for both LAN and WAN traffic, eliminating the single point of failure.
  • Flow of traffic:
    • An HTTP request from a user hits the WAN VIP –> eth2 (proxy 1) by default –> load balanced to both web servers through eth1 (proxy 1).
    • eth1 (proxy 2) is ALWAYS ready to forward load-balanced HTTP traffic to both web servers, but no traffic arrives on eth2 (proxy 2) until it claims the WAN VIP; until then proxy 2 stays idle for HTTP traffic.
    • A DB request from a web server hits the LAN VIP –> eth1 (proxy 2) by default –> load balanced to both DB servers through eth1 (proxy 2).
    • eth1 (proxy 1) is ALWAYS ready to forward load-balanced DB traffic to both DB servers, but no DB traffic arrives on eth1 (proxy 1) until it claims the LAN VIP.
  • Important parameters:
    • interface: the interface used to exchange VRRP packets for every instance; it should be the local Ethernet interface in both cases.
    • priority: within each VRRP instance, the lower priority becomes BACKUP and the higher priority becomes MASTER.
    • dont_track_primary: we use the local Ethernet interface to exchange VRRP information and a separate interface for health checks (the track_interface parameter). Tracking the primary interface itself can cause problems when that interface enters a sleep state without completely failing over.
    • virtual_ipaddress: the address the master node takes on the specified interface.
  • Discussion: what if only eth1 (proxy 1) fails over while the other interfaces still work? What is the solution? Please leave your comment.
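A minimal sketch of the nginx upstream referenced in the WAN_1 item above, assuming illustrative backend addresses and file path (adapt them to your actual web servers):

# /etc/nginx/conf.d/web_upstream.conf (illustrative path)
upstream web_backend {
    # choose one load-balancing mechanism; round-robin is the default
    ip_hash;
    server 10.5.247.21:80;   # web1 (illustrative LAN address)
    server 10.5.247.22:80;   # web2 (illustrative LAN address)
}

server {
    listen 80;               # requests arrive via the WAN VIP 130.65.156.140
    location / {
        proxy_pass http://web_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}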