A Slowloris DoS attack gives a hacker the power to take down a web server in less than 5 minutes using just a moderate personal laptop. The whole idea behind the technique is to use partial HTTP GET requests to occupy all available HTTP connections permitted on a web server.
Technically, NGINX is not affected by this attack, since it doesn’t rely on threads to handle requests; instead, it uses a much more scalable event-driven (asynchronous) architecture. However, as we will see later in this article, in practice the default configurations can make an NGINX web server “vulnerable” to Slowloris.
In this article, we are going to take a look at this attack technique and some ways to mitigate it on NGINX.
A Slow HTTP Denial of Service (DoS) attack, otherwise referred to as Slowloris HTTP DoS attack, makes use of HTTP GET requests to occupy all available HTTP connections permitted on a web server.
A Slow HTTP DoS attack takes advantage of a weakness in thread-based web servers, which wait for entire HTTP headers to be received before releasing the connection. While some thread-based servers such as Apache use a timeout for incomplete HTTP requests, that timeout, set to 300 seconds by default, is reset as soon as the client sends additional data.
This creates a situation where a malicious user could open several connections on a server by initiating HTTP requests but never closing them. By keeping each HTTP request open and feeding the server bogus data before the timeout is reached, the connection remains open until the attacker closes it. Naturally, if an attacker occupies all available HTTP connections on a web server, legitimate users cannot have their HTTP requests processed, and thus experience a denial of service.
This enables an attacker to restrict access to a specific server with very low bandwidth utilization. This breed of DoS attack is starkly different from other DoS attacks such as SYN flood attacks, which misuse the TCP SYN (synchronization) segment during the TCP three-way handshake.
How it works
An analysis of an HTTP GET request helps further explain how and why a Slow HTTP DoS attack is possible. A complete HTTP GET request resembles the following.
GET /index.php HTTP/1.1[CRLF]
Pragma: no-cache[CRLF]
Cache-Control: no-cache[CRLF]
Host: testphp.vulnweb.com[CRLF]
Connection: Keep-alive[CRLF]
Accept-Encoding: gzip,deflate[CRLF]
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.63 Safari/537.36[CRLF]
Accept: */*[CRLF][CRLF]
Of particular interest is the [CRLF] in the GET request above. Carriage Return Line Feed (CRLF) is a pair of non-printable characters used to denote the end of a line. Just as in a text editor, an HTTP request contains a [CRLF] at the end of each line to start a fresh line, and two CRLFs in a row (i.e. [CRLF][CRLF]) to denote a blank line. The HTTP protocol defines such a blank line as the completion of the header section. A Slow HTTP DoS attack takes advantage of this by never sending the finishing blank line to complete the HTTP headers.
To make matters worse, a Slow HTTP DoS attack is not commonly detected by Intrusion Detection Systems (IDS), since it does not contain any malformed requests. The HTTP requests seem legitimate to the IDS, which passes them on to the web server.
Perform a Slowloris DoS Attack
Performing a Slowloris DoS attack is a piece of cake nowadays: a simple Google search turns up plenty of implementations of the attack hosted on GitHub.
For demonstration, we can use a Python implementation of Slowloris to perform an attack.
How this code works
This implementation works like this:
We start making lots of HTTP requests.
We send headers periodically (every ~15 seconds) to keep the connections open.
We never close a connection unless the server does so. If the server closes a connection, we create a new one and keep doing the same thing.
This exhausts the server’s thread pool, so the server can’t accept requests from other people.
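The steps above can be sketched with Python’s standard socket module. This is a simplified illustration, not the real slowloris.py; the host and header values are examples only:

```python
import socket

def partial_request(host: str) -> bytes:
    # Build an HTTP GET whose header section is deliberately unfinished:
    # each line ends with CRLF, but the final blank line ([CRLF][CRLF])
    # that would complete the headers is never sent.
    return (b"GET / HTTP/1.1\r\n"
            b"Host: " + host.encode() + b"\r\n"
            b"User-Agent: Mozilla/5.0\r\n")

def open_slow_connection(host: str, port: int = 80) -> socket.socket:
    # Open a TCP connection and send only the partial request.
    s = socket.create_connection((host, port), timeout=4)
    s.send(partial_request(host))
    return s

def keep_alive(s: socket.socket) -> None:
    # Periodically sending one more bogus header line resets the
    # server's read timeout and keeps the connection occupied.
    s.send(b"X-a: 1\r\n")
```

A real attack loops over many such connections, calling keep_alive on each one every ~15 seconds and replacing any connection the server drops.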
How to install and run this code
You can install the tool with pip or clone the Git repository from GitHub and run it directly. Using the repository:
git clone https://github.com/gkbrk/slowloris.git
cd slowloris
python3 slowloris.py example.com
That’s it. We are performing a Slowloris attack on example.com!
By default, slowloris.py will try to keep 150 open connections to the target web server; you can change this number with the command-line argument “-s”, e.g. “-s 1000”.
However, because this code sends keep-alive messages for one connection after another (not in parallel), some connections will time out and be disconnected by the target server before their turn to send a keep-alive message comes. The result is that, in practice, a single instance of slowloris.py can only keep about 50-100 connections open to the target.
We can work around this limitation by running multiple instances of slowloris.py, each trying to keep 50 connections open. That way, we can keep thousands of connections open with only one computer.
Preventing and Mitigating Slowloris (a.k.a. Slow HTTP) DoS Attacks in NGINX
Slowloris works by opening a lot of connections to the target web server and keeping them open by periodically sending keep-alive messages on each connection. Understanding this, we can come up with several ways to mitigate the attack.
Technically, Slowloris only affects thread-based web servers such as Apache, while leaving event-driven (asynchronous) web servers like NGINX unaffected. However, the default configurations of NGINX can make NGINX vulnerable to this attack.
Identify the attack
First of all, to see if our web server is under attack, we can list connections to port 80 by running the following command:
$ netstat -nalt | grep :80
We can further filter to show only the established connections by running the following command:
$ netstat -nalt | grep :80 | grep ESTA
The output will look like:
tcp 0 0 10.128.0.2:48406 169.254.169.254:80 ESTABLISHED
tcp 0 0 10.128.0.2:48410 169.254.169.254:80 ESTABLISHED
tcp 0 0 10.128.0.2:48408 169.254.169.254:80 ESTABLISHED
We can also count the open connections by adding -c to the end of the above command:
$ netstat -nalt | grep :80 | grep ESTA -c
The output will be the number of connections in the filtered result:
3
How is NGINX vulnerable to Slowloris?
NGINX can be vulnerable to Slowloris in several ways:
Config #1: By default, NGINX limits the number of connections accepted by each worker process to 768.
Config #2: The system-wide limit on open files is too low by default.
Config #3: The open file limit for the nginx user (usually www-data) is too low by default.
Config #4: By default, NGINX itself limits each worker process to no more than 1024 open files.
For example, let’s say we run NGINX on a 2-core CPU server, with default configurations.
Config #1: NGINX will run with 2 worker processes, which can handle up to 768 × 2 = 1536 connections.
Config #2: Default number of open files allowed by the system: soft limit = 1024, hard limit = 4096.
Config #3: Default number of open files allowed for the nginx user (usually www-data): soft limit = 1024, hard limit = 4096.
Config #4: NGINX itself limits each worker process to no more than 1024 open files.
Therefore NGINX can handle at most 1024 connections, and it would take only about 20 instances of slowloris.py to take the server down. Wow!
Mitigation
We can mitigate the attack with network-level approaches such as:
Limiting the number of connections from a single IP
Lowering the timeout for each HTTP connection
However, by using proxies (such as the Tor network) to make connections appear to come from different IPs, an attacker can easily bypass these network defenses. After that, 1024 connections is something that is pretty easy to achieve.
Therefore, to protect NGINX from this type of attack, we should optimize the default configurations mentioned above.
Config #1: NGINX worker connections limit
Open the NGINX configuration file, usually located at /etc/nginx/nginx.conf, and change this setting:
events {
worker_connections 768;
}
to something larger like this:
events {
worker_connections 100000;
}
This setting tells NGINX to allow each of its worker processes to handle up to 100k connections.
Config #2: system open file limit
Even though we told NGINX to allow each of its worker processes to handle up to 100k connections, the number of connections may be further limited by the system-wide open file limit.
To check the current system-wide file limit:
$ cat /proc/sys/fs/file-max
Normally this number is about 10% of the system’s memory (in KB), i.e. if our system has 2GB of RAM, this number will be around 200k, which should be enough. However, if this number is too small, we can increase it. Open /etc/sysctl.conf and change the following line (or add it if it’s not already there):
fs.file-max = 500000
Apply the setting:
$ sudo sysctl -p
Check the setting again:
$ cat /proc/sys/fs/file-max
The output should be:
fs.file-max = 500000
Config #3: user’s open file limit
Besides the system-wide open file limit mentioned in Config #2, Linux systems also limit the number of open files per user. By default, NGINX worker processes run as the www-data or nginx user, and are therefore constrained by this limit.
To check the current limit for nginx’s user (www-data in the example below), first we need to switch to www-data user:
$ sudo su - www-data -s /bin/bash
By default, the www-data user is not given a login shell; to run commands as www-data, we must specify a shell with the -s argument, providing /bin/bash as the shell.
After switching to www-data user, we can check the open file limit of that user:
$ ulimit -n
To check the hard limit:
$ ulimit -Hn
To check the soft limit:
$ ulimit -Sn
By default, the soft limit is 1024 and hard limit is 4096, which is too small to survive a Slowloris attack.
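These limits can also be inspected from code; for example, Python’s standard resource module reports the same soft and hard values for the current process that ulimit shows for the shell:

```python
import resource

# RLIMIT_NOFILE is the per-process limit on open file descriptors,
# the same values reported by `ulimit -Sn` and `ulimit -Hn`.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")
```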
To increase this limit, open /etc/security/limits.conf and add the following lines (remember to switch back to a sudo user so that we can edit the file):
* soft nofile 102400
* hard nofile 409600
www-data soft nofile 102400
www-data hard nofile 409600
A note for RHEL/CentOS/Fedora/Scientific Linux users
For these systems, we will have to do an extra step for the limits to take effect. Edit the /etc/pam.d/login file and add/modify the following line (make sure it reads pam_limits.so exactly):
session required pam_limits.so
then save and close the file.
The user must log out and log back in for the settings to take effect.
After that, run the above checks to see if the new soft limit and hard limit have been applied.
Config #4: NGINX worker process open file limit
Even when the ulimit -n command for www-data returns 102400, the NGINX worker process’s open file limit is still 1024.
To verify the limit applied to the running worker process, first we need to find the process id of the worker process by listing the running worker processes:
$ ps aux | grep nginx
root 1095 0.0 0.0 85880 1336 ? Ss 18:30 0:00 nginx: master process /usr/sbin/nginx
www-data 1096 0.0 0.0 86224 1764 ? S 18:30 0:00 nginx: worker process
www-data 1097 0.0 0.0 86224 1764 ? S 18:30 0:00 nginx: worker process
www-data 1098 0.0 0.0 86224 1764 ? S 18:30 0:00 nginx: worker process
www-data 1099 0.0 0.0 86224 1764 ? S 18:30 0:00 nginx: worker process
then take one of the process ids (e.g. 1096 in the above example output), and then check the limit currently applied to the process (remember to change 1096 to the right process id in your server):
$ cat /proc/1096/limits
Limit Soft Limit Hard Limit Units
Max cpu time unlimited unlimited seconds
Max file size unlimited unlimited bytes
Max data size unlimited unlimited bytes
Max stack size 8388608 unlimited bytes
Max core file size 0 unlimited bytes
Max resident set unlimited unlimited bytes
Max processes 15937 15937 processes
Max open files 1024 4096 files
Max locked memory 65536 65536 bytes
Max address space unlimited unlimited bytes
Max file locks unlimited unlimited locks
Max pending signals 15937 15937 signals
Max msgqueue size 819200 819200 bytes
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us
You can see that the max open files is still 1024:
Max open files 1024 4096 files
That is because NGINX itself also limits the number of open files, to 1024 by default.
To change this, open NGINX configuration file (/etc/nginx/nginx.conf) and add/edit the following line:
worker_rlimit_nofile 102400;
Make sure this line is placed at the top level of the configuration and not nested inside the events block like worker_connections.
The final nginx.conf would look something like this:
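A minimal sketch combining the settings above (only the relevant directives are shown; the rest of the default file is omitted):

```nginx
user www-data;
worker_processes auto;

# Config #4: raise the per-worker open file limit (top level, not inside events)
worker_rlimit_nofile 102400;

events {
    # Config #1: connections each worker process may handle
    worker_connections 100000;
}
```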
Restart NGINX, then verify the limit applied to the running worker process again. The Max open files should now change to 102400.
Congratulations! Now your NGINX server can survive a Slowloris attack. You could argue that the attacker can still open more than 100k connections to take down the target web server, but at that point it becomes more of a generic DDoS attack than a Slowloris attack specifically.
Conclusions
Slowloris is a very clever technique that allows an attacker to use very limited resources to perform a DoS attack on a web server. Technically, NGINX servers are not vulnerable to this attack, but the default configurations can make them so. By tuning those default configurations, we can mitigate the attack.
If you haven’t checked your NGINX server against this type of attack, you should do it now, because tomorrow, who knows, a curious high school kid might perform a Slowloris attack (“for educational purposes”) from his laptop and take down your server. 😀
Most of the time, gzip compression will make your server perform better and be more resource-efficient. This tutorial shows you how to enable gzip compression for your nginx server.
What is gzip compression
Gzip is a method of compressing files (making them smaller) for faster network transfers. It is also a file format but that’s out of the scope of this post.
Compression allows your web server to provide smaller file sizes which load faster for your website users.
Enabling gzip compression is a standard practice. If you are not using it for some reason, your webpages are likely slower than your competitors’.
Enabling gzip also makes your website score better in search engines.
How compressed files work on the web
When a request is made by a browser for a page from your site, your webserver returns the smaller compressed file if the browser indicates that it understands the compression. All modern browsers understand and accept compressed files.
How to enable gzip on Apache web server
To enable compression in Apache, add the following code to your config file:
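The original post’s Apache snippet is not included here; a minimal equivalent using Apache’s mod_deflate module might look something like this (the MIME types mirror the nginx list below):

```apache
<IfModule mod_deflate.c>
    # Compress common text-based content types
    AddOutputFilterByType DEFLATE text/plain text/html text/css text/xml
    AddOutputFilterByType DEFLATE text/javascript application/javascript application/xml
</IfModule>
```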
How to enable gzip on Nginx web server
To enable compression in Nginx, you will need to add the following code to your config file:
gzip on;
gzip_comp_level 2;
gzip_http_version 1.0;
gzip_proxied any;
gzip_min_length 1100;
gzip_buffers 16 8k;
gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;
# Disable for IE < 6 because there are some known problems
gzip_disable "MSIE [1-6].(?!.*SV1)";
# Add a vary header for downstream proxies to avoid sending cached gzipped files to IE6
gzip_vary on;
As with most other directives, the directives that configure compression can be included in the http context or in a server or location configuration block. Placing the snippet above in the http context enables compression for the whole server.
If you have a WordPress website and you can’t edit the Apache or Nginx config file, you can still enable gzip using a plugin like WP Super Cache by Automattic.
After installing the plugin, go to its Advanced settings tab and check the setting “Compress pages so they’re served more quickly to visitors. (Recommended)” to enable gzip compression.
However, keep in mind that this plugin comes with a lot more features, some of which you may not want. If you don’t need the extras, you can use a simpler plugin such as Gzip Ninja Speed Compression or Check and Enable Gzip Compression.
How to check if gzip is successfully enabled and working
Using Firefox to check gzip compression
If you are using Firefox, do the following steps:
Open Developer Tools by one of these methods:
Menu > Developer > Toggle Tools
Ctrl + Shift + I
F12
Switch to Network Tab in the Developer Tools.
Launch the website that you want to check.
If gzip is working, requests for html, css, javascript and text files will show a Transferred column smaller than the Size column: Transferred displays the size of the compressed content that went over the wire, while Size shows the size of the original content before compression.
Using Chrome to check gzip compression
If you are using Chrome, do the following:
Open Developer Tools by one of these methods:
Menu > More tools > Developer Tools
Ctrl + Shift + I
F12
Switch to Network Tab in the Developer Tools.
Launch the website that you want to check.
Click on the request you want to check (an html, css, javascript or text file) and the request details will be displayed.
Toggle Response Headers of that request.
Check for Content-Encoding: gzip.
If gzip is working, the Content-Encoding: gzip will be there.
Make sure you check the Response Headers and not the Request Headers.
Frequently Asked Questions
How efficient is gzip?
As you can see in the Firefox Developer Tools Network tab, the compressed size is normally one third to one quarter of the original size. The ratio differs from request to request, but that is typical for html, css, javascript and text files.
Will gzip make my server slower?
OK, that’s a smart question. Since the server has to do extra work to compress responses, it does use some additional CPU. However, the resources saved by transferring smaller responses usually more than make up for it, so at the end of the day your server typically comes out more efficient.
Should I enable gzip for image files (and media files in general)?
Image files (and media files in general) are usually already compressed, so gzipping them will not save many bytes (normally less than 5%) while still costing significant processing resources. Therefore, you shouldn’t enable gzip for your images; enable it only for html, css, javascript and text files.
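You can see this effect for yourself with Python’s built-in gzip module: repetitive text compresses dramatically, while random bytes (a stand-in for already-compressed image data) barely shrink at all:

```python
import gzip
import os

text = b"<p>hello world</p>\n" * 1000   # repetitive, text-like content
binary = os.urandom(len(text))          # stands in for already-compressed data

# Text shrinks to a small fraction of its original size...
print(len(text), len(gzip.compress(text)))
# ...while incompressible data stays roughly the same size (or grows slightly).
print(len(binary), len(gzip.compress(binary)))
```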
Nginx security vulnerabilities and hardening best practices – part II: SSL
HTTP is a plain-text protocol, open to man-in-the-middle attacks and passive monitoring. If our website allows users to authenticate, we should use SSL to encrypt the content sent and received between users and our web server.
Google has already considered HTTPS one of their ranking factors:
Security is a top priority for Google. We invest a lot in making sure that our services use industry-leading security, like strong HTTPS encryption by default. That means that people using Search, Gmail and Google Drive, for example, automatically have a secure connection to Google.
Beyond our own stuff, we’re also working to make the Internet safer more broadly. A big part of that is making sure that websites people access from Google are secure. For instance, we have created resources to help webmasters prevent and fix security breaches on their sites.
We want to go even further. At Google I/O a few months ago, we called for “HTTPS everywhere” on the web.
HTTPS is becoming the standard on the web. To help secure it, some organizations, such as Let’s Encrypt, issue free SSL certificates that we can use on our websites. So there’s no excuse for not deploying HTTPS.
The default SSL configuration on nginx is vulnerable to some common attacks. SSL Labs provides a free scan to check whether our SSL configuration is secure enough. If our website gets anything below an A, we should review our configuration.
In this tutorial, we are going to look at some common SSL vulnerabilities and how to harden nginx’s SSL configuration against them. At the end of this tutorial, hopefully we can get an A+ in the SSL Labs test.
Prerequisites
In this tutorial, let’s assume that we already have a website hosted on an nginx server. We have also bought an SSL certificate for our domain from a Certificate Authority or got a free one from Let’s Encrypt.
If you need more information on SSL vulnerabilities, you can try following the links below:
We are going to edit the nginx settings in /etc/nginx/sites-enabled/yoursite.com (on Ubuntu/Debian) or /etc/nginx/conf.d/nginx.conf (on RHEL/CentOS).
For the entire tutorial, you will be editing the server block that configures port 443 (the SSL config). At the end of the tutorial you can find the complete config example.
Make sure you back up the files before editing them!
The BEAST attack and RC4
In short, by tampering with an encryption algorithm’s CBC (cipher block chaining) mode, portions of the encrypted traffic can be secretly decrypted.
Recent browser versions have enabled client-side mitigations for the BEAST attack. The original recommendation was to disable all TLS 1.0 ciphers and offer only RC4. However, RC4 has a growing list of attacks against it, many of which have crossed the line from theoretical to practical. Moreover, there is reason to believe that the NSA has broken RC4, their so-called “big breakthrough.”
Disabling RC4 has several ramifications. One, users with shitty browsers such as Internet Explorer on Windows XP will fall back to 3DES. Triple-DES is more secure than RC4, but it is significantly more expensive; your server will pay the cost for these users. Two, RC4 mitigates BEAST, so disabling RC4 makes TLS 1.0 users susceptible to that attack by moving them to AES-CBC (the usual server-side BEAST “fix” is to prioritize RC4 above all else). I am confident that the flaws in RC4 significantly outweigh the risks from BEAST. Indeed, with client-side mitigation (which Chrome and Firefox both provide), BEAST is a nonissue. But the risk from RC4 only grows: more cryptanalysis will surface over time.
Factoring RSA-EXPORT Keys (FREAK)
FREAK is a man-in-the-middle (MITM) vulnerability discovered by a group of cryptographers at INRIA, Microsoft Research and IMDEA. FREAK stands for “Factoring RSA-EXPORT Keys.”
The vulnerability dates back to the 1990s, when the US government banned selling crypto software overseas, unless it used export cipher suites which involved encryption keys no longer than 512-bits.
It turns out that some modern TLS clients – including Apple’s SecureTransport and OpenSSL – have a bug in them. This bug causes them to accept RSA export-grade keys even when the client didn’t ask for export-grade RSA. The impact of this bug can be quite nasty: it admits a ‘man in the middle’ attack whereby an active attacker can force down the quality of a connection, provided that the client is vulnerable and the server supports export RSA.
There are two parts of the attack as the server must also accept “export grade RSA.”
The MITM attack works as follows:
In the client’s Hello message, it asks for a standard ‘RSA’ ciphersuite.
The MITM attacker changes this message to ask for ‘export RSA’.
The server responds with a 512-bit export RSA key, signed with its long-term key.
The client accepts this weak key due to the OpenSSL/SecureTransport bug.
The attacker factors the RSA modulus to recover the corresponding RSA decryption key.
When the client encrypts the ‘pre-master secret’ to the server, the attacker can now decrypt it to recover the TLS ‘master secret’.
From here on out, the attacker sees plaintext and can inject anything it wants.
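To see why 512-bit “export-grade” RSA is the weak link, here is a toy illustration with deliberately tiny textbook numbers (n = 3233 is nowhere near a real key size): once the modulus is factored, the private key falls out immediately.

```python
from math import isqrt

# Toy "export-grade" RSA key with textbook-sized numbers.
n = 3233   # n = 53 * 61
e = 17     # public exponent

def factor(n: int) -> tuple[int, int]:
    # Trial division: feasible only because n is tiny. Factoring a real
    # 512-bit export modulus takes far more work, but is well within reach
    # of modern hardware - which is the point of the FREAK attack.
    for p in range(2, isqrt(n) + 1):
        if n % p == 0:
            return p, n // p
    raise ValueError("n is prime")

p, q = factor(n)
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)   # private exponent recovered from the factorization
print(p, q, d)
```

With d in hand, the attacker can decrypt the pre-master secret exactly as described in the steps above.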
Logjam (DH EXPORT)
Researchers from several universities and institutions conducted a study that found an issue in the TLS protocol; their report describes two attack methods.
Diffie-Hellman key exchange allows protocols that depend on TLS to agree on a shared key and negotiate a secure session over a plain-text connection.
With the first attack, a man-in-the-middle can downgrade a vulnerable TLS connection to 512-bit export-grade cryptography, which would allow the attacker to read and change the data. The second threat is that many servers reuse the same prime numbers for Diffie-Hellman key exchange instead of generating their own unique DH parameters.
The team estimates that an academic team can break 768-bit primes and that a nation-state could break a 1024-bit prime. By breaking one 1024-bit prime, one could eavesdrop on 18 percent of the top one million HTTPS domains. Breaking a second prime would open up 66 percent of VPNs and 26 percent of SSH servers.
Later on in this guide we generate our own unique DH parameters and use a ciphersuite that does not enable EXPORT-grade ciphers. Make sure your OpenSSL is updated to the latest available version and urge your clients to use upgraded software as well. Updated browsers refuse DH parameters shorter than 768/1024 bits as a fix for this.
Heartbleed
Heartbleed is a security bug disclosed in April 2014 in the OpenSSL cryptography library, a widely used implementation of the Transport Layer Security (TLS) protocol. Heartbleed may be exploited regardless of whether the party using a vulnerable OpenSSL instance for TLS is a server or a client. It results from improper input validation (a missing bounds check) in the implementation of the TLS/DTLS heartbeat extension (RFC 6520), hence the bug’s name. The vulnerability is classified as a buffer over-read: more data can be read than should be allowed.
What versions of OpenSSL are affected by Heartbleed?
Status of different versions:
OpenSSL 1.0.1 through 1.0.1f (inclusive) are vulnerable
OpenSSL 1.0.1g is NOT vulnerable
OpenSSL 1.0.0 branch is NOT vulnerable
OpenSSL 0.9.8 branch is NOT vulnerable
The bug was introduced to OpenSSL in December 2011 and has been out in the wild since the release of OpenSSL 1.0.1 on 14 March 2012. OpenSSL 1.0.1g, released on 7 April 2014, fixes the bug.
After updating OpenSSL to a fixed version, you are no longer vulnerable to this bug.
SSL Compression (CRIME attack)
The CRIME attack uses SSL Compression to do its magic. SSL compression is turned off by default in nginx 1.1.6+/1.0.9+ (if OpenSSL 1.0.0+ used) and nginx 1.3.2+/1.2.2+ (if older versions of OpenSSL are used).
If you are using an earlier version of nginx or OpenSSL and your distro has not backported this option, then you need to recompile OpenSSL without ZLIB support. This prevents OpenSSL from using the DEFLATE compression method. You can still use regular HTTP-level DEFLATE (gzip) compression.
SSLv2 and SSLv3
SSLv2 is insecure, so we need to disable it. We also disable SSLv3, because an attacker can force a connection to downgrade from TLS 1.0 to SSLv3 and thereby disable forward secrecy.
Again edit the config file:
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
Poodle and TLS-FALLBACK-SCSV
SSLv3 allows exploitation of the POODLE bug, which is one more major reason to disable it.
Google has proposed an extension to SSL/TLS named TLS_FALLBACK_SCSV that seeks to prevent forced SSL downgrades. It is automatically enabled if you upgrade OpenSSL to one of the following versions:
OpenSSL 1.0.1 has TLS_FALLBACK_SCSV in 1.0.1j and higher.
OpenSSL 1.0.0 has TLS_FALLBACK_SCSV in 1.0.0o and higher.
OpenSSL 0.9.8 has TLS_FALLBACK_SCSV in 0.9.8zc and higher.
Forward Secrecy
Forward Secrecy ensures the integrity of session keys in the event that a long-term key is compromised. PFS accomplishes this by enforcing the derivation of a new key for each and every session.
This means that when the private key gets compromised it cannot be used to decrypt recorded SSL traffic.
The cipher suites that provide Perfect Forward Secrecy are those that use an ephemeral form of the Diffie-Hellman key exchange. Their disadvantage is their overhead, which can be improved by using the elliptic curve variants.
The following two ciphersuites are recommended by me, and the latter by the Mozilla Foundation.
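The exact cipher strings from the original post are not reproduced here; as one hedged example, a Mozilla-style ciphersuite prioritizing ECDHE with AES-GCM (an illustration, not the author’s original list) could look like:

```nginx
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:!aNULL:!eNULL:!EXPORT:!RC4:!MD5:!PSK';
```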
If your version of OpenSSL is old, unavailable ciphers will be discarded automatically. Always use the full ciphersuite above and let OpenSSL pick the ones it supports.
The ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect forward secrecy.
Older versions of OpenSSL may not return the full list of algorithms. AES-GCM and some ECDHE are fairly recent, and not present on most versions of OpenSSL shipped with Ubuntu or RHEL.
Prioritization logic
ECDHE+AESGCM ciphers are selected first. These are TLS 1.2 ciphers. No known attack currently targets these ciphers.
PFS ciphersuites are preferred, with ECDHE first, then DHE.
AES128 is preferred to AES256. There have been discussions on whether the extra security of AES256 is worth the cost, and the answer is far from obvious. At the moment, AES128 is preferred because it provides good security, is really fast, and seems more resistant to timing attacks.
In the backward compatible ciphersuite, AES is preferred to 3DES. BEAST attacks on AES are mitigated in TLS 1.1 and above, and difficult to achieve in TLS 1.0. In the non-backward compatible ciphersuite, 3DES is not present.
RC4 is removed entirely. 3DES is used for backward compatibility. See discussion in #RC4_weaknesses
Mandatory discards
aNULL contains non-authenticated Diffie-Hellman key exchanges, that are subject to Man-In-The-Middle (MITM) attacks
When choosing a cipher during an SSLv3 or TLSv1 handshake, the client’s preference is normally used. If the ssl_prefer_server_ciphers directive is enabled, the server’s preference will be used instead:
ssl_prefer_server_ciphers on;
The concept of forward secrecy is simple: client and server negotiate a key that never hits the wire and is destroyed at the end of the session. The server’s RSA private key is used to sign a Diffie-Hellman key exchange between the client and the server. The pre-master key obtained from the Diffie-Hellman handshake is then used for encryption. Since the pre-master key is specific to a connection between a client and a server, and used only for a limited amount of time, it is called ephemeral.
With Forward Secrecy, if an attacker gets hold of the server’s private key, he will not be able to decrypt past communications. The private key is only used to sign the DH handshake, which does not reveal the pre-master key. Diffie-Hellman ensures that the pre-master keys never leave the client and the server, and cannot be intercepted by a MITM.
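The ephemeral Diffie-Hellman exchange described above can be sketched with toy numbers (the small 32-bit prime here is purely illustrative; real deployments use 2048-bit or larger groups):

```python
import secrets

p = 4294967291          # a small prime (2**32 - 5) - illustrative only, NOT secure
g = 5                   # generator

a = secrets.randbelow(p - 2) + 1   # client's ephemeral secret, never sent
b = secrets.randbelow(p - 2) + 1   # server's ephemeral secret, never sent

A = pow(g, a, p)   # client's public share, sent over the wire
B = pow(g, b, p)   # server's public share (in TLS, signed with the server's RSA key)

# Both sides derive the same pre-master secret; it never crosses the wire,
# so capturing the traffic (or later stealing the RSA key) does not reveal it.
client_secret = pow(B, a, p)
server_secret = pow(A, b, p)
assert client_secret == server_secret
```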
All versions of nginx as of 1.4.4 rely on OpenSSL for input parameters to Diffie-Hellman (DH). Unfortunately, this means that Ephemeral Diffie-Hellman (DHE) will use OpenSSL’s defaults, which include a 1024-bit key for the key-exchange. Since we’re using a 2048-bit certificate, DHE clients will use a weaker key-exchange than non-ephemeral DH clients.
We need to generate stronger DHE parameters:
cd /etc/ssl/certs
openssl dhparam -out dhparam.pem 4096
And then tell nginx to use it for DHE key-exchange:
ssl_dhparam /etc/ssl/certs/dhparam.pem;
Note that generating a 4096-bit parameter will take a long time (from 30 minutes to several hours). Although a 4096-bit one is recommended, you can use a 2048-bit one for now. A 1024-bit key, however, is NOT acceptable.
OCSP Stapling
When connecting to a server, clients should verify the validity of the server certificate using either a Certificate Revocation List (CRL) or an Online Certificate Status Protocol (OCSP) record. The problem with CRLs is that the lists have grown huge and take forever to download.
OCSP is much more lightweight, as only one record is retrieved at a time. But the side effect is that OCSP requests must be made to a third-party OCSP responder when connecting to a server, which adds latency and potential failures. In fact, the OCSP responders operated by CAs are often so unreliable that browsers will fail silently if no response is received in a timely manner. This reduces security by allowing an attacker to DoS an OCSP responder to disable the validation.
The solution is to allow the server to send its cached OCSP record during the TLS handshake, therefore bypassing the OCSP responder. This mechanism saves a roundtrip between the client and the OCSP responder, and is called OCSP Stapling.
The server will send a cached OCSP response only if the client requests it, by announcing support for the status_request TLS extension in its CLIENT HELLO.
Most servers will cache OCSP response for up to 48 hours. At regular intervals, the server will connect to the OCSP responder of the CA to retrieve a fresh OCSP record. The location of the OCSP responder is taken from the Authority Information Access field of the signed certificate.
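In nginx, stapling is enabled with a few directives; this is a sketch in which the trusted-chain path and resolver address are assumptions for illustration:

```nginx
# Enable OCSP stapling and verification of the stapled response
ssl_stapling on;
ssl_stapling_verify on;
# CA chain used to verify the OCSP response (path is an example)
ssl_trusted_certificate /etc/ssl/certs/ca-chain.pem;
# Resolver nginx uses to reach the CA's OCSP responder
resolver 8.8.8.8 valid=300s;
```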
Public Key Pinning means that a certificate chain must include a whitelisted public key. It ensures only whitelisted Certificate Authorities (CA) can sign certificates for *.example.com, and not any CA in your browser store.
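In practice, pinning is delivered through the Public-Key-Pins response header; below is a sketch with placeholder pin values (you must generate real base64 SPKI hashes from your own keys, and should always include a backup pin):

```nginx
# The pin-sha256 values below are placeholders, not real hashes
add_header Public-Key-Pins 'pin-sha256="base64PrimaryKeyHash="; pin-sha256="base64BackupKeyHash="; max-age=5184000; includeSubDomains';
```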
WordPress is the most popular website framework in the world, powering around 30% of the web. Over 60 million people have chosen WordPress to power the place on the web they call “home”.
In this tutorial, we will demonstrate how to set up multiple WordPress websites on the same Ubuntu 16.04 server. The setup includes nginx, MySQL, PHP, and WordPress itself.
Prerequisites
Before you complete this tutorial, you should have a regular, non-root user account on your server with sudo privileges. You can learn how to set up this type of account by completing DigitalOcean’s Ubuntu 16.04 initial server setup.
Once you have your user available, sign into your server with that username. You are now ready to begin the steps outlined in this guide.
You should see HTTP traffic allowed in the displayed output:
Output
Status: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
Nginx HTTP ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
Nginx HTTP (v6) ALLOW Anywhere (v6)
With the new firewall rule added, you can test if the server is up and running by accessing your server’s domain name or public IP address in your web browser.
Navigate your browser to test if nginx is working:
Test Nginx’s default landing page in browser
http://server_domain_or_IP
If you see the default Nginx landing page, you have successfully installed Nginx.
Step 2. Install MySQL to Manage Site Data
Now that we have a web server, we need to install MySQL, a database management system, to store and manage the data for our site.
Install MySQL server
$ sudo apt-get install mysql-server
You will be asked to supply a root (administrative) password for use within the MySQL system.
The MySQL database software is now installed, but its configuration is not exactly complete yet.
To secure the installation, we can run a simple security script that will ask whether we want to modify some insecure defaults. Begin the script by typing:
Secure MySQL installation
$ sudo mysql_secure_installation
You will be asked to enter the password you set for the MySQL root account. Next, you will be asked if you want to configure the VALIDATE PASSWORD PLUGIN.
Warning: Enabling this feature is something of a judgment call. If enabled, passwords which don’t match the specified criteria will be rejected by MySQL with an error. This will cause issues if you use a weak password in conjunction with software which automatically configures MySQL user credentials, such as the Ubuntu packages for phpMyAdmin. It is safe to leave validation disabled, but you should always use strong, unique passwords for database credentials.
Answer y for yes, or anything else to continue without enabling.
VALIDATE PASSWORD PLUGIN can be used to test passwords
and improve security. It checks the strength of password
and allows the users to set only those passwords which are
secure enough. Would you like to setup VALIDATE PASSWORD plugin?
Press y|Y for Yes, any other key for No:
If you’ve enabled validation, you’ll be asked to select a level of password validation. Keep in mind that if you enter 2, for the strongest level, you will receive errors when attempting to set any password which does not contain numbers, upper and lowercase letters, and special characters, or which is based on common dictionary words.
There are three levels of password validation policy:
LOW Length >= 8
MEDIUM Length >= 8, numeric, mixed case, and special characters
STRONG Length >= 8, numeric, mixed case, special characters and dictionary file
Please enter 0 = LOW, 1 = MEDIUM and 2 = STRONG: 1
If you enabled password validation, you’ll be shown a password strength for the existing root password, and asked if you want to change that password. If you are happy with your current password, enter n for “no” at the prompt:
Using existing password for root.
Estimated strength of the password: 100
Change the password for root ? ((Press y|Y for Yes, any other key for No) : n
For the rest of the questions, you should press Y and hit the Enter key at each prompt. This will remove some anonymous users and the test database, disable remote root logins, and load these new rules so that MySQL immediately respects the changes we have made.
At this point, your database system is now set up and we can move on.
Step 3. Install PHP for Processing
We now have Nginx installed to serve our pages and MySQL installed to store and manage our data. However, we still don’t have anything that can generate dynamic content. We can use PHP for this.
Since Nginx does not contain native PHP processing like some other web servers, we will need to install php-fpm, which stands for “fastCGI process manager”. We will tell Nginx to pass PHP requests to this software for processing.
We can install this module and will also grab an additional helper package that will allow PHP to communicate with our database backend. The installation will pull in the necessary PHP core files. Do this by typing:
Install php-fpm and php-mysql
$ sudo apt-get install php-fpm php-mysql
Configure the PHP Processor
We now have our PHP components installed, but we need to make a slight configuration change to make our setup more secure.
Open the main php-fpm configuration file with root privileges:
$ sudo nano /etc/php/7.0/fpm/php.ini
What we are looking for in this file is the parameter that sets cgi.fix_pathinfo. This will be commented out with a semi-colon (;) and set to “1” by default.
This is an extremely insecure setting because it tells PHP to attempt to execute the closest file it can find if the requested PHP file cannot be found. This basically would allow users to craft PHP requests in a way that would allow them to execute scripts that they shouldn’t be allowed to execute.
We will change both of these conditions by uncommenting the line and setting it to “0” like this:
Change cgi.fix_pathinfo
# /etc/php/7.0/fpm/php.ini
cgi.fix_pathinfo=0
Save and close the file when you are finished.
Now, we just need to restart our PHP processor by typing:
Restart PHP processor
$ sudo systemctl restart php7.0-fpm
This will implement the change that we made.
Step 4. Install WordPress
The plan
To make the best use of the server, we will set it up so that we can also host other websites (in PHP or Python) in the future. To do that, we will use the following arrangement:
Each website’s files will live in their own folder under /www (for example, /www/hexadix.com).
Each website will have its own nginx configuration file under /etc/nginx/sites-enabled/ (for example, /etc/nginx/sites-enabled/hexadix.com.conf). The full configuration for each website will be in that file, and the default nginx configuration file will be removed.
The /www folder will be owned by nginx’s worker user. By default, nginx runs as www-data or nginx; you can find out which by running ps aux | grep nginx in your terminal.
Using this arrangement, we can freely add future websites while keeping everything organized.
Step 4.1. Create database for your WordPress
Login to MySQL shell
$ mysql -u root -p
Create database and mysql user for our WordPress
mysql> CREATE DATABASE wordpress CHARACTER SET utf8 COLLATE utf8_unicode_ci;
mysql> CREATE USER 'wpuser'@'localhost' IDENTIFIED BY 'wppassword';
mysql> GRANT ALL PRIVILEGES ON wordpress.* TO 'wpuser'@'localhost';
mysql> FLUSH PRIVILEGES;
mysql> SHOW GRANTS FOR 'wpuser'@'localhost';
A common practice here is to use website name as database name, and also as database user name. This way, each database user will have access to only its database, and the naming convention is easy to remember.
Step 4.2. Setup WordPress folder
Change directory to /www
$ cd /www
Download the latest WordPress.
$ wget http://wordpress.org/latest.tar.gz
Extract it.
$ tar -zxvf latest.tar.gz
Move it to our document root.
$ mv wordpress/* /www/hexadix.com
Copy the wp-config-sample.php file to wp-config.php:
$ cp /www/hexadix.com/wp-config-sample.php /www/hexadix.com/wp-config.php
Edit the config file and fill in the database information.
$ vi /www/hexadix.com/wp-config.php
By default, it will look like this:
# /www/hexadix.com/wp-config.php
// ** MySQL settings – You can get this info from your web host ** //
/** The name of the database for WordPress */
define('DB_NAME', 'database_name_here');
/** MySQL database username */
define('DB_USER', 'username_here');
/** MySQL database password */
define('DB_PASSWORD', 'password_here');
/** MySQL hostname */
define('DB_HOST', 'localhost');
After modifying the entries according to the database and user we created, it will look like this:
# /www/hexadix.com/wp-config.php
// ** MySQL settings – You can get this info from your web host ** //
/** The name of the database for WordPress */
define('DB_NAME', 'wordpress');
/** MySQL database username */
define('DB_USER', 'wpuser');
/** MySQL database password */
define('DB_PASSWORD', 'wppassword');
/** MySQL hostname */
define('DB_HOST', 'localhost');
Make the nginx user the owner of the WordPress directory.
$ chown -R www-data:www-data /www/hexadix.com/
Step 4.3. Configure nginx
Open nginx main configuration file
$ sudo vi /etc/nginx/nginx.conf
Look for the line below.
include /etc/nginx/sites-enabled/*;
Change it to
include /etc/nginx/sites-enabled/*.conf;
The purpose of this step is to tell nginx to only load files with .conf extension. This way, whenever we want to disable a website, we only need to change the extension of that website’s configuration file to something else, which is more convenient.
Test new nginx configuration
$ sudo nginx -t
Reload nginx
$ sudo systemctl reload nginx
If everything works correctly, when you navigate your browser to your website now, the default Welcome to Nginx page should no longer be shown.
Create nginx configuration file for our WordPress
$ sudo vi /etc/nginx/sites-enabled/hexadix.com.conf
Remember to change the server_name and root to match your website. Also, the fastcgi_pass path may differ in your server installation.
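As a reference, a minimal WordPress server block might look like the sketch below; the domain, root, and php7.0-fpm socket path follow this tutorial’s examples and may differ on your server:

```nginx
# /etc/nginx/sites-enabled/hexadix.com.conf (sketch)
server {
    listen 80;
    server_name hexadix.com www.hexadix.com;
    root /www/hexadix.com;
    index index.php index.html;

    location / {
        # Fall back to index.php so WordPress pretty permalinks work
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }

    # Nginx does not process .htaccess files; never serve them
    location ~ /\.ht {
        deny all;
    }
}
```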
Test new nginx configuration
$ sudo nginx -t
Reload nginx
$ sudo systemctl reload nginx
If everything works correctly, by this time you can navigate to your domain (in this case http://hexadix.com) and see the WordPress Installation Wizard page. Follow the instructions in the wizard to set up an admin account and you’re done.
You can now login to your WordPress administrator site by navigating to:
http://yourdomain.com/wp-admin/
Your front page is at:
http://yourdomain.com
Setup multiple WordPress
To set up another WordPress on the same server, repeat the whole Step 4 in the same manner with the corresponding configurations.
Conclusion
You should now have multiple WordPress websites running on your Ubuntu 16.04 server. You can set up as many additional websites as you need by following the same steps in this tutorial.
How to setup Linux, Nginx, MySQL, PHP (LEMP stack) on Ubuntu 16.04
The LEMP software stack is a group of software that can be used to serve dynamic web pages and web applications. This is an acronym that describes a Linux operating system, with an Nginx web server. The backend data is stored in the MySQL database and the dynamic processing is handled by PHP.
In this guide, we will demonstrate how to install a LEMP stack on an Ubuntu 16.04 server. The Ubuntu operating system takes care of the first requirement. We will describe how to get the rest of the components up and running.
Prerequisites
Before you complete this tutorial, you should have a regular, non-root user account on your server with sudo privileges. You can learn how to set up this type of account by completing DigitalOcean’s Ubuntu 16.04 initial server setup.
Once you have your user available, sign into your server with that username. You are now ready to begin the steps outlined in this guide.
Step 1: Install the Nginx Web Server
In order to display web pages to our site visitors, we are going to employ Nginx, a modern, efficient web server.
All of the software we will be using for this procedure will come directly from Ubuntu’s default package repositories. This means we can use the apt package management suite to complete the installation.
Since this is our first time using apt for this session, we should start off by updating our local package index. We can then install the server.
Install nginx
sudo apt-get update
sudo apt-get install nginx
On Ubuntu 16.04, Nginx is configured to start running upon installation.
If you have the ufw firewall running, as outlined in our initial setup guide, you will need to allow connections to Nginx. Nginx registers itself with ufw upon installation, so the procedure is rather straightforward.
It is recommended that you enable the most restrictive profile that will still allow the traffic you want. Since we haven’t configured SSL for our server yet, in this guide, we will only need to allow traffic on port 80.
Enable ufw to allow HTTP
sudo ufw allow 'Nginx HTTP'
Verify the change
sudo ufw status
You should see HTTP traffic allowed in the displayed output:
Output
Status: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
Nginx HTTP ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
Nginx HTTP (v6) ALLOW Anywhere (v6)
With the new firewall rule added, you can test if the server is up and running by accessing your server’s domain name or public IP address in your web browser.
If you do not have a domain name pointed at your server and you do not know your server’s public IP address, you can find it by typing one of the following into your terminal:
Find server’s public IP address
ip addr show eth0 | grep inet | awk '{ print $2; }' | sed 's/\/.*$//'
This will print out a few IP addresses. You can try each of them in turn in your web browser.
As an alternative, you can check which IP address is accessible as viewed from other locations on the internet:
Alternative way to find server’s public IP address
curl -4 icanhazip.com
Type one of the addresses that you receive in your web browser. It should take you to Nginx’s default landing page:
Test Nginx’s default landing page in browser
http://server_domain_or_IP
If you see the default Nginx landing page, you have successfully installed Nginx.
Step 2: Install MySQL to Manage Site Data
Now that we have a web server, we need to install MySQL, a database management system, to store and manage the data for our site.
Install MySQL server
sudo apt-get install mysql-server
You will be asked to supply a root (administrative) password for use within the MySQL system.
The MySQL database software is now installed, but its configuration is not exactly complete yet.
To secure the installation, we can run a simple security script that will ask whether we want to modify some insecure defaults. Begin the script by typing:
Secure MySQL installation
sudo mysql_secure_installation
You will be asked to enter the password you set for the MySQL root account. Next, you will be asked if you want to configure the VALIDATE PASSWORD PLUGIN.
Warning: Enabling this feature is something of a judgment call. If enabled, passwords which don’t match the specified criteria will be rejected by MySQL with an error. This will cause issues if you use a weak password in conjunction with software which automatically configures MySQL user credentials, such as the Ubuntu packages for phpMyAdmin. It is safe to leave validation disabled, but you should always use strong, unique passwords for database credentials.
Answer y for yes, or anything else to continue without enabling.
VALIDATE PASSWORD PLUGIN can be used to test passwords
and improve security. It checks the strength of password
and allows the users to set only those passwords which are
secure enough. Would you like to setup VALIDATE PASSWORD plugin?
Press y|Y for Yes, any other key for No:
If you’ve enabled validation, you’ll be asked to select a level of password validation. Keep in mind that if you enter 2, for the strongest level, you will receive errors when attempting to set any password which does not contain numbers, upper and lowercase letters, and special characters, or which is based on common dictionary words.
There are three levels of password validation policy:
LOW Length >= 8
MEDIUM Length >= 8, numeric, mixed case, and special characters
STRONG Length >= 8, numeric, mixed case, special characters and dictionary file
Please enter 0 = LOW, 1 = MEDIUM and 2 = STRONG: 1
If you enabled password validation, you’ll be shown a password strength for the existing root password, and asked if you want to change that password. If you are happy with your current password, enter n for “no” at the prompt:
Using existing password for root.
Estimated strength of the password: 100
Change the password for root ? ((Press y|Y for Yes, any other key for No) : n
For the rest of the questions, you should press Y and hit the Enter key at each prompt. This will remove some anonymous users and the test database, disable remote root logins, and load these new rules so that MySQL immediately respects the changes we have made.
At this point, your database system is now set up and we can move on.
Step 3: Install PHP for Processing
We now have Nginx installed to serve our pages and MySQL installed to store and manage our data. However, we still don’t have anything that can generate dynamic content. We can use PHP for this.
Since Nginx does not contain native PHP processing like some other web servers, we will need to install php-fpm, which stands for “fastCGI process manager”. We will tell Nginx to pass PHP requests to this software for processing.
We can install this module and will also grab an additional helper package that will allow PHP to communicate with our database backend. The installation will pull in the necessary PHP core files. Do this by typing:
Install php-fpm and php-mysql
sudo apt-get install php-fpm php-mysql
Configure the PHP Processor
We now have our PHP components installed, but we need to make a slight configuration change to make our setup more secure.
Open the main php-fpm configuration file with root privileges:
sudo nano /etc/php/7.0/fpm/php.ini
What we are looking for in this file is the parameter that sets cgi.fix_pathinfo. This will be commented out with a semi-colon (;) and set to “1” by default.
This is an extremely insecure setting because it tells PHP to attempt to execute the closest file it can find if the requested PHP file cannot be found. This basically would allow users to craft PHP requests in a way that would allow them to execute scripts that they shouldn’t be allowed to execute.
We will change both of these conditions by uncommenting the line and setting it to “0” like this:
Change cgi.fix_pathinfo
# /etc/php/7.0/fpm/php.ini
cgi.fix_pathinfo=0
Save and close the file when you are finished.
Now, we just need to restart our PHP processor by typing:
Restart PHP processor
sudo systemctl restart php7.0-fpm
This will implement the change that we made.
Step 4: Configure Nginx to Use the PHP Processor
Now, we have all of the required components installed. The only configuration change we still need is to tell Nginx to use our PHP processor for dynamic content.
We do this on the server block level (server blocks are similar to Apache’s virtual hosts). Open the default Nginx server block configuration file by typing:
Open nginx default configuration file
sudo nano /etc/nginx/sites-available/default
Currently, with the comments removed, the Nginx default server block file looks like this:
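For reference, on a stock Ubuntu 16.04 install the file should look roughly like this sketch:

```nginx
# /etc/nginx/sites-available/default (comments removed)
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;

    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }
}
```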
We need to make some changes to this file for our site.
First, we need to add index.php as the first value of our index directive so that files named index.php are served, if available, when a directory is requested.
We can modify the server_name directive to point to our server’s domain name or public IP address.
For the actual PHP processing, we just need to uncomment a segment of the file that handles PHP requests by removing the pound symbols (#) from in front of each line. This will be the location ~\.php$ location block, the included fastcgi-php.conf snippet, and the socket associated with php-fpm.
We will also uncomment the location block dealing with .htaccess files using the same method. Nginx doesn’t process these files. If any of these files happen to find their way into the document root, they should not be served to visitors.
When you’ve made the above changes, you can save and close the file.
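With those edits applied, the server block should look roughly like this sketch (the php7.0-fpm socket path shown is the Ubuntu 16.04 default and may differ on your server):

```nginx
# /etc/nginx/sites-available/default (sketch after edits)
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;
    # index.php added first so PHP is preferred when a directory is requested
    index index.php index.html index.htm index.nginx-debian.html;

    server_name server_domain_or_IP;

    location / {
        try_files $uri $uri/ =404;
    }

    # Pass PHP requests to the php-fpm socket
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }

    # Do not serve .htaccess files
    location ~ /\.ht {
        deny all;
    }
}
```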
Test your configuration file for syntax errors by typing:
Test nginx configurations
sudo nginx -t
If any errors are reported, go back and recheck your file before continuing.
When you are ready, reload Nginx to make the necessary changes:
Reload nginx
sudo systemctl reload nginx
Step 5: Create a PHP File to Test Configuration
Your LEMP stack should now be completely set up. We can test it to validate that Nginx can correctly hand .php files off to our PHP processor.
We can do this by creating a test PHP file in our document root. Open a new file called info.php within your document root in your text editor:
Create a new PHP file to test PHP installation
sudo nano /var/www/html/info.php
Type or paste the following lines into the new file. This is valid PHP code that will return information about our server:
# /var/www/html/info.php
<?php
phpinfo();
When you are finished, save and close the file.
Now, you can visit this page in your web browser by visiting your server’s domain name or public IP address followed by /info.php:
http://server_domain_or_IP/info.php
You should see a web page that has been generated by PHP with information about your server:
If you see a page that looks like this, you’ve set up PHP processing with Nginx successfully.
After verifying that Nginx renders the page correctly, it’s best to remove the file you created as it can actually give unauthorized users some hints about your configuration that may help them try to break in. You can always regenerate this file if you need it later.
For now, remove the file by typing:
sudo rm /var/www/html/info.php
Conclusion
You should now have a LEMP stack configured on your Ubuntu 16.04 server. This gives you a very flexible foundation for serving web content to your visitors.
Nginx security vulnerabilities and hardening best practices – part I
At the moment, nginx is one of the most popular web servers. It is lightweight, fast, robust, supports the major operating systems, and is the web server of choice for Netflix, WordPress.com and other high-traffic sites. nginx can easily handle 10,000 inactive HTTP connections with as little as 2.5 MB of memory. In this article, I will provide tips on nginx server security, showing how to secure your nginx installation.
After installing nginx, you should gain a good understanding of nginx’s configuration settings which are found in nginx.conf. This is the main configuration file for nginx and therefore most of the security checks will be done using this file. By default nginx.conf can be found in [Nginx Installation Directory]/conf on Windows systems, and in the /etc/nginx or the /usr/local/etc/nginx directories on Linux systems.
#1. Turn on SELinux
Security-Enhanced Linux (SELinux) is a Linux kernel feature that provides a mechanism for supporting access control security policies and offers strong protection. It can stop many attacks before your system is rooted. See how to turn on SELinux for CentOS / RHEL based systems.
#2. Hardening /etc/sysctl.conf
You can control and configure Linux kernel and networking settings via /etc/sysctl.conf.
# Avoid a smurf attack
net.ipv4.icmp_echo_ignore_broadcasts = 1
# Turn on protection for bad icmp error messages
net.ipv4.icmp_ignore_bogus_error_responses = 1
# Turn on syncookies for SYN flood attack protection
net.ipv4.tcp_syncookies = 1
# Turn on and log spoofed, source routed, and redirect packets
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.default.log_martians = 1
# No source routed packets here
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
# Turn on reverse path filtering
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
# Make sure no one can alter the routing tables
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.secure_redirects = 0
# Don't act as a router
net.ipv4.ip_forward = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
# Turn on exec-shield
kernel.exec-shield = 1
kernel.randomize_va_space = 1
# Tune IPv6
net.ipv6.conf.default.router_solicitations = 0
net.ipv6.conf.default.accept_ra_rtr_pref = 0
net.ipv6.conf.default.accept_ra_pinfo = 0
net.ipv6.conf.default.accept_ra_defrtr = 0
net.ipv6.conf.default.autoconf = 0
net.ipv6.conf.default.dad_transmits = 0
net.ipv6.conf.default.max_addresses = 1
# Optimization for port use for LBs
# Increase system file descriptor limit
fs.file-max = 65535
# Allow for more PIDs (to reduce rollover problems); may break some programs (default 32768)
kernel.pid_max = 65536
# Increase system IP port limits
net.ipv4.ip_local_port_range = 2000 65000
# Increase TCP max buffer size settable using setsockopt()
net.ipv4.tcp_rmem = 4096 87380 8388608
net.ipv4.tcp_wmem = 4096 87380 8388608
# Increase Linux auto tuning TCP buffer limits
# min, default, and max number of bytes to use
# set max to at least 4MB, or higher if you use very high BDP paths
# Tcp Windows etc
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.core.netdev_max_backlog = 5000
net.ipv4.tcp_window_scaling = 1
#3. Disable any unwanted nginx modules
Nginx modules are automatically included during installation, and no run-time selection of modules is currently supported, so disabling certain modules requires re-compiling nginx. It is recommended to disable any modules which are not required, as this minimizes the risk of potential attacks by limiting the operations allowed by the web server. To do this, disable the modules with the configure option during installation. The example below disables the autoindex module (which generates automatic directory listings) and recompiles nginx.
$ ./configure --without-http_autoindex_module
$ make
$ make install
#4. Disable nginx server_tokens
By default the server_tokens directive in nginx displays the nginx version number in all automatically generated error pages. This could lead to unnecessary information disclosure, where an unauthorized user would be able to learn which version of nginx is being used. The server_tokens directive should be disabled in the nginx configuration file by setting server_tokens off.
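For example, inside the http block of nginx.conf:

```nginx
http {
    # Hide the nginx version in error pages and in the Server response header
    server_tokens off;
}
```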
#5. Install SELinux policy
By default SELinux will not protect the nginx web server. However, you can install and compile protection yourself. First, you will need the SELinux compile-time support packages for your distribution.
#6. Restrict network access with an iptables firewall
The following firewall script blocks everything and only allows:
Incoming HTTP (TCP port 80) requests
Incoming ICMP ping requests
Outgoing ntp (port 123) requests
Outgoing smtp (TCP port 25) requests
#!/bin/bash
IPT="/sbin/iptables"
#### IPS ######
# Get server public ip
SERVER_IP=$(ifconfig eth0 | grep 'inet addr:' | awk -F'inet addr:' '{ print $2}' | awk '{ print $1}')
LB1_IP="204.54.1.1"
LB2_IP="204.54.1.2"
# Do some smart logic so that we can use the same script on LB2 too
OTHER_LB=""
OPP_LB=""
[[ "$SERVER_IP" == "$LB1_IP" ]] && OTHER_LB="$LB2_IP" || OTHER_LB="$LB1_IP"
[[ "$OTHER_LB" == "$LB2_IP" ]] && OPP_LB="$LB1_IP" || OPP_LB="$LB2_IP"
### IPs ###
PUB_SSH_ONLY="122.xx.yy.zz/29"
#### FILES #####
BLOCKED_IP_TDB=/root/.fw/blocked.ip.txt
SPOOFIP="127.0.0.0/8 192.168.0.0/16 172.16.0.0/12 10.0.0.0/8 169.254.0.0/16 0.0.0.0/8 240.0.0.0/4 255.255.255.255/32 168.254.0.0/16 224.0.0.0/4 240.0.0.0/5 248.0.0.0/5 192.0.2.0/24"
BADIPS=$( [[ -f ${BLOCKED_IP_TDB} ]] && egrep -v "^#|^$" ${BLOCKED_IP_TDB})
### Interfaces ###
PUB_IF="eth0" # public interface
LO_IF="lo" # loopback
VPN_IF="eth1" # vpn / private net
### start firewall ###
echo "Setting LB1 $(hostname) Firewall..."
# DROP and close everything
$IPT -P INPUT DROP
$IPT -P OUTPUT DROP
$IPT -P FORWARD DROP
# Unlimited lo access
$IPT -A INPUT -i ${LO_IF} -j ACCEPT
$IPT -A OUTPUT -o ${LO_IF} -j ACCEPT
# Unlimited vpn / pnet access
$IPT -A INPUT -i ${VPN_IF} -j ACCEPT
$IPT -A OUTPUT -o ${VPN_IF} -j ACCEPT
# Drop sync
$IPT -A INPUT -i ${PUB_IF} -p tcp ! --syn -m state --state NEW -j DROP
# Drop Fragments
$IPT -A INPUT -i ${PUB_IF} -f -j DROP
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL FIN,URG,PSH -j DROP
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL ALL -j DROP
# Drop NULL packets
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL NONE -m limit --limit 5/m --limit-burst 7 -j LOG --log-prefix " NULL Packets "
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL NONE -j DROP
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags SYN,RST SYN,RST -j DROP
# Drop XMAS
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags SYN,FIN SYN,FIN -m limit --limit 5/m --limit-burst 7 -j LOG --log-prefix " XMAS Packets "
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags SYN,FIN SYN,FIN -j DROP
# Drop FIN packet scans
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags FIN,ACK FIN -m limit --limit 5/m --limit-burst 7 -j LOG --log-prefix " Fin Packets Scan "
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags FIN,ACK FIN -j DROP
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL SYN,RST,ACK,FIN,URG -j DROP
# Log and get rid of broadcast / multicast and invalid
$IPT -A INPUT -i ${PUB_IF} -m pkttype --pkt-type broadcast -j LOG --log-prefix " Broadcast "
$IPT -A INPUT -i ${PUB_IF} -m pkttype --pkt-type broadcast -j DROP
$IPT -A INPUT -i ${PUB_IF} -m pkttype --pkt-type multicast -j LOG --log-prefix " Multicast "
$IPT -A INPUT -i ${PUB_IF} -m pkttype --pkt-type multicast -j DROP
$IPT -A INPUT -i ${PUB_IF} -m state --state INVALID -j LOG --log-prefix " Invalid "
$IPT -A INPUT -i ${PUB_IF} -m state --state INVALID -j DROP
# Log and block spoofed ips
$IPT -N spooflist
for ipblock in $SPOOFIP
do
$IPT -A spooflist -i ${PUB_IF} -s $ipblock -j LOG --log-prefix " SPOOF List Block "
$IPT -A spooflist -i ${PUB_IF} -s $ipblock -j DROP
done
$IPT -I INPUT -j spooflist
$IPT -I OUTPUT -j spooflist
$IPT -I FORWARD -j spooflist
# Allow ssh only from selected public ips
for ip in ${PUB_SSH_ONLY}
do
$IPT -A INPUT -i ${PUB_IF} -s ${ip} -p tcp -d ${SERVER_IP} --destination-port 22 -j ACCEPT
$IPT -A OUTPUT -o ${PUB_IF} -d ${ip} -p tcp -s ${SERVER_IP} --sport 22 -j ACCEPT
done
# allow incoming ICMP ping pong stuff
$IPT -A INPUT -i ${PUB_IF} -p icmp --icmp-type 8 -s 0/0 -m state --state NEW,ESTABLISHED,RELATED -m limit --limit 30/sec -j ACCEPT
$IPT -A OUTPUT -o ${PUB_IF} -p icmp --icmp-type 0 -d 0/0 -m state --state ESTABLISHED,RELATED -j ACCEPT
# allow incoming HTTP port 80
$IPT -A INPUT -i ${PUB_IF} -p tcp -s 0/0 --sport 1024:65535 --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
$IPT -A OUTPUT -o ${PUB_IF} -p tcp --sport 80 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
# allow outgoing ntp
$IPT -A OUTPUT -o ${PUB_IF} -p udp --dport 123 -m state --state NEW,ESTABLISHED -j ACCEPT
$IPT -A INPUT -i ${PUB_IF} -p udp --sport 123 -m state --state ESTABLISHED -j ACCEPT
# allow outgoing smtp
$IPT -A OUTPUT -o ${PUB_IF} -p tcp --dport 25 -m state --state NEW,ESTABLISHED -j ACCEPT
$IPT -A INPUT -i ${PUB_IF} -p tcp --sport 25 -m state --state ESTABLISHED -j ACCEPT
### add your other rules here ####
#######################
# drop and log everything else
$IPT -A INPUT -m limit --limit 5/m --limit-burst 7 -j LOG --log-prefix " DEFAULT DROP "
$IPT -A INPUT -j DROP
exit 0
#7. Control Buffer Overflow Attacks
Buffer overflow attacks are made possible by writing data to a buffer in excess of that buffer's boundary, overwriting memory fragments of a process. To mitigate this in nginx we can set buffer size limits for all clients, using the following directives in the nginx configuration file:
client_body_buffer_size – Specifies the client request body buffer size. The default is 8k or 16k (platform dependent), but it can be set as low as 1k: client_body_buffer_size 1k;
client_header_buffer_size – Specifies the buffer size for the client request header. A buffer size of 1k is adequate for the majority of requests.
client_max_body_size – Specifies the maximum accepted body size of a client request. A value of 1k is sufficient for plain page requests, but it must be increased if you receive file uploads via the POST method.
large_client_header_buffers – Specifies the maximum number and size of buffers used to read large client request headers. A large_client_header_buffers 2 1k directive sets a maximum of 2 buffers of 1k each, so the longest request line (URI) accepted is 2 kB.
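Taken together, the directives above can be collected into one snippet in the http context. This is a sketch with deliberately strict values; tune them for your traffic, and raise client_max_body_size if you accept uploads:

```nginx
## Strict buffer limits against oversized requests (tune per workload)
client_body_buffer_size     1k;
client_header_buffer_size   1k;
client_max_body_size        1k;
large_client_header_buffers 2 1k;
```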
You also need to control timeouts to improve server performance and to cut off slow clients. Adjust them as follows:
client_body_timeout 10; – Sets the read timeout for the client request body. The timeout applies only between two successive read operations, not to the transfer of the whole body. If the client sends nothing within this time, nginx returns the "Request Time-out" (408) error. The default is 60.
client_header_timeout 10; – Sets the timeout for reading the client request header. Again, the timeout applies only between two successive read operations; if the client sends nothing within this time, nginx returns the 408 error.
keepalive_timeout 5 5; – The first parameter sets the timeout for keep-alive connections with the client; the server closes connections after this time. The optional second parameter sets the time value in the Keep-Alive: timeout=time response header, which can convince some browsers to close the connection themselves, so that the server does not have to. Without the second parameter, nginx does not send a Keep-Alive header (though this header is not what makes a connection keep-alive).
send_timeout 10; – Sets the timeout for transmitting a response to the client. The timeout applies only between two successive write operations, not to the whole transfer; if the client accepts nothing within this time, nginx shuts down the connection.
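The timeout directives above combine into the following http-context sketch, using the values from this section:

```nginx
## Aggressive timeouts to cut off slow clients
client_body_timeout   10;
client_header_timeout 10;
keepalive_timeout     5 5;
send_timeout          10;
```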
#8. Control simultaneous connections
You can use the NginxHttpLimitZone module to limit the number of simultaneous connections for a given session key or, as a special case, from one IP address. Edit nginx.conf:
# The zone in which the session states are stored (named slimits here).
# 1m holds about 32000 states at 32 bytes each; 5m allows roughly 160000.
limit_zone slimits $binary_remote_addr 5m;
# Control maximum number of simultaneous connections for one session i.e.
# restricts the amount of connections from a single ip address
limit_conn slimits 5;
The above limits remote clients to no more than 5 concurrently open connections per remote IP address.
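Note that in current nginx releases the limit_zone directive has been replaced by limit_conn_zone, where the zone is named with a zone= parameter. An equivalent sketch, assuming the same zone name and limits as above:

```nginx
# http context: one 5m zone keyed by the client address
limit_conn_zone $binary_remote_addr zone=slimits:5m;
server {
    # at most 5 simultaneous connections per client address
    limit_conn slimits 5;
}
```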
#9. Allow access to our domain only
If a bot is just making random scans across all domains, deny it. You should only allow requests for your configured virtual domains or reverse-proxied hosts; you don't want to serve requests made against the bare IP address:
# Only requests to our Host are allowed i.e. hexadix.com, static.hexadix.com, www.hexadix.com
if ($host !~ ^(hexadix\.com|static\.hexadix\.com|www\.hexadix\.com)$ ) {
return 444;
}
#10. Disable any unwanted HTTP methods
It is suggested to disable any HTTP methods which are not going to be utilized and which are not required on the web server. The following condition, added under the 'server' section of the nginx configuration file, allows only the GET, HEAD, and POST methods and filters out methods such as DELETE and TRACE by issuing a 444 status code (nginx closes the connection without sending a response).
if ($request_method !~ ^(GET|HEAD|POST)$ )
{
return 444;
}
#11. Deny certain User-Agents
You can easily block user agents (scanners, bots, and spammers) that may be abusing your server.
# Block some robots
if ($http_user_agent ~* msnbot|scrapbot) {
return 403;
}
#12. Block referral spam
Referer spam is dangerous. It can harm your SEO ranking via web logs (if published), since the referer field points to the spammer's site. You can block referer spammers with these lines.
# Deny certain Referers
if ( $http_referer ~* (babes|forsale|girl|jewelry|love|nudit|organic|poker|porn|sex|teen) )
{
# return 404;
return 403;
}
#13. Stop image hot-linking
Image or HTML hotlinking means someone links to one of your images but displays it on their own site. The end result: you pay the bandwidth bill and the content looks like part of the hijacker's site. This is usually done on forums and blogs. I strongly suggest you block and stop image hotlinking at the server level itself.
# Stop deep linking or hot linking
location /images/ {
valid_referers none blocked www.example.com example.com;
if ($invalid_referer) {
return 403;
}
}
Another example, which redirects hotlinked requests to a banned image:
valid_referers blocked www.example.com example.com;
if ($invalid_referer) {
rewrite ^/images/uploads.*\.(gif|jpg|jpeg|png)$ http://www.examples.com/banned.jpg last;
}
See also: HowTo: Use nginx map to block image hotlinking. This is useful if you want to block tons of domains.
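The map approach mentioned above scales better than a long valid_referers list when many domains must be blocked. A sketch, with hypothetical example domain names, might look like this:

```nginx
# http context: flag referers from known hotlinking domains (example names)
map $http_referer $bad_referer {
    default             0;
    "~*hotlinker\.test" 1;
    "~*spamsite\.test"  1;
}
server {
    location /images/ {
        if ($bad_referer) {
            return 403;
        }
    }
}
```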
#14. Directory restrictions
You can set access control for a specified directory. All web directories should be configured on a case-by-case basis, allowing access only where needed.
Limiting Access By IP Address
You can limit access to the /docs/ directory by IP address:
location /docs/ {
# block one workstation
deny 192.168.1.1;
# allow anyone in 192.168.1.0/24
allow 192.168.1.0/24;
# drop rest of the world
deny all;
}
Password Protect The Directory
First create the password file and add a user called vivek:
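The commands for this step are missing from the text. A sketch of one way to do it, assuming openssl is available (htpasswd from httpd-tools/apache2-utils works equally well; the path and password below are examples):

```shell
# Create an htpasswd-style file with user "vivek" using an apr1 (MD5-crypt) hash
printf 'vivek:%s\n' "$(openssl passwd -apr1 'S3cret!')" > .htpasswd
# Inspect the result (one "user:hash" line)
cat .htpasswd
```

Then reference the file from the protected location block with auth_basic "Restricted"; and auth_basic_user_file /path/to/.htpasswd;.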
#16. Nginx And PHP Security Tips
PHP is one of the most popular server-side scripting languages. Edit /etc/php.ini as follows:
# Disallow dangerous functions
disable_functions = phpinfo, system, mail, exec
## Try to limit resources ##
# Maximum execution time of each script, in seconds
max_execution_time = 30
# Maximum amount of time each script may spend parsing request data
max_input_time = 60
# Maximum amount of memory a script may consume (8MB)
memory_limit = 8M
# Maximum size of POST data that PHP will accept.
post_max_size = 8M
# Whether to allow HTTP file uploads.
file_uploads = Off
# Maximum allowed size for uploaded files.
upload_max_filesize = 2M
# Do not expose PHP error messages to external users
display_errors = Off
# Turn on safe mode (note: safe_mode was removed in PHP 5.4+)
safe_mode = On
# Only allow access to executables in isolated directory
safe_mode_exec_dir = php-required-executables-path
# Limit external access to PHP environment
safe_mode_allowed_env_vars = PHP_
# Restrict PHP information leakage
expose_php = Off
# Log all errors
log_errors = On
# Do not register globals for input data
register_globals = Off
# Ensure PHP redirects appropriately
cgi.force_redirect = 0
# Enable SQL safe mode
sql.safe_mode = On
# Avoid Opening remote files
allow_url_fopen = Off
# Pass all .php files onto a php-fpm/php-fcgi server.
location ~ \.php$ {
# Zero-day exploit defense.
# http://forum.nginx.org/read.php?2,88845,page=3
# Won't work properly (404 error) if the file is not stored on this server, which is entirely possible with php-fpm/php-fcgi.
# Comment the 'try_files' line out if you set up php-fpm/php-fcgi on another machine. And then cross your fingers that you won't get hacked.
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
include fastcgi_params;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
# fastcgi_intercept_errors on;
fastcgi_pass php;
}
#17. Run Nginx In A Chroot Jail (Containers) If Possible
Putting nginx in a chroot jail minimizes the damage from a potential break-in by isolating the web server in a small section of the filesystem. You can use a traditional chroot setup with nginx. If possible, use FreeBSD jails, Xen, or OpenVZ virtualization, which use the concept of containers.
#18. Limit connections per IP at the firewall level
A web server must keep an eye on connections and limit connections per second. This is serving 101. Both pf and iptables can throttle end users before they ever reach your nginx server.
Linux Iptables: Throttle Nginx Connections Per Second
The following example will drop incoming connections if an IP makes more than 15 connection attempts to port 80 within 60 seconds:
$ /sbin/iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --set
$ /sbin/iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --update --seconds 60 --hitcount 15 -j DROP
$ service iptables save
BSD PF: Throttle Nginx Connections Per Second
Edit your /etc/pf.conf and update it as follows. The following limits the maximum number of connections per source to 100. 15/5 is the rate limit: no more than 15 new connections in any 5-second span. Anyone who breaks these rules is added to our abusive_ips table and blocked from making further connections. Finally, the flush keyword kills all states created by the matching rule that originate from a host which exceeds these limits.
webserver_ip="202.54.1.1"
table <abusive_ips> persist
block in quick from <abusive_ips>
pass in on $ext_if proto tcp to $webserver_ip port www flags S/SA keep state (max-src-conn 100, max-src-conn-rate 15/5, overload <abusive_ips> flush)
Please adjust all values to your requirements and traffic (browsers may open multiple connections to your site).
#19. Configure operating system to protect web server
Turn on SELinux as described above. Set correct permissions on the document root. nginx runs as a user named nginx; however, the files in the document root (/nginx or /usr/local/nginx/html) should not be owned or writable by that user. To find files with wrong ownership or permissions, use:
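The find command itself is missing from the text; a sketch, under the assumption that the document root is /usr/local/nginx/html (adjust the path to your layout):

```shell
# Assumed document root; adjust to your setup
DOCROOT=/usr/local/nginx/html
if [ -d "$DOCROOT" ]; then
    # files owned by the nginx user should not exist in the docroot
    find "$DOCROOT" -user nginx -print
    # world-writable files are also suspect
    find "$DOCROOT" -perm -o+w -type f -print
fi
```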
Pass the -delete option to the find command and it will get rid of those files too.
#20. Restrict outgoing nginx connections
Crackers will use tools such as wget to download files locally onto your server. Use iptables to block outgoing connections from the nginx user. The ipt_owner module matches various characteristics of the packet creator for locally generated packets; it is valid only in the OUTPUT chain. In this example, we allow the vivek user to connect outside using port 80 (useful for RHN access or to grab CentOS updates via repos):
$ /sbin/iptables -A OUTPUT -o eth0 -m owner --uid-owner vivek -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
Add the above rule to your iptables-based shell script. Do not allow the nginx web server user to connect outside.
#21. Make use of ModSecurity
ModSecurity is an open-source module that works as a web application firewall. Its functionality includes filtering, server identity masking, and null-byte attack prevention; it also allows real-time traffic monitoring. It is therefore recommended to follow the ModSecurity manual to install the module in order to strengthen your security options.
#22. Set up and configure nginx access and error logs
Nginx access and error logs are enabled by default, located at logs/error.log and logs/access.log respectively. The error_log directive in the nginx configuration file lets you set where error logs are saved as well as which logs are recorded according to their severity level. For example, a 'crit' severity level logs important problems that need to be addressed, along with any issues of higher severity. To set the severity level of error logs to 'crit', configure the directive as follows: error_log logs/error.log crit;. A complete list of error_log severity levels can be found in the official nginx documentation.
Alternatively, the access_log directive can be modified in the nginx configuration file to specify a location where the access logs will be saved (other than the default), and the log_format directive can be used to configure the format of the logged messages.
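Putting the two directives together, a minimal logging sketch (the paths are examples, and combined is nginx's predefined access-log format):

```nginx
error_log  /var/log/nginx/error.log crit;
access_log /var/log/nginx/access.log combined;
```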
#23. Monitor nginx access and error logs
Continuous monitoring and management of the nginx log files will give a better understanding of requests made to your web server and also list any errors that were encountered. This will help to expose any attempted attacks made against the server as well as identify any optimizations that need to be carried out to improve the server’s performance. Log management tools, such as logrotate, can be used to rotate and compress old logs in order to free up disk space. Also the ngx_http_stub_status_module module provides access to basic status information, and nginx Plus, the commercial version of nginx, provides real-time activity monitoring of traffic, load and other performance metrics.
Check the log files regularly. They will give you some understanding of what attacks are being thrown against the server and let you verify that the necessary level of security is present:
# grep "/login.php??" /usr/local/nginx/logs/access_log
# grep "...etc/passwd" /usr/local/nginx/logs/access_log
# egrep -i "denied|error|warn" /usr/local/nginx/logs/error_log
The auditd service is provided for system auditing. Turn it on to audit SELinux events, authentication events, file modifications, account modifications, and so on. As usual, disable all unneeded services and follow our "Linux Server Hardening" security tips.
#24. Configure Nginx to include an X-Frame-Options header
The X-Frame-Options HTTP response header indicates whether a browser should be allowed to render a page in a <frame> or an <iframe>. This helps prevent clickjacking attacks, and it is therefore recommended to configure nginx to include the X-Frame-Options header. To do so, the following parameter must be added to the nginx configuration file under the 'server' section: add_header X-Frame-Options "SAMEORIGIN";
server {
listen 8887;
server_name localhost;
add_header X-Frame-Options "SAMEORIGIN";
location / {
root html;
index index.html index.htm;
}
}
#25. X-XSS Protection
Inject the X-XSS-Protection HTTP header to mitigate cross-site scripting attacks.
Modify your default.conf or ssl.conf file to add the following:
add_header X-XSS-Protection "1; mode=block";
Save the configuration file and restart nginx. You can use the Check Headers tool to verify the header after implementation.
#26. Keep your nginx up to date
As with any other server software, it is recommended that you always update your nginx server to the latest stable version. New releases often contain fixes for vulnerabilities identified in previous versions, such as the directory traversal vulnerability that existed in nginx versions prior to 0.7.63 and in 0.8.x before 0.8.17, and they frequently include new security features and improvements. Nginx security advisories and news about the latest updates can be found on the nginx website.