Nginx security vulnerabilities and hardening best practices – part II: SSL

Have you read part I? Nginx security vulnerabilities and hardening best practices – part I

Introduction

HTTP is a plain text protocol and it is open to man-in-the-middle attacks and passive monitoring. If our website allows users to authenticate, we should use SSL to encrypt the content sent and received between users and our web server.

Google has already considered HTTPS one of their ranking factors:

Security is a top priority for Google. We invest a lot in making sure that our services use industry-leading security, like strong HTTPS encryption by default. That means that people using Search, Gmail and Google Drive, for example, automatically have a secure connection to Google.

Beyond our own stuff, we’re also working to make the Internet safer more broadly. A big part of that is making sure that websites people access from Google are secure. For instance, we have created resources to help webmasters prevent and fix security breaches on their sites.

We want to go even further. At Google I/O a few months ago, we called for “HTTPS everywhere” on the web.

HTTPS is becoming a standard in the HTTP environment. To help secure it, organizations such as Let’s Encrypt issue free SSL certificates that we can use on our websites. So there’s no excuse for not applying HTTPS on our website.

The default SSL configuration on nginx is vulnerable to some common attacks. SSL Labs provides a free scan to check whether our SSL configuration is secure enough. If our website gets anything below an A, we should review our configuration.

In this tutorial, we are going to look at some common SSL vulnerabilities and how to harden nginx’s SSL configuration against them. At the end of this tutorial, hopefully we can get an A+ in SSL Labs’ test.

Prerequisites

In this tutorial, let’s assume that we already have a website hosted on an nginx server. We have also bought an SSL certificate for our domain from a Certificate Authority or got a free one from Let’s Encrypt.

If you need more information on the SSL vulnerabilities discussed here, see the references at the end of this article.

We are going to edit the nginx settings in the file /etc/nginx/sites-enabled/yoursite.com (on Ubuntu/Debian) or in /etc/nginx/conf.d/nginx.conf (on RHEL/CentOS).

For the entire tutorial, you need to edit the parts inside the server block for port 443 (the SSL config). At the end of the tutorial you can find the complete config example.

Make sure you back up the files before editing them!

The BEAST attack and RC4

In short, by tampering with an encryption algorithm’s CBC (cipher block chaining) mode, portions of the encrypted traffic can be secretly decrypted.

Recent browser versions have enabled client-side mitigation for the BEAST attack. The recommendation used to be to disable all TLS 1.0 ciphers and only offer RC4. However, RC4 has a growing list of attacks against it, many of which have crossed the line from theoretical to practical. Moreover, there is reason to believe that the NSA has broken RC4, their so-called “big breakthrough.”

Disabling RC4 has several ramifications. One, users with legacy browsers such as Internet Explorer on Windows XP will use 3DES instead. Triple-DES is more secure than RC4, but it is significantly more expensive, and your server will pay that cost for these users. Two, RC4 mitigates BEAST, so disabling RC4 makes TLS 1.0 users susceptible to that attack by moving them to AES-CBC (the usual server-side BEAST “fix” is to prioritize RC4 above all else). I am confident that the flaws in RC4 significantly outweigh the risks from BEAST. Indeed, with client-side mitigation (which Chrome and Firefox both provide), BEAST is a non-issue. But the risk from RC4 only grows: more cryptanalysis will surface over time.

Factoring RSA-EXPORT Keys (FREAK)

FREAK is a man-in-the-middle (MITM) vulnerability discovered by a group of cryptographers at INRIA, Microsoft Research and IMDEA. FREAK stands for “Factoring RSA-EXPORT Keys.”

The vulnerability dates back to the 1990s, when the US government banned selling crypto software overseas unless it used export cipher suites, which involved encryption keys no longer than 512 bits.

It turns out that some modern TLS clients – including Apple’s SecureTransport and OpenSSL – have a bug in them. This bug causes them to accept RSA export-grade keys even when the client didn’t ask for export-grade RSA. The impact of this bug can be quite nasty: it admits a ‘man in the middle’ attack whereby an active attacker can force down the quality of a connection, provided that the client is vulnerable and the server supports export RSA.

There are two parts to the attack, as the server must also accept “export grade RSA.”

The MITM attack works as follows:

  • In the client’s Hello message, it asks for a standard ‘RSA’ ciphersuite.
  • The MITM attacker changes this message to ask for ‘export RSA’.
  • The server responds with a 512-bit export RSA key, signed with its long-term key.
  • The client accepts this weak key due to the OpenSSL/SecureTransport bug.
  • The attacker factors the RSA modulus to recover the corresponding RSA decryption key.
  • When the client encrypts the ‘pre-master secret’ to the server, the attacker can now decrypt it to recover the TLS ‘master secret’.
  • From here on out, the attacker sees plaintext and can inject anything it wants.
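
If you want to verify that your own server does not offer export-grade RSA, a quick check is the following (a sketch, assuming a local OpenSSL build that still includes the export ciphers, and using example.com as a placeholder for your domain):

openssl s_client -connect example.com:443 -cipher EXPORT

If the handshake fails with a “handshake failure” or “no ciphers available” error, the server is not accepting export-grade RSA.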

Logjam (DH EXPORT)

Researchers from several universities and institutions conducted a study that found an issue in the TLS protocol. In a report the researchers report two attack methods.

Diffie-Hellman key exchange allows protocols that depend on TLS to agree on a shared key and negotiate a secure session over a plain text connection.

With the first attack, a man-in-the-middle can downgrade a vulnerable TLS connection to 512-bit export-grade cryptography, which would allow the attacker to read and change the data. The second threat is that many servers use the same prime numbers for Diffie-Hellman key exchange instead of generating their own unique DH parameters.

The team estimates that an academic team can break 768-bit primes and that a nation-state could break a 1024-bit prime. By breaking one 1024-bit prime, one could eavesdrop on 18 percent of the top one million HTTPS domains. Breaking a second prime would open up 66 percent of VPNs and 26 percent of SSH servers.

Later on in this guide we generate our own unique DH parameters and use a ciphersuite that does not enable EXPORT-grade ciphers. Make sure your OpenSSL is updated to the latest available version and urge your clients to use upgraded software as well. Updated browsers refuse DH parameters lower than 768/1024 bits as a fix for this.

Cloudflare has a detailed guide on logjam.

Heartbleed

Heartbleed is a security bug disclosed in April 2014 in the OpenSSL cryptography library, which is a widely used implementation of the Transport Layer Security (TLS) protocol. Heartbleed may be exploited regardless of whether the party using a vulnerable OpenSSL instance for TLS is a server or a client. It results from improper input validation (due to a missing bounds check) in the implementation of the DTLS heartbeat extension (RFC6520), thus the bug’s name derives from “heartbeat”. The vulnerability is classified as a buffer over-read, a situation where more data can be read than should be allowed.

What versions of OpenSSL are affected by Heartbleed?

Status of different versions:

  • OpenSSL 1.0.1 through 1.0.1f (inclusive) are vulnerable
  • OpenSSL 1.0.1g is NOT vulnerable
  • OpenSSL 1.0.0 branch is NOT vulnerable
  • OpenSSL 0.9.8 branch is NOT vulnerable

The bug was introduced to OpenSSL in December 2011 and has been out in the wild since OpenSSL release 1.0.1 on 14th of March 2012. OpenSSL 1.0.1g released on 7th of April 2014 fixes the bug.

By updating OpenSSL, you are no longer vulnerable to this bug.
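
To check which OpenSSL version is installed on your server, run:

openssl version

If it reports a version in the vulnerable 1.0.1–1.0.1f range, update the package and restart nginx so that it picks up the patched library.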

SSL Compression (CRIME attack)

The CRIME attack uses SSL Compression to do its magic. SSL compression is turned off by default in nginx 1.1.6+/1.0.9+ (if OpenSSL 1.0.0+ used) and nginx 1.3.2+/1.2.2+ (if older versions of OpenSSL are used).

If you are using an earlier version of nginx or OpenSSL and your distro has not backported this option, then you need to recompile OpenSSL without ZLIB support. This will prevent OpenSSL from using the DEFLATE compression method. If you do this, you can still use regular HTTP-level DEFLATE (gzip) compression.

SSLv2 and SSLv3

SSL v2 is insecure, so we need to disable it. We also disable SSLv3, as TLS 1.0 suffers a downgrade attack, allowing an attacker to force a connection to use SSLv3 and therefore disable forward secrecy.

Again edit the config file:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
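
If your nginx and OpenSSL are recent enough (roughly nginx 1.13.0+ built against OpenSSL 1.1.1+; verify the exact versions for your distribution), you can drop the older protocols and enable TLS 1.3 as well. A sketch:

ssl_protocols TLSv1.2 TLSv1.3;

Only do this if you no longer need to support clients that are limited to TLS 1.0/1.1.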

Poodle and TLS-FALLBACK-SCSV

SSLv3 makes it possible to exploit the POODLE bug. This is one more major reason to disable it.

Google has proposed an extension to SSL/TLS named TLS_FALLBACK_SCSV that seeks to prevent forced SSL downgrades. It is automatically enabled if you upgrade OpenSSL to the following versions:

  • OpenSSL 1.0.1 has TLS_FALLBACK_SCSV in 1.0.1j and higher.
  • OpenSSL 1.0.0 has TLS_FALLBACK_SCSV in 1.0.0o and higher.
  • OpenSSL 0.9.8 has TLS_FALLBACK_SCSV in 0.9.8zc and higher.

More info in the NGINX documentation.
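
To test whether a server honours the fallback SCSV, you can simulate a downgraded client (a sketch, assuming your local openssl s_client supports the -fallback_scsv and -no_tls1_2 flags, with example.com as a placeholder):

openssl s_client -connect example.com:443 -fallback_scsv -no_tls1_2

A patched server that supports a higher protocol version should abort the handshake with an “inappropriate fallback” alert.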

The Cipher Suite

Forward Secrecy ensures that session keys cannot be recovered even if a long-term key is compromised. PFS accomplishes this by enforcing the derivation of a new key for each and every session.

This means that if the private key is compromised, it cannot be used to decrypt previously recorded SSL traffic.

The cipher suites that provide Perfect Forward Secrecy are those that use an ephemeral form of the Diffie-Hellman key exchange. Their disadvantage is their overhead, which can be improved by using the elliptic curve variants.

Two cipher suites follow: the first is the one recommended here, and the second, backwards-compatible one is recommended by the Mozilla Foundation.

The recommended cipher suite:

ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';

The recommended cipher suite for backwards compatibility (IE6/WinXP):

ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH:DHE-RSA-AES128-GCM-SHA256:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";

If your version of OpenSSL is old, unavailable ciphers will be discarded automatically. Always use the full ciphersuite above and let OpenSSL pick the ones it supports.

The ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect forward secrecy.

Older versions of OpenSSL may not return the full list of algorithms. AES-GCM and some ECDHE are fairly recent, and not present on most versions of OpenSSL shipped with Ubuntu or RHEL.
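
To see which ciphers from a given suite your local OpenSSL actually supports, you can expand the cipher string with the openssl ciphers tool:

openssl ciphers -v 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH'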

Prioritization logic

  • ECDHE+AESGCM ciphers are selected first. These are TLS 1.2 ciphers. No known attack currently targets these ciphers.
  • PFS ciphersuites are preferred, with ECDHE first, then DHE.
  • AES 128 is preferred to AES 256. There have been discussions on whether the extra security of AES256 is worth the cost, and the result is far from obvious. At the moment, AES128 is preferred, because it provides good security, is really fast, and seems to be more resistant to timing attacks.
  • In the backward compatible ciphersuite, AES is preferred to 3DES. BEAST attacks on AES are mitigated in TLS 1.1 and above, and difficult to achieve in TLS 1.0. In the non-backward compatible ciphersuite, 3DES is not present.
  • RC4 is removed entirely. 3DES is used for backward compatibility. See the discussion of RC4 weaknesses above.

Mandatory discards

  • aNULL contains non-authenticated Diffie-Hellman key exchanges, that are subject to Man-In-The-Middle (MITM) attacks
  • eNULL contains null-encryption ciphers (cleartext)
  • EXPORT are legacy weak ciphers that were marked as exportable by US law
  • RC4 contains ciphers that use the deprecated ARCFOUR algorithm
  • DES contains ciphers that use the deprecated Data Encryption Standard
  • SSLv2 contains all ciphers that were defined in the old version of the SSL standard, now deprecated
  • MD5 contains all the ciphers that use the deprecated message digest 5 as the hashing algorithm

Extra settings

Make sure you also add these lines:

ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;

When choosing a cipher during an SSLv3 or TLSv1 handshake, normally the client’s preference is used. If this directive is enabled, the server’s preference will be used instead.

More info on ssl_prefer_server_ciphers.

More info on ssl_ciphers.
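
Optionally, you can also control how long entries stay in the shared session cache with the ssl_session_timeout directive (10 minutes below is just an illustrative value):

ssl_session_timeout 10m;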

Forward Secrecy & Diffie Hellman Ephemeral Parameters

The concept of forward secrecy is simple: client and server negotiate a key that never hits the wire, and is destroyed at the end of the session. The RSA private key from the server is used to sign a Diffie-Hellman key exchange between the client and the server. The pre-master key obtained from the Diffie-Hellman handshake is then used for encryption. Since the pre-master key is specific to a connection between a client and a server, and used only for a limited amount of time, it is called ephemeral.

With Forward Secrecy, if an attacker gets a hold of the server’s private key, it will not be able to decrypt past communications. The private key is only used to sign the DH handshake, which does not reveal the pre-master key. Diffie-Hellman ensures that the pre-master keys never leave the client and the server, and cannot be intercepted by a MITM.

All versions of nginx as of 1.4.4 rely on OpenSSL for input parameters to Diffie-Hellman (DH). Unfortunately, this means that Ephemeral Diffie-Hellman (DHE) will use OpenSSL’s defaults, which include a 1024-bit key for the key-exchange. Since we’re using a 2048-bit certificate, DHE clients will use a weaker key-exchange than non-ephemeral DH clients.

We need to generate a stronger DHE parameter:

cd /etc/ssl/certs
openssl dhparam -out dhparam.pem 4096

And then tell nginx to use it for DHE key-exchange:

ssl_dhparam /etc/ssl/certs/dhparam.pem;

Note that generating a 4096-bit parameter will take a long time to finish (from 30 minutes to several hours). Although it is recommended to generate a 4096-bit one, you can use a 2048-bit parameter for now. However, a 1024-bit parameter is NOT acceptable.
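
For the elliptic-curve variant of the ephemeral key exchange, nginx also lets you choose the curve via the ssl_ecdh_curve directive. A sketch (secp384r1 is one common choice; newer nginx/OpenSSL combinations also accept auto or a list of curves):

ssl_ecdh_curve secp384r1;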

OCSP Stapling

When connecting to a server, clients should verify the validity of the server certificate using either a Certificate Revocation List (CRL) or an Online Certificate Status Protocol (OCSP) record. The problem with CRLs is that the lists have grown huge and take forever to download.

OCSP is much more lightweight, as only one record is retrieved at a time. But the side effect is that OCSP requests must be made to a 3rd party OCSP responder when connecting to a server, which adds latency and potential failures. In fact, the OCSP responders operated by CAs are often so unreliable that browsers will fail silently if no response is received in a timely manner. This reduces security by allowing an attacker to DoS an OCSP responder to disable the validation.

The solution is to allow the server to send its cached OCSP record during the TLS handshake, therefore bypassing the OCSP responder. This mechanism saves a roundtrip between the client and the OCSP responder, and is called OCSP Stapling.

The server will send a cached OCSP response only if the client requests it, by announcing support for the status_request TLS extension in its CLIENT HELLO.

Most servers will cache OCSP response for up to 48 hours. At regular intervals, the server will connect to the OCSP responder of the CA to retrieve a fresh OCSP record. The location of the OCSP responder is taken from the Authority Information Access field of the signed certificate.

See tutorial on enabling OCSP stapling on NGINX.
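
A minimal nginx snippet to enable stapling might look like this (a sketch: the ssl_trusted_certificate path assumes a Let’s Encrypt layout matching the certificate paths used later in this article, and the resolver addresses are just an example):

ssl_stapling on;
ssl_stapling_verify on;
# Chain of the CA that signed your certificate; path assumes Let's Encrypt
ssl_trusted_certificate /etc/letsencrypt/live/hexadix.com/chain.pem;
# DNS resolver nginx uses to reach the OCSP responder
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;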

HTTP Strict Transport Security

When possible, you should enable HTTP Strict Transport Security (HSTS), which instructs browsers to communicate with your site only over HTTPS.

Add this to nginx.conf:

add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";

HTTP Public Key Pinning Extension

You should also enable the HTTP Public Key Pinning Extension.

Public Key Pinning means that a certificate chain must include a whitelisted public key. It ensures only whitelisted Certificate Authorities (CA) can sign certificates for *.example.com, and not any CA in your browser store.

Read how to set it up here.
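
As a rough illustration (a sketch only: the pin-sha256 values are placeholders that must be replaced with the base64 SHA-256 hashes of your own and a backup key, and a wrong pin can lock visitors out of your site), the header looks like this:

# Placeholder pins – replace with hashes of your own public keys
add_header Public-Key-Pins 'pin-sha256="base64PrimaryKeyHash="; pin-sha256="base64BackupKeyHash="; max-age=5184000; includeSubDomains';

A pin can be computed from a certificate with, for example: openssl x509 -in cert.pem -pubkey -noout | openssl pkey -pubin -outform der | openssl dgst -sha256 -binary | base64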

Conclusion

The final SSL config may look something like this:

# HTTP will be redirected to HTTPS
server {
  listen 80;
  server_name hexadix.com www.hexadix.com;
  rewrite ^ https://$server_name$request_uri permanent;
}

# HTTPS server configuration
server {
  listen       443 ssl;
  server_name hexadix.com;

  add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;

  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_prefer_server_ciphers On;
  ssl_session_cache shared:SSL:10m;
  ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
  # ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;
  ssl_certificate /etc/letsencrypt/live/hexadix.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/hexadix.com/privkey.pem;
  ssl_dhparam /etc/ssl/certs/dhparam.pem;
}

After applying the above config lines you need to restart nginx:

# Check the config first:
/etc/init.d/nginx configtest
# Then restart:
/etc/init.d/nginx restart
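
On distributions that use systemd, an equivalent check-and-restart (assuming the nginx binary is on your PATH) is:

# Check the config first:
nginx -t
# Then restart:
systemctl restart nginx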

Now use the SSL Labs test to see if you get a nice A+. And, of course, have a safe, strong and future-proof SSL configuration!

Also read the Mozilla page on the subject.

References

http://www.acunetix.com/blog/articles/nginx-server-security-hardening-configuration-1/
https://geekflare.com/nginx-webserver-security-hardening-guide/
https://nealpoole.com/blog/2011/04/setting-up-php-fastcgi-and-nginx-dont-trust-the-tutorials-check-your-configuration/
http://www.softwareprojects.com/resources/programming/t-optimizing-nginx-and-php-fpm-for-high-traffic-sites-2081.html
https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
https://weakdh.org/sysadmin.html
https://www.cyberciti.biz/tips/linux-unix-bsd-nginx-webserver-security.html

Nginx security vulnerabilities and hardening best practices – part I

Read part II: Nginx security vulnerabilities and hardening best practices – part II: SSL

Introduction

At the moment, nginx is one of the most popular web servers. It is lightweight, fast, robust, supports the major operating systems and is the web server of choice for Netflix, WordPress.com and other high-traffic sites. nginx can easily handle 10,000 inactive HTTP connections with as little as 2.5 MB of memory. In this article, I will provide tips on nginx server security, showing how to secure your nginx installation.

After installing nginx, you should gain a good understanding of nginx’s configuration settings which are found in nginx.conf. This is the main configuration file for nginx and therefore most of the security checks will be done using this file. By default nginx.conf can be found in [Nginx Installation Directory]/conf on Windows systems, and in the /etc/nginx or the /usr/local/etc/nginx directories on Linux systems.

#1. Turn on SELinux

Security-Enhanced Linux (SELinux) is a Linux kernel feature that provides a mechanism for supporting access control security policies and offers great protection. It can stop many attacks before your system is rooted. See how to turn on SELinux for CentOS / RHEL based systems.

#2. Hardening /etc/sysctl.conf

You can control and configure Linux kernel and networking settings via /etc/sysctl.conf.

# Avoid a smurf attack
net.ipv4.icmp_echo_ignore_broadcasts = 1
 
# Turn on protection for bad icmp error messages
net.ipv4.icmp_ignore_bogus_error_responses = 1
 
# Turn on syncookies for SYN flood attack protection
net.ipv4.tcp_syncookies = 1
 
# Turn on and log spoofed, source routed, and redirect packets
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.default.log_martians = 1
 
# No source routed packets here
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
 
# Turn on reverse path filtering
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
 
# Make sure no one can alter the routing tables
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.secure_redirects = 0
 
# Don't act as a router
net.ipv4.ip_forward = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
 
 
# Turn on exec-shield
kernel.exec-shield = 1
kernel.randomize_va_space = 1
 
# Tune IPv6
net.ipv6.conf.default.router_solicitations = 0
net.ipv6.conf.default.accept_ra_rtr_pref = 0
net.ipv6.conf.default.accept_ra_pinfo = 0
net.ipv6.conf.default.accept_ra_defrtr = 0
net.ipv6.conf.default.autoconf = 0
net.ipv6.conf.default.dad_transmits = 0
net.ipv6.conf.default.max_addresses = 1
 
# Optimization for port use for LBs
# Increase system file descriptor limit    
fs.file-max = 65535
 
# Allow for more PIDs (to reduce rollover problems); may break some programs (default is 32768)
kernel.pid_max = 65536
 
# Increase system IP port limits
net.ipv4.ip_local_port_range = 2000 65000
 
# Increase TCP max buffer size settable using setsockopt()
net.ipv4.tcp_rmem = 4096 87380 8388608
net.ipv4.tcp_wmem = 4096 87380 8388608
 
# Increase Linux auto tuning TCP buffer limits
# min, default, and max number of bytes to use
# set max to at least 4MB, or higher if you use very high BDP paths
# Tcp Windows etc
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.core.netdev_max_backlog = 5000
net.ipv4.tcp_window_scaling = 1
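
After editing /etc/sysctl.conf, the new settings can be applied without a reboot (run as root):

$ sysctl -p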

#3. Disable any unwanted nginx modules

Nginx modules are automatically included during installation of nginx and no run-time selection of modules is currently supported, therefore disabling certain modules requires re-compiling nginx. It is recommended to disable any modules which are not required, as this will minimize the risk of potential attacks by limiting the operations allowed by the web server. To do this, you need to disable these modules with the configure option during installation. The example below disables the autoindex module, which generates automatic directory listings, and then recompiles nginx.

$ ./configure --without-http_autoindex_module
$ make
$ make install
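
To see which configure arguments and modules your current nginx binary was built with, you can run:

$ nginx -V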

#4. Disable nginx server_tokens

By default the server_tokens directive in nginx displays the nginx version number in all automatically generated error pages. This could lead to unnecessary information disclosure, where an unauthorized user would be able to learn which version of nginx is being used. The server_tokens directive should be disabled in the nginx configuration file by setting server_tokens off.

A 404 Not Found page showing the Nginx version number through the server_tokens directive
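
A minimal example, valid in the http, server or location context of the nginx configuration file:

# Hide the nginx version number in headers and error pages
server_tokens off;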

#5. Install SELinux policy

By default SELinux will not protect the nginx web server. However, you can install and compile protection as follows. First, install required SELinux compile time support:

$ yum -y install selinux-policy-targeted selinux-policy-devel

Download targeted SELinux policies to harden the nginx webserver on Linux servers from the project home page:

$ cd /opt
$ wget 'http://downloads.sourceforge.net/project/selinuxnginx/se-ngix_1_0_10.tar.gz?use_mirror=nchc'

Untar the downloaded file:

$ tar -zxvf se-ngix_1_0_10.tar.gz

Compile the source:

$ cd se-ngix_1_0_10/nginx
$ make

Sample outputs:

Compiling targeted nginx module
/usr/bin/checkmodule:  loading policy configuration from tmp/nginx.tmp
/usr/bin/checkmodule:  policy configuration loaded
/usr/bin/checkmodule:  writing binary representation (version 6) to tmp/nginx.mod
Creating targeted nginx.pp policy package
rm tmp/nginx.mod.fc tmp/nginx.mod

Install the resulting nginx.pp SELinux module:

$ /usr/sbin/semodule -i nginx.pp

#6. Restrict iptables based firewall

The following firewall script blocks everything and only allows:

  • Incoming HTTP (TCP port 80) requests
  • Incoming ICMP ping requests
  • Outgoing ntp (port 123) requests
  • Outgoing smtp (TCP port 25) requests
#!/bin/bash
IPT="/sbin/iptables"
 
#### IPS ######
# Get server public ip 
SERVER_IP=$(ifconfig eth0 | grep 'inet addr:' | awk -F'inet addr:' '{ print $2}' | awk '{ print $1}')
LB1_IP="204.54.1.1"
LB2_IP="204.54.1.2"
 
# Do some smart logic so that we can use the same script on LB2 too
OTHER_LB=""
[[ "$SERVER_IP" == "$LB1_IP" ]] && OTHER_LB="$LB2_IP" || OTHER_LB="$LB1_IP"
[[ "$OTHER_LB" == "$LB2_IP" ]] && OPP_LB="$LB1_IP" || OPP_LB="$LB2_IP"
 
### IPs ###
PUB_SSH_ONLY="122.xx.yy.zz/29"
 
#### FILES #####
BLOCKED_IP_TDB=/root/.fw/blocked.ip.txt
SPOOFIP="127.0.0.0/8 192.168.0.0/16 172.16.0.0/12 10.0.0.0/8 169.254.0.0/16 0.0.0.0/8 240.0.0.0/4 255.255.255.255/32 168.254.0.0/16 224.0.0.0/4 240.0.0.0/5 248.0.0.0/5 192.0.2.0/24"
BADIPS=$( [[ -f ${BLOCKED_IP_TDB} ]] && egrep -v "^#|^$" ${BLOCKED_IP_TDB})
 
### Interfaces ###
PUB_IF="eth0"   # public interface
LO_IF="lo"      # loopback
VPN_IF="eth1"   # vpn / private net
 
### start firewall ###
echo "Setting LB1 $(hostname) Firewall..."
 
# DROP and close everything 
$IPT -P INPUT DROP
$IPT -P OUTPUT DROP
$IPT -P FORWARD DROP
 
# Unlimited lo access
$IPT -A INPUT -i ${LO_IF} -j ACCEPT
$IPT -A OUTPUT -o ${LO_IF} -j ACCEPT
 
# Unlimited vpn / pnet access
$IPT -A INPUT -i ${VPN_IF} -j ACCEPT
$IPT -A OUTPUT -o ${VPN_IF} -j ACCEPT
 
# Drop sync
$IPT -A INPUT -i ${PUB_IF} -p tcp ! --syn -m state --state NEW -j DROP
 
# Drop Fragments
$IPT -A INPUT -i ${PUB_IF} -f -j DROP
 
$IPT  -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL FIN,URG,PSH -j DROP
$IPT  -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL ALL -j DROP
 
# Drop NULL packets
$IPT  -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL NONE -m limit --limit 5/m --limit-burst 7 -j LOG --log-prefix " NULL Packets "
$IPT  -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL NONE -j DROP
 
$IPT  -A INPUT -i ${PUB_IF} -p tcp --tcp-flags SYN,RST SYN,RST -j DROP
 
# Drop XMAS
$IPT  -A INPUT -i ${PUB_IF} -p tcp --tcp-flags SYN,FIN SYN,FIN -m limit --limit 5/m --limit-burst 7 -j LOG --log-prefix " XMAS Packets "
$IPT  -A INPUT -i ${PUB_IF} -p tcp --tcp-flags SYN,FIN SYN,FIN -j DROP
 
# Drop FIN packet scans
$IPT  -A INPUT -i ${PUB_IF} -p tcp --tcp-flags FIN,ACK FIN -m limit --limit 5/m --limit-burst 7 -j LOG --log-prefix " Fin Packets Scan "
$IPT  -A INPUT -i ${PUB_IF} -p tcp --tcp-flags FIN,ACK FIN -j DROP
 
$IPT  -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL SYN,RST,ACK,FIN,URG -j DROP
 
# Log and get rid of broadcast / multicast and invalid 
$IPT  -A INPUT -i ${PUB_IF} -m pkttype --pkt-type broadcast -j LOG --log-prefix " Broadcast "
$IPT  -A INPUT -i ${PUB_IF} -m pkttype --pkt-type broadcast -j DROP
 
$IPT  -A INPUT -i ${PUB_IF} -m pkttype --pkt-type multicast -j LOG --log-prefix " Multicast "
$IPT  -A INPUT -i ${PUB_IF} -m pkttype --pkt-type multicast -j DROP
 
$IPT  -A INPUT -i ${PUB_IF} -m state --state INVALID -j LOG --log-prefix " Invalid "
$IPT  -A INPUT -i ${PUB_IF} -m state --state INVALID -j DROP
 
# Log and block spoofed ips
$IPT -N spooflist
for ipblock in $SPOOFIP
do
         $IPT -A spooflist -i ${PUB_IF} -s $ipblock -j LOG --log-prefix " SPOOF List Block "
         $IPT -A spooflist -i ${PUB_IF} -s $ipblock -j DROP
done
$IPT -I INPUT -j spooflist
$IPT -I OUTPUT -j spooflist
$IPT -I FORWARD -j spooflist
 
# Allow ssh only from selected public ips
for ip in ${PUB_SSH_ONLY}
do
        $IPT -A INPUT -i ${PUB_IF} -s ${ip} -p tcp -d ${SERVER_IP} --destination-port 22 -j ACCEPT
        $IPT -A OUTPUT -o ${PUB_IF} -d ${ip} -p tcp -s ${SERVER_IP} --sport 22 -j ACCEPT
done
 
# allow incoming ICMP ping pong stuff
$IPT -A INPUT -i ${PUB_IF} -p icmp --icmp-type 8 -s 0/0 -m state --state NEW,ESTABLISHED,RELATED -m limit --limit 30/sec  -j ACCEPT
$IPT -A OUTPUT -o ${PUB_IF} -p icmp --icmp-type 0 -d 0/0 -m state --state ESTABLISHED,RELATED -j ACCEPT
 
# allow incoming HTTP port 80
$IPT -A INPUT -i ${PUB_IF} -p tcp -s 0/0 --sport 1024:65535 --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
$IPT -A OUTPUT -o ${PUB_IF} -p tcp --sport 80 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
 
 
# allow outgoing ntp 
$IPT -A OUTPUT -o ${PUB_IF} -p udp --dport 123 -m state --state NEW,ESTABLISHED -j ACCEPT
$IPT -A INPUT -i ${PUB_IF} -p udp --sport 123 -m state --state ESTABLISHED -j ACCEPT
 
# allow outgoing smtp
$IPT -A OUTPUT -o ${PUB_IF} -p tcp --dport 25 -m state --state NEW,ESTABLISHED -j ACCEPT
$IPT -A INPUT -i ${PUB_IF} -p tcp --sport 25 -m state --state ESTABLISHED -j ACCEPT
 
### add your other rules here ####
 
#######################
# drop and log everything else
$IPT -A INPUT -m limit --limit 5/m --limit-burst 7 -j LOG --log-prefix " DEFAULT DROP "
$IPT -A INPUT -j DROP
 
exit 0

#7. Control Buffer Overflow Attacks

Buffer overflow attacks are made possible by writing data to a buffer, exceeding that buffer’s boundary and overwriting memory fragments of a process. To prevent this in nginx we can set buffer size limitations for all clients. This can be done through the nginx configuration file using the following directives:

client_body_buffer_size  1K;
client_header_buffer_size 1k;
client_max_body_size 1k;
large_client_header_buffers 2 1k;
 
  • client_body_buffer_size – Use this directive to specify the client request body buffer size. The default value is 8k or 16k, but it is recommended to set this as low as 1k as follows: client_body_buffer_size 1k;
  • client_header_buffer_size – Use this directive to specify the header buffer size for the client request header. A buffer size of 1k is adequate for the majority of requests.
  • client_max_body_size – Use this directive to specify the maximum accepted body size for a client request. A 1k limit should be sufficient; however, it needs to be increased if you are receiving file uploads via the POST method.
  • large_client_header_buffers – Use this directive to specify the maximum number and size of buffers used to read large client request headers. A large_client_header_buffers 2 1k directive sets the maximum number of buffers to 2, each with a maximum size of 1k, so nginx will accept up to 2 kB of URI data in request headers.

You also need to control timeouts to improve server performance and disconnect idle clients. Edit as follows:

client_body_timeout 10;
client_header_timeout 10;
keepalive_timeout 5 5;
send_timeout 10;
 
  • client_body_timeout 10; – Sets the read timeout for the request body from the client. The timeout applies only if the body is not received in one read step. If the client sends nothing within this time, nginx returns the 408 “Request Time-out” error. The default is 60.
  • client_header_timeout 10; – Sets the timeout for reading the client request header. The timeout applies only if the header is not received in one read step. If the client sends nothing within this time, nginx returns the 408 “Request Time-out” error.
  • keepalive_timeout 5 5; – The first parameter sets the timeout for keep-alive connections with the client. The server will close connections after this time. The optional second parameter sets the time value in the Keep-Alive: timeout=time response header. This header can convince some browsers to close the connection, so that the server does not have to. Without this parameter, nginx does not send a Keep-Alive header (though this is not what makes a connection “keep-alive”).
  • send_timeout 10; – Sets the response timeout to the client. The timeout applies not to the entire transfer of the response but only to the interval between two write operations; if the client accepts nothing within this time, nginx closes the connection.

#8. Control simultaneous connections

You can use NginxHttpLimitZone module to limit the number of simultaneous connections for the assigned session or as a special case, from one IP address. Edit nginx.conf:

# Directive describes the zone, in which the session states are stored i.e. store in slimits.
# 1m can handle 32000 sessions with 32 bytes/session; 5m can handle about 160000 sessions
limit_zone slimits $binary_remote_addr 5m;
 
# Control maximum number of simultaneous connections for one session i.e.
# restricts the amount of connections from a single ip address
limit_conn slimits 5;

The above limits remote clients to no more than 5 concurrently open connections per remote IP address.
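
Note that on nginx 1.1.8 and newer, limit_zone has been superseded by limit_conn_zone. A sketch of the equivalent configuration, reusing the same zone name and limit:

# Same 5m zone keyed on the client address, new syntax
limit_conn_zone $binary_remote_addr zone=slimits:5m;
 
# Allow at most 5 simultaneous connections per client IP
limit_conn slimits 5;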

#9. Allow access to our domain only

If a bot is just making random scans of servers across all domains, simply deny it. You should only allow requests for your configured virtual domains or reverse-proxied hosts; you don’t want nginx to answer requests made directly to the server’s IP address:

# Only requests to our Host are allowed i.e. hexadix.com, static.hexadix.com, www.hexadix.com
if ($host !~ ^(hexadix.com|static.hexadix.com|www.hexadix.com)$ ) {
    return 444;
}

#10. Disable any unwanted HTTP methods

It is suggested to disable any HTTP methods which are not going to be utilized and which are not required to be implemented on the web server. The below condition, which is added under the ‘server’ section in the Nginx configuration file will only allow GET, HEAD, and POST methods and will filter out methods such as DELETE and TRACE by issuing a 444 No Response status code.

if ($request_method !~ ^(GET|HEAD|POST)$ )
{
    return 444;
}

#11. Deny certain User-Agents

You can easily block user-agents i.e. scanners, bots, and spammers who may be abusing your server.

# Block download agents
if ($http_user_agent ~* LWP::Simple|BBBike|wget) {
    return 403;
}

Block robots called msnbot and scrapbot:

# Block some robots
if ($http_user_agent ~* msnbot|scrapbot) {
    return 403;
}

#12. Block referral spam

Referer spam is dangerous. It can harm your SEO ranking via web logs (if published), as the referer field points to their spammy site. You can block access from referer spammers with these lines.

# Deny certain Referers
if ( $http_referer ~* (babes|forsale|girl|jewelry|love|nudit|organic|poker|porn|sex|teen) )
{  
    # return 404;
    return 403;   
}

#13. Stop image hot-linking

Image or HTML hotlinking means someone links to one of your images from their own site, displaying it on their pages. The end result: you end up paying the bandwidth bills and the content looks like part of the hijacker’s site. This is usually done on forums and blogs. I strongly suggest you block and stop image hotlinking at the server level itself.

# Stop deep linking or hot linking
location /images/ {
  valid_referers none blocked www.example.com example.com;
   if ($invalid_referer) {
     return   403;
   }
}

Another example with link to banned image:

valid_referers blocked www.example.com example.com;
if ($invalid_referer) {
    rewrite ^/images/uploads.*\.(gif|jpg|jpeg|png)$ http://www.example.com/banned.jpg last;
}

See also: HowTo: Use nginx map to block image hotlinking. This is useful if you want to block tons of domains.

#14. Directory restrictions

You can set access control for a specified directory. All web directories should be configured on a case-by-case basis, allowing access only where needed.

Limiting Access By IP Address
You can limit access to the /docs/ directory by IP address:

location /docs/ {
    # block one workstation
    deny    192.168.1.1;

    # allow anyone in 192.168.1.0/24
    allow   192.168.1.0/24;

    # drop rest of the world
    deny    all;
}

Password Protect The Directory
First create the password file and add a user called vivek:

$ mkdir /usr/local/nginx/conf/.htpasswd/
$ htpasswd -c /usr/local/nginx/conf/.htpasswd/passwd vivek

Edit nginx.conf and protect the required directories as follows:

# Password Protect /personal-images/ and /delta/ directories ###
location ~ /(personal-images/.*|delta/.*) {
    auth_basic  "Restricted"; 
    auth_basic_user_file   /usr/local/nginx/conf/.htpasswd/passwd;
}

Once a password file has been generated, subsequent users can be added with the following command:

$ htpasswd -s /usr/local/nginx/conf/.htpasswd/passwd userName

#15. Nginx SSL configuration

HTTP is a plain text protocol and it is open to passive monitoring. You should use SSL to encrypt your content for users.

Create an SSL Certificate

Type the following commands:

$ cd /usr/local/nginx/conf
$ openssl genrsa -des3 -out server.key 1024
$ openssl req -new -key server.key -out server.csr
$ cp server.key server.key.org
$ openssl rsa -in server.key.org -out server.key
$ openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
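
A 1024-bit RSA key is weak by current standards. If your OpenSSL supports it, you can instead generate a 2048-bit key and a self-signed certificate without a passphrase in a single step (a sketch; adjust file names and validity as needed):

$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout server.key -out server.crt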

Edit nginx.conf and update it as follows:

server {
    server_name example.com;
    listen 443;
    ssl on;
    ssl_certificate /usr/local/nginx/conf/server.crt;
    ssl_certificate_key /usr/local/nginx/conf/server.key;
    access_log /usr/local/nginx/logs/ssl.access.log;
    error_log /usr/local/nginx/logs/ssl.error.log;
}

Reload nginx:

$ /usr/local/nginx/sbin/nginx -s reload

#16. Nginx and PHP security tips

PHP is one of the most popular server-side scripting languages. Edit /etc/php.ini as follows:

; Disallow dangerous functions
disable_functions = phpinfo, system, mail, exec
 
;; Try to limit resources ;;
 
; Maximum execution time of each script, in seconds
max_execution_time = 30
 
; Maximum amount of time each script may spend parsing request data
max_input_time = 60
 
; Maximum amount of memory a script may consume (8MB)
memory_limit = 8M
 
; Maximum size of POST data that PHP will accept.
post_max_size = 8M
 
; Whether to allow HTTP file uploads.
file_uploads = Off
 
; Maximum allowed size for uploaded files.
upload_max_filesize = 2M
 
; Do not expose PHP error messages to external users
display_errors = Off
 
; Turn on safe mode
safe_mode = On
 
; Only allow access to executables in isolated directory
safe_mode_exec_dir = php-required-executables-path
 
; Limit external access to PHP environment
safe_mode_allowed_env_vars = PHP_
 
; Restrict PHP information leakage
expose_php = Off
 
; Log all errors
log_errors = On
 
; Do not register globals for input data
register_globals = Off
 
; Minimize allowable PHP post size
post_max_size = 1K
 
; Ensure PHP redirects appropriately
cgi.force_redirect = 0
 
; Disallow uploading unless necessary
file_uploads = Off
 
; Enable SQL safe mode
sql.safe_mode = On
 
; Avoid opening remote files
allow_url_fopen = Off

A misconfigured nginx server can allow non-PHP files to be executed as PHP.
Let’s prevent that:

# Pass all .php files onto a php-fpm/php-fcgi server.
location ~ \.php$ {
   # Zero-day exploit defense.
   # http://forum.nginx.org/read.php?2,88845,page=3
   # Won't work properly (404 error) if the file is not stored on this server, which is entirely possible with php-fpm/php-fcgi.
   # Comment the 'try_files' line out if you set up php-fpm/php-fcgi on another machine.  And then cross your fingers that you won't get hacked.
   try_files $uri =404;

   fastcgi_split_path_info ^(.+\.php)(/.+)$;
   include fastcgi_params;
   fastcgi_index index.php;
   fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
#    fastcgi_intercept_errors on;
   fastcgi_pass php;
}
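
The try_files defense above pairs with a php.ini setting: disabling PATH_INFO fix-ups ensures that a request such as /uploads/evil.jpg/nonexistent.php (the names here are just an illustration) cannot cause PHP to execute evil.jpg as a script:

; In /etc/php.ini: never fall back to another file when the requested script does not exist
cgi.fix_pathinfo = 0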

#17. Run Nginx In A Chroot Jail (Containers) If Possible

Putting nginx in a chroot jail minimizes the damage done by a potential break-in by isolating the web server to a small section of the filesystem. You can use traditional chroot kind of setup with nginx. If possible use FreeBSD jails, XEN, or OpenVZ virtualization which uses the concept of containers.

#18. Limit connections per IP at the firewall level

A webserver must keep an eye on connections and limit connections per second. This is serving 101. Both pf and iptables can throttle end users before they reach your nginx server.

Linux Iptables: Throttle Nginx Connections Per Second
The following example will drop incoming connections if an IP makes more than 15 connection attempts to port 80 within 60 seconds:

$ /sbin/iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --set
$ /sbin/iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --update --seconds 60  --hitcount 15 -j DROP
$ service iptables save

BSD PF: Throttle Nginx Connections Per Second
Edit your /etc/pf.conf and update it as follows. The following limits the maximum number of connections per source to 100. 15/5 rate-limits the number of connections to 15 in a 5 second span. If anyone breaks the rules, they are added to the abusive_ips table and blocked from making any further connections. Finally, the flush keyword kills all states created by the matching rule that originate from hosts exceeding these limits.

webserver_ip="202.54.1.1"
table <abusive_ips> persist
block in quick from <abusive_ips>
pass in on $ext_if proto tcp to $webserver_ip port www flags S/SA keep state (max-src-conn 100, max-src-conn-rate 15/5, overload <abusive_ips> flush)

Please adjust all values as per your requirements and traffic (browsers may open multiple connections to your site). See also:

  1. Sample PF firewall script.
  2. Sample Iptables firewall script.

#19. Configure operating system to protect web server

Turn on SELinux as described above. Set correct permissions on the nginx document root. nginx runs as a user named nginx. However, the files in the DocumentRoot (/nginx or /usr/local/nginx/html) should not be owned or writable by that user. To find files with wrong permissions, use:

$ find /nginx -user nginx
$ find /usr/local/nginx/html -user nginx

Make sure you change file ownership to root or another user. A typical set of permissions for /usr/local/nginx/html/:

$ ls -l /usr/local/nginx/html/

Sample outputs:

-rw-r--r-- 1 root root 925 Jan  3 00:50 error4xx.html
-rw-r--r-- 1 root root  52 Jan  3 10:00 error5xx.html
-rw-r--r-- 1 root root 134 Jan  3 00:52 index.html

You must delete unwanted backup files created by vi or other text editors:

$ find /nginx -name '.?*' -not -name .ht* -or -name '*~' -or -name '*.bak*' -or -name '*.old*'
$ find /usr/local/nginx/html/ -name '.?*' -not -name .ht* -or -name '*~' -or -name '*.bak*' -or -name '*.old*'

Pass the -delete option to the find command and it will get rid of those files too.

#20. Restrict outgoing nginx connections

Crackers will download files locally onto your server using tools such as wget. Use iptables to block outgoing connections from the nginx user. The ipt_owner module attempts to match various characteristics of the packet creator for locally generated packets; it is only valid in the OUTPUT chain. In this example, we allow the vivek user to connect outside using port 80 (useful for RHN access or to grab CentOS updates via repos):

$ /sbin/iptables -A OUTPUT -o eth0 -m owner --uid-owner vivek -p tcp --dport 80 -m state --state NEW,ESTABLISHED  -j ACCEPT

Add the above rule to your iptables-based shell script. Do not allow the nginx web server user to connect outside.

#21. Make use of ModSecurity

ModSecurity is an open-source module that works as a web application firewall. Different functionalities include filtering, server identity masking, and null byte attack prevention. Real-time traffic monitoring is also allowed through this module. Therefore it is recommended to follow the ModSecurity manual to install the mod_security module in order to strengthen your security options.

#22. Set up and configure nginx access and error logs

Nginx access and error logs are enabled by default and are located at logs/error.log for error logs and at logs/access.log for access logs. The error_log directive in the nginx configuration file will allow you to set the directory where the error logs will be saved as well as specify which logs will be recorded according to their severity level. For example, a ‘crit’ severity level will log important problems that need to be addressed and any other issues which have a higher severity level than ‘crit’. To set the severity level of error logs to ‘crit’ the error_log directive needs to be set up as follows – error_log logs/error.log crit;. A complete list of error_log severity levels can be found in the official nginx documentation available here.

Alternatively, the access_log directive can be modified from the nginx configuration file to specify a location where the access logs will be saved (other than the default location). Also the log_format directive can be used to configure the format of the logged messages as explained here.

#23. Monitor nginx access and error logs

Continuous monitoring and management of the nginx log files will give a better understanding of requests made to your web server and also list any errors that were encountered. This will help to expose any attempted attacks made against the server as well as identify any optimizations that need to be carried out to improve the server’s performance. Log management tools, such as logrotate, can be used to rotate and compress old logs in order to free up disk space. Also the ngx_http_stub_status_module module provides access to basic status information, and nginx Plus, the commercial version of nginx, provides real-time activity monitoring of traffic, load and other performance metrics.

Check the log files. They will give you some understanding of what attacks are thrown against the server and allow you to check whether the necessary level of security is present.

$ grep "/login.php??" /usr/local/nginx/logs/access_log
$ grep "...etc/passwd" /usr/local/nginx/logs/access_log
$ egrep -i "denied|error|warn" /usr/local/nginx/logs/error_log

The auditd service is provided for system auditing. Turn it on to audit SELinux events, authentication events, file modifications, account modifications and so on. As usual, disable all unneeded services and follow our “Linux Server Hardening” security tips.

#24. Configure Nginx to include an X-Frame-Options header

The X-Frame-Options HTTP response header is normally used to indicate if a browser should be allowed to render a page in a <frame> or an <iframe>. This could prevent clickjacking attacks and therefore it is recommended to enable the Nginx server to include the X-Frame-Options header. In order to do so the following parameter must be added to the nginx configuration file under the ‘server’ section – add_header X-Frame-Options "SAMEORIGIN";

server {
    listen 8887;
    server_name localhost;

    add_header X-Frame-Options "SAMEORIGIN";

    location / {
        root html;
        index index.html index.htm;
    }
}

#25. X-XSS Protection

Add the X-XSS-Protection HTTP header to mitigate cross-site scripting attacks.

Modify the default.conf or ssl.conf file to add the following:

add_header X-XSS-Protection "1; mode=block";

Save the configuration file and restart nginx. You can use Check Headers tool to verify after implementation.

#26. Keep your nginx up to date

As with any other server software, it is recommended that you always update your Nginx server to the latest stable version. These often contain fixes for vulnerabilities identified in previous versions, such as the directory traversal vulnerability that existed in Nginx versions prior to 0.7.63, and 0.8.x before 0.8.17. These updates frequently include new security features and improvements. Nginx security advisories can be found here and news about latest updates can be found here.
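
To check which version you are currently running, so you can compare it against the latest stable release, run:

$ nginx -v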

Conclusion

In this tutorial, we have looked at several ways to harden our Nginx configuration.
In the next tutorial, we are going to look at how to harden SSL configurations on our nginx server.
Read part II here: Nginx security vulnerabilities and hardening best practices – part II: SSL

References

http://www.acunetix.com/blog/articles/nginx-server-security-hardening-configuration-1/
https://geekflare.com/nginx-webserver-security-hardening-guide/
https://nealpoole.com/blog/2011/04/setting-up-php-fastcgi-and-nginx-dont-trust-the-tutorials-check-your-configuration/
http://www.softwareprojects.com/resources/programming/t-optimizing-nginx-and-php-fpm-for-high-traffic-sites-2081.html
https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
https://weakdh.org/sysadmin.html
https://www.cyberciti.biz/tips/linux-unix-bsd-nginx-webserver-security.html