How to Protect Against Slow HTTP Attacks
Slow HTTP attacks are denial-of-service (DoS) attacks in which the attacker sends HTTP requests to a Web server in pieces, slowly, one piece at a time. If an HTTP request is not complete, or if the transfer rate is very low, the server keeps its resources busy waiting for the rest of the data. When the server's concurrent connection pool reaches its maximum, the result is a DoS. Slow HTTP attacks are easy to execute because they require only minimal resources from the attacker.
In this article, I describe several simple steps to protect against slow HTTP attacks and to make the attacks more difficult to execute.
Previous articles in the series cover:
- Identifying Slow HTTP Attack Vulnerabilities on Web Applications
- New Open-Source Tool for Slow HTTP DoS Attack Vulnerabilities
- Testing Web Servers for Slow HTTP Attacks
Protection Strategies
To protect your Web server against slow HTTP attacks, I recommend the following:
- Reject / drop connections with HTTP methods (verbs) not supported by the URL.
- Limit the header and message body to a minimal reasonable length. Set tighter URL-specific limits as appropriate for every resource that accepts a message body.
- Set an absolute connection timeout, if possible. Of course, if the timeout is too short, you risk dropping legitimate slow connections; and if it's too long, you don't get any protection from attacks. I recommend a timeout value based on your connection length statistics, e.g. a timeout slightly greater than the median connection lifetime should satisfy most legitimate clients.
- The backlog of pending connections allows the server to hold connections it's not yet ready to accept, which helps it withstand a larger slow HTTP attack and gives legitimate users a chance to be served under high load. However, a large backlog also prolongs the attack, since every connection request is backlogged regardless of whether it's legitimate. If the server supports a backlog, I recommend making it reasonably large so that your HTTP server can handle a small attack.
- Define the minimum incoming data rate, and drop connections that are slower than that rate. Care must be taken not to set the minimum too low, or you risk dropping legitimate connections.
Server-Specific Recommendations
Applying the above steps to the HTTP servers tested in the previous article indicates the following server-specific settings:
Apache
- Using the <Limit> and <LimitExcept> directives to drop requests with methods not supported by the URL alone won't help, because Apache waits for the entire request to complete before applying these directives. Therefore, use them in conjunction with the LimitRequestFields, LimitRequestFieldSize, LimitRequestBody, LimitRequestLine, and LimitXMLRequestBody directives as appropriate (a configuration sketch follows this list). For example, it is unlikely that your web app requires an 8190-byte header, an unlimited body size, or 100 headers per request, as most default configurations allow.
- Set reasonable TimeOut and KeepAliveTimeout directive values. The default of 300 seconds for TimeOut is overkill for most situations.
- ListenBacklog's default value of 511 could be increased, which is helpful when the server can't accept connections fast enough.
- Increase the MaxRequestWorkers directive to raise the maximum number of simultaneous connections the server will handle.
- Adjust the AcceptFilter directive, which is supported on FreeBSD and Linux, and enables operating system specific optimizations for a listening socket by protocol type. For example, the httpready Accept Filter buffers entire HTTP requests at the kernel level.
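Pulling the directives above together, a minimal httpd.conf sketch might look like the following. The limits, paths, and worker counts are illustrative assumptions (Apache 2.4 syntax is assumed) and should be tuned to your own traffic statistics:

    # Cap request-line and header sizes (defaults: 8190 bytes per header, 100 header fields)
    LimitRequestLine      2048
    LimitRequestFields    50
    LimitRequestFieldSize 1024

    # Tighter, URL-specific body limit for a resource that accepts uploads (path is hypothetical)
    <Location "/app/upload">
        LimitRequestBody 1048576
    </Location>

    # Reject methods that a read-only resource does not support (path is hypothetical)
    <Location "/app/static">
        <LimitExcept GET HEAD>
            Require all denied
        </LimitExcept>
    </Location>

    # Connection handling
    TimeOut            60
    KeepAliveTimeout   5
    ListenBacklog      1024
    MaxRequestWorkers  400
    AcceptFilter http  httpready   # httpready is FreeBSD-only; on Linux the "data" filter is used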
A number of Apache modules are available to minimize the threat of slow HTTP attacks. For example, mod_reqtimeout's RequestReadTimeout directive helps to control slow connections by setting a timeout and a minimum data rate for receiving requests.
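A hedged RequestReadTimeout example is shown below; the specific thresholds are assumptions, not values recommended by the Apache project:

    <IfModule reqtimeout_module>
        # Allow 20 seconds to start sending headers, extending up to 40 seconds
        # as long as at least 500 bytes/second arrive; apply the same minimum rate to the body.
        RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
    </IfModule>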
I also recommend switching Apache to the experimental Event MPM where available. It uses a dedicated thread to handle the listening sockets and all sockets in a keep-alive state, which means incomplete connections use fewer resources while being polled.
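On Apache 2.4, where MPMs are loadable modules, enabling the event MPM might look like the sketch below (on 2.2 the MPM is chosen at build time or via your distribution's apache2-mpm-event package); the thread counts are illustrative assumptions:

    # Load the event MPM instead of prefork/worker (only one MPM may be loaded)
    LoadModule mpm_event_module modules/mod_mpm_event.so

    <IfModule mpm_event_module>
        ServerLimit              16
        ThreadsPerChild          25
        MaxRequestWorkers        400
        AsyncRequestWorkerFactor 2
    </IfModule>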
Nginx
- Limit accepted verbs by checking the $request_method variable (a configuration sketch follows this list).
- Set reasonably small client_max_body_size, client_body_buffer_size, client_header_buffer_size, large_client_header_buffers, and increase where necessary.
- Set client_body_timeout, client_header_timeout to reasonably low values.
- Consider using HttpLimitReqModule and HttpLimitZoneModule to limit the number of requests or the number of simultaneous connections for a given session or, as a special case, from a single address.
- Configure worker_processes and worker_connections based on the number of CPU / cores, content and load. The formula is max_clients = worker_processes * worker_connections.
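An nginx.conf sketch showing how these settings can fit together. The zone names, rates, and sizes are illustrative assumptions; limit_req_zone and limit_conn_zone are the directives the rate- and connection-limiting modules provide in newer nginx versions:

    worker_processes  4;                    # roughly one per CPU core
    events {
        worker_connections  1024;           # max_clients = worker_processes * worker_connections
    }

    http {
        # Keep buffers and body sizes small; raise per location where needed
        client_max_body_size        1m;
        client_body_buffer_size     16k;
        client_header_buffer_size   1k;
        large_client_header_buffers 2 8k;

        # Drop clients that are slow to send the headers or the body
        client_header_timeout 10s;
        client_body_timeout   10s;

        # Limit request rate and concurrent connections per client address
        limit_req_zone  $binary_remote_addr zone=req_per_ip:10m rate=10r/s;
        limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;

        server {
            listen 80;
            limit_req  zone=req_per_ip burst=20;
            limit_conn conn_per_ip 10;

            location / {
                # Reject verbs this site does not support
                if ($request_method !~ ^(GET|HEAD|POST)$) {
                    return 405;
                }
                # ... regular site configuration ...
            }
        }
    }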
lighttpd
- Restrict request verbs using the $HTTP["request-method"] field in the configuration file for the core module (available since version 1.4.19); see the sketch after this list.
- Use server.max-request-size to limit the size of the entire request including headers.
- Set server.max-read-idle to a reasonable minimum so that the server closes slow connections. No absolute connection timeout option was found.
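A lighttpd.conf sketch of the three settings above; the accepted verbs and the limits are illustrative assumptions:

    # Reject verbs the site does not support (core module, since 1.4.19)
    $HTTP["request-method"] !~ "^(GET|HEAD|POST)$" {
        url.access-deny = ("")
    }

    # Cap the overall request size (value is in kilobytes)
    server.max-request-size = 2048

    # Close connections that stay idle while the request is being read
    server.max-read-idle = 30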
IIS 6
- Set the ConnectionTimeout, HeaderWaitTimeout, and MaxConnections properties in the Metabase to minimize the impact of slow HTTP attacks. Working with the Metabase can be complicated, so I recommend Microsoft's Working with the Metabase reference guide.
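These properties can be set from the command line with the adsutil.vbs script that ships with IIS 6; the values below are illustrative assumptions, not recommendations:

    rem Timeouts are in seconds; tune to your own connection statistics
    cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs set w3svc/ConnectionTimeout 60
    cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs set w3svc/HeaderWaitTimeout 30
    cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs set w3svc/MaxConnections 5000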
IIS 7
- Limit request attributes through the <requestLimits> element, specifically the maxAllowedContentLength, maxQueryString, and maxUrl attributes (a configuration sketch follows this list).
- Set <headerLimits> to configure the types and sizes of headers your web server will accept.
- Tune the connectionTimeout, headerWaitTimeout, and minBytesPerSecond attributes of the <limits> and <webLimits> elements to minimize the impact of slow HTTP attacks.
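A sketch of how these attributes map into configuration: <requestLimits> and <headerLimits> live under system.webServer in web.config, while the site-level <limits> and the server-wide <webLimits> belong in applicationHost.config. The values are illustrative assumptions:

    <!-- web.config -->
    <system.webServer>
      <security>
        <requestFiltering>
          <requestLimits maxAllowedContentLength="1048576"
                         maxQueryString="2048"
                         maxUrl="4096">
            <headerLimits>
              <add header="Content-Type" sizeLimit="100" />
            </headerLimits>
          </requestLimits>
        </requestFiltering>
      </security>
    </system.webServer>

    <!-- applicationHost.config -->
    <system.applicationHost>
      <sites>
        <site name="Default Web Site" id="1">
          <limits connectionTimeout="00:01:00" />
        </site>
      </sites>
      <webLimits headerWaitTimeout="00:00:30" minBytesPerSecond="240" />
    </system.applicationHost>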
What’s Next
The above are the simplest and most generic countermeasures to minimize the threat. Tuning the Web server configuration is effective to an extent, although there is always a tradeoff between limiting slow HTTP attacks and dropping legitimately slow requests. This means the above techniques alone can never fully prevent these attacks.
Beyond configuring the web server, it's possible to implement other layers of protection, such as event-driven software load balancers, hardware load balancers that perform delayed binding, and intrusion detection/prevention systems that drop connections with suspicious patterns.
However, today, it probably makes more sense to defend against specific tools rather than slow HTTP attacks in general. Tools have weaknesses that can be identified and exploited when tailoring your protection. For example, slowhttptest doesn't change the user-agent string once the test has begun, and it requests the same URL in every HTTP request. If a web server receives thousands of connections from the same IP with the same user-agent requesting the same resource within a short period of time, it obviously hints that something is not legitimate. These kinds of patterns can be gleaned from the log files, so monitoring log files to detect the attack remains the most effective countermeasure.