Tuesday, September 18, 2012

Protect a web server against DoS attacks using UFW

Ubuntu comes bundled with UFW, which is a front end to iptables: essentially a very lightweight router/firewall inside the Linux kernel that runs before any other application sees the traffic.

A typical UFW setup is to allow HTTP(S), rate-limit SSH and deny everything else. This is not a UFW or iptables tutorial; you will find plenty of online help to guide you through all your needs. However, I personally had a lot of difficulty finding good documentation on how to protect yourself against HTTP attacks.
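
That baseline boils down to a handful of ufw commands (a sketch; adjust to the services you actually run):

  # default policy: drop everything incoming, allow outgoing traffic
  sudo ufw default deny incoming
  sudo ufw default allow outgoing
  # let web traffic in, rate-limit SSH
  sudo ufw allow http
  sudo ufw allow https
  sudo ufw limit ssh
  sudo ufw enable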

A high number of HTTP requests is normal

The problem is that HTTP can get very noisy. A typical Web page can easily have up to a hundred assets, but usually, if you receive 100 requests in a second from the same client, it means you are under siege. If you really need to serve 100 assets on a single Web page, you need a CDN, not a better server.

Rate limiting

These rules were mostly worked out through trial and error and some searching around the Web; tweak them to fit your needs. A rate limit of x connections per y seconds means that if x connections have already been initiated in the last y seconds by a given source profile, further packets from it will be dropped. Dropping is actually a nice protection against flooding because the sender is never told that you dropped anything. He might think the packet was lost, that the port is closed or, even better, that the server is overloaded. Imagine how nice: your attacker thinks he succeeded, but in fact you are up and running and he is the one being blocked.

Connections per IP
A connection is an open channel. A typical browser will open around 5 connections per page load and they should last under 5 seconds each. Firefox, for example, has a default max of 15 connections per server and 256 total.

I decided to go for 20 connections / 10 seconds / IP. 
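
One way to express this in /etc/ufw/before.rules is with the iptables recent module; this is only a sketch (the list name http_conn is arbitrary and only port 80 is shown; add 443 if needed):

  # drop a source IP that opened 20 or more new HTTP connections in the last 10 seconds
  -A ufw-before-input -p tcp --dport 80 -m state --state NEW -m recent --name http_conn --set
  -A ufw-before-input -p tcp --dport 80 -m state --state NEW -m recent --name http_conn --rcheck --seconds 10 --hitcount 20 -j DROP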

Connections per Class C
Same as above, but this time we apply the rule to the whole Class C of the IP, because it is quite common for someone to have a bunch of IPs available. This means, for example, all IPs looking like 11.12.13.*

I decided to go for 50 simultaneous connections.
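
The connlimit module can express this roughly as follows; mask 24 makes it count connections for the whole /24 (Class C) instead of a single IP:

  # drop new HTTP connections once a whole /24 already holds 50 open ones
  -A ufw-before-input -p tcp --syn --dport 80 -m connlimit --connlimit-above 50 --connlimit-mask 24 -j DROP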

Packets per IP
This is the challenging part. Due to a limitation that is not easy to circumvent, it is only possible to keep track of the last 20 packets per IP. At the same time, tracking 100 packets for each IP might add considerable overhead. While a big website may eventually need more than this, like I said, you should be looking at a proper CDN.

I decided to go for 20 packets / second / IP
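
Again a sketch with the recent module; note the hitcount of 20, which is exactly the tracking limit mentioned above:

  # drop a source IP that sent 20 or more packets to HTTP within the last second
  -A ufw-before-input -p tcp --dport 80 -m recent --name http_pkt --set
  -A ufw-before-input -p tcp --dport 80 -m recent --name http_pkt --rcheck --seconds 1 --hitcount 20 -j DROP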

Configuring UFW

The following instructions are targeted at UFW, but since it is really just a wrapper around iptables, it should be easy to adapt them to a generic system.

Edit /etc/ufw/before.rules, putting each rule where it belongs.
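
The stock file already defines the chains it needs; as a rough placement sketch, the rules from the sections above go into the ufw-before-input chain, anywhere before the final COMMIT:

  *filter
  :ufw-before-input - [0:0]
  :ufw-before-output - [0:0]
  :ufw-before-forward - [0:0]
  # ... stock UFW rules ...
  # HTTP rate-limiting rules from the sections above go here
  # ... remaining stock rules ...
  COMMIT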

Make sure UFW is enabled, then reload everything using ufw reload.

Testing the results

Make sure everything runs smoothly by refreshing your browser like a madman. You should start getting timeouts after ~15 refreshes, and everything should come back in less than 30 seconds. This is good.

But if you want to get serious about your tests, some tools may help you bring your server to its knees. Doing this on a production server is highly discouraged, but it is still better to do it yourself than to wait for someone else to try.

Try these with UFW enabled and disabled to see the difference, but be careful: some machines may downright crash on you or fill all available disk space with logs.
  • http://ha.ckers.org/slowloris/
    Written in Perl, features a lot of common attacks, including HTTPS
  • http://www.sectorix.com/2012/05/17/hulk-web-server-dos-tool/
    Written in Python, basic multi-threaded attack, very easy to use.
  • http://www.joedog.org/siege-home/
    Compiled, available in the Ubuntu repositories, very good for benchmarking (see the example after this list)
  • http://blitz.io/
    Online service where you can test freely with up to 250 concurrent users
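For example, siege can hammer the server with a fixed number of concurrent users for a set duration (the URL is a placeholder):

  # 100 concurrent users for 30 seconds
  siege -c 100 -t 30S http://www.example.com/
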
To confirm that everything works perfectly, SSH into your machine and start a tail -f /var/log/ufw.log to see the packets being dropped and htop to watch the CPU have fun. 

From another machine, start one of the stress tools. You should see the CPU skyrocket for a few seconds and then go back to normal. Logs will start to appear and your stress tool will start having problems. While all this is going on, you should still be able to browse your website normally from your own computer.

Great success.

Friday, September 14, 2012

Generate missing Nginx mime types using /usr/share/mime/globs

Nginx comes with a rather small set of mime types compared to a default Linux system.

Linux uses a glob pattern to match a filename while Nginx matches only the extension, but we can still use every glob of the form *.ext.

So here is a small PHP script that converts, sorts, filters and formats everything into a nice output.
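
A minimal sketch of such a script, assuming the usual "mime/type:*.ext" line format of /usr/share/mime/globs and Nginx's types { } block syntax:

  <?php
  // Sketch: convert /usr/share/mime/globs into an Nginx "types { ... }" block.
  // Each line looks like "text/html:*.html"; only plain "*.ext" globs are kept.

  $types = array();
  $lines = file('/usr/share/mime/globs', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

  foreach ($lines as $line) {
      if ($line[0] === '#') {
          continue; // skip comments
      }
      list($mime, $glob) = explode(':', $line, 2);
      if (!preg_match('/^\*\.([a-z0-9]+)$/i', $glob, $match)) {
          continue; // ignore globs that are not simple "*.ext"
      }
      $types[$mime][] = strtolower($match[1]);
  }

  ksort($types);

  echo "types {\n";
  foreach ($types as $mime => $extensions) {
      echo '    ' . str_pad($mime, 50) . implode(' ', array_unique($extensions)) . ";\n";
  }
  echo "}\n";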


Wednesday, September 12, 2012

Configure ElasticSearch on a single shared host and reduce memory usage

ElasticSearch is a powerful, yet easy to use, search engine based on Lucene. Compared to others, it features a JSON API and wonderful scaling capabilities via a distributed scheme, and its defaults are aimed at such scalability.

However, you may want to use ElasticSearch on a single host, mixed with your Web server, database and everything else. The problem is that ES is quite a CPU and memory hog by default. Here’s what I found through trial and error and some heavy searching.

The idea is to give ES some power, but leave some for the rest of the services. At the same time, if you tell ES it can grab half of your memory and the OS ends up needing some, ES will get killed by the out-of-memory killer, which isn’t nice.

My host was configured this way:
  • ElasticSearch 0.19.9, official .deb package
  • Ubuntu 12.04
  • 1.5GB of RAM
  • Dual-core 2.6 GHz
  • LEMP stack
After installing the official package:
  1. Allow user elasticsearch to lock memory
    1. Edit /etc/security/limits.conf and add:
      elasticsearch hard memlock 100000
  2. Edit the init script: /etc/init.d/elasticsearch
    1. Change ES_HEAP_SIZE to 10-20% of your machine's memory; I used 128m
    2. Change MAX_OPEN_FILES to something sensible.
      Default is 65536, I used 15000
      Update: I asked the question on the ElasticSearch group and it may be a bad idea that gives no advantage anyway.
    3. Change MAX_LOCKED_MEMORY to 100000  (~100MB)
      Be sure to set it to the same value as in step 1.1
    4. Change JAVA_OPTS to "-server"
      I don’t exactly know why, but if you check the logs, you will see Java telling you to do so.
  3. Edit the config file: /etc/elasticsearch/elasticsearch.yml (the combined result is sketched after this list)
    1. Disable replication capabilities
      1. index.number_of_shards: 1
      2. index.number_of_replicas: 0
    2. Reduce memory usage
      1. index.term_index_interval: 256
      2. index.term_index_divisor: 5
    3. Ensure ES is bound to localhost
      network.host: 127.0.0.1
    4. Enable blocking TCP because you are always on localhost
      network.tcp.block: true
  4. Flush and restart the server
    1. curl localhost:9200/_flush
    2. /etc/init.d/elasticsearch restart
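
Put together, the elasticsearch.yml changes from step 3 amount to roughly this; bootstrap.mlockall is not part of the steps above and is only needed if you want the heap actually locked in RAM, which is what the memlock limit is for:

  # /etc/elasticsearch/elasticsearch.yml -- single-host profile
  index.number_of_shards: 1
  index.number_of_replicas: 0
  index.term_index_interval: 256
  index.term_index_divisor: 5
  network.host: 127.0.0.1
  network.tcp.block: true
  # optional, pairs with the memlock limit and MAX_LOCKED_MEMORY above
  bootstrap.mlockall: true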

Monday, September 10, 2012

Make your dev machine public using a VPN and a proxy

There are many reasons you might need a machine that is publicly available:
  • Developing a Facebook App
  • Developing an OAuth authentication
  • Making a quick showcase for your client
The problem is, you probably want it to be your own machine, because you have all your development tools and IDEs there, it is fast and you know it by heart. You could mount a remote folder using sshfs, but that only does part of the job and it can get very slow with some editors.

The solution I came up with is to tunnel through a VPN to a public machine and let it proxy the requests back.

You will need

  • A public Linux box with root access
  • A domain name where you can set up a wildcard

Instructions, tested on Ubuntu 12.04

  1. Install Apache or Nginx and pptpd (you can follow this tutorial for the VPN, or this one if you are using ufw)
  2. In your /etc/ppp/chap-secrets file, be sure to specify a fixed IP address for yourself (4th column); see the example after this list
    • It must fall within the IP range specified in /etc/pptpd.conf
  3. Create a DNS wildcard pointing to your server 
    • Ex: CNAME *.dev.lavoie.sl => server.lavoie.sl
  4. Create an Apache or Nginx proxy that matches the server wildcard and forwards requests to the VPN IP chosen before
  5. Create the same wildcard virtual host on your machine so it answers correctly.
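For step 2, a chap-secrets line with a fixed address looks roughly like this (user, password and IPs are placeholders; the server column must match the name directive used by pptpd):

  # client      server    secret        IP addresses
  myuser        pptpd     mypassword    192.168.2.100
  # 192.168.2.100 must fall within the remoteip range of /etc/pptpd.conf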

Security considerations

If you have unprotected data, like phpMyAdmin or other websites you are developing, it could be at risk; consider protecting it with a password or an IP restriction.

Configuration example for Apache and Nginx
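
A sketch of the Nginx side, using the wildcard from the DNS step and the placeholder VPN IP from the chap-secrets example above; the Apache equivalent is a similar virtual host using mod_proxy and ProxyPass:

  server {
      listen 80;
      server_name *.dev.lavoie.sl;

      location / {
          proxy_pass http://192.168.2.100;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }
  }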