
Wednesday, April 9, 2014

Secure your SSL/TLS server

Heartbleed

Recently the Heartbleed bug came to light. It is a bug in the OpenSSL library that causes information to leak from the server. It acts as an undetectable backdoor that can allow an attacker to obtain your server's private key. Let's just say it is VERY important to fix it. Most distros have been very quick to propagate the OpenSSL update, so running your favorite update manager should fix it in no time.

To verify that you are protected, run this command and check that the "built on" date is on or after April 7th, 2014:
$ openssl version -a

OpenSSL 1.0.1e 11 Feb 2013
built on: Mon Apr  7 20:33:19 UTC 2014
platform: debian-amd64

Disable weak ciphers

The way SSL/TLS works is that the client and the server must agree on a cipher to use for encryption. If you were to attack a server, you would obviously target the least secure cipher. To protect against this, simply disable ciphers known to be weak or those in which flaws have been discovered.

I am using this configuration for Apache:

SSLCipherSuite ALL:!ADH:!AECDH:RC4+RSA:+HIGH:+MEDIUM:!LOW:!SSLv2:!EXPORT

For Nginx, see their configuration reference. Since 1.0.5, it has shipped with a sensible default. Otherwise, you can use the same list as above.
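On an older Nginx, a roughly equivalent configuration (reusing the same cipher string as the Apache directive above, and telling Nginx to prefer the server's order) would be:

ssl_ciphers ALL:!ADH:!AECDH:RC4+RSA:+HIGH:+MEDIUM:!LOW:!SSLv2:!EXPORT;
ssl_prefer_server_ciphers on;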

Do not use a private key that is too weak or too strong

The private key must never be discovered; otherwise, anyone could decrypt the content and could perpetrate a MITM attack. If the private key is too weak, it could eventually be guessed given enough data. However, SSL/TLS handshakes are very CPU intensive for both the server and the client, so using a key that is too long will considerably slow down your website. In most cases, 2048 bits is perfect.
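For reference, one common way to generate a 2048-bit key and its certificate signing request with OpenSSL (the file names are only examples):

$ openssl genrsa -out example.com.key 2048
$ openssl req -new -key example.com.key -out example.com.csr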

Test your own server

SSL Labs provides a free test suite that will check your ciphers and test for known attacks, including BEAST and Heartbleed. This is a must: https://www.ssllabs.com/ssltest/

Further reading

I am not a security expert; I simply happen to have done hosting for quite a while. I suggest you do not take my word blindly and go read this very pertinent paper from SSL Labs.


Tuesday, September 18, 2012

Protect Webserver against DOS attacks using UFW

Ubuntu comes bundled with UFW, which is an interface to iptables. This is basically a very lightweight router/firewall inside the Linux kernel that runs way before any other application.

The typical UFW setup is to allow HTTP(S), limit SSH and deny everything else. This is not a UFW or iptables tutorial; you will find plenty of online help to guide you through all your needs. However, I personally had a lot of difficulty finding good documentation on how to protect yourself against HTTP attacks.
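For reference, that typical setup boils down to a handful of commands:

$ sudo ufw default deny incoming
$ sudo ufw allow http
$ sudo ufw allow https
$ sudo ufw limit ssh
$ sudo ufw enable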

A lot of HTTP requests is normal

The problem is that HTTP can get very noisy. A typical Web page can easily have up to a hundred assets, but usually, if you receive 100 requests in a second, it means you are under siege. If you really need to have 100 assets on a single Web page, you need a CDN, not a better server.

Rate limiting

These rules have been mostly guessed through trial and error and some searching around the Web; tweak them to fit your needs. A rate limit of x connections per y seconds means that if x connections have already been initiated in the last y seconds by this profile, the packet will be dropped. Dropping is actually a nice protection against flooding because the sender won't know that you dropped it. He might think the packet was lost, that the port is closed or, even better, that the server is overloaded. Imagine how nice: your attacker thinks he succeeded, but in fact you are up and running while he is blocked.

Connections per IP
A connection is an open channel. A typical browser will open around 5 connections per page load and they should last under 5 seconds each. Firefox, for example, has a default max of 15 connections per server and 256 total.

I decided to go for 20 connections / 10 seconds / IP. 

Connections per Class C
Same as above, but this time we apply the rule to the whole Class C of the IP because it is quite common for someone to have a bunch of IPs available. This means, for example, all IPs looking like 11.12.13.*

I decided to go for 50 simultaneous connections.

Packets per IP
This is the challenging part. Due to a limitation that is not easy to circumvent, it is only possible to keep track of the last 20 packets. At the same time, it might add considerable overhead to track 100 packets for each IP. While big websites may eventually need more than this, like I said, you should be looking at a proper CDN.

I decided to go for 20 packets / second / IP

Configuring UFW

The following instructions are targeted at UFW, but UFW is really just a wrapper, so it should be easy to adapt them for a generic iptables system.

Edit /etc/ufw/before.rules, putting each part where it belongs:
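The original rules were embedded as a gist; here is a sketch implementing the limits discussed above, under the assumption that only port 80 matters (repeat for 443 if needed). The chain name is the standard UFW one, and the lines go in the *filter section, before COMMIT:

# 20 connections / 10 seconds / IP (SYN packets only, i.e. new connections)
-A ufw-before-input -p tcp --dport 80 --syn -m recent --name http_conn --set
-A ufw-before-input -p tcp --dport 80 --syn -m recent --name http_conn --update --seconds 10 --hitcount 20 -j DROP

# 50 simultaneous connections / Class C (--connlimit-mask 24 groups by /24)
-A ufw-before-input -p tcp --dport 80 --syn -m connlimit --connlimit-above 50 --connlimit-mask 24 -j DROP

# 20 packets / second / IP (the recent module tracks at most 20 packets per entry by default)
-A ufw-before-input -p tcp --dport 80 -m recent --name http_pkt --set
-A ufw-before-input -p tcp --dport 80 -m recent --name http_pkt --update --seconds 1 --hitcount 20 -j DROP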

Make sure UFW is running and reload everything using ufw reload.

Testing the results

Make sure everything runs smoothly by refreshing your browser like a madman. You should start getting timeouts after ~15 refreshes and access should come back in less than 30 seconds. This is good.

But if you want to get serious about your tests, some tools can help you bring your server to its knees. It is highly discouraged to use these on a production server, but it is still better to do it yourself than to wait for someone else to try.

Try those with UFW enabled and disabled to see the difference, but be careful: some machines may downright crash on you or fill all available space with logs.
  • http://ha.ckers.org/slowloris/
    Written in Perl, features a lot of common attacks, including HTTPS
  • http://www.sectorix.com/2012/05/17/hulk-web-server-dos-tool/
    Written in Python, basic multi-threaded attack, very easy to use.
  • http://www.joedog.org/siege-home/
    Compiled, available in Ubuntu repositories, very good to benchmark
  • http://blitz.io/
    Online service where you can test freely with up to 250 concurrent users
To confirm that everything works perfectly, SSH into your machine and start tail -f /var/log/ufw.log to see the packets being dropped and htop to watch the CPU have fun.

SSH into another machine and start a script. You should see the CPU sky-rocket for a few seconds and then go back to normal. Logs will start to appear and your stress tool will run into problems. While all this is going on, you should be able to browse your website normally from your own computer.

Great success.

Friday, September 14, 2012

Generate missing Nginx mime types using /usr/share/mime/globs

Nginx comes with a rather small set of mime types compared to a default Linux system.

Linux uses glob patterns to match a filename while Nginx matches only extensions, but we can still use every glob of the form *.ext.

So here is a small PHP script converting/sorting/filtering/formatting everything into a nice output.
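The script itself was embedded as a gist; a minimal sketch of the idea might look like this. Lines in /usr/share/mime/globs look like application/x-7z-compressed:*.7z, and we want Nginx types entries like application/x-7z-compressed 7z;

<?php
// Sketch: convert /usr/share/mime/globs into an Nginx "types" block.
$types = array();
foreach (file('/usr/share/mime/globs', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
    if ($line[0] === '#') continue;                        // skip comments
    list($mime, $glob) = explode(':', $line, 2);
    if (preg_match('/^\*\.([a-zA-Z0-9]+)$/', $glob, $m)) { // keep only *.ext globs
        $types[$mime][] = strtolower($m[1]);
    }
}
ksort($types);
echo "types {\n";
foreach ($types as $mime => $exts) {
    echo "    $mime " . implode(' ', array_unique($exts)) . ";\n";
}
echo "}\n";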


Monday, September 10, 2012

Make your dev machine public using a VPN and a proxy

There are many reasons you might need a machine that is publicly available:
  • Developing a Facebook App
  • Developing an OAuth authentication
  • Make a quick showcase for your client
Problem is, you probably want it to be your machine because you have all your development tools and IDEs, it is fast and you know it by heart. You could mount a folder using sshfs, but that is only part of the job and it may get very slow with some file editors.

The solution I came up with is to tunnel through a VPN to a public machine and let it proxy the requests back.

You will need

  • A public Linux box with root access
  • A domain name where you can setup a wildcard

Instructions, tested on Ubuntu 12.04

  1. Install Apache or Nginx and pptpd (you can follow this tutorial for the VPN or this one if you are using UFW)
  2. In your /etc/ppp/chap-secrets file, be sure to specify a fixed address for yourself (4th column); see the example after this list
    • It must fit the IP range specified in /etc/pptpd.conf
  3. Create a DNS wildcard pointing to your server 
    • Ex: CNAME *.dev.lavoie.sl => server.lavoie.sl
  4. Create an Apache or Nginx proxy to match the server wildcard and redirect it to the VPN IP decided before
  5. Create the same wildcard on your machine so it answers correctly.
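As an example for step 2, a fixed-address line in /etc/ppp/chap-secrets might look like this (user, password and address are placeholders; the address must fit the IP range specified in /etc/pptpd.conf):

# client    server    secret          IP addresses
myuser      pptpd     mysecretpass    192.168.0.100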

Security considerations

If you have unprotected data like phpMyAdmin or other websites you are developing, they could be at risk, consider protecting them via a password or an IP restriction.

Configuration example for Apache and Nginx
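The original examples were embedded as gists. Here is a sketch of what they might look like, assuming the wildcard from step 3 and 192.168.0.100 as the fixed VPN address from step 2 (both are placeholders, adjust to your own setup).

Apache (requires mod_proxy and mod_proxy_http):

<VirtualHost *:80>
    ServerName dev.lavoie.sl
    ServerAlias *.dev.lavoie.sl
    ProxyPreserveHost On
    ProxyPass / http://192.168.0.100/
    ProxyPassReverse / http://192.168.0.100/
</VirtualHost>

Nginx:

server {
    listen 80;
    server_name *.dev.lavoie.sl;

    location / {
        proxy_pass http://192.168.0.100;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}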