
Wednesday, April 9, 2014

Secure your SSL/TLS server

Heartbleed

Recently the Heartbleed bug came to light. It is a bug in the OpenSSL library that leaks server memory to remote clients; exploiting it is essentially undetectable and it can expose your server's private key. Let's just say it is VERY important to fix it. Most distros have been very quick to propagate the OpenSSL update, so running your favorite update manager should fix it in no time.

To verify that you are protected, run this command and check that the built on date is April 7th, 2014 or later:
$ openssl version -a

OpenSSL 1.0.1e 11 Feb 2013
built on: Mon Apr  7 20:33:19 UTC 2014
platform: debian-amd64

Disable weak ciphers

The way SSL/TLS works is that the client and the server must agree on a cipher to use for encryption. If you were to attack a server, you would obviously use the least secure cipher. To protect against this, simply disable ciphers that are known to be weak or in which flaws have been discovered.

I am using this configuration for Apache:

SSLCipherSuite ALL:!ADH:!AECDH:RC4+RSA:+HIGH:+MEDIUM:!LOW:!SSLv2:!EXPORT
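If it helps to see that directive in context, here is a rough sketch of the surrounding virtual host; the SSLProtocol line and the certificate paths are illustrative assumptions, not taken from my actual configuration:

<VirtualHost *:443>
    SSLEngine on
    # Refuse the broken SSLv2 protocol and prefer the server's cipher order
    SSLProtocol all -SSLv2
    SSLHonorCipherOrder on
    SSLCipherSuite ALL:!ADH:!AECDH:RC4+RSA:+HIGH:+MEDIUM:!LOW:!SSLv2:!EXPORT
    SSLCertificateFile    /etc/ssl/certs/example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/example.com.key
</VirtualHost>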

For Nginx, see their configuration reference. Since version 1.0.5, it ships with a sensible default; otherwise, you can use the same suite as above.

Do not use a private key that is too weak or too strong

The private key must never be discovered; anyone who obtains it could decrypt the traffic or perpetrate a MITM attack. If the private key is too weak, it could eventually be guessed given enough data. However, SSL/TLS handshakes are very CPU intensive for both the server and the client, and using a key that is too long will considerably slow down your website. In most cases, a 2048-bit key is perfect.
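For reference, generating a 2048-bit key and a certificate signing request with OpenSSL looks something like this (file names are placeholders):

$ openssl genrsa -out example.com.key 2048
$ openssl req -new -key example.com.key -out example.com.csr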

Test your own server

SSL Labs provides a free test suite that checks your ciphers and tests for known attacks, including BEAST and Heartbleed. This is a must: https://www.ssllabs.com/ssltest/

Further reading

I am not a security expert, I simply happen to have been doing hosting for quite some time. I suggest you do not take my word blindly and go check this very pertinent paper from SSL Labs.


Tuesday, March 12, 2013

Guide to replicated LAMP stack hosting with failover

Motivations for building your own hosting

For my company, I started to think about offering a hosting service. Cheap solutions exist, like 1and1.com and iweb.com, where you can rent a VPS or get some shared hosting, but they imply some setup for each client, managing credentials, analyzing everyone's needs, etc. What about upgrades? What about failovers? What about custom services like Lucene or Rails that could be running?

Defining the needs

Before trying to find solutions, let's try to find the correct questions. What are we trying to accomplish exactly? These are generic needs; I will provide my answers, but at some point I had to take some shortcuts to be able to complete the project and meet some profitability requirements. If you have different priorities or you are working on a different scale, your answers will most probably differ at some point.

Scalable

Upgrades must be possible without any downtime. It must be easy, so we can react in a matter of minutes to an emergency load or a crash. Also, we will spend quite some time configuring everything, so we would like to keep that work even if we triple our load. Ideally, we want to be able to scale both horizontally (adding more machines) and vertically (upgrading the machines).

Highly Available

This means failovers, redundancy and stability. The key is load balancing, but we want to remove single points of failure as much as possible. Where any remain, we want to have complete trust in them, and they should do as little as possible and be as isolated as possible.

Secure

The purpose of this guide is not to build a banking system, but we still want to be secure. We want strong password policies, firewalls and, most importantly, backups. The whole system should be re-installable in an hour or two if something major happens, and clients' files and databases should be restorable from hourly, daily or weekly snapshots, or something along those lines.

Compatible and flexible

We will have almost no control over the applications, but we still want to standardize some key elements. For example, having 2 database systems could be acceptable, but running 2 different web servers is a bit overzealous. Some clients may also have particular needs like a search engine or cronjobs; we need to be ready.

Performant

Between scalability and availability, we often achieve performance, but only if each of the channels is independent. In general, websites do many more reads than writes, both in the database and on the filesystem. However, because we want to be compatible and application agnostic, we won't be able to resort to techniques like declaring a folder read-only or having a slave database. We may not be able to constrain applications, but we can reward those that are well configured: we can provide some opt-in features like reverse proxies, shared caches, temporary folders, etc.

Profitable

Last but not least, we like profits, so the whole system must have a predictable cost that can be forwarded to the appropriate client. Scalability plays a big role here because we can scale just as much as we need, when we need it.

Overview

As I am starting to write this guide, the system is already operational and in production. I have already stumbled across many problems, but I am sure others are still to come. Here is an overview of all the parts I want to address; links will become available as they are written. The considered options are also listed to give you an idea of where I am going with all this.
  1. Hosting platform
    • Cloud virtual machines
      • Linode
      • Amazon
      • Rackspace
    • Physical virtual machines
      • iWeb (I’m in Montreal, Canada)
    • Platform as a service
      • Windows Azure
  2. Linux
    • CentOS
    • Ubuntu Server
    • Debian
  3. Filesystem
    • Synchronisation
      • csync2
      • rsync
    • Distributed
      • GlusterFS
      • Lustre
      • DRBD
    • Shared
      • NFS
  4. Load balancer
    • Amazon / Linode Load balancer
    • HAProxy
    • Nginx
  5. Reverse proxy with caching
    • Nginx
    • Varnish
  6. Web server
    • Apache
      • 2.2 / 2.4
      • Prefork / Worker / Event
    • Nginx
  7. MySQL
    • MySQL Cluster
    • Master/Master replication
    • Master/Slave replication + mysqlnd_ms
    • Percona XtraDB Cluster + Galera
  8. PHP
    • 5.2 / 5.3 / 5.4
    • Apache module
    • PHP-FPM
  9. Configuration system
    • Puppet
    • Chef
    • Custom scripts
  10. Backups
    • Full machine backups
    • Rsync to remote machine
    • Tarballs
  11. Monitoring
    • Zabbix
    • Nagios
    • Ganglia


As you can see, there is a lot to talk about.

Wednesday, February 20, 2013

PHP Document Root, Path and URL detection

PHP has no built-in base URL variable. In all the mess that is the $_SERVER variable, there is nothing that will tell you the base URL of your website, even though it is very useful when embedding assets and building URLs.

The thing is, you can start from the DOCUMENT_ROOT and work your way from there, but if you are using Apache's VirtualDocumentRoot, it is not reliable. Hence, you need to guess it.

The trick resides in SCRIPT_NAME and SCRIPT_FILENAME, which respectively describe the path of the executed PHP file relative to the domain and its full filesystem path. I used exclamation marks as regex delimiters as I doubt you have any in your folder names. If you do, then… shame on you.

While I was at it, I added some port and https detection and the absolute URL.
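The original snippet is not reproduced here, but a minimal sketch of the technique looks roughly like this (the variable names follow the example below; the proxy and symlink subtleties listed at the end are ignored):

<?php
// SCRIPT_FILENAME: full filesystem path of the running script.
// SCRIPT_NAME: the same path, relative to the domain.
$base_dir = dirname($_SERVER['SCRIPT_FILENAME']);           // e.g. /var/www/mywebsite
$base_url = rtrim(dirname($_SERVER['SCRIPT_NAME']), '/');   // e.g. /mywebsite
// Strip the URL part from the end of the full path to recover the document root,
// using "!" as the regex delimiter since it should never appear in folder names.
$doc_root = preg_replace('!' . preg_quote($base_url, '!') . '$!', '', $base_dir);

$protocol  = (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off') ? 'https' : 'http';
$domain    = $_SERVER['SERVER_NAME'];
$port      = $_SERVER['SERVER_PORT'];
$show_port = ($protocol === 'http' && $port == 80) || ($protocol === 'https' && $port == 443) ? '' : ":$port";
$full_url  = "$protocol://$domain$show_port$base_url";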

Example
Variable   Content
base_dir   /var/www/mywebsite
doc_root   /var/www
base_url   /mywebsite
protocol   http
port       8080
domain     example.com
full_url   http://example.com:8080/mywebsite


Not included

  1. Port detection of a server running behind a proxy. You may want to use the port of the proxy, not the Web server.
  2. Same goes for https.
  3. Non-resolved symlinks and relative paths. You may want to throw a couple of realpath() in there.

Tuesday, September 18, 2012

Protect Webserver against DOS attacks using UFW

Ubuntu comes bundled with UFW, which is an interface to iptables, basically a very lightweight router/firewall inside the Linux kernel that runs way before any application gets to see the traffic.

The typical setup of UFW is to allow HTTP(S), limit SSH and shut everything else. This is not a UFW or iptables tutorial; you will find plenty of online help to guide you through all your needs. However, I personally had a lot of difficulty finding good documentation on how to protect yourself against HTTP attacks.
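For reference, that baseline setup would look something like this (adapt the allowed services to your needs):

$ sudo ufw default deny incoming
$ sudo ufw allow http
$ sudo ufw allow https
$ sudo ufw limit ssh
$ sudo ufw enable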

A lot of HTTP requests is normal

The problem is that HTTP can get very noisy. A typical Web page can easily have up to a hundred assets, but usually, if you receive 100 requests in a second, it means you are under siege. If you really need to have 100 assets on a single Web page, you need a CDN, not a better server.

Rate limiting

These rules have mostly been guessed through trial and error and some searching around the Web; tweak them to fit your needs. A rate limit of x connections per y seconds means that if x connections have already been initiated in the last y seconds by this profile, the new one will be dropped. Dropping is actually a nice protection against flooding because the sender won't know that you dropped it. He might think the packet was lost, that the port is closed or, even better, that the server is overloaded. Imagine how nice: your attacker thinks he succeeded, but in fact you are up and running and he is blocked.

Connections per IP
A connection is an open channel. A typical browser will open around 5 connections per page load and they should each last under 5 seconds. Firefox, for example, has a default maximum of 15 connections per server and 256 in total.

I decided to go for 20 connections / 10 seconds / IP. 

Connections per Class C
Same as above, but this time we apply the rule to the whole class C of the IP because it is quite common for someone to have a bunch of available IPs. This means, for example, all IPs looking like 11.12.13.*.

I decided to go for 50 simultaneous connections.

Packets per IP
This is the challenging part. Due to a limitation that is not easy to circumvent, it is only possible to keep track of the last 20 packets. At the same time, it might add considerable overhead to track 100 packets for each IP. While a big website may eventually need more than this, as I said, you should then look into a proper CDN.

I decided to go for 20 packets / second / IP

Configuring UFW

The following instructions are targeted at UFW, but it is really just a wrapper, so it should be easy to adapt them to a generic iptables setup.

Edit /etc/ufw/before.rules and add the rate-limiting rules, putting each part where it belongs in the file; a sketch is shown below.
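The exact rules are not reproduced here, but a rough reconstruction based on the limits described above could look like this. Place the lines in the *filter section, before the COMMIT line, and adjust the ports and thresholds to your needs:

# 20 new connections / 10 seconds / IP on port 80
-A ufw-before-input -p tcp --dport 80 -m state --state NEW -m recent --name HTTP_CONN --set
-A ufw-before-input -p tcp --dport 80 -m state --state NEW -m recent --name HTTP_CONN --update --seconds 10 --hitcount 20 -j DROP

# 50 simultaneous connections per class C (/24) on port 80
-A ufw-before-input -p tcp --dport 80 -m connlimit --connlimit-above 50 --connlimit-mask 24 -j DROP

# 20 packets / second / IP on port 80 (the recent module tracks at most 20 packets by default)
-A ufw-before-input -p tcp --dport 80 -m recent --name HTTP_PKT --set
-A ufw-before-input -p tcp --dport 80 -m recent --name HTTP_PKT --update --seconds 1 --hitcount 20 -j DROP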

Make sure ufw runs and reload everything using ufw reload.

Testing the results

Make sure everything runs smoothly by refreshing your browser like a madman. You should start getting timeouts after ~15 refreshes and the site should come back in less than 30 seconds. This is good.

But if you want to get serious about your tests, some tools can help you bring your server to its knees. It is highly discouraged to use them on a production server, but it is still better to do it yourself than to wait for someone else to try.

Try those with UFW enabled and disabled to see the difference, but be careful: some machines may downright crash on you or fill all the available space with logs.
  • http://ha.ckers.org/slowloris/
    Written in Perl, features a lot of common attacks, including HTTPS
  • http://www.sectorix.com/2012/05/17/hulk-web-server-dos-tool/
    Written in Python, basic multi-threaded attack, very easy to use.
  • http://www.joedog.org/siege-home/
    Compiled, available in the Ubuntu repositories, very good for benchmarking
  • http://blitz.io/
    Online service where you can test freely with up to 250 concurrent users

To confirm that everything works perfectly, SSH into your machine, start tail -f /var/log/ufw.log to see the packets being dropped, and run htop to watch the CPU have fun.

SSH into another machine and launch one of those tools; a sample siege run is sketched below. You should see the CPU skyrocket for a few seconds and then go back to normal. Logs will start to appear and your stress tool will start having problems. While all this is going on, you should still be able to browse your website normally from your own computer.
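For example, with siege (available in the Ubuntu repositories), a run along these lines; the URL and the numbers are just a starting point:

$ sudo apt-get install siege
$ siege -c 100 -t 60S http://www.example.com/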

Great success.

Monday, September 10, 2012

Make your dev machine public using a VPN and a proxy

There are many reasons to need a machine that is publicly available:
  • Developing a Facebook App
  • Developing an OAuth authentication
  • Making a quick showcase for your client
The problem is, you probably want it to be your own machine because it has all your development tools and IDEs, it is fast and you know it by heart. You could mount a folder using sshfs, but that is only part of the job and it may get very slow with some file editors.

The solution I came up with is to tunnel through a VPN to a public machine and let it proxy the requests back.

You will need

  • A public Linux box with root access
  • A domain name where you can set up a wildcard

Instructions, tested on Ubuntu 12.04

  1. Install Apache or Nginx and pptpd (you can follow this tutorial for the VPN or this one if you are using ufw)
  2. In your /etc/ppp/chap-secrets file, be sure to specify a fixed address for yourself (4th column)
    • It must fit the IP range specified in /etc/pptpd.conf
  3. Create a DNS wildcard pointing to your server 
    • Ex: CNAME *.dev.lavoie.sl => server.lavoie.sl
  4. Create an Apache or Nginx proxy that matches the server wildcard and forwards requests to the VPN IP chosen above
  5. Create the same wildcard on your own machine so it answers correctly.

Security considerations

If you have unprotected data, like phpMyAdmin or other websites you are developing, it could be at risk; consider protecting it with a password or an IP restriction.

Configuration example for Apache and Nginx
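The original configuration is not reproduced here, but as a rough sketch of the Apache side, assuming the VPN gave your workstation the fixed address 192.168.0.100 (the 4th column of /etc/ppp/chap-secrets) and that mod_proxy and mod_proxy_http are enabled:

<VirtualHost *:80>
    ServerName  dev.lavoie.sl
    ServerAlias *.dev.lavoie.sl
    # Pass the original Host header through so the matching wildcard
    # vhost on the dev machine can answer for the right site
    ProxyPreserveHost On
    ProxyPass        / http://192.168.0.100/
    ProxyPassReverse / http://192.168.0.100/
</VirtualHost>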

Thursday, July 12, 2012

Apache VirtualHost Template with variable replacement

This is in no way a robust solution for deploying a real web hosting infrastructure, but sometimes you just need basic templates. I use this simple template on my dev server.
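The template itself is not reproduced here, but the idea is roughly this: keep a virtual host file with placeholders and fill them in with sed. The paths and the __DOMAIN__ placeholder below are illustrative assumptions:

# /etc/apache2/templates/vhost.template
<VirtualHost *:80>
    ServerName   __DOMAIN__
    DocumentRoot /var/www/__DOMAIN__
    ErrorLog  ${APACHE_LOG_DIR}/__DOMAIN__-error.log
    CustomLog ${APACHE_LOG_DIR}/__DOMAIN__-access.log combined
</VirtualHost>

A new site is then only a substitution away:

$ sed 's/__DOMAIN__/mysite.dev/g' /etc/apache2/templates/vhost.template | sudo tee /etc/apache2/sites-available/mysite.dev
$ sudo a2ensite mysite.dev && sudo service apache2 reload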

Monday, July 9, 2012

Document Root fix in .htaccess when using VirtualDocumentRoot

A CMS often comes with a .htaccess file that has a RewriteRule like this:

# RewriteBase /
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ app.php [QSA,L]

The RewriteBase has to be enabled if you are using a VirtualDocumentRoot, but when you are sharing code, developers may all have different setups.

By checking the content of DOCUMENT_ROOT, we can guess which setup we are using and prepend a / when necessary.

Note, however, that this method is heavy on string comparisons, which are slow, so it should not go to production.
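The actual snippet is not reproduced here. As one possible reconstruction of the idea, the classic mod_rewrite trick of comparing two variables through a backreference can tell whether DOCUMENT_ROOT plus the request URI really maps to the requested file; if it does not, DOCUMENT_ROOT is unreliable (VirtualDocumentRoot) and the rewrite needs the leading slash. Treat this as a sketch, not the exact rule from the post:

RewriteEngine On

# DOCUMENT_ROOT + REQUEST_URI does not correspond to the real file location:
# we are most likely behind a VirtualDocumentRoot, so rewrite to an absolute /app.php
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI}::%{REQUEST_FILENAME} !^(.+)::\1
RewriteRule ^(.*)$ /app.php [QSA,L]

# Otherwise the plain relative rewrite works as usual
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ app.php [QSA,L]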