
Friday, December 6, 2013

GlusterFS performance on different frameworks

A couple of months ago, I did a comparison of different distributed filesystems. It came out that GlusterFS was the easiest and most feature-complete, but it was slow. Since I would really like to use it, I decided to give it another chance. Instead of running raw benchmarks with sysbench, I decided to stress test a basic installation of the three PHP frameworks/CMS I use the most, using siege.

My test environment:

  • MacBook Pro (Late 2013, Retina, i7 2.66 GHz)
  • PCIe-based Flash Storage
  • 2-4 virtual machines using VMware Fusion 4, each with 2 GB of RAM
  • Ubuntu 13.10 server edition with PHP 5.5 and OPcache enabled
  • GlusterFS running on all VMs with a volume in replica mode
  • The volume was mounted with nodiratime,noatime using the GlusterFS native driver (NFS was slower)
The test:
  1. siege -c 20 -r 5 http://localhost/foo  # Cache warming
  2. siege -c 20 -r 100 http://localhost/foo  # Actual test
I then compared the local filesystem (inside the VM) vs the Gluster volume using these setups:
  • 2 nodes, 4 cores per node
  • 2 nodes, 2 cores per node
  • 4 nodes, 2 cores per node
The compared value is the total time to serve 20 x 100 requests in parallel.
All tests were run 2-3 times while my computer was otherwise idle, and the results were very consistent.

                             Symfony    Wordpress   Drupal     Average
2 nodes, 4 cores   Local      2.91 s     9.92 s     5.39 s     6.07 s
                   Gluster   10.84 s    23.94 s     7.81 s    14.20 s
2 nodes, 2 cores   Local      5.41 s    19.14 s     9.67 s    11.41 s
                   Gluster   25.05 s    31.91 s    15.17 s    24.04 s
4 nodes, 2 cores   Local      5.57 s    19.60 s     9.79 s    11.65 s
                   Gluster   30.56 s    35.92 s    18.36 s    28.28 s

Local vs Gluster (overhead)
2 nodes, 4 cores             273 %      141 %       45 %      153 %
2 nodes, 2 cores             363 %       67 %       57 %      162 %
4 nodes, 2 cores             449 %       83 %       88 %      206 %
Average                      361 %       97 %       63 %      174 %

2 nodes vs 4 nodes (slowdown)
Local                          3 %        2 %        1 %        2 %
Gluster                       22 %       13 %       21 %       19 %

4 cores vs 2 cores (speedup)
Local                         86 %       93 %       79 %       86 %
Gluster                      131 %       33 %       94 %       86 %


Observations:
  1. Wordpress and Drupal have an acceptable performance loss under Gluster, but Symfony is catastrophic.
  2. The local tests are slightly slower when using 4 nodes vs 2 nodes. This is expected: my computer had 4 VMs running.
  3. The Gluster tests are about 20 % slower on a 4-node setup because there is more communication between the nodes to keep them all in sync. A 20 % overhead for double the nodes isn't that bad.
  4. The local tests are about 85 % quicker using 4 cores vs 2 cores. A bit under 100 % is normal; there is always some overhead to parallel processing.
  5. For the Gluster tests, Symfony and Drupal scale very well with the number of cores, but Wordpress stalls; I am not sure why.

I am still not sure why Symfony is so much slower on GlusterFS, but as it stands, I can't use it in production because I/O is already the weak point of my infrastructure. I am in the process of looking for a different hosting solution; maybe it will fare better there.

Organizing Javascript and CSS assets for optimal loading

On Reddit, I recently stumbled upon DynoSRC, which lets you serve only a differential Javascript file to your users. The concept is pretty amazing, but I find it a bit overkill. I believe this would only be useful for very high traffic websites with a lot of Javascript, with small parts changing often. For example: Facebook, Asana, WolframAlpha, Google Maps, etc.

For common websites, however, you will probably be deploying a bunch of files at the same time. If you modified 10 % to 40 % of your files, the overhead of computing all those diffs (server side and client side) and maintaining this system is probably not worth it. If you are already compressing and grouping your assets and you have a CDN with proper HTTP caching, you are already in pretty good shape. That can be hard though, especially the "proper" part.

Separate assets in 3 groups

When I have complete control over my assets, I usually like to split them into 3 groups:
  1. Very common libraries (Bootstrap, jQuery, Modernizr). I serve them using a public CDN like cdnjs, because it is very likely that the user will already have them in cache.
  2. External assets specific to my project (custom Bootstrap build, jQuery plugins, lightbox plugin, etc.). I bundle them all in a big libs.js/css.
  3. Global custom assets written for this project, bundled in a single global.js/css. It does not need to be all the custom assets, just the stuff you will need on 80 % of your requests. Specific code for specific pages can be included individually.

Cache busting

I mostly use Amazon CloudFront as a CDN, which honours query parameters, so I set the expiry date to one year and append a query parameter containing the last Git commit (git rev-parse --short HEAD). That way, a fresh file is used whenever there is any change in the project's code. See Automatic cache busting using Git commit in Symfony2 for an example.
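The mechanics are simple enough to sketch. Here is a minimal, illustrative helper; the asset_url()/asset_version() names and the CloudFront domain are assumptions, and a real setup would compute the hash once at deploy time rather than per request:

<?php
// Cache-busting asset helper — a rough sketch, not the Symfony2 solution linked above.
function asset_version()
{
    static $version = null;
    if ($version === null) {
        // In practice, compute this once at deploy time (e.g. write it to a config file)
        // instead of shelling out to git on every request.
        $version = trim(shell_exec('git rev-parse --short HEAD'));
    }
    return $version;
}

function asset_url($path)
{
    // Hypothetical CDN domain; the commit hash acts as the cache buster.
    return 'https://dxxxxx.cloudfront.net' . $path . '?v=' . asset_version();
}

// Outputs e.g. <script src="https://dxxxxx.cloudfront.net/js/global.js?v=a1b2c3d"></script>
echo '<script src="' . asset_url('/js/global.js') . '"></script>';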

About combining

People often talk about how combining saves HTTP requests, but consider that the files also compress a lot better when they are grouped together. You may be adding some overhead on the first page load, but the rest of the website will be blazing fast.

However, be careful not to overload the browser. Keep in mind that the Javascript will be executed on each page load. For example, do not try to initialize every modal window just in case one might pop up. See Optimizing page loads by reducing the impact of the Javascript initialization for more details.


Optimizing page loads by reducing the impact of Javascript initialization

So you combined all your Javascript files in the hope it will speed up page loads? The download will certainly be faster, but the browser still needs to execute all this Javascript! There are simple tricks to help reduce the impact on page loads.

Reduce DOM queries and manipulations

With libraries like jQuery, it is really easy to bind all sorts of events on complicated selectors. The thing is, the browser has to query the DOM like a madman to find all the elements. Things get even worse when you add elements or query information like height or position, which triggers reflows and repaints. Try to be minimal.

Make initialization conditional

If you have a big block of code that needs to be executed only in specific cases:
  • Add a class to the body element and verify it.
    Ex: jQuery(document.body).hasClass('user-logged-in');
  • Check existence of important sections.
    Ex: document.getElementById('comments');

Delay initialization of non-essential parts

  • Delay heavy libraries like Google Maps or Facebook Like.
    See this post about loading social libraries.
  • Use requestAnimationFrame for animations.
  • Use setTimeout(function(){}, 1); to push the work onto the event queue, deferring its execution.
  • Use Web Workers to run a function in the background without blocking the rest of the script. This also leverages multiple threads.

Use delegated event listeners

jQuery offers delegated event listeners where the listener is on an ancestor element. Your favorite library probably has it as well.

A good example is reply buttons in a comments thread: 
jQuery('#comments').on('click', '.reply', function(){});

The click bubbles up to the #comments element, where the library checks whether the originally clicked element matches the selector.

This is extremely beneficial because there is much less DOM querying at load time and fewer event listeners to attach.

Initialize only on first use

Let's say you have a complicated modal dialog that needs initialization, and this process takes about 50 ms. That is not very noticeable on its own, but if you have other things to do afterwards, you may well get over the 100 ms rule, so you wouldn't want to do it every time a modal is popped. For the same reason, you wouldn't want to initialize 2-3 of those things at page load. This is why you need setupOnce.

Inspired by the once function from Underscore, this utility groups two callbacks: one that is run only the first time it is called, and one that is called every time.



Mobile

Mobile is even more critical because 200-700 ms can be spent just establishing the initial HTTP connection. For an in-depth look, see this presentation by @igrigorik from Google.

Tuesday, November 26, 2013

Responsive menu in pure CSS

Having menus that gracefully adapt to the size of your screen can be troublesome. Inspired by the collapsible menu of Bootstrap, here is a pure CSS implementation using the checkbox trick to toggle visibility.

The example attached features a smaller menu when the screen is narrower than 900px and a collapsed menu when narrower than 650px. I encourage you to tweak the breakpoints to whatever suits you best.

On collapse toggle, the height and visibility are animated for a smooth transition. height: auto; cannot be animated directly, so we animate the max-height property instead. The downside is that you need to specify the largest height your menu will ever have; if you set it too high, you will experience a slight graphical glitch.

The toggle handle is also inspired by Bootstrap, but with minimal markup.

* The :checked selector is not supported prior to IE9; you may use a polyfill like http://selectivizr.com/ if you need to.

Happy coding!

To test it, resize your browser.

Live demo on CodePen.io


Monday, November 4, 2013

Tips for an efficient interview

I have only been in the Web industry for 5-6 years, but I am already a bit sceptical when conducting an interview. Some people are so good at talking about stuff they do not know at all. Hiring someone takes a lot of time and energy, so when you end up with someone not up to the task, it can be really frustrating and demotivating.

I am absolutely not an interview specialist, but here are some ideas I would have liked to have had earlier.

Basic questions

Some competences are easy to verify in a verbal interview. I once had a front-end developer candidate who claimed he was very competent in about everything I asked him. I was skeptical, so I asked him very simply: "How would you write it in CSS if you wanted to make all the links in a page red?" His answer was that he was a bit rusty on this one and would need to google it. The interview was over and no more time was wasted.

In the aftermath of that interview, we asked ourselves how we could have wasted less time, and thus the idea of basic questions over the phone quickly emerged; a lot of interviews were avoided.

Speed jobbing

For a second year in a row, one of my partners and I went to an event where around 50 candidates meet about 10 companies. They are matched by the organizers based on the needs of the companies and the resumes of the candidates. We met almost 15 very varied candidates for 5 minutes each, much like speed dating. It is a very efficient way to meet a lot of people. If they are interesting, you can call them for a real interview later, and if not, well, you only lost 5 minutes.

Real-world test

We recently conducted interviews where we tried a different approach. We gave each candidate an hour to install and configure Symfony with the idea of creating a micro blogging platform. They start with a fully functional LAMP environment, but have to do all the rest. In their hour, they have to get the hang of Composer, understand Symfony's documentation, download all the dependencies, start a bundle, etc. We give them full Internet access because we want to test their ability to work in a real-world environment. The idea is not to test their speed or whether they already know the technology, but rather to see how they work individually.

The results were amazing. The highlight was when we asked them to explain what they did, where they found the documentation and what they were planning to do next. Some brought material, some had already practiced because we had told them it was about Symfony. We did not think that was unfair; it only shows their motivation and their ability to gather everything they need to be up to the task.

What do they like

Something I love to ask is what they liked the most while discovering various technologies. I do not care whether you like Ruby on Rails, Wordpress or Drupal, but if you have a strong feeling towards a technology, have tested its virtues and alternatives, and want to promote it for what it is good at, you clearly show me that you like your job and are not the average doer who simply executes orders without any opinion or passion.

Saturday, August 31, 2013

Service management utility for Mac OSX (launchctl helper)

Having dealt with services mostly on Linux, I grew accustomed to typing service php restart. On a Mac, this is more like:

launchctl unload ~/Library/LaunchAgents/homebrew-php.josegonzalez.php55.plist
launchctl load ~/Library/LaunchAgents/homebrew-php.josegonzalez.php55.plist


This is ugly, hard to remember, and launchctl has no way of listing all available services. Plus, those plists can reside in any of these directories:
  • /System/Library/LaunchDaemons
  • /System/Library/LaunchAgents
  • /Library/LaunchDaemons
  • /Library/LaunchAgents
  • ~/Library/LaunchAgents
Those in your home directory generally don't need sudo, while the others do.

This is why I came up with a utility to manage services. It searches all the directories above for your service, prompts for sudo if it is in a system directory and provides goodies like restart, reload and link.

Usage:

service selfupdate
    update from the Gist
service php
    searches for a plist containing 'php'
service php load|unload|reload
    insert or remove a plist from launchctl
service php start|stop|restart
    manage a daemon, but leave it in launchctl (does not work with Agents)
service php link
    If you use Homebrew, which you should, it will link the plist of this Formula into ~/Library/LaunchAgents, reloading if needed. Very useful when upgrading.


Manage all optional services at once

If you have several services running, especially if you are a developer, I also recommend using a script to start/stop all of them at once when you are not working. They may not be using many resources, but having them running keeps the laptop busy and can drain your battery very quickly.

Thursday, August 22, 2013

Hosting a Composer repository for private Gitlab projects

A small script that loops through all branches and tags of all projects in a Gitlab installation and, if a composer.json is present, adds it to an index.

This is very similar to the behaviour of Packagist.org.

See example.

Usage

Simply include a composer.json in your project; all branches and tags respecting the version format will be detected.

The only requirement is that the package name must be equal to the path of the project, i.e. my-group/my-project. This is not a design requirement; it is mostly to prevent the common error of copying a composer.json from another project without changing its name.

Caveats

While your projects will be protected through SSH, they will be publicly listed. If you require protection of the package list, I suggest this reading.


Check out the code: https://github.com/wemakecustom/gitlab-composer !

Wordpress MU-Plugins subdirectory loader

Wordpress's Must Use Plugins are an easy way to drop in quick pieces of code that will always be loaded.

While having my way with Composer and Composer Installers, I stumbled upon a Pull Request about adding support for mu-plugins. I commented saying that I don't see how it works for them, because Wordpress does not load mu-plugins from subdirectories; they must be at the root of /wp-content/mu-plugins/.

After searching a bit, I found that someone had already thought of loading mu-plugins recursively, but that solution was not enough for me:
  1. Calling scandir on each request seems wrong performance-wise; it is better to cache the results.
  2. It requires each folder to have a file named like "folder.php", which a lot of existing plugins don't have.
  3. It does not list the included plugins in the admin, so it is completely invisible.
The solution below covers all of this: any plugin may simply be dropped in mu-plugins and it will be required (a rough sketch of the approach follows).
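As a sketch only (this is not the actual wemakecustom/wp-mu-loader code; the transient name is made up and the admin-listing part is omitted), the loader file could look like this:

<?php
// mu-loader.php — drop or symlink this file directly in wp-content/mu-plugins/.
// It requires every plugin found in subdirectories of mu-plugins,
// caching the file list to avoid a scandir on every request.

$cache_key    = 'mu_loader_plugin_files'; // hypothetical transient name
$plugin_files = get_transient($cache_key);

if ($plugin_files === false) {
    $plugin_files = array();
    foreach (glob(WPMU_PLUGIN_DIR . '/*', GLOB_ONLYDIR) as $dir) {
        // Accept any PHP file with a plugin header, not just "folder.php".
        foreach (glob($dir . '/*.php') as $file) {
            $headers = get_file_data($file, array('Name' => 'Plugin Name'));
            if (!empty($headers['Name'])) {
                $plugin_files[] = $file;
            }
        }
    }
    // Cache for an hour; newly dropped plugins appear after the cache expires.
    set_transient($cache_key, $plugin_files, 3600);
}

foreach ($plugin_files as $file) {
    require_once $file;
}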

Apart from the normal installation (manually or via Composer), the only other step is to copy or symlink a file into your mu-plugins directory.

https://github.com/wemakecustom/wp-mu-loader


Monday, August 5, 2013

Using Edge Side Includes with Varnish to cache partial pages

Caching full pages with Varnish can be hard: most applications use sessions, which set a cookie, which makes Varnish skip caching altogether.

When sessions are needed and full page cache is not available, you can resort to ESI (Edge Side Includes).

ESI has a markup language of its own, but the subset that Varnish supports is fairly simple: it is basically a placeholder that gets replaced by the content of the referenced URL. Each placeholder generates a subrequest that mimics the original request, but for another URL. Using this system, you can cache parts of your website that do not change frequently or that are hard to generate. Since this is internal to Varnish, it honours the cache system, including Cache-Control, If-Modified-Since, ETag, etc.

However, since the subrequest is built on top of the original, it will contain the original Cookie header, so we must ignore it.

The solution includes:
  • The original script must add an "X-Esi" header to activate ESI parsing (for performance; see the sketch after this list).
  • Cookies are removed from ESI requests unless "esi-cookies=1" is present in the URL.
  • An "X-Esi-Level" header gets added when the current request is an ESI subrequest. Otherwise, it is removed.
Various scenarios where this technique can be used:
  • A navigation menu with the current URL as a parameter.
  • A footer.
  • A user profile box (popup box) with the user id as a parameter.
  • Widgets in a sidebar.
Since the X-Esi-Level header is enforced to be present only for ESI subrequests, you can trust it and safely skip any security checks, as they have already been done for the original request.

Here is the VCL used for Varnish and a simple example to illustrate the ?esi-cookies=1 trick.

Wednesday, May 8, 2013

Converting MyISAM to InnoDB, keeping FULLTEXT

I was working with a Percona XtraDB Cluster and I noticed that even though it supports MyISAM replication, it is statement-based, so the PRIMARY KEY is not propagated correctly. There is a bug report for this, but it has not been fixed yet.

Long story short, we cannot use auto increment with MyISAM. Anyway, the replication works way better with InnoDB.

Finding all MyISAM tables using AUTO_INCREMENT

To do so, we will use the marvellous information_schema database, which contains all sorts of useful information. The AUTO_INCREMENT column contains the value of the next PRIMARY KEY to be inserted. If the table has no AUTO_INCREMENT, it is NULL.
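The lookup itself is a single query on information_schema; here is a quick sketch from PHP (the credentials are placeholders):

<?php
// Placeholder credentials — adjust to your environment.
$pdo = new PDO('mysql:host=localhost', 'root', 'secret');

// MyISAM tables that actually use AUTO_INCREMENT.
$sql = "SELECT TABLE_SCHEMA, TABLE_NAME, AUTO_INCREMENT
        FROM information_schema.TABLES
        WHERE ENGINE = 'MyISAM'
          AND AUTO_INCREMENT IS NOT NULL
          AND TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema')";

foreach ($pdo->query($sql) as $row) {
    echo "{$row['TABLE_SCHEMA']}.{$row['TABLE_NAME']} (next id: {$row['AUTO_INCREMENT']})\n";
}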


Converting MyISAM to InnoDB

You can find plenty of literature on MyISAM vs InnoDB, but our main concern was FULLTEXT indices. InnoDB supports FULLTEXT only as of MySQL 5.6 and Percona 5.6 is still in alpha.

Percona does not support MyISAM + AUTO_INCREMENT, but it does support MyISAM, so we can create a separate table to hold all the indexed data and join on this for a FULLTEXT search.

We have a pretty big codebase; adding a join for searches is an acceptable task, but removing columns and changing all the SELECTs, INSERTs, UPDATEs, DELETEs, etc. was not.

The solution was to add triggers that mirror the columns that need to be indexed from the original table. This way, you can deal with your original table as you always have and only need to rewrite the SELECTs that use FULLTEXT.
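As a rough illustration of the idea, with hypothetical articles / articles_search tables and using NEW.id directly (see the note about Percona and temporary variables below):

<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'root', 'secret');

// MyISAM mirror table that keeps the FULLTEXT index.
$pdo->exec("
    CREATE TABLE articles_search (
        article_id INT NOT NULL PRIMARY KEY,
        title      VARCHAR(255) NOT NULL,
        body       TEXT NOT NULL,
        FULLTEXT KEY search_idx (title, body)
    ) ENGINE=MyISAM
");

// Keep the mirror in sync with the original (InnoDB) table.
$pdo->exec("
    CREATE TRIGGER articles_search_ai AFTER INSERT ON articles FOR EACH ROW
        INSERT INTO articles_search (article_id, title, body)
        VALUES (NEW.id, NEW.title, NEW.body)
");

$pdo->exec("
    CREATE TRIGGER articles_search_au AFTER UPDATE ON articles FOR EACH ROW
        UPDATE articles_search
        SET title = NEW.title, body = NEW.body
        WHERE article_id = NEW.id
");

$pdo->exec("
    CREATE TRIGGER articles_search_ad AFTER DELETE ON articles FOR EACH ROW
        DELETE FROM articles_search WHERE article_id = OLD.id
");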

For those interested, I also made a little PHP script that generates the above SQL.

The new search query then simply joins on the mirror table for the FULLTEXT part. This duplicates the data, but it works. Don't forget to drop the old FULLTEXT indices and convert the original table to InnoDB.

Final thoughts

The triggers were written using temporary variables because it did not seem to work inline with Percona, but I haven't researched this thoroughly. If you are not using Percona, you can safely drop these and use NEW.id directly.

The idea of having a separate MyISAM table for the data has other benefits:
  • The original table is a lot smaller
  • Smaller and fewer indices mean faster INSERTs/UPDATEs/DELETEs.
  • The original table now supports foreign keys.

Saturday, April 13, 2013

LAMP Cluster — Distributed filesystem

This post is part of: Guide to replicated LAMP stack hosting with failover


The core concept of choosing a filesystem for a Web hosting cluster is to eliminate single points of failure, but sometimes it is just not that easy. A truly distributed system still needs to be performant, at least on reads. The problem lies in the fact that the bottleneck is very often I/O, so if your filesystem is not performant, you will end up spending a fortune on scaling without gaining real performance.

Making priorities

You can't have everything, so start by making a list of priorities. Different systems will have different needs, but I figured I could afford a possibility of failure as long as the system is restorable, since I would be keeping periodic backups.
  1. Low maintenance
    • It must be possible to read/write from any folder without adding a manifest for each site.
    • The system must be completely autonomous and require no maintenance from a sysadmin (e.g. conflict management).
  2. Simple / Cheap
    • Must be installed on each Web node or on a maximum of 2 small/medium extra nodes
    • Must run on Ubuntu, without recompiling the kernel. Kernel modules are acceptable.
  3. Performant
    • Reads less than 50% slower than standard ext3 reads.
    • Writes less than 80% slower than standard ext3 writes.
    • Must be good at handling a lot of small files. Currently, my server hosts 470k files for a total of 6.8 GB. That is an average of 15 KB per file!
  4. Consistency
    • Changes must propagate to all servers within 5 seconds.
    • Uploaded files stored in database but not yet synced may generate some errors for a short period if viewed by other users on other servers.
    • Temporary files are only relevant on the local machine so a delay is not a big deal.
    • HTTP sessions will be sticky at the load balancer level, so user-specific information will be handled properly.
  5. Must handle ACLs
    • For permissions to be set perfectly, we will be using ACLs.
    • ACLs may not be readable within the Web node, but they must still be enforced.
  6. Durability
    • Must handle filesystem failures — be repairable very quickly.
    • File losses are acceptable in the event of a filesystem failure.
    • Filesystem must continue to function even if a Web node goes offline.
    • No single point of failure. If there is one, it must be isolated on its own machine.

A. Synchronisation

Synchronisation means that there is no shared filesystem: all the files are stored on the local filesystem, and synchronisation with the other nodes happens periodically or by watching I/O events.

Cluster synchronisation involving replication between all the nodes is usually very hard. To improve performance and reduce the risk of conflicts, it is often a good idea to elect a replication leader and a backup. If the leader is unavailable, the backup will be used instead. This way, all the nodes will sync with only one.
  • Pros
    • Very fast read/write
    • Very simple to setup
  • Cons
    • May have trouble synchronizing ACLs
    • May generate a lot of I/O
    • Will most likely generate conflicts

Rsync

The typical tool for fast file syncing is rsync. It is highly reliable, and a bit of Bash scripting will get you started. However, as the number of files grows, it may become slow: for around a million files, it may easily take over 5 seconds. With our needs, that means it would have to run continuously, which would generate a lot of I/O and thus impact overall performance.

Csync2

Csync2 is a promising tool that works like rsync, but it keeps file hints in a SQLite database. When a file changes, it flags in the database that the file needs checking. This way, the full sync only needs to check marked files.

Csync2 supports multi-master replication and slaves (receive-only). However, I found while testing that it is not really suited to a lot of small files changing frequently: it tends to generate a lot of conflicts that need to be attended to manually.

It may not be the best solution for Web hosting, but for managing deployment of libraries or similar tasks, it would be awesome.

B. Simple sharing (NFS)

Even simpler than file syncing is plain old sharing. A node is responsible for hosting the files and serves them directly. Windows uses Samba/CIFS, Mac uses AFP and Linux uses NFS.

NFS is very old, like 1989 old. Even the latest version, NFSv4, came around in 2000. This means it is very stable and very good at what it does.
  • Pros
    • Supports ACLs (NFSv4)
    • Very cheap and simple setup
    • Up to a certain scale, fast read/write
  • Cons
    • Single point of failure
    • Hard to setup proper failover
    • Not scalable

C. Distributed / Replicated

A distributed filesystem may operate at a device, block or inode level. You can think of this a bit like a database cluster. It usually involves journals and is the most advanced solution.

  • Pros
    • Very robust
    • Scalable
  • Cons
    • Writes are often painfully slow
    • Reads can also be slow
    • Often complex to setup

GlusterFS

Gluster runs over FUSE and NFS. Each node can have its own brick and the daemon handles the replication transparently, without the need for a management node.

Overall, it is very good software: the write performance is decent and it handles failures quite well. There has been a lot of recent work to improve caching, async writes, write-ahead, etc. However, in my experience, the read performance is disastrous. I really tried tuning it a lot, but I still feel like I haven't found its true potential.

Ultimately, I had to set it aside for the moment for lack of time to tune it further. It has a large community and is widely used, so I will probably end up giving it another chance.

Lustre

Lustre seems like the Holy Grail of distributed filesystems. From Wikipedia: “At the present time, six of the top 10 and more than 60 of the top 100 supercomputers in the world have Lustre file systems in them.”

It appears to have everything I could dream of: speed, scalability, locks, ACLs, you name it. 

However, I was never able to try it. It requires dedicated machines with various roles: management, data and file servers (API). This means I would need 4-5 additional machines. On top of that, it needs custom kernel modules.

Definitely on my wish-list, but inaccessible for the moment.

DRBD

DRBD is not a cluster solution; it does live mirroring. Usually, it is used to keep a full mirror of a server that can be swapped with the master at any moment, should it fail. This is often used to patch solutions where replication is not built in, such as NFS or MySQL. There is a way to set up a 3-node configuration, but it is far from perfect.

Conclusion

In the end, I found that synced solutions were not reliable enough and distributed solutions were too complex, so I chose NFS. My plan is to add DRBD soon to provide a durability layer, but a more serious solution will have to wait. If my cluster scales to the point where NFS is no longer up to the task, it will mean I have enough clients, enough money and enough reasons to consider a proper solution.


Comparison
            Maintenance   Complexity   Performance   Scalability   Durability   Consistency   ACLs
Rsync       Low           Low          Very high     Low           High         Low           Yes
Csync2      High          Medium       Very high     Low           High         Low           Yes
NFS         None          None         Medium        None          None         Very high     Enforced
GlusterFS   None          Medium       Low           High          High         Very high     Yes
Lustre      None          Very high    High          Very high     Very high    Very high     Yes
DRBD        None          Medium       n/a           2 or 3        Very high    n/a           Yes



LAMP Cluster — Choosing an Operating System

This post is part of: Guide to replicated LAMP stack hosting with failover


Besides choosing Linux vs Mac or Windows, the OS should not impact your users; it is mostly a sysadmin choice. Your users, the ones who will be connecting via SSH, will expect binaries to be available without modifying their PATH and common tools like Git or SVN to be already installed, but it does not really matter how they were installed.

The key to making sure nobody has a hard time getting everything to work is to do things in the most standard and common way possible.

Choose between the most used distributions

This is really important. Choosing a distribution for your laptop or your development server is not the same thing as choosing a production environment. Forget Gentoo and friends; being connected directly to the bare bones of your system is nice when you are learning or building a world-class new system, but for your own setup, you want something tested by the whole community, something that works. Even if it involves a bit of magic.

A good example of such magic is what Ubuntu does with networking. I admit that since 10.x, I don't really understand all the cooperation between /etc/resolv.conf, /etc/network/interfaces, dhclient, /etc/init.d/networking and such. At some point they all seem to redefine each other, and in a particular release a script will start to throw some warnings, but it works. The network has never failed me on Ubuntu, which is quite relevant when you need to access a remote machine.

Edge vs stable

I use the infamous expression "Debian stable" when I want to refer to something configured in such a conservative way that it will work, but at the cost of using the technology of 2005. I know Debian stable is not that bad, but I tend to have some faith in the testing procedures of the maintainers.

My rule of thumb is: when a stable version is available, use it. If I want the features of a less stable version, I make sure it has lived for at least a month and do a quick search on its stability and trustworthiness.

An example of this is that I don't restrict myself to Ubuntu Long Term Support editions. I am happy to use them and will usually keep using them a bit longer than the other releases, but LTS only comes every 2 years and sometimes that is not enough. Moreover, I tend to upgrade or reinstall every year or two, so I don't hit the end-of-support limit.

Here are some of the top distributions, ordered by edginess:

Versions available
Distribution              Apache    PHP      MySQL    Varnish
Ubuntu 12.10              2.2.22    5.4.6    5.5.29   3.0.2
Debian wheezy             2.2.22    5.4.4    5.5.28   3.0.2
OpenSuse 12.3             2.2.22    5.3.17   5.5.30   3.0.3
Ubuntu 12.04 LTS          2.2.22    5.3.10   5.5.29   3.0.2
Debian squeeze (stable)   2.2.16    5.3.3    5.1.66   2.1.3
CentOS 6                  2.2.15    5.3.3    5.1.66   manual

PHP 5.3.3, our main concern, was released in July 2010 and important fixes have occurred since, so this is out of the question.

Varnish 2 is very different from Varnish 3, so this needs to be looked at.

It is usually possible to install newer versions, but this implies relying on third-party packaging, multiple installed binaries or even compiling yourself.

Forget benchmarks

Linux is a very low-footprint system; a common mistake is trying to over-optimize it. What will eat your CPU is PHP and MySQL, what will eat your memory is MySQL, and the number of connections you can handle mostly depends on your web server. If your machine is spending too much time in kernel space, it is probably because you need a bigger one. Also, don't forget to benchmark your disks. See my post on choosing hardware.

Conclusion

Your choice must be focused on stability, ease-of-use and community size. 

Personally, I prefer Debian-based solutions. Aptitude works very well and I just happen to have been more in contact with it.

All in all, I went for Ubuntu 12.10: PHP 5.4 offers a really good performance improvement over 5.3, and enough time has passed since that release of Ubuntu.

Thursday, March 21, 2013

LAMP Cluster — Comparison of different hosting platforms

This post is part of: Guide to replicated LAMP stack hosting with failover

The first step in building a hosting service is to choose your provider. Each system has its strengths and weaknesses, and you will have to choose according to your needs and your proficiency with the provided tools. This step is crucial and, if possible, you should spend some time testing and benchmarking each of them to see if it matches your expectations.

DISCLAIMER: Below, I sometimes mention prices; they are meant as an indication rather than a real comparison. Comparing different services can sometimes get very tricky as they don't include the same things, and their performance is difficult to compare effectively.

Platform as a service

This is basically what you want to build. Another company will provide you some services in the cloud and you will configure your applications to use them. Here, all the scalability and redundancy is done for you and you will only pay for what you use.

The thing is, though, you have absolutely no control over the operating system and you are bound to the services the platform provides.

This guide is all about building it, so these are more here as a comparison basis than actual alternatives.

Google App Engine


Typically, GAE runs Java and Python. It provides PHP through an emulation layer and some SQL, but there is no MySQL. Some quick research turned up people discussing it for Wordpress and Drupal. Short answer: no MySQL, can't be done.

However, if you want to host some Django on it, please do!

Windows Azure

This does almost everything you need, they have a really wide array of services. If you need something else, you can deploy a custom VM and do what you want. This is perfect for prototyping.

However, fiddling with their price calculator, I found that it can quickly become expensive; they charge for almost everything. Yes, they have PHP/MySQL support, but the idea is to achieve some economy of scale. I did an estimate:
  • 4 small Web and Worker instances
  • 1 small Linux Virtual Machine (for testing, management, etc.)
  • 10 x 100 MB Databases
  • 100 GB bandwidth
  • 100 GB storage
This is rather conservative; you will probably need way more than 4 small instances. The only thing is, it comes to almost 500 $ per month. Hardly a bargain.

Heroku

Heroku is mostly known for its Ruby support, but it has a very wide array of add-ons; it even has MySQL support through ClearDB. Their prices tend to be lower than Azure's, and it is closer to the open source initiatives I tend to use a lot. I have never actually used Heroku, so I can't really estimate what I would need, but the sheer number of possible configurations is incredible.

A big plus is also the deployment procedure, which is backed by Git. It involves describing a project with a configuration file and simply pushing to Heroku. There are quite a lot of examples out there on how to deploy Wordpress, Drupal, etc. If I wasn't trying to build an infrastructure myself, I would definitely consider it strongly.

Virtual machines on physical hardware

If you plan for the long term, investing in hardware might be a good idea. Hardware is way less expensive, and providers like iWeb tend to give a lot of bandwidth (if not unlimited). Upgrades are usually way less expensive too, but they involve downtime and risk.

You still need some virtualization

For ease of management, you will almost certainly want a virtualization solution: this way you can create, back up, scale and migrate virtual machines in only a couple of steps. Among the most popular solutions, OpenStack is free and open source, while VMware has a very good reputation with vCenter. The downside is that it is yet another thing to configure.

You still need multiple servers

If you go with physical machines, you will need RAID and the like, but that still means downtime when something breaks. To reduce the risks, you will need a second or third machine to provide some backup. Really, managing physical hardware is an art all by itself; if you wish to provide good quality Web hosting, you will need someone specialized in that matter.

Why not let a third party do all this for you?

Virtual private servers (VPS)

We want full control over the system, scalability, virtualization management, etc. So it all comes down to a nice in-between solution: virtual machines provided by a third party. Here, all the hard stuff is already done, you will most certainly have multiple locations in the world to choose from, and you can usually trust the hardware not to fail completely. Sometimes there is downtime, but losing data is extremely rare.

Below are multiple choices I know of, but I suggest you try FindTheBest for a more thorough comparison.

For the setup I will be talking about in another post, we need:
  • 1 small/medium management node
  • 3 medium/large working nodes
  • 2 small/medium utility nodes
  • 1 small dev node

Amazon Web Services (AWS)

I have been a client of Amazon EC2 for more than two years. They offer a wide array of services:
  • Virtual machines (EC2)
  • DNS services (Route 53)
  • Load balancing + Auto scaling
  • Dedicated databases with automatic fallback (RDS)
  • High performance I/O (EBS)
  • Low performance, high durability I/O (S3)
  • CDN (CloudFront)
  • Highly configurable firewall
  • And much much more
A lot of websites are running on Amazon services. The problem is, it is expensive and it is built for computing, not Web hosting. This means it is perfect for a rather short burst of computing like crunching data, but it becomes expensive if it is online all the time. Also, with pay-per-use, everything you do will end up costing you something, which can build up rather quickly. Over the last two years, the performance has been going downhill, but recently they have been lowering their prices, so it might be becoming a better alternative.

Here is an example using their calculator. (333 $/month)

Google Compute Engine (GCE)

Google also has a service that is very similar to Amazon EC2, but with fewer options, and it seems to have a better performance/price ratio. I am not familiar with their services, but I thought it was worth mentioning.

Windows Azure

As mentioned above, Azure has virtual machines as well, but you can connect them with the rest of the platform so it can be a nice hybrid solution.

However, it is still pretty pricey. For our setup (3 medium, 2 small and 2 x-small), we are already at 478 $/month, and no bandwidth or storage is included yet.
Linode

Linode has existed since 2003, but I only discovered it last year. They are growing rapidly, new features keep coming in, and the amount of included extras goes up every month. What I like about Linode is that I feel like I am in total control of my machines.
  • Multiple availability zones (like most other providers)
  • Very easy permission management (you can give read-only access to your clients)
  • Very powerful admin panel.
  • Powerful recovery tools
    • Remote connection via SSH or in-browser to the host so you can rescue your VM while it boots
    • Possibility to switch kernels and reboot in rescue mode
    • Possibility to reset root password from admin panel
    • Possibility to rebuild from a backup or a fresh install without destroying the VM.
  • Inexpensive load balancers
  • Support for StackScripts, a way to run scripts while deploying a new VM
  • High class (free) support. From my experience, replies typically take 1-5 minutes!
  • Unlimited DNS zones
  • Very high transfer caps
  • Unmetered disk operations
  • Unmetered Gigabit in-zone data transfer
And they are on a roll. They recently upgraded their network, and all VMs now have 8 cores. You might wonder how it is possible to have 8 cores on a small instance; it is actually the priority of those CPUs that scales, not their raw power. In other words, the higher your package, the more reliable its performance is.

Seriously, the more I work with Linode, the more it feels right; it just feels like they know their stuff and do the best they can to give you everything they can.

Give it a try; you can use a small instance for a month. Here is my referral link. I get 20$ if you buy something.

For a setup similar to the AWS one detailed above, it boils down to around 220 $/month, but you have to build the database, memcache and CDN yourself.

Performance evaluation

Whatever provider you choose, be sure to test its performance. This is especially true for CPU and disks. The number of cores and their clock speed mean little to nothing. The best tool I found was SysBench. For disk operations specifically, you can choose various profiles like read-only, sequential read/write or random read/write, or specify a read/write ratio.

When benchmarking for websites, you typically want a lot of small files (10kB - 1MB) that will be read sequentially and some big files (1MB-5MB) with a read/write ratio of about 95%.

Maybe more on that later.


Tuesday, March 12, 2013

Guide to replicated LAMP stack hosting with failover

Motivations for building your own hosting

For my company, I started to think about offering a hosting service. Cheap solutions exist, like 1and1.com and iweb.com, where you can rent a VPS or get some shared hosting, but they imply some setup for each client, managing credentials, analyzing everyone's needs, etc. What about upgrades? What about failovers? What about custom services like Lucene or Rails that could be running?

Defining the needs

Before trying to find solutions, let's try to find the correct questions. What are we trying to accomplish exactly? These are generic needs; I will provide my answers, but at some point I had to take some shortcuts to be able to complete the project and meet some profitability requirements. If you have different priorities or you are working on a different scale, your answers will most probably differ at some point.

Scalable

Upgrades must be possible without any downtime. It must be easy so we can react in a matter of minutes to an emergency load or a crash. Also, we will spend quite some time configuring everything so we would like to keep it even if we triple our load. Ideally, we want to be able to scale both horizontally (adding more machines) and vertically (upgrading the machines). 

Highly Available

This means failovers, redundancy and stability. The key is load balancing, but we want to remove single points of failure as much as possible. If there are any, we want to have complete trust in them, and they should do as little as possible and be as isolated as possible.

Secure

The purpose of this guide is not to build a banking system, but we still want to be secure. We want strong password policies, firewalls and, most importantly, backups. The whole system should be re-installable in an hour or two if something major happens, and clients' files and databases should be revertible hourly, daily, weekly or something along those lines.

Compatible and flexible

We will have almost no control over the applications, but we still want to standardize some key elements. For example, having 2 database systems could be acceptable, but running 2 different web servers is a bit overzealous. Some clients may also have particular needs like a search engine or cron jobs; we need to be ready.

Performant

Between scalability and availability, we often achieve performance, but only if the channels are independent. In general, websites do many more reads than writes, both in the database and on the filesystem. However, because we want to be compatible and application agnostic, we won't be able to resort to techniques like declaring a folder read-only or having a slave database. We may not be able to constrain applications, but we can reward those that are well configured: we can provide some opt-in features like reverse proxies, shared caches, temporary folders, etc.

Profitable

And last but not least, we like profits, so the whole system must have a predictable cost that can be forwarded to the appropriate client. Scalability plays a big role here because we can scale just as much as we need, when we need it.

Overview

As I am starting to write this guide, the system is already operational and in production. I already stumbled across many problems, but I am sure some others are still to come. Here is an overview of all the parts I want to address, links will become available as they are written. The considered options are also listed to give you an idea of where I am going with all this.
  1. Hosting platform
    • Cloud virtual machines
      • Linode
      • Amazon
      • Rackspace
    • Physical virtual machines
      • iWeb (I’m in Montreal, Canada)
    • Platform as a service
      • Windows Azure
  2. Linux
    • CentOS
    • Ubuntu Server
    • Debian
  3. Filesystem
    • Synchronisation
      • csync2
      • rsync
    • Distributed
      • GlusterFS
      • Lustre
      • DRBD
    • Shared
      • NFS
  4. Load balancer
    • Amazon / Linode Load balancer
    • HAProxy
    • Nginx
  5. Reverse proxy with caching
    • Nginx
    • Varnish
  6. Web server
    • Apache
      • 2.2 / 2.4
      • Prefork / Worker / Event
    • Nginx
  7. MySQL
    • MySQL Cluster
    • Master/Master replication
    • Master/Slave replication + mysqlnd_ms
    • Percona XtraDB Cluster + Galera
  8. PHP
    • 5.2 / 5.3 / 5.4
    • Apache module
    • PHP-FPM
  9. Configuration system
    • Puppet
    • Chef
    • Custom scripts
  10. Backups
    • Full machine backups
    • Rsync to remote machine
    • Tarballs
  11. Monitoring
    • Zabbix
    • Nagios
    • Ganglia


As you can see, there is a lot to talk about.

Wednesday, February 20, 2013

PHP Document Root, Path and URL detection

PHP has no built-in base URL variable. In all the mess that is the $_SERVER variable, there is nothing that will tell you the base URL of your website, which is something very useful when embedding assets and building URLs.

The thing is, you can start from DOCUMENT_ROOT and work your way from there, but if you are using Apache's VirtualDocumentRoot, it is not reliable. Hence, you need to guess it.

The trick lies in SCRIPT_NAME and SCRIPT_FILENAME, which respectively describe the executed PHP file relative to the domain and its full path on disk. I used exclamation marks as regex delimiters, as I doubt you have any in your folder names. If you do, then… shame on you.

While I was at it, I added some port and https detection and the absolute URL.
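A minimal sketch of the idea (not the full snippet from the gist; the proxy caveats listed below still apply):

<?php
// The executed PHP file relative to the domain, and its full path on disk.
$script_name     = $_SERVER['SCRIPT_NAME'];      // e.g. /mywebsite/index.php
$script_filename = $_SERVER['SCRIPT_FILENAME'];  // e.g. /var/www/mywebsite/index.php

$base_dir = dirname($script_filename);           // /var/www/mywebsite
$base_url = rtrim(dirname($script_name), '/');   // /mywebsite

// Strip the URL part from the full path to recover the document root,
// using '!' as the regex delimiter.
$doc_root = preg_replace('!' . preg_quote($script_name, '!') . '$!', '', $script_filename); // /var/www

$protocol = (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off') ? 'https' : 'http';
$port     = (int) $_SERVER['SERVER_PORT'];
$domain   = $_SERVER['SERVER_NAME'];

$full_url = $protocol . '://' . $domain
          . (in_array($port, array(80, 443)) ? '' : ':' . $port)
          . $base_url;                            // http://example.com:8080/mywebsite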

Example
Variable    Content
base_dir    /var/www/mywebsite
doc_root    /var/www
base_url    /mywebsite
protocol    http
port        8080
domain      example.com
full_url    http://example.com:8080/mywebsite


Not included

  1. Port detection for a server running behind a proxy. You may want to use the port of the proxy, not the Web server.
  2. Same goes for https.
  3. Unresolved symlinks and relative paths. You may want to throw a couple of realpath() calls in there.

Thursday, January 31, 2013

Track your time working on PHP projects

Have you ever had a project manager? You know, the kind of person that comes to bug you about your timesheet not being properly done and how important it is? Well, as annoying as they can be, we must thank them and give them credit because it is indeed important. Even if you are a freelancer, you want to know roughly how much time you spent on a project to know if you billed the client correctly.

For years I hated timesheets. I am not particularly thorough at doing them either, so it just gets worse and worse. Fear not, I think I may have found a neat solution. The idea is to track when a PHP file is executed and log which project it belongs to.

To use my script, just follow the instructions in the comments; basically, you add it as an auto_prepend_file in your php.ini and it will check whether the current script is watched and which project it belongs to.

It uses a very basic file structure without even writing content: it simply creates folders and files, so you should not notice the overhead. In fact, I measured an overhead of only about 0.3 ms (on an SSD).

It also has a threshold so that several minutes without activity still count as continuous work (more details in the comments).
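Not the actual script, but a minimal sketch of the mechanism (the log location and the "client/project" path convention are assumptions for illustration, and the inactivity threshold logic is left out):

<?php
// Prepended to every request via auto_prepend_file in php.ini.

$watched_dir = '/var/www';                // root folder being watched
$log_dir     = '/var/log/php-timesheet';  // hypothetical log location

$script = realpath($_SERVER['SCRIPT_FILENAME']);

// Only track scripts inside the watched folder.
if ($script !== false && strpos($script, $watched_dir . '/') === 0) {
    // First two path components after the watched dir: client/project.
    $relative = substr($script, strlen($watched_dir) + 1);
    $parts    = explode('/', $relative);

    if (count($parts) >= 2) {
        // One empty file per minute of activity: counting the files per
        // date/client/project gives the time spent, without writing any content.
        $dir = $log_dir . '/' . date('Y-m-d') . '/' . $parts[0] . '/' . $parts[1];
        if (!is_dir($dir)) {
            mkdir($dir, 0777, true);
        }
        touch($dir . '/' . date('H-i'));
    }
}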

So let's say you have this structure:
  • /var/www/client-A/project-1
  • /var/www/client-A/project-2
  • /var/www/client-B/project-1
With this script configured to watch the folder /var/www, any PHP file called within it would be tracked and you would end up with something like:
  • 2013-01-30
    • client-A 
      • project-1: Time: 3:30
      • project-2: Time: 1:30
    • client-B
      • project-1: Time: 2:00
  • 2013-01-31
    • client-B
      • project-1: Time: 8:30
Very very useful to build timesheets.

Downside

Now, of the people I have shown this to, a lot have said that it tracks too narrow a portion of site development. From what I am used to, I have to disagree. Here's a breakdown of events that would be tracked:
  • Looking at or configuring the CMS
  • Testing some CSS/JS/HTML/PHP
  • Testing a new plugin
  • An AJAX request that runs on the page
And what would not:
  • Developing a complicated library
  • Documenting
  • Designing
Well, I rarely design; it's someone else who does it. Developing a complicated library? It is kinda rare, and when I do, it is linked to some project and I will end up testing it at the same time anyway. The real issue is documentation, but I tend to write it as I go. The threshold helps make sure those periods still get counted but, in the end, the tool is meant to help, not to be an exact representation of your time.


Tuesday, January 15, 2013

Generating a unique node id per hostname in Puppet

To generate a unique integer ID in Puppet, we can use the hostname and convert it to base 10 using an ERB inline template.


Friday, January 4, 2013

Fixing permissions using inotify and ACLs

I was working on a shared hosting project and I noticed the permissions were getting a bit complicated.

For each website:
  • Each user must have read/write permission
  • Each user's group must have read/write permission (reseller access)
  • Apache must have read access
  • Would be nice if admins had read/write as well
New files can appear at any time, created by any of those users, and it would be nice to keep track of who created them. This is where ACLs kick in.

For the case above, you need this command:
setfacl -m 'u:www-data:r-X,g:example-group:rwX,g:sudo:rwX,o::---' "/srv/www/example.org/htdocs"

ACLs quirks

The problem with ACLs is that they are very fragile and a lot of programs don't propagate them, even if you specify default rules. For example, any program that moves a file instead of creating it will keep the ACL of the directory in which the file was created, not the destination's. Hence, if you untar the whole website into the directory, the default permissions won't be applied.

Running a script

You could run a script through cron that loops over all your websites and forces the permissions, but this is a slow process (around 2 minutes on my current setup with ~10 websites), there is a delay before it runs (15-30 minutes, depending on how you set it up), and it is very I/O intensive.

inotifywait

inotify is a library that can warn you when some event occurs on a file or in a folder (recursively). It can be very useful for various tasks like syncing files, doing backups, recording edit progress, etc. inotifywait is simply a program that waits until the event occurs, so you can execute whatever you want afterwards.

Watching specifically on CREATE and MOVE events, we can fix the permissions of only the files we need.

The ATTRIB event (chown, chmod, setfacl) was intentionally left out so as not to cause loops, but it could probably be worked around.

Included below is an example of watching two folders and the corresponding init script. Tested on Ubuntu.