Thursday, October 28, 2010

MySQL load balancing with HAProxy

In an earlier blog post I was advising people to use HAProxy 1.4 and above if they need MySQL load balancing with health checks. It turns out that I didn't have much luck with that solution either. HAProxy shines when it load balances HTTP traffic, and its health checks are really meant to be run over HTTP and not plain TCP. So the solution I found was to have a small HTTP Web service (which I wrote using tornado) listening on a configurable port on each MySQL node.

For the health check, the Web service connects via MySQLdb to the MySQL instance running on a given port and issues a 'show databases' command. For more in-depth checking you can obviously run fancier SQL statements.

The code for my small tornado server is here. The default port it listens on is 31337.
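For readers who want to see the shape of such a service, here is a stdlib-only sketch of the same idea (my actual server uses tornado and MySQLdb; the user name and response messages below are illustrative, not what my code uses):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def mysql_is_healthy(port):
    """Connect to the local MySQL instance and run 'show databases'.
    Returns False on any failure, so a dead server fails the check."""
    try:
        import MySQLdb  # assumed installed on each MySQL node
        conn = MySQLdb.connect(host="127.0.0.1", port=port,
                               user="haproxy_check")  # placeholder user
        conn.cursor().execute("show databases")
        conn.close()
        return True
    except Exception:
        return False

class MySQLChkHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        parsed = urlparse(self.path)
        if parsed.path != "/mysqlchk/":
            self.send_error(404)
            return
        # HAProxy passes the MySQL port in the query string
        port = int(parse_qs(parsed.query).get("port", ["3306"])[0])
        if mysql_is_healthy(port):
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"MySQL is up\n")
        else:
            # any HTTP error code makes HAProxy consider the check failed
            self.send_error(503)

# To run on each MySQL node:
#   HTTPServer(("0.0.0.0", 31337), MySQLChkHandler).serve_forever()
```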

Now on the HAProxy side I have a "listen" section for each collection of MySQL nodes that I want to load balance. Example:
listen mysql-m0 0.0.0.0:33306
  mode tcp
  option httpchk GET /mysqlchk/?port=3306
  balance roundrobin
  server db101 10.10.10.1:3306 check port 31337 inter 5000 rise 3 fall 3
  server db201 10.10.10.2:3306 check port 31337 inter 5000 rise 3 fall 3 backup
In this case, HAProxy listens on port 33306 and load balances MySQL traffic between db101 and db201, with db101 as the primary node and db201 as the backup node. That means traffic only goes to db101 unless the health check considers it down, in which case traffic is directed to db201. This scenario is especially useful when db101 and db201 are in a master-master replication setup and you want traffic to hit only one of them at any given time. Note also that I could have had HAProxy listen on port 3306, but I preferred to have it listen and be contacted by the application on port 33306, in case I also wanted to run a MySQL server on port 3306 on the same server as HAProxy.

I specify how to call the HTTP check handler via "option httpchk GET /mysqlchk/?port=3306". I specify the port the handler listens on via the "port" option in the "server" line. In my case the port is 31337. So HAProxy will do a GET against http://10.10.10.1:31337/mysqlchk/?port=3306. If the result is an HTTP error code, the health check will be considered failed.

The other options "inter 5000 rise 3 fall 3" mean that the health check is issued by HAProxy every 5,000 ms, and that the health check needs to succeed 3 times ("rise 3") in order for the node to be considered up, and it needs to fail 3 times ("fall 3") in order for the node to be considered down.
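To make the timing concrete: with these settings, a node that dies hard can keep receiving traffic for roughly 15 seconds before HAProxy marks it down and fails over. A back-of-the-envelope sketch (it ignores per-check connect timeouts):

```python
# Values from the "server" lines above: checks every 5,000 ms,
# 3 consecutive failures to go down, 3 consecutive successes to come back up.
inter_ms, rise, fall = 5000, 3, 3

seconds_to_mark_down = fall * inter_ms / 1000
seconds_to_mark_up = rise * inter_ms / 1000

print(seconds_to_mark_down, seconds_to_mark_up)  # 15.0 15.0
```

Lowering "inter" or "fall" shortens that window, at the cost of more check traffic and a higher chance of flapping on transient errors.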

I hasten to add that this master-master load balancing has its disadvantages. That said, it did save my butt one Sunday morning when db101 went down hard (after all, it was an EC2 instance), and HAProxy directed traffic to db201 in a fashion totally transparent to the application.

But... I have also seen the situation where db201, as a slave of db101, lagged in its replication, so when db101 was considered down and traffic was sent to db201, the state of the data was stale from the application's point of view. I consider this disadvantage to outweigh the automatic failover advantage, so I actually ended up taking db201 out of HAProxy. If db101 ever goes down hard again, I'll just manually point HAProxy to db201, after making sure the state of the data on db201 is what I expect.

All this being said, I recommend the automated failover scenario only when load balancing against a read-only farm of MySQL servers, which are probably all slaves of some master. In this case, although reads can also get out of sync, at least you won't attempt creates/updates/deletes against stale data.
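For illustration, such a read-only farm section would look like the master-master one above minus the "backup" keyword, so HAProxy round-robins across every healthy slave (the hostnames and IPs here are made up):

listen mysql-read 0.0.0.0:33307
  mode tcp
  option httpchk GET /mysqlchk/?port=3306
  balance roundrobin
  server db301 10.10.10.3:3306 check port 31337 inter 5000 rise 3 fall 3
  server db401 10.10.10.4:3306 check port 31337 inter 5000 rise 3 fall 3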

The sad truth is that there is no good way of doing automated load balancing AND failover with MySQL without resorting to things such as DRBD, which are not cloud-friendly. I am aware of Yves Trudeau's blog posts on "High availability for MySQL on Amazon EC2", but the setup he describes strikes me as experimental and I wouldn't trust it in a large-scale production setup.

In any case, I hope somebody will find the tornado handler I wrote useful for their own MySQL health checks, or actually any TCP-based health check they need to do within HAProxy.

Thursday, October 14, 2010

Introducing project "Overmind"

Overmind is the brainchild of Miquel Torres. In its current version, released today, Overmind is what is sometimes called a 'controller fabric' for managing cloud instances, based on libcloud. However, Miquel's Roadmap for the project is very ambitious, and includes things like automated configuration management and monitoring for the instances launched and managed via Overmind.

A little bit of history: Miquel contacted me via email in late July because he read my blog post on "Automated deployment systems: push vs. pull" and he was interested in collaborating on a queue-based deployment/config management system. The first step in such a system is to actually deploy the instances you need configured. Hence the need for something like Overmind.

I'm sure you're asking yourself -- why did these guys want to roll their own system? Why not use something like OpenStack? Note that in late July OpenStack had only just been announced, and to this day (mid-October 2010) they have yet to release their controller fabric code. In the meantime, we have a pretty functional version of a deployment tool in Overmind, supporting Amazon EC2 and Rackspace, with a Django Web interface and also a REST API.

I am aware there are many other choices out there in terms of managing and deploying cloud instances -- Cloudkick, RightScale, Scalarium ...and the list goes on. The problem is that none of these is Open Source. They do have great ideas though that we can steal ;-)

I am also aware of Ruby-based tools such as Marionette Collective and its close integration with Puppet (which is now even closer since it has been acquired by Puppet Labs). The problem is that it's Ruby and not Python ;-)

In short, what Overmind brings to the table today is a Python-based, Django-based, libcloud-based tool for deploying (and destroying, but be careful out there) cloud instances. For the next release, Miquel and I are planning to add some configuration management capabilities. We're looking at kokki as a very interesting Python-based alternative to chef, although we're planning on supporting chef-solo too.
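To give a flavor of the libcloud calls a tool like Overmind wraps, here is a hypothetical sketch (credentials and the node name are placeholders; the imports are deferred so the snippet can be read without libcloud installed, and the module paths follow newer libcloud releases):

```python
def launch_node(access_key, secret_key, name="overmind-demo"):
    # deferred imports: this file loads even without libcloud installed
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    driver = get_driver(Provider.EC2)(access_key, secret_key)
    # pick the first advertised size and image purely for illustration;
    # a real tool would let the user choose
    size = driver.list_sizes()[0]
    image = driver.list_images()[0]
    return driver, driver.create_node(name=name, size=size, image=image)

def destroy_node(driver, node):
    # Overmind also supports tearing instances down ("be careful out there")
    return driver.destroy_node(node)
```

The point of Overmind is to put a Django UI and a REST API in front of calls like these, across multiple providers.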

If you're interested in contributing to the project, please do! Miquel is an amazingly talented, focused and relentless developer, but he can definitely use more help (my contributions have been minimal in terms of actual code; I mostly tested Miquel's code and did some design and documentation work, especially in the REST API area).

Here are some pointers to Overmind-related resources:
