Saturday, December 17, 2011

Galera Cluster Release 1.1 - Out She Sails

Galera release 1.1 has been published and rolled out in the open!

This release has a bunch of bug fixes and one prominent new feature: rolling schema upgrade. With rolling schema upgrade, it is possible to apply schema changes (DDL statements, e.g. CREATE, DROP, ALTER, etc.) on each cluster node separately. The node being upgraded remains a member of the cluster but does not apply replication events while the DDL statement is being processed. With this, the cluster can stay in production throughout the schema upgrade process. I will write a more detailed description of how it works in a separate blog post.
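
The details are promised in a later post, but here is a rough sketch of what a rolling schema change can look like from the client side. It assumes the wsrep_OSU_method session variable and its RSU value as described in later MySQL/Galera documentation, so the exact knob in this release may differ, and the table is purely illustrative.

-- Run on ONE node at a time, then repeat on the next node.
SET SESSION wsrep_OSU_method = 'RSU';  -- assumed setting: this node stops applying replication while the DDL runs
ALTER TABLE orders ADD COLUMN note VARCHAR(255);  -- 'orders' is just an example table
SET SESSION wsrep_OSU_method = 'TOI';  -- back to the default, cluster-wide DDL method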

The 1.1 release uses a new version of the replication API: wsrep API #22. This means that both the MySQL and Galera components must be upgraded in the same go.
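
Because the two pieces must match, a quick sanity check after upgrading a node is to look at the wsrep status variables. A minimal sketch, assuming the status variable names used in current wsrep builds:

SHOW GLOBAL STATUS LIKE 'wsrep_provider_version';  -- version of the Galera library the server has loaded
SHOW GLOBAL STATUS LIKE 'wsrep_protocol_version';  -- replication protocol version actually in use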

Galera release 1.1 is available for download here. There are separate versions for both MySQL 5.5.17 and MySQL 5.1.59.

Oli wrote an excellent blog post about how to upgrade Galera from 1.0 to 1.1: see the FromDual blog. While you are on the FromDual site, check out Oli's other posts as well; there is plenty of Galera-related material there.

And don't forget Severalnines' ClusterControl, which offers benefits like:

  • Hassle-free installer for Galera Cluster
  • Excellent cluster-aware GUI for Galera Cluster monitoring and management

You can start your ClusterControl journey here: ClusterControl™ for MySQL Galera

Thursday, October 27, 2011

Galera Presentation in PerconaLive

Percona Live MySQL Conference, London, Oct 24th and 25th, 2011

Here is the slide set for the Galera release 1.0 presentation given at the PerconaLive London conference: Galera Replication Release 1.0, Getting There. Many thanks to Percona for all the effort in organizing this fabulous event.

And special thanks to Clustrix for the Revolution Party, and congratulations as well on the awesome benchmark results. Clustrix sure runs at "ludicrous speed".

Tuesday, October 25, 2011

MySQL/Galera Release 1.0 - Replication Redefined

MySQL/Galera Release 1.0 has slipped from our hands; we could not hold it back anymore. The pressure was just too immense from the community, our partners, and our new sales guy (special thanks to Sakari for kicking some engineer ass).

So, the infection is underway, and there will be no return to the steam age of async replication. Galera is replication redefined - and this is permanent. The biggest news in this release is that we now support both the MySQL 5.1 and 5.5 series. Other nice features (since the 0.8.1 level) include:

  • replication over SSL
  • garbd - the lightweight arbitrator
  • causal read support (see the quick example below)
  • persistent write set buffering

Not all of that can be expressed in one sentence, but believe me, you are dying to get it, because you cannot live without it.
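
Causal reads from the list above deserve a quick taste. A minimal sketch of how a client might use them, assuming the wsrep_causal_reads session variable; the table name is purely illustrative:

SET SESSION wsrep_causal_reads = ON;   -- make this node catch up with the cluster before serving the read
SELECT balance FROM accounts WHERE id = 42;  -- 'accounts' is just an example table
SET SESSION wsrep_causal_reads = OFF;  -- back to the default behavior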

So, there it is: www.codership.com/downloads/download-mysqlgalera/ .

And there is even more to it. Our beloved western neighbors, the brave dudes from Severalnines, have worked, if possible, even harder than we have, and are releasing Severalnines ClusterControl for MySQL Galera. ClusterControl provides three utilities:
  • configurator
  • monitor
  • management
...which pretty much cover all the tasks you need to do with your cluster.

Check out www.severalnines.com/galera-configurator to get started. With the configurator, you just fill in your system information, and you will receive a tarball with a deployment script, which will install and configure your MySQL/Galera cluster automatically.

So there it goes; now back to PerconaLive - it is intense here!

Tuesday, October 18, 2011

Galera Presentation in PerconaLive, London Conference

Percona Live MySQL Conference, London, Oct 24th and 25th, 2011

Team Codership will give a Galera presentation at the next event of the distinguished PerconaLive conference series in London (Oct 25).

We will send a three-person mini-delegation to the conference, and here we follow the guidelines we have been preaching to the Galera community for a long time: always have at least three members in the cluster for high availability. With the presence of conference buffets, the evening reception, London attractions, pubs & nightlife, the Chelsea stadium, etc., our team may have to do a few internal failovers. But three-member staffing guarantees that at least one of us is available at all times for any discussion you may want to get us involved in, preferably HA, replication or clustering related, as these topics make our world tick.

Our presentation will focus on the new Galera replication features developed over the summer. Since the MySQL User Conference in April, we have pushed out three new 0.8 series releases, and now the Galera 1.0 release is just about sliding out of the shipyard. A wealth of new features altogether, well worth a 30-minute talk session.

Ohoj! Hope to see you there!

Wednesday, June 22, 2011

Era of Galera - Release 0.8.0 Published

MySQL/Galera 0.8.0 starts a new major release series of Galera clustering. It features significant improvements in reliability, usability and performance over the 0.7 series and is not backward compatible with it. However, its principles of operation (synchronous multi-master replication) remain the same, as do most of the configuration settings, so 0.7 users can reuse their old configuration files with it.

Most of the changes relate to the internal workings of the Galera replicator. Improvements noticeable to end users include:

  • better configuration system
  • scriptable state transfer methods
  • scriptable notifications for cluster and node state changes
  • better status reporting for troubleshooting (see the quick example after this list)
  • better performance through optimized flow control and parallel applying
  • higher transparency (many compatibility bugs fixed)
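
For a quick look at the new configuration system and the improved status reporting, everything wsrep-related is visible from an ordinary client session (the exact set of variables depends on the version):

SHOW GLOBAL VARIABLES LIKE 'wsrep%';  -- all wsrep configuration options of this node
SHOW GLOBAL STATUS LIKE 'wsrep%';     -- all wsrep status counters, handy for troubleshooting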


For more information on the 0.8 series, see the Galera Wiki

Download from Codership Downloads

Many thanks to Vadim (Percona) and Oli (FromDual) for the early feedback

Friday, April 22, 2011

Codership Presentations in Collaborate11 & MySQL User Conference

The Codership team did a major two-week US tour in April, visiting both the Collaborate11 and MySQL User Conferences. Three presentations were given during the trip.

The Galera presentations have more or less the same content and show the current state of the Galera project. There is also some fresh data from recent benchmark runs in the Amazon EC2 environment.

The multi-master talk explores various multi-master solutions, putting them under a common classification.

Collaborate is a huge conference with mostly Oracle content. We had time to check out Oracle's MySQL sessions, and were impressed by the progress in the 5.6 release; e.g. parallel applying will be available for early testing. This will work at database granularity, but there should be no obstacles to making parallel applying work at table level as well. This looks promising, good work O'mighty Oracle.

The MySQL User Conference is as active and vivid as before. However, the expo side is still in recession. Maybe it is high time for us to straighten up and bring our Galera booth there next year. We saw many outstanding presentations at the conference, and there were interesting talks in the hallways as well; this conference is sure worth a visit.

Saturday, April 9, 2011

Synchronous Replication Loves You Again

So, the other day I posted the performance benchmarks for the multi-master MariaDB/Galera cluster. Spectacular performance. But some of you may justifiably say:

— Well, we were born into a master/slave world. We, like, adapted to it. We have invested so much brain power to make our applications work in a master/slave environment. What do we do now with all these read/write splitting voodoo Lua scripts and slave lag battling techniques? And master failover... There's a whole industry there. Thousands of jobs!

— But of course! — I say, — Galera likes read/write splitting as much as the next guy. If you want a node to be a slave, just don't use it as a master, simple as that. (Slave lag and master failover will have to go, though. Sorry about that.)

Also, about a year ago I was so fed up with the expert opinions about how synchronous replication is "slow" (why "slow"? The word "synchronous" does not even have an 'l' in it!) and does not work over WAN, that I ran a quick ad-hoc benchmark with 0.7pre (collector's edition) to see about it. Well, I saw it pretty well, but promised to get back to it later, with a more scientific approach and a more configurable Galera.

So in this installment I'll benchmark synchronous master/slave Galera 0.8pre performance, both in a LAN and between Ireland, EU, and Virginia, US. In a scientific way. Yes, with standard deviations and stuff. Amazon EC2, despite its dismal IO performance, will help us with that.

The configuration is the same as in the previous article. To make Galera more WAN-ready I added the following options (this must be put on a single line in my.cnf):

wsrep_provider_options="gcs.fc_factor=0.95; gcs.fc_limit=1024; evs.send_window=512; evs.user_send_window=256; evs.suspect_timeout=PT30S; evs.inactive_timeout=PT60S; evs.consensus_timeout=PT90S"

Among other things, this allows up to 256 transactions to be in replication at a time, but it in no way compromises the synchronous guarantee. For reference, see the Galera wiki.
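
Since the whole option string has to live on one my.cnf line, it is easy to truncate or mistype it. The effective settings can be read back from a running node; this should show the full option string the provider is actually using:

SHOW GLOBAL VARIABLES LIKE 'wsrep_provider_options';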

What do we have here, in order of appearance:

  1. Stock MariaDB with innodb_flush_log_at_trx_commit=1 as an alternative to synchronous replication.
  2. Single MariaDB/Galera node, which normally should not be run alone, but it serves as a reference point for master/slave configuration performance. I.e. "how much slower do we get?".
  3. 2-node master/slave cluster in the same eu-west availability zone.
  4. 2-node master/slave cluster between eu-west (Ireland) and us-east (Virginia) zones.
  5. 3-node master/slave cluster with 1 slave in eu-west and another in us-east.
  6. 2-node multi-master cluster in the eu-west zone for reference.
  7. 2 eu-west master nodes + 1 us-east slave — how does adding a transcontinental slave affect multi-master performance?
  8. eu-west client connects to standalone us-east server directly. Just to see if we need this WAN replication at all.



A blowup of the most interesting part:

And since we're especially concerned with transaction latencies, here's the latency profile at 32 threads, just to see a tad more clearly how bad it can get:

Well, what to say here? Synchronous replication in a LAN does not really make a difference compared to a standalone server. Move along, nothing to see here. Except that it is so much faster than flushing logs after each commit. As for the WAN, it does add latency to transactions. Guilty as charged. Up to 90 milliseconds!

Of course we have an alternative to this slow synchronous replication: a direct connection to a central server across the world. The proponents of this approach are in for the exquisite fun of having latencies of several seconds even on idle servers.

We can also notice an interesting property of the replication latency contribution — it is only noticeable until transaction execution latency takes over, that is, when the master becomes saturated. However, the same is not true for IO latency, because, unlike data replication, data flushing tends to monopolize the resource. That's where group commit should come in.

...with special thanks to Google docs for visual effects.
Original...

Thursday, April 7, 2011

Scaling-out OLTP load on Amazon EC2 revisited.

It's been long known that Galera optimistic replication and enterprise-size databases are a match made in heaven. Today we're going to get a little closer to testing this statement. We'll have a look at how Galera can scale out a Sysbench OLTP complex 60-million-row workload in EC2. This is the first proper benchmark for the 0.8 series and also the first benchmark of the MariaDB/Galera port, so I'll start modest, just to see how it goes. I chose m1.large instances with 7.8Gb of RAM for the cluster nodes and a c1.xlarge instance for the client — I don't want the client to be a bottleneck.

For comparison I have also measured the performance of a stock standalone MariaDB 5.1.55 server. I used the standard my.cnf that comes with the MariaDB Debian package with the following alterations:

max_connections=1024          # enough connections for high-concurrency sysbench runs
innodb_buffer_pool_size=6G    # most of the instance's 7.8Gb of RAM goes to the buffer pool
innodb_log_file_size=512M     # larger redo logs to smooth out write bursts

Galera nodes also add

innodb_flush_log_at_trx_commit=0   # rely on synchronous replication, not log flushing, for durability
innodb_locks_unsafe_for_binlog=1   # relax gap locking; conflicts are resolved by Galera certification
wsrep_slave_threads=16             # pool of parallel appliers for replicated write sets

That's right. One of the purposes of this study is to see how synchronous replication can be an alternative to on-disk durability.

The EC2 environment is known for its inconsistent performance, so in order to get reliable figures each measurement (a 20-minute sysbench run) was performed 3 times with an interval of ~2 hours between runs. Overall, the main part of the benchmark ran non-stop for over 30 hours. Surprisingly, consistency was very good. Most transaction rate scores had standard deviations well below 1%, and latency figures below 2%. (This essentially means that every shape you see on the charts is statistically significant.) The most inconsistent measurements came from the standalone MariaDB server, and in a short while we'll discuss possible causes of this.

60 million sysbench rows is more than 14Gb (UPDATE: in the course of benchmarking it grew to 20Gb) of table space, and with a 6Gb buffer pool this makes the benchmark quite IO-bound. I suppose this is a fairly realistic setup. My initial guess was that it should be favorable for Galera, as any heavy load should be, but IO-bound is not the same as CPU-bound, so I was not certain how exactly it would affect, for example, block lookups in the slave threads.

Ok, enough with the chatter, here are the charts:


Three things (besides striking performance and scalability) are readily noticeable on that chart:

  1. Stock MariaDB performance is rather poor. A 2-node MariaDB/Galera cluster more than doubles it, and a 4-node cluster triples it. It also does not display the characteristic performance peak at a certain number of connections. Instead, its throughput seems to keep growing slowly with the number of threads, meaning that the context switch penalty is negligible in this case. Something else is severely limiting its performance.

    The answer seems to be that the load is really IO-bound, and the need to flush the log after each transaction commit does not help it in any way. Well, that's the price one has to pay for not using synchronous replication when one can't afford to lose any transactions. (Yeah, I know, rather than implement synchronous replication, people learned to live with losing data.)
  2. The unsaturated part of the 5-node curve lies below the 4-node curve. This is most easily explained by growing replication overhead. Indeed, with 8-7 connections per node, there are still plenty of CPU and IO resources. However, we get fewer direct connections per node (lower efficiency) plus a 25% increase in replication latency (even lower efficiency), which requires more connections to compensate. In fact, if we look at the 4-node curve, we can see that it almost dips under the 3-node curve at 32 connections.

  3. Scalability that took off so wonderfully somehow hits a ceiling at around 1100 trx/sec. What is it? Well, there are several suspects:

    1. Replication overhead has finally caught up with us. See the item above.
    2. The slave workload has caught up with us; see this article about multi-master arithmetics. Indeed, the sysbench OLTP load has about 25% write queries, so 1/0.25 + 1 = 5 — pretty close to what we have. However, first, RBR event applying is far cheaper than query processing in the first place, so this estimate is faulty. Second, why so abrupt?
    3. The slave threads cannot handle all that replication traffic from 4 peer nodes (slave thread pool saturated). I've been using a pool of 16 slave threads, and while in a 2-node cluster that could be more than enough, in a 5-node cluster we have 4 times more slave traffic.

    Of course, all three of the above reasons combined could (and no doubt will, at some cluster size) end the scalability. However, I did not expect it so soon and so firmly. Just look at them red, orange, green and magenta curves - doesn't it tell you that there is space for at least one more node?

Replication overhead, while well pronounced with a small number of threads per server, is clearly negligible once saturation is reached. This can easily be seen from the latency graph, where bigger clusters actually display lower latencies due to the distribution of workload. To put it another way, in a saturated server transaction latency is dominated by the progressive deficit of internal resources, rather than by the constant replication overhead.

Slave workload saturation can be ruled out by a simple observation of CPU usage on the nodes. In the 4- and 5-node configurations idle CPU time was considerably above 50% (in the 2-node case it was ~20%). The nodes clearly hadn't run out of CPU power. Something else was holding performance back. Normally that would be IO waits.

But IO waits come in two forms:

  1. IO waits caused by buffer pool misses, which are normally dealt with by increasing concurrency: while one thread is waiting for a block, another is performing useful work.
  2. IO waits like "cannot perform more than this amount of IO operations per second" - and that is what we see in case of a single MariaDB server.

So, which one are we dealing with here?

In addition to low CPU usage, another interesting observation was made. Galera 0.8 has a status variable, wsrep_flow_control_paused, which shows the fraction of time replication was paused while waiting for other nodes. While I could not devise a method to chart it here, it was very easy to see that this fraction grows with the number of nodes, and in the 5-node cluster it was routinely around 60-70%. Essentially this means that slave write sets cannot be applied as fast as they arrive. But not for lack of CPU power!
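
For anyone who wants to watch this on their own cluster, the counter is an ordinary wsrep status variable and can be sampled from any client session:

-- Fraction of time replication was paused on this node; values creeping
-- towards 1.0 mean write sets arrive faster than they can be applied.
SHOW GLOBAL STATUS LIKE 'wsrep_flow_control_paused';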

Slave thread pool saturation seems like the most likely smoking gun here, so I'm running the same benchmark in 4-, 5-, and 8-node configurations with 64 slave threads. That should put the 5-node cluster on an equal footing with the 2-node one.
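
For reference, the only configuration change relative to the earlier runs is the size of the applier pool. A sketch, with the caveat that on some builds the variable is not dynamic and has to be set in my.cnf followed by a node restart:

SET GLOBAL wsrep_slave_threads = 64;  -- grow the applier pool from 16 to 64 threads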

Another possible trick to try is out-of-order committing (OOOC): even with several concurrent slave appliers, we can encounter a transaction that takes especially long to apply and thus holds all the other slave threads back from committing. In this case OOOC would allow the other slave threads to continue their work.

So I've rerun the benchmark with the aforementioned tricks - but on a different set of EC2 instances (only the first node, the one used for plain MariaDB testing, was consistently present in all setups), and that yielded somewhat incomparable results... Which is in a sense good, because it explains some things.

First of all, the overall throughput is considerably lower than in the initial benchmark (gray curves). Well, this can be explained by the fact that not all m1.large instances in EC2 are equal, and they certainly are not. That this is not a random fluke is corroborated by the amazing consistency that, say, the 4-node curves show in this benchmark.

And so, since m1.large instances can differ so much in performance, a natural explanation for the lack of scalability when going from 4 to 5 nodes in the initial benchmark is that the 5th node simply happened to be too weak to add any extra throughput.

Moreover, it looks like we happened to get a more even set of instances here, and we see a healthy 12% increase in throughput for a 25% increase in nodes. Scalability is still there! Notice that we're talking about scaling out the sysbench OLTP complex benchmark, which nobody else has ever dared to do. And, surprise, even 8 nodes show some increase in performance. Maybe quite marginal, but it still does what it is supposed to do: more nodes can handle more clients. Isn't that what "scale-out" is about?

Now, the other thing is that apparently neither the expansion of the slave thread pool to 64 threads nor OOOC resulted in any throughput improvement.

The first one is easy to explain. From the first chart we see that a single node is pretty much saturated at 16 connections. This is with client-server communication latency in play. Slave threads have no client-server communication, so they saturate the server even sooner. So adding more slave threads is more likely to harm performance than to improve it.

OOOC, on the other hand, sets the existing threads totally loose, and with practically no collisions between the write sets they should be able to apply them in a virtually tight loop. Still, this doesn't happen. And this last observation forces us to the only possible conclusion: the m1.large instance simply cannot handle more IO operations.

This can be further explained as follows: if the load is IO-bound, and if reads and writes on average cause the same number of buffer pool misses (on the master end this probably does not hold, because reads prefetch pages that might be used by writes, but on the slave end it most probably does), then we arrive at the same old formula that places the practical scalability limit at 4-5 nodes.

Conclusions and further work.


Well, m1.large instance IO seriously disappointed. In fact, measuring the creation time of an 8Gb file showed that /dev/sdb there is slower than my laptop hard drive, which is kinda hard to justify. Several things can be tried to improve it:

  1. Use EBS volumes for data and InnoDB log storage. They are said to be faster; however, this means extra expense and complexity. I was trying to show how you can have a highly available MySQL on EC2 without resorting to non-volatile storage. Seriously, how hard is it to make the local drive fast enough?
  2. Try higher-end instances (yeah, cluster compute!). Well, as soon as we get the budget for that I'll surely give it a try, although I have little doubt that the result will be nothing short of stellar.
  3. Try with an xfs/ext4 file system on /mnt instead. ext3 is as adequate a choice as fat16 these days.


Replication overhead. Yes, it is small, but we can see it, and I'd prefer that we couldn't. The problem here is that EC2 does not support multicast and probably never will, so no luck there. Some experiments with multicast, though, have suggested that even using UDP unicast could considerably improve latencies compared to TCP. This could be our next optimization project.

As for the rest, we saw great performance, great scalability and a good reason to use Galera replication instead of innodb_flush_log_at_trx_commit=1.

Downloads


  • MariaDB-wsrep port is available from lp:codership-maria.
  • Galera wsrep provider is available from lp:galera.

To be continued with master/slave and WAN benchmarks.

Original.

Wednesday, February 9, 2011

Codershippers in FOSDEM

FOSDEM, the Free and Open Source Software Developers' European Meeting

The Codership team spent this weekend in Brussels, in and around the ever-famous FOSDEM 2011 conference. This time there was quite a lot of the *around* part, as we were traveling with families, and trip scheduling was mostly not within our control. We regrettably missed, e.g., the MySQL dinner, which, we later heard, was fueled by Monty's black bottle (TM, hallelujah).
At the conference, when we finally got in there, we gave a presentation which explores various Multi-Master replication technologies, classifying, comparing and sorting them (and, in the end, boiling them in deep oil). This presentation will grow both in length and content and will be given in a longer version at the O'Reilly MySQL User Conference in Santa Clara, Apr 14. For this, we welcome any status updates you might have on your favorite Multi-Master replication solution. I know that at least the Tungsten dudes are working hard on developing the Multi-Master Tungsten 2.0 replicator. I hope you can send your project status before the conference.

The FOSDEM presentation slides are available for download at: http://www.codership.com/files/presentations/MMRep_FOSDEM_2011.pdf