Jul 27, 2011
 

MySQL Cluster is, without doubt, the most interesting MySQL product Oracle offers to the people out there.
It's the flagship, the holy grail, based on the knowledge and technology developed around our well-known MySQL Server.

I'm not going to go through why MySQL Cluster is so great; you can find that anywhere.
I'm going to show how to use the new MySQL Cluster Manager (aka MCM) to set up your Cluster quickly and easily.

If you've ever set up a cluster, you know that "easy" and "quick" are not the first associations that come to mind when thinking about MySQL Cluster. Let's face it: Cluster is a pain to set up…
It has millions of parameters to play with, each has to be set on all nodes, you have to be very careful when you change things, and if, by any chance, you have to upgrade your Cluster… Then, well… may the force be with you.

Just imagine you have a 24-node cluster with 8 node groups: you have to stop, upgrade, check settings and start each node in a specific order to keep your cluster running. It can be done, but it's going to take you long, excruciatingly painful hours. It's not for the faint of heart.

That’s exactly where MySQL Cluster Manager gets into the picture!

If you take special pleasure in staring at black terminal windows for hours just to finally get your prompt back without errors and with no SMS saying "YOU'RE FIRED!", stop reading; this post is not for you.

Ok, so you are among the few that actually have a life. Let's get into it.

Let me just say, before we go on, that this post is based on, and complementary to, Andrew Morgan's cool Flash demo showing how it's done. Also, with the Flash demo you get to hear Andrew's sexy voice ;-)
And, as always, the most comprehensive source of knowledge is the manual pages: http://dev.mysql.com/doc/mysql-cluster-manager/1.1/en/

MySQL Cluster Manager is an application that does one thing and one thing only: it installs and manages the proper configurations of cluster(s).
That said, it does not in any shape or form replace the Cluster management node, which is responsible for the correct operation of the nodes. Actually, MCM will set up those management nodes as well, as you'll see soon.

Getting ready and some house keeping

This post assumes you have 4 machines, quite possibly virtual, with Linux installed (mine are Oracle Linux 5.5 x86) and network connectivity between them.

The topology:
[Figure: MySQL Cluster configuration]
Each Linux machine has the following /etc/hosts file to make your life easier later:

127.0.0.1               localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6

192.168.10.1  cA
192.168.10.2  cB
192.168.10.3  cC
192.168.10.4  cD

Extract the MySQL Cluster Manager files from its package. You can obtain the files from Oracle's eDelivery.

cd /opt
tar -zxvf /mysql-cluster-manager-1.1.1-linux-rhel5-x86-32bit.tar.gz
mv mysql-cluster-manager-1.1.1-linux-rhel5-x86-32bit mcm

Extract the Cluster package itself. Note: you're not installing it, you're just extracting it so the manager can use it.

tar -zxvf /mysql-cluster-com-7.1.14-linux-i686-glibc23.tar.gz

Let's have a look at what we have in /opt:

ls -la /opt

mcm
mysql-cluster-com-7.1.14-linux-i686-glibc23

Setting up the agent

As you can see from the picture, the agent has to run on all the servers (nodes). Actually, this is the only time you'll have to do something on all the nodes manually; from here on, the agent will take care of things for you.
Not only that, the agents synchronise between themselves, so you can connect to the manager on any of the nodes and control the whole cluster (or clusters).

Edit: as of version 1.1.1 (GA) there's no need to edit the mcmd.ini file.
You can if you want, based on the man page.
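For reference, a minimal mcmd.ini could look something like the sketch below. The option names and values here are only an illustration recalled from the agent's documentation, not copied from this setup; check the man page before relying on them.

[mcmd]
# port the manager listens on for client connections (1862 is the default)
manager-port = 1862
# directory where the agent keeps its configuration store and cluster data
manager-directory = /opt/mcm/mcm_data
# agent log location and verbosity
log-file = /opt/mcm/mcmd.log
log-level = message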

Create a group and a user for the manager daemon to run under (it cannot be run as root) and make sure the directory is owned by this user:

groupadd clustermanager; useradd -M -d /opt/mcm/ -g clustermanager clustermanager
chown -R clustermanager:clustermanager /opt/mcm/

Finally, let's run the manager agent and send it to the background:

sudo -u clustermanager /opt/mcm/bin/mcmd &

If you want to use different settings based on an mcmd.ini file, use the command:

sudo -u clustermanager /opt/mcm/bin/mcmd --defaults-file=/opt/mcm/etc/mcmd.ini &

To make sure it’s running

 ps aux | grep mcmd

500       2909  0.0  0.6  30840  3184 ?        Ssl  11:28   0:00 /opt/mcm/libexec/mcmd --plugin-dir=/opt/mcm/lib/mcmd

If something is wrong and it's not running, check the log in /opt/mcm/mcmd.log.

Now do the same on each of the other hosts (everything from the /etc/hosts part, just below the picture).
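Once the packages are extracted on the other machines, a quick way to repeat the agent start on them is a small loop from cA. This is just a convenience sketch, assuming passwordless SSH and a sudo setup that works non-interactively:

for h in cB cC cD; do
    ssh $h 'sudo -u clustermanager nohup /opt/mcm/bin/mcmd > /dev/null 2>&1 &'
done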

Connecting to the Manager

Connecting to the manager is done with a regular MySQL console client pointed at the manager port (default 1862). You can install the MySQL client or just use the one we extracted with the cluster package.
The default user is admin, with password super.

/opt/mysql-cluster-com-7.1.14-linux-i686-glibc23/bin/mysql -h127.0.0.1 -P1862 -uadmin -psuper --prompt='mcm> '

You should now see, unsurprisingly, the same console you're used to with MySQL. To make sure you are actually in the Manager console, run the following command (the mcm> prompt doesn't prove anything… we set that ourselves with the --prompt option…)

list commands;

That will show you the list of commands (duh…) that the Manager accepts. It's worth mentioning that running <command> --help will give you the full details of any command.
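For example, to see everything the create cluster command can do:

mcm> create cluster --help;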

Creating new cluster

Create new site

A site holds clusters. You can have multiple sites.

mcm> create site --hosts=cA,cB,cC,cD mysite;
+---------------------------+
| Command result            |
+---------------------------+
| Site created successfully |
+---------------------------+

mcm> list sites;
+--------+------+-------+-------------+
| Site   | Port | Local | Hosts       |
+--------+------+-------+-------------+
| mysite | 1862 | Local | cA,cB,cC,cD |
+--------+------+-------+-------------+
Adding package

Packages are used to maintain and upgrade clusters with one command. You have to have at least one cluster package to create a new cluster.

mcm> add package --basedir=/opt/mysql-cluster-com-7.1.14-linux-i686-glibc23/ 7.1.14;
+----------------------------+
| Command result             |
+----------------------------+
| Package added successfully |
+----------------------------+

mcm> list packages mysite;
+---------+---------------------------------------------------+-------------+
| Package | Path                                              | Hosts       |
+---------+---------------------------------------------------+-------------+
| 7.1.14  | /opt/mysql-cluster-com-7.1.14-linux-i686-glibc23/ | cA,cB,cC,cD |
+---------+---------------------------------------------------+-------------+
Create new cluster
  • ndb_mgmd – Cluster management node on cA & cB
  • ndbd – single-threaded data node on cC & cD (listed twice; each machine holds two data nodes for our demo)
  • mysqld – MySQL (SQL interface) node on cA & cB
mcm> create cluster --package=7.1.14 --processhosts=ndb_mgmd@cA,ndb_mgmd@cB,ndbd@cC,ndbd@cD,ndbd@cC,ndbd@cD,mysqld@cA,mysqld@cB mycluster;
+------------------------------+
| Command result               |
+------------------------------+
| Cluster created successfully |
+------------------------------+

More process types you can use are listed below, with an example right after the list:

  • ndbapi – for API interface
  • ndbmtd – for the multi-threaded NDB engine
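For example, a hypothetical layout using multi-threaded data nodes plus a free slot for an external NDB API application (not the layout we use in this demo) might look like:

mcm> create cluster --package=7.1.14 --processhosts=ndb_mgmd@cA,ndbmtd@cC,ndbmtd@cD,mysqld@cA,ndbapi@cB othercluster;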

To look at the current node assignments:

mcm> show status -r mycluster;
+------+----------+------+--------+-----------+
| Id   | Process  | Host | Status | Nodegroup |
+------+----------+------+--------+-----------+
| 1    | ndb_mgmd | cA   | added  |           |
| 2    | ndb_mgmd | cB   | added  |           |
| 3    | ndbd     | cC   | added  | n/a       |
| 4    | ndbd     | cD   | added  | n/a       |
| 5    | ndbd     | cC   | added  | n/a       |
| 6    | ndbd     | cD   | added  | n/a       |
| 7    | mysqld   | cA   | added  |           |
| 8    | mysqld   | cB   | added  |           |
+------+----------+------+--------+-----------+
Getting and Setting cluster parameters

Before we start the cluster, we should set the parameters that apply to it. Some cannot be changed after the first start, such as NoOfReplicas; most others will only require a restart of the cluster.
The way to see the current parameters is the get command. The syntax, once again, is pretty simple:

mcm> get <parameter>:<process>:<id> cluster_name;
  • parameter – the parameter we’re looking for, such as DataDir, DiskSyncSize etc
  • process – the process we want the info about, such as ndbd, mysqld, ndbmtd etc
  • id – the specific id of the process we want the data about, as shown in the show status -r list

The great thing about it is that you don't need to specify all, or even any, of those arguments:

mcm> get DataDir:ndbd:3 mycluster;
+---------+---------------------------------------------+----------+------+----------+------+-------+---------+
| Name    | Value                                       | Process1 | Id1  | Process2 | Id2  | Level | Comment |
+---------+---------------------------------------------+----------+------+----------+------+-------+---------+
| DataDir | /opt/mcm/mcm_data/clusters/mycluster/3/data | ndbd     | 3    |          |      |       |         |
+---------+---------------------------------------------+----------+------+----------+------+-------+---------+

mcm> get DataDir:ndbd mycluster;
+---------+---------------------------------------------+----------+------+----------+------+-------+---------+
| Name    | Value                                       | Process1 | Id1  | Process2 | Id2  | Level | Comment |
+---------+---------------------------------------------+----------+------+----------+------+-------+---------+
| DataDir | /opt/mcm/mcm_data/clusters/mycluster/3/data | ndbd     | 3    |          |      |       |         |
| DataDir | /opt/mcm/mcm_data/clusters/mycluster/4/data | ndbd     | 4    |          |      |       |         |
| DataDir | /opt/mcm/mcm_data/clusters/mycluster/5/data | ndbd     | 5    |          |      |       |         |
| DataDir | /opt/mcm/mcm_data/clusters/mycluster/6/data | ndbd     | 6    |          |      |       |         |
+---------+---------------------------------------------+----------+------+----------+------+-------+---------+

mcm> get DataDir mycluster;
+---------+---------------------------------------------+----------+------+----------+------+-------+---------+
| Name    | Value                                       | Process1 | Id1  | Process2 | Id2  | Level | Comment |
+---------+---------------------------------------------+----------+------+----------+------+-------+---------+
| DataDir | /opt/mcm/mcm_data/clusters/mycluster/1/data | ndb_mgmd | 1    |          |      |       |         |
| DataDir | /opt/mcm/mcm_data/clusters/mycluster/2/data | ndb_mgmd | 2    |          |      |       |         |
| DataDir | /opt/mcm/mcm_data/clusters/mycluster/3/data | ndbd     | 3    |          |      |       |         |
| DataDir | /opt/mcm/mcm_data/clusters/mycluster/4/data | ndbd     | 4    |          |      |       |         |
| DataDir | /opt/mcm/mcm_data/clusters/mycluster/5/data | ndbd     | 5    |          |      |       |         |
| DataDir | /opt/mcm/mcm_data/clusters/mycluster/6/data | ndbd     | 6    |          |      |       |         |
| datadir | /opt/mcm/mcm_data/clusters/mycluster/7/data | mysqld   | 7    |          |      |       |         |
| datadir | /opt/mcm/mcm_data/clusters/mycluster/8/data | mysqld   | 8    |          |      |       |         |
+---------+---------------------------------------------+----------+------+----------+------+-------+---------+

An important flag is -d, which tells get to also show parameters that are not explicitly set and are therefore at their default values:

mcm> get Diskless mycluster;
Empty set (0.24 sec)

mcm> get -d Diskless mycluster;
+----------+-------+----------+------+----------+------+---------+---------+
| Name     | Value | Process1 | Id1  | Process2 | Id2  | Level   | Comment |
+----------+-------+----------+------+----------+------+---------+---------+
| Diskless | false | ndbd     | 3    |          |      | Default |         |
| Diskless | false | ndbd     | 4    |          |      | Default |         |
| Diskless | false | ndbd     | 5    |          |      | Default |         |
| Diskless | false | ndbd     | 6    |          |      | Default |         |
+----------+-------+----------+------+----------+------+---------+---------+

If you are really keen on looking at all the values you can set, just run

mcm> get -d mycluster;

Setting values other than the defaults (which is highly recommended for production clusters!) is done the same way as get, but with the set command.

mcm> get -d port:mysqld,StopOnError:ndbd mycluster;
+-------------+-------+----------+------+----------+------+---------+---------+
| Name        | Value | Process1 | Id1  | Process2 | Id2  | Level   | Comment |
+-------------+-------+----------+------+----------+------+---------+---------+
| StopOnError | true  | ndbd     | 3    |          |      | Default |         |
| StopOnError | true  | ndbd     | 4    |          |      | Default |         |
| StopOnError | true  | ndbd     | 5    |          |      | Default |         |
| StopOnError | true  | ndbd     | 6    |          |      | Default |         |
| port        | 3306  | mysqld   | 7    |          |      | Default |         |
| port        | 3306  | mysqld   | 8    |          |      | Default |         |
+-------------+-------+----------+------+----------+------+---------+---------+

mcm> set port:mysqld:8=3307,StopOnError:ndbd=false mycluster;
+-----------------------------------+
| Command result                    |
+-----------------------------------+
| Cluster reconfigured successfully |
+-----------------------------------+

mcm> get -d port:mysqld,StopOnError:ndbd mycluster;
+-------------+-------+----------+------+----------+------+---------+---------+
| Name        | Value | Process1 | Id1  | Process2 | Id2  | Level   | Comment |
+-------------+-------+----------+------+----------+------+---------+---------+
| StopOnError | false | ndbd     | 3    |          |      | Process |         |
| StopOnError | false | ndbd     | 4    |          |      | Process |         |
| StopOnError | false | ndbd     | 5    |          |      | Process |         |
| StopOnError | false | ndbd     | 6    |          |      | Process |         |
| port        | 3306  | mysqld   | 7    |          |      | Default |         |
| port        | 3307  | mysqld   | 8    |          |      |         |         |
+-------------+-------+----------+------+----------+------+---------+---------+

Starting the cluster

Ok, it's time to start the cluster… Note the use of the -B flag to start the cluster in the background. If you don't use it, the prompt will wait for the cluster to start (or fail) without giving you any way to see what's going on (unless you connect with another console).

mcm> start cluster -B mycluster;
+--------------------------------------------------------------------------------------+
| Command result                                                                       |
+--------------------------------------------------------------------------------------+
| Operation started successfully. Please check the operation result before continuing. |
+--------------------------------------------------------------------------------------+

To see the processes starting to play, issue the show status command

mcm> show status -r mycluster;
+------+----------+------+----------+-----------+
| Id   | Process  | Host | Status   | Nodegroup |
+------+----------+------+----------+-----------+
| 1    | ndb_mgmd | cA   | running  |           |
| 2    | ndb_mgmd | cB   | unknown  |           |
| 3    | ndbd     | cC   | starting | n/a       |
| 4    | ndbd     | cD   | starting | n/a       |
| 5    | ndbd     | cC   | starting | n/a       |
| 6    | ndbd     | cD   | starting | n/a       |
| 7    | mysqld   | cA   | added    |           |
| 8    | mysqld   | cB   | added    |           |
+------+----------+------+----------+-----------+

mcm> show status -r mycluster;
+------+----------+------+----------+-----------+
| Id   | Process  | Host | Status   | Nodegroup |
+------+----------+------+----------+-----------+
| 1    | ndb_mgmd | cA   | running  |           |
| 2    | ndb_mgmd | cB   | unknown  |           |
| 3    | ndbd     | cC   | starting | n/a       |
| 4    | ndbd     | cD   | stopped  | n/a       |
| 5    | ndbd     | cC   | stopped  | n/a       |
| 6    | ndbd     | cD   | starting | n/a       |
| 7    | mysqld   | cA   | added    |           |
| 8    | mysqld   | cB   | added    |           |
+------+----------+------+----------+-----------+

mcm> show status -r mycluster;
+------+----------+------+---------+-----------+
| Id   | Process  | Host | Status  | Nodegroup |
+------+----------+------+---------+-----------+
| 1    | ndb_mgmd | cA   | running |           |
| 2    | ndb_mgmd | cB   | running |           |
| 3    | ndbd     | cC   | running | 0         |
| 4    | ndbd     | cD   | running | 0         |
| 5    | ndbd     | cC   | running | 1         |
| 6    | ndbd     | cD   | running | 1         |
| 7    | mysqld   | cA   | running |           |
| 8    | mysqld   | cB   | running |           |
+------+----------+------+---------+-----------+

Our cluster is up and running!

Some very basic games with cluster

Ok, we did all that for… what?
Haa, yes… to have a fully redundant, super fast, ACID compliant database. It'll be hard to show the performance of the cluster, as it's running on virtual machines, but we can definitely demonstrate the redundancy.

Let's create an NDB table to play with. On one of the mysqld nodes (cA on port 3306, or cB on port 3307 as we changed that earlier):

/opt/mysql-cluster-com-7.1.14-linux-i686-glibc23/bin/mysql -h127.0.0.1 -P3306
mysql> USE test
Database changed

mysql> CREATE TABLE ndb1 (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, msg TEXT) ENGINE=NDB;
Query OK, 0 rows affected (0.38 sec)

mysql> INSERT INTO ndb1 (msg) VALUES ('row 1'),('row 2');
Query OK, 2 rows affected (0.09 sec)
Records: 2  Duplicates: 0  Warnings: 0

mysql> SELECT * FROM test.ndb1;
+------+-------+
| id   | msg   |
+------+-------+
|    1 | row 1 |
|    2 | row 2 |
+------+-------+

Now let's go and shut down some data nodes.
Note that since we have 4 data nodes and 2 node groups (NoOfReplicas=2), we can kill one data node from each group and the cluster is still supposed to stay alive.
We'll take down two data nodes, both on the single virtual host cD (one from each node group).
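I'm not stopping them gracefully through the manager; the idea is to simulate a real crash. Powering off the cD virtual machine does the trick, or, as a rough illustrative alternative, you can kill the agent and data node processes on cD by hand:

# run on host cD -- simulates a crash of that whole machine
pkill -9 mcmd
pkill -9 ndbd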

mcm> show status -r mycluster;
+------+----------+------+---------+-----------+
| Id   | Process  | Host | Status  | Nodegroup |
+------+----------+------+---------+-----------+
| 1    | ndb_mgmd | cA   | running |           |
| 2    | ndb_mgmd | cB   | running |           |
| 3    | ndbd     | cC   | running | 0         |
| 4    | ndbd     | cD   | failed  | 0         |
| 5    | ndbd     | cC   | running | 1         |
| 6    | ndbd     | cD   | failed  | 1         |
| 7    | mysqld   | cA   | running |           |
| 8    | mysqld   | cB   | running |           |
+------+----------+------+---------+-----------+

Let's go back to the SQL console and try to select and insert:

mysql> select * from ndb1;
+------+-------+
| id   | msg   |
+------+-------+
|    2 | row 2 |
|    1 | row 1 |
+------+-------+
2 rows in set (0.00 sec)

mysql> INSERT INTO ndb1 (msg) VALUES ('row 3'),('row 4');
Query OK, 2 rows affected (0.01 sec)
Records: 2  Duplicates: 0  Warnings: 0

mysql> SELECT * FROM ndb1;
+----+-------+
| id | msg   |
+----+-------+
|  1 | row 1 |
|  2 | row 2 |
|  3 | row 3 |
|  4 | row 4 |
+----+-------+

That looks pretty alive to me… The cluster is still working even though two data nodes are dead.

Let's bring the failed server back and see what happens.
All I'm going to do is re-run the agent application on the failed server.
If it were a totally new, fresh server, I would also have to go through the initial process of unpacking the cluster package and the agent and setting up the hosts file, making sure the new server has the same IP address and hostname as the old one.
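In our case that just means starting the agent on cD again, the same way we did during the installation; the manager notices it and brings the missing data nodes back by itself:

# on host cD
sudo -u clustermanager /opt/mcm/bin/mcmd &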

mcm> show status -r mycluster;
+------+----------+------+----------+-----------+
| Id   | Process  | Host | Status   | Nodegroup |
+------+----------+------+----------+-----------+
| 1    | ndb_mgmd | cA   | running  |           |
| 2    | ndb_mgmd | cB   | running  |           |
| 3    | ndbd     | cC   | running  | 0         |
| 4    | ndbd     | cD   | starting | 0         |
| 5    | ndbd     | cC   | running  | 1         |
| 6    | ndbd     | cD   | starting | 1         |
| 7    | mysqld   | cA   | running  |           |
| 8    | mysqld   | cB   | running  |           |
+------+----------+------+----------+-----------+

mcm> show status -r mycluster;
+------+----------+------+---------+-----------+
| Id   | Process  | Host | Status  | Nodegroup |
+------+----------+------+---------+-----------+
| 1    | ndb_mgmd | cA   | running |           |
| 2    | ndb_mgmd | cB   | running |           |
| 3    | ndbd     | cC   | running | 0         |
| 4    | ndbd     | cD   | running | 0         |
| 5    | ndbd     | cC   | running | 1         |
| 6    | ndbd     | cD   | running | 1         |
| 7    | mysqld   | cA   | running |           |
| 8    | mysqld   | cB   | running |           |
+------+----------+------+---------+-----------+

The cluster is up and running with all nodes active, without having to do anything!

Some more cool stuff with cluster

Changing processes

Changing a process from one type to another; in this case, changing process id 3 from the single-threaded ndbd to the multi-threaded ndbmtd:

mcm> change process ndbd:3=ndbmtd mycluster;
+------------------------------+
| Command result               |
+------------------------------+
| Process changed successfully |
+------------------------------+

mcm> show status -r mycluster;
+------+----------+------+---------+-----------+
| Id   | Process  | Host | Status  | Nodegroup |
+------+----------+------+---------+-----------+
| 1    | ndb_mgmd | cA   | running |           |
| 2    | ndb_mgmd | cB   | running |           |
| 3    | ndbmtd   | cC   | running | 0         |
| 4    | ndbd     | cD   | running | 0         |
| 5    | ndbd     | cC   | running | 1         |
| 6    | ndbd     | cD   | running | 1         |
| 7    | mysqld   | cA   | running |           |
| 8    | mysqld   | cB   | running |           |
+------+----------+------+---------+-----------+
Rolling restart

A rolling restart is a restart of all the nodes, one by one, while keeping the cluster active. It's mainly used when you have changed a cluster setting that requires a restart of the nodes, but you need to keep the cluster running while the changes are applied.
Edit: Cluster Manager takes care of restarting the nodes after applying new settings, if a restart is required. That means a manual rolling restart is rarely needed; only if you have a good reason…

mcm> restart cluster -B mycluster;
+--------------------------------------------------------------------------------------+
| Command result                                                                       |
+--------------------------------------------------------------------------------------+
| Operation started successfully. Please check the operation result before continuing. |
+--------------------------------------------------------------------------------------+

mcm> show status -r mycluster;
+------+----------+------+----------+-----------+
| Id   | Process  | Host | Status   | Nodegroup |
+------+----------+------+----------+-----------+
| 1    | ndb_mgmd | cA   | running  |           |
| 2    | ndb_mgmd | cB   | running  |           |
| 3    | ndbmtd   | cC   | stopping | 0         |
| 4    | ndbd     | cD   | running  | 0         |
| 5    | ndbd     | cC   | running  | 1         |
| 6    | ndbd     | cD   | running  | 1         |
| 7    | mysqld   | cA   | running  |           |
| 8    | mysqld   | cB   | running  |           |
+------+----------+------+----------+-----------+
Cluster software Upgrade

Upgrading the cluster can be done quite easily as well. Watch Andrew's Flash demo to see how.
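In short, the upgrade boils down to adding a package for the newer Cluster binaries and telling the manager to upgrade. A sketch (7.1.15 and its path are placeholders for whatever newer package you extracted):

mcm> add package --basedir=/opt/mysql-cluster-com-7.1.15-linux-i686-glibc23/ 7.1.15;
mcm> upgrade cluster --package=7.1.15 mycluster;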

Conclusion

MySQL Cluster Manager makes installing, configuring, managing & monitoring a cluster much easier than ever before.
It's a simple tool that can easily be scripted, and it only makes sense to use it if you're running Cluster in production, where the cluster has to keep its 99.999% uptime.
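Since the manager speaks the ordinary MySQL protocol, scripting it is as easy as piping commands into the client; for example, a status check you could run from cron (illustrative only):

echo 'show status -r mycluster;' | /opt/mysql-cluster-com-7.1.14-linux-i686-glibc23/bin/mysql -h127.0.0.1 -P1862 -uadmin -psuper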

Additional resources

Now go play!

  One Response to “MySQL Cluster Manager hands on”

  1. While MCM is part of the commercial MySQL Cluster Carrier Grade Edition, users can download and use MCM to build their own cluster, without obligation, for 30 days.
    Go to the MySQL Cluster Manager trial link in the resources section to get access to MCM:
    http://mysql.com/products/cluster/mcm/
