Percona XtraDB Cluster on Ubuntu Server

Percona XtraDB Cluster

Percona XtraDB Cluster is a high availability and high scalability solution for MySQL users. XtraDB Cluster integrates Percona Server with the Galera library of high availability solutions in a single product package. XtraDB Cluster enables users to save money through:

Less downtime and higher availability
Reduced investment in high availability architectures
Lower DBA training and education costs
No investment in third-party, high availability solutions

Percona XtraDB Cluster features include:

Synchronous replication
Multi-master replication support
Parallel replication
Automatic node provisioning

The focus for Percona XtraDB Cluster is data consistency at a significantly lower total cost than existing high availability solutions. XtraDB Cluster may be especially useful if your organization currently:

Uses MySQL replication to ensure high availability
Needs a high availability solution for MySQL deployed in the cloud
Is looking for a new, novel way to address previously impossible high availability challenges

 

Install and run Percona XtraDB Cluster on three (3) Ubuntu Server 12.04 LTS nodes with static IPs.

We are going to use the following nodes:

node1: 192.168.31.150

node2: 192.168.31.151

node3: 192.168.31.152
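Optionally, you can give the nodes resolvable names so that later commands and logs are easier to read. A minimal sketch of /etc/hosts entries, identical on all three servers (the hostnames are just illustrative):

```
# /etc/hosts (same on all three servers; hostnames are examples)
192.168.31.150   node1
192.168.31.151   node2
192.168.31.152   node3
```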

 

Percona apt Repository

Debian and Ubuntu packages from Percona are signed with a key. Before using the repository, you should add the key to apt. To do that, run the following commands:

gpg --keyserver hkp://keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A
gpg -a --export CD2EFD2A | sudo apt-key add -

OR

Save the following key block to a text file named percona.key and import it with:

cat percona.key | sudo apt-key add -

-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.4.11 (GNU/Linux)

mQGiBEsm3aERBACyB1E9ixebIMRGtmD45c6c/wi2IVIa6O3G1f6cyHH4ump6ejOi
AX63hhEs4MUCGO7KnON1hpjuNN7MQZtGTJC0iX97X2Mk+IwB1KmBYN9sS/OqhA5C
itj2RAkug4PFHR9dy21v0flj66KjBS3GpuOadpcrZ/k0g7Zi6t7kDWV0hwCgxCa2
f/ESC2MN3q3j9hfMTBhhDCsD/3+iOxtDAUlPMIH50MdK5yqagdj8V/sxaHJ5u/zw
YQunRlhB9f9QUFfhfnjRn8wjeYasMARDctCde5nbx3Pc+nRIXoB4D1Z1ZxRzR/lb
7S4i8KRr9xhommFnDv/egkx+7X1aFp1f2wN2DQ4ecGF4EAAVHwFz8H4eQgsbLsa6
7DV3BACj1cBwCf8tckWsvFtQfCP4CiBB50Ku49MU2Nfwq7durfIiePF4IIYRDZgg
kHKSfP3oUZBGJx00BujtTobERraaV7lIRIwETZao76MqGt9K1uIqw4NT/jAbi9ce
rFaOmAkaujbcB11HYIyjtkAGq9mXxaVqCC3RPWGr+fqAx/akBLQ2UGVyY29uYSBN
eVNRTCBEZXZlbG9wbWVudCBUZWFtIDxteXNxbC1kZXZAcGVyY29uYS5jb20+iGAE
ExECACAFAksm3aECGwMGCwkIBwMCBBUCCAMEFgIDAQIeAQIXgAAKCRAcTL3NzS79
Kpk/AKCQKSEgwX9r8jR+6tAnCVpzyUFOQwCfX+fw3OAoYeFZB3eu2oT8OBTiVYu5
Ag0ESybdoRAIAKKUV8rbqlB8qwZdWlmrwQqg3o7OpoAJ53/QOIySDmqy5TmNEPLm
lHkwGqEqfbFYoTbOCEEJi2yFLg9UJCSBM/sfPaqb2jGP7fc0nZBgUBnFuA9USX72
O0PzVAF7rCnWaIz76iY+AMI6xKeRy91TxYo/yenF1nRSJ+rExwlPcHgI685GNuFG
chAExMTgbnoPx1ka1Vqbe6iza+FnJq3f4p9luGbZdSParGdlKhGqvVUJ3FLeLTqt
caOn5cN2ZsdakE07GzdSktVtdYPT5BNMKgOAxhXKy11IPLj2Z5C33iVYSXjpTelJ
b2qHvcg9XDMhmYJyE3O4AWFh2no3Jf4ypIcABA0IAJO8ms9ov6bFqFTqA0UW2gWQ
cKFN4Q6NPV6IW0rV61ONLUc0VFXvYDtwsRbUmUYkB/L/R9fHj4lRUDbGEQrLCoE+
/HyYvr2rxP94PT6Bkjk/aiCCPAKZRj5CFUKRpShfDIiow9qxtqv7yVd514Qqmjb4
eEihtcjltGAoS54+6C3lbjrHUQhLwPGqlAh8uZKzfSZq0C06kTxiEqsG6VDDYWy6
L7qaMwOqWdQtdekKiCk8w/FoovsMYED2qlWEt0i52G+0CjoRFx2zNsN3v4dWiIhk
ZSL00Mx+g3NA7pQ1Yo5Vhok034mP8L2fBLhhWaK3LG63jYvd0HLkUFhNG+xjkpeI
SQQYEQIACQUCSybdoQIbDAAKCRAcTL3NzS79KlacAJ9H6emL/8dsoquhE9PNnKCI
eMTmmQCfXRLIoNjJa20VEwJDzR7YVdBEiQI=
=AD5m
-----END PGP PUBLIC KEY BLOCK-----

Add this to /etc/apt/sources.list

deb http://repo.percona.com/apt precise main
deb-src http://repo.percona.com/apt precise main

Remember to update the local cache:

sudo apt-get update

Supported Architectures

x86_64 (also known as amd64)
x86

Install XtraDB Cluster on the 3 nodes

The following command will install the cluster packages:

sudo apt-get install percona-xtradb-cluster-client-5.5 percona-xtradb-cluster-server-5.5 percona-xtrabackup

 


Configure the MySQL server on the 3 nodes to use Galera

 

 

node1 : 192.168.31.150

/etc/mysql/my.cnf

#
# The MySQL database server configuration file.
#
[client]
port      = 3306
socket      = /var/run/mysqld/mysqld.sock

# Here are entries for some specific programs
# The following values assume you have at least 32M ram
# This was formerly known as [safe_mysqld]. Both versions are currently parsed.

[mysqld_safe]
socket      = /var/run/mysqld/mysqld.sock
nice      = 0
[mysqld]
#
# * Basic Settings
#
user      = mysql
pid-file   = /var/run/mysqld/mysqld.pid
socket      = /var/run/mysqld/mysqld.sock
port      = 3306
basedir      = /usr
datadir      = /var/lib/mysql
tmpdir      = /tmp
language   = /usr/share/mysql/english
skip-external-locking
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
# bind-address      = 0.0.0.0

binlog_format = ROW

######## Galera ########

# Full path to wsrep provider library or 'none' to disable galera
wsrep_provider=/usr/lib/libgalera_smm.so

# set to "gcomm://" to reinitialise (reset) a node
wsrep_cluster_address=gcomm://

# once the nodes are up, we will set this to the floating IP
#wsrep_cluster_address="gcomm://10.0.0.20:4567"

wsrep_cluster_name=Percona-XtraDB-Cluster
wsrep_node_name=Node1

#### BOF : State Snapshot Transfer method

wsrep_sst_method=rsync

# alternative methods to do SST
#  experimental, wait for RC release
#wsrep_sst_method=xtrabackup
#  not recommended, transfers the ENTIRE database to re-sync nodes.
#wsrep_sst_method=mysqldump

# Set to number of cpu cores.

wsrep_slave_threads=1

#### for MyISAM support

wsrep_replicate_myisam=1

####END of MyISAM

# to enable debug level logging, set this to 1

wsrep_debug=1

# how many times to retry deadlocked autocommits
wsrep_retry_autocommit=1

# convert locking sessions into transactions
wsrep_convert_LOCK_to_trx=1

# Generate fake primary keys for non-PK tables (required for multi-master and parallel applying operation)
wsrep_certify_nonPK=1

#### Required for Galera
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2
default_storage_engine=InnoDB
query_cache_size=0
query_cache_type=0

######## EOF : Galera ########
#
# * Fine Tuning
#
key_buffer      = 32M
max_allowed_packet   = 32M
thread_stack      = 192K
thread_cache_size       = 8
#
# * Query Cache Configuration
#
query_cache_limit   = 1M
query_cache_size        = 32M
#
# * Logging and Replication
#
# Both locations get rotated by the cronjob.
# Be aware that this log type is a performance killer.
# As of 5.1 you can enable the log at runtime!

general_log_file        = /var/log/mysql/mysql.log
general_log             = 1

#
# Error logging goes to syslog due to /etc/mysql/conf.d/mysqld_safe_syslog.cnf.
#
# Here you can see queries with especially long duration
#log_slow_queries   = /var/log/mysql/mysql-slow.log
#long_query_time = 2
#log-queries-not-using-indexes
#
[mysqldump]
quick
quote-names
max_allowed_packet   = 16M
[mysql]
#no-auto-rehash   # faster start of mysql but no tab completion
#
# * IMPORTANT: Additional settings that can override those from this file!
#   The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/

------------------------------------------------------------------------

node2 : 192.168.31.151

/etc/mysql/my.cnf

#
# The MySQL database server configuration file.

[client]
port      = 3306
socket      = /var/run/mysqld/mysqld.sock

# Here are entries for some specific programs
# The following values assume you have at least 32M ram
# This was formerly known as [safe_mysqld]. Both versions are currently parsed.

[mysqld_safe]
socket      = /var/run/mysqld/mysqld.sock
nice      = 0

[mysqld]

#
# * Basic Settings
#
user      = mysql
pid-file   = /var/run/mysqld/mysqld.pid
socket      = /var/run/mysqld/mysqld.sock
port      = 3306
basedir      = /usr
datadir      = /var/lib/mysql
tmpdir      = /tmp
language   = /usr/share/mysql/english
skip-external-locking
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
# bind-address      = 0.0.0.0

binlog_format = ROW

######## Galera ########

# Full path to wsrep provider library or 'none' to disable galera
wsrep_provider=/usr/lib/libgalera_smm.so

# set to "gcomm://" to reinitialise (reset) a node
wsrep_cluster_address=gcomm://192.168.31.150

# once the nodes are up, we will set this to the floating IP
#wsrep_cluster_address="gcomm://10.0.0.20:4567"

wsrep_cluster_name=Percona-XtraDB-Cluster
wsrep_node_name=Node2

#### State Snapshot Transfer method

wsrep_sst_method=rsync

# alternative methods to do SST
#  experimental, wait for RC release
#wsrep_sst_method=xtrabackup
#  not recommended, transfers the ENTIRE database to re-sync nodes.
#wsrep_sst_method=mysqldump

#### Optimization
# Set to number of cpu cores.

wsrep_slave_threads=1

####MyISAM

wsrep_replicate_myisam=1

####END of MyISAM

####  Work Around
# to enable debug level logging, set this to 1

wsrep_debug=1

# how many times to retry deadlocked autocommits
wsrep_retry_autocommit=1

# convert locking sessions into transactions
wsrep_convert_LOCK_to_trx=1

# Generate fake primary keys for non-PK tables (required for multi-master and parallel applying operation)
wsrep_certify_nonPK=1

#### Required for Galera

innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2
default_storage_engine=InnoDB
query_cache_size=0
query_cache_type=0

######## EOF : Galera ########

#
# * Fine Tuning
#
key_buffer      = 32M
max_allowed_packet   = 32M
thread_stack      = 192K
thread_cache_size       = 8
#
# * Query Cache Configuration
#
query_cache_limit   = 1M
query_cache_size        = 32M
#
# * Logging and Replication
#
# Both locations get rotated by the cronjob.
# Be aware that this log type is a performance killer.
# As of 5.1 you can enable the log at runtime!

general_log_file        = /var/log/mysql/mysql.log
general_log             = 1

#
# Error logging goes to syslog due to /etc/mysql/conf.d/mysqld_safe_syslog.cnf.
#
# Here you can see queries with especially long duration
#log_slow_queries   = /var/log/mysql/mysql-slow.log
#long_query_time = 2
#log-queries-not-using-indexes
#

[mysqldump]
quick
quote-names
max_allowed_packet   = 16M

[mysql]

#no-auto-rehash   # faster start of mysql but no tab completion
#
# * IMPORTANT: Additional settings that can override those from this file!
#   The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/

 

------------------------------------------------------------------------

node3 : 192.168.31.152

/etc/mysql/my.cnf

#
# The MySQL database server configuration file.
[client]
port      = 3306
socket      = /var/run/mysqld/mysqld.sock

# Here are entries for some specific programs
# The following values assume you have at least 32M ram
# This was formerly known as [safe_mysqld]. Both versions are currently parsed.

[mysqld_safe]
socket      = /var/run/mysqld/mysqld.sock
nice      = 0

[mysqld]
#
# * Basic Settings
#
user      = mysql
pid-file   = /var/run/mysqld/mysqld.pid
socket      = /var/run/mysqld/mysqld.sock
port      = 3306
basedir      = /usr
datadir      = /var/lib/mysql
tmpdir      = /tmp
language   = /usr/share/mysql/english
skip-external-locking
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
# bind-address      = 0.0.0.0

binlog_format = ROW

######## Galera ########

# Full path to wsrep provider library or 'none' to disable galera
wsrep_provider=/usr/lib/libgalera_smm.so

# set to "gcomm://" to reinitialise (reset) a node
wsrep_cluster_address=gcomm://192.168.31.150

# once the nodes are up, we will set this to the floating IP
#wsrep_cluster_address="gcomm://10.0.0.20:4567"

wsrep_cluster_name=Percona-XtraDB-Cluster
wsrep_node_name=Node3

#### State Snapshot Transfer method

wsrep_sst_method=rsync

# alternative methods to do SST
#  experimental, wait for RC release
#wsrep_sst_method=xtrabackup
#  not recommended, transfers the ENTIRE database to re-sync nodes.
#wsrep_sst_method=mysqldump

####MyISAM

wsrep_replicate_myisam=1

####END of MyISAM

####  Optimization
# Set to number of cpu cores.

wsrep_slave_threads=1

#### Work Around
# to enable debug level logging, set this to 1

wsrep_debug=1

# how many times to retry deadlocked autocommits
wsrep_retry_autocommit=1

# convert locking sessions into transactions
wsrep_convert_LOCK_to_trx=1

# Generate fake primary keys for non-PK tables (required for multi-master and parallel applying operation)
wsrep_certify_nonPK=1

#### Required for Galera
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2
default_storage_engine=InnoDB
query_cache_size=0
query_cache_type=0

######## End Galera ########
#
# * Fine Tuning
#
key_buffer      = 32M
max_allowed_packet   = 32M
thread_stack      = 192K
thread_cache_size       = 8
#
# * Query Cache Configuration
#
query_cache_limit   = 1M
query_cache_size        = 32M
#
# * Logging and Replication
#
# Both locations get rotated by the cronjob.
# Be aware that this log type is a performance killer.
# As of 5.1 you can enable the log at runtime!

general_log_file        = /var/log/mysql/mysql.log
general_log             = 1

#
# Error logging goes to syslog due to /etc/mysql/conf.d/mysqld_safe_syslog.cnf.
#
# Here you can see queries with especially long duration
#log_slow_queries   = /var/log/mysql/mysql-slow.log
#long_query_time = 2
#log-queries-not-using-indexes
#
[mysqldump]
quick
quote-names
max_allowed_packet   = 16M
[mysql]
#no-auto-rehash   # faster start of mysql but no tab completion
#
# * IMPORTANT: Additional settings that can override those from this file!
#   The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/

------------------------------------------------------------------------

Start the MySQL server on the 3 nodes (first node1, then node2, then node3)

NODE1

sudo /etc/init.d/mysql start

after node1 has started, start

NODE2

sudo /etc/init.d/mysql start

after node2 has started, start

NODE3

sudo /etc/init.d/mysql start


Verify that MySQL + Galera is working

Run the following on each node (in the mysql shell or with a MySQL client tool) to verify the status:

SHOW STATUS LIKE '%wsrep%';

Result

+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid     | 190f01a8-5f3b-11e1-0800-26c3fbc98732 |
| wsrep_protocol_version     | 3                                    |
| wsrep_last_committed       | 0                                    |
| wsrep_replicated           | 0                                    |
| wsrep_replicated_bytes     | 0                                    |
| wsrep_received             | 3                                    |
| wsrep_received_bytes       | 336                                  |
| wsrep_local_commits        | 0                                    |
| wsrep_local_cert_failures  | 0                                    |
| wsrep_local_bf_aborts      | 0                                    |
| wsrep_local_replays        | 0                                    |
| wsrep_local_send_queue     | 0                                    |
| wsrep_local_send_queue_avg | 0.333333                             |
| wsrep_local_recv_queue     | 0                                    |
| wsrep_local_recv_queue_avg | 0.000000                             |
| wsrep_flow_control_paused  | 0.000000                             |
| wsrep_flow_control_sent    | 0                                    |
| wsrep_flow_control_recv    | 0                                    |
| wsrep_cert_deps_distance   | 0.000000                             |
| wsrep_apply_oooe           | 0.000000                             |
| wsrep_apply_oool           | 0.000000                             |
| wsrep_apply_window         | 0.000000                             |
| wsrep_commit_oooe          | 0.000000                             |
| wsrep_commit_oool          | 0.000000                             |
| wsrep_commit_window        | 0.000000                             |
| wsrep_local_state          | 4                                    |
| wsrep_local_state_comment  | Synced (6)                           |
| wsrep_cert_index_size      | 0                                    |
| wsrep_cluster_conf_id      | 6                                    |
| wsrep_cluster_size         | 3                                    |
| wsrep_cluster_state_uuid   | 190f01a8-5f3b-11e1-0800-26c3fbc98732 |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
| wsrep_local_index          | 2                                    |
| wsrep_provider_name        | Galera                               |
| wsrep_provider_vendor      | Codership Oy <info@codership.com>    |
| wsrep_provider_version     | 2.0(rXXXX)                           |
| wsrep_ready                | ON                                   |
+----------------------------+--------------------------------------+
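Eyeballing the full list every time gets tedious. As a convenience, a small shell helper can filter the batch output of the mysql client (tab-separated Variable_name/Value pairs) down to the handful of fields worth watching; the function name and the chosen variables are just a sketch:

```shell
# check_wsrep: keep only the key health indicators from
# "SHOW STATUS LIKE 'wsrep%'" batch output (Variable_name<TAB>Value).
check_wsrep() {
  awk '$1 ~ /^(wsrep_cluster_size|wsrep_cluster_status|wsrep_ready|wsrep_connected)$/ {
    print $1 "=" $2
  }'
}

# Example (credentials are illustrative):
#   mysql -u root -p -e "SHOW STATUS LIKE 'wsrep%'" | check_wsrep
```

On a healthy 3-node cluster every node should report wsrep_cluster_size=3, wsrep_cluster_status=Primary, wsrep_connected=ON and wsrep_ready=ON.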

Testing

http://www.mysqltutorial.org/mysql-sample-database.aspx

We are going to create a test database and run some tests to make sure that MySQL Galera replication is working correctly.
Download and install the sample database on node1.
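For example, after downloading the sample database dump from the link above to node1, it can be loaded with the mysql client (the file name below is an assumption; use whatever the download is actually called):

```
# On node1 only -- the data will replicate to the other nodes.
mysql -u root -p < mysqlsampledatabase.sql
```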

Test on all nodes with the query:

SELECT status, count(*) FROM orders GROUP BY status DESC;

+------------+----------+
| status     | count(*) |
+------------+----------+
| Shipped    |      303 |
| Resolved   |        4 |
| On Hold    |        4 |
| In Process |        6 |
| Disputed   |        3 |
| Cancelled  |        6 |
+------------+----------+

More testing

Connect to any one of the 3 nodes and create a database. Then go to the other 2 nodes and check that the created database exists!
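A minimal version of this round trip, with an illustrative database name:

```
-- On any one node, e.g. node1:
CREATE DATABASE replica_test;

-- On the other two nodes the database should appear immediately:
SHOW DATABASES LIKE 'replica_test';

-- Clean up from any node; the DROP replicates as well:
DROP DATABASE replica_test;
```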

 

Remarks

You always have to start the MySQL service with sudo (or root privileges).

Every time you want to make changes (updates, etc.), you always have to start with node1, then node2, then node3, and so on.

UPDATE NOTE (2013-01-30)

How to start a Percona XtraDB Cluster (http://www.mysqlperformanceblog.com/2013/01/29/how-to-start-a-percona-xtradb-cluster/)

Before version 5.5.28 of Percona XtraDB Cluster, the easiest way was to join the cluster using wsrep_urls in the [mysqld_safe] section of my.cnf.

So with a cluster of 3 nodes like this :

node1 = 192.168.1.1
node2 = 192.168.1.2
node3 = 192.168.1.3

we defined the setting like this:

wsrep_urls=gcomm://192.168.1.1:4567,gcomm://192.168.1.2:4567,gcomm://192.168.1.3:4567

With that line above in my.cnf on each node, when PXC (mysqld) was started, the node tried to join the cluster on the first IP; if no node was running on that IP, the next IP was tried, and so on, until the node could join the cluster. If it tried them all and didn't find any node running the cluster, mysqld failed to start.
To avoid this when all nodes were down and you wanted to start the cluster, it was possible to define wsrep_urls like this:

wsrep_urls=gcomm://192.168.1.1:4567,gcomm://192.168.1.2:4567,gcomm://192.168.1.3:4567,gcomm://

That was a nice feature, especially for people who didn't want to modify my.cnf after starting the first node to initialize the cluster, or for people automating their deployment with a configuration management system.

Now that wsrep_urls has been deprecated since version 5.5.28, what is the better option to start the cluster?

In my.cnf, in the [mysqld] section this time, you can use wsrep_cluster_address with the following syntax:

wsrep_cluster_address=gcomm://192.168.1.1,192.168.1.2,192.168.1.3

As you can see the port is not needed and gcomm:// is specified only once.

Note: In Debian and Ubuntu, the IP of the node itself cannot be present in that variable due to a glibc error:

130129 17:03:45 [Note] WSREP: gcomm: connecting to group 'testPXC', peer '192.168.80.1:,192.168.80.2:,192.168.80.3:'
17:03:45 UTC – mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
[…]
/usr/sbin/mysqld(_Z23wsrep_start_replicationv+0x111)[0x664c21]
/usr/sbin/mysqld(_Z18wsrep_init_startupb+0x65)[0x664da5]
/usr/sbin/mysqld[0x5329af]
/usr/sbin/mysqld(_Z11mysqld_mainiPPc+0x8bd)[0x534fad]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed)[0x7f3bb24b676d]
/usr/sbin/mysqld[0x529d1d]

So what can be done to initialize the cluster when all nodes are down ? There are two options:

Modify my.cnf and set wsrep_cluster_address=gcomm://, then change it back once the node is started; this is not my favourite option.
Start mysql using the following syntax (it works only on Red Hat and CentOS out of the box):

/etc/init.d/mysqld start --wsrep-cluster-address="gcomm://"

As there is no need to modify my.cnf, this is how I recommend doing it.

Percona XtraDB Cluster for MySQL and encrypted Galera replication: http://www.mysqlperformanceblog.com/2013/05/03/percona-xtradb-cluster-for-mysql-and-encrypted-galera-replication

Percona XtraDB Cluster (PXC) in the real world: Share your use cases! http://www.mysqlperformanceblog.com/2013/06/17/percona-xtradb-cluster-pxc-in-the-real-world-share-your-use-cases/

Questions: http://www.mysqlperformanceblog.com/2013/07/04/percona-xtradb-cluster-operations-mysql-webinar-follow-up-questions-anwsered/

 

Is there an easy way to leverage the Xtrabackup SST and IST in an XtraDB cluster to take your full and incremental backups of the cluster's databases?

 

An SST is a full backup of one of the nodes in your database already. If you want another backup, you may as well just run xtrabackup yourself (though don't forget the discussion about locking from the talk).

 

IST is not affected by wsrep_sst_method; it is the same regardless of what SST you use. In theory an IST donation could be used for incremental backups, but I'm not aware of any system that uses it currently. There are a few limitations that would currently constrict its use:

 

  • IST is only valid for the amount of time all needed transactions are available in the donor's fixed-size gcache
  • Gcache files, despite the fact that they exist on disk, are non-persistent.

 

If you can use Xtrabackup for full backups, then I don't see why you can't use Xtrabackup's incremental backup feature for your incrementals instead. It certainly would be interesting for Galera to support IST methods so we could use Xtrabackup for IST instead of the current Gcache system, but it's not something planned or in development that I'm aware of.
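As a sketch of that alternative, Xtrabackup's own incremental feature can be driven independently of the cluster's SST/IST machinery by running it against one node. The user, paths and base-directory name below are placeholders, and the flags should be checked against your Xtrabackup version:

```
# Full backup (creates a timestamped directory under /backups/full):
innobackupex --user=backup --password=secret /backups/full

# Incremental backup relative to the previous full backup:
innobackupex --user=backup --password=secret \
  --incremental /backups/inc \
  --incremental-basedir=/backups/full/TIMESTAMP_DIR
```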

 

Does replication of MyISAM form any bottlenecks in XtraDB Cluster? If so, how bad?

 

MyISAM replication in PXC/Galera is labeled as experimental, but I think that's a misnomer. It should be labeled "broken by design". MyISAM replication really will never work properly with Galera due to its non-transactional nature. MyISAM DML is replicated with statement-based replication, and it operates similarly to how DDL (which is also not transactional in MySQL) is replicated under TOI. To quote the manual:

 

  1. Total Order Isolation (TOI) - When this method is selected, DDL is processed in the same order with regards to other transactions in each cluster node. This guarantees data consistency. In the case of DDL statements the cluster will have parts of the database locked and it will behave like a single server. In some cases (like a big ALTER TABLE) this could have an impact on the cluster's performance and high availability, but it could be fine for quick changes that happen almost instantly (like fast index changes). When DDL is processed under total order isolation, the DDL statement will be replicated up front to the cluster, i.e. the cluster will assign a global transaction ID for the DDL statement before the DDL processing begins. Then every node in the cluster has the responsibility to execute the DDL in the given slot in the sequence of incoming transactions, and this DDL execution has to happen with high priority.

 

InnoDB replication will allow more things to be happening in parallel, but TOI tightens up the cluster so it behaves much more like a single instance. So, I expect any serious amount of MyISAM replication to perform pretty poorly in PXC, but I don't have the benchmarks to prove it... yet.

 

When adding nodes to a cluster, why would we see errors about the SST not looking like a tar archive?

 

This probably depends on what SST method you are using, but Xtrabackup streams its backup from Donor to Joiner over netcat in a tar stream. The Joiner, therefore, is expecting a tar archive to start streaming in over that netcat port, but if it gets anything else or some kind of network disconnection, you may see this error. I'd suggest checking the Donor and Joiner SST logs (especially the Donor) to see what went wrong. Check your datadir for an innobackup.backup.log file to see if Xtrabackup failed for some reason on the Donor. The codership-team mailing list may be able to help further.

For Debian/Ubuntu users: Percona XtraDB Cluster 5.5.33-23.7.6 includes a new dependency, the socat package. If socat is not already installed, percona-xtradb-cluster-server-5.5 may be held back. In order to upgrade, you need to either install socat before running apt-get upgrade, or explicitly run: apt-get install percona-xtradb-cluster-server-5.5. For Ubuntu users, the socat package is in the universe repository, which will have to be enabled in order to install the package.