
Thursday, November 17, 2016

Percona Cluster Installation

Installation of this cluster was performed using standard Red Hat packages for simplicity.
If it does not already exist, you will need to create a “mysql” account in the Operating System to run MySQL:
adduser mysql
usermod -s /sbin/nologin mysql
Create empty directories to contain the files that will be necessary:

mkdir /u01/mysql_setup
mkdir /u01/mysql_setup/tmp
mkdir /u01/mysql_setup/logs
mkdir /u01/mysql_setup/data
Copy a template of the my.cnf file to the installation directory:
cp my.cnf /u01/mysql_setup
Set proper permissions on all the files and directories:

chown mysql:mysql /u01/mysql_setup/ -R
Install the Percona XtraDB Cluster shared RPM:

rpm -i Percona-XtraDB-Cluster-shared-5.5.31-23.7.5.438.rhel5.x86_64.rpm
If there is a problem with the above, you may need to install OpenSSL (this has not usually been necessary):
yum install openssl098e.x86_64
Install Perl-DBI:
yum install perl-DBI
Install the Percona XtraDB Cluster client RPM:
rpm -i Percona-XtraDB-Cluster-client-5.5.31-23.7.5.438.rhel5.x86_64.rpm
Install the Percona XtraDB Cluster Galera RPM:
rpm -i Percona-XtraDB-Cluster-galera-2.6-1.152.rhel5.x86_64.rpm
If it is not already installed, you will need the Perl DBD driver for MySQL:
yum --skip-broken install perl-DBD-MySQL
Percona XtraBackup will be required for backups and SST/IST functions:
rpm -i percona-xtrabackup-2.1.4-656.rhel5.x86_64.rpm
If there is any problem installing the backup tool, you may need additional Perl modules:
yum install perl-Time-HiRes.x86_64
yum install --skip-broken perl-DBD-MySQL.x86_64
Install the Percona XtraDB Cluster server RPM:
rpm -i --force Percona-XtraDB-Cluster-server-5.5.31-23.7.5.438.rhel5.x86_64.rpm
If the install fails because socat is missing, grab the RPM from Dropbox and install it:
yum install compat-readline5.x86_64
rpm -i socat-1.7.2.2-1.el6.x86_64.rpm
Do some cleanup:
mv /etc/my.cnf /etc/my.cnf.old
ln -s /u01/mysql_setup/my.cnf /etc/my.cnf
Initialize the new data directory with the MySQL system databases:
/usr/bin/mysql_install_db --datadir=/u01/mysql_setup/data
Edit the my.cnf file and set appropriate values for datadir, node IPs, node names, etc.:
nano /u01/mysql_setup/my.cnf
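For reference, a minimal sketch of the wsrep-related settings is shown below. The IP addresses, node name, cluster name, and SST credentials are hypothetical placeholders; adjust them for your environment, and give each node its own wsrep_node_name and wsrep_node_address:

[mysqld]
datadir = /u01/mysql_setup/data
binlog_format = ROW
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
wsrep_provider = /usr/lib64/libgalera_smm.so
wsrep_cluster_name = pxc_cluster
wsrep_cluster_address = gcomm://192.168.1.11,192.168.1.12,192.168.1.13
wsrep_node_name = node1
wsrep_node_address = 192.168.1.11
wsrep_sst_method = xtrabackup-v2
wsrep_sst_auth = sstuser:sstpassword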
If this is the first node you are starting, it must be bootstrapped:
/etc/init.d/mysql bootstrap-pxc
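To confirm that the bootstrap worked before moving on, you can check the wsrep status from the MySQL CLI; on a freshly bootstrapped node the cluster size should be 1 and the local state should report 'Synced':

SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';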
Set up a root password for the cluster:
/usr/bin/mysqladmin -u root password 'mysql123'
Once MySQL is up and running on the bootstrap node, execute the following on any node to complete the configuration of the cluster. Note the passwords below have been obfuscated.
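The exact statements are site-specific, but they typically create the SST user referenced by wsrep_sst_auth in my.cnf and the monitoring user expected by the clustercheck script shown later. Assuming the placeholder names used in this document (the passwords shown are only examples), they would look roughly like this when run from the MySQL CLI:

GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost' IDENTIFIED BY 'sstpassword';
GRANT PROCESS ON *.* TO 'clustercheckuser'@'localhost' IDENTIFIED BY 'clustercheckpassword!';
FLUSH PRIVILEGES;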
Once the first node is running in bootstrap mode, you may start the additional nodes (one at a time!) with the following:

/etc/rc.d/init.d/mysql start
If you get an SST error (broken pipe) when starting the additional nodes, you may have installed a newer version of XtraBackup; in that case you must set the SST method in my.cnf to xtrabackup-v2 or the error will persist:
wsrep_sst_method = xtrabackup-v2
Install xinetd so that TCP connections to port 9200 return the cluster node status:
yum install xinetd
Edit the /etc/services file:
nano /etc/services
Comment out any line with port 9200 and add the following line:
mysqlchk 9200/tcp # MySQL check
Start xinetd:
service xinetd start
XINETD Considerations
The above should set up a file in /etc/xinetd.d that points to the /usr/bin/clustercheck bash script. The xinetd config file handles requests coming in on port 9200. Upon such a connection, xinetd calls the /usr/bin/clustercheck script, which queries the database (SHOW STATUS LIKE 'wsrep_local_state') to determine whether the node is ready to receive traffic. The script returns an HTTP-formatted response: code 200 with a message stating that the node is synced, or code 503 if it is not.
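The Percona packages normally install this file for you; if it is missing, a typical /etc/xinetd.d/mysqlchk looks roughly like the following sketch (compare it against the file your package provides):

service mysqlchk
{
        disable         = no
        flags           = REUSE
        socket_type     = stream
        port            = 9200
        wait            = no
        user            = nobody
        server          = /usr/bin/clustercheck
        log_on_failure  += USERID
        only_from       = 0.0.0.0/0
        per_source      = UNLIMITED
}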
Unfortunately, there is a bug in the /usr/bin/clustercheck script that can cause some connections from remote hosts to fail. The fix is to add a "sleep 0.1" to the script in the four places noted below:
#!/bin/bash
#
# Script to make a proxy (ie HAProxy) capable of monitoring Percona XtraDB Cluster nodes properly
#
# Authors:
# Raghavendra Prabhu <raghavendra.prabhu@percona.com>
# Olaf van Zandwijk <olaf.vanzandwijk@nedap.com>
#
# Based on the original script from Unai Rodriguez and Olaf (https://github.com/olafz/percona-clustercheck)
#
# Grant privileges required:
# GRANT PROCESS ON *.* TO 'clustercheckuser'@'localhost' IDENTIFIED BY 'clustercheckpassword!';
if [[ $1 == '-h' || $1 == '--help' ]]; then
    echo "Usage: $0 <user> <pass> <available_when_donor=0|1> <log_file>"
    exit
fi

MYSQL_USERNAME="${1-clustercheckuser}"
MYSQL_PASSWORD="${2-clustercheckpassword!}"
AVAILABLE_WHEN_DONOR=${3:-0}
ERR_FILE="${4:-/dev/null}"

# Timeout exists for instances where mysqld may be hung
TIMEOUT=10

#
# Perform the query to check the wsrep_local_state
#
WSREP_STATUS=`mysql -nNE --connect-timeout=$TIMEOUT --user=${MYSQL_USERNAME} --password=${MYSQL_PASSWORD} \
    -e "SHOW STATUS LIKE 'wsrep_local_state';" 2>${ERR_FILE} | tail -1 2>>${ERR_FILE}`

if [[ "${WSREP_STATUS}" == "4" ]] || [[ "${WSREP_STATUS}" == "2" && ${AVAILABLE_WHEN_DONOR} == 1 ]]
then
    # Percona XtraDB Cluster node local state is 'Synced' => return HTTP 200
    # Shell return-code is 0
    echo -en "HTTP/1.1 200 OK\r\n"
    echo -en "Content-Type: text/plain\r\n"
    echo -en "Connection: close\r\n"
    sleep 0.1
    echo -en "Content-Length: 40\r\n"
    echo -en "\r\n"
    sleep 0.1
    echo -en "Percona XtraDB Cluster Node is synced.\r\n"
    exit 0
else
    # Percona XtraDB Cluster node local state is not 'Synced' => return HTTP 503
    # Shell return-code is 1
    echo -en "HTTP/1.1 503 Service Unavailable\r\n"
    echo -en "Content-Type: text/plain\r\n"
    echo -en "Connection: close\r\n"
    sleep 0.1
    echo -en "Content-Length: 44\r\n"
    echo -en "\r\n"
    sleep 0.1
    echo -en "Percona XtraDB Cluster Node is not synced.\r\n"
    exit 1
fi
Cluster Install Validation
The cluster should now be up and operational. You can test whether each node is part of the cluster and ready to receive traffic by requesting its status over HTTP on port 9200:
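For example, assuming 192.168.1.11 is one of the (hypothetical) node addresses, check each node in turn:

curl http://192.168.1.11:9200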
This will return the following result if the node you are checking is synced with the rest of the cluster.
Percona XtraDB Cluster Node is synced.
Install sysstat so we get iostat, vmstat, and sar:
yum install sysstat
This completes the installation documentation for your Percona MySQL Cluster. 
Load Balancing The Nodes
Load Balancer Node Installation via Hardware
This is the typical solution. Another team is responsible for configuring the load balancer; normally all we do is tell them what we need. Below is the information we typically provide:
Sticky Session On
DNS RR
probe http probe_http_9200
port 9200
interval 10
passdetect interval 31
passdetect count 1
expect status 200
Be sure to test once this is complete. Ask for the VIP, then take the nodes down one at a time while making MySQL connections from the command line to the VIP. To determine which node you hit, run the following in the MySQL CLI:
SELECT @@hostname;
Do this enough times to cycle through the hosts; you should not get a failed connection if the load balancer is configured properly.
The load balancer should check both port 3306 via TCP and port 9200 via HTTP, expecting response code 200 on success. Please ensure that the LB does not check only port 3306: it is entirely possible for MySQL to be responsive while the node is not synced and therefore not ready to receive traffic from the LB.
Load Balancer Node Installation via HAProxy
Most of our installations utilize a hardware load balancer. If we are ever requested to use a software load balancer, HAProxy is the tool we would choose. Below are installation instructions for it.
Use the yum command to install HAProxy:
yum install haproxy
We need to enable HAProxy to be started by the init script:
nano /etc/default/haproxy
Set the ENABLED option to 1:
ENABLED=1
To check that this change was made properly, execute the HAProxy init script without any parameters. You should see the following:
service haproxy
Usage: /etc/init.d/haproxy {start|stop|reload|restart|status}
HAProxy is now installed, and can be configured. The configuration file is located at /etc/haproxy/haproxy.cfg. A complete copy of the current configuration file can be found at the end of this document. Once HAProxy is configured, it can be started via the standard init script:
/etc/init.d/haproxy start
At this point the HAProxy installation is complete, and all traffic sent to port 3306 on the HAProxy machine will be routed in round-robin fashion to the Percona Cluster nodes. HAProxy can automatically detect when a node has a problem and remove it from the pool, and it will automatically add a node back into the pool once it comes back up clean and rejoins the cluster.
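For reference, a minimal sketch of the relevant haproxy.cfg section is shown below; the node names and IP addresses are hypothetical, and the full configuration file referenced above remains the authoritative copy:

listen mysql-cluster
    bind 0.0.0.0:3306
    mode tcp
    balance roundrobin
    option httpchk
    server node1 192.168.1.11:3306 check port 9200 inter 10s rise 2 fall 2
    server node2 192.168.1.12:3306 check port 9200 inter 10s rise 2 fall 2
    server node3 192.168.1.13:3306 check port 9200 inter 10s rise 2 fall 2

The option httpchk line is what makes HAProxy perform its health check as an HTTP request against port 9200 (the clustercheck service) rather than simply verifying that port 3306 accepts connections.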
Be sure to test once this is complete. Ask for the VIP, then take the nodes down one at a time while making MySQL connections from the command line to the VIP. To determine which node you hit, run the following in the MySQL CLI:
SELECT @@hostname;
Do this enough times to cycle through the hosts; you should not get a failed connection if the load balancer is configured properly.
Operations
General Cluster Information
There are generally a total of three (3) nodes in this cluster, along with a single load balancer. As with all of our Percona Cluster installations, three nodes is the minimum required for normal cluster operation. This does not mean the cluster is down if a node is lost: the cluster continues to serve traffic as long as a majority of the nodes (a quorum) remains available, so a three-node cluster can survive the loss of a single node.
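To see the current node count and cluster state at any time, you can run the following from the MySQL CLI on any node:

SHOW GLOBAL STATUS WHERE Variable_name IN ('wsrep_cluster_size', 'wsrep_cluster_status');

A healthy cluster reports wsrep_cluster_status as 'Primary', with wsrep_cluster_size equal to the number of nodes currently joined.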

